\section{Introduction}
The starting point of this work is the construction in \cite{moulinos2019universal} of the \emph{filtered circle}, an object of algebro-geometric nature capturing the $k$-linear homotopy type of the topological circle $S^1$. This construction is motivated by Grothendieck's schematization problem, which asks, in its most general form, for a purely algebraic description of the $\Z$-linear homotopy type of an arbitrary topological space $X$.
In the process of doing this, the authors realized that there was an inextricable link between this construction and the theory of formal groups and Cartier duality, as set out in \cite{cartier1962groupes}.
We briefly review the relationship. Let $\mathbb{H}$ be the $\GG_m$-equivariant family of group schemes parametrized by the affine line $\AAA^1$ constructed in loc. cit. This family interpolates between two affine group schemes, $\mathsf{Fix}$ and $\mathsf{Ker}$; these can be traced to the work of \cite{sekiguchi2001note}, where they are shown to arise via Cartier duality from the formal multiplicative and formal additive groups, $\widehat{\GG_m}$ and $\widehat{\GG_a}$ respectively. The filtered circle $S^1_{fil}$ is then obtained as $B \mathbb{H}$, the classifying stack over $\filstack$ of $\mathbb{H}$. By taking the derived mapping space out of $S^1_{fil}$ in $\filstack$-parametrized derived stacks, one recovers precisely Hochschild homology together with a functorial filtration.
There is no reason to stop at $\widehat{\GG_m}$ or $\widehat{\GG_a}$, however. In loc. cit., the authors proposed, given an arbitrary $1$-dimensional formal group $\formalgroup$, the following generalized notion of Hochschild homology of simplicial commutative rings:
$$
\operatorname{HH}^{\formalgroup}(-): \operatorname{sCAlg}_k \to \operatorname{sCAlg}_k, \, \, \, \, \, A \mapsto \operatorname{HH}^{\formalgroup}(A) := R \Gamma ( \Map_{\operatorname{dStk}_k}(B \formalgroup^\vee, \Spec A)).
$$
The right hand side is the derived mapping space out of $B \formalgroup^\vee$, the classifying stack of the Cartier dual of $\formalgroup$.
For $\formalgroup = \widehat{\GG_m}$ one recovers Hochschild homology, via a natural equivalence of derived schemes
$$
\Map(B \mathsf{Fix}, X) \to \Map(S^1, X)
$$
and for $\formalgroup = \widehat{\GG_a}$ one recovers the derived de Rham algebra (cf. \cite{toen2011derham}) via an equivalence
$$
\Map(B \mathsf{Ker}, X) \simeq \mathbb{T}_{X|k}[-1] = \Spec(\operatorname{Sym}(\mathbb{L}_{X|k}[1]))
$$
with the shifted (negative) tangent bundle. One may now ask the following natural questions: if one replaces $\widehat{\GG_m}$ with an arbitrary formal group $\formalgroup$, does one obtain a similar degeneration? Is there a sense in which such a degeneration is canonical?
The overarching aim of this paper is to address some of these questions by systematizing some of the above ideas, drawing in particular on further ideas from spectral and derived algebraic geometry.
\subsection{Filtered formal groups}
The first main undertaking of this paper is to introduce a notion of \emph{filtered formal group}. For now, we give the following rough definition, postponing the full definition to Section \ref{filteredformalgroupsection}:
\begin{defn} [cf. Definition \ref{cogroupdefinition}]
A \emph{filtered formal group} is an abelian cogroup object $A$ in the category $\operatorname{CAlg}(\widehat{\operatorname{Fil}}_R)$ of complete filtered algebras which is discrete at the level of the underlying algebra.
\end{defn}
Heuristically, these give rise to stacks
$$
\formalgroup \to \filstack,
$$
for which the pullback $\pi^*(\formalgroup)$ along the smooth atlas $\pi: \AAA^1 \to \filstack$ is a formal group over $\AAA^1$ in the classical sense.
From the outset we restrict to a full subcategory of complete filtered algebras, for which there exists a well-behaved duality theory. Our setup is inspired by the framework of \cite{ellipticII} and the notion of smooth coalgebra therein. Namely, we restrict to complete filtered algebras that arise as the duals of \emph{smooth filtered coalgebras} (cf. Definition \ref{smoothfilteredcoalg}). The abelian cogroup structure on a complete filtered algebra $A$ then corresponds to the structure of an abelian group object on the corresponding coalgebra. As everything in sight is discrete, hence $1$-categorical (cf. Remark \ref{abelianobjects1cat}), this is precisely the data of a comonoid in smooth coalgebras, i.e.\ a filtered Hopf algebra.
Inspired by the classical Cartier duality correspondence over a field between formal groups and affine group schemes, we refer to this as \emph{filtered Cartier duality}.
\begin{rem}
We acknowledge that the phrase ``Cartier duality" has a variety of different meanings throughout the literature (e.g.\ duality between finite group schemes, $p$-divisible groups, etc.). For us, this will always mean a contravariant correspondence between (certain full subcategories of) formal groups and affine group schemes, originally observed by Cartier over a field in \cite{cartier1962groupes}.
\end{rem}
\begin{rem}
In this paper we are concerned with filtered formal groups $\formalgroup \to \filstack$ whose ``fiber over $\Spec k \to \filstack$" recovers a classical (discrete) formal group. We conjecture that the duality theory of Section \ref{filteredformalgroupsection} holds true in the filtered, spectral setting. Nevertheless, as this takes us away from our main applications, we have stayed away from this level of generality.
\end{rem}
As it turns out, the notion of a complete filtered algebra, and hence ultimately that of a filtered formal group, is of a rigid nature. To this effect, we demonstrate the following unicity result on complete filtered algebras $A_n$ with a specified associated graded:
\begin{thm} \label{introadicunicity}
Let $A$ be a commutative ring which is complete with respect to the $I$-adic topology induced by some ideal $I \subset A$. Let $A_n \in \operatorname{CAlg}(\widehat{\operatorname{Fil}}_k)$ be a (discrete) complete filtered algebra with underlying object $A$. Suppose there is an inclusion
$$
A_1 \to I
$$
of $A$-modules
inducing an equivalence
$$
\gr(A_n) \simeq \gr(F_I^*(A)) = \operatorname{Sym}_{gr}(I/I^2)
$$
of graded objects, where $I/I^2$ is of pure weight $1$. Then $A_n = F_{I}^*A$, namely the filtration in question is the $I$-adic filtration.
\end{thm}
Hence, if $A$ is an augmented algebra, there can be only one (multiplicative) filtration on $A$ satisfying the conditions of Theorem \ref{introadicunicity}, namely the $I$-adic filtration. We will observe that the comultiplication on the coordinate algebra of a formal group preserves this filtration, so that the formal group structure lifts uniquely as well.
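To illustrate the statement in the simplest case (an example included here purely for orientation, not drawn from the body of the paper), take $A = k[[t]]$ with augmentation ideal $I = (t)$. The $I$-adic filtration is $F_I^n A = (t^n)$, with associated graded
$$
\gr(F_I^* A) = \bigoplus_{n \geq 0} (t^n)/(t^{n+1}) \simeq \operatorname{Sym}_{gr}(I/I^2) = k[t],
$$
where $I/I^2 \simeq k \cdot t$ sits in pure weight $1$; the theorem then says that this is the only complete multiplicative filtration on $k[[t]]$ with this associated graded.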
\subsection{Deformation to the normal cone}
Our next order of business is to study a deformation to the normal cone construction in the setting of derived algebraic geometry. In essence, this takes a closed immersion $\EuScript{X} \to \EuScript{Y}$ of classical schemes and produces a $\GG_m$-equivariant family of formal schemes over $\AAA^1$, generically equivalent to the formal completion $\widehat{\EuScript{Y}_{\EuScript{X}}}$, which degenerates to the normal bundle $N_{\EuScript{X}|\EuScript{Y}}$ formally completed along the identity section. When applied to the unit section of a formal group $\formalgroup$, this produces a $\GG_m$-equivariant $1$-parameter family of formal groups over the affine line.
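At the level of classical affine schemes, this degeneration is modeled by the familiar Rees algebra construction; the following sketch is included for intuition and uses standard conventions rather than the precise derived definitions of Section \ref{deformationsection}. For a closed immersion $\Spec(A/I) \to \Spec(A)$, set
$$
\operatorname{Rees}(A, I) = \bigoplus_{n \in \Z} I^n t^{-n} \subset A[t^{\pm 1}], \qquad I^n = A \text{ for } n \leq 0.
$$
Then $\operatorname{Rees}(A,I)$ is a $\Z$-graded (hence $\GG_m$-equivariant) $k[t]$-algebra whose fiber at $t = 1$ is $A$ and whose fiber at $t = 0$ is the associated graded $\gr_I(A) = \bigoplus_n I^n/I^{n+1}$, the coordinate ring of the normal cone.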
\begin{thm} \label{maintheorem1}
Let $f: \Spec(k) \to \formalgroup$ be the unit section of a formal group $\formalgroup$. Then there exists a stack $Def_{\filstack}(\formalgroup) \to \AAA^1/ \GG_m$ such that there is a map
$$
\Spec k \times \AAA^1 / \GG_m \to Def_{\filstack}(\formalgroup)
$$
whose fiber over $1 \in \AAA^1 / \GG_m$ is
$$
\Spec k \to \formalgroup
$$
and whose fiber over $0 \in \AAA^1/\GG_m$ is
$$
\Spec k \to \widehat{T_{\formalgroup| k}} \simeq \widehat{\GG_a},
$$
the formal completion of the tangent Lie algebra of $\formalgroup$.
\end{thm}
We would like to point out that the constructions take place in the derived setting, but the outcome is a degeneration between formal groups, which belongs to the realm of classical geometry.
One may then apply the aforementioned \emph{filtered Cartier duality} to this construction to obtain a group scheme $Def_{\filstack}(\formalgroup)^\vee$ over $\filstack$, and thereby a canonical filtration on the cohomology of the (classical) Cartier dual $\formalgroup^\vee$.
By \cite[Proposition 7.3]{geometryofilt}, $\OO(Def_{\filstack}(\formalgroup))$ acquires the structure of a complete filtered algebra. We have the following characterization of the resulting filtration on $\OO(Def_{\filstack}(\formalgroup))$, relating the deformation to the normal cone construction with the $I$-adic filtration of Theorem \ref{introadicunicity}.
\begin{cor} \label{Adicfiltrationdefcone}
Let $\formalgroup$ be a formal group over $k$. Then there exists a unique filtered formal group with $\OO(\formalgroup)$ as its underlying object. In particular, there is an equivalence
$$
\OO(Def_{\filstack}(\formalgroup)) \simeq F^*_{ad}\OO(\formalgroup)
$$
of abelian cogroup objects in $\operatorname{CAlg}(\widehat{\operatorname{Fil}}_k)$.
\end{cor}
Hence, the deformation to the normal cone construction applied to a formal group $\formalgroup$ produces a \emph{filtered formal group}.
Next, we specialize to the case of the formal multiplicative group $\widehat{\GG_m}$. By applying Theorem \ref{maintheorem1} to the unit section $\Spec k \to \widehat{\GG_m}$, we recover the filtration on the group scheme
$$
\mathsf{Fix} := \operatorname{Ker}(F -1 : \W(-) \to \W(-))
$$
of Frobenius fixed points on the Witt vector scheme, and show that this filtration arises via Cartier duality precisely from a certain $\GG_m$-equivariant family of formal groups over $\AAA^1$. As a consequence, the formal group so defined is precisely an instance of the deformation to the normal cone of the unit section $\Spec k \to \widehat{\GG_m}$.
\begin{thm}
Let $\mathbb{H} \to \filstack$ be the filtered group scheme of \cite{moulinos2019universal}. This arises as the Cartier dual $Def_{\filstack}(\formalgroup_m)^\vee$ of the deformation to the normal cone of the unit section $\Spec k \to \widehat{\GG_m}$. Namely, there exists an equivalence of group schemes over $\filstack$
$$
Def_{\filstack}(\formalgroup_m)^\vee \to \mathbb{H}.
$$
\end{thm}
Putting this together with Corollary \ref{Adicfiltrationdefcone}, we obtain the following curious characterization of the HKR filtration on Hochschild homology studied in \cite{moulinos2019universal}:
\begin{cor}
The HKR filtration on Hochschild homology is functorially induced, by way of filtered Cartier duality, by the $I$-adic filtration on $\OO(\formalgroup_m) \simeq k[[t]]$.
\end{cor}
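Concretely, and as a consistency check one can make by hand, write $\OO(\widehat{\GG_m}) \simeq k[[t]]$ with coordinate $t = x - 1$; the comultiplication dual to the multiplicative group structure is then
$$
\Delta(t) = t \otimes 1 + 1 \otimes t + t \otimes t,
$$
which visibly carries the ideal $(t)$ into the first step of the $(t)$-adic filtration on the completed tensor product. Hence the comultiplication preserves the $I$-adic filtration.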
\subsection{Filtration on $\formalgroup$-Hochschild homology}
One may of course apply the deformation to the normal cone construction to an arbitrary formal group of height $n$ over any base commutative ring. As a consequence, one obtains a canonical filtration on the aforementioned $\formalgroup$-Hochschild homology:
\begin{cor} [cf. \ref{filtrationonourguy}]
Let $\formalgroup$ be an arbitrary formal group. The functor
$$
\operatorname{HH}^{\formalgroup}(-): \operatorname{sCAlg}_R \to \Mod_R
$$
admits a refinement to the $\infty$-category of filtered $R$-modules
$$
\widetilde{\operatorname{HH}^{\formalgroup}(-)}: \operatorname{sCAlg}_R \to \Mod_R^{filt},
$$
such that
$$
\operatorname{HH}^{\formalgroup}(-) \simeq \colim_{(\Z, \leq)}\widetilde{\operatorname{HH}^{\formalgroup}(-)}.
$$
In other words, $\operatorname{HH}^{\formalgroup}(A)$ admits an exhaustive filtration for any formal group $\formalgroup$ and simplicial commutative algebra $A$.
\end{cor}
\subsection{A family of group schemes over the sphere}
We now shift our attention to the topological context. In \cite{ellipticII}, Lurie defines a notion of formal group intrinsic to the setting of spectral algebraic geometry. We explore a weak notion of Cartier duality in this setup, between formal groups over an $E_{\infty}$-ring and affine group schemes, interpreted as grouplike commutative monoids in the category of spectral schemes. Leveraging this notion of Cartier duality, we demonstrate the existence of a family of spectral group schemes for each height $n$. Since Cartier duality is compatible with base-change, one rather easily sees that these spectral schemes provide lifts of various affine group schemes.
\begin{thm}
Let $\formalgroup$ be a formal group of height $n$ over $\Spec k$, for $k$ a finite field. Let $\mathsf{Fix}_{\formalgroup} := \formalgroup^\vee$ be its Cartier dual affine group scheme. Then there exists a functorial lift $\mathsf{Fix}^{un}_{\formalgroup} \to \Spec R^{un}_{\formalgroup}$ giving the following Cartesian square of affine spectral schemes:
$$
\xymatrix{
&\mathsf{Fix}_{\formalgroup} \ar[d]_{\phi'} \ar[r]^{p'} & \spectralift \ar[d]^{\phi}\\
& \Spec(\mathbb{F}_p) \ar[r]^{p}& \Spec(R^{un}_{\formalgroup})
}
$$
Moreover, $\spectralift$ is a grouplike commutative monoid object in the $\infty$-category $sStk_{R^{un}_{\formalgroup}}$ of spectral stacks over $R^{un}_{\formalgroup}$.
\end{thm}
The spectral group scheme of the theorem arises as the weak Cartier dual of the universal deformation of the formal group $\formalgroup$; this naturally lives over the \emph{spectral deformation ring} $R^{un}_{\formalgroup}$. This $E_{\infty}$-ring, studied in \cite{ellipticII}, corepresents the formal moduli problem sending a complete (noetherian) $E_{\infty}$-ring $A$ to the space of deformations of $\formalgroup$ to $A$, and is a spectral enhancement of the classical deformation rings of Lubin and Tate.
A key example arises, in height one, from the restriction to $\mathbb{F}_p$ of the subgroup scheme $\mathsf{Fix}$ of fixed points on the Witt vector scheme.
\subsection{Liftings of $\formalgroup$-twisted Hochschild homology}
Finally, we study an $E_\infty$ (as opposed to simplicial commutative) variant of $\formalgroup$-Hochschild homology. For an $E_\infty$ $k$-algebra, this is defined in a manner analogous to $\operatorname{HH}^{\formalgroup}(A)$ (see Definition \ref{E_inftyvariant}). We conjecture that for a simplicial commutative algebra $A$ with underlying $E_\infty$-algebra $\theta(A)$, this recovers the underlying $E_\infty$-algebra of the simplicial commutative algebra $\operatorname{HH}^{\formalgroup}(A)$. In the case of the formal multiplicative group $\widehat{\GG_m}$, we verify this to be true, so that one recovers Hochschild homology.
These theories now admit lifts to the associated spectral deformation rings:
\begin{thm}
Let $\formalgroup$ be a height $n$ formal group over a finite field $k$ of characteristic $p$, and let $R^{un}_{\formalgroup}$ be the associated spectral deformation $E_\infty$-ring. Then there exists a functor
$$
\operatorname{THH}^{\formalgroup}: \operatorname{CAlg}_{R^{un}_{\formalgroup}} \to \operatorname{CAlg}_{R^{un}_{\formalgroup}}
$$
defined as
$$
\operatorname{THH}^{\formalgroup}(A):= R\Gamma( \Map(B \spectralift, \Spec A ), \OO ).
$$
This lifts $\formalgroup$-Hochschild homology in the sense that if $A$ is a $k$-algebra for which there exists an $R^{un}_{\formalgroup}$-algebra lift $\widetilde{A}$ with
$$
\widetilde{A} \otimes_{R^{un}_{\formalgroup}} k \simeq A
$$
then there is a canonical equivalence, cf.\ Theorem \ref{representability},
$$
\operatorname{THH}^{\formalgroup}(\widetilde{A}) \otimes_{R^{un}_{\formalgroup}} k \simeq \operatorname{HH}^{\formalgroup}(A).
$$
\end{thm}
We tie the various threads of this work together in the speculative final section, where we discuss the question of lifting the filtration on $\operatorname{HH}^{\formalgroup}(-)$, defined in Section \ref{additionstostory} as a consequence of the degeneration of $\formalgroup$ over $\filstack$, to a filtration on the topological lift $\operatorname{THH}^{\formalgroup}(-)$.
\subsection{Future work}
Working over the ring of integers $\OO_K$ in a local field extension $K \supset \mathbb{Q}_p$, one obtains a formal group known as the \emph{Lubin--Tate formal group}; it is canonically associated to a choice of uniformizer $\pi \in \OO_K$. In future work, we investigate analogues of the construction of $\mathbb{H}$ in \cite{moulinos2019universal}, which will be related by Cartier duality to this Lubin--Tate formal group. By the results of this paper, these filtered group schemes will have a canonical degeneration arising from the deformation to the normal cone construction applied to the Cartier dual formal groups.
In another vein, we expect the study of these spectral lifts $\operatorname{THH}^{\formalgroup}(-)$ to be an interesting direction. For example, there is the question of filtrations, and of the extent to which they lift to $\operatorname{THH}^{\formalgroup}(-)$. One could try to base-change along the map to the orientation classifier
$$
R^{un}_{\formalgroup} \to R^{or}_{\formalgroup},$$
cf. \cite{ellipticII}. Roughly, this is a complex orientable $E_\infty$-ring with the universal property that it classifies oriented deformations of the spectral formal group $\formalgroup^{un}$; these are oriented in the sense that they coincide with the formal group corresponding to a complex orientation on the underlying $E_\infty$-ring of coefficients. For example, in height one one obtains $p$-complete $K$-theory. It is conceivable that questions about filtrations and the like would be more tractable over this ring.
\vskip \baselineskip
\noindent {\bf Outline}
We begin in section 2 with some preliminaries from derived algebraic geometry. In section 3 we give a short overview of the perspective on formal groups which we adopt. In section 4, we construct the deformation to the normal cone and apply it to the unit section of a formal group. In section 5 we apply this construction to the formal multiplicative group $\widehat{\GG_m}$ and relate the resulting degeneration of formal groups to constructions in \cite{moulinos2019universal}. In section 6, we study the resulting filtrations on the associated $\formalgroup$-Hochschild homologies. We begin section 7 with a brief overview of the ideas which we borrow from \cite{ellipticII} in the context of formal groups in spectral algebraic geometry, and describe a family of spectral group schemes arising in this setting that correspond to height $n$ formal groups over characteristic $p$ finite fields. In section 8, we study lifts $\operatorname{THH}^{\formalgroup}(-)$ of $\formalgroup$-Hochschild homology to the sphere, with a key input being the group schemes of the previous section. Finally, we end with a short speculative discussion in section 9 about potential filtrations on $\operatorname{THH}^{\formalgroup}(-)$.
\vskip \baselineskip
\noindent {\bf Conventions}
We often work over the $p$-local integers $\Z_{(p)}$, and so we typically use $k$ to denote a fixed commutative $\Z_{(p)}$-algebra. If we use the notation $R$ for a ring or ring spectrum, then we are not necessarily working $p$-locally. Throughout, we work freely in the setting of $\infty$-categories and higher algebra from \cite{luriehigher}. We also point out that our use of the notation $\Spec(-)$ depends on the setting; in particular, when working with spectral schemes, $\Spec(A)$ denotes the spectral scheme corresponding to the $E_\infty$-algebra $A$. Finally, we always work in the commutative setting, so we implicitly assume all relevant algebras, coalgebras, formal groups, etc.\ are (co)commutative.
\vskip\baselineskip
\noindent{\bf Acknowledgements.} I would like to thank Marco Robalo and Bertrand To\"{e}n for their collaboration in \cite{moulinos2019universal} which led to many of the ideas presented in this work.
I would also like to thank Bertrand To\"{e}n for various helpful conversations and ideas which have made their way into this paper. This work is supported by the grant NEDAG ERC-2016-ADG-741501.
\section{Basic notions from derived algebraic geometry}
In this section we review some of the relevant concepts that we shall use from the setting of derived algebraic geometry. We recall that there are two variants, one whose affine objects are connective $E_{\infty}$-rings, and one where the affine objects are simplicial commutative rings. We review parallel constructions from both simultaneously, as we will switch between both settings.
Let $\EuScript{C} \in \{ \operatorname{CAlg}^{\operatorname{cn}}_R, \operatorname{sCAlg}_R \}$ denote either the $\infty$-category of connective $E_\infty$-algebras over $R$ or the $\infty$-category of simplicial commutative $R$-algebras. Recall that the latter can be characterised as the completion under sifted colimits of the category of (discrete) free $R$-algebras. Over a commutative ring $R$, there exists a functor
$$
\theta: \operatorname{sCAlg}_R \to \operatorname{CAlg}^{\operatorname{cn}}_R,
$$
which takes a simplicial commutative algebra to its underlying connective $E_\infty$-algebra. This functor preserves limits and colimits, and so is in fact both monadic and comonadic.
In either setting, one may define a derived stack via its functor of points, as an object of the $\infty$-category
$\operatorname{Fun}(\EuScript{C}, \mathcal{S})$ satisfying hyperdescent with respect to a suitable topology on $\EuScript{C}^{op}$, e.g.\ the \'{e}tale topology. Throughout the sequel we distinguish the context we work in by letting $\operatorname{dStk}_{R}$ denote the $\infty$-category of derived stacks and $\operatorname{sStk}_R$ the $\infty$-category of ``spectral stacks".
In either case, one obtains an $\infty$-topos, which is Cartesian closed, so that it makes sense to talk about internal mapping objects: given any two $X,Y \in \operatorname{Fun}(\EuScript{C}, \mathcal{S})$, one forms the mapping stack $\Map_{\EuScript{C}}(X, Y)$. In various cases of interest, if the source and/or target is suitably representable by a derived scheme or a derived Artin stack, then this is the case for $\Map_{\EuScript{C}}(X, Y)$ as well.
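Concretely, the internal mapping object is characterized by the standard adjunction formula, recorded here for the reader's convenience: for $A \in \EuScript{C}$,
$$
\Map_{\EuScript{C}}(X, Y)(A) \simeq \Map_{\operatorname{Fun}(\EuScript{C}, \mathcal{S})}(X \times \Spec A, \, Y).
$$
This is the formula one uses implicitly whenever evaluating a mapping stack on points.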
There is a certain type of base-change result that we will use, cf. \cite[Proposition A.1.5]{halpern2014mapping} \cite[Proposition 9.1.5.7]{lurie2016spectral}.
\begin{prop} \label{morebasechangeshiiiii}
Let $f: \EuScript{X} \to \Spec R$ be a geometric stack over $\Spec R$. Assume that one of the following two conditions holds:
\begin{itemize}
\item $\EuScript{X}$ is a derived scheme
\item The morphism $f$ is of finite cohomological dimension over $\Spec R$, so that the global sections functor sends $\on{QCoh}(\EuScript{X})_{\geq 0} \to ({\Mod_R})_{\geq -n}$ for some positive integer $n$.
\end{itemize}
Then, for $g: \Spec R' \to \Spec R$, the following diagram of stable $\infty$-categories
$$
\xymatrix{
&\Mod_R \ar[d]^{g^*} \ar[r]^{f^*} & \on{QCoh}(\EuScript{X}) \ar[d]^{g'^*}\\
& \Mod_{R'} \ar[r]^{f'^*} & \on{QCoh}(\EuScript{X}_{R'})
}
$$
is right adjointable, and so the Beck-Chevalley natural transformation $ g^* f_* \to f'_*g'^{*}: \on{QCoh}(\EuScript{X}) \to \Mod_{R'}$ is an equivalence.
\end{prop}
\subsection{Formal algebraic geometry and derived formal descent}
In this paper, we shall often find ourselves in the setting of formal algebraic geometry and formal schemes. Hence we recall some basic notions in this setting. We end this subsection with a notion of formal descent which is intrinsic to the derived setting. This phenomenon will be exploited in Section \ref{deformationsection}.
An (underived) \emph{formal affine scheme} corresponds to the following piece of data:
\begin{defn}
We define an adic $R$-algebra to be an $R$-algebra $A$ together with an ideal $I \subset A$, which endows $A$ with the $I$-adic topology.
\end{defn}
\begin{const}
Let $A$ be an adic commutative ring having a finitely generated ideal of definition $I \subseteq \pi_0 A$. Then there exists a tower $ \cdots \to A_3 \to A_2 \to A_1$ with the following properties:
\begin{enumerate}
\item Each of the maps $A_{i+1} \to A_i$ is a surjection with nilpotent kernel.
\item The canonical map $\colim \Map_{\operatorname{CAlg}}(A_n, B) \to \Map_{\operatorname{CAlg}}(A,B)$ induces an equivalence of the left hand side with the summand of $\Map_{\operatorname{CAlg}}(A,B)$ consisting of maps $\phi:A \to B $ annihilating some power of the ideal $I$.
\item Each of the rings $A_i$ is finite projective when regarded as an $A$-module.
\end{enumerate}
\noindent One now defines $\operatorname{Spf} A$ to be the filtered colimit
$$
\colim_{i} \Spec A_i
$$
in the category of locally ringed spaces. In fact, it is the left Kan extension of the $\Spec(-)$ functor along the inclusion $\operatorname{CAlg} \to \operatorname{CAlg}^{ad}$.
\end{const}
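For example (a standard instance, spelled out for concreteness), taking $A = k[[t]]$ with ideal of definition $(t)$, one may take the tower $A_i = k[t]/(t^i)$, so that
$$
\operatorname{Spf} k[[t]] \simeq \colim_{i} \Spec k[t]/(t^i),
$$
the formal affine line.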
\begin{defn}
A formal scheme over $R$ is a functor
$$
X: \operatorname{CAlg}_R^0 \to \operatorname{Set}
$$
which is Zariski locally of the above form. A (commutative) formal group is an abelian group object in the category of formal schemes. By Remark \ref{abelianobjects1cat}, this consists of the data of a formal scheme $\formalgroup$ whose functor of points takes values in abelian groups.
\end{defn}
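The basic examples to keep in mind (included here for the reader's orientation) are the formal additive and multiplicative groups: on a (discrete) commutative ring $R$,
$$
\widehat{\GG_a}(R) = (\operatorname{Nil}(R), +), \qquad \widehat{\GG_m}(R) = (1 + \operatorname{Nil}(R), \times).
$$
Both have underlying formal scheme $\operatorname{Spf} k[[t]]$; the group structures are encoded by the comultiplications $t \mapsto t \otimes 1 + 1 \otimes t$ and $t \mapsto t \otimes 1 + 1 \otimes t + t \otimes t$ respectively.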
There is a rather surprising descent statement one can make in the setting of derived algebraic geometry. For this we first recall the notion of formal completion. We remark that in this section we are always working in the locally Noetherian context.
\begin{defn}
Let $f: X \to Y$ be a closed immersion of schemes. We define the formal completion to be the following stack $\widehat{Y}_{X}$ whose functor of points is given by
$$
\widehat{Y}_{X}(R)= Y(R) \times_{Y(R_{red})} X(R_{red})
$$
where $R_{red}$ denotes the reduced ring $(\pi_0 R)_{red}$.
\end{defn}
Although defined in this way as a stack, it is in fact representable by an object of the category of formal schemes, commonly referred to as the formal completion of $Y$ along $X$.
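For instance, for the inclusion of the origin $\{0\} \to \AAA^1 = \Spec k[t]$, the formal completion is
$$
\widehat{(\AAA^1)}_{\{0\}} \simeq \operatorname{Spf} k[[t]],
$$
the formal affine line.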
We form the nerve $N(f)_\bullet$ of the map $f: X \to Y$; recall that this is a simplicial object which in degree $n$ is the $(n+1)$-fold fiber product
$$
N(f)_n = X \times_Y X \times_Y \cdots \times_Y X.
$$
The augmentation map of this simplicial object naturally factors through the formal completion (by the universal property that the formal completion satisfies). We borrow the following key result from \cite{toen2014derived}:
\begin{thm} \label{formaldescent}
The augmentation morphism $N(f)_\bullet \to \widehat{Y}_X$ displays $\widehat{Y}_X$ as the colimit of the diagram $N(f)_\bullet$ in the category of derived schemes: this gives an equivalence
$$
\Map_{dStk}(\widehat{Y}_X, Z) \simeq \lim_{n \in \Delta} \Map_{dSch}(N(f)_n, Z)
$$
for any derived scheme $Z$.
\end{thm}
\begin{rem}
At its core, this is a consequence of \cite[Theorem 4.4]{carlsson2008derived} on derived completions in stable homotopy theory, which models the completion of an $A$-module spectrum along a map of ring spectra $f: A \to B$ as the totalization of a certain cosimplicial diagram of spectra obtained via a co-nerve construction.
\end{rem}
\subsection{Tangent and Normal bundles}
Let $X$ be a derived stack, and let $E \in \on{Perf}(X)$ be a perfect complex of Tor-amplitude concentrated in degrees $[0,n]$. Then we have the following notion, cf.\ \cite[Section 3]{toen2014derived}:
\begin{defn}
We define the linear stack associated to $E$ to be the functor $ \mathbb{V}(E)$ sending
$$
(u: \Spec A \to X) \mapsto \Map_{\Mod_A}(u^*(E), A ).
$$
\end{defn}
\begin{ex} \label{kgaexample}
Let $\OO[n] \in \on{Perf}(X)$ be a shift of the structure sheaf. Then $\mathbb{V}(\OO[n])$ is simply $K(\GG_a, -n)$. For a general perfect complex $E$, $\mathbb{V}(E)$ may be obtained from these $K(\GG_a, -n)$ by taking twisted forms and finite limits.
\end{ex}
\begin{defn}
Let $f:X \to Y$ be a map of derived stacks. We define the normal bundle stack to be $\mathbb{V}(T_{X|Y}[1])$. This will be a derived stack over $X$; if $f$ is a closed immersion of classical schemes then this will be representable by a derived scheme.
\end{defn}
\begin{ex}
Let $i : \Spec k \to \formalgroup$ be the unit section of a formal group. This is an lci closed immersion, hence the cotangent complex is concentrated in (homological) degree $1$; thus the tangent complex is just $k$ in degree $-1$. It follows that the normal bundle $\mathbb{V}(T_{k| \formalgroup}[1])$ is just $K(\GG_a,0) = \GG_a$, the additive group. In fact, we may identify the normal bundle with the tangent Lie algebra of $\formalgroup$.
\end{ex}
\section{Formal groups and Cartier duality} \label{ogdiscussion}
In this section we review some ideas pertaining to the theory of (commutative) formal groups which will be used throughout this paper. In particular we carefully review the notion of Cartier duality as introduced by Cartier in \cite{cartier1962groupes}, and also described in \cite[Section 37]{hazewinkel1978formal}.
There are several perspectives one may adopt when studying formal groups. In general, one may think of them as an abelian group object in the category of formal schemes or representable formal moduli problems.
In this paper we will be focusing on the somewhat restricted setting of formal groups which arise from certain types of Hopf algebras. In this setting one has a particularly well behaved duality theory which we shall exploit. Furthermore it is this structure which has been generalized by Lurie in \cite{ellipticII} to the setting of spectral algebraic geometry.
\subsection{Abelian group objects}
We start off with the notions of abelian group and commutative monoid objects in an arbitrary $\infty$-category and review their distinction.
\begin{defn}
Let $\mathcal{C}$ be an $\infty$-category which admits finite limits. A commutative monoid object is a functor $M: \operatorname{Fin}_* \to \mathcal{C}$ with the property that, for each $n$, the maps $M(\langle n \rangle) \to M(\langle 1 \rangle)$ induced by the Segal morphisms $\rho^i: \langle n \rangle \to \langle 1 \rangle$ exhibit an equivalence $M(\langle n \rangle) \simeq M(\langle 1 \rangle)^{n}$ in $\mathcal{C}$.
In addition, a commutative monoid $M$ is grouplike if for every object $C\in \mathcal{C}$, the commutative monoid $\pi_0 \Map(C,M)$ is an abelian group.
\end{defn}
We now define the somewhat contrasting notion of abelian group object. This will be part of the relevant structure on a formal group in the spectral setting.
\begin{defn}
Let $\mathcal{C}$ be an $\infty$-category. Then the $\infty$-category $\operatorname{Ab}(\mathcal{C})$ of abelian group objects of $\mathcal{C}$ is defined to be
$$
\operatorname{Fun}^\times(\operatorname{Lat}^{op}, \mathcal{C}),
$$
the category of product-preserving functors from the opposite of the category $\operatorname{Lat}$ of finite rank free abelian groups into $\mathcal{C}$.
\end{defn}
\begin{rem} \label{abelianobjects1cat}
Let $\mathcal{C}$ be a small category. Then an abelian group object $A$ is precisely an object whose representable presheaf $h_A$ takes values in abelian groups. Furthermore, in this setting, the notions of abelian group object and grouplike commutative monoid object coincide.
\end{rem}
\subsection{Formal groups and Cartier duality over a field} \label{cartierdualityfield}
Before setting the stage for the various manifestations of Cartier duality to appear,
we say a few things about Hopf algebras, as they are central to this work. We begin with a brief discussion of what happens over a field $k$.
\begin{defn}
For us, a (commutative, cocommutative) Hopf algebra $H$ over $k$ is an abelian group object in the category of coalgebras over $k$.
\end{defn}
Unpacking the definition, and using the fact that the category of coalgebras is equipped with a Cartesian monoidal structure (it is the opposite of a category of commutative algebra objects), we see that this is just another way of describing a bialgebra $H$ equipped with an antipode map
$$
i: H \to H;
$$
this arises from the ``abelian group structure" on the underlying coalgebra.
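A standard example to keep in mind: for the group algebra $k[G]$ of an abelian group $G$, with comultiplication $\Delta(g) = g \otimes g$ and counit $\epsilon(g) = 1$, the antipode is given by
$$
i(g) = g^{-1},
$$
extended $k$-linearly; the Hopf algebra axioms here amount to the group axioms for $G$.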
\begin{const}
Let $H$ be a Hopf algebra over $k$. Then one may define a functor
$$
\operatorname{coSpec}(H): \operatorname{CAlg}_k \to \operatorname{Ab} , \, \, \, \, \, R \mapsto \operatorname{Gplike}(H \otimes_k R)= \{x \mid \Delta(x) = x \otimes x, \, \epsilon(x) = 1\},
$$
assigning to a commutative $k$-algebra $R$ the set of grouplike elements of $H \otimes_k R$.
The Hopf algebra structure on $H$ endows these sets with an abelian group structure, which is what makes the above an abelian group object-valued functor. In fact, this will be a formal scheme, and there will be an equivalence
$$
\operatorname{coSpec} (H) \simeq \operatorname{Spf}(H^\vee)
$$
where $H^{\vee}$, the linear dual of $H$, is a $k$-algebra, complete with respect to the $I$-adic topology induced by an ideal of definition $I \subset H^\vee$. Hence we arrive at our first interpretation of a formal group; formal groups of this kind correspond precisely to Hopf algebras.
\end{const}
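To illustrate, we recall what is perhaps the most basic example (standard, and easily verified from the definitions):
\begin{ex}
Let $H = \Gamma^*_k(k)$ be the divided power Hopf algebra, with $k$-basis $\{\gamma_n\}_{n \geq 0}$, multiplication $\gamma_i \gamma_j = \binom{i+j}{i} \gamma_{i+j}$, and comultiplication $\Delta(\gamma_n) = \sum_{i+j = n} \gamma_i \otimes \gamma_j$. Its linear dual is the power series algebra $H^\vee \cong k[[x]]$, where $x^n$ is dual to $\gamma_n$, and the comultiplication on $H^\vee$ dual to the multiplication on $H$ is given by $x \mapsto x \otimes 1 + 1 \otimes x$. Hence
$$
\operatorname{coSpec}(H) \simeq \operatorname{Spf}(k[[x]]) = \widehat{\GG_a},
$$
the formal additive group.
\end{ex}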
\begin{const} \label{indproduality}
Let us unpack the previous construction from an algebraic vantage point. Over a field $k$, there is an equivalence
$$
\operatorname{cCAlg}_k \simeq \operatorname{Ind}(\operatorname{cCAlg}^{fd}_k)
$$
where $\operatorname{cCAlg}^{fd}_k$ denotes the category of coalgebras whose underlying vector space is finite dimensional. By linear duality, there is an equivalence
$$
\operatorname{Ind}(\operatorname{cCAlg}^{fd}_k) \simeq \operatorname{Pro}({\operatorname{CAlg}^{fd}_k})^{op}
$$
arising from the identification $\operatorname{cCAlg}^{fd}_k \simeq ({\operatorname{CAlg}^{fd}_k})^{op}$. This may then be promoted to a correspondence between abelian group objects on one side and cogroup objects on the other:
\begin{equation} \label{hopfindpro}
\mathsf{Hopf}_k := \operatorname{Ab}(\operatorname{cCAlg}_k) \simeq \operatorname{coAb}(\operatorname{Pro}({\operatorname{CAlg}^{fd}_k}))
\end{equation}
\end{const}
\begin{rem}
The interchange of display (\ref{hopfindpro}) is precisely the underlying idea of Cartier duality between formal groups and affine group schemes.
Recall that Hopf algebras correspond contravariantly via the $\Spec(-)$ functor to affine group schemes. Hence one has
$$
AffGp_{k}^{op} \simeq \mathsf{Hopf}_k \simeq \operatorname{FG}_k,
$$
where the left hand side denotes the category of affine group schemes over $k$. The equivalence on the right is implemented by the functor $\operatorname{coSpec}(-)$ described above.
We remark that in this setting, the category of Hopf algebras over the field $k$ is actually abelian, hence the categories of formal groups and affine group schemes are themselves abelian.
\end{rem}
\subsection{Formal groups and Cartier duality over a commutative ring}
Over a general commutative ring $R$, the duality theory between formal groups and affine group schemes is not quite as simple to describe. In practice, one restricts to certain subcategories on both sides, which then fit into the Ind-Pro duality framework of Construction \ref{indproduality}. This is achieved by imposing a condition on the underlying coalgebra of the Hopf algebras at hand.
\begin{rem}
We study coalgebras following the conventions of \cite[Section 1.1]{ellipticII}. In particular, if $C$ is a coalgebra over $R$, we always require that the underlying $R$-module of $C$ is flat. This is done as in \cite{ellipticII}, to ensure that $C$ remains a coalgebra in the setting of higher algebra. Furthermore, we implicitly assume that all coalgebras appearing in this text are (co)commutative.
\end{rem}
To an arbitrary coalgebra, one may functorially associate a presheaf on the category of affine schemes given by the cospectrum functor
$$
\operatorname{coSpec}: \operatorname{cCAlg}_R \to \operatorname{Fun}(\operatorname{CAlg}_R, \operatorname{Set}).
$$
\begin{defn} \label{cospectrumshit}
Let $C$ be a coalgebra over $R$. We define $\operatorname{coSpec}(C)$ to be the functor
$$
\operatorname{coSpec}(C): \operatorname{CAlg}_R \to \operatorname{Set}
$$
sending $A \mapsto \operatorname{Gplike}(C \otimes_R A)= \{x \mid \Delta(x) = x \otimes x, \, \epsilon(x) = 1\}$.
\end{defn}
The $\operatorname{coSpec}(-)$ functor is fully faithful when restricted to a certain class of coalgebras. We borrow the following definition from \cite{ellipticII}. See also \cite{strickland1999formal} for a related notion of \emph{coalgebra with good basis}.
\begin{defn} \label{smoothcoalgoriginal}
Fix $R$ and let $C$ be a (co-commutative) coalgebra over $R$. We say $C$ is \emph{smooth} if its underlying $R$-module is flat, and if it is isomorphic to the divided power coalgebra
$$
\Gamma^*_R(M):= \bigoplus_{n \geq 0} \Gamma^n_R(M)
$$
for some projective $R$-module $M$. Here, $\Gamma^n_R(M)$ denotes the invariants for the action of the symmetric group $\Sigma_n$ on $M^{\otimes n}$.
\end{defn}
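A word of caution on the role of divided powers; we record the following standard observation:
\begin{ex}
For $M = R$ free of rank one, $\Gamma^*_R(M)$ has an $R$-basis $\{\gamma_n\}_{n \geq 0}$ with $\Delta(\gamma_n) = \sum_{i+j=n} \gamma_i \otimes \gamma_j$. If $R$ is a $\mathbb{Q}$-algebra, the natural norm map $\operatorname{Sym}^n_R(M) \to \Gamma^n_R(M)$ is an isomorphism; over a general base the two functors differ, which is one reason smoothness is formulated in terms of the divided power coalgebra rather than the symmetric coalgebra.
\end{ex}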
Given an arbitrary coalgebra $C$ over $R$, the linear dual $C^\vee =\Map(C, R)$ acquires a canonical $R$-algebra structure. In general $C$ cannot be recovered from $C^\vee$. However, in the smooth case, the dual $C^\vee$ acquires the additional structure of an adic topology, giving it the structure of an adic $R$-algebra. This allows us to recover $C$, via the following proposition, cf. \cite[Theorem 1.3.15]{ellipticII}:
\begin{prop}
Let $C, D \in \operatorname{cCAlg}^{sm}_R$ be smooth coalgebras. Then $R$-linear duality induces a homotopy equivalence
$$
\Map_{\operatorname{cCAlg}_R}(C, D) \simeq \Map^{\operatorname{cont}}_{\operatorname{CAlg}_R}(D^\vee, C^\vee).
$$
\end{prop}
\begin{rem}
One can go further and characterize intrinsically all adic $R$-algebras that arise as duals of smooth coalgebras. These are precisely the algebras of the form $\widehat{\operatorname{Sym}^*(M)}$, the completion of $\operatorname{Sym}^*(M)$ along the augmentation ideal $\operatorname{Sym}^{\geq 1}(M)$, for some projective $R$-module $M$ of finite type.
\end{rem}
\begin{rem}
Fix $C$ a smooth coalgebra. There is always a canonical map of stacks $\operatorname{coSpec}(C) \to \Spec(A)$ where $A= C^\vee$, but it is typically not an equivalence. The condition that $C$ is smooth guarantees precisely that there is an induced equivalence $\operatorname{coSpec}(C) \to \operatorname{Spf}(A) \subseteq \Spec A$, where $\operatorname{Spf}(A)$ denotes the formal spectrum of the adic $R$-algebra $A$. In particular, $\operatorname{coSpec}(C)$ is a formal scheme in the sense of \cite[Chapter 8]{lurie2016spectral}.
\end{rem}
\begin{prop}[Lurie] \label{fullyfaithfultobecited}
Let $R$ be a commutative ring. Then the construction $C \mapsto \operatorname{coSpec}(C)$ induces a fully faithful embedding of $\infty$-categories
$$
\operatorname{cCAlg}^{sm}_R \to \operatorname{Fun}(\operatorname{CAlg}^{0}_R, \mathcal{S}).
$$
Moreover, this embedding commutes with finite products and base-change.
\end{prop}
\begin{proof}
This essentially follows from the fact that a smooth coalgebra can be recovered from its linear dual, viewed as an adic $E_{\infty}$-algebra.
\end{proof}
\begin{const}
As a consequence of the fact that the $\operatorname{coSpec}(-)$ functor preserves finite products, this can be upgraded to a fully faithful embedding
$$
\operatorname{Ab}(\operatorname{cCAlg}^{sm}_R) \to \operatorname{Ab}(f\operatorname{Sch}_R)
$$
of abelian group objects in smooth coalgebras into formal groups. Unless otherwise mentioned, we will focus on formal groups of this form; hence, we use the notation $\operatorname{FG}_R$ to denote the category of coalgebraic formal groups over $R$. We refer to the resulting correspondence as Cartier duality.
\end{const}
We would like to interpret the above correspondence geometrically.
Let $AffGrp^{b}_R$ be the subcategory of affine group schemes corresponding, via the $\Spec(-)$ functor, to the category $\mathsf{Hopf}^{sm}$, which we use to denote the category of Hopf algebras whose underlying coalgebra is smooth. Meanwhile, a cogroup object $\widehat{H}$ in the category of adic algebras corepresents a functor
$$
F: \operatorname{CAlg}^{ad} \to Grp, \, \, \, A \mapsto Hom_{\operatorname{CAlg}^{ad}}(\widehat{H}, A),
$$
where the group structure arises from the cogroup structure on $\widehat{H}$.
Essentially by definition, this is exactly the data of a formal group, so we may identify the category of formal groups with the category $\operatorname{coAb}(\operatorname{CAlg}^{ad})$.
We have identified the categories in question as those of affine group schemes and formal groups respectively; one can further conclude that these dualities are representable by certain distinguished objects in these categories.
\begin{prop}[{cf.\ \cite[Propositions 37.3.6, 37.3.11]{hazewinkel1978formal}}]
There exist natural bijections
$$
\Hom_{\mathsf{Hopf}^{sm}}(A[t, t^{-1}], C) \cong \Hom_{\operatorname{CAlg}^{ad}}(D(C), A)
$$
$$
\Hom_{\operatorname{CoAb}({\operatorname{CAlg}_B^{ad}})}(B[[T]], A) \cong \Hom_{\operatorname{CAlg}}(D^T(A), B).
$$
Here, for a coalgebra $C$, $D(C)$ denotes the linear dual, and for a topological algebra $A$, $D^T(A)= \Map_{cont}(A, R)$ denotes the \emph{continuous dual}.
\end{prop}
One can put this all together to see that there are duality functors which are moreover represented by the multiplicative group and the formal multiplicative group respectively.
\noindent
One has the following expected base-change property:
\begin{prop}\label{basechange}
Let $\formalgroup$ be a formal group over $\Spec R$, and let $f: R \to S$ be a map of commutative rings. Let $\formalgroup_{S}$ denote the formal group over $\Spec S$ obtained by base change. Then there is a natural isomorphism
$$
D^T(\formalgroup|_{S}) \simeq D^T(\formalgroup)_{S}
$$
of affine group schemes over $\Spec S$.
\end{prop}
\section{Filtered formal groups} \label{filteredformalgroupsection}
We define here a notion of a filtered formal group, along with Cartier duality for such objects. We discuss here only (``underived'') formal groups over discrete commutative rings, but we conjecture that these notions generalize to the case where $R$ is a connective $E_\infty$-ring.
\subsection{Filtrations and $\filstack$}
We first recall a few preliminaries about filtered objects.
\begin{defn}
Let $R$ be an $E_\infty$-ring. We set
$$
\operatorname{Fil}_R := \operatorname{Fun}(\Z^{op}, \Mod_R),
$$
where $\Z$ is viewed as a category with morphisms given by the partial ordering and refer to this as the $\infty$-category of filtered $R$-modules.
\end{defn}
\begin{rem}
The $\infty$-category $\operatorname{Fil}_R$ is symmetric monoidal with respect to the Day convolution product.
\end{rem}
\begin{defn}
There exist functors
$$
\operatorname{Und}: \operatorname{Fil}_R \to \Mod_R \, \, \, \, \, \, \, \, \, \operatorname{gr}: \operatorname{Fil}_R \to \operatorname{Gr}_R,
$$
such that to a filtered $R$-module $M$, one associates its underlying object $\operatorname{Und}(M)= \colim_{n \to -\infty} M_n$ and $\operatorname{gr}(M) = \oplus_n \operatorname{cofib}(M_{n+1} \to M_{n})$ respectively.
\end{defn}
\begin{ex}
Let $A$ be a commutative ring, and $I \subset A$ be an ideal of $A$. We define a filtration $F^*_I(A)$ with
$$
F^n_I(A) =
\begin{cases}
A, & n \leq 0 \\
I^n, & n \geq 1
\end{cases}
$$
This is the \emph{I-adic} filtration on $A$.
\end{ex}
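Unwinding the definitions from above, one computes the underlying object and associated graded of this filtration directly:
$$
\operatorname{Und}(F^*_I(A)) = \colim_{n \to -\infty} F^n_I(A) = A, \, \, \, \, \, \, \operatorname{gr}(F^*_I(A)) = \bigoplus_{n \geq 0} I^n/I^{n+1},
$$
the latter being the usual associated graded ring of the $I$-adic filtration (here $I^0 = A$, and the cofibers are discrete since each inclusion $I^{n+1} \subseteq I^n$ is injective).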
\begin{defn} \label{completenessfiltered}
There exists a notion of completeness in the setting of filtrations. We say a filtered $R$-module $M$ is complete if
$$
\lim_{n \to \infty} M_n \simeq 0.
$$
Alternatively, $M$ is complete if $\lim_n M_{-\infty}/M_n \simeq M_{- \infty} = \operatorname{Und}(M)$. We denote the $\infty$-category of filtered modules which are complete by $\widehat{\operatorname{Fil}}_R$. This is a localization of $\operatorname{Fil}_R$ and comes equipped with a completed symmetric monoidal structure, such that the \emph{completion} functor
$$
\widehat{(-)}: \operatorname{Fil}_R \to \widehat{\operatorname{Fil}}_R
$$
is symmetric monoidal.
\end{defn}
\begin{const}
The category of filtered $R$-modules, as an $R$-linear stable $\infty$-category, can be equipped with several different $t$-structures. We will occasionally work with the \emph{neutral} $t$-structure on $\operatorname{Fil}_R$, defined so that
$F^*(M) \in (\operatorname{Fil}_R)_{\geq 0}$ if $F^n(M) \in (\Mod_{R})_{\geq 0}$ for all $n \in \Z$. Similarly,
$F^*(M) \in (\operatorname{Fil}_R)_{\leq 0}$ if $F^n(M) \in (\Mod_{R})_{\leq 0}$ for all $n \in \Z$.
We remark that the standard $t$-structure on $\Mod_R$ is compatible with sequential colimits (cf. \cite[Definition 1.2.2.12]{luriehigher}). This has the consequence that if $F^*(M) \in \operatorname{Fil}_R^{\heartsuit}$ then
$$
\colim_{n \to - \infty} F^n(M) = \operatorname{Und}( F^*(M)) \in \Mod_R^\heartsuit.
$$
We occasionally refer to filtered $R$-modules which are in the heart of this $t$-structure as discrete.
\end{const}
We now briefly recall the description of filtered objects in terms of quasi-coherent sheaves over the stack $\filstack$. This quotient stack may be defined as the quotient of $\AAA^1 = \Spec(R[t])$ by the canonical (multiplicative) action of the group scheme $\GG_m = \Spec(R[t, t^{-1}])$, extending the group multiplication along the inclusion $\GG_m \hookrightarrow \AAA^1$. This comes equipped with two distinguished points
$$
1: \Spec R \cong \GG_m / \GG_m \to \filstack
$$
$$
0: B \GG_m = \{0\} / \GG_m \to \filstack
$$
which we often refer to in this work as the generic (open) point and the special/closed point respectively.
We remark that the quotient map $\pi: \AAA^1 \to \filstack$ is a smooth (and hence fppf) atlas for $\filstack$, making $\filstack$ into an Artin stack.
\begin{thm}
There exists a symmetric monoidal equivalence
$$
\operatorname{Fil}_R \to \on{QCoh}(\filstack)
$$
Furthermore, under this equivalence, one may identify the underlying object and associated graded functors with pullbacks along $1$ and $0$ respectively.
\end{thm}
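Concretely, this equivalence is implemented by the Rees construction; we sketch the standard picture in the case of a filtered commutative algebra whose filtration is by subalgebras.
\begin{ex}
To a filtered algebra $F^*(A)$ with $F^{n+1}(A) \subseteq F^n(A)$ and $F^n(A) = A$ for $n \leq 0$, one associates the Rees algebra
$$
\operatorname{Rees}(F^*(A)) := \bigoplus_{n \in \Z} F^n(A) \cdot t^{-n} \subseteq A[t, t^{-1}],
$$
a graded $R[t]$-algebra, viewed as a $\GG_m$-equivariant quasi-coherent algebra on $\AAA^1$. Pulling back along the point $1$ (i.e.\ setting $t=1$) recovers the underlying algebra $A$, while pulling back along the point $0$ (i.e.\ setting $t=0$) recovers the associated graded $\bigoplus_n F^n(A)/F^{n+1}(A)$.
\end{ex}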
\subsection{Formal algebraic geometry over $\filstack$}
We propose in this section the rough heuristic that an affine formal scheme over $\filstack$ should be interpreted as none other than a complete filtered algebra. We justify this by showing that a complete filtered algebra, viewed as a quasi-coherent sheaf over $\filstack$, satisfies a form of completeness directly related to the standard notion of $t$-adic completeness for an $R[t]$-algebra $A$. This may then be pulled back along the atlas $\AAA^1 \to \filstack$ to an adic commutative $R[t]$-algebra, which is complete with respect to multiplication by $t$.
\begin{const}
Recall, e.g., from \cite{lurie2015rotation}, that there is an equivalence
$$
\operatorname{Fil}_R \simeq \Mod_{R[t]}(\operatorname{Gr}_R),
$$
where $R[t]$ is given the grading such that $t$ sits in weight $-1$. More precisely, it is the graded $E_\infty$-algebra with graded pieces
$$
R[t]_n = \begin{cases}
R, & n \leq 0, \\
0, & n > 0.
\end{cases}
$$
One has a map
$$
R[t] \to \underline{\Map}_{gr}(X,X)
$$
in $\operatorname{Gr}_R$ making $X \in \operatorname{Gr}_R$ into an $R[t]$-module. There is an equivalence of $E_1$-algebras $R[t] \simeq \operatorname{Free}_{E_1}(R(1))$, expressing $R[t]$ as the free $E_1$-algebra on $R(1)$. Unpacking all this, we obtain a map
$$
R \to \underline{\Map}_{gr}(X,X) \otimes R(-1)
$$
which precisely singles out the structure maps of the filtration on $X$.
\end{const}
\begin{defn} \label{gradedcompleteness}
We say a graded $R[t]$-module $X \in \Mod_{R[t]}(\operatorname{Gr}_R)$ is $(t)$-complete if the limit of the following sequence of multiplication-by-$t$ maps
$$
... \xrightarrow{t} X \otimes R(n+1) \xrightarrow{t} X \otimes R(n) \xrightarrow{t} X \otimes R(n-1) \xrightarrow{t}...
$$
vanishes, where the tensor product is taken with respect to the Day convolution symmetric monoidal structure on $\operatorname{Gr}_R$.
\end{defn}
It is immediately clear that the above agrees with the notion of completeness in the sense of Definition \ref{completenessfiltered}: namely, $X \in \operatorname{Fil}_R$ is complete if and only if it is complete in the sense of Definition \ref{gradedcompleteness} when viewed as an object of $\Mod_{R[t]}(\operatorname{Gr}_R)$.
We would like to use this observation to show that completeness may further be checked after ``forgetting the grading'', i.e.\ upon pullback of the associated quasi-coherent sheaf on $\filstack$ along $\pi: \AAA^1 \to \filstack$. First, recall the relevant (unfiltered/ungraded) classical notion of $t$-completeness:
\begin{defn} \label{classicalcompleteness}
Fix $R[t]$, the polynomial algebra in one generator (with no additional structure of a grading). An $R[t]$-module $M$ is $t$-complete if the limit of the tower
$$
...M \xrightarrow{t} M \xrightarrow{t} M \xrightarrow{t} ...
$$
vanishes. By \cite[8.2.4.15]{lurie2016spectral}, there is an equivalence
$$
\on{QCoh}(\widehat{\AAA^1}) \simeq \Mod_{R[t]}^{\operatorname{Cpl}(t)}
$$
where the right hand side denotes the $\infty$-category of $t$-complete $R[t]$-modules and the left hand side denotes the $R$-linear $\infty$-category of quasi-coherent sheaves on the formal completion $\widehat{\AAA^1}= \operatorname{Spf} R[[t]]$ of the affine line.
\end{defn}
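As a sanity check on this definition, consider the following standard examples:
\begin{ex}
The module $M = R[[t]]$ is $t$-complete: a compatible system $(x_i)$ in the limit of the tower satisfies $x_i = t x_{i+1}$, so $x_0 \in \bigcap_n t^n R[[t]] = 0$, and likewise every $x_i = 0$ (and one checks that the relevant $\lim^1$-term vanishes as well). By contrast, $M = R[t, t^{-1}]$ is not $t$-complete: $t$ acts invertibly, so the limit of the tower is $M$ itself.
\end{ex}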
Now we use this to show that completeness can be tested upon pullback to $\AAA^1$.
\begin{prop} \label{completenessnotionsagree}
Let $X \in \operatorname{Fil}_R \simeq \on{QCoh}(\filstack)$ be a filtered $R$-module. Then $X$ is complete as a filtered object if and only if its pullback $\pi^*(X) \in \on{QCoh}(\AAA^1)$ is $t$-complete as an $R[t]$-module.
\end{prop}
\begin{proof}
By the above discussion, we express completeness as the property that
$$
\lim( ... \xrightarrow{t} X \otimes R(n) \xrightarrow{t} X \otimes R(n-1) \xrightarrow{t} ...)
$$
vanishes in the $\infty$-category
$$
\operatorname{Gr}_R \simeq \operatorname{Fun}(\Z, \Mod_R)
$$
of graded $R$-modules, where $\Z$ is viewed as a discrete space. We would like to show that the limit vanishes upon applying
$$
\bigoplus: \operatorname{Gr}_R \to \Mod_R, \, \, \, \, \, (X_n) \mapsto \bigoplus_n X_n.
$$
By \cite[Proposition 4.2]{geometryofilt} this functor will preserve the limit, as it satisfies the equivalent conditions for the comonadic Barr-Beck theorem, so that the limit vanishes in $\Mod_R$. Conversely, suppose $X$ is a filtered $R$-module which has the property that $\bigoplus_{n \in \Z} X_n$ is complete as an $R[t]$-module. This means that the limit along multiplication by $t$ in $\Mod_R$ vanishes. However, we may apply \cite[Proposition 4.2]{geometryofilt} again to see that this limit is actually created in $\operatorname{Gr}_R$, and moreover the functor $\bigoplus$ preserves this limit. In particular, this means that $\lim( ... \xrightarrow{t} X \otimes R(n) \xrightarrow{t} X \otimes R(n-1) \xrightarrow{t} ...)$ vanishes in $\operatorname{Gr}_R$, as we wanted to show.
\end{proof}
\begin{rem}
We see therefore that if $A$ is a complete filtered algebra over $R$, then it gives rise to a commutative algebra $\pi^*(A) \in \on{QCoh}(\AAA^1) \simeq \Mod_{R[t]}$, which can be endowed with the $(t)$-adic topology, with respect to which it is complete. By \cite[Proposition 8.1.2.1, 8.1.5.1]{lurie2016spectral}, algebras of this form embed fully faithfully into $\operatorname{sStk}_{R[t]}$, the $\infty$-category of spectral stacks over $\AAA^1_R$, with essential image precisely the \emph{formal affine schemes} over $\AAA^1_R$.
\end{rem}
\subsection{Filtered Cartier duality}
We adopt the approach to formal groups of \cite{ellipticII} described above, in which they are modeled on smooth coalgebras $C$ with
$$
C = \bigoplus_{i \geq 0} \Gamma^{i}(M)
$$
where $M$ is a (discrete) projective module of finite type. Here, $\Gamma^n$ denotes the $n$th divided power functor, which for a dualizable module $M$ can alternatively be defined as
$$
\Gamma^n(M):= \operatorname{Sym}^n(M^\vee)^\vee,
$$
that is to say, as the dual of the symmetric power functor.
\begin{const}
By the results of \cite{brantner2019deformation}, \cite{raksit2020hochschild}, these functors can be extended to the $\infty$-categories $\Mod_R$, $\operatorname{Gr}_R$ and $\operatorname{Fil}_R$ of $R$-modules, graded $R$-modules and filtered $R$-modules, respectively; these extensions are referred to as the \emph{derived} symmetric and divided powers.
In particular, the $n$th (derived) divided power functors
$$
\Gamma_{gr}^n: \operatorname{Gr}_R \to \operatorname{Gr}_R, \, \, \, \, \, \, \Gamma_{fil}^n: \operatorname{Fil}_R \to \operatorname{Fil}_R
$$
make sense in the graded and filtered contexts as well.
\end{const}
\begin{defn} \label{smoothfilteredcoalg}
Let $M$ be a filtered $R$-module whose underlying object is a discrete projective $R$-module of finite type
such that $\gr(M)$ is concentrated in non-positive weights. A smooth filtered coalgebra is a coalgebra of the form
$$
C = \bigoplus_{n \geq 0} \Gamma_{fil}^n(M)
$$
Note that this acquires a canonical coalgebra structure, as in \cite[Construction 1.1.11]{ellipticII}. Indeed if we apply $\Gamma^*$ to $M \oplus M$, we obtain compatible maps
$$
\Gamma^{n' + n''}(M \oplus M) \to \Gamma^{n'}(M) \otimes \Gamma^{n''}(M)
$$
where this is to be interpreted in terms of the Day convolution product. As in the unfiltered case in \cite[Construction 1.1.11]{ellipticII}, these assemble to give equivalences
$$
\Gamma^*(M \oplus M) \simeq \Gamma^*(M) \otimes \Gamma^*(M)
$$
Via the diagonal map $M \to M \oplus M$ (recall that $\operatorname{Fil}_R$ is stable, so the direct sum is also a product), this gives rise to a map
$$
\Delta: \Gamma^*(M) \to \Gamma^{*}(M \oplus M) \simeq \Gamma^*(M) \otimes \Gamma^*(M)
$$
which one can verify exhibits $\Gamma^*(M)$ as a coalgebra in the category of filtered $R$-modules.
\end{defn}
\begin{prop}
Let $M$ be a dualizable filtered $R$-module. Then the formation of divided powers is compatible with the associated graded and underlying object functors.
\end{prop}
\begin{proof}
Let $\operatorname{Und}: \operatorname{Fil}_R \to \Mod_R$ and $\gr: \operatorname{Fil}_R \to \operatorname{Gr}_R$ denote the underlying object and associated graded functors respectively. Each of these functors commutes with colimits and is symmetric monoidal. Thus, we are reduced to showing that each of these functors commutes with the divided power functor
$$
\Gamma_{fil}^n(M) = \Sym^n(M^\vee)^\vee.
$$
The statement now follows from the fact that $\operatorname{Und}$ and $\gr$, being symmetric monoidal, preserve dualizable objects and their duals, and that they commute with $\operatorname{Sym}^n$, which follows from the discussion in \cite[4.2.25]{raksit2020hochschild}.
\end{proof}
\begin{defn}
The category of smooth filtered coalgebras $\operatorname{cCAlg}(\operatorname{Fil}_R)^{sm}$ is the full subcategory of filtered coalgebras spanned by objects of this form. Namely,
$C \in \operatorname{cCAlg}(\operatorname{Fil}_R)^{sm}$ if there exists a filtered module $M$ which is dualizable, discrete and zero in positive filtration degrees for which
$$
C \simeq \bigoplus_{n \geq 0} \Gamma^n_{fil}(M).
$$
\end{defn}
\begin{rem}
The filtered module $M$ in the above definition is of the form
$$
\cdots \supseteq M_{-2} \supseteq M_{-1} \supseteq M_0 \supseteq 0 \supseteq \cdots
$$
and is eventually constant, in the sense that $M_n = \operatorname{Und}(M)$ for $n \ll 0$.
\end{rem}
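To orient the reader, we spell out the most basic example of this definition (a direct check from the formulas above):
\begin{ex}
Let $M$ be the filtered $R$-module with $M_n = R$ for $n \leq -1$ and $M_n = 0$ for $n \geq 0$, so that $\gr(M)$ is a copy of $R$ placed in weight $-1$. Then $C = \bigoplus_{n \geq 0} \Gamma^n_{fil}(M)$ is a smooth filtered coalgebra, with $\Gamma^n_{fil}(M)$ concentrated in weight $-n$, and its dual
$$
C^\vee \simeq \prod_{n \geq 0} \operatorname{Sym}^n(M^\vee)
$$
recovers the power series algebra $R[[x]]$ equipped with its $x$-adic filtration.
\end{ex}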
We now give the first definition of a filtered formal group:
\begin{defn} \label{filteredformalgroupdefinition}
A filtered formal group is an abelian group object in the category of smooth filtered coalgebras. That is to say, it is a product-preserving functor
$$
F: \operatorname{Lat}^{op} \to \operatorname{cCAlg}(\operatorname{Fil}_R)^{sm}.
$$
\end{defn}
\begin{const}
Let $M \in \operatorname{Fil}_R$ be a filtered $R$-module. We denote its (weak) dual $\underline{Map}_{Fil}(M, R)$ by $M^\vee$. Note that if $M$ has a commutative coalgebra structure, then $M^\vee$ acquires the structure of a commutative algebra.
\end{const}
\begin{ex}
Let $C = \bigoplus_n \Gamma_{fil}^n(M)$. Then one has equivalences
$$
C^\vee \simeq \big(\bigoplus_n \Gamma_{fil}^n(M)\big)^\vee \simeq \prod_n \operatorname{Sym}^n(M^\vee).
$$
This is a complete filtered algebra.
\end{ex}
\begin{prop} \label{compatibilityofdual}
Let $C$ be a filtered smooth coalgebra, and let $C^\vee$ denote its (filtered) dual. Then at the level of the underlying object there is an equivalence
$$
\operatorname{Und}( C^\vee) \simeq \prod_{n \geq 0} \operatorname{Sym}^n(N)
$$
for some projective module $N$ of finite type.
\end{prop}
\begin{proof}
We unpack what the weak dual functor does on the $n$th filtering degree of a filtered $R$-module. If $M \in \operatorname{Fil}_R$, then this may be described as
$$
M^\vee_{n} = \underline{Map}_{Fil}(M, R)_n \simeq \on{fib}(M_\infty^\vee \to M^\vee_{1-n})
$$
where $M^\vee_\infty$ is the dual of the underlying $R$-module. Now let $M= C$ be a smooth coalgebra, so that
$$
C= \bigoplus \Gamma^n(N)
$$
for $N$ as in Definition \ref{smoothfilteredcoalg}. Then $\Gamma^n(N)$ for each $n$ will be concentrated in negative filtering degrees so that $C_{1-n}^\vee \simeq 0$ for all $n$ where $C_n$ is nontrivial. Hence we have the following description for the underlying object of $C^\vee$:
$$
\operatorname{Und}(C^\vee) \simeq \colim_n \on{fib}(C^\vee_\infty \to C^\vee_{1-n}) \simeq \on{fib}\big( \colim_{n} (C_\infty^\vee \to C_{1-n}^\vee)\big) \simeq \colim_n C^\vee_\infty.
$$
In particular, since $C^\vee_{1-n}$ eventually vanishes, we obtain the colimit of the constant diagram associated to $C^\vee_\infty$. Hence
$$
\operatorname{Und}(C^\vee) \simeq \operatorname{Und}(C)^\vee \simeq \prod_{m \geq 0} \operatorname{Sym}_R^m(N).
$$
This shows in particular that weak duality of these smooth filtered coalgebras commutes with the underlying object functor.
\end{proof}
\begin{rem}
The above proposition justifies Definition \ref{smoothfilteredcoalg} of smooth filtered coalgebras which we propose. In general it is not clear that weak duality commutes with the underlying object functor (although this of course holds true for dualizable objects).
\end{rem}
\begin{prop} \label{keyproposition}
The assignment
$$
C \mapsto C^\vee = \underline{Map}_{Fil}(C, R)
$$
defines a fully faithful contravariant functor from $\operatorname{cCAlg}^{sm}(\operatorname{Fil}_R)$ to $\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})$.
\end{prop}
\begin{proof}
Let $D$ and $C$ be two arbitrary smooth filtered coalgebras. We would like to exhibit an equivalence of mapping spaces
\begin{equation} \label{maptobeshownasequivalence}
\operatorname{Map}_{\operatorname{cCAlg}^{sm}(\operatorname{Fil}_R)}(D, C) \simeq \Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(C^\vee, D^\vee).
\end{equation}
Each of $C$ and $D$ may be written as a colimit, internally to filtered objects,
$$
C \simeq \colim C_k , \, \, \, \, D \simeq \colim D_m
$$
where
$$
C_k = \bigoplus_{0 \leq i \leq k} \Gamma^i(M) ; \, \, \, \, \, \, D_m = \bigoplus_{0 \leq i \leq m} \Gamma^i(N).
$$
Hence the map (\ref{maptobeshownasequivalence}) may be rewritten as a limit of maps of the form
\begin{equation} \label{moreofthesame}
\operatorname{Map}_{\operatorname{cCAlg}^{sm}(\operatorname{Fil}_R)}(D_m, C) \to \Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(C^\vee, D_m^\vee)
\end{equation}
The left side of this may now be rewritten as
$$
\operatorname{Map}_{\operatorname{cCAlg}^{sm}(\operatorname{Fil}_R)}(D_m, \colim_k C_k)
$$
Now, the object $D_m$ will be compact by inspection (in fact, its underlying object is just a compact projective $R$-module) so that the above mapping space is equivalent to
$$
\colim_k \operatorname{Map}_{\operatorname{cCAlg}^{sm}(\operatorname{Fil}_R)}(D_m, C_k)
$$
We would now like to make a similar identification on the right hand side of the map (\ref{moreofthesame}).
For this, note that as a complete filtered algebra, $C^\vee \simeq \lim_k C_k^{\vee}$.
There is a canonical map
$$
\colim_k \Map(C_k^\vee, D_m^\vee) \to \Map(\lim_k C_k^\vee, D_m^\vee).
$$
By Lemma \ref{provethisshit} this is an equivalence.
Each term $C_k^{\vee}$, as a filtered object, vanishes in sufficiently high positive filtration degrees. As limits of filtered objects are computed object-wise, one sees that the image of the above map consists of morphisms
$$
\lim_k C_k^\vee \to C_j^\vee \to D_m^\vee
$$
which factor through some $C_j^\vee$. Since $D_m^\vee$ is itself of this form, every map factors through some $C_j^\vee$. Hence we obtain the desired decomposition of the right hand side of (\ref{moreofthesame}). It follows that the morphism of mapping spaces (\ref{maptobeshownasequivalence}) decomposes into maps
$$
\Map_{}(D_m, C_k) \to \Map(C_k^\vee, D_m^\vee).
$$
These are equivalences because $D_m$ and $C_k$ are dualizable for every $m,k$, and the duality functor $(-)^\vee$ gives rise to an anti-equivalence between commutative algebra and commutative coalgebra objects whose underlying objects are dualizable. Assembling this all together, we conclude that (\ref{maptobeshownasequivalence}) is an equivalence.
\end{proof}
\begin{lem} \label{provethisshit}
The canonical map of spaces
$$
\colim_k \Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(C_k^\vee, D_m^\vee ) \to \Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(\lim_k C_k^\vee, D_m^\vee)
$$
induced by the projection maps $\pi_k: \lim_k C_k^\vee \to C_k^\vee$ is an equivalence.
\end{lem}
\begin{proof}
Fix an index $k$. We claim that the following is a pullback square of spaces:
\begin{equation} \label{pullbacksquare}
\xymatrix{
& \Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(C_k^\vee, D_m^\vee ) \ar[d]^{\pi_k^*} \ar[r]^-{\operatorname{Und}} & \Map_{\operatorname{CAlg}}(C_k^\vee, D_m^\vee ) \ar[d]^{\pi_k^*}\\
& \Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(\lim_k C_k^\vee, D_m^\vee ) \ar[r]^-{\operatorname{Und}} & \Map_{\operatorname{CAlg}}(\lim_k C_k^\vee, D_m^\vee )
}
\end{equation}
Note first that even though $\operatorname{Und}(-)$ does not generally preserve limits, it preserves these particular limits by Proposition \ref{compatibilityofdual}. To prove the claim, we see that the pullback
$$
\Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(\lim_k C_k^\vee, D_m^\vee ) \times_{\Map_{\operatorname{CAlg}}(\lim_k C_k^\vee, D_m^\vee )}\Map_{\operatorname{CAlg}}(C_k^\vee, D_m^\vee )
$$
parametrizes, up to higher coherent homotopy, ordered pairs $(f, g_k)$ with
$$
f: \lim_k C_k^\vee \to D_m^\vee
$$
a map of complete filtered algebras and
$$
g_k: C_k^\vee \to D_m^\vee
$$
a map at the level of underlying algebras, such that there is a factorization of the underlying map
$$
\operatorname{Und}(f) \simeq \pi_k^*(g_k) = g_k \circ \pi_k
$$
along the map $\pi_k: \lim_k C_k^\vee \to C_k^\vee$. Recall that $\pi_k$ is also the underlying map of a morphism of filtered objects; since the composition $\operatorname{Und}(f) = g_k \circ \pi_k$ respects the filtration, $g_k$ itself must respect the filtration as well. This in particular gives rise to an inverse
$$
\Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(\lim_k C_k^\vee, D_m^\vee ) \times_{\Map_{\operatorname{CAlg}}(\lim_k C_k^\vee, D_m^\vee )}\Map_{\operatorname{CAlg}}(C_k^\vee, D_m^\vee ) \to \Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(C_k^\vee, D_m^\vee )
$$
of the canonical map
$$
\Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(C_k^\vee, D_m^\vee ) \to \Map_{\operatorname{CAlg}(\widehat{\operatorname{Fil}_R})}(\lim_k C_k^\vee, D_m^\vee ) \times_{\Map_{\operatorname{CAlg}}(\lim_k C_k^\vee, D_m^\vee )}\Map_{\operatorname{CAlg}}(C_k^\vee, D_m^\vee )
$$
induced by the universal property of the pullback, which proves the claim.
Now let $P_k$ denote the fiber of the left hand vertical map of (\ref{pullbacksquare}) over a fixed point of the bottom left mapping space. One sees that the fiber of the map in the statement of the lemma is $\colim_k P_k$; we would like to show that this is contractible. By the claim, this is equivalent to $\colim_k P^{und}_k$, where $P^{und}_k$ for each $k$ is the fiber of the right hand vertical map of
(\ref{pullbacksquare}). By \cite{lurie2016spectral}, this is contractible. We will be done upon showing that the image of the map in the statement is all of $\Map_{\operatorname{CAlg}}(\lim_k C_k^\vee, D_m^\vee)$. To this end, we see that the image consists of maps
$$
\lim_k C_k^\vee \to C_j^\vee \to D_m^\vee
$$
which factor through some $C_j^\vee$. However, since the augmentation ideal of the underlying algebra of $D_m^\vee$ is nilpotent, every map factors through such a $C_j^\vee$.
\end{proof}
\begin{rem}
We remark that this is ultimately an example of the standard duality between ind- and pro-objects of an $\infty$-category $\mathcal{C}$. Indeed, one has a duality between algebras and coalgebras in $\operatorname{Fil}_R$ whose underlying objects are dualizable. The equivalence of Proposition \ref{keyproposition} is an equivalence between certain full subcategories of $\operatorname{Ind}(\operatorname{cCAlg}^{ \omega, fil})$ and $\operatorname{Pro}(\operatorname{CAlg}^{\omega, fil})$.
\end{rem}
\begin{defn} \label{cogroupdefinition}
Let $\mathcal{D}$ denote the essential image of the duality functor of Proposition \ref{keyproposition}. We define the category of (commutative) cogroup objects $\operatorname{coAb}(\mathcal{D})$ to be the category of abelian group objects of the opposite category (i.e., of the category of smooth filtered coalgebras). As $(-)^\vee$ is an anti-equivalence of $\infty$-categories, it sends Cartesian products in $\operatorname{cCAlg}(\operatorname{Fil}_k)^{sm}$ to coproducts in $\mathcal{D}$; hence, this functor sends group objects to cogroup objects. We refer to an object $C \in \operatorname{coAb}(\mathcal{D})$ as a \emph{filtered formal group}.
\end{defn}
\begin{rem}
If $C^\vee$ is discrete (which is the setting we are primarily concerned with for the moment) then a commutative cogroup structure on $C$ is none other than a (co)commutative comonoid structure on $C^\vee$, making it into a bialgebra in complete filtered $R$-modules.
\end{rem}
\begin{const}[Cartier duality] \label{filteredcartierduality}
Let
$$
(-)^\vee: \operatorname{cCAlg}(\operatorname{Fil}_R)^{sm} \to \mathcal{D}
$$
be the equivalence of Proposition \ref{keyproposition}. This may now be promoted to an equivalence
$$
(-)^{\vee}: \operatorname{Ab}(\operatorname{cCAlg}(\operatorname{Fil}_R)^{sm}) \to \operatorname{coAb}(\mathcal{D}).
$$
We refer to the correspondence which is implemented by this equivalence as \emph{filtered Cartier duality}.
\end{const}
\begin{rem}
We explain our usage of the term \emph{filtered Cartier duality}. As we saw in Section \ref{cartierdualityfield}, classical Cartier duality gives rise to an (anti-)equivalence between formal groups and affine group schemes, at least in the most well-behaved situation over a field. An abelian group object in smooth filtered coalgebras is none other than a filtered Hopf algebra. This is because we ultimately still restrict to a $1$-categorical setting where Remark \ref{abelianobjects1cat} applies, so abelian group objects agree with grouplike commutative monoids. Out of this, therefore, one may extract a relative affine group scheme over $\filstack$. Hence, Construction \ref{filteredcartierduality} may be viewed as a correspondence between filtered formal groups and a full subcategory of relatively affine group schemes over $\filstack$.
\end{rem}
Next we prove a unicity result for complete filtered algebra structures with a given underlying commutative ring $A$ and specified associated graded (cf. Theorem \ref{introadicunicity}).
\begin{prop} \label{unicityalgebras}
Let $A$ be a commutative ring which is complete with respect to the $I$-adic topology induced by some ideal $I \subset A$. Let $A_n \in \operatorname{CAlg}(\widehat{\operatorname{Fil}}_R)$ be a (discrete) complete filtered algebra with underlying object $A$. Suppose there is an inclusion
$$
A_1 \to I
$$
of $A$-modules
inducing an equivalence
$$
\gr(A_n) \simeq \gr(F_I^*(A)) = \operatorname{Sym}_{gr}(I/I^2)
$$
of graded objects, where $I/I^2$ is of pure weight $1$. Then $A_n = F_{I}^*A$; namely, the filtration in question is the $I$-adic filtration.
\end{prop}
\begin{proof}
Let $A_n$ be a complete filtered algebra with these properties. The map
$$
A_1 \to I
$$
in the hypothesis extends by multiplicativity to a map
$$
A_n \to F_I^*(A).
$$
In degree $2$, for example, since $A_2 \to A_1$ is the fiber of the map $A_1 \to I/I^2$, there is an induced $A$-module map
$$
A_2 \to I^2
$$
fitting into the left hand column of the following diagram:
$$
\xymatrix{
& A_2 \ar[d]_{} \ar[r]^{} & A_1 \ar[d]^{} \ar[r] & I/I^2 \ar[d] \\
& I^2 \ar[r]^{}& I \ar[r] & I/I^2
}
$$
By assumption, one obtains an isomorphism of graded objects
$$
\operatorname{gr}(A_n) \cong \gr (F_I^*(A))
$$
after passing to the associated graded of this map. Since both filtered objects are complete, and since the associated graded functor when restricted to complete objects is conservative, we deduce that the map
$$
A_n \to F_I^*(A)
$$
is an equivalence of filtered algebras. In particular, this implies that the inclusion $A_1 \to I$ is surjective at the level of discrete modules, so that $A_1 = I$. We claim that this is enough to deduce that $A_n$ is the $I$-adic filtration, not merely up to equivalence but up to equality. For this, we need to show that there is an equality $A_n = I^n$ for every positive integer $n$, and that the structure maps $A_{n+1} \to A_n$ of the filtration are simply the inclusions. Indeed, in each degree, we now have equivalences
$$
A_n \simeq I^n
$$
of $A$-modules, which moreover admit monomorphisms into $A$. The category of such objects (submodules of $A$) is a poset, so any two isomorphic objects are equal; hence we conclude that $A_n = I^n$ for all $n$.
\end{proof}
\begin{rem}
In particular, we may choose $A_n \in \mathcal{D}$, the image of the duality functor from smooth filtered coalgebras. In this case, $I = \widehat{\Sym}^{\geq 1}(M)$, the augmentation ideal of $\widehat{\Sym}(M)$, for $M$ some projective module of finite type.
\end{rem}
Now let $\GG$ be a formal group over $\Spec k$, and let $\OO(\GG)$ be its complete adic algebra of functions. This acquires a comultiplication
$$
\OO(\formalgroup) \to \OO(\formalgroup) \widehat{\otimes} \OO(\formalgroup)
$$
and counit
$$
\epsilon: \OO(\formalgroup) \to R
$$
making $\OO(\formalgroup)$ into an abelian cogroup object in $\mathcal{D}$. By Proposition \ref{unicityalgebras}, at the level of underlying $k$-algebras, there is a uniquely determined complete filtered algebra $F_{ad}^*\OO(\formalgroup)$ such that
$$
\colim_{n \to - \infty} F^n_{ad}\OO(\formalgroup) \simeq \OO(\formalgroup).
$$
We show that this inherits the cogroup structure as well:
\begin{cor} \label{uniqueformalgroupstructure}
The comultiplication
$$
\Delta : \OO(\formalgroup) \to \OO(\formalgroup) \widehat{\otimes} \OO(\formalgroup)
$$
can be promoted to a map of complete filtered algebras. Thus, there is a unique filtered formal group, i.e., an abelian cogroup object in the category $\mathcal{D}$, whose associated graded is free on a filtered module concentrated in weight one, whose underlying object is $\OO(\formalgroup)$, and which refines the comultiplication on $\OO(\formalgroup)$.
\end{cor}
\begin{proof}
We need to show that the comultiplication
$$
\Delta: \OO(\formalgroup) \to \OO(\formalgroup) \widehat{\otimes} \OO(\formalgroup)
$$
preserves the adic filtration. Let us assume first that the formal group is $1$-dimensional and oriented, so that $\OO(\GG) \simeq R[[x]]$; we remark that every formal group is locally oriented. In this case, the comultiplication is given in coordinates by the formal group law
$$
f(x_1, x_2) = x_1 + x_2 + \sum_{i, j \geq 1} a_{i,j} x_1^i x_2^j
$$
with suitable $a_{i,j}$.
In particular, the image under $\Delta$ of the augmentation ideal $I = (x)$ is contained in the ideal $J = (x_1, x_2)$ defining the filtration on $\OO(\formalgroup) \widehat{\otimes} \OO(\formalgroup) \cong R[[x_1, x_2]]$; note that this latter filtration is itself the $(x_1, x_2)$-adic filtration on $R[[x_1, x_2]]$. By multiplicativity, $\Delta(I^n) \subset J^n$ for all $n$. This shows that $\Delta$ preserves the filtration, giving $F^*_{I}\OO(\formalgroup)$ a unique coalgebra structure compatible with the formal group structure on $\formalgroup$. The same argument works in higher dimensions.
\end{proof}
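\begin{rem}
To illustrate the proof in the simplest case, take $\formalgroup = \widehat{\GG_m}$ with its standard coordinate, so that $f(x_1, x_2) = x_1 + x_2 + x_1 x_2$. Then
$$
\Delta(x) = x_1 + x_2 + x_1 x_2 \in (x_1, x_2),
$$
and, by multiplicativity, $\Delta(x^n) = (x_1 + x_2 + x_1 x_2)^n \in (x_1, x_2)^n$ for every $n$, so $\Delta$ visibly preserves the adic filtration.
\end{rem}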
\section{The deformation of a formal group} \label{deformationsection}
\subsection{Deformation to the normal cone}
To a pointed formal moduli problem (such as a formal group) one may associate an equivariant family over $\AAA^1$ whose fiber over any $\lambda \neq 0$ recovers the original object. We will use this construction in the sequel to produce filtrations on the associated Hochschild homology theories. The author would like to thank Bertrand To\"{e}n for the idea behind this construction; related constructions appear in \cite{toen2020classes}. A variant of this construction in the characteristic zero setting also appears in \cite[Chapter IV.5]{gaitsgory2017study}. We would also like to point out \cite{khan2018virtual}.
The construction pertains to more than just formal groups. Indeed, let $\EuScript{X} \to \EuScript{Y}$ be a closed immersion of locally Noetherian schemes. We construct a filtration on $\widehat{\EuScript{Y}_{\EuScript{X}}}$, the formal completion of $\EuScript{Y}$ along $\EuScript{X}$, with associated graded the shifted tangent complex $T_{\EuScript{X}|\EuScript{Y}}[1]$.
\begin{prop} \label{identifyingthezerocircle}
There exists a filtered stack $S_{fil}^0 \to \AAA^1 / \GG_m$, whose underlying object is the constant stack $S^0 = \Spec k \sqcup \Spec k $ and whose associated graded is $\Spec(k[\epsilon]/ (\epsilon^2))$.
\end{prop}
\begin{proof}
Morally, one should think of this as a family of two points degenerating into each other over the special fiber. For a more rigorous construction, one may begin with the Cech nerve of the unit map of commutative algebra objects in the $\infty$-category $\on{QCoh}(\AAA^1 /\GG_m)$:
\begin{equation}{\label{mapforcechnerve}}
\OO_{\AAA^1 / \GG_m} \to 0_{*}(\OO_{B \GG_m}),
\end{equation}
where $0: B\GG_m \to \AAA^1 /\GG_m$ is the closed point. This gives rise to a groupoid object (cf. \cite[Section 6.1.2]{lurie2009higher}) in the $\infty$-category $\operatorname{CAlg}(\on{QCoh}(\AAA^1 / \GG_m))$.
We now give a more explicit description of this groupoid object. The structure sheaf $\OO_{\AAA^1 / \GG_m}$ may be identified with the graded polynomial algebra $k[t]$, where $t$ is of weight $1$. In degree $1$ one obtains the following fiber product
\begin{equation}\label{fiberprod}
\OO_{\AAA^1 / \GG_m} \times_{0_{*}(\OO_{B \GG_m})} \OO_{\AAA^1 / \GG_m}
\end{equation}
which may be thought of as the graded algebra
$$
k[t_1, t_2]/\big((t_1+t_2)(t_1-t_2)\big),
$$
viewed as an algebra over $k[t]$.
If we apply the $\Spec$ functor relative to $\AAA^1 / \GG_m$, we obtain the scheme corresponding to the union of the diagonal and antidiagonal in the plane. The pullback of this fiber product to $\Mod_{k}$ is
$$
k \times_{1^{*}0_{*}(\OO_{B \GG_m})} k \simeq k \times_{0} k = k \oplus k
$$
The pullback to $\on{QCoh}(B \GG_m)$ is $k[\epsilon]/ \epsilon^2$, the trivial square-zero extension of $k$ by $k$. To see this we pull back the fiber product (\ref{fiberprod}) to $\on{QCoh}(B \GG_m)$, which gives the following homotopy cartesian square
$$
\xymatrix{
& k[\epsilon]/(\epsilon^2) \ar[d]_{} \ar[r]^{} & k \ar[d]^{}\\
& k \ar[r]^{}& k \oplus k[1]
}
$$
in this category.
Hence, we may define
$$
S^0_{fil}:= \Spec_{\filstack} (\OO_{\AAA^1 / \GG_m} \times_{0_{*}(\OO_{B \GG_m})} \OO_{\AAA^1 / \GG_m})
$$
as the relative spectrum (over $\filstack$).
\end{proof}
By construction, this admits a map
$$
S^0_{fil} \to \AAA^1/\GG_m
$$
making it into a filtered stack, with generic fiber and special fiber described in the above proposition. We remark that we may think of $S^{0}_{fil}$ as the degree $1$ part of a \emph{cogroupoid object} $S^{0, \bullet}_{fil}$ in the $\infty$-category of (derived) schemes over $\filstack$; indeed, we may apply $\Spec(-)$ to the entire Cech nerve of the map (\ref{mapforcechnerve}). We can then take mapping spaces out of this cogroupoid to obtain a groupoid object.
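\begin{rem}
Assuming that $2$ is invertible in $k$, one may present $S^0_{fil}$ very concretely in coordinates: the fiber product (\ref{fiberprod}) may be identified with the graded $k[t]$-algebra $k[t][x]/(x^2 - t^2)$, with $x$ and $t$ both of weight $1$. Its fiber over an invertible point $t = \lambda$ is
$$
\Spec\big(k[x]/(x^2 - \lambda^2)\big) \cong \Spec k \sqcup \Spec k,
$$
while its fiber over $t = 0$ is $\Spec\big(k[x]/(x^2)\big)$, recovering the description of Proposition \ref{identifyingthezerocircle}.
\end{rem}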
Now let $\EuScript{X} \to \EuScript{Y}$ be as above. We will focus our attention on the following derived mapping stack, defined in the category $dStk_{\EuScript{Y} \times \filstack}$ of derived stacks over $\EuScript{Y} \times \AAA^1/ \GG_m$:
$$
\Map_{\EuScript{Y} \times \AAA^1 / \GG_m }(S_{fil}^0, \EuScript{X} \times \AAA^1 / \GG_m)
$$
By composing with the projection map $\EuScript{Y}\times \AAA^1 / \GG_m \to \AAA^1 / \GG_m$, we obtain a map,
$$
\Map_{\EuScript{Y} \times \AAA^1 / \GG_m }(S_{fil}^0, \EuScript{X}) \to \AAA^1 / \GG_m
$$
allowing us to view this as a filtered stack. The next proposition identifies its fiber over $1 \in \AAA^1 / \GG_m$:
\begin{prop}
There is an equivalence
$$
1^*( \Map(S_{fil}^0, \EuScript{X})) \simeq \EuScript{X}\times_{\EuScript{Y}} \EuScript{X},
$$
\end{prop}
\begin{proof}
By formal properties of base change of mapping objects of $\infty$-topoi, there is an equivalence
$$
1^*( \Map(S_{fil}^0, \EuScript{X})) \simeq \Map_{\EuScript{Y}}(1^{*}S_{fil}^0, 1^*(\EuScript{X} \times \AAA^1 / \GG_m ))
$$
The right hand side is the mapping object out of a disjoint union of final objects, and is therefore directly seen to be equivalent to $\EuScript{X}\times_{\EuScript{Y}} \EuScript{X}$.
\end{proof}
Next we identify the fiber over the ``closed point'' $0: B \GG_m \to \AAA^1/ \GG_m$.
\begin{prop}
There is an equivalence of stacks
$$
0^*( \Map(S_{fil}^0, \EuScript{X})) \simeq T_{\EuScript{X}|\EuScript{Y}},
$$
where $T_{\EuScript{X}|\EuScript{Y}}$ denotes the relative tangent bundle of $\EuScript{X} \to \EuScript{Y}$.
\end{prop}
\begin{proof}
We base change along the map
$$
\Spec k \to B \GG_m \to \AAA^1 / \GG_m.
$$
Invoking again the standard properties of base change of mapping objects we obtain the equivalence
$$
0^*( \Map(S_{fil}^0, \EuScript{X})) \simeq \Map_{\EuScript{Y}}(0^{*}S_{fil}^0, 0^*(\EuScript{X} \times \AAA^1 / \GG_m )).
$$
By construction, we may identify $0^{*}S_{fil}^0$ with $\Spec(k[\epsilon]/ \epsilon^2)$. Of course, this means that the right hand side of the above display is precisely the relative tangent complex $T_{\EuScript{X}| \EuScript{Y}}$.
\end{proof}
To summarize, we have constructed a cogroupoid object in the category of schemes over $\AAA^1 / \GG_m$, whose piece in cosimplicial degree $1$ is $S^0_{fil}$, and formed the derived mapping stack
$$
\Map_{\EuScript{Y} \times \AAA^1 / \GG_m }(S_{fil}^0, \EuScript{X} \times \AAA^1 / \GG_m),
$$
which will in turn be the degree one piece of a groupoid object in derived schemes over $\filstack$.
\begin{const}\label{delooping}
Let $\EuScript{M}_{\bullet}:= \Map_{\EuScript{Y} \times \AAA^1 / \GG_m }(S_{fil}^{0,\bullet}, \EuScript{X} \times \AAA^1 / \GG_m)$. Note that we can interpret the degeneracy map
$$
\EuScript{X} \times \AAA^1 / \GG_m \to \Map_{\EuScript{Y} \times \AAA^1 / \GG_m }(S_{fil}^0, \EuScript{X} \times \AAA^1 / \GG_m)
$$
as the ``inclusion of the constant maps". We reiterate that this is a groupoid object in the $\infty$-category of derived schemes over $\AAA^1 / \GG_m$. We let
$$
Def_{\filstack}(\EuScript{X}/ \EuScript{Y}) := \colim_{\Delta^{op}}\EuScript{M}_\bullet
$$
denote the colimit of this groupoid object. Note that the colimit is taken in the $\infty$-category of derived schemes over $\filstack$ (as opposed to all of derived stacks).
\end{const}
\noindent By construction, $Def_{\filstack}(\EuScript{X}/ \EuScript{Y})$ is a derived scheme over $\AAA^1 / \GG_m$. The following proposition identifies its ``generic fiber'' with the formal completion $\widehat{\EuScript{Y}_{\EuScript{X}}}$ of $\EuScript{Y}$ along $\EuScript{X}$.
\begin{prop} \label{genericfibercompletion}
There is an equivalence
$$
1^* Def_{\filstack}(\EuScript{X}/ \EuScript{Y}) \simeq \widehat{\EuScript{Y}_{\EuScript{X}}}
$$
\end{prop}
\begin{proof}
As pullback commutes with colimits, this amounts to identifying the delooping in the category of derived schemes over $\EuScript{Y}$. Note again that all objects are schemes and not stacks so that this statement makes sense. By the above identifications, delooping the above groupoid corresponds to taking the colimit of the nerve $N(f)$ of the map $f: \EuScript{X} \to \EuScript{Y}$, a closed immersion. Hence, it amounts to proving that
$$
\colim_{\Delta^{op}} N(f) \simeq \widehat{\EuScript{Y}_{\EuScript{X}}}
$$
This is precisely the content of Theorem \ref{formaldescent}.
\end{proof}
A consequence of the above proposition is that the resulting object is pointed by $\EuScript{X}$ in the sense that there is a well defined map $\EuScript{X} \to \widehat{\EuScript{Y}_{\EuScript{X}}}$, arising from the structure map in the associated colimit diagram. This map is none other than the ``inclusion" of $\EuScript{X}$ into its formal thickening.
Our next order of business is, somewhat predictably at this point, to identify the fiber over $B \GG_m$ of $Def_{\filstack}(\EuScript{X}/ \EuScript{Y})$ with the normal bundle of $\EuScript{X}$ in $\EuScript{Y}$.
\begin{prop} \label{specialfiberformalgroup}
There is an equivalence
$$
0^{*} Def_{\filstack}(\EuScript{X}/ \EuScript{Y}) \simeq \widehat{\mathbb{V}(T_{\EuScript{X}|\EuScript{Y}}[1])} =: \widehat{N_{\EuScript{X}|\EuScript{Y}}}
$$
in the $\infty$-category of derived schemes over $B \GG_m$.
\end{prop}
\begin{proof}
As in the proof of the previous proposition, it amounts to understanding the pullback along $\Spec k \to B \GG_m \to \filstack$ of the groupoid object $\EuScript{M}_\bullet$. This is given by
$$
\EuScript{X} \leftleftarrows T_{\EuScript{X}| \EuScript{Y}} \cdots
$$
where we abuse notation and identify $T_{\EuScript{X}| \EuScript{Y}}$ with $\mathbb{V}(T_{\EuScript{X}| \EuScript{Y}})$.
Note that $T_{\EuScript{X}| \EuScript{Y}} \simeq \Omega_{\EuScript{X}}(T_{\EuScript{X}|\EuScript{Y}}[1])$, and so we may identify the above colimit diagram with the simplicial nerve $N(f)$ of the zero section $\EuScript{X} \to T_{\EuScript{X}|\EuScript{Y}}[1] \simeq N_{\EuScript{X}|\EuScript{Y}}$. The result now follows from another application of Theorem \ref{formaldescent}.
\end{proof}
The following statement summarizes the above discussion:
\begin{thm}
Let $f: \EuScript{X} \to \EuScript{Y}$ be a closed immersion of schemes. Then there exists a filtered stack $Def_{\filstack}(\EuScript{X}/ \EuScript{Y}) \to \AAA^1/ \GG_m$ (making it into a relative scheme over $\AAA^1/\GG_m$) with the property that there exists a map
$$
\EuScript{X} \times \AAA^1 / \GG_m \to Def_{\filstack}(\EuScript{X}/ \EuScript{Y})
$$
whose fiber over $1 \in \AAA^1 / \GG_m$ is
$$
\EuScript{X} \to \widehat{\EuScript{Y}_{\EuScript{X}}}
$$
and whose fiber over $0 \in \AAA^1/\GG_m$ is
$$
\EuScript{X} \to \widehat{N_{\EuScript{X}| \EuScript{Y}}},
$$
the inclusion of $\EuScript{X}$ into the formal completion of its normal bundle along the zero section.
\end{thm}
\subsection{Deformation of a formal group to its normal cone} \label{deformationforever}
Fix a (classical) formal group $\widehat{\GG}$. We now apply the above construction to the unit section of the formal group, $\iota: \Spec k \to \widehat{\GG}$. Note that $\formalgroup$ is already formally complete along $\iota$. We set
$$
Def_{\filstack}(\formalgroup) := Def_{\filstack}( \Spec k / \formalgroup)
$$
This will be a relative scheme over $\filstack$.
\begin{prop} \label{gottausethistoo}
Let $\Spec k \to \formalgroup$ be the unit section of a formal group. Then, the stack $Def_{\filstack}(\formalgroup)$ of Construction \ref{delooping} is a filtered formal group.
\end{prop}
\begin{proof}
We will show that there exists a filtered dualizable (and discrete) $R$-module $M$ for which
$$
\OO(Def_{\filstack}(\formalgroup)) \simeq \Gamma^*_{fil}(M)^\vee \simeq \widehat{\operatorname{Sym}_{fil}^*}(M^\vee).
$$
As was shown above, there is an equivalence of
$$
Def_{\filstack}(\formalgroup)_1 \simeq \formalgroup
$$
where the left hand side denotes the pullback along $1: \Spec k \to \filstack$; hence we conclude that the underlying object of $\OO(Def_{\filstack}( \Spec k / \formalgroup))$ is of the form $k[[t]] \simeq \widehat{\operatorname{Sym}^*}(M)$ for $M$ a free $k$-module of rank one. We now identify the associated graded of the filtered algebra corresponding to $\OO(Def_{\filstack}(\formalgroup))$. For this, we use the equivalence
$$
Def_{\filstack}(\formalgroup)_0 \simeq \widehat{T_{\GG|k}}
$$
of stacks over $B \GG_m$. We note that the right hand side may indeed be viewed as a stack over $B\GG_m$, arising from the weight $-1$ action of $\GG_m$ by homothety on the fibers. This is the $\GG_m$ action which will be compatible with the grading on the dual numbers $k[\epsilon]$ (which appears in Proposition \ref{identifyingthezerocircle}) such that $\epsilon$ is of weight one.
In particular, since $\formalgroup$ is a one-dimensional formal group, it follows that the associated graded is none other than
$$
\operatorname{Sym}_{gr}^*(M(1))
$$
the graded symmetric algebra on the graded $k$-module $M(1)$ which is $M$ concentrated in weight $1$. Putting this all together we see that at the level of filtered objects, there is an equivalence
$$
\OO(Def_{\filstack}(\formalgroup)) \simeq \widehat{\operatorname{Sym}_{fil}}(M^f(1)),
$$
where $M^f(1)$ is the filtered $k$-module
$$
M^f(1)=
\begin{cases}
M^f(1)_n = 0, \, \, \, \, \, \, n>1 \\
M^f(1)_n = M, \, \, \, \, \, \, \, \, n \leq 1
\end{cases}
$$
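To illustrate the definition in the rank one case: for $M = k$, the completed symmetric algebra $\widehat{\operatorname{Sym}_{fil}}(k^f(1))$ is $k[[t]]$ equipped with its $t$-adic filtration,
$$
F^n \simeq (t^n) \subset k[[t]], \qquad \gr^n \cong k \cdot t^n.
$$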
Recall that the deformation to the normal cone is equipped with a ``unit'' map
$$
\filstack \to Def_{\filstack}(\formalgroup).
$$
By passing to functions, we deduce from this map that the degree $1$ piece of the filtration on $\OO(Def_{\filstack}(\formalgroup))$ is a submodule of the augmentation ideal of $\widehat{\operatorname{Sym}}(M)$. Thus, the conditions of Proposition \ref{unicityalgebras} are satisfied, so we conclude that this filtration is none other than the adic filtration on $\widehat{\operatorname{Sym}}(M)$ with respect to the augmentation ideal. Finally, by Corollary \ref{uniqueformalgroupstructure}, this acquires a canonical abelian cogroup structure which is a filtered enhancement of that of $\formalgroup$, making $Def_{\filstack}(\formalgroup)$ into a filtered formal group.
\end{proof}
Now we combine this construction with the $\filstack$-parametrized Cartier duality of Section \ref{filteredformalgroupsection}.
\begin{cor}
Let $\formalgroup$ be a formal group over $\Spec k$, and let $\formalgroup^\vee$ denote its Cartier dual. Then the cohomology $R\Gamma(\formalgroup^{\vee}, \OO)$ acquires a canonical filtration.
\end{cor}
\begin{proof}
By Construction \ref{filteredcartierduality}, the coordinate algebra $\OO(Def_{\filstack}(\formalgroup))$ corresponds via duality to an abelian group object in smooth filtered coalgebras. As we are in the discrete setting, this is equivalent to the structure of a grouplike commutative monoid in this category. In particular, this is a filtered Hopf algebra object, so it determines a group stack $Def_{\filstack}(\formalgroup)^\vee$ over $\filstack$. Its algebra of functions is then a filtered algebra whose underlying object is $R\Gamma(\formalgroup^{\vee}, \OO)$, which yields the desired filtration.
\end{proof}
\section{The deformation to the normal cone of $\widehat{\GG_m}$} \label{deformationofGm}
By the above, given any formal group $\widehat{\GG}$, one may define a filtration on its Cartier dual $\formalgroup^\vee = \Map( \widehat{\GG}, \widehat{\GG_m})$ in the sense of \cite{geometryofilt}. In the case of the formal multiplicative group, this gives a filtration on its Cartier dual $D(\GG_m)=\mathsf{Fix}$. In \cite{moulinos2019universal}, the authors defined a canonical filtration on this affine group scheme (defined over a $\Z_{(p)}$-algebra $k$), given by a certain interpolation between the kernel and the fixed points of the Frobenius on the Witt vector scheme. We would like to compare the filtration on $\Map(\widehat{\GG_m}, \widehat{\GG_m})$ with this construction.
\begin{cor}
The filtration defined on $\mathsf{Fix}$ is Cartier dual to the $(x)$-adic filtration on
$$
\OO(\widehat{\GG_m}) \simeq k[[x]].
$$
Furthermore, this filtration corresponds to the deformation to the normal cone construction $Def_{\filstack}(\widehat{\GG_m})$ applied to $\formalgroup_m$.
\end{cor}
\begin{proof}
Let
$$
\mathcal{G}_t= \Spec k\left[X, \frac{1}{1 + tX}\right].
$$
This is an affine group scheme; one sees by varying the parameter $t$ that it is naturally defined over $\AAA^1$. If $t$ is invertible, then this is isomorphic to $\GG_m$; if $t=0$, this is just the additive group $\GG_a$. If we take the formal completion of this at the unit section, we obtain a formal group $\widehat{\mathcal{G}_{t}}$, with corresponding formal group law
\begin{equation}
F(X,Y)= X + Y + tXY
\end{equation}
which we may think of as a formal group over $\AAA^1$. In \cite{sekiguchi2001note} the authors describe the Cartier dual of the resulting formal group, for every $t \in k$, as the group scheme
$$
\ker (F - t^{p-1} \id: \W_p \to \W_p)
$$
where $F$ denotes the Witt vector Frobenius. These of course assemble, by way of the natural $\GG_m$-action on the Witt vector scheme $\W$, to give the filtered group scheme $\mathbb{H} \to \filstack$ of \cite{moulinos2019universal}, whose classifying stack is the filtered circle. The algebra of functions $\OO(\mathbb{H})$ acquires a comultiplication; by results of \cite{geometryofilt}, we may think of this as a filtered Hopf algebra.
Let us identify this filtered Hopf algebra, which by abuse of notation we refer to as $\OO(\mathbb{H})$, a bit further. After passing to underlying objects, it is the divided power coalgebra $\bigoplus_n \Gamma^n(k)$. The algebra structure on this comes from the multiplication on $\widehat{\GG_m}$, via Cartier duality. On the graded side, we have the coordinate algebra of $\mathsf{Ker}$, which by \cite[Lemma 3.2.6]{drinfeld2020prismatization} is none other than the free divided power algebra
$$
k \langle x \rangle \cong k[x, \tfrac{x^2}{2!}, \dots].
$$
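For concreteness, the multiplication on the free divided power algebra $k \langle x \rangle$ is determined by the rule
$$
\frac{x^i}{i!} \cdot \frac{x^j}{j!} = \binom{i+j}{i} \frac{x^{i+j}}{(i+j)!},
$$
which makes sense over an arbitrary base ring, since the binomial coefficients $\binom{i+j}{i}$ are integers.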
One gives this the grading where each $\frac{x^n}{n!}$ is of pure weight $-n$. The underlying graded smooth coalgebra is
$$
\bigoplus_{n}\Gamma^n_{gr}(k(-1))
$$
We deduce by weight reasons that there is an equivalence of filtered coalgebras
$$
\OO(\mathbb{H}) \simeq \bigoplus_n \Gamma^n(k^{f}(-1))
$$
where $k^{f}(-1)$ is the filtered module which is trivial in filtering degrees $n > 1$ and equal to $k$ otherwise.
The consequence of the analysis of the above paragraph is that the Hopf algebra structure on $\OO(\mathbb{H})$ corresponds to the data of an abelian group object in smooth filtered coalgebras, cf. Section \ref{filteredformalgroupsection}. In particular, it corresponds to an abelian cogroup structure on the dual, $\widehat{\operatorname{Sym}_{fil}^*}(k^f(1))$. This is a complete filtered algebra satisfying the conditions of Proposition \ref{unicityalgebras}, and thus coincides with the adic filtration on $k[[x]]$. The corresponding filtered coalgebra structure is the unique one commensurate with the adic filtration, since by Corollary \ref{uniqueformalgroupstructure} the comultiplication preserves the adic filtration. Thus, there exists a unique filtered formal group which recovers $\widehat{\GG_m}$ and $\widehat{\GG_a}$ upon taking underlying objects and associated graded, respectively. In the setting of the filtered Cartier duality of Section \ref{filteredformalgroupsection}, this must then be dual to the specified abelian group object structure on $\OO(\mathbb{H})$.
Finally, we relate this to the deformation to the normal cone construction applied to $\widehat{\GG_m}$, which also outputs a filtered formal group. Indeed, by the reasoning of Proposition \ref{gottausethistoo}, this filtered formal group is itself given by the adic filtration on $k[[t]]$, together with the filtered coalgebra structure uniquely determined by the group structure on $\widehat{\GG_m}$.
\end{proof}
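\begin{rem}
The interpolation implemented by $\mathcal{G}_t$ can be checked by hand: for invertible $t$, the substitution $u = 1 + tX$ satisfies
$$
(1 + tX_1)(1 + tX_2) = 1 + t(X_1 + X_2 + tX_1X_2) = 1 + tF(X_1, X_2),
$$
so that $X \mapsto 1 + tX$ identifies $\mathcal{G}_t$ with $\GG_m$, while at $t = 0$ the group law degenerates to the additive law $F(X_1, X_2) = X_1 + X_2$.
\end{rem}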
\section{$\formalgroup$-Hochschild homology} \label{additionstostory}
As an application of the above deformation to the normal cone construction associated to a formal group, we develop somewhat further the following proposal of \cite{moulinos2019universal}, described in the introduction.
\begin{const}
Let $k$ be a $\Z_{(p)}$-algebra and let $\formalgroup$ be a formal group over $k$. Its Cartier dual $\formalgroup^\vee$ is an affine commutative group scheme. We let $B \formalgroup^\vee$ denote the classifying stack of the group scheme $\formalgroup^\vee$. Let $X = \Spec A$ be an affine derived scheme, corresponding to a simplicial commutative ring $A$. One forms the derived mapping stack
$$
\Map_{dStk_k}(B \formalgroup^\vee, X).
$$
\end{const}
If $\formalgroup= \widehat{\GG_m}$, then by the affinization techniques of \cite{toen2006champs, moulinos2019universal}, one recovers the Hochschild homology of $A$ as the global sections of this construction:
$$
R\Gamma(\Map_{dStk}(B \widehat{\GG_m}^\vee, X), \OO ) \simeq \operatorname{HH}(A).
$$
Following this example, one can make the following definition (cf. \cite[Section 6.3]{moulinos2019universal}).
\begin{defn}
Let $\formalgroup$ be a formal group over $k$. Let
$$
\operatorname{HH}^{\formalgroup}: \operatorname{sCAlg}_k \to \Mod_k
$$
be the functor defined by
$$
\operatorname{HH}^{\formalgroup}(A) := R\Gamma(\Map_{dStk}(B \formalgroup^\vee, \Spec A), \OO).
$$
\end{defn}
As was shown in Section \ref{deformationforever}, given a formal group $\formalgroup$ over a commutative ring $R$, one can apply the deformation to the normal cone construction to obtain a filtered formal group $Def_{\filstack}(\formalgroup)$ over $\AAA^1 / \GG_m$. By applying $\filstack$-parametrized Cartier duality, one obtains a group scheme over $\filstack$.
\begin{thm} \label{filtrationonourguy}
Let $\formalgroup$ be an arbitrary formal group. The functor
$$
\operatorname{HH}^{\formalgroup}(-): \operatorname{sCAlg}_R \to \Mod_R
$$
admits a refinement to the $\infty$-category of filtered $R$-modules
$$
\widetilde{\operatorname{HH}^{\formalgroup}(-)}: \operatorname{sCAlg}_R \to \Mod_R^{filt},
$$
such that
$$
\operatorname{HH}^{\formalgroup}(-) \simeq \colim_{(\Z, \leq)}\widetilde{\operatorname{HH}^{\formalgroup}(-)}.
$$
\end{thm}
\begin{proof}
Let $Def_{\filstack}( \formalgroup)^\vee$ be the Cartier dual of the deformation to the normal cone $Def_{\filstack}(\formalgroup)$. Form the mapping stack
$$
\Map_{dStk_{/\filstack}}(B Def_{\filstack}( \formalgroup)^\vee, X \times \filstack).
$$
This base-changes along the map
$$
1: \Spec k \to \filstack
$$
to the mapping stack
$$
\Map_{dStk_k}(B \formalgroup^\vee, X),
$$
which gives the desired geometric refinement. The stack $\Map_{dStk_{/\filstack}}(B Def_{\filstack}( \formalgroup)^\vee, X \times \filstack)$ is a derived scheme relative to the base $\filstack$. Indeed, it is nilcomplete, infinitesimally cohesive, and admits an obstruction theory by the arguments of \cite[Section 2.2.6.3]{toen2008homotopical}. Finally, its truncation is the relative scheme $t_0 X \times \filstack$ over $\filstack$; this follows from the identification
$$t_0 \Map( B \formalgroup^\vee, X) \simeq t_0 \Map(B \formalgroup^\vee, t_0 X)$$
and from the fact that there are no nonconstant (nonderived) maps $BG \to t_0 X$ for $G$ a group scheme.
Hence we conclude by the criteria of \cite[Theorem C.0.9]{toen2008homotopical} that this is a relative affine derived scheme, which we denote by $\mathcal{L}_{fil}^{\formalgroup}(X)$. By Proposition \ref{morebasechangeshiiiii}, we conclude that $\mathcal{L}_{fil}^{\formalgroup}(X) \to \filstack$ is of finite cohomological dimension, and so $\widetilde{\operatorname{HH}^{\formalgroup}(A)}$ defines an exhaustive filtration on $\operatorname{HH}^{\formalgroup}(A)$.
\end{proof}
\begin{rem}
In characteristic zero, all one-dimensional formal groups are equivalent to the additive formal group $\widehat{\GG_a}$, via an identification with their tangent Lie algebras. In particular, the above filtration splits canonically, and one obtains an equivalence of derived schemes
$$
\Map_{dStk}(B \formalgroup^\vee, X) \simeq \mathbb{T}_{X|R}[-1].
$$
\end{rem}
In positive or mixed characteristic this is of course not true. However, one can view all of these theories as deformations, along the map $B \GG_m \to \filstack$, of the de Rham algebra $DR(A)= \operatorname{Sym}(\mathbb{L}_{A|k}[1])$.
\section{Liftings to spectral deformation rings} \label{spectralll}
In this section we lift the above discussion to the setting of spectral algebraic geometry over various ring spectra that parametrize \emph{deformations} of formal groups. These are defined in \cite{ellipticII} in the context of elliptic cohomology. As we will now be switching gears and working in this setting, we will spend some time recalling and slightly clarifying some of the ideas in \cite{ellipticII}. Namely, we introduce a correspondence between formal groups over $E_{\infty}$-rings and spectral affine group schemes, and show it to be compatible with Cartier duality in the classical setting. We stress that the necessary ingredients already appear in \cite{ellipticII}.
\subsection{Formal groups over the sphere}
We recall various aspects of the treatment of formal groups in the setting of spectra and spectral algebraic geometry. The definition is based on the notion of smooth coalgebra studied in Section \ref{ogdiscussion}.
\begin{defn}
Fix an arbitrary $E_{\infty}$-ring $R$, and let $C$ be a coalgebra over $R$. Recall that this means that $C \in \operatorname{CAlg}(\Mod_R^{op})^{op}$. Then $C$ is \emph{smooth} if it is flat as an $R$-module, and if $\pi_0 C$ is smooth as a coalgebra over $\pi_0(R)$, as in Definition \ref{smoothcoalgoriginal}.
\end{defn}
Given an arbitrary coalgebra $C$ over $R$, the linear dual $C^\vee =\Map(C, R)$ acquires a canonical $E_{\infty}$-algebra structure. In general $C$ cannot be recovered from $C^\vee$. However, in the smooth case, the dual $C^\vee$ acquires the additional structure of a topology on $\pi_0$, giving it the structure of an adic $E_{\infty}$-algebra. This allows us to recover $C$, via the following proposition (cf. \cite[Theorem 1.3.15]{ellipticII}):
\begin{prop}
Let $C, D \in \operatorname{cCAlg}^{sm}_R$ be smooth coalgebras. Then $R$-linear duality induces a homotopy equivalence
$$
\Map_{\operatorname{cCAlg}_R}(C, D) \simeq \Map^{\operatorname{cont}}_{\operatorname{CAlg}_R}(D^\vee, C^\vee).
$$
\end{prop}
\begin{rem}
One can go further and characterize intrinsically all adic $E_{\infty}$ algebras that arise as duals of smooth coalgebras. These (locally) have underlying homotopy groups a formal power series ring.
\end{rem}
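\begin{ex}
For instance, when $R$ is an ordinary commutative ring, the divided power coalgebra $C = \Gamma_R[x] = \bigoplus_{n \geq 0} R \cdot \gamma_n$, with comultiplication $\Delta(\gamma_n) = \sum_{i+j=n} \gamma_i \otimes \gamma_j$ and counit $\epsilon(\gamma_n) = \delta_{n,0}$, is a one-dimensional smooth coalgebra. Its linear dual is the adic $R$-algebra
$$
C^\vee \cong R[[t]],
$$
where $t^n$ is dual to $\gamma_n$ and the topology is the $t$-adic one.
\end{ex}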
\begin{const}
Given a coalgebra $C \in \operatorname{cCAlg}_R$, one may define a functor
$$
\operatorname{coSpec}(C): \operatorname{CAlg}_R^{cn} \to \mathcal{S};
$$
this associates, to a connective $R$-algebra $A$, the space of grouplike elements:
$$
\operatorname{GLike}(A \otimes_{R}C) = \operatorname{Map}_{\operatorname{cCAlg}_A}(A, A \otimes_R C).
$$
\end{const}
\begin{rem}
Fix $C$ a smooth coalgebra. There is always a canonical map of stacks $\operatorname{coSpec}(C) \to \Spec(A)$, where $A= C^\vee$, but it is typically not an equivalence. The condition that $C$ is smooth guarantees precisely that there is an induced equivalence $\operatorname{coSpec}(C) \to \operatorname{Spf}(A) \subseteq \Spec A$, where $\operatorname{Spf}(A)$ denotes the formal spectrum of the adic $E_{\infty}$-algebra $A$. In particular $\operatorname{coSpec}(C)$ is a formal scheme in the sense of \cite[Chapter 8]{lurie2016spectral}.
\end{rem}
One has the following proposition, to be compared with Proposition \ref{fullyfaithfultobecited}:
\begin{prop}[Lurie]
Let $R$ be an $E_{\infty}$-ring. Then the construction $C \mapsto \operatorname{coSpec}(C)$ induces a fully faithful embedding of $\infty$-categories
$$
\operatorname{cCAlg}^{sm}_R \to \operatorname{Fun}(\operatorname{CAlg}^{cn}_R, \mathcal{S}).
$$
\end{prop}
This facilitates the following definition of a formal group in the setting of spectral algebraic geometry:
\begin{defn}
A functor $X: \operatorname{CAlg}^{cn}_R \to \mathcal{S}$ is a \emph{formal hyperplane} if it is in the essential image of the $\operatorname{coSpec}$ functor. We now define a \emph{formal group} to be an abelian group object in formal hyperplanes, namely an object of $\operatorname{Ab}(\operatorname{HypPlane})$.
\end{defn}
As is evident from the above discussion, one may alternatively define a formal group to be a certain type of Hopf algebra, in a somewhat strict sense. Namely, we can define a formal group to be an object of $\operatorname{Ab}(\operatorname{cCAlg}^{sm})$, i.e. an abelian group object in the $\infty$-category of smooth coalgebras.
\begin{rem}
The monoidal structure on $\operatorname{cCAlg}_R$ induced by the underlying smash product of $R$-modules is Cartesian; in particular it is given by the product in this $\infty$-category. Hence, a ``commutative monoid object'' in the category of $R$-coalgebras is a coalgebra which is additionally equipped with an $E_{\infty}$-algebra structure. In particular, such objects are bialgebras.
\end{rem}
\begin{const} \label{justlabelyourshit}
Let $\formalgroup$ be a formal group over an $E_{\infty}$-algebra $R$, and let $\mathcal{H}$ be a strict Hopf algebra in the above sense for which
$$
\operatorname{coSpec}\mathcal{H} = \formalgroup.
$$
Let
$$U: \operatorname{Ab}(\operatorname{cCAlg}_R) \to \operatorname{CMon}(\operatorname{cCAlg}_R)$$
be the forgetful functor from abelian group objects to commutative monoids. Since the monoidal structure on $\operatorname{cCAlg}_R$ is cartesian, the structure of a commutative monoid in $\operatorname{cCAlg}_R$ is that of a commutative algebra on the underlying $R$-module, and so we may view such an object as a bialgebra in $\Mod_{R}$. Finally, we apply $\Spec(-)$ (the spectral version) to this bialgebra to obtain a group object in the category of spectral schemes. This is what we refer to as the \emph{Cartier dual} $\formalgroup^{\vee}$ of $\formalgroup$.
\end{const}
\begin{rem}
The above just makes precise, for a strict Hopf algebra $\mathcal{H}$ (i.e. an abelian group object), the association
$$
\operatorname{Spf}(\mathcal{H}^\vee) \simeq \operatorname{coSpec}(\mathcal{H}) \mapsto \Spec(\mathcal{H}).
$$
Unlike in the $1$-categorical setting studied so far, there is no equivalence underlying this construction, as passing from abelian group objects to commutative monoid objects loses information; hence this is not a duality in the precise sense. In particular, it is not clear how to obtain a spectral formal group from a grouplike commutative monoid in schemes, even if the underlying coalgebra is smooth.
\end{rem}
\begin{prop} \label{basechangeinspectralsetting}
Let $R \to R'$ be a morphism of $E_{\infty}$-rings, let $\formalgroup$ be a formal group over $\Spec R$, and let $\formalgroup_{R'}$ denote its extension to $R'$. Then Cartier duality satisfies base change, so that there is an equivalence
$$
D(\formalgroup_{R'}) \simeq D(\formalgroup)|_{R'}.
$$
\end{prop}
\begin{proof}
Let $\formalgroup = \operatorname{Spf}(A)$ be a formal group corresponding to the adic $E_{\infty}$-ring $A$. Then the Cartier dual is given by $\Spec(\mathcal{H})$ for $\mathcal{H}= A^\vee$, the linear dual of $A$, which is a smooth coalgebra. The linear duality functor $(-)^\vee= \operatorname{Map}_R(-,R)$ commutes with base change (for example by \cite[Remark 1.3.5]{ellipticII}) and is an equivalence between smooth coalgebras and their duals. Moreover it preserves finite products, and so can be upgraded to a functor between abelian group objects.
\end{proof}
\subsection{Deformations of formal groups}
Let us recall the definition of a deformation of a formal group. These are all standard notions.
\begin{defn}
Let $\widehat{\GG_0}$ be a formal group defined over a finite field $k$ of characteristic $p$. Let $A$ be a complete local Noetherian ring with maximal ideal $\mathfrak{m}$, equipped with a ring homomorphism $\rho: A \to k$ inducing an isomorphism $A / \mathfrak{m} \cong k$. A \emph{deformation} of $\widehat{\GG_0}$ along $\rho$ is a pair $(\widehat{\GG}, \alpha)$, where $\widehat{\GG}$ is a formal group over $A$ and $\alpha: \widehat{\GG_0} \to \widehat{\GG}|_k$ is an isomorphism of formal groups over $k$.
\end{defn}
The data $(\widehat{\GG}, \alpha)$ can be organized into a category $\operatorname{Def}_{\widehat{\GG_0}}(A)$.
The following classic theorem due to Lubin and Tate asserts that there exists a universal deformation, in the sense that there is a ring which corepresents the functor $A \mapsto \operatorname{Def}_{\widehat{\GG_0}}(A)$.
\begin{thm}[Lubin-Tate]
Let $k$ be a perfect field of characteristic $p$ and let $\widehat{\GG_0}$ be a one-dimensional formal group of height $n < \infty$ over $k$. Then there exists a complete local Noetherian ring $R^{cl}_{\formalgroup}$, a ring homomorphism
$$
\rho: R^{cl}_{\formalgroup} \to k
$$
inducing an isomorphism $R^{cl}_{\formalgroup}/ \mathfrak{m} \cong k$, and a deformation $(\widehat{\GG}, \alpha)$ along $\rho$ with the following universal property:
for any complete local ring $A$ equipped with a ring homomorphism $\rho': A \to k$ inducing an isomorphism $A / \mathfrak{m} \cong k$, extension of scalars induces an equivalence
$$
\operatorname{Hom}_{/k}(R^{cl}_{\formalgroup}, A) \simeq \operatorname{Def}_{\widehat{\GG_0}}(A)
$$
(here, we regard the right hand side as a category with only identity morphisms).
\end{thm}
For the purposes of this text, we can interpret the above as saying that every formal group over a complete local ring $A$ with residue field $k$ can be obtained from the universal formal group over $A_0 := R^{cl}_{\formalgroup}$ by base change along the map $A_0 \to A$. We let $\GG^{un}$ denote the universal formal group over this ring.
\begin{rem}
As a consequence of the classification of formal groups due to Lazard, one has a description
$$
A_0 \cong W(k)[[v_1, \ldots, v_{n-1}]],
$$
where the map $\rho: W(k)[[v_1, \ldots, v_{n-1}]] \to k$ has kernel the maximal ideal $\mathfrak{m}= (p, v_1, \ldots, v_{n-1})$.
\end{rem}
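\begin{ex}
In the simplest case, take $\widehat{\GG_0} = \widehat{\GG_m}$ over $k = \F_p$, which has height $n = 1$. Then there are no coordinates $v_i$, and the Lubin-Tate ring is simply
$$
A_0 \cong W(\F_p) \cong \Z_p,
$$
with $\rho: \Z_p \to \F_p$ the reduction map. This is consistent with the identification of the height one spectral deformation ring with the $p$-complete sphere appearing below.
\end{ex}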
\subsection{Deformations over the sphere } \label{Cartierdualoversphere}
As it turns out, the ring $A_0$ has the special property that it lifts to an $E_{\infty}$-ring spectrum (indeed, to a $K(n)$-local one). To motivate the discussion, we restate a classical theorem attributed to Goerss, Hopkins and Miller. We first set some notation.
\begin{defn}
Let $\mathcal{FG}$ denote the category with
\begin{itemize}
\item objects are pairs $(k, \widehat{\GG})$ where $k$ is a perfect field of characteristic $p$, and $\widehat{\GG}$ is a formal group over $k$;
\item a morphism from $(k, \widehat{\GG})$ to $(k', \widehat{\GG}')$ is a pair $(f,\alpha)$, where $f: k \to k'$ is a ring homomorphism and $\alpha: \widehat{\GG}|_{k'} \cong \widehat{\GG}'$ is an isomorphism of formal groups over $k'$.
\end{itemize}
\end{defn}
\begin{thm}[Goerss-Hopkins-Miller]
Let $k$ be a perfect field of characteristic $p >0$, and let $\widehat{\GG_0}$ be a formal group of height $n < \infty$ over $k$.
Then there is a functor
$$
E: \mathcal{FG} \to \operatorname{CAlg}, \, \, \, \, \, (k, \widehat{\GG}) \mapsto E_{k, \widehat{\GG}}
$$
such that for every $(k, \widehat{\GG})$, the following holds
\begin{enumerate}
\item $E_{k, \widehat{\GG}}$ is even periodic and complex orientable.
\item the corresponding formal group over $\pi_0 E_{k, \widehat{\GG}}$ is the universal deformation of $(k, \widehat{\GG})$. In particular, $\pi_0 E_{k, \widehat{\GG}} \cong A_0 \cong \W(k)[[v_1, \ldots, v_{n-1}]]$.
\end{enumerate}
\end{thm}
\noindent If we set $(k, \widehat{\GG}) = (\F_{p^n}, \Gamma)$, where $\Gamma$ is the $p$-typical formal group of height $n$, we denote
$$
E_n := E_{\F_{p^n}, \Gamma};
$$
this is the $n$th \emph{Morava E-theory}.
\begin{rem}
The original approach to this uses Goerss-Hopkins obstruction theory. A modern account due to Lurie can be found in \cite[Chapter 5]{ellipticII}.
\end{rem}
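\begin{rem}
Concretely, with the standard conventions, the homotopy groups of Morava $E$-theory are given by
$$
\pi_* E_n \cong \W(\F_{p^n})[[v_1, \ldots, v_{n-1}]][\beta^{\pm 1}],
$$
with $\beta$ an invertible class in degree $2$; the even periodicity asserted in the theorem above is witnessed by $\beta$.
\end{rem}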
As it turns out, this ring can be thought of as parametrizing \emph{oriented} deformations of the formal group $\widehat{\GG}$. This terminology, introduced in \cite{ellipticII}, roughly means that the formal group in question is equivalent to the Quillen formal group arising from the complex orientation on the base ring. However, there also exists an $E_{\infty}$-algebra parametrizing \emph{unoriented} deformations of the formal group over $k$.
\begin{thm}[Lurie] \label{deformation}
Let $k$ be a perfect field of characteristic $p$, and let $\widehat{\GG}$ be a formal group of height $n$ over $k$.
There exists a morphism of connective $E_{\infty}$-rings
$$
\rho: R^{un}_{\widehat{\GG}} \to k
$$
and a deformation of $\widehat{\GG}$ along $\rho$ with the following properties
\begin{enumerate}
\item $R^{un}_{\widehat{\GG}}$ is Noetherian, the map $\rho$ induces a surjection $\epsilon: \pi_0 R^{un}_{\widehat{\GG}} \to k$, and $R^{un}_{\widehat{\GG}}$ is complete with respect to the ideal $\ker(\epsilon)$.
\item Let $A$ be a Noetherian $E_{\infty}$-ring for which the underlying ring homomorphism $\epsilon: \pi_0(A) \to k$ is surjective and $A$ is complete with respect to the ideal $\ker(\epsilon)$. Then extension of scalars induces an equivalence
$$
\Map_{\operatorname{CAlg}_{/k}}(R^{un}_{\widehat{\GG}}, A) \simeq \operatorname{Def}_{\widehat{\GG}}(A)
$$
\end{enumerate}
\end{thm}
\begin{rem}
We can interpret this theorem as saying that the ring $R^{un}_{\formalgroup_0}$ corepresents the spectral formal moduli problem classifying deformations of $\formalgroup_0$. In particular, there then exists a universal deformation (this is non-classical) over $R^{un}_{\formalgroup_0}$ which base-changes to any other deformation of $\formalgroup_0$.
\end{rem}
\begin{rem}
This is actually proven in the setting of \emph{$p$-divisible groups} over more general algebras over $k$. However, the formal group in question is the identity component of a $p$-divisible group over $k$; moreover, any deformation of the formal group will arise as the identity component of a deformation of the corresponding $p$-divisible group (cf. \cite[Example 3.0.5]{ellipticII}).
\end{rem}
\noindent
Now fix an arbitrary formal group $\widehat{\GG}$ of height $n$ over a finite field, and take its Cartier dual $\mathsf{Fix}_{\formalgroup} := \formalgroup^\vee$. By Construction \ref{justlabelyourshit}, this is an affine group scheme over $\Spec k$.
\begin{thm}
There exists a spectral scheme $\spectralift$ defined over the $E_\infty$ ring $R^{un}_{\widehat{\GG}}$, which lifts $\mathsf{Fix}_{\formalgroup}$, giving rise to the following Cartesian diagram of spectral schemes:
$$
\xymatrix{
&\mathsf{Fix}_{\formalgroup} \ar[d]_{\phi'} \ar[r]^{p'} & \spectralift \ar[d]^{\phi}\\
& \Spec(\mathbb{F}_p) \ar[r]^{p}& \Spec(R^{un}_{\formalgroup})
}
$$
\end{thm}
\begin{proof}
By Theorem \ref{deformation} above, given a formal group $\formalgroup$ over a perfect field, the functor associating to an augmented ring $A \to k$ the groupoid of deformations $\operatorname{Def}(A)$ is corepresented by the spectral (unoriented) deformation ring $R^{un}_{\formalgroup}$.
Hence we obtain a map
$$
R^{un}_{\formalgroup} \to \mathbb{F}_p
$$
of $E_{\infty}$-rings. Over $\Spec(R^{un}_{\formalgroup})$, one has the universal deformation $\formalgroup_{un}$, which base-changes along the above map to $\formalgroup$.
By definition, this formal group is of the form $\operatorname{coSpec}(\mathcal{H})$ for some $\mathcal{H} \in \operatorname{Ab}(\operatorname{cCAlg}^{sm}_{{R^{un}_{\formalgroup}}})$. Let
$$
U: \operatorname{Ab}(\operatorname{cCAlg}^{sm}_{{R^{un}_{\formalgroup}}}) \to \operatorname{CMon}^{gp}(\operatorname{cCAlg}^{sm}_{{R^{un}_{\formalgroup}}})
$$
be the forgetful functor from abelian group objects to grouplike commutative monoid objects. We recall that the symmetric monoidal structure on cocommutative coalgebras is the cartesian one. Hence, grouplike commutative monoids will have the structure of $E_{\infty}$-algebras in the symmetric monoidal $\infty$-category of $R^{un}_{\formalgroup}$-modules. In particular we obtain a commutative and cocommutative bialgebra, so we can take $\Spec(\mathcal{H})$; this will be a grouplike commutative monoid object in the category of affine spectral schemes over $\Spec(R^{un}_{\formalgroup})$. Since Cartier duality commutes with base change (cf. Proposition \ref{basechangeinspectralsetting}), we conclude that $\Spec(\mathcal{H})$ base-changes to $\mathsf{Fix}_{\formalgroup}$ under the map $R^{un}_{\formalgroup} \to \mathbb{F}_p$.
\end{proof}
\begin{ex}
As a motivating example, let $\formalgroup= \widehat{\GG_m}$, the formal multiplicative group over $\mathbb{F}_p$. As described in \emph{loc.\ cit.}, this formal group is Cartier dual to $\operatorname{Fix} \subset \W_p$, the Frobenius fixed point subgroup scheme of the Witt vectors $\W_p(-)$. This lifts to $R^{un}_{\widehat{\GG_m}}$, which in this case is none other than the $p$-complete sphere spectrum $\mathbb{S}^{\wedge}_p$. In fact, this object lifts to the sphere itself, by the discussion in \cite[Section 1.6]{ellipticII}. Hence we obtain an abelian group object in the category $\operatorname{cCAlg}_{\mathbb{S}^{\wedge}_p}$ of smooth coalgebras over the $p$-complete sphere. Taking the image of this along the forgetful functor
$$
\operatorname{Ab}(\operatorname{cCAlg}_{\mathbb{S}^{\wedge}_p}) \to \operatorname{CMon}(\operatorname{cCAlg}_{\mathbb{S}^{\wedge}_p})
$$
we obtain a grouplike commutative monoid $\mathcal{H}$ in $\operatorname{cCAlg}_{\mathbb{S}^{\wedge}_p}$, namely a bialgebra in $p$-complete spectra. We set $\Spec \mathcal{H} = \spectralift$. Then
base changing $\spectralift$ along the map
$$
\mathbb{S}^{\wedge}_p \to \tau_{\leq 0}\mathbb{S}^{\wedge}_p \simeq \Z_p \to \mathbb{F}_p
$$
recovers precisely the affine group scheme $\operatorname{Fix}$, by compatibility of Cartier duality with base change.
One may even go further and base-change to the orientation classifier (cf. \cite[Chapter 6]{ellipticII})
$$
\mathbb{S}^{\wedge}_p \simeq R^{un}_{\widehat{\GG_m}} \to R^{or}_{\widehat{\GG_m}} \simeq E_1
$$
and recover height one Morava $E$-theory, a complex orientable spectrum. Moreover, in height one, Morava $E$-theory is the $p$-complete complex $K$-theory spectrum $KU^{\wedge}_p$. Applying the above procedure, one obtains the Hopf algebra corresponding to
$$
C_{*}(\C P^{\infty}, KU^{\wedge}_p)
$$
whose algebra structure is induced by the abelian group structure on $\C P^{\infty}$. We now take the spectrum of this bialgebra; note that this is to be done in the nonconnective sense (see \cite{lurie2016spectral}), as $KU^{\wedge}_p$ is nonconnective. In any case, one obtains an affine nonconnective spectral group scheme
$$
\Spec(C_{*}(\C P^{\infty}, KU^{\wedge}_p))
$$
which arises via base change along $\Spec KU^{\wedge}_p \to \Spec R^{un}_{\widehat{\GG_m}}$.
We summarize this discussion with the following diagram of pullback squares in the $\infty$-category of nonconnective spectral schemes:
$$
\xymatrix{
&\operatorname{Fix} \ar[d]_{\phi'} \ar[r]^{p'} & \spectralift \ar[d]^{\phi} & \Spec( C_{*}(\C P^{\infty}, KU^{\wedge}_p)) \ar[d] \ar[l] \\
& \Spec(\mathbb{F}_p) \ar[r]^{p}& \Spec(R^{un}_{\formalgroup}) & \Spec(KU^{\wedge}_p) \ar[l]
}
$$
Note that we have the following factorization of the map $\Sph^{\wedge}_p \to KU^{\wedge}_p$:
$$
\Sph^{\wedge}_p \to ku^{\wedge}_p \to KU^{\wedge}_p
$$
through $p$-complete connective complex $K$-theory, so these lifts exist there as well.
\end{ex}
\begin{comment}
\begin{prop}
Let $\mathsf{Fix}^{\Sph}$ denote the lift of $\mathsf{Fix}$ to the spectral deformation ring $R^{un}_{\widehat{\GG_m}}$. Then the classifying stack $B \mathsf{Fix}^{\Sph}$ is of finite cohomological dimension.
\end{prop}
\begin{proof}
In \cite{moulinos2019universal}, it was shown that $B \mathsf{Fix}$ was of finite cohomological dimension over $\Spec \Z_p$. We reduce the proof here to this calculation. Indeed, we need to show that for $M \in \on{QCoh}(B \mathsf{Fix})^0$, that the pushforward along the structure map $\pi: B \mathsf{Fix}^0 \to \Spec \Sph^{\hat{ }}_p$ belongs to $\on{Sp}_{\geq - d}$ for some $d$. Recall that $B \mathsf{Fix}^{\Sph}$ is geometric, and so by \cite{lurie2016spectral}, $\on{QCoh}(B \mathsf{Fix}^{\Sph}$ has a natural $t$-structure. The heart of this t-structure agrees precisely with the abelian category of quasi-coherent sheaves on the truncation which in this case is none other than $\mathsf{B} \mathsf{Fix}$. {\red using the preygel reference as well} We know therefore that $ i^* \circ \pi_*(M) \in (\Mod_{\Z_p})_{\geq -d}$ where $i: \Spec \Z_p \to \Spec \Sph_p$. If
$$
\pi_*(M) \simeq \colim _{i \in I} \Sigma^{n_i} \Sph
$$
the fact that $B \mathsf{Fix}$ is of finite cohomological dimension means that
$$
(\colim _{i \in I} \Sigma^{n_i} \Sph)\otimes_{\Sph_p} \Z_p \simeq \colim _{i \in I} \Sigma^{n_i} \Z_p
$$
with $n_i \geq d$ for all $n_i$. Hence it follows that for any $M \in \on{QCoh}(B \mathsf{Fix}^{\Sph})$, $\pi_*(M)$ lies in the $\geq -d$ piece of the $t$-structure for some $d$.
\end{proof}
\end{comment}
\section{Lifts of $\formalgroup$-Hochschild homology to the sphere} \label{liftsofourshittosphere}
Let $\formalgroup$ be a height $n$ formal group over a perfect field $k$. We study a variant of $\formalgroup$-Hochschild homology which is more adapted to the tools of spectral algebraic geometry. Roughly speaking, we take mapping stacks in the setting of spectral algebraic geometry over $k$, instead of derived algebraic geometry.
\begin{defn} \label{E_inftyvariant}
Let $\formalgroup$ be a formal group. We define the \emph{$E_{\infty}$-$\formalgroup$-Hochschild homology} to be the functor
$$
HH^{\formalgroup}_{E_\infty}: \operatorname{CAlg}_k^{cn} \to \operatorname{CAlg}_k^{cn}, \, \, \, \, \, HH^{\formalgroup}_{E_\infty}(A)= R\Gamma(\Map_{sStk_k}(B \formalgroup^{\vee}, \Spec A), \OO),
$$
where $\Map_{sStk_k}(-,-)$ denotes the internal mapping object of the $\infty$-topos $sStk_k$.
\end{defn}
It is not clear how the two notions of $\formalgroup$-Hochschild homology compare.
\begin{conj}
Let $\formalgroup$ be a formal group and $A$ a simplicial commutative $k$-algebra. Then there exists a natural equivalence
$$
\theta(\operatorname{HH}^{\formalgroup}(A)) \to \operatorname{HH}_{E_\infty}^{\formalgroup}(\theta(A)).
$$
In other words, the underlying $E_\infty$-algebra of the $\formalgroup$-Hochschild homology coincides with the $E_\infty$-$\formalgroup$-Hochschild homology of $A$, viewed as an $E_\infty$-algebra.
\end{conj}
At least when $\formalgroup= \widehat{\GG_m}$, we know that this is true. In particular, this also recovers Hochschild homology (relative to the base ring $k$).
\begin{prop} \label{hochschildhomologygm}
There is a natural equivalence
$$
\operatorname{HH}(A/k) \simeq \operatorname{HH}^{\widehat{\GG_m}}_{E_\infty}(A)
$$
of $E_\infty$ algebra spectra over $k$.
\end{prop}
\begin{proof}
This is a modification of the argument of \cite{moulinos2019universal}. We have the (underived) stack $\mathsf{Fix} \simeq \widehat{\GG_m}^\vee$ and in particular a map
$$
S^1 \to B \mathsf{Fix} \simeq B\widehat{\GG_m}^\vee
$$
This can also be interpreted, by Kan extension, as a map of spectral stacks. This further induces a map between the mapping stacks
$$
\Map_{sStk_k}(B\widehat{\GG_m}^\vee, X) \to \Map_{sStk_k}(S^1, X).
$$
Recall that all (connective) $E_\infty$ $k$-algebras may be expressed as colimits of free algebras, and all free algebras may be expressed as colimits of the free algebra on one generator $k\{t\}$. This follows from \cite[Corollary 7.1.4.17]{luriehigher}, where it is shown that $\operatorname{Free}(k)$ is a compact projective generator for $\operatorname{CAlg}_k$. Hence, it is enough to test the above equivalence in the case where $X= \AAA_{sm}^1$; this is the ``smooth'' affine line, i.e. $\AAA_{sm}^1 = \Spec k\{t\}$, the spectrum of the free $E_\infty$ $k$-algebra on one generator. For this we check that there is an equivalence on functors of points
$$
B \mapsto \Map(B \widehat{\GG_m}^\vee \times B, \AAA^1) \simeq \Map(S^1 \times B, \AAA^1)
$$
for each $B \in \operatorname{CAlg}^{\operatorname{cn}}$. Each side may be computed as $\Omega^{\infty}(\pi_{*}\OO)$, where $\pi: B G \times B \to \Spec k$ denotes the structural morphism (where $G \in \{ \Z, \widehat{\GG_m}^\vee\}$). The result now follows from the following two facts:
\begin{itemize}
\item there is an equivalence of global sections $C^*(B \mathsf{Fix}, \OO) \simeq k^{S^1}$ \cite[Proposition 3.3.2]{moulinos2019universal}.
\item $B \mathsf{Fix}$ is of finite cohomological dimension, cf. \cite[Proposition 3.3.7]{moulinos2019universal},
\end{itemize}
as we now obtain an equivalence on $B$-points
$$
\Omega^\infty(\pi_* \OO_{B\widehat{\GG_m}^\vee \times B}) \simeq \Omega^\infty(\pi_*\OO_{B\widehat{\GG_m}^\vee} \otimes_k B )\simeq \Omega^\infty(\pi_*\OO_{S^1} \otimes_k B )\simeq \Omega^\infty(\pi_* \OO_{S^1 \times B}).
$$
Note that the second equivalence follows from the finite cohomological dimension of $B\widehat{\GG_m}^\vee$. Applying global sections $R\Gamma(-, \OO)$ to this equivalence gives the desired equivalence of $E_\infty$-algebra spectra.
\end{proof}
We show that $\formalgroup$-Hochschild homology possesses additional structure which is already seen at the level of ordinary Hochschild homology. Recall that for an $E_{\infty}$-ring $R$, its topological Hochschild homology may be expressed as the tensor with the circle:
$$
\operatorname{THH}(R) \simeq S^1 \otimes_{\Sph} R.
$$
Thus, upon applying the $\Spec(-)$ functor, this becomes a cotensor over $S^1$ in the $\infty$-category of spectral schemes. In fact this coincides with the internal mapping object $\Map(S^1, X)$, where $X= \Spec R$. Furthermore, one has the following base change property of topological Hochschild homology: for a map $R \to S$ of $E_\infty$-rings and an $R$-algebra $A$, there is a natural equivalence
$$
\operatorname{THH}(A/ R) \otimes_{R}S \simeq \operatorname{THH}(A \otimes_R S/ S).
$$
\noindent
In particular, if $R$ is a commutative ring over $\mathbb{F}_p$ which admits a lift $\widetilde{R}$ over the sphere spectrum, then one has an equivalence
$$
\operatorname{THH}(\widetilde{R}) \otimes_{\Sph} \mathbb{F}_p \simeq \operatorname{HH}(R/ \mathbb{F}_p).
$$
This can be interpreted geometrically as an equivalence of spectral schemes
$$
\Map(S^1, \Spec(\widetilde{R})) \times_{\Spec \Sph} \Spec \mathbb{F}_p \simeq \Map(S^1, \Spec(R))
$$
over $\Spec \mathbb{F}_p$.
We show that such a geometric lifting occurs in many instances in the setting of $\formalgroup$-Hochschild homology.
\begin{const}
Let $\formalgroup$ be a height $n$ formal group over $\mathbb{F}_p$. Let $\formalgroup_{un}$ denote the universal deformation of $\formalgroup$, which is a formal group over the spectral deformation ring $R_{\formalgroup}^{un}$. As in Section \ref{Cartierdualoversphere}, we let $\spectralift$ denote its Cartier dual over this $E_\infty$-ring.
\end{const}
\begin{thm}
Let $\formalgroup$ be a height $n$ formal group over $\mathbb{F}_p$ and let $X$ be an $\mathbb{F}_p$-scheme. Suppose there exists a lift $\tilde{X}$ of $X$ over the spectral deformation ring $R^{un}_{\formalgroup}$. Then there exists a homotopy pullback square of spectral algebraic stacks
$$
\xymatrix{
& \Map(B \Fix_{\formalgroup}, X) \ar[d]_{\phi'} \ar[r]^{p'} & \Map(B \spectralift, \tilde{X}) \ar[d]^{\phi}\\
& \Spec(\mathbb{F}_p) \ar[r]^{p}& \Spec(R^{un}_{\formalgroup})
}
$$
displaying $\Map(B \spectralift, \tilde{X})$ as a lift of $\Map(B \Fix_{\formalgroup}, X)$.
\end{thm}
\begin{proof}
Given a map $p:X \to Y$ of spectral schemes, there is an induced morphism of $\infty$-topoi
$$
p^*: \Shv^{\acute{e}t}_Y \to \Shv^{\acute{e}t}_X
$$
This pullback functor is symmetric monoidal, and moreover behaves well with respect to internal mapping objects. Now let $X= \Spec \mathbb{F}_p$ and $Y= \Spec R_{\formalgroup}^{un}$, and let $p$ be the map induced by the universal property of the spectral deformation ring $R_{\formalgroup}^{un}$. In this particular case, this means there will be an equivalence
$$
p^* \Map(B\spectralift, \tilde{X}) \simeq \Map(p^* B \spectralift, p^* \tilde{X}) \simeq \Map(B \Fix_{\formalgroup}, X)
$$
since $\tilde{X} \times_{\Spec R^{un}_{\formalgroup}} \Spec \mathbb{F}_p \simeq X$ and $p^* B \spectralift \simeq B \Fix_{\formalgroup}$.
\end{proof}
From this we conclude that $\formalgroup$-Hochschild homology has a lift in the geometric sense, in that there is a spectral mapping stack over $\Spec R^{un}_{\formalgroup}$ which base changes to $\Map(B \formalgroup^{\vee}, X)$. We would like to conclude this at the level of global section $E_\infty$-algebras. This is not formal unless we have a more precise understanding of the regularity properties of $\Map(B \spectralift, X)$ for an affine spectral scheme $X= \Spec A$.
Indeed, there is a map
\begin{equation} \label{suhdude}
R\Gamma(\Map( B \spectralift, \tilde{X}), \OO) \otimes_{R^{un}_{\formalgroup}} \mathbb{F}_p \to
R\Gamma(\Map(p^* B \spectralift, p^* \tilde{X}), \OO)
\end{equation}
but it is not a priori clear that this is an equivalence. In particular, we have the following diagram of stable $\infty$-categories
$$
\xymatrix{
& \Mod_{R^{un}_{\formalgroup}} \ar[d]^{p^*} \ar[r]^-{\phi^*} & \on{QCoh}(\Map( B \spectralift, \tilde{X})) \ar[d]^{p'^*}\\
& \Mod_{\mathbb{F}_p} \ar[r]^-{\phi'^*} & \on{QCoh}(\Map( B \Fix_{\formalgroup}, X))
}
$$
for which we would like to verify the Beck-Chevalley condition holds; i.e. that the following canonically defined map
$$
\rho: p^* \circ \phi_* \to \phi'_* \circ p'^*
$$
is an equivalence. Here $\phi_*$ and $\phi'_*$ are the right adjoints and may be thought of as global section functors. This construction applied to the structure sheaf $\OO$ recovers the map (\ref{suhdude}).
This would follow from Proposition \ref{morebasechangeshiiiii} upon knowing either that the spectral stack $\Map(B \spectralift, \tilde{X})$ is representable by a spectral scheme or, more generally, that it is of finite cohomological dimension. In fact the former holds:
\begin{thm} \label{representability}
Let $\formalgroup$ be as above and let $X= \Spec A$ denote a spectral scheme. Then the mapping stack
$\Map(B \spectralift, X)$
is representable by a spectral scheme.
\end{thm}
\begin{proof}
This will be an application of the Artin-Lurie representability theorem, cf. \cite[Theorem 18.1.0.1]{lurie2016spectral}. Given spectral stacks $X, Y$, the derived spectral mapping stack
$
\Map(Y, X)
$
is representable by a spectral scheme if and only if it is nilcomplete, infinitesimally cohesive, admits a cotangent complex, and its truncation $t_0(\Map(Y, X))$ is representable by a classical scheme. By Proposition 5.10 of \cite{halpern2014mapping}, if $Y$ is of finite tor-amplitude and $X$ admits a cotangent complex, then so does the mapping stack $\Map(Y, X)$; in our case $X$ is an honest spectral scheme, which has a cotangent complex. Note that the condition of being of finite tor-amplitude is local on the source with respect to the flat topology (cf. \cite[Proposition 6.1.2.1]{lurie2016spectral}). Thus if there exists a flat cover $U \to Y$ such that the composition $U \to Y \to \Spec R$ is of finite tor-amplitude, then $Y \to \Spec R$ itself has this property.
Infinitesimal cohesion follows from \cite[Lemma 2.2.6.13]{toen2008homotopical}. The following lemma takes care of nilcompleteness:
\begin{lem}
Let $Y$ be a spectral stack over $\Spec(R)$ which may be written as a colimit of affine spectral schemes
$$
Y \simeq \colim \Spec A_i
$$
where each $A_i$ is flat over $R$ and let $X$ be a nilcomplete spectral stack. Then $\Map_{Stk_R}(Y,X)$ is nilcomplete.
\end{lem}
\begin{proof}
The argument is similar to that of an analogous claim appearing in the proof of Theorem 2.2.6.11 in \cite{toen2008homotopical}. Let $Y$ be as above. Then
$$
\Map(Y, X) \simeq \lim_i \Map(\Spec A_i, X)
$$
and so it suffices to verify this when $Y = \Spec A$ for $A$ flat. In this case we see that for $B \in \operatorname{CAlg}^{\operatorname{cn}}$,
$$
\Map(\Spec A, X)(B) \simeq X( A \otimes_{R} B).
$$
The map
$$
\Map(\Spec A, X)(B) \to \lim_n \Map(\Spec A, X)(\tau_{\leq n} B)
$$
which we need to check is an equivalence now translates to a map
\begin{equation}\label{benchen}
X(A \otimes_R B) \to \lim_n X(\tau_{\leq n}B \otimes_R A)
\end{equation}
\noindent
We now use the flatness assumption on $A$. Using the general formula (cf. \cite[Proposition 7.2.2.13]{luriehigher}), which in this case reads
$$
\pi_n(A \otimes_R B) \simeq \operatorname{Tor}_0^{\pi_0(R)}(\pi_0 A, \pi_n B),
$$
we conclude that $\tau_{\leq n }(A \otimes_R B) \simeq A \otimes_R \tau_{\leq n} B$. Thus, (\ref{benchen}) above becomes a map
$$
X(A \otimes_R B) \to \lim_n X(\tau_{\leq n}(B \otimes_R A))
$$
which is an equivalence because $X$ was itself assumed to be nilcomplete.
\end{proof}
Finally we show that the truncation is an ordinary scheme. Note first of all that the truncation functor
$$
t_0: SStk \to Stk
$$
preserves limits and colimits. It is induced from the Eilenberg-MacLane functor
$$
H: \operatorname{CAlg}^0 \to \operatorname{CAlg}, \, \, \, A \mapsto HA
$$
which is itself adjoint to the truncation functor on $E_\infty$ rings. One sees that the truncation functor $t_0 = H^*: SStk \to Stk$ will have as a right adjoint the functor
$$
\pi_0^*: Stk \to SStk,
$$
induced by the $\pi_0$ functor
$$
R \mapsto \pi_0 R
$$
Thus it is right exact and preserves colimits. Hence if $Y = B G$ for some spectral group scheme $G$, then $t_0 BG \simeq B t_0 G$. Now, one has the identification
$$
t_0 \Map(Y, X) \simeq \Map(t_0 Y, t_0 X),
$$
which in our situation becomes
$$
t_0 \Map(B \spectralift, X) \simeq \Map(B G, t_0 X)
$$
for some (classical) affine group scheme $G$. Recall that the only maps $f: B G \to t_0 X$ from a classifying stack to a scheme $t_0 X$ are the constant ones. Hence we conclude that the truncation of this spectral mapping stack is equivalent to the scheme $t_0 X$, the truncation of $X$.
\end{proof}
\begin{comment}
When $n=1$, and $\formalgroup = \widehat{\GG_m}$, even more is true. We can in fact identify $\operatorname{THH}^{\widehat{\GG_m}}$ with ordinary topological Hochschild homology (hence justifying the notation). Indeed, this identification depends on our notion of Cartier duality. We begin with the following identification
\begin{prop}
There is an equivalence $\operatorname{Map}_{D S^1}(\Sph, \Sph) \simeq \widehat{\Sph [[u]]}$ where $\Sph[[u]]$ denotes the completion of the spectrum $\Sph[\Z]$ at the ideal $u= (t-1)$.
\end{prop}
\begin{thm}
There is an equivalence
$$
C^*(B \mathsf{Fix}^{\Sph}, \OO) \simeq \Sph^{S^1}
$$
where the right hand side denotes the $E_{\infty}$-algebra of cochains on $S^1$.
\end{thm}
Before we show this, we recall some facts about the convergence of the Eilenberg-Moore spectral sequence. Recall that if $X$ is a finite CW complex, there exist equivalences
$$
\Sph^{X} \simeq \End_{\Sigma_+^\infty X}(\Sph);
$$
if moreover $X$ is \emph{simply connected}, one also has an equivalence
$$
\Sigma_+^\infty X \simeq \End_{\Sph^X}(\Sph)
$$
\begin{proof}
This follows by identifying both sides as the Koszul duals of the algebra $\Sph[[u]]$ above. Since Koszul duality satisfies a universal property, any two such choices are equivalent. Note that since $S^1$ is not simply connected, this is not an immediate fact. Namely, we cannot conclude automatically that
$$
\operatorname{End}_{\Sph^{S^1}}(\Sph, \Sph) \simeq \Sph[\Omega S^1] = \Sph[\Z].
$$
However, by the exotic convergence of the Eilenberg-Moore spectral sequence as described in {\red cite Dwyer}, there is an equivalence
$$
\Sph \otimes_{\Sph^{S^1}} \Sph \simeq \Sph[[u]]^{\vee},
$$
where the right hand side denotes the linear dual of $\Sph[[u]]$. This is the adic $E_{\infty}$-algebra, which is obtained as the $I$-adic completion of $\Sph[\Z]$ at the ideal $(t-1) \subset \pi_0 \Sph[\Z]$. Note that geometrically this corresponds to functions on the formal completion of the spectral version of $\GG_m$ at the unit section.
By definition, then, $\Sph[[u]]^{\vee}$ is the $E_{\infty }$-bi-algebra of functions on the group scheme $\mathsf{Fix}^{\Sph}$. In the $\infty$-category of spectral derived stacks, one has the following pullback square
$$
\xymatrix{
&\operatorname{Fix}^{\Sph} \ar[d]_{} \ar[r]^{} & \pt \ar[d]^{}\\
& \pt \ar[r]^{}& \mathsf{B} \mathsf{Fix}^{\Sph}
}
$$
which upon taking global sections gives the pushout square
$$
\xymatrix{
&C^*( \mathsf{B}\operatorname{Fix}^{\Sph}, \OO) \ar[d]_{} \ar[r]^{} & \Sph \ar[d]^{}\\
& \Sph \ar[r]^{}& C^* (\mathsf{Fix}^{\Sph}, \OO);
}
$$
in $\operatorname{CAlg}$ this expresses $C^* (\mathsf{Fix}^{\Sph}, \OO)$ as $\Sph \otimes_{C^*( B\operatorname{Fix}^{\Sph}, \OO)} \Sph$. By taking duals we recover, via Cartier duality, $\Sph[[u]]$. Hence we have displayed $\widehat{\Sph[\Z]} \simeq \Sph[[u]]$ as the Koszul dual of both. Since Koszul duality is unique up to contractible choice, this determines an equivalence $C^*(B \mathsf{Fix}^{\Sph}, \OO) \simeq \Sph^{S^1}$.
\end{proof}
\begin{rem}
We remark that the spectrum $\Sph[[u]]$ appearing in the statement of the above theorem is not merely the standard power series ring $\Sph[[t]]$. However, the underlying $E_1$-algebra structures agree.
\end{rem}
\begin{cor}
Let $X= \Spec A$ be an affine spectral scheme. There exists an equivalence of spectral schemes
$$
\Map(B \mathsf{Fix}^{\Sph}, X) \simeq \Map( S^1, X)
$$
Hence one recovers topological Hochschild homology
$$
\operatorname{THH}(A) \simeq R \Gamma( \Map(B \mathsf{Fix}^{\Sph}, X))
$$
as the global sections of the mapping stack $\Map(B \mathsf{Fix}^{\Sph}, X)$.
\end{cor}
\begin{proof}
There is a map
$$
S^1 \simeq B \Z \to B \operatorname{Fix}^{\Sph}
$$
which upon taking mapping stacks, gives rise to the map in the statement. Recall that all (connective) $E_\infty$-algebras may be expressed as colimits of free algebras, and all colimits of free algebras may be expressed as colimits of copies of the free algebra on one generator $\Sph\{t\}$. This follows from \cite[Corollary 7.1.4.17]{luriehigher}, where it is shown that $\operatorname{Free}(R)$ is a compact projective generator for $\operatorname{CAlg}_R$. Hence, it is enough to test the above equivalence in the case where $X= \AAA_{sm}^1$; this is the smooth spectral affine line, i.e. $\AAA_{sm}^1 = \Spec \Sph\{t\}$, the spectrum of the free $E_\infty$-algebra on one generator. For this we check that there is an equivalence on functors of points
$$
B \mapsto \Map(B \mathsf{Fix}^{\Sph} \times B, \AAA^1) \simeq \Map(S^1 \times B, \AAA^1)
$$
for each $B \in \operatorname{CAlg}^{\operatorname{cn}}$. Each side may be computed as $\Omega^{\infty}( \pi_{*}\OO)$, where $\pi: B G \times B \to \Spec \Sph$ denotes the structural morphism (where $G$ is either $\mathsf{Fix}^{\Sph}$ or $\Z$). The proof now follows from the following two facts:
\begin{itemize}
\item there is an equivalence of global sections $C^*(B \mathsf{Fix}^{\Sph}, \OO) \simeq \Sph^{S^1}$
\item $B \mathsf{Fix}^{\Sph}$ is of finite cohomological dimension.
\end{itemize}
\end{proof}
\end{comment}
\subsection{Topological Hochschild homology}
As we saw, for a height $n$ formal group $\formalgroup$ over a finite field $k$, there exists a lift $\spectralift$ of the Cartier dual of $\formalgroup$; this allows one to define a lift of $\formalgroup$-Hochschild homology. We show that when the formal group is $\widehat{\GG_m}$ this lift is precisely topological Hochschild homology, at least after $p$-completion, as one would expect. For the remainder of this section we let $\formalgroup= \widehat{\GG_m}$, the formal multiplicative group.
Let $X$ be a fixed spectral stack. We remark that there exists an adjunction of $\infty$-topoi:
$$
\mathcal{S} \rightleftarrows SStk_{X}
$$
where one has on the right hand side the $\infty$-category of spectral stacks over $X$.
First, one has the following proposition; here we think of $S^1$ as a ``constant stack'' induced by the adjunction
$$
\pi^*: \mathcal{S} \rightleftarrows SStk_{R} :\pi_*
$$
\begin{prop}
There exists a canonical map
$$
S^1 \to B \spectralift
$$
of group objects in the $\infty$-category of spectral stacks over $\Sph_{p}$.
\end{prop}
\begin{proof}
By \cite[Construction 3.3.1]{moulinos2019universal}, there is a canonical map
\begin{equation} \label{randomequation}
\Z \to \mathsf{Fix}
\end{equation}
in the category of fpqc abelian sheaves over $\Spec \Z_{(p)}$.
We claim that the (discrete) group scheme $\mathsf{Fix}$ is none other than the truncation of the spectral group scheme
$$
\formalgroup_{un}^{\vee} \to \Spec \Sph_p.
$$
This follows from the fact that $\formalgroup_{un}^{\vee}$ is flat over $\Sph_p$, as the corresponding Hopf algebra is flat. As a result, the base change of this spectral group scheme along the map
$$
\Spec \Z_p \to \Spec \Sph_p
$$
is itself flat over $\Z_p$ and in particular is $0$-truncated. By definition, this is $\mathsf{Fix}$. Now,
there is an adjunction
$$
i^*: SStk_{\Sph_p} \rightleftarrows Stk_{\Z_p} : t_0
$$
Using this adjunction, the map (\ref{randomequation}) lifts to a map
$$
\Z \to \formalgroup_{un}^{\vee}
$$
in $SStk_{\Sph_p}$.
This will be a map of group objects, since the adjoint pair preserves the group structure.
Delooping this, we obtain the desired map
$$
S^1 \simeq B \Z \to B \formalgroup_{un}^{\vee}.
$$
\end{proof}
Let $X= \Spec A$ be an affine spectral scheme. By taking mapping spaces, the above proposition furnishes a map
$$
\Map( B \formalgroup_{un}^{\vee} , X) \to \Map(S^1 ,X);
$$
applying global sections further begets a map
$$
f: \operatorname{THH}(A) \to R \Gamma(\Map( B \formalgroup_{un}^{\vee} , X), \OO )
$$
of $E_{\infty}$ $\Sph_{p}$-algebras.
\begin{thm}
Let
$$
f_p: \operatorname{THH}(A; \mathbb{Z}_p) \to R \Gamma(\Map( B \formalgroup_{un}^{\vee} , X), \OO )^{\widehat{ }}_p
$$
denote the $p$-completion of the above map. Then $f_p$ is an equivalence.
\end{thm}
\begin{proof}
Since this is a map of $p$-complete spectra, it is enough to verify that it is an equivalence upon tensoring with the Moore spectrum $\Sph_p / p$. In fact, since these are both connective spectra, one can go further and test this simply by tensoring with $\mathbb{F}_p$ (e.g., by \cite[Corollary A.33]{mao2020perfectoid}). Hence, we are reduced to showing that
$$
\operatorname{THH}(A; \mathbb{Z}_p) \otimes_{\Sph_p} \mathbb{F}_p \to R \Gamma(\Map( B \formalgroup_{un}^{\vee} , X), \OO )^{\widehat{ }}_p \otimes \mathbb{F}_p
$$
is an equivalence of $E_\infty$ $\mathbb{F}_p$-algebras.
By generalities on topological Hochschild homology, we have the following identification of the left hand side:
$$
\operatorname{THH}(A; \mathbb{Z}_p) \otimes_{\Sph_p} \mathbb{F}_p \simeq \operatorname{HH}(A \otimes_{\Sph_p} \mathbb{F}_p / \mathbb{F}_p ).
$$
Now we can use Theorem \ref{representability} to identify the right hand side with the global sections of the following mapping stack
$$
\Map(B \formalgroup_{un}^{\vee}, X) \times \Spec \mathbb{F}_p \simeq \Map(B \formalgroup_{un}^{\vee} \times \Spec \mathbb{F}_p, X \times \Spec \mathbb{F}_p)
$$
By Proposition \ref{hochschildhomologygm}, this is precisely $\operatorname{HH}(A \otimes_{\Sph_p} \mathbb{F}_p / \mathbb{F}_p )$, whence the equivalence.
\end{proof}
\section{Filtrations in the spectral setting }
In Section \ref{deformationofGm} an interpretation of the HKR filtration on Hochschild homology was given in terms of a degeneration of $\widehat{\GG_m}$ to $\widehat{\GG_a}$. Moreover, this was expressed as an example of the deformation to the normal cone construction of Section \ref{deformationsection}.
In Section \ref{liftsofourshittosphere}, we further saw that these $\formalgroup$-Hochschild homology theories may be lifted beyond the integral setting. A natural question then arises: do the filtrations come along for the ride as well? Namely, does there exist a filtration on $\operatorname{THH}^{\formalgroup}(-)$ which recovers
upon base-changing along $R^{un}_{\formalgroup} \to k$, the filtered object corresponding to $HH^{\formalgroup}(-)$?
We will not seek to answer this question here. However we do give a reason why some negative results might be expected. As mentioned in the introduction, many of the constructions do work integrally. For example, one can talk about the deformation to the normal cone $Def_{\filstack}(\formalgroup)$ of an arbitrary formal group over $\Spec \Z$. If we apply this to $\widehat{\GG_m}$ we obtain a degeneration of the formal multiplicative group to the formal additive group. We let $Def_{\filstack}(\widehat{\GG_m})^{\vee}$ denote the Cartier dual, as in Section \ref{filteredformalgroupsection}. In \cite{scheme} the Cartier dual to $\widehat{\GG_m}$ is identified with $\Spec(Int(\Z))$, the spectrum of the ring of integer-valued polynomials on $\Z$. Moreover it is shown that $ B \Spec(Int(\Z))$ is the affinization of $S^1$; hence one can recover (integral) Hochschild homology by mapping out of this.
Let us suppose there exists a lift of $Def(\widehat{\GG_m})^{\vee}$ to the sphere spectrum, which we shall denote by $Def^{\Sph}(\widehat{\GG_m})^{\vee}$. This would allow us to define a mapping stack in the $\infty$-category $sStk_{\filstack}$ of spectral stacks over the spectral variant of $\filstack$. By the results of \cite{geometryofilt}, this comes equipped with a filtration on its cohomology, which we would like to think of as recovering topological Hochschild homology.
However, over the special fiber $B \GG_m \to \filstack$, we would expect that such a lift $Def^{\Sph}(\widehat{\GG_m})^{\vee}$ recovers the formal additive group $\widehat{\GG_a}$. More precisely, we would get a formal group over the sphere spectrum $\formalgroup \to \Spec \Sph$ which pulls back to the formal additive group $\widehat{\GG_a}$ along the map $\Sph \to \Z$. However, by \cite[Proposition 1.6.20]{ellipticII}, this cannot happen. Indeed there it is shown that $\widehat{\GG_a}$ does not belong to the essential image of $\operatorname{FGroup}(\Sph) \to \operatorname{FGroup}(\Z)$.
We summarize this discussion into the following proposition.
\begin{prop}
There exists no lift of $Def_{\filstack}(\widehat{\GG_m})$ to the sphere spectrum. In particular, there exists no formal group $\widetilde{\formalgroup}$ over $\filstack$, relative to $\Sph$, such that $\widetilde{\formalgroup} \times \Spec \Z \simeq Def_{\filstack}(\widehat{\GG_m})$.
\end{prop}
\begin{comment}
\section{Trash}
\subsection{Derived commutative algebras}
Fix $R$ a commutative ring; let $\EuScript{C}$ denote either the $\infty$-category of simplicial commutative $R$-algebras, or the $\infty$-category of $E_\infty$-$R$-algebras in the $\infty$-category of spectra. Recall that the functor that assigns to a simplicial commutative ring its underlying $E_\infty$-algebra is monadic, and so there exists a monad $T$ for which $\operatorname{sCAlg}_R \simeq \operatorname{Alg}_T(\operatorname{CAlg}_R)$.
In fact, since the discussion will be relevant for us, we
analyze the difference between the two settings a bit further.
On the heart $\Mod_R^0$, for any discrete module one has the symmetric power functor $M \mapsto Sym^n(M)$. Since this generates the category of connective $R$-modules via sifted colimits, one may left Kan extend along this completion functor to obtain a monad $LSym^*(M)$ on $\Mod_R^{cn}$, such that algebras over this monad are precisely the $\infty$-category $\operatorname{sCAlg}$ of simplicial commutative $R$-algebras. By the work of \cite{brantner2019deformation}, this monad extends to a monad $T: \Mod_R \to \Mod_R$. We refer to algebras over this monad as derived algebras. In the connective case, one of course recovers the homotopy theory of simplicial commutative algebras.
\begin{rem}
Let $A \in \Mod_T(\Mod_R)$ be a derived $T$-algebra, whose underlying $R$-module is coconnective. It seems to be an open question to compare the homotopy theory of such objects to the more classical notion of cosimplicial commutative rings. By, for example, \cite{toen2006champs}, there exists a model category structure on the category of cosimplicial commutative algebras. Note that unlike the case of simplicial commutative rings, which are characterized by a universal property, this is not so straightforward.
\end{rem}
\end{comment}
\begin{comment}
\subsection{Formal derived geometry}
Let $X$ be a derived scheme, and $x \in X$ be a closed point. One defines $\widehat{X}$ to be the functor.... the value of this is determined on Artinian algebras. Formal groups are a notion intrinsic to the setting of formal geometry, and so it is worth placing the relevant ideas in context. Formal geometry governs the local structure of various moduli and is typically determined by Lie algebraic data.
As one may gather, there are once again two variants.
Formal stacks (often referred to as formal moduli problems) are determined by their restrictions to Artinian simplicial commutative algebras; recall that $A \in \operatorname{sCAlg}_k$ is Artinian if
\begin{itemize}
\item $\pi_0 A$ is a local Artinian ring;
\item $\pi_n A = 0$ for $n \gg 0$.
\end{itemize}
The analogous definition holds in the spectral setting. Then a formal moduli problem is a functor
$$
F: \operatorname{sCAlg}^{\operatorname{Art}} \to \mathcal{S}
$$
satisfying various conditions. These formal moduli problems occur naturally as the formal completions of various derived stacks along a closed immersion $f: X \to Y$; the \emph{formal completion of $Y$ along $X$} is defined to be the functor
$$
R \mapsto Y(R) \times_{Y(R_{red})} X(R_{red})
$$
where $R_{red}:= (\pi_0 R)_{red}$. As a stack this is representable by the formal scheme which is the formal completion of $Y$ along $X$.
\subsection{Derived formal groups}
We use the various ideas to define a notion of a formal group, in the setting of derived algebraic geometry, based on simplicial commutative rings.
The basic idea is the following. We define derived formal groups to be abelian group objects in the category $\EuScript{D}$, where $\EuScript{D}$ is defined to be the full subcategory of adic $E_{\infty}$-algebras.
\begin{const}
Let $A \in \operatorname{Alg}_T(\Mod_R)$ be a derived $R$-algebra. Then the structure of an adic derived algebra is that of a topology on the commutative ring $\pi_0 A$. This prescription makes sense because we can just take $\pi_0 A $ of the underlying $E_\infty$-algebra $A^{\circ}$. In particular we may define the $\infty$-category of adic derived algebras to be the fiber product $\Alg_{T,ad} := \Alg_{T} \times_{\operatorname{CAlg}^0} \operatorname{CAlg}^0_{ad}$, where $\operatorname{CAlg}^0$ denotes the category of discrete $R$-algebras, and $\operatorname{CAlg}^0_{ad}$ denotes the category of such algebras together with a topology.
\end{const}
\begin{rem}
An object in $\Alg_{T,ad}$ whose underlying $E_\infty$-algebra is connective may be identified with simplicial commutative algebras for which $\pi_0$ is an adic commutative ring.
\end{rem}
In the spectral setting, the underlying formal scheme (or formal hyperplane, as it is called) arises functorially from a \emph{smooth} coalgebra, given an adic derived algebra. The key point in the definition of a derived formal group is to isolate those formal schemes which carry an additional derived structure. As these are essentially affine objects of the form $\operatorname{Spf} A$, this amounts to identifying a $T$-algebra structure on the $E_{\infty}$-algebra $A$.
We are now ready to define formal groups in the derived setting:
\begin{defn}
Let $\EuScript{D}$ be the category above, of formal hyperplanes $\operatorname{Spf} A$ such that $A$ arises in the image of the forgetful functor $\Alg_{T, ad} \to \operatorname{CAlg}$. Then we define the category of derived formal groups over $R$ to be $\operatorname{Ab}(\EuScript{D})$, the category of abelian group objects of $\EuScript{D}$.
\end{defn}
\begin{rem} Need some sort of statement, that Spec(A) for A a simplicial commutative ring is a derived scheme. Can this be gleaned just by looking at the value the functors take?
\end{rem}
\begin{rem}
Unpacking these definitions, this implies that a derived formal group is a pro-representable functor $F: \operatorname{sCAlg} \to \operatorname{Ab}(\mathcal{S}) \simeq \operatorname{sAb}$ such that $F \simeq \operatorname{Spf}(A)$, where $A$ is a simplicial
\end{rem}
There is an obvious functor from derived formal groups into spectral formal groups, but it is neither fully faithful nor essentially surjective, essentially owing to the fact that the forgetful functor from derived commutative algebras does not have these properties.
{\red this section obviously needs some work.... but hopefully the main idea is now here...}
\end{comment}
\bibliographystyle{amsalpha}
\section{Acknowledgements}\label{sec:Acknowledgements}
The authors acknowledge J.G.M. Kuerten and T. Hazenberg for useful discussions in developing this paper. We would like to thank D. Ning and A. Panahi for sharing the experimental data. We would like to thank B. Cuenot for initiating the collaboration between Eindhoven University of Technology and Imperial College London.
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under Grant Agreement no. 884916. and Opzuid (Stimulus/European Regional Development Fund) Grant agreement No. PROJ-02594.
\section{Conclusions}\label{sec:conclusions}
Molecular dynamics simulations have been performed to investigate the thermal and mass accommodation coefficients for the combination of iron(-oxide) and air. The obtained relations for the TAC and MAC are used in a point-particle Knudsen model to investigate the effects on the combustion behavior of (fine) iron particles.
The TAC for the interaction of $\mathrm{Fe}$ with $\mathrm{N_2}$ is almost independent of the surface temperature and equals $\alpha_\mathrm{T} = 0.17$. For $\mathrm{Fe_xO_y}$-$\mathrm{O_2}$ interactions, the TAC remains close to unity when the oxidation degree of the surface is low, but decreases abruptly to $0.2$ once it reaches the stoichiometry of $\mathrm{FeO}$. The MAC decreases almost linearly as a function of $Z_\mathrm{O}$. Two different slopes are observed: A steeper slope for $Z_\mathrm{O} < 0.5$ and a shallower one for $Z_\mathrm{O} > 0.5$, indicating that it becomes more difficult for an iron(-oxide) particle to absorb more oxygen for $Z_\mathrm{O} > 0.5$.
By incorporating the MD information into the single iron particle model, a new temperature-time curve for the single iron particles is observed compared to results obtained with previously developed continuum models. Since the rate of oxidation slows down as the MAC decreases with an increasing oxidation stage, the rate of heat release decreases when reaching the maximum temperature, such that the rate of heat loss exceeds that of heat release. In addition, the oxidation beyond $Z_\mathrm{O} = 0.5$ (from stoichiometric $\mathrm{FeO}$ to $\mathrm{Fe_3O_4}$) is modeled. The effect of the transition-regime heat and mass transfer on the burn time becomes more than 10\% if the particles are smaller than $10$ \textmu m.
In some cases, the model overestimates the particle temperature. The reactive cooling slope observed after the maximum particle temperature, however, reasonably agree with the experimentally observed slope. This overestimation could be attributed to the assumption of an infinitely fast transport of oxygen inside the particle. If the particle does not have a homogeneous composition, the mass accommodation coefficient significantly decreases. With a high oxygen concentration $X_\mathrm{O_2}$ in the gas phase, the rate of internal transport could become important, and therefore limit the maximum temperature.
\section{Introduction}
Iron powder is considered as a promising metal fuel since it is inherently carbon-free, recyclable, compact, cheap and widely available \citep{Bergthorson2015}. To design and improve real-world iron-fuel burners, an in-depth understanding of the fundamentals underlying the combustion of fine iron particles is required.
Over the past five years, the interest in using iron powder as a circular carrier of renewable energy has drastically increased. \cite{Soo2017}, \cite{Tang2009}, \cite{McRae2019}, \cite{Toth2020} and \cite{Li2020} performed experiments with iron dust flames to study fundamental characteristics of iron combustion, such as flame structure and flame propagation. \cite{Toth2020} and \cite{Li2020} identified the formation of nano-particles in their experiments, where they observed halos of nano-particles surrounding the burning iron particles. To gain a more in-depth understanding of the oxidation processes of iron particles, the canonical configuration of single iron particle combustion has been investigated by multiple researchers. \cite{Ning2020, Ning2021} performed single particle combustion experiments where the particles are ignited by a laser. They showed that the duration of burning process is sensitive to the surrounding oxygen concentration. They also observed that the maximum particle temperature increases with gas-phase oxygen concentration while reaching a plateau at sufficiently elevated oxygen concentrations. Formation of oxide nano-particles during the combustion was also recorded. \cite{Li2022} investigated the ignition and combustion process of single micron-sized iron particles in the hot gas flow of a burned methane-oxygen-nitrogen mixture, and drew similar conclusions as \cite{Ning2020}. \cite{Panahi2022} used a drop-tube furnace to burn iron particles at a high gas temperature ($\approx 1350~\mathrm{K}$) with oxygen concentrations of $21\%$, $50\%$ and $100\%$. They showed that particle temperature increases significantly when the oxygen concentration increases from $21\%$ to $50\%$, but barely further increases when increasing from $50\%$ to $100\%$.
In the past few years, the number of theoretical models for single iron particles has increased. \cite{Mi2022} investigated the ignition behavior of iron particles via solid-phase oxidation kinetics described by a parabolic rate. \cite{Philyppe2022} extended this model and investigated the ignition behavior of fine iron particles in the Knudsen transition regime. They stated that the transition effect on the ignition characteristics becomes important if the particle diameter is below $30$~\textmu m. \cite{Senyurt2022} studied the ignition of metal particles other than iron in the transition regime, and stated that transition effects could always be neglected for very large (i.e., $>200$ \textmu m) particles. While these models only focused on the ignition, \cite{Soo2018} developed a generic model for the full combustion behavior for non-volatile solid-fuel particles, wherein the particle oxidation rate depends on reaction kinetics at the surface of the particle and the external diffusion of oxygen to the particle. \cite{Hazenberg2020} further extended this model by taking into account the growth of the particle during oxidation. In \cite{ThijsPCI_2022}, a boundary layer resolved model was developed so that mass and heat transfer are accurately modeled, including Stefan flow. Temperature-dependent properties were used for the gas- and condensed-phase species and evaporation was implemented to investigate the formation of nano-sized iron-oxides products. It was shown that, although the particle temperature remains below the boiling point of iron(-oxide), a non-negligible amount of iron mass is lost due to the evaporation of the particle. 
Furthermore, it was shown that when only the conversion of $\mathrm{Fe}$ to $\mathrm{FeO}$ is considered up to the maximum temperature, and when internal transport is neglected, a good agreement was obtained with the experimental data of \citep{Ning2021} for the combustion time and maximum temperature as a function of oxygen concentration. The further oxidation after the particle peak temperature was, however, not modeled. In a later work of \citep{Thijs2022}, the point-particle model of \cite{Hazenberg2020} was extended, with aid of a boundary-layer-resolved model, to include temperature-dependent properties, slip velocity, Stefan flow, and evaporation.
In most of the previously discussed models for iron-particle combustion, the continuum assumption is used to describe the transport processes. It is known that, when the size of the particle becomes too small, modeling the heat and mass transfer using the continuum approach becomes invalid. The particle radius, $r_\mathrm{p}$, compared to the mean free path length of the gas molecules, $\lambda_\mathrm{MFP}$, is described by the Knudsen number \citep{Liu2006}
\begin{equation}
\mathrm{Kn} = \lambda_\mathrm{MFP}/r_\mathrm{p}.
\end{equation}
Typically, when $\mathrm{Kn}$ is larger than 0.01 \citep{Liu2006}, the continuum approach is invalid. An elaborate review of modeling heat transfer in the transition regime for nano-particles in the context of laser-induced incandescence is presented by \cite{Liu2006}. In the previous study of \cite{ThijsPCI_2022}, only relatively large particles of $40$ and $50$ \textmu m were considered, ensuring that the transition-regime heat and mass transfer has a negligible effect. While \cite{Philyppe2022} and \cite{Senyurt2022} studied the transition effect on the ignition behavior of metal particles, such a study was not performed for the complete combustion process of single iron particles.
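As a quick numerical illustration (not part of any model code from the works cited above), the standard kinetic-theory estimate $\lambda_\mathrm{MFP} = k_\mathrm{B} T / (\sqrt{2}\,\pi d^2 p)$, with an assumed effective molecular diameter $d \approx 3.7 \times 10^{-10}$ m for air, reproduces the regime boundary quoted in this section:

```python
import math

def mean_free_path(T, p, d=3.7e-10):
    """Kinetic-theory mean free path [m] for a gas with effective
    molecular diameter d [m] at temperature T [K] and pressure p [Pa]."""
    k_B = 1.380649e-23  # Boltzmann constant [J/K]
    return k_B * T / (math.sqrt(2.0) * math.pi * d**2 * p)

def knudsen(r_p, T, p=101325.0):
    """Knudsen number Kn = lambda_MFP / r_p for a particle of radius r_p [m]."""
    return mean_free_path(T, p) / r_p

# Air at ambient conditions: lambda_MFP ~ 67 nm, so a 10-um-diameter
# particle (r_p = 5 um) already sits above the Kn = 0.01 threshold.
print(knudsen(5e-6, 300.0))  # ~0.013
```

At flame temperatures the mean free path grows roughly in proportion to $T$ at fixed pressure, so transition effects set in at correspondingly larger particle sizes.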
The heat and mass transfer in the free-molecular regime is dependent on the thermal and mass accommodation coefficients. The average energy transfer when gas molecules scatter from the surface is described by the thermal accommodation coefficient (TAC). The mass accommodation coefficient (MAC) or absorption coefficient is defined as the fraction of incoming oxygen molecules that are absorbed (accommodated) rather than reflected when they collide with the iron surface. Molecular dynamics simulation can be used to investigate these accommodation coefficients. Multiple authors \citep{Daun2009, Daun2012, Sipkens2014, Sipkens2018} performed molecular beam simulations between metal-gas surfaces to investigate the TAC under different conditions. \cite{Daun2012} showed that the TAC obtained with such a molecular dynamics simulation well agrees with experimental data. Alas, to the authors' knowledge, a systematic analysis of the TAC and MAC for a system of iron(-oxide) surface exposed to air has not yet been reported. Such a study of the MAC is also of importance to derive effective chemical kinetics governing the rate of further oxidation beyond the stoichiometry of $\mathrm{FeO}$, which are missing in literature.
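For reference, a common way to extract the TAC from such molecular-beam data is the energy-based estimator used in the laser-induced-incandescence literature \citep{Liu2006}. The sketch below is illustrative only and is not the scheme used in this work: it assumes translational kinetic energies and takes $2 k_\mathrm{B} T_\mathrm{s}$ per molecule as the mean energy of a fully thermalized outgoing flux.

```python
def thermal_accommodation(E_in, E_out, T_s):
    """Estimate the TAC from sampled incident/scattered molecular
    kinetic energies [J] and the surface temperature T_s [K]:
    alpha_T = (<E_in> - <E_out>) / (<E_in> - 2 k_B T_s),
    where 2 k_B T_s is the mean energy of a fully thermalized
    effusive flux (translational energy only)."""
    k_B = 1.380649e-23  # Boltzmann constant [J/K]
    E_i = sum(E_in) / len(E_in)
    E_o = sum(E_out) / len(E_out)
    return (E_i - E_o) / (E_i - 2.0 * k_B * T_s)

# Fully thermalized scattering -> alpha_T = 1; elastic reflection -> 0.
E_i = [5.0e-20] * 4
E_full = [2.0 * 1.380649e-23 * 1000.0] * 4
print(thermal_accommodation(E_i, E_full, 1000.0))  # 1.0
print(thermal_accommodation(E_i, E_i, 1000.0))     # 0.0
```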
In this work, molecular dynamics simulations are performed to determine the TAC and MAC for an iron(-oxide)-air system. Then, these values are used to examine the effect of the Knudsen transition regime on the combustion behavior of single iron particles. The paper is organized as follows. Section 2 describes the methodology of the boundary-sphere model used to describe the combustion of single iron particles. In Section 3 the procedure for the molecular dynamics simulations is discussed. Section 4 presents the results of the molecular dynamics simulations. In Section 5 the results of the single iron particle model are discussed. Conclusions are provided in Section 6.
\section{Model formulation for single iron particle combustion}
In the current study, a two-layer point-particle model is used as shown in Figure \ref{fig:KnudsenConfig_Senyurt2022}. An iron particle oxidation model is coupled to a boundary sphere method to take into account the Knudsen transition regime. The iron particle oxidation model is based on the previous work \citep{Thijs2022}.
\begin{figure}[h]
\centering
{\includegraphics[width=0.5\columnwidth]{Figures/KnudsenConfig.eps}}
\caption{The configuration for heat and mass transfer analysis considered in the Knudsen model.}%
\label{fig:KnudsenConfig_Senyurt2022}
\end{figure}
In \cite{Thijs2022}, only the oxidation from $\mathrm{Fe}$ into $\mathrm{FeO}$ was taken into account via
\begin{equation} \label{eq:Fe_into_FeO}
\mathrm{R1}: \mathrm{Fe} + \frac{1}{2}\mathrm{O_2} \rightarrow \mathrm{FeO}.
\end{equation}
Here, the oxidation mechanism is extended by considering the further oxidation into $\mathrm{Fe_3O_4}$. Once all the $\mathrm{Fe}$ is converted into $\mathrm{FeO}$, the $\mathrm{FeO}$ continues to oxidize into $\mathrm{Fe_3O_4}$ via
\begin{equation}\label{eq:FeO_into_Fe3O4}
\mathrm{R2}: 3\mathrm{FeO} + \frac{1}{2}\mathrm{O_2} \rightarrow \mathrm{Fe_3O_4}.
\end{equation}
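The stoichiometry of R1 and R2 fixes the cumulative oxygen uptake per unit mass of iron. A small sanity check (molar masses are assumed standard values, not taken from the paper):

```python
M_Fe, M_O2 = 55.845, 31.998  # g/mol, assumed standard molar masses

n_Fe = 1000.0 / M_Fe                  # mol Fe per kg of iron
m_O2_R1 = 0.5 * n_Fe * M_O2           # R1: Fe + 1/2 O2 -> FeO
m_O2_R2 = (0.5 / 3.0) * n_Fe * M_O2   # R2: 3 FeO + 1/2 O2 -> Fe3O4

print(m_O2_R1, m_O2_R2)  # ~286 g and ~95 g of O2 per kg Fe
```

The two stages together reproduce the overall balance $3\,\mathrm{Fe} + 2\,\mathrm{O_2} \rightarrow \mathrm{Fe_3O_4}$, i.e., about $382$ g of $\mathrm{O_2}$ per kg of iron.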
Based on the thermodynamics of the Fe-O system \citep{Wriedt1991}, it is known that iron-oxide can be formed in multiple crystalline structures, i.e., $\mathrm{FeO}$ (wustite), $\mathrm{Fe_3O_4}$ (magnetite), and $\mathrm{Fe_2O_3}$ (hematite). In the liquid phase of an Fe-O mixture, there are no clear crystalline structures. This region on the Fe-O phase diagram consists mainly of two different liquids denoted by L1 (iron) and L2 (iron-oxide).
Therefore, considering these phases as ``liquid $\mathrm{FeO}$'' and ``liquid $\mathrm{Fe_3O_4}$'' is not in line with the phase diagram.
The thermodynamic data for the L1 and L2 mixture are, however, unknown. In this work, the thermodynamic data for the L1 and L2 mixtures are therefore approximated to be a linear combination of the liquid-phase data of $\mathrm{Fe}$, $\mathrm{FeO}$, and $\mathrm{Fe_3O_4}$. The total enthalpy of a liquid iron particle, $H_\mathrm{liq,tot}$, is determined via
\begin{equation} \label{eq:total_enth_liq}
H_\mathrm{liq,tot} = m_{\mathrm{Fe}} h_{\mathrm{Fe}} + m_{\mathrm{FeO}} h_{\mathrm{FeO}} + m_{\mathrm{Fe_3O_4}} h_{\mathrm{Fe_3O_4}},
\end{equation}
with $m_{i}$ the mass of species $i$ in the particle and $h_{i}$ the mass-specific enthalpy of species $i$ (calculated with the NASA polynomials). This method was also used in previous studies \citep{Thijs2022, ThijsPCI_2022}.
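The mass-specific enthalpies $h_i$ follow from 7-coefficient NASA polynomials; a minimal Python sketch of this evaluation and of the linear combination above is given below. The coefficients used here are placeholders for illustration, not fitted $\mathrm{Fe}$/$\mathrm{FeO}$/$\mathrm{Fe_3O_4}$ data.

```python
R_u = 8.314462618  # universal gas constant [J/(mol K)]

def nasa_h(T, a, M):
    """Mass-specific enthalpy [J/kg] from NASA-7 coefficients a = (a1..a7);
    only a1-a6 enter the enthalpy (a7 is the entropy constant)."""
    h_molar = R_u * T * (a[0] + a[1] * T / 2 + a[2] * T**2 / 3
                         + a[3] * T**3 / 4 + a[4] * T**4 / 5 + a[5] / T)
    return h_molar / M

def H_liq_tot(masses, coeffs, molar_masses, T):
    """Total enthalpy H = sum_i m_i h_i(T), cf. the equation above."""
    return sum(m * nasa_h(T, a, M)
               for m, a, M in zip(masses, coeffs, molar_masses))

# placeholder coefficients: a1 = 3.5 reduces the polynomial to h = 3.5 R_u T / M
a_demo = (3.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
print(H_liq_tot([1e-9], [a_demo], [0.0558], 2000.0))  # enthalpy in J
```

With $a_1 = 3.5$ and all other coefficients zero, the polynomial collapses to $h = 3.5\,R_\mathrm{u}T/M$, which makes the sketch easy to verify against tabulated coefficients.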
The phase diagram of the Fe-O system is dependent on the molar ratio of $\mathrm{O}/\left(\mathrm{O} + \mathrm{Fe}\right)$, which defines the oxidation stage of the (liquid) iron particle. Here, this ratio is denoted by $Z_\mathrm{O}$, which is the elemental mole fraction of oxygen in the particle, and is defined as
\begin{equation}
Z_\mathrm{O} = \frac{m_{\mathrm{O,p}}/M_{\mathrm{O}}}{m_{\mathrm{Fe,p}}/M_{\mathrm{Fe}} + m_{\mathrm{O,p}}/M_{\mathrm{O}}},
\end{equation}
where $m_{\mathrm{O,p}}$ is the mass of oxygen, $m_{\mathrm{Fe,p}}$ the total mass of iron in the particle, and $M_{\mathrm{O}}$ and $M_{\mathrm{Fe}}$ are the molar mass of oxygen and iron, respectively. In this work, $Z_\mathrm{O}$ denotes the oxidation stage of the iron particle, where $Z_\mathrm{O} = 0.5$ represents ``liquid FeO'' and $Z_\mathrm{O} = 0.57$ ``liquid $\mathrm{Fe_3O_4}$''.
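As a quick sanity check of this definition, stoichiometric $\mathrm{FeO}$ (equal moles of Fe and O) gives $Z_\mathrm{O} = 0.5$ and stoichiometric $\mathrm{Fe_3O_4}$ gives $Z_\mathrm{O} = 4/7 \approx 0.57$, matching the two reference points quoted above:

```python
M_O, M_Fe = 15.999, 55.845  # molar masses [g/mol]

def Z_O(m_O, m_Fe):
    """Elemental mole fraction of oxygen in the particle."""
    n_O, n_Fe = m_O / M_O, m_Fe / M_Fe
    return n_O / (n_Fe + n_O)

print(Z_O(1 * M_O, 1 * M_Fe))  # stoichiometric FeO   -> 0.5
print(Z_O(4 * M_O, 3 * M_Fe))  # stoichiometric Fe3O4 -> ~0.571
```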
To calculate the total enthalpy of the particle, the rates of change of $m_{\mathrm{Fe}}$, $m_{\mathrm{FeO}}$, and $m_{\mathrm{Fe_3O_4}}$ need to be tracked. Since the vapor pressure calculated with the thermodynamic data of liquid $\mathrm{Fe_3O_4}$ is orders of magnitude lower than those of liquid $\mathrm{Fe}$ and $\mathrm{FeO}$ \citep{Ning2021}, evaporation of $\mathrm{Fe_3O_4}$ is negligible compared to the evaporation of $\mathrm{Fe}$ and $\mathrm{FeO}$. The rates of change of $m_{\mathrm{Fe}}$, $m_{\mathrm{FeO}}$, and $m_{\mathrm{Fe_3O_4}}$ are related to the rate of oxidation and the rate of evaporation via
\begin{equation}
\frac{\mathrm{d}m_{\mathrm{Fe}}}{\mathrm{d}t} = -\frac{1}{s_\mathrm{Fe,1}} \dot{m}_\mathrm{O_2,1} - \frac{\mathrm{d}m_{\mathrm{evap,\mathrm{Fe}}}}{\mathrm{d}t},
\end{equation}
\begin{equation}
\frac{\mathrm{d}m_{\mathrm{FeO}}}{\mathrm{d}t} = -\frac{1}{s_\mathrm{FeO,1}} \dot{m}_\mathrm{O_2,1} -\frac{1}{s_\mathrm{FeO,2}} \dot{m}_\mathrm{O_2,2} - \frac{\mathrm{d}m_{\mathrm{evap,\mathrm{FeO}}}}{\mathrm{d}t},
\end{equation}
\begin{equation}
\frac{\mathrm{d}m_{\mathrm{Fe_3O_4}}}{\mathrm{d}t} = -\frac{1}{s_\mathrm{Fe_3O_4,2}} \dot{m}_\mathrm{O_2,2},
\end{equation}
with $\dot{m}_{\mathrm{O_2},j}$ the mass transfer rate of oxygen to the particle via either reaction R1 (denoted by $j = 1$) or R2 (denoted by $j = 2$) and $s_{i,j}$ the stoichiometric mass ratio of species $i$ for reaction $j$. The total mass transfer rate of oxygen is given by $\dot{m}_\mathrm{O_2} = \dot{m}_\mathrm{O_2,1} + \dot{m}_\mathrm{O_2,2}$. Note that R1 and R2 are sequential reactions in this work.
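For illustration, the magnitudes of the stoichiometric mass ratios follow directly from the molar masses in R1 and R2. The sketch below writes the production/consumption signs out explicitly (FeO is produced by R1 and consumed by R2) and omits the evaporation terms, so the sign convention may differ from the compact notation above; mass conservation is then easy to check, since the particle gains exactly the absorbed oxygen mass.

```python
M_O2, M_Fe, M_FeO, M_Fe3O4 = 31.998, 55.845, 71.844, 231.531  # g/mol

# mass of O2 transferred per unit mass of species i in reaction j (magnitudes)
s_Fe_1    = 0.5 * M_O2 / M_Fe           # R1: Fe consumed
s_FeO_1   = 0.5 * M_O2 / M_FeO          # R1: FeO produced
s_FeO_2   = 0.5 * M_O2 / (3 * M_FeO)    # R2: FeO consumed
s_Fe3O4_2 = 0.5 * M_O2 / M_Fe3O4        # R2: Fe3O4 produced

def species_rates(mdot_O2_1, mdot_O2_2):
    """Oxidation contribution to dm_i/dt (evaporation omitted)."""
    dm_Fe    = -mdot_O2_1 / s_Fe_1
    dm_FeO   = +mdot_O2_1 / s_FeO_1 - mdot_O2_2 / s_FeO_2
    dm_Fe3O4 = +mdot_O2_2 / s_Fe3O4_2
    return dm_Fe, dm_FeO, dm_Fe3O4

# with 1.0 kg/s of O2 absorbed via R1 and 0.5 kg/s via R2, the particle
# mass grows by exactly the absorbed 1.5 kg/s of oxygen
print(sum(species_rates(1.0, 0.5)))
```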
To calculate $Z_\mathrm{O}$, the masses of $\mathrm{O}$ and $\mathrm{Fe}$ in the particle must be known. The rate of change of $m_{\mathrm{O,p}}$ is determined by the rate of oxidation and the rate of evaporation of $\mathrm{FeO}$, while the total iron mass in the particle, $m_{\mathrm{Fe,p}}$, is only affected by evaporation:
\begin{equation}
\frac{\mathrm{d}m_{\mathrm{O,p}}}{\mathrm{d}t} = \dot{m}_\mathrm{O_2} - \frac{M_\mathrm{O}}{M_\mathrm{FeO}}\frac{\mathrm{d}m_{\mathrm{evap,\mathrm{FeO}}}}{\mathrm{d}t},
\end{equation}
\begin{equation}
\frac{\mathrm{d}m_{\mathrm{Fe,p}}}{\mathrm{d}t} = - \frac{\mathrm{d}m_{\mathrm{evap,\mathrm{Fe}}}}{\mathrm{d}t} - \frac{M_\mathrm{Fe}}{M_\mathrm{FeO}}\frac{\mathrm{d}m_{\mathrm{evap,\mathrm{FeO}}}}{\mathrm{d}t},
\end{equation}
with $M_\mathrm{FeO}$ the molar mass of $\mathrm{FeO}$. Temperature-dependent density functions are used to relate the mass of the particle to the volume and diameter, see \cite{Gool2022} for the specific polynomials.
The rate of change of the particle enthalpy is described by
\begin{equation}
\frac{\mathrm{d}H_{\mathrm{p}}}{\mathrm{d}t} = q + q_\mathrm{rad} + \dot{m}_\mathrm{O_2} h_\mathrm{O_2} - \sum_i h_{i,v} \frac{\mathrm{d}m_{\mathrm{evap}, i}}{\mathrm{d}t},
\end{equation}
with $q$ the heat transfer rate, $q_\mathrm{rad}$ the radiative heat transfer rate, $h_\mathrm{O_2}$ the mass-specific enthalpy of the consumed oxygen and $h_{i,v}$ the mass-specific enthalpy of the evaporated species.
To model the mass transfer rate $\dot{m}_\mathrm{O_2}$ and heat transfer rate $q$, a boundary sphere method is used. Figure \ref{fig:KnudsenConfig_Senyurt2022} illustrates the configuration used for the boundary sphere method. The iron particle is surrounded by a spherical Knudsen layer of thickness $\delta$ equal to the mean free path $\lambda_\mathrm{MFP}$ of the gas molecules \citep{Liu2006}
\begin{equation}
\lambda_\mathrm{MFP} = \frac{k_\delta}{p} \frac{\gamma_\delta - 1}{9 \gamma_\delta - 5}\sqrt{\frac{8 \pi m_\mathrm{O_2} T_\delta}{k_b}},
\end{equation}
with $k_\delta$ and $\gamma_\delta$ being the thermal conductivity and specific heat ratio derived at the Knudsen layer, $p$ the ambient pressure, $m_\mathrm{O_2}$ the mass of an oxygen molecule, and $k_b$ the Boltzmann constant.
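As an order-of-magnitude sketch of this expression, the snippet below uses assumed air-like values for $k_\delta$ and $\gamma_\delta$ at ambient conditions (illustrative numbers, not values from this work):

```python
import math

k_B  = 1.380649e-23   # Boltzmann constant [J/K]
m_O2 = 5.3135e-26     # mass of an O2 molecule [kg]

def mean_free_path(k_delta, gamma_delta, p, T_delta):
    """Mean free path per the expression above [m]."""
    return (k_delta / p) * (gamma_delta - 1.0) / (9.0 * gamma_delta - 5.0) \
           * math.sqrt(8.0 * math.pi * m_O2 * T_delta / k_B)

# assumed ambient, air-like conditions
lam = mean_free_path(k_delta=0.026, gamma_delta=1.4, p=101325.0, T_delta=300.0)
print(f"{lam * 1e9:.0f} nm")  # tens of nanometres at ambient conditions
```

The resulting mean free path of several tens of nanometres explains why micrometre-sized particles sit in the Knudsen transition regime.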
At the Knudsen layer, the gas has a temperature $T_\delta$ and mole fraction $X_\delta$. In the boundary sphere method, it is assumed that the heat and mass transfer across the Knudsen layer is quasi-steady \citep{Liu2006}. Therefore, the mass balance at $r = \delta$ yields $\dot{m}_\mathrm{O_2,FM} = \dot{m}_\mathrm{O_2,c}$, with $\dot{m}_\mathrm{O_2,FM}$ the mass transfer rate in the free-molecular regime and $\dot{m}_\mathrm{O_2,c}$ the mass transfer rate in the continuum regime. Similarly, the heat balance can be written as $q_\mathrm{FM} = q_\mathrm{c}$.
The mass transfer rate and heat transfer rate in the continuum regime can be described as \citep{Thijs2022,Senyurt2022}
\begin{equation}
\dot{m}_\mathrm{O_2,c} = 2 \pi \mathrm{Sh} \left(\delta + r_p\right) \rho_\mathrm{O_2,f} D_\mathrm{f} \left(X_{\mathrm{O_2},\delta} - X_\mathrm{O_2,g}\right),
\end{equation}
\begin{equation}
q_\mathrm{c} = 2 \pi \mathrm{Nu} \left(\delta + r_p\right) k_\mathrm{f} \left(T_\delta - T_\mathrm{g}\right),
\end{equation}
with $\rho_\mathrm{O_2,f}$, $D_\mathrm{f}$ and $k_\mathrm{f}$ the density of oxygen, the mixture-averaged diffusion coefficient and the thermal conductivity, evaluated at the 1/2-film temperature and composition \citep{Thijs2022}. $T_\mathrm{g}$ is the gas temperature, $X_\mathrm{O_2,g}$ the mole fraction of oxygen in the gas phase, and $\mathrm{Nu}$ and $\mathrm{Sh}$ the Nusselt and Sherwood numbers, respectively. Here, $\rho_\mathrm{O_2}$ is defined as
\begin{equation}
\rho_\mathrm{O_2} = \frac{M_\mathrm{O_2} p}{R_\mathrm{u} T},
\end{equation}
with $M_\mathrm{O_2}$ the molar mass of oxygen and $R_\mathrm{u}$ the universal gas constant. A constant slip velocity is assumed, and the Stefan flow correction for the Nusselt and Sherwood numbers is incorporated. For more details of the continuum model, the reader is referred to \cite{Thijs2022}.
The mass transfer rate in the free-molecular regime can be described as \citep{Senyurt2022}
\begin{equation}
\dot{m}_\mathrm{O_2,FM} = \alpha_\mathrm{m} \pi r_p^2 v_\mathrm{\delta} \rho_\mathrm{O_2,\delta} X_{\mathrm{O_2},\delta},
\end{equation}
with $\alpha_\mathrm{m}$ the mass accommodation coefficient and $v_{\delta}$ the velocity of the gas molecules calculated as
\begin{equation}
v_\delta = \sqrt{\frac{8 k_b T_\delta}{\pi m_\mathrm{O_2}}}.
\end{equation}
Similarly, the heat transfer rate in the free-molecular regime reads \citep{Liu2006}
\begin{equation}
q_\mathrm{FM} = \alpha_\mathrm{T} \pi r_\mathrm{p}^2 p \sqrt{\frac{k_b T_\delta}{8 \pi m_\mathrm{O_2}}} \frac{\gamma^* + 1}{\gamma^* - 1}\left(\frac{T_\mathrm{p}}{T_\delta} - 1 \right),
\end{equation}
with $\alpha_\mathrm{T}$ the thermal accommodation coefficient and $\gamma^*$ the averaged specific heat ratio \citep{Liu2006}. It is assumed that the slip velocity and Stefan flow do not affect the free-molecular transport, and the effect of the transition regime on the evaporation rate is neglected.
The boundary sphere method is implicit---a coupled system of nonlinear equations must be solved to find the mass and heat transfer rates. The mole fraction of oxygen $X_\mathrm{O_2,g}$ and the temperature $T_\mathrm{g}$ in the gas phase are known. Therefore, $X_{\mathrm{O_2},\delta}$ and $T_\delta$ remain unknown and must be found via an iterative method. In this work, the lsqnonlin method of MATLAB was used to solve the coupled system of nonlinear equations. The Cantera toolbox was used to calculate the transport properties in the gas phase.
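The structure of this coupled solve can be sketched in a few lines of dependency-free Python. The sketch below uses assumed constant transport properties and illustrative particle/gas conditions (not the values used in this work), replaces the least-squares solver by bisection on the heat balance, and chooses signs so that oxygen flows toward the particle and heat flows from the hot particle to the gas; with constant properties the mass balance then has a closed-form solution.

```python
import math

k_B, m_O2 = 1.380649e-23, 5.3135e-26   # J/K, kg
R_u, M_O2 = 8.314, 0.032               # J/(mol K), kg/mol

# illustrative (assumed) conditions and constant transport properties
r_p, T_p = 5e-6, 2500.0                # particle radius [m], temperature [K]
T_g, X_g, p = 1500.0, 0.21, 101325.0   # ambient gas
alpha_m, alpha_T, gamma_s = 0.8, 0.2, 1.4
Sh = Nu = 2.0
k_f, D_f = 0.08, 3e-4                  # film conductivity, diffusivity
delta = 7e-8                           # Knudsen-layer thickness ~ lambda_MFP

def rho_O2(T):                         # ideal-gas oxygen density
    return M_O2 * p / (R_u * T)

def v_mean(T):                         # mean molecular speed
    return math.sqrt(8.0 * k_B * T / (math.pi * m_O2))

def q_c(T_d):                          # continuum conduction across the layer
    return 2.0 * math.pi * Nu * (delta + r_p) * k_f * (T_d - T_g)

def q_FM(T_d):                         # free-molecular heat transfer
    return (alpha_T * math.pi * r_p**2 * p
            * math.sqrt(k_B * T_d / (8.0 * math.pi * m_O2))
            * (gamma_s + 1.0) / (gamma_s - 1.0) * (T_p / T_d - 1.0))

# heat balance q_c(T_d) = q_FM(T_d): root bracketed in (T_g, T_p), bisection
lo, hi = T_g + 1e-3, T_p - 1e-3
for _ in range(100):
    T_d = 0.5 * (lo + hi)
    if q_c(T_d) > q_FM(T_d):
        hi = T_d
    else:
        lo = T_d

# mass balance C_c (X_g - X_d) = C_FM X_d  =>  X_d = C_c X_g / (C_c + C_FM)
C_c = 2.0 * math.pi * Sh * (delta + r_p) * rho_O2(0.5 * (T_d + T_g)) * D_f
C_FM = alpha_m * math.pi * r_p**2 * v_mean(T_d) * rho_O2(T_d)
X_d = C_c * X_g / (C_c + C_FM)

print(f"T_delta = {T_d:.0f} K, X_O2,delta = {X_d:.3f}")
```

The solution satisfies $T_\mathrm{g} < T_\delta < T_\mathrm{p}$ and $0 < X_{\mathrm{O_2},\delta} < X_\mathrm{O_2,g}$, as expected for a hot particle consuming oxygen.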
\section{Model formulation for molecular dynamics simulations}\label{sec:method}
Molecular beam simulations, in which a large number of independent scattering events between a single gas molecule and a surface are simulated, are performed \citep{Sipkens2018} to determine the TAC and MAC. For the interaction of $\mathrm{N_2}$ with an iron surface, no chemical absorption is expected. Furthermore, to the authors' knowledge, there is no reactive force field available in the literature for an Fe-O-N system. Therefore, only a TAC value is determined for the nitrogen molecule in combination with an iron surface, and the $\mathrm{Fe}$-$\mathrm{N_2}$ interactions are modeled using non-reactive potentials. In contrast, reactive molecular dynamics simulations are used for the $\mathrm{Fe_xO_y}$-$\mathrm{O_2}$ interactions to compute both the TAC and MAC. The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) \citep{LAMMPS} is used to perform the molecular dynamics simulations.
\subsection{Thermal and mass accommodation coefficients}
As previously described, the free-molecular heat transfer rate is dependent on the TAC. The TAC describes the average energy transfer when gas molecules scatter from the surface and is defined as
\begin{equation}
\alpha_\mathrm{T} = \frac{\left<E_0 - E_i\right>}{3k_\mathrm{B}\left(T_\mathrm{s} - T_\mathrm{g}\right)},
\end{equation}
with $\left<\cdot\right>$ denoting an ensemble average, $E_0$ the total energy of the scattered molecule, and $E_i$ the energy of the incident molecule. The denominator represents the maximum energy that could be transferred from the surface to the gas molecule, with $T_\mathrm{s}$ the surface temperature and $T_\mathrm{g}$ the gas temperature.
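The ensemble average above translates directly into code. The synthetic check below uses assumed energies rather than MD output: under full accommodation, scattered molecules leave with the surface energy $3 k_\mathrm{B} T_\mathrm{s}$ while incident molecules carry $3 k_\mathrm{B} T_\mathrm{g}$, which should yield $\alpha_\mathrm{T} = 1$.

```python
import numpy as np

k_B = 1.380649e-23  # J/K

def thermal_accommodation(E_out, E_in, T_s, T_g):
    """TAC = <E_o - E_i> / (3 k_B (T_s - T_g)), energies in J per molecule."""
    return np.mean(E_out - E_in) / (3.0 * k_B * (T_s - T_g))

# synthetic sanity check for the full-accommodation limit
T_s, T_g = 2000.0, 300.0
E_in  = np.full(3000, 3.0 * k_B * T_g)
E_out = np.full(3000, 3.0 * k_B * T_s)
print(thermal_accommodation(E_out, E_in, T_s, T_g))  # ~1 (full accommodation)
```

In an actual molecular-beam analysis, `E_out` and `E_in` would be the per-event energies extracted from the scattering trajectories.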
The free-molecular mass transfer rate is dependent on the MAC. The MAC, or absorption coefficient, is the fraction of incoming oxygen molecules that, upon collision with the iron surface, are absorbed (accommodated) rather than reflected. It is defined as
\begin{equation}
\alpha_\mathrm{m} = \frac{n_\mathrm{abs,g}}{n_\mathrm{tot,g}},
\end{equation}
with $n_\mathrm{abs,g}$ the number of absorbed gas molecules and $n_\mathrm{tot,g}$ the total number of gas molecules colliding with the surface.
When iron is burned in air, both oxygen and nitrogen may contribute to the total TAC. The MAC determines the contribution of oxygen to the total TAC. In this work, the total TAC is calculated as
\begin{equation} \label{eq:TACTotal}
\alpha_\mathrm{T,tot} = \left[1 - X_\mathrm{O_2}\left(1 - \alpha_\mathrm{m}\right)\right] \alpha_\mathrm{T,N_2} + X_\mathrm{O_2}\left(1 - \alpha_\mathrm{m}\right) \alpha_\mathrm{T,O_2}.
\end{equation}
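Equation~\eqref{eq:TACTotal} weights the two single-gas TACs by the fraction of wall collisions that end as scattered molecules; absorbed oxygen (fraction $X_\mathrm{O_2}\alpha_\mathrm{m}$ of collisions) does not scatter, so its weight is reassigned to the nitrogen term. A minimal sketch with illustrative coefficient values (not the MD results):

```python
def total_TAC(X_O2, alpha_m, tac_N2, tac_O2):
    """Total TAC per Eq. (TACTotal) above."""
    reflected_O2 = X_O2 * (1.0 - alpha_m)  # fraction of scattered-O2 collisions
    return (1.0 - reflected_O2) * tac_N2 + reflected_O2 * tac_O2

# limit check: if all O2 is absorbed (alpha_m = 1), only N2 scattering remains
print(total_TAC(X_O2=0.21, alpha_m=1.0, tac_N2=0.17, tac_O2=0.2))  # 0.17
print(total_TAC(X_O2=0.21, alpha_m=0.9, tac_N2=0.17, tac_O2=0.2))
```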
\subsection{$\mathrm{Fe}$ - $\mathrm{N_2}$ interaction}
To determine the thermal accommodation coefficients, the procedure of \cite{Sipkens2018} is followed. Figure \ref{fig:InitialConfig} shows the initial configuration used to determine the thermal accommodation coefficient of the $\mathrm{Fe}$-$\mathrm{N_2}$ system. A molecular system is defined with a surface of 686 iron atoms initially arranged in a body-centered cubic (BCC) lattice, with a lattice constant of $2.856 \: \mathrm{\AA}$, and a nitrogen molecule, the latter modeled as a rigid rotor. The gas molecule is positioned around $10\:\mathrm{\AA}$ above the surface, beyond the range of the potential well. To obtain a specific surface temperature, the iron surface must be heated to increase the kinetic energy of the system. \cite{Yan2017} showed that the phase-change temperature obtained with MD simulations depends on the heating process used. Therefore, if the surface is expected to be in a solid phase, the surfaces are heated for 30 ps using the canonical (NVT) ensemble. To keep a constant temperature in the NVT ensemble simulations, the Nose-Hoover thermostat is applied to the translational degrees of freedom of the atoms with a temperature damping period of 100 fs. The warmed surfaces are then run first in the NVT ensemble and then in the micro-canonical (NVE) ensemble for 5~ps, after which their state is saved to a file. If the surface is expected to be in a liquid phase, the surfaces are warmed using the Nose-Hoover thermostat for 30 ps to $2800\:\mathrm{K}$, equilibrated for 5~ps at $2800\:\mathrm{K}$ and then gradually cooled down to the target temperature within 10 ps. Six different surfaces, each with different initial velocities, are generated to obtain a statistically meaningful set of data. Then, incident gas molecules are introduced with their velocities sampled from the Maxwell-Boltzmann (MB) distribution.
For each warmed surface, 500 scattering events are sampled, resulting in 3000 data points per configuration. The equations of motion are advanced using the Verlet algorithm with a timestep of $1$~fs.
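The incident-velocity sampling can be sketched as follows. A common molecular-beam choice, assumed here, is to flux-weight the component toward the surface (giving a Rayleigh distribution) while the tangential components follow the plain Maxwell-Boltzmann distribution; the function name and seed are illustrative.

```python
import numpy as np

k_B  = 1.380649e-23  # J/K
m_N2 = 4.6518e-26    # mass of an N2 molecule [kg]

def sample_incident_velocities(n, T_g, rng):
    """Sample n incident gas-molecule velocities [m/s] at gas temperature T_g."""
    sigma = np.sqrt(k_B * T_g / m_N2)
    vx = rng.normal(0.0, sigma, n)            # tangential: Maxwell-Boltzmann
    vy = rng.normal(0.0, sigma, n)
    u = 1.0 - rng.random(n)                   # u in (0, 1], avoids log(0)
    vz = -sigma * np.sqrt(-2.0 * np.log(u))   # flux-weighted, toward surface
    return np.stack([vx, vy, vz], axis=1)

v = sample_incident_velocities(3000, T_g=300.0, rng=np.random.default_rng(0))
print(v.shape)  # (3000, 3)
```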
\begin{figure}[t]
\centering
{\includegraphics[width=0.35\columnwidth]{Figures/N2_4.eps}}
\caption{Initial configuration showing a nitrogen molecule (blue particles) placed above an iron surface (brown particles), with $Z_\mathrm{O} = 0$ and $T_\mathrm{p} = 2000 \: \mathrm{K}$. The sizes of the particles equal the atomic radii.}%
\label{fig:InitialConfig}
\end{figure}
The interactions between the iron atoms are modeled using the embedded atom method (EAM) potential. The EAM force fields are based on the principle of embedding atoms within an electron cloud. The EAM potential reads
\begin{equation}
U_i = \frac{1}{2} \sum_{j\neq i} U_2\left(r_{ij}\right) - F\left(\sum_{j\neq i} \rho_j\left(r_{ij}\right) \right),
\end{equation}
where $U_2$ is a pairwise potential between atoms $i$ and $j$, $r_{ij}$ is the distance between atoms $i$ and $j$, $F$ is the embedding function, and $\rho_j$ is the contribution to the electron charge density from atom $j$. \cite{Sipkens2018} investigated the effect of different surface potentials on the characteristics of an iron lattice, and found that the EAM potential of \cite{Zhou2004} is the most robust choice, since this potential predicts the phase transitions and the experimentally measured densities well. Therefore, this work uses the EAM potential of \cite{Zhou2004} for the iron surface.
The most commonly used gas-surface pairwise potentials are the Lennard-Jones (LJ) 6-12 potential and the Morse potential. The LJ potential reads
\begin{equation}
U_{ij} = 4D_0\left[\left( \frac{\sigma}{r_{ij}} \right)^{12} - \left( \frac{\sigma}{r_{ij}} \right)^6 \right],
\end{equation}
where $D_0$ is the well depth, and $\sigma$ is the distance at which $U_{ij}$ is zero. The Morse potential reads
\begin{equation}
U_{ij} = D_0 \left(1 - e^{-\alpha\left(r_{ij}-r_0\right)} \right)^2,
\end{equation}
where $r_{0}$ is the equilibrium bond distance and $\alpha$ controls the width of the potential.
\cite{Daun2012} demonstrated the significance of using a well-defined gas-surface potential. They compared the TAC calculated with a Lennard-Jones 6-12 gas-surface potential, with parameters derived from the Lorentz-Berthelot (LB) combination rules, to the TAC calculated with a Morse potential with parameters derived from an \textit{ab initio} technique. For the LJ potential, the well depth $D_0$ and finite distance $\sigma$ were obtained by combining these values of the two individual atoms via the LB rules. For the \textit{ab initio} calculations, a molecule was moved towards a surface and the potential was calculated from first principles. \cite{Daun2012} showed that the LJ potential with LB-derived parameters overestimates the TAC, whereas the \textit{ab initio} Morse potential yields good agreement with experimentally obtained values. The \textit{ab initio} calculations for the $\mathrm{Fe}$ - $\mathrm{N_2}$ interaction were performed by \cite{Sipkens2014}. They found $D_0 = 2.162 \: \mathrm{meV}$, $\alpha = 0.932 \: \mathrm{\AA^{-1}}$ and $r_0 = 4.819\: \mathrm{\AA}$.
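The two functional forms can be compared numerically. In the sketch below, the Morse parameters are the \textit{ab initio} $\mathrm{Fe}$-$\mathrm{N_2}$ values quoted above, while the LJ parameters are an illustrative choice with the same well depth and minimum position (not LB-derived values). Note that, as written, the Morse potential is zero at $r_0$ and approaches $D_0$ far from the surface, whereas the LJ form reaches $-D_0$ at its minimum; the two differ by a constant offset.

```python
import math

# ab initio Fe-N2 Morse parameters (Sipkens et al., quoted above)
D0, alpha, r0 = 2.162e-3, 0.932, 4.819   # eV, 1/Angstrom, Angstrom

def morse(r):
    """Zero at r = r0; approaches the well depth D0 far from the surface."""
    return D0 * (1.0 - math.exp(-alpha * (r - r0)))**2

def lj(r, eps, sigma):
    """LJ 6-12; minimum of depth -eps at r = 2**(1/6) * sigma."""
    return 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

sigma_lj = r0 / 2**(1 / 6)               # place the LJ minimum at r0
print(morse(r0))                         # 0.0 at the equilibrium distance
print(lj(2**(1 / 6) * sigma_lj, D0, sigma_lj))  # approx -D0 at the minimum
```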
\subsection{$\mathrm{Fe_xO_y}$ - $\mathrm{O_2}$ interaction} \label{sec:FexOy_O2_interaction}
Different iron-oxide surfaces are generated to investigate the effect of the oxidation stage of the iron-oxide surface on the TAC and MAC. The surface oxidation stage is quantified by the elemental mole fraction of oxygen in the surface, calculated as
\begin{equation}
Z_\mathrm{O} = \frac{n_{\mathrm{O,s}}}{n_\mathrm{tot,s}},
\end{equation}
with $n_\mathrm{O,s}$ being the number of oxygen atoms and $n_\mathrm{tot,s}$ the total number of atoms in the surface.
Before applying the Nose-Hoover thermostats, an $\mathrm{FeO}$ lattice is deposited in a specific ratio on top of a BCC lattice of $\mathrm{Fe}$ atoms. An initial distance of $4\:\mathrm{\AA}$ between the two layers is introduced to ensure natural mixing. The height of the $\mathrm{FeO}$ lattice relative to the $\mathrm{Fe}$ lattice is increased to increase the $Z_\mathrm{O}$ of the specific surface. Two different heating strategies are used to investigate the effect of the distribution of oxygen atoms over the surface:
\begin{enumerate}
\item The heating strategy previously discussed for the $\mathrm{Fe}$ surface is used, with separate procedures for solid and liquid surfaces.
\item An annealing process is employed for all the surfaces to enhance the mixing of the O and Fe atoms. The surfaces are warmed using the Nose-Hoover thermostat for 30 ps to $2800\:\mathrm{K}$, equilibrated for 30 ps at $2800\:\mathrm{K}$ and then gradually cooled down to the target temperature within 30 ps.
\end{enumerate}
Figure \ref{fig:Surface_FeO_O2_Tp2000_ZO1} shows the preparation of an iron-oxide surface with $Z_\mathrm{O} = 0.11$ and $T_\mathrm{p} = 2000 \: \mathrm{K}$ via heating strategy 1. After the surface realization, an $\mathrm{O_2}$ molecule is located around $10\:\mathrm{\AA}$ above the surface, beyond the range of the potential well. Figure \ref{fig:InitialConfigFeyOxO2} shows the initial configuration used for the interaction between $\mathrm{Fe_xO_y}$ and $\mathrm{O_2}$.
\begin{figure*}
\centering
\subfloat[$0$ ps]
{\includegraphics[width=0.5\columnwidth , height=10cm]{Figures/Surface_FeO_O2_Tp2000_ZO1_1.eps}}
\subfloat[$2$ ps]
{\includegraphics[width=0.5\columnwidth , height=10cm]{Figures/Surface_FeO_O2_Tp2000_ZO1_2.eps}}
\subfloat[$30$ ps]
{\includegraphics[width=0.5\columnwidth , height=10cm]{Figures/Surface_FeO_O2_Tp2000_ZO1_3.eps}}
\subfloat[$60$ ps]
{\includegraphics[width=0.5\columnwidth , height=10cm]{Figures/Surface_FeO_O2_Tp2000_ZO1_4.eps}}
\caption{Realization of an iron-oxide surface with $Z_\mathrm{O} = 0.11$ and $T_\mathrm{p} = 2000 \: \mathrm{K}$ for different moments in time during the preparation phase with heating strategy 1. The figures show a lateral surface, with the top surface used for the scattering gas molecule. Red particles represent oxygen atoms, while the brown particles are iron atoms. The sizes of the particles equal the atomic radii.}%
\label{fig:Surface_FeO_O2_Tp2000_ZO1}
\end{figure*}
\begin{figure}[h]
\centering
{\includegraphics[width=0.35\columnwidth]{Figures/O2_4.eps}}
\caption{Initial configuration showing an oxygen molecule (red particles) placed above a partially oxidized iron surface (brown particles) with $Z_\mathrm{O} = 0.11$ and $T_\mathrm{p} = 2000 \: \mathrm{K}$. The sizes of the particles equal the atomic radii.}%
\label{fig:InitialConfigFeyOxO2}
\end{figure}
Reactive molecular dynamics is used to simulate the interaction between $\mathrm{Fe_xO_y}$-$\mathrm{O_2}$. Reactive MD uses reactive force fields to accurately describe bond formation and breaking. ReaxFF \citep{Duin2001} is a bond order potential that describes the total energy of the system as
\begin{align}
E_\mathrm{system} = E_\mathrm{bond} + E_\mathrm{over} + E_\mathrm{under} + E_\mathrm{val} + E_\mathrm{tor} \notag \\ + E_\mathrm{vdWaals} + E_\mathrm{Coulomb} + E_\mathrm{additional},
\end{align}
with $E_\mathrm{bond}$ the bond formation/breaking energy, $E_\mathrm{over}$ and $E_\mathrm{under}$ the over- and undercoordination energy penalties, $E_\mathrm{val}$ and $E_\mathrm{tor}$ the valence and torsion angle energies, $E_\mathrm{vdWaals}$ and $E_\mathrm{Coulomb}$ the non-bonded van der Waals and Coulomb long-range interactions, and $E_\mathrm{additional}$ additional correction terms. The atomic charges are computed at every timestep using the charge equilibration method. A time step of $0.1 \: \mathrm{fs}$ is used, as recommended for reactive MD simulations at high temperatures \citep{kritikos2022}; this timestep is sufficiently small to capture all reaction events.
\section{Results of the molecular dynamics simulations}\label{sec:results}
Results from the MD simulations are discussed below. First, the effect of the available reactive force fields is investigated by examining the predicted thermal expansion. Then, the mass and thermal accommodation coefficients are determined. Note that the findings may be limited by the available (reactive) force fields, and could therefore change if different force fields are used.
\subsection{Effect of reactive force field on iron oxide surface}
Two different reactive force fields for Fe-O interactions, available in the literature, are compared: the ReaxFF $\mathrm{FeOCHCl}-2010$ \citep{ReaxFF2010} and the ReaxFF $\mathrm{FeCHO-2016}$ \citep{ReaxFF2016}. The reactive force field proposed by \cite{ReaxFF2010} was developed for iron-oxyhydroxide systems. \cite{ReaxFF2016} merged the ReaxFF force field for an Fe-C-H system \citep{ReaxFFZOU} with the latest ReaxFF carbon parameters of \cite{reaxFFSrinivasan}. They developed their reactive force field mainly to study the interaction of hydrogen with pure and defective ferrite-cementite interfaces.
A similar approach to that of \cite{Sipkens2018} is used to evaluate the reactive force fields; the effect of the reactive force fields on the surface characteristics is investigated by examining the thermal expansion of iron oxides. Figure \ref{fig:Density_MD_Exp_reaxFF} shows the MD-derived densities of $\mathrm{FeO}$ and $\mathrm{Fe_3O_4}$ as a function of the surface temperature. In this section, the definitions ``liquid $\mathrm{FeO}$'' and ``liquid $\mathrm{Fe_3O_4}$'' are used again. The experimentally obtained densities of $\mathrm{FeO}$ \citep{Lee1974,Saxena1994, Xin2019} and $\mathrm{Fe_3O_4}$ \citep{Hahn1984} are added as a reference. The density of liquid $\mathrm{FeO}$ is extrapolated based on the $\mathrm{d}V/\mathrm{d}T$ expression stated in Table 4 of \cite{Xin2019}. To the authors' knowledge, the density above the melting point of $\mathrm{Fe_3O_4}$ is unavailable in the literature. Therefore, the comparison for $\mathrm{Fe_3O_4}$ is limited to the solid phase.
There is a clear difference between the MD-derived density curves of $\mathrm{FeO}$ obtained with ReaxFF $\mathrm{FeOCHCl}-2010$ and ReaxFF $\mathrm{FeCHO-2016}$. The ReaxFF $\mathrm{FeCHO-2016}$ surface consistently underestimates the densities and lacks a clear phase transition; the $\mathrm{FeO}$ surface resembles a liquid surface over the complete temperature range. The ReaxFF $\mathrm{FeOCHCl}-2010$ model matches the experimentally obtained values better, particularly in the solid phase, and shows a clear phase transition from solid to liquid.
The difference between ReaxFF $\mathrm{FeOCHCl}-2010$ and ReaxFF $\mathrm{FeCHO}-2016$ for the density curves of $\mathrm{Fe_3O_4}$ is small. In the solid-phase regime, the density obtained with ReaxFF $\mathrm{FeOCHCl}-2010$ provides a better match. It can therefore be concluded that ReaxFF $\mathrm{FeOCHCl}-2010$ is better suited to reproducing $\mathrm{FeO}$ and $\mathrm{Fe_3O_4}$ surfaces, and this reactive force field is used in this work.
\begin{figure*}
\centering
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/Density_FeO_MD_Exp_reaxFFb.eps}}
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/Density_Fe3O4_MD_Exp_reaxFFb_solid.eps}}
\caption{MD-derived density of $\mathrm{FeO}$ and $\mathrm{Fe_3O_4}$ as a function of temperature for ReaxFF $\mathrm{FeOCHCl}-2010$ \citep{ReaxFF2010} and ReaxFF $\mathrm{FeCHO-2016}$ \citep{ReaxFF2016}. Experimentally obtained values (solid line) are shown as a reference.}%
\label{fig:Density_MD_Exp_reaxFF}
\end{figure*}
\subsection{Mass accommodation coefficients}\label{sec:macresults}
The mass accommodation coefficient for iron with oxygen is investigated for different initial oxidation stages, ranging from $Z_\mathrm{O} = 0$ to $Z_\mathrm{O} = 0.57$, and three different surface temperatures, namely $T_\mathrm{p} = 1500$, $2000$ and $2500~\mathrm{K}$, of which the latter two are in the liquid-phase regime.
As discussed in Section \ref{sec:FexOy_O2_interaction}, two different heating strategies are employed to investigate the effect of a non-uniform distribution of oxygen atoms in the surface. Figure \ref{fig:pdfo_O} shows the probability density function of the oxygen atoms' $z$-position as a function of surface height for the two heating strategies. For $T_\mathrm{p} = 2000 \: \mathrm{K}$, the oxygen atoms are characterized by a relatively uniform distribution, independent of the heating strategy. This is, however, not the case for $T_\mathrm{p} = 1500 \: \mathrm{K}$. With heating strategy~1 (HS1), the oxygen atoms are not uniformly distributed---a higher concentration is observed near the top surface. At the heating rate of HS1, the oxygen atoms do not diffuse sufficiently to reach a uniform distribution, so more oxygen atoms remain near the surface. When the annealing process of HS2 is used, a spatially more uniform distribution of oxygen atoms is obtained.
\begin{figure*}
\centering
\subfloat[$Z_\mathrm{O} = 0.22$]
{\includegraphics[width=1\columnwidth, clip]{Figures/pdf_ZO2.eps}}
\subfloat[$Z_\mathrm{O} = 0.45$]
{\includegraphics[width=1\columnwidth, clip]{Figures/pdf_ZO4.eps}}
\caption{Probability density function of the oxygen atoms' $z$-position for two different heating strategies, $T_\mathrm{p}$ and $Z_\mathrm{O}$.}%
\label{fig:pdfo_O}
\end{figure*}
Figure \ref{fig:MAC_Tg300K}a shows the MAC of oxygen as a function of the initial oxidation stage for the three surface temperatures, obtained with HS2, while Figure \ref{fig:MAC_Tg300K}b shows a detailed view around $Z_\mathrm{O} = 0.57$. To show the effect of a higher oxygen concentration at the top surface, the MAC obtained with HS1 for $T_\mathrm{p} = 1500 \: \mathrm{K}$ is added as the red line. With a homogeneously distributed surface obtained via HS2, the MAC barely depends on the surface temperature and decreases almost linearly as a function of $Z_\mathrm{O}$. Two different slopes are observed: a steeper one for $Z_\mathrm{O} < 0.5$ and a shallower one for $Z_\mathrm{O} > 0.5$, indicating that once the particle reaches stoichiometric $\mathrm{FeO}$, it becomes more difficult to absorb oxygen. Figure \ref{fig:MAC_Tg300K}b shows a detailed view of the MAC in the region $0.5 < Z_\mathrm{O} < 0.57$. In this region, a more prominent effect of the surface temperature is observed---the MAC increases with increasing surface temperature.
The effect of the non-uniform distribution of oxygen atoms on the MAC is shown in Figure \ref{fig:MAC_Tg300K}a with the red line. The MAC decreases significantly if there are more oxygen atoms near the surface. This accumulation at the top surface prevents new oxygen atoms from being accommodated, indicating that the mass accommodation coefficient is affected by the local oxygen concentration near the surface. Therefore, if the iron particle does not have a spatially uniform composition, internal transport may limit the oxidation rate of iron particles.
\begin{figure*}
\centering
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/MAC_all_hs2hs1.eps}}
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/MAC_higherox_hs2.eps}}
\caption{The mass accommodation coefficient of oxygen as a function of initial oxidation stage at three different surface temperatures. A detailed view around $Z_\mathrm{O} = 0.5$ is shown in (b).}%
\label{fig:MAC_Tg300K}
\end{figure*}
\subsection{Thermal accommodation coefficients}
\subsubsection{$\mathrm{Fe}$ - $\mathrm{N_2}$ interactions}
The TAC for the $\mathrm{Fe}$ - $\mathrm{N_2}$ interaction is investigated for smooth and rough surfaces. An initially rough surface was created by projecting iron atoms onto the initially smooth lattice. Figure \ref{fig:TAC_02_shake_Sipkens_surface} shows the thermal accommodation coefficients of the $\mathrm{Fe}$-$\mathrm{N_2}$ interaction as a function of iron surface temperature for the two surfaces. For the smooth surface, the TAC increases as a function of surface temperature from $\alpha_\mathrm{T} = 0.08$ up to $\alpha_\mathrm{T} = 0.19$. This trend is consistent with the values experimentally obtained by \cite{Sipkens2014}, who reported values of $0.10$ and $0.17$. The increasing TAC can be attributed to the change from a relatively smooth surface in the solid phase to a rough surface in the liquid phase as the temperature increases \citep{Sipkens2018}. With a rough surface, the TAC values in the solid phase increase to $\alpha_\mathrm{T} = 0.17$, almost independent of surface temperature. From the point where the iron surface becomes a liquid ($1800 \: \mathrm{K}$), the difference between initially smooth and rough surfaces disappears.
\begin{figure}[h]
\centering
{\includegraphics[width=1\columnwidth, clip]{Figures/TAC_02_shake_Sipkens_surface.eps}}
\caption{Total thermal accommodation coefficients of the $\mathrm{Fe}$-$\mathrm{N_2}$ interaction as a function of surface temperature for smooth and rough iron surfaces.}%
\label{fig:TAC_02_shake_Sipkens_surface}
\end{figure}
In addition to the surface temperature of iron, the temperature of the surrounding gas can change significantly during iron combustion. \cite{Daun2009} and \cite{Mane2018} investigated the effect of gas temperature on the TAC of nitrogen with soot and of hydrogen with aluminum, respectively. They showed that the TAC is almost independent of gas temperature and primarily influenced by surface roughness and gas molecular weight. Based on the MD results, a recommended value for the TAC of the interaction between $\mathrm{Fe}$ and $\mathrm{N_2}$ is $0.17$.
\subsubsection{$\mathrm{Fe_xO_y}$ - $\mathrm{O_2}$ interactions}
The oxygen molecules that do not stick to the surface during $\mathrm{Fe_xO_y}$-$\mathrm{O_2}$ interactions still contribute to the TAC. Figure \ref{fig:TAC_O2_Tg300K} shows the total thermal accommodation coefficients of the $\mathrm{Fe_xO_y}$-$\mathrm{O_2}$ interaction as a function of $Z_\mathrm{O}$ at three different surface temperatures. When the oxidation degree of the surface is low, the TAC remains close to unity, but it decreases sharply to $0.2$ once $Z_\mathrm{O} > 0.5$. Note that, if $Z_\mathrm{O}$ is small, and thus the MAC is large, the number of scattered oxygen molecules over which the TAC is calculated is low. Since the total thermal accommodation coefficient is calculated with Equation~\eqref{eq:TACTotal}, however, this uncertainty in the low-$Z_\mathrm{O}$ regime can be neglected: the large MAC implies a small contribution of $\alpha_\mathrm{T,O_2}$.
\begin{figure}[h]
\centering
{\includegraphics[width=1\columnwidth, clip]{Figures/TAC_all_hs2.eps}}
\caption{Total thermal accommodation coefficients of the $\mathrm{Fe_xO_y}$-$\mathrm{O_2}$ interaction as a function of $Z_\mathrm{O}$ at three different surface temperatures.}%
\label{fig:TAC_O2_Tg300K}
\end{figure}
\subsection{Discussion of the MD results}
A limitation of the above molecular beam approach to determine the TAC and MAC is the assumption of a clean surface. Within experiments, surfaces can be (partly) covered by a layer of gas molecules. \cite{Song1987} proposed a semi-empirical correlation for the TAC of engineering surfaces, which takes this adsorbed layer into account. They state that the correlation is general and can be used for any combination of gases with a solid surface over a wide temperature range. The correlation of \cite{Song1987} is also used in the work of \cite{Philyppe2022} to investigate the effect of the Knudsen transition regime on the ignition of single iron particles. Figure \ref{fig:TAC_02_SongYovanovich} shows the TAC of the $\mathrm{Fe}$-$\mathrm{N_2}$ interaction obtained with the MD simulations compared to the semi-empirical correlation of \cite{Song1987}. While \cite{Song1987} do not claim that the correlation is valid in the liquid-phase regime, their curve is extrapolated into the liquid-phase regime for comparison with the MD results. It can be seen that the TAC of \cite{Song1987} is about three times larger than the TAC obtained with the molecular beam approach. However, \cite{Song1987} state that a common strategy to create an adsorption-free surface within experiments is to heat the surface to temperatures above $1000 \: \mathrm{K}$ to desorb impurities from the surface. This work is mainly focused on the accommodation coefficients in the liquid-phase regime. Since the current MD simulations better represent the condition of a liquid-phase surface rather than a solid-phase surface with intrinsic roughness, the resulting accommodation coefficients in the temperature range above the melting point are likely more accurate.
\begin{figure}[h]
\centering
{\includegraphics[width=1\columnwidth, clip]{Figures/TAC_02_SongYovanovich.eps}}
\caption{Thermal accommodation coefficients of the $\mathrm{Fe}$-$\mathrm{N_2}$ interaction obtained with the MD simulations compared to the semi-empirical correlation of \cite{Song1987}.}%
\label{fig:TAC_02_SongYovanovich}
\end{figure}
\cite{Nejad20202} performed molecular dynamics simulations to investigate the influence of gas-wall and gas-gas interactions on different accommodation coefficients. They used a parallel-wall approach to determine the accommodation coefficients, which means that an intermediate Knudsen number is modeled. They showed that a molecular beam approach results in lower TAC values than the parallel-wall approach. In a molecular beam approach, the effect of gas-gas interactions is neglected, which matters in the Knudsen transition regime. They showed that the TAC could be around 1.5 times larger if this effect were included. However, in the two-layer model used in this work, a free-molecular regime is assumed within the Knudsen layer, implying that incoming gas molecules do not interact with each other. Therefore, the accommodation coefficients obtained via a molecular beam approach are consistent with the two-layer model.
\subsection{Implementation of the MD results in the single iron particle combustion model}
The TAC and MAC obtained from the MD simulations are used in the single iron particle combustion model. In the iron particle model, it is assumed that the particle is a homogeneous mixture, such that there is no internal gradient in oxygen concentration. Therefore, the TAC and MAC values for $\mathrm{Fe_xO_y}$-$\mathrm{O_2}$ interactions from HS2 are used, which are a function of $Z_\mathrm{O}$ and nearly independent of the surface temperature. In Table \ref{tab:TACMACFeO2}, the TAC and MAC for the $\mathrm{Fe_xO_y}$-$\mathrm{O_2}$ interactions are listed, averaged over the three surface temperatures.
Within the iron particle combustion model, these values are used for linear interpolation. However, if no further oxidation is modeled after reaching the stoichiometry of $\mathrm{FeO}$, a linear fit is used for the MAC such that $\alpha_\mathrm{m} = 0$ if $Z_\mathrm{O} = 0.5$. Equation \eqref{eq:TACTotal} describes the total TAC with $\alpha_\mathrm{T,N_2} = 0.17$ and $\alpha_\mathrm{T,O_2} = f(Z_\mathrm{O})$.
\begin{table*}
\centering
\caption{TAC and MAC for the $\mathrm{Fe_xO_y}$ - $\mathrm{O_2}$ interactions, averaged over the three surface temperatures.}
\resizebox{\textwidth}{!}{\begin{tabular}{ccccccccccccc}
\hline
$Z_\mathrm{O}$ & $0$ & $0.11$ & $0.22$ & $0.34$ & $0.45$ & $0.5$ & $0.52$ & $0.53$ & $0.54$ & $0.55$ & $0.56$ & $0.57$
\\
\hline
$\alpha_\mathrm{m}$ & $0.902$ & $0.730$ & $0.609$ & $0.285$ & $0.063$ & $0.018$ & $0.011$ & $0.008$ & $0.005$ & $0.003$ & $0.001$ & $0.001$\\
$\alpha_\mathrm{T}$ & $0.796$ & $0.832$ & $0.934$ & $0.821$ & $0.831$ & $0.629$ & $0.344$ & $0.482$ & $0.405$ & $0.251$ & $0.190$ & $0.221$\\
\hline
\end{tabular}}
\label{tab:TACMACFeO2}
\end{table*}
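The linear interpolation of these tabulated values can be sketched as follows (a minimal illustration; the function names and the use of NumPy are assumptions, not part of the model code):

```python
import numpy as np

# Tabulated oxidation stages and accommodation coefficients,
# copied from the table above (averaged over the three surface temperatures).
Z_O = np.array([0.0, 0.11, 0.22, 0.34, 0.45, 0.5,
                0.52, 0.53, 0.54, 0.55, 0.56, 0.57])
MAC = np.array([0.902, 0.730, 0.609, 0.285, 0.063, 0.018,
                0.011, 0.008, 0.005, 0.003, 0.001, 0.001])
TAC_O2 = np.array([0.796, 0.832, 0.934, 0.821, 0.831, 0.629,
                   0.344, 0.482, 0.405, 0.251, 0.190, 0.221])

def alpha_m(z):
    """Linearly interpolated mass accommodation coefficient alpha_m(Z_O)."""
    return np.interp(z, Z_O, MAC)

def alpha_T_O2(z):
    """Linearly interpolated thermal accommodation coefficient alpha_T,O2(Z_O)."""
    return np.interp(z, Z_O, TAC_O2)
```

At the tabulated nodes the interpolants reproduce the table exactly; between nodes they vary linearly, as assumed in the combustion model.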
\section{Results of single iron particle combustion simulations}
In this section, the results of the MD-informed Knudsen model for a single iron particle burning in an $\mathrm{O_2}$-$\mathrm{N_2}$ atmosphere are presented. An initial particle temperature just above the ignition temperature \citep{Mi2022}, $T_{\mathrm{p,0}} = 1100\: \mathrm{K}$, is considered.
\subsection{Combustion behavior}
First, the effect of the new model is investigated by considering only the first oxidation stage, up to $Z_\mathrm{O} = 0.5$. The initial conditions are chosen such that the laser-ignited experiments performed by \cite{Ning2020} are mimicked. A cold gas of $T_{\mathrm{g,0}} = 300 \: \mathrm{K}$ at $1 \: \mathrm{atm}$ is considered. The temperature profiles are shifted such that the particle temperature equals $T_{\mathrm{p,0}} = 1500\: \mathrm{K}$ at $t = 0 \: \mathrm{ms}$ \citep{Ning2021}.
Figure \ref{fig:Tvst_MD_Cont_dp50_XO21}a shows the comparison of the temperature profiles between the continuum model and the MD-informed Knudsen model for a $50$ \textmu m particle burning in $X_\mathrm{O_2} = 0.21$. The temperature \textit{vs}. time curve for the MD-informed Knudsen model changes significantly compared to the previously used continuum model. This difference can be explained by examining the $Z_\mathrm{O}$ value and heat transfer rates plotted in Figure \ref{fig:YQKnudsenCont}. In the continuum model, the maximum temperature is located at the position where $Z_\mathrm{O} = 0.5$ is reached. At that time, the available iron is completely oxidized, and therefore the heat release rate immediately drops to zero. With the MD-informed Knudsen model this behavior changes, and $Z_\mathrm{O} = 0.5$ is reached after the peak temperature. Since the rate of oxidation slows down, as the MAC decreases with an increasing oxidation stage, the rate of heat loss exceeds the rate of heat release upon reaching the maximum temperature of the particle.
With the continuum model, it was discussed that the particle burns in a regime limited by the external diffusion of oxygen up to the maximum temperature. One can derive a normalized Damköhler number $\mathrm{Da^*}$. If $\mathrm{Da^*}$ is close to zero, the particle burns in a kinetic- (or chemical-) absorption-limited regime, and if it is close to unity, the particle burns in an external-diffusion-limited regime. Figure \ref{fig:Tvst_MD_Cont_dp50_XO21}b shows the normalized Damköhler number for the same configuration. The normalized Damköhler number of the continuum model is determined according to the definition of \cite{Hazenberg2020}. For the MD-informed Knudsen model, the normalized Damköhler number is defined as
\begin{equation}
\mathrm{Da^*} = 1 - \frac{\alpha_\mathrm{m} X_\mathrm{O_2,\delta}}{X_\mathrm{O_2,g}},
\end{equation}
where $\alpha_\mathrm{m} X_\mathrm{O_2,\delta}$ denotes the molar fraction of oxygen at the particle surface $X_\mathrm{O_2,p}$. As discussed before, with the continuum model the particle burns purely in an external-diffusion-limited regime up to the maximum temperature. This behavior changes in the MD-informed Knudsen model: Due to the decreasing MAC value with an increasing oxidation stage, the particle burns in an intermediate regime.
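A short numerical sketch of this definition (the sample values are illustrative and not taken from the simulations):

```python
def damkohler_star(alpha_m, X_O2_delta, X_O2_g):
    """Normalized Damkoehler number, Da* = 1 - alpha_m * X_O2,delta / X_O2,g.

    Da* close to zero corresponds to a kinetic- (absorption-) limited regime;
    Da* close to unity corresponds to an external-diffusion-limited regime.
    """
    return 1.0 - alpha_m * X_O2_delta / X_O2_g

# Effective surface oxygen fraction equals the ambient one: Da* -> 0.
print(damkohler_star(1.0, 0.21, 0.21))   # -> 0.0

# Oxygen strongly depleted at the particle surface: Da* close to 1.
print(damkohler_star(0.9, 0.01, 0.21))
```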
\begin{figure*}
\centering
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/ContKnudsenNingFeO_dp50_XO21Tp.eps}}
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/ContKnudsenNingFeO_dp50_XO21Da.eps}}
\caption{(a) Temperature profile and (b) normalized Damköhler number for an iron particle of $50\:$\textmu m burning at 21\% oxygen concentration. The continuum model is compared to the MD-informed Knudsen model.} \label{fig:Tvst_MD_Cont_dp50_XO21}
\end{figure*}
\begin{figure*}
\centering
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/ContNing_dp50_XO21Q_ZO.eps}}
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/KnudsenNingFeO_dp50_XO21Q_ZO.eps}}\\
\caption{Heat release and heat loss rates (left axis) and $Z_\mathrm{O}$ (right axis) for an iron particle of $50\:$\textmu m burning at 21\% oxygen concentration. The results are shown for (a) the continuum model and (b) the MD-informed Knudsen model.} \label{fig:YQKnudsenCont}
\end{figure*}
\subsection{Comparison with experimental results}
In \cite{Thijs2022,ThijsPCI_2022}, only the first stage of combustion, i.e. the oxidation up to $Z_\mathrm{O} = 0.5$, was investigated. Due to this assumption, an inert cooling stage was observed after the peak temperature. However, \cite{Choisez2022} investigated combusted iron powders and discovered that they primarily consisted of a magnetite and hematite mixture, indicating a $Z_\mathrm{O}$ greater than $0.5$. With the MD-informed Knudsen model, the oxidation beyond $Z_\mathrm{O} = 0.5$ can be included in the combustion of a single particle.
The results of the MD-informed Knudsen model are compared with two sets of experimental data. First, the model is compared to the laser-ignited experiments of \cite{Ning2021} wherein the particles burn in air at $300 \: \mathrm{K}$. Then, the new temperature curve is compared to the drop-tube experiments of \cite{Panahi2022} wherein the particles burn in varying oxygen concentrations at $1350 \: \mathrm{K}$. The experimental data are averaged over multiple independent single-particle measurements to obtain a smooth curve.
\subsubsection{Comparison with Ning et al.}
Figure \ref{fig:Knudsen_furtherOx_Tp_XO26_Exp} shows the temperature profiles for the MD-informed Knudsen model with and without further oxidation beyond $Z_\mathrm{O} = 0.5$ for a $34$ \textmu m and $50$ \textmu m particle burning at $X_\mathrm{O_2} = 0.26$. The dotted line and gray area in Figure \ref{fig:Knudsen_furtherOx_Tp_XO26_Exp} are the mean and standard deviation of the experimentally obtained temperature profiles, respectively.
\begin{figure*}
\centering
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/Knudsen_furtherOx_Tp_XO26_dp34Exp.eps}}
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/Knudsen_furtherOx_Tp_XO26_dp50Exp.eps}}
\caption{Temperature profile for an iron particle of (a) $34\:$\textmu m and (b) $50\:$\textmu m burning at 26\% oxygen concentration, with and without further oxidation beyond $Z_\mathrm{O} = 0.5$. The dotted line and gray area are the mean and standard deviation of measurements obtained with the setup of \cite{Ning2020}, respectively. }\label{fig:Knudsen_furtherOx_Tp_XO26_Exp}
\end{figure*}
The particle temperature for smaller particles is overestimated. Overall, the temperature curve obtained with the MD-informed Knudsen model shows better agreement with the experimentally obtained temperature curve than the continuum-model prediction. The sharp transition at the peak temperature is now a smooth curve. The inclusion of oxidation above $Z_\mathrm{O} = 0.5$ results in a higher temperature in the tail of the curve. Instead of inert cooling after $Z_\mathrm{O} = 0.5$, a reactive cooling regime is observed. The numerically obtained slopes after the peak temperature agree qualitatively better with the experimental measurement during the cooling stage.
\subsubsection{Comparison with Panahi et al.}
Figure \ref{fig:Panahi_et_al_dp49} shows the temperature profiles for the MD-informed Knudsen model with further oxidation beyond $Z_\mathrm{O} = 0.5$ for a $49$ \textmu m particle burning at $X_\mathrm{O_2} = 0.21$, $0.5$ and $0.99$. The dotted line and gray area are the mean and standard deviation of experimentally obtained temperature profiles, respectively, which are measured with the setup described by \cite{Panahi2022}. Note that the effect of the Stefan flow correction on the evaporation rate is not taken into account. The data of \cite{Panahi2022} are time-shifted so that the first experimental data point approximates the numerical temperature.
Although the model overestimates the particle temperature at the two higher oxygen concentrations, the agreement after the maximum temperature is reasonable. The reactive cooling slope which is observed after the maximum particle temperature seems to match the experimentally observed slope. This qualitative agreement implies that the particle keeps on oxidizing after the maximum particle temperature is reached.
A possible explanation for the overestimation of the particle temperature at higher oxygen concentrations could be the assumption of infinitely fast internal transport. As shown in Section \ref{sec:macresults}, the mass accommodation coefficient decreases significantly when the particle does not have a homogeneous composition but a higher concentration of oxygen atoms near the surface. Since for high oxygen concentrations in the gas the external diffusion of oxygen is fast, the diffusion of oxygen in the condensed phase could be rate-limiting. With an increasing oxidation stage $Z_\mathrm{O}$ and an increased oxygen concentration $X_\mathrm{O_2}$ in the gas phase, internal transport could limit the absorption rate of new oxygen molecules, and therefore limit the maximum particle temperature.
\begin{figure*}
\centering
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/Panahi_et_al_dp49_O21.eps}}
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/Panahi_et_al_dp49_O5.eps}}\\
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/Panahi_et_al_dp49_O99.eps}}
\caption{Temperature profile for an iron particle of $49\:$\textmu m burning at (a) 21\%, (b) 50\% and (c) 99 \% oxygen concentration. The dotted line and gray area are the mean and standard deviation of measurements obtained with the setup of \cite{Panahi2022}, respectively. } \label{fig:Panahi_et_al_dp49}
\end{figure*}
\subsection{Effect of $\alpha_\mathrm{T}$ and $\alpha_\mathrm{m}$}
The results of the molecular dynamics simulations are dependent on the accuracy of the inter-atomic potentials and, perhaps, also on the configuration. Related changes in the TAC and MAC have an effect on the temperature profile during combustion. In this section, the TAC and MAC are varied independently in the Knudsen model to investigate the effect on the combustion behavior.
The MD-informed TAC and MAC are varied by 50\%, with a maximum value of $1$. Figure \ref{fig:EffectTACMAC_dp50um_xO26_Tp} shows the effect on the temperature profile and $Z_\mathrm{O}$ for a $50$ \textmu m particle burning in $X_\mathrm{O_2} = 0.21$. Both $\alpha_\mathrm{T}$ and $\alpha_\mathrm{m}$ have a significant effect on the particle temperature. The maximum temperature could vary by $150 \: \mathrm{K}$ due to the variations of $\alpha_\mathrm{T}$ and $\alpha_\mathrm{m}$.
An increasing $\alpha_\mathrm{T}$ results in a lower particle temperature, while the opposite is seen for a decreasing $\alpha_\mathrm{T}$. With an increasing $\alpha_\mathrm{T}$, the exchange of kinetic energy between the gas molecules and the surface is more efficient, resulting in a greater rate of heat loss and therefore a decreasing particle temperature. A variation in $\alpha_\mathrm{T}$ hardly affects the oxidation rate.
A variation in $\alpha_\mathrm{m}$ affects both the temperature profile as well as the oxidation stage $Z_\mathrm{O}$. An increasing $\alpha_\mathrm{m}$ results in a faster oxidation rate for the iron particle, leading to a faster heat release and therefore an increase in particle temperature. Note that the variation is relative, and thus the absolute difference with respect to the original values becomes less when $\alpha_\mathrm{m}$ is close to zero.
\begin{figure*}
\centering
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/EffectTACMAC_dp50um_xO21_Tp.eps}}
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/EffectTACMAC_dp50um_xO21_ZO_Inset.eps}}
\caption{The effect of changes in $\alpha_\mathrm{T}$ and $\alpha_\mathrm{m}$ for an iron particle of $50\:$\textmu m burning at 21\% oxygen concentration. (a) The particle temperature and (b) $Z_\mathrm{O}$. } \label{fig:EffectTACMAC_dp50um_xO26_Tp}
\end{figure*}
\subsection{Effect of transition modeling}
Finally, the effect of the transition regime on the temperature profile of a variety of particles burning in $X_\mathrm{O_2} = 0.21$ is investigated. Figure \ref{fig:BasetTmax_Tmaxco_vsdp_O21} shows the difference in the time to maximum temperature $t_\mathrm{max}$ and the maximum temperature $T_\mathrm{max}$ obtained with either the continuum or the transition model as a function of particle diameter. Due to the large difference in values for $t_\mathrm{max}$, the relative difference with respect to the continuum model is plotted, while for the maximum temperature two separate curves are plotted. The error bars show the effect of varying both $\alpha_\mathrm{T}$ and $\alpha_\mathrm{m}$ by 50\%.
When using the MD-informed Knudsen model, it is clear that the time to maximum temperature increases for smaller particles. Since the free-molecular regime inhibits mass transfer towards the particle, $t_\mathrm{max}$ increases. However, with an increasing particle size, the new $t_\mathrm{max}$ value becomes smaller than with the continuum model. This result is recognized as an effect of the decreasing MAC with an increasing $Z_\mathrm{O}$.
$T_\mathrm{max}$ as a function of particle size also changes with respect to the continuum model. It is hard to distinguish any transition-regime effects in this curve since, due to the decreasing MAC with increasing $Z_\mathrm{O}$, the maximum temperature has already decreased with respect to the continuum model. \cite{Ning2021} observed a decreasing maximum particle temperature with decreasing particle size in the range of $25$-$54$ \textmu m. However, this trend is still not observed in the numerical model, and can therefore not be explained by the effect of transition-regime heat and mass transfer.
\begin{figure*}
\centering
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/Basetimemax_tmaxco_vsdp_O21.eps}}
\subfloat[]
{\includegraphics[width=1\columnwidth, clip]{Figures/BaseTmax_Tmaxco_vsdp_O21.eps}}
\caption{The effect of the MD-informed Knudsen model compared to the continuum model with (a) $t_\mathrm{max}/t_\mathrm{max,co}$ and (b) $T_\mathrm{max}$. Values are obtained with $X_\mathrm{O_2} = 0.21$.} \label{fig:BasetTmax_Tmaxco_vsdp_O21}
\end{figure*}
A critical diameter $d_\mathrm{p,c}$ is defined such that, if $d_\mathrm{p} < d_\mathrm{p,c}$, it is important to include the transition regime. If one considers $t_\mathrm{max}/t_\mathrm{max,co} > 1.10$ as the criterion, the critical particle size is found to be $d_\mathrm{p,c} \approx 10$ \textmu m. This criterion suggests that, if one uses a continuum-based model to describe the combustion of iron particles smaller than $10$~\textmu m, the burn time can be underestimated by more than $10\%$ due to neglecting the effects of transition-regime transport phenomena.
\section{\label{sec:Intro}Introduction}
\newcommand{\AtlasCoordFootnote}{
ATLAS uses a right-handed coordinate system with origin at the nominal interaction point in the centre of the detector and the $z$-axis along the beam pipe.
The $x$-axis points from the IP to the centre of the LHC ring,
and the $y$-axis points upwards.
Cylindrical coordinates $(r,\phi)$ are used in the transverse plane,
$\phi$ being the azimuthal angle around the $z$-axis.
The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta = -\ln \tan(\theta/2)$.
Angular distance is measured in units of $\Delta R \equiv \sqrt{(\Delta\eta)^{2} + (\Delta\phi)^{2}}$.
The transverse energy of a photon or electron is $\et= E/\cosh(\eta)$, where $E$ is its energy.}
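As a minimal numerical sketch of these coordinate definitions (illustrative only, not ATLAS software):

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), with theta the polar angle."""
    return -math.log(math.tan(theta / 2.0))

def delta_R(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt((d_eta)^2 + (d_phi)^2),
    with the azimuthal difference wrapped into (-pi, pi]."""
    d_eta = eta1 - eta2
    d_phi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(d_eta, d_phi)

def transverse_energy(E, eta):
    """E_T = E / cosh(eta) for a photon or electron of energy E."""
    return E / math.cosh(eta)

# A particle emitted at 90 degrees to the beam has eta ~ 0 and E_T = E.
print(pseudorapidity(math.pi / 2.0))
print(transverse_energy(5.0, 0.0))
```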
Light-by-light~(LbyL) scattering, $\gamma\gamma\rightarrow\gamma\gamma$, is a process in the Standard Model~(SM) that proceeds at lowest order in quantum electrodynamics~(QED) via virtual one-loop box diagrams involving charged fermions (leptons and quarks) and $W^{\pm}$ bosons~(Figure~\ref{fig:UPC}). LbyL interactions can occur in relativistic heavy-ion collisions at any impact parameter. However, large impact parameters, i.e.\ larger than twice the radius of the ions, are experimentally preferred, as the strong interaction does not play a role in these ultra-peripheral collision~(UPC) events.
In general, UPC events allow studies of processes involving nuclear photoexcitation, photoproduction of hadrons, and two-photon interactions. Comprehensive reviews of UPC physics can be found in Refs.~\cite{Baltz:2007kq, Klein:2020fmr}.
The electromagnetic (EM) fields produced by the colliding Pb nuclei
can be treated as a beam of quasi-real photons with a small virtuality of $Q^2 < 1/R^2$, where $R$ is the radius of the nuclear charge distribution and so $Q^2 < 10^{-3}~\GeV^2$~\cite{Fermi:1925fq,vonWeizsacker:1934nji,PhysRev.45.729}.
The cross section for the reaction $\textrm{Pb+Pb}\,(\gamma\gamma)\rightarrow \textrm{Pb}^{(\ast)}\textrm{+}\textrm{Pb}^{(\ast)}\,\gamma\gamma$ can then be calculated by convolving the respective photon flux with the elementary cross section for the process $\gamma\gamma\rightarrow \gamma\gamma$, with a possible EM excitation~\cite{ALICE:2012aa}, denoted by~$(^{\ast})$.
Since the photon flux associated with each nucleus scales as $Z^2$, the LbyL cross section is strongly enhanced relative to proton--proton ($pp$) collisions.
In this measurement, the final-state signature of interest is the exclusive production of two photons, where the diphoton final state is measured in the detector surrounding the Pb+Pb interaction region, and the incoming Pb ions survive the EM interaction.
Hence, one expects that two low-energy photons will be detected with no further activity in the central detector. In particular, no reconstructed charged-particle tracks originating from the Pb+Pb interaction point are expected.
\begin{figure}[b!]
\centering
\includegraphics[width=0.45\textwidth]{fig_01a.pdf}
\includegraphics[width=0.45\textwidth]{fig_01b.pdf}
\caption{\label{fig:UPC}Schematic diagrams of (left) SM LbyL scattering and (right) axion-like particle production in \PbPb\ UPC.
A potential electromagnetic excitation of the outgoing Pb ions is denoted by $(^{\ast})$.}
\end{figure}
The LbyL process has been proposed as a sensitive channel to study physics beyond the SM.
Modifications of the $\gamma\gamma\rightarrow\gamma\gamma$ scattering rates can be induced by new exotic charged particles~\cite{Fichet:2014uka}
and by the presence of extra spatial dimensions~\cite{Inan:2019ugz}.
The LbyL cross sections are also sensitive to Born--Infeld extensions of QED~\cite{Ellis:2017edi},
Lorentz-violating operators in electrodynamics~\cite{Kostelecky:2018yfa},
and the presence of space-time non-commutativity in QED~\cite{Horvat:2020ycy}.
Additionally, new neutral particles, such as axion-like particles~(ALP), can also contribute in the form of
narrow diphoton resonances~\cite{Knapen:2016moh}, as shown in Figure~\ref{fig:UPC}.
ALPs are relatively light, gauge-singlet (pseudo-)scalar particles that appear in many theories with a spontaneously broken global
symmetry. Their masses and couplings to SM particles may range over many orders of
magnitude.
The previous ATLAS searches involving ALP decays to photons are based on $pp$ collision data~\cite{EXOT-2013-24,EXOT-2017-08}.
LbyL scattering via an electron loop has been precisely, albeit indirectly, tested in measurements of the anomalous magnetic moment of the electron and muon~\cite{Hanneke:2008tm, Bennett:2006fi}.
The $\gamma\gamma\rightarrow \gamma\gamma$ reaction has been measured in photon scattering in the Coulomb field of a nucleus (Delbrück scattering)~\cite{Wilson:1953zz,PhysRevD.8.3813,Schumacher:1975kv,PhysRevC.58.2844} and in the photon splitting process~\cite{Akhmadaliev:2001ik}.
A related process, in which initial photons fuse to form a pseudoscalar meson which subsequently decays into a pair of photons, has been studied at electron--positron colliders~\cite{Bartel:1985zw, Aihara:1985uc, Williams:1988sg}.
The authors of Ref.~\cite{Enterria:2013yra} proposed to measure LbyL scattering by exploiting the large photon fluxes available in heavy-ion collisions at the LHC.
The first direct evidence of the LbyL process in \PbPb\ UPC at the LHC was established by the ATLAS~\cite{Aaboud:2017bwk} and CMS~\cite{Sirunyan:2018fhl} Collaborations.
The evidence was obtained from \PbPb data recorded in 2015 at a centre-of-mass energy of $\sqn~=~5.02$~TeV with integrated luminosities of 0.48~$\textrm{nb}^{-1}$~(ATLAS) and 0.39~$\textrm{nb}^{-1}$~(CMS).
The CMS Collaboration also set upper limits on the cross section for ALP production, $\gamma\gamma\rightarrow a \rightarrow \gamma\gamma$, over a mass range of 5--90~\GeV.
Exploiting a data sample of \PbPb\ collisions collected in 2018 at the same centre-of-mass energy with an integrated luminosity of 1.73~$\textrm{nb}^{-1}$, the ATLAS Collaboration observed LbyL scattering with a significance of $8.2\sigma$~\cite{Aad:2019ock}. These two ATLAS measurements used tight requirements on the diphoton invariant mass~($>6$~\GeV) and single-photon transverse energy~($>3$~\GeV).
This paper presents a measurement of the cross sections for $\textrm{Pb+Pb}\,(\gamma\gamma)\rightarrow \textrm{Pb}^{(\ast)}\textrm{+}\textrm{Pb}^{(\ast)}\,\gamma\gamma$ production at $\sqn~=~5.02$~\TeV\ using a combination of Pb+Pb collision data recorded in 2015 and 2018 by the ATLAS experiment, corresponding to an integrated luminosity of 2.2~$\textrm{nb}^{-1}$.
This analysis follows the approach proposed in Ref.~\cite{Enterria:2013yra} and the methodology used in the previous measurements~\cite{Aaboud:2017bwk,Aad:2019ock}. However, as a result of improvements in the trigger efficiency and purity of the photon identification, a broader kinematic range in diphoton invariant mass~($>5$~\GeV) and single-photon transverse energy~($>2.5$~\GeV) is covered. This extension results in an increase of about 50\% in expected signal yield in comparison with the previous tighter requirements.
The integrated fiducial cross section and four differential distributions involving kinematic variables of the final-state photons are measured.
Two of the distributions characterise the energy of the process: the invariant mass of the diphoton system, $\Minvgg$, and the average transverse momentum of two photons, $(\pt^{\gamma1}+\pt^{\gamma2})/2$. The remaining ones probe angular correlations of the $\gamma\gamma$ system. These are the rapidity\footnote{Rapidity is defined as $y=\frac{1}{2}\ln{\frac{E+p_z}{E-p_z}}$, where $E$ and $p_z$ are particle's energy and the component of momentum along the beam axis, respectively.} of the diphoton system, $y_{\gamma\gamma}$, and $|\cos(\theta^*)|$, defined as:
\begin{linenomath*}
\begin{equation*}
|\cos(\theta^*)| = \left|\tanh\left(\frac{\Delta y_{\gamma 1,\gamma 2}}{2} \right) \right|,
\end{equation*}
\end{linenomath*}
where $\theta^*$ is the $\gamma\gamma$ scattering angle in the $\gamma\gamma$ centre-of-mass frame, and $\Delta y_{\gamma 1,\gamma 2}$ is the difference between the rapidities of the photons.
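A minimal numerical sketch of this definition (illustrative only):

```python
import math

def abs_cos_theta_star(y_gamma1, y_gamma2):
    """|cos(theta*)| = |tanh((y1 - y2)/2)| for the diphoton system,
    where y1, y2 are the rapidities of the two photons."""
    return abs(math.tanh((y_gamma1 - y_gamma2) / 2.0))

# Photons at equal rapidity correspond to scattering at theta* = 90 degrees.
print(abs_cos_theta_star(0.7, 0.7))  # -> 0.0
```

By construction the observable depends only on the rapidity difference and is symmetric under exchange of the two photons.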
The measured diphoton invariant mass distribution is used to set limits on
ALP production via the process $\gamma\gamma\rightarrow a \rightarrow \gamma\gamma$.
\section{\label{sec:Atlas}ATLAS detector}
The ATLAS detector~\cite{ATLAS_detector_paper} at the LHC covers nearly the entire solid angle around the collision point.
It consists of an inner tracking detector surrounded by a thin superconducting solenoid, EM and hadronic calorimeters,
and a muon spectrometer incorporating three large superconducting toroid magnets.
The inner-detector system (ID) is immersed in a \SI{2}{\tesla} axial magnetic field
and provides charged-particle tracking in the pseudorapidity\footnote{\AtlasCoordFootnote} range $|\eta| < 2.5$.
The high-granularity silicon pixel detector (Pixel) covers the collision region.
Typically, it provides four measurements per track,
with the first hit being in the insertable B-layer~(IBL)~\cite{ATLAS-TDR-19, Abbott:2018ikt}, which was installed at a mean distance of 3.3~cm from the beam pipe before the start of Run~2.
It is followed by the silicon microstrip tracker (SCT), which usually provides four two-dimensional measurement points per track.
These silicon detectors are complemented by the transition radiation tracker,
which enables radially extended track reconstruction up to $|\eta| = 2.0$.
The calorimeter system covers the pseudorapidity range $|\eta| < 4.9$.
Within the region $|\eta|< 3.2$, EM calorimetry is provided by barrel and
endcap lead/liquid-argon (LAr) EM calorimeters (high-granularity for $|\eta| < 2.5$),
with an additional thin LAr presampler covering $|\eta| < 1.8$
to correct for energy loss in material upstream of the calorimeters.
Hadronic calorimetry is provided by the steel/scintillator-tile calorimeter,
segmented into three barrel structures within $|\eta| < 1.7$, and two copper/LAr hadronic endcap calorimeters.
The solid angle coverage is completed with forward copper/LAr and tungsten/LAr calorimeter modules (FCal)
optimised for EM and hadronic measurements respectively.
The muon spectrometer (MS) comprises high-precision tracking chambers measuring the deflection of muons in a magnetic field generated by the superconducting air-core toroids.
The precision chamber system covers the region $|\eta| < 2.7$ with three layers of monitored drift tubes,
complemented by cathode strip chambers in the forward region, where the background is highest.
The ATLAS minimum-bias trigger scintillators (MBTS) consist of scintillator slats positioned between the ID and the endcap calorimeters, with each side having an outer ring of four slats segmented in azimuthal angle, covering $2.07 < |\eta| < 2.76$, and
an inner ring of eight slats, covering $2.76 < |\eta| < 3.86$.
The ATLAS zero-degree calorimeters (ZDC) consist of four longitudinal compartments on each side of the interaction point~(IP), each with one nuclear interaction length of tungsten absorber, with the Cerenkov light read out by 1.5~mm quartz rods.
The detectors are located 140 m from the nominal IP in both directions, covering $|\eta|>8.3$.
The ATLAS LUCID-2 detector~\cite{Avoni:2018iuv} consists of 32 photomultiplier tubes
for luminosity measurements and luminosity monitoring. Its two modules are placed symmetrically at about $\pm17$~m from the nominal IP.
The ATLAS trigger system~\cite{Aaboud:2016leb} consists of a Level-1 trigger implemented using a combination of dedicated electronics and programmable logic, and a software-based high-level trigger (HLT).
\newenvironment{DIFnomarkup}{}{}
\section{\label{sec:DataMC}Data and Monte Carlo simulation samples}
The data used in this measurement are from \PbPb collisions at a nucleon--nucleon centre-of-mass energy of $\sqn~=~5.02$~\TeV, recorded in 2015 and 2018 with the ATLAS detector at the LHC.
The full data set corresponds to an integrated luminosity of 2.2~\invnb.
Only high-quality data with all detectors operating normally are analysed.
Monte Carlo~(MC) simulated events for the LbyL signal process were generated at leading order~(LO) using SuperChic~v3.0~\cite{Harland-Lang:2018iur}.
The calculation takes into account box diagrams with leptons and quarks (such as the diagram in Figure~\ref{fig:UPC}) as well as $W^{\pm}$ bosons, including interference effects. The $W^\pm$ contribution is only important for diphoton masses $m_{\gamma\gamma} > 2 m_{W}$.
Next-to-leading-order QCD and QED corrections are not included; they would increase the \gggg\ cross section by a few percent~\cite{Bern:2001dg, Klusek-Gawenda:2016nuo}.
An alternative LbyL signal sample was generated using calculations from Ref.~\cite{Klusek-Gawenda:2016euz}.
The difference between the nominal and alternative signal prediction is mainly in the implementation of the non-hadronic overlap condition of the Pb ions.
In SuperChic~v3.0 the probability for exclusive $\gamma\gamma$ interactions turns on
smoothly for Pb+Pb impact parameters in the range of 15--20~fm and is unity for larger values, while the alternative prediction fully suppresses these interactions for impact parameters below 14~fm, where the two nuclei overlap during the collision. This difference leads to a fiducial cross section for LbyL scattering that is about 3\% larger in the alternative calculation than in the prediction from SuperChic~v3.0.
The exclusive diphoton final state can also be produced via the strong interaction through a quark loop in the exchange of two gluons in a colour-singlet state.
This central exclusive production~(CEP) background contribution, $gg\rightarrow\gamma\gamma$, was modelled using SuperChic~v3.0.
Background from two-photon production of quark--antiquark pairs was estimated using \textsc{Herwig}++ 2.7.1~\cite{Bahr:2008pv}, in which the Equivalent Photon Approximation~(EPA) formalism in \pp collisions is implemented. The sample was then normalised to account for the differences in equivalent photon fluxes between the \PbPb\ and $pp$ cases.
Exclusive dielectron pairs from the reaction $\textrm{Pb+Pb}\,(\gamma\gamma)\rightarrow \textrm{Pb}^{(\ast)}\textrm{+}\textrm{Pb}^{(\ast)}\,e^+e^-$ are used for various aspects of the analysis, in particular to validate the EM calorimeter energy scale and resolution.
This \yyee\ process was modelled with the STARlight~v2.0 MC generator~\cite{Klein:2016yzr},
in which the cross section is computed by combining the Pb+Pb photon flux with the LO formula for \ggee.
The background contribution from a related process, $\gamma\gamma\rightarrow\tau^+\tau^-$, was modelled using STARlight~v2.0 interfaced with Pythia 8.212~\cite{Sjostrand:2014zea} for the simulation of $\tau$-lepton decays.
Events for the ALP signal were generated using STARlight~v2.0 for ALP masses~($\ma$) ranging between 5 and 100 \GeV. A mass spacing of 1~\GeV\ was used for $5<\ma<30$~\GeV, while for $\ma>30$~\GeV\ a 10~\GeV\ mass spacing was used.
The width of the simulated ALP resonance is well below the detector resolution in all
simulated samples.
All generated events were passed through a detector simulation~\cite{Aad:2010ah} based on GEANT4~\cite{Agostinelli:2002hh} and are reconstructed with the standard ATLAS reconstruction software.
\section{\label{sec:Selection}Event selection}
Candidate diphoton events were recorded using a dedicated trigger for events with moderate activity in the calorimeter but little additional activity in the entire detector. The trigger strategies for the 2015 and 2018 data sets were different. In particular, the latter aimed at improving the trigger efficiency at low photon transverse energy, \et, values.
At Level-1 in 2015, the total \et\ registered in the calorimeter after noise suppression was required to be between 5 and 200~\GeV.
In 2018, a logical OR of two Level-1 conditions was required: (1)~at least one EM cluster with $\et>1\;\gev$ in coincidence with a total \et\ registered in the calorimeter between 4 and 200~\GeV, or (2)~at least two EM clusters with $\et>1\;\gev$ with a total \et\ registered in the calorimeter below 50~\GeV.
At the HLT, events in 2015 were rejected if more than one hit was found in the inner ring of the MBTS (MBTS veto).
In 2018, the total \et\ on each side of the FCal detector was required to be below 3~\GeV.
Additionally, in both data sets a veto condition on activity in the Pixel detector, hereafter referred to as Pixel-veto, had to be satisfied. The number of hits was required to be at most 10 in 2015, and at most 15 in 2018.
Photons are reconstructed from EM clusters in the calorimeter and tracking information provided by the ID, which allows the identification of photon conversions~\cite{Aaboud:2018yqu}.
Selection requirements are applied to remove EM clusters with a large amount of energy from poorly functioning calorimeter cells, and a timing requirement is made to reject out-of-time candidates. An energy calibration specifically optimised for photons~\cite{Aad:2019tso} is applied to the candidates to account for upstream energy loss and both lateral and longitudinal shower leakage. The calibration is derived for nominal $pp$ collisions with dedicated factors applied to account for a negligible contribution from multiple Pb+Pb collisions at the same bunch crossing.
A correction~\cite{Aad:2019tso} is applied to photons in MC samples to account for potential mismodelling of quantities which describe shower shapes of the associated EM showers.
The photon particle identification~(photon PID) in this analysis is based on a selection of the shower-shape variables, optimised for the signal events.
Only photons with $\et>2.5~\gev$ and $|\eta|<2.37$, excluding the calorimeter transition region $1.37<|\eta|<1.52$, are considered. The pseudorapidity requirement ensures that the photon candidates pass through regions of the EM calorimeter where the first layer is segmented into narrow strips, providing good separation between genuine prompt photons and photons coming from the decay of neutral hadrons.
The identification is based on a neural network trained on background photons extracted from data and photons from the signal MC simulation, as already used in the previous ATLAS measurement~\cite{Aad:2019ock}. The PID requirements are optimised for low-\et photons~($\et<20$~\GeV) to maintain a constant photon PID efficiency of 95\% as a function of $\eta$ and $\et$ with respect to reconstructed photon candidates. They also select a purer sample of photons than obtained with the cut-based photon PID utilised in \pp\ collisions~\cite{Aaboud:2018yqu}.
Preselected events are required to have exactly two photons satisfying the above selection criteria, with a diphoton invariant mass greater than 5~\GeV.
In order to suppress the \ggee\ background, a veto on charged-particle tracks (with $\pt>100$~\MeV, $|\eta|<2.5$, at least one hit in the Pixel detector and at least six hits in the Pixel and SCT detectors in total) is imposed.
In order to reduce the background from electrons with poorly reconstructed tracks,
candidate events are required to have no `pixel tracks' in the vicinity of the photon candidate.
Pixel tracks are reconstructed using only the information from the Pixel detector, and are required to have $\pt>50$~\MeV, $|\eta|<2.5$, and at least three hits in the Pixel detector.
In order to suppress fake pixel tracks due to noise in the Pixel detector, only pixel tracks within $\Delta\eta<0.5$ of a photon are considered.
These requirements reduce the fake-photon background from the dielectron final state by a factor of about $10^4$, according to simulation.
They have minor impact on \gggg\ signal events (93\% efficiency for the track veto and 99\% for the pixel-track veto), since the probability of photon conversion in the Pixel detector is relatively small and the converted photons have a low probability of being reconstructed at very low \et\ due to the presence of low-momentum electron tracks.
Due to the absence of tracks in the LbyL signal events, no primary vertex is reconstructed. The photon direction is estimated using the barycentre of the cluster with respect to the origin of the ATLAS coordinate system.
To reduce other sources of fake-photon background (involving mainly calorimeter noise and cosmic-ray muons), the transverse momentum of the diphoton system~(\ptgg) is required to be below 1~\GeV\ for $\Minvgg<12~\gev$ and below 2~\GeV\ for $\Minvgg>12~\gev$.
To reduce real-photon background from CEP $gg\rightarrow\gamma\gamma$ reactions, an additional requirement on the diphoton acoplanarity, $\Aco=(1-|\Delta\phi_{\gamma\gamma}|/{\pi})<0.01$, is used.
The CEP process exhibits a significantly broader acoplanarity distribution than the
$\gamma\gamma\rightarrow\gamma\gamma$ process because gluons recoil against the Pb nucleus, which then dissociates.
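The acoplanarity variable can be illustrated with a minimal sketch; the helper below is hypothetical and not part of the analysis software:

```python
import math

def acoplanarity(phi1, phi2):
    """Diphoton acoplanarity Aco = 1 - |dphi|/pi, with dphi wrapped to [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return 1.0 - abs(dphi) / math.pi

# Perfectly back-to-back photons give Aco = 0 and pass the Aco < 0.01 requirement;
# the gluon-initiated CEP process populates larger Aco values.
print(acoplanarity(0.5, 0.5 + math.pi))  # 0.0
```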
To select \ggee candidates, events are required to pass the same trigger as in the diphoton selection.
Each electron is reconstructed from an EM energy cluster in the calorimeter matched
to a track in the ID~\cite{Aaboud:2019ynx}.
The electrons are required to have a transverse energy $\et>2.5~\GeV$ and pseudorapidity $|\eta|<2.47$ with the calorimeter transition region $1.37<|\eta|<1.52$ excluded.
They are also required to meet loose identification criteria based on shower-shape and track-quality variables~\cite{Aaboud:2019ynx}.
The \yyee\ events are selected by requiring exactly two oppositely charged electrons, no further charged-particle tracks coming from the interaction region (with the selection requirements as described above), and dielectron acoplanarity below 0.01.
\section{\label{sec:DetCalib}Detector calibration}
\subsection{Trigger efficiency\label{sec:trigger}}
The trigger sequence used in the analysis consists of three independent requirements: Level-1, MBTS/FCal veto, and the requirement on low activity in the ID.
The Level-1 trigger efficiency was estimated with \ggee\ events passing one of the independent supporting triggers.
These triggers are designed to select events with single or double dissociation of Pb nuclei and small activity in the ID.
They are based on a coincidence of signals in one or both ZDC sides with a requirement on the total \et\ in the calorimeter to be below 50~\GeV.
Dielectron event candidates are required to have exactly two reconstructed tracks and two geometrically matched EM clusters, each with a minimum \et\ of 1~\GeV{} and $|\eta| <1.47$, excluding the calorimeter transition region $1.37<|\eta|<1.52$.
The electron identification requirements are removed in order to accept more events in this very low \et\ region, where the efficiencies to reconstruct and identify electrons are low.
Furthermore, the dielectron acoplanarity, evaluated using the electron charged-particle tracks, is required to be below 0.01.
The extracted Level-1 trigger efficiency is provided as a function of the sum of \et of the two EM clusters reconstructed offline~($\sum\et^{\textrm{clusters}} = \etclone+\etcltwo$).
For $\sum\et^{\textrm{clusters}}=5~\gev$ this efficiency, shown in Figure~\ref{fig:level1_eff}, reaches $60\%$ for 2018 trigger settings, while it is consistent with $0\%$ for 2015 trigger settings due to higher trigger thresholds.
The Level-1 trigger efficiency grows to about $25\%$~($95\%$) for $\sum\et^{\textrm{clusters}}=7.5~\gev$ for 2015~(2018) data.
The efficiency plateau is reached around $\sum\et^{\textrm{clusters}}=10~\gev$ for the 2015 data-taking period and around $\sum\et^{\textrm{clusters}}=9~\gev$
for the 2018 one. The error bars associated with the data points represent statistical uncertainties.
The efficiency is parameterised using an error function fit that is used to reweight the MC simulation.
The statistical uncertainty is estimated by varying the fit parameters by their uncertainty values. The systematic uncertainty is estimated using modified \ggee\ selection criteria.
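The error-function turn-on used to parameterise the efficiency has the generic form sketched below; the midpoint and width values are purely illustrative, not the fitted parameters:

```python
import math

def eff_erf(sum_et, mu, sigma):
    """Error-function efficiency turn-on: 50% at the midpoint mu,
    approaching the plateau over a width set by sigma (GeV)."""
    return 0.5 * (1.0 + math.erf((sum_et - mu) / (math.sqrt(2.0) * sigma)))

mu, sigma = 7.0, 1.5  # hypothetical midpoint and width
print(round(eff_erf(mu, mu, sigma), 2))  # 0.5
print(eff_erf(12.0, mu, sigma) > 0.99)   # True: on the plateau
```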
\begin{figure}[b!]
\begin{center}
\includegraphics[width=0.65\textwidth]{fig_02.pdf}
\caption{ The Level-1 trigger efficiency extracted from \ggee\ events that pass the supporting triggers as a function of the sum of \et of the two EM clusters. Data are shown as points with error bars representing statistical uncertainties, separately for two data-taking periods: 2015 (open squares) and 2018 (full circles). The efficiency is parameterised using the error function fit, shown as a dashed (2015) or solid (2018) line. Shaded bands denote total (statistical and systematic) uncertainty.}
\label{fig:level1_eff}
\end{center}
\end{figure}
The MBTS and FCal veto efficiencies are estimated using \yyee\ events recorded by supporting triggers.
The MBTS veto efficiency is estimated to be $(98 \pm 2)\%$~\cite{Aaboud:2017bwk} and the FCal veto efficiency is found to be $(99.1 \pm 0.6)\%$. Both efficiencies are independent of kinematics.
Due to the low conversion probability of signal photons in the Pixel detector, the inefficiency of the Pixel-veto requirement at the trigger level is found to be negligible for diphoton event candidates.
The efficiency for selected \yyee\ events to satisfy the Pixel-veto requirement is evaluated using a dedicated
supporting trigger accepting events with at most 15 tracks at the HLT, out of which at least two had $\pt>1$~\GeV. At Level-1, the same trigger condition was applied as in the diphoton trigger. The FCal veto requirement was also imposed at the HLT.
The Pixel-veto efficiency is parameterised using a second-order polynomial as a function of dielectron rapidity, $y_{ee}$.
The efficiency reaches 80--85\% for dielectron rapidity $|y_{ee}|<1$ and drops to 45--50\% at $|y_{ee}|\approx 2.5$.
This efficiency correction is applied to the \yyee\ MC simulation.
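A second-order polynomial parameterisation of this kind can be sketched as follows; the coefficients are chosen only to reproduce the quoted behaviour and are not the fitted values:

```python
def pixel_veto_eff(y_ee, a=0.82, b=0.0, c=-0.0544):
    """Second-order polynomial in dielectron rapidity (hypothetical coefficients)."""
    return a + b * y_ee + c * y_ee ** 2

# ~82% at central rapidity, falling to ~48% at |y_ee| = 2.5,
# consistent with the ranges quoted above.
print(round(pixel_veto_eff(0.0), 2))  # 0.82
print(round(pixel_veto_eff(2.5), 2))  # 0.48
```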
\subsection{Photon reconstruction and identification\label{sec:photon-eff}}
The photon reconstruction efficiency is extracted from data using \ggee events, where one of the electrons emits a hard-bremsstrahlung photon when interacting with the material of the detector.
A tag-and-probe method is performed for events collected by the diphoton trigger with exactly one identified electron and exactly two reconstructed charged-particle tracks.
The electron is considered a tag if it can be matched to one of the tracks with a $\Delta R < 1.0$ requirement.
The electron is required to have $\eT^{e}>4$~\GeV\ and the track that is unmatched with the electron (trk2) must have $\pT < 1.5$~\GeV.
The electron--trk2 transverse momentum difference is treated as the transverse energy of the probe, since the additional hard-bremsstrahlung photon is expected to have $\eT^{\gamma}\approx (\eT^e-\pT^\textrm{trk2})$.
The $\pT^\textrm{trk2}<1.5$~\GeV\ requirement ensures a sufficient $\Delta R$ separation between the expected photon and the second electron. A hard-bremsstrahlung photon is expected to be within a distance of $\Delta R = 1.0$ around the trk2 direction.
Any additional background contribution to the exclusive \yyee\ reaction is found to be very small in Pb+Pb UPC~\cite{Abbas:2013oua}, and therefore it is considered negligible.
The data sample contains 2905 $\ggee(\gamma)$ bremsstrahlung photons and is used to extract the photon reconstruction efficiency, which is presented in Figure~\ref{fig:hardbrem_eff2}.
The efficiency in data is approximately $60\%$ for $\eT^{\gamma}=2.5~\GeV$ and reaches 90\% at $\eT^{\gamma}=6~\GeV$.
Reasonable agreement between data and simulation is found.
The distribution from Figure~\ref{fig:hardbrem_eff2} is used to obtain the data-to-simulation scale factors that are used to correct the MC simulation.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig_03.pdf}
\caption{Photon reconstruction efficiency as a function of photon $\eT^\gamma$ (approximated with $\eT^e-\pT^\textrm{trk2}$) extracted from \ggee events with a hard-bremsstrahlung photon. Data~(full symbols) are compared with \ggee MC simulation~(open symbols). The error bars denote statistical uncertainties.}
\label{fig:hardbrem_eff2}
\end{center}
\end{figure}
High-\pt\ exclusive dilepton production~(\ggll with $\ell^\pm=e^\pm, \mu^\pm$) with final-state radiation~(FSR) is used for data-driven measurements of the photon PID efficiency, defined as the probability for a reconstructed photon to satisfy the identification criteria.
Events with exactly two oppositely charged tracks with $\pt>0.5\;\gev$ are selected in UPC events recorded by the diphoton or dimuon\footnote{The dimuon trigger required a muon candidate with $\pt>4$~\GeV\ reconstructed at Level-1 and at least two tracks with \pt above 1~\GeV\ among up to 15 tracks found at the HLT.} triggers. In addition, a requirement to reconstruct a photon candidate with $\et^\gamma>2.5$~\GeV\ and $|\eta|<2.37$, excluding the calorimeter transition region $1.37<|\eta|<1.52$, is imposed. A photon candidate is required to be separated from each track with the requirement $\Delta R>0.3$. This condition avoids the leakage of the photon cluster energy into an electron cluster from the \ggee\ process. The mass of the dilepton system is required to be above 1.5~\GeV.
The FSR event candidates are identified using a $\pTttg<1$~\GeV\ requirement, where \pTttg\ is the transverse momentum of the three body system consisting of two oppositely charged tracks and a photon. The FSR sample consists of 1333~(212) photon candidates in the 2018~(2015) data set and is statistically independent from the hard-bremsstrahlung photon sample used in the photon reconstruction efficiency measurement.
Figure~\ref{fig:pt-fsr-eff2} shows the photon PID efficiency as a function of the reconstructed photon \et\ for 2015 and 2018 data. The efficiency in data is compared with the efficiency extracted from the signal MC sample.
Photon PID efficiencies in MC simulation with 2015 and 2018 data-taking conditions are in good agreement.
In the data for photons with $\et < 5$~\GeV, the photon PID efficiency is in the range of 91--93\% in the 2018 data set, while it is found to be 97--100\% in the 2015 data set. This difference is due to slightly different detector conditions between the 2015 and 2018 data-taking periods, causing the photon shower-shape distributions to be narrower in the 2015 data.
Based on these studies, MC simulated events are corrected using photon \et-dependent data-to-simulation scale factors separately for the 2015 and 2018 data sets.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig_04a.pdf}
\includegraphics[width=0.45\textwidth]{fig_04b.pdf}
\caption{Photon PID efficiency as a function of photon \et\ extracted from FSR event candidates in 2015~(left) and 2018~(right) data~(full symbols) and signal MC sample~(open symbols). The error bars denote statistical uncertainties.}
\label{fig:pt-fsr-eff2}
\end{center}
\end{figure}
\subsection{Photon energy calibration}
The EM energy scale and energy resolution are validated in data using \ggee\ events.
The two electrons from the \ggee\ reaction exhibit balanced transverse momenta, with $|\pt^{e^+}-\pt^{e^-}|$ expected to be below 30~\MeV, which is much smaller than the EM calorimeter energy resolution.
Therefore, the energy resolution, $\sigma_{\et^\textrm{cluster}}$, can be determined from the measurement of $\etclone-\etcltwo$ distributions in \ggee\ events from the formula:
\begin{equation*}
\sigma_{\et^\textrm{cluster}} \approx \frac{\sigma_{(\et^\textrm{cluster1}-\et^\textrm{cluster2})}}{\sqrt{2}}~,
\end{equation*}
where $\etclone$ and $\etcltwo$ are the transverse energies of the two clusters.
At low electron-\et (below 10~\GeV{}) the value of $\sigma_{\et^\textrm{cluster}}/\et^\textrm{cluster}$ is observed to be 8--10\% in data, which agrees well with the resolution obtained from simulation.
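The $\sqrt{2}$ relation above can be checked with a toy smearing exercise; the numbers here are illustrative and not the measured resolution:

```python
import math
import random
import statistics

random.seed(1)
true_et, rel_res = 5.0, 0.09  # hypothetical cluster ET (GeV) and relative resolution
# Two balanced clusters, each independently smeared by the same resolution:
diffs = [random.gauss(true_et, rel_res * true_et)
         - random.gauss(true_et, rel_res * true_et)
         for _ in range(200_000)]
# sigma(ET1 - ET2)/sqrt(2) recovers the single-cluster resolution:
sigma_single = statistics.stdev(diffs) / math.sqrt(2.0)
print(round(sigma_single / true_et, 3))  # close to the injected 9%
```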
The EM energy scale is cross-checked using the ratio of electron cluster \et to electron track \pt.
It is observed that the simulation provides a good description of the $\et^e / \pt^{\textrm{trk}}$ distribution.
\subsection{Control distributions for exclusive \yyee\ production }
Figure~\ref{fig:ele-kin_2018} presents detector-level distributions for events passing the \yyee\ selection (outlined in Section~\ref{sec:Selection}) in the 2018 \PbPb\ data.
In total, 28\,045 \yyee\ event candidates are observed. The shaded bands reflect systematic uncertainties due to electron energy scale and resolution, electron reconstruction and identification, and trigger efficiency.
In general, the STARlight prediction describes the normalisation and shapes of distributions well.
Small systematic differences between the central values of the exclusive dielectron data and the MC prediction are seen in the tail of the dielectron~\pt\ distribution, likely due to QED final-state radiation, which is not simulated by the MC generator.
The low number of \yyee\ events collected by a control trigger in the 2015 \PbPb\ data precludes precision comparisons between data and MC simulation in that sample.
In particular, the tighter Pixel-veto requirement imposed at the HLT
necessitates a dedicated pseudorapidity-dependent trigger efficiency correction which, due to the limited number of \yyee\ events, could only be extracted with 20\%
precision. Nevertheless, overall reasonable agreement was found within large uncertainties as demonstrated in the previous ATLAS publication~\cite{Aaboud:2017bwk}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig_05a.pdf}
\includegraphics[width=0.45\textwidth]{fig_05b.pdf}
\includegraphics[width=0.45\textwidth]{fig_05c.pdf}
\includegraphics[width=0.45\textwidth]{fig_05d.pdf}
\caption{Kinematic distributions for $\textrm{Pb+Pb}\,(\gamma\gamma)\rightarrow \textrm{Pb}^{(\ast)}\textrm{+}\textrm{Pb}^{(\ast)}
\,e^+e^-$ event candidates in the 2018 data set: dielectron mass (top-left), dielectron rapidity (top-right), dielectron \pt~(bottom-left) and electron transverse energy (bottom-right).
Data (points) are compared with MC expectations (histograms). The simulation prediction is normalised to the same integrated luminosity as the data.
Systematic uncertainties due to electron energy scale and resolution, electron reconstruction and identification, and trigger efficiency, are shown as shaded bands.
The lower panels display the ratio of data to MC predictions. Some values are outside the plotting range.}
\label{fig:ele-kin_2018}
\end{center}
\end{figure}
\section{\label{sec:Background}Background estimation}
\subsection{Dielectron final states\label{sec:eeEstimate}}
The $\ggee$ process has a relatively high cross section and can be a source of fake diphoton events.
The electron-to-photon misidentification can occur when the electron track is not reconstructed or the electron emits a hard bremsstrahlung photon.
The $\ggee$ yield in the signal region defined in Section~\ref{sec:Selection} is estimated using a fully data-driven method.
A control region is defined requiring exactly two photon candidates passing the signal selection, and one or two pixel tracks. This control region is denoted by \CROneTwo.
The event yield observed in \CROneTwo\ is extrapolated to the signal region using the probability of missing the electron pixel track if the standard track is not reconstructed (\pmis).
The \pmis\ value is measured in data using events with exactly one standard track and two photon candidates with $\Aco < 0.01$, and is found to be $\pmis = (47\pm9)\%$, where the uncertainty is estimated by relaxing the \Aco\ requirement.
It is also found that $\pmis$ does not depend on the probed photon $\et$ and $\eta$.
The number of \ggee events in the signal region is estimated to be $N_{\gamma\gamma\rightarrow e^+e^-} = 15\pm7$, where the uncertainty accounts for the \pmis uncertainty and limited event yield in \CROneTwo.
This uncertainty also covers the differences if the $\ggee$ yield is instead extrapolated from event yields for individual pixel-track multiplicities ($N =1$ or $N =2$).
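One way such an extrapolation could be implemented, assuming each electron's pixel track is missed independently with probability \pmis, is sketched below; this is a toy under that assumption, not the analysis implementation, and the control-region yield is illustrative only:

```python
def extrapolate_to_signal_region(n_cr12, pmis):
    """Binomial extrapolation: with per-electron miss probability p = pmis,
    N(0 tracks) : N(1 track) : N(2 tracks) = p^2 : 2p(1-p) : (1-p)^2,
    so the 1-or-2-pixel-track yield is scaled by p^2 / (1 - p^2)."""
    p = pmis
    return n_cr12 * p ** 2 / (2.0 * p * (1.0 - p) + (1.0 - p) ** 2)

# Hypothetical control-region yield (not the measured one):
print(round(extrapolate_to_signal_region(50, 0.47), 1))  # 14.2
```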
The distribution shapes of various kinematic variables of $\ggee$ background in the signal region are taken from data in \CROne.
The shape uncertainty is constructed by comparing kinematic distributions from data in \CROne\ with the distributions from data in \CRTwo.
\subsection{Central exclusive diphoton production}
The CEP $gg\rightarrow\gamma\gamma$ background is estimated from MC simulation with the overall rate of this process evaluated in the \Aco control region in the data.
The normalisation is constrained using the condition:
\begin{equation*}
N_\textrm{data}(\Aco > 0.01) = N_{gg\rightarrow\gamma\gamma}(\Aco > 0.01) + N_\textrm{sig}(\Aco > 0.01) + N_{\gamma\gamma\rightarrow ee}(\Aco > 0.01)~,
\end{equation*}
where $N_\textrm{data}$ denotes the number of observed events, $N_{gg\rightarrow\gamma\gamma}$ is the expected CEP $gg\rightarrow\gamma\gamma$ event yield, $N_\textrm{sig}$ is the expected number of signal events (from MC simulation) and $N_{\gamma\gamma\rightarrow ee}$ is the $e^+e^-$ background yield.
$N_{\gamma\gamma\rightarrow ee}$ is estimated using the same data-driven method as described in Section~\ref{sec:eeEstimate}.
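The normalisation condition amounts to solving for a single scale factor applied to the CEP MC prediction in the control region; the yields below are hypothetical and used only to illustrate the arithmetic:

```python
def cep_scale(n_data_cr, n_sig_cr, n_ee_cr, n_cep_mc_cr):
    """Scale factor s such that s*N_CEP^MC + N_sig + N_ee = N_data
    in the Aco > 0.01 control region."""
    return (n_data_cr - n_sig_cr - n_ee_cr) / n_cep_mc_cr

# Hypothetical control-region yields:
s = cep_scale(n_data_cr=40.0, n_sig_cr=8.0, n_ee_cr=4.0, n_cep_mc_cr=20.0)
print(s)  # 1.4
```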
The diphoton acoplanarity distribution for events satisfying the signal region selection, but before applying the $\Aco < 0.01$ requirement is shown in Figure~\ref{fig:acoNorm}.
The predictions provide a fair description of the shape of the data distribution.
The uncertainty in the CEP $gg\rightarrow\gamma\gamma$ background process takes into account the limited number of events in the $\Aco > 0.01$ control region ($11\%$), as well as experimental and modelling uncertainties.
It is found that all experimental uncertainties have a negligible impact on the CEP $gg\rightarrow\gamma\gamma$ background estimate.
The impact of the MC modelling uncertainty on the shape of the acoplanarity distribution is estimated using an alternative SuperChic~v2.0 MC sample with extra gluon interactions (no absorptive effects). This leads to a $21\%$ change in the CEP background yield in the signal region, which is taken as a systematic uncertainty.
An additional check is performed by varying the parton distribution function (PDF) of the gluon. The differences between leading-order MMHT 2014~\cite{Harland-Lang:2014zoa}, CT14~\cite{Dulat:2015mca} and NNPDF3.1~\cite{Ball:2017nwa} PDF sets have negligible impact on the shape of the diphoton acoplanarity distribution.
In addition, the energy deposition in the ZDC, which is sensitive to the dissociation of Pb nuclei, is studied for events before the $\Aco <0.01$ requirement is imposed.
Good agreement is observed in the $\Aco > 0.01$ control region between the data-driven CEP estimate and the observed events with a signal corresponding to at least one neutron in the ZDC.
In the signal region ($\Aco < 0.01$), approximately 70\% of observed events have a signal corresponding to no neutrons in the ZDC, which is consistent with the signal-plus-background hypothesis.
The background due to CEP in the signal region is estimated to be $12\pm3$ events.
In the differential cross-section measurements, the shape uncertainty is evaluated using the alternative SuperChic~v2.0 MC sample.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.47\textwidth]{fig_06.pdf}
\caption{The diphoton acoplanarity distribution for events satisfying the signal region selection, but before applying the $\Aco < 0.01$ requirement.
Data are shown as points with statistical error bars, while the histograms represent the expected signal and background levels.
The CEP $gg\rightarrow\gamma\gamma$ background is normalised in the $\Aco > 0.01$ control region.
The signal prediction is normalised to the same integrated luminosity as the data. The shaded band represents the uncertainties in signal and background predictions, excluding the uncertainty in the luminosity.
}
\label{fig:acoNorm}
\end{center}
\end{figure}
\subsection{Other background sources with prompt photons}
The contribution from the $\gamma\gamma\rightarrow e^+e^-\gamma\gamma$ process is evaluated using the \textsc{MadGraph5}$\_$aMC$@$NLO v2.4.3 MC generator~\cite{Alwall:2014hca} and the Pb+Pb photon flux from STARlight. This contribution is estimated to be below 1\% of the expected signal and is consequently ignored in the analysis.
The contribution from bottomonia production (for example, $\gamma\gamma\rightarrow \eta_b \rightarrow \gamma\gamma$ or $\gamma\textrm{Pb}\rightarrow \Upsilon \rightarrow \gamma\eta_b \rightarrow 3\gamma$) is calculated using relevant branching fractions from Refs.~\cite{Ebert:2002pp, Segovia:2016xqb} and found to be negligible.
The contribution from UPC events where both nuclei emit a bremsstrahlung photon is estimated using calculations from Ref.~\cite{Bertulani:1987tz}. The cross section for single-bremsstrahlung photon production from a Pb ion in the fiducial region of the measurement is calculated to be below \SI{e-4}{\pb}, so the coincidence of two such occurrences is negligible.
\subsection{Other fake-photon background}
The background contribution from \ggqq\ production is estimated using MC simulation based on \textsc{Herwig}++ and it contributes less than 1 event to the total number of events in the signal region.
The expected yield for the background from $\gamma\gamma\rightarrow\tau^+\tau^-$ process is estimated using MC simulation based on STARlight + Pythia 8 and is found to be less than 0.5 events.
Both of these background sources are considered negligible.
Exclusive two-meson production can be a potential source of background for LbyL scattering events, mainly due to their similar back-to-back topology. Mesons can fake photons either by their decay into photons ($\pi^0$, $\eta$, $\eta'$) or by mis-reconstructed charged-particle tracks (for example $\pi^+, \pi^-$ states).
Estimates for such contributions are reported in Refs. \cite{KlusekGawenda:2011ib, Klusek-Gawenda:2019ijn,Enterria:2013yra, HarlandLang:2011qd, Harland-Lang:2013ncy} and these contributions are considered to be negligible in the signal region.
The background from fake diphoton events induced by cosmic-ray muons is estimated using a control region with at least one track reconstructed in the muon spectrometer and further studied using the reconstructed photon-cluster time distribution.
The latter method is also used to estimate the background originating from calorimeter noise.
After imposing the $\pt^{\gamma\gamma}$ requirements, these background contributions are below 1 event and are considered negligible.
\section{\label{sec:Systematics}Systematic uncertainties}
Systematic uncertainties in the $\gamma\gamma\rightarrow\gamma\gamma$ cross-section measurements arise from the reconstruction of photons, the background determination and the integrated luminosity, as well as from the procedures used to correct for detector effects.
The precision of the Level-1 trigger efficiency estimation is limited by the number of events recorded by the supporting trigger.
As a systematic check, the $e^+e^-$ event selection is varied.
In total, the impact of the Level-1 trigger efficiency uncertainty on the expected signal yield is 5\%.
The uncertainty in the MBTS/FCal veto efficiency has negligible impact on the results.
The uncertainty in the photon reconstruction and PID efficiencies is estimated by parameterising the scale factors as a function of the photon pseudorapidity, instead of the photon transverse momentum.
This affects the expected signal yield by 4\% (photon reconstruction efficiency) and 2\% (photon PID efficiency).
The variation of the selection criteria used in data-driven efficiency measurements has negligible impact on the results.
The statistical uncertainty of the photon reconstruction and PID efficiency corrections is propagated using a pseudo-experiment method, in which the correction factors are randomly shifted according to their means and standard deviations in an ensemble of pseudo-experiments.
This has negligible impact on the expected signal.
The uncertainties related to the photon energy scale and resolution affect the expected signal yield by 1\% and 2\%, respectively.
The uncertainty due to imperfect knowledge of the photon angular resolution is estimated using electron clusters from the $\ggee$ process. The data--MC difference in the electron cluster $\phi$ resolution is applied as an extra smearing to photons from the signal MC sample. This results in a 2\% shift of the signal yield, which is taken as a systematic uncertainty.
The uncertainty due to the choice of signal MC generator is estimated by using an alternative signal MC sample, as detailed in Section~\ref{sec:DataMC}. This affects the signal yield by 1\% which is taken as a systematic uncertainty.
The uncertainty due to the limited signal MC sample size is 1\%.
The uncertainties in the background estimation are evaluated as described in Section~\ref{sec:Background}.
The uncertainty in the integrated luminosity of the data sample is 3.2\%.
It is derived from the calibration of the luminosity scale using $x$--$y$ beam-separation scans, following a methodology similar to that detailed in Ref.~\cite{ATLAS-CONF-2019-021}, and using the LUCID-2 detector for the baseline luminosity measurements.
Systematic uncertainties associated with the background estimate, the photon PID and reconstruction efficiency, photon energy scale, and photon angular and energy resolution are fully correlated between the 2015 and 2018 data-taking periods. Systematic uncertainties in the trigger efficiency are computed separately for each data-taking period. They are dominated by the statistical uncertainty of each data set and are thus uncorrelated.
\section{\label{sec:Results}Results}
\subsection{Kinematic distributions}
Photon kinematic distributions comparing the selected data
with the sum of expected event yields from simulated signal and background processes in the signal region are shown in Figure~\ref{fig:LbL-control_opt}. In total, 97 events are observed in data, while 45 signal events and 27 background events are expected.
This excess of observed events is visible in all distributions shown in Figure~\ref{fig:LbL-control_opt}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.42\textwidth]{fig_07a.pdf}
\includegraphics[width=0.42\textwidth]{fig_07b.pdf}
\includegraphics[width=0.42\textwidth]{fig_07c.pdf}
\includegraphics[width=0.42\textwidth]{fig_07d.pdf}
\includegraphics[width=0.42\textwidth]{fig_07e.pdf}
\includegraphics[width=0.42\textwidth]{fig_07f.pdf}
\caption{Kinematic distributions for \gggg\ event candidates: diphoton invariant mass (top-left), diphoton rapidity (top-right),
diphoton transverse momentum (mid-left), diphoton $|\cos(\theta^*)|$ (mid-right),
leading photon transverse energy (bottom-left) and leading photon pseudorapidity (bottom-right). Data (points) are compared with the sum of signal and background expectations (histograms). The signal prediction is normalised to the same integrated luminosity as the data. Systematic uncertainties in the signal and background processes, excluding that in the luminosity, are shown as shaded bands.}
\label{fig:LbL-control_opt}
\end{center}
\end{figure}
\subsection{\label{sec:fiducial-xsec}Integrated fiducial cross section}
The inclusive cross section for the $\gamma\gamma\rightarrow \gamma\gamma$ process is measured in a fiducial phase space, defined by the following requirements on the diphoton final state, reflecting the selection at reconstruction level: both photons have to be within $|\eta|<2.4$ with a transverse momentum of $\pT>2.5~\GeV$. The invariant mass of the diphoton system has to be $m_{\gamma\gamma}>5\,\GeV$ with transverse momentum of $\pT^{\gamma\gamma}<1~\GeV$. In addition, the photons must fulfil an acoplanarity requirement of $\Aco<0.01$.
The integrated fiducial cross section is obtained as follows:
\begin{equation}
\label{EQN:CrossSectionFid}
\sigmafid = \frac{N_{\textrm{data}}-N_{\textrm{bkg}}}{C \times \int L \text{d} t}\,,
\end{equation}
where $N_{\textrm{data}} = 97$ is the number of selected events in data, $N_{\textrm{bkg}}=27\pm5$ is the number of background events, $\int L \text{d} t = 2.22 \pm 0.07~\textrm{nb}^{-1}$ is the integrated luminosity of the data sample and $C=0.263 \pm 0.021$ is
the overall correction factor that accounts for detector efficiencies and resolution effects, and for signal events passing the event selection but originating from outside the fiducial phase space (fiducial corrections).
The $C$ factor is defined as the ratio of the number of reconstructed MC signal events passing the selection to the number of generated MC signal events satisfying the fiducial requirements.
The uncertainty in $C$ is estimated by varying the data/MC correction factors within their uncertainties as described in Section~\ref{sec:Systematics}, in particular for the photon reconstruction and PID efficiencies, photon energy scale and resolution and trigger efficiency.
An overview of the various uncertainties in $C$ is given in Table~\ref{tab:AandCFactor}.
The uncertainty in $N_{\textrm{bkg}}$ is dominated by the uncertainty
in the \ggee\ background. This has a 6\% impact on the estimated
integrated fiducial cross section.
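As a quick arithmetic check, Eq.~(\ref{EQN:CrossSectionFid}) can be evaluated with the inputs quoted above (a minimal Python sketch; the variable names are illustrative):

```python
# Re-deriving the integrated fiducial cross section from the quoted inputs.
n_data = 97        # selected events in data
n_bkg = 27.0       # estimated background events
lumi = 2.22        # integrated luminosity [nb^-1]
c_factor = 0.263   # detector correction factor C

sigma_fid = (n_data - n_bkg) / (c_factor * lumi)  # [nb]
print(f"{sigma_fid:.0f} nb")          # -> 120 nb

# Data-to-theory ratios for the two predictions quoted later in the text.
print(f"{sigma_fid / 80.0:.2f}")      # -> 1.50
print(f"{sigma_fid / 78.0:.2f}")      # -> 1.54
```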
\begin{table}[t!]
\begin{center}
\begin{tabular}{l |c}
\hline
Source of uncertainty & Detector correction ($C$) \\
\hline \hline
& $0.263\pm 0.021$ \\
\hline
Trigger efficiency & 5\% \\
Photon reco. efficiency & 4\% \\
Photon PID efficiency & 2\% \\
Photon energy scale & 1\% \\
Photon energy resolution & 2\% \\
Photon angular resolution & 2\% \\
Alternative signal MC & 1\% \\
Signal MC statistics & 1\% \\
\hline
Total & 8\% \\
\end{tabular}
\caption{The detector correction factor, $C$, and its uncertainties
for the integrated fiducial cross-section measurement.
The second row lists the numerical value of $C$ together with the total uncertainty.
The total uncertainty in $C$ is the quadratic sum of the systematic and statistical components.}
\label{tab:AandCFactor}
\end{center}
\end{table}
The measured integrated fiducial cross section is $\sigmafid = 120 \pm
17~\textrm{(stat.)} \pm 13~\textrm{(syst.)}\pm 4~\textrm{(lumi.)}$~nb,
which can be compared with the predicted values of $80 \pm 8$~nb from
Ref.~\cite{Klusek-Gawenda:2016euz} and $78 \pm 8$~nb from the SuperChic~v3.0
MC generator~\cite{Harland-Lang:2018iur}. The data-to-theory ratios are $1.50
\pm 0.32$ and $1.54\pm 0.32$, respectively.
The theoretical uncertainty in the cross section is primarily due to limited knowledge of the nuclear (EM) form-factors and the related initial photon fluxes. This is extensively studied in Ref.~\cite{Azevedo:2019fyz} and the relevant uncertainty is estimated to be $10\%$ within the fiducial phase space of the measurement.
For masses below 100 \GeV, this uncertainty does not exhibit a dependence
on the diphoton mass.
Higher-order corrections (not included in the calculations) are also part of the theoretical uncertainty and are of the order of 1--3\% in the corresponding invariant mass range~\cite{Bern:2001dg, Klusek-Gawenda:2016nuo}.
\subsection{\label{sec:diff-xsec}Differential fiducial cross sections}
Differential fiducial cross sections as a function of diphoton invariant mass,
diphoton absolute rapidity, average photon transverse momentum
and diphoton $|\cos{\theta^*}|$ are unfolded to particle level in the fiducial phase space
described in the previous section.
The differential fiducial cross sections are determined using an iterative Bayesian unfolding method~\cite{DAgostini:1994fjx} with one iteration for all distributions.
The unfolding procedure corrects for bin migrations between particle-
and detector-level distributions due to detector resolution effects,
and applies reconstruction efficiency as well as fiducial corrections.
The reconstruction efficiency corrects for events inside the fiducial region that are not reconstructed in the signal region due to detector inefficiencies; the fiducial corrections take into account events that are reconstructed in the signal region, but originate from outside the fiducial region.
The background contributions are subtracted from data prior to unfolding.
The statistical uncertainty of the data is estimated using 1000 Poisson-distributed pseudo-data sets, constructed by smearing the observed number of events in each bin of the detector-level distribution.
The root mean square of the differences between the resulting unfolded distributions and the unfolded data is taken as the statistical uncertainty in each bin.
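The two steps just described, one-iteration Bayesian unfolding and the Poisson pseudo-experiment uncertainty, can be sketched on a toy spectrum. This is an illustration only: the response matrix, truth spectrum and number of toys are invented for the example and are not the analysis inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy response matrix R[j, i] = P(reco bin j | true bin i); its column sums
# are the per-bin reconstruction efficiencies.
R = np.array([[0.70, 0.10, 0.00],
              [0.10, 0.70, 0.10],
              [0.00, 0.10, 0.70]])
prior = np.array([100.0, 80.0, 60.0])   # toy MC truth spectrum
eff = R.sum(axis=0)

def unfold_once(n_reco, prior):
    """One iteration of D'Agostini (iterative Bayesian) unfolding."""
    post = R * prior[None, :]           # unnormalised P(true i | reco j)
    post /= post.sum(axis=1, keepdims=True)
    return (post.T @ n_reco) / eff

n_obs = R @ prior                       # noiseless detector-level spectrum
central = unfold_once(n_obs, prior)     # recovers the truth in this toy

# Poisson pseudo-experiments: smear each reco bin, unfold each toy, and take
# the RMS of the differences from the unfolded data as the per-bin
# statistical uncertainty.
toys = np.array([unfold_once(rng.poisson(n_obs), prior) for _ in range(1000)])
stat_unc = np.sqrt(((toys - central) ** 2).mean(axis=0))
print(central)
print(stat_unc)
```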
In the measurement of differential fiducial cross sections, the full
set of experimental systematic uncertainties described in
Section~\ref{sec:Systematics} is considered.
In addition, uncertainties due to the unfolding procedure and the
modelling of the signal process are considered by repeating the
cross-section extraction with modified inputs~\cite{Malaescu:2009dm}.
The distributions are reweighted at generator level to obtain better
agreement between data and simulation after event reconstruction.
The resulting detector-level prediction, which is then similar to data,
is unfolded using the default unfolding inputs, and the difference from
the reweighted generator-level prediction is taken as an
uncertainty. The size of this uncertainty is typically below 1\%.
The impact of statistical uncertainties in the signal simulation is estimated using pseudo-data and
is found to be 1--3\%.
The unfolded differential fiducial cross sections are shown in Figure~\ref{fig:LbL-diff}.
They are compared with the predictions from SuperChic~v3.0, which provide a fair description of the data, except for the overall normalisation differences.
For nearly all variables and bins the total uncertainties in the cross-section measurements are dominated by statistical uncertainties, ranging from 25\% to 75\%.
The background systematic uncertainties are large and comparable to statistical uncertainties in some bins (up to 40\%, mainly at high $|y_{\gamma\gamma}|$) due to the limited number of events in the data control regions.
Global $\chi^2$ comparisons are carried out for the shapes of differential distributions.
They do not display any significant differences between predictions and data, with the largest $\chi^2$ per degree of freedom being 4.3/3 when comparing the shape of the $|\cos(\theta^*)|$ distribution.
The $\Minvgg$ differential fiducial distribution is measured up to $\Minvgg=30~\GeV$. For $\Minvgg>30~\GeV$, no events are observed in data versus a total expectation of 0.8 events.
The cross sections for all distributions shown in this paper,
including normalised differential fiducial cross sections, are available in HepData~\cite{hepdata}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.43\textwidth]{fig_08a.pdf}
\includegraphics[width=0.43\textwidth]{fig_08b.pdf}
\includegraphics[width=0.43\textwidth]{fig_08c.pdf}
\includegraphics[width=0.43\textwidth]{fig_08d.pdf}
\caption{Measured differential fiducial cross sections of \gggg\ production in
Pb+Pb collisions at $\sqn=5.02$~\TeV\ for four observables (from left
to right and top to bottom): diphoton invariant mass, diphoton absolute rapidity,
average photon transverse momentum and diphoton $|\cos(\theta^*)|$.
The measured cross-section values are shown as points with error bars giving the statistical uncertainty and grey bands indicating the size of the total uncertainty.
The results are compared with the prediction from the SuperChic~v3.0 MC
generator (solid line) with bands denoting the theoretical uncertainty.
}
\label{fig:LbL-diff}
\end{center}
\end{figure}
\subsection{\label{sec:Alps}Search for ALP production}
Any particle coupling directly to photons could be produced in an $s$-channel process in
photon--photon collisions, leading to a resonance peak in the invariant mass
spectrum. One popular candidate for producing a narrow diphoton resonance is an
axion-like particle~(ALP) \cite{Knapen:2016moh}.
The measured diphoton invariant mass spectrum, as shown in Figure~\ref{fig:LbL-control_opt}, is used to search for the $\gamma\gamma\rightarrow a \rightarrow\gamma\gamma$ process, where $a$ denotes the ALP.
The LbyL, $\ggee$ and CEP $gg\rightarrow\gamma\gamma$ processes are considered as background.
The contribution from $\ggee$ and CEP $gg\rightarrow\gamma\gamma$ processes is estimated using data-driven techniques as described in Section~\ref{sec:Background}.
The LbyL background is estimated using simulated events generated with SuperChic~v3.0.
These events are normalised to the data yield, after subtracting the $\ggee$ and CEP $gg\rightarrow\gamma\gamma$ contributions and excluding the mass search region. To smooth statistical fluctuations in the background shape at high mass, a Crystal Ball function is fitted to the sum of all background contributions, and the fit residuals are assigned as an additional systematic uncertainty.
Events simulated with STARlight~v2.0~\cite{Klein:2016yzr}, which implements the ALP couplings as described in Ref.~\cite{Knapen:2016moh}, for various ALP masses between 5~\GeV\ and 100~\GeV\ are used to build an analytical model of the ALP
signal, interpolating between the simulated mass points.
The efficiency of ALP events to satisfy the selection criteria (outlined in Section~\ref{sec:Selection}) is about 20\% for $\ma=6~\GeV$ and increases up to 45\% for $\ma=12~\GeV$.
An efficiency plateau of about 80\% is reached for an ALP mass around $40~\GeV$.
The diphoton invariant mass resolution for simulated ALP signal ranges from 0.5~\GeV\ at $m_a=6$~\GeV\ to 1.5~\GeV\ at $m_a=100$~\GeV\ and is dominated by the photon energy resolution.
The uncertainty in the primary-vertex position has a subdominant effect on the diphoton invariant mass resolution over the full mass range.
In every analysis bin a cut-and-count analysis is performed to estimate the
expected numbers of background and signal events.
The bin width is chosen to include at least 80\% of a reconstructed ALP signal peak within a given bin
and ranges from 2~\GeV\ to 20~\GeV.
To cover the entire mass range, the analysis bins overlap, with a constant spacing of 1~\GeV\ between the bin centres.
The signal contribution is fitted individually for every bin using a maximum-likelihood fit
implemented in the HistFitter software~\cite{Read:2002hq, Baak:2014wma,Cowan:2010js} which is based on
HistFactory~\cite{Cranmer:2012sba}, RooFit~\cite{RooFit} and RooStats~\cite{Moneta:2010pm}.
Since no significant deviation from the background-only hypothesis is observed, the result is then used to estimate the upper
limit on the ALP signal strength~($\mu_\mathrm{CLs}$) at 95\% confidence level~(CL).
The corresponding test-statistic distributions are evaluated using pseudo-experiments.
Experimental systematic uncertainties affecting the ALP signal model originate from the
trigger, photon PID and reconstruction efficiencies, and photon energy scale and resolution.
The systematic uncertainties are evaluated identically to the treatment in the cross-section measurements, described in Section~\ref{sec:Systematics}.
The theoretical uncertainty in the calculated ALP signal cross section is 10\% in the full
mass range, due to the limited knowledge of the initial photon fluxes~\cite{Azevedo:2019fyz}.
This uncertainty is considered uncorrelated with other sources of uncertainty.
The limits set on the signal strength $\mu_\mathrm{CLs}$ are transformed into
limits on the cross section $\sigma_{\gamma\gamma\rightarrow a \rightarrow\gamma\gamma}^\mathrm{CLs} = \mu_\mathrm{CLs} \cdot \sigma^\mathrm{MC}_{a,\mathrm{gen}}$.
Additionally, limits on the ALP coupling to photons ($1/\Lambda_a^\mathrm{CLs}$) are calculated from $1/\Lambda_{a}^\mathrm{CLs} = \sqrt{\mu_\mathrm{CLs}} \cdot
1/\Lambda^\mathrm{gen}_{a}$. $\sigma^\mathrm{MC}_{a,\mathrm{gen}}$ and $\Lambda^\mathrm{gen}_{a}$ are the cross section and coupling used in the MC generator.
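The two transformations above can be written as a small helper function. This is a sketch: the signal-strength, cross-section and coupling values used below are placeholders, not the analysis numbers.

```python
import math

def alp_limits(mu_cls, sigma_gen_nb, inv_lambda_gen_tev):
    """Transform a signal-strength limit into cross-section and coupling limits."""
    sigma_cls = mu_cls * sigma_gen_nb                        # [nb]
    inv_lambda_cls = math.sqrt(mu_cls) * inv_lambda_gen_tev  # [TeV^-1]
    return sigma_cls, inv_lambda_cls

# Placeholder generator-level inputs, for illustration only.
sigma_cls, inv_lambda_cls = alp_limits(0.25, 8.0, 0.2)
print(sigma_cls)        # -> 2.0
print(inv_lambda_cls)   # -> 0.1
```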
The observed and expected 95\% CL limits on the ALP production cross section and ALP coupling to photons are presented in Figure~\ref{fig:ALP_Limits}.
The limits set on the cross section $\sigma_{\gamma\gamma\rightarrow a \rightarrow\gamma\gamma}$ for an ALP with a mass of 6--100~\GeV\ range from 70~nb to 2~nb.
The derived constraints on $1/\Lambda_a$ range from 0.3~\TeV$^{-1}$ to 0.06~\TeV$^{-1}$.
The widths of the one- and two-standard-deviation bands of
the expected limit distribution decrease for ALP masses above 30~\GeV. This behaviour is driven by the
change in the background rate, which has a low Poisson mean for high ALP masses. For low ALP masses the
background rate is sufficiently high to populate the $N > 0$ expected background outcomes and raise the +1 and +2-standard-deviation boundaries. The discontinuity at $\ma=70~\GeV$ is caused by the increase of the mass-bin width which brings an increase in signal acceptance.
Assuming a 100\% ALP decay branching fraction into photons, the derived constraints on the ALP mass and its coupling to photons are compared in Figure~\ref{fig:ALP_Limits_comp} with those obtained from various experiments~\cite{Bauer:2017ris, Sirunyan:2018fhl, Aloni:2019ruo, Banerjee:2020fue, BelleII:2020fag}.
The exclusion limits from this analysis are the strongest so far for the mass range of $6<\ma <100~\GeV$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.49\textwidth]{fig_09a.pdf}
\includegraphics[width=0.49\textwidth]{fig_09b.pdf}
\caption{
The 95\% CL upper limit on the ALP
cross section $\sigma_{\gamma\gamma\rightarrow a \rightarrow\gamma\gamma}$ (left) and ALP coupling $1/\Lambda_{a}$ (right) for the $\gamma\gamma\rightarrow a \rightarrow \gamma\gamma$ process as a function of ALP mass \ma.
The observed upper limit is shown as a solid black line and the expected upper limit is shown by the dashed black line with its $\pm1$ and $\pm2$ standard deviation bands.
The discontinuity at $\ma=70~\GeV$ is caused by the increase of the mass-bin width which brings an increase in signal acceptance.}
\label{fig:ALP_Limits}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.49\textwidth]{fig_10a.pdf}
\includegraphics[width=0.49\textwidth]{fig_10b.pdf}
\caption{Compilation of exclusion limits at 95\% CL in the ALP--photon coupling ($1/\Lambda_{a}$) versus ALP mass ($\ma$) plane obtained by different experiments. The existing limits, derived from Refs.~\cite{Bauer:2017ris, Sirunyan:2018fhl, Aloni:2019ruo, Banerjee:2020fue, BelleII:2020fag} are compared with the limits extracted from this measurement. The exclusion limits labelled ``LHC ($pp$)'' are based on $pp$ collision data from ATLAS and CMS. All measurements assume a 100\% ALP decay branching fraction into photons. The plot on the right is a zoomed-in version covering the range $1<\ma<120$~\GeV.
}
\label{fig:ALP_Limits_comp}
\end{center}
\end{figure}
\clearpage
\section{\label{sec:Conclusion}Conclusions}
This paper presents a measurement of the light-by-light scattering process in quasi-real photon interactions from ultra-peripheral \PbPb\ collisions at $\sqn=5.02$~\TeV\ by the ATLAS experiment at the LHC. The measurement is based on the full Run 2 data set, corresponding to an integrated luminosity of 2.2~\invnb. After applying the selection criteria, 97 events are selected in the data while $27\pm5$ background events are expected. The dominant background processes
are estimated using data-driven methods.
After background subtraction and corrections for detector effects are applied, the integrated fiducial cross section of the \gggg\ process
is measured to be $\sigmafid = 120 \pm 17~\textrm{(stat.)} \pm 13~\textrm{(syst.)}\pm 4~\textrm{(lumi.)}$ nb.
The data-to-theory ratios are $1.50 \pm 0.32$ and $1.54\pm 0.32$ for predictions from Ref.~\cite{Klusek-Gawenda:2016euz} and from the SuperChic~v3.0 MC generator, respectively.
Differential fiducial cross sections are measured as a function of several properties of the final-state photons and are compared with Standard Model theory predictions for light-by-light scattering.
All measured cross sections are consistent within 2 standard deviations with the predictions.
The measurement precision is limited in all kinematic regions by statistical uncertainties.
The measured diphoton invariant mass distribution is used to search for axion-like particles and set new exclusion limits on their production in the $\textrm{Pb+Pb}\,(\gamma\gamma)\rightarrow \textrm{Pb}^{(\ast)}\textrm{+}\textrm{Pb}^{(\ast)}\,\gamma\gamma$ reaction. Integrated cross sections above 2 to 70 nb are excluded at the 95\% CL, depending on the diphoton invariant mass in the range 6--100~\GeV. These results provide, to this date and within the aforementioned mass range, the most stringent constraints in the search for ALP signals.
\section*{Acknowledgements}
We thank CERN for the very successful operation of the LHC, as well as the
support staff from our institutions without whom ATLAS could not be
operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; ANID, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS and CEA-DRF/IRFU, France; SRNSFG, Georgia; BMBF, HGF and MPG, Germany; GSRT, Greece; RGC and Hong Kong SAR, China; ISF and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; NWO, Netherlands; RCN, Norway; MNiSW and NCN, Poland; FCT, Portugal; MNE/IFA, Romania; JINR; MES of Russia and NRC KI, Russian Federation; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MICINN, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, United Kingdom; DOE and NSF, United States of America. In addition, individual groups and members have received support from BCKDF, CANARIE, Compute Canada, CRC and IVADO, Canada; Beijing Municipal Science \& Technology Commission, China; COST, ERC, ERDF, Horizon 2020 and Marie Sk{\l}odowska-Curie Actions, European Union; Investissements d'Avenir Labex, Investissements d'Avenir Idex and ANR, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF, Greece; BSF-NSF and GIF, Israel; La Caixa Banking Foundation, CERCA Programme Generalitat de Catalunya and PROMETEO and GenT Programmes Generalitat Valenciana, Spain; G\"{o}ran Gustafssons Stiftelse, Sweden; The Royal Society and Leverhulme Trust, United Kingdom.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref.~\cite{ATL-SOFT-PUB-2020-001}.
\printbibliography
\clearpage \input{atlas_authlist}
\end{document}
\section{Introduction}
Deep neural networks present impressive performance in computer vision tasks, such as image classification and object detection \cite{imagenet, object_detection}, but they are vulnerable to small, purposely designed perturbations, and visually unrecognizable images can receive high-confidence predictions \cite{dnn_easily_fooled,intriguing_prop_nn}. This vulnerability can cause severe security issues. Just imagine a self-driving car controlled by such neural networks. Is it reliable? In the computer vision literature, there are attacking methods for crafting small perturbations that are imperceptible to the human eye but lead deep neural networks to identify them incorrectly with absolute certainty \cite{intriguing_prop_nn}. The images produced by those attacking methods are called \textit{adversarial examples}. Other methods produce unrecognizable images for which deep neural networks give highly confident predictions \cite{dnn_easily_fooled}.
The idea of bidirectional learning (BL) is to make the output layer of a discriminative neural network active only when real input data is given, like the behavior of a generative classifier. That is done by teaching the same model to learn how to ``read'' (discriminative) and ``write'' (generative). Because of that, an undirected neural network can be a classifier and a generator at the same time, thereby improving the classifier's robustness to random noise and adversarial examples. Our goal is to make multilayer perceptrons behave as a generative classifier, such as the radial basis function network \cite{rbf, rbf_universal}, the deep Bayes classifier \cite{genclassifier}, and many others. Generative classifiers have been identified as robust to adversarial examples \cite{genclassifier}. Weight and bias adaptation of multilayer perceptrons under BL is performed only by backward propagation of errors (backpropagation). Only real data is utilized for training the neural networks.
The main contribution of this paper is the introduction of two BL methods.\footnote{Complete project available at \url{https://github.com/sidneyp/bidirectional}.} The first method, called bidirectional propagation of errors, trains a hybrid undirected neural network to map images to labels (classifier) and labels to images (generator) in the opposite direction. The second method replaces the training of its generator by the framework of generative adversarial networks (GAN) introduced by Goodfellow et al. \cite{gan}. This leads to hybrid adversarial networks (HAN), where the generator takes a latent variable as input and is trained against an adversarial discriminator. The HAN classifier uses the transposed weights of the generator; it therefore contains a hybrid model which merges the generator and the classifier. To evaluate the performance of these two approaches, we perform experiments on several models, measuring accuracy on unmodified test data, test data with noise addition, and adversarial test data. We also assess the robustness of the models to white-noise static by checking their rates of maximum output for noise data relative to real test data.
\section{Related work}
\label{sec:rel_work}
Bidirectional learning has similarities to deep belief networks (DBNs) \cite{deepbeliefnet} because they are also hybrid models. However, DBNs perform a pre-training phase with restricted Boltzmann machines (RBMs) \cite{deepbeliefnet, rbm} for unsupervised input reconstruction layer-by-layer, from the training-data input layer to a final associative memory. Then an output layer representing the ground truth is added for the discriminative model, and backpropagation is executed for fine-tuned classification training. Some autoencoder frameworks contain an encoder and a decoder sharing their weights for dimensionality reduction tasks; such ``mirrored'' autoencoders are described in \cite{xu1993least,hinton2006reducing}. There exist also deep hybrid models \cite{deep_hybrid_models} where discriminative and generative models share the same latent variables. Another related method, called Eigenboosting \cite{eigenboost}, obtains a generative classifier through hybrid training with Haar-like features \cite{viola_jones}.
Since the discovery that deep neural networks for image classification can be easily fooled by random noise, unrecognizable images, and adversarial examples \cite{dnn_easily_fooled,intriguing_prop_nn}, several defensive and attacking strategies have been described in the literature \cite{towards_adv_evaluation,explain_adv,cleverhans}. One way to make neural networks more robust is adversarial re-training \cite{explain_adv,learning_with_adv,adv_atk_def}. It consists of generating adversarial examples every epoch or iteration and using them as training data. Another defensive strategy is adding an auxiliary classifier for adversarial-example detection \cite{safetynet,detecting_adv,adv_atk_def}. Since adversarial examples are created by adding noise to real data, a denoising method can be useful; therefore \cite{towards_adv_robust} applies a denoising autoencoder before feeding the data into a classifier. There are also defensive methods that use generative adversarial networks. Adversarial perturbation elimination with GAN (APE-GAN) uses the generator of a GAN as a denoising autoencoder \cite{adv_survey,ape_gan}. Another method using GAN is the Generative Adversarial Trainer \cite{adv_survey,gat}, in which the generator of the GAN produces adversarial perturbations of the training set. These previous defensive methods use adversarial examples during training, so those neural networks can be biased toward the method used to design the adversarial examples.
The method of network distillation increases robustness without the need for adversarial examples in the training set \cite{net_distillation,adv_atk_def}. Its idea is to train a neural network to behave as another trained neural network: instead of giving hard labels to a neural network, the \textit{temperature}-controlled softmax output of the trained neural network is given as ground truth. However, \cite{distillation_not_robust} verified that network distillation is still vulnerable to adversarial examples. A generative classifier presented as a defensive method against adversarial examples is the Gaussian process hybrid deep neural network \cite{gpdnn,adv_atk_def}. The last layer of that robust convolutional neural network architecture consists of radial basis function kernels, so it behaves as a generative classifier; its authors state that their deep architecture knows when it does not know. A biologically inspired defense against adversarial examples for deep neural networks is presented in \cite{adv_survey,bio_adv_protection}. Its principle is the creation of highly nonlinear neural networks which produce a saturated weight distribution, as found in the brain. None of these three methods uses adversarial examples during training, and that is also our goal in this work.
\section{Bidirectional learning}
\label{bidirectional_learning}
Bidirectional learning produces a classifier and a generator in an undirected neural network using backward propagation of errors in both directions. Each direction of this network has its own biases, while the weights are shared. The idea is that the same positive weights of the last layer of a generator for producing white pixels can serve as the first layer of a classifier for identifying white pixels; negative weights play a similar role for black pixels. Formally, and in over-simplified terms, consider a perceptron without bias whose weight vector $\mathbf{w}$ has components in $\{-1,1\}$, at least one of them equal to $1$, and whose input $\mathbf{x}$ has components in $\{0,1\}$. Its output is $y=f(\mathbf{w}\cdot \mathbf{x})$, where $f$ is the threshold activation function defined by
\begin{equation}
\label{eq0}
f(a) =
\begin{cases}
0 & \quad \text{if } a \leq 0\\
1 & \quad \text{if } a > 0
\end{cases}
\end{equation}
The perfect activation input $\mathbf{\hat{x}}=\argmax_{\mathbf{x}} \mathbf{w}\cdot \mathbf{x}$ must have active inputs for positive weights and inactive inputs for negative weights, therefore $\mathbf{\hat{x}}=\max(\mathbf{w},0)$. This shows that $\mathbf{w}$ can also be adapted to be a contrast template of $\mathbf{\hat{x}}$, so the perceptron becomes a generative classifier through this fast adaptation procedure of ``copying'' its input when an activation occurs, as the Hebbian theory states \cite{hebb-organization-of-behavior-1949}. In biology, a real neuron produces a back-propagating action potential, where the activation that travels through the axon back-propagates to its dendrites for plasticity regulation \cite{Grewe2010}. When the activation output of $\mathbf{\hat{x}}$ is back-propagated, the result is equal to $\mathbf{\hat{x}}$ itself, as expressed by
\begin{equation}
\label{eq1}
\mathbf{\hat{x}}\equiv f(\mathbf{w}^T\cdot f(\mathbf{w}\cdot \mathbf{\hat{x}})).
\end{equation}
When different activation functions and multiple layers are used, Eq.~\eqref{eq1} holds only approximately, so bidirectional learning enforces this equivalence during training. We infer that adding a supporting backpropagation to a classifier in the direction opposite to its normal use can make the classifier's outputs less active when non-real data are given as input, and thereby reduce the vulnerability to adversarial examples. Since biological neurons learn from their inputs, bidirectional learning uses a common training algorithm of artificial neural networks to mimic Hebbian learning in the excitatory synapses (positive weights), because ``neurons wire together if they fire together'' \cite{Lowel209}, and anti-Hebbian learning \cite{Vogels2013} in the inhibitory ones, because negative weights are strengthened when the classifier's inputs remain inactive.
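The fixed-point identity~\eqref{eq1} can be checked numerically for a single bias-free threshold unit. The sketch below is our illustration (not part of the original experiments) with a hand-picked weight vector:

```python
import numpy as np

def f(a):
    # Threshold activation of Eq. (1): 1 if a > 0, else 0.
    return (a > 0).astype(int)

# Weight vector with entries in {-1, 1}, at least one of them +1.
w = np.array([1, -1, 1, -1, 1])

# Perfect activation input: active exactly where the weights are positive.
x_hat = np.maximum(w, 0)

y = f(np.dot(w, x_hat))   # forward (classifier) direction
x_rec = f(w * y)          # backward (generator) direction: w^T y for one unit

assert np.array_equal(x_rec, x_hat)  # the back-propagated output reconstructs x_hat
```

With several layers or smooth activations the reconstruction is only approximate, which is exactly the gap that bidirectional learning minimizes.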
\subsection{Bidirectional propagation of errors}
\label{bidirectional_prop}
Our supervised learning approach for both directions of a hybrid undirected neural network is the bidirectional propagation of errors. It consists of using backward propagation of errors (backpropagation) to map data to ground truth, and then ground truth to data; the mapping order can be reversed. The same batch of pairs of data and labels is used for the normal and the reversed backpropagation within one training iteration. Fig.~\ref{fig:biprop} shows how it works.
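One iteration of this procedure can be sketched on a single bias-free linear layer with a shared weight matrix $W$. This is our simplified stand-in for the architectures of Table~\ref{architectures}: squared error and plain gradient descent replace the actual losses and the Adam optimizer used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))  # shared weights, no biases
lr = 0.1

x = rng.random(n_in)              # one data sample
y = np.zeros(n_out); y[1] = 1.0   # its one-hot ground truth

# Discriminative direction: data -> label.
err_c = W @ x - y
W -= lr * np.outer(err_c, x)      # backprop step of the classifier
err_c_after = W @ x - y

# Generative direction: label -> data, through the transposed weights.
err_g = W.T @ y - x
W -= lr * np.outer(y, err_g)      # backprop step of the generator
err_g_after = W.T @ y - x

# Each direction reduces its own error on this batch of size one.
assert np.linalg.norm(err_c_after) < np.linalg.norm(err_c)
assert np.linalg.norm(err_g_after) < np.linalg.norm(err_g)
```

Both updates touch the same matrix $W$, which is the point of the method: the generator step regularizes the weights the classifier will use.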
\begin{figure}[!ht]
\includegraphics[width=0.4\textwidth]{figures/biprop_v3}
\centering
\caption{Illustration of one training iteration in bidirectional propagation of errors. Dark green arrows represent the training with backpropagation (backprop) of discriminative model. Dark blue arrows represent the training of generative model. Same data and class labels are used for both in an iteration.}
\label{fig:biprop}
\end{figure}
\subsection{Hybrid adversarial networks}
\label{ganc}
The method explained in Section~\ref{bidirectional_prop} has a limitation: the generator is trained with a constant input per class. To avoid this, we introduce hybrid adversarial networks (HAN). This framework, based on GAN \cite{gan}, contains three models: a classifier $C$, a generator $G$ that shares the weights of $C$, and a discriminator $D$ that acts as an adversary to $G$. The input of $G$ is a random vector $z$ whose size equals the number of classes of $C$. While $C$ is trained normally, $D$ and $G$ compete in a minimax game in which $G$ tries to reproduce the real data to increase the error of $D$, and $D$ learns to distinguish real data from data produced by $G$.
Our hybrid model merges $G$ and $C$, so the generator of the GAN is trained simultaneously as a transposed classifier. We infer that this yields more robustness because it can produce neurons that become active only when images look ``realistic''. Fig.~\ref{fig:han} presents this framework. The training order within an iteration is $C$, then $D$, finishing with $G$.
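The training order within one HAN iteration can be sketched as follows. The quadratic and logistic losses and the single-layer models here are our simplified stand-ins for the GAN objectives of \cite{gan} and the architectures actually used:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cls, n_pix = 10, 16
Wc = rng.normal(scale=0.1, size=(n_cls, n_pix))  # classifier weights, shared with G
Wd = rng.normal(scale=0.1, size=(1, n_pix))      # discriminator weights
lr = 0.05

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x = rng.random(n_pix)                 # a real image
y = np.zeros(n_cls); y[3] = 1.0       # its one-hot label
z = rng.random(n_cls)                 # random input vector for G

# 1) Train C: data -> label.
err_c = Wc @ x - y
Wc -= lr * np.outer(err_c, x)

# 2) Train D: score real data as 1 and generated data as 0.
x_fake = sigmoid(Wc.T @ z)            # G is the transposed classifier
for sample, target in ((x, 1.0), (x_fake, 0.0)):
    p = sigmoid(Wd @ sample)
    Wd -= lr * np.outer(p - target, sample)

# 3) Train G (the shared weights Wc through their transpose): fool D.
d = sigmoid(Wd @ x_fake)[0]                       # D's score for the fake
g_out = (d - 1.0) * d * (1.0 - d) * Wd.squeeze()  # dL/dx_fake for L = (d-1)^2/2
g_pre = g_out * x_fake * (1.0 - x_fake)           # through G's sigmoid
Wc -= lr * np.outer(z, g_pre)                     # chain rule into the shared weights
```

Because step 3 writes into `Wc`, the adversarial pressure on the generator directly shapes the classifier's weights.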
\begin{figure}[!ht]
\includegraphics[width=0.48\textwidth]{figures/han_model_v4}
\centering
\caption{Illustration of hybrid adversarial networks. Same color scheme is used in the hybrid model. Dark green arrow represents the discriminative model (classifier). Dark blue arrow represents the generative model.}
\label{fig:han}
\end{figure}
\section{Experiments}
\label{experiments}
\begin{table*}[!t]
\centering
\caption{Description of the architectures, with and without bias, used by the two methods. NN stands for fully connected neural network and CNN for convolutional neural network. Convolutional layers are described by the number of kernels, with the kernel size and stride (str) in parentheses.}
\label{architectures}
\begin{tabular}{|l|l|l|}
\hline
Method & Architecture & Units in discriminative hidden layer \\ \hline
& NN no hidden layer & - \\
Bidirectional & NN one hidden layer & 16 \\
propagation & NN two hidden layers & 16,16 \\
of errors & NN four hidden layers & 200,100,60,30 \\
 & CNN three conv. layers & 4 (5x5 str1), 8 (5x5 str2), 12 (4x4 str2), 200 \\ \hline
Hybrid & NN one hidden layer & 128 \\
generative nets & CNN two conv. layers & infoGAN architecture for MNIST \cite{infogan} \\ \hline
\end{tabular}
\end{table*}
These two methods were evaluated using the architectures, with and without bias, described in Table~\ref{architectures}. The architectures without bias are introduced to force all neurons in the network to have the same likelihood of activation and thereby increase robustness. All architectures were trained by mini-batch gradient descent with the Adam optimizer \cite{adam} and a mini-batch size of 100 labeled samples (one mini-batch corresponds to one iteration). Bidirectional propagation of errors was trained for 50,000 iterations and HAN for 500,000 iterations, because the adversarial training in HAN takes longer to converge. The implementation is based on TensorFlow 1.7 \cite{tensorflow}. The datasets used in the experiments are MNIST \cite{mnist} and CIFAR-10 \cite{cifar10}; the MNIST training set consists of 60,000 samples, the CIFAR-10 training set of 50,000, and each test set of 10,000. The adversarial attack used to test the robustness of bidirectional learning is the fast gradient sign method (FGSM) \cite{explain_adv,cleverhans}. It disturbs real images so as to fool the classifier into predicting wrong classes. The equation for disturbing an image $x$ is
\begin{equation}
\label{eq:fgsm}
x_{adv}=x+\epsilon \cdot \mathrm{sign}(\nabla_x J_\theta (x,y)),
\end{equation}
where the adversarial image $x_{adv}$ is produced by adding to the normal image $x$ the sign of the gradient $\nabla_x$ of the loss function $J$ of model $\theta$ for the pair $(x,y)$. The perturbation is limited by $\epsilon$, the maximum change allowed in the pixels of $x$. We use the implementation from CleverHans v2.0.0 \cite{cleverhans}. The test images were modified by FGSM with a max-norm epsilon ($\epsilon$) of 0.3 for MNIST and 0.03 for CIFAR-10; minimum and maximum pixel values of the disturbed images are 0 and 1, respectively. We tested the robustness to white noise static by adding 10~\% of it to the test set for accuracy verification, and by giving 100~\% noise as the classifier's input to measure the sigmoid \cite{sigmoid} and softmax \cite{softmax} output layers. The maximum output for random noise $x_{noise}$ is divided by the maximum output for real test data $x_{test}$; both $x_{noise}$ and $x_{test}$ have the same shape. This gives a rate of outputs for white noise static over real data. The sigmoid rate measures output-layer activity and is expressed by
\begin{equation}
\label{eq3}
r_{sigmoid}=\frac{\max(C_{sigmoid}(x_{noise}))}{\max(C_{sigmoid}(x_{test}))}.
\end{equation}
The softmax rate measures classification probability and is formally denoted as
\begin{equation}
\label{eq4}
r_{softmax}=\frac{\max(C_{softmax}(x_{noise}))}{\max(C_{softmax}(x_{test}))}.
\end{equation}
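The attack of Eq.~\eqref{eq:fgsm} and the rates of Eqs.~\eqref{eq3} and~\eqref{eq4} can be illustrated on a toy linear softmax classifier. This is our sketch only; the experiments use the CleverHans implementation of FGSM:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_cls = 64, 10
W = rng.normal(scale=0.1, size=(n_cls, n_pix))   # toy linear classifier

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

x = rng.random(n_pix)     # stand-in for a test image with pixels in [0, 1]
y = 3                     # its label
eps = 0.3                 # max-norm epsilon used for MNIST

# FGSM: gradient of the cross-entropy loss with respect to the input.
p = softmax(W @ x)
grad_x = W.T @ (p - np.eye(n_cls)[y])
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Sigmoid and softmax rates of white noise static over real data.
x_noise = rng.random(n_pix)
r_sigmoid = sigmoid(W @ x_noise).max() / sigmoid(W @ x).max()
r_softmax = softmax(W @ x_noise).max() / softmax(W @ x).max()
```

For ten classes the numerator of the softmax rate is at least $0.1$, so $r_{softmax} \geq 0.1$ whenever the test prediction is fully confident.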
All architectures with and without biases were trained by:
\begin{enumerate}
\item{Backpropagation (BP)}
\item{Bidirectional learning on first half of iterations, then backpropagation (BL then BP)}
\item{Bidirectional learning (BL)}
\end{enumerate}
\section{Results}
\label{results}
This section presents the results of our two methods on the MNIST and CIFAR-10 datasets. It contains the accuracy on real test data, on test data with noise added, and on test data modified by FGSM; the desired accuracy is 1.0 (100~\%). We measure robustness to white noise static with the sigmoid (activity) and softmax (class probability) rates of noise over real test data. The desired sigmoid output for noise data is 0.0 (fully inactive) and for test data 1.0 (fully active), so the desired sigmoid rate is 0.0. Since we use datasets with ten classes, the desired softmax output for noise data is 0.1 (10~\% confidence) and for test data 1.0 (100~\%), which means a desired softmax rate of 0.1.
\subsection{Results of bidirectional propagation of errors}
\label{result_biprop}
The first row of Table~\ref{biprop_mnist_table} shows the architecture with the largest relative improvement in accuracy on adversarial examples: the architecture without hidden layer and bias, i.e. a linear classifier. Backpropagation achieves an accuracy on adversarial examples of 4.17~\%, while bidirectional learning reaches 60.14~\%. Since this is a simple architecture, the learned weights are easy to interpret, which lets us examine the causes of the difference in robustness. Fig.~\ref{fig:biprop_mnist} shows these weights for the MNIST dataset, together with adversarial examples and generated images for each class. The second row of Table~\ref{biprop_mnist_table} shows the best result regarding robustness to white noise static, measured by the sigmoid and softmax rates of the maximum output for noise over test data; the learning method that achieved it was BL. The sigmoid rate is 0.5, a low value for a sigmoid, meaning that the pre-activation input for noise was zero. The softmax rate reached the best possible value, 0.1 (10~\%).
Table~\ref{biprop_cifar_table} shows the results of bidirectional propagation of errors on the CIFAR-10 dataset. The first row contains the largest relative accuracy improvement on adversarial examples, reached by the architecture trained by BL. Its weights, adversarial examples, and the images generated by all three learning methods are shown in Fig.~\ref{fig:biprop_cifar}. The order of the CIFAR-10 classes is: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The weights learned by BP are noisy representations of those classes, whereas the weights learned by BL are smooth and recognizable. For example, the weights of the blue color channel have high values representing the sky and sea for the airplane and ship classes. The second architecture in Table~\ref{biprop_cifar_table} shows an increase of 2.25~\% in accuracy on test data when trained partially by BL compared with BP alone. The architecture with two hidden layers and no bias is not shown here, but training with BL then BP also increases the accuracy on normal test data compared to BP.
\begin{table*}[!t]
\centering
\caption{Most significant results of bidirectional propagation of errors on MNIST. The iteration with the best test accuracy was selected. Bold numbers are the best results for each model.}
\label{biprop_mnist_table}
\begin{tabular}{|c|l|l|l|l|l|l|}
\hline
Model & Learning & Accuracy & Accuracy & Accuracy & Sigmoid & Softmax \\
& & test & noisy & adversarial & rate & rate \\ \hline
Fully connected& BP & \textbf{0.9273} & \textbf{0.7138} & 0.0417 & 3.34E-12 & 1 \\
no hidden & BL then BP & 0.9265 & 0.3216 & 0.045 & \textbf{0} & 1 \\
layer \& no bias & BL & 0.8781 & 0.6419 & \textbf{0.6014} & \textbf{0} & 1 \\ \hline
Fully connected& BP & \textbf{0.9456} & \textbf{0.6502} & 0.0318 & 0.9983 & 0.984 \\
one hidden & BL then BP & 0.9338 & 0.3807 & 0.06 & 0.9923 & 0.6429 \\
layer \& no bias & BL & 0.905 & 0.5148 & \textbf{0.0814} & \textbf{0.5} & \textbf{0.1} \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[!t]
\centering
\caption{Most significant results of bidirectional propagation of errors on CIFAR-10. The iteration with the best test accuracy was selected. Bold numbers are the best results for each model.}
\label{biprop_cifar_table}
\begin{tabular}{|c|l|l|l|l|l|l|}
\hline
Model & Learning & Accuracy & Accuracy & Accuracy & Sigmoid & Softmax \\
& & test & noisy & adversarial & rate & rate \\ \hline
Fully connected & BP & \textbf{0.3769} & \textbf{0.373} & 0.1853 & 0.9999 & 0.996 \\
no hidden &BL then BP & 0.374 & 0.3678 & 0.1882 & \textbf{0} & \textbf{0.9725} \\
layer \& no bias & BL & 0.3211 & 0.3203 & \textbf{0.2711} & \textbf{0} & 0.9999 \\ \hline
Fully connected & BP & 0.4208 & 0.4137 & 0.351 & \textbf{0.9791} & 0.8627 \\
four hidden & BL then BP & \textbf{0.4433} & \textbf{0.4334} & \textbf{0.3658} & 0.9911 & 0.8359 \\
layers \& no bias & BL & 0.4314 & 0.4283 & 0.3596 & 0.9807 & \textbf{0.8289} \\ \hline
\end{tabular}
\end{table*}
\begin{figure*}[!t]
\centering
\subfloat[MNIST]{\includegraphics[width=\textwidth]{figures/biprop_mnist}\label{fig:biprop_mnist}} \\
\subfloat[CIFAR-10]{\includegraphics[width=\textwidth]{figures/biprop_cifar}\label{fig:biprop_cifar}}
\caption[Short caption.] {\label{fig:res_biprop} Weights of the first layer, generated adversarial examples and images generated by a class label in bidirectional propagation of errors with a fully connected architecture without hidden layer and bias in all three learning methods on each row.}
\end{figure*}
\subsection{Results of hybrid adversarial networks}
\label{result_han}
Table~\ref{han_mnist_table} shows the best robustness among all experiments in this work, achieved by hybrid adversarial networks on the infoGAN architecture for MNIST \cite{infogan}. The model trained by BL then BP without biases reached 95.92~\% accuracy on adversarial examples of the MNIST test set, while BP reached 5.08~\%, at the cost of a small reduction in accuracy on the real test set compared with BP, from 99.21~\% to 98.49~\%. Fig.~\ref{fig:han_mnist} shows the adversarial examples for this architecture and the images generated from a random vector, which demonstrates that HAN can also act as a generative method when trained with BL.
Table~\ref{han_cifar_table} presents the results of HAN on the CIFAR-10 dataset. They are not as good as those on MNIST: the accuracy on the real test set drops drastically while the improvement in robustness is small. However, Fig.~\ref{fig:res_han} shows that the generator of HAN trained by BL has recovered the data distribution of CIFAR-10, even though a generative model was not our goal.
\begin{table*}[!t]
\centering
\caption{Most significant results of hybrid adversarial networks on MNIST. The iteration with the best test accuracy was selected. Bold numbers are the best results for each model.}
\label{han_mnist_table}
\begin{tabular}{|c|l|l|l|l|l|l|}
\hline
Model & Learning & Accuracy & Accuracy & Accuracy & Sigmoid & Softmax \\
& & test & noisy & adversarial & rate & rate \\ \hline
CNN & BP & \textbf{0.9925} & \textbf{0.9913} & 0.0477 & 1 & 1 \\
two conv. & BL then BP & 0.9854 & 0.9783 & \textbf{0.9375} & 1 & 1 \\
layers & BL & 0.9823 & 0.9696 & 0.9084 & 1 & 1 \\ \hline
CNN & BP & \textbf{0.9921} & \textbf{0.9906} & 0.0508 & 1 & 1 \\
two conv. & BL then BP & 0.9849 & 0.9768 & \textbf{0.9592} & 1 & 1 \\
layers \& no bias & BL & 0.9829 & 0.9491 & 0.9566 & 1 & 1 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[!t]
\centering
\caption{Most significant results of hybrid adversarial networks on CIFAR-10. The iteration with the best test accuracy was selected. Bold numbers are the best results for each model.}
\label{han_cifar_table}
\begin{tabular}{|c|l|l|l|l|l|l|}
\hline
Model & Learning & Accuracy & Accuracy & Accuracy & Sigmoid & Softmax \\
& & test & noisy & adversarial & rate & rate \\ \hline
CNN & BP & \textbf{0.7101} & \textbf{0.6973} & 0.161 & 1 & 1 \\
two conv. & BL then BP & 0.574 & 0.5645 & \textbf{0.258} & 1 & 1 \\
layers & BL & 0.565 & 0.5429 & 0.2445 & 1 & 1 \\ \hline
CNN & BP & \textbf{0.7134} & \textbf{0.7067} & 0.1733 & 1 & 1 \\
two conv. & BL then BP & 0.5419 & 0.531 & \textbf{0.3366} & 1 & 1 \\
layers \& no bias & BL & 0.4264 & 0.4114 & 0.1981 & \textbf{3.61E-07} & \textbf{0.9014} \\ \hline
\end{tabular}
\end{table*}
\begin{figure*}[!t]
\centering
\subfloat[MNIST]{\includegraphics[width=0.5\textwidth]{figures/han_mnist}\label{fig:han_mnist}}
\hspace*{\fill}
\subfloat[CIFAR-10]{\includegraphics[width=0.5\textwidth]{figures/han_cifar}\label{fig:han_cifar}}
\caption[Short caption.] {\label{fig:res_han} Generated adversarial examples and images generated by latent variable of hybrid adversarial networks in a CNN with two convolutional layers.}
\end{figure*}
\section{Analysis and discussion}
\label{analysis}
In Fig.~\ref{fig:biprop_mnist} we can see that backpropagation in the architecture without hidden layer and bias produces a noisy representation for each digit of the MNIST dataset. Disturbances can be designed for this neural network by hand: for example, the weights for the digit two have high positive values on the right. The white pixels in that region are not the most important ones in images of the digit two, yet backpropagation identifies them as the most relevant for recognizing an image as a two. By increasing the pixel values in that region of an image of a different class, the resulting disturbed image can be recognized as a two. The adversarial examples generated by FGSM confirm this, and also show how noisy adversarial examples can be. Partial bidirectional learning (BL then BP) behaves similarly to backpropagation; the only difference is that it learns that the border pixels are black, through high negative weights in that region. Bidirectional learning performed in all iterations, on the other hand, increases the robustness of this neural network. The reasons are easily seen in the learned weights, the adversarial examples, and the images generated from the label of each class: creating adversarial examples against BL becomes harder, and in some test images FGSM tries to draw another digit in order to fool the network trained by BL.
From these characteristics we infer that the biological function of neural networks is not that of a fine-tuned universal function approximator \cite{universal}, but rather that of a multilayer contrast-matching algorithm, like a multilayer version of the Haar-like features from the Viola-Jones object detection framework \cite{viola_jones}. The reason is that the more robust weights exhibit the characteristics of Haar-like features, which would explain how real neurons learn complex new patterns so quickly: simply by ``copying'' the contrast of the input that produces an activation. The neurons of the early visual pathway, from the retina through the lateral geniculate nucleus of the thalamus to the V1 visual cortex \cite{neuroscience_book}, have the contrast-detection attributes of the weights trained by bidirectional learning and of Haar-like features. The activation of a neuron depends on inputs with negative weights remaining inactive and inputs with positive weights being active, which also sheds some light on the functionality of Hebbian and anti-Hebbian learning. The exclusion of bias makes neural networks more robust since it reduces the differences in activation likelihood between neurons, and therefore tends to keep all neurons equally important in the network. The results on the CIFAR-10 dataset show that some architectures, when trained with full or partial BL, can even increase accuracy on normal test data. Batch normalization also balances the neurons, by keeping their activation-function inputs close to zero. The HAN results of the infoGAN architecture without bias on the MNIST dataset support our analysis of equal neuron importance and show that a hybrid undirected neural network can be robust to adversarial examples.
\section{Conclusion and future work}
\label{conclusion}
Bidirectional learning produces a classifier and a generator in an undirected neural network. Its main benefit is to the classification task, which is our primary goal, but it can also support the generation of images. Producing supporting methods and alternatives to the backpropagation algorithm with respect to robustness is essential for reliable neural networks. The defensive learning method proposed in this paper was created simply by adding a generative backpropagation to a discriminative multilayer perceptron. However, the difference in results between the MNIST and CIFAR-10 datasets and between architectures should be investigated further.
For future work, we list the following advances possible after the proposal of bidirectional learning:
\begin{itemize}
\item{application of BL on different datasets and architectures;}
\item{the generator of bidirectional propagation of errors receiving the label and the image together as input; adding some variation to the generator's input then turns it into an autoencoder;}
\item{the hybrid adversarial networks framework can be improved in the same way as the original GAN \cite{gan}, since several improvements to GANs have appeared after its introduction and can also be applied to extend HAN, but for classification purposes;}
\item{the decoder (generator) of an autoencoder as a transposed classifier;}
\item{weight decay may improve accuracy on data with white noise static and mimic non-Hebbian learning for positive weights and Hebbian learning for negative weights, because it can reduce the weights of connections with constantly active and constantly inactive inputs, respectively, since constant inputs, like random inputs, are meaningless for neurons;}
\item{HAN can be verified as a generative method;}
\item{other tasks can be performed by coding input or desired output as images or binary strings;}
\item{alternatives to backward propagation of errors can be verified by analysis of BL behavior.}
\end{itemize}
\section*{Acknowledgments}
We are grateful to Stefano Nichele for his support in the review process and to all reviewers for their suggestions. We thank Benjamin Grewe for helpful discussions. We also thank Grant Sanderson of the 3Blue1Brown channel on YouTube, whose videos helped the main author of this work arrive at the idea of bidirectional learning. Finally, we thank Insiders Technologies GmbH for providing the necessary computational resources.
\bibliographystyle{IEEEtran}
\section{Calculations in Lagrangian quantum homology}
\label{s:app-calc} \setcounter{thm}{0}
At several instances along the paper we have appealed to basic
techniques for calculating the Lagrangian quantum homology. The main
ingredient is a spectral sequence whose initial page is the singular
homology of a given Lagrangian and which converges to its quantum
homology. This is well known to specialists and most of the details
can be recollected from several references indicated below. For the
sake of readability we summarize below the main ingredients of this
method. We begin in~\S\ref{sb:spec-seq} with the general setup of the
spectral sequence and in~\S\ref{sb:HF-lag-spheres} specialize to the
case of Lagrangian spheres.
\subsection{A spectral sequence in Lagrangian quantum homology}
\label{sb:spec-seq} \setcounter{thm}{0}
This is a homological version of the spectral sequence that was
introduced in~\cite{Oh:spectral} and further elaborated
in~\cite{Bi:Nonintersections}, see
also~\cite{Bi-Co:Yasha-fest,Bi-Co:rigidity}.
Let $L \subset M$ be a monotone Lagrangian submanifold with minimal
Maslov number $N_L$ and denote $n = \dim L$. Let $K$ be a commutative
ring which will serve as the ground ring for the quantum homology. In
case the characteristic of $K$ is not $2$ we assume that $L$ is spin
(i.e. endowed with a given spin structure).
Denote by $\Lambda_K = K[t^{-1},t]$ the ring of Laurent polynomials in
$t$, graded so that the degree of $t$ is $|t| = -N_L$. Let $f: L
\longrightarrow \mathbb{R}$ be a Morse function and fix in addition a
generic almost complex structure $J$, compatible with the symplectic
structure of $M$ and a generic Riemannian metric on $L$. With this
data fixed one can define the pearl complex $(\mathcal{C}, d)$ whose
homology $QH(L; \Lambda_K)$ is (by definition) the Lagrangian quantum
homology of $L$, and which turns out to be isomorphic to the
self-Floer homology of $L$ (see~\cite{Bi-Co:Yasha-fest,
Bi-Co:rigidity, Bi-Co:qrel-long, Bi-Co:lagtop} for the foundations
of Lagrangian quantum homology.)
Consider now the graded free $K$-module $C$ whose basis is formed by
the critical points of $f$, where the degree $i$ part is generated by
the critical points of index $i$:
$$C_i := \bigoplus_{x \in \textnormal{Crit}_i(f)} Kx, \quad
C_* := \bigoplus_{i=0}^n C_i.$$ Morse
theory~\cite{Ba-Hu:Morse-homology-book, Au-Da:Morse-Floer-book,
Au-Da:Morse-Floer-book-eng} gives rise to a differential
$\partial^{\textnormal{m}}: C_i \longrightarrow C_{i-1}$ on $C$ whose
homology $H_*(C,\partial^{\textnormal{m}})$ is canonically isomorphic
to the singular homology $H_*(L;K)$ of $L$.
Below it will be useful to write $\Lambda_K = \oplus_{i \in
\mathbb{Z}} P_i$, where
\begin{equation*}
P_i =
\begin{cases}
K t^{-i / N_L}, & \text{if } i \equiv 0 \pmod{N_L},\\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
The pearl complex $(\mathcal{C}, d)$ is related to $C$ as follows.
Its underlying module is defined by $\mathcal{C}_* = C_* \otimes_K
\Lambda_K$, where the grading is induced from both factors in the
tensor product. Thus we have:
$$\mathcal{C}_l = \bigoplus_{k \in \mathbb{Z}} C_{l - k N_L} \otimes
P_{k N_L}, \quad \forall \, l \in \mathbb{Z}.$$
The differential $d$ can be written as a sum of $K$-linear operators
as follows:
\begin{equation} \label{eq:pearl-diff}
d = \partial_0 \otimes 1 + \partial_1 \otimes t + \cdots +
\partial_{\nu} \otimes t^{\nu},
\end{equation}
with $\partial_i: C_j \to C_{j + i N_L -1}$ and $\nu = \left[
\tfrac{n+1}{N_L}\right]$. Moreover, the first operator in this sum
coincides with the Morse differential, i.e. $\partial_0 =
\partial^{\textnormal{m}}$. We refer the reader
to~\cite{Bi-Co:Yasha-fest, Bi-Co:rigidity, Bi-Co:qrel-long,
Bi-Co:lagtop} for the precise definition of the operators
$\partial_i$. As far as this section is concerned, the only relevant
thing is the precise shift in grading for each $\partial_i$.
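Note that each summand of $d$ indeed has total degree $-1$: since
$|t| = -N_L$ and $\partial_i$ shifts degrees by $i N_L - 1$, the term
$\partial_i \otimes t^i$ has degree
$$(i N_L - 1) + i \, |t| = (i N_L - 1) - i N_L = -1.$$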
Consider now the following increasing filtration
$\mathcal{F}_{\bullet}\Lambda_K$ on $\Lambda_K$:
$$\mathcal{F}_p \Lambda_K :=
\Biggl\{ h(t) \in \Lambda_K \, \Big| \, h(t) = \sum_{-p \leq k} a_k
t^k\Biggr\} = \bigoplus_{j \leq p} P_{j N_L}.$$ This filtration
induces an increasing filtration on the chain complex $(\mathcal{C},
d)$ by setting $\mathcal{F}_p \mathcal{C} = \mathcal{C} \otimes
\mathcal{F}_p \Lambda_K$ or more specifically:
$$(\mathcal{F}_p \mathcal{C})_l = \bigoplus_{j \leq p} C_{l - j N_L}
\otimes P_{j N_L}, \quad \forall \, p, l \in \mathbb{Z}.$$ The fact
that the differential preserves the filtration follows
from~\eqref{eq:pearl-diff}. Note also that for degree reasons the
filtration $\mathcal{F}_{\bullet} \mathcal{C}$ is bounded.
According to standard spectral sequence
theory~\cite{Weibel:book-hom-alg} the filtration
$\mathcal{F}_{\bullet}\mathcal{C}$ induces a spectral sequence $\{
E^r_{p, q}, d^r \}_{_{r \geq 0}}$ which converges to $H_*(\mathcal{C},
d) = QH_*(L; \Lambda_K)$.
The following theorem is an obvious homological adaptation of
Theorem~5.2.A from~\cite{Bi:Nonintersections}.
\begin{thm} \label{t:spectral-seq} The spectral sequence $\{ E^r_{p,
q}, d^r \}$ has the following properties:
\begin{enumerate}
\item[(1)] $E^0_{p,q} = C_{p + q - pN_L} \otimes P_{p N_L}$, $d^0
= \partial_0 \otimes 1$;
\item[(2)] $E^1_{p,q} = H_{p + q - pN_L}(L;K) \otimes P_{p N_L}$,
$d^1 = [\partial_1] \otimes t$, where
\[
[\partial_1]: H_{p + q - pN_L}(L; K) \longrightarrow H_{p + q -1
- (p-1) N_L}(L; K)
\]
is induced by the map $\partial_1$.
\item[(3)] $\{ E^r_{p, q}, d^r \}$ collapses at the $\nu + 1$
step, namely $d^r = 0$ for every $r \geq \nu + 1$ (hence we
denote $E^{\infty}_{p, q} = E^r_{p, q}$ for $r \geq \nu + 1$).
Moreover, the sequence converges to $QH_*(L; \Lambda_K)$. In
particular, when $K$ is a field we have:
\[
\bigoplus_{p+q = l} E^{\infty}_{p, q} \cong QH_l(L; \Lambda_K)
\; \; \forall \, l \in \mathbb{Z}.
\]
\end{enumerate}
\end{thm}
\subsection{Quantum homology of Lagrangian spheres}
\label{sb:HF-lag-spheres} \setcounter{thm}{0}
\begin{prop} \label{p:HF-lag-spheres} Let $L \subset M$ be an
$n$-dimensional monotone Lagrangian submanifold which is a
$\mathbb{Q}$-homology sphere. Then:
\begin{enumerate}[(i)]
\item If $n$ is even then $QH_*(L; \Lambda_{\mathbb{Q}}) \cong
H_*(L; \mathbb{Q}) \otimes \Lambda_{\mathbb{Q}}$.
\item Assume $n$ is odd. If $N_L \centernot| n + 1$ or $N_L \,|\,
n + 1$ and $[L] \neq 0$, then $QH_*(L; \Lambda_{\mathbb{Q}})
\cong H_*(L; \mathbb{Q}) \otimes \Lambda_{\mathbb{Q}}$. If $N_L
\,|\, n + 1$ and $[L] = 0$, then $QH_*(L; \Lambda_{\mathbb{Q}})$
is either 0 or isomorphic to $H_*(L; \mathbb{Q}) \otimes
\Lambda_{\mathbb{Q}}$.
\end{enumerate}
\end{prop}
Note that the isomorphisms in~(i) might not be canonical in case $N_L
| n$ (for more on this phenomenon see~\S 4.5
in~\cite{Bi-Co:rigidity}).
\begin{proof}
The proof is based on the spectral sequence of~\S\ref{sb:spec-seq}
and on Theorem~\ref{t:spectral-seq}.
Before we start recall that $N_L$ must be even since $L$ is
orientable.
Assume that $n$ is even. Then $E^1_{p,q} = 0$ whenever $p + q$ is
odd, since $N_L$ is even. As each differential $d^r$, $r \geq 1$,
shifts the total degree $p + q$ by $-1$, it maps every non-trivial
term to a trivial one, hence $d^r = 0$ and $E^1_{p,q} =
E^{\infty}_{p,q}$. This gives us $QH_*(L; \Lambda_{\mathbb{Q}})
\cong H_*(L; \mathbb{Q}) \otimes \Lambda_{\mathbb{Q}}$.
Assume now that $n$ is odd. If $p + q$ is odd, then the only
non-trivial terms in $E^1_{p,q}$ are
\[
E^1_{p,q} = H_n(L; \mathbb{Q}) \otimes P_{p N_L},
\]
where $p + q = n + p N_L$. If $p + q$ is even, then the only
non-trivial terms are
\[
E^1_{p,q} = H_0(L; \mathbb{Q}) \otimes P_{p N_L},
\]
where $p + q = p N_L$. Now for degree reasons the maps $d^r:
E^r_{p,q} \to E^r_{p-r,q+r-1}$ vanish when $p + q$ is odd, since
either $E^r_{p,q} = 0$, or $E^r_{p-r,q+r-1} = 0$, or both are
trivial. It remains to consider the maps $d^r: E^r_{p,q} \to
E^r_{p-r,q+r-1}$ for $p + q$ even.
We now assume that $N_L \centernot| n + 1$. Then $d^1: H_0(L;
\mathbb{Q}) \otimes P_{pN_L} \to H_{N_L-1}(L; \mathbb{Q}) \otimes
P_{(p-1)N_L}$ and the assumption implies that this operator is $0$.
By the same reasoning the higher differentials $d^r$ vanish for all
$r \geq 2$. Thus we obtain $QH_*(L; \Lambda_{\mathbb{Q}}) \cong
H_*(L; \mathbb{Q}) \otimes \Lambda_{\mathbb{Q}}$.
Assume $N_L \,|\, n + 1$ and $[L] \neq 0$. Since $i_L(e_L) = [L]
\neq 0$, this implies that $e_L \in QH_n(L; \Lambda_{\mathbb{Q}})$
is non-zero and hence not a boundary (we are using $\mathbb{Q}$ as
our ground ring). Therefore the operators $d^r$ must vanish for all
$r \geq 1$. We obtain the desired isomorphism.
In the case $N_L \,|\, n + 1$ and $[L] = 0$ there exists either an
$r \geq 1$ such that $d^r$ is non-zero or $d^r$ is always zero.
This corresponds to both cases in the assertion.
\end{proof}
\begin{rem}
In Proposition~\ref{p:HF-lag-spheres} the case $N_L \,|\, n + 1$
and $[L] = 0$ leads to two possibilities for $QH(L;
\Lambda_{\mathbb{Q}})$. One can distinguish between them by
counting the algebraic number of pseudo-holomorphic disks of Maslov
index $n + 1$ through two generic points of $L$. If this number is
$0$ then $QH(L; \Lambda_{\mathbb{Q}}) \cong H_*(L;\mathbb{Q})
\otimes \Lambda_{\mathbb{Q}}$, otherwise $QH(L;
\Lambda_{\mathbb{Q}})$ vanishes.
\end{rem}
\section{Relations to enumerative geometry of holomorphic disks}
\setcounter{thm}{0} \label{s:enum}
Let $L^n \subset M^{2n}$ be an $n$-dimensional oriented Lagrangian
sphere in a monotone symplectic manifold $M$ with $n=$~even and $C_M =
\tfrac{n}{2}$. Note that $L$ satisfies Assumption~$\mathscr{L}$ hence
we can define its discriminant $\Delta_L \in \mathbb{Z}$ by the recipe
in~\S\ref{sb:discr-intro} or more generally $\widetilde{\Delta}_L \in
\widetilde{\Lambda}^+$ as described in~\S\ref{s:coeffs}.
The purpose of this section is to give an interpretation of the
discriminant in terms of enumeration of holomorphic disks with
boundary on $L$. A related previous result was established
in~\cite{Bi-Co:lagtop} for $2$-dimensional Lagrangian tori and the
same arguments from that paper easily generalize to our setting.
We will use below the notation from~\S\ref{s:coeffs}. Let $A \in
H_2^D$ and $J$ an almost complex structure compatible with the
symplectic structure of $M$. Denote by $\mathcal{M}_p(A, J)$ the space
of simple $J$-holomorphic disks with boundary on $L$ in the class $A$
and with $p$ marked points on the boundary (the space is defined
modulo parametrization by the group $Aut(D) \cong PSL(2,\mathbb{R})$
of biholomorphisms of the disk $D$; see Section~A.1.11
in~\cite{Bi-Co:lagtop} for the precise definitions). Denote by $ev_i:
\mathcal{M}_p(A, J) \longrightarrow L$ the evaluation at the $i$'th
marked point, where $1 \leq i \leq p$.
Fix three points $P, Q, R \in L$. Choose an oriented smooth path
$\overrightarrow{PQ}$ in $L$ starting at $P$ and ending at $Q$.
Similarly choose another two oriented paths $\overrightarrow{QR}$ and
$\overrightarrow{RP}$.
Let $A \in H_2^D$ with $\mu(A) = n$. Define $n_P(A) \in \mathbb{Z}$ to
be the number of $J$-holomorphic disks in the class $A$ whose boundaries
pass through both the path $\overrightarrow{QR}$ and the point $P$. In
other words we count the number of disks $u: (D, \partial D)
\longrightarrow (M,L)$ in the class $A$ with two marked points $z_1,
z_2 \in \partial D$ such that $u(z_1) \in \overrightarrow{QR}$ and
$u(z_2) = P$. (The disks with marked points $(u, z_1, z_2)$ are
considered modulo parametrization by $Aut(D)$ of course.) Standard
arguments show that for a generic choice of $J$ the number $n_P(A)$ is
finite.
The count $n_P(A)$ should take into account the orientations of all
the spaces involved. To this end we will use here the orientation
conventions from~\cite{Bi-Co:lagtop} and describe $n_P(A)$ via a fiber
product. More precisely we use the spin structure on $L$ to orient
$\mathcal{M}_2(A,J)$ and define: $$n_P(A) = \#
\bigl(\overrightarrow{QR} \times_L \mathcal{M}_2(A,J) \times_L \{P\}
\bigr),$$ where the left fiber product is defined using $ev_1$, the
right one using $ev_2$ and $\#$ stands for the total number of points
in an oriented finite set, counted with signs.
Similarly, set:
\begin{align*}
& n_Q(A) := \# \bigl(\overrightarrow{RP} \times_L
\mathcal{M}_2(A,J) \times_L \{Q\} \bigr), \\
& n_R(A) := \# \bigl(\overrightarrow{PQ} \times_L
\mathcal{M}_2(A,J) \times_L \{R\} \bigr).
\end{align*}
Define now $$n_P := \sum n_P(A) T^A \in \widetilde{\Lambda}^+,$$ where
the sum runs over all $A \in H_2^D$ with $\mu(A)=n$. Similarly define
$n_Q, n_R \in \widetilde{\Lambda}^+$.
Next, let $B \in H_2^D$ with $\mu(B) = 2n$. We would like to count the
number of $J$-holomorphic disks in the class $B$ with boundary passing
through $P,Q,R$ (in this order!). The precise definition goes as
follows. Consider the map
$$ev_{1,2,3} = ev_1 \times ev_2 \times ev_3: \mathcal{M}_3(B,J)
\longrightarrow L \times L \times L.$$ Standard arguments imply that
for a generic choice of $J$, $(ev_{1,2,3})^{-1}(P, Q, R)$ is a finite
oriented set. Consider the number of points in that set, namely
define:
$$n_{PQR}(B) := \# (ev_{1,2,3})^{-1}(P,Q,R),$$
where the count takes orientations into account. Finally define
$$n_{PQR} := \sum n_{PQR}(B) T^B \in \widetilde{\Lambda}^+,$$ where the
sum is taken over all classes $B \in H_2^D$ with $\mu(B) = 2n$.
We remark that the numbers $n_P(A)$ (as well as the element $n_P \in
\widetilde{\Lambda}^+$) are not invariant in the sense that they
depend on the choices of the points $P,Q,R$ and of $J$. The same
happens with $n_Q, n_R$ and presumably with $n_{PQR}$ too.
\begin{thm}[cf.\ Theorem~6.2.2 in~\cite{Bi-Co:lagtop}]
\label{t:enum}
Let $L \subset M$ be as above. Then
\begin{equation} \label{eq:discr-enum} \widetilde{\Delta}_L= 4
n_{PQR} + n_P^2 + n_Q^2 + n_R^2 - 2n_P n_Q - 2n_Q n_R - 2n_R
n_P.
\end{equation}
\end{thm}
We omit the proof since it is a straightforward generalization of the
proof of the analogous theorem in~\cite{Bi-Co:lagtop} (see
Section~6.2.3 in that paper).
In view of the Lagrangian cubic equation~\eqref{eq:cubic-L-full-ring}
from page~\pageref{eq:cubic-L-full-ring} and
Corollary~\ref{c:sig_L-sphere} we can calculate the right-hand side
of~\eqref{eq:discr-enum} via the ambient quantum homology of $M$.
Note that if we choose the points $P,Q,R$ in specific positions,
formula~\eqref{eq:discr-enum} might become simpler. For example, if we
fix the point $P$ then for a suitable (yet generic) choice of the
points $Q$ and $R$ we can make $n_P = 0$.
The formula then becomes $\widetilde{\Delta}_L= 4 n_{PQR} + (n_R-n_Q)^2$.
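This simplification is pure algebra: substituting $n_P = 0$ into the quadratic part of~\eqref{eq:discr-enum} gives $n_Q^2 + n_R^2 - 2 n_Q n_R = (n_R - n_Q)^2$. A throwaway computational check of this identity (a sketch in Python; integer samples stand in for elements of $\widetilde{\Lambda}^+$):

```python
# Check: with n_P = 0, the quadratic part of the discriminant formula
#   n_P^2 + n_Q^2 + n_R^2 - 2 n_P n_Q - 2 n_Q n_R - 2 n_R n_P
# reduces to (n_R - n_Q)^2.  Testing on a grid of integer samples
# suffices for a degree-2 polynomial identity in two variables.
def quadratic_part(nP, nQ, nR):
    return nP**2 + nQ**2 + nR**2 - 2*nP*nQ - 2*nQ*nR - 2*nR*nP

for nQ in range(-5, 6):
    for nR in range(-5, 6):
        assert quadratic_part(0, nQ, nR) == (nR - nQ)**2
```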
\begin{rem} \label{r:enum-full-ring} In contrast to
Theorem~\ref{t:enum} the analogous statement
from~\cite{Bi-Co:lagtop} (Theorem~6.2.2 in that paper) for
Lagrangian tori does not work over $\widetilde{\Lambda}^+$. The
reason is that Lagrangian tori are often not wide over
$\widetilde{\Lambda}^+$ in the sense that for such Lagrangians
$QH_*(L; \widetilde{\Lambda}^+)$ might not be isomorphic to $H_*(L;
\widetilde{\Lambda}^+)$. For this reason Theorem~6.2.2
in~\cite{Bi-Co:lagtop} is stated over the variety of
representations $\rho: H_2^D \to \mathbb{C}^*$ for which the
Lagrangian quantum homology $QH_*(L;\Lambda^{\rho})$ with $\rho$-twisted
coefficients is isomorphic to $H_*(L)$. In contrast, if $L$ is an
even dimensional Lagrangian sphere then we always have $QH_*(L;
\widetilde{\Lambda}^+) \cong H_*(L; \widetilde{\Lambda}^+)$ (though
possibly not in a canonical way).
\end{rem}
\section{Finer invariants over the positive group ring} \setcounter{thm}{0}
\label{s:coeffs}
Much of the theory developed in the previous sections can be enriched
so that the discriminant $\Delta_L$ and the cubic equation take into
account the homology classes of the holomorphic curves involved in
their definition. The result is clearly a finer invariant.
We now briefly explain this generalization. Let $L \subset (M,
\omega)$ be a monotone Lagrangian submanifold. Denote by $H_2^D(M,L)
\subset H_2(M,L;\mathbb{Z})$ the image of the Hurewicz homomorphism
$\pi_2(M,L) \longrightarrow H_2(M,L;\mathbb{Z})$. We abbreviate $H_2^D
= H_2^D(M,L)$ when $L$ is clear from the discussion.
We will use here the ring $\widetilde{\Lambda}^+$, introduced
in~\cite{Bi-Co:rigidity}, which is the most general ring of
coefficients for Lagrangian quantum homology. It can be viewed as a
positive version (with respect to $\mu$) of the group ring over
$H_2^D$. Specifically, denote by $\widetilde{\Lambda}^+$ the following
ring:
\begin{equation} \label{eq:lambda-tilde+}
\widetilde{\Lambda}^+ = \biggl \{ p(T) \mid
p(T) = c_0 + \sum_{\substack{A \in H_2^D \\ \mu(A)>0}} c_A T^{A},
\quad c_0, c_A \in \mathbb{Z}\biggr \}.
\end{equation}
We grade $\widetilde{\Lambda}^+$ by assigning to the monomial $T^A$
degree $|T^A| = -\mu(A)$. Note that the degree-$0$ component of
$\widetilde{\Lambda}^+$ is just $\mathbb{Z}$ (not linear combinations
of $T^{A}$ with $\mu(A)=0$). As explained in~\cite{Bi-Co:rigidity} we
can define $QH(L;\widetilde{\Lambda}^+)$, and in fact $QH(L;
\mathcal{R})$ for rings $\mathcal{R}$ which are
$\widetilde{\Lambda}^+$-algebras.
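For readers who like to experiment, $\widetilde{\Lambda}^+$ is concretely just a positively supported group ring, and its arithmetic is easy to model on a computer. The following minimal sketch (our own toy model, not part of the theory) represents a class $A \in H_2^D$ by its coordinate tuple in a chosen basis and the Maslov index by a linear functional on those coordinates:

```python
# Toy model of the positive group ring \widetilde{\Lambda}^+:
# an element is a dict {A: coefficient} with A a tuple of integers
# (coordinates of a class in a chosen basis of H_2^D).  mu_weights
# lists the Maslov indices of the basis elements.

def mu(A, mu_weights):
    return sum(a * w for a, w in zip(A, mu_weights))

def mul(p, q):
    # multiply two elements: T^A * T^B = T^{A+B}
    r = {}
    for A, cA in p.items():
        for B, cB in q.items():
            C = tuple(a + b for a, b in zip(A, B))
            r[C] = r.get(C, 0) + cA * cB
    return {A: c for A, c in r.items() if c != 0}

def degree(p, mu_weights):
    # |T^A| = -mu(A); return the set of degrees appearing in p
    return {-mu(A, mu_weights) for A in p}

# Example: basis {H, E} for H_2^D of the 2-point blow-up, with
# Maslov indices mu(H) = 6 and mu(E) = 2.
w = (6, 2)
TH = {(1, 0): 1}            # T^H
TE = {(0, 1): 1}            # T^E
prod = mul(TH, TE)          # T^{H+E}
assert prod == {(1, 1): 1}
assert degree(prod, w) == {-8}   # degrees add: -6 + (-2)
```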
Similarly to $\widetilde{\Lambda}^+$ we associate to the ambient
manifold the ring $\widetilde{\Gamma}^+$. This ring is defined in the
same way as $\widetilde{\Lambda}^+$ but with $H_2^D$ replaced by
$H_2^S := \textnormal{image\,} (\pi_2(M) \longrightarrow
H_2(M;\mathbb{Z}))$ and with $\mu(A)>0$ replaced by $\langle c_1, A
\rangle > 0$ in~\eqref{eq:lambda-tilde+}. To avoid confusion we will
denote the formal variable in $\widetilde{\Gamma}^+$ by $S$ and we
grade $|S^A| = -2\langle c_1, A \rangle$. Similarly to $QH(L;
\widetilde{\Lambda}^+)$ we can define the ambient quantum homology
$QH(M; \widetilde{\Gamma}^+)$ with coefficients in
$\widetilde{\Gamma}^+$ and in fact with coefficients in any ring
$\mathcal{A}$ which is a $\widetilde{\Gamma}^+$-algebra. In
particular, since the map $H_2^S \longrightarrow H_2^D$ gives
$\widetilde{\Lambda}^+$ the structure of a
$\widetilde{\Gamma}^+$-algebra, we can define $QH(M;
\widetilde{\Lambda}^+) = QH(M; \widetilde{\Gamma}^+)
\otimes_{\widetilde{\Gamma}^+} \widetilde{\Lambda}^+$.
Assume for simplicity that $L$ satisfies the assumptions of
Proposition~\ref{p:criterion}. Then the conclusion of
Proposition~\ref{p:criterion} holds with $HF(L,L)$ replaced by
$QH(L;\widetilde{\Lambda}^+)$ in the sense that
{$\textnormal{rank}_{\mathbb{Z}} \, QH_0(L;
\widetilde{\Lambda}^+)/\widetilde{\Lambda}^+_{-n} e_L = 1$, where
$\widetilde{\Lambda}^+_{-n} \subset \widetilde{\Lambda}^+$ stands
for the subgroup generated by the homogeneous elements of degree
$-n$.} Assume further that $L$ is oriented and spinable. Again, the
main example satisfying all these assumptions is $L$ being a
Lagrangian sphere in a monotone symplectic manifold $M$ with $2 C_M |
\dim L$.
The definition of the discriminant $\Delta_L$ carries over to this
setting as follows. Pick an element $x \in QH_0(L;
\widetilde{\Lambda}^+)$ which lifts $[\textnormal{point}] \in H_0(L)$
as in~\S\ref{sbsb:Q=QH_n}. Write $$x*x = \widetilde{\sigma} x +
\widetilde{\tau} e_L,$$ where $\widetilde{\sigma}, \widetilde{\tau}
\in \widetilde{\Lambda}^+$ are elements of degrees
$|\widetilde{\sigma}| = - n$ and $|\widetilde{\tau}| = -2n$
respectively. As before, the elements $\widetilde{\sigma}$ and
$\widetilde{\tau}$ depend on $x$. Define
$$\widetilde{\Delta}_L = \widetilde{\sigma}^2 + 4 \widetilde{\tau} \in
\widetilde{\Lambda}^+.$$ The same arguments as
in~\S\ref{sb:more-discr} show that $\widetilde{\Delta}_L$ is
independent of the choice of $x$.
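Indeed, any two such lifts differ by $x' = x + \lambda e_L$, and since $e_L$ is the unit, $x' * x' = (\widetilde{\sigma} + 2\lambda) x' + (\widetilde{\tau} - \lambda \widetilde{\sigma} - \lambda^2) e_L$, whence $(\widetilde{\sigma} + 2\lambda)^2 + 4(\widetilde{\tau} - \lambda \widetilde{\sigma} - \lambda^2) = \widetilde{\sigma}^2 + 4\widetilde{\tau}$. A quick machine check of this algebra (a sketch; integers stand in for ring elements of the appropriate degrees):

```python
# If x' = x + lam*e_L then, since e_L is the unit,
#   x'*x' = x*x + 2*lam*x + lam^2*e_L
#         = (sig + 2*lam)*x' + (tau - lam*sig - lam^2)*e_L,
# so sigma' = sig + 2*lam and tau' = tau - lam*sig - lam^2.
# The discriminant sig^2 + 4*tau is unchanged:
def disc(sig, tau):
    return sig**2 + 4*tau

for sig in range(-4, 5):
    for tau in range(-4, 5):
        for lam in range(-4, 5):
            sig2 = sig + 2*lam
            tau2 = tau - lam*sig - lam**2
            assert disc(sig2, tau2) == disc(sig, tau)
```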
Theorems~\ref{t:cubic-eq-sphere},~\ref{t:cubic-eq} continue to hold
but the cubic equation~\eqref{eq:cubic-L} now has the form:
\begin{equation} \label{eq:cubic-L-full-ring} [L]^{*3} - \varepsilon
\chi \widetilde{\sigma}_L [L]^{*2} - \chi^2 \widetilde{\tau}_L [L]
= 0,
\end{equation}
where $\widetilde{\sigma}_L \in \tfrac{1}{\chi^2}
\widetilde{\Lambda}^+$, $\widetilde{\tau}_L \in \tfrac{1}{\chi^3}
\widetilde{\Lambda}^+$ are uniquely determined. (Note that
in~\eqref{eq:cubic-L-full-ring} we do not have the variable $q$
anymore since the elements $\chi^2 \widetilde{\sigma}_L, \chi^3
\widetilde{\tau}_L$ are assumed in advance to be in the ring
$\widetilde{\Lambda}^+$.) As for identity~\eqref{eq:GW-LLL}, it now
becomes:
\begin{equation} \label{eq:GW-LLL-full-ring} \widetilde{\sigma}_L =
\frac{1}{\chi^2} \sum_A GW_{A,3}([L],[L],[L])T^{j(A)},
\end{equation}
where $j: H_2^S \longrightarrow H_2^D$ is the map induced by
inclusion.
Analogous versions of Theorem~\ref{t:cubic_eq-ccl} hold over
$\widetilde{\Lambda}^+$ too.
Denoting by $\bar{L}$ the Lagrangian $L$ with the opposite orientation,
it is easy to check that
\begin{equation} \label{eq:phi-action}
\widetilde{\sigma}_{\bar{L}} =
-\widetilde{\sigma}_L, \quad \widetilde{\tau}_{\bar{L}} =
\widetilde{\tau}_L, \quad \widetilde{\Delta}_{\bar{L}} =
\widetilde{\Delta}_L.
\end{equation}
We now discuss the action of symplectic diffeomorphisms on these
invariants. Let $\varphi: M \longrightarrow M$ be a symplectomorphism.
The action $\varphi^M_*: H_2^S \longrightarrow H_2^S$ of $\varphi$ on
homology induces an isomorphism of rings $\varphi_{\Gamma}:
\widetilde{\Gamma}^+ \longrightarrow \widetilde{\Gamma}^+$. Put $L' =
\varphi(L)$. Instead of the preceding ring $\widetilde{\Lambda}^+$ we
now have two rings $\widetilde{\Lambda}_L^+$ and
$\widetilde{\Lambda}_{L'}^+$ associated to $L$ and to $L'$
respectively. The action $\varphi^{(M,L)}_*: H_2^D(M,L) \longrightarrow
H_2^D(M,L')$ of $\varphi$ on homology induces an isomorphism of rings
$\varphi_{\Lambda}: \widetilde{\Lambda}_L^+ \longrightarrow
\widetilde{\Lambda}_{L'}^+$. Moreover, writing an
$\mathcal{R}$-algebra $\mathcal{A}$ as $_{\mathcal{R}}\mathcal{A}$,
the pair of maps $(\varphi_{\Lambda}, \varphi_{\Gamma})$ gives rise to
an isomorphism of algebras
$_{\widetilde{\Gamma}^+}\widetilde{\Lambda}_L^+ \longrightarrow \,
_{\widetilde{\Gamma}^+} \widetilde{\Lambda}_{L'}^+$.
Turning to quantum homologies, standard arguments together with the
previous discussion yield two ring isomorphisms (both denoted
$\varphi_Q$ by abuse of notation): $$\varphi_Q: QH(L;
\widetilde{\Lambda}^+_{L}) \longrightarrow QH(L';
\widetilde{\Lambda}^+_{L'}), \quad \varphi_Q: QH(M;
\widetilde{\Lambda}^+_{L}) \longrightarrow QH(M;
\widetilde{\Lambda}^+_{L'}),$$ which are linear over
$\widetilde{\Gamma}^+$ via $\varphi_{\Gamma}$ and also
$(\widetilde{\Lambda}_L^+, \widetilde{\Lambda}_{L'}^+)$-linear via
$\varphi_{\Lambda}$. Most of the theory from~\S\ref{sb:HF} extends,
with suitable modifications, to the present setting.
The following theorem follows immediately from the preceding discussion
and~\eqref{eq:phi-action} above:
\begin{thm} \label{t:phi-action}
Let $\varphi: M \longrightarrow M$ be a symplectomorphism. Then:
$$\widetilde{\sigma}_{\varphi(L)} =
\varphi_{\Lambda}(\widetilde{\sigma}_L), \quad
\widetilde{\tau}_{\varphi(L)} =
\varphi_{\Lambda}(\widetilde{\tau}_L), \quad
\widetilde{\Delta}_{\varphi(L)} =
\varphi_{\Lambda}(\widetilde{\Delta}_L).$$ In particular
$\widetilde{\tau}_L$ and $\widetilde{\Delta}_L$ are invariant under
the action of the group $\textnormal{Symp}(M,L)$ of
symplectomorphisms $\varphi: (M, L) \longrightarrow (M,L)$ and
$\widetilde{\sigma}_L$ is invariant under the action of the
subgroup $\textnormal{Symp}^+(M,L) \subset \textnormal{Symp}(M,L)$
of those $\varphi$'s that preserve the orientation on $L$. If
$\varphi \in \textnormal{Symp}(M,L)$ reverses orientation on $L$
then $\varphi_{\Lambda}(\widetilde{\sigma}_L) =
-\widetilde{\sigma}_L$.
\end{thm}
Next we have the following analogue of Corollary~\ref{c:sig=0}:
\begin{cor} \label{c:sig_L-sphere} Let $L \subset M$ be a Lagrangian
sphere, where $M$ is a monotone symplectic manifold with $2C_M |
\dim L$. Then $\widetilde{\sigma}_L = 0$. In particular,
$\widetilde{\Delta}_L = 4 \widetilde{\tau}_L$.
\end{cor}
\begin{proof}
Denote by $\varphi : M \longrightarrow M$ the Dehn-twist associated
to the Lagrangian sphere $L$. Since $n = \dim L$ is even, the
restriction $\varphi|_L$ reverses orientation on $L$. By
Theorem~\ref{t:phi-action},
$\varphi_{\Lambda}(\widetilde{\sigma}_L) = -\widetilde{\sigma}_L$.
Thus the corollary would follow if we show that $\varphi_{\Lambda}
= \textnormal{id}$. To show the latter we need to prove that the
map induced by $\varphi$ on homology $\varphi^{(M,L)}_*: H_2(M, L)
\longrightarrow H_2(M, L)$ is the identity.
Assume first that $n>2$. Then the map induced by inclusion $H_2(M)
\to H_2(M,L)$ is an isomorphism. Moreover, for every $A \in H_2(M)$
we can find a cycle $C$ representing $A$ which lies in the
complement of the support of $\varphi$. This shows that
$\varphi^{M}_*(A)=A$ hence $\varphi^{(M,L)}_* = \textnormal{id}$.
Assume now that $n=2$. Then we have $H_2(M,L) \cong
H_2(M)/\mathbb{Z}[L]$. By the Picard-Lefschetz formula, the action
of $\varphi^{M}_*$ on $H_2(M)$ is given by:
$$\varphi^{M}_*(A) = A + \#(A \cdot [L])[L].$$ It immediately follows that
$\varphi^{(M,L)}_*: H_2(M,L) \longrightarrow H_2(M,L)$ is the identity.
\end{proof}
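The $n=2$ step of the proof is easy to check by machine in a concrete case. The sketch below (our own choice of example) takes $M = M_2$ with basis $(H, E_1, E_2)$ of $H_2(M)$, intersection form $\mathrm{diag}(1,-1,-1)$ and $[L] = E_1 - E_2$, and verifies that the Picard-Lefschetz map moves every class by an integer multiple of $[L]$, hence descends to the identity on $H_2(M)/\mathbb{Z}[L]$:

```python
# Picard-Lefschetz action of the Dehn twist along L on H_2(M),
#   phi(A) = A + (A . L) L,
# in the basis (H, E_1, E_2) with intersection form diag(1, -1, -1).
Q = (1, -1, -1)                     # diagonal intersection form
L = (0, 1, -1)                      # [L] = E_1 - E_2

def dot(A, B):
    return sum(q*a*b for q, a, b in zip(Q, A, B))

def phi(A):
    k = dot(A, L)
    return tuple(a + k*l for a, l in zip(A, L))

assert dot(L, L) == -2              # L is a (-2)-class
assert phi(L) == (0, -1, 1)         # phi reverses [L]
for A in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, -1, 3)]:
    diff = tuple(x - y for x, y in zip(phi(A), A))
    # diff must be an integer multiple of L = (0, 1, -1)
    assert diff[0] == 0 and diff[1] == -diff[2]
```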
\subsection{Other rings of interest} \setcounter{thm}{0} \label{sb:other-rings}
The results in this section continue to hold if we replace the ring
$\widetilde{\Lambda}^+$ by any $\widetilde{\Lambda}^+$-algebra
$\mathcal{R}$ (graded or not). See Section 2.1.2
of~\cite{Bi-Co:rigidity} for the precise definitions (in the graded
case). Such a structure is defined e.g. by specifying a ring
homomorphism $\eta: \widetilde{\Lambda}^+ \longrightarrow
\mathcal{R}$. The most natural examples are:
\begin{enumerate}
\item $\mathcal{R} = \mathbb{Z}$, $\mathbb{Q}$ or $\mathbb{C}$,
where $\eta(T^A) = 1$.
\item $\mathcal{R} = \mathbb{Z}[t^{-1}, t]$, where $\eta(T^{A}) =
t^{\mu(A)/N_L}$.
\item $\mathcal{R} = \mathbb{C}$, with $\eta(T^A) = \rho(A)$, where
$\rho: H_2^D \longrightarrow \mathbb{C}^*$ is a given group
homomorphism. This is sometimes referred to as {\em twisted
coefficients}.
\item $\mathcal{R} = \Lambda_{\textnormal{Nov}}$ is the Novikov
ring (say in the variable $u$), and $\eta(T^A) = u^{\omega(A)}$.
\item Combinations of~(3) with any of the other possibilities.
\item \label{i:rings-quotient} $\mathcal{R}$ is defined similarly to
$\widetilde{\Lambda}^+$ but instead of taking powers $T^A$ with
$A \in H_2^D$ we take $A \in H_2^D/K$, where $K \subset \ker \mu$.
See Remark~\ref{r:cob-full-ring} for such an example. (Of course
we can take quotients by a subgroup $K \subset H_2^D$ with $\mu|_K
\neq 0$. Then we can still define an
$\widetilde{\Lambda}^+$-algebra $\mathcal{R}$ by taking all linear
combinations of $T^A$ with $A \in H_2^D/K$.)
\end{enumerate}
In all cases the Lagrangian cubic equation will hold with coefficients
in $\mathcal{R}$ and the coefficients $\sigma^{\mathcal{R}}_L,
\tau^{\mathcal{R}}_L$ and discriminant $\Delta^{\mathcal{R}}_L$ will
now be elements of $\mathcal{R}$. Moreover if $\eta:
\widetilde{\Lambda}^+ \to \mathcal{R}$ is the ring homomorphism
defining the $\widetilde{\Lambda}^+$-algebra structure on
$\mathcal{R}$ then $\eta$ induces ring homomorphisms $QH(L;
\widetilde{\Lambda}^+) \longrightarrow QH(L; \mathcal{R})$ and
$\eta_{Q}: QH(M; \widetilde{\Lambda}^+) \longrightarrow QH(M;
\mathcal{R})$. Applying $\eta_Q$ to the cubic
equation~\eqref{eq:cubic-L-full-ring} we obtain the cubic equation
over $\mathcal{R}$. Similarly
$$\eta(\widetilde{\sigma}_L) =
\sigma^{\mathcal{R}}_L,\quad \eta(\widetilde{\tau}_L) =
\tau^{\mathcal{R}}_L,\quad \eta (\widetilde{\Delta}_L) =
\Delta^{\mathcal{R}}_L.$$ Of course if we take $\mathcal{R} =
\mathbb{Z}$ or $\mathbb{Q}$ with $\eta(T^A) = 1$ then $\eta_Q$ sends
equation~\eqref{eq:cubic-L-full-ring} to the original cubic
equation~\eqref{eq:cubic-L} with $q = 1$ and
$\eta(\widetilde{\sigma}_L) = \sigma_L$, $\eta(\widetilde{\tau}_L) =
\tau_L$, $\eta(\widetilde{\Delta}_L) = \Delta_L$.
\begin{rem} \label{r:cob-full-ring} Analogues of Theorem~\ref{t:cob}
and Corollary~\ref{c:del_1=del_2} should carry over to the present
setting if we replace $\widetilde{\Lambda}^+$ by the
$\widetilde{\Lambda}^+$-algebra $\mathcal{R}$ defined as in
point~\eqref{i:rings-quotient} of the above list where we quotient
$H_2^D$ by the subgroup $K = \ker \bigl( H^D_2(M,\partial V)
\longrightarrow H_2(\mathbb{R}^2 \times M, V) \bigr)$.
\end{rem}
\subsection{Examples revisited} \setcounter{thm}{0} \label{sb:examples-revisited}
Here we briefly present the outcome of the calculation of our
invariants $\widetilde{\tau}_L$ and $\widetilde{\Delta}_L$ for
Lagrangian spheres on blow-ups of ${\mathbb{C}}P^2$ at $2 \leq k \leq
6$ points. (As for $\widetilde{\sigma}_L$, recall that it vanishes
when $L$ is a sphere.) We use similar notation as
in~\S\ref{sbsb:intro-exp-blcp2}. For simplicity we denote by $u \in
H_4(M_k)$ the fundamental class viewed as the unity of $QH(M_k)$. As
before we appeal to~\cite{Cra-Mir:QH} for the calculation of the
quantum homology of the ambient manifolds. Since the explicit
calculations in $QH(M_k)$ turn out to be very lengthy we often omit
the details and present only the end results (full details can be
found in~\cite{Mem:PhD-thesis}). We recall again that in $QH(M;
\widetilde{\Gamma}^+)$ the quantum variables are now denoted by
$S^{A}$ where $A \in H_2^S$.
\subsubsection{2-point blow-up of $\mathbb{C}P^2$}
$QH(M_2; \widetilde{\Gamma}^+)$ has the following ring structure:
\begin{align*}
& p * p = H S^H + uS^{2H - E_1- E_2} \\
& p * H = (H - E_1)S^{H-E_1} + (H - E_2)S^{H - E_2} + u S^H \\
& p * E_1 = (H - E_1) S^{H - E_1} \\
& p * E_2 = (H - E_2) S^{H - E_2} \\
& H*H = p + (H - E_1 - E_2) S^{H - E_1 - E_2} +
u(S^{H - E_1} + S^{H - E_2} ) \\
& H * E_1 = (H - E_1 - E_2) S^{H - E_1 - E_2} + u S^{H - E_1} \\
& H * E_2 = (H - E_1 - E_2) S^{H - E_1 - E_2} + u S^{H - E_2} \\
& E_1 * E_1 = -p + (H - E_1 - E_2) S^{H - E_1 - E_2} + E_1
S^{E_1} + u S^{H - E_1} \\
& E_2 * E_2 = -p + (H - E_1 - E_2) S^{H - E_1 - E_2} + E_2
S^{E_2} + u S^{H - E_2} \\
& E_1 * E_2 = (H - E_1 - E_2) S^{H - E_1 - E_2}.
\end{align*}
Let $L \subset M_2$ be a Lagrangian sphere in the class $[L]=E_1-E_2$.
Then $H_2^D = H_2(M,L) \cong H_2(M) / H_2(L)$ and as a basis for
$H_2^D$ we can choose $\{H, E\}$, where $E$ stands for the image of
both $E_1$ and $E_2$ in $H_2(M) / H_2(L)$. (Thus in
$\widetilde{\Lambda}^+$ we have $S^{E_1} = S^{E_2} = T^{E}$.)
A straightforward calculation gives:
$$(E_1 - E_2)^{*3} = (T^{2E} + 4 T^{H - E}) (E_1 - E_2), \quad
\widetilde{\Delta}_L = 4\widetilde{\tau}_L = T^{2E} + 4 T^{H - E}.$$
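Calculations of this kind can be mechanized. The sketch below (data layout and names are ours) encodes the multiplication table of $QH(M_2; \widetilde{\Gamma}^+)$ displayed above, with a class $A \in H_2^S$ stored as its coordinates $(h, e_1, e_2)$ in the basis $(H, E_1, E_2)$, pushes coefficients to $\widetilde{\Lambda}^+$ via $(h, e_1, e_2) \mapsto (h, e_1 + e_2)$, and recovers the cubic relation for $E_1 - E_2$:

```python
Z = (0, 0, 0)  # trivial class in H_2^S

def elem(*terms):
    """Build an element of QH(M_2) from (coeff, class, A) triples."""
    d = {}
    for c, cls, A in terms:
        d[(cls, A)] = d.get((cls, A), 0) + c
    return {k: v for k, v in d.items() if v}

# Multiplication table of QH(M_2; Gamma+) from the text;
# the quantum variable S^A is recorded as A = (h, e1, e2).
T = {
  ('p','p'):  elem((1,'H',(1,0,0)), (1,'u',(2,-1,-1))),
  ('p','H'):  elem((1,'H',(1,-1,0)), (-1,'E1',(1,-1,0)),
                   (1,'H',(1,0,-1)), (-1,'E2',(1,0,-1)), (1,'u',(1,0,0))),
  ('p','E1'): elem((1,'H',(1,-1,0)), (-1,'E1',(1,-1,0))),
  ('p','E2'): elem((1,'H',(1,0,-1)), (-1,'E2',(1,0,-1))),
  ('H','H'):  elem((1,'p',Z), (1,'H',(1,-1,-1)), (-1,'E1',(1,-1,-1)),
                   (-1,'E2',(1,-1,-1)), (1,'u',(1,-1,0)), (1,'u',(1,0,-1))),
  ('H','E1'): elem((1,'H',(1,-1,-1)), (-1,'E1',(1,-1,-1)),
                   (-1,'E2',(1,-1,-1)), (1,'u',(1,-1,0))),
  ('H','E2'): elem((1,'H',(1,-1,-1)), (-1,'E1',(1,-1,-1)),
                   (-1,'E2',(1,-1,-1)), (1,'u',(1,0,-1))),
  ('E1','E1'): elem((-1,'p',Z), (1,'H',(1,-1,-1)), (-1,'E1',(1,-1,-1)),
                    (-1,'E2',(1,-1,-1)), (1,'E1',(0,1,0)), (1,'u',(1,-1,0))),
  ('E2','E2'): elem((-1,'p',Z), (1,'H',(1,-1,-1)), (-1,'E1',(1,-1,-1)),
                    (-1,'E2',(1,-1,-1)), (1,'E2',(0,0,1)), (1,'u',(1,0,-1))),
  ('E1','E2'): elem((1,'H',(1,-1,-1)), (-1,'E1',(1,-1,-1)),
                    (-1,'E2',(1,-1,-1))),
}

def basic_mul(a, b):
    if a == 'u':  return {(b, Z): 1}   # u is the unit
    if b == 'u':  return {(a, Z): 1}
    return T.get((a, b)) or T[(b, a)]

def mul(x, y):
    r = {}
    for (a, A), ca in x.items():
        for (b, B), cb in y.items():
            for (c, C), cc in basic_mul(a, b).items():
                key = (c, tuple(p+q+s for p, q, s in zip(A, B, C)))
                r[key] = r.get(key, 0) + ca*cb*cc
    return {k: v for k, v in r.items() if v}

def to_lambda(x):
    # push coefficients from Gamma+ to Lambda+: E_1, E_2 both map to E
    r = {}
    for (c, (h, e1, e2)), v in x.items():
        key = (c, (h, e1 + e2))
        r[key] = r.get(key, 0) + v
    return {k: v for k, v in r.items() if v}

L = elem((1,'E1',Z), (-1,'E2',Z))
L3 = to_lambda(mul(L, mul(L, L)))
# expected: (T^{2E} + 4 T^{H-E}) (E_1 - E_2)
expected = {('E1', (0, 2)): 1, ('E2', (0, 2)): -1,
            ('E1', (1, -1)): 4, ('E2', (1, -1)): -4}
assert L3 == expected
```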
\subsubsection{3-point blow-up of $\mathbb{C}P^2$}
The multiplication table for $QH(M_3; \widetilde{\Gamma}^+)$ is rather
long hence we omit it here (see~\cite{Mem:PhD-thesis} for these
details).
Consider first Lagrangian spheres $L \subset M_3$ in the class $[L] =
E_1 - E_2$. We choose $\{H, E, E_3\}$ as a basis for $H_2^D$, where
$E$ stands for the image of both $E_1$ and $E_2$ in $H_2^D$. A
straightforward calculation using the Lagrangian cubic equation gives
$$\widetilde{\Delta}_L = 4 \widetilde{\tau}_L =
4 T^{H - E} + T^{2E} - 2 T^{H - E_3} + T^{2H - 2E - 2 E_3}.$$
As explained in Remark~\ref{r:1-point}, we expect that there exist
Lagrangian spheres $L_1$, $L_2$ with $[L_1] =E_1 - E_2, [L_2] = E_2 -
E_3$ such that $L_1$ and $L_2$ intersect transversely at exactly one
point. By Remark~\ref{r:cob-full-ring} we should have
$$\widetilde{\Delta}_{L_1} = \widetilde{\Delta}_{L_2} =
\textnormal{perfect square},$$ if we replace the ring
$\widetilde{\Lambda}^+$ by a quotient of it where $T^{E_1}, T^{E_2},
T^{E_3}$ are all identified. The common discriminant of $L_1$ and
$L_2$ (which we now denote by $\widetilde{\Delta}'$) becomes in this
setting:
$$\widetilde{\Delta}' =
2 T^{H - E} + T^{2E} + T^{2H - 4E} = (T^{H-2E} + T^E)^2,$$ where we
have written here $T^{E}$ for the $T^{E_i}$'s. Similar calculations
should apply to the examples discussed
in~\S\ref{sbsb:M_4-full-ring}~--~\S\ref{sbsb:M_6-full-ring}.
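The factorization of $\widetilde{\Delta}'$ is a one-line check in the group ring, writing a class as its exponent pair $(h,e)$ in the basis $\{H, E\}$; a quick machine verification (a sketch):

```python
# Verify  2 T^{H-E} + T^{2E} + T^{2H-4E} = (T^{H-2E} + T^{E})^2
# in the group ring; a class is stored as an exponent pair (h, e).
def mul(p, q):
    r = {}
    for A, cA in p.items():
        for B, cB in q.items():
            C = (A[0] + B[0], A[1] + B[1])
            r[C] = r.get(C, 0) + cA*cB
    return {A: c for A, c in r.items() if c}

s = {(1, -2): 1, (0, 1): 1}              # T^{H-2E} + T^{E}
lhs = {(1, -1): 2, (0, 2): 1, (2, -4): 1}  # 2T^{H-E} + T^{2E} + T^{2H-4E}
assert mul(s, s) == lhs
```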
Next we consider Lagrangian spheres $L \subset M_3$ with $[L] = H - E_1 - E_2
- E_3$. We work with the basis $\{E_1, E_2, E_3\}$ for $H_2^D$. Direct
calculation gives
$$\widetilde{\Delta}_L = 4\widetilde{\tau}_L = T^{2 E_1} + T^{2 E_2} +
T^{2 E_3} - 2 T^{E_1 + E_2} - 2 T^{E_1 + E_3} - 2 T^{E_2 + E_3}.$$
\subsubsection{4-point blow-up of $\mathbb{C}P^2$}
\label{sbsb:M_4-full-ring} Consider Lagrangian spheres in the class
$[L] = E_1 - E_2$ and work with the basis $\{H, E, E_3, E_4\}$, where
$E = [E_1]=[E_2] \in H_2^D$. Omitting the details of a rather long
calculation we obtain:
$$\widetilde{\Delta}_L = 4 \widetilde{\tau}_L = T^{2E} + 4 T^{H - E} -
2T^{H - E_3} - 2T^{H - E_4} + T^{2H - 2E - 2E_3} +T^{2H - 2E - 2E_4} -
2T^{2H - 2E - E_3 - E_4}.$$
For Lagrangian spheres in the class $[L] = H - E_1 - E_2 - E_3$ we
obtain: $$\widetilde{\Delta}_L = 4 \widetilde{\tau}_L = T^{2E_1} +
T^{2E_2} + T^{2E_3} - 2T^{E_1+E_2} - 2T^{E_1+E_3} - 2T^{E_2+E_3} + 4
T^{E_1+E_2+E_3-E_4},$$ where we have worked here with the basis
$\{E_1, E_2, E_3, E_4\}$ for $H_2^D$.
\subsubsection{5-point blow-up of $\mathbb{C}P^2$}
Consider Lagrangian spheres in the class $[L] = E_1 - E_2$ and work
with the basis $\{H, E, E_3, E_4, E_5\}$, where $E = [E_1]=[E_2] \in
H_2^D$. Omitting the details of a rather long calculation we obtain:
\begin{align*}
\widetilde{\Delta}_L = 4 \widetilde{\tau}_L = \, & T^{2E} + 4 T^{H
- E} -
2T^{H - E_3} - 2T^{H - E_4} - 2T^{H - E_5} \\
& + T^{2H - 2E - 2E_3} + T^{2H - 2E - 2E_4} + T^{2H - 2E - 2E_5} \\
& - 2T^{2H - 2E - E_3 - E_4} - 2T^{2H - 2E - E_3 - E_5} -
2T^{2H - 2E - E_4 - E_5} \\
& + 4 T^{2H - E - E_3 - E_4 - E_5}.
\end{align*}
Consider now a Lagrangian sphere in the class $[L] = H - E_1 - E_2 -
E_3$. We work with the basis $\{E_1, E_2, E_3, E_4, E_5\}$ for
$H_2^D$. We obtain:
\begin{align*}
\widetilde{\Delta}_L = 4 \widetilde{\tau}_L = \, & T^{2E_1} +
T^{2E_2} + T^{2E_3} -
2T^{E_1+E_2} - 2T^{E_1+E_3} - 2T^{E_2+E_3} \\
& + 4T^{E_1 + E_2 + E_3 - E_4} + 4T^{E_1 + E_2 + E_3 - E_5} +
T^{2(E_1 + E_2 + E_3 - E_4 - E_5)} \\
& - 2 T^{2E_1 + E_2 + E_3 - E_4 - E_5} - 2 T^{E_1 + 2E_2 + E_3 -
E_4 - E_5} - 2 T^{E_1 + E_2 + 2E_3 - E_4 - E_5}.
\end{align*}
\subsubsection{6-point blow-up of $\mathbb{C}P^2$}
\label{sbsb:M_6-full-ring} Due to the complexity of the calculation we
restrict here to Lagrangians in the class $[L] = E_1 - E_2$. We work
with the basis $\{H, E, E_3, E_4, E_5, E_6\}$ for $H_2^D$, where
$E = [E_1] = [E_2]$.
\begin{align*}
\widetilde{\Delta}_L = \, & T^{2E} + 4 T^{H - E} - 2T^{H - E_3} -
2T^{H - E_4} - 2T^{H - E_5} - 2T^{H - E_6} \\
& + T^{2H - 2E - 2E_3} +T^{2H - 2E - 2E_4} + T^{2H - 2E - 2E_5} +
T^{2H - 2E - 2E_6} \\
& - 2T^{2H - 2E - E_3 - E_4} - 2T^{2H - 2E - E_3 - E_5} -
2T^{2H - 2E - E_3 - E_6} \\
& - 2T^{2H - 2E - E_4 - E_5} - 2T^{2H - 2E - E_4 - E_6} -
2T^{2H - 2E - E_5 - E_6} \\
& -2 T^{2H - E_3 - E_4 - E_5 - E_6} + 4 T^{2H - E - E_3 - E_4 -
E_5} +
4 T^{2H - E - E_3 - E_4 - E_6} \\
& + 4 T^{2H - E - E_3 - E_5 - E_6} +
4 T^{2H - E - E_4 - E_5 - E_6} \\
& - 2 T^{3H - 2E - 2E_3 - E_4 - E_5 - E_6} -
2 T^{3H - 2E - E_3 - 2E_4 - E_5 - E_6} \\
& - 2 T^{3H - 2E - E_3 - E_4 - 2E_5 - E_6} -
2 T^{3H - 2E - E_3 - E_4 - E_5 - 2E_6} \\
& + 4 T^{3H - 3E - E_3 - E_4 - E_5 - E_6} + T^{4H - 2E - 2E_3 -
2E_4 - 2E_5 - 2E_6}.
\end{align*}
\section{Examples} \label{s:examples} \setcounter{thm}{0}
This section is a continuation of~\S\ref{sb:exps} in which we provide
more details to the examples. We will work here with the following
setting. $(M, \omega)$ will be a monotone symplectic manifold with
minimal Chern number $C_M$. To keep the notation short we will denote
here by $QH(M)$ the quantum homology of $M$ with coefficients in the
ring $R=\mathbb{Z}[q^{-1}, q]$ (with $|q|=-2$), instead of writing
$QH(M;R)$.
\subsection{Lagrangian spheres in symplectic blow-ups of
$\mathbb{C}P^2$} \label{sb:lag-spheres-blow-ups} \setcounter{thm}{0}
Denote as in~\S\ref{sbsb:intro-exp-blcp2} by $M_k$ the blow-up of
${\mathbb{C}}P^2$ at $k \leq 6$ points endowed with a K\"{a}hler
symplectic structure $\omega_k$ in the cohomology class of $c_1 \in
H^2(M_k)$. Note that $-K_{M_k}$ is ample hence $c_1$ represents a
K\"{a}hler class. Note that $C_{M_k}=1$. As will be seen
in~\S\ref{s:non-monotone} some of our results (e.g.
Theorem~\ref{t:cubic-eq-sphere}) continue to hold in dimension $4$
also for non-monotone Lagrangian spheres. In this section however we
still stick to the monotone case.
We first claim that the set of classes in $H_2(M_k)$ which are
represented by Lagrangian spheres is precisely the set of classes that appear in
Table~\ref{tb:classes-intro}. This is well known and there are many
ways to prove it (see e.g.~\cite{Se:lect-4-Dehn, Ev:lag-spheres,
Li-Wu:lag-spheres, Shev:secondary-stw}). For the classes $A = E_i -
E_j \in H_2(M_k)$ when $k=2$ and $k=3$ it is easy to find Lagrangian
spheres in the class $A$ by an explicit construction which we outline
below (see~\cite{Ev:lag-spheres} for more details). For $k \geq 4$, as
well as $k=3$ with $A = H - E_1 - E_2 - E_3$, it seems less trivial to
perform explicit constructions and one could appeal instead to less
transparent methods such as (relative) inflation, as
in~\cite{Li-Wu:lag-spheres, Shev:secondary-stw} (we will briefly
outline this in a special case below). Another approach which works
for some of the $k$'s is to realize $M_k$ as a fiber in a Lefschetz
pencil and obtain the Lagrangian spheres as vanishing cycles (e.g.
$M_6$ is the cubic surface in ${\mathbb{C}}P^3$ and $M_5$ is a
complete intersection of two quadrics in ${\mathbb{C}}P^4$). Yet
another approach comes from real algebraic geometry, where one can
obtain Lagrangian spheres in some of the $M_k$'s as a component of the
fixed point set of an anti-symplectic involution. This works for $k=5,
6$ and all classes $A$, and for $k=3$ with $A = E_i-E_j$.
See~\cite{Koll:real-alg-surf} for more details. Finally note that for
$2 \leq k \leq 8$, $k \neq 3$, the group of symplectomorphisms of
$M_k$ acts transitively on the set of classes that can be represented
by Lagrangian spheres~\cite{Demazure:del-pezzo-1, Li-Wu:lag-spheres},
hence it is enough to construct one Lagrangian sphere in each $M_k$.
(This also explains why the invariants in Table~\ref{tb:classes-intro}
coincide for different classes within each of the $M_k$'s, with the
exception of $k=3$.)
Despite the many ways to establish Lagrangian spheres in the $M_k$'s,
the shortest (albeit not the most explicit) path to this end is to
appeal to the work of Li-Wu~\cite{Li-Wu:lag-spheres}. According
to~\cite{Li-Wu:lag-spheres} a homology class $A \in H_2(M_k)$ can be
represented by a Lagrangian sphere iff it satisfies the following three
conditions:
\begin{enumerate}
\item[(LS-1)] $A$ can be represented by a smooth embedded $2$-sphere.
\item[(LS-2)] $\langle [\omega_k], A \rangle = 0$.
\item[(LS-3)] $A \cdot A = -2$.
\end{enumerate}
We remark again that we have assumed that $[\omega_k] = c_1$
(otherwise one has to assume in addition that $\langle c_1, A \rangle
= 0$).
It is straightforward to see that all the classes in
Table~\ref{tb:classes-intro} satisfy conditions~(LS-2) and~(LS-3)
above. As for condition~(LS-1), note that if $C', C'' \subset M^4$ are
two {\em disjoint} embedded smooth $2$-spheres in a $4$-manifold
$M^4$, then by performing the connected sum operation one obtains a new smooth
embedded $2$-sphere in the class $[C']+[C'']$. From this it follows
that any non-trivial class of the form $\sum_{i=1}^k \epsilon_i E_i$
with $\epsilon_i \in \{-1, 0, 1\}$ can be represented by a smooth
embedded $2$-sphere. This settles the cases $\pm(E_i - E_j)$. For the
other type of classes, note that $H$ and $2H$ can both be represented
by smooth embedded $2$-spheres (e.g. a projective line and a conic
respectively), hence the same holds also for classes of the form
$\pm(H - E_i - E_j - E_l)$ and $\pm(2H - \sum_{i=1}^6 E_i)$.
We remark that in fact there are no other classes but the ones in
Table~\ref{tb:classes-intro} that can be represented by Lagrangian
spheres in $M_k$. This can be proved by elementary means using
conditions~(LS-2) and~(LS-3) above.
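Verifying~(LS-2) and~(LS-3) for the candidate classes is a direct computation with the intersection form $\mathrm{diag}(1,-1,\dots,-1)$ of $M_k$ and $[\omega_k] = c_1 = \mathrm{PD}(3H - \sum_i E_i)$. A quick machine check for $k = 6$ (a sketch; the basis ordering is ours):

```python
# Check (LS-2): <[omega_k], A> = c_1 . A = 0 and (LS-3): A . A = -2
# for the candidate classes, in the basis (H, E_1, ..., E_6) of H_2(M_6)
# with intersection form diag(1, -1, ..., -1).
Q = (1,) + (-1,)*6
c1 = (3, -1, -1, -1, -1, -1, -1)     # anticanonical class 3H - sum E_i

def dot(A, B):
    return sum(q*a*b for q, a, b in zip(Q, A, B))

classes = [
    (0, 1, -1, 0, 0, 0, 0),          # E_1 - E_2
    (1, -1, -1, -1, 0, 0, 0),        # H - E_1 - E_2 - E_3
    (2, -1, -1, -1, -1, -1, -1),     # 2H - E_1 - ... - E_6
]
for A in classes:
    assert dot(c1, A) == 0           # (LS-2)
    assert dot(A, A) == -2           # (LS-3)
```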
\subsubsection{Construction of Lagrangian spheres in $M_2$ and $M_3$}
\label{sbsb:constructions}
We now outline a more explicit way to construct Lagrangian spheres in
some of the $M_k$'s (cf.~\cite{Ev:lag-spheres}). Consider $Q =
\mathbb{C}P^1 \times \mathbb{C}P^1$ endowed with the symplectic form
$\omega = 2\omega_{\mathbb{C}P^1} \oplus 2\omega_{\mathbb{C}P^1}$,
where $\omega_{\mathbb{C}P^1}$ is the standard K\"{a}hler form on
$\mathbb{C}P^1$ normalized so that $\mathbb{C}P^1$ has area $1$. Note
that the first Chern class of $Q$ satisfies $c_1 = [\omega]$. The
symplectic manifold $Q$ contains a Lagrangian sphere
$\overline{\Delta}$ in the class $[\mathbb{C}P^1 \times
\textnormal{pt}] - [\textnormal{pt} \times \mathbb{C}P^1]$ (i.e. the
class of the anti-diagonal). For example, we can write
$\overline{\Delta}$ as the graph of the antipodal map, given in
homogeneous coordinates by
\[
\mathbb{C}P^1 \longrightarrow \mathbb{C}P^1, \qquad [z_0 : z_1]
\longmapsto [-\overline{z_1} : \overline{z_0}].
\]
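A quick way to see that this map is fixed-point free (so that its graph is an embedded sphere disjoint from the diagonal) is to note that it sends each complex line in $\mathbb{C}^2$ to its Hermitian orthogonal complement; a numerical sanity check of this (a sketch):

```python
# The map [z0 : z1] -> [-conj(z1) : conj(z0)] on CP^1 sends every
# complex line to its orthogonal complement in C^2: the Hermitian
# inner product of v and a(v) vanishes identically, so the map has
# no fixed points and its graph misses the diagonal.
import random

def antipodal(z0, z1):
    return (-z1.conjugate(), z0.conjugate())

random.seed(0)
for _ in range(100):
    v = (complex(random.gauss(0, 1), random.gauss(0, 1)),
         complex(random.gauss(0, 1), random.gauss(0, 1)))
    w = antipodal(*v)
    herm = v[0]*w[0].conjugate() + v[1]*w[1].conjugate()
    assert abs(herm) < 1e-12
```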
Next, we claim that $Q$ admits a symplectic embedding of two disjoint
closed balls $B_1, B_2$ of capacity $1$ whose images are disjoint from
$\overline{\Delta}$. This can be easily seen from the toric picture.
Indeed the image of the moment map of $Q$ is the square $[0,2] \times
[0,2]$ and the image of $\overline{\Delta}$ under that map is given by
the anti-diagonal $\{(x, y) \mid x,y \in [0,2], x+y=2\}$. By standard
arguments in toric geometry we can symplectically embed in $Q$ a ball
$B_1$ of capacity $1$ whose image under the moment map is $\{(x,y)
\mid x,y \in [0,2], x+y \leq 1\}$. Similarly we can embed another ball
$B_2$ whose image is $\{(x,y) \mid x,y \in [0,2], x+y \geq 3\}$.
Clearly $B_1$, $B_2$ and $\overline{\Delta}$ are mutually disjoint.
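In moment-map coordinates the disjointness is just linear arithmetic in the value of $x + y$; a throwaway numerical check over a grid of the square (a sketch):

```python
# Moment images: ball B_1 -> {x + y <= 1}, ball B_2 -> {x + y >= 3},
# anti-diagonal sphere -> {x + y = 2}, all inside the square [0,2]^2.
# The three regions are separated by the value of x + y.
N = 40
pts = [(2*i/N, 2*j/N) for i in range(N + 1) for j in range(N + 1)]
B1   = [p for p in pts if p[0] + p[1] <= 1]
B2   = [p for p in pts if p[0] + p[1] >= 3]
diag = [p for p in pts if abs(p[0] + p[1] - 2) < 1e-9]
assert not set(B1) & set(B2)
assert not set(B1) & set(diag)
assert not set(B2) & set(diag)
assert B1 and B2 and diag       # all three regions are non-empty
```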
Denote by $\widetilde{Q}_1$ the blow-up of $Q$ with respect to $B_1$
and by $\widetilde{Q}_2$ the blow-up of $Q$ with respect to both balls
$B_1$ and $B_2$. It is well known that $\widetilde{Q}_1$ is
symplectomorphic to $M_2$ via a symplectomorphism that sends the class
$\overline{\Delta}$ to $E_1 - E_2$. And $\widetilde{Q}_2$ is
symplectomorphic to $M_3$ by a similar symplectomorphism. It follows
that $E_1 - E_2$ represents Lagrangian spheres both in $M_2$ and in
$M_3$. Construction of Lagrangian spheres in the other classes of the
type $E_i - E_j$ in $M_3$ can be done in a similar way.
\subsubsection*{Lagrangian spheres in the class $H-E_1-E_2-E_3$ in $M_3$}
We start with the complex blow-up of ${\mathbb{C}}P^2$ at three points
that {\em lie on the same projective line}. Denote by $E_i$ the
exceptional divisors over the blown-up points. The result of the
blow-up is a complex algebraic surface $X$ which contains an embedded
holomorphic rational curve $\Sigma$ in the class $H - E_1 - E_2 -
E_3$. Note also that there are three embedded holomorphic curves $C_i
\subset X$, $i=1,2,3$, in the classes $[C_i] = H - E_i$. Since $[C_i]
\cdot [\Sigma] = 0$ the curves $C_i$ are disjoint from $\Sigma$. Pick
a K\"{a}hler symplectic structure $\omega_0$ on $X$. After a suitable
normalization we can write $[\omega_0] = h - \lambda_1 e_1 - \lambda_2
e_2 - \lambda_3 e_3$, where $h, e_1, e_2, e_3$ are the Poincar\'{e}
duals to $H, E_1, E_2, E_3$ respectively. It is easy to check that
$\lambda_i \geq 0$ and that $\lambda_1 + \lambda_2 + \lambda_3 < 1$.
We now change $\omega_0$ to a new symplectic form $\omega'$ such that:
\begin{enumerate}
\item $\omega'$ coincides with $\omega_0$ outside a small
neighborhood $\mathcal{U}$ of $\Sigma$, where $\mathcal{U}$ is
disjoint from the curves $C_1, C_2, C_3$.
\item $\omega'|_{T(\Sigma)} \equiv 0$, i.e. $\Sigma$ becomes a
Lagrangian sphere with respect to $\omega'$.
\item $\omega'$ and $\omega_0$ are in the same deformation class of
symplectic forms on $X$ (i.e. they can be connected by a path of
symplectic forms).
\end{enumerate}
This can be achieved for example using the {\em deflation}
procedure~\cite{Shev:secondary-stw} (see also~\cite{Li-Ush:neg-sq}).
Alternatively, one can construct $\omega'$ using Gompf fiber-sum
surgery~\cite{Go:fiber-sum-surgery} with respect to $\Sigma \subset X$
and the diagonal in $\mathbb{C}P^1 \times \mathbb{C}P^1$: $$(Y,
\omega'') = (X, \omega_0) \, _\Sigma\#_{\textnormal{diag}} \,
(\mathbb{C}P^1 \times \mathbb{C}P^1, a \omega_{\mathbb{C}P^1} \oplus a
\omega_{\mathbb{C}P^1}),$$ where $a = \tfrac{1}{2} \int_{\Sigma}
\omega_0$, and $S^2$ is symplectically embedded in $X$ as $\Sigma$ and
in $\mathbb{C}P^1 \times \mathbb{C}P^1$ as the diagonal. Since the
anti-diagonal $\overline{\Delta}$ is a Lagrangian sphere in
$\mathbb{C}P^1 \times \mathbb{C}P^1$ which is disjoint from the
diagonal it gives rise to a Lagrangian sphere $L'' \subset Y$.
Finally observe that the surgery has not changed the diffeomorphism
type of $X$, namely there exists a diffeomorphism $\phi: Y
\longrightarrow X$ and moreover $\phi$ can be chosen in such a way
that $\phi(L'')=\Sigma$. Take now $\omega' = \phi_* \omega''$. To
obtain a symplectic deformation between $\omega'$ and $\omega_0$ one
can perform the preceding surgery in a suitable one-parameter family,
where the symplectic form on $\mathbb{C}P^1 \times \mathbb{C}P^1$ is
rescaled so that the area of one of the factors becomes smaller and
smaller and the area of the other increases so that the area of the
diagonal stays constant.
Having replaced the form $\omega_0$ by $\omega'$ we have a Lagrangian
sphere in the desired homology class $H-E_1-E_2-E_3$ but the form
$\omega'$ might not be in the cohomology class of $c_1$. We will now
correct that using inflation.
After a normalization we can assume that $[\omega'] = h - \lambda'_1
e_1 - \lambda'_2 e_2 - \lambda'_3 e_3$. Since $\Sigma$ is Lagrangian
with respect to $\omega'$ we have $\lambda'_1 + \lambda'_2 +
\lambda'_3 = 1$. Recall also that the surfaces $C_1, C_2, C_3$ are
symplectic with respect to $\omega'$, hence $\lambda'_i < 1$ for
every $i$. Moreover, by construction, the surfaces $C_1, C_2, C_3$ can
be made simultaneously $J$-holomorphic for some $\omega'$-compatible
almost complex structure $J$. Since the $C_i$'s are disjoint from
$\Sigma$ we can find neighborhoods $U_i$ of $C_i$ such that the
$U_i$'s are disjoint from $\Sigma$. We now perform inflation
simultaneously along the three surfaces $C_1, C_2, C_3$. More
specifically, by the results of~\cite{Bi:connected-pack,
Bi:connected-pack-arxiv} there exist closed $2$-forms $\rho_i$
supported in $U_i$, representing the Poincar\'{e} dual of $[C_i]$
(i.e. $[\rho_i] = h-e_i$) and such that the $2$-form
$$\omega'_{t_1, t_2, t_3} = \omega' + t_1
\rho_1 + t_2 \rho_2 + t_3 \rho_3$$ is symplectic for every $t_1, t_2,
t_3 \geq 0$. See Lemma~2.1 in~\cite{Bi:connected-pack} and
Proposition~4.3 in~\cite{Bi:connected-pack-arxiv} (see
also~\cite{La:isotopy-balls, LM:classif-1, LM:classif-2, McD:From,
McD-Op:sing-inf}). The cohomology class of $\omega'_{t_1, t_2, t_3}$ is:
$$[\omega'_{t_1, t_2, t_3}] = (1 + t_1 + t_2 + t_3)h - (\lambda'_1+t_1)e_1 -
(\lambda'_2+t_2)e_2 - (\lambda'_3 + t_3)e_3.$$ Choosing $t^0_i =
1-\lambda'_i$ we have $t^0_i>0$ and $1+t_1^0 + t_2^0 + t_3^0 = 4 -
(\lambda'_1 + \lambda'_2 + \lambda'_3) = 3$, hence:
$$[\omega'_{t^0_1, t_2^0, t_3^0}] = 3h - e_1 - e_2 - e_3 = c_1.$$
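The bookkeeping in this last step can be spot-checked mechanically. The following Python sketch (illustrative only and not part of the argument; the helper name \texttt{inflated\_class} is ours) verifies over the rationals that whenever $\lambda'_1 + \lambda'_2 + \lambda'_3 = 1$, the choice $t^0_i = 1 - \lambda'_i$ yields the class $3h - e_1 - e_2 - e_3 = c_1$:

```python
from fractions import Fraction

def inflated_class(l1, l2):
    """Coefficients of [omega'_{t1,t2,t3}] at t_i = 1 - lambda'_i, in the
    basis (h, e1, e2, e3); lambda'_3 is determined by the Lagrangian
    condition lambda'_1 + lambda'_2 + lambda'_3 = 1."""
    lam = (l1, l2, 1 - l1 - l2)
    t0 = tuple(1 - l for l in lam)
    # [omega'_t] = (1 + t1 + t2 + t3) h - sum_i (lambda'_i + t_i) e_i
    return 1 + sum(t0), tuple(l + t for l, t in zip(lam, t0))

for l1, l2 in [(Fraction(1, 3), Fraction(1, 3)),
               (Fraction(1, 2), Fraction(1, 4)),
               (Fraction(9, 10), Fraction(1, 20))]:
    h_coeff, e_coeffs = inflated_class(l1, l2)
    assert h_coeff == 3 and e_coeffs == (1, 1, 1)  # i.e. 3h - e1 - e2 - e3
```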
Since the forms $\rho_i$ are supported in the neighborhoods $U_i$,
which are disjoint from $\Sigma$, the surface $\Sigma$ remains
Lagrangian for $\omega'_{t^0_1, t^0_2, t^0_3}$. Finally note that
$\omega'_{t^0_1, t^0_2, t^0_3}$ is in the same symplectic deformation
class as $\omega_0$, hence by standard results $(X, \omega'_{t^0_1,
t^0_2, t^0_3})$ is symplectomorphic to $M_3$.
\subsubsection{Calculation of the discriminant for $M_k$, $2 \leq k
\leq 6$}
We now give more details on the calculation of the discriminant
$\Delta_L$ for each of the examples in Table~\ref{tb:classes-intro}.
In what follows, for a symplectic manifold $M$, we denote by $p \in
H_0(M)$ the homology class of a point. As before we write $QH(M)$ for
the quantum homology ring of $M$ with coefficients in $R =
\mathbb{Z}[q^{-1}, q]$ where $|q| = -2$. The calculations below make
use of the ``multiplication table'' of the quantum homology of the
$M_k$'s which can be found in~\cite{Cra-Mir:QH}.
Recall that for $M_k$ with $4 \leq k \leq 6$ the group of
symplectomorphisms of $M_k$ acts transitively on the set of classes
that can be represented by Lagrangian
spheres~\cite{Demazure:del-pezzo-1, Li-Wu:lag-spheres}. Therefore, for
$k \geq 4$ we will perform explicit calculations only for Lagrangians
in the class $E_1 - E_2$.
Before we go on we remark that all the calculations for the $M_k$'s
below extend without any change in case we endow $M_k$ with a
non-monotone symplectic structure (provided that a Lagrangian sphere
in the respective class still exists). This is special to dimension
$4$ and is explained in detail in~\S\ref{s:non-monotone}.
\subsubsection{2-point blow-up of $\mathbb{C}P^2$} \label{sbsb:M_2}
\setcounter{thm}{0} $QH(M_2)$ has the following ring structure:
\begin{align*}
& p * p = H q^3 + [M_2]q^4 \\
& p * H = (H - E_1) q^2 + (H - E_2) q^2 + [M_2] q^3 \\
& p * E_i = (H - E_i) q^2 \\
& H * H = p + (H - E_1 - E_2) q + 2[M_2] q^2 \\
& H * E_i = (H - E_1 - E_2) q + [M_2] q^2 \\
& E_1 * E_2 = (H - E_1 - E_2) q \\
& E_1 * E_1 = -p + (H - E_2) q + [M_2] q^2 \\
& E_2 * E_2 = -p + (H - E_1) q + [M_2] q^2.
\end{align*}
Consider Lagrangian spheres $L \subset M_2$ in the class $E_1-E_2$. A
straightforward calculation shows that:
\[
(E_1 - E_2)^{*3} - 5 (E_1 - E_2) q^2 = 0,
\]
and thus we obtain $\Delta_L = 5$. Multiplication of $c_1$ with $[L]$
gives: $c_1 * (E_1 - E_2) = (-1)(E_1 - E_2)q$, hence $\lambda_L = -1$.
The associated ideal (see~\S\ref{sb:eigneval}) $\mathcal{I}_L \subset
QH_*(M_2)$ is:
\[
\mathcal{I}(E_1 - E_2) = R (-2p + (E_1 + E_2)q + 2 [M_2] q^2) \oplus R
(E_1 - E_2).
\]
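These computations can be verified mechanically against the multiplication table. Below is a short Python sketch (illustrative only and not part of the argument; the encoding of classes as dictionaries and the helper names are ours) which extends the table bilinearly over $\mathbb{Z}[q]$ and checks both the cubic equation and the eigenvalue $\lambda_L = -1$:

```python
from collections import defaultdict

# A quantum homology class is encoded as {(basis_elt, q_power): int};
# the basis of H_*(M_2) is p (point), H (line), E1, E2 and the unit M = [M_2].
def cls(*terms):
    out = defaultdict(int)
    for coeff, basis, qpow in terms:
        out[(basis, qpow)] += coeff
    return {key: c for key, c in out.items() if c}

# Quantum products of the non-unit basis elements, transcribed from the
# multiplication table of QH(M_2) above.
TABLE = {
    ("p", "p"):   cls((1, "H", 3), (1, "M", 4)),
    ("p", "H"):   cls((2, "H", 2), (-1, "E1", 2), (-1, "E2", 2), (1, "M", 3)),
    ("p", "E1"):  cls((1, "H", 2), (-1, "E1", 2)),
    ("p", "E2"):  cls((1, "H", 2), (-1, "E2", 2)),
    ("H", "H"):   cls((1, "p", 0), (1, "H", 1), (-1, "E1", 1), (-1, "E2", 1), (2, "M", 2)),
    ("H", "E1"):  cls((1, "H", 1), (-1, "E1", 1), (-1, "E2", 1), (1, "M", 2)),
    ("H", "E2"):  cls((1, "H", 1), (-1, "E1", 1), (-1, "E2", 1), (1, "M", 2)),
    ("E1", "E2"): cls((1, "H", 1), (-1, "E1", 1), (-1, "E2", 1)),
    ("E1", "E1"): cls((-1, "p", 0), (1, "H", 1), (-1, "E2", 1), (1, "M", 2)),
    ("E2", "E2"): cls((-1, "p", 0), (1, "H", 1), (-1, "E1", 1), (1, "M", 2)),
}

def qmul(x, y):
    """Bilinear extension of the table over Z[q]; M acts as the unit."""
    out = defaultdict(int)
    for (b1, k1), c1 in x.items():
        for (b2, k2), c2 in y.items():
            if b1 == "M":
                prod = {(b2, 0): 1}
            elif b2 == "M":
                prod = {(b1, 0): 1}
            else:
                prod = TABLE.get((b1, b2)) or TABLE[(b2, b1)]
            for (b, k), c in prod.items():
                out[(b, k + k1 + k2)] += c * c1 * c2
    return {key: c for key, c in out.items() if c}

L = cls((1, "E1", 0), (-1, "E2", 0))                 # [L] = E1 - E2
c1 = cls((3, "H", 0), (-1, "E1", 0), (-1, "E2", 0))  # PD of c_1(M_2)

# (E1 - E2)^{*3} = 5 (E1 - E2) q^2, so Delta_L = 5:
assert qmul(qmul(L, L), L) == cls((5, "E1", 2), (-5, "E2", 2))
# c_1 * (E1 - E2) = -(E1 - E2) q, so lambda_L = -1:
assert qmul(c1, L) == cls((-1, "E1", 1), (1, "E2", 1))
```

The two assertions confirm the cubic equation and the value of $\lambda_L$ stated above.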
We now turn to Theorem~\ref{t:cubic_eq-ccl} and calculate explicitly
the coefficients $\sigma_{c,L}$, $\tau_{c,L}$ from
equation~\eqref{eq:cubic_eq_ccL}. Consider a general element $c = dH -
m_1 E_1 - m_2 E_2\in H_2(M_2)$, where $d, m_1, m_2 \in \mathbb{Z}$.
Then $\xi := c \cdot [L] = m_1 - m_2$ and we assume that $m_1 \neq
m_2$. A straightforward calculation gives:
$$\sigma_{c,L} = - \frac{m_1+m_2}{m_1-m_2}, \quad
\tau_{c,L} = \frac{m_1^2 - 3m_1 m_2 + m_2^2}{(m_1-m_2)^2}.$$ One can
easily check that $\sigma_{c,L}^2 + 4 \tau_{c,L} = 5$.
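A quick spot-check of this identity over the rationals (the helper name \texttt{sigma\_tau} is ours; note that the coefficient $d$ of $H$ does not enter the formulas for $\sigma_{c,L}$ and $\tau_{c,L}$):

```python
from fractions import Fraction

def sigma_tau(m1, m2):
    """sigma_{c,L} and tau_{c,L} for c = dH - m1 E1 - m2 E2, m1 != m2."""
    sigma = Fraction(-(m1 + m2), m1 - m2)
    tau = Fraction(m1 * m1 - 3 * m1 * m2 + m2 * m2, (m1 - m2) ** 2)
    return sigma, tau

# sigma^2 + 4 tau = 5 = Delta_L for several sample values of (m1, m2):
for m1, m2 in [(1, 0), (2, -5), (7, 3), (-4, 9)]:
    s, t = sigma_tau(m1, m2)
    assert s * s + 4 * t == 5
```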
\subsubsection{3-point blow-up of $\mathbb{C}P^2$}
$QH(M_3)$ has the following ring structure:
\begin{align*}
& p * p = (3H - E_1 - E_2 - E_3)q^3 + 3 [M_3]q^4 \\
& p * H = (3H - E_1 - E_2 - E_3)q^2 + 3 [M_3] q^3 \\
& p * E_i = (H - E_i)q^2 + [M_3]q^3 \\
& H * H = p + (3H - 2E_1 - 2E_2 - 2E_3)q + 3 [M_3]q^2 \\
& H * E_i = (2H - 2E_i - E_j - E_k)q + [M_3]q^2,
\quad i \neq j \neq k \neq i \\
& E_i * E_i = -p + (2H - E_1 - E_2 - E_3)q + [M_3]q^2 \\
& E_i * E_j = (H - E_i - E_j )q, \quad i \neq j.
\end{align*}
Consider Lagrangians $L, L' \subset M_3$ in the classes $[L] = E_i -
E_j$ and $[L'] = H - E_1 - E_2 - E_3$. The corresponding Lagrangian
cubic equations are given by:
\begin{align*}
& (E_i - E_j)^{*3} - 4 (E_i - E_j) q^2 = 0, \\
& (H - E_1 - E_2 - E_3)^{*3} + 3 (H - E_1 - E_2 - E_3) q^2 = 0,
\end{align*}
and thus obtain $\Delta_L = 4$ and $\Delta_{L'} = -3$. Multiplication
with $c_1$ gives:
\begin{align*}
& c_1 * (E_i - E_j) = (-2)(E_i - E_j)q, \\
& c_1 * (H - E_1 - E_2 - E_3) = (-3) (H - E_1 - E_2 - E_3)q,
\end{align*}
hence $\lambda_L = -2$ and $\lambda_{L'} = -3$. The associated ideals
in $QH(M_3)$ are:
\begin{align*}
& \mathcal{I}_L =
R (-2p + 2(H - E_3)q + 2[M_3]q^2) \oplus R (E_1 - E_2), \\
& \mathcal{I}_{L'} = R(-2p + (3H - E_1 - E_2 - E_3)q + 4 [M_3]q^2 )
\oplus R (H - E_1 - E_2 - E_3).
\end{align*}
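Both cubic equations, as well as the eigenvalues $\lambda_L$ and $\lambda_{L'}$ computed above, can be checked directly against the multiplication table. Here is a self-contained Python sketch (illustrative only and not part of the argument; the dictionary encoding and helper names are ours):

```python
from collections import defaultdict
from itertools import combinations

E = ["E1", "E2", "E3"]

def cls(*terms):
    out = defaultdict(int)
    for coeff, basis, qpow in terms:
        out[(basis, qpow)] += coeff
    return {key: c for key, c in out.items() if c}

def sumE(coeff, qpow):
    return [(coeff, e, qpow) for e in E]

# Quantum products of basis elements of QH(M_3), transcribed from the
# table above; the unit M = [M_3] is handled separately in qmul.
TABLE = {
    ("p", "p"): cls((3, "H", 3), *sumE(-1, 3), (3, "M", 4)),
    ("p", "H"): cls((3, "H", 2), *sumE(-1, 2), (3, "M", 3)),
    ("H", "H"): cls((1, "p", 0), (3, "H", 1), *sumE(-2, 1), (3, "M", 2)),
}
for e in E:
    TABLE[("p", e)] = cls((1, "H", 2), (-1, e, 2), (1, "M", 3))
    TABLE[("H", e)] = cls((2, "H", 1), (-1, e, 1), *sumE(-1, 1), (1, "M", 2))
    TABLE[(e, e)] = cls((-1, "p", 0), (2, "H", 1), *sumE(-1, 1), (1, "M", 2))
for e1, e2 in combinations(E, 2):
    TABLE[(e1, e2)] = cls((1, "H", 1), (-1, e1, 1), (-1, e2, 1))

def qmul(x, y):
    """Bilinear extension of the table over Z[q]."""
    out = defaultdict(int)
    for (b1, k1), c1 in x.items():
        for (b2, k2), c2 in y.items():
            if b1 == "M":
                prod = {(b2, 0): 1}
            elif b2 == "M":
                prod = {(b1, 0): 1}
            else:
                prod = TABLE.get((b1, b2)) or TABLE[(b2, b1)]
            for (b, k), c in prod.items():
                out[(b, k + k1 + k2)] += c * c1 * c2
    return {key: c for key, c in out.items() if c}

def scale_q(x, c, k):
    """Multiply a class by c*q^k."""
    return {(b, kk + k): c * v for (b, kk), v in x.items()}

L = cls((1, "E1", 0), (-1, "E2", 0))   # [L]  = E1 - E2
Lp = cls((1, "H", 0), *sumE(-1, 0))    # [L'] = H - E1 - E2 - E3
c1 = cls((3, "H", 0), *sumE(-1, 0))    # PD of c_1(M_3)

assert qmul(qmul(L, L), L) == scale_q(L, 4, 2)        # Delta_L = 4
assert qmul(qmul(Lp, Lp), Lp) == scale_q(Lp, -3, 2)   # Delta_{L'} = -3
assert qmul(c1, L) == scale_q(L, -2, 1)               # lambda_L = -2
assert qmul(c1, Lp) == scale_q(Lp, -3, 1)             # lambda_{L'} = -3
```

The four assertions reproduce the discriminants $\Delta_L = 4$, $\Delta_{L'} = -3$ and the eigenvalues $\lambda_L = -2$, $\lambda_{L'} = -3$.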
The Lagrangian spheres in different homology classes of the type $E_i
- E_j$ in $M_3$ have the same discriminant and the same eigenvalue
$\lambda_L$. This is so because for every $i<j$ there is a
symplectomorphism $\varphi: M_3 \longrightarrow M_3$ such that
$\varphi_*(E_1-E_2) = E_i - E_j$. In contrast, note that there exists
no symplectomorphism of $M_3$ sending $E_1-E_2$ to $H - E_1 - E_2 -
E_3$.
\subsubsection{4-point blow-up of $\mathbb{C}P^2$} $QH(M_4)$ has the
following ring structure:
\begin{align*}
& p * p = (9H - 3E_1 - 3E_2 - 3E_3 - 3E_4)q^3 + 10 [M_4]q^4 \\
& p * H = (8H - 3E_1 - 3E_2 - 3E_3 - 3E_4)q^2 + 9[M_4] q^3 \\
& p * E_i = (3H - 2E_i - \sum\limits_{j \neq i}E_j )q^2 + 3[M_4]q^3 \\
& H*H = p + (6H - 3E_1 - 3E_2 - 3E_3 - 3E_4)q + 8[M_4] q^2 \\
& H*E_i = (3H - 3E_i - \sum\limits_{j \neq i}E_j)q + 3[M_4] q^2 \\
& E_i*E_i = -p + (3H - 2E_i - \sum\limits_{j \neq i}E_j)q + 2[M_4] q^2 \\
& E_i*E_j = (H - E_i - E_j)q + [M_4] q^2
\end{align*}
As explained above it is enough to calculate our invariants for
Lagrangians in the class $E_1-E_2$. A straightforward calculation
shows that: $$(E_1-E_2)^{*3} = (E_1-E_2)q^2, \quad c_1*(E_1 - E_2) =
-3 (E_1-E_2)q,$$ hence $\Delta_L = 1$ and $\lambda_L = -3$. The
associated ideals for Lagrangians $L$, $L'$ with $[L] = E_1-E_2$ and
$L' = H - E_1 - E_2 - E_3$ are:
\begin{align*}
& \mathcal{I}_L = R (-2p + (4H - E_1 - E_2 - 2E_3 - 2E_4)q +
2[M_4]q^2)
\oplus R (E_1 - E_2), \\
& \mathcal{I}_{L'} = R (-2p + (3H - E_1 - E_2 - E_3)q + 2[M_4]q^2)
\oplus R (H - E_1 - E_2 - E_3).
\end{align*}
\subsubsection{5-point blow-up of $\mathbb{C}P^2$} $QH(M_5)$ has the
following ring structure:
\begin{align*}
& p * p = (36H - 12E_1 - 12E_2 - 12E_3 - 12E_4 - 12E_5)q^3 + 52[M_5]q^4 \\
& p * H = (25H - 9E_1 - 9E_2 - 9E_3 - 9E_4 - 9E_5)q^2 + 36[M_5]q^3 \\
& p * E_i = (9H - 5E_i - 3\sum\limits_{j \neq i}E_j)q^2 + 12[M_5]q^3 \\
& H*H = p + (18H - 8E_1 - 8E_2 - 8E_3 - 8E_4 - 8E_5)q + 25[M_5]q^2 \\
& H*E_i = (8H - 6E_i - 3\sum\limits_{j \neq i}E_j)q + 9[M_5]q^2 \\
& E_i*E_i = -p + (6H - 4E_i - 2\sum\limits_{j \neq i}E_j)q + 5[M_5]q^2 \\
& E_i*E_j = (3H - 2E_i - 2E_j - \sum\limits_{k \neq i,j}E_k)q +
3[M_5]q^2
\end{align*}
As before, it is enough to consider only the case $[L] = E_1 - E_2$.
A direct calculation gives:
$$(E_1-E_2)^{*3} = 0, \quad c_1*(E_1-E_2) = -4(E_1 - E_2)q,$$
hence $\Delta_L = 0$, $\lambda_L = -4$.
The associated ideals for Lagrangians $L$, $L'$ with $[L] = E_1 - E_2$
and $[L'] = H - E_1 - E_2 - E_3$ are:
\begin{align*}
& \mathcal{I}_L = R (-2p +
(6H - 2E_1 - 2E_2 - 2E_3 - 2E_4 - 2E_5)q + 4[M_5]q^2) \oplus R (E_1 - E_2), \\
& \mathcal{I}_{L'} = R (-2p + (6H - 2E_1 - 2E_2 - 2E_3 - 2E_4 - 2 E_5)q +
4[M_5]q^2) \oplus R (H - E_1 - E_2 - E_3).
\end{align*}
\subsubsection{6-point blow-up of $\mathbb{C}P^2$} $QH(M_6)$ has the
following ring structure:
\begin{align*}
& p * p = (252H - 84 E_1 - 84 E_2 - 84 E_3 - 84 E_4 - 84 E_5 -
84 E_6)q^3 + 540[M_6]q^4 \\
& p * H = (120 H - 42 E_1 - 42 E_2 - 42 E_3 - 42 E_4 - 42 E_5 -
42 E_6)q^2 + 252 [M_6] q^3 \\
& p * E_i = ( 42H - 20 E_i - 14\sum\limits_{j\neq i} E_j )q^2 + 84 [M_6]q^3 \\
& H*H = p + (63H - 25 E_1 - 25 E_2 - 25 E_3 - 25 E_4 - 25 E_5 -
25 E_6)q + 120[M_6]q^2 \\
& H*E_i = (25H - 15 E_i - 9\sum\limits_{j\neq i} E_j)q + 42 [M_6] q^2 \\
& E_i*E_i = -p + (15H - 9E_i - 5\sum\limits_{j\neq i} E_j )q + 20 [M_6] q^2 \\
& E_i*E_j = (9H - 5 E_i - 5 E_j - 3 \sum\limits_{k\neq i,j} E_k)q +
14 [M_6] q^2
\end{align*}
Again, we may assume without loss of generality that $[L] = E_1 -
E_2$. A direct calculation gives:
$$(E_1-E_2)^{*3} = 0, \quad c_1*(E_1-E_2) = -6(E_1 - E_2)q,$$
hence $\Delta_L = 0$, $\lambda_L = -6$.
Interestingly, the associated ideals $\mathcal{I}_L$ for Lagrangians
$L$ in any of the classes: $E_i - E_j$, $2H - E_i - E_j - E_l$, $2H -
E_1 - E_2 - E_3 - E_4 - E_5 - E_6$ all coincide:
$$\mathcal{I}_L =
R ( -2p + (12H - 4 \sum_{j=1}^6 E_j)q + 12[M_6]q^2) \oplus R (2H -
\sum_{j=1}^6 E_j).$$
\begin{rem} \label{r:1-point}
Note that all Lagrangian spheres in each of $M_4$, $M_5$ and $M_6$
have the same discriminant and the same holds for the Lagrangian
spheres in $M_3$ in the classes $E_1 - E_2$, $E_2-E_3$ and $E_1 -
E_3$. This follows of course from the fact that all these classes
belong to the same orbit of the action of the symplectomorphism
group (on each of the $M_k$'s). However, here is a different
potential explanation which might give more insight. Consider for
example the classes $E_1 - E_2$ and $E_2 - E_3$ in $M_3$. It seems
reasonable to expect that there exist Lagrangian spheres $L_1, L_2
\subset M_3$ with $[L_1] = E_1 - E_2$, $[L_2] = E_2 - E_3$ such
that $L_1$ and $L_2$ intersect transversely at exactly one point.
(We have not verified the details of that, but this seems plausible
in view of the constructions outlined at the beginning
of~\S\ref{sbsb:constructions}). The fact that $\Delta_{L_1} =
\Delta_{L_2}$ would now follow from Corollary~\ref{c:del_1=del_2}.
Similar arguments should apply to many other pairs of classes on
$M_4$, $M_5$ and $M_6$. This would also explain why in all these
cases the discriminants turn out to be perfect squares.
\end{rem}
\subsection{Lagrangian spheres in hypersurfaces of
${\mathbb{C}}P^{n+1}$} \setcounter{thm}{0} \label{sb:exp-hypsurf}
Let $M^{2n} \subset \mathbb{C}P^{n+1}$ be a Fano hypersurface of
degree $d$, where $n \geq 3$. We endow $M$ with the symplectic
structure induced from ${\mathbb{C}}P^{n+1}$. It is easy to check that
$M$ is monotone and that the minimal Chern number is $C_M = n+2 - d$.
We view the homology $H_*(M;\mathbb{Q})$ as a ring, endowed with the
intersection product which we denote by $a \cdot b$ for $a, b \in
H_*(M;\mathbb{Q})$. Write $h \in H_{2n-2}(M;\mathbb{Q})$ for the class
of a hyperplane section. The homology $H_*(M;\mathbb{Q})$ is generated
as a ring by the class $h$ and the subspace of primitive classes,
denoted by $H_n(M; \mathbb{Q})_0$. (Recall that the latter is by
definition the kernel of the map $H_n(M; \mathbb{Q}) \longrightarrow
H_{n-2}(M;\mathbb{Q})$, $a \longmapsto a \cdot h$).
Assume that $d \geq 2$. Then by Picard-Lefschetz theory $M$ contains
Lagrangian spheres (that can be realized as vanishing cycles of the
Lefschetz pencil associated to the embedding $M \subset
{\mathbb{C}}P^{n+1}$).
Let $L \subset M$ be a Lagrangian sphere and assume further that $d
\geq 3$. To calculate $[L]^{*3}$ we appeal to the work of
Collino-Jinzenji~\cite{Co-Ji:QH-hpersurf} (see
also~\cite{Gi:equivariant, Beau:quant, Ti:qh-assoc} for related
results). We set $x := h + d![M]q$ if $C_M = 1$, and $x := h$ if $C_M
\geq 2$. Specifically, we will need the following:
\begin{thm}[Collino-Jinzenji~\cite{Co-Ji:QH-hpersurf}]
\label{t:qh-hypersuf}
In the quantum homology ring of $M$ with coefficients in
$\mathbb{Q}[q]$ we have the following identities:
\begin{enumerate}
\item $x * a = 0$ for every $a \in H_n(M; \mathbb{Q})_0$.
\item $a * b = \frac{1}{d} \#(a \cdot b) (x^{*n} - d^d x^{*(d-2)}
q^{n+2-d})$ for every $a, b \in H_n(M; \mathbb{Q})_0$.
\end{enumerate}
\end{thm}
Coming back to our Lagrangian spheres $L \subset M$, we clearly have
$[L] \in H_n(M; \mathbb{Q})_0$. Therefore we obtain from
Theorem~\ref{t:qh-hypersuf}:
\begin{equation} \label{eq:L*3-hypersuf} [L]*[L]*[L] = \frac{1}{d}
\#([L]\cdot [L]) (x^{*n}*[L] - d^d x^{*(d-2)}*[L] q^{n+2-d}) = 0,
\end{equation}
where in the last equality we have used that $d>2$ (hence
$x^{*(d-2)}*[L]=0$).
If we also assume that $2C_M | n$, then the Lagrangian spheres $L
\subset M$ have minimal Maslov number $N_L = 2C_M$ and it is easy to
see that they satisfy Assumption~$\mathscr{L}$ (see e.g.
Proposition~\ref{p:criterion}). Therefore in this case the
discriminant $\Delta_L$ is defined and we clearly have $\Delta_L = 0$.
(Note that when $2C_M | n$ we must have $d>2$.)
Finally, we discuss the case $d=2$. A straightforward calculation
based on the quantum homology ring structure of the quadric (see
e.g.~\cite{Beau:quant}) shows that Lagrangian spheres $L \subset M$
satisfy $[L]^{*3} = (-1)^{\frac{n(n-1)}{2}+1}4 [L] q^n$
if $n$ is even, and $[L]=0$ (hence
$[L]^{*2}=0$) if $n$ is odd.
\subsubsection{An example which is not a sphere}
\label{sbsb:prod-spheres} All our examples so far were for Lagrangians
that are spheres. However, our theory is more general and applies to
other topological types of Lagrangians (see e.g.
Assumption~$\mathscr{L}$, Proposition~\ref{p:criterion} and
Theorem~\ref{t:cubic-eq}). Here is such an example with $L \approx
S^{m} \times S^{m}$.
Let $Q \subset \mathbb{C}P^{m+1}$ be the complex $m$-dimensional
quadric $Q = \{ [z_0: \ldots : z_{m+1}] \,|\, -z_0^2 + \ldots +
z_{m+1}^2 = 0 \}$ endowed with the symplectic structure induced from
${\mathbb{C}}P^{m+1}$. Then $S := \{ [z_0: \ldots : z_{m+1}] \,|\,
-z_0^2 + \ldots + z_{m+1}^2 = 0, z_i \in \mathbb{R} \}$ is a
Lagrangian sphere. The first Chern class $c_1$ of $Q$ equals the
Poincar\'e dual of $m h$, where $h$ is a hyperplane section of $Q$
associated to the projective embedding $Q \subset
{\mathbb{C}}P^{m+1}$. The minimal Chern number is $C_Q = m$ and $S$
has minimal Maslov number $N_S = 2m$. Note that $S$ does not satisfy
Assumption~$\mathscr{L}$ (since $N_S$ does not divide $m$). {\em
Henceforth we will assume that $m$ is even.}
Put $M = Q \times Q$ endowed with the split symplectic structure
induced from both factors and consider the Lagrangian submanifold $L
\subset M$ which is the product of two copies of $S$:
$$L := S \times S \subset Q \times Q.$$
Put $2n = \dim_{\mathbb{R}} M$ so that $\dim L = n = 2m$.
The symplectic manifold $Q \times Q$ has minimal Chern number $C_M =
m$ and the minimal Maslov number of $L$ is $N_{L} = 2m = n$. By
Proposition~\ref{p:criterion}, $L$ satisfies Assumption~$\mathscr{L}$.
For our calculations the following identities in the quantum homology
ring of $Q$ will be relevant (see e.g.~\cite{Beau:quant}):
\begin{enumerate}
\item $h * [S] = 0$.
\item $a * b = \frac{1}{2} \#(a \cdot b) (h^{*m} - 4 [Q]q^m)$ for
every $a, b \in H_m(Q;\mathbb{Q})_0$.
\end{enumerate}
To calculate $\Delta_L$ we compute $[L]^{*3}$ in $QH(Q \times Q)$. By
the K\"{u}nneth formula in quantum homology~\cite{McD-Sa:jhol} we have
$QH(Q \times Q; \mathbb{Z}[q]) \cong QH(Q; \mathbb{Z}[q])
\otimes_{\mathbb{Z}[q]} QH(Q; \mathbb{Z}[q])$. Together with the
previous identities (with $a=b=[S]$) this gives:
$$[L]*[L] = ([S]*[S]) \otimes ([S]*[S]) =(h^{*m} - 4[Q]q^m)
\otimes (h^{*m} - 4[Q]q^m),$$ and therefore
$$[L]^{*3} = (h^{*m}*[S] - 4[S]q^m) \otimes (h^{*m}*[S] - 4[S]q^m) =
16 [S] \otimes [S] q^{2m} = 16[L]q^{2m}.$$ It follows that $\sigma_L =
0$ and $\tau_L = 1$ (in the notation of Theorem~\ref{t:cubic-eq}),
hence $\Delta_L = 4\tau_L = 4$.
\section{Lagrangian Floer theory} \label{s:floer-setting} \setcounter{thm}{0}
Here we briefly recall some ingredients from Floer theory that are
relevant for this paper. These include Lagrangian Floer homology and
especially its realization as Lagrangian quantum homology (a.k.a pearl
homology). The reader is referred to~\cite{Oh:HF1, Oh:spectral,
FO3:book-vol1, FO3:book-vol2, Bi-Co:rigidity, Bi-Co:lagtop} for more
details.
\subsection{Monotone symplectic manifolds and Lagrangians} \setcounter{thm}{0}
\label{sb:monotone}
Let $(M, \omega)$ be a symplectic manifold. Denote by $c_1 \in H^2(M)$
the first Chern class of the tangent bundle $T(M)$ of $M$. Denote by
$H_2^S(M)$ the image of the Hurewicz homomorphism $\pi_2(M)
\longrightarrow H_2(M)$. We call $(M, \omega)$ {\em monotone} if there
exists a constant $\vartheta>0$ such that
$$A_{\omega} = \vartheta I_{c_1},$$ where
$A_{\omega}: H_2^S(M) \longrightarrow \mathbb{R}$ is the homomorphism
defined by integrating $\omega$ over spherical classes and $I_{c_1}$
is viewed as a homomorphism $H_2^S(M) \longrightarrow \mathbb{Z}$. We
denote by $C_M$ the positive generator of the subgroup
$\textnormal{image\,} I_{c_1} \subset \mathbb{Z}$ so that
$\textnormal{image\,} I_{c_1} = C_M \mathbb{Z}$. If
$\textnormal{image\,} I_{c_1} = 0$ we set $C_M = \infty$.
Let $L \subset M$ be a Lagrangian submanifold. Denote by $H_2^D(M,L)$ the
image of the Hurewicz homomorphism $\pi_2(M,L) \longrightarrow
H_2(M,L)$. We say that $L$ is {\em monotone} if there exists a
constant $\rho >0$ such that
$$A_{\omega} = \rho \mu,$$ where $A_{\omega}:H_2^D(M,L) \longrightarrow
\mathbb{R}$ is the homomorphism defined by integrating $\omega$ over
homology classes and $\mu: H_2^D(M,L) \longrightarrow \mathbb{Z}$ is
the Maslov index homomorphism. We denote by $N_L$ the positive
generator of the subgroup $\textnormal{image\,} \mu \subset
\mathbb{Z}$ so that $\textnormal{image\,}\mu = N_L \mathbb{Z}$.
Finally, denote by $j: H_2^S(M) \longrightarrow H_2^D(M,L)$ the
obvious homomorphism. Then we have $\mu(j(A)) = 2I_{c_1}(A)$ for every
$A \in H_2^S(M)$. Therefore, if $L$ is a monotone Lagrangian and
$I_{c_1} \neq 0$ then $(M, \omega)$ is also monotone and we have $N_L
\mid 2C_M$. When $\pi_1(L) = \{1\}$ we actually have $N_L = 2C_M$.
\subsection{Floer homology and Lagrangian quantum homology}
\label{sb:HF} \setcounter{thm}{0}
Let $L \subset M$ be a closed monotone Lagrangian submanifold with $2
\leq N_L \leq \infty$. Under the additional assumption that $L$ is
spin one can define the self Floer homology $HF(L,L)$ with
coefficients in $\mathbb{Z}$. This group is cyclically graded, with
grading in $\mathbb{Z} / N_L \mathbb{Z}$.
From the point of view of the present paper it is more natural to work
with Lagrangian quantum homology $QH(L)$ rather than with the Floer
homology $HF(L,L)$. This is justified by the fact that for an
appropriate choice of coefficients we have an isomorphism of rings
$QH(L) \cong HF(L,L)$. The advantage of $QH(L)$ in our context is that
it bears a simple and explicit relation to the singular homology
$H(L)$ of $L$. For example, under certain circumstances (relevant for
our considerations) and with the right coefficient ring, $QH(L)$ can
be viewed as a deformation of the singular homology ring $H(L)$
endowed with the intersection product.
We will now summarize the most basic properties of Lagrangian quantum
homology. The reader is referred to~\cite{Bi-Co:rigidity,
Bi-Co:lagtop} for the foundations of the theory.
Denote by $\Lambda = \mathbb{Z}[t^{-1}, t]$ the ring of Laurent
polynomials over $\mathbb{Z}$ graded so that the degree of $t$ is
$|t|=-N_L$. We denote by $QH^{\#}(L)$ the Lagrangian quantum homology
of $L$ with coefficients in $\mathbb{Z}$ and by $QH(L; \Lambda)$ the
one with coefficients in $\Lambda$. Thus $QH^{\#}(L)$ is cyclically
graded modulo $N_L$ and $QH(L;\Lambda)$ is $\mathbb{Z}$-graded and
$N_L$-periodic, i.e. $QH_i(L;\Lambda) \cong QH_{i-N_L}(L;\Lambda)$,
the isomorphism being given by multiplication by $t$. And we have
$QH_i(L; \Lambda) \cong QH^{\#}_{i \pmod{N_L}}(L)$, hence the grading
on $QH(L;\Lambda)$ is an unwrapping of the cyclic grading of
$QH^{\#}(L)$. Sometimes, when the context is clear we will write
$QH(L)$ for $QH(L;\Lambda)$.
The Lagrangian quantum homology has the following algebraic
structures. There exists a quantum product $$QH_i(L;\Lambda) \otimes
QH_j(L;\Lambda) \longrightarrow QH_{i+j-n}(L;\Lambda), \quad \alpha
\otimes \beta \longmapsto \alpha * \beta,$$ which turns
$QH(L;\Lambda)$ into a unital associative ring with unity $e_L \in
QH_n(L;\Lambda)$.
We now briefly recall relations between the Lagrangian and ambient
quantum homologies. Denote by $R = \mathbb{Z}[q^{-1}, q]$ the ring of
Laurent polynomials in the variable $q$, whose degree we set to be
$|q|=-2$. Denote by $QH(M;R)$ the quantum homology of $M$ with
coefficients in $R$, endowed with the quantum product $*$. The
Lagrangian quantum homology $QH(L; \Lambda)$ is a module over the
subring $QH(M; \Lambda) \subset QH(M;R)$, where $\Lambda$ is embedded
in $R$ by $t \mapsto q^{N_L/2}$. We denote this operation by
$$QH_i(M; \Lambda) \otimes QH_j(L; \Lambda)
\longrightarrow QH_{i+j-2n}(L; \Lambda), \quad a \otimes \alpha
\longmapsto a*\alpha.$$ The reason for using the same notation $*$ as
for the quantum product on $L$ is that the module operation is
compatible with the latter in the following sense:
\begin{equation} \label{eq:alg-identity} c*(\alpha*\beta) =
(c*\alpha)*\beta = (-1)^{(2n-|c|)(n-|\alpha|)}\alpha*(c*\beta),
\end{equation}
for every $c \in QH(M;\Lambda)$, $\alpha, \beta \in QH(L;\Lambda)$.
Note that the sign conventions in~\eqref{eq:alg-identity} are
compatible with the standard sign conventions for the intersection
product in singular homology.
The proof of identity~\eqref{eq:alg-identity} has been carried out
in~\cite{Bi-Co:qrel-long, Bi-Co:rigidity} over $\mathbb{Z}_2$ (hence
without taking signs into account), and the same proof carries over in
a straightforward way over $\mathbb{Z}$ using~\cite{Bi-Co:lagtop}.
Thus $QH(L;\Lambda)$ is an algebra (in the graded sense) over
$QH(M;\Lambda)$.
There is also a quantum inclusion map
$$i_L: QH_i(L; \Lambda) \longrightarrow QH_i(M; \Lambda),$$ which
is linear over the ring $QH(M; \Lambda)$, i.e. $i_L(c*\alpha) =
c*i_L(\alpha)$ for every $c \in QH(M;\Lambda)$ and $\alpha \in
QH(L;\Lambda)$. An important property of $i_L$ is that $i_L(e_L) =
[L]$, see~\cite{Bi-Co:lagtop}.
Next there is an augmentation morphism
$$\epsilon_L: QH(L;\Lambda) \longrightarrow \Lambda,$$ which is
induced from a chain level extension of the classical augmentation.
The augmentation satisfies the following identity,
see~\cite{Bi-Co:rigidity}:
\begin{equation} \label{eq:kron-aug} \langle PD(h), i_L(\alpha)
\rangle = \epsilon_L(h * \alpha), \quad \forall h \in H_*(M), \;
\alpha \in QH(L; \Lambda),
\end{equation}
where $PD$ stands for Poincar\'{e} duality and $\langle \cdot, \cdot
\rangle$ denotes the Kronecker pairing extended over $\Lambda$ in an
obvious way. Sometimes it will be more convenient to view the
augmentation as a map
$$\widetilde{\epsilon}_L: QH(L;\Lambda) \longrightarrow H_0(L;\Lambda)
= \Lambda [\textnormal{point}].$$ These augmentations $\epsilon_L$ and
$\widetilde{\epsilon}_L$ descend also to $QH^{\#}(L)$ and by a slight
abuse of notation we denote them by the same symbols:
$$\epsilon_L: QH^{\#}(L) \longrightarrow \mathbb{Z}, \quad
\widetilde{\epsilon}_L: QH^{\#}(L) \longrightarrow H_0(L).$$
As mentioned earlier we will not really use Floer homology in this
paper, but Lagrangian quantum homology instead. The justification for
replacing $HF(L,L)$ by $QH^{\#}(L)$ is due to the PSS isomorphism
$$PSS: HF_*(L,L) \longrightarrow QH^{\#}_*(L).$$ This is a ring
isomorphism which intertwines the Donaldson product and the quantum
product on $QH^{\#}(L)$. A version of $PSS$ works with coefficients
in $\Lambda$ too. For more details on the PSS isomorphism
see~\cite{Alb:PSS, Bar-Cor:NATO, Cor-La:Cluster-1, Bi-Co:rigidity}.
See also~\cite{Hu-La:Seidel-morph, Hu-La-Le:monodromy} for the
extension to $\mathbb{Z}$-coefficients.
Finally, we remark that everything mentioned above in this section
continues to hold (with obvious modifications) also with other choices
of base rings, replacing $\mathbb{Z}$ by $\mathbb{Q}$ or $\mathbb{C}$.
For $K = \mathbb{Q}$ or $\mathbb{C}$ we write $\Lambda_K =
K[t^{-1},t]$, $R_{K} = K[q^{-1}, q]$ for the associated rings of
Laurent polynomials and by $HF(L,L; \Lambda_K)$, $QH(L;\Lambda_K)$ and
$QH(M; R_K)$ the corresponding homologies. Sometimes it will be
useful to drop the Laurent polynomial rings $\Lambda_K$ and $R_K$ and
simply work with $HF(L,L;K)$, $QH(L;K)$ and $QH(M;K)$. Another
variation that will be used in the sequel is to replace $\Lambda_K$
and $R_K$ by polynomial rings (rather than Laurent polynomials), i.e.
work with coefficients in $\Lambda^+_K = K[t]$ and $R^+_K = K[q]$.
See~\cite{Bi-Co:rigidity, Bi-Co:Yasha-fest, Bi-Co:lagtop} for a
detailed account on this choice of coefficients. When the base ring
$K$ is obvious we will abbreviate $Q^+H(L) := QH(L; \Lambda^+_K)$ and
similarly for $Q^+H(M)$. (There is one exception to this
notation: in the introduction~\S\ref{s:intro} we denoted by $QH(M)$
the quantum homology $QH(M; R^+)$ in order to simplify the notation,
but henceforth we will stick to the notation just described.)
The homologies of the type $Q^+H$ will be called {\em positive quantum
homologies}. Again, everything described above continues to work for
the positive versions of quantum homologies with one important
exception: the PSS isomorphism does not hold over $\Lambda^+_K$ (at
least not for a straightforward version of Floer homology).
\subsection{Proof of Proposition~\ref{p:criterion}}
\label{sb:prf-p-criterion} \setcounter{thm}{0}
The proof appeals to a spectral sequence for calculating Lagrangian
quantum homology which is rather standard in symplectic topology. For
the sake of readability we have included in~\S\ref{s:app-calc} a
summary of the main ingredients of this technique.
Let $\{E^r_{p,q}, d^r\}_{_{r \geq 0}}$ be the spectral sequence
described in~\S\ref{sb:spec-seq}. By Theorem~\ref{t:spectral-seq} and
the assumptions of Proposition~\ref{p:criterion} we see that the
$E^1$-term of the sequence has the following form:
\[
\bigoplus_{p + q = n} E^1_{p,q} = (H_n(L; \mathbb{Q})\otimes P_0)
\oplus (H_0(L; \mathbb{Q})\otimes P_{n}).
\]
It now follows easily that the dimension of
$QH_n(L;\Lambda_{\mathbb{Q}})$ as a $\mathbb{Q}$-vector space is at
most $2$. We will now show that the dimension is exactly $2$.
We first claim that the unity is not trivial, $e_L \neq 0 \in
QH_n(L;\Lambda_{\mathbb{Q}})$. To see this consider the quantum
inclusion map $i_L:QH_n(L;\Lambda_{\mathbb{Q}}) \longrightarrow
QH_n(M;R_{\mathbb{Q}})$ from~\S\ref{sb:HF}. It is well
known~\cite{Bi-Co:lagtop} that $i_L(e_L) = [L]$. As $[L] \neq 0$ it
follows that $e_L \neq 0$.
By Poincar\'{e} duality there exists a class $c \in H_n(M;\mathbb{Q})$
such that $c \cdot [L] \neq 0$. Put $x := c * e_L \in
QH_0(L;\Lambda_{\mathbb{Q}})$. From~\eqref{eq:kron-aug} we get that
$\epsilon_L(x) \neq 0$. This implies that the two elements $xt^{-\nu},
e_L \in QH_n(L; \Lambda_{\mathbb{Q}})$ are linearly independent. It
follows that $\dim QH_n(L;\Lambda_{\mathbb{Q}}) = 2$.
From the above it now follows that the rank of $QH^{\#}_n(L)$ is
$2$. Finally, from the PSS isomorphism we obtain that $HF_n(L,L)$ has
rank $2$. \hfill \qedsymbol \medskip
\subsection{Eigenvalues of $c_1$ and Lagrangian submanifolds}
\label{sb:eigneval} \setcounter{thm}{0}
Let $L \subset M$ be a closed spin monotone Lagrangian submanifold
with $QH(L; \mathbb{C}) \neq 0$. Assume in addition that $N_L=2$. With
these assumptions one can define an invariant $\lambda_L \in
\mathbb{Z}$ which counts the number of Maslov-$2$ pseudo-holomorphic
disks $u:(D,
\partial D) \longrightarrow (M,L)$ whose boundary $u(\partial D)$ passes
through a generic point $p \in L$. The value of $\lambda_L$ turns out
to be independent of the almost complex structure as well as of the
generic point $p$. See~\cite{Bi-Co:lagtop} for more details. We
extend the definition of $\lambda_L$ to the case $N_L > 2$ by setting
$\lambda_L=0$.
Consider now the following operator $$P: QH(L;\Lambda_\mathbb{C})
\longrightarrow QH(L; \Lambda_{\mathbb{C}}), \quad \alpha \longmapsto
PD(c_1)*\alpha,$$ where $PD$ stands for Poincar\'{e} duality. By abuse
of notation we have denoted here by $c_1 \in H^2(M; \mathbb{C})$ the
image of the first Chern class of $T(M)$ under the change of
coefficients map $H^2(M;\mathbb{Z}) \to H^2(M;\mathbb{C})$.
The following is well known:
\begin{enumerate}
\item If $N_L = 2$, then $P(\alpha) = \lambda_L \alpha t$ for every
$\alpha \in QH(L;\Lambda_{\mathbb{C}})$.
\item If $N_L > 2$, then $P \equiv 0$.
\end{enumerate}
For the proof of~$(1)$, see~\cite{Aur:t-duality} for a special case
(where the statement is attributed to folklore, in particular also to
Kontsevich and to Seidel) and~\cite{Sher:Fano} for the general case.
As for~$(2)$, it follows immediately from the fact that the
restriction of $c_1$ to $L$ vanishes, $c_1|_{L} = 0 \in
H^2(L;\mathbb{C})$, together with degree reasons.
Denote by $\mathcal{I}_L \subset QH(M;R_{\mathbb{C}})$ the image of
the quantum inclusion map $i_L: QH(L;\Lambda_{\mathbb{C}})
\longrightarrow QH(M;R_{\mathbb{C}})$. Note that $\mathcal{I}_L$ is an
ideal of the ring $QH(M; R_{\mathbb{C}})$.
\begin{prop} \label{p:I_L-lambda} $\mathcal{I}_L \neq 0$ iff $QH(L;
\Lambda_{\mathbb{C}}) \neq 0$ and in that case $\lambda_L$ is an
eigenvalue of the operator
$$Q: QH(M; R_{\mathbb{C}}) \longrightarrow QH(M; R_{\mathbb{C}}),
\quad a \longmapsto PD(c_1)*a q^{-1}.$$ Moreover, $\mathcal{I}_L$
is a subspace of the eigenspace of $Q$ corresponding to the
eigenvalue $\lambda_L$. In particular if $[L] \neq 0 \in
H_n(M;\mathbb{C})$ then $[L]$ is an eigenvector of $Q$
corresponding to $\lambda_L$.
\end{prop}
\begin{rem}
Denote by $Q': QH(M;\mathbb{C}) \longrightarrow QH(M;\mathbb{C})$
the same operator as $Q$ but acting on $QH(M;\mathbb{C})$ instead
of $QH(M;R_{\mathbb{C}})$. Similarly, denote by
$\mathcal{I}'_L \subset QH(M;\mathbb{C})$ the image of $i_L$. The
statement of Proposition~\ref{p:I_L-lambda} continues to hold for
$Q'$ and $\mathcal{I}'_L$. Moreover, if $[L] \neq 0$ then
$$\dim_{\mathbb{C}} \mathcal{I}'_L \geq 2,$$ hence the multiplicity of the
eigenvalue $\lambda_L$ with respect to the operator $Q'$ is at
least $2$. Indeed, $[L] = i_L(e_L) \in \mathcal{I}'_L$. Now take $c
\in H_n(M;\mathbb{C})$ with $c \cdot [L] \neq 0$. As
$\mathcal{I}'_L$ is an ideal we have $c*[L] \in \mathcal{I}'_L$.
But $c*[L] = \#(c \cdot [L]) [\textnormal{point}] +
(\textnormal{other terms})$, hence $c*[L]$ is not proportional to
$[L]$. (Here $\#(c \cdot [L])$ stands for the intersection number
of $c$ and $[L]$.)
\end{rem}
\begin{proof}[Proof of Proposition~\ref{p:I_L-lambda}]
Assume that $QH(L;\Lambda_{\mathbb{C}}) \neq 0$. By duality for
Lagrangian quantum homology there exists $x \in
QH_0(L;\Lambda_{\mathbb{C}})$ with $\epsilon_L(x) \neq 0$.
(See~\cite{Bi-Co:rigidity}, Proposition~4.4.1. The proof there is
done over $\mathbb{Z}_2$ but the extension to any field is
straightforward in view of~\cite{Bi-Co:lagtop}).
From~\eqref{eq:kron-aug} (with $h=[M]$ and $\alpha = x$) it follows
that $i_L(x) \neq 0$, hence $\mathcal{I}_L \neq 0$. The opposite
assertion is obvious.
The statement about the eigenspace of $Q$ follows immediately from
the discussion about the operator $P$ and the fact that $i_L$ is a
$QH(M;R_{\mathbb{C}})$-module map.
Finally, note that $[L] \in \mathcal{I}_L$ since $[L] = i_L(e_L)$.
\end{proof}
The following observation shows that the eigenvalues corresponding to
different Lagrangians coincide under certain circumstances.
\begin{prop} \label{p:lambda-lambda'} Let $L, L' \subset M$ be two
closed monotone spin Lagrangian submanifolds. Assume that $[L]
\cdot [L'] \neq 0$. Then $\lambda_{L} = \lambda_{L'}$.
\end{prop}
\begin{proof}
We view $[L], [L']$ as elements of $QH_n(M;\mathbb{C})$. We have
$$\textnormal{PD}(c_1)*([L]*[L']) = (\textnormal{PD}(c_1)*[L])*[L'] =
\lambda_{L}[L]*[L'].$$ At the same time, since
$|\textnormal{PD}(c_1)|$ is even we also have
$$\textnormal{PD}(c_1)*([L]*[L']) = [L]*(\textnormal{PD}(c_1)*[L']) =
\lambda_{L'} [L]*[L'].$$ Since $[L]\cdot [L'] \neq 0$ we have
$[L]*[L'] \neq 0$ and the result follows.
\end{proof}
\subsection{More on the discriminant} \label{sb:more-discr} \setcounter{thm}{0}
\subsubsection{Well-definedness} \label{sbsb:discr-well-def} We start
with showing that the discriminant, as defined
in~\S\ref{sb:discr-intro} is independent of the choices of $p$ and
$x$. We first fix $p$ and show independence of its lift $x$. Indeed if
$y$ is another lift of $p$ then $y = x + r$ for some $r \in
\mathbb{Z}$. A straightforward calculation shows that
$$\sigma(p, y) = \sigma(p,x) + 2r, \quad \tau(p, y) = \tau(p,x) -
\sigma(p,x)r - r^2.$$ Another direct calculation shows that
$$\sigma(p,y)^2 + 4 \tau(p, y) = \sigma(p,x)^2 + 4\tau(p,x).$$
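In more detail, writing $\sigma = \sigma(p,x)$ and $\tau = \tau(p,x)$, the first pair of identities follows from
$$y^2 = (x+r)^2 = (\sigma + 2r)x + \tau + r^2 = (\sigma + 2r)y + \tau - \sigma r - r^2$$
(using $x = y - r$ in the last step), and substituting them into the discriminant gives
$$(\sigma + 2r)^2 + 4(\tau - \sigma r - r^2) = \sigma^2 + 4\sigma r + 4r^2 + 4\tau - 4\sigma r - 4r^2 = \sigma^2 + 4\tau.$$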
Assume now that $p' \in A / \mathbb{Z}$ is a different generator. We
then have $p' = -p$ and so we can choose $x' = -x$ as a lift of $p'$.
It easily follows that $$\sigma(p', x') = - \sigma(p,x), \quad
\tau(p', x') = \tau(p,x),$$ hence again $\sigma(p',x')^2 + 4 \tau(p',
x') = \sigma(p,x)^2 + 4\tau(p,x)$.
\hfill \qedsymbol \medskip
\subsubsection{The discriminant determines the isomorphism type of a
quadratic algebra} \label{sbsb:discr-inv-class}
\begin{lem}
Let $A$ and $B$ be two quadratic algebras over $\mathbb{Z}$. Then
$A$ is isomorphic to $B$ if and only if $\Delta_A = \Delta_B$.
\end{lem}
\begin{proof}
Fix group isomorphisms
\[
A \cong \mathbb{Z} \oplus \mathbb{Z}x, \quad B \cong \mathbb{Z}
\oplus \mathbb{Z} x',
\]
where $x \in A$, $x' \in B$ and write $x^2 = \sigma x + \tau$,
$x'^2 = \sigma' x' + \tau'$ with $\sigma, \sigma', \tau, \tau' \in
\mathbb{Z}$. Define two quadratic monic polynomials with integral
coefficients: $$f(X) = X^2 - \sigma X -\tau, \quad g(X) = X^2 -
\sigma' X - \tau'.$$ The map $\mathbb{Z}[X] \longrightarrow A$,
induced by $X \longmapsto x$, descends to a map $\mathbb{Z}[X] / (
f(X) ) \longrightarrow A$ which is easily seen to be an isomorphism
of rings. In a similar way we obtain a ring isomorphism
$\mathbb{Z}[X] / ( g(X) ) \cong B$. Note that $\Delta_A$ is equal
to the discriminant of $f$ and $\Delta_B$ to the discriminant of
$g$.
Assume that $A \cong B$. It is easy to see that all the ring
isomorphisms $\mathbb{Z}[X] / ( f(X) ) \cong \mathbb{Z}[X] / ( g(X)
)$ are induced by $X \longmapsto \pm X + r$, where $r \in
\mathbb{Z}$. It follows that $g(X) = f(\pm X+r)$ for some $r \in
\mathbb{Z}$ and thus $f$ and $g$ have the same discriminants, hence
$\Delta_A = \Delta_B$.
Conversely, assume that $\Delta_A = \Delta_B$. Then
$$\sigma^2 + 4\tau = \sigma'^2 + 4 \tau',$$ hence
$\sigma$ and $\sigma'$ have the same parity. Set $r = (\sigma -
\sigma') /2 \in \mathbb{Z}$ and consider the ring homomorphism
$\varphi: \mathbb{Z}[X] \longrightarrow \mathbb{Z}[X]$ induced by
$X \longmapsto X - r$. A simple calculation shows that
$\varphi(f) = g$, hence it descends to $\bar{\varphi} :
\mathbb{Z}[X] / ( f(X) ) \longrightarrow \mathbb{Z}[X] / ( g(X) )$.
It is easy to see that $\bar{\varphi}$ is invertible.
\end{proof}
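For example, the quadratic algebra $A = \mathbb{Z}[X]/(X^2 - X - 1)$ satisfies $x^2 = x + 1$ for $x$ the class of $X$, so $\sigma = \tau = 1$ and
$$\Delta_A = 1^2 + 4 \cdot 1 = 5;$$
by the lemma, any quadratic algebra over $\mathbb{Z}$ with discriminant $5$ is isomorphic to this one.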
\subsubsection{A useful extension over other rings}
\label{sbsb:discr-ext-Q}
Let $A$ be a quadratic algebra over $\mathbb{Z}$ as described
in~\S\ref{sb:discr-intro}. Let $K$ be a commutative ring which extends
$\mathbb{Z}$, i.e. we have $\mathbb{Z} \subset K$ as a subring. For
simplicity we will assume that $K$ is torsion-free. We will mainly
consider $K = \mathbb{Q}$ or $K = \mathbb{C}$. Write $A_{K} = A
\otimes K$.
For practical purposes it will be sometimes useful to calculate
$\Delta_A$ using $A_{K}$ rather than via $A$ itself. This can be done
as follows. From the sequence~\eqref{eq:ex-seq-A} we obtain the
following exact sequence:
\begin{equation}
0 \longrightarrow K \longrightarrow A_{K}
\xrightarrow{\;\;\epsilon \;\;} K p \longrightarrow 0,
\end{equation}
where as before, $\epsilon$ is the projection to the quotient and $p$
stands for a generator of $A / \mathbb{Z} \subset A_{K}/K$. Pick a
lift $x \in A_{K}$ of $p$ and define $\sigma(p,x), \tau(p,x)$ by the
same recipe as in~\S\ref{sb:discr-intro}, only that now these two
numbers belong to $K$ rather than to $\mathbb{Z}$. A simple
calculation, similar to~\S\ref{sbsb:discr-well-def} above shows that
we still have $\Delta_A = \sigma(p,x)^2 + 4\tau(p,x)$ (and of course
despite the calculation being done in $K$ we still have $\Delta_A \in
\mathbb{Z}$).
\begin{rem}
It is essential here that the generator $p$ is integral, i.e. that
$p \in A_{K}/K$ was chosen to come from $A/\mathbb{Z}$. If we allow
$p$ to be replaced by an arbitrary non-trivial element of $A_{K}/K$ then the
corresponding discriminant will depend on that choice, but not on
the choice of the lift $x$. In fact, if $p' = c p$, $c \in K$ then
the discriminants corresponding to $p'$ and $p$ are related by
$\Delta(p') = c^2 \Delta(p)$. Therefore, when $K = \mathbb{Q}$ for
example, the sign of the discriminant is an invariant of
$A_{\mathbb{Q}}$. The algebraic properties of $A_{\mathbb{Q}}$
change depending on the sign of the discriminant and whether it is
a perfect square or not.
\end{rem}
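To verify the relation $\Delta(p') = c^2 \Delta(p)$ stated in the remark above: if $x \in A_K$ is a lift of $p$ with $x^2 = \sigma x + \tau$, then $x' = cx$ is a lift of $p' = cp$ and
$$x'^2 = c^2 x^2 = c^2(\sigma x + \tau) = (c\sigma) x' + c^2 \tau,$$
so $\sigma' = c\sigma$ and $\tau' = c^2 \tau$, whence $\Delta(p') = c^2\sigma^2 + 4c^2\tau = c^2 \Delta(p)$.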
\subsubsection{The case of $A = QH^{\#}_n(L)$} \label{sbsb:Q=QH_n} Let
$L \subset M$ be a Lagrangian submanifold satisfying
conditions~$(1)$~--~$(3)$ of Assumption~$\mathscr{L}$. Fix a spin
structure on $L$. Denote by $e_L \in QH^{\#}_n(L)$ the unity. Without
loss of generality we may assume that $QH^{\#}_n(L)$ is torsion-free,
otherwise we just replace it by $QH^{\#}_n(L)/T$, where $T$ is the
torsion ideal. Thus $QH^{\#}_n(L)$ is a quadratic algebra over
$\mathbb{Z}$.
By duality for Lagrangian quantum homology~\cite{Bi-Co:rigidity,
Bi-Co:lagtop}, the augmentation $\widetilde{\epsilon}_L :
QH^{\#}_0(L) \longrightarrow H_0(L;\mathbb{Z})$ is surjective. Keeping
in mind that in our case $QH^{\#}_0(L) = QH^{\#}_n(L)$ (since $N_L
\mid n$) we obtain the following exact sequence:
$$0 \longrightarrow \mathbb{Z} e_L \longrightarrow QH^{\#}_n(L)
\xrightarrow{\;\;\widetilde{\epsilon}_L \;\;} H_0(L;\mathbb{Z})
\longrightarrow 0.$$ Let $K$ be a torsion-free commutative ring that
contains $\mathbb{Z}$. Let $p = [\textnormal{point}] \in
H_0(L;\mathbb{Z})$ be the homology class of a point. Tensoring the
last sequence by $K$ we obtain:
\begin{equation} \label{eq:quad-alg-QHn-2} 0 \longrightarrow K e_L
\longrightarrow QH^{\#}_n(L;K)
\xrightarrow{\;\;\widetilde{\epsilon}_L \;\;} K p \longrightarrow
0.
\end{equation}
In order to calculate $\Delta_L$, choose a lift $x \in QH^{\#}_n(L;K)$
of $p$ with respect to $\widetilde{\epsilon}_L$. Then we have
\begin{equation} \label{eq:x*x-QH} x*x = \sigma(p,x) x + \tau(p,x)e_L,
\end{equation}
with some $\sigma(p,x), \tau(p,x) \in K$. The discriminant can then be
calculated by $$\Delta_L = \sigma(p,x)^2 + 4 \tau(p,x).$$
In the following we will need to use the equality~\eqref{eq:x*x-QH}
but in $QH_n(L;\Lambda_K)$ rather than in $QH^{\#}_n(L;K)$. We have
$QH_0(L;\Lambda_K) = t^{\nu} QH_n(L;\Lambda_K)$, with $\nu = n/N_L$.
The lift $x$ of $p$ has now to be chosen in $QH_0(L;\Lambda_K)$ and
the previous equation now takes place in $QH_0(L;\Lambda_K)$ and has
the following form:
\begin{equation} \label{eq:x*x-QH-t} x*x = \sigma(p,x) x t^{\nu} +
\tau(p,x)e_L t^{2 \nu}.
\end{equation}
Finally, we mention that sometimes it is more convenient to define the
discriminant using the positive Lagrangian quantum homology $QH(L;
\Lambda^+_K)$ rather than $QH(L; \Lambda_K)$. The resulting
discriminant is obviously the same.
\section{Introduction and main results} \setcounter{thm}{0}
\label{s:intro}
Let $M^{2n}$ be a closed symplectic $2n$-dimensional manifold. Assume
further that $M$ is monotone with minimal Chern number $C_M$
(see~\S\ref{sb:monotone} below for the definitions). Denote by $QH(M)$
the quantum homology of $M$ with coefficients in the ring
$\mathbb{Z}[q]$, where the degree of the variable $q$ is
$|q|=-2$. Denote by $*$ the quantum product on $QH(M)$ and for a class
$a \in QH(M)$, $k \in \mathbb{N}$, we write $a^{*k}$ for the $k$'th
power of $a$ with respect to this product.
Let $S \subset M$ be an oriented Lagrangian $n$-sphere. Denote by $[S]
\in QH_n(M)$ the homology class represented by $S$ in the quantum
homology of $M$. Our first result shows that $[S]$ always satisfies a
cubic or quadratic equation of a very specific type:
\begin{mainthm} \label{t:cubic-eq-sphere}
\begin{enumerate}
\item \label{i:n-odd} If $n=$~odd then $[S]*[S]=0$.
\item Assume $n=$~even. Then:
\begin{enumerate}[(i)]
\item \label{i:div} If $C_M | n$ then there exists a unique
$\gamma_S \in \mathbb{Z}$ such that $[S]^{*3} = \gamma_S [S]
q^{n}$. If we assume in addition that $2C_M \centernot| n$,
then $\gamma_S$ is divisible by $4$, while if $2C_M | n$ then
$\gamma_S$ is either $0 (\bmod \,4)$ or $1 (\bmod \,4)$.
\item \label{i:ndiv} If $C_M \centernot| n$ then $[S]^{*3} =
0$.
\end{enumerate}
\end{enumerate}
\end{mainthm}
The proof of Theorem~\ref{t:cubic-eq-sphere}, given
in~\S\ref{sb:prf-cubic-eq-S}, follows from a simple argument involving
Lagrangian Floer homology. The cases~\eqref{i:n-odd},~\eqref{i:ndiv}
are particularly simple, whereas case~\eqref{i:div} splits into two
sub-cases:
\begin{enumerate}
\item [(2i-a)] \label{i:2CM-n} $2C_M | n$.
\item[(2i-b)] \label{i:CM-n} $C_M | n$, but $2C_M \centernot| n$.
\end{enumerate}
We will see below that out of these two sub-cases the most interesting
is~(2i-a). In that case the constant $\gamma_S$ has other
interpretations coming from Floer theory and enumerative geometry of
holomorphic disks. These will be explained in detail in the sequel.
\begin{rem}
\begin{enumerate}
\item When $n$ is even it is easy to see that $[S]
\in H_n(M)$ is neither $0$ nor a torsion class. Therefore in that
case $\gamma_S$ is uniquely determined.
\item Points~\eqref{i:n-odd} and~\eqref{i:ndiv} of the theorem
cover the symplectically aspherical case (i.e.
$[\omega]|_{\pi_2(M)}=0$) if we set $C_M = \infty$. Of course,
the statement in that case is completely obvious.
\item A version of Theorem \ref{t:cubic-eq-sphere} also holds
in the non-monotone case for Lagrangian 2-spheres, the precise
statement can be found in~\S\ref{s:non-monotone}.
\item Theorem~\ref{t:cubic-eq-sphere} continues to hold also when
$S$ is a $\mathbb{Z}$-homology sphere, except possibly when $2C_M
| n$. The difference between the case $2C_M | n$ and the others
is that in that case $[S]$ a priori satisfies only the cubic
equation~\eqref{eq:cubic-L} from Theorem~\ref{t:cubic-eq} below.
For the vanishing of the coefficient of $[S]^{*2}$ we will use
the Dehn twist along $S$ (see Corollary~\ref{c:sig=0} and the
short discussion after it), hence we need to assume that $S$ is
diffeomorphic to a sphere. At the same time we are not aware of
interesting computable examples where $S$ is a
$\mathbb{Z}$-homology sphere yet not a genuine sphere.
\end{enumerate}
\end{rem}
For the rest of the introduction we concentrate on case~(2i-a) and its
possible generalizations. Assume from now on that $L \subset M$ is a
Lagrangian submanifold (not necessarily a sphere). Denote by
$HF_*(L,L)$ the self Floer homology of $L$ with coefficients in
$\mathbb{Z}$. See~\S\ref{s:floer-setting} for the Floer theoretical
setting. In what follows we will recurringly appeal to the following
set of assumptions or to a subset of it:
\begin{assumption-L}
\begin{enumerate}
\item $L$ is closed (i.e. compact without boundary). Furthermore
$L$ is monotone with minimal Maslov number $N_L$ that satisfies
$N_L \mid n$ (see~\S\ref{sb:monotone} for the definitions). Set
$\nu = n/N_L$.
\item $L$ is oriented. Moreover we assume that $L$ is spinable
(i.e. can be endowed with a spin structure).
\item $HF_n(L,L)$ has rank $2$.
\item Write $\chi = \chi(L)$ for Euler-characteristic of $L$. We
assume that $\chi \neq 0$.
\end{enumerate}
\end{assumption-L}
Note that conditions $(1)$ and $(2)$ together imply that $n=$ even,
since orientable Lagrangians have $N_L=$ even. Independently,
conditions $(2)$ and $(4)$ also imply that $n=$~even. As we will see
later there are many Lagrangian submanifolds that satisfy
Assumption~$\mathscr{L}$ -- for example, even dimensional Lagrangian
spheres in monotone symplectic manifolds $M$ with $2C_M | n$.
See~\S\ref{sb:exps} and~\S\ref{s:examples} for more examples.
{Unless otherwise stated, from now on we implicitly assume all
Lagrangian submanifolds to be connected.}
\subsection{The Lagrangian cubic equation} \label{sb:lcubic-eq} Here
we need to work with $\mathbb{Q}$ as the base ring. Denote by $QH(M;
\mathbb{Q}[q])$ the quantum homology of $M$ with coefficients in the
ring $\mathbb{Q}[q]$. Given an oriented Lagrangian submanifold $L
\subset M$ denote by $[L] \in QH_n(M;\mathbb{Q}[q])$ its homology
class in the quantum homology of the ambient manifold $M$. We will
also make use of the following notation: $\varepsilon =
(-1)^{n(n-1)/2}$.
Our next result is the following.
\begin{mainthm}[The Lagrangian cubic equation] \label{t:cubic-eq} Let
$L \subset M$ be a Lagrangian submanifold satisfying
assumption~$\mathscr{L}$. Then there exist unique {constants
$\sigma_L \in \tfrac{1}{\chi^2}\mathbb{Z}$, $\tau_L \in
\tfrac{1}{\chi^3} \mathbb{Z}$} such that the following equation
holds in $QH(M;\mathbb{Q}[q])$:
\begin{equation} \label{eq:cubic-L} [L]^{*3} - \varepsilon \chi
\sigma_L [L]^{*2}q^{n/2} - \chi^2 \tau_L [L]q^{n} = 0.
\end{equation}
{If $\chi$ is square-free then $\sigma_L \in
\tfrac{1}{\chi}\mathbb{Z}$ and $\tau_L \in
\tfrac{1}{\chi^2}\mathbb{Z}$.} Moreover, the constant $\sigma_L$
can be expressed in terms of genus $0$ Gromov-Witten invariants as
follows:
\begin{equation} \label{eq:GW-LLL} \sigma_L = \frac{1}{\chi^2}
\sum_A GW^M_{A,3}([L],[L],[L]),
\end{equation}
where the sum is taken over all classes $A \in H_2(M)$ with
$\langle c_1, A \rangle = n/2$.
\end{mainthm}
In~\S\ref{s:lag-cubic-eq} we will prove a more general result
concerning a Lagrangian submanifold $L$ and an arbitrary class $c \in
H_n(M)$ which satisfies $c \cdot [L] \neq 0$. We will prove that they
satisfy a mixed equation of degree three involving $[L]$ and $c$.
Equation~\eqref{eq:cubic-L} is the special case $c = [L]$.
Here is an immediate corollary of Theorem~\ref{t:cubic-eq}:
\begin{maincor} \label{c:sig=0} Let $L \subset M$ be a Lagrangian
submanifold satisfying Assumption $\mathscr{L}$. Assume in addition
that there exists a symplectic diffeomorphism $\varphi: M
\longrightarrow M$ such that $\varphi_*([L]) = -[L]$. Then
$\sigma_L = 0$, hence equation~\eqref{eq:cubic-L} reads in this
case:
\begin{equation*}
[L]^{*3} - \chi^2 \tau_L [L]q^{n} = 0.
\end{equation*}
\end{maincor}
When $L$ is a Lagrangian sphere in a symplectic manifold $M$ with
$2C_M | n$ then point~\eqref{i:div} of Theorem~\ref{t:cubic-eq-sphere}
follows from Corollary~\ref{c:sig=0}. Indeed, we can take $\varphi$ to
be the Dehn twist along $L$. The Picard-Lefschetz formula (see
e.g.~\cite{Dimca:sing-hyper-book, Ar:sing-theory-I}) gives
$\varphi_*([L]) = -[L]$ since $n=\dim L$ is even and $\chi = 2$.
Corollary~\ref{c:sig=0} then implies that $\sigma_L = 0$ (and we have
$\gamma_L = 4 \tau_L$). { Note that in this case we have
$\tau_L \in \tfrac{1}{4}\mathbb{Z}$.}
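In particular, combining $\sigma_L = 0$ and $\gamma_L = 4\tau_L$ with Theorem~\ref{t:rel-to-discr} below, the discriminant of such a Lagrangian sphere can be read off directly from the cubic equation:
$$\Delta_L = \sigma_L^2 + 4\tau_L = 4\tau_L = \gamma_L.$$
Note that this is consistent with Theorem~\ref{t:cubic-eq-sphere}~\eqref{i:div}, which asserts that $\gamma_L \equiv 0$ or $1 \pmod 4$ when $2C_M \mid n$, as well as with the fact that the discriminant of a quadratic algebra is always $0$ or $1 \pmod 4$.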
\begin{proof}[Proof of Corollary~\ref{c:sig=0}]
Applying $\varphi_*$ to the equation~\eqref{eq:cubic-L} and
comparing the result to~\eqref{eq:cubic-L} yields $\varepsilon \chi
\sigma_L [L]^{*2}=0$. Since $\chi \neq 0$ it follows that $\sigma_L
[L]^{*2}=0$. But $[L] \cdot [L] = \varepsilon \chi \neq 0$, hence
$[L]^{*2} \neq 0$. This implies that $\sigma_L=0$.
\end{proof}
\subsection{The discriminant} \label{sb:discr-intro} \setcounter{thm}{0} Let $A$
be a quadratic algebra over $\mathbb{Z}$. By this we mean that $A$ is
a commutative unital ring such that $\mathbb{Z}$ embeds as a subring
of $A$, $\mathbb{Z} \to A$, and furthermore that $A/\mathbb{Z} \cong
\mathbb{Z}$. Thus the underlying additive abelian group of $A$ is a
free abelian group of rank $2$. Pick a generator $p \in A/\mathbb{Z}$
so that $A/\mathbb{Z} = \mathbb{Z} p$. We have the following exact
sequence:
\begin{equation} \label{eq:ex-seq-A} 0 \longrightarrow \mathbb{Z}
\longrightarrow A \xrightarrow{\;\;\epsilon \;\;} \mathbb{Z} p
\longrightarrow 0,
\end{equation}
where the first map is the ring embedding and $\epsilon$ is the
obvious projection. Choose a lift $x \in A$ of $p$, i.e. $\epsilon(x)
= p$. Then additively we have $A \cong \mathbb{Z} x \oplus
\mathbb{Z}$. With these choices there exist $\sigma(p,x), \tau(p,x)
\in \mathbb{Z}$ such that $$x^2 = \sigma(p,x) x + \tau(p,x).$$ The
integers $\sigma(p,x), \tau(p,x)$ depend on the choices of $p$ and of
$x$. However, a simple calculation (see~\S\ref{sbsb:discr-well-def})
shows that the following expression
\begin{equation} \label{eq:discr-A} \Delta_A := \sigma(p,x)^2 + 4
\tau(p,x) \in \mathbb{Z}
\end{equation}
is independent of $p$ and $x$, hence is an invariant of the
isomorphism type of $A$. In fact in~\S\ref{sbsb:discr-inv-class} we
show that $\Delta_A$ determines the isomorphism type of $A$. We call
$\Delta_A$ the discriminant of $A$.
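For example, for the Gaussian integers $A = \mathbb{Z}[i]$ we may take $p$ to be the class of $i$ and the lift $x = i$; then
$$x^2 = 0 \cdot x + (-1),$$
so $\sigma(p,x) = 0$, $\tau(p,x) = -1$ and $\Delta_A = -4$.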
\begin{remsnonum}
\begin{enumerate}
\item Another description of $\Delta_A$ is the following. Write
$A$ as $A \cong \mathbb{Z}[T] / (f(T))$, where $f(T) \in
\mathbb{Z}[T]$ is a monic quadratic polynomial. Then $\Delta_A$
is the discriminant of $f(T)$ (and is independent of the choice
of $f(T)$). In particular $A_{\mathbb{C}} := A \otimes
\mathbb{C}$ is semi-simple iff $\Delta_A \neq 0$.
\item When $\Delta_A$ is not a square $A_{\mathbb{Q}} := A
\otimes \mathbb{Q}$ is a quadratic number field. The
discriminant $\Delta_A$ is related to the discriminant of
$A_{\mathbb{Q}}$ as defined in number theory.
\item It is easy to see from~\eqref{eq:discr-A} that the only
values $\Delta_A (\bmod \,4)$ can assume are $0$ and $1$.
\end{enumerate}
\end{remsnonum}
Let $L$ be a Lagrangian submanifold satisfying conditions $(1)-(3)$ of
Assumption~$\mathscr{L}$ and choose a spin structure on $L$ compatible
with its orientation. Consider $A = HF_n(L,L)$ endowed with the
Donaldson product $$*: HF_n(L,L) \otimes HF_n(L,L) \longrightarrow
HF_n(L,L), \quad a \otimes b \longmapsto a*b.$$ Recall that $A$ is a
unital ring with a unit which we denote by $e_L \in HF_n(L,L)$. The
conditions $(1)-(3)$ of Assumption~$\mathscr{L}$ ensure that $A$ is a
quadratic algebra over $\mathbb{Z}$. (In case $A$ has torsion we just
replace it by $A/T$, where $T$ is its torsion ideal.) Denote by
$\Delta_L$ the discriminant of $A$, $\Delta_L := \Delta_A$ as defined
in~\eqref{eq:discr-A}. (We suppress here the dependence on the spin
structure, as we will soon see that in our case $\Delta_L$ does not
depend on it.)
The following theorem shows that the discriminant $\Delta_L$ depends
only on the class $[L] \in QH_n(M)$ and can be computed by means of
the ambient quantum homology of $M$.
\begin{mainthm} \label{t:rel-to-discr} Let $L \subset M$ be a
Lagrangian submanifold satisfying Assumption~$\mathscr{L}$. Let
$\sigma_L, \tau_L \in \mathbb{Q}$ be the constants from the cubic
equation~\eqref{eq:cubic-L} in Theorem~\ref{t:cubic-eq}. Then
$$\Delta_L = \sigma_L^2 + 4 \tau_L.$$
\end{mainthm}
The proof appears in~\S\ref{s:lag-cubic-eq}.
\begin{remsnonum}
\begin{enumerate}
\item \textbf{Warning:} The pair of coefficients $\sigma_L,
\tau_L$ and $\sigma(p,x), \tau(p,x)$ should not be confused.
The first pair is always uniquely determined by $[L]$ and can be
read off the ambient quantum homology of $M$ via the cubic
equation~\eqref{eq:cubic-L}. In contrast, the second pair
$\sigma(p,x), \tau(p,x)$ are defined via Lagrangian Floer
homology and strongly depend on the choice of the lift $x$ of
$p$. For example, we have seen that if $L$ is a sphere then
$\sigma_L=0$, but as we will see later (e.g.
in~\S\ref{s:disc-lcob}) for some (useful) choices of $x$ we have
$\sigma(p,x) \neq 0$. Additionally, $\sigma(p,x), \tau(p,x) \in
\mathbb{Z}$ while $\sigma_L, \tau_L \in \mathbb{Q}$. Still, the
two pairs of coefficients are related in that $\sigma(p,x)^2 + 4
\tau(p,x) = \sigma_L^2 + 4 \tau_L = \Delta_L$.
As we will see in the proof of Theorem~\ref{t:rel-to-discr}, the
coefficients $\sigma_L, \tau_L$ do occur as $\sigma(p,x_0),
\tau(p,x_0)$ but for a special choice of $x_0$, which however
requires working over $\mathbb{Q}$.
\item A different version of the discriminant $\Delta_L$ was
previously defined and studied by Biran-Cornea
in~\cite{Bi-Co:lagtop}. In that paper the discriminant occurs as
an invariant of a quadratic form defined on $H_{n-1}(L)$ via
Floer theory. In the case $L$ is a $2$-dimensional Lagrangian
torus the discriminant from~\cite{Bi-Co:lagtop} and $\Delta_L$,
as defined above, happen to coincide due to the associativity of
the product of $HF_n(L,L)$. Moreover, in dimension $2$,
$\Delta_L$ has an enumerative description in terms of counting
holomorphic disks with boundary on $L$ which satisfy certain
incidence conditions. This description continues to hold also
for $2$-dimensional Lagrangian spheres with $N_L = 2$ (or more
generally for all $2$-dimensional Lagrangian submanifolds
satisfying Assumption~$\mathscr{L}$) and the proof is the same
as in~\cite{Bi-Co:lagtop}.
\item Since $\sigma_L, \tau_L$ do not depend on the spin
structure chosen for $L$ (although $\sigma(p,x)$ and $\tau(p,x)$
do) it follows from Theorem~\ref{t:rel-to-discr} that $\Delta_L$
does not depend on that choice either. As for the orientation on
$L$, if we denote $\bar{L}$ the Lagrangian $L$ with the opposite
orientation then it follows from Theorem~\ref{t:cubic-eq} that
$\sigma_{\bar{L}} = - \sigma_{L}$ and $\tau_{\bar{L}} = \tau_L$.
In particular $\Delta_{\bar{L}} = \Delta_L$.
\end{enumerate}
\end{remsnonum}
The next theorem is concerned with the behavior of the discriminant
under Lagrangian cobordism. We refer the reader to~\cite{Bi-Co:cob1}
for the definitions.
\begin{mainthm} \label{t:cob} Let $L_1, \ldots, L_r \subset M$ be
monotone Lagrangian submanifolds, each satisfying conditions $(1)$
-- $(3)$ of Assumption~$\mathscr{L}$. Let $V^{n+1} \subset
\mathbb{R}^2 \times M$ be a connected monotone Lagrangian cobordism
whose ends correspond to $L_1, \ldots, L_r$ and assume that $V$
admits a spin structure. Denote by $N_V$ the minimal Maslov number
of $V$ and assume that:
\begin{enumerate}
\item $H_{j N_V}(V,\partial V) = 0$ for every $j$.
\item $H_{1+jN_V}(V) = 0$ for every $j$.
\end{enumerate}
Then $\Delta_{L_1} = \cdots = \Delta_{L_r}$. Moreover if $r \geq 3$
then $\Delta_{L_i}$ is a perfect square for every $i$.
\end{mainthm}
The proof is given in~\S\ref{s:disc-lcob}. As a corollary we obtain:
\begin{maincor} \label{c:del_1=del_2} Let $(M, \omega)$ be a monotone
symplectic manifold with $2 C_M \mid n$, where $C_M$ is the minimal
Chern number of $M$. Let $L_1, L_2 \subset M$ be two Lagrangian
spheres that intersect transversely at exactly one point. Then
$\Delta_{L_1} = \Delta_{L_2}$ and moreover this number is a perfect
square.
\end{maincor}
We will in fact prove a stronger result in~\S\ref{sb:lag-1-pt} (see
Corollary~\ref{c:L1-L2-1pt}).
\subsection{Examples} \label{sb:exps} \setcounter{thm}{0}
We begin with a topological criterion that assures that condition~(3)
in Assumption~$\mathscr{L}$ is satisfied. This provides us with examples
of Lagrangian submanifolds to which the theory applies.
\begin{mainprop} \label{p:criterion} Let $L \subset M$ be an oriented
Lagrangian submanifold satisfying condition~$(1)$ of
Assumption~$\mathscr{L}$. Assume in addition that:
\begin{enumerate}
\item $[L] \neq 0 \in H_n(M;\mathbb{Q})$ (this is satisfied e.g.
when $\chi(L) \neq 0$).
\item $H_{j N_L}(L) = 0$ for every $0<j<\nu$.
\end{enumerate}
Then condition $(3)$ in Assumption~$\mathscr{L}$ is satisfied too.
In particular Lagrangian spheres $L$ that satisfy condition $(1)$
of Assumption~$\mathscr{L}$ satisfy the other three conditions in
Assumption~$\mathscr{L}$.
\end{mainprop}
The proof appears in~\S\ref{sb:prf-p-criterion}.
We now provide a sample of examples. More details will be given
in~\S\ref{s:examples}.
\subsubsection{Lagrangian spheres in blow-ups of $\mathbb{C}P^2$}
\label{sbsb:intro-exp-blcp2}
Let $(M_k, \omega_k)$ be the monotone symplectic blow-up of
${\mathbb{C}}P^2$ at $2 \leq k \leq 6$ points. We normalize $\omega_k$
so that it is cohomologous to $c_1$. Denote by $H \in H_2(M_k)$ the
homology class of a line not passing through the blown up points and
by $E_1, \ldots, E_k \in H_2(M_k)$ the homology classes of the
exceptional divisors over the blown up points. With this notation the
Poincar\'{e} dual of the cohomology class of the symplectic form
$[\omega_k] \in H^2(M_k)$ satisfies $$PD [\omega_k] = PD(c_1) = 3H -
E_1 - \cdots - E_k.$$
The Lagrangian spheres $L \subset M_k$ lie in the following homology
classes (see~\S\ref{sb:lag-spheres-blow-ups} for more details):
\begin{enumerate}
\item For $k=2$: $\pm (E_1 - E_2)$.
\item For $3 \leq k \leq 5$: $\pm(E_i - E_j)$, $i < j$, and $\pm(H -
E_i - E_j - E_l)$ with $i<j<l$.
\item For $k=6$ we have the same homology classes as in~$(2)$ and in
addition the class $\pm(2H - E_1 - \cdots - E_6)$.
\end{enumerate}
Note that all these Lagrangian spheres satisfy
Assumption~$\mathscr{L}$ since $N_L = 2$.
The discriminants of these Lagrangian spheres are gathered in
Table~\ref{tb:classes-intro}, the detailed computations being
postponed to~\S\ref{s:examples}. The column under $\lambda_L$ will be
explained in~\S\ref{sb:eigneval}.
\begin{table}[h]
\begin{center}
\begin{tabular}{l | l | r | r}
& $[L]$ & $\Delta_L$ & $\lambda_L$ \\
\hline
$M_2$ & $\pm(E_1 - E_2)$ & 5 & -1 \\
\hline
$M_3$ & $\pm(E_i - E_j)$ & 4 & -2 \\
& $\pm(H - E_1 - E_2 - E_3)$ & -3 & -3 \\
\hline
$M_4$ & $\pm(E_i - E_j)$ & 1 & -3 \\
& $\pm(H - E_i - E_j - E_l)$ & 1 & -3 \\
\hline
$M_5$ & $\pm(E_i - E_j)$ & 0 & -4 \\
& $\pm(H - E_i - E_j - E_l)$ & 0 & -4 \\
\hline
$M_6$ & $\pm(E_i - E_j)$ & 0 & -6 \\
& $\pm(H - E_i - E_j - E_l)$ & 0 & -6 \\
& $\pm(2H - E_1 - \ldots - E_6)$ & 0 & -6
\end{tabular}
\vspace{4mm}
\caption{Classes representing Lagrangian spheres and their
discriminants.}
\label{tb:classes-intro}
\end{center}
\end{table}
The Lagrangian spheres in the three homology classes $E_i - E_j$,
$i<j$, of $M_3$ all have the same discriminant. This can also be seen
by noting that one can choose three Lagrangian spheres $L_1, L_2,
L_3$, one in each of these homology classes so that every pair of them
intersects transversely at exactly one point. The equality of their
discriminants (as well as the fact that they are perfect squares)
then follows from Corollary~\ref{c:del_1=del_2}. We elaborate more on
these examples in~\S\ref{s:examples}.
\subsubsection{Lagrangian spheres in hypersurfaces of
${\mathbb{C}}P^{n+1}$} \label{sbsb:exp-intro-hypsurf} Let $M^{2n}
\subset \mathbb{C}P^{n+1}$ be a hypersurface of degree $d \leq n+1$
endowed with the induced symplectic form. By the assumption on $d$,
$M$ is monotone (in fact Fano) and the minimal Chern number is $C_M =
n+2-d$. Note that when $d \geq 2$, $M$ contains Lagrangian spheres.
Assume further that $n\geq 3$, and $d \geq 3$. Let $L \subset M$ be a
Lagrangian sphere, hence $[L]$ belongs to the primitive homology of
$M$ (see~\cite{Gr-Ha:alg-geom, Voi:hodge-book-I}). Using the
description of the quantum homology of $M$
from~\cite{Co-Ji:QH-hpersurf, Gi:equivariant} we obtain $[L]^{*3}=0$.
Whenever $n$ is a multiple of $2C_M = 2(n+2-d)$ the Lagrangian spheres
$L \subset M$ satisfy Assumption~$\mathscr{L}$, hence the discriminant
is defined and we obtain $\Delta_L = 0$.
Consider now the case $d=2$, i.e. $M$ is the quadric of complex
dimension $n$, and let $S \subset M$ be a Lagrangian sphere. We have
$C_M = n$, so case~\eqref{i:div} of Theorem~\ref{t:cubic-eq-sphere}
applies. If $n=$~odd, then $H_n(M)=0$, hence $[S]=0$. If $n=$~even,
then from the quantum product in the quadric we obtain:
\[ [S]^{*3} = (-1)^{\frac{n(n-1)}{2}+1}4 [S] q^n.
\]
More details on all the above calculations are given
in~\S\ref{s:examples}.
\subsection*{Acknowledgments}
We would like to thank Jean-Yves Welschinger for a discussion
convincing us that all Lagrangian tori in symplectic $4$-manifolds
with $b_2^+=1$ are null-homologous.
\section{The discriminant and Lagrangian cobordisms} \setcounter{thm}{0}
\label{s:disc-lcob}
This section provides the proofs of Theorem~\ref{t:cob} and a
generalization of Corollary~\ref{c:del_1=del_2}.
{In what follows Lagrangian cobordisms $V$ will generally be
assumed to be connected. In contrast, their boundaries $\partial V$
are allowed to have several connected components.}
We begin with:
\begin{proof}[Proof of Theorem~\ref{t:cob}]
Before going into the details of the proof, here is the rationale
behind it. To the Lagrangian cobordism $V$ we can associate a
(relative) quantum homology $QH(V, \partial V)$ which has a quantum
product. The quantum product on $QH(V, \partial V)$ is related to
the quantum products for the ends of $V$ via a quantum connectant
$\delta: QH(V, \partial V) \longrightarrow QH(\partial V) =
\oplus_{i=1}^r QH(L_i)$. This makes it possible to find relations
between the products on the quantum homologies $QH(L_i)$ of
different ends of $V$ and the quantum product on $QH(V, \partial
V)$. In particular this gives the desired relation between the
discriminants of the different ends.
We now turn to the details of the proof. We will use here several
versions of the pearl complex and its homology (also called
Lagrangian quantum homology) both for Lagrangian cobordisms as well
as for their ends. We refer the reader to~\cite{Bi-Co:Yasha-fest,
Bi-Co:rigidity, Bi-Co:lagtop} for the foundations of the theory
in the case of closed Lagrangians and to~\S5 of~\cite{Bi-Co:cob1}
in the case of cobordisms.
Throughout this proof we will work with $\mathbb{Q}$ as the base
field and with $\Lambda = \mathbb{Q}[t^{-1}, t]$ or $\Lambda^{+} =
\mathbb{Q}[t]$ as coefficient rings. We denote by $\mathcal{C}$ and
$\mathcal{C}^+$ the pearl complexes with coefficients in $\Lambda$
and $\Lambda^+$ respectively, and by $QH$ and $Q^+H$ their
homologies. The latter is sometimes called the positive Lagrangian
quantum homology.
Before we go on, a small remark regarding the coefficients is in
order. Throughout this proof we grade the variable $t \in \Lambda$
as $|t|=-N_V$. This is the standard grading for $QH(V)$ and $QH(V,
\partial V)$ and their positive versions. We use the same
coefficient rings (and grading) also for $QH(L_i)$ and its positive
version. This is possible since $N_V | N_{L_i}$, hence our ring
$\Lambda^+$ is an extension of the corresponding ring in which the
degree of $t$ is $-N_{L_i}$.
Recall that (for any Lagrangian submanifold) the positive quantum
homology $Q^+H$ admits a natural map $Q^+H \longrightarrow QH$
induced by the inclusion $\mathcal{C}^+ \longrightarrow
\mathcal{C}$. Again, for degree reasons the induced map in homology
is an isomorphism in degree $0$ and surjective in degree $1$:
\begin{equation} \label{eq:Q^+H-QH} Q^+H_0 \xrightarrow{\; \; \cong
\; \;} QH_0, \quad Q^+H_1 \twoheadrightarrow QH_1.
\end{equation}
In fact, the last map is an isomorphism whenever the minimal Maslov
number is $> 2$. We also have $Q^+H_n(K) \cong H_n(K)$ for every
$n$-dimensional Lagrangian submanifold $K$.
Coming back to the proof of the theorem, we first claim there is a
commutative diagram
\begin{equation} \label{eq:diag-QH-H}
\begin{CD}
Q^+H_1(V) @>j_Q>>Q^+H_1(V, \partial V) @ > \delta >>
Q^+H_0(\partial V) @>i_Q>> Q^+H_0(V) \\
@V s VV @V s VV @V s VV @V s VV \\
H_1(V) @>j>> H_1(V, \partial V) @ > \partial >> H_0(\partial
V) @>i>> H_0(V)\\
@. @VVV @VVV @. \\
@. 0 @. 0 @.
\end{CD}
\end{equation}
with exact rows and columns. The second row of the diagram is the
classical homology sequence for the pair $(V, \partial V)$ with
$\partial$ being the connecting homomorphism (we use $\mathbb{Q}$
coefficients here). The first row is its quantum homology analogue,
and we remark that the quantum connectant $\delta$ is
multiplicative with respect to the quantum product (see~\S5
of~\cite{Bi-Co:cob1} and~\cite{BSing:msc}). The vertical maps $s$
come from the following general exact sequence of chain complexes:
\begin{equation} \label{eq:ses-s} 0 \longrightarrow t \mathcal{C}^+
\xrightarrow{\;\, \iota \;\,} \mathcal{C}^+ \xrightarrow{\;\, s
\; \, } CM \longrightarrow 0,
\end{equation}
where $CM$ stands for the Morse complex (defined using the same
Morse function and metric as used for the pearl complex, but with
coefficients in $\mathbb{Q}$ rather than $\Lambda^+$). The second
map in this exact sequence, $s: \mathcal{C}^+ \longrightarrow CM$,
is induced by $t \mapsto 0$ (i.e. it sends a pearly chain to its
classical part, omitting the $t$'s), and $\iota$ stands for the
inclusion. We now explain why the two middle $s$ maps
in~\eqref{eq:diag-QH-H} are surjective. We start with the third $s$
map (i.e. the one before the rightmost $s$). We have:
\begin{equation} \label{eq:H_0(del-V)}
H_0(\partial V) = \bigoplus_{i=1}^r H_0(L_i), \quad
Q^+H_0(\partial V) = \bigoplus_{i=1}^r Q^+H_0(L_i).
\end{equation}
Next, note that the composition of $s: Q^+H_0(L_i) \longrightarrow
H_0(L_i)$ with the inclusion $H_0(L_i) \subset H_0(L_i; \Lambda^+)$
coincides with the augmentation $\widetilde{\epsilon}_{L_i} :
Q^+H_0(L_i) \longrightarrow H_0(L_i; \Lambda^+)$. The fact that $s$
is surjective now follows easily from~\S\ref{sbsb:Q=QH_n}
and~\eqref{eq:Q^+H-QH}.
The surjectivity of the second $s$ map from the left requires a
different argument. Consider the chain complex $\mathcal{D}_* = (t
\mathcal{C}^+)_*$, viewed as a subcomplex of $\mathcal{C}^+$. In
view of the exact sequence~\eqref{eq:ses-s} the surjectivity of the
second $s$ map from the left in~\eqref{eq:diag-QH-H} would follow if
we show that $H_0(\mathcal{D})=0$. To this end consider the
following filtration $\mathcal{F}_{\bullet}\mathcal{D}$ of
$\mathcal{D}$ by subcomplexes, defined by:
\begin{align*}
& \mathcal{F}_m \mathcal{D} := t^{-m} \mathcal{D} = t^{-m+1}
\mathcal{C}^+ \quad \forall \, m \leq 0, \\
& \mathcal{F}_k \mathcal{D} := \mathcal{D} \quad \forall \, k
\geq 0.
\end{align*}
Note that this filtration is very similar to the one described
in~\S\ref{sb:spec-seq} only that here it is applied to the complex
$\mathcal{D}$ rather than to $\mathcal{C}$.
A simple calculation (similar to the one in~\S\ref{sb:spec-seq})
shows that the first page of the spectral sequence associated to
this filtration satisfies:
\begin{align*}
& E^1_{p,q} \cong t^{-p+1} H_{{\scriptscriptstyle
p+q+N_V-pN_V}}(V, \partial V) \quad
\forall \, p \leq 0, \\
& E^1_{p,q} = 0 \quad \forall \, p \geq 1.
\end{align*}
It follows from the assumption of the theorem that for all $p,q$
with $p+q=0$ we have $E^1_{p,q}=0$, hence also
$E^{\infty}_{p,q}=0$. Since this spectral sequence converges to
$H_*(\mathcal{D})$ this implies that $H_0(\mathcal{D})=0$. This
completes the proof of the surjectivity of the second $s$ map from
the left in~\eqref{eq:diag-QH-H}.
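For the reader's convenience, here is the index bookkeeping behind the vanishing just used (assuming, as in the hypotheses of Theorem~\ref{t:cob}, that $H_{jN_V}(V, \partial V) = 0$ for every $j$):

```latex
% For p \le 0, the E^1-term on the diagonal p+q=0 sits in degree
%   p + q + N_V - pN_V = (1-p)N_V = jN_V,  with  j = 1-p \ge 1,
% so the hypothesis on H_*(V, \partial V) gives:
\[
E^1_{p,q} \cong t^{-p+1} H_{(1-p)N_V}(V, \partial V) = 0
\qquad \text{whenever } p+q=0, \ p \le 0,
\]
% while for p \ge 1 one has E^1_{p,q} = 0 by definition.
```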
We proceed now with the proof of the theorem, based on the
diagram~\eqref{eq:diag-QH-H} and its properties. We first remark
that due to the assumptions of the theorem the number of ends of
$V$ must be $r \geq 2$. Indeed, by the results of~\cite{Bi-Co:cob1}
if a Lagrangian submanifold $L_1$ is Lagrangian null-cobordant
(i.e. there exists a monotone Lagrangian cobordism $V$ with only
one end being $L_1$) then $HF(L_1,L_1)=0$, in contrast with the
assumption that $L_1$ satisfies condition $(3)$ of
Assumption~$\mathscr{L}$. We therefore assume from now on that $r
\geq 2$.
Denote by $p_i \in H_0(L_i) \subset H_0(\partial V)$ the class
corresponding to a point in $L_i$. Let $\alpha_2, \ldots, \alpha_r
\in H_1(V, \partial V)$ be classes with $\partial \alpha_i = p_1 -
p_i$. Choose lifts $\overline{p}_i \in Q^+H_0(\partial V)$ of the
$p_i$'s under the map $s$ as well as lifts $\overline{\alpha}_2,
\ldots, \overline{\alpha}_r \in Q^+H_1(V,\partial V)$ of $\alpha_2,
\ldots, \alpha_r$. Denote by $e_V \in Q^+H_{n+1}(V, \partial V)$
the unity and by $e_{L_i} \in Q^+H_n(L_i)$ the unities
corresponding to the $L_i$'s. Note that $\delta(e_V) = e_{L_1} +
\cdots + e_{L_r}$. Finally, put $\nu = n / N_V$. (Recall that
$N_{L_i} | n$ by assumption, and since $N_V | N_{L_i}$ we have $N_V
| n$.) Since the Lagrangians $L_i$ satisfy conditions~(1)~--~(3) of
Assumption~$\mathscr{L}$ and in view of~\S\ref{sbsb:Q=QH_n}, we
have:
$$Q^+H_0(\partial V)
\cong QH_0(\partial V) = \mathbb{Q} \overline{p}_1 \oplus \cdots
\oplus \mathbb{Q} \overline{p}_r \oplus \mathbb{Q} e_{L_1} t^{\nu}
\oplus \cdots \oplus \mathbb{Q} e_{L_r} t^{\nu}.$$
\begin{prop} \label{p:basis-delta} $\dim_{\mathbb{Q}}
(\textnormal{image\,} \delta) = r$. Moreover, for every choice
of $\alpha_i$'s and $\overline{\alpha}_i$'s the elements
$$\delta(\overline{\alpha}_2), \ldots, \delta(\overline{\alpha}_r),
(e_{L_1} + \cdots + e_{L_r}) t^{\nu}$$ form a basis (over
$\mathbb{Q}$) of the vector space $\textnormal{image\,}{\delta}
\subset QH_0(\partial V)$.
\end{prop}
We defer the proof of this proposition and continue with the proof
of the theorem.
Denote by $\mathcal{B} \subset Q^+H_1(V, \partial V)$ the kernel of
$\delta : Q^+H_1(V, \partial V) \longrightarrow Q^+H_0(\partial
V)$. By Proposition~\ref{p:basis-delta} the elements
$$\overline{\alpha}_2, \ldots, \overline{\alpha}_r, e_V t^{\nu}$$
induce a basis for the vector space $Q^+H_1(V, \partial V) /
\mathcal{B}$.
We now continue by proving that $\Delta_{L_1} = \Delta_{L_2}$. The
other equalities follow by the same recipe. Using the preceding
basis we can write:
\begin{equation} \label{eq:a2*a2-delta-1}
\begin{aligned} & \overline{\alpha}_2 *
\overline{\alpha}_2 = \sum_{j=2}^r \xi_j \overline{\alpha}_j
t^{\nu} + Bt^{\nu} + \rho e_V t^{2 \nu}, \\
& \delta(\overline{\alpha}_2) = \overline{p}_1 - \overline{p}_2 +
\sum_{k=1}^r a_k e_{L_k} t^{\nu},
\end{aligned}
\end{equation}
for some $\xi_j, a_k, \rho \in \mathbb{Q}$ and $B \in \mathcal{B}$.
For the first equality we have used the fact that
$\overline{\alpha}_2 * \overline{\alpha}_2 \in Q^+H_{1-n}(V,
\partial V) \cong t^{\nu} Q^+H_1(V, \partial V)$.
We will also need a similar equality to the second one
in~\eqref{eq:a2*a2-delta-1}, but for $\delta(\overline{\alpha}_i)$:
\begin{equation} \label{eq:delta-alpha-i}
\delta(\overline{\alpha}_i) = \overline{p}_1 - \overline{p}_i +
\sum_{k=1}^r a_k^{(i)} e_{L_k} t^{\nu}, \quad \forall \, 2 \leq
i \leq r,
\end{equation}
where $a_k^{(i)} \in \mathbb{Q}$. (Note that according to our
notation $a_k = a_k^{(2)}$.)
At this point we need to separate the arguments to the cases $r
\geq 3$ and $r=2$. (As we have already remarked, $r=1$ is
impossible under the assumptions of the theorem.) We assume first
that $r \geq 3$. The case $r=2$ will be treated after that.
We now perform a small change in the basis and in the choice of the
lifts $\overline{p}_i$, as follows:
\begin{align*}
& \overline{\alpha}_2 \longrightarrow \overline{\alpha}_2 - a_3
e_V t^{\nu}, \quad \overline{\alpha}_i \longrightarrow
\overline{\alpha}_i \quad
\forall i \geq 3, \\
& \overline{p}_1 \longrightarrow \overline{p}_1 +
(a_1-a_3)e_{L_1} t^{\nu}, \quad \overline{p}_2 \longrightarrow
\overline{p}_2 - (a_2-a_3) e_{L_2} t^{\nu}, \quad \overline{p}_i
\longrightarrow \overline{p}_i \quad \forall i \geq 3.
\end{align*}
To simplify notation we continue to denote the new basis elements
by $\overline{\alpha}_i$ and similarly for the $\overline{p}_i$'s.
By abuse of notation we also continue to denote the new
coefficients $a_k$, $a_k^{(i)}$, $\xi_j$ and $\rho$ resulting from
the basis change by the same symbols, and similarly for the term $B
\in \mathcal{B}$. The outcome of the basis change is that now the
second equality in~\eqref{eq:a2*a2-delta-1} becomes:
\begin{equation} \label{eq:delta-alpha-2-new}
\delta(\overline{\alpha}_2) = \overline{p}_1 - \overline{p}_2 +
\sum_{k=4}^r a_k e_{L_k} t^{\nu}.
\end{equation}
(Of course, if $r=3$ then the third term in the last equation is
void.) We now use the fact that $\delta$ is multiplicative
(see~\cite{Bi-Co:cob1}):
\begin{equation} \label{eq:delta-alpha-2*alpha-2}
\delta(\overline{\alpha}_2 * \overline{\alpha}_2) =
\delta(\overline{\alpha}_2) * \delta(\overline{\alpha}_2) =
\overline{p}_1^{*2} + \overline{p}_2^{*2} + \sum_{k=4}^r a_k^2
e_{L_k} t^{2 \nu}.
\end{equation}
We now express $\overline{p}_1^{*2} \in Q^+H_{-n}(L_1) \cong
t^{\nu}Q^+H_{0}(L_1)$ in terms of the basis $\{ \overline{p}_1
t^\nu, e_{L_1}t^{2\nu}\}$ and similarly for $\overline{p}_2^{*2}$:
$$\overline{p}_1^{*2} = \sigma_1 \overline{p}_1 t^{\nu} +
\tau_1 e_{L_1}t^{2\nu}, \qquad \overline{p}_2^{*2} = \sigma_2
\overline{p}_2 t^{\nu} + \tau_2 e_{L_2}t^{2\nu},$$ where $\sigma_1,
\sigma_2 \in \mathbb{Q}$ and $\tau_1, \tau_2 \in \mathbb{Q}$. (In
fact, by choosing the $\alpha_i$'s, $\overline{\alpha}_i$'s and
$\overline{p}_i$'s carefully, over $\mathbb{Z}$, the coefficients
$\sigma_1, \sigma_2, \tau_1, \tau_2$ will in fact be in
$\mathbb{Z}$, but we will not need that.) Substituting this
into~\eqref{eq:delta-alpha-2*alpha-2} we obtain:
\begin{equation} \label{eq:delta-alpha-2*alpha-2-II}
\delta(\overline{\alpha}_2 * \overline{\alpha}_2) = \sigma_1
\overline{p}_1 t^{\nu} + \sigma_2 \overline{p}_2 t^{\nu} +
\tau_1 e_{L_1}t^{2\nu} + \tau_2 e_{L_2}t^{2\nu} + \sum_{k=4}^r
a_k^2 e_{L_k} t^{2 \nu}.
\end{equation}
Applying $\delta$ to the first equality in~\eqref{eq:a2*a2-delta-1}
and using~\eqref{eq:delta-alpha-2-new}
and~\eqref{eq:delta-alpha-2*alpha-2-II} we obtain:
\begin{align*}
& \xi_2\Bigl(\overline{p}_1 - \overline{p}_2 + \sum_{k=4}^r a_k
e_{L_k} t^{\nu}\Bigr) t^{\nu} + \sum_{i=3}^r \xi_i
\Bigl(\overline{p}_1 - \overline{p}_i + \sum_{q=1}^r a_q^{(i)}
e_{L_q} t^{\nu}\Bigr) t^{\nu} + \rho (e_{L_1} + \cdots +
e_{L_r})t^{2 \nu} \\
& = \sigma_1 \overline{p}_1 t^{\nu} + \sigma_2 \overline{p}_2
t^{\nu} + \tau_1 e_{L_1}t^{2\nu} + \tau_2 e_{L_2}t^{2\nu} +
\sum_{k=4}^r a_k^2 e_{L_k} t^{2 \nu}.
\end{align*}
Comparing the coefficients of $\overline{p}_3, \ldots,
\overline{p}_r$ we deduce that $\xi_3 = \cdots = \xi_r = 0$. The
last equation thus becomes:
\begin{equation} \label{eq:coeffs-xi=0}
\begin{aligned}
& \xi_2\Bigl(\overline{p}_1 - \overline{p}_2 + \sum_{k=4}^r
a_k e_{L_k} t^{\nu}\Bigr) t^{\nu} + \rho (e_{L_1} + \cdots +
e_{L_r})t^{2 \nu} \\
& = \sigma_1 \overline{p}_1 t^{\nu} + \sigma_2 \overline{p}_2
t^{\nu} + \tau_1 e_{L_1}t^{2\nu} + \tau_2 e_{L_2}t^{2\nu} +
\sum_{k=4}^r a_k^2 e_{L_k} t^{2 \nu}.
\end{aligned}
\end{equation}
Comparing the coefficients of $e_{L_3}$ on both sides
of~\eqref{eq:coeffs-xi=0} (recall that $r \geq 3$) we deduce that
$\rho=0$. It easily follows now that $\tau_1 = \tau_2 = 0$ and that
$\sigma_1 = \xi_2 = -\sigma_2$. By the definition of the
discriminant it follows that
$$\Delta_{L_1} = \sigma_1^2 = \sigma_2^2 = \Delta_{L_2}.$$
Note that the relation between our $\sigma_i$'s and $\tau_i$'s and
the notation used in~\S\ref{sb:discr-intro} and
in~\S\ref{sbsb:Q=QH_n} is $\sigma_1 = \sigma_1(p_1,
\overline{p}_1)$, $\sigma_2 = \sigma_2(p_2, \overline{p}_2)$ and
similarly for $\tau_1, \tau_2$. Finally we remark that since
$\Delta_{L_1} = \sigma_1^2 \in \mathbb{Z}$ we must have $\sigma_1
\in \mathbb{Z}$, hence $\Delta_{L_1}$ is a perfect square.
We now turn to the case $r=2$. In that case we can
write~\eqref{eq:a2*a2-delta-1} as
\begin{equation} \label{eq:a2*a2-delta-2}
\begin{aligned} & \overline{\alpha}_2 *
\overline{\alpha}_2 = \xi \overline{\alpha}_2
t^{\nu} + Bt^{\nu} + \rho e_V t^{2 \nu}, \\
& \delta(\overline{\alpha}_2) = \overline{p}_1 - \overline{p}_2 +
a_1 e_{L_1} t^{\nu} + a_2 e_{L_2} t^{\nu}.
\end{aligned}
\end{equation}
By an obvious basis change (among $\overline{p}_1$,
$\overline{p}_2$) we may assume that $a_1 = a_2 = 0$. Then the
identity $\delta(\overline{\alpha}_2 * \overline{\alpha}_2) =
\delta(\overline{\alpha}_2) * \delta(\overline{\alpha}_2)$ becomes:
$$\xi(\overline{p}_1 - \overline{p}_2)t^{\nu} +
\rho(e_{L_1} + e_{L_2})t^{2 \nu} = \sigma_1 \overline{p}_1 t^{\nu}
+ \sigma_2 \overline{p}_2 t^{\nu} + \tau_1 e_{L_1} t^{2\nu} +
\tau_2 e_{L_2} t^{2\nu}.$$ It follows immediately that
$\sigma_1=-\sigma_2$ and $\tau_1 = \tau_2$. Consequently
$\Delta_{L_1} = \Delta_{L_2}$.
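Explicitly, comparing the coefficients of $\overline{p}_1 t^{\nu}$, $\overline{p}_2 t^{\nu}$, $e_{L_1}t^{2\nu}$ and $e_{L_2}t^{2\nu}$ in the last identity, and using the expression $\Delta_{L_i} = \sigma_i^2 + 4\tau_i$ of the discriminant through these structure constants:

```latex
\[
\sigma_1 = \xi, \quad \sigma_2 = -\xi, \quad \tau_1 = \rho = \tau_2,
\qquad \text{hence} \qquad
\Delta_{L_1} = \sigma_1^2 + 4\tau_1 = \xi^2 + 4\rho
             = \sigma_2^2 + 4\tau_2 = \Delta_{L_2}.
\]
```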
To complete the proof of the theorem it remains to prove
Proposition~\ref{p:basis-delta}. For this purpose we will need the
following Lemma.
\begin{lem} \label{l:div-t} Let $j \geq 0$ and consider the
connecting homomorphism $$\delta: Q^+H_{1+jN_V}(V, \partial V)
\longrightarrow Q^+H_{jN_V}(\partial V).$$ Let $\eta \in
Q^+H_{1+jN_V}(V,
\partial V)$ and assume that $\delta(\eta)$ is divisible by $t$.
Then $\eta$ is also divisible by $t$.
\end{lem}
\begin{proof}[Proof of the lemma]
The connecting homomorphism $\delta$ is part of the following diagram:
\begin{equation} \label{eq:diag-QH-H-2}
\begin{CD}
@. Q^+H_{1+jN_V}(V, \partial V) @ > \delta >>
Q^+H_{jN_V}(\partial V) \\
@. @V s VV @V s VV \\
H_{1+jN_V}(V) @>j>> H_{1+jN_V}(V, \partial V) @ > \partial >>
H_{jN_V}(\partial
V) \\
\end{CD}
\end{equation}
where the vertical $s$-maps are induced by~\eqref{eq:ses-s}.
Since $\delta(\eta)$ is divisible by $t$ we have $s
(\delta(\eta))=0$ hence $\partial (s(\eta))=0$. By assumption
$H_{1+jN_V}(V)=0$ hence the bottom map $\partial$ is injective,
and therefore we have $s(\eta)=0$. Looking again
at~\eqref{eq:ses-s} it follows that $$\eta \in
\textnormal{image\,} \Bigl( H_{1+jN_V}(t\mathcal{C}^+)
\xrightarrow{\;\; \iota_* \;\;} Q^+H_{1+jN_V}(V,
\partial V) \Bigr),$$ where $\mathcal{C}^+$ stands for the
positive pearl complex of $(V,
\partial V)$. But $$H_{1+jN_V}(t \mathcal{C}^+) \cong
tQ^+H_{1+(j+1)N_V}(V,\partial V)$$ via an isomorphism for which
$\iota_*$ becomes the inclusion $$tQ^+H_{1+(j+1)N_V}(V,\partial
V) \subset Q^+H_{1+jN_V}(V, \partial V).$$ This proves that
$\eta$ is divisible by $t$.
\end{proof}
We are finally in a position to prove the preceding proposition.
\begin{proof}[Proof of Proposition~\ref{p:basis-delta}]
Note that $$\{\overline{p}_1, \delta(\overline{\alpha}_2),
\ldots, \delta(\overline{\alpha}_r), \delta(e_V)t^{\nu}, e_{L_2}
t^{\nu}, \ldots, e_{L_r} t^{\nu}\}$$ is a basis for
$Q^+H_0(\partial V)$ (recall that $\delta(e_V) = e_{L_1} +
\cdots + e_{L_r}$). Therefore it is enough to show that the
subspace of $Q^+H_0(\partial V)$ generated by $\overline{p}_1,
e_{L_2} t^{\nu}, \ldots, e_{L_r} t^{\nu}$ has trivial
intersection with $\textnormal{image\,}(\delta)$.
Let $\gamma = c \overline{p}_1 + \sum_{j=2}^r b_j e_{L_j}
t^{\nu}$, where $c, b_j \in \mathbb{Q}$ and assume that $\gamma
= \delta(\beta)$ for some $\beta \in Q^+H_1(V, \partial V)$. We
have $s(\gamma) = c p_1$, where the map $s$ is the third
vertical map from diagram~\eqref{eq:diag-QH-H}. It follows from
that diagram that $\partial (s(\beta))= c p_1$. But this is
possible only if $c=0$ since $p_1 \not\in
\textnormal{image\,}(\partial)$.
Thus $\gamma = \sum_{j=2}^r b_j e_{L_j} t^{\nu}$ and we have to
show that $\gamma=0$. Recall that $\gamma = \delta(\beta)$. We
claim that $\beta$ is divisible by $t^{\nu}$, i.e. there exists
$\beta' \in Q^+H_{n+1}(V,\partial V)$ such that $\beta = t^{\nu}
\beta'$. To prove this we first note that $\gamma$ is divisible
by $t$. By Lemma~\ref{l:div-t}, $\beta$ is also divisible by
$t$. Thus there exists $\beta_1 \in Q^+H_{1+N_V}(V, \partial V)$
with $\beta = t \beta_1$. In particular $\delta(\beta_1) =
\sum_{j=2}^r b_j e_{L_j} t^{\nu-1}$. Continuing by induction, using
Lemma~\ref{l:div-t} repeatedly, we obtain elements $\beta_j \in
Q^+H_{1+jN_V}(V, \partial V)$ with $t \beta_{j+1} = \beta_j$ for
every $1 \leq j \leq \nu-1$. Take $\beta' = \beta_{\nu}$.
It follows that $t^{\nu} \delta(\beta') = \sum_{j=2}^r b_j
e_{L_j} t^{\nu}$ for some $\beta' \in Q^+H_{n+1}(V, \partial
V)$. As $Q^+H_{n+1}(V, \partial V) = \mathbb{Q} e_V$ we have
$\beta' = a e_V$ for some $a \in \mathbb{Q}$. But $\delta(e_V) =
e_{L_1} + \cdots +e_{L_r}$ hence $a(e_{L_1} + \cdots +
e_{L_r})t^{\nu} = (\sum_{j=2}^r b_j e_{L_j}) t^{\nu}$. Since by
condition~(3) of Assumption~$\mathscr{L}$ the element $e_{L_1}
\in Q^+H_n(\partial V)$ is not torsion (over $\Lambda^+$), it
follows that $a=0$. Consequently $b_2 = \cdots = b_r =0$ and so
$\gamma = 0$. This concludes the proof of
Proposition~\ref{p:basis-delta}.
\end{proof}
Having proved Proposition~\ref{p:basis-delta}, the proof of
Theorem~\ref{t:cob} is now complete.
\end{proof}
\subsection{Lagrangians intersecting at one point} \label{sb:lag-1-pt}
\setcounter{thm}{0}
We start with a stronger version of Corollary~\ref{c:del_1=del_2}
from~\S\ref{sb:discr-intro}.
\begin{cor} \label{c:L1-L2-1pt} Let $(M, \omega)$ be a monotone
symplectic manifold. Let $L_1, L_2 \subset M$ be two Lagrangian
submanifolds that satisfy conditions $(1)$ -- $(3)$ of Assumption
$\mathscr{L}$ and such that $N_{L_1} = N_{L_2}$. Denote by $N =
N_{L_i}$ their mutual minimal Maslov number and assume further
that:
\begin{enumerate}
\item $H_{1+jN}(L_1)=H_{1+jN}(L_2)=0$ for every $j$;
\item $H_{jN-1}(L_1) = H_{jN-1}(L_2)=0$ for every $j$;
\item either $\pi_1(L_1 \cup L_2) \to \pi_1(M)$ is injective, or
$\pi_1(L_i) \to \pi_1(M)$ is trivial for $i=1,2$.
\end{enumerate}
Finally, suppose that $L_1$ and $L_2$ intersect transversely at
exactly one point. Then $$\Delta_{L_1} = \Delta_{L_2}$$ and
moreover this number is a perfect square.
\end{cor}
Note that if $L_1$, $L_2$ are even-dimensional Lagrangian spheres then
conditions~(1)~--~(3) of Corollary~\ref{c:L1-L2-1pt} are obviously
satisfied, hence Corollary~\ref{c:del_1=del_2} follows from
Corollary~\ref{c:L1-L2-1pt}.
We now turn to the proof of Corollary~\ref{c:L1-L2-1pt}. We will need
the following Proposition.
\begin{prop} \label{p:surgery} Let $L_1, L_2 \subset (M, \omega)$ be
two Lagrangian submanifolds intersecting transversely at one point.
Then there exists a Lagrangian cobordism $V \subset \mathbb{R}^2
\times M$ with three ends, corresponding to $L_1$, $L_2$ and $L_1
\# L_2$ and such that $V$ has the homotopy type of $L_1 \vee L_2$.
If $L_1$ and $L_2$ are monotone with the same minimal Maslov number
$N$ and they satisfy assumption (3) from
Corollary~\ref{c:L1-L2-1pt} then $V$ is also monotone with minimal
Maslov number $N_V = N$. Moreover, if $L_1$ and $L_2$ are spin
then $V$ admits a spin structure that extends those of $L_1$ and
$L_2$.
\end{prop}
Before proving this proposition we show how to deduce
Corollary~\ref{c:L1-L2-1pt} from it.
\begin{proof}[Proof of Corollary~\ref{c:L1-L2-1pt}]
Consider the Lagrangian cobordism provided by
Proposition~\ref{p:surgery}. Since $V$ is homotopy equivalent to
$L_1 \vee L_2$ and the $L_i$ satisfy assumptions~(1) and~(2) of
Corollary~\ref{c:L1-L2-1pt}, a simple calculation shows that
$$H_{jN}(V, \partial V) = 0, \quad H_{1+jN}(V) = 0, \quad \forall j.$$
The result now follows immediately from Theorem~\ref{t:cob}.
\end{proof}
We now turn to the proof of the Proposition.
\begin{proof}[Proof of Proposition~\ref{p:surgery}]
The proof is based on a version of the Polterovich Lagrangian
surgery~\cite{Po:surgery} adapted to the case of
cobordisms~\cite{Bi-Co:cob1}. We briefly outline those parts of
the construction that are relevant here. More details can be found
in~\cite{Bi-Co:cob1}.
Consider two plane curves $\gamma_1, \gamma_2$ as in
Figure~\ref{f:gamma-12}.
\begin{figure}[htbp]
\begin{center}
\epsfig{file=pic-1.eps, width=0.5\linewidth}
\end{center}
\caption{\label{f:gamma-12}}
\end{figure}
Consider the Lagrangian submanifolds $\gamma_1 \times L_1, \gamma_2
\times L_2 \subset \mathbb{R}^2 \times M$. The surgery construction
from~\cite{Bi-Co:cob1} produces a Lagrangian cobordism $V \subset
\mathbb{R}^2 \times M$ with two negative ends, which coincide with the
negative ends of $\gamma_i \times L_i$, and whose positive end
looks like the positive end of $\gamma_3 \times (L_1 \# L_2)$,
where the curve $\gamma_3$ is depicted in Figure~\ref{f:pi-V} and
$L_1 \# L_2$ stands for the Polterovich surgery (in $M$) of $L_1$
and $L_2$ (which coincides with the connected sum of the $L_i$'s
because they intersect transversely at exactly one point). The
projection of $V$ to $\mathbb{R}^2$ is depicted in
Figure~\ref{f:pi-V}.
\begin{figure}[htbp]
\begin{center}
\epsfig{file=pic-2.eps, width=0.85\linewidth}
\end{center}
\caption{\label{f:pi-V}}
\end{figure}
Next we determine the topology of $V$. Consider the curves
$\widetilde{\gamma}_1, \widetilde{\gamma}_2$ (which are extensions
of the $\gamma_i$'s to curves with positive ends as in
Figure~\ref{f:gamma-tilde-12}).
\begin{figure}[htbp]
\begin{center}
\epsfig{file=pic-3.eps, width=0.7\linewidth}
\end{center}
\caption{\label{f:gamma-tilde-12}}
\end{figure}
Consider the Polterovich surgery $W = (\widetilde{\gamma}_1 \times
L_1) \# (\widetilde{\gamma}_2 \times L_2) \subset \mathbb{R}^2
\times M$ (note that the latter two Lagrangians also intersect
transversely at a single point). See Figure~\ref{f:cob-W}.
\begin{figure}[htbp]
\begin{center}
\epsfig{file=pic-4.eps, width=0.85\linewidth}
\end{center}
\caption{\label{f:cob-W}}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\epsfig{file=pic-5.eps, width=0.85\linewidth}
\end{center}
\caption{\label{f:V_0-strip-S}}
\end{figure}
Denote by $\pi: \mathbb{R}^2 \times M \longrightarrow \mathbb{R}^2$
the projection, and by $S \subset \mathbb{R}^2$ the strip depicted
in Figure~\ref{f:V_0-strip-S}. Put $V_0 = W \cap \pi^{-1}(S)$.
According to~\cite{Bi-Co:cob1}, $V_0$ is a manifold with boundary,
with two obvious boundary components corresponding to the $L_i$'s
and a third boundary component which is $W \cap \pi^{-1}(0)$. The
latter is exactly the Polterovich surgery $L_1 \# L_2$. Moreover
$V_0$ is homotopy equivalent to $V$ (in fact $V_0 \subset V$ and is
a deformation retract of $V$). A straightforward calculation shows
that there is an embedding $L_1 \vee L_2 \subset V_0$ and moreover
that $L_1 \vee L_2$ is a deformation retract of $V_0$. (In fact,
one can show that $V_0$ is diffeomorphic to the boundary connected
sum of $[0,1] \times L_1$ and $[0,1] \times L_2$, where the
connected sum occurs among the boundary components $\{1\} \times
L_i$, $i=1,2$.)
The statement on monotonicity follows from the Seifert--Van Kampen
theorem (see also~\cite{Bi-Co:cob1}).
Assume now that $L_1, L_2$ are spin. Then $\widetilde{\gamma}_1
\times L_1$ and $\widetilde{\gamma}_2 \times L_2$ are also spin,
with a spin structure extending those of the ends. Recall that the
connected sum of spin manifolds is also
spin~\cite{La-Mi:spin-geom}. Thus $W = (\widetilde{\gamma}_1 \times
L_1) \# (\widetilde{\gamma}_2 \times L_2)$ is spin too and by
standard arguments it follows that the spin structure on $W$ can be
chosen so that it extends those given on the ends. By restriction
we obtain a spin structure on $V_0 \subset W$ and consequently also
the desired one on $V$.
\end{proof}
\section{The Lagrangian cubic equation} \label{s:lag-cubic-eq} \setcounter{thm}{0}
We begin by proving the following result that generalizes
Theorems~\ref{t:cubic-eq} and~\ref{t:rel-to-discr}.
Theorem~\ref{t:cubic-eq-sphere} will be proved
in~\S\ref{sb:prf-cubic-eq-S} below.
\begin{thm}\label{t:cubic_eq-ccl}
Let $L \subset M$ be a Lagrangian submanifold satisfying
conditions~$(1)$~--~$(3)$ of Assumption~$\mathscr{L}$. Assume in
addition that $[L] \neq 0 \in H_n(M;\mathbb{Q})$. Let $c \in H_n(M;
\mathbb{Z})$ be a class satisfying $\xi := \#(c \cdot [L]) \neq 0$.
Then there exist unique {constants $\sigma_{c,L} \in
\tfrac{1}{\xi^2}\mathbb{Z}$, $\tau_{c,L} \in
\tfrac{1}{\xi^3}\mathbb{Z}$} such that the following equation
holds in $QH(M;R_{\mathbb{Q}}^+)$:
\begin{equation} \label{eq:cubic_eq_ccL} c*c*[L] - \xi \sigma_{c,L}
\, c*[L] q^{n/2} - \xi^2 \tau_{c,L} \, [L] q^{n} = 0.
\end{equation}
The coefficients $\sigma_{c,L}, \tau_{c,L}$ are related to the
discriminant of $L$ by $\Delta_L = \sigma_{c,L}^2 + 4 \tau_{c,L}$.
{If $\xi$ is square-free, then $\sigma_{c,L} \in
\tfrac{1}{\xi} \mathbb{Z}$ and $\tau_{c,L} \in \tfrac{1}{\xi^2}
\mathbb{Z}$.} Moreover, $\sigma_{c,L}$ can be expressed in terms
of genus $0$ Gromov-Witten invariants as follows:
\begin{equation} \label{eq:GW-CCL} \sigma_{c,L} = \frac{1}{\xi^2}
\sum_A GW_{A,3}(c,c,[L]),
\end{equation}
where the sum is taken over all classes $A \in H_2(M)$ with
$\langle c_1, A \rangle = n/2$.
\end{thm}
As we will see shortly, Theorem~\ref{t:cubic-eq} follows immediately
from Theorem~\ref{t:cubic_eq-ccl} by taking $c=[L]$; in the notation of
Theorem~\ref{t:cubic-eq} we have $\sigma_L = \sigma_{[L],L}$ and $\tau_L
= \tau_{[L], L}$. Recall also from Corollary~\ref{c:sig=0} that if $L$
is a Lagrangian sphere then $\sigma_L = 0$ (see also
Theorem~\ref{t:cubic-eq-sphere}, case~\eqref{i:div}). We remark that
in contrast to $\sigma_L$, the constants $\sigma_{c,L}$ {\em might not
vanish} for general $c \neq [L]$. See for example~\S\ref{sbsb:M_2}
for an explicit calculation of the constants $\sigma_{c,L},
\tau_{c,L}$ (for all possible $c$'s) for Lagrangian spheres in the
blow-up of ${\mathbb{C}}P^2$ at two points.
\begin{proof}[Proof of Theorem~\ref{t:cubic_eq-ccl}]
Fix a spin structure on $L$. In view of~\S\ref{sb:HF} we replace
$HF_n(L,L; \mathbb{Q})$ by $QH_n(L; \Lambda_{\mathbb{Q}})$. By
assumption, this is a $2$-dimensional vector space over
$\mathbb{Q}$. Recall also that $QH_0(L; \Lambda_{\mathbb{Q}}) \cong
QH_n(L; \Lambda_{\mathbb{Q}})$. Put $$x := \tfrac{1}{\xi} c * e_L
\in QH_0(L;\Lambda_{\mathbb{Q}}),$$ where $c$ is viewed here as an
element of $QH_n(M;R_{\mathbb{Q}})$ and $*$ is the module operation
mentioned in~\S\ref{sb:HF}. Let $p = [\textnormal{point}] \in
H_0(L;\mathbb{Q})$ be the class of a point. We have
$$\widetilde{\epsilon}_L(x) = \tfrac{1}{\xi} \#(c \cdot [L])p = p.$$
It follows that $\{x, e_L t^{\nu}\}$ is a basis for
$QH_0(L;\Lambda_{\mathbb{Q}})$. Following the recipe
in~\S\ref{sbsb:Q=QH_n} and formula~\eqref{eq:x*x-QH-t} there exist
$\sigma_{c,L}, \tau_{c,L} \in \mathbb{Q}$ such that
\begin{equation} \label{eq:x*x-sig_cL} x*x = \sigma_{c,L} x t^{\nu}
+ \tau_{c,L} e_L t^{2\nu},
\end{equation}
where $*$ stands here for the Lagrangian quantum product on
$QH(L)$.
We now apply the quantum inclusion map $i_L$ (see~\S\ref{sb:HF}) to
both sides of~\eqref{eq:x*x-sig_cL}. We have
$$i_L(x*x) = \tfrac{1}{\xi^2} i_L((c*e_L)*(c*e_L)) =
\tfrac{1}{\xi^2} c*c*i_L(e_L) = \tfrac{1}{\xi^2} c*c*[L].$$ Here we
have used properties of the operations described in~\S\ref{sb:HF},
and in particular identity~\eqref{eq:alg-identity}. We also have
$$i_L(x) = \tfrac{1}{\xi} c*i_L(e_L) = \tfrac{1}{\xi} c*[L].$$
Recall also that we can view $\Lambda$ as a subring of $R =
\mathbb{Z}[q, q^{-1}]$ via the embedding $t \longmapsto q^{N_L/2}$,
so that under this embedding we have $t^{\nu} \longmapsto q^{n/2}$.
Therefore by applying $i_L$ to~\eqref{eq:x*x-sig_cL} we immediately
obtain the equation claimed by the theorem. The statement on
$\Delta_L$ follows at once from~\S\ref{sbsb:Q=QH_n}.
{Next we claim that $\xi^2 \sigma_{c,L}, \xi^3 \tau_{c, L}
\in \mathbb{Z}$ and moreover, if $\xi$ is square-free, then in
fact $\xi \sigma_{c,L}, \xi^2 \tau_{c, L} \in \mathbb{Z}$. To
this end we will denote $\Lambda$ by $\Lambda_{\mathbb{Z}}$ to
emphasize that the ground ring is $\mathbb{Z}$. To prove the
claim, set $y := \xi x$ and note that $y \in QH_0(L;
\Lambda_{\mathbb{Z}})$. Multiplying~\eqref{eq:x*x-sig_cL} by $\xi^2$ we
obtain the following equation for $y$ in $QH_{-n}(L; \Lambda_{\mathbb{Z}})$:
\begin{equation} \label{eq:y*y-sig_cL} y * y = \xi \sigma_{c,L} y
t^{\nu} + \xi^2 \tau_{c,L} e_L t^{2\nu}.
\end{equation}
We apply the augmentation morphism $\epsilon_L :
QH(L;\Lambda_{\mathbb{Z}}) \longrightarrow \Lambda_{\mathbb{Z}}$
and obtain
$$\epsilon_L(y * y) =
\xi \sigma_{c,L} \epsilon_L (y) t^{\nu} = \xi^2 \sigma_{c,L}
t^{\nu}. $$ Since the left-hand side lies in $\Lambda_{\mathbb{Z}}$
it follows that $\xi^2 \sigma_{c,L} \in \mathbb{Z}$. Multiplying
equation~\eqref{eq:y*y-sig_cL} with $\xi$ we see that $\xi^3
\tau_{c,L} \in \mathbb{Z}$. We now write $\sigma_{c,L} = u/\xi^2$
and $\tau_{c,L} = v/ \xi^3$ with $u, v \in \mathbb{Z}$. The
discriminant is then
$$ \Delta_L = \frac{u^2}{\xi^4} + 4 \frac{v}{\xi^3} \in \mathbb{Z} $$
and thus we have $\xi^4 \Delta_L = u^2 + 4 \xi v$. Since $\xi
\,|\, (u^2 + 4 \xi v)$ it follows that $\xi \,|\, u^2$. If $\xi$
is square-free then $\xi \,|\, u$ and hence $\xi \sigma_{c,L} = u/
\xi \in \mathbb{Z}$. Now using equation~\eqref{eq:y*y-sig_cL} we
see that $y*y - \xi \sigma_{c,L} y t^{\nu} \in QH_{-n}(L;
\Lambda_{\mathbb{Z}})$ and therefore $\xi^2 \tau_{c,L} \in
\mathbb{Z}$. }
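As an aside, the elementary number-theoretic step used above — for square-free $\xi$, $\xi \,|\, u^2$ implies $\xi \,|\, u$ — is easy to confirm by a brute-force computation. The following sketch (with ad-hoc names of our own choosing) is not part of the proof; it simply verifies the implication over a small range of integers.

```python
def is_square_free(n: int) -> bool:
    # n is square-free iff no prime square (hence no square > 1) divides it
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

# For square-free xi, xi | u^2 forces xi | u.
for xi in (x for x in range(1, 200) if is_square_free(x)):
    for u in range(1, 400):
        if (u * u) % xi == 0:
            assert u % xi == 0, (xi, u)

# Bookkeeping of the discriminant: Delta = u^2/xi^4 + 4v/xi^3 gives
# xi^4 * Delta = u^2 + 4*xi*v, so xi divides (xi^4 * Delta - u^2) always.
xi, u, v = 6, 12, 5                    # arbitrary sample integers (ours)
assert (u * u + 4 * xi * v - u * u) % xi == 0
```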
It remains to prove the statement on the relation between
$\sigma_{c,L}$ and the Gromov-Witten invariants. For this purpose we
will need the following Lemma. We denote by $p_M \in H_0(M)$ the
class of a point.
\begin{lem} \label{l:p-class} Let $a, b \in H_*(M)$ be two {\em
classical} elements of pure degree. Then
$$\widetilde{\epsilon}_M(a*b) = \widetilde{\epsilon}_M(a \cdot
b),$$ where $\cdot$ is the classical intersection product. In
particular, the class $p_M$ appears as a summand in $a*b$ if and
only if $|a|+|b| = 2n$ and $a \cdot b \neq 0$.
\end{lem}
We postpone the proof of the Lemma and proceed with the proof of
the theorem.
Denote by $k = C_M$ the minimal Chern number of $M$
(see~\S\ref{sb:monotone}). Write $$c*[L] = c\cdot [L] + \sum_{j\geq
1} \alpha_{2jk}q^{jk},$$ with $\alpha_{2jk} \in H_{2jk}(M)$. (The
choice of the sub-indices was made to reflect the degree in
homology.) Then we have
$$c*c*[L] = \#(c \cdot [L]) c*p_M + \sum_{j \geq 1}
c*\alpha_{2jk} q^{jk},$$ which together
with~\eqref{eq:cubic_eq_ccL} give:
\begin{equation} \label{eq:cubic-alpha} \xi \sigma_{c,L}
c*[L]q^{n/2} + \xi^2 \tau_{c,L} [L]q^n = \#(c \cdot [L]) c*p_M +
\sum_{j \geq 1} c*\alpha_{2jk} q^{jk}.
\end{equation}
Applying $\widetilde{\epsilon}_M$ to~\eqref{eq:cubic-alpha} we
obtain using Lemma~\ref{l:p-class} that
\begin{equation} \label{eq:xi-2=c-alpha} \xi^2 \sigma_{c,L} p_M
q^{n/2} = \widetilde{\epsilon}_M(c \cdot \alpha_n) q^{n/2} = \#
(c \cdot \alpha_n) p_M q^{n/2}.
\end{equation}
By the definition of the quantum product we have:
$$\#(c \cdot \alpha_n) = \sum_{A} GW^M_{A,3}(c,c,[L]),$$ where the sum
goes over $A \in H_2(M)$ with $\langle c_1, A \rangle = n/2$. (Note
that since $n$=even the order of the classes $(c,c,[L])$ in the
Gromov-Witten invariant does not make a difference.) Substituting
this in~\eqref{eq:xi-2=c-alpha} yields the desired identity.
Note that we have carried out the proof above for the quantum homology
$QH(M; R)$ with coefficients in the ring $R = \mathbb{Z}[q^{-1},
q]$ but since $(M, \omega)$ is monotone, it is easy to see that
equation~\eqref{eq:cubic_eq_ccL} involves only positive powers of
$q$ hence it holds in fact in $QH(M;R^+)$, where $R^+ =
\mathbb{Z}[q]$.
To complete the proof of the theorem we still need the following.
\begin{proof}[Proof of Lemma~\ref{l:p-class}]
Write $$a*b = a\cdot b + \sum_{j \geq 1} \gamma_j q^{jk},$$
where $a \cdot b \in H_{|a|+|b|-2n}(M)$ is the classical
intersection product of $a$ and $b$, $k$ is the minimal Chern
number, and $\gamma_j \in H_{|a|+|b|-2n+2jk}(M)$. In order to
prove the lemma we need to show that $\gamma_{j_0} = 0$, where
$2j_0 k = 2n-|a|-|b|$.
Suppose by contradiction that $\gamma_{j_0} \neq 0$. Then there
exists $A \in H_2(M)$ with $$2\langle c_1, A \rangle = 2j_0 k =
2n-|a|-|b|$$ such that $GW_{A,3}(a,b,[M]) \neq 0$, where $[M]
\in H_{2n}(M)$ is the fundamental class. Since $[M]$ poses no
additional incidence conditions on $GW$-invariants, this implies
that for a generic almost complex structure there exists a
pseudo-holomorphic rational curve passing through generic
representatives of the classes $a$ and $b$. More precisely
denote by $\mathcal{M}_{0,2}(A,J)$ the space of simple rational
$J$-holomorphic curves with $2$ marked points in the class $A$.
Denote by $ev: \mathcal{M}_{0,2}(A,J) \longrightarrow M \times
M$ the evaluation map. Since $GW_{A,3}(a,b,[M]) \neq 0$, then
for a generic choice of (pseudo) cycles $D_a, D_b$ representing
$a, b$ and for a generic choice of $J$ the map $ev$ is
transverse to $D_a \times D_b$ and moreover $ev^{-1}(D_a \times
D_b) \neq \emptyset$. However this is impossible because
\begin{align*}
& \dim \mathcal{M}_{0,2}(A,J) + \dim (D_a \times D_b) = \\
& \bigl(2n + 2 \langle c_1, A \rangle -2\bigr) + |a|+|b| = 4n - 2
< \dim (M \times M).
\end{align*}
\end{proof}
The proof of Theorem~\ref{t:cubic_eq-ccl} is now complete.
\end{proof}
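As a quick aside, the dimension count at the end of the proof of Lemma~\ref{l:p-class} can be double-checked mechanically. The sketch below is ours and not part of the argument; \texttt{da}, \texttt{db} stand for $|a|$, $|b|$ and \texttt{c1A2} for $2\langle c_1, A\rangle$.

```python
def moduli_dim(n, c1A2):
    # dim M_{0,2}(A, J) = 2n + 2<c1, A> - 2 (rational curves, 2 marked points)
    return 2 * n + c1A2 - 2

for n in range(1, 25):
    for da in range(0, 2 * n + 1):
        for db in range(0, 2 * n + 1):
            c1A2 = 2 * n - da - db      # the constraint 2<c1, A> = 2n - |a| - |b|
            total = moduli_dim(n, c1A2) + (da + db)
            assert total == 4 * n - 2   # always 2 short of dim(M x M) = 4n
```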
\subsection{Proof of Theorems~\ref{t:cubic-eq}
and~\ref{t:rel-to-discr}} \label{sb:prf-tmain-cubic} \setcounter{thm}{0}
The proof follows immediately from Theorem~\ref{t:cubic_eq-ccl}.
Indeed, since $\#([L] \cdot [L]) = \varepsilon \chi \neq 0$ we can
take $c = [L]$, $\xi = \varepsilon \chi$ in
Theorem~\ref{t:cubic_eq-ccl}. The constants $\sigma_L, \tau_L$ from
Theorem~\ref{t:cubic-eq} are now $\sigma_{[L],L}, \tau_{[L],L}$
respectively, and we have $\Delta_L = \sigma_{[L],L}^2 +
4\tau_{[L],L}$. \hfill \qedsymbol \medskip
\subsection{Proof of Theorem~\ref{t:cubic-eq-sphere}} \setcounter{thm}{0}
\label{sb:prf-cubic-eq-S}
We will prove here the following more general result, {from
which Theorem~\ref{t:cubic-eq-sphere} follows directly.} We call an
element $a \in QH_*(M)$ \emph{classical}, if it lies in the image of
the canonical inclusion $H_*(M) \subset QH_*(M)$.
\begin{thm} \label{t:cubic-eq-sphere-gnrl} Let $S \subset M$ be a
monotone Lagrangian sphere in a closed $2n$-dimensional symplectic
manifold $M$.
\begin{enumerate}
\item \label{ig:n-odd} If $n=$~odd then $[S]*[S]=0$. More
generally, when $n=$~odd, for all $a \in H_n(M)$ with $a \cdot
[S]=0$ we have $a*[S]=0$.
\item \label{ig:n-even} Assume $n=$~even. Then:
\begin{enumerate}[(i)]
\item \label{ig:div} If $C_M | n$ then there exists a unique
$\gamma_S \in \mathbb{Z}$ such that $[S]^{*3} = \gamma_S [S]
q^{n}$. If we assume in addition that $2C_M \centernot| n$,
then $\gamma_S$ is divisible by $4$. Moreover for every (not
necessarily classical) element $b \in QH_0(M)$ there exists a
unique $\eta_b \in \mathbb{Z}$ such that we have $b*[S] =
\eta_b [S]q^n$.
\item \label{ig:ndiv} If $C_M \centernot| n$ then for every
(not necessarily classical) element $b \in QH_0(M)$ we have
$b*[S]=0$. In particular, by taking $b = [S]*[S]$ we obtain
$[S]^{*3}=0$.
\end{enumerate}
\end{enumerate}
\end{thm}
\begin{proof}
Fix once and for all a spin structure on $S$. Denote by $e_S \in
QH_n(S;\Lambda)$ the unity.
Note that the case $C_M = \infty$ (i.e. $\omega|_{\pi_2(M)} = 0$)
is trivial. Indeed under such assumptions we have $QH_*(M) \cong
H_*(M)$ via an isomorphism that intertwines the quantum and the
classical intersection products. The statement in~\eqref{ig:n-odd}
follows immediately. The statements
in~\eqref{ig:div},~\eqref{ig:ndiv} follow from the fact that for $b
\in QH_0(M)$ the degree of $b*[S]$ is negative. Thus, from now on
we assume that $C_M < \infty$.
We will also assume throughout the proof that $n > 1$, for
otherwise the statement is again obvious (if $n=1$, then either $M
= S^2$ and $S=$~equator, or $\omega|_{\pi_2(M)} = 0$). Thus we
assume from now on that $\pi_1(S)= 1$, hence $N_S = 2C_M$.
We now appeal to the spectral sequence described
in~\S\ref{sb:spec-seq}. From Theorem~\ref{t:spectral-seq} it
follows that
\begin{equation} \label{eq:QH_i-0-n-mod-2CM} QH_i(S; \Lambda)=0 \quad
\forall \, i \centernot\equiv 0, n (\bmod \,2C_M).
\end{equation}
Moreover, if $2C_M \centernot| n$ then:
\begin{enumerate}
\item either $QH_0(S;\Lambda)=0$, or the augmentation
$\widetilde{\epsilon}_S: QH_0(S;\Lambda) \longrightarrow
H_0(S;\Lambda)$ is an isomorphism.
\item $QH_n(S;\Lambda) = \mathbb{Z} e_S$ (and $e_S$ is not a torsion
element).
\end{enumerate}
{We prove statement~\eqref{ig:n-odd} of the theorem, i.e.
when $n=$~odd.} Let $a \in H_n(M)$ be an element with $a \cdot
[S]=0$. Consider $$y = a * e_S \in QH_0(S;\Lambda).$$ We claim that
$y=0$. Indeed, either $QH_0(S;\Lambda)=0$ in which case $y=0$, or
$\widetilde{\epsilon}_S: QH_0(S; \Lambda) \longrightarrow H_0(S)$
is an isomorphism and then $\widetilde{\epsilon}_S(y) = a \cdot
[S]=0$, hence $y=0$ again.
On the other hand $i_S(y) = a * i_S(e_S) = a * [S]$, which implies
$a*[S]=0$. Note that $[S] \cdot [S] =0$. Therefore, if we take $a =
[S]$ we obtain $[S]*[S]=0$. This completes the proof for the case
$n=$~odd.
We now turn to statement~\eqref{ig:n-even} of the theorem, hence
assume that $n=$~even. We first deal with the case~\eqref{ig:ndiv},
i.e. assume that $C_M \centernot| n$. Let $b \in QH_0(M)$. Put $u
= b*e_S \in QH_{-n}(S;\Lambda)$. By~\eqref{eq:QH_i-0-n-mod-2CM} we
have $QH_{-n}(S;\Lambda)=0$, hence $u=0$. On the other hand $i_S(u)
= b*i_S(e_S) = b*[S]$. This proves the case~\eqref{ig:ndiv}.
To prove~\eqref{ig:div}, assume that $C_M | n$. We will first
assume that $2C_M \centernot| n$. Let $b \in QH_0(M)$ and put $w =
b * e_S \in QH_{-n}(S;\Lambda)$. By the discussion above we have
$$QH_{-n}(S;\Lambda) = QH_n(S;\Lambda) t^{n/C_M} = \mathbb{Z} e_S
t^{n/C_M}.$$ It follows that $w = \eta_b e_S t^{n/C_M}$ for some
$\eta_b \in \mathbb{Z}$. Applying $i_S$ to $w$ we get $$\eta_b [S]
q^{n} = b*i_S(e_S) = b*[S].$$ As before we can take $b = [S]*[S]$
and obtain $[S]^{*3} = \gamma_S [S]q^n$, where $\gamma_S =
\eta_{\scriptscriptstyle [S]*[S]} \in \mathbb{Z}$.
To complete the proof of point~\eqref{ig:div} of the theorem in the
case $2C_M \centernot| n$, it remains to show that $4 | \gamma_S$.
To this end put $z = [S]*e_S \in QH_0(S; \Lambda)$. Note that
$\widetilde{\epsilon}_S(z) = \#([S]\cdot[S])p = \pm 2p$, where $p \in
QH_0(S)$ is the class of a point. Since $\widetilde{\epsilon}_S$ is
an isomorphism it follows that $z$ is divisible by $2$ in
$QH_0(S;\Lambda)$ (this does not necessarily hold if $2 C_M | n$).
In particular $z*z \in QH_{-n}(S;\Lambda)$ is
divisible by $4$. At the same time by the theory recalled
in~\S\ref{sb:HF} we also have
$$z*z = ([S]*e_S)*([S]*e_S) = ([S]*([S]*e_S))*e_S = ([S]*[S])*e_S,$$
hence $i_S(z*z) = [S]^{*3}$. It follows that $[S]^{*3}$ is
divisible by $4$. But $[S]^{*3} = \gamma_S [S]q^n$ and $[S]$ is
neither torsion nor divisible by any integer $\geq 2$.
Consequently, $\gamma_S$ is divisible by $4$. This completes the
proof of point~\eqref{ig:div} of the theorem under the assumption
that $2C_M \centernot| n$.
Finally, it remains to treat the other case at point~\eqref{ig:div}
of the theorem, i.e. $n=$~even and $2C_M | n$. It is easy to see
that $S$ satisfies condition~$\mathscr{L}$ (e.g. by using
Proposition~\ref{p:criterion}). Therefore this case is completely
covered by Theorem~\ref{t:cubic-eq} (which has already been proved)
together with Corollary~\ref{c:sig=0} and the short discussion
after its statement.
\end{proof}
\subsection{Further results} \setcounter{thm}{0} \label{sb:further-res}
We present here a few other results that follow from the same ideas as
in the proofs of Theorems~\ref{t:cubic_eq-ccl}
and~\ref{t:cubic-eq-sphere-gnrl}.
\begin{thm} \label{t:L_1L_2} Let $L_1, L_2 \subset M$ be two
Lagrangian submanifolds satisfying conditions~$(1)$~--~$(3)$ of
Assumption~$\mathscr{L}$ (possibly with different minimal Maslov
numbers). Assume that $[L_1] \cdot [L_2] = 0$. Then one of the
following two (non-exclusive) possibilities occurs:
\begin{enumerate}
\item either $[L_1]$ and $[L_2]$ are proportional in $H_n(M;
\mathbb{Q})$ and moreover we have the relation $[L_1]*[L_1] =
\kappa [L_1]q^{n/2}$ in $QH(M; R^+_{\mathbb{Q}})$ for some
$\kappa \in \mathbb{Z}$;
\item or $[L_1]*[L_2] = 0$.
\end{enumerate}
\end{thm}
\begin{remnonum}
Note that if possibility~(1) occurs in the theorem and moreover
$N_{L_1} = N_{L_2} = 2$, then $\lambda_{L_1} = \lambda_{L_2}$. This
is so because by the theorem $[L_1]$ and $[L_2]$ are proportional
and $[L_i]$ is an eigenvector of the operator $P$ with eigenvalue
$\lambda_{L_i}$ (see~\S\ref{sb:eigneval}).
\end{remnonum}
Here is a simple example of Lagrangians $L_1, L_2$ satisfying the
conditions of Theorem~\ref{t:L_1L_2}. We take $M$ to be the monotone
blow-up of ${\mathbb{C}}P^2$ at $3$ points and $L_1, L_2$ Lagrangian
spheres in the classes $[L_1] = H - E_1-E_2-E_3$, $[L_2] = E_2-E_3$
(using the notation of~\S\ref{sbsb:intro-exp-blcp2}).
See~\S\ref{sb:lag-spheres-blow-ups} for more details on how to
actually construct these spheres. Clearly $[L_1]\cdot [L_2]=0$, hence
the theorem implies that $[L_1]*[L_2]=0$ (which can of course be
confirmed also by direct calculation). One can construct many other
examples of this type in monotone blow-ups of ${\mathbb{C}}P^2$ at $3
\leq k \leq 8$ points.
On the other hand, if $L \subset M$ is a Lagrangian satisfying
conditions~$(1)$~--~$(3)$ of Assumption~$\mathscr{L}$ and we assume in
addition that $\chi(L) = 0$ then we can take $L = L_1 = L_2$.
Theorem~\ref{t:L_1L_2} then implies that $[L]*[L] = [L]\kappa q^{n/2}$
for some $\kappa \in \mathbb{Z}$. The simplest example should be when
$L$ is a $2$-torus, however we are not aware of any example of a
monotone Lagrangian $2$-torus satisfying conditions~$(1)$~--~$(3)$ of
Assumption~$\mathscr{L}$ and with $[L] \neq 0$. An easy (algebraic)
argument shows that such tori cannot exist in a symplectic
$4$-manifold with $b_2^+ = 1$ (e.g. in blow-ups of ${\mathbb{C}}P^2$).
It would be interesting to know if this holds in greater generality.
Finally, we remark that if one replaces the condition $[L_1] \cdot
[L_2] = 0$ by the stronger assumption that $L_1 \cap L_2 = \emptyset$,
and drops conditions~$(3)$,~$(4)$ of Assumption~$\mathscr{L}$, then it
still follows that $[L_1]*[L_2] = 0$. This is proved
in~\cite{Bi-Co:rigidity}-Theorem~2.4.1 (see also~\S8
in~\cite{Bi-Co:Yasha-fest}).
\begin{proof}[Proof of Theorem~\ref{t:L_1L_2}]
Without loss of generality we may assume that both $[L_1]$ and
$[L_2]$ are non-trivial in $H_n(M;\mathbb{Q})$, for otherwise
possibility~(2) obviously holds.
Define $y_1 = [L_2]*e_{L_1} \in QH_0(L_1; \Lambda^1_{\mathbb{Q}})$
and $y_2 = [L_1]*e_{L_2} \in QH_0(L_2; \Lambda^2_{\mathbb{Q}})$.
Here we have denoted $\Lambda^1_{\mathbb{Q}} = \mathbb{Q}[t_1^{-1},
t_1]$ with $|t_1| = -N_{L_1}$ and $\Lambda^2_{\mathbb{Q}} =
\mathbb{Q}[t_2^{-1}, t_2]$ with $|t_2|= -N_{L_2}$ since we have to
distinguish between the coefficient rings of $L_1$ and $L_2$. Note
that under the embeddings of $\Lambda^1_{\mathbb{Q}}$ and
$\Lambda^2_{\mathbb{Q}}$ into $R_{\mathbb{Q}} =
\mathbb{Q}[q^{-1},q]$ we have $t_1^{\nu_1} = q^{n/2} =
t_2^{\nu_2}$. (See~\S\ref{sb:HF}.)
Since $[L_1] \cdot [L_2] = 0$ and due to condition~$(3)$ of
Assumption~$\mathscr{L}$, we have
$$y_1 = \kappa_1 e_{L_1}t_1^{\nu_1},
\quad y_2=\kappa_2 e_{L_2} t_2^{\nu_2},$$ for some $\kappa_1,
\kappa_2 \in \mathbb{Z}$ and where $\nu_1 = n/N_{L_1}$, $\nu_2 = n/
N_{L_2}$. At the same time we also have
$$i_{L_1}(y_1) = i_{L_2}(y_2) = [L_1]*[L_2].$$ Here we have used the
fact that $n$ must be even, hence $[L_1]*[L_2] = [L_2]*[L_1]$.
It follows that $\kappa_1 [L_1] q^{n/2} = [L_1]*[L_2] = \kappa_2
[L_2] q^{n/2}$ and the result follows. (As in the proof of
Theorem~\ref{t:cubic_eq-ccl}, note that here too, the identities
proved involve only positive powers of $q$ hence they hold in
$QH(M;R^+)$ too.)
\end{proof}
The next result is concerned with Lagrangian spheres that do not
satisfy Assumption~$\mathscr{L}$, but rather~(2i-b) on
page~\pageref{i:CM-n} (after Theorem~\ref{t:cubic-eq-sphere}).
\begin{thm} \label{t:lag-spheres-no_L} Let $L_1, L_2 \subset M$ be
oriented Lagrangian spheres in a closed monotone symplectic
manifold $M$ of dimension $2n$. Assume that $n=$~even and $C_M | n$
but $2C_M \centernot| n$.
\begin{enumerate}
\item \label{if:L1L2} If $[L_1] \cdot [L_2] = 0$ then
$[L_1]*[L_2]=0$.
\item \label{if:k-L1L2} If $k:= \#([L_1] \cdot [L_2]) \neq 0$
then $$[L_1]^{*2} = [L_2]^{*2} = \tfrac{2 \varepsilon}{k}
[L_1]*[L_2],$$ where $\varepsilon = (-1)^{n(n-1)/2}$.
Furthermore, either $[L_1]^{*3} = [L_2]^{*3} = 0$ or $[L_1] =
\pm [L_2]$ (the two possibilities not being exclusive).
\end{enumerate}
\end{thm}
\begin{rem} \label{r:lag-spheres-no_L} Recall from
Theorem~\ref{t:cubic-eq-sphere} that each of the Lagrangians $L_i$,
$i=1,2$, satisfies a cubic equation of the type: $[L_i]^{*3} =
\gamma_i [L_i]q^n$. In general, it seems that the coefficients
$\gamma_1$ and $\gamma_2$ might differ one from the other, however
in case~\eqref{if:k-L1L2} of the theorem it is easy to see that
$\gamma_1 = \gamma_2$.
\end{rem}
\begin{proof}[Proof of Theorem~\ref{t:lag-spheres-no_L}]
By standard arguments there exist canonical isomorphisms $QH_*(L_i)
\to H_*(L_i;\Lambda)$, $i=1,2$. Thus
$$QH_0(L_i) = \mathbb{Z} p_i, \quad QH_n(L_i) = \mathbb{Z} e_{L_i},$$
where $p_i$ is the class of a point in $L_i$ and $e_{L_i}$ is the
fundamental class of $L_i$.
Assume first that $[L_1] \cdot [L_2] = 0$. In view of the
isomorphism just mentioned we have $[L_1] * e_{L_2} = 0$. Applying
$i_{L_2}$ to the last equality we obtain $[L_1]*[L_2] = 0$.
Assume now that $k: = \#([L_1] \cdot [L_2]) \neq 0$. Due to our
assumptions we have:
\begin{enumerate}[(i)]
\item \label{i:L2eL1} $[L_2]*e_{L_1} = k p_1$.
\item \label{i:L1eL2} $[L_1]*e_{L_2} = k p_2$.
\item \label{i:L1eL1} $[L_1]*e_{L_1} = 2\varepsilon p_1$.
\item \label{i:L2eL2} $[L_2]*e_{L_2} = 2 \varepsilon p_2$.
\end{enumerate}
From~\eqref{i:L2eL1} and~\eqref{i:L1eL2} it follows that
$$i_{L_1}(p_1) = i_{L_2}(p_2) = \tfrac{1}{k} [L_1]*[L_2].$$
From~\eqref{i:L1eL1} and~\eqref{i:L2eL2} we obtain:
$$i_{L_1}(p_1) = \tfrac{\varepsilon}{2} [L_1]*[L_1], \quad
i_{L_2}(p_2) = \tfrac{\varepsilon}{2} [L_2]*[L_2].$$ This implies
the first result of point~\eqref{if:k-L1L2} of the
theorem.
To prove the other statements, we use point~\eqref{i:div} of
Theorem~\ref{t:cubic-eq-sphere}. By that theorem there exist
$\gamma_1, \gamma_2 \in \mathbb{Z}$ such that
$$[L_1]^{*3} = \gamma_1 [L_1]q^n, \quad [L_2]^{*3} = \gamma_2 [L_2] q^n.$$
It follows that $$\gamma_1 [L_1]q^n = [L_1]^{*3} = [L_2]^{*2}*[L_1]
= \tfrac{k \varepsilon}{2} [L_2]^{*3} = \tfrac{k \varepsilon}{2}
\gamma_2 [L_2]q^n,$$ hence $\gamma_1[L_1] = \tfrac{k
\varepsilon}{2}\gamma_2[L_2]$. It follows that $\gamma_1=0$ if
and only if $\gamma_2 = 0$. Now, if $\gamma_1 \neq 0$ then
$$\gamma_1 [L_1]\cdot [L_2] = \tfrac{k \varepsilon}{2} \gamma_2 [L_2]\cdot [L_2] =
\tfrac{k \varepsilon}{2} \gamma_2 2 \varepsilon p,$$ where $p \in
H_0(M)$ is the class of a point. At the same time we have
$[L_1]\cdot [L_2] = k p$ and so $k \gamma_1 = k\gamma_2$. It
follows that $\gamma_1 = \gamma_2$ and $[L_1] = \tfrac{k
\varepsilon}{2}[L_2]$. Squaring the last equality with respect to
the (classical) intersection product we obtain: $2 \varepsilon =
\tfrac{k^2}{4} 2 \varepsilon$, hence $k = \pm 2$. This shows that
$[L_1] = \pm [L_2]$.
\end{proof}
\section{What happens in the non-monotone case} \setcounter{thm}{0}
\label{s:non-monotone}
Here we briefly outline how to extend, in certain situations, part of
the results of the paper to non-monotone Lagrangians.
Let $L^n \subset M^{2n}$ be a Lagrangian submanifold, which is not
necessarily monotone. Under such general assumptions, the Lagrangian
Floer and Lagrangian quantum homologies might not be well defined, at
least not in a straightforward way. There are several problems with the
definition. The main one has to do with transversality related to
spaces of pseudo-holomorphic disks which cannot be controlled easily
(see~\cite{FO3:book-vol1, FO3:book-vol2} for a sophisticated general
approach to deal with this problem). The other problem (which is very
much related to the first one) comes from bubbling of holomorphic
disks with non-positive Maslov index. This leads to complications in
the algebraic formalism of Lagrangian Floer theory.
Nevertheless, the theory does work sufficiently well in dimension $4$
and we can still push some of our results to this case. Henceforth we
assume that $\dim M = 2n = 4$. We denote the symplectic structure of
$M$ by $\omega$. For simplicity assume that $L$ is a Lagrangian
sphere. We fix for the rest of the section an orientation and spin structure on
$L$.
We first introduce the coefficient ring $\widetilde{\Lambda}^+_{nov}$
which is a hybrid between the Novikov ring and
$\widetilde{\Lambda}^+$. More precisely, we define
$\widetilde{\Lambda}^+_{nov}$ to be the set of all elements $p(T)$ of
the form $$p(T) = a_0 + \sum_A a_A T^A, \quad a_0, a_A \in
\mathbb{Z},$$ satisfying the following conditions. The sum is allowed
to be infinite (in contrast to $\widetilde{\Lambda}^+$) and is taken
over all $A \in H_2^D(M,L)$ satisfying both $\mu(A)>0$ and
$\omega(A)>0$. In addition we require that for every $S \in
\mathbb{R}$ the number of non-trivial coefficients $a_A \neq 0$ in
$p(T)$ with $\omega(A) < S$ is finite. It is easy to see that
$\widetilde{\Lambda}^+_{nov}$ is a commutative ring with respect to
the usual operations. We endow $\widetilde{\Lambda}^+_{nov}$ with the
same grading as $\widetilde{\Lambda}^+$, i.e. $|T^A| = -\mu(A)$.
Similarly to the monotone case, we define the minimal Chern number
$C_M$ of $(M, \omega)$ as follows. Let $H_2^S = \textnormal{image\,}
(\pi_2(M) \longrightarrow H_2(M))$ be the image of the Hurewicz
homomorphism. Define: $C_M = \min \bigl\{ \langle c_1, A \rangle \mid
A \in H_2^S, \, \langle c_1, A \rangle >0, \; \langle [\omega], A
\rangle > 0 \bigr\}$.
The following version of Theorem~\ref{t:cubic-eq-sphere} continues to
hold for all Lagrangian $2$-spheres, whether monotone or not, provided
we work over the ring $\widetilde{\Lambda}^+_{nov}$ {in
$QH(M)$.}
\begin{thm} \label{t:non-monotone} Let $L^2 \subset M^4$ be a
Lagrangian $2$-sphere (without any monotonicity assumptions). Then
there exists $\widetilde{\gamma}_L \in \widetilde{\Lambda}^+_{nov}$
such that $[L]^{*3} = \widetilde{\gamma}_L [L]$. If $C_M = 2$ then
$\widetilde{\gamma}_L$ is divisible by $4$. Moreover, all the
calculations made in~\S\ref{sb:examples-revisited} continue to hold
without any changes in this setting.
\end{thm}
We will now outline the main points in the proof of the theorem,
paying attention to the main difficulties in the non-monotone case.
Recall that the proof of Theorem~\ref{t:cubic-eq-sphere} made use of
both the ambient quantum homology $QH(M)$ and the Lagrangian one
$QH(L)$, as well as the relations between them, e.g. the quantum
inclusion map $i_L : QH(L) \longrightarrow QH(M)$.
The ambient quantum homology $QH(M)$ can be defined (over
$\widetilde{\Lambda}^+_{nov}$) in the semi-positive case
(see~\cite{McD-Sa:jhol}) in a very similar way as in the monotone
case. This covers our case since $4$-dimensional symplectic manifolds
are always semi-positive. As for the Lagrangian quantum homology
things are less straightforward, and we explain the difficulties next.
Denote by $\mathcal{J}$ the space of almost complex structures
compatible with $\omega$. Then for generic $J \in \mathcal{J}$ there are no
non-constant $J$-holomorphic disks $u:(D, \partial D) \to (M, L)$ with
Maslov index $\mu(u) \leq 0$. This follows from the fact that the
spaces of such disks have negative virtual dimensions, together with
standard transversality arguments from the theory of
pseudo-holomorphic curves (see~\cite{McD-Sa:jhol, Laz:discs,
Laz:decomp, Kw-Oh:discs}). From this it follows by the theory
from~\cite{Bi-Co:qrel-long, Bi-Co:rigidity} that for a generic choice
of $J$ (and other auxiliary data) the associated pearl complex is well
defined and its homology $QH(L;\widetilde{\Lambda}^+_{nov}; J)$
satisfies all the algebraic properties described in~\S\ref{sb:HF} as
long as we work with coefficients in $\widetilde{\Lambda}^+_{nov}$.
The reason to work over $\widetilde{\Lambda}^+_{nov}$ comes from the
fact that there might be infinitely many pearly trajectories
connecting two critical points that all contribute to the differential
of the pearl complex. However, for any given $0< S \in \mathbb{R}$ the
number of such trajectories with disks of total area bounded above by
$S$ is finite, and therefore the differential of the pearl complex is
well defined over $\widetilde{\Lambda}^+_{nov}$. A detailed account on
this approach to the pearl complex in dimension $4$ has been carried
out in~\cite{Cha:uniruling-dim-4}.
Since $L$ is an even dimensional sphere, for degree reasons
$QH(L;\widetilde{\Lambda}^+_{nov}; J)$ is isomorphic (possibly in a
non-canonical way) to the singular homology $H_*(L;
\widetilde{\Lambda}^+_{nov})$. However, it is not clear whether the
continuation maps $QH(L;\widetilde{\Lambda}^+_{nov}; J_0) \longrightarrow
QH(L;\widetilde{\Lambda}^+_{nov}; J_1)$ are well defined for every two
regular $J$'s, and moreover, it is a priori not clear whether the
quantum ring structure on $QH(L;\widetilde{\Lambda}^+_{nov}; J)$ is
independent of $J$.
To understand these problems better denote by $\mathcal{J}_{\mu \leq
0} \subset \mathcal{J}$ the subspace of all $J$'s for which there
exists either a non-constant $J$-holomorphic disk with $\mu \leq 0$ or
a $J$-holomorphic rational curve with Chern number $\leq 0$.
{Roughly speaking the space $\mathcal{J}_{\mu \leq 0}$ has
strata of codimension $1$ in $\mathcal{J}$.} Denote by
$\mathcal{J}_{\mu>0} = \mathcal{J} \setminus \mathcal{J}_{\mu \leq 0}$
its complement. Let $J_0, J_1 \in \mathcal{J}_{\mu>0}$ be two regular
almost complex structures. If $J_0, J_1$ happen to belong to the same
path connected component of $\mathcal{J}_{\mu>0}$ then we have a
canonical isomorphism $QH(L;\widetilde{\Lambda}^+_{nov}; J_0)
\longrightarrow QH(L;\widetilde{\Lambda}^+_{nov}; J_1)$ which is in
fact a ring isomorphism. However, for $J_0, J_1$ lying in different
path connected components of $\mathcal{J}_{\mu>0}$ this might not be
the case. The problem is that when joining $J_0$ with $J_1$ by a path
$\{J_t\}_{t \in [0,1]}$ there will be instances of $t$ where the path
goes through $\mathcal{J}_{\mu \leq 0}$, hence the spaces of pearly
trajectories used in defining the continuation maps might not be
compact due to bubbling of holomorphic disks with Maslov index $0$.
Under such circumstances ``wall crossing'' analysis is necessary in
order to try to rectify the situation.
Despite these difficulties, Theorem~\ref{t:non-monotone} still holds.
The point is that although the Lagrangian quantum homology does depend
on the choice of $J$, the ambient quantum homology
$QH(M;\widetilde{\Lambda}^+_{nov}; J)$ is independent of that choice.
Inspecting the proof of Theorem~\ref{t:cubic-eq-sphere} one can see
that the invariance of $QH(L;\widetilde{\Lambda}^+_{nov}; J)$ under changes
of $J$ does not play any role. The only important thing is that
$QH(M;\widetilde{\Lambda}^+_{nov};J)$ is independent of $J$ and that the
quantum inclusion map $i_L: QH(L;\widetilde{\Lambda}^+_{nov}; J)
\longrightarrow QH(M;\widetilde{\Lambda}^+_{nov}; J)$ is well defined and
satisfies the algebraic properties described in~\S\ref{sb:HF}.
The rest of the arguments proving Theorem~\ref{t:cubic-eq-sphere} go
through with mild modifications and yield
Theorem~\ref{t:non-monotone}. \qed
\begin{rem} \label{r:discr-determines-prod} Assume that $C_M = 1$.
Change the ground ring from $\mathbb{Z}$ to $\mathbb{Q}$ and define
$\widetilde{\Lambda}^+_{nov, \mathbb{Q}}$ in the same way as
$\widetilde{\Lambda}^+_{nov}$ but over $\mathbb{Q}$. It is easy to
see that the discriminant $\widetilde{\Delta}_L =
\widetilde{\gamma}_L \in \widetilde{\Lambda}^+_{nov}$ determines
the isomorphism type of the ring $QH(L;\widetilde{\Lambda}^+_{nov,
\mathbb{Q}}; J)$. Since the discriminant is independent of $J$ it
follows that the ring isomorphism type of
$QH(L;\widetilde{\Lambda}^+_{nov, \mathbb{Q}}; J)$ is in fact
independent of $J$ too. However, as mentioned earlier, it is not
clear if an isomorphism between the Lagrangian quantum homologies
corresponding to $J$'s in different components of
$\mathcal{J}_{\mu>0}$ can be realized via continuation maps.
If $C_M = 2$ the situation is simpler. In this case there is no
need to work over $\mathbb{Q}$, i.e. the isomorphism type of the
Lagrangian quantum homology with coefficients in
$\widetilde{\Lambda}^+_{nov}$ is determined by
$\widetilde{\gamma}_L$.
\end{rem}
\subsection*{Organization of the paper}
The rest of the paper is organized as follows.
In~\S\ref{s:floer-setting} we briefly recall the necessary ingredients
from Lagrangian Floer and quantum homologies used in the sequel.
In~\S\ref{sb:more-discr} we also give more details on the
discriminant. \S\ref{s:lag-cubic-eq} is devoted to the Lagrangian
cubic equation. We prove in that section more general versions of
Theorems~\ref{t:cubic-eq} and~\ref{t:rel-to-discr}. Then
in~\S\ref{sb:prf-cubic-eq-S} we prove Theorem~\ref{t:cubic-eq-sphere}.
We also prove in~\S\ref{sb:further-res} additional corollaries derived
from these theorems. In~\S\ref{s:disc-lcob} we study the discriminant
in the realm of Lagrangian cobordism and prove Theorem~\ref{t:cob} and
Corollary~\ref{c:del_1=del_2}. \S\ref{s:examples} is dedicated to
examples. We briefly explain how to construct Lagrangian spheres in
various homology classes on symplectic Del Pezzo surfaces and carry
out the calculation of the discriminants of those Lagrangians. We
discuss some higher dimensional examples too. In~\S\ref{s:coeffs} we
explain an extension of the discriminant and the Lagrangian cubic
equation over a more general ring of coefficients that takes into
account the different homology classes of the holomorphic curves that
contribute to our invariants. In~\S\ref{sb:examples-revisited} we
recalculate some of the examples from~\S\ref{s:examples} over this
ring. In~\S\ref{s:enum} we discuss the relation of the discriminant to
the enumerative geometry of holomorphic disks. Finally,
in~\S\ref{s:non-monotone} we consider the non-monotone case and state
a version of Theorem~\ref{t:cubic-eq-sphere} for not necessarily
monotone Lagrangian 2-spheres.
\section{\label{sec:intro}Introduction}
Over the past two decades, a large body of observational evidence has indicated that our Universe is undergoing a phase of accelerated expansion \cite{Riess1998, Perlmutter1999, Weinberg2013, PlanckCollaboration2013}. Dark energy (DE) is introduced to account for this late-time accelerated expansion. The most popular candidate for dark energy is the cosmological constant (CC), or $\Lambda$ term \cite{Carroll1992}, in the framework of general relativity (GR). The $\Lambda$-cold-dark-matter ($\Lambda$CDM) model explains the current cosmological observations very well, but it suffers from two unsolved puzzles, the so-called fine-tuning \cite{weinberg1989} and coincidence problems \cite{Ostriker1995}. The former refers to the fact that the value of $\Lambda$ inferred from observations ($\rho_{\Lambda}=\Lambda/8\pi G\lesssim10^{-47}\,GeV^{4}$) differs from theoretical estimates given by quantum field theory ($\rho_{\Lambda}\sim10^{71}\,GeV^{4}$) by almost 120 orders of magnitude, while the latter is the problem of understanding why the dark energy density is not only small, but also of the same order of magnitude as the energy density of cold dark matter (CDM). As a consequence, to alleviate or even solve these problems, a flood of dark energy models have been proposed and studied by cosmologists, such as phantom \cite{Caldwell2002}, quintessence \cite{Fujii1982, Ford1987, WETTERICH1988668, Hebecker2001}, decaying vacuum \cite{Wang2005, Lima1996dv, Lima1999}, bulk viscosity \cite{Meng2007, Ren2006, Ren:2005nw, Wang2017b}, Chaplygin gas \cite{Kamenshchik2001,Bento2002} and so on.
In this study, we focus on the decaying vacuum model (DVM), which attempts to alleviate the coincidence problem by allowing the CDM and DE to interact with each other \cite{Ferreira2012}. We follow the model proposed by Wang and Meng, which assumes a specific form for the modification of the CDM expansion rate due to the DV effect. This decaying vacuum scenario has been discussed before; for example, in Ref. \cite{Alcaniz2005} the authors used SNe Ia, Chandra measurements of the X-ray gas mass fraction in 26 galaxy clusters and CMB data from WMAP, and found that the vacuum decay rate parameter lies in the interval $\epsilon=0.11\pm0.12$. Ref. \cite{Jesus2008} obtained the constraint $\epsilon=0.000_{-0.000}^{+0.057}$ on the decay rate parameter using SNe Ia, BAO and CMB shift parameter data sets. In this work, we re-examine this DVM in light of the latest observational data, using the \textbf{CAMB} and \textbf{CosmoMC} \cite{Lewis1999bs,Lewis2002ah} packages with the Markov Chain Monte Carlo (MCMC) method.
This paper is organized as follows. In Sect. \ref{sec:model}, we briefly review the model proposed by Wang and Meng. In Sect. \ref{sec:obsercations and numerical calculations}, we constrain this model using recent cosmological observations and analyze the results. The conclusions are presented in Sect. \ref{sec:summary}.
\section{decaying vacuum model}
\label{sec:model}
Here, we follow the arguments presented in Ref. \cite{Wang2005}, where the standard continuity equation for CDM is modified,
\begin{eqnarray}
\label{eq.1}
\dot{\rho}_{m}+3H\rho_{m} &=& -\dot{\rho}_{\Lambda},
\end{eqnarray}
where $\rho_{\Lambda}$ and $\rho_{m}$ are the energy density of vacuum and CDM, respectively.
Since the vacuum energy interacts with CDM, the evolution of the CDM deviates from its standard form; this deviation can be characterized by a parameter $\epsilon$, i.e.,
\begin{eqnarray}
\label{eq.2}
\rho_{m} &=& \rho_{m0}a^{-3+\epsilon},
\end{eqnarray}
where $\rho_{m0}$ is the current CDM energy density and $\epsilon$ is a constant parameter that describes the vacuum energy decay rate. $\epsilon>0$ implies that vacuum energy decays into CDM, so the CDM component dilutes more slowly; on the contrary, $\epsilon<0$ implies that CDM decays into vacuum energy, and $\epsilon=0$ corresponds to the non-interacting scenario.
Now, by integrating Eq. \ref{eq.1} one obtains
\begin{eqnarray}
\label{eq.3}
\rho_{\Lambda} &=& \tilde{\rho}_{\Lambda 0}+\frac{\epsilon \rho_{m0}}{3-\epsilon}a^{-3+\epsilon},
\end{eqnarray}
where $\tilde{\rho}_{\Lambda 0}$ is an integration constant representing the ground state of the vacuum. Note that, for this DVM, we have ignored the contributions from the radiation and spatial curvature components, and taken the equation of state (EoS) of the vacuum, $\omega_{\Lambda}\equiv p_{\Lambda}/\rho_{\Lambda}$, to be the constant $-1$. The Friedmann equation can then be rewritten as
\begin{equation}\label{eq.4}
\frac{H^{2}}{H_{0}^{2}}=\frac{3 \Omega_{m0}}{3-\epsilon}a^{-3+\epsilon}+1-\frac{3\Omega_{m0}}{3-\epsilon}.
\end{equation}
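As a sanity check on Eqs. (\ref{eq.2})--(\ref{eq.4}), the background evolution can be coded in a few lines. The following Python sketch (the parameter values are illustrative, chosen close to our best fits, and the function names are ours, not part of any analysis pipeline) verifies that the CDM and vacuum densities of Eqs. (\ref{eq.2}) and (\ref{eq.3}) sum to the expansion rate of Eq. (\ref{eq.4}) in a flat universe:

```python
def omega_m(a, Om0=0.308, eps=-0.0003):
    # CDM density in units of today's critical density, Eq. (2)
    return Om0 * a ** (-3.0 + eps)

def omega_vac(a, Om0=0.308, eps=-0.0003):
    # Vacuum density, Eq. (3); the constant piece is fixed by flatness
    return 1.0 - 3.0 * Om0 / (3.0 - eps) + (eps * Om0 / (3.0 - eps)) * a ** (-3.0 + eps)

def E2(a, Om0=0.308, eps=-0.0003):
    # Dimensionless H^2/H0^2 from Eq. (4); eps = 0 recovers flat LambdaCDM
    return (3.0 * Om0 / (3.0 - eps)) * a ** (-3.0 + eps) + 1.0 - 3.0 * Om0 / (3.0 - eps)
```

Setting $\epsilon=0$ reduces $E2$ to the flat $\Lambda$CDM form $\Omega_{m0}a^{-3}+1-\Omega_{m0}$.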
\section{numerical calculations and analysis results}
\label{sec:obsercations and numerical calculations}
\begin{figure}
\centering
\includegraphics[scale=0.6]{dv_eight.pdf}
\caption{The one-dimensional distributions of the individual parameters and two-dimensional marginalized contours of the DVM, where the contour lines represent the $68\%$ and $95\%$ C.L., respectively.}
\label{fig:triangle}
\end{figure}
To study quantitatively the properties of dark energy, we perform global constraints on our DVM using the latest cosmological observations, which are as follows: (i) \emph{CMB}: the CMB temperature and polarization data from \emph{Planck 2015} \cite{PlanckCollaboration2015}, which include the likelihoods of temperature (TT) at $30 \leqslant l \leqslant2500$, the cross-correlation of temperature and polarization (TE), the polarization (EE) power spectra, and the Planck low-$l$ temperature and polarization likelihood at $2\leqslant l \leqslant29$. (ii) \emph{BAO}: we employ four BAO data points: the Six Degree Field Galaxy Survey (6dFGS) sample at effective redshift $z_{eff}=0.106$ \cite{Beutler2011}, the SDSS main galaxy sample (MGS) at $z_{eff}=0.15$ \cite{Ross2015}, and the LOWZ at $z_{eff}=0.32$ and CMASS at $z_{eff}=0.57$ samples of the SDSS-III BOSS DR12 sample \cite{Cuesta2016}. (iii) \emph{JLA}: the ``Joint Light-curve Analysis'' (JLA) sample of Type Ia supernovae \cite{Betoule2014} used in this paper is constructed from the SNLS, SDSS, HST and several samples of low-$z$ SNe; it consists of 740 SN Ia data points covering the redshift range $z \in [0.01,1.3]$. (iv) \emph{OHD}: the observational Hubble parameter data with 30 points \cite{Moresco2016a,Wang2017,Geng2017a}.
We adopt the Markov Chain Monte Carlo (MCMC) method with the above-mentioned data to constrain this DVM. We modify the public package \textbf{CosmoMC} and the Boltzmann code \textbf{CAMB} to infer the posterior probability distributions of the cosmological parameters. In addition, the $\chi^{2}$ function for the $H(z)$ data is taken to be
\begin{equation}\label{}
\chi^{2}=\sum_{i=1}^{n}\frac{(H_{th}(z_{i})-H_{o}(z_{i}))^{2}}{E_{i}^{2}},
\end{equation}
where $H_{th}$ is the theoretical prediction calculated from CAMB, and $H_{o}(z_{i})$ and $E_{i}$ denote the observational value and error, respectively.
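For illustration, the $\chi^{2}$ above is straightforward to evaluate against the DVM background of Eq. (\ref{eq.4}). The sketch below is schematic (the parameter values are illustrative and the data tuples stand in for the actual OHD sample; the function names are ours):

```python
import math

def H_dvm(z, H0=67.87, Om0=0.308, eps=-0.0003):
    # Hubble rate in km/s/Mpc from Eq. (4), with a = 1/(1+z)
    a = 1.0 / (1.0 + z)
    E2 = (3.0 * Om0 / (3.0 - eps)) * a ** (-3.0 + eps) + 1.0 - 3.0 * Om0 / (3.0 - eps)
    return H0 * math.sqrt(E2)

def chi2_H(data, H0=67.87, Om0=0.308, eps=-0.0003):
    # data: iterable of (z_i, H_obs_i, error_i); returns the H(z) chi-square
    return sum(((H_dvm(z, H0, Om0, eps) - Hobs) / err) ** 2
               for z, Hobs, err in data)
```

A perfect-fit data set gives $\chi^{2}=0$, and each point contributes the square of its residual in units of its error.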
\begin{table}[!hbp]
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}
\caption{The prior ranges, the best-fitting values and 1$\sigma$ marginalized uncertainties of cosmological parameters of the DVM, and the numbers in the bracket represent the best-fit values of the $\Lambda$CDM model. }\label{Table:parameters}
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Parameters& Priors&
\tabincell{c}{CMB+BAO} &
\tabincell{c}{CMB+BAO+JLA}&
\tabincell{c}{CMB+BAO+OHD}&
\tabincell{c}{CMB+BAO\\+JLA+OHD} \\
\hline
{\boldmath$\epsilon $} & $[-0.3, 0.3]$ &
$-0.00029\pm0.00023$ &
$-0.00028\pm 0.00024$&
$-0.00032\pm 0.00024$ &
$-0.00030\pm 0.00024$ \\
\hline
{\boldmath$\Omega_b h^2 $} & $[0.005, 0.1]$ &
\tabincell{c}{$0.02234\pm 0.00015$\\$(0.02233)$} &
\tabincell{c}{$0.02234\pm 0.00014$\\(0.02232)} &
\tabincell{c}{$0.02234\pm 0.00015$\\(0.02233)} &
\tabincell{c}{$0.02234\pm 0.00014$\\(0.02232)} \\
\hline
{\boldmath$\Omega_c h^2 $} & $[0.001, 0.99]$ &
\tabincell{c}{$0.1188\pm 0.0012$\\(0.1181)} &
\tabincell{c}{$0.1187\pm 0.0011$\\(0.1182)} &
\tabincell{c}{$0.1187\pm 0.0011$\\(0.1182)} &
\tabincell{c}{$0.1187\pm 0.0011$\\(0.1181)} \\
\hline
{\boldmath$\Sigma m_\nu $} & $[0,5]$ &
$<0.0781(<0.0787)$ &
$<0.0859(<0.0689)$ &
$<0.0809(<0.0763)$ &
$<0.0773(<0.0698)$ \\
\hline
{\boldmath$n_s $} & $[0.8, 1.2]$ &
\tabincell{c}{$0.9677\pm 0.0036$\\(0.9698)} &
\tabincell{c}{$0.9680\pm 0.0042$\\(0.9693)} &
\tabincell{c}{$0.9678\pm 0.0039$\\(0.9696)} &
\tabincell{c}{$0.9683\pm 0.0040$\\(0.9698)} \\
\hline
{\boldmath$H_{0}$} & $[20,100]$ &
\tabincell{c}{$67.85_{-0.52}^{+0.64}$\\(68.02)} &
\tabincell{c}{$67.82\pm 0.55$\\(68.03)} &
\tabincell{c}{$67.84\pm 0.55$\\(68.00)} &
\tabincell{c}{$67.87_{-0.50}^{+0.55}$\\(68.06)} \\
\hline
{\boldmath$\Omega_{m}$} & - &
\tabincell{c}{$0.3081_{-0.0082}^{+0.0068}$\\(0.3050)} &
\tabincell{c}{$0.3083\pm 0.0069$\\(0.3049)} &
\tabincell{c}{$0.3081\pm 0.0069$\\(0.3054)} &
\tabincell{c}{$0.3078\pm 0.0066$\\(0.3046)} \\
\hline
{\boldmath$\sigma_{8}$} & - &
\tabincell{c}{$0.826_{-0.013}^{+0.018}$\\(0.825)} &
\tabincell{c}{$0.826_{-0.014}^{+0.017}$\\(0.827)} &
\tabincell{c}{$0.826_{-0.014}^{+0.017}$\\(0.827)} &
\tabincell{c}{$0.828_{-0.014}^{+0.018}$\\(0.828)} \\
\hline
{\boldmath$z_{\rm{eq}}$} & - &
\tabincell{c}{$3372\pm 26$\\(3355)} &
\tabincell{c}{$3370_{-24}^{+27}$\\(3357)} &
\tabincell{c}{$3371\pm 25$\\(3358)} &
\tabincell{c}{$3371\pm 25$\\(3356)} \\
\hline
{\boldmath$\chi_{min}^{2}$} & - &
12980.634(12985.332) &
13678.974(13682.862) &
12997.854(12998.46) &
13694.348(13696.51) \\
\hline
\end{tabular}}
\end{table}
\begin{figure}
\centering
\includegraphics[width=16cm]{1d.pdf}
\caption{The one-dimensional posterior distributions of $\Omega_{b}h^{2}$, $\Omega_{c}h^{2}$, $\Omega_{\Lambda}$ and $z_{eq}$ in the DVM (black) and the $\Lambda$CDM model (red), using the data combination CMB+BAO+JLA+OHD.}
\label{fig:1d}
\end{figure}
In Table \ref{Table:parameters} and Fig. \ref{fig:triangle}, we show the best-fit values and corresponding 1$\sigma$ errors of the individual parameters, together with the one-dimensional posterior distributions and two-dimensional marginalized contours of the DVM. One can find that the best-fit value of $\epsilon$ is about $\epsilon\sim -0.0003$; this value is very close to the standard non-interacting case but slightly smaller than 0, which implies that CDM decays into vacuum energy. Noting that $\chi_{DVM}^{2}\lesssim\chi_{\Lambda CDM}^{2}$ for all the data sets, we conclude that the DVM deviates slightly from the $\Lambda$CDM model and is favored by the current observations.
To study the constrained parameters in more detail and compare them between the DVM and the non-interacting case, i.e. $\Lambda$CDM, we exhibit their one-dimensional posterior distributions in Fig. \ref{fig:1d}. We find that the value of the baryon density $\Omega_{b}h^{2}$ in the DVM is almost the same as that in the $\Lambda$CDM model, since baryonic matter is not involved in the interaction. Furthermore, the values of the CDM density $\Omega_{c}h^{2}$ and the redshift of radiation-matter equality $z_{eq}$ in the DVM are larger than those in the $\Lambda$CDM model, while the value of the vacuum density $\Omega_{\Lambda}$ is smaller than that in the $\Lambda$CDM model. This result is consistent with the conclusion that CDM decays into vacuum energy.
\begin{figure}[ht]
\begin{minipage}{0.47\linewidth}
\centerline{\includegraphics[width=7.5cm]{H0vsepsilon.pdf}}
\end{minipage}
\hfill
\begin{minipage}{.47\linewidth}
\centerline{\includegraphics[width=7.5cm]{H0.pdf}}
\end{minipage}
\caption{The two-dimensional marginalized contour in the $H_{0}-\epsilon$ plane and the one-dimensional posterior distribution of $H_{0}$.}
\label{fig:H0}
\end{figure}
We are also interested in the tension between the improved local measurement $H_{0}=73.24\pm1.74\textrm{ km s}^{-1}\textrm{Mpc}^{-1}$ from Riess et al. \cite{Riess2016} and the Planck 2015 release $H_{0}=66.93\pm0.62\textrm{ km s}^{-1}\textrm{Mpc}^{-1}$ \cite{PlanckCollaboration2015}. Using the data combination CMB+BAO+JLA+OHD, as shown in Fig. \ref{fig:H0}, we obtain $H_{0}=67.87_{-0.50}^{+0.55}\textrm{ km s}^{-1}\textrm{Mpc}^{-1}$, so the $H_{0}$ tension can be alleviated from $3.41\sigma$ to $2.95\sigma$.
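The quoted tension levels follow from the usual Gaussian estimate $|H_{0}^{(1)}-H_{0}^{(2)}|/\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}$, symmetrizing our asymmetric error to roughly $0.55$. A minimal sketch (the function name is ours):

```python
import math

def tension_sigma(H1, s1, H2, s2):
    # Gaussian tension, in units of sigma, between two independent
    # H0 determinations with 1-sigma errors s1 and s2
    return abs(H1 - H2) / math.hypot(s1, s2)

# Riess et al. vs. Planck 2015: about 3.4 sigma
t_planck = tension_sigma(73.24, 1.74, 66.93, 0.62)
# Riess et al. vs. this work (CMB+BAO+JLA+OHD): about 2.9 sigma
t_dvm = tension_sigma(73.24, 1.74, 67.87, 0.55)
```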
Additionally, we calculate the effective EoS of the vacuum \cite{Wang2005}:
\begin{equation}\label{eq.EOS}
\omega_{x}=-1+\frac{(1+z)^{3-\epsilon}-(1+z)^{3}}{\frac{3}{3-\epsilon}(1+z)^{3-\epsilon}-(1+z)^{3}+\frac{\tilde{\Omega_{\Lambda0}}}{\Omega_{m0}}}.
\end{equation}
In Fig. \ref{fig:wofz}, we show the effective EoS of the vacuum as a function of redshift $z$. One can find that the EoS is larger than $-1$, indicating that the DVM is quintessence-like, but it can take a value below $-1$ at the 2$\sigma$ C.L., in which case the DVM becomes phantom-like.
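The effective EoS of Eq. (\ref{eq.EOS}) is easy to evaluate directly. The sketch below (illustrative parameter values; $\tilde{\Omega}_{\Lambda 0}$ fixed by flatness) checks that $\omega_{x}\to-1$ in the non-interacting limit and that $\omega_{x}>-1$ (quintessence-like) for the best-fit sign $\epsilon<0$:

```python
def w_eff(z, Om0=0.308, eps=-0.0003):
    # Effective vacuum EoS, Eq. (5); Omega_Lambda0 tilde fixed by flatness
    Ol0 = 1.0 - 3.0 * Om0 / (3.0 - eps)
    x = 1.0 + z
    num = x ** (3.0 - eps) - x ** 3
    den = (3.0 / (3.0 - eps)) * x ** (3.0 - eps) - x ** 3 + Ol0 / Om0
    return -1.0 + num / den
```

Flipping the sign of $\epsilon$ flips the sign of the numerator, sending $\omega_{x}$ below $-1$ (phantom-like), consistent with the behavior seen in Fig. \ref{fig:wofz}.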
\begin{figure}
\centering
\includegraphics[width=10cm]{wofz_CMBBAOJLAOHD.pdf}
\caption{The effective EoS of the vacuum as a function of redshift, using the combination CMB+BAO+JLA+OHD. The black and green (solid) lines correspond to the DVM and the $\Lambda$CDM model, respectively. The orange and pink regions between the red and blue (dashed) lines represent the $1\sigma$ and $2\sigma$ regions, respectively.}\label{fig:wofz}
\end{figure}
\section{conclusions}
\label{sec:summary}
In this work, we have revisited the decaying vacuum model and presented the tightest constraints we can obtain; the numerical results are exhibited in Table \ref{Table:parameters} and Figs. \ref{fig:triangle}, \ref{fig:1d}, \ref{fig:H0}. Combining the data of the CMB temperature fluctuations and polarization, the baryon acoustic oscillations (BAO), the SN Ia sample ``Joint Light-curve Analysis'' (JLA) and the $H(z)$ measurements, we find $\epsilon=-0.0003\pm0.00024$. This result is consistent with Ref. \cite{Jesus2008}, but the decay rate parameter $\epsilon$ is slightly smaller than 0 in our work, which implies that dark matter decays into vacuum energy. The one-dimensional posterior distributions of $\Omega_{b}h^{2}$, $\Omega_{c}h^{2}$, $\Omega_{\Lambda}$ and $z_{eq}$ constrained by the combination CMB+BAO+JLA+OHD support this conclusion. Furthermore, we find that the DVM can alleviate the current $H_{0}$ tension from $3.41\sigma$ to $2.95\sigma$. Finally, we have shown the effective EoS defined in Eq. \ref{eq.EOS}, which indicates that the vacuum is quintessence-like but can become phantom-like at the 2$\sigma$ C.L.
\section{acknowledgements}
YangJie Yan thanks Lu Yin for helpful communications and programming. This study is supported in part by the National Science Foundation of China.
\section{Introduction}
\subsection*{The Onsager-Machlup Function}
In finite dimensional probability, computations can be greatly eased by working with a probability density function. These densities capture how a probability distribution compares with some kind of uniform measure - typically Lebesgue or counting measure. Not only do densities ease computations involving expectations, probabilities and related quantities; they are the optimization objective to be maximized when computing the mode (or most likely element) of a distribution. However, densities do not immediately transfer to infinite dimensional probability because there is no uniform measure on infinite dimensional spaces.
However, the so-called Onsager-Machlup function can serve the role of a density in infinite dimensions. This function was introduced by Onsager and Machlup in \cite{Onsager-Machlup-Original}. The key insight in Onsager-Machlup theory is that one can compare a probability distribution to translations of itself rather than comparing a probability distribution to some translation invariant measure. This recovers the standard density in the finite dimensional case but allows for ``densities'' on infinite dimensional spaces.
One important use of the Onsager-Machlup function is to serve as the objective to optimize when computing the mode or the most likely element of a distribution on an infinite dimensional space. This is of particular interest when looking for the ``most likely path" of a stochastic process. This was the approach originally taken in \cite{Durr-Bach}, where the authors computed the Onsager-Machlup function for solutions to stochastic differential equations. They demonstrated that the optimizers of this function served as the ``most likely path" of this diffusion.
Since then, the Onsager-Machlup function has found its way into Bayesian statistics and MAP estimation, as in \cite{Stuart2,Stuart1}. Additionally, in \cite{Self1} the authors prove a ``portmanteau'' theorem that relates the Onsager-Machlup function on an abstract Banach space equipped with a Gaussian measure to an information projection problem, to an ``open loop'' or state-independent KL-weighted control problem, and, in the case of classical Wiener space, to an Euler-Lagrange equation or variational form. Furthermore, using this portmanteau theorem the authors in \cite{self2} prove a Feynman-Kac type result for systems of ordinary differential equations. They demonstrate that the solution to a system of second order linear ordinary differential equations is the most likely path of a diffusion. This Feynman-Kac result, like the original Feynman-Kac formula for parabolic partial differential equations, can (in principle) be used to efficiently solve systems of ordinary differential equations via Monte Carlo methods.
\subsection*{Small-Noise Large Deviations for Stochastic Differential Equations}
There has been extensive study of a related theory that also claims to lead to the ``most likely path" - the so-called Freidlin-Wentzell function which was first introduced in \cite{Freidlin-Wentzell}. The Freidlin-Wentzell function was shown to be the rate function for the rare event that an It\^o diffusion with vanishingly small noise strays away from the ``most likely path". This immediately poses a dilemma - if both Freidlin-Wentzell and Onsager-Machlup theories claim to produce the ``most likely path" then how do we reconcile them?
In the case of solutions to stochastic differential equations, the relationship between these two theories has been explored in \cite{Li-and-Li,Stuart-Gamma}. In these papers the authors prove that the Onsager-Machlup function $\Gamma$-converges to the Freidlin-Wentzell function, where $\Gamma$-convergence can be understood as essentially the convergence of minimizers. More precisely, for a family of stochastic differential equations
\[dX^\varepsilon=b(X^\varepsilon) dt+\varepsilon dB(t),\]
the Onsager-Machlup function for $X^\varepsilon$ is $$\operatorname{OM}_{X^\varepsilon}(z)=\frac{1}{2\varepsilon^2}\int_0^T [(z'(t)-b(z(t)))^2+\varepsilon^2 b'(z(t))]dt,$$
where $z$ is sufficiently regular. They show that $\varepsilon^2\operatorname{OM}_{X^\varepsilon}(z)$ $\Gamma$-converges to the Freidlin-Wentzell function
$$\operatorname{FW}_X(z)=\frac{1}{2}\int_0^T(z'(t)-b(z(t)))^2dt.$$
This implies, at least in the case of stochastic differential equations, that the Freidlin-Wentzell ``most likely path'' is the small noise limit of the Onsager-Machlup ``most likely path''. However, the connection between Onsager-Machlup and Freidlin-Wentzell appears to be more subtle. For instance, in the paper \cite{Dutra} the authors study numerical schemes for estimating the most likely path of a given stochastic differential equation. They show that the choice of discretization leads to different behavior: an Euler-Maruyama discretization leads to the Freidlin-Wentzell ``most likely path'' while a trapezoidal discretization leads to the Onsager-Machlup ``most likely path''. This suggests that there is a relationship even at positive noise.
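In discretized form, the two functionals above are easy to compare directly. The sketch below (our own forward-difference discretization, with an illustrative drift $b$; it is not either of the schemes studied in \cite{Dutra}) exhibits the pointwise convergence $\varepsilon^2\operatorname{OM}_{X^\varepsilon}(z)\to\operatorname{FW}_X(z)$ as $\varepsilon\to 0$:

```python
def om_action(z, b, db, eps, T=1.0):
    # Discretized eps^2 * OM functional:
    #   0.5 * int_0^T [(z' - b(z))^2 + eps^2 * b'(z)] dt
    n = len(z) - 1
    dt = T / n
    total = 0.0
    for i in range(n):
        zp = (z[i + 1] - z[i]) / dt  # forward-difference derivative
        total += 0.5 * ((zp - b(z[i])) ** 2 + eps ** 2 * db(z[i])) * dt
    return total

def fw_action(z, b, T=1.0):
    # Discretized Freidlin-Wentzell functional: 0.5 * int_0^T (z' - b(z))^2 dt
    n = len(z) - 1
    dt = T / n
    return sum(0.5 * ((z[i + 1] - z[i]) / dt - b(z[i])) ** 2 * dt
               for i in range(n))
```

For fixed $z$, the gap between the two discretized actions is exactly $\frac{\varepsilon^2}{2}\int_0^T b'(z(t))\,dt$ (in its Riemann-sum form), which vanishes as $\varepsilon\to 0$.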
\subsection*{Small Noise Large-Deviations for Arbitrary Gaussian Measures}
The law of a Brownian motion is a probability measure (called the Wiener measure) on the space of continuous functions. As all the finite dimensional projections of this measure are Gaussian, Wiener measure is an example of an infinite-dimensional Gaussian measure. Other examples of Gaussian measures include the law of any Gaussian process, the law of a Gaussian field, random sequences with Gaussian entries and many other examples.
Freidlin-Wentzell theory has been generalized to arbitrary Gaussian measures on Banach spaces. Perhaps surprisingly, both the Onsager-Machlup and Freidlin-Wentzell functions for a Gaussian measure on a Banach space are equal. More precisely, consider a Banach space $(\mathcal B, \|\cdot\|)$ with a Borel measure $\mu$ so that all the one dimensional projections are Gaussian. Consider the measure $\mu^\varepsilon$ defined by $\mu^\varepsilon(A):=\mu(\varepsilon^{-1}A)$ for Borel $A\subset \mathcal B$. Then it is shown in e.g. \cite{Bogachev} section 4.9, that the measures $\mu^\varepsilon$ satisfy a large deviations principle (LDP) with rate function $\operatorname{FW}_\mu$ and additionally for regular enough $z$ we have that, for any $\varepsilon > 0$,
\[\varepsilon^2\operatorname{OM}_{\mu^\varepsilon}(z)=\operatorname{FW}_\mu(z)=\frac{1}{2}\|z\|_\mu^2,\]
where $\|\cdot\|_\mu$ is the so-called Cameron-Martin norm (distinct from the norm on the Banach space).
\subsection*{Small-Noise Large Deviations for Measures Equivalent to Gaussian Measures}
The primary objective of this paper is to investigate the relationship between the Onsager-Machlup and Freidlin-Wentzell functional for measures equivalent to an arbitrary Gaussian measure. In many instances, a measure equivalent to a Gaussian can be viewed as the law of a process or field that solves some SDE driven by a Gaussian process or field. For instance, by Girsanov's theorem \cite{Oksendal}, the law of the solution to the stochastic differential equation
\[dX^\varepsilon=b(X^\varepsilon) dt+\varepsilon dB(t)\]
is equivalent, in the sense that both measures agree on the same null sets, to the law of $\varepsilon B(t)$. This is merely an instance of a general phenomenon, however, and a version of Girsanov's theorem holds for general Gaussian measures.
There has been extensive work in computing small-noise rate functions for processes whose law is equivalent to a Gaussian measure, such as \cite{Guo-1,LDP-SPDE,Budhiraja-LDP-FBM} to name a few. However to the authors' best knowledge, no work has been done exploring the relationship between the Onsager-Machlup and Freidlin-Wentzell function at this level of generality.
The following theorem is the main result of this paper.
\begin{theorem}\label{theorem:main}
Let $\mathcal B$ be a separable Banach space with centered Gaussian measure $\mu_0$. Let $\mu$ be another Borel measure on $\mathcal B$ equivalent to $\mu_0$, with density $\frac{d\mu}{d\mu_0}=\exp(\Phi)$, where $\Phi$ satisfies mild regularity conditions. Define the measures $\mu_0^\varepsilon$ by $\mu_0^\varepsilon(A)=\mu_0(\frac{1}\varepsilon A)$ for Borel $A\subset \mathcal B$ and $\mu^\varepsilon = \exp(\frac{1}{\varepsilon^2} \Phi) \mu_0^\varepsilon.$ Then the Onsager-Machlup functions for the measures $\mu^\varepsilon$ exist, denoted by $\operatorname{OM}_{\mu^\varepsilon}$. Additionally, we have that the $\{\mu^\varepsilon\}$ satisfy an LDP with rate function $\operatorname{FW}(z):=\lim_{\varepsilon\to 0^+} \varepsilon^2 \operatorname{OM}_{\mu^\varepsilon}(z).$ Furthermore, denoting $\operatorname{OM-Mode}(\mu^\varepsilon):=\arg \inf_{z\in \mathcal B} \operatorname{OM}_{\mu^\varepsilon}(z)$, we have that every cluster point of the elements $\operatorname{OM-Mode}(\mu^\varepsilon)$ is a minimizer of $\operatorname{FW}$, denoted by $\operatorname{FW-Mode}(\mu):=\arg\inf_{z\in \mathcal B} \operatorname{FW}(z)$. If
$\varepsilon^2\operatorname{OM}_{\mu^\varepsilon}$ are equicoercive, then we also have that
$\lim_{\varepsilon\to 0^+}\operatorname{OM-Mode}(\mu^\varepsilon)= \operatorname{FW-Mode}(\mu).$
\end{theorem}
In doing so, this also suggests a new way of computing the Freidlin-Wentzell rate function.
\section{Onsager-Machlup Formalism}
Given a probability measure $\mu$ on $\mathbb R^n$ that is equivalent to the Lebesgue measure $\lambda$, we can express its Radon-Nikodym derivative by
$$\frac{d\mu}{d\lambda}(z)=\lim_{\varepsilon\to 0}\frac{\mu(B_\varepsilon(z))}{\lambda(B_\varepsilon(z))}=\lim_{\varepsilon\to 0}\frac{\mu(B_\varepsilon(z))}{\lambda(B_\varepsilon(0))}\propto \lim_{\varepsilon\to 0}\frac{\mu(B_\varepsilon(z))}{\mu(B_\varepsilon(0))},$$
where the final relation comes from the Lebesgue differentiation theorem. The last expression makes sense even on non-locally compact spaces where there might not be a uniform measure, motivating the following definition.
\begin{definition}\label{def:OM}
Let $(X,d)$ be a metric space. Let $\mu$ be a Borel probability measure on $X$. If the following limit exists
\begin{equation}
\lim_{\varepsilon\to 0}\frac{\mu(B_\varepsilon(z_1))}{\mu(B_\varepsilon(z_2))}=\exp\left(\operatorname{OM}_\mu(z_2)-\operatorname{OM}_\mu(z_1)\right),
\end{equation}
then $\operatorname{OM}_\mu(z)$ is called the Onsager-Machlup function for $\mu$.
\end{definition}
\begin{remark}
The Onsager-Machlup function in Definition \ref{def:OM} can be thought of as the negative log ``density" of $\mu$. That is,
\[``\frac{d\mu}{d\lambda}(z)"=\exp\left(-\operatorname{OM}_\mu(z)\right)\]
for some possibly nonexistent uniform measure $\lambda$. In the case where $X=\mathbb R^d$ then the above equality is rigorous, by the Lebesgue differentiation theorem. Also, note that the Onsager-Machlup function is only defined up to an additive constant. We also define the ``mode" or the most likely element of $\mu$ as the minimizer of $\operatorname{OM}_\mu$.
\end{remark}
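In one dimension, Definition \ref{def:OM} can be checked numerically. For the standard Gaussian on $\mathbb R$, the small-ball ratio $\mu(B_\varepsilon(z_1))/\mu(B_\varepsilon(z_2))$ converges to $\exp\left(\operatorname{OM}(z_2)-\operatorname{OM}(z_1)\right)$ with $\operatorname{OM}(z)=z^2/2$ up to an additive constant, in agreement with Corollary \ref{corollary:OM-for-Gaussian} below. A minimal numerical sketch (the function names are ours):

```python
import math

def norm_cdf(x):
    # CDF of the standard normal distribution
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ball_ratio(z1, z2, eps):
    # mu(B_eps(z1)) / mu(B_eps(z2)) for mu = N(0, 1) on the real line
    num = norm_cdf(z1 + eps) - norm_cdf(z1 - eps)
    den = norm_cdf(z2 + eps) - norm_cdf(z2 - eps)
    return num / den

def om(z):
    # Onsager-Machlup function of N(0, 1), up to an additive constant
    return 0.5 * z * z
```

Here $\mathbb R^1$ is locally compact, so the limit also recovers the ratio of Lebesgue densities, as in the discussion preceding Definition \ref{def:OM}.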
\begin{proposition}\label{prop:OM}
Let $(\mathcal B,\|\cdot\|)$ be a Banach space and let $\mu$ be a Borel probability measure on $\mathcal B$ that is positive on open sets. For $z\in \mathcal B$ define the translation map $T_z:\mathcal B\to \mathcal B$ by $T_z(x)=x+z$. Let $\mathcal P:=\{T_z^\ast \mu:T_z^\ast \mu \sim \mu \}$ denote the set of equivalent mean-shift measures with corresponding shifts $z\in H\subset \mathcal B$. Suppose that for each $\mu^z:=T_z^\ast \mu$, we have
$$\frac{d\mu^z}{d\mu}=\frac{1}{E_\mu [e^{f_z}]}e^{f_z}$$
for some continuous function $f_z:\mathcal B\to \mathbb R$. Additionally suppose that for each $z\in H$, for each $\varepsilon_0>0$ and for all $\varepsilon<\varepsilon_0$ there is some increasing continuous $\phi=\phi_{z,\varepsilon_0}:[0,\varepsilon_0]\to [0,\infty)$ with $\phi(0)=0$ so that $|f_z(u)|\leq \phi_{z,\varepsilon_0}(\varepsilon)$ on $B_{\varepsilon}(0)$. Then
\begin{equation}
\operatorname{OM}_\mu(z)=
\begin{cases}
-\log E_\mu[e^{f_z}]&\text{ if } z\in H\\
\infty &\text{ else }.
\end{cases}
\end{equation}
Additionally, we have for $z\in H$ that
\begin{equation}
D_{KL}(\mu^z||\mu)=E_{\mu^z}[f_z]-\operatorname{OM}_\mu(z),
\end{equation}
where $D_{KL}(\cdot||\cdot)$ is the KL-divergence or relative entropy.
\end{proposition}
\begin{proof}
Let $z\in H$ and consider the ratio
$$\frac{\mu(B_\varepsilon(z_1))}{\mu(B_\varepsilon(z_2))}=\frac{\int_{B_\varepsilon(z_1)} \mu(du)}{\int_{B_\varepsilon(z_2)} \mu(du)}.$$
Using the density $d\mu^z/d\mu$, we get that
$$\frac{\mu(B_\varepsilon(z_1))}{\mu(B_\varepsilon(z_2))}=\frac{E_\mu [e^{f_{z_1}}]}{E_\mu [e^{f_{z_2}}]}\frac{\int_{B_\varepsilon(0)} e^{f_{z_1}(u)}\mu(du)}{\int_{B_\varepsilon(0)} e^{f_{z_2}(u)}\mu(du)}.$$
By assumption we have that $|f_{z_i}(u)|\leq \phi_i(\varepsilon)$ on $B_\varepsilon(0)$ for $i=1,2$, with $\lim_{\varepsilon\to 0}\phi_i(\varepsilon)=0$. Thus for $\phi=\phi_1+\phi_2$
\begin{equation}
\frac{E_\mu [e^{f_{z_1}}]}{E_\mu [e^{f_{z_2}}]}e^{-\phi(\varepsilon)}\leq \frac{\mu(B_\varepsilon(z_1))}{\mu(B_\varepsilon(z_2))}\leq \frac{E_\mu [e^{f_{z_1}}]}{E_\mu [e^{f_{z_2}}]} e^{\phi(\varepsilon)}.
\end{equation}
Taking the limit $\varepsilon\to 0$ concludes the proof.
\end{proof}
The above result can be specialized to the case of a Gaussian measure. For more information on Gaussian measure theory see \cite{Bogachev, hairer2009introduction}.
\begin{corollary}\label{corollary:OM-for-Gaussian}
Let $(\mathcal B, \|\cdot\|)$ be a Banach space. Let $\mu$ be a centered Gaussian measure on $\mathcal B$. Then the Onsager-Machlup function for $\mu$ is
\begin{equation}
\operatorname{OM}_\mu(z)=
\begin{cases}
\frac{1}{2}\|z\|_\mu^2&\text{ if } z\in \mathcal H_\mu\\
\infty&\text{ else }.
\end{cases}
\end{equation}
Additionally, we have that
\begin{equation}
D_{KL}(\mu^z||\mu)=\frac{1}{2}\|z\|_\mu^2.
\end{equation}
\end{corollary}
\begin{proof}
Use the Cameron-Martin theorem (\cite{hairer2009introduction}, Theorem 3.41), along with the fact that for a Gaussian measure, the $f_z$ in Proposition \ref{prop:OM} satisfy $f_z\sim \mathcal N(0,\|z\|_\mu^2)$, and Proposition \ref{prop:OM}.
\end{proof}
\begin{proposition}\label{prop:tilt-for-OM}
Let $\mu_0$ be a Borel measure on Banach space $\mathcal B$. Suppose that $\mu_0$ has an associated Onsager-Machlup function $\operatorname{OM}_{\mu_0}:\mathcal B\to [-\infty,\infty]$. Consider the measure $\mu$ with density
\begin{equation}
\frac{d\mu}{d\mu_0}=\frac{1}{E_{\mu_0}[e^{-\Phi}]}\exp(-\Phi).
\end{equation}
Suppose that for each $\varepsilon_0>0$, for each $x\in \mathcal B$ and for all $\varepsilon<\varepsilon_0$ there is some continuous increasing $\phi_{x,\varepsilon_0}:[0,\varepsilon_0]\to [0,\infty)$ with $\phi(0)=0$ so that $|\Phi(u)-\Phi(x)|\leq \phi(\varepsilon)$ on $B_\varepsilon(x)$. Then $\mu$ has an associated Onsager-Machlup function
\begin{equation}
\operatorname{OM}_\Phi(z)=\Phi(z)+\operatorname{OM}_{\mu_0}(z).
\end{equation}
\end{proposition}
\begin{proof}
We consider the ratio
\begin{equation}
\frac{\mu(B_{\varepsilon}(z_1))}{\mu(B_{\varepsilon}(z_2))}=\frac{\int_{B_\varepsilon(z_1)}\mu(du)}{\int_{B_\varepsilon(z_2)}\mu(du)}.
\end{equation}
Using the density, adding and subtracting $\Phi(z_i)$ for $i=1,2$ in both integrals yields that
\begin{align*}
\frac{\mu(B_{\varepsilon}(z_1))}{\mu(B_{\varepsilon}(z_2))}&=\frac{\int_{B_\varepsilon(z_1)}\exp(-\Phi(u))\mu_0(du)}{\int_{B_\varepsilon(z_2)}\exp(-\Phi(u))\mu_0(du)}\\
&=\frac{\int_{B_\varepsilon(z_1)}\exp(-\Phi(u)+\Phi(z_1)-\Phi(z_1))\mu_0(du)}{\int_{B_\varepsilon(z_2)}\exp(-\Phi(u)+\Phi(z_2)-\Phi(z_2))\mu_0(du)}\\
&=\exp\left(\Phi(z_2)-\Phi(z_1)\right)\frac{\int_{B_\varepsilon(z_1)}\exp(-\Phi(u)+\Phi(z_1))\mu_0(du)}{\int_{B_\varepsilon(z_2)}\exp(-\Phi(u)+\Phi(z_2))\mu_0(du)}.
\end{align*}
By assumption, there are some $\phi_i$ on $B_\varepsilon(z_i)$ so that
\begin{align*}
|\Phi(z_i)-\Phi(u)|\leq \phi_i( \varepsilon)
\end{align*}
for $i=1,2$. Therefore, for $\phi=\phi_1+\phi_2$ we have that
\begin{equation}
\frac{\mu_0(B_{\varepsilon}(z_1))}{\mu_0(B_{\varepsilon}(z_2))} e^{-\phi(\varepsilon)} \leq \frac{\int_{B_\varepsilon(z_1)}\exp(-\Phi(u)+\Phi(z_1))\mu_0(du)}{\int_{B_\varepsilon(z_2)}\exp(-\Phi(u)+\Phi(z_2))\mu_0(du)} \leq \frac{\mu_0(B_{\varepsilon}(z_1))}{\mu_0(B_{\varepsilon}(z_2))} e^{\phi(\varepsilon)}.
\end{equation}
Taking the limit $\varepsilon\to 0$ concludes.
\end{proof}
\begin{corollary}[\cite{Stuart-OM}, Theorem 3.2]
Let $\mu_0$ be a centered Gaussian in Proposition \ref{prop:tilt-for-OM}. Then the Onsager-Machlup function for $\mu$ exists and is equal to
\begin{equation}
\operatorname{OM}_{\mu}(z) =
\begin{cases}
\Phi(z)+\frac{1}{2}\|z\|_\mu^2&\text{ if } z\in \mathcal H_\mu\\
\infty&\text{ else }.
\end{cases}
\end{equation}
\end{corollary}
\begin{remark}
Note that even though the Definition \ref{def:OM} for Onsager-Machlup depends on the norm on the space, the expressions we have given are independent of the norm. The only place that the norm affects anything is on the regularity properties of $f_z$ or of $\Phi$.
\end{remark}
As the Onsager-Machlup function plays the role of the density in infinite dimensions, one might ask to what extent it determines the measure. As it turns out, in each equivalence class of measures the Onsager-Machlup function determines the measure as the below proposition will show.
\begin{proposition}
Let $\mu_1,\mu_2$ be two Borel probability measures on some Banach space $\mathcal{B}$. Suppose that $\operatorname{OM}_{\mu_1}(z)=\operatorname{OM}_{\mu_2}(z)$ for all equivalent shifts $z\in H\subset \mathcal B$ for some dense $H$ common to both $\mu_1$ and $\mu_2$. Assume that $\mu_1$ and $\mu_2$ are equivalent with the log of their Radon-Nikodym derivative satisfying the assumptions in Proposition \ref{prop:tilt-for-OM}, then $\mu_1=\mu_2$.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:tilt-for-OM}, we have that, up to an additive constant,
$\operatorname{OM}_{\mu_1}(z)=-\log\left(\frac{d\mu_1}{d\mu_2}(z)\right)+\operatorname{OM}_{\mu_2}(z)$ for all $z\in H$. Since $\operatorname{OM}_{\mu_1}=\operatorname{OM}_{\mu_2}$ on $H$, the continuous function $\log\frac{d\mu_1}{d\mu_2}$ is constant on the dense set $H$, hence constant on all of $\mathcal B$. As $\mu_1$ and $\mu_2$ are both probability measures, this constant must be $0$, so $\frac{d\mu_1}{d\mu_2}=1$ identically and $\mu_1=\mu_2$.
\end{proof}
\section{$\varepsilon$-dependent Tilting Lemma}
There are multiple ways of constructing new LDPs from existing ones. One principled approach is ``tilting" - which passes large deviations principles from a sequence of reference measures to a sequence of measures equivalent to the reference measures. We direct the reader to \cite{den-hollander} Theorem III.17, for the standard tilting lemma.
However, in many cases where one would like to apply the tilting lemma, the log density depends on $\varepsilon$ in a more complicated way than the standard tilting lemma. This is apparent in the case of Freidlin-Wentzell large deviations for stochastic differential equations as we will see shortly. Therefore, we need a generalized version of the tilting lemma which we provide.
\begin{lemma}[$\varepsilon$-dependent Tilting Lemma]\label{lemma:tilt-general}
Let $\mu_0^\varepsilon$ be a collection of exponentially tight Borel measures on Banach space $\mathcal B$ satisfying a LDP with good rate function $I_0:\mathcal B\to [0,\infty]$.
Consider the functionals $F^\varepsilon(y):\mathcal B\to \mathbb R$ and suppose that they satisfy the following expansion
\begin{equation}
F^\varepsilon(y)=F_0(y)+\varepsilon F_1(y)+\frac{\varepsilon^2}{2}F_2(y)+...+\varepsilon^n R_n(\varepsilon, y),
\end{equation}
for some functionals $F_i:\mathcal B\to \mathbb R$ with $\lim_{\varepsilon\to 0} R_n(\varepsilon, y)=0$. Suppose that $F_0$ is continuous. Furthermore, assume that the functionals $F_i$ and $R_n$ satisfy the following moment condition
\begin{equation}
\limsup_{\varepsilon\to 0}\varepsilon^2 \log E_{\mu_0^\varepsilon}\left[\exp\left(\gamma_i \frac{\max\{|F_i(y)|,|R_n(\varepsilon,y)|\}}{\varepsilon^2}\right)\right]<\infty,
\end{equation}
for some $\gamma_i>0$ and for all $0 \leq i\leq n$. Define the measures equivalent to $\mu_0^\varepsilon$ by
\begin{equation}
\mu^\varepsilon=\frac{1}{E_{\mu_0^\varepsilon}\left[\exp\left(-\frac{1}{\varepsilon^2}F^\varepsilon(y)\right)\right]}\exp\left(-\frac{1}{\varepsilon^2}F^\varepsilon(y)\right)\mu_0^\varepsilon.
\end{equation}
Assume that the measures $\mu^\varepsilon$ are exponentially tight. Then $\mu^\varepsilon$ satisfies an LDP with good rate function \begin{equation}
I(y):=I_0(y)+F_0(y) -\inf_{y\in \mathcal B}\{F_0(y)+I_0(y)\}.
\end{equation}
\end{lemma}
\begin{proof}
We will apply Bryc's lemma (see \cite[Theorem 4.4.2]{LDP-Dembo-Zeitouni}). To this end, consider a bounded and continuous function $\varphi:\mathcal B\to \mathbb R$. Then consider $$L:=\varepsilon^2 \log \left(E_{\mu^\varepsilon}\left[\exp\left(-\frac{\varphi(y)}{\varepsilon^2}\right)\right]\right).$$
Using the form of $F^\varepsilon$, we get that
\begin{align*}
L&= \varepsilon^2 \log \left(E_{\mu_0^\varepsilon}\left[\exp\left(-\frac{\varphi(y)+F^\varepsilon(y)}{\varepsilon^2}\right)\right]\right)-\varepsilon^2 \log \left(E_{\mu_0^\varepsilon}\left[\exp\left(-\frac{F^\varepsilon(y)}{\varepsilon^2}\right)\right]\right).
\end{align*}
In the limit $\varepsilon\to 0$, by H\"older's inequality, reverse H\"older's inequality, Varadhan's lemma and the assumptions on $F_i$ and $R_n$, we have that
\begin{equation}\label{eq:proof-LDP-1}
\lim_{\varepsilon\to 0} L = \lim_{\varepsilon\to 0}\left[ \varepsilon^2 \log \left(E_{\mu_0^\varepsilon}\left[\exp\left(-\frac{\varphi(y)+F_0(y)}{\varepsilon^2}\right)\right]\right)-\varepsilon^2 \log \left(E_{\mu_0^\varepsilon}\left[\exp\left(-\frac{F_0(y)}{\varepsilon^2}\right)\right]\right)\right].
\end{equation}By Varadhan's lemma \cite[Theorem 4.3.1]{LDP-Dembo-Zeitouni} and the assumptions on $F_0$, we have that
\begin{equation}
\lim_{\varepsilon\to 0} L=-\inf_{y\in \mathcal B}\{\varphi(y)+F_0(y)+I_0(y)\}+\inf_{y\in \mathcal B}\{F_0(y)+I_0(y)\}.
\end{equation}
By Bryc's lemma \cite[Theorem 4.4.2]{LDP-Dembo-Zeitouni} and the assumption of tightness of $\mu^\varepsilon$, we conclude.
\end{proof}
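As a sanity check of the conclusion, the predicted rate function can be verified numerically in one dimension. The sketch below is our own illustration (not part of the argument): take $\mu_0^\varepsilon=\mathcal N(0,\varepsilon^2)$, so that $I_0(x)=x^2/2$, and tilt by $F^\varepsilon=F_0$ with $F_0(x)=x^4$; the lemma then predicts $\varepsilon^2\log\mu^\varepsilon([1,\infty))\to -\left(\inf_{x\ge 1}(x^2/2+x^4)-\inf_{\mathbb R}(x^2/2+x^4)\right)=-3/2$.

```python
import math

def trapz(f, a, b, n):
    # composite trapezoid rule on [a, b] with n subintervals
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def tilted_tail(eps):
    # mu^eps has density proportional to exp(-(x^2/2 + x^4)/eps^2) on the line
    g = lambda x: math.exp(-(0.5 * x * x + x ** 4) / eps ** 2)
    num = trapz(g, 1.0, 1.2, 20000)   # mass of [1, oo); the tail past 1.2 is negligible
    den = trapz(g, -1.0, 1.0, 20000)  # normalizing constant; mass outside [-1, 1] is negligible
    return eps ** 2 * math.log(num / den)

# predicted limit: -(inf over [1, oo) minus inf over R) of x^2/2 + x^4, i.e. -1.5
print(tilted_tail(0.05))  # roughly -1.51
```

The residual discrepancy at fixed $\varepsilon$ comes from the subexponential (Laplace-method) prefactors, which vanish after multiplication by $\varepsilon^2$ in the limit.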
\section{Proof of Main Theorem}
We begin this section with a motivating example for small noise large deviations for Gaussian measures in finite dimensions.
\begin{motivating-example}
Consider a family of real valued normally distributed random variables $X^\varepsilon$ where $X^\varepsilon\sim \mathcal N(0,\varepsilon^2)$. We are interested in the decay of the probability
\begin{equation*}
P( X^\varepsilon\in A)=\int_A \frac{1}{\sqrt{2\pi \varepsilon^2}}e^{-x^2/2\varepsilon^2} dx,
\end{equation*}
for Borel $A\subset \mathbb R$. Thankfully the standard Laplace principle on $\mathbb R$ yields the appropriate scaling and gives the large deviations principle
\begin{equation*}
\lim_{\varepsilon\to 0^+}\varepsilon^2 \log P( X^\varepsilon\in A)=-\inf_{x\in A} \frac{x^2}{2}.
\end{equation*}
In this case, the Onsager-Machlup function for the law of the random variable $X^\varepsilon$ is just the exponent: $\operatorname{OM}_{\mu^\varepsilon}(x)=\frac{x^2}{2\varepsilon^2}$, and we have that $\varepsilon^2 \operatorname{OM}_{\mu^\varepsilon}(x)=\frac{x^2}{2}=\operatorname{FW}(x).$
\end{motivating-example}
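The convergence in the motivating example is easy to observe numerically. The following sketch (our own illustration) evaluates $\varepsilon^2\log P(X^\varepsilon\geq 1)$ via the complementary error function; the Laplace principle predicts the limit $-\inf_{x\geq 1}x^2/2=-1/2$.

```python
import math

def eps2_log_tail(eps):
    # P(X^eps >= 1) for X^eps ~ N(0, eps^2), written with erfc for numerical stability
    p = 0.5 * math.erfc(1.0 / (eps * math.sqrt(2.0)))
    return eps ** 2 * math.log(p)

for eps in (0.2, 0.1, 0.05):
    print(eps, eps2_log_tail(eps))  # approaches -1/2 as eps decreases
```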
Small noise large deviations for arbitrary Gaussian measures on a Banach space are also well known. In \cite{Bogachev}, the author proves that the Freidlin-Wentzell rate function for a general Gaussian measure $\mu$ with Cameron-Martin space $\mathcal H_{\mu}$ is $\frac{1}2\|\cdot\|_\mu^2$. In particular, the small noise rate function for $\varepsilon B(t)$ is
\begin{equation}
\operatorname{FW}(z)=
\begin{cases}
\frac{1}2\int_0^T (z'(t))^2 dt&\text{ for }z\in \mathcal W_0^{1,2}\\
\infty &\text{ else }.
\end{cases}
\end{equation}
The first instance of small noise large deviations for infinite-dimensional non-Gaussian measures came in \cite{Freidlin-Wentzell}, where the authors studied small noise large deviations for the solution to the SDE
\begin{equation}
dX^\varepsilon(t)=b(X^\varepsilon(t))dt+\varepsilon dB(t),
\end{equation}
where $b \in C^1$.
In \cite{Freidlin-Wentzell}, they show that the law of $X^\varepsilon$ satisfies an LDP with rate function
\begin{equation}
\operatorname{FW}(z)=
\begin{cases}
\frac{1}2\int_0^T (b(z(t))-z'(t))^2 dt&\text{ for }z\in \mathcal W_0^{1,2}\\
\infty &\text{ else }.
\end{cases}
\end{equation}
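The infimum of this rate function over a constraint set can also be computed directly. The sketch below is our own illustration, with the arbitrary choices $b(x)=-x$, $T=1$ and endpoint constraint $z(1)=1$: it minimizes the discretized action $\frac12\int_0^T(z'(t)+z(t))^2\,dt$ over paths with $z(0)=0$, $z(T)=1$ by solving the tridiagonal discrete Euler-Lagrange equations, and compares with the exact minimum $(e^{2T}-1)/(e^{T}-e^{-T})^{2}$, obtained from the Euler-Lagrange equation $\ddot z=z$.

```python
import math

def fw_action_min(T=1.0, a=1.0, n=200):
    """Minimize the discretized action (1/2) * int (z' + z)^2 dt with z(0)=0, z(T)=a."""
    h = T / n
    alpha, beta = 1.0 / h + 0.5, 0.5 - 1.0 / h   # midpoint rule: c_i = alpha*z[i+1] + beta*z[i]
    d = alpha * alpha + beta * beta              # diagonal of the stationarity system
    o = alpha * beta                             # off-diagonal
    # stationarity at interior nodes: o*z[j-1] + d*z[j] + o*z[j+1] = 0, z[0]=0, z[n]=a
    rhs = [0.0] * (n - 1)
    rhs[-1] = -o * a
    # Thomas algorithm (forward sweep + back substitution) for the n-1 unknowns z[1..n-1]
    cp = [0.0] * (n - 1)
    dp = [0.0] * (n - 1)
    cp[0], dp[0] = o / d, rhs[0] / d
    for j in range(1, n - 1):
        m = d - o * cp[j - 1]
        cp[j] = o / m
        dp[j] = (rhs[j] - o * dp[j - 1]) / m
    z = [0.0] * (n + 1)
    z[n] = a
    z[n - 1] = dp[-1]
    for j in range(n - 2, 0, -1):
        z[j] = dp[j - 1] - cp[j - 1] * z[j + 1]
    # evaluate the discrete action at the minimizer
    return 0.5 * h * sum((alpha * z[i + 1] + beta * z[i]) ** 2 for i in range(n))

exact = (math.e ** 2 - 1.0) / (math.e - 1.0 / math.e) ** 2
print(fw_action_min(), exact)  # both approximately 1.1565
```

For a general drift $b$ the problem is no longer quadratic and the stationarity equations must be re-derived, but the same discretize-then-minimize strategy applies.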
We offer an alternative, motivating proof below and discuss $\Gamma$-convergence; for more information on $\Gamma$-convergence see \cite{Gamma-book}.
\begin{proof}
By Girsanov, the law of $X^\varepsilon$, $\mu^\varepsilon$, has density with respect to the law of $\varepsilon B(t)$, $\mu_0^\varepsilon$ given by
\begin{equation}\label{eq:density-for-sde}
\frac{d\mu^\varepsilon}{d\mu_0^\varepsilon}=\exp\left(\frac{1}{\varepsilon^2}\left(\int_0^T b(B(t)) dB(t)-\frac{1}2\int_0^T b^2(B(t))dt\right)\right).
\end{equation}
At first glance, it might appear from equation \eqref{eq:density-for-sde} that we do not need the full $\varepsilon$-dependent Lemma \ref{lemma:tilt-general}. However, we note that the It\^o integral is not defined pathwise. On the other hand, recall that the It\^o integral is $\mu_0^\varepsilon$-a.s. equal to a Stratonovich integral, which is defined pathwise. Applying It\^o's lemma under $\mu_0^\varepsilon$ gives that
\begin{equation}
\frac{d\mu^\varepsilon}{d\mu_0^\varepsilon}=\exp\left(\frac{1}{\varepsilon^2}\left(\int_0^T b(B(t)) \circ dB(t)-\frac{\varepsilon^2}2\int_0^T b'(B(t))dt-\frac{1}2\int_0^T b^2(B(t))dt\right)\right),
\end{equation}
where $\int_0^T b(B(t)) \circ dB(t)$ represents the Stratonovich integral.
Applying the $\varepsilon$-dependent tilting Lemma \ref{lemma:tilt-general} gives that the rate function for $\mu^\varepsilon$ is
\begin{equation}
\operatorname{FW}(z)=
\begin{cases}
-\left(\int_0^T b(z(t)) dz(t)-\frac{1}2\int_0^T b^2(z(t))dt\right)+\frac12 \int_0^T (z'(t))^2dt&\text{ if } z\in \mathcal W_0^{1,2}\\
\infty &\text{ else }.
\end{cases}
\end{equation}
Applying Proposition \ref{prop:tilt-for-OM} gives that the Onsager-Machlup function for $\mu^\varepsilon$ is
\begin{align*}
\operatorname{OM}_{\mu^\varepsilon}(z)&= -\frac{1}{\varepsilon^2}\left(\int_0^T b(z(t)) dz(t)-\frac{\varepsilon^2}2\int_0^T b'(z(t))dt-\frac{1}2\int_0^T b^2(z(t))dt\right)\\
&~+\frac{1}{2\varepsilon^2} \int_0^T (z'(t))^2dt,
\end{align*}
for $z\in \mathcal W_0^{1,2}$ and infinite otherwise. It is not hard to see that $\varepsilon^2 \operatorname{OM}_{\mu^\varepsilon}$ converges to $\operatorname{FW}$ both as a pointwise and as a $\Gamma$-limit.
\end{proof}
The following proposition shows that this argument can be significantly generalized.
\begin{proposition}\label{prop:FW-first}
Let $\mu_0$ be a centered Gaussian measure on a Banach space $\mathcal B$ with Cameron-Martin norm $\|\cdot \|_{\mu_0}$ (see \cite{hairer2009introduction}, Section 3.2). Let $\mu_0^\varepsilon$ denote the measure defined by $\mu_0^\varepsilon(A)=\mu_0(\varepsilon^{-1} A)$ for Borel $A\subset \mathcal B$. Let $F^\varepsilon:\mathcal B \to \mathbb R$ be a functional and suppose that $F^\varepsilon(y)=F_0(y)+\varepsilon F_1(y)+\dots+\varepsilon^n R_n(\varepsilon,y)$, $\mu_0^\varepsilon$-a.s., where the $F_i$ and $R_n$ satisfy the assumptions of Lemma \ref{lemma:tilt-general}, and suppose that the $F_i$ are locally bounded. Define the measures $\mu^\varepsilon$ with densities
\begin{equation}
\frac{d\mu^\varepsilon}{d\mu_0^\varepsilon}=\frac{1}{E_{\mu_0^\varepsilon}[e^{-\frac{1}{\varepsilon^2}F^\varepsilon(y)}]}\exp\left(-\frac{1}{\varepsilon^2}F^\varepsilon(y)\right).
\end{equation}
Then the Freidlin-Wentzell rate function for $\mu^\varepsilon$ exists and is
\begin{equation}
\operatorname{FW}(z)=F_0(z)+\frac{1}{2}\|z\|_{\mu_0}^2-\inf_{z\in \mathcal H_{\mu_0}}(F_0(z)+\frac{1}{2}\|z\|_{\mu_0}^2).
\end{equation}
Furthermore, denote by $\operatorname{OM}_\varepsilon(z)$ the Onsager-Machlup function for $\mu^\varepsilon$. Then the limit
\begin{equation}
\lim_{\varepsilon\to 0^+} \varepsilon^2 \operatorname{OM}_\varepsilon(z)=\operatorname{FW}(z).
\end{equation}
holds, both as a pointwise and as a $\Gamma$-limit.
\end{proposition}
\begin{proof}
By Lemma \ref{lemma:tilt-general}, we have that the Freidlin-Wentzell rate function for $\mu^\varepsilon$ exists and is $ \operatorname{FW}(z)=F_0(z)+\frac{1}{2}\|z\|_{\mu_0}^2-\inf_{z\in \mathcal H_{\mu_0}}(F_0(z)+\frac{1}{2}\|z\|_{\mu_0}^2).$ Then we just need to check the pointwise and $\Gamma$-convergence. Without loss of generality, we may assume that $\inf_{z\in \mathcal H_{\mu_0}}(F_0(z)+\frac{1}{2}\|z\|_{\mu_0}^2)=0$. This is because the Onsager-Machlup function is only defined up to an additive constant and we may add $-\frac{1}{\varepsilon^2}\inf_{z\in \mathcal H_{\mu_0}}(F_0(z)+\frac{1}{2}\|z\|_{\mu_0}^2)$ to $\operatorname{OM}_\varepsilon$. With this normalization, using
Proposition~\ref{prop:tilt-for-OM} we have that
$$\varepsilon^2 \operatorname{OM}_\varepsilon(z)=F_0(z)+\varepsilon F_1(z)+\dots+\varepsilon^n R_n(\varepsilon,z)+\frac{1}{2}\|z\|_{\mu_0}^2,$$
as the Onsager-Machlup function for $\mu_0^\varepsilon$ is $\frac{1}{2\varepsilon^2}\|z\|_{\mu_0}^2$.
Clearly the pointwise limit of this is $F_0(z)+\frac{1}{2}\|z\|_{\mu_0}^2$. Now we just have to check $\Gamma$-convergence. To this aim, note that $F_0(z)+\frac{1}{2}\|z\|_{\mu_0}^2$ is continuous and $\Gamma$-convergence is stable under continuous perturbations. Therefore we just need to show that
\begin{equation}
\Gamma\text{-}\lim_{\varepsilon\to 0} \varepsilon F_1(z)+\dots+\varepsilon^n R_n(\varepsilon,z)=0.
\end{equation}
Note that the $F_i$ are locally bounded, so every $x$ has a neighborhood $N_x$ on which they are bounded, and thus
\begin{equation}
\lim_{\varepsilon\to 0} \inf_{z\in N_x} \left(\varepsilon F_1(z)+\dots+\varepsilon^n R_n(\varepsilon,z)\right)=0.
\end{equation}
\end{proof}
By $\Gamma$-convergence, we have that every cluster point of the minimizers of $\varepsilon^2 \operatorname{OM}_{\mu^\varepsilon}$ is a minimizer of $\operatorname{FW}$ (see \cite{Gamma-book}, section 1.5). If additionally we know that the functionals $\varepsilon^2 \operatorname{OM}_{\mu^\varepsilon}$ are equicoercive (which is the case for SDEs with $C^1$ drift), then by the fundamental theorem of $\Gamma$-convergence (see \cite{Gamma-book}, section 1.5) we have the full version of Theorem \ref{theorem:main}.
As a consequence of Theorem \ref{theorem:main}, we can specialize to the case of the generalized Girsanov theorem given in e.g. \cite{NuaBook}, Theorem 4.1.2.
\begin{proposition}
Let $(\mathcal B, \mu_0)$ be a Gaussian Banach space with Cameron-Martin space $\mathcal H_{\mu_0}$ and Cameron-Martin norm $\|\cdot\|_{\mu_0}$. Define $\mu_0^\varepsilon$ as above and consider the white noise process $\{W(h):h\in \mathcal H_{\mu_0}\}$ associated to $\mu_0$. Suppose that $H:\mathcal B\to \mathcal H_{\mu_0}$ is a continuous function such that $W(H)$ is defined, and suppose that for all $\varepsilon>0$ we have
\begin{equation}
E_{\mu_0^\varepsilon}\left[\exp\left(\frac{1}{\varepsilon^2}\left(W(H)-\frac{1}{2}\|H\|_{\mu_0}^2\right)\right)\right]=1.
\end{equation}
Define the collection of measures $$\mu^\varepsilon=\exp\left(\frac{1}{\varepsilon^2}\left(W(H)-\frac{1}{2}\|H\|_{\mu_0}^2\right)\right)\mu_0^\varepsilon.$$
Then the Freidlin-Wentzell rate function for $\mu^\varepsilon$ exists and is equal to
\begin{equation}
\operatorname{FW}(z)=
\begin{cases}
\frac{1}{2}\|z-H(z)\|_{\mu_0}^2&\text{ if } z\in \mathcal H_{\mu_0}\\
\infty &\text{ else}.
\end{cases}
\end{equation}
Furthermore we have that $\lim_{\varepsilon\to 0} \varepsilon^2\operatorname{OM}_\varepsilon(z)=\operatorname{FW}(z)$ both pointwise and in sense of $\Gamma$ convergence.
\end{proposition}
\begin{proof}
Let $i:\mathcal H_{\mu_0}\to L^2([0,T],\mathbb R)$ be an isometric isomorphism of separable Hilbert spaces. Denote by $z_t^\ast=i^{-1} ( \chi_{[0,t]})$. Then $W(z_t^\ast)$ is a standard Brownian motion. Furthermore, one can verify that for all $h\in \mathcal H_{\mu_0}$ we have
$$W(\omega, h)=\int_0^T (ih)(t) dW(\omega, z_t^\ast).$$
Therefore we have that
$$W(\omega, H(\omega))=\int_0^T (iH(\omega))(t) dW(\omega, z_t^\ast).$$
Under the measure $\mu_0^\varepsilon$, we can change to Stratonovich integration to get that
$$\int_0^T (iH(\omega))(t) dW(\omega, z_t^\ast)=\int_0^T (iH(\omega))(t) \circ dW(\omega, z_t^\ast)-\frac{\varepsilon^2}{2}[(iH)(\omega),W(\omega,z_t^\ast)](T).$$
The Stratonovich integral is a continuous function of $\omega$ as long as $H$ is, and so is $\frac{1}{2}\|H\|_{\mu_0}^2$; therefore Proposition \ref{prop:FW-first} applies and
\begin{equation}
\operatorname{FW}(z)=
\begin{cases}-\int_0^T (iH(z))(t) \circ dW(z, z_t^\ast)+\frac{1}{2}\|H(z)\|_{\mu_0}^2+\frac{1}{2}\|z\|_{\mu_0}^2&\text{ if }z\in \mathcal H_{\mu_0}\\
\infty&\text{ else}.
\end{cases}
\end{equation}
One may note that for $z\in \mathcal H_{\mu_0}$ we have
\begin{align*}
\int_0^T (iH(z))(t) \circ dW(z, z_t^\ast)&=\langle (iH)(z),(iz)\rangle_{L^2}\\
&=\langle H(z), z\rangle_{\mu_0}.
\end{align*}
Therefore we arrive at
\begin{equation}
\operatorname{FW}(z)=
\begin{cases}
\frac{1}{2}\|z-H(z)\|_{\mu_0}^2&\text{ if } z\in \mathcal H_{\mu_0}\\
\infty &\text{ else}.
\end{cases}
\end{equation}
Finally, note that
\begin{equation}
\operatorname{OM}_\varepsilon(z)=
\begin{cases}
\frac{1}{2\varepsilon^2}\|z-H(z)\|_{\mu_0}^2+\frac{1}{2}[(iH)(z), W(z,z_t^\ast)](T)&\text{ if } z\in \mathcal H_{\mu_0}\\
\infty &\text{ else}.
\end{cases}
\end{equation}
\end{proof}
\begin{remark}
Note that the remainder term $\varepsilon^2 \operatorname{OM}_\varepsilon(z)-\operatorname{FW}(z)=\frac{\varepsilon^2}{2}[(iH)(z), W(z,z_t^\ast)](T)$ can often be seen as a test of whether $\mu^\varepsilon$ is Gaussian or not. For example, if $iH(z)=b(z(t))$ for some sufficiently regular $b:\mathbb R\to \mathbb R$, as is the case with SDEs, then $$\frac{\varepsilon^2}{2}[(iH)(z), W(z,z_t^\ast)](T)=\frac{\varepsilon^2}{2}\int_0^T b'(z(t))dt,$$ which is constant as a function of $z$ if and only if $b'$ is constant, that is, if and only if $W(H)-\frac{1}{2}\|H\|_{\mu_0}^2$ is a quadratic functional.
This is not always the case, as we can show. Letting $\mu_0$ be the law of a Brownian motion on the space of continuous functions, we can consider the density defined by
\begin{equation*}
\Psi(B)=\exp\left(\frac{1}{\varepsilon^2}\left(\int_0^T \phi(t) dB(t)-\frac{1}{2}\int_0^T \phi^2(t) dt\right)\right),
\end{equation*}
where $\phi$ is the adapted process defined by
\begin{equation*}
\phi(t)=
\begin{cases}
B(t)&\text{ if } t\in [0,T/2]\\
-B(T-t)&\text{ if } t\in [T/2,T].
\end{cases}
\end{equation*}
Then converting from It\^o to Stratonovich yields zero quadratic covariation with $B(t)$, so the It\^o and Stratonovich functionals agree, but the measures $\mu^\varepsilon:=\Psi(B)\mu_0^\varepsilon$ are not Gaussian.
\end{remark}
\subsection{Examples}
\textbf{Stochastic Differential Equations}
The principal examples of measures satisfying Theorem \ref{theorem:main} are the laws of solutions to stochastic differential equations. There is the classical theory of SDEs driven by Brownian motion, on which our proof is based, but our result also extends to stochastic differential equations driven by more general Gaussian processes (see e.g. \cite{Budhiraja-LDP-FBM}), path dependent SDEs (see e.g. \cite{Ma-path-dependent}), and solutions to stochastic PDEs driven by Gaussian fields (see e.g. \cite{LDP-SPDE}), among other SDEs.
\textbf{System of Random Algebraic Equations}
We conclude with an example demonstrating that the utility of our result extends beyond the situation of SDEs. Let $a_n$ be a square summable sequence of real numbers. Let $\xi_n$ be a sequence of i.i.d. standard normal random variables. Then by \cite{hairer2009introduction}, exercise 3.5, the law of $\mathbf g :=(a_1 \xi_1,a_2\xi_2,\dots)$, which we denote $\mu_0$, is a Gaussian measure on the Banach (Hilbert) space $\mathcal B$ of square summable sequences. The Cameron-Martin space of $\mu_0$, $\mathcal H_{\mu_0}$, is the collection of all sequences $\mathbf z= \{a_n^2 \phi_n\}$ for some square summable sequence $\boldsymbol \phi = \{\phi_n\}$. For each $\mathbf z \in \mathcal H_{\mu_0}$, the associated white noise random variable is $W(\mathbf z)=\langle\boldsymbol \phi, \mathbf g\rangle =\sum_{n=1}^\infty \phi_n a_n \xi_n$. The Onsager-Machlup function for $\mu_0$, by Corollary \ref{corollary:OM-for-Gaussian}, is $\operatorname{OM}_{\mu_0}(\mathbf z)=\frac{1}2\sum_{n=1}^\infty \phi_n^2 a_n^2=\frac{1}2\|\mathbf z\|_{\mu_0}^2$. Let $f_n$ be measurable functions such that, for all $\varepsilon>0$,
\begin{equation*}
E\left[e^{\frac{1}{2\varepsilon^2} \sum f_n^2(\varepsilon \xi_n)a_n^2}\right]<\infty.
\end{equation*}
Denote the law of $\mathbf g^\varepsilon=(a_1 \varepsilon \xi_1, a_2 \varepsilon \xi_2,...)$ by $\mu_0^\varepsilon$ and define the measures
\begin{equation*}
\mu^\varepsilon=\exp\left(\frac{1}{\varepsilon^2} \left(\sum_{n=1}^\infty f_n(\xi_n) a_n \xi_n-\frac{1}2\sum f_n^2(\xi_n)a_n^2\right)\right)\mu_0^\varepsilon.
\end{equation*}
Suppose that there exist solutions $x_n^\varepsilon$ to the random algebraic equations $x_n^\varepsilon=f_n(x_n^\varepsilon)+\varepsilon a_n \xi_n$. Then $\mu^\varepsilon$ is the law of the random sequence $\mathbf x^\varepsilon=\{x_n^\varepsilon\}$ and we have that $\mathbf x^\varepsilon$ satisfies an LDP with rate function $\operatorname{FW}_\mu(\mathbf z)=\frac{1}2\sum_{n=1}^\infty (\phi_n-f_n(\phi_n))^2 a_n^2$. Furthermore, cluster points of the most likely sequence of $\mathbf x^\varepsilon$ are minimizers of $\operatorname{FW}_\mu$.
\bibliographystyle{plain}
\section{Introduction}
Dark energy poses challenges to physics because we know so little
about it. While there are many competing models, none of them are
overwhelmingly convincing. This is of course why the field is so
exciting. We expect to learn a great deal by further study of the
dark energy. This note addresses a specific technical issue related to
modeling dark energy and forecasting the impact of future experiments.
As discussed in \cite{Linder:2006xb} it is standard practice to model
the dark energy as a perfect fluid. Its dynamical properties can then
be expressed in terms of the
dark energy equation of state parameter $w$ as a function
of cosmic scalefactor $a$. The general case of this
parameterization implies an infinite number of degrees of freedom (in
order to describe the continuous function $w(a)$), but in most
implementations a finite parameter ansatz is used. Any simple ansatz
will exclude possible forms for $w(a)$ and thus could distort the
subsequent analysis, so the relative advantages and risks associated
with particular choices of ansatz are a topic of ongoing discussion
and debate.
Currently, the ansatz that is probably most commonly used
is the linear expression
\begin{equation}
w(a) = w_0 + (1-a)w_a \label{eq:normal}
\end{equation}
which we will call ``normal form''. This is a special ($a_p = 1$) case of the linear expression
\begin{equation}
w(a) = w_p + (a_p-a)w_a\label{eq:pivot}
\end{equation}
where $a_p$ is the ``pivot scalefactor''. Equation \eqref{eq:pivot} is
usually used as in \cite{Hu:2003pt}, where $a_p$ is chosen to be the
value of the
scalefactor where $w(a)$ is most tightly constrained by a
particular data set \cite{Huterer:2000mj}. By construction, $w(a_p)
\equiv w_p$ in this
ansatz, which we will call ``pivot form''.
Attention has been recently drawn to the pivot ansatz by the
Dark Energy Task Force (DETF) \cite{Albrecht:2006}.
The DETF defines a figure of merit for a given data set based on the
area inside a constant probability contour in the two dimensional
$w_p$--$w_a$ plane.
In \cite{Linder:2006xb} Linder looked
at the effect of moving from a description based on normal form
to a description based on pivot form (\S III
of \cite{Linder:2006xb}).
Linder's paper suggests that the normal form
suffers less from bias than the pivot form. He
considers a non-linear generalization of \eqref{eq:pivot} where $a$ has an
exponent $b$ differing from unity, and demonstrates a bias using this
generalized pivot form in his figures 2 and 3.
This brief note focuses on the {\em linear} ($b=1$) form of the pivot ansatz.
This is the only form that is relevant to the DETF work. We show
why normal form and pivot form are mathematically identical and
interchangeable when it comes to defining ``area'' figures of merit
like the one used by the DETF (this fact has already been stated in
the Fisher matrix
approximation in \cite{Albrecht:2006}). Linder's discussion emphasizes the
bias he exhibits using his non-linear ansatz and singles out the
behavior of $w_p$. Linder's discussion favors
normal form over pivot form, stating that normal form is ``more robust''.
While it may seem that our conclusion is in conflict with
\cite{Linder:2006xb}, we do not believe there are any concrete technical
points of disagreement. For example, Linder's figures 2 and 3 show
that the bias disappears for the linear ($b=1$) case. Also, we agree
with Linder that the choice of $a_p$ is data-dependent. So the
differences between this comment and \cite{Linder:2006xb} are
apparently only
ones of emphasis. Our message is that
\cite{Linder:2006xb} does not demonstrate any weakness
of the DETF figure of merit relative to the equivalent (in fact equal)
one based on normal form. Our motivation is to make this message
transparent even in the non-Gaussian case.
\section{A mechanical analogue}
As the scalefactor plays the role of the evolution parameter, it is
useful to think of it as time. Identifying the equation of state
$w(a)$ as $x(t)$ we transform \eqref{eq:normal} and \eqref{eq:pivot}
into the equation of a particle traveling at a constant velocity:
\begin{equation}
x(t) = x_0 - t x_a
\end{equation}
It is well known that this mechanical system comes from an almost
trivial Hamiltonian
\begin{equation}
H = \frac{p^2}{2}
\end{equation}
where the ``mass'' has been scaled to unity. As this is a
time-independent evolution of the system we can apply Liouville's
theorem: the evolution of a region of phase space is volume
preserving. That is, time evolution preserves the volume in the
$x_0$--$x_a$ plane.
\section{Comments}
The difference between Eqn. \eqref{eq:normal} and Eqn. \eqref{eq:pivot} is that
they are looking at the same system at different ``times'' or scalefactors. As these equations have a Hamiltonian evolution (as is easily
seen by constructing a mechanical analogy) the phase space volume
cannot change. As a result, the area inside the error contours cannot
depend on whether you are using parameterization \eqref{eq:normal} or
\eqref{eq:pivot}, although the shape and orientation of the contours
in $w_0$--$w_a$ space can certainly change.
This result is interesting because we have not
made any assumption about the underlying distribution in the
$w_0$--$w_a$ plane. (The case of a Gaussian distribution is considered
explicitly in the Appendix.) Both
pivot and normal form consider the exact same linear family of functions
$w(a)$, and just label them by different linear combinations of their
parameters. This particular relabeling does not change area in phase
space, and does not change the likelihood of a specific function $w(a)$
given a specific data set.
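The point can be made concrete with a few lines of code (our own sketch, with a made-up covariance matrix): the pivot variables are the linear image $w_p = w_0 + (1-a_p)\,w_a$, $w_a\mapsto w_a$, whose Jacobian has unit determinant, so the determinant of the covariance matrix, and hence the area of any error ellipse, is unchanged.

```python
def to_pivot_cov(cov, a_p):
    """Map a 2x2 covariance (c00, c01, c11) in (w0, wa) to (wp, wa), wp = w0 + (1 - a_p)*wa."""
    j = 1.0 - a_p                      # off-diagonal entry of the Jacobian [[1, j], [0, 1]]
    c00, c01, c11 = cov
    # C' = J C J^T for the upper-triangular Jacobian, computed entrywise
    return (c00 + 2.0 * j * c01 + j * j * c11, c01 + j * c11, c11)

def det2(cov):
    c00, c01, c11 = cov
    return c00 * c11 - c01 * c01

cov = (0.04, -0.015, 0.25)             # made-up (w0, wa) covariance
piv = to_pivot_cov(cov, a_p=0.7)
print(det2(cov), det2(piv))            # equal: the area figure of merit is invariant
```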
More generally, it is interesting to ask what happens when the class
of models is generalized so that the coefficient of $a$ is no longer
unity. In this case, it is \emph{no longer true} that the area in the
$w_0$--$w_a$ plane is preserved. The main reason why the above
explanation fails is that the ``canonical momentum'' changes in time,
and while the ``$q$--$p$'' area \emph{is} preserved in time, the
$w_0$--$w_a$ area is not. However, this generalization is not relevant
to normal and pivot forms, and thus does not impact our conclusions.
So we emphasize again our main point: The area figures of merit
based on pivot and normal form are mathematically identical.
\section{Acknowledgements}
We thank Lloyd Knox and Eric Linder for helpful comments. This work
was supported in part by DOE grant DE-FG03-91ER40674.
\section{Introduction.}\label{section: introduction}
In \cite{ks_theoretical} one can find a theoretical framework for the
computation and rigorous computer assisted verification of invariant objects
(fixed points, travelling waves, periodic orbits, attached invariant manifolds)
of semilinear parabolic equations of the form
\[
\partial_t u +Lu+N(u) = 0,
\]
where $L$ is a linear operator and $N$ is nonlinear. The two operators $L$ and
$N$ are possibly unbounded but satisfy that $L^{-1}N$ is continuous. The
methodology of \cite{ks_theoretical} is based on writing down an invariance
equation for these objects in suitable Banach spaces. One remarkable aspect of
this methodology is that if one applies a posteriori constructive methods one
can obtain computer assisted proofs and validity theorems.
In this paper we apply this methodology for the numerical computation and
a posteriori rigorous verification of the existence of periodic orbits in a
concrete example: the Kuramoto-Sivashinsky equation. This equation is the
parabolic semilinear partial differential equation
\begin{equation}\label{eq: ks equation}
\partial_t u + \left(\nu \partial_x^4+\partial_x^2\right)
u+\frac12 \partial_x\left(u^2\right) = 0,
\end{equation}
where $\nu > 0$ and $u\colon\mathbb{R}\times\mathbb{T}\rightarrow \mathbb{R}$.
($\mathbb{T} := \mathbb{R}/(2\pi \mathbb{Z}) $). We restrict our study to the
space of periodic odd functions, $u(t, x) = -u(t, -x)$,
\begin{equation*}
u(t, x) = \sum_{k=1}^\infty a_k(t)\sin(k x).
\end{equation*}
The PDE \eqref{eq: ks equation} is used in the study of several physical
systems. For example, instabilities of dissipative trapped ion modes in plasmas
\cite{laqueyetaltri1975, Cohenetaltri1976}, instabilities in laminar flame
fronts \cite{Sivashinksy77} and phase dynamics in reaction-diffusion systems
\cite{Kuramoto76}.
The Kuramoto-Sivashinsky equation has been extensively studied both
theoretically and numerically \cite{Armbruster, Colletattracting,
Colletanalyticity, Ilyashenko, Nicolaenkoetaltri}. Its flow
is well-posed forward in time in Sobolev, $L^2$ and analytic spaces. In fact,
it is smoothing: For positive values of $t$ the solutions with $L^2$ initial
data are analytic in the space variable $x$. The phase portrait depends on the
value of the parameter $\nu$: The zero solution $ u(t, x) = 0 $ is a fixed
point with a finite dimensional unstable manifold. Its dimension is the number
of solutions of the integer inequality $k^2-\nu k^4 > 0$, $k > 0$. For $\nu >
1$ the zero solution is a global attractor of the system. For every $\nu > 0$,
the system has a global attractor. This attractor has finite dimension (it is
confined inside an inertial manifold), \cite{Jolly_Kevrekidis_Titi_90,
Foias_Nicolaenko_Sell_Teman_88, EdenFNT94, Temam97, Chueshov02, CFNT_book,
Robinson_book}. Finally, it has plenty of periodic orbits
\cite{LanCvitanovic2008, CvitanovicDavidchackSiminos2010}. For example, it is
known empirically that there are period doubling cascades
\cite{Papageorgiou_Smyrlis_90, Papageorgiou_Smyrlis_91} satisfying the same
universality properties as in \cite{Feigenbaum78,TresserC78}. See Section
\ref{section: num explor} for a numerical exploration of the phase portrait of
the Kuramoto-Sivashinsky equation \eqref{eq: ks equation}.
In the literature several ways have been proposed for computing periodic orbits
of the Kuramoto-Sivashinsky equation: If the periodic orbit is attracting, one
can use an ODE solver for computing the evolution of the system using Galerkin
projections. Accordingly, starting at an initial point in the basin of
attraction and integrating forward in time one gets close to the periodic
orbit. If the periodic orbit is unstable, another classical technique is to
compute them as fixed points of some Poincar\'e map of the system. Another
approach, \emph{the Descent method}, is presented in
\cite{LanChandreCvitanovic2006, LanCvitanovic2008}. This is a method that,
given an initial guess of the periodic orbit, evolves it under a variational
method minimizing the local errors of the initial guess.
In this paper we implement another method based on solving, using
\emph{Newton's method}, a functional equation that periodic orbits satisfy. The
unknowns are the frequency and the parameterization of the periodic orbit.
This methodology permits us to write down a posteriori theorems that, with the
help of rigorous computer assisted verifications, lead us to the rigorous
verification of these periodic orbits by estimating all
the sources of error (truncation, roundoff). In this paper, we
carry out these estimates, so that the results we present are
rigorous theorems on existence of periodic orbits.
The Newton method, of course, has the shortcoming that it depends on having a
close initial guess; the descent method in practice has a larger domain of
convergence.
On the other hand, the Newton
method produces solutions to machine epsilon precision $\varepsilon_M$, whereas
the descent method, being a variational method, cannot get beyond
$\sqrt\varepsilon_M$ and, moreover, slows down near the solution and
may have problems with stiffness. Other
variational algorithms (e.g. conjugate gradient, Powell \cite{Brent73}
or Sobolev gradients
\cite{Neuberger10}) could be faster and less
sensitive to stiffness even if
limited to $\sqrt\varepsilon_M$ precision. Of course, one can combine both
methods and obtain convergent methods up to machine epsilon: gradient-like
methods at the beginning, switching to the fast Newton's method for the end
game.
The goal of this paper is not only to obtain numerical computations
but also to estimate all the sources of error and to obtain computer
assisted proofs of the existence of the numerical orbits obtained
and some of their properties.
There has been other computer assisted proofs of invariant objects of the
Kuramoto-Sivashinky equation. In \cite{ArioliKoch2, Piotr1, Piotr2} the
authors prove the existence of stationary solutions and their bifurcation
diagrams, and in \cite{ArioliKoch1, Piotr3} they prove the existence of
periodic orbits. The proof is done there by combining rigorous propagation of
the (semi-) flow defined by the PDE and a fixed point theorem in a suitable
Poincar\'e section.
In this paper,
the flow property is not used: the existence of periodic
orbits is reduced to solving an equation for a smooth functional
defined in a Banach space. This methodology could be used to prove the
existence of periodic orbits in other types of PDEs.
See \cite{ks_theoretical} for a systematic study.
Remarkably, in \cite{CGL_Boussinesq}
the methodology has been extended to validate numerical
periodic solutions of
\begin{equation}\label{boussinesq}
\partial_{tt} u = \left(\nu \partial_x^4+\partial_x^2\right)
u+\frac12 \partial_{xx}\left(u^2\right) \quad \nu > 0
\end{equation}
with periodic boundary conditions. It is to
be noted that \eqref{boussinesq} does not define a flow, so
that the methods of finding periodic solutions based on propagating
cannot get started.
Rigorous a-posteriori theorems of existence of quasi-periodic solutions in \eqref{boussinesq}
are in
\cite{LlaveS16}.
An independent implementation of the methodology in \cite{ks_theoretical} for
the Kuramoto-Sivashinsky equation is in \cite{ks_jp_marcio}. The papers
\cite{ks_jp_marcio} and this one, even if they share
a common philosophy (explained in \cite{ks_theoretical}),
differ in several aspects: the spaces of functions considered and the
results used to control the errors of the numerical approximation. The paper \cite{ks_jp_marcio}
also considers branching of the continuations.
\paragraph{Organization of the paper}
In Section \ref{section: invariance equation} we present the invariance
equation for the periodic orbits. Then, in Section \ref{section: Newton scheme}
we develop the numerical scheme for the computation of these orbits. The
methodology for the validation of the periodic orbits is presented in
Sections \ref{section: a posteriori theorems} and \ref{section: implementation
theorem}. In Section \ref{section: a posteriori theorems} we present a theorem
that leads to the validation of the periodic orbits, and in Section
\ref{section: implementation theorem} we deduce a rigorous numerical scheme for
the verification of the existence of periodic orbits. Later, in Section
\ref{section: numerical examples}, several examples of the numerical and the
rigorous schemes are described. In Appendix \ref{section: appendix} we define
the functional spaces and the properties used during the computer assisted
proofs. In Appendix \ref{section: multiplication} we present a fast algorithm
due to \cite{Rump_matrices_0} for multiplying high dimensional interval
matrices. This algorithm is used for the application of the rigorous
numerical scheme.
\subsection{Non-rigorous exploration: Period-doubling cascades}
\label{section: num explor}
In this heuristic section,
we use the remarkable fact that the Kuramoto-Sivashinsky equation \eqref{eq: ks
equation} has period-doubling cascades as a source of periodic orbits
that we will later validate rigorously.
Computing nonrigorously attracting periodic orbits and period-doubling cascades
is easy: it just requires integrating forward in time a random (but
well-selected) initial condition until it gets close to the attracting orbit.
Let's give a brief description of the method. More details can be found in
\cite{Papageorgiou_Smyrlis_91, LanCvitanovic2008}.
Given an initial condition
\begin{equation*}
u(0, x) = \sum_{k=1}^\infty a_k(0)\sin(kx),
\end{equation*}
it is easy to see that its Fourier coefficients evolve via the (infinite
dimensional) system of
differential equations
\begin{equation}
\label{eq: infinite ode}
\dot{a}_k = \left(k^2-\nu k^4\right)a_k+\frac{k}{2}
\left(\sum_{l=1}^\infty a_{k+l}a_l-\frac12\sum_{l+m=k}a_la_m\right)
.
\end{equation}
After truncating the system \eqref{eq: infinite ode}, we get a finite
dimensional ODE. Since it is rather stiff, we should be careful with the
choice of ODE solver. Numerical tests show that Runge-Kutta 4-5 is enough for our
purposes. Hence, after fixing a value of the parameter $\nu$ and starting with
the initial point $u(0, x)=\sin(x)$, we integrate it forwards in time and,
after a transient time, obtain a good approximation of the periodic orbit. In
b) to d) in Figure \ref{figure: sections po cascade} we can see the $a_1-a_2$
coordinates of some periodic orbits for different values of the parameter
$\nu$. The period-doubling cascade can be visualized by plotting the local
minima in time of the $L^2-$energy
\begin{equation*}
\text{Energy}(t) = \sqrt{\sum_{k=1}^\infty a_k(t)^2},
\end{equation*}
along the periodic orbit, see a) in Figure \ref{figure: sections po
cascade}.
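The integration just described can be sketched in a few lines of Python. This is a nonrigorous illustration: the truncation order $N$, the value of $\nu$, the fixed step size and the short time span are hypothetical demonstration choices, and a fixed-step classical Runge-Kutta 4 replaces the adaptive Runge-Kutta 4-5 for simplicity (reaching the attractor requires a much longer transient than shown here):

```python
import numpy as np

def ks_rhs(t, a, nu):
    """Truncated system for the coefficients a[k-1] of sin(kx), k = 1..N."""
    N = a.size
    k = np.arange(1, N + 1)
    conv = np.convolve(a, a)               # conv[i] = sum over l+m = i+2 of a_l*a_m
    s2 = np.zeros(N)
    s2[1:] = conv[:N - 1]                  # sum_{l+m=k} a_l a_m  (zero for k = 1)
    # s1[k-1] = sum_{l=1}^{N-k} a_{k+l} a_l  (tail truncated at mode N)
    s1 = np.array([a[j:] @ a[:N - j] for j in range(1, N + 1)])
    return (k**2 - nu * k**4) * a + 0.5 * k * (s1 - 0.5 * s2)

def rk4_step(f, t, y, h, *args):
    """One classical Runge-Kutta 4 step."""
    k1 = f(t, y, *args)
    k2 = f(t + h / 2, y + h / 2 * k1, *args)
    k3 = f(t + h / 2, y + h / 2 * k2, *args)
    k4 = f(t + h, y + h * k3, *args)
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

N, nu = 16, 1.0 / 33.2701                  # hypothetical demonstration values
a = np.zeros(N)
a[0] = 1.0                                 # initial condition u(0,x) = sin(x)
nsteps, h = 2000, 1.0e-3                   # short run up to t = 2
energy = []
for i in range(nsteps):
    a = rk4_step(ks_rhs, i * h, a, h, nu)
    energy.append(np.sqrt(np.sum(a**2)))   # L^2-energy along the trajectory
energy = np.array(energy)
```

Plotting the local minima of \texttt{energy} for a sweep of values of $\nu$ produces a diagram of the type of Subfigure a) in Figure \ref{figure: sections po cascade}.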
\begin{figure}
\begin{center}
\resizebox{120mm}{!}{
\begin{tabular}{cc}
\resizebox{60mm}{!}{\includegraphics[angle=270]{./cascade_energy.png}}&
\resizebox{60mm}{!}
{\includegraphics[type=png,ext=.png,read=.png,angle=270]
{./orbit_3.327010e+01_20}} \\
a) Period doubling cascade &
b) {$1/\nu = 33.2701$} \\
\resizebox{60mm}{!}{\includegraphics[type=png,ext=.png,read=.png,angle=270]
{./orbit_3.333530e+01_20}} &
\resizebox{60mm}{!}{\includegraphics[type=png,ext=.png,read=.png,angle=270]
{./orbit_3.335690e+01_20}} \\
c) {$1/\nu = 33.3353$} &
d) {$1/\nu = 33.3569$} \\
\end{tabular}
} \caption{Figure a) shows a period-doubling bifurcation cascade. Figures b) to
d) show the projection on the $a_1-a_2$ coordinates of some periodic orbits of
the first three period-doublings.}\label{figure: sections po cascade}
\end{center}
\end{figure}
\begin{rem}
The period-doubling cascades described above have, up to the limit of numerical
precision, the same quantitative properties as the one-dimensional ones found
in \cite{Feigenbaum78,TresserC78}, even if the K-S equation, in principle, is an infinite
dimensional dynamical system. Nevertheless, since the system admits an inertial
manifold, it is plausible that the arguments of \cite{ColletEK81} apply.
\end{rem}
The papers \cite{LanChandreCvitanovic2006,
LanCvitanovic2008,CvitanovicDavidchackSiminos2010} present a very remarkable
explicit surface of section which allows one to reduce the dynamics
approximately to a one-dimensional map. Other sources of periodic orbits can be found as a byproduct
of computations of inertial manifolds \cite{Jolly_Kevrekidis_Titi_90,
GarciaNT98, NovoTW01, JollyRT00, DCCST2016}. In this paper we will not use
these methods, but the periodic orbits found by them could be validated using
the methods presented here.
\section{Derivation of the invariance equation for the
periodic orbits.}\label{section: invariance equation}
Here we derive a functional equation for the periodic orbits of the
Kuramoto-Sivashinsky equation. This functional equation
is well suited for formulating a
fixed point problem for a well-defined operator. Later on, with this equation,
we develop a numerical scheme for the computation of these orbits and an a
posteriori verification method.
Periodic orbits with period $T$ of the Kuramoto-Sivashinsky equation \eqref{eq:
ks equation} satisfy, under the time rescaling $\theta= \dfrac{2\pi t}{T}$,
the invariance equation
\begin{equation}\label{eq: invariance}
f \partial_\theta u + L
u+\frac12 \partial_x\left(u^2\right) = 0,
\end{equation}
where $L = \nu \partial_x^4+\partial_x^2$ and $f = \frac{2\pi}{T}$. A solution
of Equation \eqref{eq: invariance} is represented by a pair $(f, u)$, where $f$
is a real number and $u:\mathbb T^2\rightarrow \mathbb R$,
$u(\theta,x)$ is odd with respect to $x$.
Given an approximate solution $(f_0, u_0)$ of Equation \eqref{eq: invariance},
we look for a correction $(\sigma, \delta )$ of it. This correction satisfies
the equation
\begin{equation}\label{eq: unbounded equation}
f_0\partial_\theta \delta+L \delta+\partial_x\left(u_0\cdot \delta\right)+
\sigma\partial_\theta u_0 = -e-\frac12\partial_x(\delta^2)
-\sigma\partial_\theta\delta,
\end{equation}
where $e = f_0\partial_\theta u_0+L u_0 +\frac12\partial_x(u_0^2)$ is the error
of the approximation $(f_0, u_0)$.
Equation \eqref{eq: unbounded equation} has two problems:
\begin{enumerate}
\item Solutions of Equation \eqref{eq: invariance} are non-unique: If $(\sigma,
\delta(\theta, x))$ is a solution, then so is $(\sigma, \delta(\theta+a, x))$,
$\forall a\in\mathbb{R}.$
\item The linear part of Equation \eqref{eq: unbounded equation} is an
unbounded operator. This leads to numerical instabilities.
\end{enumerate}
To fix the non-uniqueness problem, we impose another equation so that
Problem \eqref{eq: unbounded equation} has a unique solution.
To motivate the choice of normalization, we observe what happens when
$u(\theta, x)$ is a solution of Equation \eqref{eq: invariance} satisfying
\begin{equation}
\label{eq:uniqueness1}
\int_{\mathbb{T}^2} u \cdot \partial_\theta u = C \text{ (constant)}.
\end{equation}
Then, a translation in the $\theta$ direction -- the source of
non-uniqueness -- changes the quantity \eqref{eq:uniqueness1} by
\begin{equation*}
\int_{\mathbb{T}^2} u(\theta+a, x)
\cdot \partial_\theta u(\theta, x)
\simeq\int_{\mathbb{T}^2} (u(\theta, x)+\partial_\theta u(\theta, x) a)
\cdot \partial_\theta u(\theta, x)
= C + a\|\partial_\theta u\|^2_{L^2(\mathbb{T}^2)}.
\end{equation*}
Thus,
\begin{equation*}
\int_{\mathbb{T}^2} u(\theta+a, x)
\cdot \partial_\theta u(\theta, x)=
C+a\|\partial_\theta u\|^2_{L^2(\mathbb{T}^2)}+O(a^2).
\end{equation*}
The above calculation can be interpreted geometrically as saying that the
surface in function space given by \eqref{eq:uniqueness1} is
transversal to the symmetries of the equation.
Therefore, we impose local uniqueness for Equation \eqref{eq: unbounded
equation} by requiring that the correction $\delta$ should be
\textit{perpendicular} to the approximate parameterization $u_0$. That is,
\begin{equation*}
\int_{\mathbb{T}^2} \delta \cdot \partial_\theta u_0 = 0.
\end{equation*}
As we observed before, the linear operator $f_0\partial_\theta+L$ is unbounded,
but we can transform Equation \eqref{eq: unbounded equation} into
a smooth equation by performing algebraic manipulations. Let
$c\in\mathbb{R}$ be such that $S_c = f_0\partial_\theta+L+c {\rm Id}$ is invertible.
Then, we have that $(\sigma, \delta)$ in Equation \eqref{eq: unbounded equation}
satisfies the equation
\begin{equation*}
A
\begin{pmatrix}
\sigma\\
\delta
\end{pmatrix}
=
\tilde e
+
\tilde N
\begin{pmatrix}
\sigma\\
\delta
\end{pmatrix}
,
\end{equation*}
where
\begin{equation}\label{eq: operator A}
A =
\begin{pmatrix}
0 & \int_{\mathbb{T}^2}\cdot \partial_\theta u_0\\
S_c^{-1} \partial_\theta u_0 & {\rm Id}-cS_c^{-1}+S_c^{-1}\partial_x(u_0\cdot)\\
\end{pmatrix}
,
\end{equation}
\begin{equation*}
\tilde e =
\begin{pmatrix}
0 \\
-S_c^{-1} e
\end{pmatrix}
,
\end{equation*}
and
\begin{equation*}
\tilde N
\begin{pmatrix}
\sigma\\
\delta
\end{pmatrix}
=
-S_c^{-1}\left(\frac12 \partial_x(\delta^2)+\sigma\partial_\theta
\delta\right)
.
\end{equation*}
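The operator $S_c$ and the derivatives are diagonal in the Fourier basis, so the norms of the diagonal building blocks of $A$ reduce to suprema of ratios of symbols. The following Python sketch approximates them nonrigorously on a finite grid of modes (the parameter values are hypothetical; a rigorous bound also requires the tail estimates of Appendix \ref{section: appendix}):

```python
import numpy as np

def symbol_Sc(k1, k2, f0, nu, c):
    """Fourier symbol of S_c = f0*d_theta + nu*d_x^4 + d_x^2 + c*Id
    on the mode exp(i*(k1*x + k2*theta))."""
    return 1j * f0 * k2 + nu * k1**4 - k1**2 + c

def diag_norm(sym_vals):
    """Norm of a diagonal operator on a weighted l^1 space: the weights
    cancel, leaving the supremum of the absolute value of the symbol."""
    return np.max(np.abs(sym_vals))

f0, nu = 1.0, 0.03                      # hypothetical frequency and parameter
c = 1.0 / nu                            # a choice making S_c invertible
K = 200                                 # finite grid of modes used for the sup
k1, k2 = np.meshgrid(np.arange(-K, K + 1, dtype=float),
                     np.arange(-K, K + 1, dtype=float))
S = symbol_Sc(k1, k2, f0, nu, c)
norm_Sc_inv_dtheta = diag_norm(k2 / S)  # approximates || S_c^{-1} d_theta ||
norm_Sc_inv_dx = diag_norm(k1 / S)      # approximates || S_c^{-1} d_x ||
```

With $c = 1/\nu$ the real part $\nu k_1^4 - k_1^2 + c$ of the symbol is bounded below by $3/(4\nu) > 0$, so the symbol never vanishes and the division is safe.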
\subsection{Algorithm for computing periodic orbits.}\label{section: Newton
scheme}
From the discussion in Section \ref{section: invariance equation}, we have that
our solution $z=(\sigma, \delta)$ satisfies a functional equation of the form
\begin{equation}\label{eq: smooth equation}
A z = \tilde e+\tilde N(z,z),
\end{equation}
where $A$, given by Equation \eqref{eq: operator A}, is a bounded linear
operator and $\tilde N$ is the nonlinear part ($\tilde N(0)=D\tilde N(0)=0$).
The Newton scheme is based on solving Equation \eqref{eq: smooth equation}
numerically. Given an initial guess $(f_0, u_0)$, we update it by finding the
correction $z = (\sigma, \delta)$ that is a solution of the linear equation
\begin{equation}\label{eq: smooth linear system}
A z = \tilde e,
\end{equation}
and obtain $(f_1, u_1)=(f_0+\sigma, u_0+\delta)$. This process is repeated
several times until a stopping criterion, $\|\tilde e(f_k, u_k)\| < tol$, is
fulfilled. As in all Newton methods, if $(f_k, u_k)$ is an approximate
solution, then at each step the error decreases quadratically, $\|\tilde
e(f_{k+1}, u_{k+1})\|\approx\|\tilde e(f_k, u_k)\|^2$. Since the problem is
infinite dimensional, truncation to the most significant Fourier modes is
required. This transforms the problem into a finite dimensional one.
Summarizing, we obtain the following algorithm:
\begin{algorithm}{\ }
\begin{itemize}
\item[\textbf{Input}]
\begin{itemize}
\item An approximate solution $(f_0, u_0)$ of the
invariance equation \eqref{eq: invariance}.
\item The accuracy $\text{\bf tol}$ for the computation
of the solution.
This gives an upper bound of the accuracy of the outputs of the algorithm.
\end{itemize}
\item[\textbf{Output}]
An approximate solution $(f_k, u_k)$ of the invariance equation
with tolerance less than $\text{\bf tol}$.
\item[0.a)] Fix a norm
on the space of periodic functions on the torus (see Appendix \ref{section:
appendix} for examples of such norms).
\item[0.b)]
Set $k=0$.
\item[1)] Compute the error
$\tilde e_k = -S_c^{-1}\left(f_k\partial_\theta u_k+L u_k
+\frac12\partial_x(u_k^2)\right)$.
\item[2)] If $\|\tilde e_k\| < \text{\bf tol}$ stop the algorithm.
The pair $(f_k, u_k)$ is the approximation of the frequency and the periodic
orbit with the desired accuracy.
\item[3)] Solve the (finite dimensional truncated) linear
system \eqref{eq: smooth linear system} by means
of a linear solver, obtaining the solution pair $(\sigma, \delta)$.
\item[4)] Set $u_{k+1}=u_k+\delta$, $f_{k+1}=f_k+\sigma$, and
update $k$ with $k+1$.
\item[5)] If $\|(\sigma, \delta) \| < \text{\bf tol}$, stop the algorithm.
The pair $(f_{k+1}, u_{k+1})$ is
the approximation of the periodic orbit and its frequency with
the desired accuracy.
Otherwise, repeat the process starting from step 1).
\end{itemize}
\end{algorithm}
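The skeleton of the loop above can be sketched generically. The following Python fragment is an illustration only: assembling the truncated operator and the error term is problem-specific, so they are passed as user-supplied callbacks, and the scheme is demonstrated on a hypothetical two-dimensional quadratic map standing in for the truncated functional equation:

```python
import numpy as np

def newton_periodic(x0, residual, jacobian, tol=1e-12, max_iter=20):
    """Newton loop in the form of the algorithm above: at each step solve
    the (truncated) linear system DF(x) z = -F(x) and update x <- x + z.
    `residual` plays the role of the error and `jacobian` that of the
    truncated linear operator; both are user-supplied callbacks."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        e = residual(x)
        if np.linalg.norm(e) < tol:           # stopping criterion of Step 2)
            break
        z = np.linalg.solve(jacobian(x), -e)  # the linear solve of Step 3)
        x = x + z
        if np.linalg.norm(z) < tol:           # stopping criterion of Step 5)
            break
    return x

# Hypothetical 2-dimensional example illustrating the quadratic convergence.
F = lambda x: np.array([x[0]**2 - 2.0, x[0] * x[1] - 1.0])
DF = lambda x: np.array([[2.0 * x[0], 0.0], [x[1], x[0]]])
x_star = newton_periodic([1.0, 1.0], F, DF)
```

Here \texttt{x\_star} converges to the root $(\sqrt 2, 1/\sqrt 2)$ in a handful of iterations.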
\begin{rem}
As said before, all computations are performed by representing all Fourier
series as Fourier polynomials of order, say, $N$. However, we notice that in
Step 1), where the error $\tilde e_k$ is computed, the computation of $u^2$ is
required, hence, when we apply the functional to
a polynomial of degree $N$, we obtain a polynomial of
degree $2N$. We have observed in our numerical tests that a way to
obtain sharp estimates is to compute the functional with $2N$ coefficients.
By doing so the bounds obtained by the algorithm are
very sharp and suitable for the validation scheme presented in Section
\ref{section: implementation theorem}.
\end{rem}
\subsection{Computation of the stability of a periodic
orbit.}\label{subsection: computation stability}
Once a periodic orbit $(f, u)$ is computed, one often desires to compute its
stability (the dimension of the unstable manifold). One way of computing it is
counting the number of eigenvalues of the Floquet operator that are outside the
unit circle. That is, integrate the linear differential equation
\begin{equation*}
f \partial_\theta v = -Lv- \partial_x(u\cdot v)
\end{equation*}
with initial condition $v_0 = {\rm Id}$, up to time $1$, and compute the spectrum of
$v_1$. Then, check how many eigenvalues are outside the unit disk. Of course,
this should be done by truncating all computations in finite dimensions and
bounding the errors.
Another way is computing the spectrum of the unbounded (but closed) linear
operator
\begin{equation}\label{eq: stability operator}
f \partial_\theta +L+\partial_x(u\cdot )
.
\end{equation}
Given an eigenvalue $\lambda$ of the Floquet operator, $\log(\lambda)+i f\cdot
n$, $n\in\mathbb{Z}$, is an eigenvalue of the operator \eqref{eq: stability
operator}. Hence, the spectrum restricted to a strip of the form
$\Gamma_a=\{z\in\mathbb{C} : a\leq \text{Im}(z) < a+f\}$ is in one-to-one
correspondence with the spectrum of the Floquet operator. In particular,
computing the dimension of the unstable manifold is the same as computing the
number of eigenvalues of the operator \eqref{eq: stability operator} restricted
to the left half-plane, $\Gamma_a\cap \{z: \text{Re}(z) < 0\}$.
Even if the two methods are equivalent for the equations that define a
differentiable flow, we note that the method based on studying the
spectrum of \eqref{eq: stability operator} makes sense even in equations
that do not define a flow. Hence, this is the method that we will use.
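Given a finite set of numerically computed eigenvalues of a truncation of the operator \eqref{eq: stability operator}, the counting just described takes one line; the following Python fragment (a hypothetical, nonrigorous helper) encodes it:

```python
import numpy as np

def unstable_dimension(op_eigs, f, a=0.0):
    """Count eigenvalues of a truncation of the stability operator lying in
    the fundamental strip Gamma_a = {a <= Im z < a + f} with negative real
    part; with the convention of the text this equals the number of Floquet
    multipliers outside the unit disk."""
    op_eigs = np.asarray(op_eigs)
    in_strip = (a <= op_eigs.imag) & (op_eigs.imag < a + f)
    return int(np.count_nonzero(in_strip & (op_eigs.real < 0.0)))
```

For instance, with $f=1$ and $a=0$, the eigenvalues $-0.5+0.2i$ and $-2.0+0.9i$ lie in $\Gamma_0$ with negative real part and count as two unstable directions.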
\section{An a posteriori theorem for the rigorous verification
of the existence of periodic orbits.}\label{section: a posteriori theorems}
In this section we present an a posteriori result,
Theorem~\ref{thm: contraction 2} that, given an approximate
solution $(f, u)$ of Equation \eqref{eq: invariance} satisfying
some explicit quantitative assumptions, ensures the existence of
a true solution of
\eqref{eq: invariance} and estimates the
distance between this true solution and the
approximate one. Of course, the solutions of
\eqref{eq: invariance} give periodic solutions of the
evolution equation.
Theorem~\ref{thm: contraction 2} is a tailored version of Theorem 2.3
appearing in \cite{ks_theoretical}. For the sake of completeness, we will
state it here adapted to Equation \eqref{eq: smooth equation}.
Note that the theorem is basically an elementary contraction mapping
principle, but that we allow for the application of a preconditioner, which
makes it more applicable in practice.
\begin{thm}
\label{thm: contraction 2}
Consider the operator
\begin{equation}
\label{eq: fixed point}
F(z)=A z - \tilde e-N_{{\bar{u}}}(z),
\end{equation}
defined in $\overline{B_{{\bar{u}}}(\rho)} = \{ u : \|u-{\bar{u}}\| \le \rho
\}$, with $\rho > 0$, and where
$N_{{\bar{u}}}$ is the nonlinear part of the operator $F$ at the point ${\bar{u}}$, that is
\[
N_{{\bar{u}}}(z)=F({\bar{u}}+z)-F({\bar{u}})-DF({\bar{u}})z.
\]
Let $B$ be a linear operator such that $BDF({\bar{u}})$, $BF$
and $BN_{{\bar{u}}}$ are continuous operators. If, for some $b, K > 0$ we have:
\begin{enumerate}[label=(\alph*),ref=(\alph*)]
\item
\label{item: 1}
$\|I-BDF({\bar{u}})\| = \alpha < 1$.
\item
\label{item: 2}
$\|B\left(F({\bar{u}})+N_{{\bar{u}}}(z)\right)\|\leq b$ whenever $\|z\| \leq \rho$.
\item
\label{item: 3}
$\text{Lip}_{\|z\| \leq \rho} B N_{{\bar{u}}}(z) < K$.
\item
\label{item: 4}
$\frac{b}{1-\alpha} < \rho$.
\item
\label{item: 5}
$\frac{K}{1-\alpha} < 1$.
\end{enumerate}
Then there exists $\delta u$ such that ${\bar{u}}+\delta u$
is in $\overline{B_{{\bar{u}}}(\rho)}$ and
is a unique solution of Equation
\eqref{eq: fixed point},
with $\|\delta u\| \leq \frac{\|B F({\bar{u}})\|}{1-\alpha-K}$.
\end{thm}
We are now in a position to write down the theorem for the existence and local
uniqueness of periodic orbits and their period for the Kuramoto-Sivashinsky
equation. This theorem has been written for the special case of the family of
Banach spaces $X_M$, which depends on the parameters $r, s_1, s_2 \geq 0$.
$X_M$ is the Banach space of periodic functions
$u(\theta, x) = \sum_{(k_1, k_2)\in\mathbb{Z}^2} u_{k_1,k_2}
e^{i (k_1\cdot x+k_2\cdot \theta)}$ with finite norm
\begin{equation*}
\|u\|_{M} = \sum_{(k_1, k_2) \in\mathbb{Z}^2} M(k_1, k_2)|u_{k_1, k_2}|,
\end{equation*}
where
\[
M(k_1, k_2)=(1+|k_1|)^{s_1}(1+|k_2|)^{s_2} e^{r (|k_1|+|k_2|)}.
\]
When there is no confusion, we will denote the norm by $\|\cdot \|_{M}$. These
spaces have the property that all their elements are analytic functions for
$r\neq 0$. See Appendix \ref{section: appendix} for a more detailed
discussion.
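For trigonometric polynomials the norm $\|\cdot\|_M$ is a finite weighted sum of the moduli of the coefficients, so it is straightforward to evaluate; a Python sketch is (the fft-style ordering of the frequencies is an assumption of this illustration, not of the paper):

```python
import numpy as np

def weight_M(k1, k2, r, s1, s2):
    """The weight M(k1,k2) = (1+|k1|)^s1 (1+|k2|)^s2 exp(r(|k1|+|k2|))."""
    return (1 + np.abs(k1))**s1 * (1 + np.abs(k2))**s2 \
        * np.exp(r * (np.abs(k1) + np.abs(k2)))

def norm_M(u_hat, r, s1, s2):
    """Weighted l^1 norm of a coefficient array u_hat, where u_hat[i, j]
    is the coefficient of the mode (k1, k2) and the integer frequencies
    are recovered with numpy's fft ordering (an assumption of this
    sketch; any fixed indexing convention works the same way)."""
    n1, n2 = u_hat.shape
    k1 = np.fft.fftfreq(n1, d=1.0 / n1)      # integer-valued frequencies
    k2 = np.fft.fftfreq(n2, d=1.0 / n2)
    K1, K2 = np.meshgrid(k1, k2, indexing='ij')
    return float(np.sum(weight_M(K1, K2, r, s1, s2) * np.abs(u_hat)))
```

For $r = s_1 = s_2 = 0$ the norm reduces to the plain $\ell^1$ norm of the coefficients.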
\begin{thm}\label{thm: contraction 3}
Let $r, s_1, s_2 \geq 0$ define the Banach space $X_M$, and $(f, u)$ be an
approximate solution of Equation \eqref{eq: invariance}, with error $e$, and
consider Equation \eqref{eq: smooth equation} for the correction $(\sigma,
\delta)$. Let $B={\rm Id}+\hat B$ be a linear operator, and suppose that the
following conditions are satisfied:
\begin{enumerate}
\item[A)] $\|\hat B\hat A+\hat A+\hat B\|_{M} = \alpha < 1$,
\item[B)] $\|B\|_{M}\|\tilde e\|_{M} \leq e_1$,
\item[C)] $\|B\|_{M}
(\|S_c^{-1} \partial_\theta\|_{M}
+
\frac12\|S_c^{-1}\partial_x\|_{M}
)\leq e_2$,
\item[D)] $(1-\alpha)^2-4 e_1 e_2 > 0$.
\end{enumerate}
Then there exists a solution $z_*=(\sigma_*, \delta_*)$ of Equation \eqref{eq:
smooth equation} satisfying $\|z_*\|_{M} \leq E=\frac{e_1}{1-\alpha -
\rho_-}$, where $\rho_- = 1-\alpha-\sqrt{(1-\alpha)^2-4e_1e_2}$.
\end{thm}
\begin{proof}
Let $\rho > 0$ such that $1-\alpha-\sqrt{(1-\alpha)^2-4e_1e_2} < 2e_2\rho <
1-\alpha$. We need to verify all the conditions of Theorem \ref{thm:
contraction 2}. Notice that here $z = (\sigma, \delta)$ and the quadratic part is
$Q(z, z) = S_c^{-1}(\sigma \partial_\theta\delta+\frac12\partial_x(\delta^2))$.
Condition \ref{item: 1} of Theorem \ref{thm: contraction 2} is the same as
condition A) of the present theorem.
The constant $b$ in condition \ref{item: 2} is $e_1+e_2\rho^2$, because if
$\|(\sigma, \delta)\|_{M}\leq \rho$,
then
\begin{equation*}
\begin{split}
& \left\|B\left(\tilde e+S_c^{-1}(\sigma \partial_\theta\delta+
\frac12\partial_x(\delta^2))\right)\right\|_{M}
\leq
\|B\|_{M}\|\tilde e\|_{M}
+\|B S_c^{-1}(\sigma \partial_\theta\delta+\frac12\partial_x(\delta^2))\|_{M}
\\
&\phantom{AAAAA}\leq
e_1
+\|B S_c^{-1}(\sigma \partial_\theta\delta)\|_{M}
+\frac12\|B S_c^{-1}(\partial_x(\delta^2))\|_{M}
\\
&\phantom{AAAAA}\leq e_1+
\|B\|_{M}\left(
\|S_c^{-1}\partial_\theta\|_{M}
|\sigma| \|\delta\|_{M}
+
\frac12\|S_c^{-1}\partial_x\|_{M}
\|\delta\|^2_{M}
\right)
\\
&\phantom{AAAAA}\leq
e_1+e_2 \rho^2.
\end{split}
\end{equation*}
The constant $K$ in condition \ref{item: 3} is $2e_2\rho$ because if $\|z\|_{M},
\|\hat z\|_{M}\leq \rho$,
then
\begin{equation*}
\begin{split}
\|B\left(Q(z, z)-Q(\hat z, \hat z)\right)\|_{M}
\\
\leq \|B\|_{M}\|S_c^{-1}
\left(\sigma \partial_\theta \delta-\hat\sigma\partial_\theta\hat\delta+
\delta\partial_x\delta
-\hat\delta\partial_x\hat\delta\right)\|_{M}
\\
\phantom{AA}\leq \|B\|_{M}
\left(\|S_c^{-1}(\sigma \partial_\theta (\delta-\hat\delta)+
(\sigma-\hat\sigma)\partial_\theta\hat\delta)\|_{M}
+\|S_c^{-1}(\delta\partial_x(\delta-\hat\delta)+
(\delta-\hat\delta)\partial_x\hat\delta)\|_{M}\right)
\\
\phantom{AA}\leq \|B\|_{M}
\left(
\|S_c^{-1}\partial_\theta\|_{M}
+\frac12\|S_c^{-1}\partial_x\|_{M}\right)
2\rho\|z-\hat z\|_{M}
\\
\phantom{AA}\leq 2e_2\rho\|z-\hat z\|_{M}.
\end{split}
\end{equation*}
Conditions \ref{item: 4} and \ref{item: 5} of Theorem \ref{thm: contraction 2} are equivalent to
\begin{equation*}
\frac{2e_2\rho}{1-\alpha} < 1
\text{ \quad and \quad} \frac{e_1+e_2\rho^2}{1-\alpha} < \rho,
\end{equation*}
which are satisfied because
$1-\alpha-\sqrt{(1-\alpha)^2-4e_1e_2} < 2e_2\rho < 1-\alpha$.
Finally, the upper bound on the norm on the solution $\|z_*\|_{M}$ is obtained
by applying Theorem \ref{thm: contraction 2} with ${\rho =
\frac{1-\alpha-\sqrt{(1-\alpha)^2-4e_1e_2}}{2e_2}}$.
\end{proof}
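Given computed bounds $\alpha$, $e_1$ and $e_2$, the conclusion of the theorem is explicit. The following Python sketch (plain floating point, whereas the actual verification is carried out in interval arithmetic) evaluates the validation radius $\rho_-$ and the a posteriori bound $E$:

```python
import math

def validate_bounds(alpha, e1, e2):
    """Check conditions A) and D) of the theorem and, on success, return
    (rho_minus, E): the validation radius and the a posteriori bound on
    the norm of the true correction z_*."""
    if not (0.0 <= alpha < 1.0):
        return None                      # condition A) fails
    disc = (1.0 - alpha)**2 - 4.0 * e1 * e2
    if disc <= 0.0:
        return None                      # condition D) fails
    rho_minus = 1.0 - alpha - math.sqrt(disc)
    E = e1 / (1.0 - alpha - rho_minus)   # = e1 / sqrt(disc)
    return rho_minus, E
```

The function returns \texttt{None} when condition A) or D) fails, in which case the validation is inconclusive.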
\begin{rem} Notice that, using the radii polynomial approach,
we obtain that $\alpha, e_2$ are functions of the radius $\rho$. Therefore, we
obtain a range of radii for which Theorem~\ref{thm: contraction 2} applies. Of
course the largest radius is a better result for the uniquess part and the
smallest radius is a better result fof the distance to the initial guess
\end{rem}
\section{Implementation of the rigorous computer assisted validation
of periodic orbits for the Kuramoto-Sivashinsky
equation.}\label{section: implementation theorem}
We use Theorem \ref{thm: contraction 3} and construct an implementation of the
computer assisted validation of periodic orbits. Our initial data, $(f, u)$,
will consist of a real number $f$, a trigonometric polynomial $u$ of degrees
$(d_1, d_2)$ in the variables $(\theta, x)$, and the operator $B={\rm Id}+\hat B$,
where $\hat B$ is a $2d_1\cdot(2d_2+1)\times 2d_1\cdot(2d_2+1)$ dimensional
matrix (this operator can be obtained by nonrigorous computations
by approximating the inverse of the operator $A$).
First of all, notice that the constants $e_1$ and $e_2$ in Theorem \ref{thm:
contraction 3} depend on the diagonal operators $S_c^{-1}$, $\partial_\theta$
and $\partial_x$, and on the norms of $B$ and $\tilde e$. The computation of
the norms of the (diagonal) operators $S_c^{-1}\partial_\theta$ and
$S_c^{-1}\partial_x$, is done in Appendix \ref{section: appendix}, Lemma
\ref{lem: banach2}.
Secondly, the computation of the norms of the operator $B$ and the error
$\tilde e$ can be done with the help of computer assisted techniques because
they are finite dimensional: $\tilde e$ is a trigonometric polynomial of
dimension $2d_1\cdot (2d_2+1)$ and $B={\rm Id} +\hat B$ implies that $\|B\|_{M}\leq
1 +\|\hat B\|_{M}$ (note that $\|\hat B\|_M$ is the norm of a finite dimensional matrix).
Finally, it remains to show how to compute the operator norm
\begin{equation}
\label{eq: defect matrices}
\|\hat B\hat A+\hat A+\hat B\|_M.
\end{equation}
Since $u$ is a trigonometric polynomial, the operator $\hat A$ is a band
operator: $\hat A_{i, j}=0$ for $|i| > d_1$ or $|j| > 2d_2+1$. Hence
$\hat A$ decomposes as the sum of a finite matrix $\hat A_{F}$ of dimensions
$2d_1\cdot(2d_2+1)\times 2d_1\cdot(2d_2+1)$ and a linear operator $\hat A_I$.
This operator $\hat A_I$ is:
\begin{equation}
\label{eq: high terms operator}
\hat A_I = \mathbb{P}_{(> d_1, > d_2)}\left(-cS_c^{-1}+S_c^{-1}\partial_x(u\cdot)\right)
=-c\mathbb{P}_{(> d_1, > d_2)}S_c^{-1} \mathbb{P}_{(> d_1, > d_2)}+
\mathbb{P}_{(> d_1, > d_2)}S_c^{-1}\partial_x \mathbb{P}_{(> d_1, > d_2)}
(u\cdot)
,
\end{equation}
where $\mathbb{P}_{(\leq d_1, \leq d_2)}$ is the projection operator on the
$d_1\cdot (d_2+1)$-dimensional vector space spanned by the low frequencies
and $\mathbb{P}_{(> d_1, >d_2)} = {\rm Id}-\mathbb{P}_{(\leq d_1, \leq d_2)}$.
Hence,
\begin{equation}
\label{eq: bound operator}
\|\hat B\hat A+\hat A+\hat B\|_M\leq \max\left\{\|\hat B\hat A_{F}+\hat
A_{F}+\hat B\|_M,
\|{\rm Id}+\hat B\|_M\|\hat A_{I}\|_M
\right\}
.
\end{equation}
\begin{rem}
The notation $\mathbb{P}_{(> d_1, >d_2)}$ could be a little
misleading:
it is not the projection operator onto the high
frequencies in both variables, but the complement of the low
frequency projection operator.
\end{rem}
The norm $\|\hat B\hat A_{F}+\hat A_{F}+\hat B\|_M$ appearing in the upper
bound \eqref{eq: bound operator} can be estimated with the help of computer
assisted techniques, while the norm $\|{\rm Id}+\hat B\|_M\|\hat A_{I}\|_M$ is
split into the computation of $\|{\rm Id}+\hat B\|_M$ and $\|\hat A_{I}\|_M$. The
former is done as said before, while the latter (the bound of the operator
\eqref{eq: high terms operator}) is bounded above by:
\begin{equation}\label{eq: norm tail}
cK_1+K_2K_3,
\end{equation}
where
$K_1=\|\mathbb{P}_{(> d_1, > d_2)}S_c^{-1}
\mathbb{P}_{(> d_1, > d_2)}\|_M$,
$K_2 = \|\mathbb{P}_{(> d_1, > d_2)}S_c^{-1}\partial_x
\mathbb{P}_{(> d_1, > d_2)}\|_M$
and $K_3 = \|u\|_M$.
Fixing $c=\frac1\nu$ and with the help of Lemma \ref{lem: banach1}
we obtain that
\begin{tabular}{l}
$K_1 =
\sqrt2\max\left\{\max_{x > d_1}
\left\{\dfrac1{p(x)}\right\}, \dfrac1{f(d_2+1)}\right\},$
\\
$K_2 =
\sqrt2 \left(\frac{4}{3\nu}\right)^{\frac14}
\left(\max\left\{\max_{x > d_1}
\left\{\dfrac1{p(x)}\right\}, \dfrac1{f(d_2+1)}\right\}\right)^{\frac34},$
\\
$K_3 = \sup_{(i_1, i_2)\in\mathbb Z^2} |u_{i_1, i_2}|M(i_1, i_2).$
\end{tabular}
\begin{rem}
Since $u$ is a trigonometric polynomial, $K_3$ is in fact computed by
\[
\sup_{(i_1, i_2)\in [0, 2d_1+1]\times [1, 2d_2+1]} |u_{i_1, i_2}|M(i_1, i_2).
\]
\end{rem}
\begin{rem}
The upper bound given in Lemma \ref{lem: banach1} tends to zero as
the number of modes used in the discretization tends to infinity. This assures
us that this methodology is reliable.
\end{rem}
\begin{rem}
The computation of the norm \eqref{eq: defect matrices} is very demanding in
terms of computational effort. Fortunately, very sharp
results are not needed: provided that we can prove that the norm is
less than $1$, we obtain a contraction, and the final result is not
too affected by the contraction factor.
On the other hand, the bound on the error $e_1$ in Theorem \ref{thm: contraction
3} has a very direct influence in the error established.
Hence, a good strategy is to perform the matrix computations with the
lowest dimensions possible and perform
the estimate of $e_1$ with the highest possible number
of modes. This relies on the fact that given two functions $u_0$ and $u_1$ with
$\|u_0-u_1\|_M\leq \delta$, then their associated $\hat A_{u_i}$ satisfy that
$\|\hat A_{u_0}-\hat A_{u_1}\|_M\leq \|S_c^{-1}\partial_x\|_M \delta\leq K_2
\delta$. This strategy is reflected in Algorithm
\ref{algor: computation periodic orbit}.
One should also realize that the calculation of the operator $B$
does not need to be justified. Some further heuristic approximations
that reduce the computational effort could be taken (e.g. a Krylov
method that gives a finite rank approximation). We have not taken advantage of these possibilities since
they were not needed in our case.
Finally, we note that since the preconditioner is not so crucial,
and it is more expensive to compute, in continuation algorithms,
it could be good to update it less frequently than the residual.
\end{rem}
We are now in a position to give the algorithm
for the validation of the existence and local uniqueness of periodic orbits
near a given approximate one $(f, u)$. We suppose that the approximation is
obtained by the methods explained in Section \ref{section: Newton scheme}.
\begin{rem}
For more details on the computer implementation of this algorithm
(e.g.\ the rigorous manipulation of Fourier series), we refer to the appendix in
\cite{FiguerasHaro_CAP} or \cite{Haro_Survey}.
\end{rem}
\begin{algorithm}{\ }\label{algor: computation periodic orbit}
\begin{itemize}
\item[\textbf{Input}]
\begin{itemize}
\item $r, s_1, s_2 \geq 0$, defining the Banach space $X_{M}$.
\item An approximate solution
$(f, u)$ to Equation \eqref{eq: invariance}, of dimensions
$d_1, d_2$ in the variables $t, x$.
\item A pair of natural numbers $\tilde d_1, \tilde d_2$ such that
$\tilde d_i \leq d_i$, $i=1,2$.
\end{itemize}
\item[\textbf{Output}] If successful, a constant
$\rho_- > 0$ such that a (unique) solution of the invariance equation
exists inside the ball centered at $(f, u)$ with radius $\rho_-$.
\item[1)] Compute the trigonometric polynomial $\tilde u$ by
truncating $u$ up to $\tilde d_1, \tilde d_2$.
\item[2)] Compute an upper bound $\delta$ of $\|\tilde u-u\|_M$.
\item[3)] Compute the matrix $\hat A_{F}$ and the matrix $\hat B$
associated to $\tilde u$.
\item[4)] Compute an upper bound, $\alpha_1$, of
$\|\hat B\hat A_{F}+\hat B + \hat A_F \|_{M}.$
\item[5)] Compute upper bounds of the constants $K_1, K_2$ and $K_3$.
\item[6)] Compute an upper bound, $\alpha_2$, of $cK_1+K_2K_3$.
\item[7)] Compute an upper bound, $b$, of $1+\|\hat B\|_{M}$.
\item[8)] Compute $\alpha = \max\{\alpha_1, \alpha_2\}+K_2 \delta b$. If
$\alpha$ is greater than $1$ then the algorithm stops and the result is that
the validation has failed; otherwise continue with Step 9).
\item[9)] Compute an upper bound, $e_0$, of $\|\tilde e\|_{M}$.
\item[10)] Compute an upper bound, $e_1$, of $b\cdot e_0$.
\item[11)] Compute an upper bound, $e_2$, of
$b\cdot(\|S_c^{-1}\partial_\theta\|_M+\frac12 \|S_c^{-1}\partial_x\|_M)$.
\item[12)] Check if $(1-\alpha)^2-4e_1e_2 > 0$.
\end{itemize}
If the inequality in 12) is true
then, by Theorem~\ref{thm: contraction 3},
there exists a unique periodic orbit $(f_*, u_*)$
such that
$\|(f_*-f, u_*-u)\|_{M}\leq E=\dfrac{e_1}{1-\alpha-\rho_-}$, where
$\rho_-$ has the expression as in Theorem \ref{thm: contraction 3}.
\end{algorithm}
\begin{rem}
The computation of the product of the interval matrices $\hat B$ and $\hat A_F$
is the bottleneck, in terms of computational time, of the algorithm: naive
multiplication of the matrices leads to disastrous speed results. To speed this
up we use the techniques in \cite{Rump_matrices_0, Rump_matrices}, which
describe algorithms for the rigorous computation of product of interval
matrices with the help of the \verb$BLAS$ package. See Appendix \ref{section:
multiplication} for a presentation of this technique.
\end{rem}
\subsection{Improving the radius of analyticity of solutions}
\label{subsection: improving}
A simple strategy for giving rigorous lower bounds on the analyticity radius of
the solutions is to first perform Algorithm \ref{algor: computation periodic
orbit} with $r\approx 0$ and then apply a posteriori bounds to improve the
value of $r$. (See \cite{Hungria_Lessard_Mireles-James_2016} for an
application of this technique in the context of ODEs.)
Denote by $\alpha_r$, $e_{1, r},$ and $e_{2, r}$ the upper bounds appearing in
Theorem \ref{thm: contraction 3} when computed with the one-parametric norm
$\|\cdot \|_{M_r}$ (the weight $M_r$ depends on the radius of analyticity).
Moreover, notice that for any trigonometric polynomial $u$ of dimensions
$d_1\times d_2$ and for any $\hat r > 0$ we have $\|u\|_{M_{\hat r}}\leq
\|u\|_{M_0}e^{\hat r d_1 d_2}$, and for any finite dimensional operator $T$ of dimensions
$d\times d$ we have $\|T\|_{M_{\hat r}}\leq \|T\|_{M_0}e^{\hat r d}$. Hence, since the
application of Theorem \ref{thm: contraction 3} is performed with finite
dimensional approximations, we obtain that $\alpha_{\hat r}\leq \alpha_0 e^{\hat
r d_1 d_2}$, $e_{1,\hat r}\leq e_{1,0}e^{2\hat r d_1 d_2}$ and $e_{2, \hat
r}\leq e_{2, 0} e^{\hat r d_1 d_2}$. So, by imposing that these upper bounds
satisfy the conditions appearing in Theorem \ref{thm: contraction 3}, we obtain
larger values of the radius of analyticity of the solutions.
Sharper results could also be obtained by repeating the
calculation of the norms in spaces of analytic functions with radius closer to
the true value; of course, this would require redoing all the estimates of the
norms.
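The search for the largest $\hat r$ compatible with the rescaled bounds can be automated; a Python sketch (bisection on $\hat r$, nonrigorous and with hypothetical input bounds) is:

```python
import math

def improved_radius(alpha0, e10, e20, d1, d2, r_max=1.0, iters=60):
    """Largest r_hat (found by bisection) for which the rescaled bounds
    alpha_r = alpha0*exp(r*d1*d2), e1_r = e10*exp(2*r*d1*d2) and
    e2_r = e20*exp(r*d1*d2) still satisfy alpha_r < 1 and
    (1 - alpha_r)^2 - 4*e1_r*e2_r > 0."""
    def ok(r):
        g = math.exp(r * d1 * d2)
        a = alpha0 * g
        return a < 1.0 and (1.0 - a)**2 - 4.0 * (e10 * g**2) * (e20 * g) > 0.0
    if not ok(0.0):
        return None                      # the bounds fail already at r = 0
    lo, hi = 0.0, r_max
    if ok(hi):
        return hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo
```

The returned value is a lower bound for the radius of analyticity whenever the input bounds $\alpha_0$, $e_{1,0}$, $e_{2,0}$ are themselves rigorous.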
The analyticity properties of solutions of the K-S equation have been studied
rigorously in \cite{Colletanalyticity, Grujic00}, where the radius of
analyticity is shown to have thermodynamic properties and relations with the
number of zeros, determining modes, etc.
\section{Some numerical examples.}
\label{section: numerical examples}
In this section we present some examples of the methods developed in Section
\ref{section: Newton scheme} for the computation of periodic orbits and in
Section \ref{section: implementation theorem} for the a posteriori verification
of them.
\subsection{Example of numerical computation. Period doubling.}
We have continued some branches of the period-doubling bifurcation diagram,
shown in Subfigure a) in Figure \ref{figure: sections po cascade}. This has
been done by first computing some of the attracting orbits by integration, see
Section \ref{section: introduction}. These periodic orbits have been used as
seeds for our numerical algorithm.
Specifically, for the values of the parameter $\frac 1\nu$ equal to $32.9$,
$33.1$ and $33.3$ we have computed 3 (attracting) periodic orbits at the first
3 stages of the period doubling cascades. Then, we have continued each one of
them with our numerical algorithm. With the help of Algorithm 1
we have been able to cross the period doubling bifurcations, where the
attracting orbit bifurcates into an attracting orbit of doubled period and into
an unstable one. Our continuation is able to follow these unstable orbits.
See Figure \ref{figure: cascade with periodic orbits} for a representation of
these orbits in the period doubling cascade diagram and Figure \ref{figure:
sample periodic orbits} for the representation of two of these orbits.
The computational time of its validation is no more than 30 seconds on a
single 2.7 GHz CPU of a regular laptop. We hope that this could be used
to extend the catalogue of periodic orbits computed in \cite{LanCvitanovic2008}.
Note that, of course, validating different periodic orbits is
easily parallelizable.
\begin{figure}
\centering
\resizebox{100mm}{!}{\includegraphics[angle=270]
{./cascade_energy_with_unstables.png}}
\caption{The continuation of the first 3 periodic orbits on the
period doubling cascade.
These are superposed to the period doubling cascade using three
different colors.}
\label{figure: cascade with periodic orbits}
\end{figure}
\begin{figure}
\resizebox{170mm}{!}{
\begin{tabular}{cc}
\resizebox{85mm}{!}{\includegraphics[type=png,ext=.png,read=.png,angle=270]
{./periodic_orbit_0.03005710850616_0.89893314191428}}&
\resizebox{85mm}{!}{\includegraphics[type=png,ext=.png,read=.png,angle=270]
{./periodic_orbit_0.02984459917211_3.49489164733297}}\\
{$1/\nu = 33.27$. Period $=0.89893314191428$.}&
{$1/\nu = 33.5069$. Period $=3.49489164733297$.}\\
\end{tabular}
} \caption{Representation of two periodic orbits computed with the Newton
method. Colors represent the value of the orbit $u$ at the $(\theta, x)$
coordinate.}
\label{figure: sample periodic orbits}
\end{figure}
\subsection{Example of a validation.}
With the help of the algorithm presented in Section \ref{section:
implementation theorem} (and with the improvement trick explained in Subsection
\ref{subsection: improving}) we can validate the existence of some periodic
orbits. For example, we validate the existence of a periodic orbit with $\frac
1\nu = 32.97$. The approximate data is given by $40$ $x$ modes and $(2\cdot
19+1)$ $t$ modes. The approximate period is $0.895839$. The validation has
been done with the finite matrix $\hat B$ with dimension 6994, and with $r =
s_1 = s_2 = 1\e{-12}$.
The output of the validation is:
\begin{itemize}
\item
The error produced by the approximate periodic orbit is
$\|S^{-1}_c \varepsilon\|_M \leq 8.489632\e{-10}$.
\item
$K_1 \leq 8.189680\e{-3}$
,
$K_2 \leq 6.332728\e{-2}$
,
$K_3 \leq 6.693947$
.
\item
The error of the tails of the operator $\hat A$ is less than or equal to
$K_1\cdot c+K_2\cdot K_3 = 6.946684\e{-1}$.
\item
The norm of the approximate inverse of the linear operator $\text{Id}+\hat A$ is
$\|\text{Id}+\hat B\|_M = 4.567111\e{1}$.
\item
$\|\hat B+\hat A+\hat B \cdot \hat A\|_M = 6.716849\e{-14}$
\item
$\alpha = \|\text{Id}+\hat B + \hat A + \hat B\hat A\|_M = 6.946684\e{-1}$
\item
$(1-\alpha)^2-4 e_1 e_2 = 9.321303\e{-2}$
\end{itemize}
As a result of the validation we obtain that the distance of the true periodic
orbit to the approximate solution is less than or equal to $1.269966\e{-7}$.
The computational time of one of these validations is no more than 1017 seconds
in a single 2.7 GHz CPU on a regular laptop.
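The role of the quantity $(1-\alpha)^2-4e_1e_2$ above can be sketched as follows. Assuming, as in radii-polynomial type arguments, that the theorem yields a true solution in a ball of radius $r$ whenever $e_2r^2-(1-\alpha)r+e_1\leq 0$, the smallest admissible radius is the lower root of this quadratic. The function name and this exact formula are our assumptions, not a restatement of Theorem \ref{thm: contraction 3}.

```python
import math

def error_radius(alpha, e1, e2):
    """Smallest admissible ball radius from the assumed quadratic condition
    e2*r**2 - (1 - alpha)*r + e1 <= 0; returns None when alpha >= 1 or the
    discriminant (1 - alpha)**2 - 4*e1*e2 is negative (validation fails)."""
    disc = (1.0 - alpha)**2 - 4.0 * e1 * e2
    if alpha >= 1.0 or disc < 0.0:
        return None
    return ((1.0 - alpha) - math.sqrt(disc)) / (2.0 * e2)
```

When $e_1e_2\ll(1-\alpha)^2$, this radius is approximately $e_1/(1-\alpha)$, so the final error bound is essentially the defect of the approximate orbit amplified by the inverse of the contraction margin.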
Validation results for other periodic orbits are shown in Table
\ref{table: validation_results}.
\begin{table}
\begin{center}
\tiny
{
\begin{tabular}{| l | l | l | l | l |}
\hline
$\frac 1\nu$ &Period &
$E$&
Improved radius of analyticity&
Improved $E$
\\
\hline
$8.199953$ &
$2.992730$ &
$2.463363\e{-11}$ &
$1.512026\e{-4}$ &
$1.002122\e{-9}$
\\
\hline
$8.230453$ &
$3.074450$ &
$4.385135\e{-11}$ &
$1.268306\e{-4}$ &
$6.199587\e{-9}$
\\
\hline
$31.00000$ &
$0.806901$ &
$8.642523\e{-10}$ &
$9.154580\e{-5}$ &
$1.043836\e{-8}$
\\
\hline
$32.97000$ &
$0.895839$ &
$1.269966\e{-7}$ &
$1.101236\e{-4}$ &
$2.902773\e{-6}$
\\
\hline
$33.27010$ &
$0.881170$ &
$1.049117\e{-7}$ &
$1.120727\e{-4}$ &
$4.092897\e{-6}$
\\
\hline
\end{tabular}
\caption{Validation results of some periodic orbits for different values of the
parameter $\nu$. The columns show: $\frac{1}\nu$, the period of the periodic
orbit, the radius of the ball obtained from Theorem \ref{thm: contraction 3}
computed with the norm $\|\cdot\|_{M}$ with $s_1=s_2=10^{-12}$ and
$r=10^{-12}$, the improved radius of analyticity obtained by applying the trick
explained in Subsection \ref{subsection: improving}, and the new radius of the ball
obtained from Theorem \ref{thm: contraction 3} computed with the norm
$\|\cdot\|_{M}$ with $s_1=s_2=10^{-12}$ and $r$ equal to the value in the fourth
column. Each validation took around 1000 seconds on a single 2.7 GHz CPU of a
regular laptop. Some of the periodic orbits that appear in this table appear
also in \cite{Piotr3}.} \label{table: validation_results} }
\end{center}
\end{table}
\section{Introduction}
We consider the (2+1)-dimensional Konopelchenko-Dubrovsky (KD) equation \cite{Konopelchenko1984SomeDimensions,Konopelchenko1992IntroductionEquations}
\begin{equation}\label{e:KD}
\centering
\left\{\begin{aligned}
u_t-u_{xxx}-6 \rho u u_x +\dfrac{3}{2}\phi^2 u^2 u_x-3 v_{y}+3 \phi u_x v &= 0, \\
u_y &= v_x, \\
\end{aligned}\right.
\end{equation}
where $u=u(x,y,t)$, $v=v(x,y,t)$, the subscripts denote partial differentiation, and $\rho$ and $\phi$ are real parameters defining the magnitude of nonlinearity in wave propagation. The equation models stratified shear flows, internal and shallow-water waves, and plasmas \cite{Xu2011PainleveComputation}; it can also be regarded as a combined KP
and modified KP equation \cite{Zhang2009Symbolic-computationMethod}, or as a generalized (2+1)D Gardner equation \cite{Konopelchenko1991InverseEquation}.
\subsection*{Models} The $(1+1)$-dimensional reduction of the KD equation \eqref{e:KD} is the Gardner equation \cite{Krishnan2011AEquation,Xu2009AnalyticPhysics}
\begin{equation}\label{e:gar}
u_t-u_{xxx}-6\rho u u_x+\dfrac{3}{2}\phi^2u^2u_x=0,
\end{equation}which is an example of the generalized Korteweg-de Vries (gKdV) equation \cite{Korteweg1895Waves}, that is,
\[u_t+(g(u)-u_{xx})_x=0,\]
where $g:\mathbb{R}\to\mathbb{R}$ is a smooth real function. The Gardner equation \eqref{e:gar} reduces to the KdV and modified KdV equations for $\phi=0$ and $\rho=0$, respectively.
For $\phi=0$, \eqref{e:KD} is the Kadomtsev-Petviashvili (KP) equation with negative dispersion \cite{Kadomtsev1970OnMedia}
\begin{equation}\label{e:KP}
(u_t-u_{xxx}-6\rho u u_x)_x-3u_{yy}=0,
\end{equation}
which is also known as the KP-II equation. The modified KP-II equation \cite{Sun2009InelasticElectrodynamics} (mKP-II, for short) is obtained from \eqref{e:KD} for $\rho=0$,
\begin{equation}\label{e:mKP}
\left(u_t-u_{xxx}+\dfrac{3}{2}\phi^2u^2u_x\right)_x-3u_{yy}=0.
\end{equation}
\subsection*{Integrability} The KD equation \eqref{e:KD} is integrable \cite{Konopelchenko1984SomeDimensions, Zhang2009Symbolic-computationMethod, Maccari1999AEquation}. Integrability is a useful property to have for evolution equations, especially in higher dimensions. It gives sufficient freedom to explore the equation through different aspects. It also helps significantly to observe nonlinear coherent structures like rogue waves, breathers, solitons and elliptic waves in the systems \cite{Ma2020MultipleEquation, Liu2019LumpEquation, Wu2018ComplexitonEquation, Yuan2018SolitonsEquations, Ren2016TheSolutions}. Some well-known integrable water wave models are the classical KdV, KP, and Schr\"odinger equations. The KD equation \eqref{e:KD}, like similar evolution equations in (2+1) dimensions, for example, the Kadomtsev-Petviashvili equation, the Davey-Stewartson equation, and the three-wave equations, is solvable through the Inverse Scattering Transform (IST) \cite{Konopelchenko1984SomeDimensions}. It is among the few nonlinear evolution equations which are completely integrable in different settings. Notably, the considered KD equation is also integrable in the Painlev\'e sense and solvable through IST \cite{Xu2011PainleveComputation, Konopelchenko1984SomeDimensions, Konopelchenko1991InverseEquation}.
\subsection*{Dispersion Relation} Assuming a plane-wave solution of the form
\[u(x,y,t)=e^{i(kx-\Omega t+\gamma y)},
\]
for the linear part
\[(u_t-u_{xxx})_x-3u_{yy}=0,
\]
of the KD equation \eqref{e:KD}, we arrive at the dispersion relation
\[\Omega(k)=k^3-\dfrac{3\gamma^2}{k}.
\]
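This relation is straightforward to check mechanically: each derivative of the plane wave multiplies it by a symbol ($\partial_t\mapsto-i\Omega$, $\partial_x\mapsto ik$, $\partial_y\mapsto i\gamma$), so the linear equation reduces to an algebraic identity. A small Python sketch, with helper names of our own choosing:

```python
def dispersion(k, gamma):
    # Omega(k) = k^3 - 3*gamma^2/k, the dispersion relation from the text
    return k**3 - 3.0 * gamma**2 / k

def linear_residual_factor(k, gamma, omega):
    """Each derivative of exp(i(kx - omega*t + gamma*y)) pulls down a factor:
    d/dt -> -i*omega, d/dx -> i*k, d/dy -> i*gamma.  The linear part
    (u_t - u_xxx)_x - 3 u_yy then multiplies u by the value computed below."""
    i = 1j
    return (i * k) * (-i * omega - (i * k)**3) - 3.0 * (i * gamma)**2
```

The residual factor simplifies to $k\Omega-k^4+3\gamma^2$, which vanishes exactly for $\Omega=k^3-3\gamma^2/k$ and for no other value of $\Omega$.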
\subsection*{Small amplitude periodic traveling waves} The $y$-independent periodic traveling wave solutions of the KD equation \eqref{e:KD}, which are also solutions of the Gardner equation \eqref{e:gar}, are of the form \[\begin{pmatrix}
u(x,y,t)\\v(x,y,t)
\end{pmatrix}=\begin{pmatrix}
u(x-ct)\\v(x-ct)
\end{pmatrix},
\]for some $c\in\mathbb{R}$. Under this assumption, we arrive at
\begin{equation}\label{e:pt}
\centering
\left\{\begin{aligned}
-cu_x-u_{xxx}+\dfrac{\phi^2}{2}(u^3)_x-3\rho (u^2)_x+3\phi u_xv=0,\\v_x=0,\\
\end{aligned}\right.
\end{equation} which implies $v=b_1$, where $b_1$ is an arbitrary constant. Substituting $v=b_1$ and integrating, \eqref{e:pt} is reduced to
\begin{equation}\label{e:pt1}
-cu-u_{xx}+\dfrac{\phi^2}{2}u^3-3\rho u^2+3\phi u b_1=b_2,
\end{equation} where $b_1,b_2\in\mathbb{R}$. Let $u$ be a $2\pi/k$-periodic function of its argument, for some $k>0$. Then, $w(z):=u(x)$ with $z=kx$, is a $2\pi$-periodic function in $z$, satisfying
\begin{equation}
-cw-k^2w_{zz}+\dfrac{\phi^2}{2}w^3-3\rho w^2+3\phi wb_1=b_2.
\end{equation}
For a fixed $\phi$ and $\rho$, let $F:H^2(\mathbb{T})\times \mathbb{R}\times \mathbb{R}^+\times\mathbb{R}\times\mathbb{R}
\to L^2(\mathbb{T)}$ be defined as
\[
F(w,c;k,b_1,b_2)=-cw-k^2w_{zz}+\dfrac{\phi^2}{2}w^3-3\rho w^2+3\phi wb_1-b_2.
\]
We try to find a solution $w\in H^2(\mathbb{T})$ of
\begin{equation}\label{e:f}
F(w,c;k,b_1,b_2)=0.
\end{equation}
For any $c\in\mathbb{R}$, $k>0$, $b_1,b_2\in\mathbb{R}$ and $|b_1|, |b_2|$ sufficiently small, note that
\begin{equation}
w_0(c,k,b_1,b_2)=- \dfrac{1}{c}b_2+O((b_1+b_2)^2),
\end{equation}
makes a constant solution of \eqref{e:f}. Note that $w_0\equiv 0$ if $b_1=b_2=0$. If non-constant solutions of \eqref{e:f} bifurcate from $w_0\equiv 0$ for some $c=c_0$, then $\text{ker}(\partial_wF(0,c_0;k,0,0))$ is non-trivial. Note that
\[\text{ker}(\partial_wF(0,c_0;k,0,0))=\text{ker}(-c_0-k^2\partial_z^2)=\text{span}\{e^{\pm iz}\},
\]provided that $c_0=k^2$.
The periodic traveling waves of \eqref{e:gar} exist (see \cite{Bronski2016ModulationalType}), and by following the Lyapunov-Schmidt procedure, their small-amplitude expansion is obtained as follows.
\begin{theorem}
For any $k>0$, $b_1,b_2\in\mathbb{R}$ and $|b_1|,|b_2|$ sufficiently small, a one-parameter family of solutions of \eqref{e:KD}, denoted by\[
\begin{pmatrix}
u(x,t)\\v(x,t)
\end{pmatrix}=\begin{pmatrix}w(a,b_1,b_2)(z)\\v(z)
\end{pmatrix}\]where $z=k(x-c(a,b_1,b_2)t)$ and $|a|$ is sufficiently small, $w(a,b_1,b_2)(z)$ is smooth, even, and $2\pi$-periodic in $z$, $c$ is even in $a$, and the family is given by
\begin{equation}\label{e:expptw}
\left\{
\begin{aligned}
w(a,b_1,b_2)(z)=&-\dfrac{1}{k^2}b_2+a\cos z+a^2(A_0+A_2\cos 2z)+a^3A_3\cos 3z+O(a^4+a^2(b_1+b_2)^2),\\
v(z)=&b_1,\\
c(a,b_1,b_2)=&k^2+3\phi b_1+\dfrac{3\rho}{k^2}b_2+a^2c_2+O(a^4+a^2(b_1+b_2)^2),
\end{aligned}\right.
\end{equation}
where
\begin{equation}
A_0=-\dfrac{3\rho}{2k^2},\quad A_2=\dfrac{\rho}{2k^2},\quad A_3=-\dfrac{\phi^2}{64k^2}+\dfrac{3\rho^2}{16k^4},\quad \text{and}\quad c_2=\dfrac{3\phi^2}{8}+\dfrac{15\rho^2}{2k^2}.
\end{equation}
\end{theorem}
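The coefficients above can be sanity-checked numerically: substituting the truncated expansion (with $b_1=b_2=0$) into the profile equation \eqref{e:pt1} should leave a residual of size $O(a^4)$ uniformly in $z$. The following Python sketch, with a helper name of our own choosing, performs this check:

```python
import math

def residual_profile(a, k, rho, phi, npts=256):
    """Maximum over a period of the residual of
    -c*w - k^2*w_zz + (phi^2/2)*w^3 - 3*rho*w^2 at b1 = b2 = 0, using the
    expansion coefficients from the theorem; it should be O(a^4)."""
    A0 = -3.0 * rho / (2.0 * k**2)
    A2 = rho / (2.0 * k**2)
    A3 = -phi**2 / (64.0 * k**2) + 3.0 * rho**2 / (16.0 * k**4)
    c2 = 3.0 * phi**2 / 8.0 + 15.0 * rho**2 / (2.0 * k**2)
    c = k**2 + a**2 * c2
    worst = 0.0
    for j in range(npts):
        z = 2.0 * math.pi * j / npts
        w = (a * math.cos(z) + a**2 * (A0 + A2 * math.cos(2 * z))
             + a**3 * A3 * math.cos(3 * z))
        wzz = (-a * math.cos(z) - 4.0 * a**2 * A2 * math.cos(2 * z)
               - 9.0 * a**3 * A3 * math.cos(3 * z))
        res = -c * w - k**2 * wzz + 0.5 * phi**2 * w**3 - 3.0 * rho * w**2
        worst = max(worst, abs(res))
    return worst
```

Halving $a$ should shrink the residual by roughly a factor of $16$, confirming the $O(a^4)$ accuracy of the expansion.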
\subsection*{Transverse stability} For the KD equation \eqref{e:KD}, the solution and integrability aspects have been studied thoroughly, see \cite{Xu2010IntegrableEquation, Zhang2009Symbolic-computationMethod, Yang2008TravellingMethod, Kumar2016SimilarityTheory}, for instance. However, to the best of the authors' knowledge, no result on the stability of the periodic traveling waves or solitary waves of the KD equation \eqref{e:KD} has been reported so far. The transverse instability of periodic traveling waves has been studied for many similar equations, for instance, for the KP equation in \cite{Bhavna2021TransverseEquation,Hakkaev2012TransverseEquations,Johnson2010TransverseEquation,Haragus2011TransverseEquation} and for the Zakharov-Kuznetsov (ZK) equation in \cite{Chen2012AEquation,Johnson2010TheEquations}. The transverse instability of solitary wave solutions of various water-wave models has also been explored by several authors, see \cite{Groves2001TransverseWaves,Pego2004OnTension,Rousset2009TransverseModels,Rousset2011TransverseWater-waves}. Motivated by the importance of nonlinear wave propagation and its stability, we investigate the transverse spectral instability of the KD equation. We aim to study the (in)stability of the $y$-independent, that is, (1+1)-dimensional periodic traveling waves \eqref{e:expptw} of \eqref{e:KD} with respect to two-dimensional perturbations which are either periodic or non-periodic in the $x$-direction and always periodic in the $y$-direction. The periodic nature of the perturbations in the $y$-direction is classified into two categories: long-wavelength and finite- or short-wavelength transverse perturbations. The (in)stabilities that occur due to long-wavelength transverse perturbations are termed {\em modulational transverse (in)stabilities}.
Furthermore, we use the term {\em high-frequency transverse (in)stabilities} to refer to those transverse (in)stabilities that occur due to finite- or short-wavelength transverse perturbations.
Moreover, depending on the periodic or non-periodic nature of perturbations in the direction of propagation of the one-dimensional wave, we term the resulting instability as transverse instability with respect to periodic or non-periodic perturbations, respectively. Our main results are the following theorems depicting the transverse stability and instability of small amplitude periodic traveling waves \eqref{e:expptw} of \eqref{e:KD}.
\begin{theorem}[Transverse stability]\label{t:3}
Assume that small-amplitude periodic traveling waves \eqref{e:expptw} of \eqref{e:KD} are spectrally stable in $L^2(\mathbb T)$ as a solution of the corresponding $y$-independent one-dimensional equation. Then, for any $a$ sufficiently small, $\rho\in \mathbb{R}$, $\phi\in \mathbb{R}$, and $k>0$, periodic traveling waves \eqref{e:expptw} of \eqref{e:KD} are transversely stable with respect to two-dimensional perturbations which are either mean-zero periodic or non-periodic (localized or bounded) in the direction of propagation and finite or short wavelength in the transverse direction.
\end{theorem}
\begin{theorem}[Transverse instability]\label{t:2} For a fixed $\rho\in\mathbb{R}$ and $\phi\neq 0$, sufficiently small amplitude periodic traveling waves \eqref{e:expptw} of the KD equation suffer modulational transverse instabilities with respect to periodic perturbations if
\[
k>2\left|\dfrac{\rho}{\phi}\right| \quad \text{and} \quad |\gamma|<k|a|\sqrt{\left|\dfrac{\phi^2}{4}-\dfrac{\rho^{2}}{k^2}\right|}+O(a(\gamma+a)).
\]
\end{theorem}
As a consequence of these theorems, for all $k>0$, periodic traveling waves \eqref{e:expptw} of the mKP-II equation suffer modulational transverse instability with respect to periodic perturbations, which is in accordance with results in \cite{Johnson2010TransverseEquation}.
Also, in the limit $\phi\to 0$, there is no modulational transverse instability for KP-II equation by Theorem~\ref{t:2} which again agrees with results in \cite{Spektor1988StabilityDispersion,Haragus2011TransverseEquation,Johnson2010TransverseEquation,HLP17,Bhavna2021TransverseEquation}. From Theorem \ref{t:3}, KP-II does not possess any high-frequency transverse instability \cite{HLP17}. Moreover, we have the following stability result for the mKP-II equation using Theorem~\ref{t:3}.
\begin{corollary}
For all $k>0$, sufficiently small
amplitude periodic traveling waves \eqref{e:expptw} of the mKP-II equation do not possess any high-frequency transverse instability with respect to both mean-zero periodic and non-periodic perturbations.
\end{corollary}
In Section~\ref{sec:lin}, we linearize the equation and formulate the problem. In Section~\ref{s:3}, we list all potentially unstable nodes. In Sections \ref{s:4} and \ref{s:5}, we provide transverse instability analysis to investigate modulational and high-frequency transverse instabilities with respect to periodic and non-periodic perturbations.
\subsection*{Notations}\label{sec:notations}
Throughout the article, we have used the following notations.
Here, $L^{2}(\mathbb{R})$ is the set of Lebesgue measurable, real, or complex-valued functions over $\mathbb{R}$ such that
$$
\|f\|_{L^{2}(\mathbb{R})}=\left(\int_{\mathbb{R}}|f(x)|^{2} d x\right)^{1 / 2}<+\infty,
$$
and, $L^{2}(\mathbb{T})$ denote the space of $2 \pi$-periodic, measurable, real or complex-valued functions over $\mathbb{R}$ such that
$$
\|f\|_{L^{2}(\mathbb{T})}=\left(\frac{1}{2 \pi} \int_{0}^{2 \pi}|f(x)|^{2} d x\right)^{1 / 2}<+\infty.
$$
Here, $L^2_0(\mathbb{T})$ is the space of square-integrable functions with zero mean,
\begin{equation}\label{e:zerom}
L^2_0(\mathbb{T})=\left\{f\in L^2(\mathbb{T})\;:\;\int_0^{2\pi}f(z)~dz=0\right\}.
\end{equation}
The space $C_{b}(\mathbb{R})$ consists of all bounded continuous functions on $\mathbb{R}$, normed with
$$
\|f\|=\sup _{x \in \mathbb{R}}|f(x)|.
$$
For $s \in \mathbb{R}$, let $H^{s}(\mathbb{R})$ consists of tempered distributions such that
$$
\|f\|_{H^{s}(\mathbb{R})}=\left(\int_{\mathbb{R}}\left(1+|t|^{2}\right)^{s}|\hat{f}(t)|^{2} d t\right)^{\frac{1}{2}}<+\infty,
$$
and
$$
H^{s}(\mathbb{T})=\left\{f \in H^{s}(\mathbb{R}): f \text { is } 2 \pi \text {-periodic }\right\}.
$$
We define $L^{2}(\mathbb{T})$-inner product as
$$
\langle f, g\rangle=\frac{1}{2 \pi} \int_{0}^{2 \pi} f(z) \bar{g}(z) d z=\sum_{n \in \mathbb{Z}} \hat{f}_{n} \overline{\hat{g}_n},
$$
where $\widehat{f}_{n}$ are Fourier coefficients of the function $f$ defined by
$$
\widehat{f}_{n}=\frac{1}{2 \pi} \int_{0}^{2 \pi} f(z) e^{-i n z} d z.
$$
Throughout the article, $\Re(\mu)$ represents the real part of $\mu\in\mathbb{C}$.
\section{Linearization and the spectral problem set up}\label{sec:lin}
Linearizing \eqref{e:KD} about its one-dimensional periodic traveling wave solution $\begin{pmatrix}w\\v\end{pmatrix}$ given in \eqref{e:expptw}, and considering the perturbations to $\begin{pmatrix}w\\v\end{pmatrix}$ of the form
\begin{equation}
\begin{pmatrix}w\\v\end{pmatrix}+\epsilon\begin{pmatrix}\zeta\\\psi\end{pmatrix}+O(\epsilon^2)\quad\text{for}\quad 0<|\epsilon|\ll 1
\end{equation}
we arrive at
\begin{equation}\label{e:lin1}
\centering
\left\{\begin{aligned}
\zeta_t-kc\zeta_z-k^3\zeta_{zzz}-6k\rho(w\zeta)_z+\dfrac{3}{2}\phi^2k(w^2\zeta)_z-3\psi_y+3\phi k w_z\psi+3\phi kb_1\zeta_z&=0,\\
\zeta_y-k\psi_z&=0.
\end{aligned}\right.
\end{equation}
We seek a solution of the form $\begin{pmatrix}\zeta(z,t,y)\\ \psi(z,t,y) \end{pmatrix}=e^{\mu t+i\gamma y} \begin{pmatrix}\zeta(z)\\ \psi(z) \end{pmatrix}$, $\mu\in\mathbb{C}$, $\gamma\in\mathbb{R}$, of \eqref{e:lin1}, which leads to
\begin{equation}\label{e:lin2}
\centering
\left\{\begin{aligned}
\mu\zeta-kc\zeta_z-k^3\zeta_{zzz}-6k\rho(w\zeta)_z+\dfrac{3}{2}\phi^2k(w^2\zeta)_z-3i\gamma\psi+3\phi k w_z\psi+3\phi kb_1\zeta_z&=0,\\
i\gamma\zeta-k\psi_z&=0.
\end{aligned}\right.
\end{equation}
We can reduce this system of equations into
\begin{equation}
\begin{aligned}\label{e:opt}
&\mathcal Q_{a,b_1,b_2}(\mu,\gamma)\psi:=\\& \left(k\left(\mu-kc\partial_z-k^3\partial_z^3-6k\rho\partial_z(w\cdot)+\dfrac{3}{2}\phi^2k\partial_z(w^2\cdot)\right)\partial_z+3\gamma^2+3\phi k(i\gamma w_z+kb_1\partial_z^2)\right)\psi=0.
\end{aligned}\end{equation}
\begin{definition}(Transverse (in)stability)
Assuming that the $2\pi/k$-periodic traveling wave solution $\begin{pmatrix}u(x,y,t)\\v(x,y,t)\end{pmatrix}=\begin{pmatrix}w(k(x-ct))\\v(k(x-ct))\end{pmatrix}$ of \eqref{e:KD} is a stable solution of the one-dimensional equation \eqref{e:gar}, where $w$, $v$ and $c$ are as in \eqref{e:expptw}, we say that the periodic wave $\begin{pmatrix}w\\v\end{pmatrix}$ in \eqref{e:expptw} is transversely spectrally stable with respect to two-dimensional periodic perturbations (resp. non-periodic (localized or bounded) perturbations) if the KD operator $\mathcal Q_{a,b_1,b_2}(\mu,\gamma)$ acting in $L^2(\mathbb{T})$ (resp. $L^2(\mathbb{R})$ or $C_b(\mathbb{R})$) is invertible for any $\mu\in\mathbb{C}$ with $\Re(\mu)>0$ and any $\gamma\neq 0$; otherwise, it is deemed transversely spectrally unstable.
\end{definition}
We split the study of the invertibility of $\mathcal Q_{a,b_1,b_2}(\mu,\gamma)$ into periodic ($L^2(\mathbb{T})$) and non-periodic perturbations ($L^2(\mathbb{R})$ or $C_b(\mathbb{R})$). In what follows, we assume $b_1=b_2=0$. For nonzero $b_1$ and $b_2$, one may proceed in a like manner; however, the
calculations become lengthy and tedious.
\subsection*{Periodic perturbations}\label{s:per}
Here, we are considering perturbations which are periodic in $z$, that is, in the direction of the propagation of wave. We check the invertibility of the operator $\mathcal Q_{a,b_1,b_2}(\mu,\gamma)$ acting in $L^2(\mathbb{T})$ for any $\mu\in\mathbb{C}$, $\Re(\mu)>0$ and any $\gamma\neq 0$. We use the notation $\mathcal Q_{a}(\mu,\gamma)$ for $\mathcal Q_{a,b_1,b_2}(\mu,\gamma)$ for simplicity. We convert the invertibility problem
\[\mathcal Q_{a}(\mu,\gamma)\psi=0;\quad \psi\in L^2(\mathbb{T})\]
into a spectral problem, which requires the invertibility of $\partial_z$. Since $\partial_z$ is not invertible in $L^2(\mathbb{T})$, we restrict the problem to the mean-zero subspace $L^2_0(\mathbb{T})$ of $L^2(\mathbb{T})$, defined in \eqref{e:zerom}. Since $L_0^2(\mathbb{T})\subset L^2(\mathbb{T})$, if the operator $\mathcal{Q}_a(\mu,\gamma)$ is not invertible in $L^2_0(\mathbb{T})$ for some $\mu\in\mathbb{C}$, then it is not invertible in $L^2(\mathbb{T})$ for the same $\mu\in\mathbb{C}$ as well.
The operator $\mathcal Q_{a}(\mu,\gamma)$ acting on $L^2_0(\mathbb{T})$ has a compact resolvent so that the spectrum consists of isolated eigenvalues with finite multiplicity. Therefore, $\mathcal Q_{a}(\mu,\gamma)$ is invertible in $L^2_0(\mathbb{T})$ if and only if zero is not an eigenvalue of $\mathcal Q_{a}(\mu,\gamma)$. Using this and the invertibility of $\partial_z$ in $L_0^2(\mathbb{T})$, we have the following result.
\begin{lemma}\label{lem:mh}The operator $\mathcal Q_{a}(\mu,\gamma)$ is not invertible in $L^2_0(\mathbb{T})$ for some $\mu\in \mathbb{C}$ if and only if $\mu\in\operatorname{spec}_{L^2_0(\mathbb{T})}(\mathcal H_{a}(\gamma))$, the $L_0^2(\mathbb{T})$-spectrum of the operator, where
\begin{align*}
\mathcal H_{a}(\gamma):= ck \partial_z+k^{3} \partial_{z}^{3}+6 k \rho \partial_z(w\,\cdot)-\frac{3}{2} \phi^{2} k \partial_z\left(w^{2}\,\cdot\right)-\frac{3 \gamma^{2}}{k} \partial_z^{-1} -i 3\phi \gamma w_{z} \partial_z^{-1}.
\end{align*}
\end{lemma}
\begin{proof}
The operator $\mathcal Q_{a}(\mu,\gamma)$ is not invertible in $L_0^2(\mathbb{T})$ for some $\mu\in \mathbb{C}$ if and only if zero is an eigenvalue of $\mathcal Q_{a}(\mu,\gamma)$. Moreover, for $\varphi\in L_0^2(\mathbb{T})$, $\mathcal Q_{a}(\mu,\gamma)\varphi=0$ if and only if $\mathcal H_{a}(\gamma)\varphi=\mu \varphi$. The proof follows trivially.
\end{proof}
Next, we analyze the spectrum of the operator $\mathcal H_{a}(\gamma)$ acting in $L^2_0(\mathbb{T})$ with domain $H^{3}(\mathbb{T})\cap L^2_0(\mathbb{T})$. Since $w$ is an even function, $w_z$ is an odd function, and therefore the spectrum of $\mathcal{H}_a(\gamma)$ is not symmetric with respect to the reflection through the origin. Moreover, the operator $\mathcal{H}_a(\gamma)$ is not real, and therefore the spectrum of $\mathcal{H}_a(\gamma)$ is not symmetric with respect to the real axis either. The spectrum of $\mathcal{H}_a(\gamma)$ inherits the following symmetry property.
\begin{lemma}\label{lem:sym}
The spectrum of $\mathcal{H}_a(\gamma)$ is symmetric with respect to the reflection through the imaginary axis.
\end{lemma}
\begin{proof}
We consider $\mathcal{R}$ to be the reflection through the imaginary axis, defined as follows:
\begin{equation*}
\mathcal{R}\psi(z)=\overline{\psi(-z)}.
\end{equation*}
Assume $\mu$ is an eigenvalue of $\mathcal{H}_a(\gamma)$ with an associated eigenvector $\varphi$; then we have
\begin{equation}\label{e:eig}
\mathcal{H}_a(\gamma)\varphi=\mu\varphi.
\end{equation}
Since
\[(\mathcal{H}_a(\gamma)\mathcal{R}\psi)(z)=\mathcal{H}_a(\gamma)(\mathcal{R}\psi(z))=\mathcal{H}_a(\gamma)\overline{\psi(-z)}=-(\overline{\mathcal{H}_a(\gamma)\psi})(-z)=-(\mathcal{R}\mathcal{H}_a(\gamma)\psi)(z),
\]
therefore, $\mathcal{H}_a(\gamma)$ anti-commutes with $\mathcal{R}$. Using \eqref{e:eig}, we arrive at
\[\mathcal{H}_a(\gamma)\mathcal{R}\varphi=-\mathcal{R}\mathcal{H}_a(\gamma)\varphi=-\overline{\mu}\mathcal{R}\varphi.
\]
We conclude from here that if $\mu$ is an eigenvalue of $\mathcal{H}_a(\gamma)$ with associated eigenvector $\varphi$, then $-\overline{\mu}$ is also an eigenvalue of $\mathcal{H}_a(\gamma)$ with associated eigenvector $\mathcal{R}\varphi.$
\end{proof}
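This symmetry can also be observed numerically on a Fourier truncation of $\mathcal H_a(\gamma)$. The sketch below is our own illustration, not an ingredient of the proof: it keeps only the leading-order profile $w(z)\approx a\cos z$ with $c\approx k^2$ and projects out the zero mode. Every matrix entry turns out to be purely imaginary, so the eigenvalues come in pairs $\{\mu,-\overline{\mu}\}$, as the lemma asserts.

```python
import numpy as np

def H_matrix(a, gamma, k=1.0, rho=1.0, phi=1.0, N=8):
    """Fourier truncation of H_a(gamma) on modes n = -N..N, n != 0, keeping
    only the leading-order wave w(z) = a*cos(z) and speed c = k^2; the zero
    mode is projected out, as in the mean-zero setting."""
    modes = [n for n in range(-N, N + 1) if n != 0]
    idx = {n: j for j, n in enumerate(modes)}
    M = np.zeros((len(modes), len(modes)), dtype=complex)
    c = k**2
    for n in modes:
        jn = idx[n]
        # c k dz + k^3 dz^3 - (3 gamma^2/k) dz^{-1}  (diagonal part)
        M[jn, jn] += 1j * (c * k * n - k**3 * n**3 + 3 * gamma**2 / (k * n))
        # 6 k rho dz(w .): multiply by a*cos(z) (shift by +-1), then apply dz
        for m in (n - 1, n + 1):
            if m in idx:
                M[idx[m], jn] += 6 * k * rho * (1j * m) * a / 2
        # -(3/2) phi^2 k dz(w^2 .) with w^2 = a^2 (1 + cos 2z)/2
        M[jn, jn] += -1.5 * phi**2 * k * (1j * n) * a**2 / 2
        for m in (n - 2, n + 2):
            if m in idx:
                M[idx[m], jn] += -1.5 * phi**2 * k * (1j * m) * a**2 / 4
        # -3 i phi gamma w_z dz^{-1} with w_z = -a*sin(z)
        for m, s in ((n + 1, 1.0), (n - 1, -1.0)):
            if m in idx:
                M[idx[m], jn] += -3j * phi * gamma * s * (1j * a / 2) / (1j * n)
    return M
```

Since the matrix equals $i$ times a real matrix, and real matrices have spectra closed under complex conjugation, the eigenvalue pairing $\mu\mapsto-\overline{\mu}$ holds exactly for the truncation.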
\subsection*{Non-periodic perturbations}\label{s:nper}
With respect to these perturbations, we aim to study the invertibility of $\mathcal Q_a(\mu, \gamma)$ acting in $L^2(\mathbb{R})$ or $C_b(\mathbb{R})$ (with domain $H^{4}(\mathbb{R})$ or $C_b^{4}(\mathbb{R})$),
for $\mu\in\mathbb{C}$, $\Re(\mu)>0$, and $\gamma\in\mathbb{R}$, $\gamma\neq0$. In $L^2(\mathbb{R})$ or $C_b(\mathbb{R})$, the operator $\mathcal Q_a(\mu, \gamma)$ no longer has isolated point spectrum; rather, it has continuous spectrum. Thus, we rely upon Floquet theory, by which all solutions of \eqref{e:opt} in $L^2(\mathbb{R})$ or $C_b(\mathbb{R})$ are of the form $\psi(z)=e^{i\tau z}\Psi(z)$, where $\tau\in\left(-\frac12,\frac12\right]$ is the Floquet exponent and $\Psi(z)$ is a $2\pi$-periodic function; see \cite{Haragus2008STABILITYEQUATION} for a similar situation. By following the same arguments as in the proof of \cite[Proposition A.1]{Haragus2008STABILITYEQUATION}, we can infer that the study of the invertibility of $\mathcal Q_a(\mu,\gamma)$ in $L^2(\mathbb{R})$ or $C_b(\mathbb{R})$ is equivalent to that of the linear operators $\mathcal Q_{a,\tau}(\mu,\gamma)$ in $L^2(\mathbb{T})$ with domain $H^{4}(\mathbb{T})$, for all $\tau\in\left(-\frac12,\frac12\right]$, where
\begin{align*}
\mathcal Q_{a,\tau}(\mu,\gamma) = \left(\mu-kc(\partial_z+i\tau)-k^3(\partial_z+i\tau)^3-6k\rho(\partial_z+i\tau)(w)\right)(k(\partial_z+i\tau))\\+\left(\dfrac{3}{2}\phi^2k(\partial_z+i\tau)(w^2)\right)(k(\partial_z+i\tau))+3\gamma^2+i3k\gamma\phi w_z.
\end{align*}
Since $\tau=0$ corresponds to the periodic perturbations we have already investigated, we now restrict ourselves to the case $\tau\neq0$. The $L^2(\mathbb{T})$-spectrum of the operator $\mathcal Q_{a,\tau}(\mu,\gamma)$ consists of eigenvalues of finite multiplicity. Therefore, $\mathcal Q_{a,\tau}(\mu,\gamma)$ is invertible in $L^2(\mathbb{T})$ if and only if zero is not an eigenvalue of $\mathcal Q_{a,\tau}(\mu,\gamma)$. We have the following result using this and the invertibility of $\partial_z+i\tau$.
\begin{lemma} The operator $\mathcal Q_{a,\tau}(\mu,\gamma)$ is not invertible in $L^2(\mathbb{T})$ for some $\mu\in \mathbb{C}$ and $\tau\neq 0$ if and only if $\mu\in\operatorname{spec}_{L^2(\mathbb{T})}(\mathcal H_a(\gamma,\tau))$, the $L^2(\mathbb{T})$-spectrum of the operator, where
\begin{align*}
\mathcal{H}_a(\gamma,\tau):=k c (\partial_z+i\tau)&+k^{3} (\partial_z+i\tau)^{3}+6 k \rho (\partial_z+i\tau)(w)\\&-\frac{3}{2} \phi^{2} k (\partial_z+i\tau)\left(w^{2}\right)-\left(\dfrac{3 \gamma^{2}}{k} +i 3\phi \gamma w_{z}\right) (\partial_z+i\tau)^{-1}.\\
\end{align*}
\end{lemma}
\begin{proof}
The proof is similar to Lemma~\ref{lem:mh}.
\end{proof}
We will study the $L^2(\mathbb{T})$-spectra of the linear operators $\mathcal H_a(\gamma,\tau)$ for $|a|$ sufficiently small and for $|\tau|>\delta>0$, since the operator $(\partial_z+i\tau)^{-1}$ becomes singular as $\tau\to 0$. Note that the spectrum of $\mathcal{H}_a(\gamma,\tau)$ is not symmetric with respect to the reflection through the real axis or the origin. Instead, we have the following symmetry.
\begin{lemma}
The spectrum of $\mathcal{H}_a(\gamma,\tau)$ is symmetric with respect to the reflection through the imaginary axis for all $\tau\in\left(-\frac12,\frac12\right]\setminus\{0\}$.
\end{lemma}
\begin{proof}
The proof is similar to Lemma \ref{lem:sym}.
\end{proof}
\section{Characterization of the unperturbed spectrum}\label{s:3}
\subsection{Periodic perturbations}\label{ss:1}
As a consequence of the symmetry of the spectrum obtained in Lemma~\ref{lem:sym}, we obtain instability if there is an eigenvalue of $\mathcal{H}_a(\gamma)$ off the imaginary axis. A straightforward calculation reveals that
\begin{align}\label{E:spec1}
\mathcal H_{0}(\gamma)e^{inz} = i\Omega_{n,\gamma}e^{inz}\quad \text{for all}\quad n \in \mathbb{Z}^\ast:=\mathbb{Z}\setminus \{0\},
\end{align}
where
\begin{align}\label{E:omega}
\Omega_{n,\gamma} = k^{3} n\left(1-n^{2}\right)+\dfrac{3 \gamma^{2}}{kn}.
\end{align}
Therefore, the $L_0^2(\mathbb{T})$-spectrum of $\mathcal H_0(\gamma)$ is given by
\begin{equation*}\label{e:spec2}
\operatorname{spec}_{L^2_0(\mathbb{T})}(\mathcal H_0(\gamma))=\{i\Omega_{n,\gamma}; n \in \mathbb{Z}^\ast \},
\end{equation*}
which implies $\operatorname{spec}_{L^2_0(\mathbb{T})}(\mathcal H_0(\gamma))$ consists of purely imaginary eigenvalues of finite multiplicity. This is because the coefficients of the operator $\mathcal{H}_0(\gamma)$ are real, which should be the case, since zero-amplitude solutions are spectrally stable.
The spectra of $\mathcal H_a(\gamma)$ and $\mathcal H_0(\gamma)$ remain close for $|a|$ small, since
\[
\|\mathcal H_a(\gamma)-\mathcal H_0(\gamma)\|\to 0 \text{ as } a \to 0
\]
in the operator norm. Due to the symmetry in Lemma~\ref{lem:sym}, for $|a|$ sufficiently small, bifurcation of eigenvalues of $\mathcal H_a(\gamma)$ from the imaginary axis can happen only when a pair of eigenvalues of $\mathcal H_0(\gamma)$ collide on the imaginary axis.
Let $n\neq m\in \mathbb{Z}^\ast$. A pair of eigenvalues $i\Omega_{n,\gamma}$ and $i\Omega_{m,\gamma}$ of $\mathcal H_0(\gamma)$ collide for some $\gamma=\gamma_c$ when
\begin{align}\label{e:coll}
\Omega_{n,\gamma_c}=\Omega_{m,\gamma_c}.
\end{align}
We list all the collisions in the following lemma.
\begin{lemma}\label{lem111}
For a fixed $\Delta \in \mathbb{N}$, the eigenvalues $\Omega_{n,\gamma}$ and $\Omega_{n+\Delta,\gamma}$ of the operator $\mathcal{H}_0(\gamma)$ collide for all $n\in(-\Delta,0)\cap \mathbb{Z}$ at some $\gamma=\gamma_c(k)$. All such collisions take place away from the origin in the complex plane, except when $\Delta$ is even and $n=-\Delta/2$, in which case the eigenvalues $\Omega_{n,\gamma}$ and $\Omega_{-n,\gamma}$ collide at the origin.
\end{lemma}
\begin{proof}
Without any loss of generality, consider $m>n$ and $m=n+\Delta$ with $\Delta\in \mathbb{N}$ in the collision condition \eqref{e:coll}; then we obtain
\begin{equation}\label{e:cc}
3\gamma_c^2=k^4n(n+\Delta)(-3n^2-3n\Delta-\Delta^2+1),
\end{equation}
which can be rewritten as
\begin{equation}\label{e:cc123}
3\gamma_c^2=-k^4[3n^2(n+\Delta)^2+n(n+\Delta)(\Delta^2-1)].\end{equation}
The above equation implies that the collision between $n$ and $n+\Delta$ takes place if and only if $n(n+\Delta)<0$, that is, $-\Delta<n<0$.
Observe that $\Omega_{n,\gamma_c}=\Omega_{-n,\gamma_c}=0$ for $\gamma_c^2=\dfrac{k^4n^2(n^2-1)}{3}$. Therefore, $\Omega_{n,\gamma_c}$ and $\Omega_{n+\Delta,\gamma_c}$ collide at the origin when $\Delta$ is even and $n=-\Delta/2$. All other collisions are away from the origin. Hence the lemma.
\end{proof}
From \eqref{e:cc123}, write $\gamma^2=-\dfrac{k^4}{3}f(n)g(n)$, where $f(n)=n(n+\Delta)$ and $g(n)=3n^2+3n\Delta+\Delta^2-1$. For a fixed $\Delta\in \mathbb{N}$, $f(n)\geq f(-\Delta/2)$ and $g(n)\geq g(-\Delta/2)$ for all $n\in(-\Delta,0)\cap \mathbb{Z}$, and hence $f(n)g(n)\leq f(-\Delta/2)g(-\Delta/2)$ for all such $n$. Also, $f(n)g(n)\geq -\dfrac{(\Delta^2-1)^2}{12}$. Therefore $\dfrac{k^4}{48}\Delta^2(\Delta^2-4)\leq\gamma^2\leq\dfrac{k^4}{36}(\Delta^2-1)^2$. The collision for $\{n,n+\Delta\}=\{-1,1\}$ occurs at $\gamma=0$, and all other collisions mentioned in Lemma \ref{lem111} occur for $\gamma^2\in\left[\dfrac{k^4}{48}\Delta^2(\Delta^2-4),\dfrac{k^4}{36}(\Delta^2-1)^2\right]$ with $\dfrac{k^4}{48}\Delta^2(\Delta^2-4)>0$. This shows that for each $k>0$ there exists $\gamma_0\neq0$ such that all the collisions stated in Lemma \ref{lem111}, except $\{n,n+\Delta\}=\{-1,1\}$, occur for $|\gamma|>|\gamma_0|$.
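The collision condition \eqref{e:cc} and the bounds on $\gamma_c^2$ above can be checked numerically. The sketch below is illustrative only, with the normalization $k=1$ and assuming the periodic ($\tau=0$) dispersion relation $\Omega_{n,\gamma}=k^3n(1-n^2)+3\gamma^2/(kn)$, the $\tau\to0$ limit of the expression in Subsection~\ref{ss:2}:

```python
def omega(n, g2, k=1.0):
    # periodic (tau = 0) dispersion relation -- an assumption for this check
    return k**3 * n * (1 - n**2) + 3 * g2 / (k * n)

def gamma_c_sq(n, D, k=1.0):
    # collision condition (e:cc): 3 gamma_c^2 = k^4 n(n+D)(-3n^2 - 3nD - D^2 + 1)
    return k**4 * n * (n + D) * (-3 * n**2 - 3 * n * D - D**2 + 1) / 3.0

def check(D, k=1.0):
    """Verify the collisions of the lemma and the gamma^2 bounds for Delta = D."""
    results = []
    for n in range(-D + 1, 0):
        g2 = gamma_c_sq(n, D, k)
        assert g2 >= 0                                        # a collision exists
        # the pair {n, n+D} really collides at gamma_c
        assert abs(omega(n, g2, k) - omega(n + D, g2, k)) < 1e-8
        lo = k**4 * D**2 * (D**2 - 4) / 48.0
        hi = k**4 * (D**2 - 1)**2 / 36.0
        assert lo - 1e-9 <= g2 <= hi + 1e-9                   # stated interval
        results.append((n, n + D, g2))
    return results

for D in range(2, 8):
    check(D)
```

For instance, for $\Delta=3$ both pairs $\{-2,1\}$ and $\{-1,2\}$ collide at $\gamma_c^2=4/3$.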
\subsection{Non-periodic perturbations}\label{ss:2} A standard perturbation argument assures that the $L^2(\mathbb{T})$-spectrum of $\mathcal H_a(\gamma,\tau)$ and $\mathcal H_0(\gamma,\tau)$ will stay close for $|a|$ sufficiently small \cite{Hur2015ModulationalWaves}. Therefore, in order to locate the spectrum of $\mathcal H_a(\gamma,\tau)$, we need to determine the spectrum of $\mathcal H_0(\gamma,\tau)$.
A simple calculation yields that
\[
\mathcal H_0(\gamma,\tau) e^{inz} = i\Omega_{n,\gamma,\tau}e^{inz}, \quad n\in\mathbb{Z},
\]
where
\[
\Omega_{n, \gamma, \tau} = k^{3}(n+\tau)\left(1-(n+\tau)^{2}\right)+\dfrac{3 \gamma^{2}}{k(n+\tau)}.
\]
Therefore, the $L^2(\mathbb{T})$-spectrum of $\mathcal H_0(\gamma,\tau)$ is given by
\begin{equation}\label{e:spec}
\operatorname{spec}_{L^2(\mathbb{T})}(\mathcal H_0(\gamma,\tau))=\{i\Omega_{n,\gamma,\tau}; n \in \mathbb{Z}, \tau\in\left(-1/2,1/2\right]\setminus \{0\}\}.
\end{equation}
If $\mu \in \operatorname{spec}_{L^2(\mathbb{T})}(\mathcal H_0(\gamma,\tau))$, then $\bar{\mu}\in \operatorname{spec}_{L^2(\mathbb{T})}(\mathcal H_0(\gamma,-\tau))$; therefore, it suffices to consider $\tau\in\left(0,1/2\right]$.
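This symmetry can be read off from the formula for $\Omega_{n,\gamma,\tau}$: replacing $(n,\tau)$ by $(-n,-\tau)$ reverses the sign, so the spectrum for $-\tau$ is the complex conjugate of the spectrum for $\tau$. A quick numerical check (with the normalization $k=1$, for illustration):

```python
def Omega(n, tau, g2, k=1.0):
    # eigenvalue parameter from (e:spec)
    return k**3 * (n + tau) * (1 - (n + tau)**2) + 3 * g2 / (k * (n + tau))

# Omega(-n, -tau) = -Omega(n, tau): the eigenvalue i*Omega_{n,gamma,tau}
# for tau and its complex conjugate appear for -tau
for n in range(-5, 6):
    for tau in (0.1, 0.25, 0.5):
        for g2 in (0.0, 0.7, 2.0):
            assert abs(Omega(-n, -tau, g2) + Omega(n, tau, g2)) < 1e-9
```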
Let $n\neq m\in \mathbb{Z}$. A pair of eigenvalues $i\Omega_{n,\gamma,\tau}$ and $i\Omega_{m,\gamma,\tau}$ of $\mathcal H_0(\gamma,\tau)$ collides for some $\gamma=\gamma_c$ and $\tau\in\left(0,1/2\right]$ when
\begin{align}\label{e:coll3}
\Omega_{n,\gamma_c,\tau}=\Omega_{m,\gamma_c,\tau}.
\end{align}
We list all the collisions in the following lemma.
\begin{lemma}\label{lem:coll3}
For a fixed $\Delta \in \mathbb{N}$, the eigenvalues $\Omega_{n,\gamma,\tau}$ and $\Omega_{n+\Delta,\gamma,\tau}$ of the operator $\mathcal{H}_0(\gamma,\tau)$ collide for every $n\in[-\Delta,-1]\cap \mathbb{Z}$ along a curve $\gamma=\gamma_c(\tau)$, $\tau\in(0,1/2]$, except for $\{n,n+\Delta\}=\{-1,0\}$. All such collisions take place away from the origin in the complex plane, except when $\Delta$ is odd and $n=-(\Delta+1)/2$, in which case the eigenvalues $\Omega_{n,\gamma,\tau}$ and $\Omega_{-n-1,\gamma,\tau}$ collide at the origin for $\gamma=\gamma_c(1/2)$.
\end{lemma}
\begin{proof}
Without loss of generality, assume that $m>n$ and $m=n+\Delta$ with $\Delta\in \mathbb{N}$. Then the collision condition \eqref{e:coll3} yields
\begin{equation}\label{e:cc1}
3\gamma^2=-k^4[3(n+\tau)^2(n+\tau+\Delta)^2+(n+\tau)(n+\tau+\Delta)(\Delta^2-1)].
\end{equation}
This implies that a collision between $n$ and $n+\Delta$ takes place only if $(n+\tau)(n+\tau+\Delta)<0$, that is, $-\Delta\leq n<0$. In order to check for which $n\in[-\Delta,0)$ there is indeed a collision, write $n=-t$ with $t \in \mathbb{N}$ such that $-t+\tau+\Delta>0$. From the collision condition \eqref{e:coll3}, we get
\begin{equation}\label{e:colll3}
3\gamma^2\left(\dfrac{1}{t-\tau}+\dfrac{1}{-t+\tau+\Delta}\right)=k^4[(t-\tau)((t-\tau)^2-1)+(-t+\tau+\Delta)((-t+\tau+\Delta)^2-1)].
\end{equation}
There exists such a $\gamma$ satisfying \eqref{e:coll3} for all pairs $t$ and $-t+\Delta$, except $\{-t,-t+\Delta\}=\{-1,0\}$. Hence the lemma.
\end{proof}
Note that $\Omega_{n,\gamma,\tau}=0$ at $\gamma^2=-\dfrac{k^4}{3}(n+\tau)^2(1-(n+\tau)^2)$. For a fixed $\gamma_c$, $\Omega_{n,\gamma_c,\tau}=\Omega_{n+\Delta,\gamma_c,\tau}=0$ is possible only for $\Delta=-2n-1$ and $\tau=1/2$. Therefore, $\Omega_{n,\gamma_c,\tau}$ and $\Omega_{n+\Delta,\gamma_c,\tau}$ collide at the origin when $n=-(\Delta+1)/2$, $\tau=1/2$ and $\gamma_c^2=\dfrac{k^4(2n+1)^2(4n^2+4n-3)}{48}$, except for the pair $\{n,n+\Delta\}=\{-1,0\}$. All other collisions are away from the origin.
From \eqref{e:cc1}, write $\gamma^2=-\dfrac{k^4}{3}d(n)h(n)$, where $d(n)=(n+\tau)(n+\tau+\Delta)$ and $h(n)=3(n+\tau)^2+3(n+\tau)\Delta+\Delta^2-1$.
\begin{equation}\label{e:d}
d(n)=(n+\tau)(n+\tau+\Delta)=(n+\tau)^2+\Delta(n+\tau)=\left(n+\tau+\dfrac{\Delta}{2}\right)^2-\dfrac{\Delta^2}{4}
\end{equation}
\begin{equation}\label{e:h}h(n)=3(n+\tau)^2+3(n+\tau)\Delta+\Delta^2-1=3\left(n+\tau+\dfrac{\Delta}{2}\right)^2+\dfrac{\Delta^2-4}{4}
\end{equation}
From \eqref{e:d} and \eqref{e:h}, for a fixed $\Delta\in \mathbb{N}$, $d(n)\geq-\dfrac{\Delta^2}{4}$ and $h(n)\geq\dfrac{\Delta^2-4}{4}$ for all $n\in\mathbb{Z}$. Collisions for $\Delta=2$ occur for $\gamma^2\geq \dfrac{k^4}{4}\tau^3(2-\tau)>0$, and all other collisions mentioned in Lemma \ref{lem:coll3} occur for $\gamma^2\geq\dfrac{k^4}{48}\Delta^2(\Delta^2-4)>0$. Also, $\gamma^2\leq\dfrac{k^4}{36}(\Delta^2-1)^2$ for all $\Delta\in\mathbb{N}$. Therefore, collisions for $\Delta=2$ occur for $\dfrac{k^4}{4}\tau^3(2-\tau)\leq\gamma^2\leq\dfrac{k^4}{4}$ and all other collisions occur for $\gamma^2\in\left[\dfrac{k^4}{48}\Delta^2(\Delta^2-4),\dfrac{k^4}{36}(\Delta^2-1)^2\right]$ with $\dfrac{k^4}{48}\Delta^2(\Delta^2-4)>0$. This shows that for each $k>0$ there exists $\gamma_0\neq0$ such that all the collisions stated in Lemma \ref{lem:coll3} occur for $|\gamma|>|\gamma_0|$.
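As in the periodic case, the collision condition \eqref{e:cc1} and the origin collisions at $\tau=1/2$ can be checked numerically; the sketch below (with the normalization $k=1$, for illustration only) verifies both:

```python
def Omega(n, tau, g2, k=1.0):
    # dispersion relation from (e:spec)
    return k**3 * (n + tau) * (1 - (n + tau)**2) + 3 * g2 / (k * (n + tau))

def gamma_c_sq(n, D, tau, k=1.0):
    # collision condition (e:cc1)
    s, t = n + tau, n + tau + D
    return -k**4 * (3 * s**2 * t**2 + s * t * (D**2 - 1)) / 3.0

# pairs {n, n+D} with -D <= n < 0 collide whenever gamma_c^2 >= 0
for D in (2, 3, 4):
    for tau in (0.2, 0.5):
        for n in range(-D, 0):
            g2 = gamma_c_sq(n, D, tau)
            if g2 >= 0:
                assert abs(Omega(n, tau, g2) - Omega(n + D, tau, g2)) < 1e-8

# origin collisions at tau = 1/2 and n = -(D+1)/2 for odd D (here k = 1):
for n in (-2, -3):
    g2 = (2 * n + 1)**2 * (4 * n**2 + 4 * n - 3) / 48.0
    assert abs(Omega(n, 0.5, g2)) < 1e-9
    assert abs(Omega(-n - 1, 0.5, g2)) < 1e-9
```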
If $\mu \in \operatorname{spec}_{L^2(\mathbb{T})}(\mathcal H_0(\gamma,\tau))$, then $\bar{\mu}\in \operatorname{spec}_{L^2(\mathbb{T})}(\mathcal H_0(\gamma,-\tau))$; hence there are collisions between the conjugates of the eigenvalues mentioned in Lemma~\ref{lem:coll3} for all $\tau\in(-1/2,0)$.
More specifically, collisions for $\{-\Delta,0\}$ occur for all $\tau\in(0,1/2]$, for $\{0,\Delta\}$ occur for all $\tau\in(-1/2,0)$, and the remaining collisions occur for all $\tau\in(-1/2,1/2]$. The perturbation analysis for the collisions mentioned in Lemma~\ref{lem:coll3} will be performed with respect to finite or short wavelength perturbations.
\section{Modulational transverse (in)stabilities}\label{s:4}
Throughout this section, we work in the regime $|\gamma|\ll1$, that is, with respect to long-wavelength perturbations. From Lemma~\ref{lem111}, when $\gamma=0$ there is a collision between the eigenvalues $i \Omega_{1,0}$ and $i \Omega_{-1,0}$ at the origin, while all other eigenvalues remain simple and purely imaginary. Moreover, in the regime $|\gamma|\ll1$, there is no collision with respect to non-periodic perturbations. We have
\[
\|\mathcal H_a(\gamma) - \mathcal H_0(\gamma)\|= O(|a|)
\]
as $a \to 0$ in the operator norm. A standard perturbation argument assures that the spectra of $\mathcal H_a(\gamma)$ and $\mathcal H_0(\gamma)$ stay close for $|a|$ and $|\gamma|$ small \cite{Hur2015ModulationalWaves}. Therefore, we may write
\[
\operatorname{spec}(\mathcal H_a(\gamma))=\operatorname{spec}_0(\mathcal H_a(\gamma)) \cup \operatorname{spec}_1(\mathcal H_a(\gamma)),
\]
for $a$ and $\gamma$ sufficiently small, where $\operatorname{spec}_0(\mathcal H_a(\gamma))$ contains two eigenvalues bifurcating continuously in $a$ from $i\Omega_{1,0}$ and $i\Omega_{-1,0}$, while $\operatorname{spec}_1(\mathcal H_a(\gamma))$ consists of infinitely many simple eigenvalues (see \cite{Hur2015ModulationalWaves} and references therein). We further investigate whether the pair of eigenvalues in $\operatorname{spec}_0(\mathcal H_a(\gamma))$ bifurcates away from the imaginary axis and contributes to modulational transverse instabilities.
For $a=0$, $\operatorname{spec}_0(\mathcal H_0(\gamma))=\{i \Omega_{-1,0},i \Omega_{1,0}\}$ with eigenfunctions $\{e^{-iz},e^{iz}\}$; we choose the real basis $\{\cos z,\sin z\}$. Using the expansions of $w$ and $c$ in \eqref{e:expptw}, we calculate the expansion of a basis $\{\psi_1,\psi_2\}$ for the eigenspace in $L^2_0(\mathbb{T})$ corresponding to the eigenvalues in $\operatorname{spec}_0(\mathcal H_a(\gamma))$, for small $a$ and $\gamma$, as
\begin{align*}
\psi_{1}(z) & =\cos z+2 a A_{2} \cos 2 z+3a^2A_3\cos 3z+O(a^{4}),\\
\psi_{2}(z) & =\sin z+2 a A_{2} \sin 2 z+3a^2A_3\sin 3z+O(a^{4}).
\end{align*}Expanding $\mathcal{H}_{a}(\gamma)$ and using the expansions of $w$ and $c$, we obtain
\begin{equation}
\begin{aligned}\label{e:expA}
\mathcal{H}_a(\gamma) & =\mathcal{H}_0(\gamma)+k a^{2}\left(c_{2}+6 \rho A_{0}-\frac{3}{4} \phi^{2} \right) \partial_z+\left(6k\rho a-3\phi^2kA_0a^3-\dfrac{3}{2}\phi^2kA_2a^3\right)\partial_z(\cos z)+\\&ka^2\left(6\rho A_2-\dfrac{3}{4}\phi^2\right)\partial_z(\cos 2z)+\left(6 k \rho a^{3} A_{3} -\frac{3}{2} \phi^{2} k a^{3} A_{2}\right)\partial_{z}(\cos 3 z)+i3\gamma\phi(a\sin z+\\&2a^2A_2\sin 2z+3a^3A_3\sin 3z) \partial_z^{-1}+O(a^4)
\end{aligned}
\end{equation}
In order to locate the bifurcating eigenvalues for $|a|$ sufficiently small, we calculate the action of $\mathcal{H}_{a}(\gamma)$ on the extended eigenspace $\{\psi_1(z), \psi_2(z)\}$ viz.
\begin{align}\label{eq:bmat1}
\mathcal{T}_{a}(\gamma) = \left[ \frac{\langle \mathcal{H}_a(\gamma)\psi_{i}(z),\psi_{j}(z)\rangle}{\langle\psi_{i}(z),\psi_{i}(z)\rangle} \right]_{i,j=1,2}
\text{ and }
\mathcal{I}_{a} = \left[ \frac{\langle \psi_{i}(z),\psi_{j}(z)\rangle}{\langle\psi_{i}(z),\psi_{i}(z)\rangle} \right]_{i,j=1,2}.
\end{align}
We use the expansion of $\mathcal H_a(\gamma)$ in \eqref{e:expA} to compute the actions of $\mathcal{H}_a(\gamma)$ and of the identity operator on $\{\psi_1,\psi_2\}$, and arrive at
\begin{align*}
\mathcal{T}_a(\gamma)&=\begin{pmatrix}
0 & -\dfrac{3\gamma^2}{k}+3a^2k\left(\dfrac{\phi^2}{4}-\dfrac{\rho^2}{k^2}\right) \\ \dfrac{3\gamma^2}{k} & 0\end{pmatrix}+O(a^2(\gamma+a)),\\
\end{align*}
To locate where these two eigenvalues bifurcate from the origin, we analyze the characteristic equation $ \left|\mathcal{T}_a(\gamma)-\mu \mathcal{I}_a\right|=0$, where $\mathcal{I}_a$ is, to leading order, the $2\times 2$ identity matrix. From this we conclude that
\begin{equation}
\mu=\pm \dfrac{3|\gamma|}{k}\sqrt{\Lambda} +O(a(\gamma+a))
\end{equation}
where
\begin{equation}\label{e:disc1}
\Lambda=-\gamma^2+ a^2 k^2 \left(\dfrac{\phi^2}{4}-\dfrac{\rho^{2}}{k^2}\right)+O(a^2(\gamma+a)).
\end{equation}
For $\gamma=a=0$, we get zero as a double eigenvalue, which agrees with our earlier calculation. For $\gamma$ and $a$ sufficiently small, we obtain two eigenvalues with non-zero real parts of opposite signs when
\begin{equation}
\gamma^2< a^2 k^2 \left(\dfrac{\phi^2}{4}-\dfrac{\rho^{2}}{k^2}\right)+O(a^2(\gamma+a)).
\end{equation}This is possible only for
\[
k>2\left|\dfrac{\rho}{\phi}\right|.
\]
Hence Theorem~\ref{t:2} follows.
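The threshold can be sanity-checked numerically. Dropping the higher-order terms in \eqref{e:disc1} (a truncation, for illustration only; the function names below are ad hoc), $\Lambda>0$ for some $\gamma\neq0$ precisely when the bracket is positive, i.e., when $k>2|\rho/\phi|$:

```python
def Lambda_leading(gamma, a, k, phi, rho):
    # leading part of (e:disc1), higher-order terms dropped
    return -gamma**2 + a**2 * k**2 * (phi**2 / 4 - rho**2 / k**2)

def unstable_gamma_exists(a, k, phi, rho):
    # Lambda > 0 for some gamma != 0 iff the bracket is positive
    return a**2 * k**2 * (phi**2 / 4 - rho**2 / k**2) > 0

phi, rho, a = 1.0, 1.0, 0.01
assert unstable_gamma_exists(a, 3.0, phi, rho)        # k > 2|rho/phi| = 2
assert not unstable_gamma_exists(a, 1.5, phi, rho)    # k < 2: no instability
assert Lambda_leading(0.005, a, 3.0, phi, rho) > 0    # an unstable wavenumber
```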
\section{High-frequency transverse (in)stabilities}\label{s:5}
As discussed in Subsections \ref{ss:1} and \ref{ss:2}, all the collisions occur for $|\gamma|>|\gamma_0|>0$; therefore, here we work in the regime $|\gamma|>|\gamma_0|>0$, that is, with respect to finite- or short-wavelength perturbations. Note that there is no collision for $\Delta=1$ or $2$ among the collisions mentioned in Lemma \ref{lem111}. From Lemma \ref{lem:coll3}, there are collisions for $\Delta=2$ with respect to non-periodic perturbations. For each $\Delta\geq3$, there are collisions for both periodic and non-periodic perturbations, as stated in Lemmas \ref{lem111} and \ref{lem:coll3}, respectively.\\
\paragraph{\underline{\textbf{(In)stability analysis for $\Delta=2$.}}}
For $\Delta=2$, we have three pairs of colliding eigenvalues: $\{\Omega_{-1,\gamma,\tau},\Omega_{1,\gamma,\tau}\}$, $\{\Omega_{0,\gamma,\tau},\Omega_{-2,\gamma,\tau}\}$ and $\{\Omega_{0,\gamma,\tau},\Omega_{2,\gamma,\tau}\}$. We further check whether these pairs lead to instability.
Let $i\Omega_{n,\gamma,\tau}$ and $i\Omega_{n+2,\gamma,\tau}$ be two such eigenvalues for some $n\in \mathbb{Z}$, and assume that they collide at $\gamma=\gamma_c$, that is,
\begin{align}
0 \neq \Omega_{n,\gamma_c,\tau} = \Omega_{n+2,\gamma_c,\tau} = \Omega \hspace{3px} (say).
\end{align}
That is, $i\Omega$ is an eigenvalue of $\mathcal{H}_0(\gamma_c,\tau)$ of multiplicity two. Let $i \Omega + i \nu_{a,n}$ and $i \Omega + i \nu_{a,n+2}$ be the eigenvalues of $\mathcal{H}_a(\gamma,\tau)$ bifurcating from $i\Omega_{n,\gamma_c,\tau}$ and $i\Omega_{n+2,\gamma_c,\tau}$, respectively, for $|a|$ and $|\gamma-\gamma_c|$ small, and let $\{\varphi_{a,n}(z), \varphi_{a,n+2}(z)\}$ be an orthonormal basis for the corresponding eigenspace. We assume the following expansions
\begin{align}\label{eq:eiggg1}
\varphi_{a,n}(z) =& e^{inz}+a\varphi_{n,1}+a^2\varphi_{n,2}+O(a^3), \\
\varphi_{a,n+2}(z) =& e^{i(n+2)z}+a\varphi_{n+2,1}+a^2\varphi_{n+2,2}
+O(a^3).\label{eq:eiggg2}
\end{align}
We use the orthonormality of $\varphi_{a,n}$ and $\varphi_{a,n+2}$ to find that
\[
\varphi_{n,1} = \varphi_{n,2} = \varphi_{n+2,1} = \varphi_{n+2,2} = 0.
\]Next, we calculate the action of $\mathcal{H}_a(\gamma,\tau)$ on the eigenspace $\{\varphi_{a,n}(z), \varphi_{a,n+2}(z)\}$ for $|\gamma - \gamma_c|$ and $|a|$ small. We arrive at
\begin{align*}
\mathcal{T}_a(\gamma,\tau)&=\begin{pmatrix}
T_{11} & T_{12} \\ T_{21} & T_{22}\end{pmatrix}+O(a^3(\gamma^2+a^2)),\\
\end{align*}
where
\begin{align*}
T_{11}& =i \Omega+\dfrac{i 3 \varepsilon}{k(n+\tau)}-i(n+\tau) a^{2}k\left(\dfrac{3}{8} \phi^{2} +\dfrac{3}{2} \dfrac{\rho ^{2}}{k^2}\right) , \\
T_{12} & =i(n+2+\tau) a^{2}k\left(\dfrac{3}{2} \dfrac{ \rho ^{2}}{k^2}-\dfrac{3}{8} \phi^{2} \right)-\dfrac{i a^{2} 3 \gamma \phi A_{2} }{(n+\tau)},\\
T_{21}&=i(n+\tau) a^{2}k\left(\dfrac{3}{2} \dfrac{\rho^{2}}{k^2}-\dfrac{3}{8} \phi^{2} \right) +\dfrac{i a^{2} 3 \gamma \phi A_{2}}{(n+2+\tau)},\\ T_{22}&=i \Omega + \dfrac{i 3 \varepsilon}{k(n+2+\tau)}-i(n+2+\tau) a^{2}k\left(\dfrac{3}{8} \phi^{2} +\dfrac{3}{2} \dfrac{\rho ^{2}}{k^2}\right),
\end{align*}
and $\varepsilon =\gamma^2-\gamma_c^2$ sufficiently small. We then consider the equation $\det(\mathcal{T}_{a}(\gamma,\tau)-(i \Omega + i \nu) \mathcal{I}_{a}) = 0$, where $\mathcal{I}_a$ is, to leading order, the $2\times 2$ identity matrix, and compute its discriminant $\operatorname{disc}_a(\varepsilon)$ as
\begin{align*}
\operatorname{disc}_a(\varepsilon)=\dfrac{36 \varepsilon^2}{k^2(n+\tau)^{2}(n+2+\tau)^{2}}&-\dfrac{36\gamma_c^2\phi^2a^4A_2^2}{(n+\tau)(n+2+\tau)}+9a^4k^2(n+\tau+1)^2\left(\dfrac{\rho^4}{k^4}+\dfrac{\phi^4}{16}\right)\\&+\dfrac{9a^4\rho^2\phi^2}{2k^2}(1-(n+\tau)(n+2+\tau))+O(a^2|\varepsilon|+ |a|^5).
\end{align*}
Note that all the collisions stated in Lemma \ref{lem:coll3} for $\Delta=2$ have $(n+\tau)(n+2+\tau)<0$, which implies that for $|\varepsilon|$ and $|a|$ sufficiently small the leading term in the discriminant is always positive for all $\rho$ and $\phi$. Therefore, we do not get any instability in the $\Delta =2$ case for a sufficiently small amplitude parameter $a$. \\
\paragraph{\underline{\textbf{(In)stability analysis for $\Delta\geq3$.}}}
For some $n \in \mathbb{Z}^\ast$ and a fixed $\Delta\geq3$, we have
\begin{equation}\label{e:1}
i \Omega_{n, \gamma_{c},\tau}=i \Omega_{n+\Delta, \gamma_{c},\tau}=i \Omega, \quad \tau\in (-1/2,1/2]
\end{equation}
Here $\tau=0$ corresponds to the periodic case and $\tau\neq0$ to the non-periodic case. We expand $\mathcal{H}_a(\gamma,\tau)$ as
\begin{align*}
\mathcal{H}_a(\gamma,\tau)=&\mathcal{H}_0(\gamma,\tau)+(\beta_2a^2+\beta_4a^4+\dots)(\partial_z+i\tau)+\alpha_1a(\partial_z+i\tau)(\cos z)+\dots\\&+\alpha_{\Delta}a^{\Delta}(\partial_z+i\tau)(\cos{(\Delta z)})+(i\delta_1a\sin z+\dots+i\delta_{\Delta}a^{\Delta}\sin{(\Delta z)})(\partial_z+i\tau)^{-1}.
\end{align*}
To explicitly obtain the values of all unknown coefficients in the expansion of $\mathcal{H}_a(\gamma,\tau)$, we require coefficients of higher powers of $a$ in the expansion of solution $w$. Calculating higher coefficients is difficult as the coefficients of the solution do not seem to have any apparent symmetry. Therefore, we pursue the instability analysis without calculating the unknown coefficients explicitly.
Following the same steps as in the previous subsection, we arrive at
\begin{align*}
\mathcal{T}_a(\gamma,\tau)&=\begin{pmatrix}
T_{11} & \ T_{12} \\ T_{21} & T_{22}\end{pmatrix}+O(a^{\Delta+1}),\\
\end{align*}
where
\begin{align*}
T_{11}& =i\Omega+\dfrac{i3\varepsilon}{k(n+\tau)}+i(n+\tau)(\beta_2a^2+\beta_4a^4+\dots) , \\
T_{12} & =\dfrac{ia^{\Delta}}{2}\left((n+\Delta+\tau)\alpha_{\Delta}-\dfrac{\delta_{\Delta}}{(n+\tau)}\right),\\
T_{21}&=\dfrac{ia^{\Delta}}{2}\left((n+\tau)\alpha_{\Delta}+\dfrac{\delta_{\Delta}}{n+\Delta+\tau}\right),\\ T_{22}&=i\Omega+\dfrac{i3\varepsilon}{k(n+\Delta+\tau)}+i(n+\Delta+\tau)(\beta_2a^2+\beta_4a^4+\dots),
\end{align*}
The resulting discriminant of the characteristic equation $\det(\mathcal{T}_{a}(\gamma,\tau)-(i \Omega + i \nu) \mathcal{I}_{a}) = 0$ is
\begin{align*}
\operatorname{disc}_a(\varepsilon) = \dfrac{9\Delta^2\varepsilon^2}{k^2(n+\tau)^2(n+\Delta+\tau)^2}+\Delta^2\beta_2^2a^4+O(a^2(|\varepsilon|+|a|^3)),
\end{align*}
which is positive for sufficiently small $|\varepsilon|$ and $|a|$; this implies that no eigenvalue of $\mathcal{H}_a(\gamma,\tau)$ bifurcates from the imaginary axis due to a collision. Hence Theorem \ref{t:3} follows.
\subsection*{Acknowledgement}
Bhavna and AKP are supported by the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), Government of India under grant
SRG/2019/000741. Bhavna is also supported by Junior Research Fellowships (JRF) by University Grant Commission (UGC), Government of India. SS is supported through the institute fellowship from MHRD and National Institute of Technology, Tiruchirappalli, India.
\bibliographystyle{amsalpha}
\section{\bf Introduction}
\setcounter{equation}{0}
\
If $A$ is a bounded self-adjoint operator on Hilbert space, the spectral theorem allows one,
for a Borel function $\varphi$ on the real line ${\Bbb R}$, to define the function $\varphi(A)$ of $A$.
We are going to study in this paper smoothness properties of the map $A\mapsto\varphi(A)$.
It is easy to see that if this map is differentiable (in the sense of Gateaux), then
$\varphi$ is continuously differentiable.
If $K$ is another bounded self-adjoint operator, consider the function $t\mapsto\varphi(A+tK)$,
$t\in{\Bbb R}$. In \cite{DK} it was shown that if $\varphi\in C^2({\Bbb R})$ (i.e., $\varphi$ is twice continuously
differentiable), then the map $t\mapsto\varphi(A+tK)$ is norm differentiable and
\begin{eqnarray}
\label{DKF}
\frac{d}{ds}\big(\varphi(A+sK)\big)\Big|_{s=0}=\iint\frac{\varphi(\l)-\varphi(\mu)}{\l-\mu}\,dE_A(\l)K\,dE_A(\mu),
\end{eqnarray}
where $E_A$ is the spectral measure of $A$. The expression on the right-hand side of \rf{DKF}
is a {\it double operator integral}. Later Birman and Solomyak developed their beautiful theory of
double operator integrals in \cite{BS1}, \cite{BS2}, and \cite{BS3} (see also \cite{BS4});
we discuss briefly this theory in \S 3.
Throughout this paper {\it if we integrate a function on ${\Bbb R}^d$ (or ${\Bbb T}^d$) and the domain of
integration is not specified, it is assumed that the domain of integration is ${\Bbb R}^d$ (or ${\Bbb T}^d$).}
Birman and Solomyak relaxed in \cite{BS3} the assumptions on $\varphi$ under which \rf{DKF} holds.
They also considered the case of an unbounded self-adjoint operator $A$. However, it turned out that
the condition $\varphi\in C^1({\Bbb R})$ is not sufficient for the differentiability of the function
$t\mapsto\varphi(A+tK)$ even in the case of bounded $A$. This can be deduced from an explicit example
constructed by Farforovskaya \cite{F2} (in fact, this can also be deduced from an example given
in \cite{F1}).
In \cite{Pe1} a necessary condition on $\varphi$ for the differentiability of the function
\linebreak$t\mapsto\varphi(A+tK)$ for all $A$ and $K$ was found. That necessary condition was deduced from the
nuclearity criterion for Hankel operators (see the monograph \cite{Pe4}) and it implies that
the condition $\varphi\in C^1({\Bbb R})$ is not sufficient. We also refer the reader to \cite{Pe2} where a
necessary condition is given in the case of an unbounded self-adjoint operator $A$.
Sharp sufficient conditions on $\varphi$ for the differentiability of the function
$t\mapsto\varphi(A+tK)$ were obtained in \cite{Pe1} in the case of bounded self-adjoint operators
and in \cite{Pe2} in the case of an unbounded self-adjoint operator $A$. In particular, it follows
from the results of \cite{Pe2} that if $\varphi$ belongs to the homogeneous Besov space
$B_{\be1}^1({\Bbb R}),$\footnote{see \S 2 for information on Besov spaces}
$A$ is a self-adjoint operator and $K$ is a bounded self-adjoint operator, then the function
$t\mapsto\varphi(A+tK)$ is differentiable and \rf{DKF} holds (see \S 5 of this paper for details).
In the case of bounded self-adjoint operators formula \rf{DKF} holds if $\varphi$ belongs to
$B_{\be1}^1({\Bbb R})$ locally (see \cite{Pe1}).
A similar problem for unitary operators was considered in \cite{BS3} and
later in \cite{Pe1}. Let $\varphi$ be a function on the unit circle ${\Bbb T}$.
For a unitary operator $U$ and a bounded self-adjoint operator $A$, consider the function
$t\mapsto\varphi(e^{{\rm i}tA}U)$. It was shown in \cite{Pe1} that if $\varphi$ belongs to the Besov space
$B_{\be1}^1$, then the function $t\mapsto\varphi(e^{{\rm i}tA}U)$ is differentiable and
\begin{eqnarray}
\label{du}
\frac{d}{ds}\Big(\varphi(e^{{\rm i}sA}U)\Big)\Big|_{s=0}
={\rm i}\left(\iint\frac{\varphi(\l)-\varphi(\mu)}{\l-\mu}\,dE_U(\l)A\,dE_U(\mu)\right)U
\end{eqnarray}
(earlier this formula was obtained in \cite{BS3} under more restrictive assumptions on $\varphi$). We refer
the reader to \cite{Pe1} and \cite{Pe2} for necessary conditions. We also mention here the paper
\cite {ABF}, which slightly improves the sufficient condition
$\varphi\in B_{\be1}^1$.
The problem of the existence of higher derivatives of the function $t\mapsto\varphi(A+tK)$
was studied by Sten'kin in \cite{S}. He showed that under certain conditions on $\varphi$ the function
$t\mapsto\varphi(A+tK)$ has $m$ derivatives and
\begin{eqnarray}
\label{SF}
\frac{d^m}{ds^m}\big(\varphi(A+sK)\big)\Big|_{s=0}\!\!=
m!\underbrace{\int\!\!\cdots\!\!\int}_{m}(\dg^{m}\varphi)(\l_1,\cdots,\l_{m+1})
\,dE_A(\l_1)K\cdots K\,dE_A(\l_{m+1}),
\end{eqnarray}
where the {\it divided differences $\dg^k\varphi$ of order $k$} are defined inductively by
$$
\dg^0\varphi\stackrel{\mathrm{def}}{=}\varphi,
$$
$$
(\dg^{k}\varphi)(\l_1,\cdots,\l_{k+1})\stackrel{\mathrm{def}}{=}
\frac{(\dg^{k-1}\varphi)(\l_1,\cdots,\l_k)-(\dg^{k-1}\varphi)(\l_2,\cdots,\l_{k+1})}{\l_{1}-\l_{k+1}},\quad k\ge1
$$
(the definition does not depend on the order of the variables). We are also going to use
the notation
$$
\dg\varphi=\dg^1\varphi.
$$
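The inductive definition translates directly into code. The sketch below (Python, for illustration) evaluates $\dg^k\varphi$ at pairwise distinct points and exhibits the independence of the order of the variables; for $\varphi(x)=x^2$ one gets $(\dg\varphi)(\l_1,\l_2)=\l_1+\l_2$ and $\dg^2\varphi\equiv1$.

```python
def divided_difference(phi, points):
    """(Dg^k phi)(l_1, ..., l_{k+1}) for k = len(points) - 1, computed
    by the inductive definition; the points are assumed pairwise distinct."""
    if len(points) == 1:
        return phi(points[0])
    left = divided_difference(phi, points[:-1])
    right = divided_difference(phi, points[1:])
    return (left - right) / (points[0] - points[-1])

sq = lambda x: x * x
assert divided_difference(sq, [1.0, 2.0]) == 3.0        # l1 + l2
assert divided_difference(sq, [1.0, 2.0, 4.0]) == 1.0   # Dg^2 (x^2) = 1
# the value does not depend on the order of the variables
assert divided_difference(sq, [4.0, 1.0, 2.0]) == 1.0
```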
The Birman--Solomyak theory of double
operator integrals does not generalize to the case of multiple operator integrals. In \cite{S}
to define the multiple operator integrals
$$
\underbrace{\int\cdots\int}_k\psi(\l_1,\cdots,\l_k)
\,dE_1(\l_1)T_1\,dE_2(\l_2)T_2\cdots T_{k-1}\,dE_k(\l_k),
$$
Sten'kin considered iterated integration and defined such integrals
for a certain class of functions $\psi$. However, in the case $k=2$ that approach leads to a
considerably smaller class of functions $\psi$ than the Birman--Solomyak approach.
In particular, the function $\psi$ identically equal to $1$ is not integrable in the sense of
the approach developed in \cite{S}, while it is very natural to assume that
$$
\underbrace{\int\cdots\int}_k
\,dE_1(\l_1)T_1\,dE_2(\l_2)T_2\cdots T_{k-1}\,dE_k(\l_k)=T_1T_2\cdots T_{k-1}.
$$
In \S 3 of this paper we use a different approach to the definition of multiple operator integrals.
The approach is based on integral projective tensor products. In the case $k=2$ our approach
produces the class of integrable functions that coincides with the class of so-called
Schur multipliers, which is the maximal possible class of integrable functions
in the case $k=2$ (see \S 4).
Our approach allows us to improve Sten'kin's results on the existence of higher order derivatives of
the function $t\mapsto\varphi(A+tK)$. We prove in \S 5 that formula \rf{SF} holds for functions
$\varphi$ in the intersection $B_{\be1}^m({\Bbb R})\bigcap B_{\be1}^1({\Bbb R})$ of homogeneous Besov spaces.
Note that the Besov spaces $B_{\be1}^1$ and $B_{\be1}^1({\Bbb R})$ appear in a natural way when
studying the applicability of the Lifshits--Krein trace formula for trace class perturbations
(see \cite{Pe1} and \cite{Pe2}), while the Besov spaces $B_{\be1}^2$ and $B_{\be1}^2({\Bbb R})$ arise
when studying the applicability of the Koplienko--Neidhardt trace formulae for Hilbert--Schmidt
perturbations (see \cite{Pe5}).
It is also interesting to note that the Besov class $B_{\be1}^2({\Bbb R})$ appears in a natural way
in perturbation theory in \cite{Pe3}, where the following problem is studied: in which case
$$
\varphi(T_f)-T_{\varphi\circ f}\in {\boldsymbol S}_1?
$$
($T_g$ is a Toeplitz operator with symbol $g$.)
In \S 4 we obtain similar results in the case of unitary operators and generalize formula
\rf{du} to the case of higher derivatives.
We collect in \S 3 basic information on Besov spaces.
\
\section{\bf Besov spaces}
\setcounter{equation}{0}
\
Let $0<p,\,q\le\infty$ and $s\in{\Bbb R}$. The Besov class $B^s_{pq}$ of functions (or
distributions) on ${\Bbb T}$ can be defined in the following way. Let $w$ be a $C^\infty$ function on ${\Bbb R}$ such
that
\begin{eqnarray}
\label{v}
w\ge0,\quad\operatorname{supp} w\subset\left[\frac12,2\right],\quad\mbox{and}\quad
\sum_{n=-\infty}^\infty w(2^{n}x)=1\quad\mbox{for}\quad x>0.
\end{eqnarray}
Consider the trigonometric polynomials $W_n$, and $W_n^\#$ defined by
$$
W_n(z)=\sum_{k\in{\Bbb Z}}w\left(\frac{k}{2^n}\right)z^k,\quad n\ge1,\quad W_0(z)=\bar z+1+z,\quad
\mbox{and}\quad W_n^\#(z)=\overline{W_n(z)},\quad n\ge0.
$$
Then for each distribution $\varphi$ on ${\Bbb T}$,
$$
\varphi=\sum_{n\ge0}\varphi*W_n+\sum_{n\ge1}\varphi*W^\#_n.
$$
The Besov class $B^s_{pq}$ consists of functions (in the case $s>0$) or distributions $\varphi$ on ${\Bbb T}$
such that
$$
\big\{\|2^{ns}\varphi*W_n\|_{L^p}\big\}_{n\ge0}\in\ell^q\quad\mbox{and}
\quad\big\{\|2^{ns}\varphi*W^\#_n\|_{L^p}\big\}_{n\ge1}\in\ell^q.
$$
Besov classes admit many other descriptions. In particular, for $s>0$, the space $B^s_{pq}$ admits the
following characterization. A function $\varphi$ belongs to $B^s_{pq}$, $s>0$, if and only if
$$
\int_{\Bbb T}\frac{\|\Delta^n_\t \varphi\|_{L^p}^q}{|1-\t|^{1+sq}}d{\boldsymbol m}(\t)<\infty\quad\mbox{for}\quad q<\infty
$$
and
$$
\sup_{\t\ne1}\frac{\|\Delta^n_\t \varphi\|_{L^p}}{|1-\t|^s}<\infty\quad\mbox{for}\quad q=\infty,
$$
where ${\boldsymbol m}$ is normalized Lebesgue measure on ${\Bbb T}$, $n$ is an integer greater than $s$ and $\Delta_\t$ is the
difference operator:
$(\Delta_\t f)(\zeta)=f(\t\zeta)-f(\zeta)$, $\zeta\in{\Bbb T}$.
To define (homogeneous) Besov classes $B^s_{pq}({\Bbb R})$ on the real line, we consider the same function $w$
as in \rf{v} and define the functions $W_n$ and $W^\#_n$ on ${\Bbb R}$ by
$$
{\mathcal F} W_n(x)=w\left(\frac{x}{2^n}\right),\quad{\mathcal F} W^\#_n(x)={\mathcal F} W_n(-x),\quad n\in{\Bbb Z},
$$
where ${\mathcal F}$ is the {\it Fourier transform}. The Besov class $B^s_{pq}({\Bbb R})$ consists of
distributions $\varphi$ on ${\Bbb R}$ such that
$$
\{\|2^{ns}\varphi*W_n\|_{L^p}\}_{n\in{\Bbb Z}}\in\ell^q({\Bbb Z})\quad\mbox{and}
\quad\{\|2^{ns}\varphi*W^\#_n\|_{L^p}\}_{n\in{\Bbb Z}}\in\ell^q({\Bbb Z}).
$$
According to this definition, the space $B^s_{pq}({\Bbb R})$ contains all polynomials. However, it is not
necessary to include all polynomials.
In this paper we need only Besov spaces $B_{\be1}^d$, $d\in{\Bbb Z}_+$. In the case of functions
on the real line it is convenient to restrict the degree of polynomials in $B_{\be1}^d({\Bbb R})$ by $d$.
It is also convenient to consider the following seminorm on
$B_{\be1}^d({\Bbb R})$:
$$
\|\varphi\|_{B_{\be1}^d({\Bbb R})}=\sup_{x\in{\Bbb R}}|\varphi^{(d)}(x)|+\sum_{n\in{\Bbb Z}}2^{nd}\|\varphi*W_n\|_{L^\infty}+
\sum_{n\in{\Bbb Z}}2^{nd}\|\varphi*W^\#_n\|_{L^\infty}.
$$
The classes $B_{\be1}^d({\Bbb R})$ can be described as classes of function on ${\Bbb R}$
in the following way:
$$
\varphi\in B_{\be1}^d({\Bbb R})\quad\Longleftrightarrow\quad
\sup_{t\in{\Bbb R}}|\varphi^{(d)}(t)|+\int\limits_{\Bbb R}\frac{\|\Delta_t^{d+1}\varphi\|_{L^\infty}}{|t|^{d+1}}\,dt<\infty,
$$
where $\Delta_t$ is the difference operator defined by $(\Delta_t\varphi)(x)=\varphi(x+t)-\varphi(x)$.
We refer the reader to \cite{Pee} for more detailed information on Besov classes.
\
\section{\bf Multiple operator integrals}
\setcounter{equation}{0}
\
In this section we define multiple operator integrals using integral projective tensor products of
$L^\infty$-spaces. However, we begin with a brief review of the theory of double operator integrals
that was developed by Birman and Solomyak in \cite{BS1}, \cite{BS2}, and \cite{BS3}. We state a
description of the Schur multipliers associated with two spectral measures in terms of integral
projective tensor products. This suggests the idea to define multiple operator integrals with the
help of integral projective tensor products.
\medskip
{\bf Double operator integrals.} Let $({\mathcal X},E)$ and $({\mathcal Y},F)$ be spaces with spectral measures $E$ and $F$
on a Hilbert space ${\mathcal H}$. Let us first define double operator integrals
\begin{eqnarray}
\label{doi}
\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\psi(\l,\mu)\,d E(\l)T\,dF(\mu),
\end{eqnarray}
for bounded measurable functions $\psi$ and operators $T$
of Hilbert--Schmidt class ${\boldsymbol S}_2$. Consider the spectral measure ${\mathcal E}$ whose values are orthogonal
projections on the Hilbert space ${\boldsymbol S}_2$, which is defined by
$$
{\mathcal E}(\L\times\Delta)T=E(\L)TF(\Delta),\quad T\in{\boldsymbol S}_2,
$$
$\L$ and $\Delta$ being measurable subsets of ${\mathcal X}$ and ${\mathcal Y}$. Then ${\mathcal E}$ extends to a spectral measure on
${\mathcal X}\times{\mathcal Y}$ and if $\psi$ is a bounded measurable function on ${\mathcal X}\times{\mathcal Y}$, by definition,
$$
\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\psi(\l,\mu)\,d E(\l)T\,dF(\mu)=
\left(\,\,\int\limits_{{\mathcal X}\times{\mathcal Y}}\psi\,d{\mathcal E}\right)T.
$$
Clearly,
$$
\left\|\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\psi(\l,\mu)\,dE(\l)T\,dF(\mu)\right\|_{{\boldsymbol S}_2}
\le\|\psi\|_{L^\infty}\|T\|_{{\boldsymbol S}_2}.
$$
If
$$
\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\psi(\l,\mu)\,d E(\l)T\,dF(\mu)\in{\boldsymbol S}_1
$$
for every $T\in{\boldsymbol S}_1$, we say that $\psi$ is a {\it Schur multiplier (of ${\boldsymbol S}_1$) associated with
the spectral measure $E$ and $F$}. In
this case by duality the map
\begin{eqnarray}
\label{tra}
T\mapsto\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\psi(\l,\mu)\,d E(\l)T\,dF(\mu),\quad T\in {\boldsymbol S}_2,
\end{eqnarray}
extends to a bounded linear transformer on the space of bounded linear operators on ${\mathcal H}$.
We denote by $\fM(E,F)$ the space of Schur multipliers of ${\boldsymbol S}_1$ associated with
the spectral measures $E$ and $F$. The norm of $\psi$ in $\fM(E,F)$ is, by definition, the norm of the
transformer \rf{tra} on the space of bounded linear operators.
In \cite{BS3} it was shown that if $A$ is a self-adjoint operator (not necessarily bounded),
$K$ is a bounded self-adjoint operator and if
$\varphi$ is a continuously differentiable
function on ${\Bbb R}$ such that the divided difference $\dg\varphi$ is a Schur multiplier
of ${\boldsymbol S}_1$ with respect to the spectral measures of $A$ and $A+K$, then
\begin{eqnarray}
\label{BSF}
\varphi(A+K)-\varphi(A)=\iint\frac{\varphi(\l)-\varphi(\mu)}{\l-\mu}\,dE_{A+K}(\l)K\,dE_A(\mu)
\end{eqnarray}
and
$$
\|\varphi(A+K)-\varphi(A)\|\le\operatorname{const}\|\varphi\|_{\fM(E_A,E_{A+K})}\|K\|,
$$
i.e., {\it $\varphi$ is an operator Lipschitz function}.
It is easy to see that if a function $\psi$ on ${\mathcal X}\times{\mathcal Y}$ belongs to the {\it projective tensor
product}
$L^\infty(E)\hat\otimes L^\infty(F)$ of $L^\infty(E)$ and $L^\infty(F)$ (i.e., $\psi$ admits a representation
$$
\psi(\l,\mu)=\sum_{n\ge0}f_n(\l)g_n(\mu),
$$
where $f_n\in L^\infty(E)$, $g_n\in L^\infty(F)$, and
$$
\sum_{n\ge0}\|f_n\|_{L^\infty}\|g_n\|_{L^\infty}<\infty),
$$
then $\psi\in\fM(E,F)$.
For such functions $\psi$ we have
$$
\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\psi(\l,\mu)\,d E(\l)T\,dF(\mu)=
\sum_{n\ge0}\left(\,\int\limits_{\mathcal X} f_n\,dE\right)T\left(\,\int\limits_{\mathcal Y} g_n\,dF\right).
$$
More generally, $\psi$ is a Schur multiplier of ${\boldsymbol S}_1$ if $\psi$
belongs to the {\it integral projective tensor product} $L^\infty(E)\hat\otimes_{\rm i}
L^\infty(F)$ of $L^\infty(E)$ and $L^\infty(F)$, i.e., $\psi$ admits a representation
\begin{eqnarray}
\label{ipt}
\psi(\l,\mu)=\int_Q f(\l,x)g(\mu,x)\,d\sigma(x),
\end{eqnarray}
where $(Q,\sigma)$ is a measure space, $f$ is a measurable function on ${\mathcal X}\times Q$,
$g$ is a measurable function on ${\mathcal Y}\times Q$, and
\begin{eqnarray}
\label{ir}
\int_Q\|f(\cdot,x)\|_{L^\infty(E)}\|g(\cdot,x)\|_{L^\infty(F)}\,d\sigma(x)<\infty.
\end{eqnarray}
If $\psi\in L^\infty(E)\hat\otimes_{\rm i}L^\infty(F)$, then
$$
\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\psi(\l,\mu)\,d E(\l)T\,dF(\mu)=
\int\limits_Q\left(\,\int\limits_{\mathcal X} f(\l,x)\,dE(\l)\right)T
\left(\,\int\limits_{\mathcal Y} g(\mu,x)\,dF(\mu)\right)\,d\sigma(x).
$$
It turns out that all Schur multipliers can be obtained in this way. More precisely, the following
result holds (see \cite{Pe1}):
\medskip
{\bf Theorem on Schur multipliers.} {\em Let $\psi$ be a measurable function on
${\mathcal X}\times{\mathcal Y}$. The following statements are equivalent:
{\rm (i)} $\psi\in\fM(E,F)$;
{\rm (ii)} $\psi\in L^\infty(E)\hat\otimes_{\rm i}L^\infty(F)$;
{\rm (iii)} there exist measurable functions $f$ on ${\mathcal X}\times Q$ and $g$ on ${\mathcal Y}\times Q$ such that
{\em\rf{ipt}} holds and
\begin{eqnarray}
\label{bs}
\left\|\int_Q|f(\cdot,x)|^2\,d\sigma(x)\right\|_{L^\infty(E)}
\left\|\int_Q|g(\cdot,x)|^2\,d\sigma(x)\right\|_{L^\infty(F)}<\infty.
\end{eqnarray}
}
Note that the implication (iii)$\Rightarrow$(ii) was established in \cite{BS3}. Note also that
in the case of matrix Schur multipliers (this corresponds to discrete spectral measures
of multiplicity 1) the equivalence of (i) and (ii) was proved in \cite{Be}.
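For instance, in the simplest case ${\mathcal X}={\mathcal Y}={\Bbb Z}_+$, when $E$ and $F$ are the spectral
measures on $\ell^2$ defined by $E(\{j\})=F(\{j\})=(\cdot,e_j)e_j$, $\{e_j\}_{j\ge0}$ being the standard
orthonormal basis, we have for $T=\{t_{jk}\}_{j,k\ge0}$
$$
\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\psi(\l,\mu)\,dE(\l)T\,dF(\mu)=\{\psi(j,k)t_{jk}\}_{j,k\ge0},
$$
i.e., the transformer \rf{tra} is entrywise multiplication of matrices, which explains the term
``Schur multiplier''.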
It is interesting to observe that if $f$ and $g$ satisfy \rf{ir}, then they also satisfy
\rf{bs}, but the converse is false. However, if $\psi$ admits a representation of the form \rf{ipt}
with $f$ and $g$ satisfying \rf{bs}, then it also admits a (possibly different) representation of the
form \rf{ipt} with $f$ and $g$ satisfying \rf{ir}.
Note that in a similar way we can define the {\it projective tensor product} $A\hat\otimes B$
and the {\it integral projective tensor product} $A\hat\otimes_{\rm i} B$ of
arbitrary Banach functions spaces $A$ and $B$.
The equivalence of (i) and (ii) in the Theorem on Schur multipliers suggests a way to define
multiple operator integrals.
\medskip
{\bf Multiple operator integrals.}
We can easily extend the definition of the projective tensor product and the integral
projective tensor product to three or more function spaces.
Consider first the case of triple operator integrals.
Let $({\mathcal X},E)$, $({\mathcal Y},F)$, and $(\cZ,G)$
be spaces with spectral measures $E$, $F$, and $G$ on a Hilbert space ${\mathcal H}$. Suppose that
$\psi$ belongs to the integral projective tensor product
$L^\infty(E)\hat\otimes_{\rm i}L^\infty(F)\hat\otimes_{\rm i}L^\infty(G)$, i.e., $\psi$ admits a representation
\begin{eqnarray}
\label{ttp}
\psi(\l,\mu,\nu)=\int_Q f(\l,x)g(\mu,x)h(\nu,x)\,d\sigma(x),
\end{eqnarray}
where $(Q,\sigma)$ is a measure space, $f$ is a measurable function on ${\mathcal X}\times Q$,
$g$ is a measurable function on ${\mathcal Y}\times Q$, $h$ is a measurable function on $\cZ\times Q$,
and
\begin{eqnarray}
\label{ner}
\int_Q\|f(\cdot,x)\|_{L^\infty(E)}\|g(\cdot,x)\|_{L^\infty(F)}\|h(\cdot,x)\|_{L^\infty(G)}\,d\sigma(x)<\infty.
\end{eqnarray}
We define the norm $\|\psi\|_{L^\infty\hat\otimes_{\rm i}L^\infty\hat\otimes_{\rm i}L^\infty}$ in the
space $L^\infty(E)\hat\otimes_{\rm i}L^\infty(F)\hat\otimes_{\rm i}L^\infty(G)$ as the infimum
of the left-hand side of \rf{ner} over all representations
\rf{ttp}.
Suppose now that $T_1$ and $T_2$ are bounded linear operators on ${\mathcal H}$. For a function $\psi$ in
$L^\infty(E)\hat\otimes_{\rm i}L^\infty(F)\hat\otimes_{\rm i}L^\infty(G)$ of the form \rf{ttp}, we put
\begin{align}
\label{opr}
&\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\int\limits_\cZ\psi(\l,\mu,\nu)
\,d E(\l)T_1\,dF(\mu)T_2\,dG(\nu)\nonumber\\[.2cm]
\stackrel{\mathrm{def}}{=}&\int\limits_Q\left(\,\int\limits_{\mathcal X} f(\l,x)\,dE(\l)\right)T_1
\left(\,\int\limits_{\mathcal Y} g(\mu,x)\,dF(\mu)\right)T_2
\left(\,\int\limits_\cZ h(\nu,x)\,dG(\nu)\right)\,d\sigma(x).
\end{align}
The following lemma shows that the triple operator integral
$$
\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\int\limits_\cZ\psi(\l,\mu,\nu)\,d E(\l)T_1\,dF(\mu)T_2\,dG(\nu)
$$
is well-defined.
\begin{lem}
\label{kor}
Suppose that $\psi\in L^\infty(E)\hat\otimes_{\rm i}L^\infty(F)\hat\otimes_{\rm i}L^\infty(G)$. Then the
right-hand side of {\em\rf{opr}} does not depend on the choice of a representation {\em\rf{ttp}}
and
\begin{eqnarray}
\label{in}
\left\|\int\limits_{\mathcal X}\int\limits_{\mathcal Y}\int\limits_\cZ\psi(\l,\mu,\nu)
\,dE(\l)T_1\,dF(\mu)T_2\,dG(\nu)\right\|
\le\|\psi\|_{L^\infty\hat\otimes_{\rm i}L^\infty\hat\otimes_{\rm i}L^\infty}\cdot\|T_1\|\cdot\|T_2\|.
\end{eqnarray}
\end{lem}
{\bf Proof. } To show that the right-hand side of \rf{opr} does not depend on the choice of a representation
\rf{ttp}, it suffices to show that if the right-hand side of \rf{ttp} is the zero function,
then the right-hand side of \rf{opr} is the zero operator. Denote our Hilbert space by ${\mathcal H}$
and let $\zeta\in{\mathcal H}$. We have
$$
\int\limits_\cZ\left(\int\limits_Q f(\l,x)g(\mu,x)h(\nu,x)\,d\sigma(x)\right)\,dG(\nu)=0\quad
\mbox{for almost all $\l$ and $\mu$},
$$
and so for almost all $\l$ and $\mu$,
\begin{align*}
&\int\limits_Q f(\l,x)g(\mu,x)T_2\left(\,\int\limits_\cZ h(\nu,x)\,dG(\nu)\right)\zeta\,d\sigma(x)\\=
&T_2\int\limits_\cZ\left(\int\limits_Q f(\l,x)g(\mu,x)h(\nu,x)\,d\sigma(x)\right)\,dG(\nu)\zeta={\boldsymbol{0}}.
\end{align*}
Putting
$$
\xi_x=T_2\left(\,\int\limits_\cZ h(\nu,x)\,dG(\nu)\right)\zeta,
$$
we obtain
$$
\int\limits_Q f(\l,x)g(\mu,x)\xi_x\,d\sigma(x)={\boldsymbol{0}}\quad\mbox{for almost all}\quad \l\quad\mbox{and}\quad
\mu.
$$
We can realize the Hilbert space ${\mathcal H}$ as a space of vector functions so that integration
with respect to the spectral measure $F$ corresponds to multiplication. It follows that
$$
\int\limits_Q \!f(\l,x)T_1\!\left(\int\limits_{\mathcal Y} \!g(\mu,x)\,dF(\mu)\!\right)\!\xi_x\,d\sigma(x)=
T_1\!\int\limits_{\mathcal Y}\!\int\limits_Q \!f(\l,x)g(\mu,x)\xi_x\,d\sigma(x)\,dF(\mu)={\boldsymbol{0}}
$$
for almost all $\l$. Let now
$$
\eta_x=T_1\left(\int\limits_{\mathcal Y} \!g(\mu,x)\,dF(\mu)\right)\xi_x.
$$
We have
$$
\int\limits_Q f(\l,x)\eta_x\,d\sigma(x)={\boldsymbol{0}}\quad\mbox{for almost all}\quad\l.
$$
Now we can realize ${\mathcal H}$ as a space of vector functions so that integration
with respect to the spectral measure $E$ corresponds to multiplication. It follows that
$$
\int\limits_Q \left(\int\limits_{\mathcal X} f(\l,x)\,dE(\l)\right)\eta_x\,d\sigma(x)=
\int\limits_{\mathcal X}\int\limits_Q f(\l,x)\eta_x\,d\sigma(x)\,dE(\l)={\boldsymbol{0}}.
$$
This exactly means that the right-hand side of \rf{opr} is the zero operator.
Inequality \rf{in} follows immediately from \rf{opr}. $\blacksquare$
In a similar way we can define multiple operator integrals
$$
\underbrace{\int\cdots\int}_k\psi(\l_1,\cdots,\l_k)
\,dE_1(\l_1)T_1\,dE_2(\l_2)T_2\cdots T_{k-1}\,dE_k(\l_k)
$$
for functions $\psi$ in the integral projective tensor product
$\underbrace{L^\infty(E_1)\hat\otimes_{\rm i}\cdots\hat\otimes_{\rm i}L^\infty(E_k)}_k$
(the latter space is defined in the same way as in the case $k=2$).
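In particular, for an elementary tensor $\psi(\l_1,\cdots,\l_k)=f_1(\l_1)\cdots f_k(\l_k)$ with
$f_j\in L^\infty(E_j)$, the definition gives
$$
\underbrace{\int\cdots\int}_k\psi(\l_1,\cdots,\l_k)\,dE_1(\l_1)T_1\cdots T_{k-1}\,dE_k(\l_k)=
\left(\,\int f_1\,dE_1\right)T_1\left(\,\int f_2\,dE_2\right)\cdots T_{k-1}\left(\,\int f_k\,dE_k\right),
$$
and the obvious analog of Lemma \ref{kor} shows that the multiple operator integral is well defined and its
norm is at most $\|\psi\|\cdot\|T_1\|\cdots\|T_{k-1}\|$, where $\|\psi\|$ is the norm of $\psi$ in the
integral projective tensor product.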
\
\section{\bf The case of unitary operators}
\setcounter{equation}{0}
\
Let $U$ be a unitary operator and let $A$ be a bounded
self-adjoint operator on Hilbert space. For $t\in{\Bbb R}$, we put
$$
U_t=e^{{\rm i}tA}U.
$$
In this section we obtain sharp conditions for the existence of higher operator derivatives
of the function $t\mapsto \varphi(U_t)$.
Recall that it was proved in \cite{Pe1} that for a function $\varphi$ in the Besov space $B_{\be1}^1$
the divided difference $\dg\varphi$ belongs to the projective tensor product
$C({\Bbb T})\hat\otimes C({\Bbb T})$, and so for arbitrary unitary operators $U$ and $V$
the following formula holds:
\begin{eqnarray}
\label{vp}
\varphi(V)-\varphi(U)=\iint\frac{\varphi(\l)-\varphi(\mu)}{\l-\mu}\,dE_V(\l)(V-U)\,dE_U(\mu).
\end{eqnarray}
First we state the main results of this section for second derivatives.
\begin{thm}
\label{tens}
If $\varphi\in B_{\be1}^2$, then
$$
(\dg^2\varphi)\in C({\Bbb T})\hat\otimes C({\Bbb T})\hat\otimes C({\Bbb T}).
$$
\end{thm}
\begin{thm}
\label{uni}
Let $\varphi$ be a function in the Besov class $B^2_{\be1}$. Then the function
\linebreak$t\mapsto \varphi(U_t)$
has a second derivative and
\begin{eqnarray}
\label{vpu}
\frac{d^2}{ds^2}\big(\varphi(U_s)\big)\Big|_{s=0}=-2\left(\iiint(\dg^2\varphi)(\l,\mu,\nu)
\,dE_U(\l)A\,dE_U(\mu)A\,dE_U(\nu)\right)U^2.
\end{eqnarray}
\end{thm}
Note that by Theorem \ref{tens}, the right-hand side of \rf{vpu} makes sense and determines a bounded
linear operator.
First we prove Theorem \ref{tens} and then we deduce from it
Theorem \ref{uni}.
\medskip
{\bf Proof of Theorem \ref{tens}.} It is easy to see that
\begin{eqnarray}
\label{d2}
(\dg^2\varphi)(z_1,z_2,z_3)=\sum_{i,j,k\ge0}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k+
\sum_{i,j,k\le-1}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k,
\end{eqnarray}
where $\hat\varphi(n)$ is the $n$th Fourier coefficient of $\varphi$. We prove that
$$
\sum_{i,j,k\ge0}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k\in C({\Bbb T})\hat\otimes C({\Bbb T})\hat\otimes C({\Bbb T}).
$$
The fact that
$$
\sum_{i,j,k\le-1}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k\in C({\Bbb T})\hat\otimes C({\Bbb T})\hat\otimes C({\Bbb T})
$$
can be proved in the same way.
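As a quick check of \rf{d2}, take $\varphi(z)=z^3$; then
$$
(\dg\varphi)(z_1,z_2)=z_1^2+z_1z_2+z_2^2\quad\mbox{and}\quad(\dg^2\varphi)(z_1,z_2,z_3)=z_1+z_2+z_3,
$$
which agrees with the first sum in \rf{d2}: $\hat\varphi(i+j+k+2)\ne0$ precisely when $i+j+k=1$.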
Clearly, we can assume that $\hat\varphi(j)=0$ for $j<0$.
We have
\begin{align*}
\sum_{i,j,k\ge0}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k&=\sum_{i,j,k\ge0}\a_{ijk}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k\\
&+\sum_{i,j,k\ge0}\b_{ijk}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k\\
&+\sum_{i,j,k\ge0}\gamma_{ijk}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k,
\end{align*}
where
$$
\a_{ijk}=\left\{\begin{array}{ll}\frac13,&i=j=k=0,\\[.2cm]
\frac{i}{i+j+k},& i+j+k\ne0;
\end{array}\right.
$$
\medskip
$$
\b_{ijk}=\left\{\begin{array}{ll}\frac13,&i=j=k=0,\\[.2cm]
\frac{j}{i+j+k},& i+j+k\ne0;
\end{array}\right.
$$
and
$$
\gamma_{ijk}=\left\{\begin{array}{ll}\frac13,&i=j=k=0,\\[.2cm]
\frac{k}{i+j+k},& i+j+k\ne0.
\end{array}\right.
$$
Clearly, it suffices to show that
\begin{eqnarray}
\label{nep}
\sum_{i,j,k\ge0}\a_{ijk}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k\in C({\Bbb T})\hat\otimes C({\Bbb T})\hat\otimes C({\Bbb T}).
\end{eqnarray}
It is easy to see that
$$
\sum_{i,j,k\ge0}\a_{ijk}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k=
\sum_{j,k\ge0}\left(\Big(\big((S^*)^{j+k+2}\varphi\big)*\sum_{i\ge0}\a_{i+j+k}z^i\Big)(z_1)\right)z_2^jz_3^k,
$$
where $S^*$ is the backward shift, i.e., $(S^*)^k\varphi={\Bbb P}_+\bar z^k\varphi$ (${\Bbb P}_+$
is the orthogonal projection from $L^2$ onto the Hardy class $H^2$).
Thus
$$
\left\|\sum_{i,j,k\ge0}\a_{ijk}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k\right\|_{L^\infty\hat\otimes
L^\infty\hat\otimes L^\infty}\!\!\!\le
\sum_{j,k\ge0}\left\|\big((S^*)^{j+k+2}\varphi\big)*\sum_{i\ge0}\a_{i+j+k}z^i\right\|_{L^\infty}.
$$
Put
$$
Q_m(z)=\sum_{i\ge m}\frac{i-m}{i}z^i,\quad m>0,\quad \mbox{and}\quad
Q_0(z)=\frac13+\sum_{i\ge1}z^i.
$$
Then it is easy to see that
$$
\left\|\big((S^*)^{j+k+2}\varphi\big)*\sum_{i\ge0}\a_{i+j+k}z^i\right\|_{L^\infty}=\|\psi*Q_{j+k}\|_{L^\infty},
$$
where $\psi=(S^*)^2\varphi$,
and so
\begin{align*}
\left\|\sum_{i,j,k\ge0}\a_{ijk}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k\right\|_{L^\infty\hat\otimes
L^\infty\hat\otimes L^\infty}&\le\sum_{j,k\ge0}\|\psi*Q_{j+k}\|_{L^\infty}\\
&=\sum_{m\ge0}(m+1)\|\psi*Q_m\|_{L^\infty}.
\end{align*}
Consider the function $r$ on ${\Bbb R}$ defined by
$$
r(x)=\left\{\begin{array}{ll}1,&|x|\le1,\\[.2cm]\frac1x,&|x|\ge1.
\end{array}\right.
$$
It is easy to see that the Fourier transform ${\mathcal F} r$ of $r$ belongs to $L^1({\Bbb R})$.
Define the functions $R_n$, $n\ge1$, on ${\Bbb T}$ by
$$
R_n(\zeta)=\sum_{k\in{\Bbb Z}}r\left(\frac kn\right)\zeta^k.
$$
\begin{lem}
\label{Hn}
$$
\|R_n\|_{L^1}\le\operatorname{const}.
$$
\end{lem}
{\bf Proof. } For $N>0$ consider the function $\xi_N$ defined by
$$
\xi_N(x)=\left\{\begin{array}{ll}1,&|x|\le
N,\\[.2cm]\frac{2N-|x|}N,&N\le|x|\le 2N,\\[.2cm]0,&|x|\ge2N.
\end{array}\right.
$$
It is easy to see that ${\mathcal F}\xi_N\in L^1({\Bbb R})$ and
$\|{\mathcal F}\xi_N\|_{L^1({\Bbb R})}$ does not depend on $N$.
Let
$$
R_{N,n}(\zeta)=\sum_{k\in{\Bbb Z}}r\left(\frac kn\right)\xi_N\left(\frac kn\right)\zeta^k,\quad\zeta\in{\Bbb T}.
$$
It was proved in Lemma 2 of \cite{Pe1} that $\|R_{N,n}\|_{L^1}\le\|{\mathcal F}(r\xi_N)\|_{L^1({\Bbb R})}$.
Since
$$
\|{\mathcal F}(r\xi_N)\|_{L^1({\Bbb R})}\le\|{\mathcal F} r\|_{L^1({\Bbb R})}\|{\mathcal F}\xi_N\|_{L^1({\Bbb R})}=\operatorname{const},
$$
it follows that the $L^1$-norms of $R_{N,n}$ are uniformly bounded. The result
follows from the obvious fact that
$$
\lim_{N\to\infty}\|R_n-R_{N,n}\|_{L^2}=0. \quad \blacksquare
$$
\medskip
Let us complete the proof of Theorem \ref{tens}.
For $f\in L^\infty$, we have
$$
\|f*Q_m\|_{L^\infty}=\|f-f*R_m\|_{L^\infty}\le\|f\|_{L^\infty}+\|f*R_m\|_{L^\infty}\le\operatorname{const}\|f\|_{L^\infty}.
$$
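Indeed, if $\hat f(i)=0$ for $i<0$ and $m\ge1$, then for $i\ge m$,
$$
\widehat{f*Q_m}(i)=\frac{i-m}i\,\hat f(i)=\left(1-r\left(\frac im\right)\right)\hat f(i)=\widehat{(f-f*R_m)}(i),
$$
while both Fourier coefficients vanish for $0\le i<m$, whence $f*Q_m=f-f*R_m$; the last inequality
then follows from Lemma \ref{Hn}.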
Thus
\begin{align*}
\sum_{m\ge0}(m+1)\|\psi*Q_m\|_{L^\infty}&=
\sum_{m\ge0}(m+1)\left\|\sum_{n\ge0}\psi*W_n*Q_m\right\|_{L^\infty}\\[.2cm]
&\le\sum_{m,n\ge0}(m+1)\|\psi*W_n*Q_m\|_{L^\infty}\\[.1cm]
&=\sum_{n\ge0}\,\,\,\sum_{0\le m\le2^{n+1}}(m+1)\|\psi*W_n*Q_m\|_{L^\infty}\\[.1cm]
&\le\operatorname{const}\sum_{n\ge0}\,\,\,\sum_{0\le m\le2^{n+1}}(m+1)\|\psi*W_n\|_{L^\infty}\\[.1cm]
&\le\operatorname{const}\sum_{n\ge0}2^{2n}\|\psi*W_n\|_{L^\infty}\le\operatorname{const}\|\psi\|_{B_{\be1}^2},
\end{align*}
where the $W_n$ are defined in \S 3.
This proves that
$$
\sum_{i,j,k\ge0}\a_{ijk}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k\in L^\infty\hat\otimes C({\Bbb T})\hat\otimes C({\Bbb T})
$$
and
\begin{eqnarray}
\label{ots}
\left\|\sum_{i,j,k\ge0}\a_{ijk}\hat\varphi(i+j+k+2)z_1^iz_2^jz_3^k\right\|
_{L^\infty\hat\otimes C({\Bbb T})\hat\otimes C({\Bbb T})}\le\operatorname{const}\|\varphi\|_{B_{\be1}^2}.
\end{eqnarray}
To prove \rf{nep}, it suffices to represent $\varphi$ as
$$
\varphi=\sum_{n\ge0}\varphi*W_n.
$$
Then we can apply the above reasoning to each polynomial
$\varphi*W_n$. Since
$$
\Big(\big((S^*)^{j+k+2}\varphi*W_n\big)*\sum_{i\ge0}\a_{i+j+k}z^i\Big)
$$
is obviously a polynomial, the above reasoning shows that
$$
\sum_{i,j,k\ge0}\a_{ijk}\widehat{\varphi*W_n}(i+j+k+2)z_1^iz_2^jz_3^k\in
C({\Bbb T})\hat\otimes C({\Bbb T})\hat\otimes C({\Bbb T})
$$
and by \rf{ots},
\begin{align*}
\left\|\sum_{i,j,k\ge0}\a_{ijk}\widehat{\varphi*W_n}(i+j+k+2)z_1^iz_2^jz_3^k\right\|
_{C({\Bbb T})\hat\otimes C({\Bbb T})\hat\otimes C({\Bbb T})}
&\le\operatorname{const}\|\varphi*W_n\|_{B_{\be1}^2}\\[.3cm]
&\le\operatorname{const}2^{2n}\|\varphi*W_n\|_{L^\infty}.
\end{align*}
The result follows now from the fact that
$$
\sum_{n\ge0}2^{2n}\|\varphi*W_n\|_{L^\infty}\le\operatorname{const}\|\varphi\|_{B_{\be1}^2}
$$
(see \S 3). $\blacksquare$
Now we are ready to prove Theorem \ref{uni}.
\medskip
{\bf Proof of Theorem \ref{uni}.} It follows from the definition of the second order divided
difference (see \S 1) that
\begin{eqnarray}
\label{div}
(\mu-\nu)(\dg^2\varphi)(\l,\mu,\nu)=(\dg\varphi)(\l,\mu)-(\dg\varphi)(\l,\nu).
\end{eqnarray}
By \rf{vp}, we have
\begin{align*}
&\frac1t\left(\frac{d}{ds}\big(\varphi(U_s)\big)\Big|_{s=t}-\frac{d}{ds}\big(\varphi(U_s)\big)\Big|_{s=0}\right)
\\[.2cm]
=&\frac{\rm i}t\left(\iint(\dg\varphi)(\l,\nu)\,dE_{U_t}(\l)A\,dE_{U_t}(\nu)U_t-
\iint(\dg\varphi)(\mu,\nu)\,dE_U(\mu)A\,dE_U(\nu)U\right)\\[.2cm]
=&\frac{\rm i}t\left(\iint(\dg\varphi)(\l,\nu)\,dE_{U_t}(\l)A\,dE_{U_t}(\nu)-
\iint(\dg\varphi)(\mu,\nu)\,dE_U(\mu)A\,dE_{U_t}(\nu)\right)U_t\\[.2cm]
&+\frac{\rm i}t\left(\iint(\dg\varphi)(\mu,\nu)\,dE_U(\mu)A\,dE_{U_t}(\nu)U_t
-\iint(\dg\varphi)(\mu,\nu)\,dE_U(\mu)A\,dE_{U_t}(\nu)U\right)\\[.2cm]
&+\frac{\rm i}t\left(\iint(\dg\varphi)(\l,\nu)\,dE_U(\l)A\,dE_{U_t}(\nu)-
\iint(\dg\varphi)(\l,\mu)\,dE_U(\l)A\,dE_U(\mu)\right)U\\[.2cm]
=&\frac{\rm i}t\iiint(\dg^2\varphi)(\l,\mu,\nu)
\,dE_{U_t}(\l)(e^{{\rm i}tA}-I)U\,dE_U(\mu)A\,dE_{U_t}(\nu)U_t\\[.2cm]
&+\frac{\rm i}t\left(\iint(\dg\varphi)(\mu,\nu)\,dE_U(\mu)A\,dE_{U_t}(\nu)U_t
-\iint(\dg\varphi)(\mu,\nu)\,dE_U(\mu)A\,dE_{U_t}(\nu)U\right)\\[.2cm]
&+\frac{\rm i}t\iiint(\dg^2\varphi)(\l,\mu,\nu)
\,dE_U(\l)A\,dE_U(\mu)(e^{{\rm i}tA}-I)U\,dE_{U_t}(\nu)U_t
\end{align*}
by \rf{div}.
Since $\lim\limits_{t\to0}\|U_t-U\|=0$, to complete the proof it suffices to show that
\begin{align}
\label{per}
\lim_{t\to0}\,\,\,&\frac1t\iiint(\dg^2\varphi)(\l,\mu,\nu)
\,dE_{U_t}(\l)(e^{{\rm i}tA}-I)U\,dE_U(\mu)A\,dE_{U_t}(\nu)\nonumber\\[.2cm]&=
{\rm i}\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_U(\l)A\,dE_U(\mu)A\,dE_U(\nu)U,
\end{align}
\medskip
\begin{align}
\label{vto}
\lim_{t\to0}\iint(\dg\varphi)(\mu,\nu)\,dE_U(\mu)A\,dE_{U_t}(\nu)=\iint(\dg\varphi)(\mu,\nu)\,dE_U(\mu)A\,dE_U(\nu),
\end{align}
and
\begin{align}
\label{tre}
\lim_{t\to0}\,\,\,&\frac1t\iiint(\dg^2\varphi)(\l,\mu,\nu)
\,dE_U(\l)A\,dE_U(\mu)(e^{{\rm i}tA}-I)U\,dE_{U_t}(\nu)\nonumber\\[.2cm]
&={\rm i}\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_U(\l)A\,dE_U(\mu)A\,dE_U(\nu)U.
\end{align}
Let us prove \rf{per}. Since $\dg^2\varphi\in C({\Bbb T})\hat\otimes C({\Bbb T})\hat\otimes C({\Bbb T})$, it suffices to show
that for $f,\,g,\,h\in C({\Bbb T})$,
\begin{align}
\label{pred}
\lim_{t\to0}\,\,&\frac1t\iiint f(\l)g(\mu)h(\nu)
\,dE_{U_t}(\l)(e^{{\rm i}tA}-I)U\,dE_U(\mu)A\,dE_{U_t}(\nu)\nonumber\\[.2cm]&=
{\rm i}\iiint f(\l)g(\mu)h(\nu)\,dE_U(\l)A\,dE_U(\mu)A\,dE_U(\nu)U.
\end{align}
We have
\begin{align*}
&\frac1t\iiint f(\l)g(\mu)h(\nu)
\,dE_{U_t}(\l)(e^{{\rm i}tA}-I)U\,dE_U(\mu)A\,dE_{U_t}(\nu)\\[.2cm]
=&f(U_t)\left(\frac1t(e^{{\rm i}tA}-I)U\right)g(U)Ah(U_t)
\end{align*}
and
\begin{align*}
\iiint f(\l)g(\mu)h(\nu)\,dE_U(\l)A\,dE_U(\mu)A\,dE_U(\nu)U
=f(U)Ag(U)Ah(U)U.
\end{align*}
Since $f$ and $h$ are in $C({\Bbb T})$, it follows that
$$
\lim_{t\to0}\|f(U_t)-f(U)\|=\lim_{t\to0}\|h(U_t)-h(U)\|=0
$$
(it suffices to prove this for trigonometric polynomials $f$ and $h$, for which it is evident).
This together with the obvious fact
$$
\lim_{t\to0}\left(\frac1t(e^{{\rm i}tA}-I)\right)={\rm i}A
$$
proves \rf{pred} which in turn implies \rf{per}.
The proof of \rf{tre} is similar. To prove \rf{vto}, we observe that $B_{\be1}^2\subset B_{\be1}^1$
and use the fact that $\dg\varphi\in C({\Bbb T})\hat\otimes C({\Bbb T})$ (this was proved in \cite{Pe1}). Again, it
suffices to prove that for $f,g \in C({\Bbb T})$,
$$
\lim_{t\to0}\iint f(\mu)g(\nu)\,dE_U(\mu)A\,dE_{U_t}(\nu)=\iint f(\mu)g(\nu)\,dE_U(\mu)A\,dE_U(\nu)
$$
which follows from the obvious equality:
$$
\lim_{t\to0}\|g(U_t)-g(U)\|=0.\quad\blacksquare
$$
The proofs of Theorems \ref{tens} and \ref{uni} given above generalize easily to the case of higher
derivatives.
\begin{thm}
\label{mtens}
Let $m$ be a positive integer.
If $\varphi\in B_{\be1}^m$, then
$$
\dg^m\varphi\in\underbrace{C({\Bbb T})\hat\otimes\cdots\hat\otimes C({\Bbb T})}_{m+1}.
$$
\end{thm}
\begin{thm}
\label{muni}
Let $m$ be a positive integer and
let $\varphi$ be a function in the Besov class $B^m_{\be1}$. Then the function
$t\mapsto \varphi(U_t)$
has an $m$th derivative and
\begin{align*}
&\frac{d^m}{ds^m}\big(\varphi(U_s)\big)\Big|_{s=0}\\[.2cm]
=&{\rm i}^mm!\left(\underbrace{\int\cdots\int}_{m+1}(\dg^m\varphi)(\l_1,\cdots,\l_{m+1})
\,dE_U(\l_1)A\cdots A\,dE_U(\l_{m+1})\right)U^m.
\end{align*}
\end{thm}
\
\section{\bf The case of self-adjoint operators}
\setcounter{equation}{0}
\
In this section we consider the problem of the existence of higher derivatives of the function
$$
t\mapsto \varphi(A_t)=\varphi(A+tK).
$$
Here $A$ is a self-adjoint operator (not necessarily
bounded), $K$ is a bounded self-adjoint operator, and $A_t\stackrel{\mathrm{def}}{=} A+tK$.
In \cite{Pe2} it was shown that if $\varphi\in B_{\be1}^1({\Bbb R})$, then
$\dg\varphi\in {\frak B}({\Bbb R})\hat\otimes_{\rm i}{\frak B}({\Bbb R})$, where
${\frak B}({\Bbb R})$ is the space of bounded Borel functions on ${\Bbb R}$ equipped with the $\sup$-norm,
and so
\begin{eqnarray}
\label{ne}
\|\varphi(A+K)-\varphi(A)\|\le\operatorname{const}\|\varphi\|_{B_{\be1}^1}\|K\|.
\end{eqnarray}
In fact, the construction given in \cite{Pe2} shows that for $\varphi\in B_{\be1}^1({\Bbb R})$, the function
$t\mapsto \varphi(A+tK)$
is differentiable and
\begin{eqnarray}
\label{fd}
\frac{d}{ds}\big(\varphi(A_s)\big)\Big|_{s=0}=\iint(\dg\varphi)(\l,\mu)\,dE_A(\l)K\,dE_A(\mu).
\end{eqnarray}
For completeness, we show briefly how to deduce \rf{fd} from the construction given in \cite{Pe2}.
We are going to give a detailed proof in the case of higher derivatives.
We need the following notion.
\medskip
{\bf Definition.} A continuous function $\varphi$ on ${\Bbb R}$ is called {\it operator continuous} if
$$
\lim_{s\to0}\|\varphi(A+sK)-\varphi(A)\|=0
$$
for any self-adjoint operator $A$ and any bounded self-adjoint operator $K$.
\medskip
It follows from \rf{ne} that functions in $B_{\be1}^1({\Bbb R})$ are operator continuous.
It is also easy to see that the product of two bounded operator continuous functions is
operator continuous.
\medskip
{\bf Proof of \rf {fd}.} The construction given in
\cite{Pe2} shows that if $\varphi\in B_{\be1}^1({\Bbb R})$, then $\dg\varphi$ admits a representation
$$
(\dg\varphi)(\l,\mu)=\int_Q f(\l,x)g(\mu,x)\,d\sigma(x),
$$
where $(Q,\sigma)$ is a measure space, $f$ and $g$ are measurable functions on ${\Bbb R}\times Q$ such that
$$
\int_Q\|f_x\|_{{\frak B}({\Bbb R})}\|g_x\|_{{\frak B}({\Bbb R})}\,d\sigma(x)<\infty,
$$
and, for almost all \mbox{$x\in Q$}, $f_x$ and $g_x$ are operator continuous functions,
where \linebreak$f_x(\l)\stackrel{\mathrm{def}}{=} f(\l,x)$ and $g_x(\mu)\stackrel{\mathrm{def}}{=} g(\mu,x)$. Indeed, it is very easy to verify that
the functions $f_x$ and $g_x$ constructed in \cite{Pe2} are products of bounded functions in
$B_{\be1}^1({\Bbb R})$.
By \rf{BSF}, we have
\begin{align*}
\frac1s\big(\varphi(A_s)-\varphi(A)\big)&=\frac1s\iint(\dg\varphi)(\l,\mu)\,dE_{A_s}(\l)sK\,dE_A(\mu)\\[.2cm]
&=\int_Q f_x(A_s)Kg_x(A)\,d\sigma(x).
\end{align*}
Since $f_x$ is operator continuous, we have
$$
\lim_{s\to0}\|f_x(A_s)-f_x(A)\|=0.
$$
It follows that
\begin{align*}
&\left\|\int_Q f_x(A_s)Kg_x(A)\,d\sigma(x)-\int_Q f_x(A)Kg_x(A)\,d\sigma(x)\right\|\\[.2cm]
&\le\|K\|\int_Q\|f_x(A_s)-f_x(A)\|\cdot\|g_x(A)\|\,d\sigma(x)
\to0, \quad\mbox{as}\quad s\to0,
\end{align*}
which implies \rf{fd}. $\blacksquare$
Consider first the problem of the existence of the second operator derivative. First we prove that
if $\varphi\in B_{\be1}^2({\Bbb R})$, then
$\dg^2\varphi\in{\frak B}({\Bbb R})\hat\otimes_{\rm i}{\frak B}({\Bbb R})\hat\otimes_{\rm i}{\frak B}({\Bbb R})$.
Actually, to prove the existence of the second derivative, we need the following slightly stronger
result.
\begin{thm}
\label{d2f}
Let $\varphi\in B_{\be1}^2({\Bbb R})$. Then there exist a measure space $(Q,\sigma)$ and measurable functions
$f,\,g$, and $h$ on ${\Bbb R}\times Q$ such that
\begin{eqnarray}
\label{ipr}
(\dg^2\varphi)(\l,\mu,\nu)=\int_Q f(\l,x)g(\mu,x)h(\nu,x)\,d\sigma(x),
\end{eqnarray}
$f_x,\,g_x$, and $h_x$ are operator continuous functions for almost all $x\in Q$, and
\begin{eqnarray}
\label{tn}
\int_Q\|f_x\|_{{\frak B}({\Bbb R})}\|g_x\|_{{\frak B}({\Bbb R})}\|h_x\|_{{\frak B}({\Bbb R})}\,d\sigma(x)\le
\operatorname{const}\|\varphi\|_{B_{\be1}^2({\Bbb R})}.
\end{eqnarray}
\end{thm}
As before, $f_x(\l)=f(\l,x)$, $g_x(\mu)=g(\mu,x)$, and $h_x(\nu)=h(\nu,x)$.
Theorem \ref{d2f} will be used to prove the main result of this section.
\begin{thm}
\label{vps}
Suppose that $A$ is a self-adjoint operator and $K$ is a bounded self-adjoint operator.
If $\varphi\in B_{\be1}^2({\Bbb R})\bigcap B_{\be1}^1({\Bbb R})$, then the function
$s\mapsto\varphi(A_s)$
has a second derivative, which is a bounded operator, and
\begin{eqnarray}
\label{vt}
\frac{d^2}{ds^2}\big(\varphi(A_s)\big)\Big|_{s=0}=
2\iint(\dg^2\varphi)(\l,\mu,\nu)\,dE_A(\l)K\,dE_A(\mu)K\,dE_A(\nu).
\end{eqnarray}
\end{thm}
Note that by Theorem \ref{d2f}, the right-hand side of \rf{vt} makes sense and is a bounded linear
operator.
For $t>0$ and a function $f$, we define ${\mathcal S}^*_t f$ by
$$
\big({\mathcal F}({\mathcal S}^*_t f)\big)(s)=\left\{\begin{array}{ll}({\mathcal F} f)(s+t),&s\ge0,\\[.2cm]
0,&s<0.
\end{array}\right.
$$
We also define the distributions $q_t$ and $r_t$, $t>0$, by
$$
({\mathcal F} q_t)(s)=\left\{\begin{array}{ll}\frac{s}{s+t},&s\ge0,\\[.2cm]0,&s<0,
\end{array}\right.
$$
and
$$
({\mathcal F} r_t)(s)=\left\{\begin{array}{ll}1,&|s|\le t,\\[.2cm]\frac{t}{s},&|s|>t.
\end{array}\right.
$$
It is easy to see that $r_t\in L^1({\Bbb R})$ (see \S 4) and $\|r_t\|_{L^1({\Bbb R})}$ does not depend on $t$.
To prove Theorem \ref{d2f}, we need the following lemma.
\begin{lem}
\label{2m}
Let $M>0$ and let $\varphi$ be a bounded function on ${\Bbb R}$ such that \linebreak$\operatorname{supp}{\mathcal F}\varphi\subset[M/2,2M]$.
Then
\begin{align}
\label{tri}
(\dg^2\varphi)(\l,\mu,\nu)=&
-\iint\limits_{{\Bbb R}_+\times{\Bbb R}_+}\big(({\mathcal S}_{t+u}^*\varphi)*q_{t+u}\big)(\l)e^{{\rm i}t\mu}e^{{\rm i}u\nu}
\,dt\,du\nonumber\\[.2cm]
&-\iint\limits_{{\Bbb R}_+\times{\Bbb R}_+}\big(({\mathcal S}_{s+u}^*\varphi)*q_{s+u}\big)(\mu)e^{{\rm i}s\l}e^{{\rm i}u\nu}
\,ds\,du\nonumber\\[.2cm]
&-\iint\limits_{{\Bbb R}_+\times{\Bbb R}_+}\big(({\mathcal S}_{s+t}^*\varphi)*q_{s+t}\big)(\nu)e^{{\rm i}s\l}e^{{\rm i}t\mu}
\,ds\,dt.
\end{align}
\end{lem}
{\bf Proof. } Let us first assume that ${\mathcal F}\varphi\in L^1({\Bbb R})$. We have
\begin{align*}
&\iint\limits_{{\Bbb R}_+\times{\Bbb R}_+}\big(({\mathcal S}_{t+u}^*\varphi)*q_{t+u}\big)(\l)e^{{\rm i}t\mu}e^{{\rm i}u\nu}
\,dt\,du\\[.3cm]
=&\iiint\limits_{{\Bbb R}_+\times{\Bbb R}_+\times{\Bbb R}_+}
({\mathcal F}\varphi)(s+t+u)\frac{s}{s+t+u}e^{{\rm i}s\l}e^{{\rm i}t\mu}e^{{\rm i}u\nu}\,ds\,dt\,du.
\end{align*}
We can write similar representations for the other two terms on the right-hand side of \rf{tri},
take their sum and
reduce \rf{tri} to the verification of the following identity:
$$
(\dg^2\varphi)(\l,\mu,\nu)=
-\iiint\limits_{{\Bbb R}_+\times{\Bbb R}_+\times{\Bbb R}_+}
({\mathcal F}\varphi)(s+t+u)e^{{\rm i}s\l}e^{{\rm i}t\mu}e^{{\rm i}u\nu}\,ds\,dt\,du.
$$
This identity can be verified in an elementary way by making the substitution $a=s+t+u$, $b=t+u$, and $c=u$.
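Indeed, the substitution (whose Jacobian is $1$, the region $s,t,u\ge0$ becoming $0\le c\le b\le a$)
transforms the right-hand side into
$$
-\int\limits_0^\infty({\mathcal F}\varphi)(a)\left(\,\,\iint\limits_{0\le c\le b\le a}
e^{{\rm i}(a-b)\l}e^{{\rm i}(b-c)\mu}e^{{\rm i}c\nu}\,db\,dc\right)da,
$$
and a direct computation shows that the inner integral equals $-(\dg^2e_a)(\l,\mu,\nu)$, where
$e_a(x)\stackrel{\mathrm{def}}{=} e^{{\rm i}ax}$; it remains to integrate in $a$.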
Consider now the general case, i.e., $\varphi\in L^\infty({\Bbb R})$ and $\operatorname{supp}{\mathcal F}\varphi\subset[M/2,2M]$. Consider a
smooth function $\o$ on ${\Bbb R}$ such that $\o\ge0$, $\operatorname{supp}\o\subset[-1,1]$, and $\|\o\|_{L^1({\Bbb R})}=1$.
For $\varepsilon>0$ we put $\o_\varepsilon(x)=\o(x/\varepsilon)/\varepsilon$ and
define the function $\varphi_\varepsilon$ by ${\mathcal F}\varphi_\varepsilon=({\mathcal F}\varphi)*\o_\varepsilon$.
Clearly,
$$
{\mathcal F}\varphi_\varepsilon\in L^1({\Bbb R}),\quad\lim_{\varepsilon\to0}\|\varphi_\varepsilon\|_{L^\infty({\Bbb R})}=\|\varphi\|_{L^\infty({\Bbb R})},
$$
and
$$
\lim_{\varepsilon\to0}\varphi_\varepsilon(x)=\varphi(x)\quad\mbox{for almost all}\quad x\in{\Bbb R}.
$$
Since we have already proved that \rf{tri} holds for $\varphi_\varepsilon$ in place of $\varphi$, the result follows by
passing to the limit as $\varepsilon\to0$. $\blacksquare$
\medskip
{\bf Proof of Theorem \ref{d2f}.} Suppose that $\operatorname{supp}{\mathcal F}\varphi\subset[M/2,2M]$.
Let us show that each summand on the right-hand side of
\rf{2m} admits a desired representation. Clearly, it suffices to do it for the first summand.
Put
$$
\psi(\l,\mu,\nu)=
\iint\limits_{{\Bbb R}_+\times{\Bbb R}_+}
\big(({\mathcal S}_{t+u}^*\varphi)*q_{t+u}\big)(\l)e^{{\rm i}t\mu}e^{{\rm i}u\nu}\,dt\,du
=\iint\limits_{{\Bbb R}_+\times{\Bbb R}_+}f_{t+u}(\l)g_t(\mu)h_u(\nu)\,dt\,du,
$$
where
$$
f_v(\l)=\big(({\mathcal S}_v^*\varphi)*q_v\big)(\l),\quad g_t(\mu)=e^{{\rm i}t\mu},\quad
\mbox{and}\quad h_u(\nu)=e^{{\rm i}u\nu}.
$$
Clearly, $\|g_t\|_{{\frak B}({\Bbb R})}=1$ and $\|h_u\|_{{\frak B}({\Bbb R})}=1$. Since
$$
\|f_v\|_{{\frak B}({\Bbb R})}=\|f_v\|_{L^\infty}=\|\varphi-\varphi*r_v\|_{L^\infty}
\le
\left\{\begin{array}{ll}(1+\|r_v\|_{L^1})\|\varphi\|_{L^\infty},&v\le2M,\\[.2cm]0,&v>2M,
\end{array}\right.
$$
we have
$$
\|\psi\|_{{\frak B}({\Bbb R})\hat\otimes_{\rm i}{\frak B}({\Bbb R})\hat\otimes_{\rm i}{\frak B}({\Bbb R})}\le
\operatorname{const}\|\varphi\|_{L^\infty}\iint\limits_{t,u>0,t+u\le 2M}\,dt\,du
\le\mbox{\rm const}\cdot M^2\|\varphi\|_{L^\infty}.
$$
In the same way we can treat the case when $\operatorname{supp}{\mathcal F}\varphi\subset[-2M,-M/2]$. If $\varphi$ is a polynomial
of degree at most 2, the result is trivial.
Let now $\varphi\in B_{\be1}^2({\Bbb R})$ and
$$
\varphi=\sum_{n\in{\Bbb Z}}\varphi*W_n+\sum_{n\in{\Bbb Z}}\varphi*W^\#_n.
$$
It follows from the above estimate that
$$
\|\dg^2\varphi\|_{{\frak B}({\Bbb R})\hat\otimes_{\rm i}{\frak B}({\Bbb R})\hat\otimes_{\rm i}{\frak B}({\Bbb R})}\le
\operatorname{const}\left(\sum_{n\in{\Bbb Z}}2^{2n}\|\varphi*W_n\|_{L^\infty}+
\sum_{n\in{\Bbb Z}}2^{2n}\|\varphi*W^\#_n\|_{L^\infty}\right).
$$
To complete the proof of Theorem \ref{d2f}, we observe that the functions $\l\mapsto e^{{\rm i}v\l}$, $v\in{\Bbb R}$,
are operator continuous, because they belong to $B^1_{\be1}({\Bbb R})$. On the other hand, it is easy to see
that if $\operatorname{supp}{\mathcal F}\varphi\subset[M/2,2M]$, then
the function $({\mathcal S}_v^*\varphi)*q_v$ is the product of such an exponential function and a bounded function
in $B^1_{\be1}({\Bbb R})$. $\blacksquare$
\medskip
To prove Theorem \ref{vps}, we need the following lemma.
\begin{lem}
\label{ol}
Let $A$ be a self-adjoint operator and let $K$ be a bounded self-adjoint operator.
Suppose that $\varphi$ is a function on ${\Bbb R}$ such that $\dg\varphi\in L^\infty({\Bbb R})\pt L^\infty({\Bbb R})$
and $\dg^2\varphi\in L^\infty({\Bbb R})\pt L^\infty({\Bbb R})\pt L^\infty({\Bbb R})$. Then
\begin{align*}
&\iint(\dg\varphi)(\l,\mu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)-
\iint(\dg\varphi)(\l,\nu)\,dE_{A+K}(\l)K\,dE_A(\nu)\\[.2cm]
=&\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)K\,dE_A(\nu).
\end{align*}
\end{lem}
{\bf Proof. } Put
$$
P_n=E_A\big([-n,n]\big),\quad Q_n=E_{A+K}\big([-n,n]\big),\quad
A_{[n]}=P_nA,\quad\mbox{and}\quad B_{[n]}=Q_n(A+K).
$$
We have
\begin{align*}
&\iint(\dg\varphi)(\l,\mu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)-
\iint(\dg\varphi)(\l,\nu)\,dE_{A+K}(\l)K\,dE_A(\nu)\\[.2cm]
=&\iiint(\dg\varphi)(\l,\mu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)\,dE_A(\nu)\\[.2cm]
&-\iiint(\dg\varphi)(\l,\nu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)\,dE_A(\nu).
\end{align*}
Thus
\begin{align*}
&Q_n\left(\iint(\dg\varphi)(\l,\mu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)-
\iint(\dg\varphi)(\l,\nu)\,dE_{A+K}(\l)K\,dE_A(\nu)\right)P_n\\[.2cm]
=&\int\limits_{-n}^n\int\limits_{-n}^n\int\limits_{-n}^n
(\dg\varphi)(\l,\mu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)\,dE_A(\nu)\\[.2cm]
&-\int\limits_{-n}^n\int\limits_{-n}^n\int\limits_{-n}^n
(\dg\varphi)(\l,\nu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)\,dE_A(\nu)\\[.2cm]
=&\iiint
(\mu-\nu)(\dg^2\varphi)(\l,\mu,\nu)\,dE_{B_{[n]}}(\l)K\,dE_{B_{[n]}}(\mu)\,dE_{A_{[n]}}(\nu),
\end{align*}
since
$$
(\dg\varphi)(\l,\mu)-(\dg\varphi)(\l,\nu)=(\mu-\nu)(\dg^2\varphi)(\l,\mu,\nu).
$$
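Here $\dg^2\varphi$ denotes the second divided difference; assuming the standard convention (presumably the one fixed earlier in the paper), the identity above is just its recursive definition, and $\dg^2\varphi$ also admits the symmetric form

```latex
$$
(\dg^2\varphi)(\l,\mu,\nu)
=\frac{(\dg\varphi)(\l,\mu)-(\dg\varphi)(\l,\nu)}{\mu-\nu}
=\frac{\varphi(\l)}{(\l-\mu)(\l-\nu)}
+\frac{\varphi(\mu)}{(\mu-\l)(\mu-\nu)}
+\frac{\varphi(\nu)}{(\nu-\l)(\nu-\mu)},
$$
```

which shows that $\dg^2\varphi$ is symmetric in its three arguments.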
On the other hand,
\begin{align*}
&Q_n\left(\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)K\,dE_A(\nu)\right)P_n\\[.2cm]
=&\int\limits_{-n}^n\int\limits_{-n}^n\int\limits_{-n}^n
(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)\Big((A+K)-A\Big)\,dE_A(\nu)\\[.2cm]
=&\int\limits_{-n}^n\int\limits_{-n}^n\int\limits_{-n}^n
(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)Q_n\Big((A+K)-A\Big)P_n\,dE_A(\nu)\\[.2cm]
=&\int\limits_{-n}^n\int\limits_{-n}^n\int\limits_{-n}^n
(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A+K}(\l)K\,dE_{A+K}(\mu)(B_{[n]}-A_{[n]})\,dE_A(\nu)\\[.2cm]
=&\iiint(\dg^2\varphi)(\l,\mu,\nu)
\,dE_{B_{[n]}}(\l)K\,dE_{B_{[n]}}(\mu)(B_{[n]}-A_{[n]})\,dE_{A_{[n]}}(\nu)
\end{align*}
It is easy to see that this is equal to
\begin{align*}
&\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{B_{[n]}}(\l)K\,dE_{B_{[n]}}(\mu)B_{[n]}\,dE_{A_{[n]}}(\nu)\\[.2cm]
&-\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{B_{[n]}}(\l)K\,dE_{B_{[n]}}(\mu)A_{[n]}\,dE_{A_{[n]}}(\nu)\\[.2cm]
=&\iiint\mu(\dg^2\varphi)(\l,\mu,\nu)\,dE_{B_{[n]}}(\l)K\,dE_{B_{[n]}}(\mu)\,dE_{A_{[n]}}(\nu)\\[.2cm]
&-\iiint\nu(\dg^2\varphi)(\l,\mu,\nu)\,dE_{B_{[n]}}(\l)K\,dE_{B_{[n]}}(\mu)\,dE_{A_{[n]}}(\nu)\\[.2cm]
=&\iiint(\mu-\nu)(\dg^2\varphi)(\l,\mu,\nu)\,dE_{B_{[n]}}(\l)K\,dE_{B_{[n]}}(\mu)\,dE_{A_{[n]}}(\nu).
\end{align*}
The result follows now from the fact that
$$
\lim_{n\to\infty}P_n=\lim_{n\to\infty}Q_n=I
$$
in the strong operator topology. $\blacksquare$
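In finite dimensions the multiple operator integrals reduce to sums over eigenprojections, and the identity of Lemma \ref{ol} can be checked numerically. The following sketch (an illustration, not part of the proof) takes $\varphi(\l)=\l^3$, for which $(\dg\varphi)(x,y)=x^2+xy+y^2$ and $(\dg^2\varphi)(x,y,z)=x+y+z$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def herm(r):
    # random n x n Hermitian matrix
    M = r.standard_normal((n, n)) + 1j * r.standard_normal((n, n))
    return (M + M.conj().T) / 2

A, K = herm(rng), herm(rng)

# eigenprojections P_k of A and Q_i of A + K
a, Ua = np.linalg.eigh(A)
b, Ub = np.linalg.eigh(A + K)
P = [np.outer(Ua[:, k], Ua[:, k].conj()) for k in range(n)]
Q = [np.outer(Ub[:, i], Ub[:, i].conj()) for i in range(n)]

# divided differences of phi(l) = l^3
dg1 = lambda x, y: x**2 + x*y + y**2
dg2 = lambda x, y, z: x + y + z

lhs = sum(dg1(b[i], b[j]) * Q[i] @ K @ Q[j]
          for i in range(n) for j in range(n)) \
    - sum(dg1(b[i], a[k]) * Q[i] @ K @ P[k]
          for i in range(n) for k in range(n))
rhs = sum(dg2(b[i], b[j], a[k]) * Q[i] @ K @ Q[j] @ K @ P[k]
          for i in range(n) for j in range(n) for k in range(n))

print(np.allclose(lhs, rhs))  # the two sides of the lemma agree
```

The key step of the proof is visible in the code: $Q_jKP_k=Q_j\big((A+K)-A\big)P_k=(\beta_j-\alpha_k)Q_jP_k$, so the extra factor of $K$ in the triple sum produces exactly the factor $\mu-\nu$.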
\medskip
{\bf Proof of Theorem \ref{vps}.} It follows from Lemma \ref{ol} that
\begin{align*}
&\frac1t\left(\iint(\dg\varphi)(\l,\mu)\,dE_{A_t}(\l)K\,dE_{A_t}(\mu)-
\iint(\dg\varphi)(\l,\nu)\,dE_{A_t}(\l)K\,dE_A(\nu)\right)\\
=&\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A_t}(\l)K\,dE_{A_t}(\mu)K\,dE_A(\nu).
\end{align*}
Similarly,
\begin{align*}
&\frac1t\left(\iint(\dg\varphi)(\l,\nu)\,dE_{A_t}(\l)K\,dE_A(\nu)
-\iint(\dg\varphi)(\mu,\nu)\,dE_A(\mu)K\,dE_A(\nu)\right)\\
=&\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A_t}(\l)K\,dE_A(\mu)K\,dE_A(\nu).
\end{align*}
Thus
\begin{align*}
\frac1t\left(\frac{d}{ds}\varphi(A_s)\Big|_{s=t}-\frac{d}{ds}\varphi(A_s)\Big|_{s=0}\right)
&=\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A_t}(\l)K\,dE_{A_t}(\mu)K\,dE_A(\nu)\\
&+\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A_t}(\l)K\,dE_A(\mu)K\,dE_A(\nu).
\end{align*}
The fact that
\begin{align*}
&\lim_{t\to0}\,\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A_t}(\l)K\,dE_{A_t}(\mu)K\,dE_A(\nu)\\[.2cm]
=&
\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_A(\l)K\,dE_A(\mu)K\,dE_A(\nu)
\end{align*}
follows immediately from \rf{ipr} and \rf{tn} and from the fact that the functions
$f_x$, $g_x$, and $h_x$ in \rf{ipr} are operator continuous.
Similarly,
\begin{align*}
&\lim_{t\to0}\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_{A_t}(\l)K\,dE_A(\mu)K\,dE_A(\nu)\\[.2cm]
=&
\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_A(\l)K\,dE_A(\mu)K\,dE_A(\nu),
\end{align*}
which completes the proof. $\blacksquare$
\medskip
{\bf Remark.} In the case of functions on the real line the Besov space
$B_{\be1}^2({\Bbb R})$ is not contained in the space $B_{\be1}^1({\Bbb R})$. In the statement of Theorem
\ref{vps} we impose the assumption that $\varphi\in B_{\be1}^1({\Bbb R})$ to ensure that
the function $t\mapsto\varphi(A_t)$ has the first derivative. However, we can define the second derivative
of this function in a slightly different way.
Suppose that $\varphi\in B_{\be1}^2({\Bbb R})$ and
$$
\varphi=\sum_{n\in{\Bbb Z}}\varphi*W_n+\sum_{n\in{\Bbb Z}}\varphi*W_n^\#.
$$
Then the functions $\varphi_n\stackrel{\mathrm{def}}{=}\varphi*W_n$ and $\varphi_n^\#\stackrel{\mathrm{def}}{=}\varphi*W_n^\#$ belong to
$B_{\be1}^2({\Bbb R})\bigcap B_{\be1}^1({\Bbb R})$ and by Theorems \ref{d2f} and \ref{vps}, the series
$$
\sum_{n\in{\Bbb Z}}\frac{d^2}{ds^2}\big(\varphi_n(A_s)\big)\Big|_{s=0}+
\sum_{n\in{\Bbb Z}}\frac{d^2}{ds^2}\big(\varphi_n^\#(A_s)\big)\Big|_{s=0}
$$
converges absolutely and we can define the second derivative of the function
$t\mapsto\varphi(A_t)$ by
$$
\frac{d^2}{ds^2}\big(\varphi(A_s)\big)\Big|_{s=0}
\stackrel{\mathrm{def}}{=}\sum_{n\in{\Bbb Z}}\frac{d^2}{ds^2}\big(\varphi_n(A_s)\big)\Big|_{s=0}+
\sum_{n\in{\Bbb Z}}\frac{d^2}{ds^2}\big(\varphi_n^\#(A_s)\big)\Big|_{s=0}.
$$
With this definition {\it the function $t\mapsto\varphi(A_t)$
can possess the second derivative without having the first derivative!}
If $\varphi(\l)=\l^2$, we can write formally
$$
\frac{d^2}{ds^2}\big(\varphi(A_s)\big)\Big|_{s=0}=
\frac{d^2}{ds^2}\big(A^2+s(KA+AK)+s^2K^2\big)\Big|_{s=0}=2K^2.
$$
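This formal answer agrees with the operator integral expression for the second derivative obtained in the proof of Theorem \ref{vps}: for $\varphi(\l)=\l^2$ the second divided difference is identically $1$, so

```latex
$$
2\iiint(\dg^2\varphi)(\l,\mu,\nu)\,dE_A(\l)K\,dE_A(\mu)K\,dE_A(\nu)
=2\int dE_A(\l)\,K\int dE_A(\mu)\,K\int dE_A(\nu)=2K^2.
$$
```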
Then formula \rf{vt} holds for an arbitrary $\varphi\in B_{\be1}^2({\Bbb R})$.
\medskip
The proofs of Theorems \ref{d2f} and \ref{vps} given above easily generalize to the case of
derivatives of an arbitrary order.
\begin{thm}
\label{ddf}
Let $m$ be a positive integer and let
$\varphi\in B_{\be1}^m({\Bbb R})$. Then there exist a measure space $(Q,\sigma)$ and measurable functions
$f_1,\cdots,f_{m+1}$ on ${\Bbb R}\times Q$ such that
$$
(\dg^m\varphi)(\l_1,\cdots,\l_{m+1})=\int_Q f_1(\l_1,x)f_2(\l_2,x)\cdots f_{m+1}(\l_{m+1},x)\,d\sigma(x),
$$
the functions $f_1(\cdot,x),\cdots,f_{m+1}(\cdot,x)$ are operator continuous for almost
all $x\in Q$, and
$$
\int_Q\|f_1(\cdot,x)\|_{{\frak B}({\Bbb R})}\cdots\|f_{m+1}(\cdot,x)\|_{{\frak B}({\Bbb R})}\,d\sigma(x)\le
\operatorname{const}\|\varphi\|_{B_{\be1}^m({\Bbb R})}.
$$
\end{thm}
\begin{thm}
\label{dps}
Let $m$ be a positive integer.
Suppose that $A$ is a self-adjoint operator and $K$ is a bounded self-adjoint operator.
If $\varphi\in B_{\be1}^m({\Bbb R})\bigcap B_{\be1}^1({\Bbb R})$, then the function
\linebreak$s\mapsto\varphi(A_s)$
has an $m$th derivative that is a bounded operator and
$$
\frac{d^m}{ds^m}\big(\varphi(A_s)\big)\Big|_{s=0}=
m!\underbrace{\int\cdots\int}_{m+1}(\dg^{m}\varphi)(\l_1,\cdots,\l_{m+1})
\,dE_A(\l_1)K\cdots K\,dE_A(\l_{m+1}).
$$
\end{thm}
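As a finite-dimensional sanity check of this formula (illustrative only), take $m=2$ and $\varphi(\l)=\l^3$. Expanding $(A+sK)^3$ gives $\frac{d^2}{ds^2}\varphi(A_s)\big|_{s=0}=2(AK^2+KAK+K^2A)$, which should equal $2!\iiint(\dg^2\varphi)\,dE_AK\,dE_AK\,dE_A$ with $(\dg^2\varphi)(\l_1,\l_2,\l_3)=\l_1+\l_2+\l_3$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

def herm(r):
    # random n x n real symmetric matrix
    M = r.standard_normal((n, n))
    return (M + M.T) / 2

A, K = herm(rng), herm(rng)

# exact second s-derivative of (A + sK)^3 at s = 0
exact = 2 * (A @ K @ K + K @ A @ K + K @ K @ A)

# m! times the multiple operator integral, built from eigenprojections of A
a, U = np.linalg.eigh(A)
P = [np.outer(U[:, i], U[:, i]) for i in range(n)]
moi = sum((a[i] + a[j] + a[k]) * P[i] @ K @ P[j] @ K @ P[k]
          for i in range(n) for j in range(n) for k in range(n))

print(np.allclose(exact, 2 * moi))  # True
```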
As in the case $m=2$, we can slightly change the definition of the $m$th derivative so that
for functions $\varphi\in B_{\be1}^m({\Bbb R})$ the function $s\mapsto\varphi(A_s)$ has an $m$th derivative, but does not
have to possess derivatives of orders less than $m$ (see the Remark following the proof of
Theorem \ref{vps}).
\medskip
{\bf Remark.} It is easy to see that in the case when $A$ is a bounded self-adjoint operator, for the existence
of the $m$th derivative of the function $s\mapsto\varphi(A_s)$ it suffices to assume that $\varphi$
belongs to $B^m_{\be1}$ locally, i.e., for each finite
interval $I$ there exists a function $\psi$ in $B^m_{\be1}({\Bbb R})$ such that $\varphi\big|_I=\psi\big|_I$.
\section{Introduction}
Anomalous $U(1)_A$ gauge symmetry appears often in compactified
string theory. The 4-dimensional (4D) spectrum of such compactification
contains a modulus-axion (or dilaton-axion) superfield which transforms non-linearly under
$U(1)_A$ to implement the Green-Schwarz (GS) anomaly cancellation
mechanism \cite{GS}. In heterotic string theory, the dilaton
plays the role of the GS modulus, however in other string
theories, the GS modulus can be either
a K\"ahler modulus of Calabi-Yau (CY) orientifold \cite{BKQ,louis}
or a blowing-up modulus of orbifold singularity
\cite{ano}.
The non-linear $U(1)_A$ transformation of the
GS modulus superfield leads to a field-dependent Fayet-Iliopoulos (FI) term
\cite{anomalous} which might play an important role for
supersymmetry (SUSY) breaking. Anomalous $U(1)_A$ might also
correspond to a flavor symmetry which generates the hierarchical
Yukawa couplings through the Froggatt-Nielsen mechanism
\cite{FN0,FN}.
The $U(1)_A$ $D$-term can give a contribution to soft scalar
masses as $\Delta m_i^2=-q_ig^2_A D_A$ where $q_i$ is the $U(1)_A$
charge of the corresponding sfermion \cite{u1soft}. Such $D$-term
contribution has an important implication to the flavor problem in
supersymmetric models. If $g_A^2D_A$ is significantly bigger than
the gaugino mass-squares $M_a^2$ which are presumed to be of order
$(1 \,\,\mbox{TeV})^2$, e.g. $g_A^2D_A\sim (10\,\, \mbox{TeV})^2$,
one can avoid the SUSY flavor problem by assuming that $q_i$ are
non-vanishing only for the first and second generations of matter
fields, which would make the first and second generations of
squarks and sleptons heavy enough to avoid dangerous
flavor-changing-neutral-current (FCNC) processes. Still one can
arrange $q_i$ to be appropriately flavor-dependent \cite{nelson}
to generate the observed pattern of hierarchical Yukawa couplings
via the Froggatt-Nielsen mechanism, e.g. $y_{ij}\sim
\epsilon^{q_i+q_j}$ for $\epsilon\sim 0.2$. In the other case, when
$g_A^2D_A$ is comparable to $M_a^2$, one needs $q_i$ to be
flavor-universal to avoid dangerous FCNC processes, and then
$U(1)_A$ cannot be identified as a flavor symmetry for the Yukawa
coupling hierarchy. Finally, if $g_A^2D_A$ is small enough, e.g.
suppressed by a loop factor of order $10^{-2}$ compared to
$M_a^2$, $q_i$ are again allowed to be flavor-dependent. It has
been noticed that the relative importance of the $D$-term
contribution to soft masses depends on how the GS modulus is
stabilized \cite{arkani}. In this respect, it is important to
analyze the low energy consequences of anomalous $U(1)_A$ while
incorporating the stabilization of the GS modulus explicitly
\cite{moral,dudas,casas}.
In the previous studies of anomalous $U(1)_A$ in heterotic string
compactification, two possible scenarios for the stabilization of
the GS modulus (the heterotic string dilaton $S$ in this case)
have been considered. One is to use the multiple gaugino
condensations \cite{racetrack} which would stabilize $S$ at the
weak coupling regime for which the leading order K\"ahler
potential is a good approximation. In this race-track
stabilization, one typically finds the auxiliary $F$ component
$F^S=0$ and also $D_A=0$, although SUSY can be broken by the
$F$-components of other moduli.
The most serious difficulty of the race-track scenario is that in
all known examples the vacuum energy density has a {\it negative}
value of ${\cal O}(m_{3/2}^2M_{Pl}^2)$ \cite{CKN}, where
$M_{Pl}\simeq 2.4\times 10^{18}$ GeV is the 4D reduced Planck mass
and $m_{3/2}$ is the gravitino mass. Another possible scenario is
that $S$ is stabilized by (presently not calculable) large quantum
correction to the K\"ahler potential \cite{kahler}. In this case,
one can assume that the dilaton K\"ahler potential has the right
form to stabilize $S$ at a phenomenologically viable de Sitter
(dS) or Minkowski vacuum. The resulting $F^S$ and $D_A$ are
non-vanishing in general, however the relative importance of $D_A$
compared to the other SUSY breaking auxiliary components depends
sensitively on the incalculable large quantum corrections to the
K\"ahler potential \cite{arkani}.
Recently a new way of stabilizing moduli at dS vacuum
within a controllable approximation scheme has been proposed
by Kachru-Kallosh-Linde-Trivedi (KKLT) in the context of Type IIB
flux compactification \cite{KKLT}. The main idea is to stabilize moduli (and
also the dilaton) in the first step at a supersymmetric AdS vacuum for which the leading order
K\"ahler potential is a good approximation,
and then lift the vacuum to a dS state by adding
anti-brane.
For instance, in Type IIB compactification, one can first
introduce a proper set of fluxes and gaugino condensations stabilizing all
moduli at SUSY AdS vacuum.
In the next step, anti-branes can be added to get the nearly vanishing cosmological
constant under the RR charge cancellation condition. In the
presence of fluxes, the compact internal space is generically
warped \cite{GKP} and anti-branes are stabilized at the maximally
warped position \cite{KPV}. Then as long as the number of
anti-branes is small enough compared to the flux quanta,
anti-branes cause neither a dangerous instability of the
underlying compactification \cite{KPV} nor a sizable shift of the
moduli vacuum expectation values. In order to get the nearly
vanishing cosmological constant, the anti-brane energy density
should be adjusted to be close to $3m_{3/2}^2M_{Pl}^2$. This
requires that the warp factor $e^{2A}$ of the 4D metric on
anti-brane should be of ${\cal O}(m_{3/2}/M_{Pl})$. As it breaks
explicitly the $N=1$ SUSY preserved by the background geometry and
flux, one might expect that anti-brane will generate incalculable
SUSY breaking terms in the low energy effective lagrangian.
However as was noticed in \cite{choi1} and will be discussed in
more detail in this paper, the SUSY breaking soft terms in KKLT
compactification can be computed within a reliable approximation
scheme, essentially because the anti-brane is red-shifted
by a small warp factor $e^{2A}\sim m_{3/2}/M_{Pl}$.
In this paper, we wish to examine the implications of anomalous
$U(1)_A$ for SUSY breaking while incorporating the stabilization
of the GS modulus explicitly. Since one of our major concerns is
the KKLT stabilization of the GS modulus, in section 2 we review
the 4D effective action of KKLT compactification and discuss some
features such as the $D$-type spurion dominance and the
sequestering of the SUSY breaking by red-shifted anti-brane which
is a key element of the KKLT compactification. In section 3, we
discuss the mass scales, $F$ and $D$ terms in generic models of
anomalous $U(1)_A$.
In section 4, we
examine in detail a model for the KKLT stabilization of the GS
modulus and the resulting pattern of soft terms. Section 5 is the
conclusion.
The following is a brief summary of our results.
The GS modulus-axion superfield $T$ transforms under
$U(1)_A$ as \begin{eqnarray} T\rightarrow T-i\alpha(x)\frac{\delta_{GS}}{2}\,, \end{eqnarray}
where $\alpha(x)$ is the $U(1)_A$ transformation function and
$\delta_{GS}$ is a constant of ${\cal O}(1/8\pi^2)$ when $T$ is
normalized as $\partial_T f_a={\cal O}(1)$ for the holomorphic
gauge kinetic functions $f_a$. There are two mass scales that
arise from the non-linear transformation of $T$:
\begin{eqnarray}
\xi_{FI}&=& \frac{\delta_{GS}}{2}\,\partial_TK_0,
\nonumber \\
M_{GS}^2&=& \frac{\delta_{GS}^2}{4}\,\partial_T\partial_{\bar{T}}K_0,
\end{eqnarray} where $\xi_{FI}$ is the FI $D$-term and $M_{GS}^2$
corresponds to the GS axion contribution to the $U(1)_A$ gauge
boson mass-square
\begin{eqnarray}
M_A^2=2g_A^2M_{GS}^2+{\cal O}(|\xi_{FI}|)
\end{eqnarray}
for the K\"ahler potential $K_0$
and the $U(1)_A$ gauge coupling $g_A$. (Unless
specified, we will use the convention $M_{Pl}=1$ throughout this
paper.) Then the $U(1)_A$ $D$-term is bounded as
\begin{eqnarray} |D_A|\,\lesssim\, {\cal O}(m_{3/2}^2M_{Pl}^2/M_A^2)
\end{eqnarray} for SUSY breaking scenarios with $m_{3/2}\ll M_A$.
It has been pointed out \cite{BKQ} that the $D$-term potential
$V_D=\frac{1}{2}g_A^2D_A^2$ in models with anomalous $U(1)_A$
might play the role of an uplifting potential which compensates
the negative vacuum energy density $-3m_{3/2}^2M_{Pl}^2$ in the
supergravity potential. As the K\"ahler metric of $T$ typically
has a vacuum expectation value of order unity, we have
$M_{GS}^2\sim M_{Pl}^2/(8\pi^2)^2$. Then, since the $U(1)_A$ gauge
boson mass-square $M_A^2\gtrsim {\cal O}(M_{GS}^2)$, the above
bound on $D_A$ implies that $V_D$ is too small to be an uplifting
potential in SUSY breaking scenarios with $m_{3/2}<
M_{Pl}/(8\pi^2)^2$. In other words, models of moduli stabilization
in which $V_D$ plays the role of an uplifting potential for dS
vacuum generically predict a rather large $m_{3/2}\gtrsim {\cal
O}(M_{Pl}/(8\pi^2)^2)$ \cite{casas}. On the other hand, in view of
that the gaugino masses receive the anomaly mediated contribution
of ${\cal O}(m_{3/2}/8\pi^2)$, one needs $m_{3/2}\lesssim {\cal
O}(8\pi^2)$ TeV
in order to realize the supersymmetric
extension of the standard model at the TeV scale.
As a result,
models with anomalous $U(1)_A$
still need an uplifting mechanism different from the $D$-term uplifting, e.g. the anti-brane uplifting
of KKLT or a hidden matter superpotential suggested in \cite{nilles}, if $m_{3/2}$ is small enough to give the weak scale superparticle masses.
Still $D_A$ can give an important contribution to soft masses.
As we will see, the relative importance of this
$D$-term contribution depends on the
size of the ratio \begin{eqnarray} R\equiv \xi_{FI}/M_{GS}^2.\nonumber
\end{eqnarray}
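As a rough numerical illustration (our assumptions: a modulus K\"ahler potential of the no-scale form $K_0=-3\log(T+\bar T)$, $\delta_{GS}=1/8\pi^2$, and an $O(1)$ vev ${\rm Re}(T)=1$), one can evaluate $\xi_{FI}$, $M_{GS}^2$ and $R$ directly:

```python
import numpy as np

delta_GS = 1 / (8 * np.pi**2)   # GS coefficient, O(1/8pi^2)
ReT = 1.0                        # illustrative O(1) modulus vev

# K_0 = -3 log(T + Tbar):  d_T K_0 = -3/(2 ReT),  d_T d_Tbar K_0 = 3/(2 ReT)^2
dK  = -3 / (2 * ReT)
ddK =  3 / (2 * ReT)**2

xi_FI   = (delta_GS / 2) * dK        # FI term (units M_Pl = 1)
M_GS_sq = (delta_GS**2 / 4) * ddK    # GS axion contribution to the U(1)_A mass

R = xi_FI / M_GS_sq                  # = -4/delta_GS for this K_0
print(abs(R), 8 * np.pi**2)          # |R| is a few times 8 pi^2
```

For this choice $|R|=4/\delta_{GS}=4\times 8\pi^2$, i.e. of ${\cal O}(8\pi^2)$ as claimed below.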
If ${\rm Re}(T)$ is a string dilaton or a K\"ahler modulus which
is stabilized at a vacuum expectation value of ${\cal O}(1)$ under
the normalization $\partial_T f_a={\cal O}(1)$,
the resulting $|R|$ is of ${\cal O}(8\pi^2)$. We then find that the
$D$-term contribution to soft masses is generically comparable to
the GS modulus-mediated contribution. In this case of $|R|\gg 1$,
the longitudinal component of the $U(1)_A$ gauge boson comes
mostly from the phase of $U(1)_A$ charged field $X$ with a vacuum
expectation value $\langle X\rangle\sim \sqrt{\xi_{FI}}$, rather
than from the GS axion ${\rm Im}(T)$. Then $T$ is a
flat-direction of the $U(1)_A$ $D$-term potential, thus one needs
a non-trivial $F$-term potential to stabilize $T$. An interesting
possibility is the KKLT stabilization of $T$ involving a hidden
gaugino condensation and also anti-brane for the uplifting
mechanism. In such case, the soft terms are determined by three
contributions mediated at the scales close to $M_{Pl}$: the GS
modulus mediation \cite{KL}, the anomaly mediation \cite{AM} and
the $U(1)_A$ mediation \cite{u1soft}. Generically these three
contributions are comparable to each other, yielding the mirage
mediation pattern of superparticle masses at low energy scale
\cite{choi1,choi2,endo,choi3}. However if the K\"ahler potential
of $X$ is related to the K\"ahler potential of $T$ in a specific
manner, the $U(1)_A$ mediation is suppressed by a small factor of
${\cal O}(1/8\pi^2)$ compared to the other two mediations. Since
the anomaly mediation and the GS modulus mediation remain to be
comparable to each other, the mirage mediation pattern is
unaltered in this special case that the $U(1)_A$ mediation is
relatively suppressed.
In fact, some models of anomalous $U(1)_A$ can yield $|R|\ll 1$.
If $T$ corresponds to a blowing-up modulus of orbifold singularity
stabilized at near the orbifold limit, one can have $|\xi_{FI}|\ll
M_{GS}^2$ \cite{ano,poppitz}, and thus $|R|\ll 1$.
In this limit, soft terms mediated by the GS modulus at
$M_{GS}$ are negligible compared to the soft terms mediated by
a $U(1)_A$ charged field $X$ at the lower scale
$\langle X\rangle\sim \sqrt{\xi_{FI}}$. If $|R|$ is small enough, e.g. $|R|\lesssim
10^{-4}$, the $U(1)_A$ $D$-term contribution is also smaller than the
low scale mediation at $\sqrt{\xi_{FI}}$.
\section{4D effective action of KKLT compactification}
In this section, we review the 4D effective action
of KKLT compactification and
the resulting soft SUSY breaking terms of visible fields.
We also discuss some relevant features of the SUSY breaking by
red-shifted anti-brane which is a key element of the KKLT compactification.
KKLT compactification can be split into two parts. The first part
contains the bulk of (approximate) CY space as well as the $D$
branes of visible matter and gauge fields which are assumed to be
stabilized at a region where the warping is negligible. Note that
the 4D cutoff scale of this part should be somewhat close to
$M_{Pl}$ in order to realize the 4D gauge coupling unification at
$M_{GUT}\sim 2\times 10^{16}$ GeV. The low energy dynamics of this
part can be described by a 4D effective action which takes the
form of conventional 4D $N=1$ SUGRA: \begin{eqnarray} S_{\rm N=1}=\int d^4x
d^2\Theta \,2{\cal E} \left[\,\frac{1}{8}({\bar{\cal D}}^2-8{\cal
R}) \left(3e^{-K/3}\right)+\frac{1}{4}f_a W^{a\alpha}W^a_\alpha
+W\,\right] +{\rm h.c.},\end{eqnarray} where $\Theta^\alpha$ is the Grassmann
coordinate of the curved superspace, ${\cal E}$ is the chiral
density, ${\cal R}$ is the chiral curvature superfield, and $K$,
$f_a$ and $W$ denote the K\"ahler potential, gauge kinetic
function and superpotential, respectively. In the following, we
call this part the $N=1$ sector. The scalar potential of $S_{N=1}$
in the Einstein frame is given by
\begin{eqnarray}
V_{N=1}=e^K\left\{K^{I\bar{J}}(D_I W)(D_J W)^* -3|W|^2\right\}+
\frac{1}{2{\rm Re}(f_a)} D^aD^a,
\end{eqnarray}
where $D_IW=\partial_IW+(\partial_IK)W$ is the K\"ahler covariant derivative
of the superpotential and $D^a=-\eta^I_a\partial_IK$ for the holomorphic Killing vector
$\eta^I_a$ of the $a$-th gauge transformation
of $\Phi^I$.
In KKLT compactification, the $N=1$ sector is assumed to have a
supersymmetric AdS vacuum\footnote{Note that
$D^a=-\eta^I_aD_IW/W$, so $D_IW=0$ leads to $D^a=0$ for $W\neq 0$.}, i.e.
\begin{eqnarray}
\label{susyads}
\langle D_IW\rangle_{N=1}=0,\quad
\langle V_{N=1}\rangle=-3m_{3/2}^2M_{Pl}^2.
\end{eqnarray}
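Indeed, at a point with $D_IW=0$ (and hence $D^a=0$) the positive terms of $V_{N=1}$ drop out, and with the standard identification $m_{3/2}^2=e^{K}|W|^2$ (in units $M_{Pl}=1$) one finds

```latex
$$
\langle V_{N=1}\rangle=-3e^{K}|W|^2=-3m_{3/2}^2M_{Pl}^2.
$$
```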
The remaining part of KKLT compactification is anti-brane which
is stabilized at the end of a warped throat.
The SUSY preserved by anti-brane does not have any
overlap with the $N=1$ SUSY preserved by the background geometry
and flux.
As a consequence,
the field degrees of freedom on
anti-brane do not have $N=1$ superpartner in general.
For instance, the Goldstino fermion $\xi^\alpha$ of the broken $N=1$ SUSY
which originates from anti-brane does not have bosonic
$N=1$ superpartner. This means that the $N=1$ local SUSY is
non-linearly realized on the world-volume of anti-brane.
Still the anti-brane action can be written in a locally supersymmetric
superspace form
using the Goldstino superfield \cite{SW}:
\begin{eqnarray}
\label{supergold}
\Lambda^\alpha=\xi^\alpha+\Theta^\alpha+...,
\end{eqnarray}
where the ellipsis denotes the $\xi^\alpha$-dependent higher order terms.
In the unitary gauge of $\xi^\alpha=0$, the anti-brane action
appears to break the $N=1$ SUSY explicitly. Generic explicit SUSY
breaking relevant for the soft terms of visible fields is
described by three spurion operators: $D$-type spurion operator
$\tilde{\cal P}\Theta^2\bar{\Theta}^2$, $F$-type non-chiral
spurion operator $\tilde{\Gamma}\bar{\Theta}^2$, and $F$-type chiral
spurion operator $\tilde{\cal F}\Theta^2$. Then the local
lagrangian density on the world volume of anti-brane can be
written as \begin{eqnarray} {\cal L}_{\rm anti}&=&\delta^6(y-\bar{y})\int
d^2\Theta 2\,{\cal E}\left[\,\frac{1}{8}({\bar{\cal D}}^2-8{\cal
R}) \Big(\,e^{4A}\tilde{\cal
P}\,\Theta^2\bar{\Theta}^2+e^{3A}\tilde{\Gamma}\,\bar{\Theta}^2\,\Big)
\right.\nonumber \\
&&\left. \qquad\qquad\qquad\qquad\quad -\,e^{4A}\tilde{\cal
F}\,\Theta^2+...\,\right] +{\rm h.c.}, \end{eqnarray} where $\bar{y}$ is the
coordinate of the anti-brane in six-dimensional internal space,
$e^{2A}$ is the warp factor on the anti-brane world volume: \begin{eqnarray}
ds^2(\bar{y})=e^{2A}g_{\mu\nu}dx^\mu dx^\nu, \end{eqnarray} and the ellipsis
stands for the Goldstino-dependent terms which are not so relevant
for us. Generically $\tilde{\cal P}$, $\tilde{\Gamma}$ and
$\tilde{\cal F}$ have a value of order unity in the unit with
$M_{Pl}=1$ (or in the unit with the string scale $M_{\rm st}=1$).
The warp factor dependence of each spurion operator can be easily
determined by noting that $\tilde{\cal P}\Theta^2\bar{\Theta}^2$
and $\tilde{\cal F}\Theta^2$ give rise to an anti-brane energy
density which is red-shifted by $e^{4A}$,
while $\tilde{\Gamma}\bar{\Theta}^2$ gives rise to a gravitino mass on the anti-brane world volume
which is red-shifted by $e^{3A}$.
(See the discussion of Appendix A for this red-shift of gravitino mass.)
Including the Goldstino fermion explicitly,
the spurion operators in ${\cal L}_{\rm anti}$ can be written in a locally supersymmetric form, e.g.
\begin{eqnarray}
\tilde{\cal P}\Lambda^2\bar{\Lambda}^2&=& \tilde{\cal P}\Theta^2\bar{\Theta}^2+...,
\nonumber \\
\tilde{\Gamma}\bar{\Lambda}^2&=& \tilde{\Gamma}\bar{\Theta}^2+...,
\nonumber \\
\tilde{\cal F}\tilde{W}^\alpha\tilde{W}_\alpha&=&\tilde{\cal
F}\Theta^2+..., \end{eqnarray} where $\tilde{W}_\alpha=
\frac{1}{8}(\bar{\cal D}^2-8{\cal R}){\cal
D}_\alpha(\Lambda^2\bar{\Lambda}^2)$ and the ellipses denote the
Goldstino-dependent terms.
The SUSY breaking spurions on the world volume of anti-brane can be transmitted
to the visible $D$-branes by a bulk field propagating through the warped throat.
The warp factor dependence of spurions allows us to
estimate the size of SUSY breaking induced by each spurion without
knowing the detailed mechanism of transmission.
In addition to giving a vacuum energy density of ${\cal O}(e^{4A}M_{Pl}^4)$, the $D$-type spurion
$\tilde{\cal P}\Theta^2\bar{\Theta}^2$
can generate SUSY breaking scalar mass-squares of ${\cal O}(e^{4A}M_{Pl}^2)$
through the effective operator $e^{4A}\tilde{\cal P}\Theta^2\bar{\Theta}^2Q^{i*}Q^i$
which might be induced by the exchange of bulk fields, where
$Q^i$ denote the visible matter superfields.
The non-chiral $F$-type spurion $\tilde{\Gamma}\bar{\Theta}^2$
might generate trilinear scalar couplings of ${\cal
O}(e^{3A}M_{Pl})$ through the effective operator $e^{3A}\tilde{\Gamma}\bar{\Theta}^2Q^{i*}Q^i$,
while the chiral $F$-type spurion $\tilde{\cal
F}\Theta^2$ might generate gaugino masses of ${\cal O}(e^{4A}M_{Pl})$ through the effective
chiral operator $e^{4A}\tilde{\cal
F}\Theta^2W^{a\alpha}W^a_\alpha$. When combined with its
complex conjugate or with the $F$-component of $N=1$ sector
moduli, $\tilde{\Gamma}\bar{\Theta}^2$ can generate a vacuum
energy density of ${\cal O}(e^{6A}M_{Pl}^4)$ or ${\cal
O}(e^{3A}m_{3/2}M_{Pl}^3)$, and scalar mass-squares of ${\cal
O}(e^{6A}M_{Pl}^2)$ or ${\cal O}(e^{3A}m_{3/2}M_{Pl})$. Similarly,
the chiral $F$-type spurion $\tilde{\cal F}\Theta^2$ can generate
a vacuum energy density and scalar mass-squares, but they are
suppressed by one more power of $e^A$ compared to the contribution
from $\tilde{\Gamma}\bar{\Theta}^2$. In case with $e^{A}\sim 1$,
all spurions give equally important contributions of the Planck
scale size, leading to uncontrollable SUSY breaking. On the other
hand, in case that $e^{A}\sim \sqrt{m_{3/2}/M_{Pl}}$, which is in fact
required for the anti-brane energy density to cancel
the negative vacuum energy density (\ref{susyads}) of the $N=1$
sector, SUSY breaking terms which originate from the $F$-type spurions are
negligible compared to the terms which originate from the $D$-type spurion
since they are suppressed by additional power of $e^A\sim
\sqrt{m_{3/2}/M_{Pl}}$.
For instance, in the presence of the $D$-type spurion providing a vacuum energy
density of ${\cal O}(m_{3/2}^2M_{Pl}^2)$,
there are always the anomaly-mediated soft masses of ${\cal O}(m_{3/2}/8\pi^2)$
which are much bigger than the soft masses induced by the $F$-type spurions
when $e^A\ll 1/8\pi^2$.
Note that $e^A\sim\sqrt{m_{3/2}/M_{Pl}}\lesssim 10^{-6}$ for
$m_{3/2}\lesssim {\cal O}(8\pi^2)$ TeV which is necessary to
get the weak scale SUSY. Obviously, this feature of $D$-type
spurion dominance greatly simplifies the SUSY breaking by
red-shifted anti-brane.
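The numbers quoted here are easy to reproduce (taking $m_{3/2}=8\pi^2$ TeV, the upper end of the weak-scale window, as an illustration):

```python
import math

M_Pl = 2.4e18                    # reduced Planck mass in GeV
m32  = 8 * math.pi**2 * 1e3      # m_{3/2} ~ 8 pi^2 TeV, in GeV

eA = math.sqrt(m32 / M_Pl)       # warp factor e^A ~ sqrt(m_{3/2}/M_Pl)
print(eA)                        # ~ 2e-7, below 1e-6
print(eA < 1 / (8 * math.pi**2)) # and far below the loop factor 1/8pi^2
```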
In addition to the Goldstino fermion,
there can be other anti-brane fields, e.g. the anti-brane position moduli
$\tilde{\phi}$.\footnote{There can be also anti-brane gauge field $\tilde{A}_\mu$.
However $\tilde{A}_\mu$ is not relevant for the transmission of SUSY breaking to the visible sector,
thus will be ignored.}
The anti-brane moduli also do not have $N=1$
superpartner, however one can construct the corresponding
Goldstino-dependent superfields as
\begin{eqnarray} \tilde{\Phi}=\tilde{\phi}+i(
\Theta\sigma^\mu\bar{\xi}-\xi\sigma^\mu\bar{\Theta})\partial_\mu\tilde{\phi}+...\end{eqnarray}
The anti-brane lagrangian density including $\tilde{\Phi}$ and
also the bulk moduli $\Phi$ which can have a local interaction on
the world volume of anti-brane can be written as \begin{eqnarray}
\label{anti-action}
{\cal L}_{\rm anti}&=&\delta^6(y-\bar{y})\int d^2\Theta
\,2e^{3A}{\cal E} \left[\,\frac{1}{8}e^{-A}\Big({\bar{\cal
D}}^2-8{\cal R}\Big)\Omega_{\rm anti}(Z_A,Z_A^*)\, \right]+{\rm
h.c.}, \end{eqnarray} where $\Omega_{\rm anti}$ is a function of
$Z_A=\Big\{\,e^{A/2}\Lambda^\alpha, e^{-A/2}{\cal D}_\alpha,
e^{-A}{\cal R}, \tilde{\Phi}, \Phi \,\Big\}$. Here the warp factor
dependence of ${\cal L}_{\rm anti}$ is determined by the Weyl
weights of the involved superfields. Taking into account that the
$F$-type spurions can be ignored in case of $e^{A}\sim
\sqrt{m_{3/2}/M_{Pl}}$, $\Omega_{\rm anti}$ can be approximated as \begin{eqnarray}
\Omega_{\rm anti}&\simeq
&e^{2A}\Lambda^2\bar{\Lambda}^2\left[\tilde{\cal P}(\Phi,\Phi^*)
+\frac{1}{16}e^{-2A}Z_{\tilde{\Phi}}(\Phi,\Phi^*)\tilde{\Phi}^*
\bar{\cal D}^2{\cal
D}^2\tilde{\Phi}+M_{\tilde{\Phi}}^2(\Phi,\Phi^*)\tilde{\Phi}^*\tilde{\Phi}
\right], \end{eqnarray} where $Z_{\tilde{\Phi}}={\cal O}(1)$,
$M_{\tilde{\Phi}}={\cal O}(M_{Pl})$, and
$\langle\tilde{\Phi}\rangle$ is chosen to be zero. This shows that
the anti-brane moduli masses are generically of ${\cal
O}(\sqrt{m_{3/2}M_{Pl}})$. Since it is confined on the world
volume of anti-brane, $\tilde{\Phi}$ cannot be a messenger of
SUSY breaking, so can be integrated out without affecting the
local SUSY breaking in the visible sector. Then, after integrating
out the KK modes of bulk fields as well as the anti-brane moduli $\tilde{\Phi}$,
the 4D effective action induced by $\Omega_{\rm anti}$ takes the form:
\begin{eqnarray} S^{(4D)}_{\rm anti} = \frac{1}{8}\int
d^4xd^2\Theta\, 2{\cal E}\,(\bar{\cal D}^2-8{\cal R})
\left(\tilde{\cal P}
(\Phi,\Phi^*)+\tilde{\cal Y}_i(\Phi,\Phi^*)Q^{i*}Q^i \right)
e^{4A}\Lambda^2\bar{\Lambda}^2+{\rm h.c.}
\end{eqnarray}
Note that the contact interaction between
$e^{4A}\Lambda^2\bar{\Lambda}^2$ and $Q^{i*}Q^i$ was not allowed in
$\Omega_{\rm anti}$ because $\Lambda^\alpha$ and the visible matter superfields
$Q^i$ live on different branes
which are geometrically separated from each other.
Thus, if $\tilde{\cal Y}_i\neq 0$,
it should be a consequence of
the exchange of bulk fields which couple to both $e^{4A}\Lambda^2\bar{\Lambda}^2$
(on anti-brane) and $Q^{i*}Q^i$ (on the $D$-branes of visible fields).
Possible phenomenological consequences of $S^{(4D)}_{\rm anti}$
are rather obvious. The Goldstino operator $e^{4A}\tilde{\cal
P}\Lambda^2\bar{\Lambda}^2$ gives rise to an uplifting potential
of ${\cal O}(m_{3/2}^2M_{Pl}^2)$ which would make the total vacuum
energy density to be nearly vanishing. In the following, we will
call this Goldstino operator the uplifting operator.\footnote{In
fact, this corresponds to the superspace expression of the
Volkov-Akulov Goldstino lagrangian density.} The uplifting
potential induces also a SUSY-breaking shift of the vacuum
configuration (\ref{susyads}), which would result in nonzero
vacuum values of $F^I$ and $D^a$. The effective contact
interaction between $Q^i$ and ${\Lambda}^\alpha$ gives soft
SUSY-breaking sfermion mass-squares of ${\cal O}(\tilde{\cal
Y}_im_{3/2}^2)$. Note that the features of the 4D effective action of anti-brane which
have been discussed so far rely {\it only} on that anti-brane
is red-shifted by the warp factor $e^{A}\sim \sqrt{m_{3/2}/M_{Pl}}$, thus are valid
for generic KKLT
compactification.
Since the scalar masses from the effective contact
term $e^{4A}\Lambda^2\bar{\Lambda}^2Q^{i*}Q^i$ in $S^{(4D)}_{\rm anti}$
can be phenomenologically important, let us consider in what
situation this contact interaction can be generated. The warped
throat in KKLT compactification has approximately the geometry of
$T_5\times{\rm AdS}_5$ where $T_5$ is a compact 5-manifold which
is topologically $S^2\times S^3$. In the limit that the radius of
$T_5$ is small compared to the length of the warped throat,
the transmission of SUSY breaking through the throat
can be described by a supersymmetric 5D Randall-Sundrum (RS) model
\cite{RS} with visible $D$-branes at the UV fixed point
($y=0$) and
anti-brane at the IR fixed point ($y=\pi$) \cite{hebecker}.
Let us thus examine the possible generation of the effective contact
term within the framework of
the supersymmetric 5D RS model.
It has been
noticed that the 5D bulk SUGRA multiplet does not generate a
contact interaction between a UV superfield and an IR superfield at
tree level \cite{luty}.\footnote{In CFT interpretation, this might
correspond to the conformal sequestering discussed in
\cite{conformal}.} Loops of 5D SUGRA fields generate such a contact
interaction; however, the resulting coefficient $\tilde{\cal Y}_i$
is suppressed by the warp factor $e^{2A}$
\cite{rattazzi}, and is thus
negligible.\footnote{Note the difference in
the SUSY-breaking IR brane operator between our case and \cite{rattazzi}.
In our case, the SUSY breaking IR brane operator is given by
$e^{4A}\Lambda^2\bar{\Lambda}^2$ for the Goldstino superfield
$\Lambda^\alpha$ normalized as (\ref{supergold}), while
the SUSY breaking IR brane operator of \cite{rattazzi} is $e^{2A}Z^*Z$
for a $N=1$ chiral IR brane superfield $Z$ with nonzero $F^Z$.}
In fact, in order to generate the contact interaction
$e^{4A}\Lambda^2\bar{\Lambda}^2Q^{i*}Q^i$ in 4D effective action,
one needs a bulk field $B$ other than the 5D SUGRA multiplet which has
a non-derivative coupling in $N=1$ superspace to both
$e^{4A}\Lambda^2\bar{\Lambda}^2$ at the IR fixed point and $Q^{i*}Q^i$
at the UV fixed point.
Since the SUGRA multiplet is not crucial for the following discussion,
we will use the rigid $N=1$ superspace for simplicity, and then
the required fixed point couplings of $B$ can be written as
\begin{eqnarray}
\int d^2\theta d^2\bar{\theta} \Big[
\delta(y)g_B BQ^{i*}Q^i+\delta(y-\pi)g_B^\prime e^{4A}B\Lambda^2\bar{\Lambda}^2
+{\rm h.c.}\Big],
\end{eqnarray}
where $\theta^\alpha$ is the Grassmann coordinate of the rigid $N=1$ superspace.
If $B$ is a
chiral superfield in $N=1$ superspace,
the effective contact interaction between $Q^{i*}Q^i$ and
$e^{4A}\Lambda^2\bar{\Lambda}^2$ induced by the exchange of $B$
is suppressed by the superspace derivative $\bar{\cal D}^2$.
This can easily be seen from the fact that the effective contact interaction arises
from the part of the solution of $B$ which is proportional to the UV brane source $Q^{i*}Q^i$
or the IR brane source $e^{4A}\Lambda^2\bar{\Lambda}^2$.
Since the brane sources are non-chiral, this part of the
solution should include the chiral projection operator $\bar{\cal D}^2$.
As a result, the coefficient of the induced contact interaction
is given by $\tilde{\cal Y}_i \sim g_Bg^\prime_B\bar{\cal D}^2/k$
where $k$ is the AdS curvature which is essentially of
${\cal O}(M_{Pl})$. Since $\bar{\cal D}^2/k$ leads to an additional suppression by
$m_{3/2}/M_{Pl}$,
the contact interaction induced by
chiral bulk superfield gives at most a contribution of ${\cal O}(m_{3/2}^3/M_{Pl})$
to the soft scalar mass-squares of $Q^i$ when $e^{A}\sim \sqrt{m_{3/2}/M_{Pl}}$, which is totally negligible.
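To see how negligible this is, one can compare the ${\cal O}(m_{3/2}^3/M_{Pl})$ contribution numerically with the anomaly-mediated contribution of ${\cal O}((m_{3/2}/8\pi^2)^2)$; the mass values below are assumed for illustration only:

```python
import math

# Assumed illustrative values (not taken from the text):
M_Pl = 2.4e18   # reduced Planck mass in GeV
m_32 = 1.0e4    # gravitino mass in GeV (~10 TeV)

# Scalar mass-square from chiral bulk superfield exchange, O(m_{3/2}^3/M_Pl):
m2_chiral = m_32**3 / M_Pl
# Anomaly-mediated scalar mass-square, O((m_{3/2}/8 pi^2)^2):
m2_anomaly = (m_32 / (8 * math.pi**2))**2

print(f"chiral-exchange  m^2 ~ {m2_chiral:.1e} GeV^2")
print(f"anomaly-mediated m^2 ~ {m2_anomaly:.1e} GeV^2")
print(f"ratio ~ {m2_chiral / m2_anomaly:.1e}")
```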
On the other hand, if $B$ is a vector superfield in $N=1$ superspace,
there is no such suppression by the chiral projection operator, so
the resulting $\tilde{\cal Y}_i$ can be sizable in certain cases.
To examine the contact term induced by a bulk vector superfield in more detail, one can consider
the 5D lagrangian of $B$ which contains
\begin{eqnarray}
\int d^2\theta d^2\bar{\theta}
&\Big[&\frac{1}{8}B{\cal D}_\alpha\bar{\cal D}^2{\cal D}^\alpha B+M_B^2e^{-2kLy}B^2
\nonumber \\
&&+\,\delta(y)g_B BQ^{i*}Q^i+\delta(y-\pi)g_B^\prime e^{-4kLy}B\Lambda^2\bar{\Lambda}^2
\,\Big],
\end{eqnarray}
where
$e^{-kLy}$ is the position dependent warp factor in
AdS$_5$, $L$ is the orbifold length, and $M_B$ is the 5D mass of the vector superfield $B$.
($e^{-\pi kL}=e^A$ in this convention.)
The warp factor dependence of each term in the above 5D lagrangian can be determined by looking at
the dependence on the background spacetime metric.
Note that the UV brane coupling $g_B$ (the IR brane coupling $g_B^\prime$)
corresponds to the gauge coupling between the 4D vector
field component of $B$ and the 4D current component of
the UV brane operator $Q^{i*}Q^i$ (the IR brane operator $\Lambda^2\bar{\Lambda}^2$).
5D locality and dimensional analysis suggest that
the contact term obtained by integrating out $B$ has a coefficient
$\tilde{\cal Y}_i\propto e^{-\pi M_B L}$ in the limit $M_B\gg k$.
Indeed, for $M_B\gtrsim k$, a more careful analysis \cite{kim} gives
\begin{eqnarray}
\tilde{\cal Y}_i\,\sim\, {g_Bg^\prime_Be^{-\pi (\sqrt{M_B^2+k^2}-k)L}}/{M_B^2}.
\end{eqnarray}
This result indicates that a sizable contact term can be induced
if the model contains a vector superfield $B$ propagating through the warped throat
with bulk mass $M_B\lesssim k$ and also sizable $g_B$ and $g^\prime_B$.
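The strength of this suppression can be illustrated numerically. The value of $\pi kL$ below is an assumption chosen so that $e^{-\pi kL}=e^A\sim 10^{-7}$, as appropriate for a TeV-scale gravitino mass:

```python
import math

# Assumed: e^A = e^{-pi k L} ~ 1e-7, i.e. pi*k*L ~ 16 (illustrative).
pikL = -math.log(1e-7)

def suppression(MB_over_k):
    """Throat suppression factor exp(-pi*(sqrt(M_B^2+k^2)-k)*L)
    as a function of M_B/k."""
    return math.exp(-pikL * (math.sqrt(MB_over_k**2 + 1.0) - 1.0))

# A light bulk vector (M_B << k) is essentially unsuppressed,
# while a heavy one (M_B >~ k) is killed by the warping:
for r in (0.1, 0.5, 1.0, 2.0):
    print(f"M_B/k = {r}: suppression ~ {suppression(r):.2e}")
```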
In KKLT compactification
of Type IIB string theory, one does not have such a bulk vector superfield, thus
it is expected that $\tilde{\cal Y}_i$ is negligibly small,
i.e. the anti-brane is well sequestered from the
$D$-branes of visible fields. In fact, in KKLT compactification of Type
IIB string theory, one finds that $\tilde{\cal P}$ is independent of
the CY volume modulus \cite{choi1}, thus even the CY volume modulus is
sequestered from the anti-brane. This is not surprising in view of the fact that
the wavefunction of the volume modulus has a negligible value over
the throat, thus the volume modulus can be identified as a UV
brane field in the corresponding RS picture \cite{hebecker}. In
the following, we will assume that $Q^i$ and $\Lambda^\alpha$ are
sequestered from each other, thus
\begin{eqnarray}
\tilde{\cal Y}_i=0.
\end{eqnarray}
We stress
that this sequestering assumption is relevant {\it only} for the
soft scalar masses. The other SUSY breaking observables such as
the gaugino masses and trilinear scalar couplings are {\it not}
affected even when $\tilde{\cal Y}_i$ has a sizable value.
According to the above discussion, the 4D effective action of the anti-brane is strongly dominated
by the uplifting operator:
\begin{eqnarray} S^{(4D)}_{\rm anti} \,\simeq \,\frac{1}{8}\int
d^4xd^2\Theta\, 2{\cal E}\,(\bar{\cal D}^2-8{\cal R})
\Big(\,e^{4A}\tilde{\cal
P}(\Phi,\Phi^*)\Lambda^2\bar{\Lambda}^2\,\Big) +{\rm h.c.}, \end{eqnarray}
and then the total 4D effective action of KKLT compactification is
given by \begin{eqnarray} \label{superaction} S_{\rm
KKLT}&=&S_{\rm N=1}+S^{(4D)}_{\rm anti} \nonumber \\
&=& \int d^4xd^2\Theta \,2 {\cal E} \left[\,\frac{1}{8}\big(\bar{\cal
D}^2-8{\cal R}\big) \Big(\,3e^{-K/3}+{\cal
P}\Lambda^2\bar{\Lambda}^2\,\Big) \right.
\nonumber \\
&& \left.\qquad\qquad\qquad +\,\frac{1}{4}f_a
W^{a\alpha}W^a_\alpha + W \,\right]+{\rm h.c.}, \end{eqnarray} where \begin{eqnarray}
{\cal P}(\Phi,\Phi^*)=e^{4A}\tilde{\cal P}(\Phi,\Phi^*)={\cal
O}(m_{3/2}^2M_{Pl}^2). \end{eqnarray} Since the vacuum expectation value of
${\cal P}$ can be fixed by the condition of vanishing cosmological
constant, the above 4D effective action is almost as
predictive as the conventional $N=1$ SUGRA without the anti-brane
term ${\cal P}\Lambda^2\bar{\Lambda}^2$. This nice feature of KKLT
compactification is essentially due to the fact that the anti-brane is highly
red-shifted.
In fact, for the discussion of moduli stabilization at a nearly
flat dS vacuum and the subsequent SUSY breaking in the visible
sector, the SUGRA multiplet can simply be replaced by its vacuum
expectation values, e.g. $g_{\mu\nu}=\eta_{\mu\nu}$ and
$\psi_\mu=0$, {\it except for} its scalar auxiliary component
${\cal M}$ whose vacuum expectation value should be determined by
minimizing the scalar potential. The most convenient formulation
for the SUSY breaking by ${\cal M}$ is to introduce the chiral
compensator superfield $C$, then choose the superconformal gauge
${\cal M}=0$ to trade ${\cal M}$ for $F^C$, and finally replace the SUGRA multiplet by
its vacuum values, while making the superconformal gauge choice
in the rigid superspace:
\begin{eqnarray} C=C_0+\theta^2F^C. \end{eqnarray}
In the unitary gauge,
this procedure corresponds to the following replacements for the
superspace action: \begin{eqnarray} && \Lambda^\alpha \,\rightarrow\,
\frac{C^*}{C^{1/2}}\theta^\alpha,
\nonumber \\
&& W^{a\alpha}\,\rightarrow\, C^{-3/2}W^{a\alpha},\nonumber \\
&& d^2\Theta 2{\cal E}\,\rightarrow\, d^2\theta C^3, \nonumber \\
&& -\frac{1}{4}d^2\Theta 2{\cal E}(\bar{\cal D}^2-8{\cal R}) \,\rightarrow\,
d^2\theta d^2{\bar{\theta}}CC^*,
\end{eqnarray}
under which the locally supersymmetric action (\ref{superaction})
is changed to
\begin{eqnarray}
\label{rigid-action} S_{\rm KKLT}&=& \int d^4x \left[
\,\int d^2\theta d^2\bar{\theta} \,CC^*
\left(\,-3e^{-K/3}-CC^*{\cal
P}\theta^2\bar{\theta}^2\,\right)\right. \nonumber \\
&&\left. \qquad+\left(\int d^2\theta \left(\,
\frac{1}{4}f_aW^{a\alpha}W^a_\alpha+C^3W\,\right)+{\rm
h.c.}\right)\, \right]. \end{eqnarray}
Although written in the rigid superspace, the action
(\ref{rigid-action}) includes all SUGRA effects on SUSY breaking.
Also, as it has been derived from the locally supersymmetric action
without any inconsistent truncation, it provides a fully
consistent low energy description of KKLT compactification which
contains a red-shifted anti-brane.
It is obvious
that the uplifting anti-brane operator ${\cal
P}\theta^2\bar{\theta}^2$ does not modify the solutions for the
auxiliary components of $N=1$ superfields, thus \begin{eqnarray}
\frac{F^C}{C_0} &=& \frac{1}{3}F^I\partial_IK
+ \frac{C_0^{*2}}{C_0}\, e^{K/3}W^*
\,=\, \frac{1}{3}F^I\partial_IK+m_{3/2}^*, \nonumber \\
F^I&=& -\frac{C^{*2}_0}{C_0}\, e^{K/3}K^{I\bar{J}}(D_J W)^*
\,=\,-e^{K/2}K^{I\bar{J}}(D_JW)^*, \nonumber \\
D^a &=& -C_0C_0^* e^{-K/3}\eta_a^I\partial_IK
\,=\,-\eta^I_a\partial_IK, \end{eqnarray} where $m_{3/2}=e^{K/2}W$ and we
have chosen the Einstein frame condition $C_0=e^{K/6}$ for the
last expressions. Here the index $I$ stands for a generic chiral
superfield $\Phi^I$, and $\eta^I_a$ is the holomorphic Killing
vector for the infinitesimal gauge transformation: \begin{eqnarray}
\delta_a\Phi^I=i\alpha_a(x)\eta_a^I. \end{eqnarray} Although it does not
modify the on-shell expression of the auxiliary components of the
$N=1$ superfields, the uplifting operator provides an additional
scalar potential $V_{\rm lift}$ which plays the role of an
uplifting potential in KKLT compactification: \begin{eqnarray} V_{\rm TOT}
=V_F+V_D+V_{\rm lift}, \end{eqnarray} where
\begin{eqnarray}
V_F &=& (C_0C_0^*)^2\, e^{K/3} \left\{K^{I\bar{J}}(D_I W)(D_J W)^*
-3|W|^2\right\} \nonumber \\
&=& e^K\left\{K^{I\bar{J}}(D_I W)(D_J W)^* -3|W|^2\right\},
\nonumber \\
V_D &=& \frac{1}{2{\rm Re}(f_a)} D^a D^a, \nonumber \\ V_{\rm
lift} &=& (C_0C_0^*)^2\, {\cal P}\,=\, e^{2K/3}{\cal P}, \end{eqnarray}
where again the last expressions correspond to
the results in the Einstein frame with $C_0=e^{K/6}$.
Let us now consider the KKLT stabilization of CY moduli $\Phi$ and
the resulting soft SUSY breaking terms of visible fields using the
4D effective action (\ref{superaction}) or equivalently
(\ref{rigid-action}). In the first stage, $\Phi$ is stabilized at
the SUSY AdS minimum $\Phi_0$ of $V_{N=1}=V_F+V_D$ for which \begin{eqnarray}
D_I W(\Phi_0)=0, \quad W(\Phi_0)\neq 0. \end{eqnarray} The moduli masses at
this SUSY AdS vacuum are dominated by the supersymmetric
contribution which is presumed to be significantly larger than the
gravitino mass: \begin{eqnarray} |m_\Phi|\gg |m_{3/2}|,\end{eqnarray} where
\begin{eqnarray} \label{modulimass} m_{\Phi} \simeq
-\left(\frac{e^{K/2}\partial_\Phi^2W}{\partial_\Phi\partial_{\bar{\Phi}}K}\right)_{\Phi_0}.\end{eqnarray}
Adding the uplifting potential will shift the moduli vacuum values
while making the total vacuum energy density nearly zero.
Expanding the effective lagrangian of $\Phi$ around $\Phi_0$, one
finds \begin{eqnarray} {\cal
L}_\Phi&=&\partial_\Phi\partial_{\bar{\Phi}}K(\Phi_0)\Big(
|\partial_\mu\Delta\Phi|^2-|m_\Phi|^2|\Delta\Phi|^2\Big)
+3|m_{3/2}|^2M_{Pl}^2
\nonumber \\
&&-\,V_{\rm lift}(\Phi_0)-\Big(\Delta\Phi\partial_\Phi V_{\rm lift}(\Phi_0)+{\rm h.c}\Big)
+...,
\end{eqnarray}
where $\Delta\Phi=\Phi-\Phi_0$.
Then the moduli vacuum shift is determined to be \begin{eqnarray}
\label{shift} \frac{\Delta\Phi}{\Phi_0}\simeq
-\frac{\Phi^*_0\partial_{\bar{\Phi}}V_{\rm
lift}(\Phi_0)}{|\Phi_0|^2\partial_\Phi\partial_{\bar{\Phi}}K(\Phi_0)|m_\Phi|^2}={\cal
O}\left(\frac{m_{3/2}^2}{m_\Phi^2}\right)\end{eqnarray} for
$|\Phi_0|^2\partial_\Phi\partial_{\bar{\Phi}}K(\Phi_0)={\cal
O}(1)$ and $\Phi_0\partial_\Phi V_{\rm lift}(\Phi_0)= {\cal
O}(V_{\rm lift}(\Phi_0))={\cal O}(m_{3/2}^2M_{Pl}^2)$. This vacuum
shift induces a nonzero $F^\Phi$ as \begin{eqnarray} \label{fcomponent}
\frac{F^\Phi}{\Phi}&\simeq& \frac{\Delta\Phi\partial_\Phi
F^\Phi+\Delta\Phi^*\partial_{\bar{\Phi}}F^\Phi}{\Phi} \nonumber
\\
&\simeq& -\frac{e^{K/2}(\partial^2_\Phi
W)^*}{\partial_\Phi\partial_{\bar{\Phi}}K}
\frac{\Delta\Phi^*}{\Phi}
\nonumber \\
&\simeq& -\left(\frac{3\Phi^*\partial_\Phi\ln(V_{\rm
lift})}{|\Phi|^2\partial_\Phi\partial_{\bar{\Phi}} K}
\right)\frac{|m_{3/2}|^2}{m_\Phi} \,=\,{\cal
O}\left(\frac{m_{3/2}^2}{m_\Phi}\right), \end{eqnarray} where we have used
$V_{\rm lift}(\Phi_0)\simeq 3|m_{3/2}|^2M_{Pl}^2$. The above
result implies that heavy CY moduli with $m_\Phi\gg 8\pi^2m_{3/2}$
are not relevant for the low energy SUSY breaking since the
corresponding $F^\Phi/\Phi$ is negligible even compared to the
anomaly mediated soft masses of ${\cal O}(m_{3/2}/8\pi^2)$.
In KKLT compactification, all complex
structure moduli (and the Type IIB dilaton also) are assumed to
get a heavy mass of ${\cal O}(M_{KK}^3/M_{\rm st}^2)$ which is
much heavier than $8\pi^2m_{3/2}$ for $m_{3/2}\lesssim {\cal
O}(8\pi^2)$ TeV. (Here $M_{KK}$ and $M_{\rm st}$ are the CY
compactification scale and the string scale, respectively.) This
means that complex structure moduli (and the Type IIB dilaton) are
{\it not} relevant for the low energy soft terms, thus can be
safely integrated out. On the other hand, the K\"ahler moduli
masses from hidden gaugino condensations are given by $m_\Phi\sim
m_{3/2}\ln(M^2_{Pl}/m^2_{3/2})$, and thus $F^\Phi/\Phi\sim
m_{3/2}/8\pi^2$. As a result, the K\"ahler moduli can be an
important messenger of SUSY breaking and generically their
contributions to soft terms are comparable to the anomaly
mediation \cite{choi1}.
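As an illustrative numerical check (with assumed values of $M_{Pl}$ and $m_{3/2}$), one can verify that $\ln(M_{Pl}^2/m_{3/2}^2)$ is indeed of order $8\pi^2$ for a TeV-scale gravitino mass, so that $F^\Phi/\Phi\sim m_{3/2}^2/m_\Phi$ is comparable to the anomaly-mediated $m_{3/2}/8\pi^2$:

```python
import math

# Assumed illustrative values (not taken from the text):
M_Pl = 2.4e18   # reduced Planck mass in GeV
m_32 = 1.0e4    # gravitino mass in GeV (~10 TeV)

log_factor = math.log(M_Pl**2 / m_32**2)   # ln(M_Pl^2/m_{3/2}^2)
m_Phi = m_32 * log_factor                  # Kahler modulus mass from gaugino condensation
F_over_Phi = m_32**2 / m_Phi               # = m_32 / ln(M_Pl^2/m_32^2)

print(f"ln(M_Pl^2/m_32^2) ~ {log_factor:.0f}  (compare 8*pi^2 ~ {8*math.pi**2:.0f})")
print(f"F^Phi/Phi ~ m_32/{log_factor:.0f}, comparable to anomaly mediation m_32/(8 pi^2)")
```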
Eqs.~(\ref{shift}) and (\ref{fcomponent}) show that one needs
to know how the uplifting operator ${\cal P}$ depends on $\Phi$ in
order to determine $F^\Phi/\Phi$. The above discussion implies
also that only the dependence of ${\cal P}$ on the relatively
light moduli with $m_\Phi\lesssim {\cal O}(8\pi^2m_{3/2})$ is
relevant for the low energy SUSY breaking. In KKLT
compactification of Type IIB string theory, anti-brane is
stabilized at the end of a nearly collapsing 3-cycle. On the other
hand, the messenger K\"ahler moduli correspond to the 4-cycle
volumes, thus their wavefunctions have a negligible value at the
end of the collapsing 3-cycle. This implies that K\"ahler moduli
$\Phi$ are {\it sequestered} from anti-brane, i.e. $\partial_\Phi
\ln{\cal P}\simeq 0$. Indeed, in this case, one finds
$K=-3\ln(T+T^*)$ and $V_{\rm lift}\propto 1/(T+T^*)^2$ for the CY
volume modulus $T$, for which ${\cal P}=e^{-2K/3}V_{\rm lift}$ is
independent of $T$.
On the other hand, in the absence of the warped throat, one finds
$V_{\rm lift}\propto 1/(T+T^*)^3$ and thus ${\cal P}\propto
1/(T+T^*)$, showing that $T$ has a contact interaction with
the anti-brane. This indicates that the presence of the warped throat is
crucial for the sequestering as well as for the necessary
red-shift of the anti-brane. In this paper, although we are mainly
interested in the sequestered anti-brane, we will leave open the
possibility that ${\cal P}$ depends on some messenger moduli.
To derive the expression of the soft SUSY breaking terms of
visible fields, let us expand $K$ and $W$ in powers of the visible
chiral matter fields $Q^i$: \begin{eqnarray} K&=& {\cal
K}_0(\Phi^x,\Phi^{x*},V) +
Z_i(\Phi^x,\Phi^{x*},V)Q^{i*}e^{2q_iV}Q^i, \nonumber \\
W&=&W_0(\Phi^x)+\frac{1}{6}\tilde{\lambda}_{ijk}(\Phi^x)Q^iQ^jQ^k,
\end{eqnarray} where $\Phi^x$ stand for generic messenger superfields of
SUSY breaking, and $V$ is the vector superfield for the gauge fields.
The soft SUSY breaking terms of canonically normalized visible
fields can be written as
\begin{eqnarray}
{\cal L}_{\rm
soft}&=&-\frac{1}{2}M_a\lambda^a\lambda^a-\frac{1}{2}m_i^2|\tilde{Q}^i|^2
-\frac{1}{6}A_{ijk}y_{ijk}\tilde{Q}^i\tilde{Q}^j\tilde{Q}^k+{\rm
h.c.},
\end{eqnarray}
where $\lambda^a$ are gauginos, $\tilde{Q}^i$ is the scalar
component of the superfield $Q^i$, and $y_{ijk}$ are the
canonically normalized Yukawa couplings:
\begin{eqnarray}
y_{ijk}=\frac{\tilde{\lambda}_{ijk}}{\sqrt{e^{-{\cal
K}_0}Z_iZ_jZ_k}}.
\end{eqnarray}
Then from the superspace action (\ref{rigid-action}), one finds
that the soft masses renormalized at just below the GUT threshold
scale $M_{GUT}$ are given by\footnote{Note that these soft terms are
the consequence of either a non-renormalizable interaction suppressed by
$1/M_{Pl}$ or an exchange of messenger field with a mass close to $M_{Pl}$.
As a result, the messenger scale of these soft terms is close to $M_{Pl}$
although the cutoff scale of the dynamical origin of SUSY breaking,
i.e. the anti-brane,
is $e^AM_{Pl}\sim \sqrt{m_{3/2}M_{Pl}}$ which is far below $M_{Pl}$.}
\begin{eqnarray}
\label{soft1} M_a&=& F^x\partial_x\ln\left({\rm
Re}(f_a)\right) +\frac{b_ag_a^2}{8\pi^2}\frac{F^C}{C_0},
\nonumber \\
A_{ijk}&=&
-F^x\partial_x\ln\left(\frac{\tilde{\lambda}_{ijk}}{e^{-{\cal
K}_0}Z_iZ_jZ_k}\right)-
\frac{1}{16\pi^2}(\gamma_i+\gamma_j+\gamma_k)\frac{F^C}{C_0},
\nonumber \\
m_i^2&=& \frac{2}{3}\langle V_F+V_{\rm lift}\rangle -F^x
F^{x*}\partial_x\partial_{\bar{x}}\ln \left(e^{-{\cal
K}_0/3}Z_i\right)
- \Big(q_i+\eta^x\partial_x\ln(Z_i)\Big)g^2\langle
D\rangle\nonumber
\\&&-\,\frac{1}{32\pi^2}\frac{d\gamma_i}{d\ln\mu}\left|\frac{F^C}{C_0}\right|^2
+ \frac{1}{16\pi^2}\left\{ (\partial_{x}{\gamma}_i)
F^x\left(\frac{F^C}{C_0}\right)^* +{\rm h.c.}\right\}
\nonumber \\
&=&\frac{2}{3}\langle V_{\rm lift}\rangle+\Big( \langle
V_F\rangle+ m_{3/2}^2-F^x F^{x*}\partial_x\partial_{\bar{x}}\ln
\left(Z_i\right)\Big) - \Big(q_i+\eta^x\partial_x\ln(Z_i)\Big)g^2\langle
D\rangle \nonumber
\\&&-\,\frac{1}{32\pi^2}\frac{d\gamma_i}{d\ln\mu}\left|\frac{F^C}{C_0}\right|^2
+ \frac{1}{16\pi^2}\left\{ (\partial_x{\gamma}_i)
F^x\left(\frac{F^C}{C_0}\right)^* +{\rm h.c.}\right\},
\end{eqnarray} where $\partial_x=\partial/\partial\Phi^x$ and $F^x$ is the $F$-component
of $\Phi^x$ which can be determined by (\ref{shift}) and
(\ref{fcomponent}) in KKLT moduli stabilization scenario. Here we
have included the anomaly mediated contributions, i.e. the parts
involving $F^C$, and the $D$-term contribution (for $U(1)$ gauge
group under which $\delta\Phi^x=i\alpha\,\eta^x$) as well as the contributions from $F^x$. As we will see in
the next sections, all three of these contributions can be comparable
to each other in models with anomalous $U(1)_A$, and thus should all be
kept. Here $b_a$ and $\gamma_i$ are the one-loop beta
function coefficients and the anomalous dimension of $Q^i$,
respectively, defined by \begin{eqnarray} \frac{dg_a}{d\ln
\mu}=\frac{b_a}{8\pi^2} g_a^3, \qquad \frac{d\ln Z_i}{d\ln
\mu}=\frac{1}{8\pi^2}\gamma_i. \nonumber \end{eqnarray} More explicitly,
\begin{eqnarray} b_a&=&-\frac{3}{2}{\rm tr}\left(T_a^2({\rm
Adj})\right)+\frac{1}{2}\sum_i {\rm tr}\left(T^2_a(Q^i)\right),
\nonumber \\
\gamma_i&=&2C_2(Q^i)-\frac{1}{2}\sum_{jk}|y_{ijk}|^2 \quad
(\,\sum_a g_a^2T_a^2(Q^i)\equiv C_2(Q^i)\bf{1}\,),
\nonumber \\
\partial_x\gamma_i
&=&-\frac{1}{2}\sum_{jk}|y_{ijk}|^2\partial_x\ln\left(
\frac{\tilde{\lambda}_{ijk}}{e^{-{\cal K}_0}Z_iZ_jZ_k}\right)
-2C_2(Q^i)\partial_x\ln\left({\rm Re}(f_a)\right), \end{eqnarray} where
$\omega_{ij}=\sum_{kl}y_{ikl}y^*_{jkl}$ is assumed to be diagonal.
Note that soft scalar masses depend on $\langle V_{\rm
lift}\rangle$, $\langle V_F\rangle$ and $\langle D\rangle$. Since
any of $\langle V_F\rangle$, $\langle V_{\rm lift}\rangle$ and
$\langle D\rangle$ can give an important contribution to $m_i^2$
under the condition of vanishing cosmological constant: \begin{eqnarray}
\langle V_{\rm TOT}\rangle =\langle V_F\rangle +\langle V_{\rm
lift}\rangle +\langle V_D\rangle=0,\end{eqnarray} all of these contributions
should be included with correct coefficients.
\section{Mass scales, $F$ and $D$ terms in 4D SUGRA with anomalous $U(1)$}
In this section, we discuss the mass scales and SUSY breaking $F$
and $D$ terms in 4D effective SUGRA which has an anomalous
$U(1)_A$ gauge symmetry. To apply our results to the KKLT
stabilization of the GS modulus, we will include the uplifting Goldstino superfield
operator ${\cal P}\Lambda^2\bar{\Lambda}^2$ which was discussed in
the previous section. The results for the conventional 4D SUGRA
can be obtained by simply taking
the limit ${\cal P}=0$.
In addition to the visible matter
superfields $\{Q^i\}$, the model contains the MSSM singlet
superfields $\{\Phi^x\}=\{\,T,X^p\,\}$ which can participate in SUSY
breaking and/or $U(1)_A$ breaking,
where $T$ is the GS modulus-axion superfield.
These chiral superfields transform under $U(1)_A$ as \begin{eqnarray}
\label{nonlinear} U(1)_A: \quad \delta_A T=-i\alpha(x)\frac{\delta_{GS}}{2}\,,
\quad \delta_A X^p=i\alpha(x)q_p X^p,
\quad \delta_AQ^i=i\alpha(x)q_iQ^i,\end{eqnarray} where $\alpha(x)$
is the infinitesimal $U(1)_A$ transformation function, and
$\delta_{GS}$ is a constant.
We will choose the normalization of $T$
for which the holomorphic gauge kinetic functions are given by
\begin{eqnarray} f_a=k_aT +T\mbox{-independent part},\end{eqnarray} where $k_a$ are real
(quantized) constants of order unity. Under this normalization, we
need $|\langle T\rangle|\lesssim {\cal O}(1)$ to get the gauge
coupling constants of order unity, and also the cancellation of
anomalies by the $U(1)_A$ variation of $k_a{\rm
Im}(T)F^{a\mu\nu}\tilde{F}^a_{\mu\nu}$ requires
\begin{eqnarray}
\delta_{GS}={\cal O}\left(\frac{1}{8\pi^2}\right).\end{eqnarray}
Models with anomalous
$U(1)_A$ gauge symmetry contain also an approximate {\it global}
$U(1)$ symmetry: \begin{eqnarray}
\label{u1t} U(1)_T: \quad \delta_T T= i\beta,
\quad\delta_T X^p=\delta_TQ^i=0,\end{eqnarray} where $\beta$ is an infinitesimal
constant. Obviously $U(1)_T$ is explicitly broken by
$\delta_Tf_a=ik_a\beta$ as well as by
non-perturbative effects depending on $e^{-cT}$
with an appropriate constant $c$. In some cases, it is more
convenient to consider the following approximate global symmetry
\begin{eqnarray}
\label{u1x} U(1)_X:\quad
\delta_{X}T=0,\quad
\delta_{X}X^p=i\beta q_p X^p,\quad \delta_XQ^i=i\beta
q_iQ^i\end{eqnarray} which is a combination of $U(1)_A$ and $U(1)_T$. The
fact that quantum amplitudes are free from $U(1)_A$ anomaly
requires \begin{eqnarray} \label{xanomaly} \left(\delta_X f_a\right)_{\rm
1-loop}= i\beta k_a \frac{\delta_{GS}}{2}\,, \end{eqnarray}
where $\delta_Xf_a$ represent the $U(1)_X$ anomalies due to the fermion loops.
For generic 4D SUGRA action (\ref{superaction}) including
the Goldstino superfield operator ${\cal
P}\Lambda^2\bar{\Lambda}^2$,
one can find the following relation between the vacuum expectation
values of SUSY breaking quantities:
\begin{eqnarray}
\label{relation} && \left( V_F + \frac{2}{3}V_{\rm
lift} + 2|m_{3/2}|^2 + \frac{1}{2}\,M^2_A \right)D_A
\nonumber \\
&& =\, -F^IF^{J*}\partial_I(\eta^L\partial_L\partial_{\bar J}K) +
V_D\eta^I \partial_I\ln g^2_A + V_{\rm lift}\eta^I\partial_I
\ln{\cal P},
\end{eqnarray}
where
$g_A$, $D_A$, $M_A$ and $\eta^I$ denote the gauge coupling,
$D$-term, gauge boson mass, and
holomorphic Killing vector of $U(1)_A$, respectively:
\begin{eqnarray} D_A=-\eta^I\partial_IK, \quad
M_A^2=2g_A^2\eta^I\eta^{J*}\partial_I\partial_{\bar{J}}K
\end{eqnarray}
for the $U(1)_A$ transformation
$\delta_A\Phi^I=i\alpha(x)\eta^I$.
Here $V_F$, $V_D$ and $V_{\rm lift}$ are the $F$-term
potential, the $D$-term potential and the uplifting potential,
respectively:
\begin{eqnarray}
V_F&=&K_{I\bar{J}}F^IF^{J*}-3|m_{3/2}|^2,
\nonumber \\
V_D&=&\frac{1}{2}g_A^2D_A^2,
\quad
V_{\rm lift}\,=\,e^{2K/3}{\cal P}, \end{eqnarray} and all quantities
are evaluated for the vacuum configuration satisfying \begin{eqnarray}
\label{stationary}
\partial_IV_{\rm TOT}=
\partial_I(V_F+V_D+V_{\rm lift})=0.
\end{eqnarray}
The relation (\ref{relation}) has been derived before \cite{kawamura} for
the conventional 4D SUGRA without $V_{\rm lift}$. Since
it plays an important role in our
subsequent discussion, let us briefly sketch the derivation of (\ref{relation}) for
SUGRA including the uplifting
operator ${\cal P}\Lambda^2\bar{\Lambda}^2$. From the $U(1)_A$
invariances of $K$ and $W$, one easily finds \begin{eqnarray}
\label{invariance}
\eta^I\partial_IK=\eta^{I*}\partial_{\bar{I}}K,\quad
\eta^ID_IW=-WD_A\end{eqnarray}
which lead to \begin{eqnarray}
&&\eta^I\partial_ID_A=-\eta^I\eta^{J*}\partial_I\partial_{\bar{J}}K=
-\frac{M_A^2}{2g_A^2}\,,
\nonumber \\
&&(\partial_L\eta^I)D_IW+\eta^I\partial_I(D_LW)=W\eta^{\bar{I}}\partial_{\bar{I}}\partial_LK.
\end{eqnarray} Using these relations, one can find
\begin{eqnarray}
\label{gaugeinvariance}
\eta^I\partial_I V_D&=&
V_D\eta^I\partial_I\ln(g_A^2)-\frac{1}{2}M_A^2D_A, \nonumber \\
\eta^I\partial_IV_F&=&-(V_F+2|m_{3/2}|^2)D_A-F^IF^{J*}\partial_I(\eta^L\partial_L
\partial_{\bar{J}}K),\nonumber \\
\eta^I\partial_IV_{\rm lift}&=& \left(-
\frac{2}{3}D_A+\eta^I\partial_I\ln ({\cal P})\right)V_{\rm
lift}.\end{eqnarray} Applying the stationary condition
(\ref{stationary}) to (\ref{gaugeinvariance}), one finally obtains the
relation (\ref{relation}).
For the analysis of SUSY and $U(1)_A$ breaking, we can simply set
$Q^i=0$.
Also for simplicity, we assume that all moduli other than $T$
can be integrated out without affecting the SUSY and $U(1)_A$
breaking. Then $X^p$ correspond to the $U(1)_A$ charged but MSSM
singlet chiral superfields with vacuum expectation values
which are small enough to allow the expansion in
powers of $X^p/M_{Pl}$, but still large enough to play an
important role in SUSY and/or $U(1)_A$ breaking. To be concrete,
we will use the K\"ahler potential which takes the form: \begin{eqnarray}
\label{kahleru1} K&=&{\cal K}_0(\Phi^x,\Phi^{x*},V_A)+
Z_i(t_V)Q^{i*}e^{2q_iV_A}Q^i\nonumber \\
&=&
K_0(t_V)+Z_p(t_V)X^{p*}e^{2q_pV_A}X^p+Z_i(t_V)Q^{i*}e^{2q_iV_A}Q^i,
\end{eqnarray} where $t_V=T+T^*-\delta_{GS}V_A$ for the $U(1)_A$ vector
superfield $V_A$, however our results will be valid for more
general $K$ including the terms higher order in $X^p/M_{Pl}$. For
the above K\"ahler potential, the $U(1)_A$ $D$-term and gauge
boson mass-square are given by \begin{eqnarray} D_A&=& \xi_{FI}-
q_p\tilde{Z}_p|X^p|^2,
\nonumber \\
\frac{M_A^2}{2g_A^2}&=&
M_{GS}^2+\left(q_p^2\tilde{Z}_p-\frac{\delta_{GS}}{2}\,
q_p\partial_T\tilde{Z}_p\right)|X^p|^2,
\end{eqnarray}
where $\xi_{FI}$ and $M_{GS}^2$ are the FI $D$-term and
the GS axion contribution to $M_A^2$, respectively: \begin{eqnarray}
\xi_{FI}&=& \frac{\delta_{GS}}{2}\,\partial_TK_0, \nonumber \\
M_{GS}^2&=& \frac{\delta_{GS}^2}{4}\,\partial_T\partial_{\bar{T}}K_0, \end{eqnarray}
and \begin{eqnarray} q_p\tilde{Z}_p=q_pZ_p-\frac{\delta_{GS}}{2}\,\partial_TZ_p.\end{eqnarray}
If $|\langle T\rangle|\lesssim {\cal O}(1)$ as required for the gauge coupling constants
to be of order unity, the
K\"ahler metric of $T$ is generically of order unity, and then \begin{eqnarray}
M_{GS}\sim \delta_{GS}M_{Pl}\sim
\frac{M_{Pl}}{8\pi^2}.\end{eqnarray} On the other hand, the size of
$\xi_{FI}$ depends on the more detailed property of $T$. If ${\rm Re}(T)$ is a dilaton or
a K\"ahler modulus
stabilized at $\langle {\rm Re}(T)\rangle={\cal O}(1)$, we have
$|\xi_{FI}|\simeq M_{GS}^2(T+T^*)/|\delta_{GS}|\sim 8\pi^2
M_{GS}^2$.
In the other case, in which $T$ is a blowing-up modulus of an
orbifold singularity stabilized near the orbifold limit,
the resulting $|\xi_{FI}|\ll M_{GS}^2$.
In view of the fact that the gaugino masses receive the anomaly mediated
contribution of ${\cal O}(m_{3/2}/8\pi^2)$, one needs $m_{3/2}$
hierarchically lower than $M_{Pl}$, e.g. $m_{3/2}\lesssim {\cal O}(8\pi^2)$ TeV,
in order to realize the supersymmetric
extension of the standard model at the TeV scale. Since the $U(1)_A$ gauge boson mass
is always rather close to $M_{Pl}$: \begin{eqnarray}
M_A\gtrsim \sqrt{2}g_AM_{GS}\sim \frac{M_{Pl}}{8\pi^2},\end{eqnarray}
let us focus on models with
\begin{eqnarray} \label{condition} m_{3/2}\ll M_A,\quad
\langle V_{\rm TOT}\rangle\simeq 0,\end{eqnarray} and examine the mass scales in such
models. The condition of nearly vanishing cosmological constant requires that \begin{eqnarray}
K_{I\bar{J}}F^IF^{J*}\lesssim {\cal
O}(m_{3/2}^2M_{Pl}^2), \quad V_{\rm lift}\lesssim {\cal
O}(m_{3/2}^2M_{Pl}^2), \end{eqnarray} and then the relation (\ref{relation})
implies \begin{eqnarray}
\label{Dbound} |D_A|\,\lesssim\, {\cal O}\left(
\frac{m_{3/2}^2M_{Pl}^2}{M_A^2}\right)\,\lesssim\, {\cal O}((8\pi^2)^2m_{3/2}^2). \end{eqnarray}
It has been pointed out that one might not need to introduce anti-brane
to obtain a dS vacuum if the $D$-term potential $V_D=\frac{1}{2}g_A^2D_A^2$ can compensate
the negative vacuum energy density $-3m_{3/2}^2M_{Pl}^2$ in $V_F$ \cite{BKQ}.
The second relation of (\ref{invariance}) indicates that $F^I\neq 0$
is required for $D_A\neq 0$,
thus the $D$-term uplifting scenario can {\it not} be
realized for the supersymmetric AdS solution of $V_F$.
However for a SUSY-breaking solution with $F^I\neq 0$,
$V_D$ might play the role of an uplifting potential
making $\langle V_F+V_D\rangle\geq 0$.
The above bound on $D_A$ imposes a severe limitation on such possibility as it
implies that
$V_D$ can {\it not} be an uplifting potential
in SUSY breaking scenarios with $m_{3/2}\ll M_A^2/M_{Pl}$.
In other words,
SUSY breaking models in which $V_D$ plays the role of
an uplifting potential generically predict a rather large
$m_{3/2}\gtrsim {\cal O}(M_A^2/M_{Pl})\gtrsim {\cal O}(M_{Pl}/(8\pi^2)^2)$.
For instance, the model of \cite{casas} in which $V_D$ indeed compensates
$-3m_{3/2}^2M_{Pl}^2$ in $V_F$
gives $M_A= {\cal O}(M_{Pl}/\sqrt{8\pi^2})$
and $m_{3/2}={\cal O}(M_{Pl}/8\pi^2)$.
Let us now examine more detailed relations between the $F$ and $D$
terms for the K\"ahler potential (\ref{kahleru1}). In the case that
$\langle {\rm Re}(T)\rangle={\cal O}(1)$, the FI $D$-term is rather
close to $M_{Pl}^2$: $\xi_{FI}=
{\cal O}(M_{Pl}^2/8\pi^2)$. Such a large value of $\xi_{FI}$ in $D_A$ should
be cancelled by $q_p\tilde{Z}_p|X^p|^2$ in order to give $D_A$
satisfying the bound (\ref{Dbound}), thus \begin{eqnarray}
\label{cancellation} \xi_{FI}\simeq q_p\tilde{Z}_p|X^p|^2.\end{eqnarray} In
some cases, for instance when the GS modulus is a blowing-up
mode of an orbifold singularity, $\xi_{FI}$ can have a vacuum value
smaller than $M_{Pl}^2$ by many orders of magnitude. However the
existence of the anomalous (approximate) global symmetry $U(1)_X$
implies that some $X^p$ should get a large vacuum value
$|X^p|^2\gg |D_A|$ to break $U(1)_X$ at a sufficiently high energy scale.
This means that $|\xi_{FI}|\gg |D_A|$ and
the relation
(\ref{cancellation}) remains valid
even in case that $|\xi_{FI}|$ is smaller than $M_{Pl}^2$ by many orders of
magnitude.
Then using $\eta^ID_IW=-WD_A$, we find \begin{eqnarray}
\label{frelation} F^T&=&
\frac{q_p\tilde{Z}_p|X^p|^2}{\delta_{GS}\partial_T\partial_{\bar{T}}K_0/2
-q_r\partial_T\tilde{Z}_r|X^r|^2}\left(\frac{F^p}{X^p}\right)+
{\cal O}\left(\frac{8\pi^2m_{3/2}D_A}{M_{Pl}}\right)
\nonumber \\
&=&\frac{{\cal O}(\delta_{GS}\xi_{FI})}{M_{GS}^2+{\cal
O}(\delta_{GS}\xi_{FI})}\frac{F^p}{X^p},\end{eqnarray}
where we have used (\ref{cancellation}) for the last expression.
Applying this relation to (\ref{relation}), we also find \begin{eqnarray}
\label{drelation} g_A^2D_A&=&
-\frac{q_p\tilde{Z}_p\delta_{p\bar{q}}+q_pq_qX^{p*}X^q\partial_T[
\tilde{Z}_p\tilde{Z}_q/(\delta_{GS}\partial_T\partial_{\bar{T}}K_0/2
-q_r\partial_T\tilde{Z}_r|X^r|^2)]}
{\delta_{GS}^2\partial_T\partial_{\bar{T}}K_0/4+(q^2_r\tilde{Z}_r-q_r\delta_{GS}\partial_T\tilde{Z}_r/2)
|X^r|^2}{F^p}{F^{q*}} \nonumber \\
&& +\,\frac{V_{\rm lift}\eta^I\partial_I\ln{\cal
P}}{\delta_{GS}^2\partial_T\partial_{\bar{T}}K_0/4+
(q^2_r\tilde{Z}_r-q_r\delta_{GS}\partial_T\tilde{Z}_r/2)
|X^r|^2}
\nonumber \\
&=&\frac{{\cal O}(\xi_{FI})}{M_{GS}^2+{\cal
O}(\xi_{FI})}\left|\frac{F^p}{X^p}\right|^2
+\frac{V_{\rm lift}}{M_{GS}^2+{\cal
O}(\xi_{FI})}\eta^I\partial_I\ln{\cal P}. \end{eqnarray}
Note that the piece proportional to $V_{\rm lift}$ vanishes if the Goldstino superfield on anti-brane
is sequestered from the $U(1)_A$ charged fields,
i.e. $\eta^I\partial_I{\cal P}=0$,
which is a rather plausible possibility in view of our discussion in section 2.
The relations (\ref{frelation}) and (\ref{drelation}) show that the relative importance
of the GS modulus mediation and the $U(1)_A$ $D$-term mediation
is determined essentially by the ratio
\begin{eqnarray}
R\equiv \frac{\xi_{FI}}{M_{GS}^2}=\frac{2\partial_TK_0}{\delta_{GS}
\partial_T\partial_{\bar{T}}K_0}.
\end{eqnarray}
If $T$ is a string dilaton or a K\"ahler modulus normalized as $\partial_Tf_a={\cal O}(1)$, its
K\"ahler potential is given by
$$K_0=-n_0\ln(T+T^*)+{\cal O}(1/8\pi^2(T+T^*)).$$
As long as ${\rm Re}(T)$ is stabilized at a value of ${\cal O}(1)$,
the higher order string loop or $\alpha^\prime$
corrections to $K_0$ can be safely ignored,
yielding
$|R|={\cal O}(8\pi^2)$.
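This estimate can be checked numerically. The following sketch (with illustrative values of $n_0$, ${\rm Re}(T)$ and $\delta_{GS}$, which are assumptions rather than model predictions) evaluates $R$ for $K_0=-n_0\ln(T+T^*)$ by finite differences:

```python
import math

# Finite-difference check of R = 2*dK0/dT / (delta_GS * d^2K0/dT dT*)
# for K0 = -n0*ln(T+T*); the inputs below are illustrative assumptions.
n0 = 3.0
t = 2.0                               # t = T + T*, with Re(T) = O(1)
delta_GS = 1.0 / (8 * math.pi**2)     # delta_GS = O(1/8pi^2)

K0 = lambda t: -n0 * math.log(t)
h = 1e-5
dK0 = (K0(t + h) - K0(t - h)) / (2 * h)              # dK0/dT = K0'(t)
d2K0 = (K0(t + h) - 2 * K0(t) + K0(t - h)) / h**2    # d^2K0/dTdT* = K0''(t)

R = 2 * dK0 / (delta_GS * d2K0)
# Analytically R = -2t/delta_GS = -2t*(8 pi^2), i.e. |R| = O(8 pi^2).
```

The factor $n_0$ cancels between numerator and denominator, so $|R|$ is controlled entirely by $(T+T^*)/\delta_{GS}$.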
In such a case, (\ref{frelation}) and (\ref{drelation}) imply that generically
\begin{eqnarray}
|D_A|\sim |F^T|^2\sim \left|\frac{F^p}{X^p}\right|^2.
\end{eqnarray}
Note that in the limit $|R|\gg 1$, the $U(1)_A$ gauge boson mass-square
is dominated by the contribution from $\langle X^p\rangle\sim \sqrt{|\xi_{FI}|}$.
In this case, the longitudinal component of the massive $U(1)_A$ gauge boson
comes mostly from the phase degrees of $X^p$, while
the GS modulus $T$ is approximately a flat direction of the
$U(1)_A$ $D$-term potential.
An interesting possibility is then to stabilize $T$ by non-perturbative superpotential
at a SUSY AdS vacuum with ${\rm Re}(T)={\cal O}(1)$, and then lift this
AdS vacuum to dS state by adding a red-shifted anti-brane as in the KKLT moduli
stabilization scenario.
In the next section, we will discuss such KKLT stabilization of the GS modulus
in more detail together with
the resulting pattern of soft SUSY breaking terms.
Another possibility is that ${\rm Re}(T)$ is a blowing-up modulus of an orbifold singularity,
thus $\xi_{FI}=\delta_{GS}\partial_TK_0=0$ in the orbifold limit.
Choosing ${\rm Re}(T)=0$ in the orbifold limit, $K_0$ can be expanded as
$$
K_0\approx \frac{1}{2}a_0(T+T^*)^2+{\cal O}((T+T^*)^3)
$$
for a constant $a_0$.
If ${\rm Re}(T)$ is stabilized near the orbifold limit, for which
$|\xi_{FI}| \ll
M_{GS}^2$,
the resulting ratio satisfies
$|R|\ll 1$.
In this limit, if the uplifting anti-brane is sequestered
from the $U(1)_A$ charged fields, i.e. $\eta^I\partial_I{\cal
P}=0$, eqs. (\ref{frelation}) and (\ref{drelation}) lead to \begin{eqnarray}
F^T\,\sim\, \delta_{GS}R\,\frac{F^p}{X^p}, \quad
D_A\,\sim\, R\left|\frac{F^p}{X^p}\right|^2, \end{eqnarray} where
$F^p/X^p$
represents the SUSY breaking mediated at the scale around
$\langle X^p\rangle\sim \sqrt{|\xi_{FI}|}\ll M_{GS}$. The anomaly
condition (\ref{xanomaly}) for the $U(1)_X$
symmetry (\ref{u1x}) implies that the gauge kinetic functions
receive a loop correction $\Delta f_a\sim \frac{1}{8\pi^2}\ln X^p$
at the scale $\langle X^p\rangle$ where $U(1)_X$ is spontaneously
broken. For instance, there might be a coupling $X^pQ_1Q_2$ in
the superpotential generating $\Delta f_a$ through the loop of
$Q_1+Q_2$ which are charged under the standard model gauge
group.\footnote{This corresponds to the gauge mediation at the
messenger scale $\langle X^p\rangle$.}
This results in the gaugino masses \begin{eqnarray} M_a ={\cal
O}\left(\frac{1}{8\pi^2}\frac{F^p}{X^p}\right) \end{eqnarray} mediated at
the scale $\langle X^p\rangle$. Obviously $F^T$ is smaller than
this $M_a$ in the limit $|R|\ll 1$.
If $\xi_{FI}$ is smaller than $M_{Pl}^2$ by many orders of
magnitude, e.g. $|R|\lesssim 10^{-4}$, $|D_A|$ also is smaller
than $|M_a|^2$ mediated at $\langle X^p\rangle$. Then the soft
terms are dominated by the contributions mediated at the low
messenger scale around $\langle X^p\rangle\sim \sqrt{|\xi_{FI}|}$.
Such soft terms with a low messenger scale depend on more detailed
properties of the model, which is beyond the scope of this paper.
\section{A model for the KKLT stabilization of the GS modulus}
In this section, we discuss a model
for the KKLT stabilization of the GS modulus $T$
in detail.
In this model, $T$ is stabilized at a value of ${\cal O}(1)$,
yielding $\xi_{FI}\sim \delta_{GS}M_{Pl}^2$.
For simplicity, we introduce a single $U(1)_A$ charged
MSSM singlet $X$ whose
vacuum value cancels $\xi_{FI}$ in $D_A$.
In addition to $X$ and the visible matter superfields $Q^i$,
one needs also a hidden $SU(N_c)$ Yang-Mills sector with $SU(N_c)$
charged matter fields $Q_H+Q^c_H$ in order to
produce non-perturbative superpotential stabilizing $T$.
The gauge kinetic functions of the model are given by
\begin{eqnarray}
\label{gaugekinetic}
f_a=kT+\Delta f, \qquad f_H=k_HT+\Delta f_{H},
\end{eqnarray}
where $f_a$ ($a=3,2,1$) and $f_H$ are the gauge kinetic functions of the $SU(3)_c\times SU(2)_W\times
U(1)_Y$ and the hidden $SU(N_c)$ gauge group, respectively, and
$k$ and $k_H$ are real constants of ${\cal O}(1)$.
Generically $\Delta f$ and $\Delta f_{H}$ can depend on other moduli of the model.
Here we assume that those other moduli are fixed by fluxes with a large mass
$\gg 8\pi^2 m_{3/2}$, and then $\Delta f$ and $\Delta f_{H}$ can be considered as constants
which are obtained by integrating out the heavy moduli.
The K\"ahler potential, superpotential,
and the uplifting operator are given by
\begin{eqnarray}
\label{model} && K \,=\, K_0(t_V) + Z_X(t_V)X^*e^{-2V_A}X +
Z_H(t_V)Q_H^*e^{2qV_A}Q_H
\nonumber \\
&& \qquad \,\,+\,Z^c_H(t_V)Q^{c*}_He^{2q_{c}V_A}Q^c_H
+Z_{i}(t_V)Q^{i*}e^{2q_iV_A}Q^i, \nonumber \\
&& W \,=\, \omega_0 + \lambda X^{q+q_c}Q_H^cQ_H +
(N_c-N_f) \left(\frac{e^{-8\pi^2 f_H}}{{\rm
det}(Q_H^cQ_H)}\right)^{\frac{1}{N_c-N_f}} \nonumber \\
&&\qquad\,\, +\,
\frac{1}{6}\,\lambda_{ijk}X^{q_i+q_j+q_k}Q^i Q^j Q^k, \nonumber \\
&&
{\cal P}\,=\,{\cal P}(t_V),
\end{eqnarray}
where $t_V=T+T^*-\delta_{GS}V_A$, $\omega_0$ is a constant of ${\cal
O}(m_{3/2}M_{Pl}^2)$, $\lambda$ and $\lambda_{ijk}$ are constant
Yukawa couplings, $N_f$ denotes the number of flavors for the
hidden matter $Q_H+Q^c_H$, ${\cal P}\Lambda^2\bar{\Lambda}^2$ is
the uplifting Goldstino superfield operator induced by anti-brane,
and finally the $U(1)_A$ charge of $X$ is normalized as $q_X=-1$.
As we have discussed in section 2, anti-brane in KKLT
compactification is expected to be sequestered from the $D$-brane
of $U(1)_A$, and then ${\cal P}$ is independent of $t_V$. Here we
consider a more general case in which ${\cal P}$ can depend on $t_V$,
in order to see what the consequence of the uplifting
operator would be if it is not sequestered from $U(1)_A$. Note that the GS
cancellation of the mixed anomalies of $U(1)_A$ requires \begin{eqnarray}
\label{gscondition}
\frac{\delta_{GS}}{2} =\frac{N_f(q+q^c)}{8\pi^2k_H}=\frac{\sum_iq_i{\rm
Tr}(T_a^2(Q^i))}{4\pi^2k_a}. \end{eqnarray}
In our case, the non-perturbative superpotential in (\ref{model}),
i.e. the Affleck-Dine-Seiberg superpotential \cite{ADS}\begin{eqnarray} W_{\rm
ADS}=(N_c-N_f) \left(\frac{e^{-8\pi^2 f_H}}{{\rm
det}(Q_H^cQ_H)}\right)^{\frac{1}{N_c-N_f}} \end{eqnarray}
requires a more careful interpretation.
If $\lambda$ is so small that the tree level mass
$M_Q=\lambda\langle X^{q+q_c}\rangle$
of $Q_H+Q^c_H$
is lower than the dynamical scale of
$SU(N_c)$ gauge interaction, $W_{\rm ADS}$ can be interpreted as
the non-perturbative superpotential of the light composite meson
superfields $\Sigma=Q_H^cQ_H$. However a more plausible possibility
is that $\lambda={\cal O}(1)$, and so (in the unit with
$M_{Pl}=1$) \begin{eqnarray} M_Q=\lambda\langle X^{q+q_c}\rangle \gg
\Lambda_H= \Big(e^{-8\pi^2 f_H}{\rm det}(M_Q)\Big)^{1/(3N_c)}. \end{eqnarray}
Note that $|X|^2={\cal O}(|\xi_{FI}|)={\cal O}(M_{Pl}^2/8\pi^2)$
in this model. In this case, the correct procedure to deal with
$SU(N_c)$ dynamics is to integrate out first the heavy
$Q_H+Q_H^c$ at the scale $M_Q$. The resulting effective theory is
a pure super YM theory at the scale just below $M_Q$, but
with the modified gauge kinetic function: \begin{eqnarray} f_{\rm eff}(M_Q)=
f_H+\frac{3N_c-N_f}{8\pi^2}\ln(M_Q/M_{Pl}). \end{eqnarray}
Then the
$SU(N_c)$ gaugino condensation is formed at $\Lambda_H$ by this
pure super YM dynamics, yielding a non-perturbative superpotential
\begin{eqnarray} W_{\rm eff}=N_c M_Q^3e^{-8\pi^2 f_{\rm eff}(M_Q)/N_c}.
\end{eqnarray} This $W_{\rm eff}$ is the same as the non-perturbative
superpotential obtained by integrating out $\Sigma=Q^c_HQ_H$
using the equations of motion $\partial_\Sigma W=0$ for the
superpotential of (\ref{model}). In the following, we will simply
use the superpotential of (\ref{model}) since it leads to the
correct vacuum configuration independently of the value of
$M_Q/\Lambda_H$.
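The equality of the two descriptions can be verified numerically. The sketch below (with arbitrary illustrative values of $N_c$, $N_f$, $f_H$ and $M_Q$, in units $M_{Pl}=1$) checks that $W_{\rm eff}=N_cM_Q^3e^{-8\pi^2f_{\rm eff}(M_Q)/N_c}$ coincides with $N_c\Lambda_H^3$ for $\Lambda_H=(e^{-8\pi^2 f_H}{\rm det}(M_Q))^{1/3N_c}$:

```python
import math

# Consistency check of the matching at M_Q (units M_Pl = 1); all inputs
# below are arbitrary illustrative values, not model predictions.
N_c, N_f = 5, 2
f_H = 2.3
M_Q = 1.0e-1

# Effective gauge coupling of the pure SYM theory just below M_Q:
f_eff = f_H + (3 * N_c - N_f) / (8 * math.pi**2) * math.log(M_Q)

# Gaugino condensation superpotential of the pure SYM theory:
W_eff = N_c * M_Q**3 * math.exp(-8 * math.pi**2 * f_eff / N_c)

# Dynamical scale with det(M_Q) = M_Q^{N_f} for degenerate flavors:
Lambda_H = (math.exp(-8 * math.pi**2 * f_H) * M_Q**N_f) ** (1.0 / (3 * N_c))

# W_eff equals N_c * Lambda_H**3, independently of how the heavy
# flavors are integrated out.
```

The agreement holds for any $M_Q/\Lambda_H$, which is why the superpotential of (\ref{model}) can be used directly.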
To examine the vacuum configuration of the model (\ref{model}),
it is convenient to estimate first the mass scales of
the model. As long as $m_{3/2}$
is hierarchically smaller than $M_{Pl}$,
one easily finds that the following mass patterns are independent
of the details of SUSY breaking. First,
$T$ is stabilized at a vacuum expectation value of ${\cal O}(1)$, and as a result
\begin{eqnarray}
R=\frac{\xi_{FI}}{M_{GS}^2}=\frac{2\partial_TK_0}{\delta_{GS}
\partial_T\partial_{\bar{T}}K_0}={\cal O}(8\pi^2).
\end{eqnarray}
The $U(1)_A$ gauge boson mass-square is dominated
by the contribution from $|X|^2 \sim |\xi_{FI}|$:
\begin{eqnarray}
\frac{M_A^2}{2g^2_A} \simeq Z_X|X|^2+ M_{GS}^2\simeq Z_X|X|^2.
\end{eqnarray}
The hidden $SU(N_c)$ confines at the scale \begin{eqnarray} \Lambda_H =
\Big(e^{-8\pi^2 (k_HT+\Delta f_{H})}{\rm
det}(M_Q/M_{Pl})\Big)^{1/(3N_c)}M_{Pl},\end{eqnarray} and the $SU(N_c)$
$D$-flat directions of the hidden matter fields are stabilized at
\begin{eqnarray} \langle Q_H^cQ_H\rangle \sim \frac{\Lambda_H^3}{M_Q}. \end{eqnarray}
Finally the hidden $SU(N_c)$ scale and $m_{3/2}$ obey the
standard relation: \begin{eqnarray} \frac{\Lambda_H^3}{M_{Pl}^3}\,\sim\,
\frac{m_{3/2}}{M_{Pl}}. \end{eqnarray}
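As a rough numerical illustration of this relation (the input values are assumptions for a weak-scale SUSY scenario, not outputs of the model):

```python
# Order-of-magnitude estimate of the hidden sector scale from
# Lambda_H^3/M_Pl^3 ~ m_{3/2}/M_Pl; the inputs are illustrative.
M_Pl = 2.4e18        # reduced Planck mass in GeV
m32 = 1.0e5          # m_{3/2} ~ 100 TeV, i.e. O(8 pi^2) x TeV

Lambda_H = (m32 * M_Pl**2) ** (1.0 / 3.0)
# Lambda_H ~ 1e14 GeV: hierarchically below M_Pl, but far above m_{3/2},
# consistent with the scale hierarchy assumed below.
```
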
It is straightforward to see
that in the absence of the uplifting Goldstino operator
${\cal P}\Lambda^2\bar{\Lambda}^2$, the model (\ref{model})
has a unique and stable SUSY AdS vacuum.\footnote{
This SUSY AdS vacuum is a saddle point solution of
$V_F$, but is the global minimum of $V_F+V_D$.} For $m_{3/2}$
hierarchically smaller than $M_A$ and $\Lambda_H$, adding the
uplifting operator with ${\cal P}\sim
m^2_{3/2}M^2_{Pl}$ triggers a small shift of vacuum configuration,
leading to non-zero vacuum expectation values of $F^T, F^X,
F^\Sigma$ and $D_A$, where $\Sigma=Q_H^cQ_H$.
In the following, we compute
these SUSY breaking vacuum values within a perturbative
expansion in
$$
\frac{\delta_{GS}}{T+T^*} ={\cal
O}\left(\frac{1}{8\pi^2}\right),$$ while ignoring the
corrections suppressed by the following scale hierarchy
factors:\begin{eqnarray} \label{scale-hierarchy}
\frac{\Lambda_H}{M_A},\, \frac{m_{3/2}}{\Lambda_H},\,
\frac{m_{3/2}}{M_Q},\, \frac{\langle Q_H^cQ_H\rangle}{\langle
XX^*\rangle}\, \ll\, \frac{1}{8\pi^2}. \end{eqnarray}
Let us now examine the vacuum configuration in more detail.
As we have mentioned, the true vacuum configuration is given by a small shift
induced by $V_{\rm lift}$ from the
SUSY AdS solution of $D_A=0$ and $D_IW=0$.
With this observation, we find (in the unit with $M_{Pl}=1$):
\begin{eqnarray}
\label{vacuumvalue}
|X|^2&=& -\frac{\delta_{GS}\partial_T
K_0}{2Z_X}\left(1+{\cal O}\left(\frac{1}{8\pi^2}\right)\right),
\nonumber \\
Q^c_HQ_H&=& e^{-8\pi^2(k_HT+\Delta f_{H})/N_c}(\lambda X^{q+q_c})^{(N_f-N_c)/N_c},
\nonumber \\
{\rm Re}(T)&= &\frac{N_c}{8\pi^2k_H}\ln
\left|\frac{8\pi^2k_H}{\omega_0\partial_TK_0}\right|
-\frac{\Delta f_{H}+\Delta f_{H}^*}{2k_H}
+\frac{N_f}{8\pi^2k_H}\ln|\lambda X^{q+q_c}|+{\cal O}\left(\frac{1}{8\pi^2}\right)
\nonumber \\
&=& \frac{N_c}{8\pi^2k_H}\ln\left(\frac{M_{Pl}}{m_{3/2}}\right)-\frac{\Delta f_{H}+\Delta f_{H}^*}{2k_H}
+{\cal O}\left(\frac{1}{8\pi^2}\right).
\end{eqnarray}
Note that $|X|^2={\cal O}(M_{Pl}^2/8\pi^2)$, thus an effect further suppressed by
$|X|^2/M_{Pl}^2$ is comparable to the loop correction.
The above result on the vacuum expectation value of ${\rm Re}(T)$ shows that
the GS modulus is stabilized at a value of ${\cal O}(1)$
for the model parameters giving the weak scale SUSY, e.g.
$m_{3/2}\lesssim {\cal O}(8\pi^2)$ TeV.
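A quick numerical check of this statement (the values of $N_c$, $k_H$, $m_{3/2}$ and $\Delta f_H$ below are illustrative assumptions):

```python
import math

# Evaluate Re(T) ~ (N_c/(8 pi^2 k_H)) ln(M_Pl/m_{3/2}) - Re(Delta f_H)/k_H
# for weak-scale SUSY; all inputs are illustrative assumptions.
M_Pl = 2.4e18      # GeV
m32 = 1.0e5        # GeV, m_{3/2} ~ O(8 pi^2) x TeV
N_c, k_H = 3, 1.0
Re_Delta_fH = 0.0

ReT = N_c / (8 * math.pi**2 * k_H) * math.log(M_Pl / m32) - Re_Delta_fH / k_H
# ReT ~ 1.2, confirming that the GS modulus is stabilized at a value of O(1).
```
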
If ${\rm Re}(T)$ is stabilized at a value of ${\cal O}(1)$ as desired,
$\Sigma=Q^c_HQ_H$ is hierarchically smaller than $M_{Pl}^2$.
Since $F^\Sigma={\cal O}(\Sigma F^T)$
and the couplings between $Q_H+Q^c_H$ and the visible
fields are suppressed by $1/M_{Pl}$,
the contribution from $F^\Sigma$ to the visible soft terms
can be ignored.
Then the soft terms of visible fields are determined by
the following four SUSY-breaking auxiliary
components: \begin{eqnarray}
\label{aucom}
\frac{F^T}{T+T^*}&=&\frac{m^*_{3/2}}{8\pi^2}\left(
\frac{3N_c\partial_T\ln(V_{\rm lift})}{k_H(T+T^*)\partial_TK_0}\right)
\left(1+{\cal O}\left(\frac{1}{8\pi^2}\right)\right), \nonumber \\
\frac{F^X}{X}&=&-F^T\partial_T\ln\left(-\frac{Z_X}{\partial_TK_0}\right)
\left(1+{\cal O}\left(\frac{1}{8\pi^2}\right)\right),
\nonumber \\
g_A^2D_A &=&\left|F^T\right|^2\partial_T\partial_{\bar{T}}\ln\left(-\frac{Z_X}{\partial_TK_0}\right)
\left(1+{\cal O}\left(\frac{1}{8\pi^2}\right)\right)
\nonumber \\
&&+\,V_{\rm lift}\frac{\partial_T\ln {\cal
P}}{\partial_T K_0} \left(1+{\cal O}\left(\frac{1}{8\pi^2}\right)\right),
\nonumber \\
\frac{1}{8\pi^2}\frac{F^C}{C_0}
&=&\frac{m_{3/2}^*}{8\pi^2}\left(1+{\cal
O}\left(\frac{1}{8\pi^2}\right)\right), \end{eqnarray} where $F^C/8\pi^2$
and $F^T$ are the order parameters of anomaly mediation and GS
modulus mediation, respectively, and $F^X$ and $D_A$ are the order
parameters of $U(1)_A$ mediation. Note that $V_A$ and $X$
constitute a massive vector superfield $\tilde{V}_A=V_A-\ln|X|$.
The results on $F^X$ and $D_A$ can be obtained from eqs.
(\ref{frelation}) and (\ref{drelation}), while the result on $F^T$
can be obtained by applying eqs. (\ref{modulimass}) and
(\ref{fcomponent}).
The above results show that generically the GS modulus mediation,
the anomaly mediation and the $X$ mediation are comparable to each
other. If anti-brane and the $D$-brane of $U(1)_A$ are separated
from each other by a warped throat, it is expected that
$\partial_T\ln{\cal P}=0$. Then the $U(1)_A$ $D$-term mediation
is also generically comparable to the other mediations. However,
if the K\"ahler potential of $T$ and $X$ has a special form to
give ${Z_X}/{\partial_TK_0}=\mbox{constant}$, we have
${F^X}/{X}={\cal O}(F^T/8\pi^2)$ and $D_A={\cal
O}(|F^T|^2/8\pi^2)$, thus the $U(1)_A$ mediation is suppressed by
a loop factor of ${\cal O}(1/8\pi^2)$ compared to the GS-modulus
and anomaly mediations. Finally, if anti-brane is not sequestered,
the resulting $D_A$ is of ${\cal O}(m_{3/2}^2)={\cal
O}((8\pi^2F^T)^2)$ and then soft sfermion masses are dominated by
the $U(1)_A$ $D$-term contribution. Another important feature of
(\ref{aucom}) is that $F^T$, $F^X/X$ and $F^C/C_0$ are
relatively {\it real} since $K_0, Z_X, {\cal P}$ are real
functions of the real variable $t=T+T^*$. As a result, the gaugino
masses and $A$-parameters mediated by these auxiliary components
automatically preserve CP \cite{choi4}. Since one can always make
$m_{3/2}=e^{K/2}W$ to be real by an appropriate
$R$-transformation, all of the above auxiliary components can be
chosen to be real, which will be taken in the following
discussions.
Applying the above results to the soft terms of (\ref{soft1}) and also taking into account
that $|X|^2/M_{Pl}^2={\cal O}(1/8\pi^2)$, we find the soft masses at
the scale just below $M_{GUT}$:
\begin{eqnarray}
\label{kkltsoft}
M_a&=&M_0+\frac{b_a}{8\pi^2}\,g_{GUT}^2m_{3/2}+{\cal O}\left(\frac{M_0}{8\pi^2}\right),
\nonumber \\
A_{ijk}&=&M_0(a_{i}+a_{j}+a_{k})-\frac{1}{16\pi^2}(\gamma_i+\gamma_j+\gamma_k)m_{3/2}
+{\cal O}\left(\frac{M_0}{8\pi^2}\right),
\nonumber \\
m_i^2&=&c_iM_0^2-\frac{1}{32\pi^2}\,\dot{\gamma}_im_{3/2}^2
+\frac{m_{3/2}M_0}{8\pi^2}\left(\frac{1}{2}\sum_{jk}|y_{ijk}|^2(a_i+a_j+a_k)-2C_2(Q^i)\right)
\nonumber \\
&& -\, 3q_im_{3/2}^2\frac{\partial_T\ln{\cal P}}{\partial_TK_0}\left(1+
{\cal O}\left(\frac{1}{8\pi^2}\right)\right)+{\cal O}\left(\frac{M_0^2}{8\pi^2}\right),
\end{eqnarray}
where $M_0$ is the universal modulus-mediated gaugino mass
at $M_{GUT}$:
\begin{eqnarray}
\label{parameter}
M_0\equiv F^T\partial_T\ln {\rm Re}(f_a)= \frac{m_{3/2}}{8\pi^2}\left(\frac{3N_c
\partial_T\ln V_{\rm lift}}{k_H\partial_TK_0}\right)\partial_T\ln({\rm Re}(f_a)),
\end{eqnarray}
for
\begin{eqnarray}
\partial_T\ln({\rm Re}(f_a))=\frac{1}{T+T^*+(\Delta f+\Delta f^*)/k}
=\frac{kg_{GUT}^2}{2},
\end{eqnarray}
and
\begin{eqnarray}
a_i&=&\frac{\partial_T\ln\left(e^{-K_0/3}Z_i\left(-{Z_X}/{\partial_TK_0}\right)^{q_i}\right)}{
\partial_T\ln({\rm Re}(f_a))},
\nonumber \\
c_i&=&-\frac{\partial_T\partial_{\bar{T}}\ln\left(e^{-K_0/3}Z_i
\left(-{Z_X}/{\partial_TK_0}\right)^{q_i}\right)}{[\partial_T\ln({\rm Re}(f_a))]^2}. \end{eqnarray}
Here $\dot{\gamma}_i=d\gamma_i/d\ln\mu$ for the anomalous dimension
$\gamma_i=8\pi^2d\ln Z_i/d\ln\mu$, $2C_2(Q^i)$ is the gauge
contribution to $\gamma_i$, i.e. $C_2(Q^i){\bf 1}=\sum_a
g_a^2T_a^2(Q^i)$ for the gauge generator $T_a(Q^i)$, and finally
the canonical Yukawa couplings are given by \begin{eqnarray} y_{ijk}
=\frac{\lambda_{ijk}(\delta_{GS}/2)^{(q_i+q_j+q_k)/2}}
{\sqrt{(-{Z}_X/\partial_TK_0)^{q_i+q_j+q_k}e^{-K_0}Z_iZ_jZ_k}}.
\end{eqnarray}
The soft parameters of (\ref{kkltsoft}) show that
the gaugino masses $M_a$ in models of KKLT stabilization of the GS modulus
are determined by the GS modulus mediation and the anomaly mediation
which are comparable to each other.
In case that anti-brane is sequestered from $U(1)_A$ and thus from the GS modulus $T$,
i.e. $\partial_T\ln{\cal P}=0$,
soft sfermion masses are comparable to the gaugino masses.
However if anti-brane is not sequestered,
soft sfermion mass-squares (for $q_i\neq 0$) are dominated
by the $U(1)_A$ $D$-term contribution of ${\cal O}(8\pi^2 M_a^2)$, which
might enable us to realize the more minimal supersymmetric standard model scenario \cite{CKN}.
It has been noticed that the low energy gaugino masses obtained
from the renormalization group (RG) running of the gaugino masses
of (\ref{kkltsoft}) at $M_{GUT}$ are given by \begin{eqnarray}
M_a(\mu)=M_0\left[1-\frac{1}{4\pi^2}b_ag_a^2(\mu)\ln\left(\frac{M_{GUT}}{(M_{Pl}/m_{3/2})^{\alpha/2}\mu}
\right)\right], \end{eqnarray} which are the same as the low energy gaugino
masses in pure modulus-mediation started from the mirage messenger
scale \begin{eqnarray} M_{\rm mirage}=(m_{3/2}/M_{Pl})^{\alpha/2}M_{GUT}, \end{eqnarray}
where \begin{eqnarray} \alpha\equiv\frac{m_{3/2}}{M_0\ln(M_{Pl}/m_{3/2})}.
\end{eqnarray} Similar mirage mediation pattern arises also for the low
energy soft sfermion masses if the involved Yukawa couplings are
small or $a_i+a_j+a_k=1$ and $c_i+c_j+c_k=1$ for the combination
$(i,j,k)=(H_u,t_L,t_R)$ in the top-quark Yukawa coupling. From
(\ref{parameter}), we find that the anomaly to modulus mediation ratio $\alpha$ is given by
\begin{eqnarray}
\label{alpha}
\alpha=\frac{2\partial_TK_0}{2\partial_TK_0+3\partial_T\ln{\cal
P}} \left(\
1+\frac{4\pi^2[k_H(\Delta f+\Delta f^*)-k(\Delta f_{H}+\Delta f_{H}^*)]}{kN_c\ln(M_{Pl}/m_{3/2})}\right).
\end{eqnarray}
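The defining property of the mirage scale, namely that the RG logarithm in $M_a(\mu)$ vanishes at $\mu=M_{\rm mirage}$ so that all gaugino masses unify at $M_0$ there, can be checked directly (the numerical inputs below are illustrative assumptions):

```python
import math

# Check that the log in M_a(mu) vanishes at mu = M_mirage for any alpha,
# and evaluate M_mirage for alpha = 1, 2; inputs are illustrative.
M_Pl, M_GUT = 2.4e18, 2.0e16     # GeV
m32 = 1.0e5                      # GeV

def M_mirage(alpha):
    return (m32 / M_Pl) ** (alpha / 2.0) * M_GUT

def log_factor(mu, alpha):
    # Logarithm appearing in M_a(mu) = M0*[1 - b_a g_a^2(mu)/(4 pi^2) * log]
    return math.log(M_GUT / ((M_Pl / m32) ** (alpha / 2.0) * mu))

# log_factor(M_mirage(alpha), alpha) = 0, so M_a(M_mirage) = M0 for all a.
# For alpha = 2 the mirage scale comes down to the TeV range.
```
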
In the minimal
KKLT model, anti-brane is sequestered, i.e. $\partial_T\ln{\cal P}=0$,
$\Delta f \simeq 0$ and $\Delta f_H\simeq 0$, thus
$\alpha=1$. However in more generic compactifications, it is
possible that the gauge kinetic functions $f_a$ and $f_H$ are
given by different linear combinations of $T$ and other moduli.
Stabilizing the other moduli can give rise to sizable $\Delta f_H$
and/or $\Delta f$,
thus a different value of $\alpha$ even in the case that
anti-brane is sequestered \cite{abe}. In this regard, one
interesting possibility is to have $\alpha=2$ which leads to the TeV scale
mirage mediation. As was noticed in \cite{choi3}, the little
hierarchy problem of the MSSM can be significantly ameliorated in
the TeV scale mirage mediation scenario.
For the model under discussion, $\alpha= 2$ can be achieved for instance
when $\partial_T\ln{\cal P}=0$,
${\rm Re}(\Delta f)=0$ and ${\rm Re}(\Delta f_H)=-N_c/2$.
To be more concrete, let us consider
the following K\"ahler potential and the uplifting operator which are expected
to be valid for a wide class of string compactifications:
\begin{eqnarray}
f_a&=&kT, \quad f_H\,=\,k_HT+\Delta f_{H},
\nonumber \\
K_0&=& -n_0\ln(t_V), \quad Z_I\,=\, \frac{1}{t_V^{n_I}},\quad
{\cal P}\,=\,\frac{{\cal P}_0}{t_V^{n_P}},
\end{eqnarray}
where $Z_I$ denote the K\"ahler metric of $\Phi^I=(X,Q^i)$, and
${\cal P}_0$ is a constant of ${\cal O}(m_{3/2}^2M_{Pl}^2)$.
Applying (\ref{vacuumvalue}), (\ref{aucom}), (\ref{kkltsoft}) and (\ref{alpha}) to this
form of gauge kinetic functions, K\"ahler
potential and uplifting operator, we find
\begin{eqnarray}
|X|^2
&=& \frac{n_0\delta_{GS}}{2(T+T^*)^{1-n_X}},
\nonumber \\
\frac{F^X}{X} &=& (n_X-1)M_0,
\nonumber \\
g_A^2D_A &=&(n_X-1)M_0^2+ \frac{3n_P}{n_0}m^2_{3/2},
\nonumber \\
a_i&=&c_i\,=\,\frac{1}{3}\,n_0-n_i-(n_X-1)q_i, \nonumber \\
\alpha&=&\frac{2n_0}{2n_0+3n_P}
\left(1-\frac{4\pi^2(\Delta f_{H}+\Delta f_H^*)}
{N_c\ln(M_{Pl}/m_{3/2})}\right),
\nonumber \\
y_{ijk}&=& \frac{(n_0\delta_{GS}/2)^{(q_i+q_j+q_k)/2}}{
(T+T^*)^{(a_i+a_j+a_k)/2}}\lambda_{ijk}.
\end{eqnarray}
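For orientation, the following sketch evaluates $a_i=c_i$ and $\alpha$ in this parametrization; all input values (modular weights, $n_X$, $n_P$, $N_c$, $\Delta f_H$, $m_{3/2}$) are illustrative assumptions rather than predictions of the model:

```python
import math

# Evaluate a_i = c_i = n0/3 - n_i - (n_X - 1)*q_i and alpha for the ansatz
# K0 = -n0 ln(t_V), Z_I = t_V^{-n_I}, P = P0/t_V^{n_P}; inputs illustrative.
n0, n_X, n_P = 3.0, 1.0, 0.0           # n_P = 0 <-> sequestered anti-brane
N_c, Re_Delta_fH = 3, -1.5             # Re(Delta f_H) = -N_c/2
ln_ratio = math.log(2.4e18 / 1.0e5)    # ln(M_Pl/m_{3/2})

def a_coeff(n_i, q_i):
    return n0 / 3.0 - n_i - (n_X - 1.0) * q_i

alpha = (2 * n0 / (2 * n0 + 3 * n_P)) * (
    1 - 4 * math.pi**2 * (2 * Re_Delta_fH) / (N_c * ln_ratio))
# For n_X = 1 the U(1)_A charge drops out of a_i, and this choice of
# Re(Delta f_H) gives alpha close to 2, the TeV-scale mirage value.
```
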
In fact, since $U(1)_A$ is spontaneously broken by $\langle X\rangle\sim
M_{Pl}/\sqrt{8\pi^2}$, the soft parameters of (\ref{kkltsoft}) can be
obtained also from an effective SUGRA which would be derived
by integrating out the massive vector
multiplet $\tilde{V}_A=V_A-\ln|X|$ as well as the hidden matter
$Q_H+Q_H^c$. To derive the effective SUGRA, it is convenient to make
the following field redefinition:
\begin{eqnarray}
V_A &\rightarrow & V_A+\ln|X|,
\nonumber \\
T &\rightarrow &T+\frac{\delta_{GS}}{2}\ln(X),
\nonumber \\
Q^I &\rightarrow & X^{-q_I}Q^I
\quad (Q^I=Q_H,Q_H^c,Q^i).
\end{eqnarray}
This field redefinition induces an anomalous variation of the gauge kinetic functions
\begin{eqnarray}
&&f_a\rightarrow f_a-\frac{1}{4\pi^2}\sum_iq_i{\rm Tr}(T_a^2(Q^i))\ln(X),
\nonumber \\
&&f_H\rightarrow f_H-\frac{1}{8\pi^2}(q+q^c)N_f\ln(X). \end{eqnarray} Taking
into account this change of gauge kinetic functions together with
the anomaly cancellation condition (\ref{gscondition}), the model
in the new field basis is given by \begin{eqnarray} \label{remodel}
K &=& K_0(t_V) + Z_X(t_V)e^{-2V_A} +
Z_I(t_V)Q^{I*}e^{2q_IV_A}Q^I
\nonumber \\
W &=& \omega_0 + \lambda Q_H^cQ_H +
\frac{1}{6}\,\lambda_{ijk}Q^i Q^j Q^k, \nonumber \\
f_a&=&kT+\Delta f, \quad f_H=k_HT+\Delta f_{H},\quad {\cal P}\,=\,{\cal
P}(t_V). \end{eqnarray} In the new field basis, $V_A$ corresponds to the
massive vector superfield $\tilde{V}_A$. The heavy hidden matter
$Q_H+Q^c_H$ can be easily integrated out, leaving a threshold
correction to the hidden gauge kinetic function: $\delta
f_H=-N_f\ln(\lambda)/8\pi^2$. The massive vector superfield can
also be integrated out using the equation of motion:
\begin{eqnarray}
\frac{\partial K}{\partial V_A} - \theta^2\bar\theta^2CC^*e^{K/3}
\frac{\partial {\cal P}}{\partial V_A}&=&0.
\end{eqnarray}
For simplicity, here we will consider only the case of sequestered
anti-brane, i.e. $\partial {\cal P}/\partial V_A=0$. The
generalization to unsequestered anti-brane is straightforward.
Making an expansion in $\delta_{GS}={\cal O}(1/8\pi^2)$, the
solution of the above equation is given by \begin{eqnarray}
e^{-2V_A}&=&-\frac{\delta_{GS}\partial_TK_0}{2{Z}_X}\left(1+ {\cal
O}\left(\frac{1}{8\pi^2}\right)\right)
\nonumber \\
&&+\, q_i\left(\frac{-2{Z}_X}{\delta_{GS}\partial_TK_0}\right)^{q_i}
\left(1+{\cal O}\left(\frac{1}{8\pi^2}\right)\right)Z_i Q^{i*}Q^i. \end{eqnarray}
Inserting this solution into (\ref{remodel}) and also adding the
gaugino condensation superpotential of the super $SU(N_c)$ YM
theory whose gauge kinetic function is now given by
$f_H=k_HT+\Delta f_{H}+\delta f_H$, we find the following effective
SUGRA: \begin{eqnarray} K_{\rm eff}&=&K_0(t)
+\left|\frac{Z_X(t)}{\partial_TK_0(t)}\right|^{q_i}Z_i(t)Q^{i*}Q^i,
\nonumber \\
W_{\rm eff}&=&\omega_0+N_c\lambda^{N_f/N_c}e^{-8\pi^2\Delta f_{H}/N_c}e^{
-8\pi^2k_HT/N_c}
+\frac{1}{6}|\delta_{GS}/2|^{(q_i+q_j+q_k)/2}\lambda_{ijk}Q^iQ^jQ^k,
\nonumber \\
f^{\rm eff}_a&=&kT+\Delta f, \qquad {\cal P}_{\rm eff}={\cal
P}=\mbox{constant}, \end{eqnarray} where $t=T+T^*$ and we made the final
field redefinition $Q^i\rightarrow |\delta_{GS}/2|^{q_i/2}Q^i$. One
can now compute the vacuum values of $T$, $F^T$ and the resulting
soft terms of visible fields using the above effective SUGRA, and
finds the same results as those in (\ref{vacuumvalue}),
(\ref{aucom}) and (\ref{kkltsoft}) for $\partial_T\ln{\cal P}=0$.
\section{Conclusion}
In this paper, we examined the effects of anomalous $U(1)_A$ gauge
symmetry on SUSY breaking while incorporating the stabilization of
the modulus-axion multiplet responsible for the GS anomaly
cancellation mechanism. Since our major concern is the KKLT
stabilization of the GS modulus, we also discussed some features
such as the $D$-type spurion dominance and the sequestering of the
SUSY breaking by red-shifted anti-brane which is a key element of
the KKLT moduli stabilization. It is noted also that the $U(1)_A$
$D$-term potential cannot be an uplifting potential for a dS vacuum
in SUSY breaking scenarios with a gravitino mass hierarchically
smaller than the Planck scale.
In case of the KKLT
stabilization of the GS modulus,
soft terms of visible fields are determined by the GS modulus
mediation, the anomaly mediation and the $U(1)_A$ mediation which
are generically comparable to each other, thereby yielding the
mirage mediation pattern of the superparticle masses at low energy
scale.
\vspace{5mm}
\noindent{\bf Acknowledgments} \vspace{5mm}
We thank A. Casas, T. Kobayashi, K.-I. Okumura
for useful discussions, and particularly Ian Woo Kim for explaining to us
some features of the supersymmetry breaking by red-shifted
anti-brane which are presented in this paper.
This work is supported by the KRF Grant funded by the Korean
Government (KRF-2005-201-C00006), the KOSEF Grant (KOSEF
R01-2005-000-10404-0), and the Center for High Energy Physics of
Kyungpook National University.
K.S.J acknowledges also the support of the Spanish Government under
the Becas MAEC-AECI, Curso 2005/06.
\vskip 1cm \noindent {\bf Appendix A. SUSY breaking by red-shifted
anti-brane}
\vskip 0.5cm
In this appendix, we discuss the red-shift of
the couplings of 4D graviton and gravitino on the world volume of
anti-brane within the framework
of the supersymmetric Randall-Sundrum model on
$S^1/Z_2$ \cite{susyrs}. The bulk action of the model is given by
\begin{eqnarray} \label{5daction1} S_{\rm 5D} &=& -\frac{1}{2}\int d^4x dy
\sqrt{-G}\, M_5^3 \left\{\,
{R}_5+\bar{\Psi}^i_M\gamma^{MNP}D_N\Psi_{i\,P}
\right.
\nonumber \\
&&-\left.\frac{3}{2}k\epsilon
(y)\bar{\Psi}^i_M\gamma^{MN}(\sigma_3)_{ij} \Psi^j_N
-12k^2+\frac{\left(\,
\delta(y)-\delta(y-\pi)\,\right)}{\sqrt{G_{55}}}12k\,\right\},
\end{eqnarray} where $R_5$ is the 5D Ricci scalar for the metric $G_{MN}$,
$\Psi^i_M$ ($i=1,2$) are the symplectic Majorana gravitinos, $M_5$
is the 5-dimensional Planck scale, and $k$ is the AdS curvature.
Here we have ignored the graviphoton as it is not relevant for our
discussion.
The relations between the gravitino kink mass and the brane
cosmological constants are determined by supersymmetry.
Imposing the standard orbifold boundary
conditions on the 5-bein and 5D gravitino,
one finds that a slice of AdS$_5$ is a solution of the
equations of motion: \begin{equation} \label{5dmetric}
ds^2 =e^{-2kL|y|}\eta_{\mu\nu}dx^\mu dx^\nu+L^2dy^2\quad (-\pi\leq
y\leq \pi), \end{equation}
where $L$ is the orbifold radius.
The corresponding gravitino zero mode equation is given by
\begin{eqnarray}
\partial_y \Psi^{i}_{(0)\mu} + \frac{L}{2}k\epsilon(y) (\sigma_3)^i_{~j}
\gamma_5\Psi^{j}_{(0)\mu} = 0, \nonumber \end{eqnarray}
yielding the following 4D
graviton and gravitino zero modes: \begin{eqnarray} \label{zeromodes}
G_{(0)\mu\nu}(x,y)&=&e^{-2kL|y|}g_{\mu\nu}(x),
\nonumber \\
\Psi^{i=1}_{(0)\mu}(x,y)&=&e^{-\frac{1}{2}kL|y|}\psi_{\mu L}(x).
\end{eqnarray}
The above form of wavefunctions reflects the
quasi-localization of the 4D graviton and gravitino zero modes
at the UV fixed point $y=0$, leading to a
red-shift of the zero mode couplings
at the IR fixed point $y=\pi$.
To make an analogy with the KKLT set-up,
let us introduce a brane of 4D AdS SUGRA at $y=0$ and
an anti-brane of non-linearly realized 4D SUGRA at $y=\pi$.
Written in terms of the 4D zero modes $g_{\mu\nu}(x)$ and $\psi_\mu$, the UV brane action
is given by
\begin{eqnarray} S_{\rm UV}
\label{UV} &=&\int d^4x \sqrt{-g}\left[\, 3m_{\rm UV}^2M_0^2
-\frac{1}{2}M_0^2R(g) \right. \nonumber \\
&&\left.-\frac{1}{2}\Big(\epsilon^{\mu\nu\rho\sigma}\bar{\psi}_\mu
\gamma_5\gamma_\nu D_\rho{\psi}_\sigma +m_{\rm UV}\bar{\psi}_{\mu
L}\sigma^{\mu\nu}\psi_{\nu R} +{\rm h.c.}\Big)\,\right]. \end{eqnarray}
As for the anti-brane action with non-linearly realized 4D SUGRA at $y=\pi$, let us
choose the unitary gauge of $\xi^\alpha=0$,
where $\xi^\alpha$ is the Goldstino fermion living on the world-volume of anti-brane.
Then using
\begin{eqnarray}
\label{redshift}
G_{(0)\mu\nu}(x,\pi)&=&e^{-2\pi kL}g_{\mu\nu}(x),
\nonumber \\
\Psi^{i=1}_{(0)\mu}(x,\pi)&=&e^{-\pi kL/2}\psi_{\mu L}(x),
\end{eqnarray}
one easily finds that a generic anti-brane action of
$g_{\mu\nu}$ and $\psi_\mu$
can be written as \cite{clark}
\begin{eqnarray} \label{IR} S_{\rm IR}&=& \int d^4x
\sqrt{-{g}}\left[\,
-e^{4A}\Lambda_1^4-\frac{1}{2}e^{2A}\Lambda_2^2R({g})
+e^{2A}Z_1\epsilon^{\mu\nu\rho\sigma}\bar{\psi}_\mu\gamma_5{\gamma}_\nu
D_\rho{\psi}_\sigma \right.
\nonumber \\
&&+\,e^{3A}\Big(\Lambda_3\bar{\psi}_{\mu L}{\sigma}^{\mu\nu}
\psi_{\nu R}+\Lambda_4\bar{\psi}_{\mu L}\psi^\mu_R +{\rm h.c.}\Big)
\nonumber \\
&&+\left.e^{2A}\bar{\psi}_\mu\gamma_5{\gamma}_\nu
D_\rho{\psi}_\sigma \Big(Z_2{g}^{\mu\nu}{g}^{\rho\sigma}
+Z_3{g}^{\mu\rho}{g}^{\nu\sigma}+Z_4
{g}^{\mu\sigma}{g}^{\nu\rho}\Big)\right], \end{eqnarray}
where $e^A\equiv e^{-\pi kL}$
and all the coefficients, i.e.
$\Lambda_i$ and $Z_i$ ($i=1,..,4$), are of order
unity in the unit with $M_5=1$.
In fact, adding the brane actions (\ref{UV}) and
(\ref{IR}) to the bulk action (\ref{5daction1}) makes the solution
(\ref{zeromodes}) unstable.
This problem can be avoided by
introducing a proper mechanism to stabilize the orbifold radius
$L$. In the KKLT compactifications of Type IIB string theory,
such stabilization is achieved by the effects of fluxes.
Generalization of (\ref{5daction1}), (\ref{UV}) and
(\ref{IR}) incorporating the stabilization of the radion $L$ will modify the
wavefunctions of the graviton and gravitino zero modes; however,
(\ref{zeromodes}) still provides a
qualitatively good approximation for the modified wavefunctions as long as
the quasi-localization of zero modes is maintained.
To compensate for the {\it negative} vacuum energy density
of the UV brane, the anti-brane should provide a positive
vacuum energy density: $e^{4A}\Lambda_1^4\simeq
3m_{\rm UV}^2M_0^2$, which requires
$e^{2A}\,\sim\,
m_{\rm UV}/M_{0}$ for $M_0\sim \Lambda_1$.
For this value of the warp factor,
the 4D Planck scale and gravitino mass
are given by
\begin{eqnarray}
M_{Pl}^2\simeq \frac{M_5^3}{k}+M_0^2,
\quad
m_{3/2}\simeq m_{\rm UV},
\end{eqnarray}
where we have assumed $M_5\sim k\sim M_0$ and
ignored the contributions suppressed by an additional power of $e^{A}$.
Then one finds that SUSY breaking effects due to the terms of
$S_{\rm IR}$ {\it other than} $e^{4A}\Lambda_1^4$ are suppressed by more
powers of $e^A\sim \sqrt{m_{3/2}/M_{Pl}}$
compared to the effects due to the terms in $S_{5D}$ and $S_{\rm UV}$
even when $\Lambda_i$ and $Z_i$ are all of order unity in
units with $M_{Pl}=1$. For instance, the gravitino mass from
$S_{\rm IR}$ is of ${\cal O}(e^{3A}M_{Pl})$, while the gravitino mass
from $S_{\rm UV}$ is $m_{3/2}={\cal O}(e^{2A}M_{Pl})$.
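As a rough numerical illustration of these suppression factors (a sketch only; the 1\,TeV gravitino mass below is our assumption, not a value from the text):

```python
import math

# Hedged sketch: evaluate e^A ~ sqrt(m_{3/2}/M_Pl) and the resulting
# gravitino-mass scales from S_UV (~ e^{2A} M_Pl) and S_IR (~ e^{3A} M_Pl).
M_Pl = 2.4e18    # reduced Planck mass in GeV
m_32 = 1.0e3     # ASSUMED gravitino mass of 1 TeV, for illustration only

warp = math.sqrt(m_32 / M_Pl)     # e^A
m_32_UV = warp ** 2 * M_Pl        # reproduces m_{3/2} by construction
m_32_IR = warp ** 3 * M_Pl        # suppressed by one extra power of e^A

print(warp, m_32_UV, m_32_IR)     # e^A ~ 2e-8, so S_IR effects are subleading
```

The point of the arithmetic is only that each extra power of $e^A$ costs a factor $\sqrt{m_{3/2}/M_{Pl}}$, so order-unity coefficients on the IR brane are harmless.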
\vskip 0.5cm
\section{Introduction}
The Megamaser Cosmology Project (MCP) aims to determine the Hubble Constant to within 3\%, by accurately measuring the distance to 10 galaxies in the Hubble flow. The Hubble Constant is an important complement to CMB data for constraining the nature of Dark Energy, the geometry (flatness) of the universe, and the fraction of the critical density contributed by matter. The technique used by the MCP, first pioneered by Herrnstein et al. (1999) to measure the distance to NGC~4258, uses water megamaser emission at 22\,GHz from the center of active galaxies to trace the inner disk geometries at high angular resolution (mas-scale), and thus determine their angular size. The angular size of this inner disk is then compared to the linear size measured through single dish observations, yielding the distance to the galaxy. The ability to image these objects at such high angular resolution comes from the very high brightness provided by the maser process; the maser disks observed by the MCP are extremely compact, extending to $\ll 1$\,pc from the central black hole. The spectral signature of such a maser disk is a cluster of systemic H$_2$O features, and two additional H$_2$O clusters, one red- and one blue-shifted with respect to the cluster of systemic features. Maser disks suitable for the distance technique require a special geometry (the nuclear accretion disk has to be edge-on for significant maser amplification; Lo, 2005), and are therefore extremely rare, so many galaxies must be surveyed to find good candidates, which are then followed up with VLBI.
The precision obtained by the MCP in determining H$_0$ depends on the quality of the individual measurements, but also on the number of galaxies that can be measured, their distance distribution, and their distribution on the sky. An overall 3\% precision in H$_0$ can therefore be achieved by measuring the distances to 10 galaxies if each distance is measured to 10\% precision, assuming the individual distance measurements are uncorrelated. There are currently about 150 galaxies detected in water vapor maser emission, of which about one third show some evidence of disk origin. NGC 4258 is the only galaxy with a 10\% or better distance determination, but it is not suitable for constraining H$_0$ directly, as it is too close and its recession velocity could be dominated by peculiar motion. On the other hand, the MCP is currently studying in detail six H$_2$O maser disks in galaxies which are well into the Hubble flow (e.g. Braatz et al. 2010, Kuo et al. 2011). However, because a broad distribution of megamaser sources on the sky is essential for reducing measurement uncertainties, surveys to find more such galaxies remain crucial for the success of the project.
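The scaling of the overall precision with the number of galaxies follows from simple error averaging; a one-line check (our own arithmetic, assuming equal, uncorrelated 10\% errors):

```python
import math

# With N independent distance measurements of equal fractional error e,
# the fractional error on the combined H_0 scales as e / sqrt(N).
n_galaxies, per_galaxy_err = 10, 0.10
h0_err = per_galaxy_err / math.sqrt(n_galaxies)
print(round(100 * h0_err, 1))   # ~3.2 percent, i.e. the quoted ~3% goal
```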
We present here recent results on Mrk~1419, which is one of the galaxies studied within the MCP with the aim of determining an accurate geometric distance. The megamaser in Mrk~1419 was discovered in 2002 by Henkel et al. with the 100\,m Effelsberg telescope, and it was the first maser after NGC~4258 to display the characteristic ``disk signature'', while being ten times farther away. Through single dish monitoring with Effelsberg, the authors could already measure the secular acceleration of the systemic components, and therefore concluded that the maser emission in Mrk~1419 must arise from an almost edge-on circumnuclear disk.
\section{Observations and calibration}
We observed Mrk~1419 with a global VLBI array between May 2009 and January 2011, for a total of six epochs of 12 hours each. The array comprises the VLBA, the GBT, and Effelsberg. In two of these epochs, we also added the EVLA, tuned to the frequency of the systemic masers, in order to improve the signal-to-noise level in this part of the spectrum.
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[scale=0.21]{mrk1419_spec_IFs.pdf}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[scale=0.27]{mrk1419_accelerations.pdf}
\end{minipage} \caption{{\it Left}: GBT spectrum of Mrk~1419, taken on December 16, 2009. At the bottom, the positions of the VLBI bands are marked for each of the spectral components. Note that for the red and systemic bands, each IF represents two polarizations. {\it Right:} Plot of the systemic maser velocity (shown on the left with the black bar underneath) as a function of time. The velocity of each maser spot is determined from the GBT observations. The slope of the fit gives the acceleration for each component. }
\end{figure}
All our observations were carried out in self-calibration mode. Figure~1 (left) shows a typical single dish spectrum of Mrk~1419, taken in December 2009. The systemic masers are typically 40\,mJy, and range over 150\,km\,s$^{-1}$. Given the relative weakness of the masers in this source, and in order to improve the quality of our calibration, we self-calibrated the data using a clump of systemic masers spreading over 10\,km\,s$^{-1}$. VLBI observations were carried out with four IF bands and two polarizations (RCP and LCP), each of 16\,MHz. Two of the bands were centered on the galaxy systemic velocity, two were centered on the blue-shifted part of the spectrum, and the last two were centered on the red-shifted part of the spectrum, offset from each other because of the larger spread in velocity in this part (Figure~1, left). ``Geodetic'' blocks were placed at the start and end of our observations, in order to solve for atmospheric and clock delay residuals for each antenna.
Calibration was performed using AIPS, and included a priori phase and delay calibration, corrections for zenith atmospheric delays and clock drifts (using the geodetic block data), flux density calibration, a manual phase calibration to remove delay and phase differences among all bands, and the selection of a maser feature as the interferometer phase-reference. After calibrating each dataset separately, the data were ``glued'' together, and imaged in all spectral channels for each of the IF bands. The image from each spectral channel appeared to contain a single maser spot, which we fitted with a Gaussian brightness distribution in order to obtain positions and flux densities.
Single dish GBT monitoring of Mrk~1419 was performed with approximately one observation per month, except for the summer months when the humidity makes observations at 22\,GHz inefficient. The GBT spectrometer was configured with two 200\,MHz spectral windows, each with 8192 channels, one centered on the systemic velocity of the galaxy and the second offset by 180\,MHz. Each observation was carried out for about 4\,h. Finally, data calibration was performed in GBTIDL, with a low-order polynomial fit to the line-free channels to remove the spectral baseline.
\section{Results and discussion }
We present here preliminary results from three VLBI epochs, out of the six epochs observed overall, and from the GBT acceleration measurements performed around those epochs. The VLBI epochs, labeled BB261N, P and R, were all observed between December 2009 and January 2010. The rms noise level for each of the VLBI maps is $\sim$ 0.8\,mJy\,beam$^{-1}$. Figure~2 (left) shows the maser distribution on the sky, for the three epochs combined, with east-west and north-south offsets (in mas) relative to the maser components at the systemic velocity (black symbols). The position angle of the maser disk is $-131^{\circ}$ and the inclination with respect to the observer is 89$^{\circ}$. The inner and outer radii of the disk are 0.13 and 0.37\,pc, respectively. The disk shows some warping, especially towards the lower, blue-shifted part. While the disk flattens out in the outer part, there is clear evidence for significant bending on the inner side. Towards the red-shifted part of the disk, the larger vertical spread in the maser distribution may be due in part to the masers being fainter ($\sim$ 10\,mJy), lowering our signal-to-noise ratio, but may also indicate a true scatter in the maser positions, due to a larger inclination in this part of the disk, or may reveal a thicker disk. Figure~2 (right) shows the position-velocity (PV) diagram for Mrk~1419. The high-velocity masers trace a Keplerian rotation curve and the systemic masers fall on a linear slope. This slope can be extended to intersect the rotation curve traced by the high-velocity features, and the intersection determines the angular radius of the disk and the magnitude of the rotation velocity traced by the systemic masers. The precise fit to the high-velocity masers demonstrates that the disk is dominated by the gravitational potential of the supermassive black hole at the center.
From this fit, we calculate the mass of the black hole to be (1.16 $\pm$ 0.05) $\times$ 10$^{7}$\,M$_{\rm solar}$ (for a Hubble constant value of H$_0$ = 73\,km\,s$^{-1}$\,Mpc$^{-1}$; see Kuo et al. 2011).
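As a consistency check (our own, not quoted in the text), the Keplerian orbital speed implied by this mass at the quoted inner and outer maser radii can be computed directly:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
pc = 3.086e16      # m

M_bh = 1.16e7 * M_sun          # black hole mass quoted above
for r_pc in (0.13, 0.37):      # inner and outer disk radii quoted above
    v = math.sqrt(G * M_bh / (r_pc * pc)) / 1e3   # orbital speed in km/s
    print(r_pc, round(v))      # Keplerian speed falls off as r^(-1/2)
```

The result is a few hundred km\,s$^{-1}$ across the disk, the expected velocity scale for the high-velocity maser features in such a system.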
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[scale=0.37]{mrk1419_44NPR_color.pdf}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[scale=0.46]{rotcurve_cropped.pdf}
\end{minipage}
\caption{{\it Left:} Spatial distribution of the inner accretion disk traced by H$_2$O masers. {\it Right:} Position-velocity diagram for Mrk~1419. }
\end{figure}
From the parameters derived from both single dish and VLBI measurements, we measure the angular diameter distance to Mrk~1419 to be 81$\pm$10\,Mpc. Here, we calculate the distance using D = a$^{-1}$\,k$^{2/3}$\,$\Omega ^{4/3}$, where a is the acceleration from the GBT results (Figure~1, right), k is the Keplerian rotation constant, derived from the fit to the high-velocity masers in the PV-diagram, and $\Omega$ is the slope velocity/impact parameter for the systemic masers (see Braatz et al. 2010). In our most simplified model of the disk, the maser emission originates in a thin, flat, edge-on disk and the dynamics are dominated by a central massive object, with all maser clouds in circular orbits. In this model, high-velocity masers trace gas near the tangential point at the edge of the disk, while systemic masers occupy part of a ring orbiting at a single radius and covering a small range of velocities on the near side of the disk. The positive slopes seen in the maser velocities with time (Figure 1, right) indeed show clear evidence for the centripetal acceleration of the masers, as they move across the line of sight in front of the central black hole. Using a ``by-eye'' method on such plots, we measure two accelerations for the systemic masers: one for the masers with velocities $>$ 4940\,km\,s$^{-1}$, of 3.5\,km\,s$^{-1}$\,yr$^{-1}$, and one for the masers with velocities $<$ 4940\,km\,s$^{-1}$, of 2.1\,km\,s$^{-1}$\,yr$^{-1}$. Because the higher-acceleration masers are fainter and not visible in our VLBI maps, we only take the lower acceleration into account for the distance estimation. Since the presence of more than one acceleration indicates that the systemic masers likely originate from more than one radial distance from the black hole, in contrast to our simplified assumption, more sensitive VLBI observations will be extremely important in the future to better constrain our models with the available information.
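Both ingredients of the measurement, the acceleration (a linear fit of velocity versus time) and the distance formula, can be sketched in a few lines (the monitoring data below are synthetic, and the values fed to the distance formula are placeholders, not the measured ones):

```python
import numpy as np

# -- Acceleration: slope of a systemic feature's velocity versus time
#    (cf. Figure 1, right).  The data here are SYNTHETIC, for illustration.
t = np.linspace(0.0, 2.0, 20)                  # observing epochs in years
a_true = 2.1                                   # km/s/yr, the lower acceleration above
rng = np.random.default_rng(0)
v = 4930.0 + a_true * t + rng.normal(0.0, 0.05, t.size)
a_fit, v0 = np.polyfit(t, v, 1)                # slope of the linear fit = acceleration

# -- Distance: D = k^(2/3) Omega^(4/3) / a  (Braatz et al. 2010).  Consistent
#    units are assumed; k and Omega must come from the PV-diagram fits.
def maser_distance(a, k, Omega):
    return k ** (2.0 / 3.0) * Omega ** (4.0 / 3.0) / a

print(a_fit)   # recovers ~2.1 km/s/yr from the synthetic data
```

The ``by-eye'' measurement in the text corresponds to reading off such slopes feature by feature; `maser_distance` only encodes the dimensional bookkeeping of the formula above.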
Finally, we determined the slope for the systemic masers in the PV-diagram, $\Omega$, by rotating the disk on the sky by 45$^{\circ}$ (counterclockwise in Figure~2, left) and measuring the impact parameter of each maser component from its abscissa on the rotated axes. This method worked well, but has the caveat that the linear slope in the PV diagram of Mrk~1419 is best fitted when the systemic features are rotated by a smaller angle than the disk further out, thus giving further evidence for the presence of a significant warp in the part of the map that is not directly traced by the masers.
\section{Summary and future work}
We presented here VLBI images and single-dish GBT results for the water vapor masers in Mrk~1419. The spatial distribution of the masers in this source is nearly linear, with high-velocity masers on both sides of the masers at the galaxy systemic velocity. The water masers trace gas in Keplerian orbits at radii of $\sim$ 0.2\,pc, moving under the influence of a $\sim$ 1.16 $\times$ 10$^{7}$\,M$_{\rm solar}$ black hole. We model the rotation from the systemic masers assuming a narrow ring, and combine our results with the acceleration measurement from single dish observations to determine a distance to Mrk~1419 of 81$\pm$10\,Mpc. The main sources of uncertainty in the distance are the measurement of the orbital curvature parameter $\Omega$ and the uncertainty in the acceleration, while the contribution from the Keplerian rotation constant is negligible. However, the complex geometry of this source is evident from the significant warp in the disk, and from the presence of more than one ring for the systemic masers. More sophisticated modeling of the maser disk using Bayesian fitting will therefore help resolve these complications, and first results using this method look promising (see Figure~3). Finally, the addition of more sensitive VLBI epochs to our analysis will improve the signal-to-noise ratio and can reduce the distance uncertainty to about 10\%.
\begin{figure}[ht]
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[scale=0.29]{Disk_bayes.pdf}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.5\linewidth}
\includegraphics[scale=0.29]{Disk_mod_bayes.pdf}
\end{minipage}
\caption{ {\it Left:} Position of the masers seen from ``above'' the disk, determined from the output of the Bayesian fitting program. {\it Right: } Comparison between model and data. }
\end{figure}
\section{Introduction}
The birth of string theory is widely considered to be the discovery by Veneziano of the scattering amplitude formula that today bears his name \cite{Veneziano:1968yb}. More than five decades later, the calculation of string scattering amplitudes remains a formidable challenge. To give the example of the type II superstring in Minkowski spacetime, the four-point amplitude for massless external states was computed at tree level and one loop in 1982 \cite{Green:1981yb,Schwarz:1982jn}, and at two loops in 2005 \cite{DHoker:2005vch,Berkovits:2005df,Berkovits:2005ng}. There has been significant work on the three-loop problem, namely a proposal for the chiral measure \cite{DHoker:2004fcs,DHoker:2004qhf,Cacciatori:2008ay} and a partial computation using the pure spinor formalism \cite{Gomez:2013sla}, but it remains to be fully addressed. The advances have had a rich interplay with those in gauge theory and gravity amplitudes, particularly in their maximally supersymmetric versions. For instance, the first computations of the four-point one-loop amplitudes in the now widely studied 4D ${\mathcal N}=4$ super-Yang-Mills theory (SYM) and ${\mathcal N}=8$ supergravity were based on the field theory limit of the analogous superstring calculations \cite{Green:1982sw}. In this paper, we aim to return the favour by importing three-loop results in ${\mathcal N}=8$ supergravity, themselves obtained from non-planar ${\mathcal N}=4$ SYM via the Bern-Carrasco-Johansson (BCJ) double copy \cite{Bern:2010ue}, into the type II superstring.
\section{String theory versus field theory}
We will consider the type II superstring four-point amplitude for massless incoming states of momenta $k_i$ $(i=1,\ldots,4)$. The 10D maximal supersymmetry implies that information on the four external states is encoded in a kinematic prefactor ${\mathcal R}^4$ \cite{Green:1987sp}, such that the supergravity tree-level amplitude is $\sim{\mathcal R}^4/(s_{12}s_{13}s_{14})$. We define the Mandelstam variables as $s_{ij}=2k_i\cdot k_j$. Our working assumption will be that, up to three loops \footnote{Beyond three loops, the integration over the period matrix $\Omega_{IJ}$ must be restricted due to the Schottky problem. Beyond four loops, the delicate issue of non-projectedness of supermoduli space is also known to arise \cite{Donagi:2013dua}.}, the $g$-loop superstring amplitude ${\mathcal A}_{\,\mathbb{S}}^{(g)}$ takes the form
\begin{align}
\label{eq:ssamp}
& \frac{{\mathcal A}_{\,\mathbb{S}}^{(g)}}{{\mathcal R}^4} = \int_{{\mathcal M}_{g,4}} \Big|\! \prod_{I\leq J} d\Omega_{IJ\,} \! \Big|^2 \int \!\! d\ell \,\, \big|{\mathcal Y}_{\mathbb{S}}^{(g)}\big|^2 \,\prod_{i<j} | E(z_i,z_j) |^{\frac{\alpha' \!s_{ij}}{2}} \nonumber\\
& \; \times
\Big|\exp{\frac{\alpha'}{2}\!\big(i\pi\, \Omega_{IJ}\,\ell^I\!\cdot\! \ell^J+2\pi i\sum_j \ell^I \!\cdot\! k_j\!\int_{z_0}^{z_j}\!\omega_I\big)}\Big|^{\,2} \,.
\end{align}
The integration denoted by ${{\mathcal M}_{g,4}}$ is over a genus-$g$ fundamental domain parametrised by the period matrix $\Omega_{IJ}$ ($I,J=1,\ldots,g$) and over four marked points $z_i$. We use a `chiral splitting' representation \cite{DHoker:1988ta,DHoker:1989cxq}, made possible by the introduction of the loop momenta $\ell^I$, with $d\ell$ denoting $\prod_I d^{10}\ell^I$. The appearance of the prime form $E(z_i,z_j)$ and the exponential (involving the holomorphic Abelian differentials $\omega_I$ whose cycles define the period matrix) constitute the chiral$\times$anti-chiral loop-level Koba-Nielsen factors. The interesting object is ${\mathcal Y}_{\,\mathbb{S}}^{(g)}$. We make no distinction between type IIA and type IIB apart from the details of ${\mathcal R}^4$, since at four points there is no contribution from odd spin structures at least up to three loops \footnote{This follows from supersymmetry and is clear if we consider factorisable NS-NS external states $\varepsilon_i^{\mu\nu}=\epsilon_i^{\mu}\tilde\epsilon_i^{\nu}$. The supersymmetric pre-factor is ${\mathcal R}^4(\epsilon,\tilde\epsilon)={\mathcal F}^4(\epsilon){\mathcal F}^4(\tilde\epsilon)$, where ${\mathcal F}^4$ is the pre-factor for the open superstring and includes products $\epsilon_i\cdot\epsilon_j$. At three loops and four points, a 10D Levi-Civita tensor arising from an odd spin structure's zero mode may just about be saturated: $\varepsilon_{10}(k_1,k_2,k_3,\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4,\ell_1,\ell_2,\ell_3)$, but it would never give rise to any $\epsilon_i\cdot\epsilon_j$. Moreover, the contraction of the two Levi-Civita tensors (with $\epsilon_i$ and with $\tilde\epsilon_i$) over three indices after loop integration, required for a potentially non-vanishing contribution, yields products $\epsilon_i\cdot\tilde\epsilon_j$, inconsistent with ${\mathcal F}^4(\epsilon){\mathcal F}^4(\tilde\epsilon)$. 
This discussion is consistent with the results of \cite{Gomez:2013sla,Berkovits:2006vc}.}.
We will exploit the analogy between the formula \eqref{eq:ssamp} for the superstring and the following expected formula for supergravity:
\begin{align}
\label{eq:asamp}
\frac{\mathcal{A}_{\,\mathbb{A}}^{(g)}}{\mathcal{R}^4}
& = \! \int \! d\ell \! \int_{{\mathcal M}_{g,4}} \prod_{I\leq J} d\Omega_{IJ} \;\big(\mathcal{Y}_{\mathbb{A}}^{(g)}\big)^2 \prod_{i=1}^4\bar\delta(\mathcal{E}_i)\prod_{I\leq J}\bar\delta(u_{IJ}) \,.
\end{align}
This type of formula for a scattering amplitude was discovered at tree level by Cachazo, He and Yuan \cite{Cachazo:2013hca,Cachazo:2013iea}, generalising a previous formula from twistor string theory \cite{Witten:2003nn,Roiban:2004yf}.
The loop-level extension \cite{Adamo:2013tsa,Casali:2014hfa,Adamo:2015hoa,Geyer:2015bja,Geyer:2015jch,Geyer:2016wjx,Geyer:2018xwu} was derived from the type II ambitwistor string \cite{Mason:2013sva}, which is a worldsheet model of type II supergravity. The 10D loop integration in \eqref{eq:asamp} is UV divergent, so the expression is formal only, and we understand it as defining a loop integrand. The genus-$g$ moduli-space integration is fully localised on a set of critical points, determined by the genus-$g$ {\it scattering equations}: $\mathcal{E}_i=0$ and $u_{IJ}=0$ \footnote{For the delta functions, we use the definition $\bar\delta(z)=\bar\partial(1/2\pi i z)$, standard in this context}. An extensive discussion of the loop-level version of this formalism was presented in \cite{Geyer:2018xwu}; the brief discussion below will be sufficient for our purposes. There is a clear analogy between \eqref{eq:ssamp} and \eqref{eq:asamp}. Our proposal, under conditions to be discussed, is to identify the `chiral half-integrands',
\begin{equation}
\label{eq:YsYa}
\mathcal{Y}^{(g)}_{\mathbb{S}}=\mathcal{Y}^{(g)}_{\mathbb{A}}\,,
\end{equation}
which is known to be possible for $g\leq 2$. Notice that \eqref{eq:ssamp} is a simplified expression where $ \mathcal{Y}^{(g)}_{\mathbb{S}}$ is independent of $\alpha'$. The idea is that we can import an ambitwistor string---i.e.~supergravity---result into the superstring.
The only known procedure to evaluate \eqref{eq:asamp} reflects the fact that the ambitwistor string is a field theory in disguise: the genus-$g$ formula can be localised on a maximal non-separating degeneration, i.e.~a Riemann sphere with $g$ nodes, as in FIG.~\ref{fig:degen}. This follows from a residue argument in moduli space at one \cite{Geyer:2015bja,Geyer:2015jch} and two \cite{Geyer:2016wjx,Geyer:2018xwu} loops, and our three-loop results provide evidence that it holds at higher order. The formula on the nodal sphere is
\begin{align}
\label{eq:assamp}
\frac{\mathcal{A}_{\mathbb{A}}^{(g)}}{\mathcal{R}^4}
& =\int \frac{d\ell}{\prod_I(\ell^I)^2} \int_{{\mathcal M}_{0,4+2g}} \hspace{-10pt} c^{(g)}\big({\mathcal J}^{(g)}\mathcal{Y}^{(g)}\big)^2 \prod_{A=1}^{4+2g}\bar\delta(\mathcal{E}_A)\ \,.
\end{align}
Here, ${\mathcal M}_{0,4+2g}$ is the moduli space of the Riemann sphere with $4+2g$ marked points, corresponding to 4 external particles and $2g$ `loop marked points', one pair per node as in FIG.~\ref{fig:degen}.
\begin{figure}[t]
\includegraphics[width=0.2\textwidth]{genus_3_v2} \qquad
\includegraphics[width=0.2\textwidth]{genus_3_degen}
\caption{Genus-3 surface and its maximal non-separating degeneration (genus 0) with 2 marked points per node.}
\label{fig:degen}
\end{figure}
The factors $c^{(g)}$ and ${\mathcal J}^{(g)}$ arise from the degeneration of ${\mathcal M}_{g,4}$ to ${\mathcal M}_{0,4+2g}$ \cite{Geyer:2018xwu}. We will give an example momentarily. The object $\mathcal{Y}^{(g)}$ in this expression is the limit of $\mathcal{Y}^{(g)}_{\mathbb{A}}$ in the maximal non-separating degeneration. Finally, the delta functions impose the loop-level scattering equations on the nodal sphere, $\mathcal{E}_A=0$, on whose finite set of solutions the moduli-space integral fully localises; in fact, this integral can be understood as a multi-dimensional residue integral.
Let us be more concrete. The degeneration to the $g$-nodal sphere is achieved in a limit involving the diagonal components of the period matrix: $q_{II}=e^{i\pi\Omega_{II}} \to 0\,$.
In this limit, the holomorphic Abelian differentials whose periods define the period matrix acquire simple poles at the corresponding node: with $\sigma \in {\mathbb C}{\mathbb P}^1$,
\begin{equation}
\omega_I=\frac{\omega_{I^+I^-}}{2\pi i}\,, \quad
\omega_{I^+I^-}(\sigma)= \frac{(\sigma_{I^+}-\sigma_{I^-})\, d\sigma}{(\sigma-\sigma_{I^+})(\sigma-\sigma_{I^-})}\,,
\end{equation}
where the $\sigma_{I^\pm}$ are the marked points for node $I$. Together with the marked points $\sigma_i$ associated to the four external particles, we have the total of $4+2g$ marked points parametrising ${\mathcal M}_{0,4+2g}$ up to $\mathrm{SL}(2,\mathbb{C})$. For $g\geq 2$, the off-diagonal components of the period matrix are expressed in this limit in terms of cross-ratios of the nodal marked points,
\begin{equation}
\label{eq:qij}
q_{IJ}=e^{2i\pi\Omega_{IJ}}=\frac{\sigma_{I^+J^+}\sigma_{I^-J^-}}{\sigma_{I^+J^-}\sigma_{I^-J^+}} \,,
\end{equation}
where we denote $\sigma_{AB}=\sigma_A-\sigma_B$\,. This change of integration variables leads to the $({\mathcal J}^{(g)})^2$ appearing in \eqref{eq:assamp}. One ${\mathcal J}^{(g)}$ arises from the moduli-space measure,
\begin{equation}
\label{eq:defJ}
\prod_{I<J}\frac{dq_{IJ}}{q_{IJ}}
= \frac{{\mathcal J}^{(g)}}{\mathrm{vol\; SL}(2,\mathbb{C})}
\,, \quad {\mathcal J}^{(g)}={ J}^{(g)} \prod_{I^\pm}d\sigma_{I^\pm}\,,
\end{equation}
while the other arises from rewriting higher-genus scattering equations as nodal sphere ones. Finally, the scattering equations on the nodal sphere are equivalent to the vanishing of a meromorphic quadratic differential $\mathfrak{P}^{(g)}$ with only simple poles, and can be read off from the residues of this differential at the $4+2g$ marked points,
\begin{equation}
\mathcal{E}_A =\mathrm{Res}_{\sigma_{\!A}}\mathfrak{P}^{(g)} \,.
\end{equation}
The ingredients of \eqref{eq:assamp} can be illustrated with the two-loop example. We have $\,c^{(2)} = 1/(1-q_{12})\,$ \footnote{$c^{(2)}$ is associated to the genus-2 fundamental domain constraint $|q_{12}|<1$. In particular, $c^{(2)}$ arises from relaxing this constraint when degenerating to the sphere \cite{Geyer:2018xwu}.} and
\begin{equation}
\mathfrak{P}^{(2)}=P^2 - (\ell^I \!\omega_{I^+I^-})^2 +(\ell_1^2+\ell_2^2)\,\omega_{1^+1^-}\omega_{2^+2^-}\,,
\end{equation}
where
\begin{equation}
P_\mu(\sigma) = \ell^I_\mu \,\omega_{I^+I^-}(\sigma) + \sum_i \frac{k_{i\mu}}{\sigma-\sigma_i} \,d\sigma \,.
\end{equation}
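As a numerical aside (our own check, not taken from the text), one can verify that the residues of $\mathfrak{P}^{(2)}$ at the $4+2g$ marked points obey the three $\mathrm{SL}(2,\mathbb{C})$ sum rules $\sum_A \sigma_A^p\, \mathcal{E}_A = 0$ for $p=0,1,2$, which hold because $\mathfrak{P}^{(2)}$ falls off like $\sigma^{-4}$ at infinity once momentum conservation is imposed. The sketch below works in 4D rather than 10D purely for brevity:

```python
import numpy as np

# Residues E_A of the quadratic differential P^{(2)} at the 4 + 2g marked
# points, computed by small contour integrals, and their SL(2,C) sum rules.
eta = np.diag([1.0, -1.0, -1.0, -1.0])
def dot(a, b):
    return a @ eta @ b

th0 = 0.7   # massless four-point kinematics, all momenta incoming
k = np.array([[ 1.0, 0.0, 0.0,  1.0],
              [ 1.0, 0.0, 0.0, -1.0],
              [-1.0, -np.sin(th0), 0.0, -np.cos(th0)],
              [-1.0,  np.sin(th0), 0.0,  np.cos(th0)]])
l = np.array([[ 0.3, 0.1, -0.2, 0.5],       # loop momenta l^1, l^2 (arbitrary)
              [-0.4, 0.2,  0.6, 0.1]])

rng = np.random.default_rng(1)
sig_ext = rng.normal(size=4) + 1j * rng.normal(size=4)            # external sigma_i
sig_nod = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # nodal sigma_{I+-}

def omega(I, s):                      # omega_{I+I-}(sigma) / dsigma
    p, m = sig_nod[I]
    return (p - m) / ((s - p) * (s - m))

def u(s):                             # P^{(2)} / (dsigma)^2
    lw = sum(l[I] * omega(I, s) for I in range(2))
    P = lw + sum(k[i] / (s - sig_ext[i]) for i in range(4))
    return dot(P, P) - dot(lw, lw) \
        + (dot(l[0], l[0]) + dot(l[1], l[1])) * omega(0, s) * omega(1, s)

def residue(sA, r=1e-4, n=800):       # (1/2 pi i) * contour integral around sA
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean([u(sA + r * np.exp(1j * t)) * r * np.exp(1j * t) for t in th])

marked = list(sig_ext) + list(sig_nod.ravel())
E = np.array([residue(s) for s in marked])
for p in range(3):
    print(abs(sum(s ** p * e for s, e in zip(marked, E))))   # all ~ 0
```

These three relations are what reduce the number of independent scattering equations, mirroring the $\mathrm{SL}(2,\mathbb{C})$ redundancy on the sphere.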
Effectively, $\mathfrak{P}^{(g)} $ encodes all the potential loop-integrand propagators in an expression like \eqref{eq:assamp}, while $c^{(g)}$ projects out certain unphysical propagators. These details are not important for this paper, where we are concerned with ${\mathcal J}^{(g)}$ and especially $\mathcal{Y}^{(g)}$. At two loops, we have
\begin{equation}
J^{(2)}=\frac{1}{\sigma_{1^+2^+}\sigma_{1^+2^-}\sigma_{1^-2^+}\sigma_{1^-2^-}}
\end{equation}
and
\begin{equation}\label{eq:Y_g=1,2}
\mathcal{Y}^{(2)} = \frac1{3} \left( (s_{14}-s_{13})\,\Delta^{(2)}_{12}\Delta^{(2)}_{34} +\mathrm{cyc}(234) \right) \,,
\end{equation}
where we used the determinant
\begin{equation}
\label{eq:Deltag}
\Delta^{(g)}_{i_1\dots i_g}
= \varepsilon^{I_1\dots I_g}\,\omega_{I_1}(\sigma_{i_1})\dots \omega_{I_g}(\sigma_{i_g})
\end{equation}
defined for any $g$.
The expression \eqref{eq:Y_g=1,2} is built from the differentials $\omega_{I}$, which naturally lift from the nodal sphere to become the holomorphic Abelian differentials on the genus-$2$ surface. Indeed, the genus-2 expression is also valid as $\mathcal{Y}^{(2)}_{\mathbb{A}}$ in \eqref{eq:asamp} and, crucially for us, as $\mathcal{Y}^{(2)}_{\mathbb{S}}$ in \eqref{eq:ssamp}. The object $ \Delta^{(g)}$ is a modular form of weight $-1$ at any genus, which at genus 2 gives $\mathcal{Y}^{(2)}_{\mathbb{S}}$ the appropriate weight such that the moduli-space integral is well defined. At three loops, the answer is not as simple as \eqref{eq:Y_g=1,2}, but $ \Delta^{(3)}$ still appears, as seen in \cite{Gomez:2013sla} and as we will see here.
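A minimal numerical sketch (ours, not from the text) of the determinant $\Delta^{(g)}$ on the nodal sphere, where $\omega_I$ degenerates to $\omega_{I^+I^-}/2\pi i$; the marked points below are arbitrary:

```python
import numpy as np

# Delta^{(g)}_{i1..ig} = eps^{I1..Ig} omega_{I1}(sigma_{i1}) ... as a g x g
# determinant, here for g = 2 with random nodal points on the sphere.
rng = np.random.default_rng(2)
nod = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # sigma_{I+-}

def omega(I, s):   # nodal-sphere limit: omega_{I+I-}(sigma) / (2 pi i dsigma)
    p, m = nod[I]
    return (p - m) / ((s - p) * (s - m)) / (2j * np.pi)

def Delta(pts):    # det of the matrix omega_I(sigma_{i_j}); antisymmetric in pts
    return np.linalg.det(np.array([[omega(I, s) for s in pts] for I in range(2)]))

s1, s2 = 0.3 + 0.1j, -1.2 + 0.7j
print(Delta([s1, s2]) + Delta([s2, s1]))   # antisymmetry: ~ 0
```

The antisymmetry in the marked points is what makes $\mathcal{Y}^{(2)}$ in \eqref{eq:Y_g=1,2} a sum over pairings of the four external legs.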
\section{$\mathcal{Y}^{(g)}_{\mathbb{S}}$ from BCJ numerators}
Let us present and test our strategy. The steps are to:
\begin{enumerate}
\item[(i)] take a supergravity loop integrand written in a BCJ double-copy representation,
\item[(ii)] translate that integrand into the ambitwistor string moduli-space integrand localised on the nodal Riemann sphere, i.e.~obtain $\mathcal{Y}^{(g)}$\,,
\item[(iii)] uplift that formula to a higher-genus modular form conjecturally valid for the superstring, i.e.~obtain $\mathcal{Y}^{(g)}_{\mathbb{S}}$ such that $\mathcal{Y}^{(g)}_{\mathbb{S}}\to \mathcal{Y}^{(g)}$ as $q_{II}\to0$\,.
\end{enumerate}
With our current understanding, step (iii) relies on an educated guess, as we will exemplify.
Starting with step (i), a BCJ representation is one in which the loop integrand is written in terms of trivalent diagrams, whose numerators are the square of analogous numerators in non-planar SYM obeying the BCJ colour-kinematics duality \cite{Bern:2008qj,Bern:2010ue} \footnote{It should be noted that the supergravity loop integrand in \eqref{eq:assamp}, once the moduli-space integral is performed, is not written in terms of Feynman-like propagators, as in the original BCJ representation \cite{Bern:2010ue}. It is instead written in an alternative representation, which was discovered in the ambitwistor string; see \cite{Geyer:2015bja,He:2015yua,Baadsgaard:2015twa,Geyer:2015jch,Cachazo:2015aol,He:2016mzd,Feng:2016nrf,He:2017spx,Geyer:2017ela,Geyer:2019hnn,Edison:2020uzf,Farrow:2020voh} for discussions. The supergravity BCJ numerators relevant in our two- and three-loop problems are valid in both representations.}. See \cite{Bern:2019prr} for a review of this remarkable construction, which was motivated by the KLT relations of string theory \cite{Kawai:1985xq}. Indeed, there is a large body of work relating this construction to aspects of string theory, e.g.~\cite{BjerrumBohr:2009rd,Stieberger:2009hq,Mafra:2011kj,Mafra:2012kh,Ochirov:2013xba,Mafra:2014gja,He:2015wgf,Mafra:2015vca,Tourkine:2016bak,Hohenegger:2017kqy,Ochirov:2017jby,Tourkine:2019ukp,Mizera:2019gea,Casali:2019ihm,Casali:2020knc,Borsten:2021hua,Bridges:2021ebs}. Step (ii) is based on the connection to the scattering equations story, for which we use the following relation based on a differential form with logarithmic singularities \footnote{The tree-level (${\mathcal J}^{(0)}=1$) version of this relation was revealed in string theory in \cite{Mafra:2011nv,Mafra:2011kj} and in the scattering equations (CHY) formalism in \cite{Cachazo:2013iea}; see also \cite{Azevedo:2018dgo,He:2018pol}. Our higher-loop formula is motivated by its verification at two loops in \cite{Geyer:2019hnn}. 
The higher-multiplicity extension is trivial.}
\begin{equation}
\label{eq:expKK}
(2\pi i)^4 \, {\mathcal J}^{(g)}\mathcal{Y}^{(g)} = \sum_{\rho\in S_{2+2g}}\frac{N^{(g)}(1^+,\rho,1^-)}{(1^+,\rho,1^-)}
\;\,\prod_{A=1}^{4+2g}d\sigma_A\,,
\end{equation}
where $(ABC\dots D)=\sigma_{AB}\sigma_{BC}\dots\sigma_{DA}$ is a Parke-Taylor denominator. The BCJ numerators $N^{(g)}$, which depend on a particle ordering, are SYM numerators whose square gives the supergravity numerators; this square effectively translates into the square of ${\mathcal J}^{(g)}\mathcal{Y}^{(g)}$ in \eqref{eq:assamp}. Notice, however, that we have extracted the overall factor ${\mathcal{R}^4}$ in \eqref{eq:assamp}, whose `square root'
is therefore not included in the SYM numerators. The correspondence between the numerators $N^{(g)}$ and trivalent diagrams is best understood in an explicit example, to be discussed below. Before that, let us make two comments.
The first is that two marked points singled out in \eqref{eq:expKK} were chosen to be $\sigma_{1^\pm}$, but the sum is independent of that choice. The second, for the reader familiar with the scattering equations formalism including the developments \cite{Bjerrum-Bohr:2014qwa,Mizera:2019blq,Kalyanapuram:2021xow,Kalyanapuram:2021vjt}, is that equalities like \eqref{eq:expKK} often hold only when the marked points satisfy the scattering equations (e.g.~for CHY Pfaffians). Here, on the other hand, we propose that \eqref{eq:expKK} defines $\mathcal{Y}^{(g)}$ such that it may be uplifted to the superstring, as happens up to two loops.
\begin{figure}[t]
\includegraphics[width=0.485\textwidth]{fig2loop}
\caption{Two-loop example. Diagram associated to the numerator $N(1^+,2,2^+,3,4,2^-,1,1^-)$.}
\label{fig:2loopnum}
\end{figure}
Let us test the strategy at two loops, for which the BCJ representation of the four-point supergravity loop integrand is long known \cite{Bern:1998ug} \footnote{The known result is for 4D ${\mathcal N}=8$ supergravity. The 10D type II supergravity amplitude is not defined, due to the UV divergence, but the loop integrand can be taken to be a straightforward dimensional `oxidation', with appropriate prefactor ${\mathcal R}^4$.}. The two-loop BCJ numerators can be compactly written as
\begin{equation}
\label{eq:2loopBCJ}
N^{(2)}(1^+\!,\rho_1,2^{\pm}\!,\rho_2,2^{\mp}\!,\rho_3,1^-)=
\begin{cases}
s_{ij} & \rho_2 = \{i,j\}\\
0 &\text{otherwise}\,.
\end{cases}
\end{equation}
They correspond to half-ladder diagrams with loop momenta $\pm \ell_1$ at the ends; see FIG.~\ref{fig:2loopnum}.
A standard two-loop diagram is then obtained by gluing the nodal legs, i.e.~$I^+$ with $I^-$.
Taking the result \eqref{eq:2loopBCJ} from the literature, it is possible to obtain $\mathcal{Y}^{(2)}$ via \eqref{eq:expKK}. Then, it is both natural and easy to rewrite $\mathcal{Y}^{(2)}$ in the form \eqref{eq:Y_g=1,2}, which as explained earlier can be uplifted to genus 2, matching the superstring result $\mathcal{Y}^{(2)}_{\mathbb{S}}$. This achieves step (iii).
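As a quick illustrative check of the sparsity of \eqref{eq:2loopBCJ} (a sketch in our own notation, not part of the derivation), the following snippet counts how many of the $6!$ orderings $\rho$ in \eqref{eq:expKK} at $g=2$ carry a nonvanishing numerator, namely those in which exactly two external legs separate the nodal pair $2^\pm$:

```python
from itertools import permutations

# Illustration only: the legs 1^+ and 1^- are held fixed at the ends of
# the half-ladder, and rho permutes the external legs 1,2,3,4 together
# with the nodal pair 2^+, 2^-.  By eq. (2loopBCJ), N^(2) is nonzero
# iff exactly two legs (necessarily external) sit between 2^+ and 2^-.
legs = ['1', '2', '3', '4', '2+', '2-']

def numerator_nonzero(rho):
    i, j = rho.index('2+'), rho.index('2-')
    return abs(i - j) == 3  # exactly two slots between the nodal pair

nonzero = sum(numerator_nonzero(rho) for rho in permutations(legs))
print(nonzero, "of", 720)   # 144 of 720
```

Only 144 of the 720 orderings contribute, reflecting the half-ladder structure of FIG.~\ref{fig:2loopnum}.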
\section{Three Loops}
We now apply our strategy to the much more intricate three-loop case. From the general form of a three-loop field theory integrand, namely the inclusion of the relevant diagram topologies, we can determine $c^{(3)}$ and $\mathfrak{P}^{(3)}$. However, they do not appear in \eqref{eq:expKK}, so they are not important for the goal of this paper \footnote{They will be discussed elsewhere. For illustration, a valid choice is given by
$$
c^{(3)} = \frac{1}{(1-q_{12})(1-q_{23})(1-q_{13})}\; \frac{q_{23}}{1-q_{23}}
$$
and
$$
\mathfrak{P}^{(3)} = P^2 -(\ell^I\!\omega_{I^+I^-})^2 +(\ell_1^2+\ell_2^2)\omega_{1^+1^-}\omega_{2^+2^-}
$$
$$
\phantom{\mathfrak{P}^{(3)} = }+(\ell_1^2+\ell_3^2)\omega_{1^+1^-}\omega_{3^+3^-}+(2\ell_2\cdot \ell_3-\ell_1^2)\omega_{2^+2^-}\omega_{3^+3^-}\,.
$$}.
The important quantities are ${\mathcal J}^{(3)}$ and $\mathcal{Y}^{(3)}$. The Jacobian is straightforwardly obtained from \eqref{eq:defJ} and can be written as
\begin{equation}
J^{(3)} = J_{\text{hyp}}\, \frac{\prod_I\sigma_{I^+I^-}}{\prod_{I< J}\,\sigma_{I^+J^+}\sigma_{I^-J^-}\sigma_{I^+J^-}\sigma_{I^-J^+}} \,,
\end{equation}
where in the factor
\begin{equation}
\label{eq:Jhyp}
J_{\text{hyp}} = \sigma_{1^+2^-}\sigma_{2^+3^-}\sigma_{3^+1^-}-\sigma_{1^+3^-}\sigma_{3^+2^-}\sigma_{2^+1^-}
\end{equation}
the subscript refers to {\it hyperelliptic}, as we will explain.
We can now determine $\mathcal{Y}^{(3)}$ using \eqref{eq:expKK}. The right-hand side is obtained from the known BCJ representation of the three-loop supergravity integrand, a landmark application of the double copy \cite{Bern:2010ue} \footnote{The result in \cite{Bern:2010ue} applies to 4D ${\mathcal N}=8$ supergravity, but we will assume that it `oxidates' trivially to 10D type II supergravity for similar reasons as in the two loop case, given the absence of contributions from odd spin structures.}. The BCJ numerators, listed in table I of \cite{Bern:2010ue}, are not as simple as at two loops and depend linearly on the loop momenta, e.g. \footnote{Our convention for the external momenta is that they are incoming, whereas the convention in \cite{Bern:2010ue} was that they are outgoing. This affects the sign of the term linear in the loop momenta.}
\begin{align}
& N(1^+,1,2,2^+,3,3^+,2^-,4,3^-,1^-) = \frac1{3}\,s_{12}(s_{12}-s_{14}) \nonumber \\
&+ \frac{2}{3}\, \ell^{1}\cdot\big(k_2(s_{13}-s_{14}) +k_3(s_{13}-s_{12})+k_4(s_{12}-s_{14}) \big)
\,. \nonumber
\end{align}
Via \eqref{eq:expKK}, this property implies
\begin{equation}
\label{eq:Y3}
2\pi i\, \mathcal{Y}^{(3)}_{\mathbb{S}}= \mathcal{Y}_0 + 2\pi i\, \ell^I_\mu \, \mathcal{Y}_I^\mu \,,
\end{equation}
where the factors were chosen for later convenience.
We write our results already in uplifted form, i.e.~for $\mathcal{Y}^{(3)}_{\mathbb{S}}$ (which we claim is $\mathcal{Y}^{(3)}_{\mathbb{A}}$) instead of its degeneration $\mathcal{Y}^{(3)}$. To determine $\mathcal{Y}^{(3)}_{\mathbb{S}}$, we construct a well-motivated ansatz with the required modular weight of $-1$, and fix the coefficients of that ansatz by matching numerically the degeneration limit to \eqref{eq:expKK}. This requires expanding in the degeneration parameters the Jacobi theta functions which define various objects, a straightforward if computationally heavy procedure.
The second term in \eqref{eq:Y3} is the easiest: we can write
\begin{equation}
\label{eq:Yloop}
\mathcal{Y}^\mu_I = \frac{2}{3} \left( \alpha_1^\mu \, \omega_I(z_1)\Delta^{(3)}_{234} +\mathrm{cyc}(1234) \right) \,,
\end{equation}
with\, $\alpha_1^\mu = k_2^\mu\,\left(k_3-k_4\right)\cdot k_1+\mathrm{cyc}(234)$ . All the ingredients have been introduced previously.
The object $\mathcal{Y}_0$ is more involved. It is convenient to extricate the kinematic dependence by writing
\begin{equation}
\mathcal{Y}_0 = s_{13}s_{14}\, Y_{12,34} + \mathrm{cyc}(234)\,,
\end{equation}
where $Y_{12,34}$ is independent of the $s_{ij}$ and is symmetric when exchanging: $\sigma_1\leftrightarrow\sigma_2$, $\sigma_3\leftrightarrow\sigma_4$, $\{\sigma_1,\sigma_2\}\leftrightarrow\{\sigma_3,\sigma_4\}$. Let us first state the result and then discuss it:
\begin{equation}
\label{eq:Y1234}
Y_{12,34} = \frac1{3}\, {\mathcal D}_{12,34} - \frac1{15\,\Psi_9} \left( {\mathcal S}_{12,34}^{(a)} -\frac1{8}\,{\mathcal S}_{12,34}^{(b)}\right)
\,,
\end{equation}
where
\begin{align}
\label{eq:wDelta}
{\mathcal D}_{12,34} &= \omega_{3,4}(z_1)\Delta^{(3)}_{234} + \omega_{3,4}(z_2)\Delta^{(3)}_{134} \nonumber \\
&\quad + \omega_{1,2}(z_3)\Delta^{(3)}_{412} + \omega_{1,2}(z_4)\Delta^{(3)}_{312} \,, \\
{} \nonumber\\
\label{eq:Szegoa}
{\mathcal S}_{12,34}^{(a)} &= \! \sum_\delta \Xi_8[\delta] \Big( S_\delta(z_1,z_2)S_\delta(z_2,z_3)S_\delta(z_3,z_4)S_\delta(z_4,z_1) \nonumber \\
& \!\!\!\!\!\! + S_\delta(z_2,z_1)S_\delta(z_1,z_3)S_\delta(z_3,z_4)S_\delta(z_4,z_2) \Big) \,,\\
{} \nonumber\\
\label{eq:Szegob}
{\mathcal S}_{12,34}^{(b)} &= \sum_\delta \Xi_8[\delta] \, S_\delta(z_1,z_2)^2 S_\delta(z_3,z_4)^2 \,.
\end{align}
Starting with the expression \eqref{eq:wDelta}, the object $\omega_{i,j}(z_k)$ is the normalised Abelian differential of the third kind, whose degeneration limit is
\begin{equation}
\omega_{i,j}(\sigma)= \frac{\sigma_{ij}}{(\sigma-\sigma_{i})(\sigma-\sigma_{j})}\, d\sigma \,.
\end{equation}
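As a small numerical sanity check (ours, with arbitrary sample points), one can verify that this degenerate differential has simple poles at $\sigma_i$ and $\sigma_j$ with residues $+1$ and $-1$, the standard normalisation for an Abelian differential of the third kind:

```python
import random

# Illustrative check: w_{i,j}(s) = s_ij / ((s - s_i)(s - s_j)) has
# simple poles at s_i and s_j with residues +1 and -1.
random.seed(0)
si = complex(random.uniform(-1, 1), random.uniform(-1, 1))
sj = complex(random.uniform(-1, 1), random.uniform(-1, 1))

def w(s):
    return (si - sj) / ((s - si) * (s - sj))

eps = 1e-8
res_i = eps * w(si + eps)   # residue at s_i: lim (s - s_i) w(s)
res_j = eps * w(sj + eps)   # residue at s_j: lim (s - s_j) w(s)
print(abs(res_i - 1) < 1e-6, abs(res_j + 1) < 1e-6)   # True True
```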
A consistency check is that the contribution \eqref{eq:wDelta}, including the kinematic coefficient, is completely fixed by \eqref{eq:Yloop}. This follows from the condition of `homology invariance': distinct choices of homology cycles of the Riemann surface with respect to the marked points $z_i$ obey monodromy relations dictated by the chiral splitting procedure \cite{DHoker:1989cxq}, and this connects the two contributions \footnote{In summary, if a marked point $z_i$ is shifted by a `B-cycle', (i) the loop momentum associated to that cycle is shifted by $k_i$ and (ii) the Abelian differential of the third kind has non-trivial monodromy. These two effects combine precisely to achieve homology invariance. See the very clear discussion for the two-loop five-point amplitude in \cite{DHoker:2020prr}, where the objects $g^I_{i,j}$ relate to $\omega_{3,4}(z_1)$ as $\omega_{3,4}(z_1)=(g^I_{1,3}-g^I_{1,4})\omega_I(z_1)$}.
The contributions \eqref{eq:Szegoa} and \eqref{eq:Szegob} are more elaborate, but the structure is familiar from the RNS formalism \cite{DHoker:1988ta,DHoker:2002hof,DHoker:2001kkt,DHoker:2001qqx,DHoker:2001foj,DHoker:2001jaf,DHoker:2005dys,DHoker:2005vch}. The sums are over the 36 even spin structures at genus 3, labelled by $\delta$, and the objects $S_\delta(z_i,z_j)$ are the Szeg\H{o} kernels arising from the OPEs of worldsheet fermions. The `chiral measure' $\Xi_8[\delta]/\Psi_9$ is the crucial ingredient. Here, $\Psi_9=\sqrt{-\prod_\delta \theta[\delta](0)}$ is a modular form of weight 9 (note our non-standard definition for the sign), defined in terms of the even Jacobi theta functions. The general properties of the chiral measure were described in \cite{DHoker:2004qhf,DHoker:2004fcs} and the precise definition of $\Xi_8[\delta]$ was given in \cite{Cacciatori:2008ay}. It is a sophisticated definition, so we will not repeat it here; we found ref.~\cite{tsuyumine1986} very helpful.
In the degeneration limit $q_{II}\to0$, $\Psi_9$ vanishes with leading behaviour $\Psi_9=(\prod_I q_{II}^2)\,\psi_9+\ldots$, with
\begin{equation}
\psi_9 = 2^{14}\, J_{\text{hyp}}\; \frac{\left(\prod_I\sigma_{I^+I^-}\right)^3}{\prod_{I< J}\,\sigma_{I^+J^+}\sigma_{I^-J^-}\sigma_{I^+J^-}\sigma_{I^-J^+}} \,,
\end{equation}
where $J_{\text{hyp}}$ is given in \eqref{eq:Jhyp}.
It is opportune to note that only a codimension-1${}_{\mathbb{C}}$ subset of genus-3 Riemann surfaces are hyperelliptic (whereas for $g\leq2$ all surfaces are), and these are precisely identified by the vanishing of $\Psi_9$ \footnote{and the non-vanishing of another modular form called $\Sigma_{140}$ in the classical reference \cite{Igusa}.}. The condition $J_{\text{hyp}}=0$ identifies hyperelliptic surfaces in the degeneration limit. The factors of $J_{\text{hyp}}$ in $J^{(3)}$ and in $1/\Psi_9$ cancel, such that ${\mathcal J}^{(3)}\mathcal{Y}^{(3)}$ does not vanish in the hyperelliptic sector.
The sums \eqref{eq:Szegoa} and \eqref{eq:Szegob}, which are modular forms of weight 8, vanish in the degeneration limit in a manner analogous to $\Psi_9$, so that the ratio appearing in \eqref{eq:Y1234} yields a finite result on the nodal sphere \footnote{One may also ask why a sum with
$$
S_\delta(z_1,z_3)^2 S_\delta(z_2,z_4)^2
+ S_\delta(z_1,z_4)^2 S_\delta(z_2,z_3)^2
$$
is absent from our result, since it has the correct symmetries. It turns out that this sum gives precisely twice the sum \eqref{eq:Szegob}, at least in the degeneration limit (we expect this to hold beyond the limit too).}. As consistency checks on our implementation of the chiral measure, we verified to order $O(q_{II}^2)$ the following identities (respectively, from \cite{Cacciatori:2008ay,Grushevsky:2008qp,Matone:2008td}):
\begin{align}
& \sum_\delta \Xi_8[\delta] =0 \,, \qquad \sum_\delta \Xi_8[\delta] \, S_\delta(z_1,z_2)^2 =0 \,, \nonumber \\
& \sum_\delta \Xi_8[\delta] \, S_\delta(z_1,z_2) S_\delta(z_2,z_3) S_\delta(z_3,z_1)= C\, \Psi_9 \, \Delta^{(3)}_{1,2,3} \,, \nonumber
\end{align}
where we determined the previously unknown coefficient $C=15\,(2\pi i)^3$. We could not find simplified expressions for \eqref{eq:Szegoa} and \eqref{eq:Szegob}; they are not proportional to $\Psi_9$, i.e.~not proportional to $J_{\text{hyp}}$ in the degeneration limit.
Comparing our result to the pure spinor computation of \cite{Gomez:2013sla}, the latter was restricted to part of the correlator and was not manifestly modular invariant, but appears to be consistent at least with \eqref{eq:Yloop}. The main goal of \cite{Gomez:2013sla}, for which the partial computation was sufficient, was to match a prediction from S-duality \cite{Green:2005ba} for the low-energy amplitude, where the overall normalisation is important. We neglected the normalisation here, and leave this aspect and a proper comparison to \cite{Gomez:2013sla} for future work. Due to manifest supersymmetry, the splitting of spin structures does not arise in the pure spinor approach \cite{Berkovits:2000fe,Berkovits:2001rb,Berkovits:2002zk,Berkovits:2004px,Berkovits:2005df,Berkovits:2005ng}, so this approach may be helpful in simplifying the sums seen above.
\section{Discussion}
We have constructed a conjectured expression for the three-loop four-point amplitude of massless states in the type II superstring. The crucial ingredient is the chiral half-integrand \eqref{eq:Y3}. As at two loops \cite{DHoker:2005vch,DHoker:2020prr}, this object can also in principle be imported into the Heterotic superstring, paired with a bosonic counterpart.
In place of a first-principles worldsheet calculation, we wrote down an ansatz inspired by insights from the RNS and pure spinor formalisms, and then constrained that ansatz using supergravity data mined with modern amplitudes techniques.
Our focus was on briefly delineating a strategy, with very concrete results. Additional technical details will be presented elsewhere. We hope that our conjecture can guide rigorous derivations using established worldsheet methods. Alternatively, in the spirit of the amplitudes programme, perhaps the proof can follow from a set of basic constraints, such as unitarity.
Natural future directions are: the study of the moduli-space integration in the low-energy limit, building on \cite{DHoker:2005jhf,Gomez:2013sla,DHoker:2013fcx,DHoker:2014oxd,DHoker:2020tcq}, which is newly motivated by beautiful advances in the non-perturbative amplitudes bootstrap \cite{Guerrieri:2021ivu}; and the consideration of higher-point \cite{Tsuchiya:1988va,Richards:2008jg,Mafra:2012kh,Tsuchiya:2012nf,Green:2013bza,Mafra:2014gja,Mafra:2015vca,Mafra:2016nwr,Mafra:2018nla,Mafra:2018pll,Mafra:2018qqe,DHoker:2020prr,DHoker:2020tcq} or higher-loop \cite{DHoker:2004fcs,Cacciatori:2008ay,Grushevsky:2008zm,Cacciatori:2008pj,SalvatiManni:2008qa,Morozov:2008wz,Grushevsky:2008zp,Matone:2010yv,Matone:2005vm} amplitudes. We expect our strategy to prove useful, not least because there are BCJ numerators for ${\mathcal N}=8$ supergravity up to five loops \cite{Bern:2012uf,Bern:2017yxu,Bern:2017ucb}, although the five-loop case required a generalisation of this representation.
Also at this loop order, the relation between supermoduli space and ordinary moduli space becomes more intricate \cite{Donagi:2013dua}, calling into question the structure of our starting point \eqref{eq:ssamp}. The interplay between field theory and string theory amplitudes continues to present us with many challenges and fruitful surprises.
\vspace{0.3cm}
\noindent
{\textbf{Note}} As this work was concluded, it came to our knowledge that the authors of \cite{DHoker:2020prr} have independently constructed the contribution to the half-integrand that is linear in the loop momenta, equation \eqref{eq:Yloop}.
\vspace{0.3cm}
\begin{acknowledgments}
\noindent
{\textbf{Acknowledgements}} We thank Eric D'Hoker, Carlos Mafra, Boris Pioline, Rodolfo Russo, Oliver Schlotterer and Edward Witten for comments. YG is supported by the CUniverse research promotion project ``Toward World-class Fundamental Physics" of Chulalongkorn University (grant CUAASC). RM and RSM are supported by the Royal Society via a University Research Fellowship and a Studentship Grant, respectively.
\end{acknowledgments}
\section{Conclusion}
In this work, we propose a meta-learning and self-supervised learning-based symbol detection algorithm. Unlike the existing work on deep learning detectors, the proposed model adapts to a new channel environment within a fixed number of adaptation steps while requiring relatively little supervision from training signals. The experimental results show that the proposed model outperforms OFDM-MMSE and shows comparable performance with the BCJR algorithm when the channel information at the receiver is noisy.
\section{Numerical Experiments}
In this section, we evaluate the performance of the MetaSSD{} proposed in \autoref{sec:method} and compare it with the BCJR and OFDM-MMSE.
\subsection{Experimental Setting}
We conduct experiments with synthetic datasets to evaluate the symbol error rates (SERs) of various detection methods.
We assume the channel state is maintained for $N$ time slots.
We consider an ISI channel where the channel output $y_i$ at time slot $i$ is formalized as:
\begin{equation}\label{eq:finite_memory_channel}
y_{i}=\sum_{l=1}^{L} h_{l} x_{i-l+1}+z_{i},
\end{equation}
where $z_i$ is a noise signal at time slot $i$ distributed as $\mathcal{CN}(0,1/\rho^2)$, and $\rho^2$ represents a signal-to-noise ratio (SNR).
A BPSK modulator is used, i.e., ${\mathbb{X}}=\{-1,1\}$.
The ISI channel is modeled as frequency-selective Rayleigh fading with exponential power delay profile (Exp-PDP), in which the $l$th entry of the ISI channel ${\mathbf{h}}$ is sampled from
\begin{equation}\label{eq:exp-pdp}
h_{l}\sim\mathcal{CN}(0,\sigma_l^2),
\end{equation}
with
$$\sigma_l^2=\frac{\exp(-\gamma(l-1))}{\sum_{l=1}^{L}\exp(-\gamma(l-1))},$$
where $\gamma$ depends on the wireless channel environment.
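For concreteness, here is a minimal sketch (our own, with an arbitrary seed and $\text{SNR}_{\text{dB}}=10$) of sampling the Exp-PDP taps of \eqref{eq:exp-pdp} and generating channel outputs via \eqref{eq:finite_memory_channel}:

```python
import math, random

# Illustrative sketch; symbols follow the text, seed and SNR are ours.
random.seed(0)
L, gamma, N = 4, 2.0, 10000
snr_db = 10.0
rho2 = 10 ** (snr_db / 10)          # SNR, so the noise variance is 1/rho2

norm = sum(math.exp(-gamma * l) for l in range(L))
sigma2 = [math.exp(-gamma * l) / norm for l in range(L)]   # power-delay profile

def cgauss(var):
    # CN(0, var): i.i.d. real/imag parts, each with variance var/2
    s = math.sqrt(var / 2)
    return complex(random.gauss(0, s), random.gauss(0, s))

h = [cgauss(v) for v in sigma2]                  # channel taps, eq. (exp-pdp)
x = [random.choice([-1, 1]) for _ in range(N)]   # BPSK symbols
y = [sum(h[l] * x[i - l] for l in range(L) if i - l >= 0)
     + cgauss(1 / rho2) for i in range(N)]       # eq. (finite_memory_channel)
```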
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{figures/experiments/main.pdf}
\caption{Symbol error rates (SER) on various levels of $\text{SNR}_{\text{dB}}$. Our model (MetaSSD{}) performs consistently better than OFDM-MMSE. The proposed model shows the least performance changes from the perfect to the noisy environments.}
\label{fig:main result}
\end{figure}
\begin{figure*}[t!]
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/experiments/abl_meta.pdf}
\caption{Meta-learning}
\label{fig:abl-meta}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/experiments/abl_ss.pdf}
\caption{Self-supervised learning}
\label{fig:abl-ss}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{figures/experiments/abl_temp.pdf}
\caption{Temperature learning}
\label{fig:abl-temp}
\end{subfigure}
\caption{Ablation study on the three different parts of our learning framework.}
\end{figure*}
A meta-training set $\{D^{(t)}_{\text{train}}\}^T_{t=1}$ is required to train MetaSSD{}. We want the model to be exposed to various environments during training. To do so, we randomly sample $\{h_l\}_{l=1}^L$ from the Exp-PDP channel model with $\gamma=2$ and $\text{SNR}_{\text{dB}}$ from $\text{unif}\{0,15\}$ for each training set $D^{(t)}_{\text{train}}$.
The size of the meta-training set is 10,500, i.e., $T=10,500$, of which $500$ randomly selected sets are used for validation.
Note that the meta-training set covers various $\text{SNR}_{\text{dB}}$ levels. Therefore, the meta-initialization parameter obtained from training ensures that the model adapts to an unseen channel regardless of its $\text{SNR}_{\text{dB}}$.
For the test set, we sampled $500$ datasets for each $\text{SNR}_{\text{dB}}=\{0,\cdots,15\}$ from the same configuration used in the meta-training.
In all experiments, we fix the number of training symbols to 100, i.e., $P=100$, memory length to 4, i.e., $L=4$, and the message length to 10,000, i.e., $N=10,000$ for both training and test sets.
A multi-layer perceptron with five-hidden layers is used as $f_\theta$, where 100, 300, 300, 100, and 50 hidden units are used for each layer, respectively.
The ReLU function is used for activation, and the mini-batch size is set to $50$, i.e., $B=50$.
For the local update in the meta-learning, we set the number of adaptation steps to four, i.e., $K=4$.
All hyperparameters, including the learning rate and regularization coefficients, are tuned by the Bayesian optimization~\cite{frazier2018tutorial} method on the validation set.
To compare the robustness of various detection methods against channel information error, we consider the following two scenarios:
(i) in a \textit{perfect} scenario, we assume that perfect channel information is available at the receiver (i.e., $\hat{\bf h} ={\bf h}$), and (ii) in a \textit{noisy} scenario, we assume that the channel information at the receiver is noisy and modeled as $\hat{\bf h} = {\bf h} + {\bf n}$, where ${\bf n}$ is an additive Gaussian noise whose entry is distributed as $\mathcal{CN}(0,\sigma_n^2)$, as in \cite{shlezinger2020data}.
We assume that $\sigma_n^2 = 0.4$ in our experiments.
At the test time, we evaluate the SER of MetaSSD{} after performing $K$ adaptation steps from the meta-initialized parameters for each task $D^{(t)}_{\text{test}}$.
\subsection{Results}
\autoref{fig:main result} shows SER performances of MetaSSD{}, BCJR and OFDM-MMSE.
We average the results of 500 test sets for each $\text{SNR}_{\text{dB}}$.
The proposed model outperforms OFDM-MMSE consistently across all $\text{SNR}_{\text{dB}}$ levels for both scenarios.
When the channel information is noisy, MetaSSD{} achieves a lower SER than BCJR for the case of $\text{SNR}_{\text{dB}} \geq 8$ dB while providing comparable performance for all $\text{SNR}_{\text{dB}}$ levels.
It is also shown that MetaSSD{} provides the smallest performance gap between \textit{perfect} and \textit{noisy} cases.
This result demonstrates the robustness of MetaSSD{} when the channel information at the receiver is noisy.
\subsection{Ablation Study}
\paragraph{Meta-learning}
To show the effectiveness of the meta-learning framework, we compare two non-meta variants of the detection process: \textit{only-adaptation}-$K$ and \textit{naive}-$K$.
\textit{Only-adaptation}-$K$ starts with randomly initialized parameters and updates parameters for $K$ times with the gradient descent algorithm for each test set.
We note that \textit{only-adaptation}-$K$ is similar to the approach adopted by the previous deep learning-based detector without a meta-learning framework~\cite{ito2019trainable, csahin2019doubly, he2020model, shlezinger2020data, shlezinger2020deepsic, sharma2020deep}.
Compared with the previous work, \textit{only-adaptation}-$K$ uses a relatively small number of training symbols and update steps.
We report the results of two different $K$: 4 and 100.
\textit{Naive}-$K$ predicts symbols with the initialization parameter obtained by \autoref{alg:maml} without any local update step during training.
In other words, \textit{naive}-$K$ assumes all meta-training data are drawn from the same distribution, which corresponds to standard supervised learning.
At test time, \textit{naive}-$K$ performs $K$ local update steps for each test set.
The comparison between meta and non-meta strategies in the \textit{perfect} scenario is shown in \autoref{fig:abl-meta}, where \textit{meta}-$K$ denotes our model.
The relatively low performance of \textit{naive}-$4$ highlights the importance of the local adaptation step in training.
We observe that the accuracy of \textit{meta}-$4$ is even higher than that of \textit{only-adaptation}-$100$, which fails to achieve comparable performance.
The result demonstrates the importance of the meta-initialization parameter trained on the meta dataset.
\paragraph{Self-supervised learning}
To verify the effectiveness of our self-supervised learning framework, we train the model without the self-supervised loss.
\autoref{fig:abl-ss} compares the performance with and without the self-supervised loss. All models are trained with the meta-learning algorithm.
We observe that the model trained with the self-supervised loss performs significantly better at all $\text{SNR}_{\text{dB}}$ levels. The results show that 100 training symbols are insufficient to train a model even with the meta-initialization and highlight the importance of the self-supervised loss.
\paragraph{Temperature learning}
We study the influence of the learnable temperature parameter $\tau$ in the softmax function.
We compare the performance of the softmax with and without temperature.
Since the learnable temperature is designed to reduce the negative effect of the channel estimation noise, we conduct the comparison in the \textit{noisy} environment.
\autoref{fig:abl-temp} shows the result.
Although the improvement seems marginal, the model with the learnable temperature consistently outperforms the one without it across all $\text{SNR}_{\text{dB}}$ levels.
\section{Introduction}
Deep learning-based symbol detectors have recently gained increasing attention due to their relatively simpler algorithms compared with model-based ones~\cite{balatsoukas2019deep, he2019model}. Earlier work on learning-based detectors uses a supervised learning framework to train a model~\cite{shlezinger2020data, shlezinger2020deepsic}. These studies assume that the model parameters are independent of each channel state. Hence, the model parameters need to be re-estimated whenever a new training signal arrives.
Online learning-based neural symbol detectors have been proposed to overcome these limitations~\cite{jiang2018artificial, shlezinger2019viterbinet, khani2020adaptive}. These models can adapt to a new environment through incremental model updates via an online learning strategy. Online learning, however, implicitly assumes that the channel state is not changing dramatically, making the models struggle in rapidly changing environments. One can retrain the model parameters from scratch to overcome this limitation, but it is unclear when to retrain the model in such a case.
On the other hand, the learning-based approach often requires more supervision than the model-based approach. Deep neural networks are data-hungry. To make supervised learning stable, one needs to send relatively long training signals, which reduces the efficiency of the communication systems. Moreover, the training time increases with the level of supervision, making the learning-based model impractical to deploy~\cite{khani2020adaptive, he2020model}.
In this work, we propose a new learning framework to address the limitations of the previous work. Our contribution is two-fold: 1) we adopt
a model-agnostic meta-learning (MAML)~\cite{finn2017model} framework
to find a good initialization parameter, from which the model can adapt to a new environment within a small
number of update steps, and 2) we propose a self-supervised learning objective that predicts symbols by minimizing the reconstruction error of the channel outputs.
The meta-learning helps the model be exposed to various channel environments during meta-training while adapting to a new environment within a few update steps. The self-supervised learning allows the model to use relatively less supervision than previous learning-based detectors.
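To make the meta-learning loop concrete, here is a minimal first-order MAML sketch on toy one-dimensional quadratic tasks (purely illustrative, not our detector; all quantities are made up): the learned initialization ends up close to the task mean, from which any individual task can be reached in a few gradient steps.

```python
import random

# Minimal first-order MAML sketch on toy tasks f_t(theta) = (theta - a_t)^2.
random.seed(0)
tasks = [random.uniform(-1, 1) for _ in range(100)]   # task parameters a_t
theta, alpha, beta, K = 5.0, 0.1, 0.05, 4             # K adaptation steps

def grad(th, a):            # d/d_theta of (th - a)^2
    return 2 * (th - a)

for epoch in range(500):
    a = random.choice(tasks)
    th = theta
    for _ in range(K):                  # inner loop: local adaptation
        th = th - alpha * grad(th, a)
    theta = theta - beta * grad(th, a)  # outer loop (first-order MAML)

mean_a = sum(tasks) / len(tasks)
print(theta, mean_a)   # theta converges toward the task mean
```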
We compare the performance of our model with BCJR~\cite{bahl1974optimal, li1995optimum} and OFDM-MMSE in a simulated environment. Our approach consistently outperforms OFDM-MMSE and shows comparable or even better results than BCJR when channel estimation is unstable. Further ablation studies on each component of our framework validate the necessity of the proposed framework.
The code is available at \url{https://github.com/ml-postech/MetaSSD}.
\section{Self-Supervised Detector with Meta-Learning}\label{sec:method}
In this section, we propose a Meta-learned Self-Supervised Detector (MetaSSD) to estimate the correct symbols in a finite-memory channel. The proposed model is based on two machine learning frameworks: self-supervised learning and meta-learning.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\textwidth]{figures/model/overall_framework.pdf}
\caption{The overall framework of the neural detector with the ISI channel, where we set the memory length to two for the illustrative purpose. The input of the first network is $y_{i-1:i+1}$ and that of second is $y_{i:i+2}$. Both networks share the same parameter $\theta$. As shown in the figure, $x_i$ is predicted twice at a different location of the network. The temperature parameter $\tau_l$ controls the uncertainty of the corresponding location. The weighted sum of two outputs is used as an ensemble prediction, where the weight is proportional to the magnitude $|\hat{h}_l|$ of the corresponding tap.}
\label{fig:framework}
\end{figure*}
\subsection{Self-supervised neural detector}
We consider the problem of transmitting a block of $N$ symbols over the finite-memory channel. Let $x_i \in {\mathbb{X}}$ be a transmitted symbol at time index $i \in [1, 2, \dots, N]$, and $y_i \in {\mathbb{Y}}$ be an output of the channel at the same time index. Given a finite memory length $L$, the conditional distribution of $y_i$ depends on the symbols from $i-L+1$ to $i$, i.e., ${\mathbf{x}}_{i-L+1:i}$.
Therefore, the conditional distribution of outputs ${\mathbf{y}}$ given inputs ${\mathbf{x}}$ can be factorized as
\begin{align}
\label{eqn:finite-channel}
p({\mathbf{y}} | {\mathbf{x}}) = \prod_{i=1}^{N} p(y_i | {\mathbf{x}}_{i-L+1:i}).
\end{align}
If the channel changes over time, the distribution of the outputs also varies over time.
We aim to design a neural network architecture to recover the original symbols ${\mathbf{x}}$ with a given dataset $D = \{{\mathbf{x}}_{1:P}, {\mathbf{y}}_{1:N} \}$, where ${\mathbf{x}}_{1:P}$ are training symbols and ${\mathbf{y}}_{1:N}$ are channel outputs. We propose a neural network model that predicts a block of $L$ consecutive input symbols ${\mathbf{x}}_{i-L+1:i}$ from a sequence of channel outputs ${\mathbf{y}}_{i-L+1:i+L-1}$.
Under the assumption that the neural network can approximate an arbitrary relation between the inputs and outputs, we use the channel outputs ${\mathbf{y}}_{i-L+1:i+L-1}$ as an input of the neural network to predict ${\mathbf{x}}_{i-L+1:i}$.
Specifically, we employ a multi-layered perceptron $f_\theta: {\mathbb{Y}}^{2L-1} \rightarrow {\mathbb{R}}^{|{\mathbb{X}}| \times L}$ parameterized by $\theta$ to estimate the distribution over $x_{i-L+1}, \cdots, x_{i}$. Each block of $|{\mathbb{X}}|$ outputs are then fed into the softmax layer to estimate the distribution of individual symbol.
In the case of binary symbols, we use $L$ output units with the logistic function to model the distribution over the binary value.
Note that instead of estimating a joint distribution over ${\mathbf{x}}_{i-L+1:i}$, we choose to estimate the probability density of each $x_i$ independently since the output space of the former approach is increasing exponentially with respect to the memory length $L$.
The proposed neural network predicts the $i$th symbol $x_i$ $L$ times, each at a different position of the network output, as the input of the neural network ${\mathbf{y}}_{i-L+1:i+L-1}$ is shifted by one for each prediction.
We predict the same symbol multiple times since a tap with high intensity might be easier to infer than a tap with low intensity.
We then ensemble the estimated distributions to predict $x_i$ with different weights, where each weight is proportional to the estimated intensity of the corresponding tap.
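A toy sketch of this $|h_l|$-weighted ensemble (all numbers are made up for illustration):

```python
# The network predicts p(x_i) once per tap position l, and the L
# estimates are averaged with weights proportional to |h_l|.
h_abs = [0.9, 0.4, 0.2, 0.1]          # |h_l| for L = 4 taps (made up)
p_hat = [0.95, 0.80, 0.60, 0.40]      # per-position estimates of p(x_i = +1)

weights = [a / sum(h_abs) for a in h_abs]
p_ens = sum(w * p for w, p in zip(weights, p_hat))   # ensembled p(x_i = +1)
x_hat = 1 if p_ens > 0.5 else -1                     # hard BPSK decision
print(round(p_ens, 4), x_hat)   # 0.8344 1
```

High-intensity taps dominate the decision, while low-intensity taps contribute only mildly.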
Let $\hat{p}(x_i)$ be the ensembled predicted distribution of $x_i$. We use a cross entropy between the ground truth ${\mathbf{x}}$ and $\hat{p}(x_i)$ to train the model with the following objective function:
\begin{align}
\label{eqn:supervised_loss}
\mathcal{L}_{{\mathbf{x}}}(\theta) = \sum_{i=1}^{P}\operatorname{CE}(x_i, \hat{p}(x_i)),
\end{align}
where $P$ is the length of the training symbols, and CE is the cross entropy loss.
When the training length is short, the supervision from the cross entropy would be insufficient to train the model parameter $\theta$. To overcome this limitation, we additionally introduce a self-supervised loss to train the model. Let $\hat{x}_i$ be the ensemble-predicted symbol at time step $i$, i.e., $\hat{x}_i = \sum_{x \in {\mathbb{X}}}\hat{p}(x_i = x)x$. We can reconstruct the channel output $y_i$ with noisy channel information $\hat{{\mathbf{h}}}$. The self-supervised objective can be formalized as
\begin{align}
\label{eqn:self-sup_loss}
\mathcal{L}_{{\mathbf{y}}}(\theta) = \sum_{i=1}^{N} \ell(y_i, g_{\hat{{\mathbf{h}}}}(\hat{{\mathbf{x}}}_{i-L+1:i})),
\end{align}
where $\ell$ is a loss function, and $g_{\hat{{\mathbf{h}}}}$ is a channel model-based estimation function of $y_i$ with the channel information $\hat{{\mathbf{h}}}$. For example, with the inter-symbol interference (ISI) channel, the reconstruction can be done via
\begin{align}
\hat{y}_i = g_{\hat{{\mathbf{h}}}}(\hat{{\mathbf{x}}}_{i-L+1:i}) = \sum_{l=1}^{L} \hat{h}_{l} \hat{x}_{i-l+1},
\end{align}
where $\hat{y}_i$ is the reconstructed output.
It is worth emphasizing that the self-supervised loss can be applied to each output $y_i$ even without true $x_i$, unlike the supervised loss in (\ref{eqn:supervised_loss}).
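A toy numerical sketch of the self-supervised loss for the ISI channel (all numbers made up; squared error as $\ell$): the reconstruction uses only the channel outputs and the soft symbol estimates, with no ground-truth symbols.

```python
# Illustrative self-supervised loss (eq. self-sup_loss) for L = 2 taps.
L = 2
h_hat = [1.0 + 0.0j, 0.5 + 0.0j]              # estimated channel taps
x_soft = [0.9, -0.8, 0.7, -0.95]              # \hat x_i = E[x_i], BPSK
y = [0.95 + 0j, -0.30 + 0j, 0.35 + 0j, -0.60 + 0j]   # channel outputs

def reconstruct(i):
    # \hat y_i = sum_l \hat h_l \hat x_{i-l+1}   (0-based: x_soft[i - l])
    return sum(h_hat[l] * x_soft[i - l] for l in range(L) if i - l >= 0)

# squared-error loss over all outputs, no labels x_i needed
loss = sum(abs(y[i] - reconstruct(i)) ** 2 for i in range(len(y)))
print(round(loss, 6))   # 0.0075
```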
Minimizing the loss in (\ref{eqn:self-sup_loss}) would lead to an incorrect estimation of the symbols $\hat{{\mathbf{x}}}$ if the channel information $\hat{{\mathbf{h}}}$ is incorrect. To mitigate the uncertainty of the channel information, we additionally introduce a temperature parameter for each tap of the estimated channel state. Let $\tau_l$ be the temperature corresponding to $h_l$. The logits of the output unit corresponding to $h_l$ are then divided by the temperature parameter before the softmax.
Note that the model outputs $|{\mathbb{X}}| \times L$ logits before the softmax layer, where each group of $|{\mathbb{X}}|$ logits models the estimated distribution at the $l$th location.
Let $[f_\theta(y_{i-L+1:i+L-1})]_{lk}$ be the $k$th output logit of $f_\theta$ at location $l$ given input $y_{i-L+1:i+L-1}$. With temperature, the estimated probability of symbol $p_\theta(x_{i-L+l})$ given $y_{i-L+1:i+L-1}$ can be formalized as
\begin{align}
p_\theta(x_{i-L+l} = x^{(k)}) = \frac{\exp\left([f_\theta(y_{i-L+1:i+L-1})]_{lk} / \tau_l\right)}{\sum_{k'=1}^{|{\mathbb{X}}|} \exp\left([f_\theta(y_{i-L+1:i+L-1})]_{lk'}/ \tau_l\right) },
\end{align}
where $l \in \{1,2,..., L\}$ and $x^{(k)}$ is the $k$th element of ${\mathbb{X}}$.
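The per-tap temperature scaling above can be sketched as follows (a minimal pure-Python sketch of the softmax in the equation; the max-subtraction for numerical stability is our addition and does not change the result):

```python
import math

def per_tap_softmax(logits, taus):
    """logits: L rows, each with |X| raw outputs f_theta(...)[l][k];
    taus: L positive temperatures tau_l.
    Returns the L temperature-scaled per-tap distributions."""
    dists = []
    for row, tau in zip(logits, taus):
        scaled = [z / tau for z in row]
        m = max(scaled)                      # subtract max for stability
        exps = [math.exp(z - m) for z in scaled]
        s = sum(exps)
        dists.append([e / s for e in exps])
    return dists
```

A small temperature sharpens the row toward a one-hot distribution, while a large temperature flattens it, matching the behavior described below.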
If the temperature $\tau_l$ is close to zero, the output distribution becomes sparse; if the temperature is high, the output distribution becomes smooth. We fit the temperature parameters jointly with the other parameters during training. Note that the temperature influences the ensembled distribution.
Finally, we combine the two objectives (\ref{eqn:supervised_loss}) and (\ref{eqn:self-sup_loss}) to train the model:
\begin{align}
\label{eqn:objective}
\mathcal{L}_D(\theta) = \mathcal{L}_{{\mathbf{x}}}(\theta) + \alpha \mathcal{L}_{{\mathbf{y}}}(\theta),
\end{align}
where $\alpha$ controls the importance of the self-supervised loss. By minimizing the objective above, we can directly estimate the predicted symbols $\hat{{\mathbf{x}}}$ for the reconstruction of ${\mathbf{y}}$ with $g_{\hat{{\mathbf{h}}}}$. \autoref{fig:framework} illustrates the overall framework of our detector with the ISI channel model.
\subsection{Meta-learning algorithm for neural detector}
In a real-world scenario, the channel state keeps changing over time, so the detector trained with (\ref{eqn:objective}) needs to be retrained whenever new training symbols are received, since a model trained on the previous channel state cannot generalize to an unseen state. Retraining often requires multiple iterations over the training set. Therefore, the real-time responsiveness of the detector cannot be guaranteed with the retraining approach.
We employ a meta-learning strategy to overcome the problem of model retraining. Recently, meta-learning has emerged as a new framework to learn `how to learn a model'. Specifically, we adopt a MAML~\cite{finn2017model} framework to learn how to make a detector adapt rapidly to a new channel state. The MAML aims to find meta-initialization parameters from which a model can adapt to a new task environment with only a few parameter update steps. During the meta-training step, the model is exposed to multiple tasks, which correspond to different channel realizations in our case.
Let $\{D^{(t)}\}_{t=1}^T$ be a collection of meta-training datasets, where each dataset consists of training symbols and channel outputs, i.e., $D^{(t)} = \{{\mathbf{x}}_{1:P}^{(t)}, {\mathbf{y}}_{1:N}^{(t)} \}$. We assume that each dataset $D^{(t)}$ is randomly generated from an unknown channel distribution $p({\mathbf{h}})$ with independent environmental noise.
Given the meta-training set, MAML aims to find a meta-initialization parameter $\theta^*$ that minimizes the meta-training loss after a single gradient descent step:
\begin{align}
\theta^* = \argmin_\theta \sum_{t=1}^T \mathcal{L}_{D^{(t)}}(\theta - \lambda \nabla_\theta\mathcal{L}_{D^{(t)}}(\theta)),
\end{align}
where $\lambda$ is a learning rate. Note that the meta-optimization is performed over parameter $\theta$ whereas the objective is computed using the result of the local-optimization with the gradient descent on task $t$.
\begin{algorithm}[t!]
\caption{Training algorithm for MetaSSD{} \label{alg:maml}}
\begin{algorithmic}[1]
\Require task $\{D^{(t)}\}_{t=1}^T$, batch size $B$, adaptation step $K$, learning rate $\lambda, \eta$
\Ensure meta-initialized $\theta$
\State Randomly initialize $\theta$
\While{not done}
\State Sample batch of tasks $\{D^{(b)}\}_{b=1}^B \subset \{D^{(t)}\}_{t=1}^T$
\For{$b = 1, 2, \cdots, B$}
\For{$k = 1, 2, \cdots, K$}
\State Compute $\theta_k^{(b)}=\theta_{k-1}^{(b)}-\lambda\nabla_{\theta_{k-1}^{(b)}}\mathcal{L}_{D^{(b)}}(\theta_{k-1}^{(b)})$
\State \Comment ($\theta_0^{(b)} = \theta$)
\EndFor
\EndFor
\State Update $\theta \gets \theta-\eta\nabla_{\theta_{K}^{(b)}}\sum_b\mathcal{L}_{D^{(b)}}(\theta_{K}^{(b)})$
\EndWhile
\end{algorithmic}
\end{algorithm}
The meta-optimization can be performed via a stochastic gradient descent as follows:
\begin{align}
\theta = \theta - \eta \nabla_\theta \sum_{t \in B} \mathcal{L}_{D^{(t)}}(\theta - \lambda \nabla_\theta\mathcal{L}_{D^{(t)}}(\theta)),
\end{align}
where $B$ is a set of randomly sampled mini-batch indices, and $\eta$ is a meta-learning rate. Note that the meta-optimization requires backpropagating through the Hessian matrix. In practice, the gradient step in the local optimization, also called an adaptation step, can be extended to multiple updates, which requires higher-order derivatives. In this work, we use the first-order approximation that omits the higher-order derivatives~\cite{nichol2018reptile}. The entire algorithm with multiple adaptation steps is presented in \autoref{alg:maml}.
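The meta-update under the first-order approximation can be sketched as follows (a pure-Python sketch on a scalar parameter; the function names, the user-supplied `loss_grad`, and the hyperparameter values are our illustrative choices, not the paper's):

```python
def fomaml_step(theta, tasks, loss_grad, K=1, lam=0.1, eta=0.01):
    """One meta-update with the first-order approximation: adapt theta on
    each task with K inner gradient steps, then sum the gradients evaluated
    at the adapted parameters (the Hessian term is dropped)."""
    meta_grad = 0.0
    for data in tasks:
        phi = theta
        for _ in range(K):                   # adaptation (inner) steps
            phi = phi - lam * loss_grad(phi, data)
        meta_grad += loss_grad(phi, data)    # first-order outer gradient
    return theta - eta * meta_grad
```

For a quadratic per-task loss $\frac{1}{2}(\theta-c^{(t)})^2$, the sketch reproduces the expected behavior: symmetric tasks leave the meta-initialization unchanged, while a one-sided task pulls it toward the adapted optimum.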
\section{Related Work}
Several deep learning-based symbol detection algorithms have been proposed. Among them, supervised learning is the most common choice. For example, TISTA~\cite{ito2019trainable} improves stability and performance on sparse signal recovery by adding learnable parameters to the iterative shrinkage thresholding algorithm.
Similarly, \cite{csahin2019doubly}, and \cite{he2020model} improve detection by adding adjustable parameters to the unfolded version of the orthogonal approximate message passing algorithm and expectation propagation algorithm, respectively.
BCJRNet~\cite{shlezinger2020data}, DeepSIC~\cite{shlezinger2020deepsic}, and DLR~\cite{sharma2020deep} perform robust detection when channel estimation is difficult by using a deep learning model to learn the channel model implicitly.
These models need to be trained from scratch when channel state or SNR changes, making the methods impractical.
VCDN~\cite{samuel2017deep}, TPG-detector~\cite{takabe2019deep}, DetNet~\cite{samuel2019learning}, and the CG detector~\cite{wei2020learned} train their models on data from various channel states and use the channel state as an input to the model. They perform detection on changed channels without test-time adaptation.
These methods assume that the correct channel state is known, which is not guaranteed in general.
SwitchNet~\cite{jiang2018artificial}, ViterbiNet~\cite{shlezinger2019viterbinet}, and MMNet~\cite{khani2020adaptive} tackle the variability of channel states with an online-learning framework.
SwitchNet trains several models for different channel states offline and learns the parameters online to switch to a suitable model for the current channel state.
ViterbiNet tracks the error rate of the FEC decoder at test time. When the error rate exceeds a predefined threshold, the restored symbols are again used for fine-tuning.
MMNet uses the temporal and spectral locality of the channel model to reduce training time during online learning.
After the channel state of the first subcarrier is learned, it operates as a good initialization point and accelerates training for the rest of the channel information.
Online learning, however, implicitly assumes that the channel state does not change dramatically, which limits adjustment in a rapidly changing environment.
EPNet~\cite{zhang2020meta} and Meta-ViterbiNet~\cite{raviv2021meta} adopt a meta-learning framework.
EPNet uses the meta-learned LSTM~\cite{hochreiter1997long} optimizer introduced in \cite{andrychowicz2016learning}.
The meta-learned optimizer is used to learn damping factors sensitive to channel state changes with a small number of epochs. With the proposed damping factor, the optimizer can adapt to the new channel state within a few epochs.
EPNet assumes that the receiver knows the exact channel state.
Meta-ViterbiNet extends ViterbiNet~\cite{shlezinger2019viterbinet} with MAML.
Meta-ViterbiNet is the most similar to our proposed work. However, it requires updating the meta-initialization point periodically, which increases the computation cost.
Note that the number of training symbols for adaptation in the previous methods often exceeds thousands in order to make supervised learning stable.
For example, ViterbiNet~\cite{raviv2021meta} and BCJRNet~\cite{shlezinger2020data} use 5,000 and 10,000 symbols for adaptation, respectively, whereas in our work we use 100 training symbols, which is significantly fewer than the previous methods.
\section{Introduction}\label{intro}
Ultra-cold atomic gas systems are among the most actively studied subjects
in physics today~\cite{Georgescu}.
Owing to their high controllability and versatility, ultra-cold atoms provide an important
playground for the study of interesting problems in quantum physics.
In particular, dynamical properties of the many-body quantum systems can be
investigated by controlling physical parameters of the systems.
Most of these investigations are beyond the reach of the conventional research methods
such as various numerical methods including the Monte-Carlo simulations,
density-matrix renormalization group, etc.
From this point of view, the ultra-cold atom systems are sometimes called
ideal quantum simulators~\cite{book,Blochrev}.
Among them, numerous interesting studies on quantum simulations of
the lattice gauge theory (LGT) have been reported
\cite{Zohar1,Zohar2,Tagliacozzo1,Banerjee1,Zohar3,Zohar4,Banerjee2,Tagliacozzo2,Zohar5,Wiese,Zoharrev,Bazavov,GHcirac,Zn,Schwinger,string}.
Various setups using internal degrees of freedoms of atoms have been proposed.
In these studies, one of the most important point is how to realize the local
gauge symmetry in charge-neutral atomic systems.
In the previous works~\cite{ours1,ours2,ours3,ours4}, we considered single-component
Bose gas systems described by an extended Bose-Hubbard model (EBHM)~\cite{Dutta},
and showed that the U(1) gauge-Higgs model with an exact local gauge symmetry
can be quantum simulated by the EBHM.
The gauge-Higgs model (GHM) is one of the most fundamental gauge theories \cite{Fradkin,Kogut}
in not only high-energy physics but also condensed matter physics.
The GHM has (at least) two distinct phases, one is the confinement
phase and the other is the Higgs phase.
In our works, we clarified phase diagrams by using the Monte-Carlo (MC)
simulations.
Dynamical variables such as the electric field exhibit very different behaviors
in the above two phases, and we studied their dynamics by using
the Gross-Pitaevskii equations.
In this paper, we continue the above study and investigate the
EBHM and GHM by the Gutzwiller (GW) variational method.
In particular, we are interested in the case of relatively large fillings, with the average
particle number per site $\rho_0=7\sim 30$, as
large filling legitimizes the use of the GW variational method and the EBHM-GHM
correspondence.
This paper is organized as follows.
In Sec.~II, we introduce the EBHM and explain how it quantum simulates
the GHM on the lattice.
We also briefly summarize the previous works.
In Sec.~III, we show the numerical results for the model in one and two dimensions.
We first clarify the phase diagrams of the EBHM,
and identify the parameter regions corresponding to the confinement and Higgs phases.
Then, we investigate the dynamical behavior of the electric flux put in the central
region of the lattice.
In the confinement phase, the electric flux is stable although it exhibits
string-breaking-like fluctuations.
On the other hand in the Higgs phase,
it spreads in the empty space and breaks into bits.
This result is in good agreement with the previous result obtained by
the Gross-Pitaevskii equations.
In Sec.~IV, we study the robustness of the confinement state in the GHM and
the effect of a random chemical potential on it.
In particular, we observe a kind of glassy dynamics of configurations
with a finite synthetic electric field in the confinement phase.
This behavior is reminiscent of the anomalously slow dynamics observed in
recent experiments performed on a Rydberg atom chain~\cite{Rydberg},
as indicated by Ref.~\cite{string}.
Then, it is interesting to study the effect of quenched disorder induced by
the random chemical potential on the glassy state.
We calculate the lifetime of high-energy states with density-wave (DW)-type configurations
for various strengths of the disorder.
We obtain somewhat `unexpected' results, that is, a weak disorder hinders
the glassy state first, whereas a further increase of the disorder enhances the glassy nature.
This means that there exists a critical strength of the disorder at which
the glassy nature is hindered maximally.
Section V is devoted to discussion and conclusion.
We discuss the observed glassy behavior of the confinement phase from
the gauge-theoretical point of view, and clarify the origin of the above `unexpected'
results.
We also suggest certain experiments for examining our observation and for
searching for many-body localization (MBL) in ultra-cold gases with a dipole moment.
\section{Extended Bose-Hubbard model and Gauge-Higgs Model}
In the previous works~\cite{ours1,ours2,ours3,ours4},
we showed that the GHM appears as a low-energy
effective theory of the EBHM.
For simplicity of the presentation, here we consider the one-dimensional (1D) EBHM
and explain its relation to the GHM.
The extension to higher-dimensional cases is rather straightforward, although
long-range repulsions are necessary.
Hamiltonian of the EBHM in 1D is given as follows,
\be
H_{\rm EBH}&=&-J\sum_i(\hb^\dagger_i\hb_{i+1}+\hb^\dagger_{i+1}\hb_i)
+{U \over 2}\sum_i\hn_i(\hn_i-1) \nonumber \\
&&+V\sum_i\hn_i\hn_{i+1}-\mu\sum_i\hn_{i},
\label{HEBH}
\ee
where $\hb_i \ (\hb^\dagger_i)$ is the boson annihilation (creation) operator at site $i$,
$\hn_i=\hb^\dagger_i\hb_i$, and $\mu$ is the chemical potential.
The $U$-term and $V$-term in Eq.~(\ref{HEBH}) are one-site and nearest-neighbor
(NN) repulsions, respectively.
We introduce the phase ($\hth_i$) operator as follows,
$\hb_i=e^{i\hth_i}\sqrt{\hat{\rho}_i}$.
By controlling the chemical potential, we consider the case of relatively large fillings
such as $\rho_0={1 \over L}\sum_i\langle \hat{\rho}_i\rangle=(7\sim 30)$ in this paper,
where $L$ is the linear system size.
To relate the boson operator to the gauge field, we introduce a dual lattice with
site $r$, which corresponds to link $(i,i+1)$ of the original lattice.
Artificial electric field, $E_r$, and vector potential, $A_{r,1}$, are given by
$E_r=-(-)^r(\hat{\rho}_i-\rho_0)\equiv -(-)^r\eta_r$, and $A_{r,1}=(-)^r\hth_i$.
It is verified that $E_r$ and $A_{r,1}$ satisfy the ordinary canonical commutation relations
such as $[E_r, A_{r',1}]=-i\delta_{rr'}$.
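For completeness, the quoted commutator follows in one line under the sign convention $[\hat\theta_i,\hat\rho_j]=-i\delta_{ij}$ (a convention choice consistent with the relation above, not stated explicitly in the text):
\be
[E_r, A_{r',1}]&=&\big[-(-)^r(\hat{\rho}_i-\rho_0),\,(-)^{r'}\hat{\theta}_{i'}\big]
\nonumber \\
&=&-(-)^{r+r'}\,[\hat{\rho}_i,\hat{\theta}_{i'}]
=-i\,\delta_{rr'}, \nonumber
\ee
since $[\hat{\rho}_i,\hat{\theta}_{i'}]=i\delta_{ii'}$ in this convention and $(-)^{2r}=1$ on the diagonal.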
Then, the following Hamiltonian is derived from $H_{\rm EBH}$ [Eq.~(\ref{HEBH})]
by ignoring the third or higher-order terms of $\{\eta_r\}$ as we do not
consider the system in the critical regimes,
\be
H_{\rm GH}&=&\sum_r\Big[{V\over 2}(E_{r+1}-E_r)^2+{g^2 \over 2}E^2_r
\nonumber \\
&&-2J\rho_0\cos (A_{r+1,1}+A_{r,1})\Big],
\label{HGH}
\ee
where $g^2=U-2V$.
From Eq.~(\ref{HGH}), the partition function of the system, $Z$, is given by
the imaginary-time path integral,
\be
Z&=&\int [dA_1][dE] \nonumber \\
&&\times \exp\Big[\sum_\tau(iE_x(A_{x+0,1}-A_{x,1})
-\Delta\tau H_{\rm GH})\Big],
\label{ZGH}
\ee
where we have introduced the imaginary time $\tau$ and the corresponding
lattice with time slice $\Delta\tau$.
Now, the system in Eq.~(\ref{ZGH}) is defined on 2D lattice with site
$x=(x_0,x_1)=(\tau,r)$, and $x+0=(\tau+1,r), \ x+1=(\tau,r+1)$.
It is obvious that the Hamiltonian $H_{\rm GH}$ in Eq.~(\ref{HGH})
and the partition function $Z$ in Eq.~(\ref{ZGH}) are {\em not} invariant
under a local gauge transformation such as
$A_{x,1}\to A_{x,1}-\nabla_1\alpha_i$,
where $\nabla_1\alpha_i=\alpha_{i+1}-\alpha_i$ $[r=(i+1,i)]$ and $\{\alpha_i\}$ are arbitrary
real parameters at original sites.
In Ref.~\cite{ours1}, we showed that the system given by Eqs.~(\ref{HGH}) and (\ref{ZGH})
can be regarded as the U(1) GHM with the {\em exact local gauge symmetry}.
In order to express the partition function $Z$ in a gauge-invariant form,
we introduce two-component compact gauge potential on the link $(x,x+\nu) \ (\nu=0,1)$,
$U_{x,\nu}=e^{iA_{x,\nu}}$ and Higgs field $\phi_x=e^{i\varphi_x}$.
Then, we can prove the following equation,
\be
&&Z=\int[dA_0][dA_1][d\phi]\exp [A_{\rm GH}], \nonumber \\
&&A_{\rm GH}=A_I+A_P+A_H, \nonumber \\
&&A_I={1 \over 2V\Delta\tau}\sum_x\bar{\phi}_{x+0}U_{x,0}\phi_x+\mbox{c.c.},
\label{ZGH2} \\
&&A_P={1\over 2g^2\Delta\tau}\sum_x \bar{U}_{x,0}\bar{U}_{x+0,1}
U_{x+1,0}U_{x,1}+\mbox{c.c.}, \nonumber \\
&&A_H=J\rho_0\Delta\tau\sum_x\bar{\phi}_{x+2}U_{x+1,1}U_{x,1}\phi_x
+\mbox{c.c.},
\nonumber
\ee
where $A_I$ is the hopping term of the Higgs field in the $\tau$-direction
(the kinetic term), $A_P$ is the plaquette term of the gauge field
(the electro-magnetic term), and $A_H$ is the spatial hopping term of the Higgs field.
The time-component of the gauge field $A_{x,0}$ has been introduced as an
auxiliary field in order to perform the integration over the electric field $E_r$.
It is easy to show that the system described by Eq.~(\ref{ZGH2}) is gauge-invariant.
By fixing the gauge freedom with a gauge condition such as $\phi_x=1$,
the so-called unitary gauge, the system in Eq.~(\ref{ZGH2}) reduces to the one
derived from the original system in Eq.~(\ref{ZGH}) by integrating out $E_r$ with
the auxiliary field $A_{x,0}$.
From the action in Eq.~(\ref{ZGH2}), it is shown that
for large $J\rho_0$, the Higgs phase is realized, whereas the (homogeneous)
confinement phase forms for large $U, V$ and $g^2>0$.
In the previous work~\cite{ours4},
we investigated the phase diagrams of the EBHM
[Eq.~(\ref{HEBH})] and the GHM [Eq.~(\ref{ZGH2})] by means of the MC simulations
separately, and verified that the phase diagrams of two models are consistent
with each other.
There exist three phases in the phase diagram, i.e., the superfluid (SF),
Mott insulator (MI) and DW.
It was shown that the SF corresponds to Higgs phase of the gauge theory,
whereas the MI in the vicinity of the DW corresponds to the confinement phase
of the gauge theory.
We also studied the 2D and 3D EBHM from the view point of a quantum simulation for
the lattice GHM, and obtained interesting results~\cite{ours1,ours2,ours3}.
In this paper,
we shall study the EBHM in 1D and 2D at relatively large fillings by means of the GW variational method.
At large fillings, the GW variational method is reliable even for the 1D system, as
it is expected that a quasi-Bose-Einstein condensation forms {\em at each site}
of the optical lattice at large fillings and the GW variational method can describe
dynamics of both the MI and SF.
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=7cm]{fig1}
\end{center}
\caption{(a) Stable electric flux string: the blue dashed line represents background
mean density.
The mean density is constant.
The blue arrows and the red line represent the electric fields and $\rho_{i}$, respectively.
The electric flux string corresponds to the DW pattern of the density modulation.
At edges of the flux string, the fictitious Higgs charges appear.
In the confinement phase in 2D and 3D systems, electric flux forms a straight line.
(b) Random chemical potential case:
The random chemical potential leads to a random mean density pattern.
Short electric-flux strings are generated.
The blue and green arrows represent the short flux strings.
The existence of random short electric fluxes induces an instability of the long electric
flux string shown in the upper panel (a).
Here, the Higgs charges residing at the edges of the short electric strings are omitted.
}
\label{electricflux}
\end{figure}
Before going into the numerical study, it is useful to review the relation between the EBHM
and the GHM in a pictorial way.
As explained above, the Mott state in the vicinity of the DW corresponds to the confinement
phase.
From the atomic perspective, a one-dimensional DW-type configuration of atoms
with a density modulation is an interesting object.
It is expected that such a configuration has a rather long lifetime because of
the NN interactions, as observed in recent experiments on Rydberg atoms~\cite{Rydberg}.
This configuration is nothing but an electric flux string in the gauge-theory perspective.
As schematically shown in Fig.~\ref{electricflux}(a), a pair of Higgs particles is attached
to the edges of the electric flux, as dictated by the Gauss law.
Even in the 2D system, the electric flux string tends to form a straight line in the confinement
phase, and it can change its length only by pair creation of Higgs particles.
Let us consider effects of background density fluctuations around the DW string,
which are generated by a random chemical potential, see Fig.~\ref{electricflux} (b).
In the gauge-theory picture, these fluctuations correspond to randomly distributed
charges and resultant electric-field fluxes, which induce an instability of the original
electric flux string.
Through this picture, we expect that the lifetime of the DW string is shortened by
the random chemical potential.
This may be unexpected given the common belief that a high-energy
state such as a DW string is stabilized by disorder as a result of localization,
but it is quite natural from the gauge-theoretical point of view.
In the subsequent section, we shall verify the above gauge-theoretical expectation
by the numerical simulations.
\section{Numerical Results: Systems without disorder}
In this section, we show the numerical results for the EBHM in 1D and 2D
obtained by the GW variational method.
The Hamiltonian of the EBHM in Eq.~(\ref{HEBH}) is factorized into
single-site local Hamiltonians with the maximum particle number $n_c$
at each site~\cite{FN1}.
In this work, we set $n_c=30$ for 1D and $n_c=50$ for 2D systems.
While the 1D EBHM has so far been extensively studied under the unit-filling condition
\cite{Rossini,Batrouni,Kawaki}, our focus is the large-filling regime; thus it is worth
characterizing the large-filling ground state.
We also employ the periodic boundary condition for the practical calculation.
We first study the phase diagrams and identify the parameter regions of
the confinement and Higgs phases.
Then, we investigate dynamical properties of the gauge field in these phases.
\subsection{Phase diagram of 1D EBHM}
In this subsection, we study equilibrium properties of the system,
in particular, the ground-state phase diagrams of the 1D EBHM.
To this end, we obtain the lowest-energy states for $H_{\rm EBH}$
by the GW variational method.
The order parameters used for identification of the phases are the following:
\be
\Phi={1 \over N_s}\sum_i\Phi_i={1 \over N_s}\sum_i\langle \hb_i\rangle, \;\;
\Delta n=\bar{\rho}_e-\bar{\rho}_o,
\label{OPs}
\ee
where $\bar{\rho}_{e(o)}$ is the average density of atom at even (odd) sites,
and $N_s$ is the total number of sites and $N_s=L=200$ in the present calculation.
A finite value of $\Phi$ indicates the existence of SF order, and $\Delta n$
measures the DW order.
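Given per-site expectation values from a converged GW state, the two order parameters of Eq.~(\ref{OPs}) can be evaluated as follows (a pure-Python sketch; the function name and the real-valued stand-ins for $\langle \hb_i\rangle$ are our illustrative choices):

```python
def order_parameters(phi_local, rho_local):
    """phi_local: per-site condensate amplitudes <b_i>;
    rho_local: per-site densities <n_i>.
    Returns (Phi, Delta_n): the site-averaged SF order parameter and
    the even-odd density imbalance measuring the DW order."""
    n = len(rho_local)
    Phi = sum(phi_local) / n
    rho_even = sum(rho_local[0::2]) / len(rho_local[0::2])
    rho_odd = sum(rho_local[1::2]) / len(rho_local[1::2])
    return Phi, rho_even - rho_odd
```

A staggered density profile with vanishing $\langle \hb_i\rangle$ yields $(\Phi,\Delta n)=(0,\neq 0)$ (DW), while a uniform profile with finite amplitudes yields $(\neq 0,0)$ (SF).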
As we fix the chemical potential ($\mu$) in the calculation, the total average density
of atom, $\rho_0$, varies under a change of the parameters in the Hamiltonian
$H_{\rm EBH}$.
In Fig.~\ref{DW1d}, we show the calculation of $\Delta n$ in the $(V/J-U/J)$ plane.
We set $J=0.01$, and the chemical potential is fixed as $\mu/J=950$ to obtain relatively large fillings.
In Fig.~\ref{SF1d}, we show the calculation of $\Phi$.
From the results in Figs.~\ref{DW1d} and \ref{SF1d}, we obtain the phase
diagram of the 1D EBHM as in Fig.~\ref{1DPD}~\cite{QMC}.
SF forms in the regions of relatively small $U/J$ and $V/J$.
MIs for large $U/J$ have large integer filling factors such as $\rho_0=7$ for
$U/J=71$ and $V/J=35$.
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=8cm]{fig2}
\end{center}
\vspace{-1cm}
\caption{$\Delta n$ in the $(V/J-U/J)$ plane for the 1D EBHM.
There are four phases, Mott insulator (MI), superfluid (SF), density wave (DW),
and supersolid (SS).
Measurement of the SF order is shown in Fig.~\ref{SF1d}.
$\mu/J=950$.
}
\label{DW1d}
\end{figure}
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=8cm]{fig3}
\end{center}
\vspace{-1cm}
\caption{$\Phi$ in the $(V/J-U/J)$ plane for the 1D EBHM.
In the MI and DW, $\Phi=0$, whereas in SF and SS, $\Phi>0$.
For small $V/J, U/J$, the SF order parameter is vanishingly small.
This result comes from the fact that the particle number at each site
saturates the maximum value $n_c$, and the GW method is not applicable there.
}
\label{SF1d}
\end{figure}
In Fig.~\ref{1DPD}, the three parameter regions indicated by the arrows
refer to the confinement [(a)], Higgs close to confinement [(b)],
and genuine Higgs phases [(c)], respectively.
Here, we should comment that in Fig.~\ref{SF1d} there are many lines where
a finite SF density appears.
These lines exist between MIs with different fillings or between an MI and the DW phase.
Supersolid (SS) also exists in some parameter regions including narrow line regions
between the MIs and DW.
Similar tendency was reported in Ref.~\cite{Batrouni}.
In the subsequent section, we shall study physical properties of the above phases
from the viewpoint of the gauge theory.
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=8.5cm]{fig4}
\end{center}
\vspace{-0.5cm}
\caption{Phase diagram of the 1D EBHM.
For large $U/J, V/J$, the MI and DW occupy the phase diagram, and
the SF is located between the MI and DW phases.
The arrows indicate the locations in which the behavior of electric flux is measured.
}
\label{1DPD}
\end{figure}
\subsection{Behavior of electric flux in quantum simulation of gauge-Higgs model:
1D case}
In this subsection, we shall study the time evolution of an
``electric flux'' put on a straight line.
To this end, we employ the time-dependent GW
methods~\cite{tGW1,tGW2,tGW3,tGW4,tGW5,tGW6,aoki,ours5}.
Behavior of the electric flux is a very important quantity in the gauge theory,
which discriminates the confinement, Coulomb and Higgs phases \cite{ours1}.
In the EBHM, an artificial electric flux at $r$ is produced by the configuration
such as
$\langle E_r\rangle=-(-)^r(\langle \hat{\rho}_i\rangle-\rho_0)=\Delta$,
where $\Delta$ specifies
a pair of charges, $(-\Delta,+\Delta)$, located at the edges of the electric flux string.
In the GHM, this configuration is explicitly given by
$
\prod_{r_1<r<r_2}(U_{r,1})^\Delta|0\rangle,
$
where a pair of static charge $\pm \Delta$ are located at $r_1$ and $r_2$
and $|0\rangle$ is the `vacuum' without electric fluxes.
In the practical calculation, we add very small but finite fluctuations to the local
boson density (i.e., the local electric field) in the initial states in order to perform smooth
calculations with the time-dependent GW method.
In most of the practical simulations in this section, we put $J=0.01$ and
set the unit of time to $\hbar/(100J)$, i.e., we use the unit of energy to set the unit of time.
We consider the 1D case.
In the phase diagram shown in Fig.~\ref{1DPD}, we exhibit three typical parameter
regions corresponding to (a) MI in the vicinity of the DW ($U/J=71, \ V/J=35$),
(b) SF close to MI ($U/J=70, \ V/J=14$), and (c) SF ($U/J=27, \ V/J=10$).
For all cases, $J=0.01$ and $\mu/J=950$.
System size $N_s=200$, and the electric flux is put from $r=70$ to $r=130$.
The confinement phase corresponds to the case (a), and the Higgs phase
to the case (c).
For the case (a), we performed numerical simulations for two cases, i.e.,
the first one for the background particle density $\rho_0=10$ and
the magnitude of the electric flux $\Delta=3$, and the second one for
$\rho_0=7$ and $\Delta=1$.
The equilibrium filling of the MI at this parameter is $\rho_0=7$.
As we explained above, this parameter region corresponds to the confinement
phase, and therefore we expect that the electric flux is rather stable and remains in
its original position without breaking up into small pieces for a rather long period.
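The initial condition just described can be sketched numerically (a pure-Python sketch; for illustration it collapses the original-lattice/dual-lattice distinction to a single index, and the function names are ours):

```python
def flux_density_profile(n_sites, rho0, delta, r1, r2):
    """Density pattern hosting a uniform artificial flux E_r = delta on
    r1 <= r <= r2, via the mapping E_r = -(-1)^r (rho_r - rho0)."""
    rho = [float(rho0)] * n_sites
    for r in range(r1, r2 + 1):
        rho[r] = rho0 - (-1) ** r * delta    # rho_r = rho0 - (-1)^r E_r
    return rho

def electric_field(rho, rho0):
    """Recover E_r from a density profile."""
    return [-((-1) ** r) * (x - rho0) for r, x in enumerate(rho)]

def flux_average(E, r1, r2):
    """E_in of Eq. (Ein): mean electric field over the initial string."""
    return sum(E[r1:r2 + 1]) / (r2 - r1 + 1)
```

For the parameters of the text ($N_s=200$, $\rho_0=10$, $\Delta=3$, string from $r=70$ to $r=130$), the profile is staggered inside the string and flat outside, and $E_{\rm in}=\Delta$ at $t=0$ up to the small seed fluctuations mentioned above.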
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=7cm]{fig5}
\end{center}
\vspace{-0.5cm}
\caption{Upper panel: Time evolution of electric flux located in the center of
the 1D system in the confinement phase.
$\rho_0=10$ and $\Delta=3$.
Electric flux is quite stable.
Middle and lower panels: Small but finite fluctuations of electric flux are observed.
In the lower panel, $E_{\rm in}(t=0)\neq 3$ comes from the density fluctuation
of the initial state that we employed.
}
\label{EF1d}
\end{figure}
In Fig.~\ref{EF1d}, we show the results of the simulation for $\rho_0=10$ and $\Delta=3$.
The electric flux is stable as we expected.
A close look at the inside of the electric flux reveals that small but finite fluctuations
of the electric field take place there~\cite{ours4}.
We studied the fluctuations of the electric field in the central region,
\be
E_{\rm in}\equiv {1 \over N_i}\sum_{70\leq r \leq 130} E_r,
\label{Ein}
\ee
where $N_i$ is the length of the initial electric string,
and the result
is shown in the middle and the lower panels in Fig.~\ref{EF1d}.
The averaged electric field first decreases slightly and then stays constant with small fluctuations.
From the gauge-theoretical point of view, this stability means that the system is in the confinement phase, as we expected.
Recently, closely related experiments were done on Rydberg atom
chains~\cite{Rydberg}.
Owing to the strong NN repulsion between Rydberg states, the system is nearly at unit filling,
and the DW-type configurations exhibit anomalously slow dynamics.
In Ref.~\cite{string}, this phenomenon is interpreted as a reminiscence of
the string breaking of electric flux in the confined gauge theory~\cite{ours4,nonAbelian}.
We will discuss this gauge-theoretical interpretation in some detail in Sec.~V.
This stability of the electric field implies that the original EBHM exhibits glassy
behavior in the parameter
region corresponding to the confinement phase of the corresponding GHM.
This observation will be examined in the subsequent section.
We also performed numerical calculations for $\rho_0=7$ and $\Delta=1$.
The obtained results are quite similar to those for $\rho_0=10$ and $\Delta=3$
in Fig.~\ref{EF1d}.
The electric flux is quite stable even for $\Delta=1$, as the background particle density,
$\rho_0=7$, is equal to that of the equilibrium value.
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=6cm]{fig6}
\end{center}
\caption{
Upper panel: Time evolution of electric flux located in the center of the 1D system.
$\rho_0=10$ and $\Delta=1$.
The system exists in the SF close to the MI, which corresponds to the Higgs phase
relatively close to confinement.
Lower panel: Time evolution of electric flux located in the center of the 1D system
in the Higgs (SF) phase.
$\rho_0=20$ and $\Delta=3$.
}
\label{EF1dSF1}
\end{figure}
Let us turn to case (b).
We show the numerical results in Fig.~\ref{EF1dSF1}.
The electric flux string keeps its original configuration for a while, but then it breaks into small pieces and these pieces spread over the whole system.
This indicates the instability of the electric flux.
In the Higgs phase of the gauge theory, electric charge is {\em not conserved}, and
the electric fluxes are destroyed and also generated in various places.
Finally, we show the evolution of the electric flux for case (c) in Fig.~\ref{EF1dSF1}.
The electric flux decays quite easily, and the whole system is full of large
fluctuations of the electric field.
This means that the system is in the deep Higgs phase.
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=7.5cm]{fig7}
\end{center}
\vspace{-0.5cm}
\caption{Time evolution of electric flux in the outside region.
Cases (a), (b) and (c).
For case (a), $E_{\rm out}(t=0)\neq 0$ arises from the density fluctuations of
the initial state.
}
\label{Eout}
\end{figure}
In order to verify the above behavior of the electric flux, we measured the average of
the squared electric field outside the original location of the electric flux, i.e.,
\be
E_{\rm out}\equiv {1 \over N_o}\sum_{0<r<70, 130<r<200}E^2_r,
\label{defEout}
\ee
where $N_o$ is the number of sites in which the electric fluxes do not exist in the
initial configuration.
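As a concrete illustration, the measurement in Eq.~(\ref{defEout}) can be coded as in the following sketch; the 200-site chain and the excluded central region (sites $70\le r\le 130$) follow the summation ranges above, while the array of electric-field values is an assumed input:

```python
import numpy as np

def e_out(E):
    """Average squared electric field outside the initial flux location,
    Eq. (defEout): sites 0 < r < 70 and 130 < r < 200 on a 200-site chain."""
    outside = np.r_[1:70, 131:200]  # N_o = 138 sites
    return np.sum(E[outside] ** 2) / len(outside)
```

Field leaking out of the central region then shows up directly as a growth of this quantity.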
We show the results in Fig.~\ref{Eout}.
$E_{\rm out}$ grows larger for smaller $V/J$, as we expected.
Let us briefly comment on the SS.
In the SS phase, phase coherence is retained,
but the density fluctuations in the SS are smaller than those in the SF regime.
The density-wave configuration in the SS phase does not affect
the background charge in the gauge-theory sense, because the
{\em density fluctuation itself} corresponds to the electric field.
In this sense, the SS phase possesses Higgs-phase-like properties~\cite{ours4}.
\subsection{Phase diagram of 2D EBHM and behavior of electric flux}
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=6.5cm]{fig8}
\end{center}
\vspace{-0.5cm}
\caption{DW order (upper panel) and SF (lower panel) in the $(V/J-U/J)$ plane.
$J=0.01$ and $\mu/J=2000$.
Points (a) and (b) correspond to the confinement and Higgs phases, respectively.}
\label{2DPD}
\end{figure}
In this subsection, we study the 2D EBHM and the 2D GHM.
We first show the phase diagram of the 2D EBHM at large fillings obtained by
the GW methods.
The order parameters used are the superfluidity $\Phi$ and the DW order $\Delta n$, as in
the study of the 1D system.
The obtained numerical results and the phase diagram are shown in Fig.~\ref{2DPD},
where we also indicate the parameter regions in which the stability of the electric
flux will be examined.
As in the 1D case, the MI and DW occupy most of the phase diagram for
large $U/J$ and $V/J$,
and the SF forms in narrow regions between the MI and DW.
Most of the calculations were performed for the system size $N_s=20\times 20$.
\begin{figure}[h]
\centering
\begin{center}
\includegraphics[width=7cm]{fig9}
\end{center}
\vspace{-0.5cm}
\caption{Time evolution of electric flux placed at the center of the system.
Upper panel: Confinement region with $\rho_0=7$ and $\Delta=1$.
Electric flux is stable for a long period.
Lower panel: Higgs region with $\rho_0=30$ and $\Delta=3$.
Electric flux decays rapidly.
The on-site and nearest-neighbor repulsions play an essential role in
the stability of the electric flux.
}
\label{ectric2DelF}
\end{figure}
\begin{figure}[h]
\centering
\begin{center}
\includegraphics[width=5.7cm]{fig10}
\end{center}
\vspace{-0.5cm}
\caption{$E_{\rm out}$ for the Higgs (upper panel)
and confinement (lower panel) phases, respectively.
}
\label{Eout2}
\end{figure}
In the 2D case, the electric flux is initially placed in the central region.
In the practical calculation, the initial configuration is prepared as follows,
\be
&&\rho_{x,y}-\rho_0=-(-)^x\Delta, \;\; \mbox{for $6\leq x\leq 15$ and $y=10$},
\nonumber \\
&&\rho_{x,y}=\rho_0, \;\; \mbox{otherwise}.
\label{electricF2D}
\ee
This configuration of $\{\rho_{x,y}\}$ describes a zigzag electric flux (on the gauge
lattice) extending in the $x$-direction over 10 lattice spacings.
[For the detailed dual-lattice structure and the definition of the electric field,
see Fig.~\ref{2Dlattice} in Sec.~\ref{disorder}.]
As in the 1D case, we add very small but finite fluctuations in local boson
density.
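The initial configuration of Eq.~(\ref{electricF2D}), including the small density fluctuations just mentioned, can be generated as in the following sketch (the fluctuation amplitude \texttt{eps} and the random seed are illustrative choices of ours):

```python
import numpy as np

def initial_density(rho0=10, delta=3, L=20, eps=1e-3, seed=0):
    # Eq. (electricF2D): rho_{x,y} - rho0 = -(-1)^x * delta
    # for 6 <= x <= 15 and y = 10; rho_{x,y} = rho0 otherwise.
    rng = np.random.default_rng(seed)
    rho = np.full((L, L), float(rho0))
    x = np.arange(6, 16)
    rho[x, 10] -= (-1.0) ** x * delta
    # very small but finite fluctuations in the local boson density
    rho += eps * rng.standard_normal((L, L))
    return rho
```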
In Fig.~\ref{ectric2DelF}, we show the behavior of the electric flux (a)
in the confinement region for
$J=0.01, \mu/J=2000$ and $U/J=175,V/J=30$ (MI close to DW), and also (b)
in the Higgs region $U/J=45,V/J=5$ (SF close to MI).
For the confinement region, we put $\rho_0=7$, which is the equilibrium value
for the above parameters.
The source electric charges at $x=6$ and $15$ are $\pm \Delta=\pm 1$, respectively.
Even for this smallest unit charge, the electric flux is stable up to
$t=300(= 3\times \hbar/J)$,
and it decays only gradually after that.
Here, the hopping time in our model is $\sim 2\times \hbar/J$;
the slight decay starts to occur a little beyond the hopping time.
For larger source charges such as $\Delta=3$, the electric flux is quite stable.
On the other hand, for the Higgs region, we put $\rho_0=30$, which is again
the equilibrium value for the above parameters.
We show the calculations for $\Delta=3$.
Even for this relatively large value of $\Delta$, the electric flux
breaks up after a very short period, spreads over the whole
region, and fluctuates strongly.
In Fig.~\ref{Eout2}, we also show the squared electric flux outside the initial flux
region, i.e., for $\vec{r}\notin \{6\leq x\leq 15,y=10\}$, denoted $E_{\rm out}$, which is defined
similarly to the 1D case in Eq.~(\ref{defEout}).
The results in Fig.~\ref{Eout2} obviously support the results in Fig.~\ref{ectric2DelF}.
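A minimal sketch of this 2D measurement, assuming the squared field values are stored site-wise in a $20\times 20$ array:

```python
import numpy as np

def e_out_2d(E2, L=20):
    """Average squared electric field outside the initial flux region
    (6 <= x <= 15, y = 10); E2[x, y] holds the squared field at (x, y)."""
    outside = np.ones((L, L), dtype=bool)
    outside[6:16, 10] = False  # exclude the initial flux row segment
    return E2[outside].mean()
```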
\begin{figure}[h]
\centering
\begin{center}
\includegraphics[width=6cm]{fig11}
\end{center}
\vspace{-1cm}
\caption{Time evolution of electric flux placed at the center of the system.
$J=0.05, \rho_0=7$ and $\Delta=2$.
Unit of time is $\hbar/(20J)$.
Electric flux is stable for a long period.
}
\label{ectric2DelF2}
\end{figure}
\begin{figure}[h]
\centering
\begin{center}
\includegraphics[width=9cm]{fig12}
\end{center}
\caption{Upper panels: Time evolution of the DW-type density modulation
corresponding to an electric flux placed in the confinement phase
(MI close to DW).
$J=0.01, \rho_0=7$ and $\Delta=1$.
The electric flux shortens but is stable for a long period.
Lower panels: Time evolution of an electric flux placed in the Higgs phase (SF close to MI).
The electric flux `melts' and its fluctuations start immediately.
}
\label{ectric2Dwhole}
\end{figure}
In order to verify the universality of the above results, we investigated various
parameter regions of the EBHM, since the stability of the electric flux is a
particularly important phenomenon.
In Fig.~\ref{ectric2DelF2}, we show the calculations of the case of relatively large
hopping $J=0.05$ and $U/J=175, V/J=30, \mu/J=2000$, which corresponds to
point (a) in Fig.~\ref{2DPD}.
The parameter point is in the confinement phase.
The electric flux is again quite stable even for this larger value of $J$.
In Fig.~\ref{ectric2Dwhole}, 2D profiles of the electric flux in the whole
$20\times 20$ region are shown.
In the confinement phase for $J=0.01$ (Fig.~\ref{ectric2DelF}),
the electric flux starts to get shorter at $t\simeq 300(= 3\times \hbar/J)$.
In the Higgs phase in Fig.~\ref{ectric2DelF}, the electric flux decays quite rapidly:
the initial electric flux `melts', and fluctuations of the electric field (i.e., the particle density)
start to develop immediately.
The time evolution of the electric field outside the original location,
$E_{\rm out}$, shown in Fig.~\ref{Eout2}, again supports
the above behavior.
\section{Glassy dynamics and effect of quenched disorder}\label{disorder}
In the previous section, we observed that the electric-flux string is quite stable
in the confinement phase.
Then, it is interesting to study how higher-energy states of the DW type evolve
in that parameter region.
Examining the effects of a random chemical potential on this phenomenon
is also an important problem.
In real experiments in an optical lattice, a similar random chemical potential can
be implemented by using a laser speckle \cite{Schulte,Clement}.
A phenomenon closely related to the above was recently investigated in experiments
on ultracold atomic gases, where it was observed that the lifetime of
higher-energy states is lengthened by the quenched disorder
induced by the random chemical potential~\cite{exper1}.
Revealing the origin of this glassy phenomenon and its relation to
MBL is an interesting problem.
For $(1+1)$D quantum electrodynamics (QED), in which the electron is
always confined, an extremely slow evolution of the entropy was observed
for configurations with background charges~\cite{2DQED}.
In this section, we focus on the 2D model and study whether the confinement phase
exhibits glassy dynamics;
if so, we clarify its origin from the gauge-field point of view.
Parameters of the numerical studies in this section are $J=0.05, U/J=175, V/J=30$
(confinement region) and the average particle density $\rho_0=7$.
We employ a random chemical potential distributed uniformly as
$\mu_i\in [\mu-{W\over 2}, \mu+{W\over 2}]$ with $\mu=100$, and
study the model for several specific values of $W$.
The unit of time is $\hbar/(20J)$ in this section, as we put $J=0.05$ in the practical
calculation.
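A single quenched realization of this disorder can be drawn as in the following sketch (the uniform law on $[\mu-W/2,\,\mu+W/2]$ is as stated above; the seed is arbitrary):

```python
import numpy as np

def random_mu(mu=100.0, W=20.0, L=20, seed=0):
    # One quenched disorder realization: mu_i uniform in [mu - W/2, mu + W/2].
    rng = np.random.default_rng(seed)
    return rng.uniform(mu - W / 2, mu + W / 2, size=(L, L))
```

Each disorder strength $W$ in the figures below corresponds to one such realization.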
\begin{figure}[t]
\centering
\includegraphics[width=7cm]{fig13}
\caption{Left panel: The dotted lines indicate the original square optical lattice,
and the solid lines the gauge lattice.
Upward arrows indicate electric flux $\mathbf{E}$.
Right panel: Initial density configuration is shown.
Particle numbers are $9$ (blue), $7$ (green) and $5$ (red), respectively.
The left half is filled with a finite synthetic electric field, as shown in the left panel.
}
\label{2Dlattice}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=7.5cm]{fig14}
\caption{(a) and (b):
Time evolution of the electric field $\mathbf{E}$ in the confinement phase without disorder.
The direction of the arrows indicates the direction of the electric field, and
their length and color show its magnitude.
The electric field is quite stable and keeps its original configuration.
(c) and (d): Confinement phase with disorder $W=20$.
The electric field spreads out of the original region.
Data are shown for a single initial configuration.
}
\label{ectric2Dhalf}
\end{figure}
First, we study the time evolution of an initial state in which
half of the system is in a DW-type configuration
and the other half is a homogeneous state with $\rho_i\simeq \rho_0=7$.
More precisely, the DW-type region of the initial configuration is the state filled with
electric field pointing in the $y$-direction.
The detailed lattice structure of the gauge system, the electric field defined on the links of
the gauge lattice, and the initial configuration are shown in Fig.~\ref{2Dlattice}.
The time evolution of this kind of configuration is a good measure of the stability
of a bunch of electric flux tubes.
In Fig.~\ref{ectric2Dhalf}, we show the time evolution of the 2D system
with $W=0$ and $W=20$.
Data are obtained for a single realization of the initial configuration;
other samples give almost the same results.
In the case of $W=0$, the electric field is quite stable,
as expected from the stability of the electric flux in the confinement phase.
On the other hand, in the case of $W=20$, the field tends to spread out into the empty space.
We examined the cases with $W=10$ and $W=30$ and obtained similar results.
That is, the EBHM and GHM exhibit glassy dynamics in the case without disorder,
and disorder hinders this glassy nature.
This is, in a sense, an `unexpected' result, since disorder often enhances
localization even if there are interactions between particles.
However, as we explain later, the gauge-theoretical viewpoint gives a clear interpretation
of this phenomenon.
Before going into discussion on this point, let us consider another example of
glassy dynamics of the present system.
\begin{figure}[h]
\centering
\begin{center}
\includegraphics[width=6cm]{fig15}
\end{center}
\vspace{-0.5cm}
\caption{Upper panel: Time evolution of the local DW order $\{\delta_i\}$ at $y=10$
in the confinement phase with $W=0$.
The DW order is stable for a long period.
Second panel: The density difference keeps $\Delta n\simeq 4$.
Third panel: Time evolution of $\{\delta_i\}$ at $y=10$ for $W=20$.
$\{\delta_i\}$ starts to fluctuate after certain period.
Bottom panel: $\Delta n$ as a function of time for various $W$s.
The result indicates the existence of a critical $W$ at which the DW
fades away maximally.
The dotted arrow indicates an increase of $W$, as a guide to the eye.
Data are shown for a single initial configuration.
}
\label{randomwhole}
\end{figure}
Next, we consider the evolution of the initial configurations of the genuine
DW type such as $\rho_i=\rho_0+(-)^i\delta$ in the whole system
[$(-)^i=1 (-1)$ for even sites (odd sites)].
From the gauge-theoretical point of view, this configuration is nothing but
the state filled with a bunch of electric flux tubes pointing in opposite
directions alternately.
Given the above observation indicating the stability of the uniform electric field
in the confinement phase, how the genuine DW state evolves is an interesting
problem; if this configuration is stable, we can conclude that the confinement
phase exhibits genuine glassy dynamics.
For $\delta=2$, calculations of the local DW order parameter
$\delta_i\equiv (-)^i(\langle\rho_i\rangle-\rho_0)$ and the difference between
the average particle numbers at even-odd sites, $\Delta n$ in Eq.~(\ref{OPs}),
are shown in Fig.~\ref{randomwhole} for various $W$s.
Data are shown for a single initial configuration;
other samples give almost the same results.
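These diagnostics can be sketched for a 1D cut of the density profile as follows (taking the absolute even-odd average difference for $\Delta n$ is our convention here):

```python
import numpy as np

def dw_diagnostics(rho, rho0):
    """Local DW order delta_i = (-1)^i (<rho_i> - rho0) and the even-odd
    average density difference Delta_n."""
    i = np.arange(len(rho))
    delta = (-1.0) ** i * (rho - rho0)
    dn = abs(rho[0::2].mean() - rho[1::2].mean())
    return delta, dn
```

For the genuine DW state $\rho_i=\rho_0+(-)^i\delta$ with $\delta=2$, this gives $\delta_i=2$ on every site and $\Delta n=4$, the initial values quoted below.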
The calculations show an interesting phenomenon.
For $W=0$, i.e., the case without disorder, the density difference $\Delta n$
keeps its original value $\Delta n\simeq 4$ for a long period.
On the other hand,
snapshots of $\{\delta_i\}$ in the central region of the system shown in
Fig.~\ref{randomwhole} indicate that the system with $W=20$ evolves towards
a state with $(\bar{\rho}_e-\bar{\rho}_o)\simeq 0$ and local density fluctuations,
whereas the system with $W=0$ is quite stable.
The calculation of $\Delta n$ in Fig.~\ref{randomwhole} shows that
as $W$ increases from $0$ to $10$, the stability of the initial state decreases.
However, a further increase of $W$ stabilizes the DW state again.
Although an analysis at larger times is needed to conclude that
$\Delta n\neq 0$ for $t\to \infty$,
this behavior of the disordered system is reminiscent of MBL dynamics.
Furthermore, it is expected that there exists a critical value of
$W$, $W_c\sim 10$, at which the DW fades away maximally~\cite{Luitz,Pal}.
This interesting observation will be discussed in Sec.~\ref{discussion}
from the gauge-theoretical point of view.
Here, we note that we recently observed a somewhat similar phenomenon
in a quasi-1D system, the Creutz ladder model, by exact diagonalization~\cite{ours6}.
Without the NN repulsion and disorder, the system exhibits flat-band localization.
In the presence of the NN interaction, localized states survive.
With increasing disorder in the chemical potentials, the localization properties of the system
weaken in the intermediate-disorder regime.
A further increase of disorder makes the system localized again, i.e.,
the DW-type modulation fades away maximally at intermediate disorder strength.
We argued there that a crossover from flat-band Anderson localization to MBL
takes place as the disorder increases.
\section{Discussion and conclusion}\label{discussion}
In this paper, we studied the 1D and 2D EBHM by the GW methods
from the viewpoint of quantum simulation of the gauge theory.
We first clarified the phase diagrams at relatively large fillings
$\rho_0=(7\sim 30)$.
The phase diagrams themselves exhibit a rather interesting structure
composed of the SF, MI, DW and SS phases.
We identified the parameter regions in the phase diagrams corresponding to
the confinement and Higgs phases.
Then, we studied the time evolution of configurations with an electric flux tube,
and verified that the electric flux tube is stable in the confinement phase but
breaks immediately in the Higgs phase.
The stability in the confinement phase increases as the magnitude of charge
at the edges of the electric flux, $\Delta$, increases.
After the above observations, we studied the effect of disorder caused by
the random chemical potential
in the 2D system, which generates density inhomogeneity in the system.
We first verified the stability of the electric field in the confinement phase by
studying time evolution of electric field filling half of the system.
For the case without disorder, the electric field remains stable for long periods.
Then, we introduced disorder and found that disorder induced by the random chemical
potential renders the electric field unstable.
This is, in a sense, an `unexpected' result, but it is plausible from the gauge-theoretical
point of view.
In the gauge theory, it is established that quark confinement takes place
when a straight electric flux tube with almost no fluctuations forms between
a quark-antiquark pair.
This confinement picture explains the stability of the electric field filling
half of the system in the case without disorder.
On the other hand,
the random chemical potential induces spatial electric field fluctuations as
it generates inhomogeneous background charges.
The above picture is obviously based on the Gauss law, $\nabla\cdot \vec{E}=Q_e$,
where $Q_e$ is the charge density.
In the quantum simulations of the GHM, the NN repulsion, as well as the exact
gauge symmetry, plays an essential role in satisfying the Gauss-law constraint.
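On the lattice, this constraint reduces to a discrete divergence condition, $E_r - E_{r-1} = Q_r$ in 1D; a minimal consistency check (the open boundary $E_0=0$ is our convention):

```python
import numpy as np

def gauss_residual(E, Q):
    # Discrete Gauss law: (div E)_r = E_r - E_{r-1} must equal Q_r.
    return np.diff(E, prepend=0.0) - Q
```

A field configuration obeying the Gauss law, e.g.\ $E_r=\sum_{r'\le r}Q_{r'}$, makes this residual vanish identically.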
Recently, an observation related to the above was given in Ref.~\cite{string},
in which the experiments on the Rydberg atom chain in Ref.~\cite{Rydberg} were
interpreted within the gauge-theory framework.
There, the strong NN repulsion of the Rydberg state hinders its occupation
of NN sites, and this constraint on the Hilbert space can be regarded as the
Gauss law.
The emergent gauge invariance explains the very slow dynamics observed in
Ref.~\cite{Rydberg}.
In other words, the glassy dynamics in the confinement phase observed in the present
work results from strong interactions between atoms in the confinement
regime.
It was shown in Ref.~\cite{carleo} that MBL and glassy dynamics can appear
due to frustrating dynamical constraints imposed by interactions.
As the second case with disorder, we investigated the stability of the genuine
high-energy DW configuration in the whole system.
Interestingly enough, we found that the DW configuration
is stable for a long period in the case {\em without disorder}, whereas
the inhomogeneous particle density caused by {\em moderate} $W$s reduces
the robustness of the DW-type configuration.
This result means that the disorder-induced spatial density modulations
hinder the glassy dynamics.
Obviously, this behavior is reminiscent of the quark confinement mechanism
explained above.
We also observed that a further increase of disorder makes
the DW-type configurations tend to survive dynamically again.
Recently, a related experiment was performed on ultracold
atoms and a similar result was obtained, i.e., the random chemical potential
enhances the lifetime of high-energy configurations~\cite{exper1,numerical}.
This may be an expected result, i.e., disorder enhances the localization.
For stronger disorder with $W>W_c$ in the present system, the background charge is
modulated strongly, and as a result, the gauge-theoretical picture no longer
works.
We think that the EBHM in the present parameter regime exhibits interesting multiple
`phase transitions' or `crossovers' from the interaction-induced glassy dynamics to
ordinary MBL as the strength of disorder increases, with an ergodic phase
existing in between.
Given the above observations, it is interesting to study atomic gas systems
with NN repulsions while varying the strength of disorder.
We expect that experiments on such systems, similar to those in Ref.~\cite{exper1},
would shed light on our picture of the glassy dynamics of the confined gauge theory
obtained in this work.
It is also important to examine the effects of disorder in the experiments
on the Rydberg atom chain~\cite{Rydberg} by introducing disorder in the detuning
and/or the Rabi frequency.
We expect that similar `phase transitions' would be observed there.
Also, a recent numerical study~\cite{Sierant} suggested that glassy dynamics
(a slowdown of the relaxation) is related to MBL.
In our work, as shown in the bottom panel of Fig.~\ref{randomwhole}, the tendency toward
glassy dynamics becomes stronger for weaker disorder in the confinement phase.
That is, we expect that the confinement phase also has MBL properties.
For the confinement phase, such a conjecture has been verified for other lattice gauge
models \cite{2DQED,Smith1,Smith2}.
As a related subject, we would like to comment on works in which the localization of
magnetic flux lines was studied for type-II superconductors~\cite{nandkishore,pretko}.
Although type-II superconductors correspond to the Higgs phase of the gauge theory,
there exists a duality between confinement and superconductivity.
Confinement in the gauge theory takes place as a result of the condensation of
magnetic charges such as magnetic monopoles.
The squeezing of electric flux in the confinement phase is sometimes called the dual
Meissner effect~\cite{Man}.
Therefore, the localization of magnetic flux lines in superconductors suggests
a similar localization of the electric flux string in the confinement phase, which
is observed in this work.
Here, let us comment on the reliability of the time-dependent GW methods.
We first summarize the technical aspects.
We study time evolutions up to $t\sim 1000$;
in terms of the unit of time, this corresponds to $t\sim 10\hbar/J \
(\mbox{or } 50\hbar/J)$.
Furthermore, as we used the fourth-order Runge-Kutta formula, the generated
local errors are $O(dt^5)$ with time slice $dt=10^{-5}$.
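For reference, a single step of this scheme reads as follows (this is the textbook fourth-order Runge-Kutta formula, not our specific Gutzwiller implementation):

```python
def rk4_step(f, t, y, dt):
    # Classical fourth-order Runge-Kutta step; local truncation error O(dt^5).
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

In our simulations, $f$ stands for the right-hand side of the Gutzwiller equations of motion for the variational amplitudes.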
In contrast to the time-dependent DMRG and TEBD, no truncation of quantum states
is performed during the evolution.
Unfortunately, no reliable numerical methods exist for verifying the correctness of
our calculations, in particular for 2D systems, or even for 1D systems at high fillings.
However, experiments on ultra-cold atoms, i.e., quantum simulations,
provide useful information on the reliability of the time-dependent GW methods.
As far as we know, there are at least two experiments that are useful for
the present examination.
The first is the experiment on ultracold Bose gases in a 2D disordered optical
lattice~\cite{exper1}, which we have already mentioned in the main text.
There, the time evolution of a Bose gas filling half of the optical lattice was observed
in the presence of on-site disorder, in order to study glassy dynamics and MBL.
Corresponding to this experiment, numerical simulations by the
time-dependent GW methods were performed for rather long
evolution times~\cite{numerical}.
The obtained results are in good agreement with the experiments.
The second is a study of the quench dynamics of ultracold atoms in a 2D optical lattice.
In order to examine Kibble-Zurek scaling for a quantum phase transition,
the quench dynamics from the Mott to the SF phase was observed in experiments~\cite{exper2}.
In particular, scaling exponents were estimated from the experimental data.
Stimulated by this experiment, we simulated the above quench dynamics
by the time-dependent GW methods~\cite{ours5},
and found that the experiments and our numerical simulations are in good
agreement.
We also showed that the measurements in the experiment were performed
at rather late times, far from the phase-transition time, and that this is the source of
the discrepancy between the Kibble-Zurek scaling exponents and the observed ones.
Even though the above two are both studies of low-filling systems, they give positive
evidence for the reliability and applicability of the time-dependent GW methods to
long-time evolution.
As we studied fairly large-filling systems in this work, we think that there are
sufficient grounds to believe that the results obtained by the time-dependent
GW methods are correct.
Since the present work employed the GW variational methods, we could not calculate
the entanglement entropy, which is often used for the identification of MBL.
In a previous paper~\cite{ours7}, we discussed the reliability of the GW methods
by comparing them with quantum Monte Carlo simulations.
As emphasized there, quantum correlations are taken into account by
the GW methods to some extent.
Unfortunately, a precise estimation of these correlations is not yet available, and
investigations by means of concrete models are obviously welcome.
One example is a bosonic version of the Creutz ladder model,
which can be viewed as a gauge-Higgs model on the ladder.
Relatively large-filling cases can be studied by both the truncated Wigner method and
GW methods.
We expect that glassy dynamics takes place there as in the original
fermionic Creutz ladder model~\cite{ours6}, and that the glassy dynamics
of the confined gauge theory is closely related to MBL.
We are now planning such a study, and we hope to report the results
in the near future.
\section{Introduction}\label{Introduction}
In recent years, the analysis of ordinal response data has become a popular topic in mainstream research. Such data arise naturally in many areas of scientific study, for instance in psychology, sociology, economics, medicine, political science and several other disciplines, where the final response of a subject belongs to a finite number of ordered categories based on the values of several explanatory variables, in a way described later in this article. One such example is the qualitative customer review of a particular vehicle, where its price, mileage, carbon emission properties, etc., are taken into account to arrive at a qualitative response on an ordinal scale. While these ratings summarize many important explanatory variables and are primarily useful to a new customer, such customer feedback often turns out to be equally important to the manufacturer, as the latter might want to learn about the statistical relationship between the ordinal response and its covariates, either to improve their product or for post-manufacturing surveys.
A pioneering work in this field is due to McCullagh \citep{mccullagh1980regression}, who advocates the use of an underlying continuous latent variable that drives the ordinal responses based on some unknown cut-offs. This method has become popular as it enables us to view the ordinal response model within the unified framework of the generalized linear model (GLM); see, e.g., Nelder et al. \citep{nelder1972generalized} and McCullagh and Nelder \citep{mccullagh1989binary}. Moustaki \citep{moustaki2000latent} uses the maximum likelihood (ML) method to fit a multi-dimensional latent variable model to a set of observed ordinal variables and also discusses their goodness-of-fit. For more related discussion see Moustaki et al. \citep{moustaki2003general}. Piccolo \citep{piccolo2003moments} and Iannario et al. \citep{iannario2016generalized} suggest a different approach, where the ordered response variable is represented as a combination of a discrete uniform and a shifted binomial (CUB) random variable. The CUB model, however, requires a latent variable that directly invokes the effect of covariates.
Although the area of robust statistics has a very rich and well-developed body of literature, applications in the direction of ordinal response data are rather limited. An early reference is Hampel \citep{hampel1968contributions}, where, in addition to the development of the classical infinitesimal approach to robustness, some pointers about robustness in the setting of binomial model fitting are discussed. Robust estimators have been developed by Victoria-Feser and Ronchetti \citep{victoria1997robust} for grouped data. Ruckstuhl and Welsh \citep{ruckstuhl2001robust} discuss different classes of estimators in the context of fitting a robust binomial model to a data set. Moustaki and Victoria-Feser \citep{moustaki2004bounded, moustaki2006bounded} develop bounded-bias and bounded-influence robust estimators for the generalized linear latent (GLL) variable model. The lack of robustness of the maximum likelihood estimators in the logistic regression model has been extensively studied in the literature (see Croux et al. \citep{croux2002breakdown} and M$\ddot{\rm u}$ller and Neykov \citep{muller2003breakdown}). Croux et al. \citep{croux2013robust} studied a weighted maximum likelihood (WML) estimation method under the logit link function, through different choices of the weight function. Iannario et al. \citep{iannario2016robustness} deal with robustness for the class of CUB models. More recently, Iannario et al. \citep{iannario2017robust} have proposed a general M-estimation procedure with the objective function chosen as a weighted likelihood function under different choices of link functions. Unlike the approach of Croux et al. \cite{croux2013robust}, where the weights are functions of robust Mahalanobis-type distances, Iannario et al. \cite{iannario2017robust} have considered Huber's weights for different link functions. A good weight function essentially controls the influential observations with respect to some reference model.
From a Bayesian perspective, some approaches for detecting outliers in categorical and ordinal data have been advanced by Albert and Chib \citep{albert1993bayesian, albert1995bayesian}, who compute Bayesian residuals for both binary and polychotomous response data. This approach has also been extended to sequential ordinal modelling by Albert and Chib \citep{albert2001sequential}.
In this paper, we use the density power divergence (DPD), as originally proposed by Basu et al. \citep{basu1998robust}, to obtain the minimum density power divergence estimator for ordinal response data under the same set-up as Iannario et al. \citep{iannario2017robust}. However, the independent but non-homogeneous version of the DPD (see Ghosh and Basu \citep{ghosh2013robust}) is best suited for this application.
The contribution of this paper is to adapt the minimum density power divergence technique to the case of ordinal data with latent variables, and to demonstrate that many members of this class provide highly stable estimators of the regression parameters and the cut-offs under contamination, with little loss in asymptotic efficiency under pure data compared to the classical estimators.
This paper is organised as follows: in Section \ref{Parametric model}, we state the problem under study and briefly review the formulation of the maximum likelihood estimation. The general M-estimator and the minimum density power divergence estimator (MDPDE) are introduced for the present set-up in Subsections \ref{M-estimation} and \ref{mdpde:present set-up}, respectively. Some asymptotic properties of the MDPDE for the present set-up are discussed in Section \ref{properties}; in particular, we prove their weak consistency and derive their asymptotic distribution in Subsection \ref{asymptotic properties and ARE}, and provide an appropriate discussion of their robustness properties in Subsections \ref{IF analysis} and \ref{Breakdown point}. In Section \ref{simulation study}, some simulation studies are performed to compare the MDPDE with the MLE. In Section \ref{alpha selection}, we briefly discuss the strategy for tuning parameter selection based on a given data set. Our proposed method is applied to a real-life data set and compared with some other existing methodologies in Section \ref{real data analysis}. Concluding remarks are made in Section \ref{conclusions}. The proofs of all the results and some additional figures are provided in Appendix \ref{appendix}.
\section{The parametric model and estimation of parameters}\label{Parametric model}
Let $\{( x_{i}, Y_{i}); i=1,2, \ldots , n\}$ be a data set of size $n$, where the covariates $x_{i}$'s are assumed to be non-stochastic in $\mathds{R}^{p}$. The $i$-th measurement $Y_{i}$ is a realisation of the random variable $Y$ conditioned on $x_{i}$. The variable $Y$ takes values in the finite set $\chi=\{1,2, \ldots ,m\}$. Following McCullagh \citep{mccullagh1980regression}, we presume that there exists an unobserved latent random variable $Y^{*}$ related to $Y$ in the following way
\begin{equation} \label{latent relationship}
Y=j \iff \gamma_{j-1} < Y^{*} \le \gamma_{j} \mbox{ for } j \in \chi,
\end{equation}
where $-\infty=\gamma_{0} <\gamma_{1}< \gamma_{2} < \cdots < \gamma_{m-1} < \gamma_{m}=+\infty$ are the thresholds (cut-off points) in the continuous support of the latent variable. The $i$-th copy of $Y^{*}$ depends linearly on the $p\,(\ge 1)$ covariate(s) through $x_{i}$ as follows
\begin{equation}\label{linear latent regressiion eq}
Y^{*}_{i}= x_{i1}\beta_1+x_{i2}\beta_2+\cdots+x_{ip}\beta_{p}+\epsilon_i=x_{i}^{T}\beta+\epsilon_{i} \mbox{ for all } i.
\end{equation}
Here $\beta=(\beta_{1},\beta_{2},\ldots,\beta_{p})^{T}$ is the vector of regression coefficients in the latent linear (LL) regression model, and the error terms $\epsilon_{i}$, $i = 1, 2, \ldots, n$, are i.i.d. with a known probability distribution function $F$ (also called the link function in this context). $F$ is assumed to possess a probability density function $f$.
Under the above parametric set-up, we wish to estimate the regression coefficients $\beta=(\beta_{1}, \ldots, \beta_{p})$ and the cut-off points $\gamma =(\gamma_{1},\gamma_{2},\ldots,\gamma_{m-1})$ simultaneously, in such a way that the estimators are reasonably efficient at the true model and remain stable against outliers away from it. In this set-up, the model probabilities are given by
\begin{equation}\label{model probability}
p_{\theta,i}(j)=Pr(Y=j|x_{i})=F(\gamma_{j} -x_{i}^{T}\beta)-F(\gamma_{j-1} -x_{i}^{T}\beta),
\mbox{ where } \theta=(\gamma,\beta)
\end{equation}
for $i=1,\ldots, n$, and $j=1, \ldots, m$. Note that
$p_{\theta, i}(1)=F(\gamma_{1}-x_{i}^{T}\beta)$ and $p_{\theta, i}(m)=1-F(\gamma_{m-1}-x_{i}^{T}\beta)$. For notational convenience, we have avoided using a bold-faced $\theta, \gamma, \beta$. The log-likelihood function for the model as in Equation (\ref{model probability}) is therefore given by
\begin{equation}\label{log likelihood}
\sum_{i=1}^{n}\ell(\theta;x_{i}, Y_{i}),
\end{equation}
where $\ell(\theta;x_{i}, Y_{i})=\sum_{j=1}^{m} Z_{i}(j)\ln p_{\theta,i}(j)$ is the log-likelihood function at the $i$-th data point $(x_{i}, Y_{i})$, and $Z_{i}(j)=\mathds{1} (Y =j| x_{i})$. Here $\ln(\cdot)$ represents the natural logarithm and $\mathds {1}(\cdot|x_{i})$ is the conditional indicator function for $\{Y=j\}$ given $x_{i}$. The maximum likelihood estimator (MLE) of $\theta$ is a maximizer of the log-likelihood function in Equation (\ref{log likelihood}). Under certain regularity conditions, the MLE is asymptotically the most efficient estimator, achieving the Rao-Cram\'er lower bound for the variance when the data come from the pure model; unfortunately, the MLE may become completely unreliable in the presence of even a small proportion of outliers in the data. Croux et al. \citep{croux2013robust} robustify the likelihood function in Equation (\ref{log likelihood}). They use the following objective function
\begin{equation}\label{weighted likelihood fun}
\sum_{i=1}^{n} w_{i} \phi(\theta; x_{i}, Y_{i}),
\end{equation}
where
\begin{align} \label{phi in weighted likelihood fun}
\phi(\theta; x_{i}, Y_{i})
&= \sum_{j=1}^{m} Z_{i}(j)\log \Bigg\{ F\Big(\gamma_{1}+\sum_{k=2}^{j}\gamma^{2}_{k}-\beta^{T}x_{i}\Big)- F\Big(\gamma_{1}+\sum_{k=2}^{j-1}\gamma^{2}_{k}-\beta^{T}x_{i}\Big)\Bigg\}, \\
\label{weight:croux2013robust}
w_{i}
&=W(d_{i}), \mbox{ with } d_{i}=(x_{i}-\mu)^{T}S^{-1}(x_{i}-\mu).
\end{align}
We note that $\phi(\theta; x_{i}, Y_{i})$ is a reparameterized version of the usual log-likelihood function and is equivalent to $\ell(\theta;x_{i}, Y_{i})$ for all $i$. The quantities $(\mu,S)$ in Equation (\ref{weight:croux2013robust}) denote some robust estimates of the location and scatter of $X_{n \times p}:=(x_{1}, \ldots, x_{n})^{T}$. The weighted maximum likelihood estimator $(\hat{\theta}_{WMLE})$ maximizing (\ref{weighted likelihood fun}) has a bounded influence function for the logit link. A popular choice for (\ref{weight:croux2013robust}) is
\begin{align}\label{weight:trimmed mle}
w_{i}=\begin{cases}
1 & \mbox{ if } d_{i} \le \chi^{2}_{p-1}(q) \\
0 & \mbox{ otherwise},
\end{cases}
\end{align}
where $\chi^{2}_{p-1}(q)$ is the upper $100(1-q)\%$ point of the central $\chi_{p-1}^{2}$ distribution. The weights in Equation (\ref{weight:trimmed mle}) produce the well-known $100q\%$-trimmed MLE of $\theta$. Next, we briefly discuss M-estimation in this context.
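As a concrete illustration, the model probabilities of Equation (\ref{model probability}) and hard-rejection weights in the spirit of Equation (\ref{weight:trimmed mle}) can be sketched as below. This is our own minimal Python sketch, not code from the cited works: it assumes a logistic link, the function names are ours, and for brevity the classical sample mean and covariance are used in place of robust estimates $(\mu, S)$; the chi-square degrees of freedom are taken as the number of columns of the design matrix.

```python
import numpy as np
from scipy.stats import logistic, chi2

def model_probs(gamma, beta, X):
    """Cell probabilities p_{theta,i}(j) = F(gamma_j - x_i'beta) - F(gamma_{j-1} - x_i'beta)
    with the logistic link F; gamma holds the m-1 finite cut-offs."""
    cuts = np.concatenate(([-np.inf], gamma, [np.inf]))   # gamma_0, ..., gamma_m
    eta = X @ beta                                        # linear predictors x_i' beta
    F = logistic.cdf(cuts[None, :] - eta[:, None])        # n x (m+1) CDF values
    return np.diff(F, axis=1)                             # n x m cell probabilities

def trimmed_weights(X, q=0.95):
    """Hard-rejection weights: w_i = 1 iff the squared Mahalanobis distance of x_i
    (classical, non-robust location/scatter used here for brevity) is below the cutoff."""
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    d = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S), X - mu)
    return (d <= chi2.ppf(q, df=X.shape[1])).astype(float)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
P = model_probs(np.array([-1.0, 1.0]), np.array([0.5, -0.5]), X)
print(P.shape, np.allclose(P.sum(axis=1), 1.0))   # rows of P sum to one
```

Each row of `P` is a probability vector over the $m$ categories, and the weights are exactly $0$ or $1$, reproducing the trimming behaviour described above.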
\subsection{ \textbf{General M-estimation}}\label{M-estimation}
Suppose $\big\{Y|x_{i}; i=1,2, \ldots,n\big\}$ is a collection of independent random variables with values in a measurable space $(\mathscr{Y}, \mathscr{A})$. The conditional distribution of $Y|x_{i}$ is denoted by $G_{i}$, $i=1,2, \ldots,n$. Moreover, we define $\overline{G}\equiv\frac{1}{n}\sum_{i=1}^{n}G_{i}$ as the average probability measure, and let $\hat{G} \equiv \frac{1}{n}\sum_{i=1}^{n} \Delta_{Y_{i}}$ be its empirical counterpart. Here $\Delta_{Y_{i}}$ denotes the Dirac probability measure concentrated on the singleton set $\{Y_{i}\}, i=1,2, \ldots,n$. Now, consider the parameter space $\Theta$ as a subset of some metric space. Then for a general loss function $\rho_{\theta}: \Theta \times \mathscr{Y} \to \mathds{R}_{\ge 0}$, the M-functional $\theta_M$ is defined as
\begin{equation}\label{general M functional}
\theta_{M}=\arg\min_{\theta \in \Theta}\int \rho_{\theta}d\overline{G}.
\end{equation}
The corresponding M-estimator $\hat{\theta}_{M}$ is similarly given by
\begin{equation}\label{general M estimator}
\hat{\theta}_{M}=\arg\min_{\theta \in \Theta}\int\rho_{\theta}d\hat{G}.
\end{equation}
If $\rho_{\theta}$ admits partial derivatives with respect to each component of $\theta$, then $\theta_{M}$ and $\hat{\theta}_{M}$ satisfy the following sets of estimating equations
\begin{equation} \label{general M estimating eq}
\int \psi_{\theta_{M}} d\overline{G}=0 \mbox{ and }
\int \psi_{\hat{\theta}_{M}} d\hat{G}=0,
\end{equation}
where $\psi_{\theta}=\nabla \rho_{\theta}=(\psi_{1}, \cdots , \psi_{m+p-1})^{T}$ with $\psi_{t}=\frac{\partial}{\partial \theta_{t}} \rho_{\theta}$ for $t=1, \ldots, (m+p-1)$; `$\nabla$' conventionally denotes the vector of partial derivatives with respect to the components of $\theta$. Maronna et al. \citep{maronna2019robust} discuss different sets of regularity conditions under which the M-estimator is consistent, i.e., $\hat{\theta}_{M} \overset{\mathds{P}}{\longrightarrow}\theta_{M}$ as $n \uparrow +\infty$.
Define
\begin{align}
J_{M} =\mathds{E}_{\overline{G}}\big(\nabla \psi_{\theta_{M}}\big)
\mbox{ and }
K_{M} =\mathds{E}_{\overline{G}}\big(\psi_{\theta_{M}}\psi_{\theta_{M}}^{T}\big).
\end{align}
It is well known, see, e.g., Maronna et al. \citep{maronna2019robust}, that under appropriate regularity conditions on the objective function, the design vectors and the link function $F$, the properly scaled difference $(\hat{\theta}_{M}-\theta_{M})$ is asymptotically normal:
\begin{align}
\sqrt{n} K^{-\frac{1}{2}}_{M} J_{M}(\hat{\theta}_{M}-\theta_{M}) \overset{L}{\longrightarrow} \mathcal{N}(0, I_{m+p-1}), \mbox{ as } n \uparrow +\infty.
\end{align}
The influence function (IF) of the M-functional $\theta_{M}$ at any point $(x,Y)$, for the true $\overline{G}$, is given by $IF(x,Y,\theta_{M},\overline{G})=J^{-1}_{M}\psi_{\theta_{M}}(x,Y)$. This measures the infinitesimal effect of the data point $(x, Y)$ on $\theta_{M}$. When the IF is bounded, the corresponding M-functional leads to an outlier-stable M-estimator; since $J_{M}$ is a fixed matrix, the IF is bounded if and only if the function $\psi_{\theta_{M}}$ is bounded in $(x,Y)$. We often choose
\begin{equation*}
\psi_{\theta}(x_{i}, Y_{i})=w(\theta; x_{i}, Y_{i})u(\theta;x_{i}, Y_{i}),
\end{equation*}
where $u(\theta;x_{i}, Y_{i})=\nabla \ell(\theta;x_{i}, Y_{i})$ is the score function and $w(\theta;x_{i}, Y_{i})$ is an appropriate weight function at each data point $(x_{i}, Y_{i})$ for all $i$. When we use the DPD--see Basu et al. \citep{basu1998robust}, Ghosh and Basu \citep{ghosh2013robust}--as our candidate loss function, the resultant estimator will be the MDPDE; this also belongs to the class of M-estimators.
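To make the M-estimation recipe concrete, the sketch below (our own illustration, not taken from the cited works) computes an M-estimator of a location parameter with a bounded Huber-type $\psi$, by solving the empirical estimating equation of Equation (\ref{general M estimating eq}), and reports the plug-in sandwich standard error based on $J_{M}^{-1}K_{M}J_{M}^{-1}$; the function names and the tuning constant $c=1.345$ are our choices.

```python
import numpy as np
from scipy.optimize import brentq

def huber_psi(r, c=1.345):
    # bounded score function: psi(r) = max(-c, min(r, c))
    return np.clip(r, -c, c)

def m_estimate_location(y, c=1.345):
    """Solve the empirical estimating equation (1/n) sum_i psi(y_i - t) = 0."""
    f = lambda t: huber_psi(y - t, c).mean()
    return brentq(f, y.min(), y.max())   # f changes sign over the data range

def sandwich_se(y, t, c=1.345, eps=1e-6):
    """Plug-in sandwich standard error sqrt(K / (J^2 n)), with K and J estimated
    empirically; J is obtained by a central finite difference of the mean score."""
    K = np.mean(huber_psi(y - t, c) ** 2)
    J = (huber_psi(y - (t - eps), c).mean()
         - huber_psi(y - (t + eps), c).mean()) / (2 * eps)
    return np.sqrt(K / (J ** 2 * len(y)))

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 95), rng.normal(10, 1, 5)])  # 5% outliers
t_hat = m_estimate_location(y)
print(t_hat, sandwich_se(y, t_hat))   # location estimate stays near 0 despite outliers
```

The bounded $\psi$ caps the contribution of the five outlying points, so the estimate stays close to the center of the clean data, in line with the bounded-IF discussion above.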
\subsection{ \textbf{MDPDE for the present set-up and the estimating equation}}\label{mdpde:present set-up}
The density power divergence (DPD) between two generic probability density functions $g$ and $f$ with respect to a common dominating measure is defined as
\begin{equation}\label{DPD}
d_{\alpha}(g,f)=\bigintssss_{\mathcal{S}}\Bigg\{f^{1+\alpha}-\Bigg(1+\frac{1}{\alpha}\Bigg)f^{\alpha}g+\frac{1}{\alpha}g^{1+\alpha}\Bigg\} \mbox{ for } \alpha > 0,
\end{equation}
and $\mathcal{S}$ being the common support of $f$ and $g$. For the discrete case, the divergence is modified accordingly by replacing the integration with summation. Although the divergence is undefined when we simply substitute $\alpha=0$ in Equation (\ref{DPD}), its limit as $\alpha \downarrow 0$ is well-defined, and we define $d_{0}(g,f)$ to be this limit. Some routine algebraic manipulation shows that
\begin{equation}\label{KLD}
d_{0}(g,f)=\int_{\mathcal{S}} g \ln{\frac{g}{f}}.
\end{equation}
This is a version of the Kullback-Leibler divergence. Now, consider minimum distance inference for estimating $\theta$ in a parametric model family $\mathcal{F}_{\theta} = \{F_{\theta}: \theta \in \Theta \}$, which is used to model a true probability distribution function $G$ (having density $g$). Replacing $f$ in Equation (\ref{KLD}) by a parametric model density $f_{\theta}$ of $F_{\theta}$, a minimizer of the divergence $d_{0}(g, f_{\theta})$ over $\theta \in \Theta$ yields the maximum likelihood functional at the true distribution $G$, which we denote by $T_{0}(G)$. For a generic non-negative $\alpha$, the functional $T_{\alpha}(G)$ analogously minimizes $d_{\alpha}(g, f_\theta)$ over $\theta \in \Theta$. Since the third term of Equation (\ref{DPD}) does not involve $\theta$ after substituting $f_{\theta}$ for $f$, the essential objective function for minimizing the divergence $d_{\alpha}(g, f_{\theta})$ may be written as a function of $G$ as
\begin{equation}\label{obj}
\int_{\mathcal{S}} f_{\theta}^{1+\alpha}-\Bigg(1+\frac{1}{\alpha}\Bigg)\int_{\mathcal{S}} f_{\theta}^{\alpha}dG.
\end{equation}
In practice, the true probability distribution function $G$ is unknown. So, given a set of i.i.d. copies $\{U_{i}\}_{i=1}^{n}$ from $G$, the MDPD estimator $\hat{\theta}_{\alpha}$ of $\theta$ may be obtained as a minimizer of the empirical version of the objective in Equation (\ref{obj}), with $G$ replaced by $\frac{1}{n}\sum_{i=1}^{n}\Delta_{U_{i}}$. Hence
\begin{equation}
\hat{\theta}_{\alpha} := \arg\min_{\theta\in\Theta}\Bigg\{\int_{\mathcal{S}} f_{\theta}^{1+\alpha}-\Bigg(1+\frac{1}{\alpha}\Bigg)
\frac{1}{n}\sum_{i=1}^{n}f_{\theta}(U_{i})^{\alpha}\Bigg\}.
\end{equation}
For more details of the method, see Basu et al. \citep{basu1998robust}. Notice that this does not require the use of any non-parametric density estimator for the true density, which is unavoidably necessary in many other minimum distance procedures, such as those based on the Hellinger distance (or any other $\phi$-divergence).
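As an illustration of the i.i.d. recipe above, the following minimal sketch (our own, not from the cited works) computes the MDPDE of $(\mu, \sigma)$ for a normal model from contaminated data, using the closed form $\int f_{\theta}^{1+\alpha} = (2\pi\sigma^{2})^{-\alpha/2}(1+\alpha)^{-1/2}$ valid for the normal family, and contrasts it with the sample mean (the $\alpha \downarrow 0$ limit).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, u, alpha):
    """Empirical DPD objective for a N(mu, sigma^2) model:
    int f^{1+alpha} - (1 + 1/alpha) * (1/n) sum_i f(U_i)^alpha.
    The integral term has the closed form below for the normal family."""
    mu, log_sigma = params                       # log-parametrize sigma > 0
    sigma = np.exp(log_sigma)
    integral = (2 * np.pi * sigma ** 2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    data_term = (1 + 1 / alpha) * np.mean(norm.pdf(u, mu, sigma) ** alpha)
    return integral - data_term

rng = np.random.default_rng(2)
u = np.concatenate([rng.normal(0, 1, 95), rng.normal(15, 1, 5)])   # 5% gross outliers
mle_mu = u.mean()                                                  # MLE of mu
res = minimize(dpd_objective, x0=[np.median(u), 0.0], args=(u, 0.5))
print(mle_mu, res.x[0])   # the MDPDE mean is far less affected by the outliers
```

With $\alpha = 0.5$, the density-power weight $f_{\theta}(U_i)^{\alpha}$ is essentially zero at the outliers, so the fitted mean stays near zero while the sample mean is dragged towards the contamination.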
When the random observations are independent but not necessarily identically distributed, this method may be generalized in a variety of different ways; we shall follow the approach of Ghosh and Basu \citep{ghosh2013robust}. Recalling the notation $G_{i}$ from Subsection \ref{M-estimation}, we assume that each $G_{i}$ admits a probability density function $g_{i}$ with respect to a common dominating measure for $i=1,2, \ldots,n$; these densities are possibly different. We model $g_{i}$ by $p_{ \theta, i}$, and recall that $Z_{i}$ is the empirical density for $Y|x_{i}$, which puts its total mass $1$ at the single point $Y_{i}$, for $i=1, 2, \ldots, n$. Although different, all of these model densities share the common vector-valued parameter $\theta$. Observe that for each $i$, the densities $g_{i}, p_{\theta,i}$ and $Z_{i}$ are all supported on the finite set $\chi$, which does not depend on $\theta$. We continue to use the term density function (rather than mass function) for a unified nomenclature. Given the realisations $Y_{1}, \ldots , Y_{n}$ of $Y|x_{1}, \ldots, Y|x_{n}$, the $i$-th DPD between the true and the model densities is given as
\begin{equation}\label{i th dpd}
d_{\alpha}(g_{i}, p_{\theta, i})=\sum_{j=1}^{m}\Bigg\{p_{\theta, i}(j)^{1+\alpha}- \Big(1+\frac{1}{\alpha} \Big) p_{\theta, i}(j)^{\alpha}g_{i}(j)+\frac{1}{\alpha} g_{i}(j)^{1+\alpha} \Bigg\} \mbox{ for } i=1,2, \ldots,n.
\end{equation}
The empirical version of (\ref{i th dpd}) is similarly given by
\begin{align}\label{i th empirical dpd}
d_{\alpha}(Z_{i}, p_{\theta, i})
&=\sum_{j=1}^{m}\Bigg\{p_{\theta, i}(j)^{1+\alpha}- \Big(1+\frac{1}{\alpha} \Big) p_{\theta, i}(j)^{\alpha}Z_{i}(j)+\frac{1}{\alpha} Z_{i}(j)^{1+\alpha} \Bigg\} \nonumber \\
&=\sum_{j=1}^{m}p_{\theta,i}(j)^{1+\alpha}-\Big(1+\frac{1}{\alpha}\Big)p_{\theta,i}(Y_{i})^{\alpha}+\frac{1}{\alpha} \mbox{ for all } i.
\end{align}
Considering the overall divergence as the arithmetic mean of the individual divergences (as in Ghosh and Basu \citep{ghosh2013robust}), the objective function (minus the constant term) for this independent but non-homogeneous (INH) case may be defined as
\begin{equation}\label{INH obj fun}
\begin{split}
H_{n}(\theta)=\frac{1}{n} \sum_{i=1}^{n} V_{i}(Y_{i},\theta), \mbox{ such that }
V_{i}(Y_{i}, \theta)
=\sum_{j=1}^{m} p_{\theta,i}(j)^{1+\alpha}- \Big(1+\frac{1}{\alpha} \Big)p_{\theta,i}(Y_{i})^{\alpha}.
\end{split}
\end{equation}
The MDPD estimator $\hat{\theta}_{\alpha}$ of $\theta$ under this INH set-up is a minimizer of the objective function defined in Equation (\ref{INH obj fun}) over the parameter space $\Theta \subseteq \mathds{R}^{m+p-1}$ for fixed $\alpha$. The population version of (\ref{INH obj fun}) is analogously given by
\begin{equation}\label{population INH obj fun}
\begin{split}
H(\theta)=\frac{1}{n}\sum_{i=1}^{n}H^{(i)}(\theta),
\mbox{ where }
H^{(i)}(\theta)=\sum_{j=1}^{m} \Bigg\{ p_{\theta,i}(j)^{1+\alpha}-\Big(1+\frac{1}{\alpha}\Big)p_{\theta,i}(j)^{\alpha} g_i(j) \Bigg\}.
\end{split}
\end{equation}
The best fitting parameter (MDPD functional) $\theta_{\alpha}$ similarly minimizes the objective function (\ref{population INH obj fun}) over $\Theta$.
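As an illustration of the INH objective in Equation (\ref{INH obj fun}), the sketch below (our own; logistic link, $m=3$, a single covariate, and a generic numerical optimizer rather than any routine used in the paper) minimizes $H_{n}(\theta)$ on data simulated from the latent model of Equation (\ref{linear latent regressiion eq}).

```python
import numpy as np
from scipy.stats import logistic
from scipy.optimize import minimize

def cell_probs(theta, X, m):
    """p_{theta,i}(j) for theta = (gamma_1, ..., gamma_{m-1}, beta_1, ..., beta_p)."""
    gamma, beta = theta[:m - 1], theta[m - 1:]
    cuts = np.concatenate(([-np.inf], gamma, [np.inf]))
    F = logistic.cdf(cuts[None, :] - (X @ beta)[:, None])
    return np.diff(F, axis=1)

def H_n(theta, X, y, m, alpha):
    """INH objective: (1/n) sum_i V_i(Y_i, theta), with y coded 1..m."""
    P = np.clip(cell_probs(theta, X, m), 1e-12, 1.0)
    p_obs = P[np.arange(len(y)), y - 1]          # p_{theta,i}(Y_i)
    return np.mean(np.sum(P ** (1 + alpha), axis=1)
                   - (1 + 1 / alpha) * p_obs ** alpha)

rng = np.random.default_rng(3)
n, m = 400, 3
X = rng.normal(size=(n, 1))
true = np.array([-1.0, 1.0, 1.5])                 # (gamma_1, gamma_2, beta)
ystar = X[:, 0] * true[2] + rng.logistic(size=n)  # latent linear regression
y = 1 + (ystar > true[0]).astype(int) + (ystar > true[1]).astype(int)
fit = minimize(H_n, x0=np.array([-0.5, 0.5, 0.0]), args=(X, y, m, 0.3))
print(np.round(fit.x, 2))   # should land near (-1, 1, 1.5)
```

The recovered cut-offs and slope are close to the generating values, showing that the MDPDE objective at a moderate $\alpha$ remains a sensible fitting criterion under pure data.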
\subsubsection{ \textbf{Estimating equations}}\label{estimation eq}
Under the differentiability of the link function $F$, the MDPD estimator $\hat{\theta}_{\alpha}$ is a zero of the following sets of estimating equations
\begin{equation}\label{estimating eq}
\nabla H_{n}( \theta)=\frac{1}{n} \sum_{i=1}^{n} \nabla V_{i}( Y_{i},\theta)=0,
\end{equation}
where $\nabla V_{i}( Y_{i},\theta)=(1+\alpha) \Big\{\sum_{j=1}^{m} p_{\theta,i}(j)^{\alpha} \nabla p_{\theta,i}(j)-p_{\theta,i}(Y_{i})^{\alpha-1} \nabla p_{\theta,i}(Y_{i})\Big\}$ and the vector $\nabla p_{\theta,i}(j)$ at a point $j \in \chi$ is given by
\begin{equation}\label{score function}
\nabla p_{\theta,i}(j)
=\frac{\partial }{\partial \theta} p_{\theta, i}(j)
=\Big( \frac{\partial }{\partial \gamma^{T}} p_{\theta, i}(j), \frac{\partial }{\partial \beta^{T}} p_{\theta, i}(j)\Big)^{T} \in \mathds{R}^{m+p-1}.
\end{equation}
Simple calculations show that for all $j \in \chi$,
\begin{align} \label{first deriv of p w.r.t gamma}
\frac{\partial}{\partial \gamma_{s}} p_{\theta, i}(j)
=\begin{cases}
f(\gamma_{s}-x^{T}_{i}\beta) &\mbox{ when } j=s \\
-f(\gamma_{s}-x^{T}_{i}\beta) &\mbox{ when } j=s+1 \\
0 &\mbox{ otherwise},
\end{cases}
\end{align}
\begin{align}
\label{first deriv of p w.r.t beta}
\frac{\partial}{\partial \beta_{k}} p_{\theta, i}(j)
&=\Big\{ f(\gamma_{j-1}- x^{T}_{i}\beta)-f(\gamma_{j}-x^{T}_{i}\beta ) \Big\}x_{ik},
\end{align}
for $s=1,2,\ldots ,(m-1)$ and $k=1, \ldots,p$. Equations (\ref{first deriv of p w.r.t gamma}) and (\ref{first deriv of p w.r.t beta}) would together imply that \begin{align} \label{first deriv of V w.r.t gamma}
\frac{1}{1+\alpha}\cdot\frac{\partial }{\partial \gamma_{s}} V_{i}(Y_{i}, \theta)
&=\Bigg\{p_{\theta, i}(s)^{\alpha}f(\gamma_{s}-x_{i}^{T}\beta)-p_{\theta, i}(Y_{i})^{\alpha-1} f(\gamma_{Y_{i}}-x_{i}^{T}\beta) \mathds{1}(Y_{i}=s|x_{i})\Bigg\} \nonumber \\
&-\Bigg\{p_{\theta, i}(s+1)^{\alpha}f(\gamma_{s}-x_{i}^{T}\beta)-p_{\theta, i}(Y_{i})^{\alpha-1} f(\gamma_{Y_{i}-1}-x_{i}^{T}\beta) \mathds{1}(Y_{i}=s+1|x_{i})\Bigg\}, \\
\label{first deriv of V w.r.t beta}
\frac{1}{1+\alpha}\cdot\frac{\partial}{\partial \beta_{k}}V_{i}(Y_{i}, \theta)
&=x_{ik} \Bigg[ \sum_{j=1}^{m}
p_{\theta, i}(j)^{\alpha} \Big\{f(\gamma_{j-1}- x^{T}_{i}\beta)-f(\gamma_{j}-x^{T}_{i}\beta )
\Big\} \nonumber \\
&-p_{\theta, i}(Y_{i})^{\alpha-1} \Big\{f(\gamma_{Y_{i}-1}- x^{T}_{i}\beta)-f(\gamma_{Y_{i}}-x^{T}_{i}\beta )
\Big\} \Bigg],
\end{align}
for $s=1, 2, \ldots,(m-1)$ and $k=1, 2, \ldots,p$. Using (\ref{first deriv of V w.r.t gamma}) and (\ref{first deriv of V w.r.t beta}), the estimating equations in (\ref{estimating eq}) can be explicitly obtained. Therefore the MDPD estimator $\hat{\theta}_{ \alpha}$ satisfies the following sets of estimating equations
\begin{align}\label{simplified estimating eq 1}
\frac{1}{n}\sum_{i=1}^{n} \frac{\partial }{\partial \gamma_{s}} V_{i}(Y_{i}, \hat{\theta}_{\alpha}) &=0 \mbox{ for all } s=1,2, \ldots,(m-1), \\
\label{simplified estimating eq 2}
\frac{1}{n}\sum_{i=1}^{n}\frac{\partial }{\partial \beta_{k}} V_{i}(Y_{i}, \hat{\theta}_{\alpha})
&=0 \mbox{ for all } k=1,2, \ldots ,p
\end{align}
for fixed $\alpha$. The system of estimating equations (\ref{simplified estimating eq 1}) and (\ref{simplified estimating eq 2}) is unbiased when the true densities belong to the model family, i.e., $g_{i}\equiv p_{\theta_{0},i}$ for the true $\theta_{0}$ and $i=1,\ldots,n$. In that case the MDPD functional is Fisher consistent, i.e., $\theta_{\alpha}=\theta_{0}$. Observe that when Equations (\ref{simplified estimating eq 1}) and (\ref{simplified estimating eq 2}) are solved at $\hat{\theta}_{\alpha}$, the continuity of $\theta \mapsto \nabla_{l}V_{i}(Y_{i},\theta)$ implies that $|\nabla_{l}V_{i}(Y_{i}, \theta)|$ is bounded, for each $i$, in some open set containing $\hat{\theta}_{\alpha}$, $l=1,2, \ldots,(m+p-1)$. The MDPD functional $\theta_{\alpha}$ similarly solves the equations
\begin{align}\label{theta_g as zero}
\frac{1}{n}\sum_{i=1}^{n}\nabla H^{(i)}(\theta_{\alpha})=0
\iff \nabla_{l}H^{(i)}(\theta_{\alpha})=0 \mbox{ for all } i=1, \ldots,n \mbox{ and } l=1,2, \ldots ,(m+p-1).
\end{align}
Here $\nabla_{l}$ denotes the partial derivative with respect to the $l$-th component of $\theta$.
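The analytic derivative in Equation (\ref{first deriv of V w.r.t gamma}) can be corroborated numerically; the sketch below (our own, assuming a logistic link, $m=3$ and one covariate) compares it against a central finite difference of $V_{i}$ with respect to the cut-offs.

```python
import numpy as np
from scipy.stats import logistic

def V_i(theta, x, yi, m, alpha):
    """V_i(Y_i, theta) = sum_j p_j^{1+alpha} - (1 + 1/alpha) p_{Y_i}^alpha."""
    gamma, beta = theta[:m - 1], theta[m - 1:]
    cuts = np.concatenate(([-np.inf], gamma, [np.inf]))
    p = np.diff(logistic.cdf(cuts - x @ beta))
    return np.sum(p ** (1 + alpha)) - (1 + 1 / alpha) * p[yi - 1] ** alpha

def grad_V_i_gamma(theta, x, yi, m, alpha):
    """Analytic d V_i / d gamma_s for all s, following the displayed formula."""
    gamma, beta = theta[:m - 1], theta[m - 1:]
    cuts = np.concatenate(([-np.inf], gamma, [np.inf]))
    eta = x @ beta
    p = np.diff(logistic.cdf(cuts - eta))
    f = logistic.pdf(gamma - eta)                 # f(gamma_s - x'beta), s = 1..m-1
    g = np.zeros(m - 1)
    for s in range(1, m):
        term = p[s - 1] ** alpha * f[s - 1] - p[s] ** alpha * f[s - 1]
        if yi == s:                               # indicator 1(Y_i = s)
            term -= p[yi - 1] ** (alpha - 1) * f[s - 1]
        if yi == s + 1:                           # indicator 1(Y_i = s + 1)
            term += p[yi - 1] ** (alpha - 1) * f[s - 1]
        g[s - 1] = (1 + alpha) * term
    return g

theta = np.array([-0.8, 0.9, 0.7])                # (gamma_1, gamma_2, beta), m = 3
x, yi, m, alpha = np.array([0.4]), 2, 3, 0.5
eps = 1e-6
num = np.array([(V_i(theta + eps * e, x, yi, m, alpha)
                 - V_i(theta - eps * e, x, yi, m, alpha)) / (2 * eps)
                for e in np.eye(3)[:2]])          # gamma components only
print(np.allclose(num, grad_V_i_gamma(theta, x, yi, m, alpha), atol=1e-5))
```

Such finite-difference checks are a cheap safeguard when implementing the estimating equations (\ref{simplified estimating eq 1}) and (\ref{simplified estimating eq 2}).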
\section{Properties of the MDPDE under the ordinal response models}\label{properties}
\subsection{\textbf{Asymptotic distributions and relative efficiency}}\label{asymptotic properties and ARE}
We shall now present the weak consistency and asymptotic normality results for the MDPD estimator $\hat{\theta}_{\alpha}$. First, we shall introduce some notation. For each fixed $\alpha$, define
\begin{equation}\label{J(i)}
J^{(i)}(\alpha)
=\frac{1}{1+\alpha} \mathds{E}_{g_{i}}\Big[\nabla^{2} V_{i}(Y_{i}, \theta_{\alpha})\Big]
\end{equation}
which is assumed to be positive definite. Also, define
\begin{equation}\label{psi_n and omega_n}
\Psi_{n}(\alpha)=\frac{1}{n}\sum_{i=1}^{n}J^{(i)}(\alpha),
\mbox{ and }
\Omega_{n}(\alpha)=\frac{1}{n}\sum_{i=1}^{n} \mathds{V}ar_{g_{i}}\Big[\nabla V_{i}(Y_{i}, \theta_{\alpha})\Big],
\end{equation}
where $\mathds{E}_{g_{i}}$ and $\mathds{V}ar_{g_{i}}$ stand for mathematical expectation and variance respectively under the true densities $g_{i}, i=1,2, \ldots,n$. The actual expressions for the quantities in Equation (\ref{psi_n and omega_n}) can be obtained from Ghosh and Basu \citep{ghosh2013robust}, and they turn out to be
\begin{align}\label{psi_n}
\Psi_{n}(\alpha)
=&\frac{1}{n} \sum_{i=1}^{n} \Bigg[\sum_{j=1}^{m}u_{ \theta_{\alpha},i}(j)u_{\theta_{\alpha},i}(j)^{T}p_{\theta_{\alpha},i}(j)^{1+\alpha} \nonumber \\
&-\sum_{j=1}^{m}\Big\{\nabla u_{\theta_{\alpha},i}(j)+\alpha \cdot u_{\theta_{\alpha},i}(j) u_{\theta_{\alpha},i}(j)^{T}\Big\}\Big\{g_{i}(j)-p_{\theta_{\alpha},i}(j)\Big\}p_{\theta_{\alpha},i}(j)^\alpha \Bigg], \\
\label{omega_n}
\Omega_{n}(\alpha)
=&\frac{1}{n}\sum_{i=1}^{n}\Bigg\{\sum_{j=1}^{m} u_{\theta_{\alpha},i}(j)u_{\theta_{\alpha},i}(j)^{T}p_{\theta_{\alpha},i}(j)^{2\alpha} g_{i}(j)-\xi_{i}(\alpha)\xi_{i}(\alpha)^{T}\Bigg\},
\end{align}
where $\xi_{i}(\alpha) =\sum_{j=1}^{m}u_{\theta_{\alpha},i}(j)p_{\theta_{\alpha},i}(j)^{\alpha} g_{i}(j)$ and $u_{\theta,i}(j)=\Big(\frac{\partial}{\partial \gamma^{T}}\ln p_{\theta,i}(j), \frac{\partial}{\partial \beta^{T}}\ln p_{\theta,i}(j) \Big)^{T} \in \mathds{R}^{m+p-1}$ is the value of the score vector at $j \in \chi$. We recall that $x_{i}=(x_{i1}, x_{i2}, \ldots , x_{ip})^{T}$, and denote $J=(1,1, \ldots ,1)^{T} \in \mathds{R}^{m-1}$. Also let $e_{i}$ denote the $i$-th standard unit vector in $\mathds{R}^{m-1}$. Therefore, using the formulae in Equations (\ref{first deriv of p w.r.t gamma}) and (\ref{first deriv of p w.r.t beta}), the score vector can be further simplified to
\begin{align}\label{score function expression}
u_{\theta,i}(j)
=\begin{pmatrix}
&I_{m-1}\\
\\
&-x_{i}J^{T}
\end{pmatrix}
D_{\theta,i}(j)
(e_{j}-e_{j-1}),
\end{align}
where
\begin{align}
D_{\theta,i}(j)
&=\frac{1}{p_{\theta,i}(j)}\begin{pmatrix}
f(\gamma_{1}-x_{i}^{T}\beta), &0, &\ldots &0\\
0, &f(\gamma_{2}-x_{i}^{T}\beta), &\ldots &0\\
0, & \ldots &\ddots &0\\
0, &0, &\ldots &f(\gamma_{m-1}-x_{i}^{T}\beta)
\end{pmatrix}_{(m-1)\times(m-1)}
\end{align}
for $j=1,2, \ldots,m$. We denote $e_{0}= e_{m}=0_{(m-1) \times 1}$ for notational convenience. It may be easily checked that
\begin{align}\label{partial of D w.r.t gamma_{s}}
\frac{\partial }{\partial \gamma_{s}} D_{\theta,i}(j)
=\begin{pmatrix}
&\frac{\partial }{\partial \gamma_{s}}\frac{f(\gamma_{1}- x_{i}^{T}\beta)}{p_{\theta,i}(j)},
&0,
&\ldots
&0\\
&0,
&\frac{\partial }{\partial \gamma_{s}}\frac{f(\gamma_{2}- x_{i}^{T}\beta)}{p_{\theta,i}(j)},
&\ldots
&0\\
&\ldots
&\ldots
&\ldots
&\ldots\\
&0,
&\ldots,
&0,
& \frac{\partial }{\partial \gamma_{s}}\frac{f(\gamma_{m-1}- x_{i}^{T}\beta)}{p_{\theta,i}(j)}
\end{pmatrix},
\end{align}
where
\begin{align}
\frac{\partial}{\partial \gamma_{s}} \frac{f(\gamma_{s'}-x_{i}^{T}\beta)}{p_{\theta,i}(j)}
&=
\begin{cases}
\frac{p_{\theta,i}(j)f'(\gamma_{s}-x_{i}^{T}\beta)
+(-1)^{j+s+1}f^{2}(\gamma_{s}-x_{i}^{T}\beta)}{p_{\theta,i}^{2}(j)}
& \mbox{ for } s'=s, \mbox{ and } j=s, s+1 \\[10pt]
\frac{(-1)^{j+s+1}f(\gamma_{s'}-x_{i}^{T}\beta)
f(\gamma_{s}-x_{i}^{T}\beta)}{p_{\theta,i}^{2}(j)}
& \mbox{ for } s'\ne s, \mbox{ and } j=s, s+1
\end{cases}
\end{align}
for all $s,s'=1,2, \ldots,(m-1)$ and $j=1,2, \ldots ,m$. Similarly
\begin{align}\label{partial of D w.r.t beta_{k}}
\frac{\partial }{\partial \beta_{k}} D_{\theta,i}(j)
&=\begin{pmatrix}
&\frac{\partial }{\partial \beta_{k}}\frac{f(\gamma_{1}- x_{i}^{T}\beta)}{p_{\theta,i}(j)},
&0,
&\ldots
&0\\
&0,
&\frac{\partial }{\partial \beta_{k}}\frac{f(\gamma_{2}- x_{i}^{T}\beta)}{p_{\theta,i}(j)},
&\ldots
&0\\
&\ldots
&\ldots
&\ldots
&\ldots\\
&0,
&\ldots,
&0,
& \frac{\partial }{\partial \beta_{k}}\frac{f(\gamma_{m-1}- x_{i}^{T}\beta)}{p_{\theta,i}(j)}
\end{pmatrix},
\end{align}
where
\begin{align}
\frac{\partial }{\partial \beta_{k}}\frac{f(\gamma_{s}- x_{i}^{T}\beta)}{p_{\theta,i}(j)}
&=-\frac{x_{ik}}{p^{2}_{\theta,i}(j)} \Bigg[p_{\theta,i}(j) f'(\gamma_{s}-x_{i}^{T}\beta) \nonumber \\
&+ f(\gamma_{s}-x_{i}^{T}\beta) \Big\{ f(\gamma_{j-1}-x_{i}^{T}\beta)-f(\gamma_{j}-x_{i}^{T}\beta) \Big\} \Bigg]
\end{align}
for all $s,k$ and $j$. Here $f'$ denotes the first derivative of $f$ with respect to its argument.
\begin{Remark}
It may be easily checked that
\begin{align}
f'(x)=
\begin{cases}
\frac{e^{-x}(e^{-x}-1)}{(e^{-x}+1)^{3}} &\mbox{ when } X \sim Logistic(0,1)\\
-\frac{1}{\sqrt{2\pi}} e^{-x^{2}/2} x &\mbox{ when } X \sim \mathcal{N}(0,1).\\
\end{cases}
\end{align}
Thus for both the normal and logistic distributions, $f$ and $f'$ are bounded.
\end{Remark}
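These closed forms are easy to corroborate numerically; the sketch below (our own) checks the derivative of the standard logistic and standard normal densities against central finite differences of the corresponding scipy densities.

```python
import numpy as np
from scipy.stats import logistic, norm

def f_prime_logistic(x):
    # derivative of the standard logistic density f(x) = e^{-x} / (1 + e^{-x})^2
    return np.exp(-x) * (np.exp(-x) - 1) / (1 + np.exp(-x)) ** 3

def f_prime_normal(x):
    # derivative of the standard normal density
    return -x * np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

x = np.linspace(-4, 4, 81)
h = 1e-6
num_log = (logistic.pdf(x + h) - logistic.pdf(x - h)) / (2 * h)
num_nor = (norm.pdf(x + h) - norm.pdf(x - h)) / (2 * h)
print(np.allclose(num_log, f_prime_logistic(x), atol=1e-6),
      np.allclose(num_nor, f_prime_normal(x), atol=1e-6))
```

Both derivatives agree with the finite differences to high accuracy, and both are visibly bounded on the whole line.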
Simple matrix algebra gives that
\begin{gather} \label{nabla score}
\nabla u_{\theta,i}(j)
=\begin{pmatrix}
\frac{\partial}{\partial \gamma} u_{\theta,i}(j)^{T}\\ \\
\frac{\partial}{\partial \beta} u_{\theta,i}(j)^{T}
\end{pmatrix} \nonumber \\
= \begin{pmatrix}
&((e_{j}-e_{j-1})^{T} \otimes I_{m-1}) \frac{\partial}{\partial \gamma}D_{\theta,i}(j),
&((e_{j-1}-e_{j})^{T} \otimes I_{m-1}) \frac{\partial}{\partial \gamma}D_{\theta,i}(j) Jx_{i}^{T}\\ \\
&((e_{j}-e_{j-1})^{T} \otimes I_{k})\frac{\partial}{\partial \beta}D_{\theta,i}(j),
&((e_{j-1}-e_{j})^{T}\otimes I_{k})\frac{\partial}{\partial \beta}D_{\theta,i}(j) Jx_{i}^{T}\\
\end{pmatrix}
\end{gather}
where $\otimes$ denotes the Kronecker product between two matrices. The blocks $[1,1]$ and $[1,2]$ of the partitioned matrix (\ref{nabla score}) can be further simplified using the $(m-1)^{2} \times (m-1)$ matrices
\begin{align}\label{nabla score blocks [1,1] and [1,2]}
\frac{\partial}{\partial \gamma}D_{\theta,i}(j)
=\begin{pmatrix}
\frac{\partial}{\partial \gamma_{1}} D_{\theta,i}(j) \\ \\
\frac{\partial}{\partial \gamma_{2}} D_{\theta,i}(j) \\
\vdots\\
\frac{\partial}{\partial \gamma_{m-1}} D_{\theta,i}(j)
\end{pmatrix} \mbox{ and }
\frac{\partial}{\partial \gamma}D_{\theta,i}(j) Jx_{i}^{T}
=\begin{pmatrix}
\frac{\partial}{\partial \gamma_{1}} D_{\theta,i}(j)\\ \\
\frac{\partial}{\partial \gamma_{2}} D_{\theta,i}(j) \\
\vdots \\
\frac{\partial}{\partial \gamma_{m-1}} D_{\theta,i}(j)
\end{pmatrix}
Jx_{i}^{T}.
\end{align}
Similar calculations also hold for the blocks $[2,1]$ and $[2,2]$ of (\ref{nabla score}). Using (\ref{score function expression}) and (\ref{nabla score}), we can explicitly calculate (\ref{psi_n}) and (\ref{omega_n}). Next, we state the following assumptions, which will be used to prove the consistency and asymptotic normality of the MDPDE.
\begin{description}
\descitem{(A1)}
The MDPD functional $\theta_{\alpha}$ is an interior point of $\Theta$.
\descitem{(A2)}The link function $F$ is thrice continuously differentiable.
\descitem{(A3)}
The matrices $J^{(i)}(\alpha)$ are positive definite for all $i$, and
\begin{equation} \label{A2}
\lambda_{0}:=\underset{n}{\inf}\Big[\min \mbox{ eigenvalue of } \Psi_{n}(\alpha)\Big]>0.
\end{equation}
\descitem{(A4)}
There exist integrable functions $M_{ll'l^{*}}(Y_{i})$ such that
\begin{gather}
|\nabla_{ll'l^{*}}V_{i}(Y_{i}, \theta_{\alpha})| \le M_{ll'l^{*}}(Y_{i}) \mbox{ for all } i
\mbox{ and } \frac{1}{n}\sum_{i=1}^{n}\mathds{E}_{g_{i}} \big[ M_{ll'l^{*}}(Y_{i})\big]=\mathcal{O}(1)
\end{gather}
for all $l,l',l^{*}=1,2, \ldots , (m+p-1)$.
\descitem{(A5)}
$|\nabla_{l}V_{i}(Y_{i}, \theta_{\alpha})|$ and $|\nabla_{ll'} V_{i}(Y_{i}, \theta_{\alpha})|$ are assumed to be almost surely bounded for all $i$ and $l,l'=1,2, \ldots, (m+p-1)$.
\end{description}
\begin{Theorem}\label{Theorem: Consistency and CLT}
Suppose the assumptions \descref{(A1)} to \descref{(A5)} are true. Then the following results hold for each fixed non-negative $\alpha$.
\begin{description}
\descitem{(a)} $\hat{\theta}_{\alpha} \overset{\mathds{P}}{\longrightarrow}\theta_{\alpha}$ as n $\uparrow +\infty$.
\descitem{(b)} $\sqrt{n}\Omega_{n}^{-\frac{1}{2}}(\alpha)\Psi_{n}(\alpha)\Big(\hat{\theta}_{\alpha}-\theta_{\alpha}\Big)\overset{L}{\longrightarrow} \mathcal{N}\Big(0,I_{m+p-1}\Big)$ as $n \uparrow +\infty$.
\end{description}
\end{Theorem}
\begin{Remark}
The asymptotic variance of $\sqrt{n}\, \hat{\theta}_{\alpha}$ is approximately $\Psi^{-1}_{n}(\alpha)\Omega_{n}(\alpha)\Psi^{-1}_{n}(\alpha)$. To compare the asymptotic performance of $\hat{\theta}_{\alpha}$ across different choices of the tuning parameter $\alpha$, we compare the traces of $\Psi^{-1}_{n}(\alpha)\Omega_{n}(\alpha)\Psi^{-1}_{n}(\alpha)$. We know that the MLE is asymptotically the most efficient estimator when the true distribution belongs to the model family. The asymptotic relative efficiency (ARE) of the MDPDE with respect to the MLE is given as
\begin{equation}\label{ARE}
ARE(\alpha)=\frac{tr\Big(\Psi^{-1}_{n}(0)\Omega_{n}(0) \Psi^{-1}_{n}(0)\Big)} {tr\Big(\Psi^{-1}_{n}(\alpha)\Omega_{n}(\alpha) \Psi^{-1}_{n}(\alpha) \Big)} \mbox{ for } \alpha \ge 0,
\end{equation}
where $tr(A)$ denotes the trace of the matrix $A$. If the true distribution of the data deviates from the assumed model, or if there are outliers in the data set, then the performance of the MLE may become very unstable depending on the amount of anomaly in the data. As the MDPDE naturally down-weights observations that are outlying with respect to the model, we expect the MDPDE to outperform the MLE in such scenarios when $\alpha$ is chosen to be positive.
\end{Remark}
\begin{Remark}\label{remark:asymp var}
The unknown asymptotic variance needs to be estimated based on the data at hand. A consistent estimator of $\Psi_{n}(\alpha)$ is obtained by plugging in $\hat{\theta}_{\alpha}$ and $Z_{i}$ respectively for $\theta_{\alpha}$ and $g_{i}$ in the Equation (\ref{psi_n}). So
\begin{align}\label{est psi_n}
\hat{\Psi}_{n}(\alpha)
=&\frac{1}{n} \sum_{i=1}^{n} \Bigg[\sum_{j=1}^{m}\Big\{\nabla u_{\hat{\theta}_{\alpha},i}(j)+(1+\alpha) \cdot u_{\hat{\theta}_{\alpha},i}(j) u_{\hat{\theta}_{ \alpha},i}(j)^{T}\Big\}p_{\hat{\theta}_{\alpha},i}(j)^{1+\alpha} \nonumber \\
&-\Big\{\nabla u_{\hat{\theta}_{\alpha},i}(Y_{i})+\alpha \cdot u_{\hat{\theta}_{\alpha},i}(Y_{i}) u_{\hat{\theta}_{ \alpha},i}(Y_{i})^{T}\Big\}p_{\hat{\theta}_{\alpha},i}(Y_{i})^{\alpha} \Bigg].
\end{align}
However, $\Omega_{n}(\alpha)$ cannot be estimated using only the single observation $Y_{i}$ from $g_{i}$, $i=1,2, \ldots,n$. To avoid this, we replace the true densities in Equation (\ref{omega_n}) by the model densities and substitute $\hat{\theta}_{\alpha}$ for $\theta_{\alpha}$:
\begin{align}
\label{est omega_n}
\hat{\Omega}_{n}(\alpha) &=\frac{1}{n}\sum_{i=1}^{n}\Bigg\{\sum_{j=1}^{m} u_{\hat{\theta}_{\alpha},i}(j)u_{\hat{\theta}_{ \alpha},i}(j)^{T}p_{ \hat{\theta}_{\alpha},i}(j)^{2\alpha+1} -\hat{\xi}_{i}(\alpha)\hat{\xi}_{i}(\alpha)^{T}\Bigg\}, \\
\label{est xi_i}
\hat{\xi}_{i}(\alpha)
&=\sum_{j=1}^{m}u_{\hat{\theta}_{ \alpha},i}(j)p_{\hat{\theta}_{ \alpha},i}(j)^{1+\alpha} .
\end{align}
\end{Remark}
Using (\ref{est psi_n}), (\ref{est omega_n}) and (\ref{est xi_i}), we can consistently estimate $ARE(\alpha)$ when the true densities belong to the model family.
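At the model ($g_{i}\equiv p_{\theta,i}$), $\Psi_{n}(\alpha)$ and $\Omega_{n}(\alpha)$ reduce to weighted sums of outer products of the score, and $ARE(\alpha)$ in Equation (\ref{ARE}) can be computed directly. The sketch below (our own; logistic link, $m=3$, $p=1$, with the score obtained by numerical differentiation of $\ln p_{\theta,i}$ rather than the closed forms above) illustrates the mild efficiency loss for moderate $\alpha$.

```python
import numpy as np
from scipy.stats import logistic

def cell_probs(theta, X, m):
    gamma, beta = theta[:m - 1], theta[m - 1:]
    cuts = np.concatenate(([-np.inf], gamma, [np.inf]))
    return np.diff(logistic.cdf(cuts[None, :] - (X @ beta)[:, None]), axis=1)

def are_at_model(theta, X, m, alpha, eps=1e-6):
    """Model-based ARE(alpha): with g_i = p_{theta,i}, Psi_n and Omega_n become
    sums of outer products of the score u, weighted by powers of p."""
    n, d = X.shape[0], len(theta)
    P = cell_probs(theta, X, m)
    # score vectors u_{theta,i}(j) via central differences of log p
    U = np.zeros((n, m, d))
    for t in range(d):
        e = np.zeros(d); e[t] = eps
        U[:, :, t] = (np.log(cell_probs(theta + e, X, m))
                      - np.log(cell_probs(theta - e, X, m))) / (2 * eps)
    def sandwich_trace(a):
        Psi = np.einsum('ijt,ijs,ij->ts', U, U, P ** (1 + a)) / n
        xi = np.einsum('ijt,ij->it', U, P ** (1 + a))
        Om = (np.einsum('ijt,ijs,ij->ts', U, U, P ** (1 + 2 * a))
              - np.einsum('it,is->ts', xi, xi)) / n
        return np.trace(np.linalg.inv(Psi) @ Om @ np.linalg.inv(Psi))
    return sandwich_trace(0.0) / sandwich_trace(alpha)

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 1))
theta = np.array([-1.0, 1.0, 1.5])
ares = [are_at_model(theta, X, 3, a) for a in (0.1, 0.5, 1.0)]
print([round(v, 3) for v in ares])   # decreasing in alpha, close to 1 for small alpha
```

The computed efficiencies stay close to one for small $\alpha$ and decay slowly, which is the usual robustness-efficiency trade-off of the DPD family.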
\subsection{ \textbf{Influence function analysis}}\label{IF analysis}
The influence function (IF) is one of the most popular measures of robustness to study the impact of infinitesimal data contamination on a functional. Here we shall present the influence function of the MDPD functional $\theta_{\alpha}$, which minimizes $H(\theta)$.
Following Huber \citep{huber1983minimax}, the true probability densities $\{g_{i}\}$ are contaminated in $\epsilon$-proportions at the points $\{t_{i}\}$ by a sequence of degenerate densities $\{\delta_{t_{i}}\}$. These $\epsilon$-contaminated true densities are denoted by $g_{i,\epsilon} \equiv (1-\epsilon)g_{i}+\epsilon \delta_{t_{i}}$ for all $i$. Recalling that $G_{i}$ is the CDF corresponding to the true probability density $g_{i}$, we may explicitly write $\theta_{\alpha}=\theta_{g_{1},\ldots,g_{n}}$. Let $\theta_{\epsilon}^{i_0}=\theta_{g_{1},\ldots,g_{i_{0}-1},g_{i_{0},\epsilon},g_{i_{0}+1},\ldots,g_{n}}$ be the MDPD functional when the $i_{0}$-th true density is contaminated in $\epsilon$-proportion. Now, we substitute $\theta_{\epsilon}^{i_{0}}$ and $g_{i_{0},\epsilon}$ for $\theta_{\alpha}$ and $g_{i_{0}}$, respectively, in the estimating equations (\ref{simplified estimating eq 1}) and (\ref{simplified estimating eq 2}). Differentiating these equations with respect to $\epsilon$ and evaluating at $\epsilon=0$, the influence function with contamination only at the $i_{0}$-th data point is obtained as
\begin{equation}\label{IF i_0 coordinate}
IF_{i_{0}}(t_{i_{0}},\theta_{\alpha},G_{1},\cdots,G_{n})=\Psi_{n}^{-1}(\alpha)\frac{1}{n}\Big\{p_{\theta_{\alpha},i_{0}}(t_{i_{0}})^{\alpha} u_{\theta_{\alpha},i_{0}}(t_{i_{0}})-\xi_{i_{0}}(\alpha)\Big\}.
\end{equation}
Next we assume that all the data points are simultaneously contaminated, each at $\epsilon$-proportion. Then, following the earlier description, the influence function in this case is obtained as
\begin{equation}\label{IF MDPD}
IF(t_{1},\cdots,t_{n},\theta_{\alpha},G_{1},\cdots,G_{n})
=\Psi_{n}^{-1}(\alpha)\frac{1}{n}\sum_{i=1}^{n}\Big\{p_{\theta_{\alpha},i}(t_{i})^{\alpha} u_{\theta_{\alpha},i}(t_{i})-\xi_{i}(\alpha)\Big\}.
\end{equation}
Observe that (\ref{IF MDPD}) is a simple arithmetic average of (\ref{IF i_0 coordinate}) over all the data points. From Equations (\ref{IF i_0 coordinate}) and (\ref{IF MDPD}), it is evident that the MDPD functional down-weights the influence of data points that are inconsistent with the model, the weights being the model densities raised to the power $\alpha\in [0,1]$. Following Hampel et al. \citep{hampel2011robust} and Ghosh and Basu \citep{ghosh2013robust}, two IF-based measures of robustness are the unstandardized and the self-standardized gross error sensitivity \textit{(GES)}. For the MDPD functional $\theta_{\alpha}$, the unstandardized one is given by
\begin{gather}\label{IF unstandardized}
\gamma_{i_{0}}^{u}(\theta_{\alpha},G_1,\cdots,G_n) =\sup_{t}{||IF_{i_{0}}(t,\theta_{\alpha},G_{1},\cdots,G_{n})||} \\
=\frac{1}{n}\sup_{t}\Bigg[\Big\{p_{\theta_{\alpha},i_{0}}(t)^{\alpha} u_{\theta_{\alpha},i_{0}}(t)-\xi_{i_{0}}(\alpha)\Big\}^{T}
\Psi_{n}^{-2}(\alpha)\Big\{p_{\theta_{\alpha},i_{0}}(t)^{\alpha}u_{\theta_{\alpha},i_{0}}(t)-\xi_{i_{0}}(\alpha)\Big\}\Bigg]^{\frac{1}{2}},
\end{gather}
where `$u$' stands for unstandardized. Note that the measure in Equation (\ref{IF unstandardized}) is not invariant with respect to scale transformations of the parameters. Provided the asymptotic variance of the MDPD estimator exists at the true distributions $\{G_{i}\}$, the self-standardized \textit{GES} is defined as
\begin{gather}\label{IF standardized}
\gamma_{i_{0}}^{s}(\theta_{\alpha},G_{1},\cdots,G_{n}) \\
=\sup_{t}\Big\{IF_{i_{0}}(t,\theta_{\alpha},G_{1},\cdots,G_{n})^{T}\Big(\Psi_{n}^{-1}(\alpha)\Omega_{n}(\alpha)\Psi_{n}^{-1}(\alpha)\Big)^{-1} IF_{i_{0}}(t,\theta_{\alpha},G_{1},\cdots,G_{n})\Big\}^{\frac{1}{2}} \\
=\frac{1}{n}\sup_{t}\Bigg[\Big\{p_{\theta_{\alpha},i_{0}}(t)^{\alpha}u_{\theta_{\alpha},i_{0}}(t)-\xi_{i_{0}}(\alpha)\Big\}^{T}\Omega_{n}^{-1}(\alpha)\Big\{p_{\theta_{\alpha},i_{0}}(t)^{\alpha}u_{\theta_{\alpha},i_{0}}(t)-\xi_{i_{0}}(\alpha)\Big\}\Bigg]^{\frac{1}{2}}.
\end{gather}
When all the data points are contaminated simultaneously, the unstandardized and standardized \textit{GES} may be defined similarly, using (\ref{IF unstandardized}) and (\ref{IF standardized}) respectively, with (\ref{IF i_0 coordinate}) replaced by (\ref{IF MDPD}) in the formulae and the supremum taken over all possible $t_{1}, t_{2}, \ldots,t_{n}$. In Figures \ref{fig:IF model1 to model2} and \ref{fig:IF model3 to model5}, we present the influence functions of $\beta_{1}$ and $\gamma_{2}$ for several values of $\alpha \ge 0$, associated with the different models of Section \ref{simulation study}, for the probit link function. It is clear that $\alpha$ close to zero is the only case for which the IF appears to be unbounded; all the other curves are bounded. This gives strong evidence that when $\alpha$ is chosen close to zero, the MDPD functional may tend to become unbounded, as is indeed the case for the maximum likelihood functional. For higher $\alpha$, the corresponding MDPD functional achieves better stability against outliers. \vspace{12pt}
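The down-weighting seen in these plots can be checked numerically. The sketch below is a hedged toy computation, not the paper's exact influence function: it looks at a single ordinal-probit cell probability $p_j(\beta)$ with one covariate and compares the raw score $u_j=(\partial p_j/\partial\beta)/p_j$, which drives the IF at $\alpha=0$, with the DPD-weighted term $p_j^{\alpha}u_j$ appearing in (\ref{IF i_0 coordinate}); the cutpoints and parameter values are illustrative.

```python
import math

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    # standard normal density, with phi(+/-inf) = 0
    if not math.isfinite(z):
        return 0.0
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def cell_prob_and_score(j, beta, x, gamma):
    # p_j = Phi(gamma_j - x*beta) - Phi(gamma_{j-1} - x*beta); gamma_0 = -inf
    cuts = [-math.inf] + list(gamma) + [math.inf]
    lo, hi = cuts[j], cuts[j + 1]
    p = Phi(hi - x * beta) - Phi(lo - x * beta)
    dp = -x * (phi(hi - x * beta) - phi(lo - x * beta))  # analytic dp_j/dbeta
    return p, dp / p

gamma, beta, alpha = (-1.0, 1.0), 1.0, 0.5
raw, weighted = [], []
for x in [1.0, 2.0, 3.0, 4.0, 5.0]:                # increasingly extreme covariate
    p, u = cell_prob_and_score(0, beta, x, gamma)  # lowest category
    raw.append(abs(u))
    weighted.append(p ** alpha * abs(u))
print(raw[0], raw[-1], weighted[0], weighted[-1])
```

The raw score magnitudes grow without bound as the covariate becomes extreme, while the $p^{\alpha}$-weighted terms shrink toward zero, mirroring the bounded IF curves observed for larger $\alpha$.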
\begin{figure}[!ht]
\captionsetup{justification=centering}
\begin{multicols}{2}
\includegraphics[width=1 \linewidth]{Model1-probit-infcurve-beta.pdf}\par
\includegraphics[width=1 \linewidth]{Model1-probit-infcurve-gamma.pdf}\par \end{multicols}
\begin{multicols}{2}
\includegraphics[width=1 \linewidth]{Model2-probit-infcurve-beta.pdf}\par
\includegraphics[width=1 \linewidth]{Model2-probit-infcurve-gamma.pdf}\par
\end{multicols}
\caption{IF plots for $\beta_{1}$ and $\gamma_{2}$ as per \descref{Model 1}, \descref{Model 2} with probit link as in Section \ref{simulation study}}
\label{fig:IF model1 to model2}
\end{figure}
\begin{figure}
\captionsetup{justification=centering}
\begin{multicols}{2}
\includegraphics[width=1 \linewidth]{Model3-probit-infcurve-beta.pdf}\par
\includegraphics[width=1 \linewidth]{Model3-probit-infcurve-gamma.pdf}\par
\end{multicols}
\begin{multicols}{2}
\includegraphics[width=1\linewidth]{Model4-probit-infcurve-beta.pdf}\par
\includegraphics[width=1\linewidth]{Model4-probit-infcurve-gamma.pdf}\par
\end{multicols}
\begin{multicols}{2}
\includegraphics[width=1\linewidth]{Model5-probit-infcurve-beta.pdf}\par
\includegraphics[width=1\linewidth]{Model5-probit-infcurve-gamma.pdf}\par
\end{multicols}
\caption{IF plots for $\beta_{1}$ and $\gamma_{2}$ as per \descref{Model 3}, \descref{Model 4} and \descref{Model 5} with probit link as in Section \ref{simulation study}}
\label{fig:IF model3 to model5}
\end{figure}
\newpage
\subsection{ \textbf{Asymptotic Breakdown point at the model}}\label{Breakdown point}
In this section, our goal is to find the asymptotic breakdown point of the MDPD functional $\theta_{\alpha}$ for fixed positive $\alpha$ under certain conditions. Suppose that the $i$-th true probability distribution function $G_{i}$ is contaminated at $\epsilon$-proportion by a sequence of contaminating distributions $\{K_{i, m_{1}}\}_{m_{1}=1}^{\infty}$, denoted $H_{i,\epsilon,m_{1}}\equiv (1-\epsilon)G_{i}+\epsilon K_{i,m_{1}}$ for fixed $n,m$. Further, let $h_{i,\epsilon,m_{1}}$, $g_{i}$ and $k_{i,m_{1}}$ denote the probability density functions corresponding to the respective CDFs. We assume that the true densities $\{g_{i}\}$, the models $\{p_{\theta,i}; \theta \in \Theta\}$ and the contaminating sequence of densities $\{k_{i,m_{1}}\}$ are all supported on $\chi$. Let $\theta_{\alpha}^{h_{\epsilon,m_{1}}}$ be the MDPD functional when all the $G_{i}$'s are $\epsilon$-contaminated as above, i.e.,
\begin{align}
\theta_{\alpha}^{h_{\epsilon, m_{1}}}:=\arg \min_{\Theta}
\frac{1}{n}\sum_{i=1}^{n} d_{\alpha}(h_{i, \epsilon, m_{1}}, p_{\theta,i}).
\end{align}
We declare that breakdown occurs for $\theta_{\alpha}$ at the contamination proportion $\epsilon$, if $||\theta_{\alpha}-\theta_{\alpha}^{h_{\epsilon,m_{1}}}|| \uparrow +\infty$ when $m_{1} \uparrow +\infty$ for fixed $n,m$ (cf. Simpson \citep{simpson1989hellinger}; Park and Basu \citep{park2004minimum}). Define
\begin{equation}
D_{\alpha}\Big( g_{i}(j),p_{\theta, i}(j)\Big)=\Bigg\{p_{\theta,i}^{1+\alpha}(j)-\Big(1+\frac{1}{\alpha}\Big) p_{\theta,i}^{\alpha}(j)g_i(j)+\frac{1}{\alpha}g_i(j)^{1+\alpha}\Bigg\} \mbox{ for } j \in \chi,
\end{equation}
and $M_{i}^{\alpha}=\sum_{j}p_{\theta_{\alpha},i}(j)^{1+\alpha}$ for all $i$. If the true densities belong to the model families, we get $g_{i}\equiv p_{\theta_{0},i}$ for the true $\theta_{0}$ and $i=1,\ldots,n$; in that case $\theta_{\alpha}=\theta_{0}$. We make the following assumptions to find the asymptotic breakdown point of $\theta_{\alpha}$. Throughout this section we assume that $m$ is fixed.
\begin{description}
\descitem{(BP1)}
$g_{i}$ and $p_{\theta,i}$ belong to a common family $\mathscr{F}_{\theta,i}, i=1,2, \ldots ,n$.
\descitem{(BP2)}
There exists a set $B \subset \chi$ such that for all $j \in B$,
\begin{equation}
\big\{k_{i, m_{1}}(j)- p_{\theta,i}(j)\big\} \longrightarrow \delta_{1}(j) \ge 0
\mbox{ and } \sum_{j \in B}k_{i, m_{1}}(j) \longrightarrow 1
\mbox{ as } m_{1} \uparrow +\infty \mbox{ for all } i
\end{equation}
uniformly for $||\theta||< +\infty$. This condition means that on a set $B \subset \chi$, the contaminating sequence of densities $k_{i,m_{1}}$ asymptotically dominates $p_{\theta,i}$ when the parameter $\theta$ is uniformly bounded; moreover, $B^{c}$ asymptotically becomes a $K_{i,m_{1}}$-null set as $m_{1}$ increases to infinity for each $i$.
\descitem{(BP3)}
There exists a set $C \subset \chi$ such that for all $j \in C$,
\begin{align}
\big\{p_{\theta_{m_{1}},i}(j)- p_{\theta_{\alpha},i}(j)\big\} \longrightarrow \delta_{2}(j) \ge 0
\mbox{ and } \sum_{j\in C} p_{\theta_{m_{1}},i}(j) \longrightarrow 1
\mbox{ as } m_{1} \uparrow +\infty \mbox{ for all } i,
\end{align}
for any sequence $\{\theta_{m_{1}}\}$ such that $||\theta_{m_{1}}|| \uparrow +\infty$ as $m_{1} \uparrow +\infty$. This means that as $||\theta_{m_{1}}||$ diverges, the associated sequence of models $p_{\theta_{m_{1}},i}$ tends to dominate the true density $p_{\theta_{\alpha},i}$ on a set $C$, and the sequence $p_{\theta_{m_{1}},i}$ remains concentrated on $C$ for $i=1,2, \ldots ,n$.
\descitem{(BP4)} Assume that for any density $q_{i}$ supported on $\chi$, we have \begin{equation}\label{BP4:eq1}
d_{\alpha}(\epsilon q_{i}, p_{\theta,i}) \ge d_{\alpha}(\epsilon p_{\theta_{\alpha},i}, p_{\theta_{\alpha},i}) \mbox{ for all } \theta,i \mbox{ and } 0 < \epsilon < 1.
\end{equation}
This means that $d_{\alpha}(\epsilon q_{i}, p_{\theta,i})$ will be minimized at $\theta=\theta_{\alpha}$ and $q_{i}=p_{\theta_{\alpha},i}$ for all $i$. Further assume that
\begin{equation}\label{BP4:eq2}
\underset{m_{1} \uparrow +\infty}{\limsup} \Big( k_{i,m_{1}}(j) \Big)
\le p_{\theta_{\alpha},i}(j) \mbox{ for all } i,j, \mbox{ and }
M^{\alpha}=\frac{1}{n} \sum_{i=1}^{n}M^{\alpha}_{i} < +\infty
\end{equation}
for fixed $n,m$.
\end{description}
\begin{Remark}
Since we know that $p_{\theta,i}(j)=F(\gamma_{j}-x_{i}^{T}\beta)- F(\gamma_{j-1}-x_{i}^{T}\beta)$, \descref{(BP3)} can be verified in the following situations.
\begin{description}
\descitem{(S1)} Any particular $\gamma_{j}$ increases to $+\infty$ or decreases to $-\infty$, but $\beta$'s remain bounded.
\descitem{(S2)} Some of $\gamma_{j}$'s diverge, but $\beta$'s remain bounded.
\descitem{(S3)} $\gamma_{j}$'s remain bounded but $\beta$'s diverge to $\pm \infty$.
\end{description}
In Scenario \descref{(S1)}, it is clear that $p_{\theta,i}(j)$ converges to $1$ for one specific value of $j$ and to $0$ for the rest, so assumption \descref{(BP3)} is satisfied. Scenario \descref{(S2)} is a simple extension of \descref{(S1)}; in this case the probability $p_{\theta,i}(j)$ becomes concentrated on a subset of $\chi$ determined by the set of $\gamma_{j}$'s that diverge. For example, if $\gamma_{i_{1}}, \ldots , \gamma_{{m}}$ diverge to $+\infty$, then the probability is concentrated on the set $\{1,2, \ldots, (i_{1}-1)\}$. In \descref{(S3)}, the behaviour depends on the sign of $x_{i}^{T}\beta$. If $x_{i}^{T}\beta \to +\infty$, then all the terms $(\gamma_{j}-x_{i}^{T}\beta)$ go to $-\infty$; in that case, the last term $p_{\theta,i}(m)=1-F(\gamma_{m-1}-x_{i}^{T}\beta) \to 1$, so the mass gets concentrated on the singleton set $\{m\}$. On the other hand, if $x_{i}^{T}\beta \to -\infty$, then the probability mass gets concentrated on the set $\{1\}$.
In \descref{(BP4)}, we assume some specific conditions regarding the divergence in (\ref{BP4:eq1}), and also about the true and contaminating densities in Equation (\ref{BP4:eq2}).
\end{Remark}
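The concentration described in \descref{(S3)} can be verified numerically. A minimal sketch, assuming a probit link and illustrative cutpoints: as $x_{i}^{T}\beta$ grows, the mass $p_{\theta,i}(m)=1-F(\gamma_{m-1}-x_{i}^{T}\beta)$ of the last category tends to $1$.

```python
import math

def Phi(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cell_probs(eta, gamma):
    # category probabilities for linear predictor eta = x^T beta and cutpoints
    # gamma_1 < ... < gamma_{m-1}; conventionally gamma_0 = -inf, gamma_m = +inf
    cuts = [-math.inf] + list(gamma) + [math.inf]
    return [Phi(cuts[j + 1] - eta) - Phi(cuts[j] - eta) for j in range(len(cuts) - 1)]

gamma = (-1.0, 0.5, 2.0)  # m = 4 categories (illustrative)
last_mass = [cell_probs(eta, gamma)[-1] for eta in (0.0, 2.0, 5.0, 10.0)]
print(last_mass)
```

The mass of the last category increases monotonically toward $1$ as the linear predictor diverges, while the probability vector remains a valid distribution throughout.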
\begin{Theorem} \label{Theorem: Asymptotic Breakdown point}
Suppose $g_{i}, p_{\theta,i}$ and the contaminating sequence of densities $\{k_{i,m_{1}}\}_{m_{1}=1}^{\infty}$ are supported on $\chi$, $i=1, \ldots,n$. Then under the assumptions of \descref{(BP1)}-\descref{(BP4)}, the asymptotic breakdown point $\epsilon^{*}$ of the MDPD functional $\theta_{\alpha}$ is at least $\frac{1}{2}$ at the models for $\alpha>0$.
\end{Theorem}
In Theorem \ref{Theorem: Asymptotic Breakdown point}, we establish that the minimum DPD procedure generates estimators with a high breakdown point for all $\alpha >0$ when the model and the true densities are all embedded in the same family.
\section{Simulation Study}\label{simulation study}
We simulate samples of size $N=200$ from each of the following five models, \descref{Model 1} to \descref{Model 5}, and compute the estimate $\hat{\theta}_{ \alpha, b}$ for $b=1,2, \ldots ,B=1000$ replications. The final estimate is obtained as $\hat{\theta}_{\alpha}=\frac{1}{B}\sum_{b=1}^{B}\hat{\theta}_{\alpha,b}$. The individual biases (in parentheses) are reported in Tables \ref{Table:Model 1}-\ref{Table: Model4 type 2 contam}, and the graphs of the ARE in Figures \ref{fig:Model 1 and Model 5 ARE}-\ref{fig:Model 4 ARE} for different values of $\alpha$. For the simulation study, $\frac{1}{B}\sum_{b=1}^{B}(\hat{\theta}_{\alpha, b}-\theta_{0})^{T}(\hat{\theta}_{\alpha, b}-\theta_{0})$ consistently estimates $MSE(\alpha)$ defined in Equation (\ref{mse:eq1}), since $\hat{\theta}_{\alpha}$ is consistent for $\theta_{0}$. This formulation is used to estimate the ARE described in Equation (\ref{ARE}).
\begin{description}
\descitem{Model 1}: The response variable $Y$ assumes $5$ categories $1, \ldots, 5$, generated by ({\ref{latent relationship}}). It depends on one qualitative explanatory variable with four categories, coded through three dichotomous $0-1$ variables $X_1, X_2, X_3$ such that at most one of them can take value 1. The error component in (\ref{linear latent regressiion eq}) is $\mathcal{N}(0,1)$, the regression coefficients $\beta=(2.5,1.2,0.5)^{T}$ and the cut-points are given by $\gamma=(-0.7,0,1.5,2.9)^{T}$.
\descitem{Model 2}: The response variable $Y$ assumes $5$ categories and depends on one regressor $X \sim \mathcal{N}(0,1)$. The latent variable is $Y^{*}=1.5X+\epsilon$, with cutpoints $\gamma=(-1.7,-0.5,0.5,1.7)^{T}$ when $\epsilon \sim \mathcal{N}(0,1)$ (probit link), and $\gamma=(-2.1,-0.6,0.6,2.1)^{T}$ when $\epsilon$ follows the logistic distribution with mean $0$ and variance $\frac{\pi^{2}}{3}$ (logit link). The cutpoints are chosen so that they roughly correspond to the same percentiles of the latent variable $Y^{*}$ for both the logit and probit links.
\descitem{Contaminated Model 2}: Suppose that gross errors occur so that, for $\kappa$ statistical units, the value of the regressor, which is $\mathcal{N}(0,1)$, is erroneously recorded as $25$. Here $\kappa$ is chosen so that it accounts for a $20 \%$ contamination of the covariates in a sample of size 200.
\descitem{Model 3}: The response variable $Y$ assumes 4 categories, and depends on two regressors $X_{1} \sim \mathcal{N}(0,1)$ and $X_{2} \sim \mathcal{N}(0,4)$ with $ Cov(X_1,X_2)=1.2$. The regression coefficients are given by $\beta=(1.5, 0.7)^{T}$, while the cutpoints are $\gamma=(-2.3,0,2.3)^{T}$ for probit link and $\gamma=(-2.6,0,2.6)^{T}$ for standard logit link. Denote by $\mu_{1}=(0,0)^{T}$ and
$\Sigma_{1}=\begin{pmatrix}
1 & 1.2 \\
1.2 & 4 \\
\end{pmatrix}$.
\descitem{Contaminated Model 3}: Model 3 is contaminated up to $20\%$ by erroneously recording two statistical units as $25$.
\descitem{Model 4}: The response variable $Y$ assumes 3 categories depending upon three regressors $X_{1} \sim \mathcal{N}(0,1)$, $X_{2} \sim \mathcal{N}(0,4)$ and $X_{3} \sim \mathcal{N}(0,9)$ with $ Cov(X_{1},X_{2})=1.5$, $Cov(X_{1},X_{3})=0.8$, $Cov(X_{2},X_{3})=2.5$, $\beta=(2.5,1.2, 0.7)^{T}$, while $\gamma=(-3.8,3.8)^{T}$ for the probit link and $\gamma=(-4,4)^{T}$ for the standard logit link. Denote by $\mu_{2}=(0,0,0)^{T}$ and
$\Sigma_{2}=\begin{pmatrix}
1 & 1.5 & .8 \\
1.5 & 4 & 2.5 \\
.8 & 2.5 & 9 \\
\end{pmatrix}$.
\descitem{Contaminated Model 4 (Type 1)}: In the first case, we have contaminated the covariates by drawing it from $0.8 \times \mathcal{N}(\mu_{2},\Sigma_{2})+0.2\times\delta_{(25,25,25)}$ for both probit and logit links.
\descitem{Contaminated Model 4 (Type 2)}: Here we have inserted two superfluous independent regressors $X_{4} \sim \mathcal{N}(0,1)$ and $X_{5} \sim \mathcal{N}(0,1)$, so the overparametrized model assumed for the latent variable in the estimation is $Y^{*}=\beta_{1}X_{1}+\cdots+\beta_{5}X_{5}+\epsilon$, with $\beta_{4}=\beta_{5}=0$. In addition, the data have been contaminated by changing the value of $X_{1}$ to $5$ for $20\%$ of the statistical units.
\clearpage
\newpage
\descitem{Model 5}: The response variable $Y$ assumes categories $1,\cdots,4$ with two explanatory variables $D \sim Bernoulli(\frac{1}{2})$ and $X \sim \mathcal{N}(0,1)$. The latent regression model is $Y^{*}=2.5D+1.2X+0.7XD+\epsilon$, and the cut-points are $\gamma=(-1,1,3)^{T}$ and $\gamma=(-1.4,1.1,3.4)^{T}$ for the standard probit and logit links, respectively.
\end{description}
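To make the data-generating mechanism concrete, the sketch below generates ordinal responses for \descref{Model 2} with the probit link from the latent-variable relationship; the function name and the seed are our illustrative choices.

```python
import random

def generate_model2(n, seed=0):
    # latent Y* = 1.5 X + eps with X, eps ~ N(0,1); cut Y* at gamma into 5 categories
    rng = random.Random(seed)
    gamma = (-1.7, -0.5, 0.5, 1.7)
    data = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        y_star = 1.5 * x + rng.gauss(0.0, 1.0)  # latent regression
        y = 1 + sum(y_star > g for g in gamma)  # ordinal category in {1,...,5}
        data.append((x, y))
    return data

sample = generate_model2(200)
print(len(sample))
```

The logit-link version would only change the error draw to a logistic variate and use the corresponding cutpoints.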
\graphicspath{ {./graphs/} }
\begin{figure}[!h]
\begin{multicols}{2}
\includegraphics[scale=0.5]{Model1.pdf}\par
\includegraphics[scale=0.5]{Model5.pdf}\par
\end{multicols}
\caption{ARE of MDPDE simulated from pure \descref{Model 1} (probit link) and \descref{Model 5} (probit $\&$ logit links)}
\label{fig:Model 1 and Model 5 ARE}
\end{figure}
\begin{table}[!h]
\centering
\caption{MDPDE (bias) based on pure data from \descref{Model 1} for probit link}
\begin{adjustbox}{width=0.66\linewidth}
\begin{tabular}{c c c c c c c c}
\hline
$\alpha$ & $\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $\hat{\gamma}_{3}$ & $\hat{\gamma}_{4}$ & $\hat{\beta}_{1}$ & $\hat{\beta}_{2}$ & $\hat{\beta}_{3}$ \\
\hline
0.0 & -0.73293 & -0.02186 & 1.48262 & 2.88306 & 2.48357 & 1.18127 & 0.47842 \\
& (-0.03293) & (-0.02186) & (-0.01738) & (-0.01694) & (-0.01643) & (-0.01871) & (-0.02158)\\ \cline{2-8}
0.1 & -0.73106 & -0.02083 & 1.48180 & 2.88124 & 2.48073 & 1.18124 & 0.47998 \\
& (-0.03106) & (-0.02083) & (-0.01820) & (-0.01875) & (-0.01926) & (-0.01876) & (-0.02002)\\
\hline
\end{tabular}
\end{adjustbox}
\label{Table:Model 1}
\bigskip
\centering
\caption{MDPDE (bias) based on pure data from \descref{Model 5} for probit $\&$ logit links}
\begin{adjustbox}{width=0.66\linewidth}
\begin{tabular}{c c c c c c c c c }
\hline
$\alpha$ & Link &$\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $\hat{\gamma}_{3}$ & $\hat{\beta}_{1}$ & $\hat{\beta}_{2}$ & $\hat{\beta}_{3}$ \\
\hline
0.0 & probit & -1.02333 & 1.01898 & 3.08823 & 2.57460 & 1.22683 & 0.73191 \\
& & (-0.02333) & (0.01898) & (0.08823) & (0.07460) & (0.02683) & (0.03191) \\
\cline{2-8}
& logit & -1.41299 & 1.13264 & 3.45446 & 2.54765 & 1.20968 & 0.70920 \\
& & (-0.01298) & (0.03264) & (0.05446) & (0.05476) & (0.00968) & (0.00920)\\
\hline
0.1 & probit & -1.02258 & 1.01799 & 3.08705 & 2.57289 & 1.22511 & 0.73211 \\
& & (-0.02258) & (0.01799) & (0.08705) & (0.07289) & (0.02511) & (0.03211) \\
\cline{2-8}
& logit & -1.42312 & 1.13587 & 3.47362 & 2.57441 & 1.22934 & 0.71220 \\
& & (-0.02312) & (0.03587) & (0.07362) & (0.07441) & (0.02934) & (0.01220)\\
\hline
\end{tabular}
\end{adjustbox}
\label{Table:Model 5}
\end{table}
Here pure data have been simulated from both \descref{Model 1} and \descref{Model 5}. From Figure \ref{fig:Model 1 and Model 5 ARE} we see that all the MDPDEs with $\alpha > 0$ have ARE values smaller than 1.
As $\alpha$ tends to zero, the ARE of the MDPDE approaches $1$. In Tables \ref{Table:Model 1} and \ref{Table:Model 5}, we report the MLE ($\alpha=0$) and the MDPDE with $\alpha=0.1$ for both models. We also note that the MDPDE for $\alpha=0.1$ (or, more generally, $\alpha$ close to zero)
leads to a minimal, if any, increase in bias in each component compared to the MLE. So even under pure data, the use of an MDPDE with a very small value of $\alpha$ will lead to only a negligible loss in efficiency.
\newpage
\clearpage
\begin{figure}[ht]
\begin{multicols}{2}
\includegraphics[scale=0.5]{Model2.pdf}\par
\includegraphics[scale=0.5]{Model3.pdf}\par
\end{multicols}
\caption{ARE for MDPDE simulated from \descref{Model 2} and \descref{Model 3}}
\label{fig:Model 2 and Model 3 ARE}
\end{figure}
\begin{table}[!h]
\centering
\caption{MDPDE (bias) based on pure data simulated from \descref{Model 2}}
\begin{adjustbox}{width=0.66\linewidth}
\begin{tabular}{c c c c c c c }
\hline
$\alpha$ & Link &$\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $\hat{\gamma}_{3}$ & $\hat{\gamma}_{4}$ & $\hat{\beta}_{1}$ \\
\hline
0.0 & probit & -1.71492 & -0.50240 & 0.50529 & 1.71204 & 1.51857 \\
& & (-0.01492) & (-0.00240) & (0.00529) & (0.01204) & (0.01857) \\
\cline{2-7}
& logit & -2.11680 & -0.60111 & 0.61334 & 2.12154 & 1.51309 \\
& & (-0.01680) & (-0.00111) & (0.01334) & (0.02154) & (0.01309) \\
\hline
0.1 & probit &-1.71485 & -0.50235 & 0.50523 & 1.71178 & 1.51773 \\
& & (-0.01485) & (-0.00235) & (0.00523) & (0.01178) & (0.01773) \\
\cline{2-7}
& logit & -2.11758 & -0.60120 & 0.61373 & 2.12211 & 1.51406 \\
& &(-0.01758) & (-0.00120) & (0.01373) & (0.02211) & (0.01406)\\
\hline
0.6 & probit & -1.72796 & -0.50618 & 0.50830 & 1.72362 & 1.52894 \\
& & (-0.02796) & (-0.00618) & (0.00830) & (0.02362) & (0.02894) \\
\cline{2-7}
& logit & -2.13015 & -0.60449 & 0.61761 & 2.13382 & 1.52704 \\
& & (-0.03015) & (-0.00449) & (0.01761) & (0.03382) & (0.02704) \\
\hline
\end{tabular}
\end{adjustbox}
\label{Table: Model2 pure}
\bigskip
\centering
\caption{MDPDE (bias) based on data simulated from \descref{Contaminated Model 2}}
\begin{adjustbox}{width=0.66\linewidth}
\begin{tabular}{c c c c c c c }
\hline
$\alpha$ & Link &$\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $\hat{\gamma}_{3}$ & $\hat{\gamma}_{4}$ & $\hat{\beta}_{1}$ \\
\hline
0.0 & probit & -1.18016 & -0.32508 & 0.38281 & 1.22967 & 0.51429 \\
& & (0.51984) & (0.17491) & (-0.11719) & (-0.47033) & (-0.98571) \\
\cline{2-7}
& logit & -1.49769 & -0.38347 & 0.48069 & 1.59253 & 0.00927 \\
& & (0.60231) & (0.21653) & (-0.11932) & (-0.50747) & (-1.49073) \\
\hline
0.1 & probit & -1.35554 & -0.38089 & 0.42250 & 1.38765 & 0.83959 \\
& & (-0.01485) & (-0.00235) & (0.00523) & (0.01178) & (0.01773) \\
\cline{2-7}
& logit & -1.49955 & -0.38416 & 0.48117 & 1.59437 & 0.01408 \\
& & (0.60045) & (0.21583) & (-0.11883) & (-0.50563) & (-1.48591) \\
\hline
0.6 & probit & -1.65861 & -0.47750 & 0.49212 & 1.65604 & 1.38428 \\
& & (0.04139) & (0.02250) & (-0.00788) & (-0.04396) & (-0.11572)\\
\cline{2-7}
& logit & -2.00184 & -0.55670 & 0.59615 & 2.03251 & 1.21096 \\
& & (0.09816) & (0.04330) & (-0.00385) & (-0.06749) & (-0.28904) \\
\hline
\end{tabular}
\end{adjustbox}
\label{Table: Model2 contam}
\end{table}
\clearpage
\newpage
\begin{table}
\centering
\caption{ MDPDE (bias) based on pure data simulated from \descref{Model 3}}
\begin{adjustbox}{width=0.66\linewidth}
\begin{tabular}{c c c c c c c }
\hline
$\alpha$ & Link &$\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $\hat{\gamma}_{3}$ & $\hat{\beta}_{1}$ & $\hat{\beta}_{2}$ \\
\hline
0.0 & probit & -2.36537 & 0.00124 & 2.37498 & 1.55111 & 0.72152 \\
& & (-0.06537) & (0.00124) & (0.07498) & (0.05111) & (0.02152) \\
\cline{2-7}
& logit & -2.62954 & 0.00058 & 2.63183 & 1.52052 & 0.70664 \\
& & (-0.02954) & (0.00058) & (0.03183) & (0.02052) & (0.00664) \\
\hline
0.1 & probit & -2.36057 & 0.00094 & 2.36977 & 1.54677 & 0.72033 \\
& & (-0.06057) & (0.00094) & (0.06977) & (0.04677) & (0.02033) \\
\cline{2-7}
& logit & -2.63041 & 0.00064 & 2.63224 & 1.52032 & 0.70680\\
& & (-0.03041) & (0.00064) & (0.03224) & (0.02032) & (0.00680) \\
\hline
0.5 & probit & -2.37751 & -0.00011 & 2.38625 & 1.55510 & 0.72689 \\
& & (-0.07751) & (-0.00011) & (0.08625) & (0.05510) & (0.02689) \\
\cline{2-7}
& logit & -2.63677 & 0.00090 & 2.63654 & 1.52289 & 0.70856 \\
& & (-0.03677) & (0.00090) & (0.03654) & (0.02289) & (0.00856)\\
\hline
\end{tabular}
\end{adjustbox}
\label{Table: Model3 pure}
\end{table}
\begin{table}[!h]
\centering
\caption{MDPDE (bias) based on data simulated from \descref{Contaminated Model 3}}
\begin{adjustbox}{width=0.66\linewidth}
\begin{tabular}{c c c c c c c c c}
\hline
$\alpha$ & Link &$\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $\hat{\gamma}_{3}$ & $\hat{\beta}_{1}$ & $\hat{\beta}_{2}$ \\
\hline
0.0 & probit & -1.90399 & 0.01411 & 1.94637 & 0.95555 & 0.61256 \\
& & (0.39601) & (0.01411) & (-0.35363) & (-0.54445) & (-0.08744)\\
\cline{2-7}
& logit & -1.50286 & 0.05133 & 1.60350 & -0.47878 & 0.51856 \\
& & (1.09714) & (0.05133) & (-0.99650) & (-1.97878) & (-0.18144) \\
\hline
0.1 & probit & -2.06321 & 0.00891 & 2.09949 & 1.15567 & 0.65137 \\
& & (0.23679) & (0.00891) & (-0.20051) & (-0.34433) & (-0.04863) \\
\cline{2-7}
& logit & -1.48739 & 0.05205 & 1.58747 & -0.50362 & 0.51430 \\
& & (1.11261) & (0.05205) & (-1.01253) & (-2.00362) & (-0.18569)\\
\hline
0.5 & probit & -2.33634 & 0.00072 & 2.35327 & 1.50241 & 0.71931 \\
& & (-0.03634) & (0.00072) & (0.05327) & (0.00241) & (0.01931) \\
\cline{2-7}
& logit & -2.51668 & 0.01054 & 2.53122 & 1.32849 & 0.69356 \\
& & (0.08332) & (0.01054) & (-0.06878) & (-0.17151) & (-0.00644)\\
\hline
\end{tabular}
\end{adjustbox}
\label{Table: Model3 contam}
\end{table}
As in the previous cases, the MLE is the best among the estimators considered, but the MDPDEs with small values of $\alpha$ are almost as good for pure data.
Similarly, Tables \ref{Table: Model2 pure}-\ref{Table: Model3 contam} show that, in terms of bias, the MDPDE at $\alpha = 0.1$ performs no worse than the MLE. The improvement in performance with increasing $\alpha$ is also clearly apparent. The findings are similar in Figure \ref{fig:Model 4 ARE} and Tables \ref{Table: Model4}-\ref{Table: Model4 type 2 contam}. At least in Figure \ref{fig:Model 2 and Model 3 ARE}, it appears that the benefits of using the MDPDEs over the MLE are greater for the probit link than for the logit link.
\clearpage
\newpage
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{Model4}
\caption{The graph of estimates of ARE of the estimator for \descref{Model 4}}
\label{fig:Model 4 ARE}
\end{center}
\end{figure}
\begin{table}[!ht]
\centering
\caption{MDPDE (bias) based on pure data simulated from \descref{Model 4}}
\begin{adjustbox}{width=0.66\linewidth}
\begin{tabular}{c c c c c c c c c}
\hline
$\alpha$ & Link &$\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $\hat{\beta}_{1}$ & $\hat{\beta}_{2}$ & $\hat{\beta}_{3}$ \\
\hline
0.0 & probit & -4.09169 & 4.10612 & 2.69224 & 1.29544 & 0.75574\\
& & (-0.29169) & (0.30612) & (0.19224) & (0.09544) & (0.05574) \\
\cline{2-7}
& logit & -4.17711 & 4.17460 & 2.60830 & 1.25717 & 0.72809 \\
& & (-0.17711) & (0.17460) & (0.10830) & (0.05717) & (0.02809) \\
\hline
0.1 & probit & -4.09663 & 4.11023 & 2.69483 & 1.29642 & 0.75621 \\
& & (-0.29663) & (0.31023) & (0.19483) & (0.09642) & (0.05621)\\
\cline{2-7}
& logit & -4.18323 & 4.18098 & 2.61070 & 1.25960 & 0.72898 \\
& & (-0.18323) & (0.18098) & (0.11070) & (0.05960) & (0.02898)\\
\hline
0.3 & probit & -4.13876 & 4.15050 & 2.72111 & 1.30926 & 0.76340 \\
& & (-0.33876) & (0.35050) & (0.22111) & (0.10926) & (0.06340)\\
\cline{2-7}
& logit & -4.20452 & 4.20236 & 2.62241 & 1.26716 & 0.73241 \\
& & (-0.20452) & (0.20236) & (0.12241) & (0.06716) & (0.03241) \\
\hline
\end{tabular}
\end{adjustbox}
\label{Table: Model4}
\bigskip
\centering
\caption{MDPDE (bias) based on simulated data from \descref{Contaminated Model 4 (Type 1)}}
\begin{adjustbox}{width=0.66\linewidth}
\begin{tabular}{c c c c c c c c c}
\hline
$\alpha$ & Link &$\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $\hat{\beta}_1$ & $\hat{\beta}_{2}$ & $\hat{\beta}_{3}$ \\
\hline
0.0 & probit & -4.33654 & 4.33873 & 2.85627 & 1.37136 & 0.80135 \\
& & (-0.53654) & (0.53873) & (0.35627) & (0.17136) & (0.10135) \\
\cline{2-7}
& logit & -2.32834 & 2.43297 & 0.11804 & 1.01109 & 0.33154 \\
& & (1.67166) & (-1.56703) & (-2.38195) & (-0.18891) & (-0.36846) \\ \hline
0.1 & probit & -4.32141 & 4.32448 & 2.84615 & 1.36621 & 0.79834 \\
& & (-0.52141) & (0.52448) & (0.34615) & (0.16621) & (0.09834)\\
\cline{2-7}
& logit & -3.54745 & 3.60184 & 1.76741 & 1.19349 & 0.59415 \\
& & (0.45255) & (-0.39815) & (-0.73259) & (-0.00651) & (-0.10585)\\ \hline
0.3 & probit & -4.27931 & 4.29168 & 2.82178 & 1.35210 & 0.79031\\
& & (-0.47931) & (0.49168) & (0.32178) & (0.15210) & (0.09031)\\
\cline{2-7}
& logit & -4.18826 & 4.19614 & 2.60108 & 1.27080 & 0.72690 \\
& & (-0.18826) & (0.19614) & (0.10108) & (0.07080) & (0.02690) \\
\hline
\end{tabular}
\end{adjustbox}
\label{Table: Model4 type 1 contam}
\end{table}
\clearpage
\newpage
\begin{table}[h]
\centering
\caption{MDPDE (bias) based on data simulated from \descref{Contaminated Model 4 (Type 2)}}
\begin{adjustbox}{width=0.9\linewidth}
\begin{tabular}{c c c c c c c c c c c}
\hline
$\alpha$ & Link &$\hat{\gamma}_1$ & $\hat{\gamma}_2$ & $\hat{\beta}_{1}$ & $\hat{\beta}_{2}$ & $\hat{\beta}_{3}$ & $\hat{\beta}_{4}$ & $\hat{\beta}_{5}$ \\
\hline
0.0 & probit & -2.75095 & 2.81469 & 0.87770 & 1.25353 & 0.49455 & 0.00571 & 0.00682 \\
& & (1.04905) & (-0.98531) & (-1.62230) & (0.05353) & (-0.20545) & (0.00571) & (0.00682) \\
\cline{2-9}
& logit & -3.06739 & 3.12999 & 0.00591 & 1.68502 & 0.50388 & -0.01104 & 0.00363 \\
& & (0.93261) & (-0.87001) & (-2.49409) & (0.48502) & (-0.19612) & (-0.01104) & (0.00363) \\
\hline
0.1 & probit & -3.05495 & 3.10808 & 1.21080 & 1.29002 & 0.55411 & 0.00275 & 0.00692 \\
& & (0.74505) & (-0.69192) & (-1.28920) & (0.09002) & (-0.14588) & (0.00275) & (0.00692)\\
\cline{2-9}
& logit & -3.05469 & 3.11851 & 0.00592 & 1.67775 & 0.50199 & -0.01044 & 0.00392 \\
& & (0.94531) & (-0.88149) & (-2.49408) & (0.47775) & (-0.19801) & (-0.01044) & (0.00392)\\
\hline
0.3 & probit & -3.48675 & 3.52793 & 1.79781 & 1.30439 & 0.63769 & 0.00734 & 0.00938 \\
& & (0.31325) & (-0.27207) & (-0.70220) & (0.10439) & (-0.06231) & (0.00734) & (0.00938)\\ \cline{2-9}
& logit & -3.37928 & 3.42977 & 0.70755 & 1.57846 & 0.56788 & -0.00802 & 0.01036 \\
& & (0.62072) & (-0.57023) & (-1.79245) & (0.37846) & (-0.13212) & (-0.00802) & (0.01036) \\
\hline
\end{tabular}
\end{adjustbox}
\label{Table: Model4 type 2 contam}
\end{table}
When pure data are simulated from \descref{Model 4}, a similar conclusion holds. From Tables \ref{Table: Model4 type 1 contam} and \ref{Table: Model4 type 2 contam}, we observe that the MDPDEs with $\alpha=0.1,0.3$ produce much more stable estimates than the MLE.
\section{Data-driven selection of the tuning parameter \texorpdfstring{$\alpha$}{}}
\label{alpha selection}
Choosing the tuning parameter $\alpha$ suitably is an important practical problem. Small values of $\alpha$ are needed for greater efficiency under the model, and large values for greater stability away from it. But normally the experimenter will not know, a priori, the amount of contamination in the data, so a data-driven selection of the ``optimal" tuning parameter is very important. Among the different existing approaches, we follow that of Warwick and Jones \citep{warwick2005choosing}, choosing the optimum data-based tuning parameter by constructing an empirical version of the mean square error and minimizing it over the tuning parameter. This method has been generalized by Ghosh and Basu \citep{ghosh2015robust}, and further extended by Basak et al. \citep{basak2021optimal}. Minimization of the empirical MSE has been shown to provide satisfactory performance in selecting an appropriate tuning parameter based on the data. The empirical version of the asymptotic mean square error is expressed, as a function of the tuning parameter and a pilot estimator $\theta_{P}$, as
\begin{equation}\label{mse:eq1}
\widehat{MSE}(\alpha; \theta_{P})=(\hat{\theta}_{\alpha}-\theta_{P})^{T}(\hat{\theta}_{\alpha}-\theta_{P})+\frac{1}{n}tr\Big(\hat{\Psi}_{n}^{-1}(\alpha) \hat{\Omega}_{n}(\alpha) \hat{\Psi}_{n}^{-1}(\alpha)\Big).
\end{equation}
The optimal $\alpha$, obtained by minimizing this expression over $\alpha$ in the interval $[0, 1]$, depends on the choice of the pilot estimator $\theta_{P}$, for which we use the minimum $L_{2}$ distance estimator (the MDPDE with $\alpha = 1$). Some histograms, covering both pure and contaminated data, are included in the Supplementary material, demonstrating the effectiveness of the algorithm in picking a suitable tuning parameter for the data set under consideration. The proposed method is used in the analysis of a real-life data set in Section \ref{real data analysis}. We also hope to apply the refinements of Basak et al. \citep{basak2021optimal} for this purpose in the future.
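A minimal sketch of this selection rule follows. It is hedged in two respects: the model is a toy normal-location family with unit variance, not the ordinal regression model of this paper, and the sandwich term $\hat{\Psi}_{n}^{-1}(\alpha)\hat{\Omega}_{n}(\alpha)\hat{\Psi}_{n}^{-1}(\alpha)$ is replaced by the known closed-form asymptotic variance factor $(1+\alpha^{2}/(1+2\alpha))^{3/2}$ of the normal-mean MDPDE; the grids and the contamination scheme are illustrative.

```python
import math
import random

def mdpde_mean(x, alpha, grid):
    # For N(theta, 1) the DPD objective reduces (up to theta-free constants) to
    # maximizing sum_i exp(-alpha (x_i - theta)^2 / 2); alpha = 0 gives the MLE.
    def crit(theta):
        if alpha == 0.0:
            return -sum((xi - theta) ** 2 for xi in x)
        return sum(math.exp(-0.5 * alpha * (xi - theta) ** 2) for xi in x)
    return max(grid, key=crit)

def empirical_mse(x, alpha, theta_pilot, grid):
    # empirical version of the asymptotic MSE, as in the display above
    theta_hat = mdpde_mean(x, alpha, grid)
    avar = (1.0 + alpha ** 2 / (1.0 + 2.0 * alpha)) ** 1.5
    return (theta_hat - theta_pilot) ** 2 + avar / len(x)

rng = random.Random(1)
x = [rng.gauss(0.0, 1.0) for _ in range(150)] + [10.0] * 30  # 1/6 gross errors
grid = [i / 100.0 for i in range(-200, 500)]                 # theta search grid
theta_pilot = mdpde_mean(x, 1.0, grid)                       # alpha = 1 pilot
alphas = [i / 10.0 for i in range(11)]
best = min(alphas, key=lambda a: empirical_mse(x, a, theta_pilot, grid))
print(best)
```

Under this contamination the empirical MSE at $\alpha=0$ is inflated by the squared distance between the outlier-dragged MLE and the robust pilot, so the rule selects a strictly positive $\alpha$.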
\section{Real Data Analysis} \label{real data analysis}
The wine quality data, from the UCI Machine Learning Repository, contain $12$ variables, of which wine quality is a categorical response taking values in $\chi=\{1,\ldots,7\}$. The remaining $11$ continuous variables are taken as regressors in our set-up. To measure the performance of the different methods in estimating the parameters of the parametric model in Equation (\ref{linear latent regressiion eq}), we split the entire data set into two parts: a training set and a test set. The training set, consisting of $75\%$ of the data points, is used to estimate the parameters; these estimates are then used to predict the wine quality levels in the test set. The proportion of cases where the true levels match the predicted levels in the test set measures the accuracy of a method. Since the ranges of the covariates vary widely, each covariate $X_{i}$ is transformed as
\begin{equation}\label{data transformation}
X_{i}^{*}=\frac{X_{i}-\operatorname{median}(X_{i})}{1.4826 \times \operatorname{MAD}(X_{i})}
\end{equation}
to aid convergence of the optimization algorithms. For normalization of the test data, we use the medians and MADs of the corresponding covariates from the training data only.
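For concreteness, this robust standardization can be sketched as follows; the function name and array-based interface are our own illustration, with the normal consistency constant $1/\Phi^{-1}(3/4)\approx 1.4826$.

```python
import numpy as np

def robust_scale(train, test):
    # Centre each column at the training median and scale by
    # 1.4826 * MAD, the consistency factor for the normal standard
    # deviation; the test data reuse the *training* medians and MADs.
    med = np.median(train, axis=0)
    mad = np.median(np.abs(train - med), axis=0)
    scale = 1.4826 * mad
    return (train - med) / scale, (test - med) / scale
```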
In Table \ref{table: comparative study}, we compare the performance of the MDPDE with those of the MLE and the $95\%$-trimmed MLE. The $95\%$-trimmed MLE refers to the MLE computed after deleting the observations whose covariate vectors $X_{i}$ satisfy
\begin{align}\label{outlier deletion}
(X_{i}-\mu)^{T}S^{-1}(X_{i}-\mu) \ge \chi^{2}_{0.95, 11}.
\end{align}
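This trimming rule discards observations whose squared Mahalanobis distance from the covariate mean exceeds the $\chi^{2}$ cutoff. A minimal Python sketch, assuming $\mu$ and $S$ are estimated by the sample mean vector and covariance matrix of the covariates, with the tabulated value $\chi^{2}_{0.95,11}\approx 19.675$ hard-coded to avoid a SciPy dependency:

```python
import numpy as np

CHI2_95_11 = 19.675  # tabulated 95th percentile of chi-square, 11 d.f.

def mahalanobis_keep_mask(X, cutoff=CHI2_95_11):
    # Boolean mask: True for observations retained by the trimming rule,
    # i.e. squared Mahalanobis distance below the chi-square cutoff.
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    d2 = np.einsum('ij,jk,ik->i', d, S_inv, d)  # rowwise quadratic forms
    return d2 < cutoff
```

The trimmed MLE is then the MLE recomputed on the retained rows and the corresponding responses; for covariate dimensions other than $11$, the cutoff must be replaced by the appropriate $\chi^{2}$ quantile.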
From Table \ref{table: comparative study}, we notice that the MDPDE with $\alpha \ge 0.1$ produces better accuracy ($51.7\%$--$52.8\%$ for the probit link and $52.1\%$--$53.1\%$ for the logit link) than both the MLE ($51.2\%$ for probit and $51.9\%$ for logit) and the $95\%$-trimmed MLE ($52.3\%$ for probit and $50.9\%$ for logit).
\begin{table}[ht]
\centering
\caption{Comparative study of performances for different methods}
\begin{tabular}{c c c c}
\hline
Method & Link function & Tuning parameter ($\alpha$) & Accuracy\\
\hline
\mbox{MLE} & probit & 0 & 0.51184 \\
& logit & 0 & 0.51918 \\
\hline
\mbox{$95\%-$trimmed MLE} & probit & 0 & 0.52327 \\
& logit & 0 & 0.50939 \\
\hline
\mbox{MDPDE} & probit & 0.1 & 0.51673 \\
& logit & 0.1 & 0.52082 \\
\hline
\mbox{MDPDE} & probit & 0.25 & 0.52082 \\
& logit & 0.25 & 0.5249 \\
\hline
\mbox{MDPDE} & probit & 0.5 & 0.52245 \\
& logit & 0.5 & 0.52327 \\
\hline
\mbox{MDPDE} & probit & 0.75 & 0.52735 \\
& logit & 0.75 & 0.52735 \\
\hline
\mbox{MDPDE} & probit & 1 & 0.52816 \\
& logit & 1 & 0.53061 \\
\hline
\end{tabular}
\label{table: comparative study}
\end{table}
\begin{table}[!ht]
\centering
\caption{Optimum tuning parameter along with estimated MSE }
\begin{tabular}{c c c c}
\hline
Method & Link functions & $\alpha$ & Estimated MSE\\
\hline
\mbox{MDPDE} & probit & 0.28 & 0.247652099 \\
& logit & 0.11 & 0.217303162 \\
\hline
\end{tabular}
\label{table:optimum alpha selection}
\end{table}
\begin{figure}[!h]
\begin{multicols}{2}
\includegraphics[scale=0.5]{winequality-alpha-probit.pdf}
\caption{MSE plots for probit link}
\label{mse:probit}
\par
\includegraphics[scale=0.5]{winequality-alpha-logit.pdf}
\caption{MSE plots for logit link}
\label{mse:logit}
\par
\end{multicols}
\end{figure}
Next, we select the optimal value of the tuning parameter using the strategy discussed in Section \ref{alpha selection}. In view of Figures \ref{mse:probit} and \ref{mse:logit}, the estimated MSE is minimized when $\alpha$ lies in the ranges $(0.25, 0.37)$ and $(0, 0.25)$ for the probit and logit links, respectively. The optimal $\alpha$ values, along with the minimized MSE, are reported in Table \ref{table:optimum alpha selection}.
\section{Conclusions}\label{conclusions}
The lack of robustness of likelihood-based inferential procedures poses a major challenge in modelling ordinal response data. Here we explore an alternative robust methodology for estimating the parameters of such statistical models through the minimization of the density power divergence (DPD). The theory developed for the MDPDE when the data points are independent but not necessarily identically distributed finds a natural application in this article. We see from the estimating equations how the choice of the tuning parameter enables the MDPDE to achieve a higher degree of stability against outliers that are inconsistent with the reference model. Robustness and asymptotic efficiency are generally competing concepts, and the balance between them is hard to achieve. We have demonstrated through the simulation studies how a suitable trade-off between these two extremes can be found through a proper choice of the tuning parameter. Moreover, the elegance of the DPD lies in the simplicity of its statistical interpretation, unlike many other M-estimators. Factoring in all these considerations, we believe that the MDPDE provides a useful tool for the applied scientist working with ordinal response models.
\bigskip
\textbf{Acknowledgement:}
The authors of this paper declare that they do not have any conflicts of interest in the subject matter or materials discussed in this manuscript.
Research of AG is partially supported by the INSPIRE Faculty Research Grant from Department of Science and Technology, Govt. of India, and an internal research grant from Indian Statistical Institute, India.
\section{Appendix}\label{appendix}
First, we present the histograms of the best-fitting $\alpha$ for both pure and contaminated data sets simulated from \descref{Model 1} to \descref{Model 5}. These supplement the results presented in Section \ref{simulation study} of the main paper. Notice that the modes of the histograms agree roughly with the optimal choices of $\alpha$ discussed in Section \ref{simulation study}.
\graphicspath{ {./graphs/} }
\begin{figure}[h]
\begin{multicols}{2}
\includegraphics[scale=0.5]{Model1-probit-alpha.pdf}\par
\includegraphics[scale=0.5]{Model5-probit-alpha.pdf}\par
\end{multicols}
\caption{ Pure \descref{Model 1} and \descref{Model 5} with probit link}
\label{fig:opt alpha of Model1 and Model5}
\end{figure}
Since \descref{Model 1} and \descref{Model 5} are pure, most of the optimal $\alpha$'s concentrate on values close to zero. When pure data are simulated from \descref{Model 2}, most of the optimal $\alpha$'s similarly stay close to zero, and they move away from zero when the data are contaminated. Similar patterns are observed for \descref{Model 3} and \descref{Model 4}.
\begin{figure}
\begin{multicols}{2}
\includegraphics[scale=0.5]{Model2-probit-alpha.pdf}\par
\includegraphics[scale=0.5]{Model2-contam-probit-alpha.pdf}\par
\end{multicols}
\begin{multicols}{2}
\includegraphics[scale=0.5]{Model2-logit-alpha.pdf}\par
\includegraphics[scale=0.5]{Model2-contam-logit-alpha.pdf}\par
\end{multicols}
\caption{ Pure and Contaminated \descref{Model 2}}
\label{fig:Model2 alpha}
\end{figure}
\begin{figure}
\begin{multicols}{2}
\includegraphics[scale=0.5]{Model3-probit-alpha.pdf}\par
\includegraphics[scale=0.5]{Model3-contam-probit-alpha.pdf}\par
\end{multicols}
\begin{multicols}{2}
\includegraphics[scale=0.5]{Model3-logit-alpha.pdf}\par
\includegraphics[scale=0.5]{Model3-contam-logit-alpha.pdf}\par
\end{multicols}
\caption{ Pure and Contaminated \descref{Model 3}}
\label{fig:Model3 alpha}
\end{figure}
\begin{figure}
\begin{multicols}{2}
\includegraphics[scale=0.5]{Model4-probit-alpha.pdf}\par
\includegraphics[scale=0.5]{Model4-contam-probit-alpha.pdf}\par
\end{multicols}
\begin{multicols}{2}
\includegraphics[scale=0.5]{Model4-logit-alpha.pdf}\par
\includegraphics[scale=0.5]{Model4-contam-logit-alpha.pdf}\par
\end{multicols}
\caption{ Pure and Contaminated (Type 1) \descref{Model 4}}
\label{fig:Model4 alpha}
\end{figure}
\clearpage
\newpage
Necessary calculations and proofs are provided in this section.
We explicitly compute the second-order partial derivatives of $V_{i}(Y_{i}, \theta)$ with respect to all the components of $\theta$, as follows:
\begin{gather}
\label{partial gamma s, beta k}
\frac{1}{1+\alpha} \cdot \frac{\partial^{2}}{\partial \gamma_{s}\partial \beta_{k} } V_{i}(Y_{i}, \theta) \nonumber \\
= x_{ik}\Bigg[\alpha \sum_{j=s}^{s+1}(-1)^{j+s}p_{\theta,i}(j)^{\alpha-1}
f(\gamma_{s}-x_{i}^{T}\beta)
\Big\{f(\gamma_{j-1}-x_{i}^{T}\beta)-f(\gamma_{j}-x_{i}^{T}\beta)\Big\} \nonumber \\
+\Big\{p_{\theta,i}(s+1)^{\alpha} - p_{\theta,i}(s)^{\alpha} \Big\}f'(\gamma_{s}-x_{i}^{T}\beta)\Bigg]
-x_{ik}\Bigg[(\alpha-1)p_{\theta,i}(Y_{i})^{\alpha-2} f(\gamma_{Y_{i}}-x_{i}^{T}\beta) \nonumber \\ \Big\{f(\gamma_{Y_{i}-1}-x_{i}^{T}\beta)-f(\gamma_{Y_{i}}-x_{i}^{T}\beta)\Big\}
-p_{\theta,i}(Y_{i})^{\alpha-1}f'(\gamma_{Y_{i}}-x_{i}^{T}\beta) \Bigg] \mathds{1}(Y_{i}=s| x_{i}) \nonumber \\
+x_{ik}\Bigg[(\alpha-1)p_{\theta,i}(Y_{i})^{\alpha-2} f(\gamma_{Y_{i}-1}-x_{i}^{T}\beta) \Big\{f(\gamma_{Y_{i}-1}-x_{i}^{T}\beta)-f(\gamma_{Y_{i}}-x_{i}^{T}\beta)\Big\}\nonumber \\
-p_{\theta,i}(Y_{i})^{\alpha-1}f'(\gamma_{Y_{i}-1}-x_{i}^{T}\beta) \Bigg] \mathds{1}(Y_{i}=s+1| x_{i}).
\end{gather}
\begin{align}
\label{partial gamma s, gamma s}
\frac{1}{1+\alpha}\cdot \frac{\partial^{2}}{\partial \gamma^{2}_{s}}V_{i}(Y_{i}, \theta)
&= \Big\{\alpha p_{\theta,i}(s)^{\alpha-1}f^{2}(\gamma_{s}-x_{i}^{T}\beta)+ p_{\theta,i}(s)^{\alpha} f'(\gamma_{s}-x_{i}^{T}\beta)\Big\} \nonumber \\
&-\Big\{(\alpha-1)p_{\theta,i}(Y_{i})^{\alpha-2} f^{2}(\gamma_{Y_{i}}-x_{i}^{T}\beta)+p_{\theta,i}(Y_{i})^{\alpha-1}f'(\gamma_{Y_{i}}-x_{i}^{T}\beta) \Big\} \nonumber \\
&\times \mathds{1}(Y_{i}=s|x_{i}) \nonumber \\
&+ \Big\{\alpha p_{\theta,i}(s+1)^{\alpha-1}f^{2}(\gamma_{s}-x_{i}^{T}\beta)-p_{\theta,i}(s+1)^{\alpha}f'(\gamma_{s}-x_{i}^{T}\beta)\Big\} \nonumber \\
&-\Big\{(\alpha-1)p_{\theta,i}(Y_{i})^{\alpha-2} f^{2}(\gamma_{Y_{i}-1}- x_{i}^{T}\beta)-p_{\theta,i}(Y_{i})^{\alpha-1} f'(\gamma_{Y_{i}-1}-x_{i}^{T}\beta)\Big\} \nonumber \\
&\times \mathds{1}(Y_{i}=s+1| x_{i}),
\end{align}
\begin{align}
\label{partial gamma s-1, gamma s}
\frac{1}{1+\alpha}\cdot \frac{\partial^{2}}{\partial \gamma_{s-1}\partial\gamma_{s}}V_{i}(Y_{i}, \theta)
&=-\Big\{\alpha p_{\theta, i}(s)^{\alpha-1}f(\gamma_{s-1}-x_{i}^{T}\beta)f(\gamma_{s}-x_{i}^{T}\beta) \Big\} \nonumber\\
&+\Big\{(\alpha-1)p_{\theta,i}(Y_{i})^{\alpha-2}f(\gamma_{Y_{i}-1}-x_{i}^{T}\beta)f(\gamma_{Y_{i}}-x_{i}^{T}\beta)\Big\}\mathds{1}(Y_{i}=s|x_{i}),
\end{align}
\begin{align}
\label{partial gamma s+1, gamma s}
\frac{1}{1+\alpha}\cdot \frac{\partial^{2}}{\partial \gamma_{s+1}\partial \gamma_{s}}V_{i}(Y_{i}, \theta)
&= -\Big\{\alpha p_{\theta,i}(s+1)^{\alpha-1}f(\gamma_{s}-x_{i}^{T}\beta)f(\gamma_{s+1}-x_{i}^{T}\beta)\Big\} \nonumber \\
&+\Big\{(\alpha-1)p_{\theta,i}(Y_{i})^{\alpha-2}
f(\gamma_{Y_{i}-1}-x_{i}^{T}\beta)
f(\gamma_{Y_{i}}-x_{i}^{T}\beta)
\Big\} \nonumber \\
&\times \mathds{1}(Y_{i}=s+1| x_{i}),
\end{align}
\begin{align}
\label{partial beta k', beta k}
\frac{1}{1+\alpha} \cdot \frac{\partial^{2}}{\partial \beta_{k'} \partial \beta_{k}} V_{i}(Y_{i}, \theta)
&= \Bigg[ \sum_{j=1}^{m} \alpha p_{\theta,i}(j)^{\alpha-1}\Big\{f(\gamma_{j-1}-x_{i}^{T}\beta)- f(\gamma_{j}-x_{i}^{T}\beta)\Big\}^{2} \nonumber \\
& +\sum_{j=1}^{m}p_{\theta,i}(j)^{\alpha}\Big\{f'(\gamma_{j}-x_{i}^{T}\beta)- f'(\gamma_{j-1}-x_{i}^{T}\beta)\Big\} \nonumber \\
&-(\alpha-1)p_{\theta,i}(Y_{i})^{\alpha-2} \Big\{f(\gamma_{Y_{i}-1}-x_{i}^{T}\beta)-f(\gamma_{Y_{i}}-x_{i}^{T}\beta)\Big\}^{2} \nonumber\\
&+p_{\theta,i}(Y_{i})^{\alpha-1}\Big\{f'(\gamma_{Y_{i}-1}-x_{i}^{T}\beta)-f'(\gamma_{Y_{i}}-x_{i}^{T}\beta)\Big\} \Bigg] x_{ik}\cdot x_{ik'},
\end{align}
\begin{Proof} (Theorem \ref{Theorem: Consistency and CLT}) \hspace{10pt}
\descref{(a)} To prove consistency, we study the behaviour of the density power divergence, or equivalently of $H_{n}(\theta)$, on the closed ball $Q_{a}=B[\theta_{\alpha},a]=\{\theta : ||\theta_{\alpha}-\theta|| \le a \}$ for fixed $\alpha$. We shall show that, for sufficiently small $a$, $H_{n}(\theta)> H_{n}(\theta_{\alpha})$ for all points $\theta$ on the boundary $\partial Q_{a}$, so that $H_{n}(\theta)$ has a local minimum in the interior of $Q_{a}$ with probability tending to $1$ as $n \uparrow +\infty$.
Let us expand $H_{n}(\theta)$ around $\theta_{\alpha}$ up to third order:
\begin{align}
\frac{1}{1+\alpha}\Big\{ H_{n}(\theta_{\alpha})-H_{n}(\theta)\Big\} &=\sum_{l}(-A_l)(\theta^{l}-\theta_{\alpha}^{l})+
\frac{1}{2} \sum_{l}\sum_{l'}(-B_{ll'})(\theta^{l}-\theta_{\alpha}^{l})(\theta^{l'}-\theta_{\alpha}^{l'}) \nonumber \\
& + \frac{1}{6} \sum_{l}\sum_{l'}\sum_{l^{*}}(\theta^{l}-\theta_{\alpha}^{l})(\theta^{l'}-\theta_{\alpha}^{l'})(\theta^{l^{*}}-\theta_{\alpha}^{l^{*}})
\frac{1}{n}\sum_{i=1}^{n}\gamma_{ll'l^{*}}(Y_{i})M_{ll'l^{*}}(Y_{i}) \nonumber\\
& =S_{1}+S_{2}+S_{3},
\end{align}
where
\begin{align}
A_{l} &=\frac{1}{1+\alpha} \nabla_{l}
H_{n}(\theta)\Big |_{\theta=\theta_{\alpha}}
=\frac{1}{n} \cdot\frac{1}{1+\alpha}\sum_{i=1}^{n}\nabla_{l}V_{i}(Y_i,\theta)\Big|_{\theta=\theta_{\alpha}}, \\
B_{ll'}
&=\frac{1}{1+\alpha}\nabla_{ll'}H_{n}(\theta)\Big|_{\theta=\theta_{\alpha}} =\frac{1}{n} \cdot\frac{1}{1+\alpha}\sum_{i=1}^{n}\nabla_{ll'}V_{i}(Y_{i},\theta)\Big|_{\theta=\theta_{\alpha}}
\end{align}
and $ 0\le|\gamma_{ll'l^{*}}(Y_{i})|\le 1 $ almost surely and $|\nabla_{ll'l^{*}}V_{i}(Y_{i},\theta_{\alpha})| \le M_{ll'l^{*}}(Y_{i})$ (see \descref{(A4)}). Note that
\begin{align}
\frac{1}{1+\alpha}\cdot\nabla_{l}V_{i}(Y_{i},\theta) &=\sum_{j=1}^{m}\Big\{p_{\theta,i}(j)^{\alpha} \nabla_{l}p_{\theta,i}(j)\Big\}-p_{\theta,i}(Y_{i})^{\alpha-1}\nabla_{l}p_{\theta,i}(Y_{i}), \\
\frac{1}{1+\alpha}\cdot\nabla_{ll'}V_{i}(Y_{i},\theta)
&=\sum_{j=1}^{m}\alpha p_{\theta,i}(j)^{\alpha-1}
\Big(\nabla_{l}p_{\theta,i}(j)\Big)\Big(\nabla_{l'}p_{\theta,i}(j)\Big) \nonumber \\
&-(\alpha-1)p_{\theta,i}(Y_{i})^{\alpha-2}
\Big(\nabla_{l}p_{\theta,i}(Y_{i})\Big)\Big(\nabla_{l'}p_{\theta,i}(Y_{i})\Big) \nonumber\\
& +\Bigg\{\sum_{j=1}^{m} p_{\theta,i}(j)^{\alpha}\nabla_{ll'}p_{\theta,i}(j)-p_{\theta,i}(Y_{i})^{\alpha-1}\nabla_{ll'}p_{\theta,i}(Y_{i})\Bigg\}.
\end{align}
Simple calculations show that for all $i$, the results
\begin{align}
\frac{1}{1+\alpha}\cdot\mathds{E}_{g_{i}}\Big[\nabla_{l}V_{i}(Y_{i},\theta)\Big]_{\theta=\theta_{\alpha}} &=\sum_{j=1}^{m}\Big\{p_{\theta_{\alpha},i}(j)^{\alpha}-p_{\theta_{\alpha},i}(j)^{\alpha-1}g_{i}(j)\Big\}\nabla_{l}p_{\theta_{\alpha},i}(j) \nonumber\\
&=\nabla_{l}H^{(i)}(\theta_{\alpha})=0,\\
\frac{1}{1+\alpha}\cdot\mathds{E}_{g_{i}}\Big[\nabla_{ll'}V_{i}(Y_{i},\theta)\Big]_{\theta=\theta_{\alpha}} &=\sum_{j=1}^{m}\Big\{\alpha p_{\theta_{\alpha},i}(j)^{\alpha-1}-(\alpha-1)p_{\theta_{\alpha},i}(j)^{\alpha-2}g_{i}(j)\Big\} \nonumber \\
&\Big(\nabla_{l}p_{\theta_{\alpha},i}(j)\Big)
\cdot\Big(\nabla_{l'}p_{\theta_{\alpha},i}(j)\Big) \nonumber\\
&+ \sum_{j=1}^{m}\Big\{ p_{\theta_{\alpha},i}(j)^{\alpha}-p_{\theta_{\alpha},i}(j)^{\alpha-1}g_{i}(j)\Big\}\nabla_{ll'}p_{\theta_{\alpha},i}(j) \nonumber\\
&=\nabla_{ll'} H^{(i)}(\theta_{\alpha})
\end{align}
hold for all $l,l'=1,2, \ldots, (m+p-1)$.
We can express $A_{l}=\frac{1}{1+\alpha} \overline{A}_{l}$, where $\overline{A}_{l}=\frac{1}{n}\sum_{i=1}^{n}\nabla_{l}V_{i}(Y_{i},\theta_{\alpha})$. Applying Markov's inequality to $A_{l}$, we get
\begin{align}\label{morkov inequality}
\mathds{P}\Big(|A_{l}| < a^{2}\Big)
\ge 1-\frac{\mathds{E}|A_{l}|}{a^{2}}
&\ge 1- \frac{1}{a^{2}}\cdot\frac{1}{n}\sum_{i=1}^{n} \mathds{E}_{g_{i}}\Big(|\nabla_{l}V_{i}(Y_{i}, \theta_{\alpha})| \Big) \nonumber \\
&\ge 1- \frac{1}{a^{2}} \mathds{E}\max_{1 \le i \le n} \Big(|\nabla_{l}V_{i}(Y_{i}, \theta_{\alpha})| \Big) \nonumber \\
& \ge 1- \frac{1}{a^{2}} \mathds{E} \sup_{n \ge 1} \max_{1 \le i \le n}\Big( |\nabla_{l}V_{i}(Y_{i}, \theta_{\alpha})| \Big).
\end{align}
Since $Y$ is supported on the finite set $\chi$ and $Y \mapsto |\nabla_{l}V_{i}(Y, \theta_{\alpha})|$ is bounded (from \descref{(A5)}), the quantity $\sup_{n \ge 1}\max_{1 \le i \le n}|\nabla_{l}V_{i}(Y_{i}, \theta_{\alpha})|$ is almost surely bounded and has finite expectation. Thus, for sufficiently small $a$, $|A_{l}|<a^{2}$, and hence $|S_{1}|<(m+p-1)a^{3}$, with probability tending to 1. We have
\begin{equation}
B_{ll'}-\Psi_{n}(\alpha)_{ll'} =\frac{1}{1+\alpha} \cdot\frac{1}{n}\sum_{i=1}^{n}
\Big\{\nabla_{ll'}V_{i}(Y_{i},\theta_{\alpha})-\mathds{E}_{g_{i}}\Big[\nabla_{ll'}V_{i}(Y_{i},\theta_{\alpha})\Big]\Big\}.
\end{equation}
Consider the following representation,
\begin{align}\label{2S_{2}}
2S_{2} &=\sum_{l}\sum_{l'}-\Psi_{n}(\alpha)_{ll'}(\theta^{l}-\theta_{\alpha}^{l})(\theta^{l'}-\theta_{\alpha}^{l'})
+\sum_{l}\sum_{l'}\Big\{-B_{ll'}+\Psi_{n}(\alpha)_{ll'}\Big\}(\theta^{l}-\theta_{\alpha}^{l})(\theta^{l'}-\theta_{\alpha}^{l'}) \nonumber \\
&=-(\theta-\theta_{\alpha})^{T}\Psi_{n}(\alpha)(\theta-\theta_{\alpha})
+ (\theta-\theta_{\alpha})^{T}\big(\Psi_{n}(\alpha)-B\big)(\theta-\theta_{\alpha}), \mbox{ where } B=((B_{ll'})).
\end{align}
For the second term in Equation (\ref{2S_{2}}), it follows from an argument similar to that for $S_{1}$ that $|\Psi_{n}(\alpha)_{ll'}-B_{ll'}| \le (m+p-1)^{2}a^{3}$ with probability tending to $1$. From \descref{(A3)}, $(-\Psi_{n}(\alpha))$ is a negative definite matrix for every $n$, including in the limit as $n \uparrow +\infty$. So the quadratic form appearing as the first term of $2S_{2}$ is negative, and by an orthogonal transformation it can be reduced to the diagonal form $\sum_{l=1}^{m+p-1}\lambda_{l}\zeta_{l}^{2}$ with $\sum_{l=1}^{m+p-1}\zeta_{l}^{2}=a^{2}$; the $\lambda$'s (the eigenvalues of $-\Psi_{n}(\alpha)$) and the $\zeta$'s depend on $n$. Using \descref{(A3)}, one thus gets $\sum_{l=1}^{m+p-1}\lambda_{l}\zeta^{2}_{l} \le -\lambda_{\max}a^{2}$. Combining the first and second terms, we obtain $c > 0$ and $a_{0} >0 $ such that, for $a < a_{0}$, $S_{2} <-ca^{2}$ with probability tending to $1$.
From \descref{(A4)}, we have
\begin{align}
\Big|\frac{1}{n}\sum_{i}\gamma_{ll'l^{*}}(Y_{i})M_{ll'l^{*}}(Y_{i})\Big|
<\frac{1}{n} \sum_{i=1}^{n} \mathds{E}_{g_{i}}\big[M_{ll'l^{*}}(Y_{i})\big]=\mathcal{O}(1)
\end{align}
with probability tending to 1. So $|S_{3}| < ba^{3}$ for some $b>0$. Combining these bounds, we see that
$\max(S_{1}+S_{2}+S_{3})<-ca^{2}+\Big\{b+(m+p-1)\Big\}a^{3}$, which is negative if $a<\frac{c}{b+(m+p-1)}$.
Thus for sufficiently small $a$, there exists a sequence of roots $\hat{\theta}_{\alpha}$ such that $\mathds{P}(||\hat{\theta}_{\alpha}-\theta_{\alpha}||<a^{2}) \to 1$ as $n\uparrow +\infty$, where $||\cdot||$ denotes the Euclidean norm. It remains to show that we can determine such a sequence independently of $a$. Let $\theta_{\alpha}^{*}$ be a root of $H_{n}(\theta)$ that is closest to $\theta_{\alpha}$ in the Euclidean norm; such a root exists because the limit of a sequence of roots is again a root, by the continuity of the map $\theta \mapsto H_{n}(\theta)$ for fixed $\alpha$. This clearly implies that $\mathds{P}(||\theta_{\alpha}^{*}-\theta_{\alpha}||<a^{2}) \to 1$. This completes the proof of existence of a consistent sequence of roots.
\descref{(b)}
Let $H^{(ll'\cdots)}_{n}(\theta)$ denote the partial derivatives of $H_{n}(\theta)$ with respect to the indicated components of $\theta$.
Consider the second-order Taylor series expansion of $H_{n}^{(l)}(\theta)$ about $\theta_{\alpha}$:
\begin{equation}\label{H^{(l)}_n(theta)}
H_{n}^{(l)}(\theta) =H_{n}^{(l)}(\theta_{\alpha})+\sum_{l'=1}^{m+p-1}(\theta^{l'}-\theta_{\alpha}^{l'})H_{n}^{(ll')}(\theta_{\alpha}) + \frac{1}{2} \sum_{l',l^{*}=1}^{m+p-1}(\theta^{l'}-\theta_{\alpha}^{l'})(\theta^{l^{*}}-\theta_{\alpha}^{l^{*}})H_{n}^{(ll'l^{*})}(\theta^{*}),
\end{equation}
where $\theta^{*}$ is a point on the line segment connecting $\theta$ and $\theta_{\alpha}$. Since $H^{(l)}_{n}(\hat{\theta}_{\alpha})=0$ for all $l$, evaluating (\ref{H^{(l)}_n(theta)}) at $\theta=\hat{\theta}_{\alpha}$, we get
\begin{equation}\label{H^{(ll')}}
\sum_{l'} \underbrace{\Big\{H^{(ll')}_{n}(\theta_{\alpha})+\frac{1}{2}\sum_{l^{*}}(\hat{\theta}_{\alpha}^{l^{*}}-\theta_{\alpha}^{l^{*}})H^{(ll'l^{*})}_{n}(\theta^{*})\Big\}
}_{A_{ll'n}} \underbrace{\sqrt{n}(\hat{\theta}_{\alpha}^{l'}-\theta_{\alpha}^{l'} )}_{Z_{l'n}}= \underbrace{-\sqrt{n} H^{(l)}_{n}(\theta_{\alpha})}_{T_{ln}} \mbox{ for all } l.
\end{equation}
Using vector notation, we can rewrite Equation (\ref{H^{(ll')}}) as $A_{n}Z_{n}=T_{n}$, with $A_{n}=((A_{ll'n}))$ a square matrix of order $(m+p-1)$, $Z_{n}=(Z_{1n},\ldots,Z_{(m+p-1)n})^{T}$ and $T_{n}=(T_{1n},\ldots,T_{(m+p-1)n})^{T}$ for fixed $n$. Note that $T_n=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\nabla V_{i}(Y_{i},\theta_{\alpha})$.
Note that $\mathds{E}_{g_{i}}\Big[\nabla_{l} V_{i}(Y_{i},\theta_{\alpha})\Big]=0$ and $\nabla_{l} V_{i}(Y_{i},\theta_{\alpha})$ has finite variance with respect to $g_{i}$ for all $i$ and $l$. Observe that
\begin{align}\label{bound required for CLT}
0<||\Omega^{-1/2}_{n}(\alpha)\nabla V_{i}(Y_{i}, \theta_{\alpha})||^{2}
&=\nabla V_{i}(Y_{i}, \theta_{\alpha})^{T}
\Omega^{-1}_{n}(\alpha)
\nabla V_{i}(Y_{i}, \theta_{\alpha}) \nonumber \\
& \le \frac{\sum_{l=1}^{m+p-1}\big[\nabla_{l}V_{i}(Y_{i}, \theta_{\alpha})\big]^{2}}{\mu_{\min}},
\end{align}
where $\mu_{\min}$ is the minimum eigenvalue of $\Omega_{n}(\alpha)$, which is positive. Then from \descref{(A5)} it follows that the quantity in (\ref{bound required for CLT}) is bounded. Now using a multivariate extension of the Lindeberg--Feller CLT, it follows that $\frac{1}{1+\alpha}\Omega^{-\frac{1}{2}}_{n}T_{n} \overset{L}{\to} \mathcal{N} (0,I_{m+p-1} )$.
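To spell out why the boundedness just established suffices: writing $W_{i,n}=\frac{1}{1+\alpha}\Omega^{-\frac{1}{2}}_{n}(\alpha)\nabla V_{i}(Y_{i},\theta_{\alpha})$, so that $\frac{1}{1+\alpha}\Omega^{-\frac{1}{2}}_{n}T_{n}=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}W_{i,n}$, the bound (\ref{bound required for CLT}) together with \descref{(A5)} gives $||W_{i,n}|| \le C$ almost surely for some constant $C$ independent of $i$ and $n$. Hence for any $\epsilon>0$ the Lindeberg condition holds trivially:
$$
\frac{1}{n}\sum_{i=1}^{n}\mathds{E}_{g_{i}}\Big[||W_{i,n}||^{2}\,\mathds{1}\big\{||W_{i,n}||>\epsilon\sqrt{n}\big\}\Big]=0 \quad \mbox{ whenever } \epsilon\sqrt{n}>C.
$$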
Thus, using the relationship $A_{n}Z_{n}=T_{n}$, we get
\begin{equation}
\frac{1}{1+\alpha}\Omega^{-\frac{1}{2}}_{n}(\alpha)A_{n}(\alpha)Z_{n} \overset{L}{\longrightarrow} \mathcal{N} (0,I_{m+p-1} ).
\end{equation}
Due to \descref{(A4)}, $|H^{(ll'l^{*})}_{n}(\theta^{*})| \le \frac{1}{n}\sum_{i=1}^{n}\mathds{E}_{g_{i}}\big[M_{ll'l^{*}}(Y_{i})\big]=\mathcal{O}(1)$ with probability tending to $1$ for all $l,l', l^{*}$.
Thus the consistency of the roots implies that the second term of $A_{ll'n}$ is $o_{p}(1)$. Further note that
\begin{align}
\Big\{\frac{1}{1+\alpha} H^{(ll')}_n(\theta_{\alpha})-\Psi_{n}(\alpha)_{ll'}\Big\}\overset{\mathds{P}}{\longrightarrow} 0 \mbox{ for all } l,l'.
\end{align}
Then the following result is seen to hold:
\begin{equation}
\Omega_{n}^{-\frac{1}{2}}(\alpha)\Big[\frac{1}{1+\alpha} A_{n}(\alpha) - \Psi_{n}(\alpha)\Big] Z_{n}
\overset{\mathds{P}}{\longrightarrow} 0.
\end{equation}
Combining the above equations, we arrive at $\Omega^{-\frac{1}{2}}_{n}(\alpha) \Psi_{n}(\alpha) Z_{n} \overset{L}{\longrightarrow} \mathcal{N}(0,I_{m+p-1})$ when $n \uparrow +\infty$.
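Although not needed in the sequel, this conclusion can be restated in the familiar sandwich form: assuming $\Psi_{n}(\alpha)$ is invertible for all large $n$, writing $Z_{n}=\sqrt{n}(\hat{\theta}_{\alpha}-\theta_{\alpha})$ we have
$$
Z_{n}=\Psi_{n}^{-1}(\alpha)\,\Omega_{n}^{\frac{1}{2}}(\alpha)\Big[\Omega^{-\frac{1}{2}}_{n}(\alpha) \Psi_{n}(\alpha) Z_{n}\Big],
$$
so that $\sqrt{n}(\hat{\theta}_{\alpha}-\theta_{\alpha})$ is asymptotically normal with approximate covariance matrix $\Psi_{n}^{-1}(\alpha)\Omega_{n}(\alpha)\Psi_{n}^{-1}(\alpha)$.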
\end{Proof}
\hspace{10pt}
\begin{Proof} [Theorem \ref{Theorem: Asymptotic Breakdown point}] \hspace{10pt}
First, we define the following set
\begin{equation}\label{BP:eq1}
A_{i,m_{1}} =\Big\{j: p_{\theta_{\alpha},i}(j) > \max\big\{k_{i,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\big\}\Big\}
\mbox{ and }
S_{i,m_{1}}=\Big\{j: p_{\theta_{\alpha},i}(j)> k_{i,m_{1}}(j)\Big\}.
\end{equation}
The $i$-th divergence between $h_{i,\epsilon, m_{1}}$ and $p_{\theta_{m_{1}},i}$ can be decomposed as
\begin{align}\label{BP:eq2}
d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{m_{1}},i})&=\sum_{j:A_{i,m_{1}}} D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big)+\sum_{j:A_{i,m_{1}}^{c}} D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big).
\end{align}
Clearly, $A_{i,m_{1}} \subset S_{i,m_{1}}$. It follows from \descref{(BP2)} that for each $j \in B$ there exists $m_{1}(j)$ such that $k_{i,m_{1}}(j)- p_{\theta_{\alpha},i}(j) \ge 0$ for all $m_{1} \ge m_{1}(j)$. Since $B$ is a finite set, taking the maximum $M_{1}=\max\{m_{1}(j): j \in B\}$ ensures that $k_{i,m_{1}'}(j)-p_{\theta_{\alpha},i}(j) \ge 0$ for all $j \in B$ and $m_{1}' > M_{1}$. Thus, it follows that $S_{i,m_{1}} \to S \subseteq B^{c}$. Now
\begin{align}\label{BP:eq3}
\sum_{j\in A_{i,m_{1}}} k_{i,m_{1}}(j)
&\le \sum_{j \in S_{i,m_{1}}} k_{i,m_{1}}(j) =\sum_{j \in S} k_{i,m_{1}}(j)+\sum_{j \in S_{i,m_{1}}- S} k_{i,m_{1}}(j) \nonumber \\
&\le \sum_{j \in B^{c}} k_{i,m_{1}}(j)+\sum_{j \in S_{i,m_{1}}- S} p_{\theta_{\alpha},i}(j) \longrightarrow 0 \mbox{ as } m_{1} \uparrow +\infty.
\end{align}
The first term goes to $0$ since $\sum_{j \in B}k_{i,m_{1}}(j) \to 1$ by Assumption \descref{(BP2)}, and the second term goes to $0$ since the set over which the sum is taken shrinks to the empty set. From \descref{(BP3)} it similarly follows that $\sum_{j:A_{i,m_{1}}} p_{\theta_{m_{1}},i}(j) \to 0 $ as $m_{1} \uparrow +\infty$. Therefore, the set $A_{i,m_{1}}$ converges to a set of probability measure zero under both $p_{\theta_{m_{1}},i}$ and $k_{i,m_{1}}$ for each $i$. Using \descref{(BP2)} and \descref{(BP3)} together, we also get $\max\{k_{i,m_{1}}(j), p_{\theta_{m_{1}},i}(j)\} \to 0$ for all $j \in A_{i,m_{1}}$ as $m_{1} \uparrow +\infty$. Therefore
\begin{align}\label{BP:eq5}
D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big)
&=\Bigg\{p_{\theta_{m_{1}},i}(j)^{1+\alpha}-\Big(1+\frac{1}{\alpha}\Big) p_{\theta_{m_{1}},i}^{\alpha}(j) h_{i,\epsilon,m_{1}}(j)+\frac{1}{\alpha}
h_{i,\epsilon,m_{1}}^{1+\alpha}(j)\Bigg\} \\
& \longrightarrow \frac{1}{\alpha} (1-\epsilon)^{1+\alpha} p_{\theta_{\alpha},i}(j)^{1+\alpha} =D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),0\Big)
\end{align}
for all $j \in A_{i,m_{1}}$ as $m_{1} \uparrow +\infty$. Now applying the DCT gives the following
\begin{equation}\label{BP:eq6}
\Big|\sum_{j:A_{i,m_{1}}} D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big)-
\sum_{j:A_{i,m_{1}}}D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),0\Big)\Big| \longrightarrow 0
\mbox{ as } m_{1} \uparrow +\infty.
\end{equation}
Using \descref{(BP2)} and \descref{(BP3)}, we also get
\begin{equation}\label{BP:eq7}
\Big|\sum_{j:A_{i,m_{1}}} D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),0\Big)- \sum_{j \in \chi}D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),0\Big)\Big| \longrightarrow 0 \mbox{ when }
m_{1} \uparrow +\infty.
\end{equation}
A simple application of the triangle inequality gives the following result
\begin{equation} \label{BP:eq8}
\Big|\sum_{j:A_{i,m_{1}}} D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big) -\sum_{j\in \chi}D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),0\Big)\Big| \longrightarrow 0 \mbox{ for }
m_{1} \uparrow +\infty.
\end{equation}
We also have
\begin{equation}\label{BP:eq9}
\sum_{j \in \chi}D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),0\Big)=\frac{1}{\alpha}(1-\epsilon)^{1+\alpha}
\sum_{j \in \chi}p_{\theta_{\alpha},i}^{1+\alpha}(j)=\frac{1}{\alpha}(1-\epsilon)^{1+\alpha} M_{i}^{\alpha}.
\end{equation}
Observe that, as $m_{1} \uparrow +\infty$,
\begin{align} \label{BP:eq10}
\sum_{j \in A_{i,m_{1}}} p_{\theta_{\alpha},i}(j)
&= \sum_{j \in A_{i,m_{1}} \cap \{k_{i,m_{1}} \downarrow +0 \} \cap \{p_{\theta_{m_{1}},i} \downarrow +0\}} p_{\theta_{\alpha},i}(j)
\longrightarrow \sum_{j \in \chi} p_{\theta_{\alpha},i}(j)=1.
\end{align}
So $\mathds{P}\Big[ A_{i,m_{1}}^{c} \mid p_{\theta_{\alpha},i}\Big] \to 0$ as $m_{1} \uparrow +\infty$. We also know that $\sum_{j: A_{i,m_{1}}^{c}}k_{i,m_{1}}(j) \to 1$ and $\sum_{j: A_{i,m_{1}}^{c}}p_{\theta_{m_{1}},i}(j) \to 1$ as $m_{1} \uparrow +\infty$. So
\begin{equation} \label{BP:eq11}
\Big|\sum_{j:A_{i,m_{1}}^{c}}D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big)-
\sum_{j \in \chi}D_{\alpha}\Big(\epsilon k_{i,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big)\Big| \to 0
\mbox{ as } m_{1} \uparrow +\infty.
\end{equation}
From \descref{(BP4)}, we see that
\begin{align} \label{BP:eq12}
\sum_{j \in \chi}D_{\alpha}\Big(\epsilon k_{i,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big) =d_{\alpha}(\epsilon k_{i,m_{1}},p_{\theta_{m_{1}},i}) &\ge d_{\alpha}(\epsilon p_{\theta_{\alpha},i},p_{\theta_{\alpha},i})
=a(\epsilon)M_{i}^{\alpha},
\end{align}
where $a(\epsilon)=\Big\{1-\Big(1+\frac{1}{\alpha}\Big)\epsilon+\frac{1}{\alpha}\epsilon^{1+\alpha}\Big\}$. So for fixed $n$,
\begin{equation} \label{BP:eq13}
\underset{m_{1} \uparrow +\infty}{\liminf}\Big\{ d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{m_{1}},i})\Big\} \ge
\frac{1}{\alpha}(1-\epsilon)^{1+\alpha}M_{i}^{\alpha} + a(\epsilon)M_{i}^{\alpha}
\end{equation}
for all $i$. Averaging over all $i$, we get
\begin{align} \label{BP:eq14}
\underset{m_{1} \uparrow +\infty}{\liminf}\Bigg\{ \frac{1}{n} \sum_{i=1}^{n}d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{m_{1}},i})\Bigg\}
&\ge \frac{1}{n}\sum_{i=1}^{n}\underset{m_{1} \to \infty}{\liminf}\Big\{d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{m_{1}},i})\Big\}\\
&\ge \frac{1}{\alpha}(1-\epsilon)^{1+\alpha}M^{\alpha} + a(\epsilon)M^{\alpha}=a_{1}(\epsilon)
\end{align}
for fixed $n$, where $M^{\alpha}=\frac{1}{n}\sum_{i}M^{\alpha}_{i}$. We will have a contradiction to our assumption that $\{k_{i,m_{1}}\}_{m_{1}=1}^{\infty}$ is a sequence for which breakdown occurs if we can show that there exists a constant value $\theta_{*}$ in the parameter space such that $\theta_{m_{1}} \to \theta_{*}$ but
\begin{equation} \label{BP:eq15}
\underset{m_{1} \uparrow +\infty}{\limsup} \Bigg\{\frac{1}{n} \sum_{i=1}^{n}d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{*},i})\Bigg\} < a_{1}(\epsilon)
\end{equation}
for the same sequences $\{k_{i,m_{1}}\}_{m_{1}=1}^{\infty}$. This would mean that the sequence $\{\theta_{m_{1}}\}$ could not minimize the DPD $\frac{1}{n} \sum_{i}d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{m_{1}},i})$ for each $m_{1}$.
Define $B_{i,m_{1}}=\Big\{j : k_{i,m_{1}}(j) > \max\{p_{\theta_{\alpha},i}(j),p_{\theta_{m_{1}},i}(j)\}\Big\}$ for some sequence $\{\theta_{m_{1}}\}$. From \descref{(BP2)} and \descref{(BP3)}, it is easy to show that $\sum_{j: B_{i,m_{1}}}p_{\theta_{\alpha},i}(j) \to 0 $,
$\sum_{j: B_{i,m_{1}}}p_{\theta_{m_{1}},i}(j) \to 0$ and $\sum_{j: B_{i,m_{1}}^{c}}k_{i,m_{1}}(j) \to 0$ as $m_{1} \uparrow +\infty$. Therefore, under $k_{i,m_{1}}$ the set $B_{i,m_{1}}^{c}$ converges to a probability null set, while under both $p_{\theta_{\alpha},i}$ and $p_{\theta_{m_{1}},i}$ the set $B_{i,m_{1}}$ converges to a set of zero probability measure. Thus for all $j \in B_{i,m_{1}}$
\begin{align} \label{BP:eq16}
\Big|D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big)- D_{\alpha}\Big(\epsilon k_{i,m_{1}}(j),0\Big)\Big| &\longrightarrow 0 \\
\mbox{ So, }
\Big|\sum_{j:B_{i,m_{1}}}D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big)-\sum_{k_{i,m_{1}} >0}D_{\alpha}\Big(\epsilon k_{i,m_{1}}(j),0\Big)\Big| &\longrightarrow 0
\mbox{ \Big[ By DCT \Big]}.
\end{align}
Observe that $D_{\alpha}\Big(\epsilon k_{i,m_{1}}(j),0\Big)=\frac{\epsilon^{1+\alpha}}{\alpha}k_{i,m_{1}}^{1+\alpha}(j)$ when $k_{i,m_{1}} > 0$ and $\alpha>0$. As $m_{1} \uparrow +\infty$, it follows that
\begin{align} \label{BP:eq17}
\Big|\sum_{j: B_{i,m_{1}}}D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big)-\frac{\epsilon^{1+\alpha}}{\alpha} \sum_{j \in \chi}k_{i,m_{1}}^{1+\alpha}(j)\Big| &\longrightarrow 0.
\end{align}
Similarly
\begin{align} \label{BP:eq18}
\Big|\sum_{j:B_{i,m_{1}}^{c}} D_{\alpha}\Big(h_{i,\epsilon,m_{1}}(j),p_{\theta_{m_{1}},i}(j)\Big)-
\sum_{j \in \chi}D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),p_{\theta_{*},i}(j)\Big)\Big| &\longrightarrow 0 \mbox{ for } m_{1} \uparrow +\infty.
\end{align}
Now see that
\begin{align} \label{BP:eq19}
\underset{m_{1} \uparrow +\infty}{\limsup}
\Big\{ d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{*},i})\Big\}
& = \underset{m_{1} \uparrow +\infty}{\limsup}\Bigg\{\sum_{j \in \chi }D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),p_{\theta_{*},i}(j)\Big)
+\frac{\epsilon^{1+\alpha}}{\alpha} \sum_{j \in \chi}k_{i,m_{1}}^{1+\alpha}(j)\Bigg\} \nonumber \\
&\le \sum_{j \in \chi}D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),p_{\theta_{*},i}(j)\Big)
+\frac{\epsilon^{1+\alpha}}{\alpha} M_{i}^{\alpha} \mbox{ for all } i.
\end{align}
Averaging (\ref{BP:eq19}) over all $i=1,2, \ldots,n$ we get
\begin{align}\label{BP:eq20}
\underset{m_{1} \uparrow +\infty}{\limsup}\Bigg\{\frac{1}{n} \sum_{i=1}^{n} d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{*},i})\Bigg\}
&\le \frac{1}{n}\sum_{i=1}^{n}\underset{m_{1} \uparrow +\infty}{\limsup}\Big\{d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{*},i})\Big\} \nonumber \\
& \le\frac{1}{n} \Bigg\{\sum_{i=1}^{n}\sum_{j=1}^{m}D_{\alpha}\Big((1-\epsilon)p_{\theta_{\alpha},i}(j),p_{\theta_{*},i}(j)\Big) + \frac{\epsilon^{1+\alpha}}{\alpha} \sum_{i=1}^{n}M_{i}^{\alpha} \Bigg\}
\end{align}
for fixed $n$. Let us choose $\theta_{m_{1}}=\hat{\theta}_{\alpha}$. Then Theorem \ref{Theorem: Consistency and CLT} \descref{(a)} implies that $\theta_{*}=\theta_{\alpha}$. Substituting $\theta_{\alpha}$ for $\theta_{*}$ in the first term of Equation (\ref{BP:eq20}), we get
\begin{equation}\label{BP:eq21}
\begin{split}
\sum_{j=1}^{m}D_{\alpha}\Big((1-\epsilon)
p_{\theta_{\alpha},i}(j),p_{\theta_{\alpha},i}(j)\Big)
&=a(1-\epsilon) \sum_{j=1}^{m}p_{\theta_{\alpha},i}(j)^{1+\alpha}=a(1-\epsilon) M_{i}^{\alpha} \mbox{ for all } i.
\end{split}
\end{equation}
Hence
\begin{equation}\label{BP:eq22}
\underset{m_{1} \uparrow +\infty}{\limsup}\Bigg\{\frac{1}{n} \sum_{i=1}^{n} d_{\alpha}(h_{i,\epsilon,m_{1}},p_{\theta_{\alpha},i})\Bigg\}\le a(1-\epsilon) M^{\alpha}+\frac{\epsilon^{1+\alpha}}{\alpha} M^{\alpha} =a_{2}(\epsilon).
\end{equation}
Asymptotically there will be no breakdown under $\epsilon$-contamination when $a_{2}(\epsilon) < a_{1}(\epsilon)$. Note that $a_{1}(\epsilon)$ is strictly decreasing and $a_{2}(\epsilon)$ is strictly increasing in $\epsilon$, with $a_{1}(\frac{1}{2})=a_{2}(\frac{1}{2})$. So, asymptotically there is no breakdown for $\epsilon < \frac{1}{2}$.
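The comparison of $a_{1}(\epsilon)$ and $a_{2}(\epsilon)$ can also be made explicit: substituting the expression for $a(\cdot)$, the terms $\frac{1}{\alpha}\epsilon^{1+\alpha}M^{\alpha}$ and $\frac{1}{\alpha}(1-\epsilon)^{1+\alpha}M^{\alpha}$ cancel in the difference, leaving
$$
a_{1}(\epsilon)-a_{2}(\epsilon)=\Big(1+\frac{1}{\alpha}\Big)(1-2\epsilon)\,M^{\alpha},
$$
which is positive for $\epsilon<\frac{1}{2}$, zero at $\epsilon=\frac{1}{2}$ and negative for $\epsilon>\frac{1}{2}$.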
\end{Proof}
\medskip
1,108,101,564,932 | arxiv | \section{Supplementary Lemmas for the proof of Theorem \ref{thm:regression}}
\label{sec:supp_B}
\subsection{Proof of Lemma \ref{bandwidth}}
\begin{proof}
First we establish the fact that $\theta_0^s \to \theta_0$. Note that for all $n$, we have:
$$
\mathbb{M}^s(\theta_0^s) \le \mathbb{M}^s(\theta_0)
$$
Taking $\limsup$ on the both side we have:
$$
\limsup_{n \to \infty} \mathbb{M}^s(\theta_0^s) \le \mathbb{M}(\theta_0) \,.
$$
Now using Lemma \ref{lem:uniform_smooth} we have:
$$
\limsup_{n \to \infty} \mathbb{M}^s(\theta_0^s) = \limsup_{n \to \infty} \left[\mathbb{M}^s(\theta_0^s) - \mathbb{M}(\theta_0^s) + \mathbb{M}(\theta_0^s)\right] = \limsup_{n \to \infty} \mathbb{M}(\theta_0^s) \,.
$$
which implies $\limsup_{n \to \infty} \mathbb{M}(\theta_0^s) \le \mathbb{M}(\theta_0)$; from the continuity of $\mathbb{M}(\theta)$ and the fact that $\theta_0$ is its unique minimizer, we conclude that $\theta_0^s \to \theta_0$. Now, using Lemma \ref{lem:pop_curv_nonsmooth} and Lemma \ref{lem:uniform_smooth} we further obtain:
\begin{align}
u_- d^2(\theta_0^s, \theta_0) & \le \mathbb{M}(\theta_0^s) - \mathbb{M}(\theta_0) \notag \\
& = \mathbb{M}(\theta_0^s) - \mathbb{M}^s(\theta^s_0) + \underset{\le 0}{\underline{\mathbb{M}^s(\theta_0^s) - \mathbb{M}^s(\theta_0)}} + \mathbb{M}^s(\theta_0) - \mathbb{M}(\theta_0) \notag \\
\label{eq:est_dist_bound} & \le 2\sup_{\theta \in \Theta}\left|\mathbb{M}^s(\theta) - \mathbb{M}(\theta)\right| \le 2K_1 \sigma_n \,.
\end{align}
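The deduction of $\theta_0^s \to \theta_0$ used above is the standard argmin-consistency argument; as a sketch, assuming $\Theta$ is compact so that the unique minimizer is well-separated, i.e.\ $\inf_{d(\theta, \theta_0) \ge \epsilon} \mathbb{M}(\theta) > \mathbb{M}(\theta_0)$ for every $\epsilon > 0$: if $\theta_0^s \not\to \theta_0$, then $d(\theta_0^s, \theta_0) \ge \epsilon$ along some subsequence, and along that subsequence
$$
\liminf_{n \to \infty} \mathbb{M}(\theta_0^s) \ge \inf_{d(\theta, \theta_0) \ge \epsilon} \mathbb{M}(\theta) > \mathbb{M}(\theta_0) \,,
$$
contradicting $\limsup_{n \to \infty} \mathbb{M}(\theta_0^s) \le \mathbb{M}(\theta_0)$.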
Note that we need consistency of $\theta_0^s$ here, as the lower bound in Lemma \ref{lem:pop_curv_nonsmooth} is only valid in a neighborhood around $\theta_0$. As $\theta_0^s$ is the minimizer of $\mathbb{M}^s(\theta)$, from the first order condition we have:
\begin{align}
\label{eq:beta_grad}\nabla_{\beta}\mathbb{M}^s_n(\theta_0^s) & = -2\mathbb{E}\left[X(Y - X^{\top}\beta_0^s)\right] + 2\mathbb{E} \left\{\left[X_iX_i^{\top}\delta_0^s\right] K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right)\right\} = 0 \\
\label{eq:delta_grad}\nabla_{\delta}\mathbb{M}^s_n(\theta_0^s) & = \mathbb{E} \left\{\left[-2X_i\left(Y_i - X_i^{\top}\beta_0^s\right) + 2X_iX_i^{\top}\delta_0^s\right] K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right)\right\} = 0\\
\label{eq:psi_grad}\nabla_{\psi}\mathbb{M}^s_n(\theta_0^s) & = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2\left(Y_i - X_i^{\top}\beta_0^s\right)X_i^{\top}\delta_0^s + (X_i^{\top}\delta_0^s)^2\right]\tilde Q_i K'\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right)\right\} = 0
\end{align}
We first show that $(\tilde \psi^s_0 - \tilde \psi_0)/\sigma_n \to 0$ by \emph{reductio ad absurdum}. From equation \eqref{eq:est_dist_bound}, we know $\|\psi_0^s - \psi_0\|/\sigma_n = O(1)$. Hence it has a convergent subsequence $\psi^s_{0, n_k}$ with $(\tilde \psi^s_{0, n_k} - \tilde \psi_0)/\sigma_n \to h$. If we can prove that $h = 0$, then every subsequence of $\|\psi_0^s - \psi_0\|/\sigma_n$ has a further subsequence converging to $0$, which implies that $\|\psi_0^s - \psi_0\|/\sigma_n$ itself converges to $0$. To save notation, we prove that if $(\psi_0^s - \psi_0)/\sigma_n \to h$ then $h = 0$. We start with equation \eqref{eq:psi_grad}. Define $\eta = (\tilde \psi^s_0 - \tilde \psi_0)/\sigma_n = (\psi_0^s - \psi_0)/\sigma_n$, where $\tilde \psi$ denotes all the coordinates of $\psi$ except the first one, as the first coordinate of $\psi$ is always assumed to be $1$ for identifiability.
\allowdisplaybreaks
\begin{align}
0 & = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2\left(Y_i - X_i^{\top}\beta_0^s\right)X_i^{\top}\delta_0^s + (X_i^{\top}\delta_0^s)^2\right]\tilde Q_i K'\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right)\right\} \notag \\
& = \frac{1}{\sigma_n}\mathbb{E}\left[\left( -2\delta_0^{s^{\top}} XX^{\top}(\beta_0 - \beta^s_0) -2\delta_0^{s^{\top}} XX^{\top}\delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} + (X^{\top}\delta_0^s)^2\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right] \notag \\
& = \frac{1}{\sigma_n}\mathbb{E}\left[\left( -2\delta_0^{s^{\top}} XX^{\top}(\beta_0 - \beta^s_0) -2\delta_0^{s^{\top}} XX^{\top}(\delta_0 - \delta_0^s)
\mathds{1}_{Q^{\top}\psi_0 > 0} \right. \right. \notag \\
& \hspace{10em} \left. \left. + (X^{\top}\delta_0^s)^2\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right] \notag \\
& = \frac{-2}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{s^{\top}} g(Q)(\beta_0 - \beta^s_0)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right] \notag \\
& \qquad \qquad \qquad - \frac{2}{\sigma_n} \mathbb{E}\left[\left(\delta_0^{s^{\top}}g(Q)(\delta_0 - \delta^s_0)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \notag \\
& \hspace{15em} + \frac{1}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{s^{\top}}g(Q)\delta^s_0\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \notag \\
& = -\underbrace{\frac{2}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{s^{\top}} g(Q)(\beta_0 - \beta^s_0)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right]}_{T_1} \notag \\
& \qquad \qquad -\underbrace{\frac{2}{\sigma_n} \mathbb{E}\left[\left(\delta_0^{s^{\top}}g(Q)(\delta_0 - \delta^s_0)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]}_{T_2} \notag \\
& \qquad \qquad \qquad + \underbrace{\frac{1}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{\top}g(Q)\delta_0\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]}_{T_3} \notag \\
& \qquad \qquad \qquad \qquad + \underbrace{\frac{2}{\sigma_n}\mathbb{E}\left[\left((\delta_0 - \delta_0^s)^{\top}g(Q)\delta_0\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]}_{T_4} \notag \\
\label{eq:pop_est_conv_1} & = -T_1 - T_2 + T_3 + T_4
\end{align}
As mentioned earlier, there is a bijection between $(Q_1, \tilde Q)$ and $(Q^{\top}\psi_0, \tilde Q)$. The map of one side is obvious. The other side is also trivial as the first coordinate of $\psi_0$ is 1, which makes $Q^{\top}\psi_0 = Q_1 + \tilde Q^{\top}\tilde \psi_0$:
$$
(Q^{\top}\psi_0, \tilde Q) \mapsto (Q^{\top}\psi_0 - \tilde Q^{\top}\tilde \psi_0, \tilde Q) \,.
$$
We first show that $T_1, T_2$ and $T_4$ are $o(1)$. Towards that end first note that:
\begin{align*}
|T_1| & \le \frac{2}{\sigma_n}\mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right]\|\delta_0^s\|\|\beta_0 - \beta_0^s\| \\
|T_2| & \le \frac{2}{\sigma_n} \mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right]\|\delta_0^s\|\|\delta_0 - \delta_0^s\| \\
|T_4| & \le \frac{2}{\sigma_n} \mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right]\|\delta_0^s\|\|\delta_0 - \delta_0^s\|
\end{align*}
From the above bounds, since $\|\beta_0 - \beta_0^s\|, \|\delta_0 - \delta_0^s\| \to 0$ and $\|\delta_0^s\| = O(1)$, to show that the above terms are $o(1)$ it suffices to show:
$$
\frac{1}{\sigma_n}\mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right] = O(1) \,.
$$
Towards that direction, define $\eta = (\tilde \psi_0^s - \tilde \psi_0)/\sigma_n$:
\begin{align*}
& \frac{1}{\sigma_n}\mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right] \\
& \le c_+ \frac{1}{\sigma_n}\mathbb{E}\left[\|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right] \\
& = c_+ \frac{1}{\sigma_n}\int \int \|\tilde q\| \left|K'\left(\frac{t}{\sigma_n} + \tilde q^{\top}\eta \right)\right| f_0\left(t \mid \tilde q\right) f(\tilde q) \ dt \ d\tilde q \\
& = c_+ \int \int \|\tilde q\| \left|K'\left(t + \tilde q^{\top}\eta \right)\right| f_0\left(\sigma_n t \mid \tilde q\right) f(\tilde q) \ dt \ d\tilde q \\
& = c_+ \int \|\tilde q\| f_0\left(0 \mid \tilde q\right) \int \left|K'\left(t + \tilde q^{\top}\eta \right)\right| \ dt \ f(\tilde q) \ d\tilde q + R_1 \\
& = c_+ \int \left|K'\left(t\right)\right| dt \ \mathbb{E}\left[\|\tilde Q\| f_0(0 \mid \tilde Q)\right] + R_1 = O(1) + R_1 \,.
\end{align*}
Therefore, all it remains to show is $R_1$ is also $O(1)$ (or of smaller order):
\begin{align*}
|R_1| & = \left|c_+ \int \int \|\tilde q\| \left|K'\left(t + \tilde q^{\top}\eta \right)\right| \left(f_0\left(\sigma_n t \mid \tilde q\right) - f_0(0 \mid \tilde q) \right)f(\tilde q) \ dt \ d\tilde q\right| \\
& \le c_+ F_+ \sigma_n \int \|\tilde q\| \int_{-\infty}^{\infty} |t|\left|K'\left(t + \tilde q^{\top}\eta \right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& = c_+ F_+ \sigma_n \int \|\tilde q\| \int_{-\infty}^{\infty} |t - \tilde q^{\top}\eta|\left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& \le c_+ F_+ \sigma_n \left[\int \|\tilde q\| \int_{-\infty}^{\infty} |t|\left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q + \int \|\tilde q\|^2\|\eta\| \int_{-\infty}^{\infty}\left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q\right] \\
& = c_+ F_+ \sigma_n \left[\left(\int_{-\infty}^{\infty} |t|\left|K'\left(t\right)\right| \ dt\right) \times \mathbb{E}[\|\tilde Q\|] + \left(\int_{-\infty}^{\infty}\left|K'\left(t\right)\right| \ dt\right) \times \|\eta\| \ \mathbb{E}[\|\tilde Q\|^2]\right] \\
& = O(\sigma_n) = o(1) \,.
\end{align*}
This shows that $T_1$, $T_2$ and $T_4$ are all $o(1)$. For $T_3$, the limit is non-degenerate and can be calculated as follows:
\begin{align*}
T_3 &= \frac{1}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{\top}g(Q)\delta_0\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \frac{1}{\sigma_n} \int \int \left(\delta_0^{\top}g(t - \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q K'\left(\frac{t}{\sigma_n} + \tilde q^{\top} \eta\right)\left(1 - 2\mathds{1}_{t > 0}\right) \ f_0(t \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q \\
& = \int \int \left(\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q K'\left(t + \tilde q^{\top} \eta\right)\left(1 - 2\mathds{1}_{t > 0}\right) \ f_0(\sigma_n t \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q \\
& = \int \int \left(\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q K'\left(t + \tilde q^{\top} \eta\right)\left(1 - 2\mathds{1}_{t > 0}\right) \ f_0(0 \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q + R \\
& = \int \left(\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q f_0(0 \mid \tilde q) \left[\int_{-\infty}^0 K'\left(t + \tilde q^{\top} \eta\right) \ dt - \int_0^\infty K'\left(t + \tilde q^{\top}\eta\right) \ dt \right] \ f(\tilde q) \ d\tilde q + R \\
&= \int \left(\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q f_0(0 \mid \tilde q)\left(2K\left(\tilde q^{\top}\eta\right) - 1\right) \ f(\tilde q) \ d\tilde q + R \\
& = \mathbb{E}\left[\tilde Q f_0(0 \mid \tilde Q) \left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\left(2K(\tilde Q^{\top} \eta)- 1\right)\right] + R
\end{align*}
That the remainder $R$ is $o(1)$ again follows by a similar calculation as before and is hence skipped. Therefore, when $\eta = (\tilde \psi_0^s - \tilde \psi_0)/\sigma_n \to h$, we have:
$$
T_3 \overset{n \to \infty}{\longrightarrow} \mathbb{E}\left[\tilde Q f_0(0 \mid \tilde Q) \left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\left(2K(\tilde Q^{\top}h)- 1\right)\right] \,,
$$
which along with equation \eqref{eq:pop_est_conv_1} implies:
$$
\mathbb{E}\left[\tilde Q f_0(0 \mid \tilde Q) \left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\left(2K(\tilde Q^{\top}h)- 1\right)\right] = 0 \,.
$$
Taking inner product with respect to $h$ on both side of the above equation we obtain:
$$
\mathbb{E}\left[\tilde Q^{\top}h f_0(0 \mid \tilde Q) \left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\left(2K(\tilde Q^{\top}h)- 1\right)\right] = 0
$$
Now, by the symmetry of our kernel $K$, $K(0)=1/2$ and hence $2K(x)-1$ has the same sign as $x$; therefore $\left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\tilde Q^{\top}h f_0(0 \mid \tilde Q) (2K(\tilde Q^{\top}h) - 1) \ge 0$ almost surely. As the expectation is $0$, we further deduce that $\tilde Q^{\top}h f_0(0 \mid \tilde Q) (2K(\tilde Q^{\top}h)-1) = 0$ almost surely, which further implies $h = 0$.
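For reference, the factor $2K(\cdot)-1$ used in the computation of $T_3$ above comes from an elementary identity: since $K$ is a distribution function with $K(-\infty)=0$ and $K(+\infty)=1$,
$$
\int_{-\infty}^{0} K'\left(t + a\right) \ dt - \int_{0}^{\infty} K'\left(t + a\right) \ dt = K(a) - \big(1 - K(a)\big) = 2K(a) - 1 \quad \mbox{ for any } a \in \mathbb{R} \,.
$$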
\\\\
\noindent
We next prove that $(\beta_0 - \beta^s_0)/\sqrt{\sigma_n} \to 0$ and $(\delta_0 - \delta^s_0)/\sqrt{\sigma_n} \to 0$ using equations \eqref{eq:beta_grad} and \eqref{eq:delta_grad}. We start with equation \eqref{eq:beta_grad}:
\begin{align}
0 & = -\mathbb{E}\left[X(Y - X^{\top}\beta_0^s)\right] + \mathbb{E} \left\{\left[X_iX_i^{\top}\delta_0^s\right] K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right)\right\} \notag \\
& = -\mathbb{E}\left[XX^{\top}(\beta_0 - \beta_0^s)\right] - \mathbb{E}[XX^{\top}\delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}] + \mathbb{E} \left[ g(Q)K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right)\right]\delta_0^s \notag \\
& = -\Sigma_X(\beta_0 - \beta_0^s) -\mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right](\delta_0 - \delta_0^s) + \mathbb{E} \left[g(Q)\left\{K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right]\delta_0^s \notag \\
& = \Sigma_X\frac{(\beta_0^s - \beta_0)}{\sigma_n} + \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\frac{(\delta_0^s - \delta_0)}{\sigma_n} + \frac{1}{\sigma_n}\mathbb{E} \left[g(Q)\left\{K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right]\delta_0^s \notag \\
& = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\Sigma_X \frac{(\beta_0^s - \beta_0)}{\sigma_n} + \frac{\delta_0^s - \delta_0}{\sigma_n} \notag \\
\label{eq:deriv1} & \qquad \qquad \qquad \qquad + \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \frac{1}{\sigma_n}\mathbb{E} \left[g(Q)\left\{K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right]\delta_0^s
\end{align}
From equation \eqref{eq:delta_grad} we have:
\begin{align}
0 & = \mathbb{E} \left\{\left[-X\left(Y - X^{\top}\beta_0^s\right) + XX^{\top}\delta_0^s\right] K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right\} \notag \\
& = -\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right](\beta_0 - \beta_0^s) - \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\delta_0 \notag \\
& \hspace{20em}+ \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right]\delta_0^s \notag \\
& = -\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right](\beta_0 - \beta_0^s) - \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right](\delta_0 - \delta_0^s) \notag \\
& \hspace{20em} + \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]\delta_0^s \notag \\
& = \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right]\frac{(\beta_0^s - \beta_0)}{\sigma_n} + \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\frac{(\delta^s_0 - \delta_0)}{\sigma_n} \notag \\
& \hspace{20em} + \frac{1}{\sigma_n}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]\delta_0^s \notag \\
& = \left( \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right]\frac{(\beta_0^s - \beta_0)}{\sigma_n} + \frac{(\delta^s_0 - \delta_0)}{\sigma_n} \notag \\
\label{eq:deriv2} & \qquad \qquad \qquad + \left( \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \frac{1}{\sigma_n}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]\delta_0^s
\end{align}
Subtracting equation \eqref{eq:deriv2} from \eqref{eq:deriv1} we obtain:
$$
0 = A_n \frac{(\beta_0^s - \beta_0)}{\sigma_n} + b_n \,,
$$
i.e.
$$
\lim_{n \to \infty} \frac{(\beta_0^s - \beta_0)}{\sigma_n} = \lim_{n \to \infty} -A_n^{-1}b_n \,,
$$
where:
\begin{align*}
A_n & = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\Sigma_X \\
& \qquad \qquad - \left( \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right] \\
b_n & = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \frac{1}{\sigma_n}\mathbb{E} \left[g(Q)\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right]\delta_0^s \\
& \qquad - \left( \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \frac{1}{\sigma_n}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]\delta_0^s
\end{align*}
It is immediate via the dominated convergence theorem (DCT) that as $n \to \infty$:
\begin{align}
\label{eq:limit_3} \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right] & \longrightarrow \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \,. \\
\label{eq:limit_4} \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \longrightarrow \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \,.
\end{align}
From equations \eqref{eq:limit_3} and \eqref{eq:limit_4} it is immediate that:
\begin{align*}
\lim_{n \to \infty} A_n & = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\Sigma_X - I \\
& = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 \le 0}\right]\right) := A\,.
\end{align*}
Next observe that:
\begin{align}
& \frac{1}{\sigma_n} \mathbb{E}\left[g(Q)\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right] \notag \\
& = \frac{1}{\sigma_n} \mathbb{E}\left[g(Q)\left\{K\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \tilde Q^{\top}\tilde \eta\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right] \notag \\
& = \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} g(\sigma_nt - \tilde q^{\top}\tilde \psi_0, \tilde q)\left[K\left(t + \tilde q^{\top}\tilde \eta\right) - \mathds{1}_{t > 0}\right] f(\sigma_n t \mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
\label{eq:limit_1} & \longrightarrow \mathbb{E}\left[g(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)f(0 \mid \tilde Q)\right] \cancelto{0}{\int_{-\infty}^{\infty} \left[K\left(t\right) - \mathds{1}_{t > 0}\right] \ dt} \,.
\end{align}
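The vanishing integral in \eqref{eq:limit_1} is exactly the cancellation of the two tails of the kernel. As a numerical sanity check (illustrative only, not part of the formal argument), for the Gaussian-CDF kernel $K = \Phi$ — the kernel choice stated later in this appendix — the integral $\int_{-\infty}^{\infty}\left[K(t) - \mathds{1}_{t > 0}\right] dt$ can be evaluated by a midpoint rule:

```python
import math

def Phi(x):
    """Standard normal CDF, playing the role of the kernel K."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def midpoint_integral(f, lo=-10.0, hi=10.0, n=100_000):
    """Midpoint rule; the integrand decays fast, so truncating to [-10, 10] is harmless."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# K(t) - 1{t > 0} is an odd function for K = Phi, so the two tails cancel.
bias_integral = midpoint_integral(lambda t: Phi(t) - (1.0 if t > 0 else 0.0))
```

The symmetry of $\phi$ makes the positive and negative contributions cancel exactly, which is what lets the first limit in the expression for $b_n$ vanish.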
A similar calculation yields:
\begin{align}
\label{eq:limit_2} & \frac{1}{\sigma_n} \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \notag \\
& \longrightarrow \mathbb{E}[g(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)f_0(0 \mid \tilde Q)]\int_{-\infty}^{\infty} \left[K\left(t\right)\mathds{1}_{t \le 0}\right] \ dt \,.
\end{align}
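The limiting constant in \eqref{eq:limit_2} involves $\int_{-\infty}^{\infty} K(t)\mathds{1}_{t \le 0}\, dt$. For the Gaussian-CDF kernel $K = \Phi$ this equals $\phi(0) = 1/\sqrt{2\pi} \approx 0.399$ by integration by parts; a minimal numerical check (illustrative only):

```python
import math

def Phi(x):
    """Standard normal CDF, the kernel K used later in the appendix."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def midpoint_integral(f, lo=-10.0, hi=0.0, n=100_000):
    """Midpoint rule over (-inf, 0], truncated to [-10, 0]."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# int_{-inf}^{0} Phi(t) dt = phi(0) = 1/sqrt(2*pi) by integration by parts.
val = midpoint_integral(Phi)
target = 1.0 / math.sqrt(2.0 * math.pi)
```

In particular the constant multiplying $\delta_0$ in the limit of $b_n$ is finite and strictly positive.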
Combining equations \eqref{eq:limit_1} and \eqref{eq:limit_2} we conclude:
\begin{align*}
\lim_{n \to \infty} b_n &= -\left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \mathbb{E}[g(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0f_0(0 \mid \tilde Q)]\int_{-\infty}^{\infty} \left[K\left(t\right)\mathds{1}_{t \le 0}\right] \ dt \\
& := b \,.
\end{align*}
which further implies,
$$
\lim_{n \to \infty} \frac{(\beta_0^s - \beta_0)}{\sigma_n} = -A^{-1}b \implies (\beta_0^s - \beta_0) = O(\sigma_n) = o(\sqrt{\sigma_n})\,,
$$
and by similar calculations:
$$
(\delta_0^s - \delta_0) = o(\sqrt{\sigma_n}) \,.
$$
This completes the proof.
\end{proof}
\subsection{Proof of Lemma \ref{lem:pop_curv_nonsmooth}}
\begin{proof}
From the definition of $\mathbb{M}(\theta)$ it is immediate that $\mathbb{M}(\theta_0) = \mathbb{E}[{\epsilon}^2] = \sigma^2$. For any general $\theta$:
\begin{align*}
\mathbb{M}(\theta) & = \mathbb{E}\left[\left(Y - X^{\top}\left(\beta + \delta\mathds{1}_{Q^{\top}\psi > 0}\right)\right)^2\right] \\
& = \sigma^2 + \mathbb{E}\left[\left( X^{\top}\left(\beta + \delta\mathds{1}_{Q^{\top}\psi > 0} - \beta_0 - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right)^2\right] \\
& \ge \sigma^2 + c_- \mathbb{E}_Q\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right]
\end{align*}
This immediately implies:
$$
\mathbb{M}(\theta) - \mathbb{M}(\theta_0) \ge c_- \mathbb{E}\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right] \,.
$$
\noindent
For notational simplicity, define $p_{\psi} = \mathbb{P}(Q^{\top}\psi > 0)$. Expanding the RHS we have:
\begin{align}
& \mathbb{E}\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right] \notag \\
& = \|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}\mathbb{E}\left[\delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right] + \mathbb{E}\left[\left\|\delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right\|^2\right] \notag \\
& = \|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}\mathbb{E}\left[\delta\mathds{1}_{Q^{\top}\psi > 0}-\delta\mathds{1}_{Q^{\top}\psi_0 > 0} + \delta\mathds{1}_{Q^{\top}\psi_0 > 0} - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \notag \\
& \qquad \qquad \qquad \qquad \qquad+ \mathbb{E}\left[\left\|\delta\mathds{1}_{Q^{\top}\psi > 0}-\delta\mathds{1}_{Q^{\top}\psi_0 > 0} + \delta\mathds{1}_{Q^{\top}\psi_0 > 0} - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right\|^2\right] \notag \\
& = \|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}(\delta - \delta_0)p_{\psi_0} + \|\delta - \delta_0\|^2 p_{\psi_0} \notag \\
& \qquad \qquad \qquad + 2(\beta - \beta_0)^{\top}\delta\left(p_{\psi} - p_{\psi_0}\right) + \|\delta\|^2 \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) \notag \\
\label{eq:nsb1} & \qquad \qquad \qquad \qquad \qquad - 2\delta^{\top}(\delta - \delta_0)\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right)
\end{align}
Using the fact that $2ab \ge -\left(\frac{a^2}{c} + cb^2\right)$ for any constant $c > 0$ (since $\left(a/\sqrt{c} + \sqrt{c}\,b\right)^2 \ge 0$) we have:
\begin{align*}
& \|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}(\delta - \delta_0)p_{\psi_0} + \|\delta - \delta_0\|^2 p_{\psi_0} \\
& \ge \|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2 p_{\psi_0} - \frac{\|\beta - \beta_0\|^2 p_{\psi_0}}{c} - c \|\delta - \delta_0\|^2 p_{\psi_0} \\
& = \|\beta - \beta_0\|^2\left(1 - \frac{p_{\psi_0}}{c}\right) + \|\delta - \delta_0\|^2 p_{\psi_0} (1 - c) \,.
\end{align*}
for any $c > 0$. To make the RHS non-negative we pick $p_{\psi_0} < c < 1$ and conclude that:
\begin{equation}
\label{eq:nsb2}
\|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}(\delta - \delta_0)p_{\psi_0} + \|\delta - \delta_0\|^2 p_{\psi_0} \gtrsim \left( \|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2\right) \,.
\end{equation}
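The elementary inequality behind \eqref{eq:nsb2}, $2ab \ge -\left(a^2/c + c b^2\right)$ for $c > 0$, is just $(a/\sqrt{c} + \sqrt{c}\, b)^2 \ge 0$; a quick randomized spot check (purely illustrative):

```python
import random

random.seed(0)
# Verify 2ab + a^2/c + c*b^2 >= 0 for random a, b and c > 0:
# this quantity is (a/sqrt(c) + sqrt(c)*b)^2, hence nonnegative.
worst_gap = min(
    2 * a * b + (a * a / c + c * b * b)
    for a, b, c in (
        (random.uniform(-5, 5), random.uniform(-5, 5), random.uniform(1e-3, 10))
        for _ in range(100_000)
    )
)
```

Choosing $c$ strictly between $p_{\psi_0}$ and $1$ then makes both coefficients in the lower bound positive.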
For the last three summands on the RHS of equation \eqref{eq:nsb1}:
\begin{align}
& 2(\beta - \beta_0)^{\top}\delta\left(p_{\psi} - p_{\psi_0}\right) + \|\delta\|^2 \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) \notag \\
& \qquad \qquad - 2\delta^{\top}(\delta - \delta_0)\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& = 2(\beta - \beta_0)^{\top}\delta \mathbb{P}\left(Q^{\top}\psi > 0, Q^{\top}\psi_0 < 0\right) - 2(\beta - \beta_0)^{\top}\delta \mathbb{P}\left(Q^{\top}\psi < 0, Q^{\top}\psi_0 > 0\right) \notag \\
& \qquad \qquad + \|\delta\|^2 \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) - 2\delta^{\top}(\delta - \delta_0)\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& = \left[\|\delta\|^2 - 2(\beta - \beta_0)^{\top}\delta - 2\delta^{\top}(\delta - \delta_0)\right]\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad + \left[\|\delta\|^2 + 2(\beta - \beta_0)^{\top}\delta\right]\mathbb{P}\left(Q^{\top}\psi > 0, Q^{\top}\psi_0 < 0\right) \notag \\
& = \left[\|\delta_0\|^2 - 2(\beta - \beta_0)^{\top}(\delta - \delta_0) - 2(\beta - \beta_0)^{\top}\delta_0 - \|\delta - \delta_0\|^2\right]\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& \qquad + \left[\|\delta_0\|^2 + \|\delta - \delta_0\|^2 + 2(\delta - \delta_0)^{\top}\delta_0 + 2(\beta - \beta_0)^{\top}(\delta - \delta_0) + 2(\beta - \beta_0)^{\top}\delta_0\right]\mathbb{P}\left(Q^{\top}\psi > 0, Q^{\top}\psi_0 < 0\right) \notag \\
& \ge \left[\|\delta_0\|^2 - 2\|\beta - \beta_0\|\|\delta - \delta_0\| - 2\|\beta - \beta_0\|\|\delta_0\| - \|\delta - \delta_0\|^2\right]\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& \qquad + \left[\|\delta_0\|^2 + \|\delta - \delta_0\|^2 + 2\|\delta - \delta_0\|\|\delta_0\| + 2\|\beta - \beta_0\|\|\delta - \delta_0\| + 2\|\beta - \beta_0\|\|\delta_0\|\right]\mathbb{P}\left(Q^{\top}\psi > 0, Q^{\top}\psi_0 < 0\right) \notag \\
\label{eq:nsb3} & \gtrsim \|\delta_0\|^2 \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) \gtrsim \|\psi - \psi_0\| \hspace{0.2in} [\text{By Assumption }\ref{eq:assm}]\,.
\end{align}
Combining equations \eqref{eq:nsb2} and \eqref{eq:nsb3} completes the proof of the lower bound. The upper bound is easier: note that by our previous calculation:
\begin{align*}
\mathbb{M}(\theta) - \mathbb{M}(\theta_0) & = \mathbb{E}\left[\left( X^{\top}\left(\beta + \delta\mathds{1}_{Q^{\top}\psi > 0} - \beta_0 - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right)^2\right] \\
& \le c_+\mathbb{E}\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right] \\
& = c_+\mathbb{E}\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta\mathds{1}_{Q^{\top}\psi_0 > 0} + \delta\mathds{1}_{Q^{\top}\psi_0 > 0} - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right] \\
& \lesssim \left[\|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2 + \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right)\right] \\
& \lesssim \left[\|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2 + \|\psi - \psi_0\|\right] \,.
\end{align*}
This completes the entire proof.
\end{proof}
\subsection{Proof of Lemma \ref{lem:uniform_smooth}}
\begin{proof}
The difference of the two losses can be bounded as follows:
\begin{align*}
\left|\mathbb{M}^s(\theta) - \mathbb{M}(\theta)\right| & = \left|\mathbb{E}\left[\left\{-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right\}\left(K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi > 0}\right)\right]\right| \\
& \le \mathbb{E}\left[\left|-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right|\left|K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi > 0}\right|\right] \\
& := \mathbb{E}\left[m(Q)\left|K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi > 0}\right|\right]
\end{align*}
where $m(Q) = \mathbb{E}\left[\left|-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right| \mid Q\right]$. This function can be bounded as follows:
\begin{align*}
m(Q) & = \mathbb{E}\left[\left|-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right| \mid Q\right] \\
& \le \mathbb{E}[ (X^{\top}\delta)^2 \mid Q] + 2\mathbb{E}\left[\left|(\beta - \beta_0)^{\top}XX^{\top}\delta\right| \mid Q\right] + 2\mathbb{E}\left[\left|\delta_0^{\top}XX^{\top}\delta\right| \mid Q\right] \\
& \le c_+\left(\|\delta\|^2 + 2\|\beta - \beta_0\|\|\delta\| + 2\|\delta\|\|\delta_0\|\right) \lesssim 1 \,,
\end{align*}
as our parameter space is compact. For the rest of the calculation define $\eta = (\tilde \psi - \tilde \psi_0)/\sigma_n$. The definition of $\eta$ may change from proof to proof, but it will be clear from the context. Therefore we have:
\begin{align*}
\left|\mathbb{M}^s(\theta) - \mathbb{M}(\theta)\right| & \lesssim \mathbb{E}\left[\left|K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi > 0}\right|\right] \\
& = \mathbb{E}\left[\left| \mathds{1}\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \eta^{\top}\tilde{Q} \ge 0\right) - K\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \eta^{\top}\tilde{Q}\right)\right|\right] \\
& = \sigma_n \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | f_0(\sigma_n (t-\eta^{\top}\tilde{q}) | \tilde{q}) \ dt \ dP(\tilde{q}) \\
& \le f_+ \sigma_n \int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | \ dt \lesssim \sigma_n \,.
\end{align*}
where the finiteness of the integral over $t$ follows from the definition of the kernel. This completes the proof.
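As an aside, for the Gaussian-CDF kernel $K = \Phi$ (the choice stated later in this appendix), the integral $\int_{-\infty}^{\infty} \left|\mathds{1}(t \ge 0) - K(t)\right| dt$ equals $2\phi(0) \approx 0.798$, which the following numerical sketch confirms (illustrative only):

```python
import math

def Phi(x):
    """Standard normal CDF, the kernel K."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def midpoint_integral(f, lo=-10.0, hi=10.0, n=100_000):
    """Midpoint rule; the integrand decays like a Gaussian tail."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# |1{t >= 0} - Phi(t)| contributes phi(0) from each tail, so the total is 2*phi(0).
val = midpoint_integral(lambda t: abs((1.0 if t >= 0 else 0.0) - Phi(t)))
target = 2.0 / math.sqrt(2.0 * math.pi)
```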
\end{proof}
\subsection{Proof of Lemma \ref{lem:pop_smooth_curvarture}}
\begin{proof}
First note that we can write:
\begin{align}
& \mathbb{M}^s(\theta) - \mathbb{M}^s(\theta_0^s) \notag \\
& = \underbrace{\mathbb{M}^s(\theta) - \mathbb{M}(\theta)}_{\ge -K_1\sigma_n} + \underbrace{\mathbb{M}(\theta) - \mathbb{M}(\theta_0)}_{\underbrace{\ge u_- d^2(\theta, \theta_0)}_{\ge \frac{u_-}{2} d^2(\theta, \theta_0^s) - u_-\sigma_n }} + \underbrace{\mathbb{M}(\theta_0) - \mathbb{M}(\theta_0^s)}_{\ge - u_+ d^2(\theta_0, \theta_0^s) \ge - u_+\sigma_n} + \underbrace{\mathbb{M}(\theta_0^s) - \mathbb{M}^s(\theta_0^s)}_{\ge - K_1 \sigma_n} \notag \\
& \ge \frac{u_-}{2}d^2(\theta, \theta_0^s) - (2K_1 + \xi)\sigma_n \notag \\
& \ge \frac{u_-}{2}\left[\|\beta - \beta^s_0\|^2 + \|\delta - \delta^s_0\|^2 + \|\psi - \psi^s_0\|\right] - (2K_1 + \xi)\sigma_n \notag \\
& \ge \left[\frac{u_-}{2}\left(\|\beta - \beta^s_0\|^2 + \|\delta - \delta^s_0\|^2\right) + \frac{u_-}{4}\|\psi - \psi^s_0\|\right]\mathds{1}_{\|\psi - \psi^s_0\| > \frac{4(2K_1 + \xi)}{u_-}\sigma_n} \notag \\
\label{eq:lower_curv_smooth} & \gtrsim \left[\|\beta - \beta^s_0\|^2 + \|\delta - \delta^s_0\|^2 + \|\psi - \psi^s_0\|\right]\mathds{1}_{\|\psi - \psi^s_0\| > \frac{4(2K_1 + \xi)}{u_-}\sigma_n}
\end{align}
where $\xi$ can be taken arbitrarily close to $0$. Henceforth we set $\mathcal{K} = 4(2K_1 + \xi)/u_-$. For the other part of the curvature (i.e. when $\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n$) we start with a second-order Taylor expansion of the smoothed loss function around $\theta_0^s$; the first-order term vanishes since $\nabla \mathbb{M}^s(\theta_0^s) = 0$, so for some $\theta^*$ between $\theta$ and $\theta_0^s$:
\begin{align*}
\mathbb{M}^s(\theta) - \mathbb{M}^s(\theta_0^s) = \frac12 (\theta - \theta_0^s)^{\top}\nabla^2 \mathbb{M}^s(\theta^*)(\theta - \theta_0^s)
\end{align*}
Recall the definition of $\mathbb{M}^s(\theta)$:
$$
\mathbb{M}^s(\theta) = \mathbb{E}\left(Y - X^{\top}\beta\right)^2 + \mathbb{E} \left\{\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}
$$
The partial derivatives of $\mathbb{M}^s(\theta)$ with respect to $(\beta, \delta, \psi)$ were derived in equations \eqref{eq:beta_grad}--\eqref{eq:psi_grad}. From these, we calculate the Hessian of $\mathbb{M}^s(\theta)$:
\begin{align*}
\nabla_{\beta\beta}\mathbb{M}^s(\theta) & = 2\Sigma_X \\
\nabla_{\delta\delta}\mathbb{M}^s(\theta) & = 2 \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right] = 2 \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right] \\
\nabla_{\psi\psi} \mathbb{M}^s(\theta) & = \frac{1}{\sigma_n^2}\mathbb{E} \left\{\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right]\tilde Q\tilde Q^{\top} K''\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
\nabla_{\beta \delta}\mathbb{M}^s(\theta) & = 2 \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right] = 2 \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right] \\
\nabla_{\beta \psi}\mathbb{M}^s(\theta) & = \frac{2}{\sigma_n}\mathbb{E}\left(g(Q)\delta\tilde Q^{\top}K'\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right) \\
\nabla_{\delta \psi} \mathbb{M}^s(\theta) & = \frac{2}{\sigma_n}\mathbb{E} \left\{\left[-X\left(Y - X^{\top}\beta\right) + XX^{\top}\delta\right]\tilde Q^{\top} K'\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \,.
\end{align*}
where we use $\tilde \eta$ as a generic notation for $(\tilde \psi - \tilde \psi_0)/\sigma_n$. For notational simplicity, we define $\gamma = (\beta, \delta)$ and let $\nabla^2\mathbb{M}^{s, \gamma}(\theta)$, $\nabla^2\mathbb{M}^{s, \gamma \psi}(\theta), \nabla^2\mathbb{M}^{s, \psi \psi}(\theta)$ denote the corresponding blocks of the Hessian matrix. We have:
\begin{align}
\mathbb{M}^s(\theta) - \mathbb{M}^s(\theta_0^s) & = \frac12 (\theta - \theta_0^s)^{\top}\nabla^2 \mathbb{M}^s(\theta^*)(\theta - \theta_0^s) \notag \\
& = \frac12 (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*)(\gamma - \gamma_0^s) + (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma \psi}(\theta^*)(\psi - \psi_0^s) \notag \\
& \qquad \qquad \qquad \qquad + \frac12(\psi - \psi_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \psi \psi}(\theta^*)(\psi - \psi_0^s) \notag \\
\label{eq:hessian_1} & := \frac12 \left(T_1 + 2T_2 + T_3\right)
\end{align}
Note that we can write:
\begin{align*}
T_1 & = (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*)(\gamma - \gamma_0^s) \\
& = (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma}(\theta_0)(\gamma - \gamma_0^s) + (\gamma - \gamma_0^s)^{\top}\left[\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*) - \nabla^2 \mathbb{M}^{s, \gamma}(\theta_0)\right](\gamma - \gamma_0^s)
\end{align*}
The operator norm of the difference of two hessians can be bounded as:
$$
\left\|\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*) - \nabla^2 \mathbb{M}^{s, \gamma}(\theta_0)\right\|_{op} = O(\sigma_n) \,.
$$
for any $\theta^*$ in a neighborhood of $\theta_0^s$ with $\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n$. To prove this note that for any such $\theta^*$:
$$
\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*) - \nabla^2 \mathbb{M}^{s, \gamma}(\theta_0) = 2\begin{pmatrix}0 & A \\
A & A\end{pmatrix} = 2\begin{pmatrix}0 & 1 \\ 1 & 1\end{pmatrix} \otimes A
$$
$$
where:
$$
A = \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right] - \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0 }{\sigma_n}\right)\right]
$$
Therefore it is enough to show $\|A\|_{op} = O(\sigma_n)$. Towards that direction:
\begin{align*}
A & = \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right] - \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0 }{\sigma_n}\right)\right] \\
& = \sigma_n \int \int g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q)\left(K(t + \tilde q^{\top}\eta) - K(t) \right) f_0(\sigma_n t \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q \\
& = \sigma_n \left[\int \int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\left(K(t + \tilde q^{\top}\eta) - K(t) \right) f_0(0 \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q + R \right] \\
& = \sigma_n \left[\int \int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)f_0(0 \mid \tilde q) \int_t^{t + \tilde q^{\top}\eta}K'(s) \ ds \ f(\tilde q) \ dt \ d\tilde q + R \right] \\
& = \sigma_n \left[\int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)f_0(0 \mid \tilde q) \int_{-\infty}^{\infty}K'(s) \int_{s-\tilde q^{\top}\eta}^s \ dt \ ds \ f(\tilde q)\ d\tilde q + R \right] \\
& = \sigma_n \left[\int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)f_0(0 \mid \tilde q)\tilde q^{\top}\eta \ f(\tilde q)\ d\tilde q + R \right] \\
& = \sigma_n \left[\mathbb{E}\left[g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)f_0(0 \mid \tilde Q)\tilde Q^{\top}\eta\right] + R \right]
\end{align*}
Using the facts that $\left\|\mathbb{E}\left[g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)f_0(0 \mid \tilde Q)\tilde Q^{\top}\eta\right]\right\|_{op} = O(1)$ and $\|R\|_{op} = O(\sigma_n)$, we conclude the claim. From the above claim we conclude:
\begin{equation}
\label{eq:hessian_gamma}
T_1 = (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*)(\gamma - \gamma_0^s) \gtrsim \|\gamma - \gamma_0^s\|^2(1 - O(\sigma_n)) \gtrsim \|\gamma - \gamma_0^s\|^2
\end{equation}
for all large $n$.
\\\\
\noindent
We next deal with the cross term $T_2$ in equation \eqref{eq:hessian_1}. Towards that end first note that:
\begin{align*}
& \frac{1}{\sigma_n}\mathbb{E}\left((g(Q)\delta)\tilde Q^{\top}K'\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\eta^*\right)\right) \\
& = \int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left(g\left(\sigma_nt - \tilde q^{\top}\tilde \psi_0, \tilde q\right)\delta\right) K'\left(t + \tilde q^{\top}\eta^*\right) f_0(\sigma_n t \mid \tilde q) \ dt\right] \tilde q^{\top} \ f(\tilde q) \ d\tilde q \\
& = \int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left(g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right)\delta\right) K'\left(t + \tilde q^{\top}\eta^*\right) f_0(0 \mid \tilde q) \ dt\right] \tilde q^{\top} \ f(\tilde q) \ d\tilde q + R_1\\
& = \mathbb{E}\left[\left(g\left( - \tilde Q^{\top}\tilde \psi_0, \tilde Q\right)\delta\right)\tilde Q^{\top}f_0(0 \mid \tilde Q)\right] + R_1
\end{align*}
where the remainder term $R_1$ can be further decomposed as $R_1 = R_{11} + R_{12} + R_{13}$ with:
\begin{align*}
\left\|R_{11}\right\| & = \left\|\int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left(g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right)\delta\right) K'\left(t + \tilde q^{\top}\eta^*\right) (f_0(\sigma_nt\mid \tilde q) - f_0(0 \mid \tilde q)) \ dt\right] \tilde q^{\top} \ f(\tilde q) \ d\tilde q\right\| \\
& \le \int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left\|g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right)\right\|_{op}\|\delta\| \left|K'\left(t + \tilde q^{\top}\eta^*\right)\right| \left|f_0(\sigma_nt\mid \tilde q) - f_0(0 \mid \tilde q)\right| \ dt\right] \|\tilde q\| \ f(\tilde q) \ d\tilde q \\
& \le \sigma_n \dot{f}^+ c_+ \|\delta\| \int_{\mathbb{R}^{(p-1)}} \|\tilde q\| \int_{-\infty}^{\infty} |t| \left|K'\left(t + \tilde q^{\top}\eta^*\right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& \le \sigma_n \dot{f}^+ c_+ \|\delta\| \int_{\mathbb{R}^{(p-1)}} \|\tilde q\| \int_{-\infty}^{\infty} |t - \tilde q^{\top}\eta^*| \left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& \le \sigma_n \dot{f}^+ c_+ \|\delta\| \left[\int_{\mathbb{R}^{(p-1)}} \|\tilde q\| \int_{-\infty}^{\infty} |t| \left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q \right. \\
& \qquad \qquad \qquad \left. + \int_{\mathbb{R}^{(p-1)}} \|\tilde q\|^2 \|\eta^*\| \int_{-\infty}^{\infty} |K'(t)| \ dt \ f(\tilde q) \ d\tilde q\right] \\
& \le \sigma_n \dot{f}^+ c_+ \|\delta\| \left[\int_{\mathbb{R}^{(p-1)}} \|\tilde q\| \int_{-\infty}^{\infty} |t| \left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q + \mathcal{K}\int_{\mathbb{R}^{(p-1)}} \|\tilde q\|^2 \int_{-\infty}^{\infty} |K'(t)| \ dt \ f(\tilde q) \ d\tilde q\right] \\
& \lesssim \sigma_n \,.
\end{align*}
where the last bound follows from our assumptions. Similarly, for $R_{12}$:
\begin{align*}
& \|R_{12}\| \\
&= \left\|\int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left(\left(g\left(\sigma_n t- \tilde q^{\top}\tilde \psi_0, \tilde q\right) - g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right)\right)\delta\right) K'\left(t + \tilde q^{\top} \eta^*\right) f_0(0 \mid \tilde q) \ dt\right] \tilde q^{\top} \ f(\tilde q) \ d\tilde q\right\| \\
& \le \int \|\tilde q\|\|\delta\|f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} \left\|g\left(\sigma_n t- \tilde q^{\top}\tilde \psi_0, \tilde q\right) - g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right) \right\|_{op}\left|K'\left(t + \tilde q^{\top} \eta^*\right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& \le \dot{c}_+ \sigma_n \int \|\tilde q\|\|\delta\|f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} |t| \left|K'\left(t + \tilde q^{\top} \eta^*\right)\right| \ dt \ f(\tilde q) \ d\tilde q \hspace{0.2in} [\text{Assumption }\ref{eq:assm}]\\
& \lesssim \sigma_n \,.
\end{align*}
The remaining term $R_{13}$ is of higher order and can be shown to be $O(\sigma_n^2)$ using the same techniques. This implies that for all large $n$:
\begin{align*}
\left\|\nabla_{\beta \psi}\mathbb{M}^s(\theta)\right\|_{op} & = O(1) \,.
\end{align*}
and a similar calculation yields $ \left\|\nabla_{\delta \psi}\mathbb{M}^s(\theta)\right\|_{op} = O(1)$. Using this we have:
\begin{align}
T_2 & = (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma \psi}(\theta^*)(\psi - \psi_0^s) \notag \\
& = (\beta - \beta_0^s)^{\top}\nabla_{\beta \psi}^2 \mathbb{M}^{s}(\theta^*)(\psi - \psi_0^s) + (\delta - \delta_0^s)^{\top}\nabla_{\delta \psi}^2 \mathbb{M}^{s}(\theta^*)(\psi - \psi_0^s) \notag \\
& \ge - C\left[\|\beta - \beta_0^s\| + \|\delta - \delta_0^s\| \right]\|\psi - \psi_0^s\| \notag \\
& \ge -C \sqrt{\sigma_n}\left[\|\beta - \beta_0^s\| + \|\delta - \delta_0^s\| \right]\frac{\|\psi - \psi_0^s\| }{\sqrt{\sigma_n}} \notag \\
\label{eq:hessian_cross} & \gtrsim - \sqrt{\sigma_n}\left(\|\beta - \beta_0^s\|^2 + \|\delta - \delta_0^s\|^2 +\frac{\|\psi - \psi_0^s\|^2 }{\sigma_n} \right)
\end{align}
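The remainder bounds above repeatedly use the finiteness of the kernel moments $\int |K'(t)|\, dt$ and $\int |t|\, |K'(t)|\, dt$. For $K = \Phi$ we have $K' = \phi$, so these equal $1$ and $\sqrt{2/\pi}$ respectively; a short numerical check (illustrative only):

```python
import math

def phi(t):
    """Standard normal density: K' for the kernel K = Phi."""
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def midpoint_integral(f, lo=-10.0, hi=10.0, n=100_000):
    """Midpoint rule; Gaussian decay makes [-10, 10] effectively the whole line."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

total_mass = midpoint_integral(phi)                              # int |K'(t)| dt = 1
first_abs_moment = midpoint_integral(lambda t: abs(t) * phi(t))  # = sqrt(2/pi)
```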
Now for $T_3$ note that:
\allowdisplaybreaks
\begin{align*}
& \sigma_n \nabla_{\psi\psi} \mathbb{M}^s(\theta) \\
& = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2\left(Y_i - X_i^{\top}\beta\right)X_i^{\top}\delta + (X_i^{\top}\delta)^2\right]\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2\left(Y_i - X_i^{\top}\beta\right)X_i^{\top}\delta \right]\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& \qquad \qquad \qquad + \frac{1}{\sigma_n}\mathbb{E} \left\{(\delta^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2 X_i^{\top}\left(\beta_0 -\beta\right)X_i^{\top}\delta - 2(X_i^{\top}\delta_0)(X_i^{\top}\delta)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right]\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& \qquad \qquad \qquad + \frac{1}{\sigma_n}\mathbb{E} \left\{(\delta^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \frac{-2}{\sigma_n}\mathbb{E} \left\{((\beta_0 - \beta)^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& \qquad \qquad \qquad + \frac{-2}{\sigma_n}\mathbb{E} \left\{(\delta_0^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right\} \\
& \qquad \qquad \qquad \qquad \qquad \qquad + \frac{1}{\sigma_n}\mathbb{E} \left\{(\delta^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \underbrace{\frac{-2}{\sigma_n}\mathbb{E} \left\{((\beta_0 - \beta)^{\top}g(Q)\delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\}}_{M_1} \\
& \qquad \qquad \qquad + \underbrace{\frac{-2}{\sigma_n}\mathbb{E} \left\{(\delta_0^{\top}g(Q) \delta_0)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right\}}_{M_2} \\
& \qquad \qquad \qquad \qquad \qquad \qquad +
\underbrace{\frac{-2}{\sigma_n}\mathbb{E} \left\{(\delta_0^{\top} g(Q) (\delta - \delta_0))\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right\}}_{M_3} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \underbrace{\frac{1}{\sigma_n}\mathbb{E} \left\{(\delta^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\}}_{M_4} \\
& := M_1 + M_2 + M_3 + M_4
\end{align*}
We next show that $M_1$ and $M_4$ are $O(\sigma_n)$. Towards that end note that for any two vectors $v_1, v_2$:
\begin{align*}
& \frac{1}{\sigma_n}\mathbb{E} \left\{(v_1^{\top}g(Q)v_2)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \int \tilde q \tilde q^{\top} \int_{-\infty}^{\infty}(v_1^{\top}g(\sigma_nt - \tilde q^{\top}\tilde \psi_0, \tilde q)v_2) K''(t + \tilde q^{\top}\tilde \eta) f_0(\sigma_nt \mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \\
& = \int \tilde q \tilde q^{\top} (v_1^{\top}g( - \tilde q^{\top}\tilde \psi_0, \tilde q)v_2)f_0(0 \mid \tilde q) f(\tilde q) \ d\tilde q \cancelto{0}{\int_{-\infty}^{\infty} K''(t) \ dt} + R = R
\end{align*}
as $\int K''(t) \ dt = 0$ follows from our choice of kernel $K(x) = \Phi(x)$: here $K'' = \phi'$, and $\int_{-\infty}^{\infty} \phi'(t) \ dt = \phi(\infty) - \phi(-\infty) = 0$. A calculation similar to the analysis of the remainder of $T_2$ yields $\|R\|_{op} = O(\sigma_n)$.
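The identity $\int K''(t)\, dt = 0$ for $K = \Phi$ holds because $K'' = \phi'$ is an odd function; a numerical confirmation (illustrative only):

```python
import math

def phi_prime(t):
    """K'' for K = Phi: derivative of the normal density, equal to -t * phi(t)."""
    return -t * math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def midpoint_integral(f, lo=-10.0, hi=10.0, n=100_000):
    """Midpoint rule on a grid symmetric about 0, so odd integrands cancel exactly."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# phi' is odd, so its integral over the real line is zero.
val = midpoint_integral(phi_prime)
```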
\noindent
This immediately implies $\|M_1\|_{op} = O(\sigma_n)$ and $\|M_4\|_{op} = O(\sigma_n)$. Now for $M_2$:
\begin{align}
M_2 & = \frac{-2}{\sigma_n}\mathbb{E} \left\{(\delta_0^{\top}g(Q) \delta_0)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right\} \notag \\
& = -2\int \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q) \delta_0)\tilde q\tilde q^{\top} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} f_0(\sigma_n t \mid \tilde q) \ dt f(\tilde q) \ d\tilde q \notag \\
& = -2\int (\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q + R \notag \\
\label{eq:M_2_double_deriv} & = 2\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q) \delta_0)\tilde
Q\tilde Q^{\top} f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right] + R
\end{align}
where the fact that the remainder term $R$ is $O(\sigma_n)$ can be established as follows:
\begin{align*}
R & = -2\left[\int \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} f_0(\sigma_n t \mid \tilde q) \ dt f(\tilde q) \ d\tilde q \right. \\
& \qquad \qquad - \left. \int (\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right] \\
& = -2\left\{\left[\int \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} f_0(\sigma_n t \mid \tilde q) \ dt f(\tilde q) \ d\tilde q \right. \right. \\
& \qquad \qquad - \left. \left. \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0) \tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right] \right. \\
& \left. + \left[\int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right. \right. \\
& \qquad \qquad \left. \left. -\int (\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right]\right\} \\
& = -2(R_1 + R_2) \,.
\end{align*}
For $R_1$:
\begin{align*}
\left\|R_1\right\|_{op} & = \left\|\left[\int \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} f_0(\sigma_n t \mid \tilde q) \ dt f(\tilde q) \ d\tilde q \right. \right. \\
& \qquad \qquad - \left. \left. \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right] \right\|_{op} \\
& \le c_+ \int \int \|\tilde q\|^2 |K''\left(t + \tilde q^{\top}\eta^*\right)| |f_0(\sigma_n t \mid \tilde q) -f_0(0\mid \tilde q)| \ dt \ f(\tilde q) \ d\tilde q \\
& \le c_+ F_+\sigma_n \int \|\tilde q\|^2 \int |t| |K''\left(t + \tilde q^{\top}\eta^*\right)| \ dt \ f(\tilde q) \ d\tilde q \\
& = c_+ F_+\sigma_n \int \|\tilde q \|^2 \int |t - \tilde q^{\top}\eta^*| |K''\left(t\right)| \ dt \ f(\tilde q) \ d\tilde q \\
& \le c_+ F_+ \sigma_n \left[\mathbb{E}[\|\tilde Q\|^2]\int |t||K''(t)| \ dt + \|\eta^*\|\mathbb{E}[\|\tilde Q\|^3]\int |K''(t)| \ dt\right] = O(\sigma_n) \,.
\end{align*}
Similarly, for $R_2$:
\begin{align*}
\|R_2\|_{op} & = \left\|\left[\int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right. \right. \\
& \qquad \qquad \left. \left. -\int (\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right]\right\|_{op} \\
& \le F_+ \|\delta_0\|^2 \int \|\tilde q\|^2 \int_{-\infty}^{\infty} \left\|g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) - g( - \tilde q^{\top}\tilde \psi_0) \right\|_{op} |K''\left(t + \tilde q^{\top}\eta^*\right)| \ dt \ f(\tilde q) \ d\tilde q \\
& \le G_+ F_+ \sigma_n \int \|\tilde q\|^2 \int_{-\infty}^{\infty} |t||K''\left(t + \tilde q^{\top}\eta^*\right)| \ dt \ f(\tilde q) \ d\tilde q = O(\sigma_n) \,.
\end{align*}
Therefore from \eqref{eq:M_2_double_deriv} we conclude:
\begin{equation}
M_2 = 2\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0) \delta_0)\tilde
Q\tilde Q^{\top} f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right] + O(\sigma_n) \,.
\end{equation}
A similar calculation for $M_3$ yields:
\begin{equation*}
M_3 = 2\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0)(\delta - \delta_0))\tilde
Q\tilde Q^{\top} f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right] + O(\sigma_n) \,.
\end{equation*}
so that:
\begin{equation}
\|M_3\|_{op} \le c_+ \mathbb{E}\left[\|\tilde Q\|^2f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right]\|\delta_0\| \|\delta - \delta_0\| \,.
\end{equation}
Now we claim that for any $\mathcal{K} < \infty$, $\lambda_{\min} (M_2) > 0$ for all $\|\eta^*\| \le \mathcal{K}$. Towards that end, define a function $\lambda:B_{\mathbb{R}^{2d}}(1) \times B_{\mathbb{R}^{2d}}(\mathcal{K}) \to \mathbb{R}_+$ as:
$$
\lambda: (v, \eta) \mapsto 2\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0) \delta_0)
\left(v^{\top}\tilde Q\right) ^2 f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta)\right]
$$
Clearly $\lambda \ge 0$ and is continuous on a compact set, so its infimum is attained. Suppose, for the sake of contradiction, that the infimum is $0$, i.e. there exists $(v^*, \eta^*)$ such that:
$$
\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0) \delta_0)
\left(v^{*^{\top}}\tilde Q\right) ^2 f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right] = 0 \,.
$$
As $\lambda_{\min}(g(\cdot)) \ge c_+$, we must have $\left(v^{*^{\top}}\tilde Q\right) ^2 f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*) = 0$ almost surely. But by our assumptions, $\left(v^{*^{\top}}\tilde Q\right) ^2 > 0$ and $K'(\tilde Q^{\top}\eta^*) > 0$ almost surely, which forces $f_0(0 \mid \tilde Q) = 0$ almost surely, a contradiction. Hence there exists $\lambda_-$ such that:
$$
\lambda_{\min} (M_2) \ge \lambda_- > 0 \ \ \forall \ \ \|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n \,.
$$
Hence we have:
$$
\lambda_{\min}\left(\sigma_n \nabla_{\psi \psi}\mathbb{M}^s(\theta)\right) \ge \frac{\lambda_-}{2}(1 - O(\sigma_n))
$$
for all $\theta$ such that $d_*(\theta, \theta_0^s) \le {\epsilon}$. Consequently,
\begin{align}
\label{eq:hessian_psi}
& \frac{1}{\sigma_n}(\psi - \psi_0^s)^{\top}\sigma_n \nabla_{\psi \psi}\mathbb{M}^s(\tilde \theta) (\psi - \psi_0^s) \gtrsim \frac{\|\psi - \psi^s_0\|^2}{\sigma_n} \left(1- O(\sigma_n)\right)
\end{align}
From equations \eqref{eq:hessian_gamma}, \eqref{eq:hessian_cross} and \eqref{eq:hessian_psi} we have:
\begin{align*}
& \frac12 (\theta - \theta_0^s)^{\top}\nabla^2 \mathbb{M}^s(\theta^*)(\theta - \theta_0^s) \\
& \qquad \qquad \gtrsim \left[\|\beta - \beta^s_0\|^2 + \|\gamma - \gamma^s_0\|^2 + \frac{\|\psi - \psi^s_0\|^2}{\sigma_n}\right]\mathds{1}_{\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n} \,.
\end{align*}
This, along with equation \eqref{eq:lower_curv_smooth} concludes the proof.
\end{proof}
\subsection{Proof of Lemma \ref{asymp-normality}}
We start by proving an analogue of Lemma 2 of \cite{seo2007smoothed}; namely, we show that:
\begin{align*}
\lim_{n \to \infty} \mathbb{E}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] & = 0 \\
\lim_{n \to \infty} {\sf var}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] & = V^{\psi}
\end{align*}
for some matrix $V^{\psi}$ which will be specified later in the proof. For the limit of the expectation:
\begin{align*}
& \mathbb{E}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] \\
& = \sqrt{\frac{n}{\sigma_n}}\mathbb{E}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \\
& = \sqrt{\frac{n}{\sigma_n}}\mathbb{E}\left[\left(\delta_0^{\top}g(Q)\delta_0\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \\
& = \sqrt{\frac{n}{\sigma_n}} \times \sigma_n \int \int \left(\delta_0^{\top}g(\sigma_nt - \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\left(1 - 2\mathds{1}_{t > 0}\right)\tilde q K'\left(t\right) \ f_0(\sigma_n t \mid \tilde q) f (\tilde q) \ dt \ d\tilde q \\
& = \sqrt{n\sigma_n} \left[\int \tilde q \left(\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)f_0(0 \mid \tilde q) \cancelto{0}{\left(\int_{-\infty}^{\infty} \left(1 - 2\mathds{1}_{t > 0}\right)K'\left(t\right) \ dt\right)} f (\tilde q) d\tilde q + O(\sigma_n)\right] \\
& = O(\sqrt{n\sigma_n^3}) = o(1) \,.
\end{align*}
For the variance part:
\begin{align*}
& {\sf var}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] \\
& = \frac{1}{\sigma_n}{\sf var}\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \\
& = \frac{1}{\sigma_n}\mathbb{E}\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \\
& \qquad \qquad - \frac{1}{\sigma_n}\mathbb{E}^{\otimes 2}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right]
\end{align*}
The outer product of the expectation (the second term in the display above) is $o(1)$, which follows from our previous analysis of the expectation term. For the second moment:
\begin{align*}
& \frac{1}{\sigma_n}\mathbb{E}\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \\
& = \frac{1}{\sigma_n}\mathbb{E}\left(\left\{(X^{\top}\delta_0)^2(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}) -2{\epsilon} (X^{\top}\delta_0)\right\}^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \\
& = \frac{1}{\sigma_n}\left[\mathbb{E}\left((X^{\top}\delta_0)^4 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) + 4\sigma_{\epsilon}^2\mathbb{E}\left((X^{\top}\delta_0)^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \right] \\
& \longrightarrow \left(\int_{-\infty}^{\infty}(K'(t))^2 \ dt\right)\left[\mathbb{E}\left(g_{4, \delta_0}(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)\tilde Q\tilde Q^{\top}f_0(0 \mid \tilde Q)\right) \right. \\
& \hspace{10em}+ \left. 4\sigma_{\epsilon}^2\mathbb{E}\left(\delta_0^{\top}g(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0 \tilde Q\tilde Q^{\top}f_0(0 \mid \tilde Q)\right)\right] \\
& := 2V^{\psi} \,.
\end{align*}
Finally using Lemma 6 of \cite{horowitz1992smoothed} we conclude that $ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0) \implies \mathcal{N}(0, V^{\psi})$.
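The second-moment limit above is an instance of the kernel concentration identity $\frac{1}{\sigma_n}\mathbb{E}\left[h(U)\left(K'(U/\sigma_n)\right)^2\right] \to h(0) f_U(0) \int K'(t)^2 \, dt$. A numerical sketch of the scalar case $h \equiv 1$ with $U \sim \mathcal{N}(0,1)$ and the Gaussian kernel $K = \Phi$ (both distributional choices are illustrative assumptions, not taken from the text):

```python
import math

def phi(t):
    # standard Gaussian density; plays the role of K'
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def lhs(sigma, lo=-6.0, hi=6.0, n=240001):
    # (1/sigma) * E[K'(U/sigma)^2] for U ~ N(0,1), by midpoint integration
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * h
        total += phi(u / sigma) ** 2 * phi(u) * h
    return total / sigma

int_K2 = 1 / (2 * math.sqrt(math.pi))   # closed form of int phi(t)^2 dt
limit = phi(0) * int_K2                 # f_U(0) * int K'(t)^2 dt
approx = lhs(0.01)
print(abs(approx - limit) < 1e-3)
```

The factor $1/\sigma_n$ exactly offsets the shrinking effective support of $K'(\cdot/\sigma_n)$, so the limit picks up the density of $Q^{\top}\psi_0$ at zero times the squared-kernel mass.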
\\\\
\noindent
We next prove that $ \sqrt{n}\nabla \mathbb{M}_n^{s, \gamma}(\theta_0)$ converges to a normal distribution. This is a simple application of the CLT, combined with showing that certain remainder terms are asymptotically negligible. The gradients are:
\begin{align*}
\sqrt{n}\begin{pmatrix} \nabla_{\beta}\mathbb{M}^s_n(\theta_0^s) \\ \nabla_{\delta}\mathbb{M}^s_n(\theta_0^s) \end{pmatrix} & = 2\sqrt{n}\begin{pmatrix}\frac1n \sum_i X_i(X_i^{\top}\beta_0 - Y_i)+ \frac1n \sum_i X_iX_i^{\top}\delta_0 K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) \\
\frac1n \sum_i \left[X_i(X_i^{\top}\beta_0 + X_i^{\top}\delta_0 - Y_i)\right] K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right) \end{pmatrix} \\
& = 2\begin{pmatrix} -\frac{1}{\sqrt{n}} \sum_i X_i {\epsilon}_i + \frac{1}{\sqrt{n}} \sum_i X_iX_i^{\top}\delta_0 \left(K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q_i^{\top}\psi_0 > 0}\right) \\ -\frac{1}{\sqrt{n}} \sum_i X_i {\epsilon}_iK\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) + \frac{1}{\sqrt{n}} \sum_i X_iX_i^{\top}\delta_0K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right)\mathds{1}_{Q_i^{\top}\psi_0 \le 0}
\end{pmatrix}\\
& = 2\begin{pmatrix} -\frac{1}{\sqrt{n}} \sum_i X_i {\epsilon}_i + R_1 \\ -\frac{1 }{\sqrt{n}} \sum_i X_i {\epsilon}_i\mathbf{1}_{Q_i^{\top}\psi_0 > 0} +R_2
\end{pmatrix}
\end{align*}
That $(1/\sqrt{n})\sum_i X_i {\epsilon}_i$ converges to a normal distribution follows from a simple application of the CLT. Therefore, once we prove that $R_1$ and $R_2$ are $o_p(1)$, we have:
$$
\sqrt{n} \nabla_{\gamma}\mathbb{M}^s_n(\theta_0^s) \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, 4V^{\gamma}\right)
$$
where:
\begin{equation}
\label{eq:def_v_gamma}
V^{\gamma} = \sigma_{\epsilon}^2 \begin{pmatrix}\mathbb{E}\left[XX^{\top}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \\
\mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \end{pmatrix} \,.
\end{equation}
To complete the proof we now show that $R_1$ and $R_2$ are $o_p(1)$. For $R_1$, we show that $\mathbb{E}[R_1] \to 0$ and ${\sf var}(R_1) \to 0$. For the expectation part:
\begin{align*}
& \mathbb{E}[R_1] \\
& = \sqrt{n}\mathbb{E}\left[XX^{\top}\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \sqrt{n}\delta_0^{\top}\mathbb{E}\left[g(Q) \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \sqrt{n}\int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \delta_0^{\top}g\left(t-\tilde q^{\top}\tilde \psi_0, \tilde q\right)\left(\mathds{1}_{t > 0} - K\left(\frac{t}{\sigma_n}\right)\right)f_0(t \mid \tilde q) f(\tilde q) \ dt \ d\tilde q \\
& = \sqrt{n}\sigma_n \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \delta_0^{\top}g\left(\sigma_n z-\tilde q^{\top}\tilde \psi_0, \tilde q\right)\left(\mathds{1}_{z > 0} - K\left(z\right)\right)f_0(\sigma_n z \mid \tilde q) f(\tilde q) \ dz \ d\tilde q \\
& = \sqrt{n}\sigma_n \left[\int_{\mathbb{R}^{p-1}}\delta_0^{\top}g\left(-\tilde q^{\top}\tilde \psi_0, \tilde q\right) f_0(0 \mid \tilde q) f(\tilde q) \ d\tilde q \cancelto{0}{\left[\int_{-\infty}^{\infty} \left(\mathds{1}_{z > 0} - K\left(z\right)\right)\ dz\right]} + O(\sigma_n) \right] \\
& = O(\sqrt{n}\sigma_n^2) = o(1) \,.
\end{align*}
For the variance part:
\begin{align*}
& {\sf var}(R_1) \\
& = {\sf var}\left(XX^{\top}\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right) \\
& \le \mathbb{E}\left[\|X\|^2 \delta_0^{\top}XX^{\top}\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)^2\right] \\
& = O(\sigma_n ) = o(1) \,.
\end{align*}
Together with $\mathbb{E}[R_1] \to 0$, this establishes $R_1 = o_p(1)$. The proof for $R_2$ is similar and hence skipped for brevity.
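The $O(\sigma_n)$ bound for the variance of $R_1$ relies on $\mathbb{E}\left[\left(K(U/\sigma_n) - \mathds{1}_{U > 0}\right)^2\right] \approx \sigma_n f_U(0)\int \left(K(z) - \mathds{1}_{z > 0}\right)^2 dz$. The linear-in-$\sigma_n$ scaling can be checked numerically in a scalar toy case with $U \sim \mathcal{N}(0,1)$ and the Gaussian kernel (both choices are illustrative assumptions): halving $\sigma_n$ should roughly halve the expectation.

```python
import math

def K(z):
    # Gaussian distribution function, used as a stand-in kernel
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def phi(u):
    return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

def expect(sigma, lo=-6.0, hi=6.0, n=120001):
    # E[(K(U/sigma) - 1_{U > 0})^2] with U ~ N(0,1), by midpoint integration
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * h
        total += (K(u / sigma) - (1 if u > 0 else 0)) ** 2 * phi(u) * h
    return total

e1, e2 = expect(0.1), expect(0.05)
ratio = e1 / e2
print(1.9 < ratio < 2.1)  # expectation scales (approximately) linearly in sigma
```

The discrepancy $K(u/\sigma_n) - \mathds{1}_{u > 0}$ is only non-negligible on a band of width $O(\sigma_n)$ around the hyperplane, which is the source of the $O(\sigma_n)$ rate.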
\\\\
Our next step is to prove that $\sqrt{n\sigma_n}\nabla_{\psi}\mathbb{M}^s_n(\theta_0^s)$ and $\sqrt{n}\nabla \mathbb{M}^{s, \gamma}_n(\theta_0^s)$ are asymptotically uncorrelated. Towards that end, first note that:
\begin{align*}
& \mathbb{E}\left[X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) \right] \\
& = \mathbb{E}\left[XX^{\top}\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \mathbb{E}\left[g(Q)\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \sigma_n \int \int g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q)(K(t) - \mathds{1}_{t>0})f_0(\sigma_n t \mid \tilde q) f(\tilde q) \ dt \ d\tilde q \\
& = \sigma_n \int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\cancelto{0}{\int_{-\infty}^{\infty} (K(t) - \mathds{1}_{t>0}) \ dt} \ f_0(0 \mid \tilde q) f(\tilde q) \ d\tilde q + O(\sigma_n^2) \\
& = O(\sigma_n^2) \,.
\end{align*}
Also, it follows from the proof of $\mathbb{E}\left[\sqrt{n\sigma_n}\nabla_\psi \mathbb{M}_n^s(\theta_0)\right] \to 0$ that:
$$
\mathbb{E}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] = O(\sigma_n^2) \,.
$$
Finally note that:
\begin{align*}
& \mathbb{E}\left[\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \times \right. \\
& \qquad \qquad \qquad \qquad \qquad \left. \left(X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^{\top}\right] \\
& = \mathbb{E}\left[\left(\left\{(X^{\top}\delta_0)^2(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}) - 2{\epsilon} X^{\top}\delta_0\right\}\tilde QK'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \right. \\
& \qquad \qquad \qquad \qquad \qquad \left. \times \left\{XX^{\top}\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right) - X{\epsilon} \right\}\right] \\
& = \mathbb{E}\left[\left((X^{\top}\delta_0)^2(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0})\tilde QK'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \right. \\
& \qquad \qquad \qquad \left. \times \left(XX^{\top}\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right)^{\top}\right] \\
& \qquad \qquad + 2\sigma^2_{\epsilon} \mathbb{E}\left[XX^{\top}\delta_0\tilde Q^{\top}K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \\
&= O(\sigma_n ) \,.
\end{align*}
Now getting back to the covariance:
\begin{align*}
& \mathbb{E}\left[\left(\sqrt{n\sigma_n}\nabla_{\psi}\mathbb{M}^s_n(\theta_0)\right)\left(\sqrt{n}\nabla_\beta \mathbb{M}^s_n(\theta_0)\right)^{\top}\right] \\
& = \frac{1}{\sqrt{\sigma_n}}\mathbb{E}\left[\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \times \right. \\
& \qquad \qquad \qquad \qquad \qquad \left. \left(X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^{\top}\right] \\
& \qquad \qquad + \frac{n-1}{\sqrt{\sigma_n}}\left[\mathbb{E}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \right. \\
& \qquad \qquad \qquad \qquad \times \left. \left(\mathbb{E}\left[X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) \right]\right)^{\top}\right] \\
& = \frac{1}{\sqrt{\sigma_n}} \times O(\sigma_n) + \frac{n-1}{\sqrt{\sigma_n}} \times O(\sigma_n^4) = o(1) \,.
\end{align*}
The proof for $\mathbb{E}\left[\left(\sqrt{n\sigma_n}\nabla_{\psi}\mathbb{M}^s_n(\theta_0)\right)\left(\sqrt{n}\nabla_\delta \mathbb{M}^s_n(\theta_0)\right)^{\top}\right]$ is similar and hence skipped. This completes the proof.
\subsection{Proof of Lemma \ref{conv-prob}}
First note that, by a simple application of the law of large numbers (and using the fact that $\|\psi^* - \psi_0\|/\sigma_n = o_p(1)$), we have:
\begin{align*}
\nabla^2 \mathbb{M}_n^{s, \gamma}(\theta^*) & = 2\begin{pmatrix}\frac{1}{n}\sum_i X_i X_i^{\top} & \frac{1}{n}\sum_i X_i X_i^{\top}K\left(\frac{Q_i^{\top}\psi^*}{\sigma_n}\right) \\ \frac{1}{n}\sum_i X_i X_i^{\top}K\left(\frac{Q_i^{\top}\psi^*}{\sigma_n}\right) & \frac{1}{n}\sum_i X_i X_i^{\top}K\left(\frac{Q_i^{\top}\psi^*}{\sigma_n}\right)
\end{pmatrix} \\
& \overset{p}{\longrightarrow} 2 \begin{pmatrix}\mathbb{E}\left[XX^{\top}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \\ \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \end{pmatrix} := 2Q^{\gamma}
\end{align*}
The proof of the fact that $\sqrt{\sigma_n}\nabla^2_{\psi \gamma}\mathbb{M}_n^s(\theta^*) = o_p(1)$ is the same as the proof of Lemma 5 of \cite{seo2007smoothed} and hence skipped. Finally, we show that
$$
\sigma_n \nabla^2_{\psi \psi}\mathbb{M}_n^s(\theta^*) \overset{p}{\longrightarrow} 2Q^{\psi}\,.
$$
for some non-negative definite matrix $Q^{\psi}$. The proof is similar to that of Lemma 6 of \cite{seo2007smoothed}, and yields:
$$
Q^{\psi} = \left(\int_{-\infty}^{\infty} -\text{sign}(t) K''(t) \ dt\right) \times \mathbb{E}\left[\delta_0^{\top} g\left(-\tilde Q^{\top}\tilde \psi_0, \tilde Q\right)\delta_0 \tilde Q \tilde Q^{\top} f_0(0 \mid \tilde Q)\right] \,.
$$
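The kernel constant in $Q^{\psi}$ can be evaluated explicitly for any kernel with $K'(\pm\infty) = 0$: integrating $K''$ separately over each half-line gives $\int_{-\infty}^{\infty} -\text{sign}(t) K''(t) \ dt = \left[K'\right]_{-\infty}^{0} - \left[K'\right]_{0}^{\infty} = 2K'(0)$. A numerical check with the Gaussian kernel $K = \Phi$, for which $K''(t) = -t\varphi(t)$ (the kernel choice is an illustrative assumption):

```python
import math

def phi(t):
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Kpp(t):
    # second derivative of the Gaussian kernel K = Phi: K''(t) = -t * phi(t)
    return -t * phi(t)

def integral(f, lo=-10.0, hi=10.0, n=200001):
    # midpoint rule on a fine symmetric grid
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

lhs_val = integral(lambda t: -math.copysign(1.0, t) * Kpp(t))
rhs_val = 2 * phi(0)  # 2 K'(0)
print(abs(lhs_val - rhs_val) < 1e-6)
```

In particular, the constant is strictly positive whenever $K'(0) > 0$, so $Q^{\psi}$ inherits positive definiteness from the expectation factor.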
This completes the proof. Summarizing, we have established:
\begin{align*}
\sqrt{n}\left(\hat \gamma^s - \gamma_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, \left(Q^\gamma\right)^{-1}V^\gamma \left(Q^\gamma\right)^{-1}\right) \,, \\
\sqrt{\frac{n}{\sigma_n}}\left(\hat \psi^s - \psi_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, \left(Q^\psi\right)^{-1}V^\psi \left(Q^\psi\right)^{-1}\right) \,.
\end{align*}
and the two estimators are asymptotically uncorrelated.
\section{Proof of Theorem \ref{thm:binary}}
\label{sec:supp_classification}
In this section, we present the details of the binary response model, the assumptions, a roadmap of the proof and then finally prove Theorem \ref{thm:binary}.
\noindent
\begin{assumption}
\label{as:distribution}
The below assumptions pertain to the parameter space and the distribution of $Q$:
\begin{enumerate}
\item The parameter space $\Theta$ is a compact subset of $\mathbb{R}^p$.
\item The support of the distribution of $Q$ contains an open subset around origin of $\mathbb{R}^p$ and the distribution of $Q_1$ conditional on $\tilde{Q} = (Q_2, \dots, Q_p)$ has, almost surely, everywhere positive density with respect to Lebesgue measure.
\end{enumerate}
\end{assumption}
\noindent
For notational convenience, define the following:
\begin{enumerate}
\item Define $f_{\psi} (\cdot \mid \tilde{Q})$ to be the conditional density of $Q^{\top}\psi$ given $\tilde{Q}$, for $\psi$ ranging over the parameter space. Note that the following relation holds: $$f_{\psi}(\cdot \mid \tilde{Q}) = f_{Q_1}(\cdot - \tilde{\psi}^{\top}\tilde{Q} \mid \tilde{Q}) \,,$$ where $f_{Q_1}(\cdot \mid \tilde Q)$ is the conditional density of $Q_1$ given $\tilde Q$.
\item Define $f_0(\cdot | \tilde{Q}) = f_{\psi_0}(\cdot | \tilde{Q})$ where $\psi_0$ is the unique minimizer of the population score function $M(\psi)$.
\item Define $f_{\tilde Q}(\cdot)$ to be the marginal density of $\tilde Q$.
\end{enumerate}
\noindent
The rest of the assumptions are as follows:
\begin{assumption}
\label{as:differentiability}
$f_0(y|\tilde{Q})$ is at least once continuously differentiable almost surely for all $\tilde{Q}$. Also assume that there exist $\delta$ and $t$ such that $$\inf_{|y| \le \delta} f_0(y|\tilde{Q}) \ge t$$ for all $\tilde{Q}$ almost surely.
\end{assumption}
This assumption can be relaxed in the sense that one can allow the lower bound $t$ to depend on $\tilde{Q}$, provided that some further assumptions are imposed on $\mathbb{E}(t(\tilde{Q}))$. As this does not add anything of significance to the import of this paper, we use Assumption \ref{as:differentiability} to simplify certain calculations.
\begin{assumption}
\label{as:density_bound}
Define $m\left(\tilde{Q}\right) = \sup_{t}f_{Q_1}(t \mid \tilde{Q}) = \sup_{\psi} \sup_{t}f_{\psi}(t \mid \tilde{Q})$. Assume that $\mathbb{E}\left(m\left(\tilde{Q}\right)^2\right) < \infty$.
\end{assumption}
\begin{assumption}
\label{as:derivative_bound}
Define $h(\tilde{Q}) = \sup_{t} f_0'(t | \tilde{Q})$. Assume that $\mathbb{E}\left(h^2\left(\tilde{Q}\right)\right) < \infty$.
\end{assumption}
\begin{assumption}
\label{as:eigenval_bound}
Assume that $f_{\tilde{Q}}(0) > 0$ and also that the minimum eigenvalue of $\mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top}f_0(0|\tilde{Q})\right)$ is strictly positive.
\end{assumption}
\subsection{Sufficient conditions for above assumptions }
We now demonstrate some sufficient conditions for the above assumptions to hold. If the support of $Q$ is compact and both $f_{Q_1}(\cdot \mid \tilde Q)$ and $f'_{Q_1}(\cdot \mid \tilde Q)$ are uniformly bounded in $\tilde Q$, then Assumptions \ref{as:distribution}--\ref{as:derivative_bound} follow immediately. The first part of Assumption \ref{as:eigenval_bound}, i.e. the assumption $f_{\tilde{Q}}(0) > 0$, is also fairly mild and is satisfied by many standard probability distributions. The second part of Assumption \ref{as:eigenval_bound} is satisfied when $f_0(0|\tilde{Q})$ has a lower bound independent of $\tilde{Q}$ and $\tilde{Q}$ has a non-singular dispersion matrix.
Below we state our main theorem. In the next section, we first provide a roadmap of our proof and then fill in the corresponding details. For the rest of the paper, \emph{we choose our bandwidth $\sigma_n$ to satisfy $\frac{\log{n}}{n \sigma_n} \rightarrow 0$}.
\noindent
\begin{remark}
As our procedure requires the weaker condition $(\log{n})/(n \sigma_n) \rightarrow 0$, it is easy to see from the above Theorem that the rate of convergence can be almost as fast as $n/\sqrt{\log{n}}$.
\end{remark}
\begin{remark}
Our analysis remains valid in the presence of an intercept term. Assume, without loss of generality, that the second co-ordinate of $Q$ is $1$ and let $\tilde{Q} = (Q_3, \dots, Q_p)$. It is not difficult to check that all our calculations go through under this new definition of $\tilde Q$. We, however, avoid this scenario for simplicity of exposition.
\end{remark}
\vspace{0.2in}
\noindent
{\bf Proof sketch: }We now provide a roadmap of the proof of Theorem \ref{thm:binary} in this paragraph, deferring the elaborate technical derivations to the later parts.
Define the following: $$T_n(\psi) = \nabla \mathbb{M}_n^s(\psi)= -\frac{1}{n\sigma_n}\sum_{i=1}^n (Y_i - \gamma)K'\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\tilde{Q}_i$$ $$Q_n(\psi) = \nabla^2 \mathbb{M}_n^s(\psi) = -\frac{1}{n\sigma_n^2}\sum_{i=1}^n (Y_i - \gamma)K''\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\tilde{Q}_i\tilde{Q}_i^{\top}$$ As $\hat{\psi}^s$ minimizes $\mathbb{M}^s_n(\psi)$, we have $T_n(\hat{\psi}^s) = 0$. Using a one-step Taylor expansion we have:
\allowdisplaybreaks
\begin{align*}
T_n(\hat{\psi}^s) = T_n(\psi_0) + Q_n(\psi^*_n)\left(\hat{\psi}^s - \psi_0\right) = 0
\end{align*}
or:
\begin{equation}
\label{eq:main_eq} \sqrt{n/\sigma_n}\left(\hat{\psi}^s - \psi_0\right) = -\left(\sigma_nQ_n(\psi^*_n)\right)^{-1}\sqrt{n\sigma_n}T_n(\psi_0)
\end{equation}
for some intermediate point $\psi^*_n$ between $\hat \psi^s$ and $\psi_0$. The following lemma establishes the asymptotic properties of $T_n(\psi_0)$:
\begin{lemma}[Asymptotic Normality of $T_n$]
\label{asymp-normality}
If $n\sigma_n^{3} \rightarrow \lambda$, then
$$
\sqrt{n \sigma_n} T_n(\psi_0) \Rightarrow \mathcal{N}(\mu, \Sigma)
$$
where
$$\mu = -\sqrt{\lambda}\frac{\beta_0 - \alpha_0}{2}\left[\int_{-1}^{1} K'\left(t\right)|t| \ dt \right] \int_{\mathbb{R}^{p-1}}\tilde{Q} f'(0 | \tilde{Q}) \ dP(\tilde{Q})
$$
and
$$\Sigma = \left[a_1 \int_{-1}^{0} \left(K'\left(t\right)\right)^2 \ dt + a_2 \int_{0}^{1} \left(K'\left(t\right)\right)^2 \ dt \right]\int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} f(0|\tilde{Q}) \ dP(\tilde{Q}) \,.
$$
Here $a_1 = (1 - \gamma)^2 \alpha_0 + \gamma^2 (1-\alpha_0), a_2 = (1 - \gamma)^2 \beta_0 + \gamma^2 (1-\beta_0)$ and $\alpha_0, \beta_0, \gamma$ are model parameters defined around equation \eqref{eq:new_loss}.
\end{lemma}
\noindent
In the case that $n \sigma_n^3 \rightarrow 0$ (which holds, in particular, whenever $\sigma_n = o(n^{-1/3})$), $\lambda = 0$ and we have:
$$\sqrt{n \sigma_n} T_n(\psi_0) \rightarrow \mathcal{N}(0, \Sigma) \,.$$
Next, we analyze the convergence of $Q_n(\psi^*_n)^{-1}$ which is stated in the following lemma:
\begin{lemma}[Convergence in Probability of $Q_n$]
\label{conv-prob}
Under Assumptions (\ref{as:distribution} - \ref{as:eigenval_bound}), for any random sequence $\breve{\psi}_n$ such that $\|\breve{\psi}_n - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$,
$$
\sigma_n Q_n(\breve{\psi}_n) \overset{P} \rightarrow Q = \frac{\beta_0 - \alpha_0}{2}\left(\int_{-1}^{1} -K''\left(t \right)\text{sign}(t) \ dt\right) \ \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} f(0 |\tilde{Q})\right) \,.
$$
\end{lemma}
It will be shown later that the condition $\|\breve{\psi}_n - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$ needed in Lemma \ref{conv-prob} holds for the (random) sequence $\psi^*_n$. Then, combining Lemma \ref{asymp-normality} and Lemma \ref{conv-prob}, we conclude from equation \eqref{eq:main_eq} that:
$$
\sqrt{n/\sigma_n} \left(\hat{\psi}^s - \psi_0\right) \Rightarrow N(0, Q^{-1}\Sigma Q^{-1}) \,.
$$
This concludes the proof of Theorem \ref{thm:binary} with $\Gamma = Q^{-1}\Sigma Q^{-1}$.
\newline
\newline
Observe that, to show $\left\|\psi^*_n - \psi_0 \right\| = o_P(\sigma_n)$, it suffices to prove that $\left\|\hat \psi^s - \psi_0 \right\| = o_P(\sigma_n)$. Towards that direction, we have the following lemma:
\begin{lemma}[Rate of convergence]
\label{lem:rate}
Under Assumptions (\ref{as:distribution} - \ref{as:eigenval_bound}),
$$
n^{2/3}\sigma_n^{-1/3} d^2_n\left(\hat \psi^s, \psi_0^s\right) = O_P(1) \,,
$$
where
$$
d_n\left(\psi, \psi_0^s\right) = \sqrt{\left[\frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \mathds{1}(\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n) + \|\psi - \psi_0^s\| \mathds{1}(\|\psi - \psi_0^s\| \ge \mathcal{K}\sigma_n)\right]}
$$
for some specific constant $\mathcal{K}$ (which will be specified precisely in the proof).
\end{lemma}
\noindent
The lemma immediately leads to the following corollary:
\begin{corollary}
\label{rate-cor}
If $n\sigma_n \rightarrow \infty$ then $\|\hat \psi^s - \psi_0^s\|/\sigma_n \overset{P} \longrightarrow 0$.
\end{corollary}
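To spell out the arithmetic behind the corollary: on the event $\{\|\hat \psi^s - \psi_0^s\| \le \mathcal{K}\sigma_n\}$, which holds with probability tending to one when $n\sigma_n \to \infty$ (since $\sigma_n \gg n^{-2/3}\sigma_n^{1/3}$ in that regime), Lemma \ref{lem:rate} gives:
$$
\frac{\|\hat \psi^s - \psi_0^s\|^2}{\sigma_n} = d_n^2\left(\hat \psi^s, \psi_0^s\right) = O_P\left(n^{-2/3}\sigma_n^{1/3}\right) \,, \qquad \text{i.e.} \qquad \frac{\|\hat \psi^s - \psi_0^s\|}{\sigma_n} = O_P\left((n\sigma_n)^{-1/3}\right) = o_P(1) \,.
$$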
\noindent
Finally, to establish $\|\hat \psi^s - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$, all we need is that $\|\psi_0^s - \psi_0\|/\sigma_n \rightarrow 0$ as demonstrated in the following lemma:
\begin{lemma}[Convergence of population minimizer]
\label{bandwidth}
For any sequence of $\sigma_n \rightarrow 0$, we have: $\|\psi_0^s - \psi_0\|/\sigma_n \rightarrow 0$.
\end{lemma}
\noindent
Hence the final roadmap is the following: Using Lemma \ref{bandwidth} and Corollary \ref{rate-cor}, we establish that $\|\hat \psi^s - \psi_0\|/\sigma_n \overset{P}{\rightarrow} 0$ if $n\sigma_n \rightarrow \infty$. This, in turn, enables us to prove that $\sigma_n Q_n(\psi^*_n) \overset{P}{\rightarrow} Q$, which, along with Lemma \ref{asymp-normality}, establishes the main theorem.
\begin{remark}
\label{rem:gamma}
In the above analysis, we have assumed knowledge of a $\gamma$ lying in $(\alpha_0, \beta_0)$. However, all our calculations go through if we replace $\gamma$ by its estimate (say $\bar Y$), with more tedious book-keeping. One way to simplify the calculations is to split the data into two halves, estimate $\gamma$ (via $\bar Y$) from the first half and then use it as a proxy for $\gamma$ in the second half of the data to estimate $\psi_0$. As this procedure does not add anything of interest to the core idea of our proof, we refrain from doing so here.
\end{remark}
\subsection{Variant of quadratic loss function}
\label{loss_func_eq}
In this sub-section we argue why the loss function in \eqref{eq:new_loss} is a variant of the quadratic loss function for any $\gamma \in (\alpha_0, \beta_0)$. Assume that we know $\alpha_0, \beta_0$ and seek to estimate $\psi_0$. We start with an expansion of the quadratic loss function:
\begin{align*}
& \mathbb{E}\left(Y - \alpha_0\mathds{1}_{Q^{\top}\psi \le 0} - \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \\
& = \mathbb{E}\left(\mathbb{E}\left(\left(Y - \alpha_0\mathds{1}_{Q^{\top}\psi \le 0} - \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \ \Big| \ Q\right)\right) \\
& = \mathbb{E}_{Q}\left(\mathbb{E}\left( Y^2 \mid Q \right) \right) + \mathbb{E}_{Q}\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \\
& \qquad \qquad \qquad -2 \mathbb{E}_{Q}\left(\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right) \mathbb{E}(Y \mid Q)\right) \\
& = \mathbb{E}_Q\left(\mathbb{E}\left( Y \mid Q \right) \right) + \mathbb{E}_Q\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \\
& \qquad \qquad \qquad -2 \mathbb{E}_Q\left(\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right) \mathbb{E}(Y \mid Q)\right)
\end{align*}
Since the first summand is just $\mathbb{E} Y$, it is irrelevant to the minimization. A cursory inspection shows that it suffices to minimize
\begin{align}
& \mathbb{E}\left(\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right) - \mathbb{E}(Y \mid Q)\right)^2 \notag\\
\label{eq:lse_1} & = (\beta_0 - \alpha_0)^2 \P\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right)
\end{align}
On the other hand the loss we are considering is $\mathbb{E}\left((Y - \gamma)\mathds{1}_{Q^{\top}\psi \le 0}\right)$:
\begin{align}
\label{eq:lse_2} \mathbb{E}\left((Y - \gamma)\mathds{1}_{Q^{\top}\psi \le 0}\right) & = (\beta_0 - \gamma)\P(Q^{\top}\psi_0 > 0 , Q^{\top}\psi \le 0) \notag \\
& \hspace{10em}+ (\alpha_0 - \gamma)\P(Q^{\top}\psi_0 \le 0, Q^{\top}\psi \le 0)\,,
\end{align}
which can be rewritten as:
\begin{align*}
& (\alpha_0 - \gamma)\P(Q^{\top} \psi_0 \leq 0) + (\beta_0 - \gamma)\,\P(Q^{\top} \psi_0 > 0, Q^{\top} \psi \leq 0) \\
& \qquad \qquad \qquad + (\gamma - \alpha_0)\,\P(Q^{\top} \psi_0 \leq 0, Q^{\top} \psi > 0) \,.
\end{align*}
By Assumption \ref{as:distribution}, for $\psi \neq \psi_0$, $\P\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) > 0$. As an easy consequence, equation \eqref{eq:lse_1} is uniquely minimized at $\psi = \psi_0$. To see that the same is true for \eqref{eq:lse_2} when $\gamma \in (\alpha_0, \beta_0)$, note that the first summand in the equation does not depend on $\psi$, that the second and third summands are both non-negative and that at least one of these must be positive under Assumption \ref{as:distribution}.
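The unique-minimization claim can be illustrated numerically in a toy one-dimensional threshold model (all specifics below, including the Gaussian design and the particular values of $\alpha_0, \beta_0, \gamma$, are our illustrative assumptions): with $\mathbb{E}[Y \mid Q_1] = \alpha_0$ for $Q_1 \le c_0$ and $\beta_0$ otherwise, the population loss $c \mapsto \mathbb{E}\left[(Y - \gamma)\mathds{1}_{Q_1 \le c}\right]$ should be minimized exactly at the true threshold $c_0$ whenever $\gamma \in (\alpha_0, \beta_0)$.

```python
import math

# toy threshold model: E[Y | Q1] = a0 if Q1 <= c0 else b0, Q1 ~ N(0, 1)
a0, b0, g, c0 = 0.2, 0.8, 0.5, 0.3   # any g strictly between a0 and b0 works

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def loss(c):
    # E[(Y - g) 1_{Q1 <= c}]
    #   = (a0 - g) P(Q1 <= min(c, c0)) + (b0 - g) P(c0 < Q1 <= c)
    return (a0 - g) * Phi(min(c, c0)) + (b0 - g) * max(Phi(c) - Phi(c0), 0.0)

grid = [i / 1000 - 2.0 for i in range(4001)]   # candidate thresholds in [-2, 2]
c_hat = min(grid, key=loss)
print(abs(c_hat - c0) < 1e-2)  # the minimizer sits at the true threshold
```

The loss is strictly decreasing for $c < c_0$ (since $\alpha_0 - \gamma < 0$) and strictly increasing for $c > c_0$ (since $\beta_0 - \gamma > 0$), mirroring the sign argument in the text.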
\subsection{Linear curvature of the population score function}
Before going into the proofs of the Lemmas and the Theorem, we argue that the population score function $M(\psi)$ has linear curvature near $\psi_0$, which is useful in proving Lemma \ref{lem:rate}. We begin with the following observation:
\begin{lemma}[Curvature of population risk]
\label{lem:linear_curvature}
Under Assumption \ref{as:differentiability} we have: $$u_- \|\psi - \psi_0\|_2 \le \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \le u_+ \|\psi - \psi_0\|_2$$ for some constants $0 < u_- < u_+ < \infty$ and for all $\psi \in \Psi$.
\end{lemma}
\begin{proof}
First, we show that
$$
\mathbb{M}(\psi) - \mathbb{M}(\psi_0) = \frac{(\beta_0 - \alpha_0)}{2} \P\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right)
$$ which follows from the calculation below:
\begin{align*}
& \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \\
& = \mathbb{E}\left((Y - \gamma)\mathds{1}(Q^{\top}\psi \le 0)\right) - \mathbb{E}\left((Y - \gamma)\mathds{1}(Q^{\top}\psi_0 \le 0)\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \mathbb{E}\left(\left\{\mathds{1}(Q^{\top}\psi \le 0) - \mathds{1}(Q^{\top}\psi_0 \le 0)\right\}\left\{\mathds{1}(Q^{\top}\psi_0 \ge 0) - \mathds{1}(Q^{\top}\psi_0 \le 0)\right\}\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \mathbb{E}\left(\left\{\mathds{1}(Q^{\top}\psi \le 0, Q^{\top}\psi_0 \ge 0) - \mathds{1}(Q^{\top}\psi \le 0, Q^{\top}\psi_0 \le 0) + \mathds{1}(Q^{\top}\psi_0 \le 0)\right\}\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \mathbb{E}\left(\left\{\mathds{1}(Q^{\top}\psi \le 0, Q^{\top}\psi_0 \ge 0) + \mathds{1}(Q^{\top}\psi \ge 0, Q^{\top}\psi_0 \le 0)\right\}\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \,.
\end{align*}
We now analyze the probability of the wedge-shaped region between the two hyperplanes $Q^{\top}\psi = 0$ and $Q^{\top}\psi_0 = 0$. Note that,
\allowdisplaybreaks
\begin{align}
& \P(Q^{\top}\psi > 0 > Q^{\top}\psi_0) \notag\\
& = \P(-\tilde{Q}^{\top}\tilde{\psi} < X_1 < -\tilde{Q}^{\top}\tilde{\psi}_0) \notag\\
\label{lin1} & = \mathbb{E}\left[\left(F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right)\right)\mathds{1}\left(\tilde{Q}^{\top}\tilde{\psi}_0 \le \tilde{Q}^{\top}\tilde{\psi}\right)\right]
\end{align}
A similar calculation yields
\allowdisplaybreaks
\begin{align}
\label{lin2} \P(Q^{\top}\psi < 0 < Q^{\top}\psi_0) & = \mathbb{E}\left[\left(F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right)\mathds{1}\left(\tilde{Q}^{\top}\tilde{\psi}_0 \ge \tilde{Q}^{\top}\tilde{\psi}\right)\right]
\end{align}
Adding both sides of equations \eqref{lin1} and \eqref{lin2}, we get:
\begin{equation}
\label{wedge_expression}
\P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) = \mathbb{E}\left[\left|F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\right]
\end{equation}
Define $\psi_{\max} = \sup_{\psi \in \Psi}\|\psi\|$, which is finite by Assumption \ref{as:distribution}. Below, we establish the lower bound:
\allowdisplaybreaks
\begin{align*}
& \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \notag\\
& = \mathbb{E}\left[\left|F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\right] \\
& \ge \mathbb{E}\left[\left|F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\mathds{1}\left(\left|\tilde{Q}^{\top}\tilde{\psi}\right| \vee \left| \tilde{Q}^{\top}\tilde{\psi}_0\right| \le \delta\right)\right] \hspace{0.2in} [\delta \ \text{as in Assumption \ref{as:differentiability}}]\\
& \ge \mathbb{E}\left[\left|F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& \ge t \mathbb{E}\left[\left| \tilde{Q}^{\top}(\psi - \psi_0)\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& = t \|\psi - \psi_0\| \,\mathbb{E}\left[\left| \tilde{Q}^{\top}\frac{(\psi - \psi_0)}{\|\psi - \psi_0\|}\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& \ge t\|\psi - \psi_0\| \inf_{\gamma \in S^{p-1}}\mathbb{E}\left[\left| \tilde{Q}^{\top}\gamma\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& = u_-\|\psi - \psi_0\| \,.
\end{align*}
At the very end, we have used the fact that $$\inf_{\gamma \in S^{p-1}}\mathbb{E}\left[\left| \tilde{Q}^{\top}\gamma\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] > 0$$ To prove this, assume that the infimum is 0. Then, there exists $\gamma_0 \in S^{p-1}$ such that
$$\mathbb{E}\left[\left| \tilde{Q}^{\top}\gamma_0\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] = 0 \,,$$
as the above function is continuous in $\gamma$ and a continuous function on a compact set attains its infimum. Hence, $\left|\tilde{Q}^{\top}\gamma_0 \right| = 0$ almost surely on the event $\{\|\tilde{Q}\| \le \delta/\psi_{\max}\}$, which implies that $\tilde{Q}$ does not have full support, violating Assumption \ref{as:distribution} (2). This is a contradiction.
\\\\
\noindent
Establishing the upper bound is relatively easier. Going back to equation \eqref{wedge_expression}, we have:
\begin{align*}
& \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \notag\\
& = \mathbb{E}\left[\left|F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\right] \\
& \le \mathbb{E}\left[m(\tilde Q) \, \|\tilde Q\| \,\|\psi- \psi_0\|\right] \hspace{0.2in} [m(\cdot) \ \text{is defined in Assumption \ref{as:density_bound}}]\\
& \le u_+ \|\psi - \psi_0\| \,,
\end{align*}
as $ \mathbb{E}\left[m(\tilde Q) \|\tilde Q\|\right] < \infty$ by Assumption \ref{as:density_bound} and the sub-Gaussianity of $\tilde Q$.
\end{proof}
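As a sanity check on Lemma \ref{lem:linear_curvature} (illustration only), for a spherically symmetric design in $\mathbb{R}^2$ the wedge probability is exactly the angle between $\psi$ and $\psi_0$ divided by $\pi$, which is indeed sandwiched between two linear functions of $\|\psi - \psi_0\|$. The constants $u_- = 1/4$ and $u_+ = 1/\pi$ below are specific to this toy design:

```python
import numpy as np

def wedge_prob(psi, psi0):
    """P(sign(Q'psi) != sign(Q'psi0)) for spherically symmetric Q in R^2:
    the angle between psi and psi0 divided by pi (exact, no simulation)."""
    cosang = psi @ psi0 / (np.linalg.norm(psi) * np.linalg.norm(psi0))
    return np.arccos(np.clip(cosang, -1.0, 1.0)) / np.pi

psi0 = np.array([1.0, 0.0])
for h in [0.05, 0.1, 0.2]:
    psi = np.array([1.0, h])      # first coordinate normalized to 1, as in the proof
    p = wedge_prob(psi, psi0)
    d = np.linalg.norm(psi - psi0)  # equals h here
    # sandwich u_- * d <= p <= u_+ * d with u_- = 1/4, u_+ = 1/pi for this design
    assert 0.25 * d <= p <= d / np.pi + 1e-12
```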
\subsection{Proof of Lemma \ref{asymp-normality}}
\begin{proof}
We first prove that, under our assumptions, $\sigma_n^{-1} \mathbb{E}(T_n(\psi_0)) \overset{n \to \infty}\longrightarrow A$, where $$A = -\frac{\beta_0 - \alpha_0}{2}\left[\int_{-\infty}^{\infty} K'\left(t\right)|t| \ dt \right] \int_{\mathbb{R}^{p-1}}\tilde{Q}f_0'(0 | \tilde{Q}) \ dP(\tilde{Q}) \,.$$ The proof is based on a Taylor expansion of the conditional density:
\allowdisplaybreaks
\begin{align*}
& \sigma_n^{-1} \mathbb{E}(T_n(\psi_0)) \\
& = -\sigma_n^{-2}\mathbb{E}\left((Y - \gamma)K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\tilde{Q}\right) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-2}\mathbb{E}\left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\tilde{Q}(\mathds{1}(Q^{\top}\psi_0 \ge 0) - \mathds{1}(Q^{\top}\psi_0 \le 0))\right) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-2}\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(\frac{z}{\sigma_n}\right)f_0(z|\tilde{Q}) \ dz - \int_{-\infty}^{0} K'\left(\frac{z}{\sigma_n}\right)f_0(z|\tilde{Q}) \ dz \right] \ dP(\tilde{Q}) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-1}\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(t\right)f_0(\sigma_n t|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right)f_0(\sigma_n t |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-1}\left[\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(t\right)f_0(0|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right)f_0(0 |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \right. \\
& \qquad \qquad \qquad + \left. \int_{\mathbb{R}^{p-1}}\tilde{Q}\,\sigma_n \left[\int_{0}^{\infty} K'\left(t\right)tf_0'(\lambda \sigma_n t|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right) t f_0'(\lambda \sigma_n t |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \right] \hspace{0.2in} [0 < \lambda < 1]\\
& = -\frac{\beta_0 - \alpha_0}{2}\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(t\right)tf_0'(\lambda \sigma_n t|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right)tf_0'(\lambda \sigma_nt |\tilde{Q}) \ dt \right] \ dP(\tilde{Q})\\
& \underset{n \rightarrow \infty} \longrightarrow -\frac{\beta_0 - \alpha_0}{2}\left[\int_{-\infty}^{\infty} K'\left(t\right)|t| \ dt \right] \int_{\mathbb{R}^{p-1}}\tilde{Q}f_0'(0 | \tilde{Q}) \ dP(\tilde{Q})
\end{align*}
Next, we prove that $\mbox{Var}\left(\sqrt{n\sigma_n}T_n(\psi_0)\right)\longrightarrow \Sigma$ as $n \rightarrow \infty$, where $\Sigma$ is as defined in Lemma \ref{asymp-normality}. Note that:
\allowdisplaybreaks
\begin{align*}
\mbox{Var}\left(\sqrt{n\sigma_n}T_n(\psi_0)\right) & = \sigma_n \mathbb{E}\left((Y - \gamma)^2\left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)^2\frac{\tilde{Q}\tilde{Q}^{\top}}{\sigma_n^2}\right)\right) - \sigma_n \mathbb{E}(T_n(\psi_0))\mathbb{E}(T_n(\psi_0))^{\top}
\end{align*}
As $\sigma_n^{-1}\mathbb{E}(T_n(\psi_0)) \rightarrow A$, we can conclude that $\sigma_n \mathbb{E}(T_n(\psi_0))\mathbb{E}(T_n(\psi_0))^{\top} \rightarrow 0$.
Define $a_1 = (1 - \gamma)^2 \alpha_0 + \gamma^2 (1-\alpha_0), a_2 = (1 - \gamma)^2 \beta_0 + \gamma^2 (1-\beta_0)$. For the first summand:
\allowdisplaybreaks
\begin{align*}
& \sigma_n \mathbb{E}\left((Y - \gamma)^2\left\{K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right\}^2\frac{\tilde{Q}\tilde{Q}^{\top}}{\sigma_n^2}\right) \\
& = \frac{1}{\sigma_n} \int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} \left[a_1 \int_{-\infty}^{0} \left\{K'\left(\frac{z}{\sigma_n}\right)\right\}^2 f(z|\tilde{Q}) \ dz \right. \notag \\ & \left. \qquad \qquad \qquad + a_2 \int_{0}^{\infty}\left\{K'\left(\frac{z}{\sigma_n}\right)\right\}^2 f(z|\tilde{Q}) \ dz \right] \ dP(\tilde{Q})\\
& = \int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} \left[a_1 \int_{-\infty}^{0} \left\{K'\left(t\right)\right\}^2 f(\sigma_n t|\tilde{Q}) \ dt + a_2 \int_{0}^{\infty} \left\{K'\left(t\right)\right\}^2 f(\sigma_n t |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \\
& \underset{n \rightarrow \infty} \longrightarrow \left[a_1 \int_{-\infty}^{0} \left\{K'\left(t\right)\right\}^2 \ dt + a_2 \int_{0}^{\infty} \left\{K'\left(t\right)\right\}^2 \ dt \right]\int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} f(0|\tilde{Q}) \ dP(\tilde{Q}) \ \ \overset{\Delta} = \Sigma \, .
\end{align*}
Finally, suppose $n \sigma_n^{3} \rightarrow \lambda$. Define $W_n = \sqrt{n\sigma_n}\left[T_n(\psi) - \mathbb{E}(T_n(\psi))\right]$. Using Lemma 6 of Horowitz \cite{horowitz1992smoothed}, it is easily established that $W_n \Rightarrow N(0, \Sigma)$. Also, we have:
\allowdisplaybreaks
\begin{align*}
\sqrt{n\sigma_n}\mathbb{E}(T_n(\psi_0)) = \sqrt{n\sigma_n^{3}}\,\sigma_n^{-1}\mathbb{E}(T_n(\psi_0)) & \rightarrow \sqrt{\lambda}A = \mu
\end{align*}
As $\sqrt{n\sigma_n}T_n(\psi_0) = W_n + \sqrt{n\sigma_n}\mathbb{E}(T_n(\psi_0))$, we conclude that $\sqrt{n\sigma_n} T_n(\psi_0) \Rightarrow N(\mu, \Sigma)$.
\end{proof}
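The kernel constants appearing in $A$ and $\Sigma$ are easy to evaluate numerically. The biweight choice $K'(t) = (15/16)(1-t^2)^2$ on $[-1,1]$ below is illustrative; the lemma only requires a smooth, symmetric $K$ with $K'$ supported on $[-1,1]$:

```python
import numpy as np

def trap(f, x):  # simple trapezoid rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# biweight density as K' -- an illustrative choice of smoothing kernel derivative
t = np.linspace(-1.0, 1.0, 200001)
kprime = (15.0 / 16.0) * (1.0 - t**2) ** 2

bias_const = trap(kprime * np.abs(t), t)  # int K'(t)|t| dt, enters the bias term A
var_const = trap(kprime**2, t)            # int {K'(t)}^2 dt, enters the variance Sigma

# closed forms for the biweight: 5/16 and 5/7 respectively
assert abs(bias_const - 5.0 / 16.0) < 1e-6
assert abs(var_const - 5.0 / 7.0) < 1e-6
```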
\subsection{Proof of Lemma \ref{conv-prob}}
\begin{proof}
Let $\epsilon_n \downarrow 0$ be a sequence such that $\P(\|\breve{\psi}_n - \psi_0\| \le \epsilon_n \sigma_n) \rightarrow 1$. Define $\Psi_n = \{\psi: \|\psi - \psi_0\| \le \epsilon_n \sigma_n\}$. We show that $$\sup_{\psi \in \Psi_n} \|\sigma_n Q_n(\psi) - Q\|_F \overset{P} \to 0 \,,$$ where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix; we omit the subscript $F$ when there is no ambiguity. Define $\mathcal{G}_n$ to be the collection of functions:
$$
\mathcal{G}_n= \left\{g_{\psi}(y, q) = -\frac{1}{\sigma_n}(y - \gamma)\tilde q\tilde q^{\top} \left(K''\left(\frac{q^{\top}\psi}{\sigma_n}\right) - K''\left(\frac{q^{\top}\psi_0}{\sigma_n}\right)\right), \psi \in \Psi_n \right\}
$$
That the function class $\mathcal{G}_n$ has a bounded uniform entropy integral (BUEI) is immediate from the fact that the class of functions $\{q \mapsto q^{\top}\psi\}$ has finite VC dimension (as hyperplanes have finite VC dimension) and that the VC dimension does not change upon scaling by a constant. Therefore $q \mapsto q^{\top}\psi/\sigma_n$ also has finite VC dimension, not depending on $n$, and hence is BUEI. As composition with a monotone function, multiplication by a fixed (parameter-free) function, and multiplication of two BUEI classes of functions all preserve the BUEI property, we conclude that $\mathcal{G}_n$ is BUEI.
We first expand the expression in two terms:
\allowdisplaybreaks
\begin{align*}
\sup_{\psi \in \Psi_n} \|\sigma_n Q_n(\psi) - Q\| & \le \sup_{\psi \in \Psi_n} \|\sigma_n Q_n(\psi) - \mathbb{E}(\sigma_n Q_n(\psi))\| + \sup_{\psi \in \Psi_n} \| \mathbb{E}(\sigma_n Q_n(\psi)) - Q\| \\
& = \|(\mathbb{P}_n - P)\|_{\mathcal{G}_n} + \sup_{\psi \in \Psi_n}\| \mathbb{E}(\sigma_n Q_n(\psi)) - Q\| \\
& =: T_{1,n} + T_{2,n} \,.
\end{align*}
\vspace{0.2in}
\noindent
That $T_{1,n} \overset{P} \to 0$ follows from the uniform law of large numbers for a BUEI class (e.g., by combining Theorem 2.4.1 and Theorem 2.6.7 of \cite{vdvw96}).
For the uniform convergence of the second summand $T_{2,n}$, define $\chi_n = \{\tilde{Q}: \|\tilde{Q}\| \le 1/\sqrt{\epsilon_n}\}$. Then $\chi_n \uparrow \mathbb{R}^{p-1}$. Also, for any $\psi \in \Psi_n$, if we define $\gamma_n \equiv \gamma_n(\psi) = (\psi - \psi_0)/\sigma_n$, then $|\tilde \gamma_n^{\top}\tilde{Q}| \le \sqrt{\epsilon_n}$ for all $n$, all $\psi \in \Psi_n$ and all $\tilde{Q} \in \chi_n$. Now,
\allowdisplaybreaks
\begin{align*}
& \sup_{\psi \in \Psi_n}\| \mathbb{E}(\sigma_n Q_n(\psi)) - Q\| \notag \\
&\qquad \qquad = \sup_{\psi \in \Psi_n}\| (\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n))-Q_1) + (\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n^c))-Q_2)\|
\end{align*}
where $$Q_1 = \frac{\beta_0 - \alpha_0}{2}\left(\int_{-\infty}^{\infty} -K''\left(t \right)\text{sign}(t) \ dt\right) \ \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\mathds{1}(\chi_n) \right)$$ and $$Q_2 = \frac{\beta_0 - \alpha_0}{2}\left(\int_{-\infty}^{\infty} -K''\left(t \right)\text{sign}(t) \ dt\right) \ \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\mathds{1}(\chi_n^c) \right) \,.$$
Note that
\allowdisplaybreaks
\begin{flalign}
& \|\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n)) - Q_1\| \notag\\
& =\left\| \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) f_0(\sigma_n (t-\tilde{Q}^{\top}\gamma_n) |\tilde{Q}) \ dt \right. \right. \right. \notag \\
& \left. \left. \left. \qquad \qquad - \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t\right) f_0(\sigma_n (t - \tilde{Q}^{\top}\gamma_n) | \tilde{Q}) \ dt \right]dP(\tilde{Q})\right]\right. \notag\\ & \left. \qquad \qquad \qquad - \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\left[\int_{-\infty}^{0} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right]dP(\tilde{Q})\right] \right \|\notag\\
& =\left \| \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) (f_0(\sigma_n (t-\tilde{Q}^{\top}\gamma_n) |\tilde{Q})-f_0(0 | \tilde{Q})) \ dt \right. \right. \right.\notag\\& \qquad \qquad- \left. \left. \left. \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t\right) (f_0(\sigma_n (t - \tilde{Q}^{\top}\gamma_n) | \tilde{Q}) - f_0(0 | \tilde{Q})) \ dt \right]dP(\tilde{Q})\right]\right. \notag\\ & \qquad \qquad \qquad + \left. \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q}) \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) \ dt - \int_{-\infty}^{0} K''\left(t \right) \ dt \right. \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. \left. + \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right]dP(\tilde{Q})\right] \right \|\notag\\
& \le \frac{\beta_0 - \alpha_0}{2}\sigma_n \int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\|h(\tilde{Q})\int_{-\infty}^{\infty}|K''(t)||t - \gamma_n^{\top}\tilde{Q}| \ dt \ dP(\tilde{Q}) \notag\\ & \qquad \qquad + \frac{\beta_0 - \alpha_0}{2} \int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\| f_0(0 | \tilde{Q}) \left[\left| \int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) \ dt - \int_{-\infty}^{0} K''\left(t \right) \ dt \right| \right. \notag \\ & \left. \qquad \qquad \qquad + \left| \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right|\right] \ dP(\tilde{Q})\notag\\
& \le \frac{\beta_0 - \alpha_0}{2}\left[\sigma_n \int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\|h(\tilde{Q})\int_{-\infty}^{\infty}|K''(t)||t - \gamma_n^{\top}\tilde{Q}| \ dt \ dP(\tilde{Q}) \right. \notag \\
& \left. \qquad \qquad \qquad + 2\int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\| f_0(0 | \tilde{Q}) (K'(0) - K'(\gamma_n^{\top}\tilde{Q})) \ dP(\tilde{Q})\right]\notag \\
\label{cp1}&\rightarrow 0 \hspace{0.3in} [\text{As} \ n \rightarrow \infty] \,,
\end{flalign}
by DCT and Assumptions \ref{as:distribution} and \ref{as:derivative_bound}. For the second part:
\allowdisplaybreaks
\begin{align}
& \|\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n^c)) - Q_2\|\notag\\
& =\left\| \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n^c} \tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) f_0(\sigma_n (t-\tilde{Q}^{\top}\gamma_n) |\tilde{Q}) \ dt \right. \right. \right. \notag \\
& \left. \left. \left. \qquad \qquad - \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t\right) f_0(\sigma_n (t - \tilde{Q}^{\top}\gamma_n) | \tilde{Q}) \ dt \right]dP(\tilde{Q})\right]\right. \notag\\ & \left. \qquad \qquad \qquad -\frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n^c} \tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\left[\int_{-\infty}^{0} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right]dP(\tilde{Q})\right] \right \|\notag\\
& \le \frac{\beta_0 - \alpha_0}{2} \int_{-\infty}^{\infty} |K''(t)| \ dt \int_{\chi_n^c} \|\tilde{Q}\tilde{Q}^{\top}\|(m(\tilde{Q}) + f_0(0|\tilde{Q})) \ dP(\tilde{Q}) \notag\\
\label{cp2} & \rightarrow 0 \hspace{0.3in} [\text{As} \ n \rightarrow \infty] \,,
\end{align}
again by the DCT and Assumptions \ref{as:distribution} and \ref{as:density_bound}. Combining equations \eqref{cp1} and \eqref{cp2}, we conclude the proof.
\end{proof}
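For intuition on the limits $Q_1$ and $Q_2$, note that $\int -K''(t)\,\text{sign}(t) \ dt = 2K'(0)$ by integration by parts. A quick numerical confirmation for the (illustrative) biweight kernel with $K'(t) = (15/16)(1-t^2)^2$ on $[-1,1]$:

```python
import numpy as np

def trap(f, x):  # simple trapezoid rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(-1.0, 1.0, 200001)
kpp = -(15.0 / 4.0) * t * (1.0 - t**2)  # K''(t) for the biweight K'(t) = (15/16)(1-t^2)^2
const = trap(-kpp * np.sign(t), t)       # int -K''(t) sign(t) dt

# integration by parts gives int -K'' sign = 2 K'(0) = 15/8 for the biweight
assert abs(const - 15.0 / 8.0) < 1e-6
```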
\subsection{Proof of Lemma \ref{bandwidth}}
Here we prove that $\|\psi^s_0 - \psi_0\|/\sigma_n \rightarrow 0$, where $\psi^s_0$ is the minimizer of $\mathbb{M}^s(\psi)$ and $\psi_0$ is the minimizer of $\mathbb{M}(\psi)$.
\begin{proof}
Define $\eta = (\psi^s_0 - \psi_0)/\sigma_n$. First, we show that $\|\tilde \eta\|_2$ is $O(1)$, i.e. there exists some constant $\Omega_1$ such that $\|\tilde \eta\|_2 \le \Omega_1$ for all $n$:
\begin{align*}
\|\psi^s_0 - \psi_0\|_2 & \le \frac{1}{u_-} \left(\mathbb{M}(\psi^s_0) - \mathbb{M}(\psi_0)\right) \hspace{0.2in} [\text{by Lemma} \ \ref{lem:linear_curvature}]\\
& = \frac{1}{u_-} \left(\mathbb{M}(\psi^s_0) - \mathbb{M}^s(\psi^s_0) + \mathbb{M}^s(\psi^s_0) - \mathbb{M}^s(\psi_0) + \mathbb{M}^s(\psi_0) - \mathbb{M}(\psi_0)\right) \\
& \le \frac{1}{u_-} \left(\mathbb{M}(\psi^s_0) - \mathbb{M}^s(\psi^s_0) + \mathbb{M}^s(\psi_0) - \mathbb{M}(\psi_0)\right) \hspace{0.2in} [\because \mathbb{M}^s(\psi^s_0) - \mathbb{M}^s(\psi_0) \le 0]\\
& \le \frac{2K_1}{u_-}\sigma_n \hspace{0.2in} [\text{by equation} \ \eqref{eq:lin_bound_1}] \,.
\end{align*}
\noindent
As $\psi^s_0$ minimizes $\mathbb{M}^s(\psi)$:
$$\nabla \mathbb{M}^s(\psi^s_0) = -\frac{1}{\sigma_n}\mathbb{E}\left((Y-\gamma)\tilde{Q}K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right) = 0 \,.$$
Hence:
\begin{align*}
0 &= \mathbb{E}\left((Y-\gamma)\tilde{Q}K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right) \\
& = \frac{(\beta_0 - \alpha_0)}{2} \mathbb{E}\left(\tilde{Q}K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left\{\mathds{1}(Q^{\top}\psi_0 \ge 0) -\mathds{1}(Q^{\top}\psi_0 < 0)\right\}\right) \\
& = \frac{(\beta_0 - \alpha_0)}{2} \mathbb{E}\left(\tilde{Q}K'\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \tilde{\eta}^{\top} \tilde{Q}\right)\left\{\mathds{1}(Q^{\top}\psi_0 \ge 0) -\mathds{1}(Q^{\top}\psi_0 < 0)\right\}\right) \\
& = \frac{(\beta_0 - \alpha_0)}{2} \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(\frac{z}{\sigma_n} + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(z|\tilde{Q}) \ dz \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(\frac{z}{\sigma_n} + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(z|\tilde{Q}) \ dz \ dP(\tilde{Q})\right] \\
& =\sigma_n \frac{(\beta_0 - \alpha_0)}{2} \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(t + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(\sigma_n t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(t + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(\sigma_n t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right]
\end{align*}
As $\sigma_n\frac{(\beta_0 - \alpha_0)}{2} > 0$, we may drop this factor and continue. Also, as we have proved that $\|\tilde \eta\| = O(1)$, there exist a subsequence $\{\tilde\eta_{n_k}\}$ and a point $c \in \mathbb{R}^{p-1}$ such that $\tilde\eta_{n_k} \rightarrow c$. Along that subsequence we have:
\begin{align*}
0 & = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(t + \tilde{\eta}_{n_k}^{\top} \tilde{Q}\right) \ f_0(\sigma_{n_k} t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(t + \tilde{\eta}_{n_k}^{\top} \tilde{Q}\right) \ f_0(\sigma_{n_k} t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right]
\end{align*}
Taking limits on both sides and applying the dominated convergence theorem, we conclude:
\begin{align*}
0 & = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(t +c^{\top} \tilde{Q}\right) \ f_0(0|\tilde{Q}) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(t + c^{\top} \tilde{Q}\right) \ f_0(0|\tilde{Q}) \ dt \ dP(\tilde{Q})\right] \\
& = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \ f_0(0|\tilde{Q}) \int_{c^{\top} \tilde{Q}}^{\infty} K'\left(t\right) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q}\ f_0(0|\tilde{Q}) \int_{-\infty}^{c^{\top} \tilde{Q}} K'\left(t \right) \ dt \ dP(\tilde{Q})\right] \\
& = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \ f_0(0|\tilde{Q}) \left[1 - K(c^{\top} \tilde{Q})\right] \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad\left. - \int_{\mathbb{R}^{p-1}}\tilde{Q}\ f_0(0|\tilde{Q}) K(c^{\top} \tilde{Q}) \ dP(\tilde{Q})\right] \\
& = \mathbb{E}\left(\tilde{Q} \left(2K(c^{\top} \tilde{Q}) - 1\right)f_0(0|\tilde{Q})\right) \,.
\end{align*}
Now, taking the inner-products of both sides with respect to $c$, we get:
\begin{equation}
\label{eq:zero_eq}
\mathbb{E}\left(c^{\top}\tilde{Q} \left(2K(c^{\top} \tilde{Q}) - 1\right)f_0(0|\tilde{Q})\right) = 0 \,.
\end{equation}
By our assumptions that $K$ is a symmetric distribution function and that $K'(t) > 0$ for all $t \in (-1, 1)$, we have $2K(t) - 1 > 0$ for $t > 0$ and $2K(t) - 1 < 0$ for $t < 0$. Hence $c^{\top}\tilde{Q} \left(2K(c^{\top} \tilde{Q}) - 1\right) \ge 0$ almost surely in $\tilde{Q}$, with equality iff $c^{\top}\tilde{Q} = 0$. By \eqref{eq:zero_eq}, this forces $c^{\top}\tilde{Q} = 0$ almost surely, which is not possible unless $c = 0$. Hence we conclude that $c = 0$. This shows that any convergent subsequence of $\tilde\eta_n$ converges to $0$, which completes the proof.
\end{proof}
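The key pointwise fact used at the end of the proof, namely $t\left(2K(t)-1\right) \ge 0$ with equality only at $t = 0$ for a symmetric integrated kernel that is strictly increasing on $(-1,1)$, can be checked numerically; the biweight integrated kernel below is an illustrative choice:

```python
import numpy as np

def K(t):
    """Biweight integrated kernel: K(t) = int_{-1}^t (15/16)(1 - s^2)^2 ds,
    a smooth symmetric distribution function with K(0) = 1/2 (illustrative)."""
    t = np.clip(t, -1.0, 1.0)
    return 0.5 + (15.0 / 16.0) * (t - 2.0 * t**3 / 3.0 + t**5 / 5.0)

x = np.linspace(-2.0, 2.0, 4001)
g = x * (2.0 * K(x) - 1.0)
assert abs(K(0.0) - 0.5) < 1e-12
assert np.all(g >= 0)                    # x(2K(x) - 1) >= 0 everywhere
assert np.all(g[np.abs(x) > 1e-8] > 0)   # ... with equality only at x = 0
```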
\subsection{Proof of Lemma \ref{lem:rate}}
\begin{proof}
To obtain the rate of convergence of our kernel-smoothed estimator, we use Theorem 3.4.1 of \cite{vdvw96}. There are three key ingredients that one needs to take care of in order to apply this theorem:
\begin{enumerate}
\item Consistency of the estimator (otherwise the conditions of the theorem need to hold for all $\eta$).
\item The curvature of the population score function near its minimizer.
\item A bound on the modulus of continuity in a vicinity of the minimizer of the population score function.
\end{enumerate}
Below, we establish the curvature of the population score function (item 2 above) globally, thereby obviating the need to establish consistency separately. Recall that the population score function was defined as:
$$
\mathbb{M}^s(\psi) = \mathbb{E}\left((Y - \gamma)\left(1 - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)\right)
$$
and our estimator $\hat{\psi}_n$ is the argmin of the corresponding sample version. Consider the set of functions $\mathcal{H}_n = \left\{h_{\psi}: h_{\psi}(q,y) = (y - \gamma)\left(1 - K\left(\frac{q^{\top}\psi}{\sigma_n}\right)\right), \psi \in \Psi\right\}$. Next, we argue that $\mathcal{H}_n$ is a VC class of functions with fixed VC dimension. We know that the class of functions $\{(q,y) \mapsto q^{\top}\psi/\sigma_n: \psi \in \Psi\}$ has fixed VC dimension (i.e. not depending on $n$). Now, as a finite-dimensional VC class of functions remains a finite-dimensional VC class after composition with a fixed monotone function or multiplication by a fixed function, we conclude that $\mathcal{H}_n$ is a fixed-dimensional VC class of functions with bounded envelope (as the functions considered here are bounded by $1$).
Now, we establish a lower bound on the curvature of the population score function $\mathbb{M}^s(\psi)$ near its minimizer $\psi_0^s$:
$$
\mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) \gtrsim d^2_n(\psi, \psi_0^s)$$ where $$d_n(\psi, \psi_0^s) = \sqrt{\frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \mathds{1}\left(\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n\right) + \|\psi - \psi_0^s\|\mathds{1}\left(\|\psi - \psi_0^s\| > \mathcal{K}\sigma_n\right)}
$$ for some constant $\mathcal{K} > 0$. The intuition behind this compound structure is the following: when $\psi$ lies in a $\sigma_n$-neighborhood of $\psi_0^s$, $\mathbb{M}^s(\psi)$ behaves like a smooth quadratic function, but away from the truth $\mathbb{M}^s(\psi)$ starts resembling $\mathbb{M}(\psi)$, which induces the linear curvature.
\\\\
\noindent
For the linear part, we first establish that $|\mathbb{M}(\psi) - \mathbb{M}^s(\psi)| = O(\sigma_n)$ uniformly for all $\psi$. Define $\eta = (\psi - \psi_0)/\sigma_n$:
\allowdisplaybreaks
\begin{align}
& |\mathbb{M}(\psi) - \mathbb{M}^s(\psi)| \notag \\
& \le \mathbb{E}\left(\left | \mathds{1}(Q^{\top}\psi \ge 0) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right | \right) \notag\\
& = \mathbb{E}\left(\left | \mathds{1}\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \eta^{\top}\tilde{Q} \ge 0\right) - K\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \eta^{\top}\tilde{Q}\right)\right | \right) \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \left | \mathds{1}\left(t + \eta^{\top}\tilde{Q} \ge 0\right) - K\left(t + \eta^{\top}\tilde{Q}\right)\right | f_0(\sigma_n t | \tilde{Q}) \ dt \ dP(\tilde{Q}) \notag\\
& = \sigma_n \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | f_0(\sigma_n (t-\eta^{\top}\tilde{Q}) | \tilde{Q}) \ dt \ dP(\tilde{Q}) \notag \\
& \le \sigma_n \int_{\mathbb{R}^{p-1}} m(\tilde{Q})\int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | \ dt \ dP(\tilde{Q}) \notag\\
& = \sigma_n \mathbb{E}(m(\tilde{Q})) \int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | \ dt \notag \\
\label{eq:lin_bound_1} & = K_1 \sigma_n < \infty \hspace{0.3in} [\text{by Assumption \ref{as:density_bound}}] \,.
\end{align}
Here, the constant $K_1 = \mathbb{E}(m(\tilde{Q})) \left[\int_{-1}^{1}\left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | \ dt \right]$ does not depend on $\psi$; hence the bound is uniform over $\psi$. Next:
\begin{align*}
\mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) & = \mathbb{M}^s(\psi) - \mathbb{M}(\psi) + \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \\
& \qquad \qquad + \mathbb{M}(\psi_0) - \mathbb{M}(\psi_0^s) + \mathbb{M}(\psi_0^s) -\mathbb{M}^s(\psi_0^s) \\
& =: T_1 + T_2 + T_3 + T_4 \,.
\end{align*}
\noindent
We bound each summand separately:
\begin{enumerate}
\item $T_1 = \mathbb{M}^s(\psi) - \mathbb{M}(\psi) \ge -K_1 \sigma_n$, by equation \eqref{eq:lin_bound_1};
\item $T_2 = \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \ge u_-\|\psi - \psi_0\|$, by Lemma \ref{lem:linear_curvature};
\item $T_3 = \mathbb{M}(\psi_0) - \mathbb{M}(\psi_0^s) \ge -u_+\|\psi_0^s - \psi_0\| \ge -\epsilon_1 \sigma_n$, where $\epsilon_1$ can be taken arbitrarily small for all large $n$, since $\|\psi_0^s - \psi_0\|/\sigma_n \rightarrow 0$; this follows from Lemma \ref{lem:linear_curvature} together with Lemma \ref{bandwidth};
\item $T_4 = \mathbb{M}(\psi_0^s) -\mathbb{M}^s(\psi_0^s) \ge -K_1 \sigma_n$, by equation \eqref{eq:lin_bound_1}.
\end{enumerate}
Combining, we have
\allowdisplaybreaks
\begin{align*}
\mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) & \ge u_-\|\psi - \psi_0\| -(2K_1 + \epsilon_1) \sigma_n \\
& \ge ( u_-/2)\|\psi - \psi_0\| \hspace{0.2in} \left[\text{If} \ \|\psi - \psi_0\| \ge \frac{2(2K_1 + \epsilon_1)}{u_-}\sigma_n\right] \\
& \ge ( u_-/4)\|\psi - \psi_0^s\|
\end{align*}
where the last inequality holds for all large $n$ as proved in Lemma \ref{bandwidth}. Using Lemma \ref{bandwidth} again, we conclude that for any pair of positive constants $(\epsilon_1, \epsilon_2)$:
$$\|\psi - \psi_0^s\| \ge \left(\frac{2(2K_1 + \epsilon_1)}{u_-}+\epsilon_2\right)\sigma_n \Rightarrow \|\psi - \psi_0\| \ge \frac{2(2K_1 + \epsilon_1)}{u_-}\sigma_n$$ for all large $n$, which implies:
\begin{align}
& \mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) \notag \\
& \ge (u_-/4) \|\psi - \psi_0^s\| \mathds{1}\left(\|\psi - \psi_0^s\| \ge \left(\frac{2(2K_1 + \epsilon_1)}{u_-}+\epsilon_2\right)\sigma_n \right) \notag \\
& \ge (u_-/4) \|\psi - \psi_0^s\| \mathds{1}\left(\frac{\|\psi - \psi_0^s\|}{\sigma_n} \ge \frac{7K_1}{u_-} \right) \hspace{0.2in} [\text{for appropriate choices of} \ \epsilon_1, \epsilon_2] \notag \\
\label{lb2} & =: (u_-/4) \|\psi - \psi_0^s\| \mathds{1}\left(\frac{\|\psi - \psi_0^s\|}{\sigma_n} \ge \mathcal{K} \right)
\end{align}
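The $O(\sigma_n)$ smoothing bias behind equation \eqref{eq:lin_bound_1} can also be seen numerically: for a standard Gaussian coordinate (an illustrative design, with the biweight integrated kernel as an illustrative $K$), halving the bandwidth halves $\mathbb{E}\left|\mathds{1}(X \ge 0) - K(X/\sigma_n)\right|$:

```python
import numpy as np

def trap(f, x):  # simple trapezoid rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

def K(t):  # biweight integrated kernel (illustrative choice of K)
    t = np.clip(t, -1.0, 1.0)
    return 0.5 + (15.0 / 16.0) * (t - 2.0 * t**3 / 3.0 + t**5 / 5.0)

def gap(sigma):
    """int |1(x >= 0) - K(x / sigma)| phi(x) dx for X ~ N(0, 1)."""
    x = np.linspace(-1.0, 1.0, 200001)  # integrand vanishes for |x| > sigma <= 0.1
    phi = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
    return trap(np.abs((x >= 0) - K(x / sigma)) * phi, x)

# halving the bandwidth halves the gap, confirming the O(sigma_n) rate
assert abs(gap(0.1) / gap(0.05) - 2.0) < 0.02
```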
\noindent
In the next part, we find the lower bound when $\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n$. For the quadratic curvature, we perform a two-step Taylor expansion. Define $\eta = (\psi - \psi_0)/\sigma_n$. We have:
\allowdisplaybreaks
\begin{align}
& \nabla^2\mathbb{M}^s(\psi) \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n^2} \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} K''\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\left\{\mathds{1}(Q^{\top}\psi_0 \le 0) - \mathds{1}(Q^{\top}\psi_0 \ge 0)\right\}\right) \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n^2} \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} K''\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \tilde{Q}^{\top}\tilde \eta \right)\left\{\mathds{1}(Q^{\top}\psi_0 \le 0) - \mathds{1}(Q^{\top}\psi_0 \ge 0)\right\}\right) \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n^2} \mathbb{E}\left[\tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{0} K''\left(\frac{z}{\sigma_n} + \tilde{Q}^{\top}\tilde \eta \right) f_0(z |\tilde{Q}) \ dz \right. \right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad -\int_{0}^{\infty} K''\left(\frac{z}{\sigma_n} + \tilde{Q}^{\top}\tilde \eta \right) f_0(z | \tilde{Q}) \ dz \right]\right] \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n} \mathbb{E}\left[\tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{0} K''\left(t+ \tilde{Q}^{\top}\tilde \eta \right) f_0(\sigma_n t |\tilde{Q}) \ dt \right. \right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad - \int_{0}^{\infty} K''\left(t + \tilde{Q}^{\top}\tilde \eta \right) f_0(\sigma_n t | \tilde{Q}) \ dt \right]\right] \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n} \mathbb{E}\left[\tilde{Q}\tilde{Q}^{\top} f_0(0| \tilde{Q})\left[\int_{-\infty}^{0} K''\left(t+ \tilde{Q}^{\top}\tilde \eta \right) \ dt \right. \right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad - \int_{0}^{\infty} K''\left(t + \tilde{Q}^{\top}\tilde \eta \right) \ dt \right]\right] + R \notag\\
\label{eq:quad_eq_1} & =(\beta_0 - \alpha_0)\frac{1}{\sigma_n}\mathbb{E}\left[\tilde{Q}\tilde{Q}^{\top} f_0(0| \tilde{Q})K'(\tilde{Q}^{\top}\tilde \eta)\right] + R \,.
\end{align}
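The limit computed in the last display can be sanity-checked numerically. The sketch below is illustrative only: it assumes a Gaussian kernel $K = \Phi$ (so $K' = \phi$, $K'' = \phi'$), and the scalar $a$ stands in for $\tilde{Q}^{\top}\tilde\eta$. It verifies the identity $\int_{-\infty}^{0} K''(t+a)\, dt - \int_{0}^{\infty} K''(t+a)\, dt = 2K'(a)$, which is what produces the factor $K'(\tilde{Q}^{\top}\tilde\eta)$ in \eqref{eq:quad_eq_1}.

```python
import math

# Gaussian kernel (illustrative choice): K = Phi, so K' = phi and K'' = phi'
phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)   # K'
dphi = lambda t: -t * phi(t)                                    # K''

def midpoint(f, lo, hi, n=200_000):
    # simple midpoint-rule quadrature on [lo, hi]
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a = 0.8  # plays the role of Q~^T eta~
lhs = midpoint(lambda t: dphi(t + a), -12.0, 0.0) - midpoint(lambda t: dphi(t + a), 0.0, 12.0)
rhs = 2 * phi(a)
print(abs(lhs - rhs))  # agreement up to quadrature/truncation error
```
The same check goes through for any smooth kernel with $K'(\pm\infty) = 0$, since each one-sided integral of $K''$ telescopes to $\pm K'(a)$.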
As we want a lower bound on the set $\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n$, we have $\|\eta\| \le \mathcal{K}$. For the rest of the analysis, define
\begin{align*}
\Lambda(v_1, v_2) = \mathbb{E}\left[|v_1^{\top}\tilde{Q}|^2 f_0(0|\tilde{Q})K'(\tilde{Q}^{\top}v_2) \right] \,.
\end{align*}
Clearly $\Lambda \ge 0$ and is continuous on the compact set $\{(v_1, v_2): \|v_1\| = 1, \|v_2\| \le \mathcal{K}\}$, hence its infimum over this set is attained. Suppose $\Lambda(v_1, v_2) = 0$ for some such pair. Then we have:
\begin{align*}
\mathbb{E}\left[|v_1^{\top}\tilde{Q}|^2 f_0(0|\tilde{Q})K'(\tilde{Q}^{\top}v_2) \right] = 0 \,,
\end{align*}
which further implies $|v_1^{\top}\tilde{Q}| = 0$ almost surely and violates Assumption \ref{as:eigenval_bound}. Hence the infimum is strictly positive. On the other hand, for the remainder term of equation \eqref{eq:quad_eq_1},
fix $\nu \in S^{p-1}$. Then:
\allowdisplaybreaks
\begin{align}
& \left| \nu^{\top} R \nu \right| \notag \\
& = \left|\frac{1}{\sigma_n} \mathbb{E}\left[\left(\nu^{\top}\tilde{Q}\right)^2 \left[\int_{-\infty}^{0} K''\left(t+ \tilde{Q}^{\top}\tilde \eta \right) (f_0(\sigma_n t |\tilde{Q}) - f_0(0|\tilde{Q})) \ dt \right. \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. \left. - \int_{0}^{\infty} K''\left(t + \tilde{Q}^{\top}\tilde \eta \right) (f_0(\sigma_n t |\tilde{Q}) - f_0(0|\tilde{Q})) \ dt \right]\right]\right| \notag\\
& \le \mathbb{E} \left[\left(\nu^{\top}\tilde{Q}\right)^2h(\tilde{Q}) \int_{-\infty}^{\infty} \left|K''\left(t+ \tilde{Q}^{\top}\tilde \eta \right)\right| |t| \ dt\right] \notag\\
& \le \mathbb{E} \left[\left(\nu^{\top}\tilde{Q}\right)^2h(\tilde{Q}) \int_{-1}^{1} \left|K''\left(t\right)\right| |t - \tilde{Q}^{\top}\tilde \eta | \ dt\right] \notag\\
\label{eq:quad_eq_3} & \le \mathbb{E} \left[\left(\nu^{\top}\tilde{Q}\right)^2h(\tilde{Q})(1+ \mathcal{K}\|\tilde{Q}\|) \int_{-1}^{1} \left|K''\left(t\right)\right| \ dt\right] = C_1 \hspace{0.2in} [\text{say}]
\end{align}
by Assumption \ref{as:distribution} and Assumption \ref{as:derivative_bound}. By a second-order Taylor expansion, we have:
\begin{align*}
\mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) & = \frac12 (\psi - \psi_0^s)^{\top} \nabla^2\mathbb{M}^s(\psi^*_n) (\psi - \psi_0^s) \\
& \ge \left(\min_{\|v_1\| = 1, \|v_2 \| \le \mathcal{K}} \Lambda(v_1, v_2)\right) \frac{\|\psi - \psi_0^s\|^2}{2\sigma_n} - \frac{C_1\sigma_n}{2} \, \frac{\|\psi - \psi_0^s\|^2_2}{\sigma_n} \\
& \gtrsim \frac{\|\psi - \psi_0^s\|^2_2}{\sigma_n} \,.
\end{align*}
This establishes the desired curvature bound.
\\\\
\noindent
Finally, we bound the modulus of continuity:
$$\mathbb{E}\left(\sup_{d_n(\psi, \psi_0^s) \le \delta} \left|(\mathbb{M}^s_n-\mathbb{M}^s)(\psi) - (\mathbb{M}^s_n-\mathbb{M}^s)(\psi_n)\right|\right) \,.$$
The proof is similar to that of Lemma \ref{lem:rate_smooth} and therefore we sketch the main steps briefly. Define the estimating function $f_\psi$ as:
$$
f_\psi(Y, Q) = (Y - \gamma)\left(1 - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)
$$
and the collection of functions $\mathcal{F}_\zeta = \{f_\psi - f_{\psi_0^s}: d_n(\psi, \psi_0^s) \le \zeta\}$. That $\mathcal{F}_\zeta$ has finite VC dimension follows from the same argument used to show that $\mathcal{G}_n$ has finite VC dimension in the proof of Lemma \ref{conv-prob}. Now, to bound the modulus of continuity, we use Lemma 2.14.1 of \cite{vdvw96}, which implies:
$$
\sqrt{n}\mathbb{E}\left(\sup_{d_n(\psi, \psi_0^s) \le \zeta} \left|(\mathbb{M}^s_n-\mathbb{M}^s)(\psi) - (\mathbb{M}^s_n-\mathbb{M}^s)(\psi_n)\right|\right) \lesssim \mathcal{J}(1, \mathcal{F}_\zeta) \sqrt{PF_\zeta^2}
$$
where $F_\zeta(Y, Q)$ is the envelope of $\mathcal{F}_\zeta$ defined as:
\begin{align*}
F_\zeta(Y, Q) & = \sup_{d_*(\psi, \psi_0^s) \le \zeta}\left|(Y - \gamma)\left(K\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)-K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)\right| \\
& = \left|(Y - \gamma)\right| \sup_{d_*(\psi, \psi_0^s) \le \zeta} \left|\left(K\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)-K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)\right|
\end{align*}
and $\mathcal{J}(1, \mathcal{F}_\zeta)$ is the entropy integral, which can be bounded above by a constant independent of $n$ as the class $\mathcal{F}_\zeta$ has finite VC dimension. As in the proof of Lemma \ref{lem:rate_smooth}, we consider two separate cases: (1) $\zeta \le \sqrt{\mathcal{K} \sigma_n}$ and (2) $\zeta > \sqrt{\mathcal{K} \sigma_n}$. In the first case, we have $\sup_{d_n(\psi, \psi_0^s) \le \zeta} \|\psi - \psi_0^s\| = \zeta \sqrt{\sigma_n}$. This further implies:
\begin{align*}
& \sup_{d_*(\psi, \psi_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2 \\
& \le \max\left\{\left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} + \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2, \right. \\
& \qquad \qquad \qquad \qquad \left. \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} - \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2\right\} \\
& := \max\{T_1, T_2\} \,.
\end{align*}
Therefore, bounding $\mathbb{E}[F_\zeta^2(Y, Q)]$ amounts to bounding $\mathbb{E}[(Y- \gamma)^2 T_1]$ and $\mathbb{E}[(Y - \gamma)^2 T_2]$ separately, which in turn reduces to bounding $\mathbb{E}[T_1]$ and $\mathbb{E}[T_2]$, as $|Y - \gamma| \le 1$. These bounds follow from calculations similar to those in Lemma \ref{lem:rate_smooth} and are hence skipped. Finally, in this case we have
$$
\mathbb{E}[F_\zeta^2(Y, Q)] \lesssim \zeta \sqrt{\sigma_n} \,.
$$
The other case, $\zeta > \sqrt{\mathcal{K} \sigma_n}$, also follows by calculations similar to those in Lemma \ref{lem:rate_smooth}, which yield:
$$
\mathbb{E}[F_\zeta^2(Y, Q)] \lesssim \zeta^2 \,.
$$
\noindent
Using this in the maximal inequality yields:
\begin{align*}
\sqrt{n}\mathbb{E}\left(\sup_{d_n(\psi, \psi_0^s) \le \zeta} \left|(\mathbb{M}^s_n-\mathbb{M}^s)(\psi) - (\mathbb{M}^s_n-\mathbb{M}^s)(\psi_n)\right|\right) & \lesssim \sqrt{\zeta}\sigma^{1/4}_n\mathds{1}_{\zeta \le \sqrt{\mathcal{K} \sigma_n}} + \zeta \mathds{1}_{\zeta > \sqrt{\mathcal{K} \sigma_n}} \\
& := \phi_n(\zeta) \,.
\end{align*}
This implies (following the same argument as in Lemma \ref{lem:rate_smooth}):
$$
n^{2/3}\sigma_n^{-1/3}d^2(\hat \psi^s, \psi_0^s) = O_p(1) \,.
$$
Now as $n^{2/3}\sigma_n^{-1/3} \gg \sigma_n^{-1}$, we have:
$$
\frac{1}{\sigma_n}d_n^2(\hat \psi^s, \psi_0^s) = o_p(1) \,,
$$
which further indicates
\begin{align}
\label{rate1} & n^{2/3}\sigma_n^{-1/3}\left[\frac{\|\hat \psi^s - \psi_0^s\|^2}{\sigma_n} \mathds{1}(\|\hat \psi^s - \psi_0^s\| \le \mathcal{K}\sigma_n) \right. \notag \\
& \qquad \qquad \qquad \left. + \|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\|\ge \mathcal{K}\sigma_n)\right] = O_P(1)
\end{align}
This implies:
\begin{enumerate}
\item $\frac{n^{2/3}}{\sigma_n^{4/3}}\|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\|\le \mathcal{K}\sigma_n) = O_P(1)$
\item $\frac{n^{2/3}}{\sigma_n^{1/3}}\|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\| \ge \mathcal{K}\sigma_n) = O_P(1)$
\end{enumerate}
Therefore:
\begin{align*}
& \frac{n^{2/3}}{\sigma_n^{4/3}}\|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\| \le \mathcal{K}\sigma_n) \\
& \qquad \qquad \qquad + \frac{n^{2/3}}{\sigma_n^{1/3}}\|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\| \ge \mathcal{K}\sigma_n) = O_p(1) \,.
\end{align*}
That is,
$$
\left(\frac{n^{2/3}}{\sigma_n^{4/3}} \wedge \frac{n^{2/3}}{\sigma_n^{1/3}}\right)\|\hat \psi^s - \psi_0^s\| = O_p(1) \,.
$$
Now $n^{2/3}/\sigma_n^{4/3} \gg 1/\sigma_n$ as long as $n^{2/3} \gg \sigma_n^{1/3}$, which trivially holds. On the other hand, $n^{2/3}/\sigma_n^{1/3} \gg 1/\sigma_n$ if and only if $n\sigma_n \gg 1$, which is also true by our assumption. Therefore we have:
$$
\frac{\|\hat \psi^s - \psi_0^s\|}{\sigma_n} = O_p(1) \,.
$$
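The exponent comparisons used above can be checked mechanically. The following sketch is a hypothetical sanity check: it assumes $\sigma_n = n^{-a}$ for some $a \in (0,1)$ (so that $n\sigma_n \to \infty$, as required), tracks powers of $n$ with exact fractions, and confirms that both $n^{2/3}/\sigma_n^{4/3}$ and $n^{2/3}/\sigma_n^{1/3}$ dominate $1/\sigma_n$.

```python
from fractions import Fraction as F

# With sigma_n = n^{-a}, compare powers of n: n^{e1} >> n^{e2} iff e1 > e2.
checks = []
for a in (F(1, 10), F(1, 3), F(1, 2), F(9, 10)):  # sample rates a in (0, 1)
    exp_near = F(2, 3) + F(4, 3) * a  # exponent of n in n^{2/3} / sigma_n^{4/3}
    exp_far = F(2, 3) + F(1, 3) * a   # exponent of n in n^{2/3} / sigma_n^{1/3}
    exp_inv = a                       # exponent of n in 1 / sigma_n
    # exp_near > a holds for every a; exp_far > a iff a < 1, i.e. iff n*sigma_n -> infinity
    checks.append(exp_near > exp_inv and exp_far > exp_inv)

ok = all(checks)
print(ok)
```
The second comparison is exactly the bandwidth condition $n\sigma_n \to \infty$: with $a = 1$ the exponents $2/3 + a/3$ and $a$ would coincide and the domination would fail.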
This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:regression}}
\section{Appendix}
In this section, we present the proof of Lemma \ref{lem:rate_smooth}, which lies at the heart of our refined analysis of the smoothed change plane estimator. Proofs of the other lemmas and of our results for the binary response model are available in Appendix \ref{sec:supp_B}.
\subsection{Proof of Lemma \ref{lem:rate_smooth}}
\begin{proof}
The proof of Lemma \ref{lem:rate_smooth} is quite long, hence we break it into a few auxiliary lemmas.
\begin{lemma}
\label{lem:pop_curv_nonsmooth}
Under Assumption \ref{eq:assm}, there exist $u_- , u_+ > 0$ such that:
$$
u_- d^2(\theta, \theta_0) \le \mathbb{M}(\theta) - \mathbb{M}(\theta_0) \le u_+ d^2(\theta, \theta_0) \,,
$$
for $\theta$ in a (non-shrinking) neighborhood of $\theta_0$, where:
$$
d(\theta, \theta_0) := \sqrt{\|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2 + \|\psi - \psi_0\|} \,.
$$
\end{lemma}
\begin{lemma}
\label{lem:uniform_smooth}
Under Assumption \ref{eq:assm}, the smoothed loss function $\mathbb{M}^s(\theta)$ is uniformly close to the non-smoothed loss function $\mathbb{M}(\theta)$:
$$
\sup_{\theta \in \Theta}\left|\mathbb{M}^s(\theta) - \mathbb{M}(\theta)\right| \le K_1 \sigma_n \,,
$$
for some constant $K_1$.
\end{lemma}
\begin{lemma}
\label{lem:pop_smooth_curvarture}
Under certain assumptions:
\begin{align*}
\mathbb{M}^s(\theta) - \mathbb{M}^s(\theta_0^s) & \gtrsim \|\beta - \beta_0^s\|^2 + \|\delta - \delta_0^s\|^2 \\
& \qquad \qquad + \frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \mathds{1}_{\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n} + \|\psi - \psi_0^s\| \mathds{1}_{\|\psi - \psi_0^s\| > \mathcal{K}\sigma_n} \\
& := d_*^2(\theta, \theta_0^s) \,.
\end{align*}
for some constant $\mathcal{K}$ and for all $\theta$ in a neighborhood of $\theta_0$ that does not change with $n$.
\end{lemma}
The proofs of the three lemmas above can be found in Appendix \ref{sec:supp_B}. We next move to the proof of Lemma \ref{lem:rate_smooth}. In Lemma \ref{lem:pop_smooth_curvarture} we have established the curvature of the smooth loss function $\mathbb{M}^s(\theta)$ around $\theta_0^s$. To determine the rate of convergence of $\hat \theta^s$ to $\theta_0^s$, we further need an upper bound on the modulus of continuity of our loss function. Towards that end, first recall that our loss function is:
$$
f_{\theta}(Y, X, Q) = \left(Y - X^{\top}\beta\right)^2 + \left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)
$$
The centered loss function can be written as:
\begin{align}
& f_{\theta}(Y, X, Q) - f_{\theta_0^s}(Y, X, Q) \notag \\
& = \left(Y - X^{\top}\beta\right)^2 + \left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) \notag \\
& \qquad \qquad \qquad \qquad - \left(Y - X^{\top}\beta_0^s\right)^2 - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right] K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) \notag \\
& = \left(Y - X^{\top}\beta\right)^2 + \left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) \notag \\
& \qquad \qquad \qquad \qquad - \left(Y - X^{\top}\beta_0^s\right)^2 - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) \notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right] \left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\} \notag \\
& = \underbrace{\left(Y - X^{\top}\beta\right)^2 - \left(Y - X^{\top}\beta_0^s\right)^2}_{M_1} \notag \\
& \qquad + \underbrace{\left\{ \left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right\} K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)}_{M_2} \notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad - \underbrace{\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right] \left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}}_{M_3} \notag \\
\label{eq:expand_f} & := M_1 + M_2 + M_3
\end{align}
For the rest of the analysis, fix $\zeta > 0$ and consider the collection of functions $\mathcal{F}_{\zeta}$ which is defined as:
$$
\mathcal{F}_{\zeta} = \left\{f_\theta - f_{\theta^s}: d_*(\theta, \theta^s) \le \zeta\right\} \,.
$$
First note that $\mathcal{F}_\zeta$ has a bounded uniform entropy integral (henceforth BUEI) over $\zeta$. To establish this, it is enough to argue that the collection $\mathcal{F} = \{ f_\theta : \theta \in \Theta\}$ is BUEI. Note that the class of functions $X \mapsto X^{\top}\beta$ has VC dimension $p$, and so does the class $X \mapsto X^{\top}(\beta + \delta)$. Therefore the class of functions $(X, Y) \mapsto (Y - X^{\top}(\beta + \delta))^2 - (Y - X^{\top}\beta)^2$ is also BUEI, as composition with a monotone function (here $x^2$) and taking differences preserve this property. Further, the class of hyperplanes $Q \mapsto Q^{\top}\psi$ also has finite VC dimension (depending only on the dimension of $Q$), and the VC dimension does not change upon scaling by $\sigma_n$. Therefore the class of functions $Q \mapsto Q^{\top}\psi/\sigma_n$ has the same VC dimension as $Q \mapsto Q^{\top}\psi$, which is independent of $n$. Again, as composition with a monotone function preserves the BUEI property, the class $Q \mapsto K(Q^{\top}\psi/\sigma_n)$ is also BUEI. As the product of two BUEI classes is BUEI, we conclude that $\mathcal{F}$ (and hence $\mathcal{F}_\zeta$) is BUEI.
\\\\
\noindent
Now to bound the modulus of continuity we use Lemma 2.14.1 of \cite{vdvw96}:
\begin{equation*}
\label{eq:moc_bound}
\sqrt{n}\mathbb{E}\left[\sup_{\theta: d_*(\theta, \theta_0^s) \le \zeta} \left|\left(\mathbb{P}_n - P\right)\left(f_\theta - f_{\theta_0^s}\right)\right|\right] \lesssim \mathcal{J}(1, \mathcal{F}_\zeta) \sqrt{\mathbb{E}\left[F_{\zeta}^2(X, Y, Q)\right]}
\end{equation*}
where $F_\zeta$ is some envelope function of $\mathcal{F}_\zeta$. As the function class $\mathcal{F}_\zeta$ has a bounded entropy integral, $\mathcal{J}(1, \mathcal{F}_\zeta)$ can be bounded above by some constant independent of $n$. We next calculate the order of the envelope function $F_\zeta$. Recall that, by definition, an envelope function satisfies:
$$
F_{\zeta}(X, Y, Q) \ge \sup_{\theta: d_*(\theta, \theta_0^s) \le \zeta} \left| f_{\theta} - f_{\theta_0^s}\right| \,.
$$
and we can write $f_\theta - f_{\theta_0^s} = M_1 + M_2 + M_3$, which follows from equation \eqref{eq:expand_f}. Therefore, to find the order of the envelope function, it is enough to bound $M_1, M_2, M_3$ over the set $d_*(\theta, \theta_0^s) \le \zeta$. We start with $M_1$:
\begin{align}
\sup_{d_*(\theta, \theta_0^s) \le \zeta}|M_1| & = \sup_{d_*(\theta, \theta_0^s) \le \zeta}\left|\left(Y - X^{\top}\beta\right)^2 - \left(Y - X^{\top}\beta_0^s\right)^2\right| \notag \\
& = \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|2YX^{\top}(\beta_0^s - \beta) + (X^{\top}\beta)^2 - (X^{\top}\beta_0^s)^2\right| \notag \\
& \le \sup_{d_*(\theta, \theta_0^s) \le \zeta} \|\beta - \beta_0^s\| \left[2|Y|\|X\| + (\|\beta_0^s\| + \zeta)\|X\|^2\right] \notag \\
\label{eq:env_1} & \le \zeta\left[2|Y|\|X\| + (\|\beta_0^s\| + \zeta)\|X\|^2\right] := F_{1, \zeta}(X, Y, Q) \hspace{0.1in} [\text{Envelope function of }M_1]
\end{align}
and the second term:
\allowdisplaybreaks
\begin{align}
& \sup_{d_*(\theta, \theta_0^s) \le \zeta} |M_2| \notag \\
& = \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right\}\right|K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) \notag \\
& \le \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right\}\right| \notag \\
& = \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{\left[2Y(X^{\top}\delta_0^s - X^{\top}\delta) + 2[(X^{\top}\beta)(X^{\top}\delta) \right. \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. \left. - (X^{\top}\beta_0^s)(X^{\top}\delta_0^s)] + (X^{\top}\delta)^2 - (X^{\top}\delta_0^s)^2\right]\right\}\right| \notag \\
& \le \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left\{\|\delta - \delta_0^s\|2|Y|\|X\| + 2\|\beta - \beta_0^s\|\|X\|\|\delta\| \right. \notag \\
& \qquad \qquad \qquad \qquad \left. + 2\|\delta - \delta_0^s\|\|X\|\|\beta_0^s\| + 2\|X\|\|\delta + \delta_0^s\|\|\delta - \delta_0^s\|\right\} \notag \\
& \le \zeta \left[2|Y|\|X\| + 2\|X\|(\|\delta_0^s\| + \zeta) + 2\|X\|\|\beta_0^s\| + 2\|X\|(\|\delta_0^s\| + \zeta)\right] \notag \\
\label{eq:env_2}& \le \zeta \times 2\|X\|\left[2|Y| + 2(\|\delta_0^s\| + \zeta) + \|\beta_0^s\|\right] := F_{2, \zeta}(X, Y, Q) \hspace{0.1in} [\text{Envelope function of }M_2]
\end{align}
For the third term, note that:
\begin{align*}
& \sup_{d_*(\theta, \theta_0^s) \le \zeta} |M_3| \\
& \le \left|\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right| \times \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right| \\
& := F_{3, \zeta} (X, Y, Q)
\end{align*}
Henceforth, we define the envelope function to be $F_\zeta = F_{1, \zeta} + F_{2, \zeta} + F_{3, \zeta}$. Hence we have, by the triangle inequality:
$$
\sqrt{\mathbb{E}\left[F_{\zeta}^2(X, Y, Q)\right]} \le \sum_{i=1}^3 \sqrt{\mathbb{E}\left[F_{i, \zeta}^2(X, Y, Q)\right]}
$$
From equations \eqref{eq:env_1} and \eqref{eq:env_2} we have:
\begin{equation}
\label{eq:moc_bound_2}
\sqrt{\mathbb{E}\left[F_{1, \zeta}^2(X, Y, Q)\right]} + \sqrt{\mathbb{E}\left[F_{2, \zeta}^2(X, Y, Q)\right]} \lesssim \zeta \,.
\end{equation}
For $F_{3, \zeta}$, first note that:
\begin{align*}
& \mathbb{E}\left[\left|\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right|^2 \mid Q\right] \\
& \le 8\mathbb{E}\left[\left(Y - X^{\top}\beta_0^s\right)^2(X^{\top}\delta_0^s)^2 \mid Q\right] + 2\mathbb{E}[(X^{\top}\delta_0^s)^4 \mid Q] \\
& \le \left\{8\|\beta_0 - \beta_0^s\|^2\|\delta_0^s\|^2 + 8\|\delta_0^s\|^4 + 2\|\delta_0^s\|^4\right\}m_4(Q) \,.
\end{align*}
where $m_4(Q)$ is defined in Assumption \ref{eq:assm}. In this part, we have to carefully tackle the dichotomous behavior of $\psi$ around $\psi_0^s$. Henceforth, define $d_*^2(\psi, \psi_0^s)$ as:
\begin{align*}
d_*^2(\psi, \psi_0^s) = & \frac{\|\psi - \psi_0^s\|^2}{\sigma_n}\mathds{1}_{\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n} + \|\psi - \psi_0^s\|\mathds{1}_{\|\psi - \psi_0^s\| > \mathcal{K}\sigma_n}
\end{align*}
This is a slight abuse of notation, but the reader should think of it as the $\psi$-part of $d_*^2(\theta, \theta_0^s)$. Define $B_{\zeta}(\psi_0^s)$ to be the set of all $\psi$ such that $d^2_*(\psi, \psi_0^s) \le \zeta^2$. We can decompose $B_{\zeta}(\psi_0^s)$ as a disjoint union of two sets:
\begin{align*}
B_{\zeta, 1}(\psi_0^s) & = \left\{\psi: d^2_*(\psi, \psi_0^s) \le \zeta^2, \|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n\right\} \\
& = \left\{\psi:\frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \le \zeta^2, \|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n\right\} \\
& = \left\{\psi:\|\psi - \psi_0^s\| \le \zeta \sqrt{\sigma_n}, \|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n\right\} \\\\
B_{\zeta, 2}(\psi_0^s) & = \left\{\psi: d^2_*(\psi, \psi_0^s) \le \zeta^2, \|\psi - \psi_0^s\| > \mathcal{K}\sigma_n\right\} \\
& = \left\{\psi: \|\psi - \psi_0^s\| \le \zeta^2, \|\psi - \psi_0^s\| > \mathcal{K}\sigma_n\right\}
\end{align*}
Assume $\mathcal{K} > 1$. The case $\mathcal{K} < 1$ follows from similar calculations and is hence skipped for brevity. Consider the following two cases:
\\\\
\noindent
{\bf Case 1: }Suppose $\zeta \le \sqrt{\mathcal{K}\sigma_n}$. Then $B_{\zeta, 2} = \emptyset$. Also, as $\mathcal{K} > 1$, we have $\zeta\sqrt{\sigma_n} \le \mathcal{K}\sigma_n$. Hence:
$$
\sup_{d_*^2(\psi, \psi_0^s) \le \zeta^2}\|\psi - \psi_0^s\| = \sup_{B_{\zeta, 1}}\|\psi - \psi_0^s\| = \zeta\sqrt{\sigma_n} \,.
$$
This implies:
\begin{align*}
& \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2 \\
& \le \max\left\{\left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} + \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2, \right. \\
& \qquad \qquad \qquad \left. \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} - \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2\right\} \\
& := \max\{T_1, T_2\} \,.
\end{align*}
Therefore we have:
$$
\mathbb{E}\left[F^2_{3, \zeta}(X, Y, Q)\right] \le \mathbb{E}[m_4(Q) T_1] + \mathbb{E}[m_4(Q) T_2] \,.
$$
Now:
\begin{align}
& \mathbb{E}[m_4(Q) T_1] \notag \\
& = \mathbb{E}\left[m_4(Q) \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} + \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2\right] \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \left|K\left(t\right) - K\left(t + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right|^2 \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag\\
& \le \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \left|K\left(t\right) - K\left(t + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right| \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty}m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \int_{t}^{t + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s) \ ds \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) f_s(\sigma_nt\mid \tilde q) \ dt \ ds
\ f(\tilde q) \ d\tilde q \notag \\
& = \zeta \sqrt{\sigma_n} \mathbb{E}[\|\tilde Q\|m_4(-\tilde Q^{\top}\psi_0^s, \tilde Q)f_s(0 \mid \tilde Q)] + R \notag
\end{align}
where as before we split $R$ into three parts $R = R_1 + R_2 + R_3$.
\begin{align}
\left|R_1\right| & = \left|\sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s m_4(- \tilde q^{\top}\tilde \psi_0^s, \tilde q) (f_s(\sigma_nt\mid \tilde q) - f_s(0 \mid \tilde q)) \ dt \ ds \ f(\tilde q) \ d\tilde q\right| \notag \\
\label{eq:r1_env_1} & \le \sigma_n^2 \int_{\mathbb{R}^{p-1}}m_4(- \tilde q^{\top}\tilde \psi_0^s, \tilde q)\dot f_s(\tilde q) \int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s |t| dt\ ds \ f(\tilde q) \ d\tilde q
\end{align}
We next calculate the inner integral (involving $(s,t)$) of equation \eqref{eq:r1_env_1}:
\begin{align*}
& \int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s |t| dt\ ds \\
& =\left(\int_{-\infty}^0 + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} + \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty}\right)K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s |t| dt\ ds \\
& = \frac12\int_{-\infty}^0 K'(s)\left[\left(s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)^2 - s^2\right] \ ds + \frac12\int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s)\left[\left(s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)^2 + s^2\right] \ ds \\
& \qquad \qquad \qquad \qquad + \frac12 \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty}K'(s) \left[s^2 - \left(s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)^2\right] \ ds\\
& = -\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \int_{-\infty}^0 K'(s) s \ ds + \|\tilde q\|^2\frac{\zeta^2}{2\sigma_n} \int_{-\infty}^0 K'(s) \ ds + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} s^2K'(s) \ ds \\
& \qquad \qquad -\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} sK'(s) \ ds + \|\tilde q\|^2\frac{\zeta^2}{2\sigma_n} \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s) \ ds \\
& \qquad \qquad \qquad + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty} sK'(s) \ ds - \|\tilde q\|^2\frac{\zeta^2}{2\sigma_n} \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty} K'(s) \ ds \\
& = \|\tilde q\|^2\frac{\zeta^2}{2\sigma_n}\left[2K\left(\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) - 1\right] + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \left[ -\int_{-\infty}^0 K'(s) s \ ds - \right. \\
& \qquad \qquad \left. \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s)s \ ds + \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty} sK'(s) \ ds\right] + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} s^2K'(s) \ ds \\
& = \|\tilde q\|^2\frac{\zeta^2}{\sigma_n}\left[K\left(\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) - K(0)\right] + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \left[ -\int_{-\infty}^{-\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s) s \ ds + \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty} sK'(s) \ ds\right] \\
& \qquad \qquad + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} s^2K'(s) \ ds \\
& = \|\tilde q\|^2\frac{\zeta^2}{\sigma_n}\left[K\left(\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) - K(0)\right] + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\int_{-\infty}^{\infty} K'(s)|s|\mathds{1}_{|s| \ge \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} \ ds + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} s^2K'(s) \ ds \\
& \le \dot{K}_+ \|\tilde q\|^3\frac{\zeta^3}{\sigma^{3/2}_n} + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \int_{-\infty}^{\infty} K'(s)|s| \ ds + \|\tilde q\|^2\frac{\zeta^2}{\sigma_n}\left(K\left(\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) - K(0)\right) \\
& \lesssim \|\tilde q\|^3\frac{\zeta^3}{\sigma^{3/2}_n} + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}
\end{align*}
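The closed form just derived can be verified numerically. The sketch below is illustrative only: it assumes a symmetric kernel with $K(0) = 1/2$ (as the derivation uses) and takes $K$ to be the standard Gaussian CDF; the scalar $c$ stands in for $\|\tilde q\|\zeta/\sqrt{\sigma_n}$. It compares the double integral $\int_{-\infty}^{\infty} K'(s)\int_{s-c}^{s}|t|\,dt\,ds$ against the expression on the penultimate line of the display.

```python
import numpy as np
from math import erf, sqrt

c = 0.7                                        # stands in for ||q~|| * zeta / sqrt(sigma_n)
s = np.linspace(-12.0, 12.0, 400_001)
ds = s[1] - s[0]
Kp = np.exp(-s ** 2 / 2) / np.sqrt(2 * np.pi)  # K'(s): standard normal density

F = lambda x: np.sign(x) * x ** 2 / 2          # antiderivative of |t|
g = F(s) - F(s - c)                            # inner integral of |t| over [s - c, s]
lhs = np.sum(Kp * g) * ds                      # the double integral, computed directly

K = lambda x: 0.5 * (1 + erf(x / sqrt(2)))     # K evaluated at scalar points
rhs = (c ** 2 * (K(c) - K(0))                  # ||q~||^2 zeta^2/sigma_n [K(c) - K(0)] term
       + c * np.sum(np.abs(s) * Kp * (np.abs(s) >= c)) * ds   # tail term
       + np.sum(s ** 2 * Kp * ((s >= 0) & (s <= c))) * ds)    # int_0^c s^2 K'(s) ds term
print(abs(lhs - rhs))  # small: the two expressions agree up to quadrature error
```
The identity is exact for any symmetric kernel, since the $c^2/2\,[2K(c)-1]$ term equals $c^2[K(c)-K(0)]$ precisely when $K(0) = 1/2$.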
Putting this bound in equation \eqref{eq:r1_env_1} we obtain:
\begin{align*}
|R_1| & \le \frac{\sigma_n^2}{2} \int_{\mathbb{R}^{p-1}}m_4(- \tilde q^{\top}\tilde \psi_0^s, \tilde q)\dot f_s(\tilde q) \left(\|\tilde q\|^3\frac{\zeta^3}{\sigma^{3/2}_n} + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) \ f(\tilde q) \ d\tilde q \\
& \le \frac{\zeta^3}{2\sqrt{\sigma_n}} \mathbb{E}\left[m_4(- \tilde Q^{\top}\tilde \psi_0^s, \tilde Q)\dot f_s(\tilde Q)\|\tilde Q\|^3\right] + \frac{\zeta \sqrt{\sigma_n}}{2} \mathbb{E}\left[m_4(- \tilde Q^{\top}\tilde \psi_0^s, \tilde Q)\dot f_s(\tilde Q)\|\tilde Q\|\right]
\end{align*}
and
\begin{align*}
& \left|R_2\right| \\
& = \left|\sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s \left(m_4(\sigma_n t - \tilde q^{\top}\tilde \psi_0^s, \tilde q) - m_4( - \tilde q^{\top}\tilde \psi_0^s, \tilde q)\right)f_s(0 \mid \tilde q) \ dt \ ds \ f(\tilde q) \ d\tilde q\right| \\
& \le \sigma_n^2 \int_{\mathbb{R}^{p-1}}\dot m_4( \tilde q)f_s(0 \mid \tilde q) \int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s |t| dt\ ds \ f(\tilde q) \ d\tilde q \\
& \le \sigma_n^2 \int_{\mathbb{R}^{p-1}}\dot m_4( \tilde q)f_s(0 \mid \tilde q) \left(\|\tilde q\|^3\frac{\zeta^3}{\sigma^{3/2}_n} + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) \ f(\tilde q) \ d\tilde q \\
& = \zeta \sigma_n^{3/2} \mathbb{E}\left[\dot m_4( \tilde Q)f_s(0 \mid \tilde Q)\|\tilde Q\|\right] + \zeta^3 \sqrt{\sigma_n} \mathbb{E}\left[\dot m_4( \tilde Q)f_s(0 \mid \tilde Q)\|\tilde Q\|^3\right]
\end{align*}
The third residual $R_3$ is of even higher order and is hence skipped. It is immediate that the orders of the remainders are at most $\zeta \sqrt{\sigma_n}$, which implies:
$$
\mathbb{E}[m_4(Q)T_1] \lesssim \zeta\sqrt{\sigma_n} \,.
$$
The calculation for $T_2$ is similar and hence skipped for brevity. Combining the conclusions for $T_1$ and $T_2$, we obtain, when $\zeta \le \sqrt{\mathcal{K} \sigma_n}$:
\begin{align}
& \mathbb{E}\left[F^2_{3, \zeta}(X, Y, Q)\right] \notag \\
& = \mathbb{E}\left[\left|\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right|^2 \times \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
& \lesssim \mathbb{E}\left[m_4(Q)\sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
\label{eq:env_3} & \lesssim \zeta \sqrt{\sigma_n} \,.
\end{align}
\\
\noindent
{\bf Case 2: } Now consider $\zeta > \sqrt{\mathcal{K} \sigma_n}$. Then it is immediate that:
$$
\sup_{d_*^2(\psi, \psi^s_0) \le \zeta^2} \|\psi - \psi^s_0\| = \zeta^2 \,.
$$
Using this we have:
\begin{align}
& \mathbb{E}[m_4(Q) T_1] \notag \\
& = \mathbb{E}\left[m_4(Q)\left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} + \|\tilde Q\|\frac{\zeta^2}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \left|K\left(t\right) - K\left(t + \|\tilde q\|\frac{\zeta^2}{\sigma_n}\right)\right|^2 \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
& \le \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \left|K\left(t\right) - K\left(t + \|\tilde q\|\frac{\zeta^2}{\sigma_n}\right)\right| \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
& \le \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q)\|\tilde q\|\frac{\zeta^2}{\sigma_n} \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
& = \zeta^2 \int_{\mathbb{R}^{p-1}}m_4(- \tilde q^{\top}\tilde \psi_0^s, \tilde q) f_s(0 \mid \tilde q)\|\tilde q\| \ f(\tilde q) \ d\tilde q + R \notag\\
& \le \zeta^2 \mathbb{E}\left[\|\tilde Q\|m_4\left(- \tilde Q^{\top}\tilde \psi_0^s, \tilde Q\right) f_s(0 \mid \tilde Q)\right] + R \notag
\end{align}
The analysis of the remainder term is similar, and it is of higher order. We conclude that, when $\zeta > \sqrt{\mathcal{K}\sigma_n}$:
\begin{align}
& \mathbb{E}\left[F^2_{3, \zeta}(X, Y, Q)\right] \notag \\
& = \mathbb{E}\left[\left|\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right|^2 \times \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
& \lesssim \mathbb{E}\left[m_4(Q)\sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
\label{eq:env_4} & \lesssim \zeta^2
\end{align}
Combining \eqref{eq:env_3} and \eqref{eq:env_4} with equation \eqref{eq:moc_bound_2}, we have:
\begin{align*}
\sqrt{n}\mathbb{E}\left[\sup_{\theta: d_*(\theta, \theta_0^s) \le \zeta} \left|\left(\mathbb{P}_n - P\right)\left(f_\theta - f_{\theta_0^s}\right)\right|\right] & \lesssim \sqrt{\zeta}\sigma_n^{1/4}\mathds{1}_{\zeta \le \sqrt{\mathcal{K}\sigma_n}} + \zeta \mathds{1}_{\zeta > \sqrt{\mathcal{K} \sigma_n}} \\
& := \phi_n(\zeta) \,.
\end{align*}
Hence, to obtain the rate we have to solve $r_n^2 \phi_n(1/r_n) \le \sqrt{n}$, i.e. (ignoring $\mathcal{K}$ as this does not affect the rate)
$$
r_n^{3/2}\sigma_n^{1/4}\mathds{1}_{r_n \ge \sigma_n^{-1/2}} + r_n \mathds{1}_{r_n \le \sigma_n^{-1/2}} \le \sqrt{n} \,.
$$
Now if $r_n \le \sigma_n^{-1/2}$, then $r_n = \sqrt{n}$, which implies $\sqrt{n} \le \sigma_n^{-1/2}$, i.e. $n\sigma_n \le 1$, contradicting the assumption $n\sigma_n \to \infty$. On the other hand, if $r_n \ge \sigma_n^{-1/2}$, then $r_n = n^{1/3}\sigma_n^{-1/6}$. This implies $n^{1/3}\sigma_n^{-1/6} \ge \sigma_n^{-1/2}$, i.e. $n^{1/3} \ge \sigma_n^{-1/3}$, i.e. $n\sigma_n \ge 1$, which is consistent with $n\sigma_n \to \infty$. This implies:
$$
n^{2/3}\sigma_n^{-1/3}d^2(\hat \theta^s, \theta_0^s) = O_p(1) \,.
$$
Now, as $n^{2/3}\sigma_n^{-1/3} \gg \sigma_n^{-1}$ (which is equivalent to $n\sigma_n \to \infty$), we have:
$$
\frac{1}{\sigma_n}d^2(\hat \theta^s, \theta_0^s) = o_p(1) \,.
$$
This further indicates $\|\hat \psi^s - \psi_0^s\|/\sigma_n = o_p(1)$. Along with the fact that $\|\psi_0^s - \psi_0\|/\sigma_n = o(1)$ (from Lemma \ref{bandwidth}), this establishes that $\|\hat \psi^s - \psi_0\|/\sigma_n = o_p(1)$. This completes the proof.
\end{proof}
\section{Real data analysis}
\label{sec:real_data}
We illustrate our method using cross-country data on pollution (carbon-dioxide), income and urbanization obtained from the World Development Indicators (WDI), World Bank. The Environmental Kuznets Curve hypothesis (EKC henceforth), a popular and ongoing area of research in environmental economics, posits that at an initial stage of economic development pollution increases with economic growth, and then diminishes when society’s priorities change, leading to an inverted U-shaped relation between income (measured via real GDP per capita) and pollution. The hypothesis has led to numerous empirical papers (i) testing the hypothesis (whether the relation is inverted U-shaped for countries/regions of interest in the sample), (ii) exploring the threshold level of income at which pollution starts falling, as well as (iii) examining the countries/regions which belong to the upward rising part versus the downward sloping part of the inverted U-shape, if at all. The studies have been performed using US state level data or cross-country data (e.g. \cite{shafik1992economic}, \cite{millimet2003environmental}, \cite{aldy2005environmental}, \cite{lee2019nonparametric}, \cite{boubellouta2021cross}, \cite{list1999environmental}, \cite{grossman1995economic}, \cite{bertinelli2005environmental}, \cite{azomahou2006economic}, \cite{taskin2000searching} to name a few). While some of these papers have found evidence in favor of the EKC hypothesis (inverted U-shaped income-pollution relation), others have found evidence against it (monotonically increasing or other shapes for the relation). The results often depend on countries/regions in the sample, period of analysis, as well as the pollutant studied.
\\\\
\noindent
While income-pollution remains the focal point of most EKC studies, several of them have also included urban agglomeration (UA) or some other measure of urbanization as an important control variable, especially while investigating carbon emissions.\footnote{Although income growth is connected to urbanization, countries are heterogeneous and follow different growth paths due to their varying geographical structures, population densities, infrastructures and ownerships of resources, making a case for using urbanization as another control covariate in the income-pollution study. The income growth paths of oil-rich UAE, manufacturing-based China, service-based Singapore and low-population-density Canada (with vast land) are all different.} (see for example, \cite{shafik1992economic}, \cite{boubellouta2021cross} and \cite{liang2019urbanization}). The theory of ecological economics posits potentially varying effects of increased urbanization on pollution: (i) urbanization leading to more pollution (due to its close links with sanitation, dense transportation, and proximity to polluting manufacturing industries), (ii) urbanization potentially leading to less pollution based on ‘compact city theory’ (see \cite{burton2000compact}, \cite{capello2000beyond}, \cite{sadorsky2014effect}) that explains the potential benefits of increased urbanization in terms of economies of scale (for example, replacing dependence on automobiles with large scale subway systems, using multi-storied buildings instead of single unit houses, keeping more open green space). \cite{liddle2010age}, using 17 developed countries, find a positive and significant effect of urbanization on pollution. On the contrary, using a set of 69 countries \cite{sharma2011determinants} find a negative and significant effect of urbanization on pollution while \cite{du2012economic} find an insignificant effect of urbanization on carbon emission. 
Using various empirical strategies, \cite{sadorsky2014effect} conclude that the positive and negative effects of urbanization on carbon pollution may cancel out depending on the countries involved, often leaving insignificant effects on pollution. They also note that many countries are yet to achieve a sizeable level of urbanization, which presumably explains why many empirical works using less developed countries find an insignificant effect of urbanization. In summary, based on the existing literature, both the relationship between urbanization and pollution as well as the relationship between income and pollution appear to depend largely on the set of countries considered in the sample. This motivates us to use UA along with income in our change plane model for analyzing carbon-dioxide emission to plausibly separate the countries into two regimes.
\\\\
\noindent
Following the broad literature, we use pollution emission per capita (carbon-dioxide measured in metric tons per capita) as the dependent variable and real GDP per capita (measured in 2010 US dollars), its square (as is done commonly in the EKC literature) and a popular measure of urbanization, namely urban agglomeration (UA)\footnote{The exact definition can be found in the World Development Indicators database from the World Bank website.} as covariates (in our notation $X$) in our regression. In light of the preceding discussion, we fit a change plane model comprising real GDP per capita and UA (in our notation $Q$). To summarize the setup, we use the continuous response model as described in equation \eqref{eq:regression_main_eqn}, i.e.
\begin{align*}
Y_i & = X_i^{\top}\beta_0 + X_i^{\top}\delta_0\mathds{1}_{Q_i^{\top}\psi_0 > 0} + {\epsilon}_i \\
& = X_i^{\top}\beta_0\mathds{1}_{Q_i^{\top}\psi_0 \le 0} + X_i^{\top}(\beta_0 + \delta_0)\mathds{1}_{Q_i^{\top}\psi_0 > 0} + {\epsilon}_i
\end{align*}
with the per capita $CO_2$ emission in metric ton as $Y$, per capita GDP, square of per capita GDP and UA as $X$ (hence $X \in \mathbb{R}^3$) and finally, per capita GDP and UA as $Q$ (hence $Q \in \mathbb{R}^2$). Observe that $\beta_0$ represents the regression coefficients corresponding to the countries with $Q_i^{\top}\psi_0 \le 0$ (henceforth denoted by Group 1) and $(\beta_0+ \delta_0)$ represents the regression coefficients corresponding to the countries with $Q_i^{\top}\psi_0 > 0$ (henceforth denoted by Group 2). As per our convention, in the interests of identifiability we assume $\psi_{0, 1} = 1$, where $\psi_{0,1}$ is the change plane parameter corresponding to per capita GDP. Therefore the only change plane coefficient to be estimated is $\psi_{0, 2}$, the change plane coefficient for UA. For numerical stability, we scale per capita GDP by $10^{-4}$ (consequently the square of per capita GDP is scaled by $10^{-8}$)\footnote{This scaling helps in the numerical stability of the gradient descent algorithm used to optimize the least squares criterion.}. After some pre-processing (i.e. removing rows consisting of NA and countries with $100\%$ UA) we estimate the coefficients $(\beta_0, \delta_0, \psi_0)$ of our model based on data from 115 countries with $\sigma_n = 0.05$ and test the significance of the various coefficients using the methodologies described in Section \ref{sec:inference}. We present our findings in Table \ref{tab:ekc_coeff}.
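The estimation step described above (smoothed least squares with $K = \Phi$, $\psi_{0,1}$ fixed at $1$, and the remaining parameters optimized numerically) can be sketched as follows. The data-generating numbers below are hypothetical placeholders, not the WDI data, and a profile-grid optimizer stands in for the gradient descent actually used:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical synthetic data mimicking the fitted model: X = (GDP, GDP^2, UA),
# Q = (GDP, UA), psi_0 = (1, psi02). All numbers are illustrative placeholders.
n, sigma_n = 2000, 0.05
gdp = rng.uniform(0.1, 5.0, n)              # per-capita GDP (scaled by 1e-4)
ua = rng.uniform(20.0, 95.0, n)             # urban agglomeration (%)
X = np.column_stack([gdp, gdp ** 2, ua])
Q = np.column_stack([gdp, ua])
beta0 = np.array([7.0, -0.4, -0.03])
delta0 = np.array([-5.0, 0.2, 0.17])
psi02 = -0.07
Y = X @ beta0 + (X @ delta0) * (Q @ np.array([1.0, psi02]) > 0) + 0.5 * rng.standard_normal(n)

def smoothed_loss(psi2):
    # Smoothed squared-error loss: (1 - K) weights regime 1, K weights regime 2.
    k = norm.cdf(Q @ np.array([1.0, psi2]) / sigma_n)
    # For fixed psi the loss is a weighted least-squares problem in each regime.
    def wls(w):
        Xw = X * w[:, None]
        return np.linalg.solve(Xw.T @ X, Xw.T @ Y)
    b_lo, b_hi = wls(1.0 - k), wls(k)       # estimate beta and beta + delta
    return np.mean((1.0 - k) * (Y - X @ b_lo) ** 2 + k * (Y - X @ b_hi) ** 2)

grid = np.linspace(-0.12, -0.02, 101)
psi2_hat = grid[int(np.argmin([smoothed_loss(p) for p in grid]))]
print(psi2_hat)   # should land near the true psi02 = -0.07
```

Profiling out $(\beta, \delta)$ is possible here because, for fixed $\psi$, the smoothed loss decouples into two weighted least squares problems, one per regime.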
\begin{table}[!h]
\centering
\begin{tabular}{|c||c||c|}
\hline
Coefficients & Estimated values & p-values \\
\hline \hline
$\beta_{0, 1}$ (\text{RGDPPC for Group 1}) & 6.98555060 & 4.961452e-10 \\
$\beta_{0, 2}$ (\text{squared RGDPPC for Group 1}) & -0.43425991 & 7.136484e-02 \\
$\beta_{0, 3}$ (\text{UA for Group 1}) & -0.02613813 & 1.066065e-01
\\
$\beta_{0, 1} + \delta_{0, 1}$ (\text{RGDPPC for Group 2}) & 2.0563337 & 0.000000e+00\\
$\beta_{0, 2} + \delta_{0, 2}$ (\text{squared RGDPPC for Group 2}) & -0.1866490 & 4.912843e-04 \\
$\beta_{0, 3} + \delta_{0, 3}$ (\text{UA for Group 2}) & 0.1403171& 1.329788e-05 \\
$\psi_{0,2 }$ (\text{Change plane coeff for UA}) & -0.07061785 & 0.000000e+00\\
\hline
\end{tabular}
\caption{Table of the estimated regression and change plane coefficients along with their p-values.}
\label{tab:ekc_coeff}
\end{table}
\\\\
\noindent
From the above analysis, we find that GDP has a significantly positive effect on pollution for both groups of countries. The effect of its squared term is negative for both groups; but the effect is significant for Group-2, consisting of mostly high income countries, whereas its effect is insignificant (at the 5\% level) for the Group-1 countries (consisting of mostly low or middle income and few high income countries). Thus, not surprisingly, we find evidence in favor of EKC for the developed countries, but not for the mixed group. Notably, Group-1 consists of a mixed set of countries like Angola, Sudan, Senegal, India, China, Israel, UAE etc., whereas Group-2 consists of rich and developed countries like Canada, USA, UK, France, Germany etc. The urban variable, on the other hand, is seen to have an insignificant effect on Group-1, which is in keeping with \cite{du2012economic}, \cite{sadorsky2014effect}. Many of them are yet to achieve substantial urbanization and this is more true for our sample period\footnote{We use 6 years average from 2010-2015 for GDP and pollution measures. Such averaging is in accordance with the cross-sectional empirical literature using cross-country/regional data and helps avoid business cycle fluctuations in GDP. It also minimizes the impacts of outlier events such as the financial crisis or great recession period. The years that we have chosen are ones for which we could find data for the largest number of countries.}. In contrast, UA has a positive and significant effect on Group-2 (developed) countries, which is consistent with the findings of \cite{liddle2010age}, for example. Note that UA plays a crucial role in dividing the countries into different regimes, as the estimated value of $\psi_{0,2}$ is significant. Thus, we are able to partition countries into two regimes: a mostly rich and a mixed group.
\\\\
\noindent
Note that many underdeveloped countries and poorer regions of emerging countries are still swamped with greenhouse gas emissions from burning coal, cow dung etc., and the usage of poor exhaust systems in houses and for transport. This is more true for rural and semi-urban areas of developing countries. So even while being less urbanized compared to developed nations, their overall pollution load is high (due to inefficient energy usage and higher dependence on fossil fuels as pointed out above) and rising with income, and they are yet to reach the descending part of the inverted U-shape for the income-pollution relation. On the contrary, for countries in Group-2, the adoption of more efficient energy and exhaust systems is common in households and transportation in general, leading to eventually decreasing pollution with increasing income (supporting EKC). Both results are in line with the existing EKC literature. Additionally, we find that the countries in Group 2 are yet to achieve ‘compact city’ and green urbanization. This is a stylized fact that is confirmed by the positive and significant effect of UA on pollution in our analysis.
\\\\
\noindent
There are many future potential applications of our method in economics. Similar analyses can be performed for other pollutants (such as sulfur emission, electrical waste/e-waste, nitrogen pollution etc.). While income/GDP remains a common, indeed the most crucial variable in pollution studies, other covariates (including change plane defining variables) may vary, depending on the pollutant of interest. Another potential application can be that of identifying the determinants of family health expenses in household survey data. Families are often asked about their health expenses incurred in the past one year. An interesting case in point may be household surveys collected in India where one finds numerous (large) joint families with several children and old people residing in the same household and most families are uninsured. It is often seen that health expenditure increases with income with a major factor being the costs associated with regularly performed preventative medical examinations which are affordable only once a certain income level is reached. The important covariates here are per capita family income, family wealth, `dependency ratio' (number of children and old to the total number of people in the family) and the binary indicator of any history of major illness/hospitalizations in the family in the past year. Family income per capita and history of major illness are natural candidate covariates for defining the change plane.
\section{Binary response model}
\label{sec:classification_analysis}
Recall our binary response model in equation \eqref{eq:classification_eqn}. To estimate $\psi_0$, we resort to the following loss (without smoothing):
\begin{equation}
\label{eq:new_loss}
\mathbb{M}(\psi) = \mathbb{E}\left((Y - \gamma)\mathds{1}(Q^{\top}\psi \le 0)\right)\end{equation}
with $\gamma \in (\alpha_0, \beta_0)$, which can be viewed as a variant of the square error loss function:
$$
\mathbb{M}(\alpha, \beta, \psi) = \mathbb{E}\left(\left(Y - \alpha\mathds{1}(Q^{\top}\psi < 0) - \beta\mathds{1}(Q^{\top}\psi > 0)\right)^2\right)\,.
$$
We establish the connection between these losses in sub-section \ref{loss_func_eq}. It is easy to prove that under fairly mild conditions (discussed later)
$\psi_0 = {\arg\min}_{\psi \in \Theta}\mathbb{M}(\psi)$, uniquely. Under the standard classification paradigm, when we know a priori that
$\alpha_0 < 1/2 < \beta_0$, we can take $\gamma = 1/2$, and in the absence of this constraint, $\bar{Y}$, which converges to some $\gamma$ between $\alpha_0$ and $\beta_0$, may be substituted in the loss function. In the rest of the paper, we confine ourselves to a known $\gamma$, and for technical simplicity, we take $\gamma = \frac{(\beta_0 + \alpha_0)}{2}$, but this assumption can be removed with more mathematical book-keeping. Thus, $\psi_0$ is estimated by:
\begin{equation}
\label{non-smooth-score}
\hat \psi = {\arg\min}_{\psi \in \Theta} \mathbb{M}_n(\psi) = {\arg\min}_{\psi \in \Theta} \frac{1}{n}\sum_{i=1}^n (Y_i - \gamma)\mathds{1}(Q_i^{\top}\psi \le 0)\,.
\end{equation} We resort to a smooth approximation of the indicator function in
\eqref{non-smooth-score} using a distribution kernel with suitable bandwidth. The smoothed version of the population score function then becomes:
\begin{equation}
\label{eq:kernel_smoothed_pop_score}
\mathbb{M}^s(\psi) = \mathbb{E}\left((Y - \gamma)\left(1-K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)\right)
\end{equation}
where as in the continuous response model, we use $K(x) = \Phi(x)$, and the corresponding empirical version is:
\begin{equation}
\label{eq:kernel_smoothed_emp_score}
\mathbb{M}^s_n(\psi) = \frac{1}{n}\sum_{i=1}^n \left((Y_i - \gamma)\left(1-K\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\right)\right)
\end{equation}
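As an illustration, the smoothed empirical score \eqref{eq:kernel_smoothed_emp_score} is straightforward to compute and minimize numerically. The following hypothetical sketch (synthetic data, $\psi_1$ fixed at $1$ for identifiability, and a grid search in place of a proper optimizer) recovers the change plane coefficient:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Hypothetical synthetic data for the binary response model:
# P(Y = 1 | Q) = alpha0 on {Q'psi0 <= 0} and beta0 on {Q'psi0 > 0},
# with psi0 = (1, psi02) for identifiability.
n, sigma_n = 5000, 0.05
alpha0, beta0, psi02 = 0.2, 0.8, -0.5
Q = rng.standard_normal((n, 2))
Y = rng.binomial(1, np.where(Q @ np.array([1.0, psi02]) > 0, beta0, alpha0))
gamma = (alpha0 + beta0) / 2        # known gamma, as assumed in the text

def M_n_s(psi2):
    # Smoothed empirical score with K = Phi (the standard normal CDF)
    k = norm.cdf(Q @ np.array([1.0, psi2]) / sigma_n)
    return np.mean((Y - gamma) * (1.0 - k))

grid = np.linspace(-2.0, 2.0, 1001)
psi2_hat = grid[int(np.argmin([M_n_s(p) for p in grid]))]
print(psi2_hat)   # should be close to psi02 = -0.5
```

The minimum is attained near $\psi_0$ because, on $\{Q^{\top}\psi \le 0\}$, the summand $(Y - \gamma)$ has negative mean only when that region coincides with the low-probability regime.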
Define $\hat{\psi}^s$ and $\psi_0^s$ to be the minimizers of the smoothed versions of the empirical (equation \eqref{eq:kernel_smoothed_emp_score}) and population (equation \eqref{eq:kernel_smoothed_pop_score}) score functions, respectively. Here we only consider bandwidths satisfying $n\sigma_n \to \infty$ and $n\sigma_n^2 \to 0$. Analogous to Theorem \ref{thm:regression}, we prove the following result for the binary response model:
\begin{theorem}
\label{thm:binary}
Under Assumptions (\ref{as:distribution} - \ref{as:eigenval_bound}):
$$
\sqrt{\frac{n}{\sigma_n}}\left(\hat{\psi}^s - \psi_0\right) \Rightarrow N(0, \Gamma) \,,
$$
for some non-stochastic matrix $\Gamma$, which will be defined explicitly in the proof.
\end{theorem}
We have therefore established that in the regime $n\sigma_n \to \infty$ and $n\sigma_n^2 \to 0$, it is possible to attain asymptotic normality using a smoothed estimator for the binary response model.
\section{Inferential methods}
\label{sec:inference}
We draw inferences on $(\beta_0, \delta_0, \psi_0)$ by resorting to similar techniques as in \cite{seo2007smoothed}. For the continuous response model, we need consistent estimators of $V^{\gamma}, Q^{\gamma}, V^{\psi}, Q^{\psi}$ (see Lemma \ref{conv-prob} for the definitions) for hypothesis testing. By virtue of the aforementioned Lemma, we can estimate $Q^{\gamma}$ and $Q^{\psi}$ as follows:
\begin{align*}
\hat Q^{\gamma} & = \nabla^2_{\gamma} \mathbb{M}_n^s(\hat \theta) \,, \\
\hat Q^{\psi} & = \sigma_n \nabla^2_{\psi} \mathbb{M}_n^s(\hat \theta) \,.
\end{align*}
The consistency of the above estimators is established in the proof of Lemma \ref{conv-prob}. For the other two parameters $V^{\gamma}, V^{\psi}$ we use the following estimators:
\begin{align*}
\hat V^{\psi} & = \frac{1}{n\sigma_n^2}\sum_{i=1}^n\left(\left(Y_i - X_i^{\top}(\hat \beta + \hat \delta)\right)^2 - \left(Y_i- X_i^{\top}\hat \beta\right)^2\right)^2\tilde Q_i \tilde Q_i^{\top}\left(K'\left(\frac{Q_i^{\top}\hat \psi}{\sigma_n}\right)\right)^2 \\
\hat V^{\gamma} & = \hat \sigma^2_{\epsilon} \begin{pmatrix} \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top} & \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top}\mathds{1}_{Q_i^{\top}\hat \psi > 0} \\ \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top}\mathds{1}_{Q_i^{\top}\hat \psi > 0} & \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top}\mathds{1}_{Q_i^{\top}\hat \psi > 0} \end{pmatrix}
\end{align*}
where $\hat \sigma^2_{\epsilon}$ can be obtained as $(1/n)\sum_{i=1}^n(Y_i - X_i^{\top}\hat \beta - X_i^{\top}\hat \delta \mathds{1}(Q_i^{\top}\hat \psi > 0))^2$, i.e. the residual sum of squares divided by $n$. The explicit value of $V^{\gamma}$ (as derived in equation \eqref{eq:def_v_gamma} in the proof of Lemma \ref{asymp-normality}) is:
$$
V^{\gamma} = \sigma_{\epsilon}^2 \begin{pmatrix}\mathbb{E}\left[XX^{\top}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \\
\mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \end{pmatrix}
$$
Therefore, the consistency of $\hat V^{\gamma}$ is immediate from the law of large numbers. The consistency of $\hat V^{\psi}$ follows via arguments similar to those employed in proving Lemma \ref{conv-prob}, but under somewhat more stringent moment conditions: in particular, we need $\mathbb{E}[\|X\|^8] < \infty$ and $\mathbb{E}[(X^{\top}\delta_0)^k \mid Q]$ to be a Lipschitz function of $Q$ for each $1 \le k \le 8$. The inferential techniques for the classification model are similar and hence skipped to avoid repetition.
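The plug-in estimators above translate directly into code. The sketch below is a hypothetical array-based implementation: the arrays `X`, `Q`, `Y` and the fitted $\hat\beta$, $\hat\delta$, $\hat\psi$ are assumed given, with the first coordinate of $\hat\psi$ fixed at $1$ so that $\tilde Q$ collects the remaining coordinates of $Q$ (an assumption on the meaning of $\tilde Q$, consistent with the identifiability convention used earlier):

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of the plug-in variance estimators displayed above.
def vhat_psi(X, Q, Y, beta_hat, delta_hat, psi_hat, sigma_n):
    Qt = Q[:, 1:]                          # tilde Q: coordinates of Q beyond the first
    t = Q @ psi_hat / sigma_n
    kprime = norm.pdf(t)                   # K'(t) for K = Phi
    sq_diff = (Y - X @ (beta_hat + delta_hat)) ** 2 - (Y - X @ beta_hat) ** 2
    w = (sq_diff * kprime) ** 2
    return (Qt * w[:, None]).T @ Qt / (len(Y) * sigma_n ** 2)

def vhat_gamma(X, Q, Y, beta_hat, delta_hat, psi_hat):
    ind = (Q @ psi_hat > 0).astype(float)
    resid = Y - X @ beta_hat - (X @ delta_hat) * ind
    sig2 = np.mean(resid ** 2)             # residual-variance estimate sigma_eps^2
    A = X.T @ X / len(Y)
    B = (X * ind[:, None]).T @ X / len(Y)
    return sig2 * np.block([[A, B], [B, B]])
```

Both functions return symmetric matrices; `vhat_gamma` mirrors the $2 \times 2$ block structure of $V^{\gamma}$, with the empirical averages replacing the expectations.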
\section{Proof of Theorem \ref{thm:regression}}
In this section, we present the proof of Lemma \ref{lem:rate_smooth}, which lies at the heart of our refined analysis of the smoothed change plane estimator. Proofs of the other lemmas and our results for the binary response model are available in Appendix \ref{sec:supp_B}.
\subsection{Proof of Lemma \ref{lem:rate_smooth}}
\begin{proof}
The proof of Lemma \ref{lem:rate_smooth} is quite long, hence we further break it into a few more lemmas.
\begin{lemma}
\label{lem:pop_curv_nonsmooth}
Under Assumption \ref{eq:assm}, there exist $u_- , u_+ > 0$ such that:
$$
u_- d^2(\theta, \theta_0) \le \mathbb{M}(\theta) - \mathbb{M}(\theta_0) \le u_+ d^2(\theta, \theta_0) \,,
$$
for $\theta$ in a (non-shrinking) neighborhood of $\theta_0$, where:
$$
d(\theta, \theta_0) := \sqrt{\|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2 + \|\psi - \psi_0\|} \,.
$$
\end{lemma}
\begin{lemma}
\label{lem:uniform_smooth}
Under Assumption \ref{eq:assm} the smoothed loss function $\mathbb{M}^s(\theta)$ is uniformly close to the non-smoothed loss function $\mathbb{M}(\theta)$:
$$
\sup_{\theta \in \Theta}\left|\mathbb{M}^s(\theta) - \mathbb{M}(\theta)\right| \le K_1 \sigma_n \,,
$$
for some constant $K_1$.
\end{lemma}
\begin{lemma}
\label{lem:pop_smooth_curvarture}
Under Assumption \ref{eq:assm}:
\begin{align*}
\mathbb{M}^s(\theta) - \mathbb{M}^s(\theta_0^s) & \gtrsim \|\beta - \beta_0^s\|^2 + \|\delta - \delta_0^s\|^2 \\
& \qquad \qquad + \frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \mathds{1}_{\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n} + \|\psi - \psi_0^s\| \mathds{1}_{\|\psi - \psi_0^s\| > \mathcal{K}\sigma_n} \\
& := d_*^2(\theta, \theta_0^s) \,.
\end{align*}
for some constant $\mathcal{K}$ and for all $\theta$ in a neighborhood of $\theta_0$, which does not change with $n$.
\end{lemma}
The proofs of the three lemmas above can be found in Appendix \ref{sec:supp_B}. We next move to the proof of Lemma \ref{lem:rate_smooth}. In Lemma \ref{lem:pop_smooth_curvarture} we have established the curvature of the smooth loss function $\mathbb{M}^s(\theta)$ around $\theta_0^s$. To determine the rate of convergence of $\hat \theta^s$ to $\theta_0^s$, we further need an upper bound on the modulus of continuity of our loss function. Towards that end, first recall that our loss function is:
$$
f_{\theta}(Y, X, Q) = \left(Y - X^{\top}\beta\right)^2 + \left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)
$$
The centered loss function can be written as:
\begin{align}
& f_{\theta}(Y, X, Q) - f_{\theta_0^s}(Y, X, Q) \notag \\
& = \left(Y - X^{\top}\beta\right)^2 + \left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) \notag \\
& \qquad \qquad \qquad \qquad - \left(Y - X^{\top}\beta_0^s\right)^2 - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right] K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) \notag \\
& = \left(Y - X^{\top}\beta\right)^2 + \left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) \notag \\
& \qquad \qquad \qquad \qquad - \left(Y - X^{\top}\beta_0^s\right)^2 - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) \notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right] \left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\} \notag \\
& = \underbrace{\left(Y - X^{\top}\beta\right)^2 - \left(Y - X^{\top}\beta_0^s\right)^2}_{M_1} \notag \\
& \qquad + \underbrace{\left\{ \left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right\} K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)}_{M_2} \notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad - \underbrace{\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right] \left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}}_{M_3} \notag \\
\label{eq:expand_f} & := M_1 + M_2 + M_3
\end{align}
For the rest of the analysis, fix $\zeta > 0$ and consider the collection of functions $\mathcal{F}_{\zeta}$ which is defined as:
$$
\mathcal{F}_{\zeta} = \left\{f_\theta - f_{\theta_0^s}: d_*(\theta, \theta_0^s) \le \zeta\right\} \,.
$$
First note that $\mathcal{F}_\zeta$ has bounded uniform entropy integral (henceforth BUEI) over $\zeta$. To establish this, it is enough to argue that the collection $\mathcal{F} = \{ f_\theta : \theta \in \Theta\}$ is BUEI. Note that the class of functions $X \mapsto X^{\top}\beta$ has VC dimension $p$, as does the class $X \mapsto X^{\top}(\beta + \delta)$. Therefore the class of functions $(X, Y) \mapsto (Y - X^{\top}(\beta + \delta))^2 - (Y - X^{\top}\beta)^2$ is also BUEI, as composition with a monotone function (here $x^2$) and taking differences preserve this property. Further, the class of hyperplane maps $Q \mapsto Q^{\top}\psi$ also has finite VC dimension (depending only on the dimension of $Q$), and the VC dimension does not change upon scaling by $\sigma_n$. Therefore the class of functions $Q \mapsto Q^{\top}\psi/\sigma_n$ has the same VC dimension as $Q \mapsto Q^{\top}\psi$, which is independent of $n$. Again, as composition with a monotone function preserves the BUEI property, the class of functions $Q \mapsto K(Q^{\top}\psi/\sigma_n)$ is also BUEI. As the product of two BUEI classes is BUEI, we conclude that $\mathcal{F}$ (and hence $\mathcal{F}_\zeta$) is BUEI.
\\\\
\noindent
Now to bound the modulus of continuity we use Lemma 2.14.1 of \cite{vdvw96}:
\begin{equation*}
\label{eq:moc_bound}
\sqrt{n}\mathbb{E}\left[\sup_{\theta: d_*(\theta, \theta_0^s) \le \zeta} \left|\left(\mathbb{P}_n - P\right)\left(f_\theta - f_{\theta_0^s}\right)\right|\right] \lesssim \mathcal{J}(1, \mathcal{F}_\zeta) \sqrt{\mathbb{E}\left[F_{\zeta}^2(X, Y, Q)\right]}
\end{equation*}
where $F_\zeta$ is some envelope function of $\mathcal{F}_\zeta$. As the function class $\mathcal{F}_\zeta$ has bounded entropy integral, $\mathcal{J}(1, \mathcal{F}_\zeta)$ can be bounded above by some constant independent of $n$. We next calculate the order of the envelope function $F_\zeta$. Recall that, by definition, an envelope function satisfies:
$$
F_{\zeta}(X, Y, Q) \ge \sup_{\theta: d_*(\theta, \theta_0^s) \le \zeta} \left| f_{\theta} - f_{\theta_0^s}\right| \,.
$$
and we can write $f_\theta - f_{\theta_0^s} = M_1 + M_2 + M_3$ which follows from equation \eqref{eq:expand_f}. Therefore, to find the order of the envelope function, it is enough to find the order of bounds of $M_1, M_2, M_3$ over the set $d_*(\theta, \theta_0^s) \le \zeta$. We start with $M_1$:
\begin{align}
\sup_{d_*(\theta, \theta_0^s) \le \zeta}|M_1| & = \sup_{d_*(\theta, \theta_0^s) \le \zeta}\left|\left(Y - X^{\top}\beta\right)^2 - \left(Y - X^{\top}\beta_0^s\right)^2\right| \notag \\
& = \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|2YX^{\top}(\beta_0^s - \beta) + (X^{\top}\beta)^2 - (X^{\top}\beta_0^s)^2\right| \notag \\
& \le \sup_{d_*(\theta, \theta_0^s) \le \zeta} \|\beta - \beta_0^s\| \left[2|Y|\|X\| + (2\|\beta_0^s\| + \zeta)\|X\|^2\right] \notag \\
\label{eq:env_1} & \le \zeta\left[2|Y|\|X\| + (2\|\beta_0^s\| + \zeta)\|X\|^2\right] := F_{1, \zeta}(X, Y, Q) \hspace{0.1in} [\text{Envelope function of }M_1]
\end{align}
and the second term:
\allowdisplaybreaks
\begin{align}
& \sup_{d_*(\theta, \theta_0^s) \le \zeta} |M_2| \notag \\
& = \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right\}\right|K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) \notag \\
& \le \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. - \left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right\}\right| \notag \\
& = \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{\left[2Y(X^{\top}\delta_0^s - X^{\top}\delta) + 2[(X^{\top}\beta)(X^{\top}\delta) \right. \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. \left. - (X^{\top}\beta_0^s)(X^{\top}\delta_0^s)] + (X^{\top}\delta)^2 - (X^{\top}\delta_0^s)^2\right]\right\}\right| \notag \\
& \le \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left\{\|\delta - \delta_0^s\|2|Y|\|X\| + 2\|\beta - \beta_0^s\|\|X\|^2\|\delta\| \right. \notag \\
& \qquad \qquad \qquad \qquad \left. + 2\|\delta - \delta_0^s\|\|X\|^2\|\beta_0^s\| + 2\|X\|^2\|\delta + \delta_0^s\|\|\delta - \delta_0^s\|\right\} \notag \\
& \le \zeta \left[2|Y|\|X\| + 2\|X\|^2(\|\delta_0^s\| + \zeta) + 2\|X\|^2\|\beta_0^s\| + 2\|X\|^2(2\|\delta_0^s\| + \zeta)\right] \notag \\
\label{eq:env_2}& = \zeta \times 2\|X\|\left[|Y| + \|X\|\left(3\|\delta_0^s\| + 2\zeta + \|\beta_0^s\|\right)\right] := F_{2, \zeta}(X, Y, Q) \hspace{0.1in} [\text{Envelope function of }M_2]
\end{align}
For the third term, note that:
\begin{align*}
& \sup_{d_*(\theta, \theta_0^s) \le \zeta} |M_3| \\
& \le \left|\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right| \times \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right| \\
& := F_{3, \zeta} (X, Y, Q)
\end{align*}
Henceforth, we define the envelope function to be $F_\zeta = F_{1, \zeta} + F_{2, \zeta} + F_{3, \zeta}$. Hence, by the triangle inequality, we have:
$$
\sqrt{\mathbb{E}\left[F_{\zeta}^2(X, Y, Q)\right]} \le \sum_{i=1}^3 \sqrt{\mathbb{E}\left[F_{i, \zeta}^2(X, Y, Q)\right]}
$$
From equations \eqref{eq:env_1} and \eqref{eq:env_2}, we have:
\begin{equation}
\label{eq:moc_bound_2}
\sqrt{\mathbb{E}\left[F_{1, \zeta}^2(X, Y, Q)\right]} + \sqrt{\mathbb{E}\left[F_{2, \zeta}^2(X, Y, Q)\right]} \lesssim \zeta \,.
\end{equation}
For $F_{3, \zeta}$, first note that:
\begin{align*}
& \mathbb{E}\left[\left|\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right|^2 \mid Q\right] \\
& \le 8\mathbb{E}\left[\left(Y - X^{\top}\beta_0^s\right)^2(X^{\top}\delta_0^s)^2 \mid Q\right] + 2\mathbb{E}[(X^{\top}\delta_0^s)^4 \mid Q] \\
& \le \left\{8\|\beta_0 - \beta_0^s\|^2\|\delta_0\|^2 + 8\|\delta_0\|^4 + 2\|\delta_0^s\|^4\right\}m_4(Q) \,.
\end{align*}
where $m_4(Q)$ is defined in Assumption \ref{eq:assm}. In this part, we have to tackle the dichotomous behavior of $\psi$ around $\psi_0^s$ carefully. Henceforth define $d_*^2(\psi, \psi_0^s)$ as:
\begin{align*}
d_*^2(\psi, \psi_0^s) = & \frac{\|\psi - \psi_0^s\|^2}{\sigma_n}\mathds{1}_{\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n} + \|\psi - \psi_0^s\|\mathds{1}_{\|\psi - \psi_0^s\| > \mathcal{K}\sigma_n}
\end{align*}
This is a slight abuse of notation, but the reader should think of it as the $\psi$-part of $d_*^2(\theta, \theta_0^s)$. Define $B_{\zeta}(\psi_0^s)$ to be the set of all $\psi$'s such that $d^2_*(\psi, \psi_0^s) \le \zeta^2$. We can decompose $B_{\zeta}(\psi_0^s)$ as a disjoint union of two sets:
\begin{align*}
B_{\zeta, 1}(\psi_0^s) & = \left\{\psi: d^2_*(\psi, \psi_0^s) \le \zeta^2, \|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n\right\} \\
& = \left\{\psi:\frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \le \zeta^2, \|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n\right\} \\
& = \left\{\psi:\|\psi - \psi_0^s\| \le \zeta \sqrt{\sigma_n}, \|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n\right\} \\\\
B_{\zeta, 2}(\psi_0^s) & = \left\{\psi: d^2_*(\psi, \psi_0^s) \le \zeta^2, \|\psi - \psi_0^s\| > \mathcal{K}\sigma_n\right\} \\
& = \left\{\psi: \|\psi - \psi_0^s\| \le \zeta^2, \|\psi - \psi_0^s\| > \mathcal{K}\sigma_n\right\}
\end{align*}
Assume $\mathcal{K} > 1$. The case where $\mathcal{K} < 1$ follows from similar calculations and is hence skipped for brevity. Consider the following two cases:
\\\\
\noindent
{\bf Case 1: }Suppose $\zeta \le \sqrt{\mathcal{K}\sigma_n}$. Then $B_{\zeta, 2}(\psi_0^s) = \emptyset$. Also, as $\mathcal{K} > 1$, we have $\zeta\sqrt{\sigma_n} \le \mathcal{K}\sigma_n$. Hence we have:
$$
\sup_{d_*^2(\psi, \psi_0^s) \le \zeta^2}\|\psi - \psi_0^s\| = \sup_{B_{\zeta, 1}}\|\psi - \psi_0^s\| = \zeta\sqrt{\sigma_n} \,.
$$
This implies:
\begin{align*}
& \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2 \\
& \le \max\left\{\left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} + \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2, \right. \\
& \qquad \qquad \qquad \left. \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} - \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2\right\} \\
& := \max\{T_1, T_2\} \,.
\end{align*}
Therefore we have:
$$
\mathbb{E}\left[F^2_{3, \zeta}(X, Y, Q)\right] \le \mathbb{E}[m_4(Q) T_1] + \mathbb{E}[m_4(Q) T_2] \,.
$$
Now:
\begin{align}
& \mathbb{E}[m_4(Q) T_1] \notag \\
& = \mathbb{E}\left[m_4(Q) \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} + \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2\right] \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \left|K\left(t\right) - K\left(t + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right|^2 \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag\\
& \le \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \left|K\left(t\right) - K\left(t + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right| \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty}m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \int_{t}^{t + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s) \ ds \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) f_s(\sigma_nt\mid \tilde q) \ dt \ ds
\ f(\tilde q) \ d\tilde q \notag \\
& = \zeta \sqrt{\sigma_n} \mathbb{E}[\|\tilde Q\|m_4(-\tilde Q^{\top}\psi_0^s, \tilde Q)f_s(0 \mid \tilde Q)] + R \notag
\end{align}
where, as before, we split $R$ into three parts: $R = R_1 + R_2 + R_3$.
\begin{align}
\left|R_1\right| & = \left|\sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s m_4(- \tilde q^{\top}\tilde \psi_0^s, \tilde q) (f_s(\sigma_nt\mid \tilde q) - f_s(0 \mid \tilde q)) \ dt \ ds \ f(\tilde q) \ d\tilde q\right| \notag \\
\label{eq:r1_env_1} & \le \sigma_n^2 \int_{\mathbb{R}^{p-1}}m_4(- \tilde q^{\top}\tilde \psi_0^s, \tilde q)\dot f_s(\tilde q) \int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s |t| dt\ ds \ f(\tilde q) \ d\tilde q
\end{align}
We next calculate the inner integral (involving $(s,t)$) of equation \eqref{eq:r1_env_1}:
\begin{align*}
& \int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s |t| dt\ ds \\
& =\left(\int_{-\infty}^0 + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} + \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty}\right)K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s |t| dt\ ds \\
& = \frac12\int_{-\infty}^0 K'(s)\left[\left(s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)^2 - s^2\right] \ ds + \frac12\int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s)\left[\left(s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)^2 + s^2\right] \ ds \\
& \qquad \qquad \qquad \qquad + \frac12 \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty}K'(s) \left[s^2 - \left(s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)^2\right] \ ds\\
& = -\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \int_{-\infty}^0 K'(s) s \ ds + \|\tilde q\|^2\frac{\zeta^2}{2\sigma_n} \int_{-\infty}^0 K'(s) \ ds + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} s^2K'(s) \ ds \\
& \qquad \qquad -\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} sK'(s) \ ds + \|\tilde q\|^2\frac{\zeta^2}{2\sigma_n} \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s) \ ds \\
& \qquad \qquad \qquad + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty} sK'(s) \ ds - \|\tilde q\|^2\frac{\zeta^2}{2\sigma_n} \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty} K'(s) \ ds \\
& = \|\tilde q\|^2\frac{\zeta^2}{2\sigma_n}\left[2K\left(\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) - 1\right] + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \left[ -\int_{-\infty}^0 K'(s) s \ ds - \right. \\
& \qquad \qquad \left. \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s)s \ ds + \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty} sK'(s) \ ds\right] + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} s^2K'(s) \ ds \\
& = \|\tilde q\|^2\frac{\zeta^2}{\sigma_n}\left[K\left(\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) - K(0)\right] + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \left[ -\int_{-\infty}^{-\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} K'(s) s \ ds + \int_{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^{\infty} sK'(s) \ ds\right] \\
& \qquad \qquad + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} s^2K'(s) \ ds \\
& = \|\tilde q\|^2\frac{\zeta^2}{\sigma_n}\left[K\left(\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) - K(0)\right] + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\int_{-\infty}^{\infty} K'(s)|s|\mathds{1}_{|s| \ge \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} \ ds + \int_0^{\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}} s^2K'(s) \ ds \\
& \le \dot{K}_+ \|\tilde q\|^3\frac{\zeta^3}{\sigma^{3/2}_n} + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}} \int_{-\infty}^{\infty} K'(s)|s| \ ds + \|\tilde q\|^2\frac{\zeta^2}{\sigma_n}\left(K\left(\|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) - K(0)\right) \\
& \lesssim \|\tilde q\|^3\frac{\zeta^3}{\sigma^{3/2}_n} + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}
\end{align*}
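The closed-form identity just derived (the chain of equalities above, before the final inequality) can be sanity-checked numerically. The sketch below is not part of the formal argument; it takes the Gaussian CDF as a stand-in for the kernel $K$ and writes $a$ for $\|\tilde q\|\zeta/\sqrt{\sigma_n}$:

```python
import math

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))         # stand-in for K
phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)  # stand-in for K'

def trapz(f, lo, hi, n=200_000):
    # simple composite trapezoid rule
    h = (hi - lo) / n
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))

a = 1.3  # plays the role of ||q~|| * zeta / sqrt(sigma_n); any a > 0 works

# LHS: int K'(s) int_{s-a}^{s} |t| dt ds, with the inner integral in closed form
F = lambda t: 0.5 * t * abs(t)  # antiderivative of |t|
lhs = trapz(lambda s: phi(s) * (F(s) - F(s - a)), -12.0, 12.0)

# RHS: a^2 [K(a) - K(0)] + a * int_{|s| >= a} |s| K'(s) ds + int_0^a s^2 K'(s) ds
tail = 2.0 * phi(a)  # int_{|s| >= a} |s| phi(s) ds, exact for the Gaussian
rhs = a * a * (Phi(a) - Phi(0.0)) + a * tail + trapz(lambda s: s * s * phi(s), 0.0, a)

print(abs(lhs - rhs) < 1e-6)  # True
```

The check exploits the symmetry of the Gaussian kernel ($K(0) = 1/2$, $K'$ even), exactly as the derivation above does.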
Putting this bound in equation \eqref{eq:r1_env_1} we obtain:
\begin{align*}
|R_1| & \lesssim \sigma_n^2 \int_{\mathbb{R}^{p-1}}m_4(- \tilde q^{\top}\tilde \psi_0^s, \tilde q)\dot f_s(\tilde q) \left(\|\tilde q\|^3\frac{\zeta^3}{\sigma^{3/2}_n} + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) \ f(\tilde q) \ d\tilde q \\
& = \zeta^3\sqrt{\sigma_n}\, \mathbb{E}\left[m_4(- \tilde Q^{\top}\tilde \psi_0^s, \tilde Q)\dot f_s(\tilde Q)\|\tilde Q\|^3\right] + \zeta \sigma_n^{3/2}\, \mathbb{E}\left[m_4(- \tilde Q^{\top}\tilde \psi_0^s, \tilde Q)\dot f_s(\tilde Q)\|\tilde Q\|\right]
\end{align*}
and
\begin{align*}
& \left|R_2\right| \\
& = \left|\sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s \left(m_4(\sigma_n t - \tilde q^{\top}\tilde \psi_0^s, \tilde q) - m_4( - \tilde q^{\top}\tilde \psi_0^s, \tilde q)\right)f_s(0 \mid \tilde q) \ dt \ ds \ f(\tilde q) \ d\tilde q\right| \\
& \le \sigma_n^2 \int_{\mathbb{R}^{p-1}}\dot m_4( \tilde q)f_s(0 \mid \tilde q) \int_{-\infty}^{\infty}K'(s) \int_{s- \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}}^s |t| dt\ ds \ f(\tilde q) \ d\tilde q \\
& \le \sigma_n^2 \int_{\mathbb{R}^{p-1}}\dot m_4( \tilde q)f_s(0 \mid \tilde q) \left(\|\tilde q\|^3\frac{\zeta^3}{\sigma^{3/2}_n} + \|\tilde q\|\frac{\zeta}{\sqrt{\sigma_n}}\right) \ f(\tilde q) \ d\tilde q \\
& = \zeta \sigma_n^{3/2} \mathbb{E}\left[\dot m_4( \tilde Q)f_s(0 \mid \tilde Q)\|\tilde Q\|\right] + \zeta^3 \sqrt{\sigma_n} \mathbb{E}\left[\dot m_4( \tilde Q)f_s(0 \mid \tilde Q)\|\tilde Q\|^3\right]
\end{align*}
The third residual $R_3$ is of even higher order and hence is skipped. It is immediate that the orders of the remainders are equal to or smaller than $\zeta \sqrt{\sigma_n}$, which implies:
$$
\mathbb{E}[m_4(Q)T_1] \lesssim \zeta\sqrt{\sigma_n} \,.
$$
The calculation for $T_2$ is similar and hence is skipped for brevity. Combining the conclusions for $T_1$ and $T_2$, we conclude that when $\zeta \le \sqrt{\mathcal{K} \sigma_n}$:
\begin{align}
& \mathbb{E}\left[F^2_{3, \zeta}(X, Y, Q)\right] \notag \\
& = \mathbb{E}\left[\left|\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right|^2 \times \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
& \lesssim \mathbb{E}\left[m_4(Q)\sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
\label{eq:env_3} & \lesssim \zeta \sqrt{\sigma_n} \,.
\end{align}
\\
\noindent
{\bf Case 2: } Now consider $\zeta > \sqrt{\mathcal{K} \sigma_n}$. Since $\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n < \zeta^2$ on $B_{\zeta, 1}$ and $\|\psi - \psi_0^s\| \le \zeta^2$ on $B_{\zeta, 2}$, it is immediate that:
$$
\sup_{d_*^2(\psi, \psi^s_0) \le \zeta^2} \|\psi - \psi^s_0\| = \zeta^2 \,.
$$
Using this we have:
\begin{align}
& \mathbb{E}[m_4(Q) T_1] \notag \\
& = \mathbb{E}\left[m_4(Q)\left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} + \|\tilde Q\|\frac{\zeta^2}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \left|K\left(t\right) - K\left(t + \|\tilde q\|\frac{\zeta^2}{\sigma_n}\right)\right|^2 \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
& \le \sigma_n \int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} m_4(\sigma_nt - \tilde q^{\top}\tilde \psi_0^s, \tilde q) \left\{K\left(t + \|\tilde q\|\frac{\zeta^2}{\sigma_n}\right) - K\left(t\right)\right\} \ f_s(\sigma_nt\mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}} m_4(- \tilde q^{\top}\tilde \psi_0^s, \tilde q) f_s(0 \mid \tilde q) \int_{-\infty}^{\infty}\left\{K\left(t + \|\tilde q\|\frac{\zeta^2}{\sigma_n}\right) - K\left(t\right)\right\} \ dt \ f(\tilde q) \ d\tilde q + R \notag \\
& = \zeta^2 \int_{\mathbb{R}^{p-1}}m_4(- \tilde q^{\top}\tilde \psi_0^s, \tilde q) f_s(0 \mid \tilde q)\|\tilde q\| \ f(\tilde q) \ d\tilde q + R \notag\\
& = \zeta^2 \mathbb{E}\left[\|\tilde Q\|m_4\left(- \tilde Q^{\top}\tilde \psi_0^s, \tilde Q\right) f_s(0 \mid \tilde Q)\right] + R \notag
\end{align}
Here we have used the monotonicity of $K$ (so that the absolute value may be dropped) and the fact that $\int_{-\infty}^{\infty}\{K(t + c) - K(t)\}\,dt = c$ for any $c \ge 0$. The analysis of the remainder term is similar, and it is of higher order. We conclude that when $\zeta > \sqrt{\mathcal{K}\sigma_n}$:
\begin{align}
& \mathbb{E}\left[F^2_{3, \zeta}(X, Y, Q)\right] \notag \\
& = \mathbb{E}\left[\left|\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\right|^2 \times \sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
& \lesssim \mathbb{E}\left[m_4(Q)\sup_{d_*(\theta, \theta_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2\right] \notag \\
\label{eq:env_4} & \lesssim \zeta^2
\end{align}
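The two suprema used in Cases 1 and 2 can be checked directly from the definitions of $B_{\zeta, 1}$ and $B_{\zeta, 2}$; the minimal numerical sketch below is not part of the proof, and the values of $\sigma_n$ and $\mathcal{K}$ are arbitrary:

```python
import math

def sup_dist(zeta, sigma, Kc):
    """sup of ||psi - psi_0^s|| over B_{zeta,1} ∪ B_{zeta,2} (the case analysis above)."""
    s1 = min(zeta * math.sqrt(sigma), Kc * sigma)       # over B_{zeta,1}
    s2 = zeta ** 2 if zeta ** 2 > Kc * sigma else 0.0   # B_{zeta,2} is empty otherwise
    return max(s1, s2)

sigma, Kc = 1e-3, 2.0
z1 = 0.5 * math.sqrt(Kc * sigma)   # Case 1: zeta <= sqrt(K sigma_n)
z2 = 3.0 * math.sqrt(Kc * sigma)   # Case 2: zeta >  sqrt(K sigma_n)
print(math.isclose(sup_dist(z1, sigma, Kc), z1 * math.sqrt(sigma)))  # True
print(math.isclose(sup_dist(z2, sigma, Kc), z2 ** 2))                # True
```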
Combining \eqref{eq:env_3}, \eqref{eq:env_4} with equation \eqref{eq:moc_bound_2} we have:
\begin{align*}
\sqrt{n}\mathbb{E}\left[\sup_{\theta: d_*(\theta, \theta_0^s) \le \zeta} \left|\left(\mathbb{P}_n - P\right)\left(f_\theta - f_{\theta_0^s}\right)\right|\right] & \lesssim \sqrt{\zeta}\sigma_n^{1/4}\mathds{1}_{\zeta \le \sqrt{\mathcal{K}\sigma_n}} + \zeta \mathds{1}_{\zeta > \sqrt{\mathcal{K} \sigma_n}} \\
& := \phi_n(\zeta) \,.
\end{align*}
Hence, to obtain the rate, we have to solve $r_n^2 \phi_n(1/r_n) \le \sqrt{n}$, i.e. (ignoring $\mathcal{K}$, as it does not affect the rate)
$$
r_n^{3/2}\sigma_n^{1/4}\mathds{1}_{r_n \ge \sigma_n^{-1/2}} + r_n \mathds{1}_{r_n \le \sigma_n^{-1/2}} \le \sqrt{n} \,.
$$
Now if $r_n \le \sigma_n^{-1/2}$, then $r_n = \sqrt{n}$, which implies $\sqrt{n} \le \sigma_n^{-1/2}$, i.e. $n\sigma_n \le 1$, contradicting $n\sigma_n \to \infty$. On the other hand, if $r_n \ge \sigma_n^{-1/2}$, then $r_n = n^{1/3}\sigma_n^{-1/6}$. This implies $n^{1/3}\sigma_n^{-1/6} \ge \sigma_n^{-1/2}$, i.e. $n^{1/3} \ge \sigma_n^{-1/3}$, which is consistent with $n\sigma_n \to \infty$. This implies:
$$
n^{2/3}\sigma_n^{-1/3}d^2(\hat \theta^s, \theta_0^s) = O_p(1) \,.
$$
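The rate calculus above (solving $r_n^2\phi_n(1/r_n) \le \sqrt{n}$ in the regime $r_n \ge \sigma_n^{-1/2}$) can be checked numerically. The sketch below is purely illustrative; the choices $n = 10^6$ and bandwidth $\sigma_n = n^{-1/2}$ (so that $n\sigma_n \to \infty$) are arbitrary:

```python
import math

def phi_n(zeta, sigma, K=2.0):
    # modulus-of-continuity bound derived above
    return math.sqrt(zeta) * sigma ** 0.25 if zeta <= math.sqrt(K * sigma) else zeta

n = 10 ** 6
sigma = n ** (-0.5)                      # any bandwidth with n * sigma -> infinity
r_n = n ** (1 / 3) * sigma ** (-1 / 6)   # claimed rate

assert 1 / r_n <= math.sqrt(2.0 * sigma)  # confirms we are in the first regime of phi_n
lhs = r_n ** 2 * phi_n(1 / r_n, sigma)
print(abs(lhs - math.sqrt(n)) / math.sqrt(n) < 1e-9)  # True: r_n solves the equation
```

Indeed, algebraically $r_n^2\phi_n(1/r_n) = r_n^{3/2}\sigma_n^{1/4} = n^{1/2}\sigma_n^{-1/4}\sigma_n^{1/4} = \sqrt{n}$.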
Now as $n^{2/3}\sigma_n^{-1/3} \gg \sigma_n^{-1}$ (which is equivalent to $n\sigma_n \to \infty$), we have:
$$
\frac{1}{\sigma_n}d^2(\hat \theta^s, \theta_0^s) = o_p(1) \,.
$$
which further indicates $\|\hat \psi^s - \psi_0^s\|/\sigma_n = o_p(1)$. This, along with the fact that $\|\psi_0^s - \psi_0\|/\sigma_n = o(1)$ (from Lemma \ref{bandwidth}), establishes that $\|\hat \psi^s - \psi_0\|/\sigma_n = o_p(1)$. This completes the proof.
\end{proof}
\section{Supplementary Lemmas for the proof of Theorem \ref{thm:regression}}
\label{sec:supp_B}
\subsection{Proof of Lemma \ref{bandwidth}}
\begin{proof}
First we establish the fact that $\theta_0^s \to \theta_0$. Note that for all $n$, we have:
$$
\mathbb{M}^s(\theta_0^s) \le \mathbb{M}^s(\theta_0)
$$
Taking $\limsup$ on both sides we have:
$$
\limsup_{n \to \infty} \mathbb{M}^s(\theta_0^s) \le \mathbb{M}(\theta_0) \,.
$$
Now using Lemma \ref{lem:uniform_smooth} we have:
$$
\limsup_{n \to \infty} \mathbb{M}^s(\theta_0^s) = \limsup_{n \to \infty} \left[\mathbb{M}^s(\theta_0^s) - \mathbb{M}(\theta_0^s) + \mathbb{M}(\theta_0^s)\right] = \limsup_{n \to \infty} \mathbb{M}(\theta_0^s) \,.
$$
which implies $\limsup_{n \to \infty} \mathbb{M}(\theta_0^s) \le \mathbb{M}(\theta_0)$; from the continuity of $\mathbb{M}(\theta)$ and the fact that $\theta_0$ is its unique minimizer, we conclude that $\theta_0^s \to \theta_0$. Now, using Lemma \ref{lem:pop_curv_nonsmooth} and Lemma \ref{lem:uniform_smooth} we further obtain:
\begin{align}
u_- d^2(\theta_0^s, \theta_0) & \le \mathbb{M}(\theta_0^s) - \mathbb{M}(\theta_0) \notag \\
& = \mathbb{M}(\theta_0^s) - \mathbb{M}^s(\theta^s_0) + \underset{\le 0}{\underline{\mathbb{M}^s(\theta_0^s) - \mathbb{M}^s(\theta_0)}} + \mathbb{M}^s(\theta_0) - \mathbb{M}(\theta_0) \notag \\
\label{eq:est_dist_bound} & \le \sup_{\theta \in \Theta}\left|\mathbb{M}^s(\theta) - \mathbb{M}(\theta)\right| \le K_1 \sigma_n \,.
\end{align}
Note that we need the consistency of $\theta_0^s$ here, as the lower bound in Lemma \ref{lem:pop_curv_nonsmooth} is only valid in a neighborhood of $\theta_0$. As $\theta_0^s$ is the minimizer of $\mathbb{M}^s(\theta)$, the first order conditions yield:
\begin{align}
\label{eq:beta_grad}\nabla_{\beta}\mathbb{M}^s(\theta_0^s) & = -2\mathbb{E}\left[X(Y - X^{\top}\beta_0^s)\right] + 2\mathbb{E} \left\{\left[XX^{\top}\delta_0^s\right] K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right\} = 0 \\
\label{eq:delta_grad}\nabla_{\delta}\mathbb{M}^s(\theta_0^s) & = \mathbb{E} \left\{\left[-2X\left(Y - X^{\top}\beta_0^s\right) + 2XX^{\top}\delta_0^s\right] K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right\} = 0\\
\label{eq:psi_grad}\nabla_{\psi}\mathbb{M}^s(\theta_0^s) & = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\tilde Q K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right\} = 0
\end{align}
We first show that $(\tilde \psi^s_0 - \tilde \psi_0)/\sigma_n \to 0$ by \emph{reductio ad absurdum}. From equation \eqref{eq:est_dist_bound}, we know $\|\psi_0^s - \psi_0\|/\sigma_n = O(1)$. Hence it has a convergent subsequence $\psi^s_{0, n_k}$ with $(\tilde \psi^s_{0, n_k} - \tilde \psi_0)/\sigma_{n_k} \to h$. If we can prove that $h = 0$, then every subsequence of $\|\psi_0^s - \psi_0\|/\sigma_n$ has a further subsequence converging to $0$, which implies that $\|\psi_0^s - \psi_0\|/\sigma_n$ itself converges to $0$. To lighten notation, we prove that if $(\tilde\psi_0^s - \tilde\psi_0)/\sigma_n \to h$ then $h = 0$. We start with equation \eqref{eq:psi_grad}. Define $\eta = (\tilde \psi^s_0 - \tilde \psi_0)/\sigma_n$, where $\tilde \psi$ denotes all the coordinates of $\psi$ except the first one, as the first coordinate of $\psi$ is always taken to be $1$ for identifiability purposes.
\allowdisplaybreaks
\begin{align}
0 & = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2\left(Y - X^{\top}\beta_0^s\right)X^{\top}\delta_0^s + (X^{\top}\delta_0^s)^2\right]\tilde Q K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right\} \notag \\
& = \frac{1}{\sigma_n}\mathbb{E}\left[\left( -2\delta_0^{s^{\top}} XX^{\top}(\beta_0 - \beta^s_0) -2\delta_0^{s^{\top}} XX^{\top}\delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} + (X^{\top}\delta_0^s)^2\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right] \notag \\
& = \frac{1}{\sigma_n}\mathbb{E}\left[\left( -2\delta_0^{s^{\top}} XX^{\top}(\beta_0 - \beta^s_0) -2\delta_0^{s^{\top}} XX^{\top}(\delta_0 - \delta_0^s)
\mathds{1}_{Q^{\top}\psi_0 > 0} \right. \right. \notag \\
& \hspace{10em} \left. \left. + (X^{\top}\delta_0^s)^2\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right] \notag \\
& = \frac{-2}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{s^{\top}} g(Q)(\beta_0 - \beta^s_0)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right] \notag \\
& \qquad \qquad \qquad - \frac{2}{\sigma_n} \mathbb{E}\left[\left(\delta_0^{s^{\top}}g(Q)(\delta_0 - \delta^s_0)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \notag \\
& \hspace{15em} + \frac{1}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{s^{\top}}g(Q)\delta^s_0\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \notag \\
& = -\underbrace{\frac{2}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{s^{\top}} g(Q)(\beta_0 - \beta^s_0)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right]}_{T_1} \notag \\
& \qquad \qquad -\underbrace{\frac{2}{\sigma_n} \mathbb{E}\left[\left(\delta_0^{s^{\top}}g(Q)(\delta_0 - \delta^s_0)\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]}_{T_2} \notag \\
& \qquad \qquad \qquad + \underbrace{\frac{1}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{\top}g(Q)\delta_0\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]}_{T_3} \notag \\
& \qquad \qquad \qquad \qquad + \underbrace{\frac{2}{\sigma_n}\mathbb{E}\left[\left((\delta_0 - \delta_0^s)^{\top}g(Q)\delta_0\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]}_{T_4} \notag \\
\label{eq:pop_est_conv_1} & = -T_1 - T_2 + T_3 + T_4
\end{align}
As mentioned earlier, there is a bijection between $(Q_1, \tilde Q)$ and $(Q^{\top}\psi_0, \tilde Q)$. One direction of the map is obvious; the other is also immediate because the first coordinate of $\psi_0$ is $1$, so that $Q^{\top}\psi_0 = Q_1 + \tilde Q^{\top}\tilde \psi_0$ and
$$
(Q^{\top}\psi_0, \tilde Q) \mapsto (Q^{\top}\psi_0 - \tilde Q^{\top}\tilde \psi_0, \tilde Q) \,.
$$
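For concreteness, we record the change of variables that is used repeatedly below (a routine computation, stated here for the reader's convenience): writing $f_0(\cdot \mid \tilde q)$ for the conditional density of $Q^{\top}\psi_0$ given $\tilde Q = \tilde q$, using $Q^{\top}\psi_0^s = Q^{\top}\psi_0 + \sigma_n \tilde Q^{\top}\eta$, and substituting $t = Q^{\top}\psi_0/\sigma_n$, for any integrable weight $w$ we have
$$
\mathbb{E}\left[w(Q)\,K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right] = \sigma_n\int_{\mathbb{R}^{p-1}}\int_{-\infty}^{\infty} w\left(\sigma_n t - \tilde q^{\top}\tilde\psi_0, \tilde q\right) K'\left(t + \tilde q^{\top}\eta\right) f_0(\sigma_n t \mid \tilde q)\ dt \ f(\tilde q)\ d\tilde q \,.
$$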
We first show that $T_1, T_2$ and $T_4$ are $o(1)$. Towards that end first note that:
\begin{align*}
|T_1| & \le \frac{2}{\sigma_n}\mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right]\|\delta_0^s\|\|\beta_0 - \beta_0^s\| \\
|T_2| & \le \frac{2}{\sigma_n} \mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right]\|\delta_0^s\|\|\delta_0 - \delta_0^s\| \\
|T_4| & \le \frac{2}{\sigma_n} \mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right]\|\delta_0^s\|\|\delta_0 - \delta_0^s\|
\end{align*}
From the above bounds and the fact that $\|\beta_0 - \beta_0^s\| \vee \|\delta_0 - \delta_0^s\| = o(1)$ (by equation \eqref{eq:est_dist_bound}), to show that the above terms are $o(1)$ it suffices to show:
$$
\frac{1}{\sigma_n}\mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right] = O(1) \,.
$$
Towards that direction, define $\eta = (\tilde \psi_0^s - \tilde \psi_0)/\sigma_n$:
\begin{align*}
& \frac{1}{\sigma_n}\mathbb{E}\left[\|g(Q)\|_{op} \ \|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right] \\
& \le c_+ \frac{1}{\sigma_n}\mathbb{E}\left[\|\tilde Q\| \ \left|K'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\right|\right] \\
& = c_+ \frac{1}{\sigma_n}\int \int \|\tilde q\| \left|K'\left(\frac{t}{\sigma_n} + \tilde q^{\top}\eta \right)\right| f_0\left(t \mid \tilde q\right) f(\tilde q) \ dt \ d\tilde q \\
& = c_+ \int \int \|\tilde q\| \left|K'\left(t + \tilde q^{\top}\eta \right)\right| f_0\left(\sigma_n t \mid \tilde q\right) f(\tilde q) \ dt \ d\tilde q \\
& = c_+ \int \|\tilde q\| f_0\left(0 \mid \tilde q\right) \int \left|K'\left(t + \tilde q^{\top}\eta \right)\right| \ dt \ f(\tilde q) \ d\tilde q + R_1 \\
& = c_+ \int \left|K'\left(t\right)\right| dt \ \mathbb{E}\left[\|\tilde Q\| f_0(0 \mid \tilde Q)\right] + R_1 = O(1) + R_1 \,.
\end{align*}
Therefore, it only remains to show that $R_1$ is also $O(1)$ (in fact, of smaller order):
\begin{align*}
|R_1| & = \left|c_+ \int \int \|\tilde q\| \left|K'\left(t + \tilde q^{\top}\eta \right)\right| \left(f_0\left(\sigma_n t \mid \tilde q\right) - f_0(0 \mid \tilde q) \right)f(\tilde q) \ dt \ d\tilde q\right| \\
& \le c_+ F_+ \sigma_n \int \|\tilde q\| \int_{-\infty}^{\infty} |t|\left|K'\left(t + \tilde q^{\top}\eta \right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& = c_+ F_+ \sigma_n \int \|\tilde q\| \int_{-\infty}^{\infty} |t - \tilde q^{\top}\eta|\left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& \le c_+ F_+ \sigma_n \left[\int \|\tilde q\| \int_{-\infty}^{\infty} |t|\left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q + \int \|\tilde q\|^2\|\eta\| \int_{-\infty}^{\infty}\left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q\right] \\
& = c_+ F_+ \sigma_n \left[\left(\int_{-\infty}^{\infty} |t|\left|K'\left(t\right)\right| \ dt\right) \times \mathbb{E}[\|\tilde Q\|] + \left(\int_{-\infty}^{\infty}\left|K'\left(t\right)\right| \ dt\right) \times \|\eta\| \ \mathbb{E}[\|\tilde Q\|^2]\right] \\
& = O(\sigma_n) = o(1) \,.
\end{align*}
This establishes that $T_1$, $T_2$ and $T_4$ are all $o(1)$. For $T_3$, the limit is non-degenerate and can be calculated as follows:
\begin{align*}
T_3 &= \frac{1}{\sigma_n}\mathbb{E}\left[\left(\delta_0^{\top}g(Q)\delta_0\right)\tilde QK'\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \frac{1}{\sigma_n} \int \int \left(\delta_0^{\top}g(t - \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q K'\left(\frac{t}{\sigma_n} + \tilde q^{\top} \eta\right)\left(1 - 2\mathds{1}_{t > 0}\right) \ f_0(t \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q \\
& = \int \int \left(\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q K'\left(t + \tilde q^{\top} \eta\right)\left(1 - 2\mathds{1}_{t > 0}\right) \ f_0(\sigma_n t \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q \\
& = \int \int \left(\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q K'\left(t + \tilde q^{\top} \eta\right)\left(1 - 2\mathds{1}_{t > 0}\right) \ f_0(0 \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q + R \\
& = \int \left(\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q f_0(0 \mid \tilde q) \left[\int_{-\infty}^0 K'\left(t + \tilde q^{\top} \eta\right) \ dt - \int_0^\infty K'\left(t + \tilde q^{\top}\eta\right) \ dt \right] \ f(\tilde q) \ d\tilde q + R \\
&= \int \left(\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\tilde q f_0(0 \mid \tilde q)\left(2K\left(\tilde q^{\top}\eta\right) - 1\right) \ f(\tilde q) \ d\tilde q + R \\
& = \mathbb{E}\left[\tilde Q f_0(0 \mid \tilde Q) \left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\left(2K(\tilde Q^{\top} \eta)- 1\right)\right] + R
\end{align*}
That the remainder $R$ is $o(1)$ again follows by a calculation similar to the earlier ones and hence is skipped. Therefore, when $\eta = (\tilde \psi_0^s - \tilde \psi_0)/\sigma_n \to h$, we have:
$$
T_3 \overset{n \to \infty}{\longrightarrow} \mathbb{E}\left[\tilde Q f_0(0 \mid \tilde Q) \left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\left(2K(\tilde Q^{\top}h)- 1\right)\right] \,,
$$
which along with equation \eqref{eq:pop_est_conv_1} implies:
$$
\mathbb{E}\left[\tilde Q f_0(0 \mid \tilde Q) \left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\left(2K(\tilde Q^{\top}h)- 1\right)\right] = 0 \,.
$$
Taking the inner product with $h$ on both sides of the above equation we obtain:
$$
\mathbb{E}\left[\tilde Q^{\top}h \, f_0(0 \mid \tilde Q) \left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\left(2K(\tilde Q^{\top}h)- 1\right)\right] = 0
$$
Now, since $K$ is the distribution function of a symmetric kernel, $K(0) = 1/2$ and $K$ is non-decreasing, so $x(2K(x) - 1) \ge 0$ for all $x$. Hence $\left(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0\right)\tilde Q^{\top}h \, f_0(0 \mid \tilde Q)\, (2K(\tilde Q^{\top}h) - 1) \ge 0$ almost surely. As its expectation is $0$, we further deduce that $\tilde Q^{\top}h \, f_0(0 \mid \tilde Q)\,(2K(\tilde Q^{\top}h)-1) = 0$ almost surely, which further implies $h = 0$.
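The elementary fact behind this last step is that $x \mapsto x(2K(x)-1)$ is non-negative and vanishes only at $x = 0$ for any symmetric kernel distribution function $K$. A quick numerical illustration (not part of the proof) with the Gaussian CDF standing in for $K$:

```python
import math

K = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # symmetric CDF, K(0) = 1/2

xs = [i / 100.0 for i in range(-500, 501)]
vals = [x * (2.0 * K(x) - 1.0) for x in xs]

print(all(v >= 0.0 for v in vals))                              # True: non-negative
print(all(v > 0.0 for x, v in zip(xs, vals) if abs(x) > 1e-9))  # True: zero only at x = 0
```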
\\\\
\noindent
We next prove that $(\beta_0 - \beta^s_0)/\sqrt{\sigma_n} \to 0$ and $(\delta_0 - \delta^s_0)/\sqrt{\sigma_n} \to 0$ using equations \eqref{eq:beta_grad} and \eqref{eq:delta_grad}. We start with equation \eqref{eq:beta_grad}:
\begin{align}
0 & = -\mathbb{E}\left[X(Y - X^{\top}\beta_0^s)\right] + \mathbb{E} \left\{\left[X_iX_i^{\top}\delta_0^s\right] K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right)\right\} \notag \\
& = -\mathbb{E}\left[XX^{\top}(\beta_0 - \beta_0^s)\right] - \mathbb{E}[XX^{\top}\delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}] + \mathbb{E} \left[ g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right]\delta_0^s \notag \\
& = -\Sigma_X(\beta_0 - \beta_0^s) -\mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right](\delta_0 - \delta_0^s) + \mathbb{E} \left[g(Q)\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right]\delta_0^s \notag \\
& = \Sigma_X\frac{(\beta_0^s - \beta_0)}{\sigma_n} + \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\frac{(\delta_0^s - \delta_0)}{\sigma_n} + \frac{1}{\sigma_n}\mathbb{E} \left[g(Q)\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right]\delta_0^s \notag \\
& = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\Sigma_X \frac{(\beta_0^s - \beta_0)}{\sigma_n} + \frac{\delta_0^s - \delta_0}{\sigma_n} \notag \\
\label{eq:deriv1} & \qquad \qquad \qquad \qquad + \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \frac{1}{\sigma_n}\mathbb{E} \left[g(Q)\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right]\delta_0^s
\end{align}
From equation \eqref{eq:delta_grad} we have:
\begin{align}
0 & = \mathbb{E} \left\{\left[-X\left(Y - X^{\top}\beta_0^s\right) + XX^{\top}\delta_0^s\right] K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right\} \notag \\
& = -\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right](\beta_0 - \beta_0^s) - \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\delta_0 \notag \\
& \hspace{20em}+ \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right]\delta_0^s \notag \\
& = -\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right](\beta_0 - \beta_0^s) - \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right](\delta_0 - \delta_0^s) \notag \\
& \hspace{20em} + \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]\delta_0^s \notag \\
& = \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right]\frac{(\beta_0^s - \beta_0)}{\sigma_n} + \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\frac{(\delta^s_0 - \delta_0)}{\sigma_n} \notag \\
& \hspace{20em} + \frac{1}{\sigma_n}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]\delta_0^s \notag \\
& = \left( \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right]\frac{(\beta_0^s - \beta_0)}{\sigma_n} + \frac{(\delta^s_0 - \delta_0)}{\sigma_n} \notag \\
\label{eq:deriv2} & \qquad \qquad \qquad + \left( \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \frac{1}{\sigma_n}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]\delta_0^s
\end{align}
Subtracting equation \eqref{eq:deriv2} from \eqref{eq:deriv1} we obtain:
$$
0 = A_n \frac{(\beta_0^s - \beta_0)}{\sigma_n} + b_n \,,
$$
i.e.
$$
\lim_{n \to \infty} \frac{(\beta_0^s - \beta_0)}{\sigma_n} = \lim_{n \to \infty} -A_n^{-1}b_n \,.
$$
where:
\begin{align*}
A_n & = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\Sigma_X \\
& \qquad \qquad - \left( \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right] \\
b_n & = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \frac{1}{\sigma_n}\mathbb{E} \left[g(Q)\left\{K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right]\delta_0^s \\
& \qquad - \left( \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \frac{1}{\sigma_n}\mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right]\delta_0^s
\end{align*}
It is immediate via DCT that as $n \to \infty$:
\begin{align}
\label{eq:limit_3} \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right] & \longrightarrow \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \,. \\
\label{eq:limit_4} \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \longrightarrow \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \,.
\end{align}
From equation \eqref{eq:limit_3} and \eqref{eq:limit_4} it is immediate that:
\begin{align*}
\lim_{n \to \infty} A_n & = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\Sigma_X - I \\
& = \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1}\left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 \le 0}\right]\right) := A\,.
\end{align*}
where the last equality uses $\Sigma_X = \mathbb{E}\left[XX^{\top}\right] = \mathbb{E}\left[g(Q)\right]$. Next observe that:
\begin{align}
& \frac{1}{\sigma_n} \mathbb{E}\left[g(Q)\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right] \notag \\
& = \frac{1}{\sigma_n} \mathbb{E}\left[g(Q)\left\{K\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \tilde Q^{\top}\tilde \eta\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right\}\right] \notag \\
& = \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} g(\sigma_nt - \tilde q^{\top}\tilde \psi_0, \tilde q)\left[K\left(t + \tilde q^{\top}\tilde \eta\right) - \mathds{1}_{t > 0}\right] f(\sigma_n t \mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \notag \\
\label{eq:limit_1} & \longrightarrow \mathbb{E}\left[g(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)f(0 \mid \tilde Q)\right] \cancelto{0}{\int_{-\infty}^{\infty} \left[K\left(t\right) - \mathds{1}_{t > 0}\right] \ dt} \,.
\end{align}
Similar calculation yields:
\begin{align}
\label{eq:limit_2} & \frac{1}{\sigma_n} \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left(1 - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \notag \\
& \longrightarrow \mathbb{E}[g(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)f_0(0 \mid \tilde Q)]\int_{-\infty}^{\infty} \left[K\left(t\right)\mathds{1}_{t \le 0}\right] \ dt \,.
\end{align}
Combining equations \eqref{eq:limit_1} and \eqref{eq:limit_2} we conclude:
\begin{align*}
\lim_{n \to \infty} b_n &= \left( \mathbb{E}\left[g(Q)\mathds{1}_{Q^{\top}\psi_0 > 0}\right]\right)^{-1} \mathbb{E}[g(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0f_0(0 \mid \tilde Q)]\int_{-\infty}^{\infty} \left[K\left(t\right)\mathds{1}_{t \le 0}\right] \ dt \\
& := b \,.
\end{align*}
which further implies
$$
\lim_{n \to \infty} \frac{\beta_0^s - \beta_0}{\sigma_n} = -A^{-1}b \,.
$$
Since the limit is finite, $\beta_0^s - \beta_0 = O(\sigma_n) = o(\sqrt{\sigma_n})$,
and by similar calculations:
$$
(\delta_0^s - \delta_0) = o(\sqrt{\sigma_n}) \,.
$$
This completes the proof.
\end{proof}
\subsection{Proof of Lemma \ref{lem:pop_curv_nonsmooth}}
\begin{proof}
From the definition of $\mathbb{M}(\theta)$ it is immediate that $\mathbb{M}(\theta_0) = \mathbb{E}[{\epsilon}^2] = \sigma^2$. For any general $\theta$:
\begin{align*}
\mathbb{M}(\theta) & = \mathbb{E}\left[\left(Y - X^{\top}\left(\beta + \delta\mathds{1}_{Q^{\top}\psi > 0}\right)\right)^2\right] \\
& = \sigma^2 + \mathbb{E}\left[\left( X^{\top}\left(\beta + \delta\mathds{1}_{Q^{\top}\psi > 0} - \beta_0 - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right)^2\right] \\
& \ge \sigma^2 + c_- \mathbb{E}_Q\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right] \,.
\end{align*}
This immediately implies:
$$
\mathbb{M}(\theta) - \mathbb{M}(\theta_0) \ge c_- \mathbb{E}\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right] \,.
$$
\noindent
For notational simplicity, define $p_{\psi} = \mathbb{P}(Q^{\top}\psi > 0)$. Expanding the RHS we have:
\begin{align}
& \mathbb{E}\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right] \notag \\
& = \|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}\mathbb{E}\left[\delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right] + \mathbb{E}\left[\left\|\delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right\|^2\right] \notag \\
& = \|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}\mathbb{E}\left[\delta\mathds{1}_{Q^{\top}\psi > 0}-\delta\mathds{1}_{Q^{\top}\psi_0 > 0} + \delta\mathds{1}_{Q^{\top}\psi_0 > 0} - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \notag \\
& \qquad \qquad \qquad \qquad \qquad+ \mathbb{E}\left[\left\|\delta\mathds{1}_{Q^{\top}\psi > 0}-\delta\mathds{1}_{Q^{\top}\psi_0 > 0} + \delta\mathds{1}_{Q^{\top}\psi_0 > 0} - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right\|^2\right] \notag \\
& = \|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}(\delta - \delta_0)p_{\psi_0} + \|\delta - \delta_0\|^2 p_{\psi_0} \notag \\
& \qquad \qquad \qquad + 2(\beta - \beta_0)^{\top}\delta\left(p_{\psi} - p_{\psi_0}\right) + \|\delta\|^2 \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) \notag \\
\label{eq:nsb1} & \qquad \qquad \qquad \qquad \qquad - 2\delta^{\top}(\delta - \delta_0)\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right)
\end{align}
Using the fact that $2ab \ge -\left(a^2/c + cb^2\right)$ for any constant $c > 0$ we have:
\begin{align*}
& \|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}(\delta - \delta_0)p_{\psi_0} + \|\delta - \delta_0\|^2 p_{\psi_0} \\
& \ge \|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2 p_{\psi_0} - \frac{\|\beta - \beta_0\|^2 p_{\psi_0}}{c} - c \|\delta - \delta_0\|^2 p_{\psi_0} \\
& = \|\beta - \beta_0\|^2\left(1 - \frac{p_{\psi_0}}{c}\right) + \|\delta - \delta_0\|^2 p_{\psi_0} (1 - c) \,.
\end{align*}
for any such $c$. To make the RHS non-negative we pick $p_{\psi_0} < c < 1$ and conclude that:
\begin{equation}
\label{eq:nsb2}
\|\beta - \beta_0\|^2 + 2(\beta - \beta_0)^{\top}(\delta - \delta_0)p_{\psi_0} + \|\delta - \delta_0\|^2 p_{\psi_0} \gtrsim \left( \|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2\right) \,.
\end{equation}
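The choice of $c$ above amounts to the observation that the $2\times 2$ matrix $\begin{pmatrix} 1 & p_{\psi_0} \\ p_{\psi_0} & p_{\psi_0}\end{pmatrix}$ underlying the quadratic form in \eqref{eq:nsb2} is positive definite whenever $0 < p_{\psi_0} < 1$ (its determinant is $p_{\psi_0}(1 - p_{\psi_0}) > 0$). A small numerical check of the minimum-eigenvalue claim (illustrative only, not part of the proof):

```python
import math

def min_eig(p):
    # eigenvalues of [[1, p], [p, p]] are ((1 + p) +- sqrt((1 - p)**2 + 4*p**2)) / 2;
    # we return the smaller one
    return ((1.0 + p) - math.sqrt((1.0 - p) ** 2 + 4.0 * p * p)) / 2.0

# strictly positive for every p in (0, 1)
min_eigs = [min_eig(p) for p in (0.05, 0.25, 0.5, 0.75, 0.95)]
print(min_eigs)
```

For instance, at $p_{\psi_0} = 1/2$ the minimum eigenvalue is about $0.19$, so the quadratic form is bounded below by a positive multiple of $\|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2$.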
For the last three summands on the RHS of equation \eqref{eq:nsb1}:
\begin{align}
& 2(\beta - \beta_0)^{\top}\delta\left(p_{\psi} - p_{\psi_0}\right) + \|\delta\|^2 \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) \notag \\
& \qquad \qquad - 2\delta^{\top}(\delta - \delta_0)\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& = 2(\beta - \beta_0)^{\top}\delta \mathbb{P}\left(Q^{\top}\psi > 0, Q^{\top}\psi_0 < 0\right) - 2(\beta - \beta_0)^{\top}\delta \mathbb{P}\left(Q^{\top}\psi < 0, Q^{\top}\psi_0 > 0\right) \notag \\
& \qquad \qquad + \|\delta\|^2 \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) - 2\delta^{\top}(\delta - \delta_0)\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& = \left[\|\delta\|^2 - 2(\beta - \beta_0)^{\top}\delta - 2\delta^{\top}(\delta - \delta_0)\right]\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad + \left[\|\delta\|^2 + 2(\beta - \beta_0)^{\top}\delta\right]\mathbb{P}\left(Q^{\top}\psi > 0, Q^{\top}\psi_0 < 0\right) \notag \\
& = \left[\|\delta_0\|^2 - 2(\beta - \beta_0)^{\top}(\delta - \delta_0) - 2(\beta - \beta_0)^{\top}\delta_0 - \|\delta - \delta_0\|^2\right]\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& \qquad + \left[\|\delta_0\|^2 + \|\delta - \delta_0\|^2 + 2(\delta - \delta_0)^{\top}\delta_0 + 2(\beta - \beta_0)^{\top}(\delta - \delta_0) + 2(\beta - \beta_0)^{\top}\delta_0\right]\mathbb{P}\left(Q^{\top}\psi > 0, Q^{\top}\psi_0 < 0\right) \notag \\
& \ge \left[\|\delta_0\|^2 - 2\|\beta - \beta_0\|\|\delta - \delta_0\| - 2\|\beta - \beta_0\|\|\delta_0\| - \|\delta - \delta_0\|^2\right]\mathbb{P}\left(Q^{\top}\psi_0 > 0, Q^{\top}\psi < 0\right) \notag \\
& \qquad + \left[\|\delta_0\|^2 + \|\delta - \delta_0\|^2 - 2\|\delta - \delta_0\|\|\delta_0\| - 2\|\beta - \beta_0\|\|\delta - \delta_0\| - 2\|\beta - \beta_0\|\|\delta_0\|\right]\mathbb{P}\left(Q^{\top}\psi > 0, Q^{\top}\psi_0 < 0\right) \notag \\
\label{eq:nsb3} & \gtrsim \|\delta_0\|^2 \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) \gtrsim \|\psi - \psi_0\| \hspace{0.2in} [\text{By Assumption }\ref{eq:assm}]\,.
\end{align}
Combining equations \eqref{eq:nsb2} and \eqref{eq:nsb3} completes the proof of the lower bound. The upper bound is relatively easier: note that by our previous calculation:
\begin{align*}
\mathbb{M}(\theta) - \mathbb{M}(\theta_0) & = \mathbb{E}\left[\left( X^{\top}\left(\beta + \delta\mathds{1}_{Q^{\top}\psi > 0} - \beta_0 - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right)^2\right] \\
& \le c_+\mathbb{E}\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right] \\
& = c_+\mathbb{E}\left[\left\|\beta - \beta_0 + \delta\mathds{1}_{Q^{\top}\psi > 0}- \delta\mathds{1}_{Q^{\top}\psi_0 > 0} + \delta\mathds{1}_{Q^{\top}\psi_0 > 0} - \delta_0\mathds{1}_{Q^{\top}\psi_0 > 0} \right\|^2\right] \\
& \lesssim \left[\|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2 + \mathbb{P}\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right)\right] \\
& \lesssim \left[\|\beta - \beta_0\|^2 + \|\delta - \delta_0\|^2 + \|\psi - \psi_0\|\right] \,.
\end{align*}
This completes the proof.
\end{proof}
\subsection{Proof of Lemma \ref{lem:uniform_smooth}}
\begin{proof}
The difference between the two losses can be bounded as follows:
\begin{align*}
\left|\mathbb{M}^s(\theta) - \mathbb{M}(\theta)\right| & = \left|\mathbb{E}\left[\left\{-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right\}\left(K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi > 0}\right)\right]\right| \\
& \le \mathbb{E}\left[\left|-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right|\left|K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi > 0}\right|\right] \\
& := \mathbb{E}\left[m(Q)\left|K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi > 0}\right|\right]
\end{align*}
where $m(Q) = \mathbb{E}\left[\left|-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right| \mid Q\right]$. This function can be bounded as follows:
\begin{align*}
m(Q) & = \mathbb{E}\left[\left|-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right| \mid Q\right] \\
& \le \mathbb{E}[ (X^{\top}\delta)^2 \mid Q] + 2\mathbb{E}\left[\left|(\beta - \beta_0)^{\top}XX^{\top}\delta\right| \mid Q\right] + 2\mathbb{E}\left[\left|\delta_0^{\top}XX^{\top}\delta\right| \mid Q\right] \\
& \le c_+\left(\|\delta\|^2 + 2\|\beta - \beta_0\|\|\delta\| + 2\|\delta\|\|\delta_0\|\right) \lesssim 1 \,,
\end{align*}
as our parameter space is compact. For the rest of the calculation define $\eta = (\tilde \psi - \tilde \psi_0)/\sigma_n$. The definition of $\eta$ may change from proof to proof, but it will always be clear from the context. Therefore we have:
\begin{align*}
\left|\mathbb{M}^s(\theta) - \mathbb{M}(\theta)\right| & \lesssim \mathbb{E}\left[\left|K\left(\frac{Q^{\top}\psi}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi > 0}\right|\right] \\
& = \mathbb{E}\left[\left| \mathds{1}\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \eta^{\top}\tilde{Q} \ge 0\right) - K\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \eta^{\top}\tilde{Q}\right)\right|\right] \\
& = \sigma_n \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | f_0(\sigma_n (t-\eta^{\top}\tilde{q}) | \tilde{q}) \ dt \ dP(\tilde{q}) \\
& \le f_+ \sigma_n \int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | \ dt \lesssim \sigma_n \,.
\end{align*}
where the finiteness of the integral over $t$ follows from the definition of the kernel. This completes the proof.
\end{proof}
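The integral $\int_{-\infty}^{\infty}\left|\mathds{1}(t \ge 0) - K(t)\right| dt$ appearing at the end of the proof above is even available in closed form for the Gaussian kernel $K = \Phi$: it equals $2\int_0^{\infty}(1 - \Phi(t))\,dt = 2\phi(0) = \sqrt{2/\pi}$. The following numerical sketch (assuming $K = \Phi$, not part of the proof) confirms this value:

```python
import math

def Phi(t):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def abs_integral(T=12.0, n=200000):
    # midpoint rule for int_{-T}^{T} |1{t >= 0} - Phi(t)| dt; tails beyond T are negligible
    h = 2.0 * T / n
    return sum(abs((1.0 if (-T + (i + 0.5) * h) >= 0 else 0.0) - Phi(-T + (i + 0.5) * h)) * h
               for i in range(n))

print(abs_integral(), math.sqrt(2.0 / math.pi))
```

Both numbers agree to roughly four decimal places ($\approx 0.7979$), consistent with the closed form above.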
\subsection{Proof of Lemma \ref{lem:pop_smooth_curvarture}}
\begin{proof}
First note that we can write:
\begin{align}
& \mathbb{M}^s(\theta) - \mathbb{M}^s(\theta_0^s) \notag \\
& = \underbrace{\mathbb{M}^s(\theta) - \mathbb{M}(\theta)}_{\ge -K_1\sigma_n} + \underbrace{\mathbb{M}(\theta) - \mathbb{M}(\theta_0)}_{\underbrace{\ge u_- d^2(\theta, \theta_0)}_{\ge \frac{u_-}{2} d^2(\theta, \theta_0^s) - u_-\sigma_n }} + \underbrace{\mathbb{M}(\theta_0) - \mathbb{M}(\theta_0^s)}_{\ge - u_+ d^2(\theta_0, \theta_0^s) \ge - u_+\sigma_n} + \underbrace{\mathbb{M}(\theta_0^s) - \mathbb{M}^s(\theta_0^s)}_{\ge - K_1 \sigma_n} \notag \\
& \ge \frac{u_-}{2}d^2(\theta, \theta_0^s) - (2K_1 + \xi)\sigma_n \notag \\
& \ge \frac{u_-}{2}\left[\|\beta - \beta^s_0\|^2 + \|\delta - \delta^s_0\|^2 + \|\psi - \psi^s_0\|\right] - (2K_1 + \xi)\sigma_n \notag \\
& \ge \left[\frac{u_-}{2}\left(\|\beta - \beta^s_0\|^2 + \|\delta - \delta^s_0\|^2\right) + \frac{u_-}{4}\|\psi - \psi^s_0\|\right]\mathds{1}_{\|\psi - \psi^s_0\| > \frac{4(2K_1 + \xi)}{u_-}\sigma_n} \notag \\
\label{eq:lower_curv_smooth} & \gtrsim \left[\|\beta - \beta^s_0\|^2 + \|\delta - \delta^s_0\|^2 + \|\psi - \psi^s_0\|\right]\mathds{1}_{\|\psi - \psi^s_0\| > \frac{4(2K_1 + \xi)}{u_-}\sigma_n}
\end{align}
where $\xi$ can be taken as close to $0$ as possible. Henceforth we set $\mathcal{K} = 4(2K_1 + \xi)/u_-$. For the other part of the curvature (i.e. when $\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n$) we start with a second-order Taylor expansion of the smoothed loss function:
\begin{align*}
\mathbb{M}^s(\theta) - \mathbb{M}^s(\theta_0^s) = \frac12 (\theta - \theta_0^s)^{\top}\nabla^2 \mathbb{M}^s(\theta^*)(\theta - \theta_0^s)
\end{align*}
for some intermediate point $\theta^*$ between $\theta$ and $\theta_0^s$; the first-order term vanishes because $\theta_0^s$ minimizes $\mathbb{M}^s$.
Recall the definition of $\mathbb{M}^s(\theta)$:
$$
\mathbb{M}^s(\theta) = \mathbb{E}\left(Y - X^{\top}\beta\right)^2 + \mathbb{E} \left\{\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}
$$
The partial derivatives of $\mathbb{M}^s(\theta)$ with respect to $(\beta, \delta, \psi)$ were derived in equations \eqref{eq:beta_grad} - \eqref{eq:psi_grad}. From there, we calculate the Hessian of $\mathbb{M}^s(\theta)$:
\begin{align*}
\nabla_{\beta\beta}\mathbb{M}^s(\theta) & = 2\Sigma_X \\
\nabla_{\delta\delta}\mathbb{M}^s(\theta) & = 2 \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right] = 2 \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right] \\
\nabla_{\psi\psi} \mathbb{M}^s(\theta) & = \frac{1}{\sigma_n^2}\mathbb{E} \left\{\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right]\tilde Q\tilde Q^{\top} K''\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
\nabla_{\beta \delta}\mathbb{M}^s(\theta) & = 2 \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right] = 2 \mathbb{E}\left[g(Q)K\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right] \\
\nabla_{\beta \psi}\mathbb{M}^s(\theta) & = \frac{2}{\sigma_n}\mathbb{E}\left(g(Q)\delta\tilde Q^{\top}K'\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right) \\
\nabla_{\delta \psi} \mathbb{M}^s(\theta) & = \frac{2}{\sigma_n}\mathbb{E} \left\{\left[-X\left(Y - X^{\top}\beta\right) + XX^{\top}\delta\right]\tilde Q^{\top} K'\left(\frac{Q^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \,.
where we use $\tilde \eta$ as generic notation for $(\tilde \psi - \tilde \psi_0)/\sigma_n$. For notational simplicity, we define $\gamma = (\beta, \delta)$ and $\nabla^2\mathbb{M}^{s, \gamma}(\theta)$, $\nabla^2\mathbb{M}^{s, \gamma \psi}(\theta), \nabla^2\mathbb{M}^{s, \psi \psi}(\theta)$ to be the corresponding blocks of the Hessian matrix. We have:
\begin{align}
\mathbb{M}^s(\theta) - \mathbb{M}^s(\theta_0^s) & = \frac12 (\theta - \theta_0^s)^{\top}\nabla^2 \mathbb{M}^s(\theta^*)(\theta - \theta_0^s) \notag \\
& = \frac12 (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*)(\gamma - \gamma_0^s) + (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma \psi}(\theta^*)(\psi - \psi_0^s) \notag \\
& \qquad \qquad \qquad \qquad + \frac12(\psi - \psi_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \psi \psi}(\theta^*)(\psi - \psi_0^s) \notag \\
\label{eq:hessian_1} & := \frac12 \left(T_1 + 2T_2 + T_3\right)
\end{align}
Note that we can write:
\begin{align*}
T_1 & = (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*)(\gamma - \gamma_0^s) \\
& = (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma}(\theta_0)(\gamma - \gamma_0^s) + (\gamma - \gamma_0^s)^{\top}\left[\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*) - \nabla^2 \mathbb{M}^{s, \gamma}(\theta_0)\right](\gamma - \gamma_0^s)
\end{align*}
The operator norm of the difference of the two Hessians can be bounded as:
$$
\left\|\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*) - \nabla^2 \mathbb{M}^{s, \gamma}(\theta_0)\right\|_{op} = O(\sigma_n) \,.
$$
for any $\theta^*$ in a neighborhood of $\theta_0^s$ with $\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n$. To prove this, note that for any such $\theta^*$:
$$
\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*) - \nabla^2 \mathbb{M}^{s, \gamma}(\theta_0) = 2\begin{pmatrix}0 & A \\
A & A\end{pmatrix} = 2\begin{pmatrix}0 & 1 \\ 1 & 1\end{pmatrix} \otimes A
$$
where:
$$
A = \mathbb{E}\left[g(Q)K\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\right] - \mathbb{E}\left[g(Q)K\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n}\right)\right]
$$
Therefore it is enough to show $\|A\|_{op} = O(\sigma_n)$. Towards that direction:
\begin{align*}
A & = \mathbb{E}\left[g(Q)K\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\right] - \mathbb{E}\left[g(Q)K\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n}\right)\right] \\
& = \sigma_n \int \int g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q)\left(K(t + \tilde q^{\top}\eta) - K(t) \right) f_0(\sigma_n t \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q \\
& = \sigma_n \left[\int \int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\left(K(t + \tilde q^{\top}\eta) - K(t) \right) f_0(0 \mid \tilde q) \ f(\tilde q) \ dt \ d\tilde q + R \right] \\
& = \sigma_n \left[\int \int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)f_0(0 \mid \tilde q) \int_t^{t + \tilde q^{\top}\eta}K'(s) \ ds \ f(\tilde q) \ dt \ d\tilde q + R \right] \\
& = \sigma_n \left[\int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)f_0(0 \mid \tilde q) \int_{-\infty}^{\infty}K'(s) \int_{s-\tilde q^{\top}\eta}^s \ dt \ ds \ f(\tilde q)\ d\tilde q + R \right] \\
& = \sigma_n \left[\int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)f_0(0 \mid \tilde q)\tilde q^{\top}\eta \ f(\tilde q)\ d\tilde q + R \right] \\
& = \sigma_n \left[\mathbb{E}\left[g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)f_0(0 \mid \tilde Q)\tilde Q^{\top}\eta\right] + R \right]
\end{align*}
Using the fact that $\left\|\mathbb{E}\left[g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q)f_0(0 \mid \tilde Q)\tilde Q^{\top}\eta\right]\right\|_{op} = O(1)$ and $\|R\|_{op} = O(\sigma_n)$, we conclude the claim. From the above claim we obtain:
\begin{equation}
\label{eq:hessian_gamma}
T_1 = (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma}(\theta^*)(\gamma - \gamma_0^s) \ge \|\gamma - \gamma_0^s\|^2(1 - O(\sigma_n)) \ge \frac12 \|\gamma - \gamma_0^s\|^2
\end{equation}
for all large $n$.
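The block-matrix step above rests on the standard Kronecker-product identity $\begin{pmatrix} 0 & A \\ A & A \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} \otimes A$; a minimal numerical verification on a random matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# assemble [[0, A], [A, A]] directly and via the Kronecker product
block = np.block([[np.zeros((3, 3)), A], [A, A]])
kron = np.kron(np.array([[0.0, 1.0], [1.0, 1.0]]), A)
match = bool(np.allclose(block, kron))
print(match)
```

In particular, $\|\cdot\|_{op}$ of the block matrix is controlled by $\|A\|_{op}$ times the fixed $2 \times 2$ factor, which is why bounding $\|A\|_{op}$ suffices.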
\\\\
\noindent
We next deal with the cross term $T_2$ in equation \eqref{eq:hessian_1}. Towards that end first note that:
\begin{align*}
& \frac{1}{\sigma_n}\mathbb{E}\left((g(Q)\delta)\tilde Q^{\top}K'\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\eta^*\right)\right) \\
& = \int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left(g\left(\sigma_nt - \tilde q^{\top}\tilde \psi_0, \tilde q\right)\delta\right) K'\left(t + \tilde q^{\top}\eta^*\right) f_0(\sigma_n t \mid \tilde q) \ dt\right] \tilde q^{\top} \ f(\tilde q) \ d\tilde q \\
& = \int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left(g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right)\delta\right) K'\left(t + \tilde q^{\top}\eta^*\right) f_0(0 \mid \tilde q) \ dt\right] \tilde q^{\top} \ f(\tilde q) \ d\tilde q + R_1\\
& = \mathbb{E}\left[\left(g\left( - \tilde Q^{\top}\tilde \psi_0, \tilde Q\right)\delta\right)\tilde Q^{\top}f_0(0 \mid \tilde Q)\right] + R_1
\end{align*}
where the remainder term $R_1$ can be further decomposed as $R_1 = R_{11} + R_{12} + R_{13}$, with:
\begin{align*}
\left\|R_{11}\right\| & = \left\|\int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left(g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right)\delta\right) K'\left(t + \tilde q^{\top}\eta^*\right) (f_0(\sigma_nt\mid \tilde q) - f_0(0 \mid \tilde q)) \ dt\right] \tilde q^{\top} \ f(\tilde q) \ d\tilde q\right\| \\
& \le \left\|\int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left\|g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right)\right\|_{op}\|\delta\| \left|K'\left(t + \tilde q^{\top}\eta^*\right)\right| \left|f_0(\sigma_nt\mid \tilde q) - f_0(0 \mid \tilde q)\right| \ dt\right] \|\tilde q\| \ f(\tilde q) \ d\tilde q\right\| \\
& \le \sigma_n \dot{f}^+ c_+ \|\delta\| \int_{\mathbb{R}^{(p-1)}} \|\tilde q\| \int_{-\infty}^{\infty} |t| \left|K'\left(t + \tilde q^{\top}\eta^*\right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& \le \sigma_n \dot{f}^+ c_+ \|\delta\| \int_{\mathbb{R}^{(p-1)}} \|\tilde q\| \int_{-\infty}^{\infty} |t - \tilde q^{\top}\eta^*| \left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& \le \sigma_n \dot{f}^+ c_+ \|\delta\| \left[\int_{\mathbb{R}^{(p-1)}} \|\tilde q\| \int_{-\infty}^{\infty} |t| \left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q \right. \\
& \qquad \qquad \qquad \left. + \int_{\mathbb{R}^{(p-1)}} \|\tilde q\|^2 \|\eta^*\| \int_{-\infty}^{\infty} |K'(t)| \ dt \ f(\tilde q) \ d\tilde q\right] \\
& \le \sigma_n \dot{f}^+ c_+ \|\delta\| \left[\int_{\mathbb{R}^{(p-1)}} \|\tilde q\| \int_{-\infty}^{\infty} |t| \left|K'\left(t\right)\right| \ dt \ f(\tilde q) \ d\tilde q + \mathcal{K}\int_{\mathbb{R}^{(p-1)}} \|\tilde q\|^2 \int_{-\infty}^{\infty} |K'(t)| \ dt \ f(\tilde q) \ d\tilde q\right] \\
& \lesssim \sigma_n \,.
\end{align*}
where the last bound follows from our assumptions. Similarly, for $R_{12}$:
\begin{align*}
& \|R_{12}\| \\
&= \left\|\int_{\mathbb{R}^{(p-1)}}\left[ \int_{-\infty}^{\infty} \left(\left(g\left(\sigma_n t- \tilde q^{\top}\tilde \psi_0, \tilde q\right) - g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right)\right)\delta\right) K'\left(t + \tilde q^{\top} \eta^*\right) f_0(0 \mid \tilde q) \ dt\right] \tilde q^{\top} \ f(\tilde q) \ d\tilde q\right\| \\
& \le \int \|\tilde q\|\|\delta\|f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} \left\|g\left(\sigma_n t- \tilde q^{\top}\tilde \psi_0, \tilde q\right) - g\left(- \tilde q^{\top}\tilde \psi_0, \tilde q\right) \right\|_{op}\left|K'\left(t + \tilde q^{\top} \eta^*\right)\right| \ dt \ f(\tilde q) \ d\tilde q \\
& \le \dot{c}_+ \sigma_n \int \|\tilde q\|\|\delta\|f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} |t| \left|K'\left(t + \tilde q^{\top}\eta^*\right)\right| \ dt \ f(\tilde q) \ d\tilde q \hspace{0.2in} [\text{Assumption }\ref{eq:assm}]\\
& \lesssim \sigma_n \,.
\end{align*}
The remaining term $R_{13}$ is of higher order and can be shown to be $O(\sigma_n^2)$ using the same techniques. This implies that for all large $n$:
\begin{align*}
\left\|\nabla_{\beta \psi}\mathbb{M}^s(\theta)\right\|_{op} & = O(1) \,.
\end{align*}
and a similar calculation yields $ \left\|\nabla_{\delta \psi}\mathbb{M}^s(\theta)\right\|_{op} = O(1)$. Using this we have:
\begin{align}
T_2 & = (\gamma - \gamma_0^s)^{\top}\nabla^2 \mathbb{M}^{s, \gamma \psi}(\theta^*)(\psi - \psi_0^s) \notag \\
& = (\beta - \beta_0^s)^{\top}\nabla_{\beta \psi}^2 \mathbb{M}^{s}(\theta^*)(\psi - \psi_0^s) + (\delta - \delta_0^s)^{\top}\nabla_{\delta \psi}^2 \mathbb{M}^{s}(\theta^*)(\psi - \psi_0^s) \notag \\
& \ge - C\left[\|\beta - \beta_0^s\| + \|\delta - \delta_0^s\| \right]\|\psi - \psi_0^s\| \notag \\
& \ge -C \sqrt{\sigma_n}\left[\|\beta - \beta_0^s\| + \|\delta - \delta_0^s\| \right]\frac{\|\psi - \psi_0^s\| }{\sqrt{\sigma_n}} \notag \\
\label{eq:hessian_cross} & \gtrsim - \sqrt{\sigma_n}\left(\|\beta - \beta_0^s\|^2 + \|\delta - \delta_0^s\|^2 +\frac{\|\psi - \psi_0^s\|^2 }{\sigma_n} \right) \,.
\end{align}
Now for $T_3$ note that:
\allowdisplaybreaks
\begin{align*}
& \sigma_n \nabla_{\psi\psi} \mathbb{M}^s(\theta) \\
& = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2\left(Y_i - X_i^{\top}\beta\right)X_i^{\top}\delta + (X_i^{\top}\delta)^2\right]\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2\left(Y_i - X_i^{\top}\beta\right)X_i^{\top}\delta \right]\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& \qquad \qquad \qquad + \frac{1}{\sigma_n}\mathbb{E} \left\{(\delta^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \frac{1}{\sigma_n}\mathbb{E} \left\{\left[-2 X_i^{\top}\left(\beta_0 -\beta\right)X_i^{\top}\delta - 2(X_i^{\top}\delta_0)(X_i^{\top}\delta)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right]\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& \qquad \qquad \qquad + \frac{1}{\sigma_n}\mathbb{E} \left\{(\delta^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \frac{-2}{\sigma_n}\mathbb{E} \left\{((\beta_0 - \beta)^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& \qquad \qquad \qquad + \frac{-2}{\sigma_n}\mathbb{E} \left\{(\delta_0^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right\} \\
& \qquad \qquad \qquad \qquad \qquad \qquad + \frac{1}{\sigma_n}\mathbb{E} \left\{(\delta^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \underbrace{\frac{-2}{\sigma_n}\mathbb{E} \left\{((\beta_0 - \beta)^{\top}g(Q)\delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\}}_{M_1} \\
& \qquad \qquad \qquad + \underbrace{\frac{-2}{\sigma_n}\mathbb{E} \left\{(\delta_0^{\top}g(Q) \delta_0)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right\}}_{M_2} \\
& \qquad \qquad \qquad \qquad \qquad \qquad +
\underbrace{\frac{-2}{\sigma_n}\mathbb{E} \left\{(\delta_0^{\top} g(Q) (\delta - \delta_0))\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right\}}_{M_3} \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \underbrace{\frac{1}{\sigma_n}\mathbb{E} \left\{(\delta^{\top}g(Q) \delta)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\}}_{M_4} \\
& := M_1 + M_2 + M_3 + M_4
\end{align*}
We next show that $M_1$ and $M_4$ are $O(\sigma_n)$. Towards that end note that for any two vectors $v_1, v_2$:
\begin{align*}
& \frac{1}{\sigma_n}\mathbb{E} \left\{(v_1^{\top}g(Q)v_2)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\right\} \\
& = \int \tilde q \tilde q^{\top} \int_{-\infty}^{\infty}(v_1^{\top}g(\sigma_nt - \tilde q^{\top}\tilde \psi_0, \tilde q)v_2) K''(t + \tilde q^{\top}\tilde \eta) f(\sigma_nt \mid \tilde q) \ dt \ f(\tilde q) \ d\tilde q \\
& = \int \tilde q \tilde q^{\top} (v_1^{\top}g( - \tilde q^{\top}\tilde \psi_0, \tilde q)v_2)f(0 \mid \tilde q) f(\tilde q) \ d\tilde q \cancelto{0}{\int_{-\infty}^{\infty} K''(t) \ dt} + R = R
\end{align*}
since $\int K''(t) \ dt = 0$ follows from our choice of kernel $K(x) = \Phi(x)$. A calculation similar to the analysis of the remainder of $T_2$ yields $\|R\|_{op} = O(\sigma_n)$.
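Both kernel facts used here — $\int_{-\infty}^{\infty} K''(t) \, dt = 0$ and the integrability of $|t||K''(t)|$ for $K = \Phi$, where $K''(t) = -t\phi(t)$ — can be checked directly. The following numerical sketch assumes the Gaussian kernel and is purely illustrative:

```python
import math

def K2(t):
    # second derivative of the Gaussian CDF: K''(t) = -t * phi(t)
    return -t * math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def midpoint(f, T=10.0, n=200000):
    # midpoint rule on [-T, T]; tails are negligible for these integrands
    h = 2.0 * T / n
    return sum(f(-T + (i + 0.5) * h) for i in range(n)) * h

I0 = midpoint(K2)                              # odd integrand: integrates to 0
I1 = midpoint(lambda t: abs(t) * abs(K2(t)))   # equals int t^2 phi(t) dt = 1
print(I0, I1)
```

The first integral vanishes by symmetry and the second evaluates to the second moment of a standard normal, so both quantities needed for the remainder bounds are finite.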
\noindent
This immediately implies $\|M_1\|_{op} = O(\sigma_n)$ and $\|M_4\|_{op} = O(\sigma_n)$. Now for $M_2$:
\begin{align}
M_2 & = \frac{-2}{\sigma_n}\mathbb{E} \left\{(\delta_0^{\top}g(Q) \delta_0)\tilde Q_i\tilde Q_i^{\top} K''\left(\frac{Q_i^{\top}\psi_0 }{\sigma_n} + \tilde Q^{\top}\tilde \eta\right)\mathds{1}_{Q_i^{\top}\psi_0 > 0}\right\} \notag \\
& = -2\int \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q) \delta_0)\tilde q\tilde q^{\top} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} f_0(\sigma_n t \mid \tilde q) \ dt f(\tilde q) \ d\tilde q \notag \\
& = -2\int (\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q + R \notag \\
\label{eq:M_2_double_deriv} & = 2\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0, \tilde Q) \delta_0)\tilde
Q\tilde Q^{\top} f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right] + R
\end{align}
where the fact that the remainder term $R$ is $O(\sigma_n)$ can be established as follows:
\begin{align*}
R & = -2\left[\int \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} f_0(\sigma_n t \mid \tilde q) \ dt f(\tilde q) \ d\tilde q \right. \\
& \qquad \qquad - \left. \int (\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right] \\
& = -2\left\{\left[\int \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} f_0(\sigma_n t \mid \tilde q) \ dt f(\tilde q) \ d\tilde q \right. \right. \\
& \qquad \qquad - \left. \left. \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0) \tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right] \right. \\
& \left. + \left[\int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right. \right. \\
& \qquad \qquad \left. \left. -\int (\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right]\right\} \\
& = -2(R_1 + R_2) \,.
\end{align*}
For $R_1$:
\begin{align*}
\left\|R_1\right\|_{op} & = \left\|\left[\int \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} f_0(\sigma_n t \mid \tilde q) \ dt f(\tilde q) \ d\tilde q \right. \right. \\
& \qquad \qquad - \left. \left. \int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right] \right\|_{op} \\
& \le c_+ \int \int \|\tilde q\|^2 |K''\left(t + \tilde q^{\top}\eta^*\right)| |f_0(\sigma_n t \mid \tilde q) -f_0(0\mid \tilde q)| \ dt \ f(\tilde q) \ d\tilde q \\
& \le c_+ F_+\sigma_n \int \|\tilde q\|^2 \int |t| |K''\left(t + \tilde q^{\top}\eta^*\right)| \ dt \ f(\tilde q) \ d\tilde q \\
& = c_+ F_+\sigma_n \int \|\tilde q \|^2 \int |t - \tilde q^{\top}\eta^*| |K''\left(t\right)| \ dt \ f(\tilde q) \ d\tilde q \\
& \le c_+ F_+ \sigma_n \left[\mathbb{E}[\|\tilde Q\|^2]\int |t||K''(t)| \ dt + \|\eta^*\|\mathbb{E}[\|\tilde Q\|^3]\int |K''(t)| \ dt\right] = O(\sigma_n) \,.
\end{align*}
and similarly for $R_2$:
\begin{align*}
\|R_2\|_{op} & = \left\|\left[\int (\delta_0^{\top}g(\sigma_n t - \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right. \right. \\
& \qquad \qquad \left. \left. -\int (\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0) \delta_0)\tilde q\tilde q^{\top} f_0(0 \mid \tilde q) \int_{-\infty}^{\infty} K''\left(t + \tilde q^{\top}\eta^*\right)\mathds{1}_{t > 0} \ dt f(\tilde q) \ d\tilde q \right]\right\|_{op} \\
& \le F_+ \|\delta_0\|^2 \int \|\tilde q\|^2 \int_{-\infty}^{\infty} \left\|g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q) - g( - \tilde q^{\top}\tilde \psi_0, \tilde q) \right\|_{op} |K''\left(t + \tilde q^{\top}\eta^*\right)| \ dt \ f(\tilde q) \ d\tilde q \\
& \le G_+ F_+ \sigma_n \int \|\tilde q\|^2 \int_{-\infty}^{\infty} |t||K''\left(t + \tilde q^{\top}\eta^*\right)| \ dt \ f(\tilde q) \ d\tilde q = O(\sigma_n) \,.
\end{align*}
Therefore from \eqref{eq:M_2_double_deriv} we conclude:
\begin{equation}
M_2 = 2\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0) \delta_0)\tilde
Q\tilde Q^{\top} f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right] + O(\sigma_n) \,.
\end{equation}
A similar calculation for $M_3$ yields:
\begin{equation*}
M_3 = 2\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0)(\delta - \delta_0))\tilde
Q\tilde Q^{\top} f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right] + O(\sigma_n) \,.
\end{equation*}
i.e.
\begin{equation}
\|M_3\|_{op} \le c_+ \mathbb{E}\left[\|\tilde Q\|^2f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right]\|\delta_0\| \|\delta - \delta_0\| \,.
\end{equation}
Now we claim that for any $\mathcal{K} < \infty$, $\lambda_{\min} (M_2) > 0$ for all $\|\eta^*\| \le \mathcal{K}$. Towards that end, define a function $\lambda:B_{\mathbb{R}^{p-1}}(1) \times B_{\mathbb{R}^{p-1}}(\mathcal{K}) \to \mathbb{R}_+$ as:
$$
\lambda: (v, \eta) \mapsto 2\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0) \delta_0)
\left(v^{\top}\tilde Q\right) ^2 f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta)\right]
$$
Clearly $\lambda \ge 0$ and is continuous on a compact set. Hence its infimum must be attained. Suppose the infimum is $0$, i.e. there exists $(v^*, \eta^*)$ such that:
$$
\mathbb{E}\left[(\delta_0^{\top}g(- \tilde Q^{\top}\tilde \psi_0) \delta_0)
\left(v^{*^{\top}}\tilde Q\right) ^2 f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*)\right] = 0 \,.
$$
As $\lambda_{\min}(g(\cdot)) \ge c_+$, we must have $\left(v^{*^{\top}}\tilde Q\right) ^2 f_0(0 \mid \tilde Q) K'(\tilde Q^{\top}\eta^*) = 0$ almost surely. But from our assumption, $\left(v^{*^{\top}}\tilde Q\right) ^2 > 0$ and $K'(\tilde Q^{\top}\eta^*) > 0$ almost surely, which implies $f_0(0 \mid \tilde Q) = 0$ almost surely, a contradiction. Hence there exists $\lambda_-$ such that:
$$
\lambda_{\min} (M_2) \ge \lambda_- > 0 \ \ \forall \ \ \|\psi - \psi_0^s\| \le \mathcal{K} \,.
$$
Hence we have:
$$
\lambda_{\min}\left(\sigma_n \nabla_{\psi \psi}\mathbb{M}^s(\theta)\right) \ge \frac{\lambda_-}{2}(1 - O(\sigma_n))
$$
for all $\theta$ such that $d_*(\theta, \theta_0^s) \le {\epsilon}$. Therefore:
\begin{align}
\label{eq:hessian_psi}
& \frac{1}{\sigma_n}(\psi - \psi_0^s)^{\top}\sigma_n \nabla_{\psi \psi}\mathbb{M}^s(\tilde \theta) (\psi - \psi_0^s) \gtrsim \frac{\|\psi - \psi^s_0\|^2}{\sigma_n} \left(1- O(\sigma_n)\right)
\end{align}
From equations \eqref{eq:hessian_gamma}, \eqref{eq:hessian_cross} and \eqref{eq:hessian_psi} we have:
\begin{align*}
& \frac12 (\theta - \theta_0^s)^{\top}\nabla^2 \mathbb{M}^s(\theta^*)(\theta - \theta_0^s) \\
& \qquad \qquad \gtrsim \left[\|\beta - \beta^s_0\|^2 + \|\gamma - \gamma^s_0\|^2 + \frac{\|\psi - \psi^s_0\|^2}{\sigma_n}\right]\mathds{1}_{\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n} \,.
\end{align*}
This, along with equation \eqref{eq:lower_curv_smooth} concludes the proof.
\end{proof}
\subsection{Proof of Lemma \ref{asymp-normality}}
We start by proving an analogue of Lemma 2 of \cite{seo2007smoothed}: we show that
\begin{align*}
\lim_{n \to \infty} \mathbb{E}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] & = 0 \\
\lim_{n \to \infty} {\sf var}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] & = V^{\psi}
\end{align*}
for some matrix $V^{\psi}$ which will be specified later in the proof. To prove the limit of the expectation:
\begin{align*}
& \mathbb{E}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] \\
& = \sqrt{\frac{n}{\sigma_n}}\mathbb{E}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \\
& = \sqrt{\frac{n}{\sigma_n}}\mathbb{E}\left[\left(\delta_0^{\top}g(Q)\delta_0\right)\left(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}\right)\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \\
& = \sqrt{\frac{n}{\sigma_n}} \times \sigma_n \int \int \left(\delta_0^{\top}g(\sigma_nt - \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)\left(1 - 2\mathds{1}_{t > 0}\right)\tilde q K'\left(t\right) \ f_0(\sigma_n t \mid \tilde q) f (\tilde q) \ dt \ d\tilde q \\
& = \sqrt{n\sigma_n} \left[\int \tilde q \left(\delta_0^{\top}g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\delta_0\right)f_0(0 \mid \tilde q) \cancelto{0}{\left(\int_{-\infty}^{\infty} \left(1 - 2\mathds{1}_{t > 0}\right)K'\left(t\right) \ dt\right)} f (\tilde q) d\tilde q + O(\sigma_n)\right] \\
& = O(\sqrt{n\sigma_n^3}) = o(1) \,.
\end{align*}
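The cancellation marked above uses only the fact that $K(0) = 1/2$ (in particular, a symmetric $K'$): $\int_{-\infty}^{\infty}(1 - 2\mathds{1}_{t>0})K'(t)\,dt = K(0) - (1 - K(0)) = 0$. The following numerical sketch verifies this with the Gaussian CDF as $K$ -- an illustrative assumption, not necessarily the kernel used in the paper:

```python
import math

# Sketch (illustrative kernel, not the paper's K): take K = standard
# Gaussian CDF, so K'(t) = phi(t) and K(0) = 1/2.  Then
#   ∫ (1 - 2*1{t>0}) K'(t) dt = K(0) - (1 - K(0)) = 0,
# which is the cancellation used in the display above.

def K_prime(t: float) -> float:
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def trapezoid(f, lo: float, hi: float, n: int) -> float:
    # simple trapezoidal quadrature on a uniform grid
    h = (hi - lo) / (n - 1)
    total = 0.5 * (f(lo) + f(hi))
    total += sum(f(lo + i * h) for i in range(1, n - 1))
    return total * h

val = trapezoid(lambda t: (1.0 - 2.0 * (t > 0.0)) * K_prime(t), -10.0, 10.0, 100001)
print(abs(val))  # close to 0 (up to quadrature error at the jump)
```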
For the variance part:
\begin{align*}
& {\sf var}\left[ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\right] \\
& = \frac{1}{\sigma_n}{\sf var}\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \\
& = \frac{1}{\sigma_n}\mathbb{E}\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \\
& \qquad \qquad - \frac{1}{\sigma_n}\mathbb{E}^{\otimes 2}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right]
\end{align*}
The outer product of the expectation (the second term above) is $o(1)$, which follows from our previous analysis of the expectation term. For the second moment:
\begin{align*}
& \frac{1}{\sigma_n}\mathbb{E}\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \\
& = \frac{1}{\sigma_n}\mathbb{E}\left(\left\{(X^{\top}\delta_0)^2(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}) -2{\epsilon} (X^{\top}\delta_0)\right\}^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \\
& = \frac{1}{\sigma_n}\left[\mathbb{E}\left((X^{\top}\delta_0)^4 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) + 4\sigma_{\epsilon}^2\mathbb{E}\left((X^{\top}\delta_0)^2 \tilde Q\tilde Q^{\top} \left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^2\right) \right] \\
& \longrightarrow \left(\int_{-\infty}^{\infty}(K'(t))^2 \ dt\right)\left[\mathbb{E}\left(g_{4, \delta_0}(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)\tilde Q\tilde Q^{\top}f_0(0 \mid \tilde Q)\right) \right. \\
& \hspace{10em}+ \left. 4\sigma_{\epsilon}^2\mathbb{E}\left(\delta_0^{\top}g(-\tilde Q^{\top}\tilde \psi_0, \tilde Q)\delta_0 \tilde Q\tilde Q^{\top}f_0(0 \mid \tilde Q)\right)\right] \\
& := 2V^{\psi} \,.
\end{align*}
Finally using Lemma 6 of \cite{horowitz1992smoothed} we conclude that $ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0) \implies \mathcal{N}(0, V^{\psi})$.
\\\\
\noindent
We next prove that $ \sqrt{n}\nabla \mathbb{M}_n^{s, \gamma}(\theta_0)$ converges to a normal distribution. This is a simple application of the CLT along with bounding some remainder terms which are asymptotically negligible. The gradients are:
\begin{align*}
\sqrt{n}\begin{pmatrix} \nabla_{\beta}\mathbb{M}^s_n(\theta_0^s) \\ \nabla_{\delta}\mathbb{M}^s_n(\theta_0^s) \end{pmatrix} & = 2\sqrt{n}\begin{pmatrix}\frac1n \sum_i X_i(X_i^{\top}\beta_0 - Y_i)+ \frac1n \sum_i X_iX_i^{\top}\delta_0 K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) \\
\frac1n \sum_i \left[X_i(X_i^{\top}\beta_0 + X_i^{\top}\delta_0 - Y_i)\right] K\left(\frac{Q_i^{\top}\psi_0^s}{\sigma_n}\right) \end{pmatrix} \\
& = 2\begin{pmatrix} -\frac{1}{\sqrt{n}} \sum_i X_i {\epsilon}_i + \frac{1}{\sqrt{n}} \sum_i X_iX_i^{\top}\delta_0 \left(K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q_i^{\top}\psi_0 > 0}\right) \\ -\frac{1}{\sqrt{n}} \sum_i X_i {\epsilon}_iK\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right) + \frac{1}{\sqrt{n}} \sum_i X_iX_i^{\top}\delta_0K\left(\frac{Q_i^{\top}\psi_0}{\sigma_n}\right)\mathds{1}_{Q_i^{\top}\psi_0 \le 0}
\end{pmatrix}\\
& = 2\begin{pmatrix} -\frac{1}{\sqrt{n}} \sum_i X_i {\epsilon}_i + R_1 \\ -\frac{1 }{\sqrt{n}} \sum_i X_i {\epsilon}_i\mathbf{1}_{Q_i^{\top}\psi_0 > 0} +R_2
\end{pmatrix}
\end{align*}
That $(1/\sqrt{n})\sum_i X_i {\epsilon}_i$ converges to a normal distribution follows from a simple application of the CLT. Therefore, once we prove that $R_1$ and $R_2$ are $o_p(1)$, we have:
$$
\sqrt{n} \nabla_{\gamma}\mathbb{M}^s_n(\theta_0^s) \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, 4V^{\gamma}\right)
$$
where:
\begin{equation}
\label{eq:def_v_gamma}
V^{\gamma} = \sigma_{\epsilon}^2 \begin{pmatrix}\mathbb{E}\left[XX^{\top}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \\
\mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \end{pmatrix} \,.
\end{equation}
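Note that $V^{\gamma}$ is automatically positive semi-definite: up to the factor $\sigma_{\epsilon}^2$, it is the second-moment matrix of the stacked vector $(X^{\top}, X^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0})^{\top}$. A small simulation sketch (the toy distribution of $X$ and of the indicator is an arbitrary illustrative assumption) makes this visible:

```python
import random

random.seed(0)

# Toy model: X in R^2, indicator D = 1{q > 0} for an auxiliary scalar q.
# Then V is proportional to E[w w^T] with w = (X, X*D), so every quadratic
# form v^T V v = E[(w^T v)^2] is non-negative.
n, d = 5000, 2
rows = []
for _ in range(n):
    x = [random.gauss(0, 1) for _ in range(d)]
    D = 1.0 if random.gauss(0, 1) > 0 else 0.0
    rows.append(x + [xi * D for xi in x])  # stacked vector w = (X, X*D)

# Empirical second-moment matrix M = (1/n) sum_i w_i w_i^T
M = [[sum(w[a] * w[b] for w in rows) / n for b in range(2 * d)] for a in range(2 * d)]

# Quadratic-form check over random directions v
min_qf = min(
    sum(M[a][b] * v[a] * v[b] for a in range(2 * d) for b in range(2 * d))
    for v in ([random.gauss(0, 1) for _ in range(2 * d)] for _ in range(200))
)
print(min_qf >= -1e-9)  # True: M is positive semi-definite by construction
```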
To complete the proof we now show that $R_1$ and $R_2$ are $o_p(1)$. For $R_1$, we show that $\mathbb{E}[R_1] \to 0$ and ${\sf var}(R_1) \to 0$. For the expectation part:
\begin{align*}
& \mathbb{E}[R_1] \\
& = \sqrt{n}\mathbb{E}\left[XX^{\top}\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \sqrt{n}\delta_0^{\top}\mathbb{E}\left[g(Q) \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \sqrt{n}\int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \delta_0^{\top}g\left(t-\tilde q^{\top}\tilde \psi_0, \tilde q\right)\left(\mathds{1}_{t > 0} - K\left(\frac{t}{\sigma_n}\right)\right)f_0(t \mid \tilde q) f(\tilde q) \ dt \ d\tilde q \\
& = \sqrt{n}\sigma_n \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \delta_0^{\top}g\left(\sigma_n z-\tilde q^{\top}\tilde \psi_0, \tilde q\right)\left(\mathds{1}_{z > 0} - K\left(z\right)\right)f_0(\sigma_n z \mid \tilde q) f(\tilde q) \ dz \ d\tilde q \\
& = \sqrt{n}\sigma_n \left[\int_{\mathbb{R}^{p-1}}\delta_0^{\top}g\left(-\tilde q^{\top}\tilde \psi_0, \tilde q\right) f_0(0 \mid \tilde q) f(\tilde q) \ d\tilde q \cancelto{0}{\left[\int_{-\infty}^{\infty} \left(\mathds{1}_{z > 0} - K\left(z\right)\right)\ dz\right]} + O(\sigma_n) \right] \\
& = O(\sqrt{n}\sigma_n^2) = o(1) \,.
\end{align*}
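As before, the leading term vanishes by kernel symmetry: $\int_{-\infty}^{\infty}(\mathds{1}_{z>0} - K(z))\,dz = \int_{0}^{\infty}(1 - K(z))\,dz - \int_{-\infty}^{0}K(z)\,dz = 0$ whenever $K(-z) = 1 - K(z)$. A numerical sketch with the Gaussian CDF (an illustrative choice of $K$) computes the two one-sided integrals and checks that they agree:

```python
import math

# Illustrative kernel: K = standard Gaussian CDF via erf.  By symmetry,
# K(-z) = 1 - K(z), so ∫_{-inf}^{0} K(z) dz = ∫_{0}^{inf} (1 - K(z)) dz
# and the two tails of ∫ (1{z>0} - K(z)) dz cancel exactly.
def K(z: float) -> float:
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def trapezoid(f, lo, hi, n):
    h = (hi - lo) / (n - 1)
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n - 1)))

left = trapezoid(K, -12.0, 0.0, 60001)
right = trapezoid(lambda z: 1.0 - K(z), 0.0, 12.0, 60001)
print(left, right)  # equal; for the Gaussian both are 1/sqrt(2*pi)
```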
For the variance part:
\begin{align*}
& {\sf var}(R_1) \\
& = {\sf var}\left(XX^{\top}\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right) \\
& \le \mathbb{E}\left[\|X\|^2 \delta_0^{\top}XX^{\top}\delta_0 \left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)^2\right] \\
& = O(\sigma_n ) = o(1) \,.
\end{align*}
Together with $\mathbb{E}[R_1] \to 0$, this establishes $R_1 = o_p(1)$. The proof for $R_2$ is similar and hence skipped for brevity.
\\\\
Our next step is to prove that $\sqrt{n\sigma_n}\nabla_{\psi}\mathbb{M}^s_n(\theta_0^s)$ and $\sqrt{n}\nabla \mathbb{M}^{s, \gamma}_n(\theta_0^s)$ are asymptotically uncorrelated. Towards that end, first note that:
\begin{align*}
& \mathbb{E}\left[X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) \right] \\
& = \mathbb{E}\left[XX^{\top}\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \mathbb{E}\left[g(Q)\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right] \\
& = \sigma_n \int \int g(\sigma_n t - \tilde q^{\top}\tilde \psi_0, \tilde q)(K(t) - \mathds{1}_{t>0})f_0(\sigma_n t \mid \tilde q) f(\tilde q) \ dt \ d\tilde q \\
& = \sigma_n \int g(- \tilde q^{\top}\tilde \psi_0, \tilde q)\cancelto{0}{\int_{-\infty}^{\infty} (K(t) - \mathds{1}_{t>0}) \ dt} \ f_0(0 \mid \tilde q) f(\tilde q) \ d\tilde q + O(\sigma_n^2) \\
& = O(\sigma_n^2) \,.
\end{align*}
Also, from the proof of $\mathbb{E}\left[\sqrt{n\sigma_n}\nabla_\psi \mathbb{M}_n^s(\theta_0)\right] \to 0$ above, we have:
$$
\mathbb{E}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] = O(\sigma_n^2) \,.
$$
Finally note that:
\begin{align*}
& \mathbb{E}\left[\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \times \right. \\
& \qquad \qquad \qquad \qquad \qquad \left. \left(X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^{\top}\right] \\
& = \mathbb{E}\left[\left(\left\{(X^{\top}\delta_0)^2(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0}) - 2{\epsilon} X^{\top}\delta_0\right\}\tilde QK'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \right. \\
& \qquad \qquad \qquad \qquad \qquad \left. \times \left\{XX^{\top}\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right) - X{\epsilon} \right\}\right] \\
& = \mathbb{E}\left[\left((X^{\top}\delta_0)^2(1 - 2\mathds{1}_{Q^{\top}\psi_0 > 0})\tilde QK'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \right. \\
& \qquad \qquad \qquad \left. \times \left(XX^{\top}\delta_0\left(K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) - \mathds{1}_{Q^{\top}\psi_0 > 0}\right)\right)^{\top}\right] \\
& \qquad \qquad + 2\sigma^2_{\epsilon} \mathbb{E}\left[XX^{\top}\delta_0\tilde Q^{\top}K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \\
&= O(\sigma_n ) \,.
\end{align*}
Now getting back to the covariance:
\begin{align*}
& \mathbb{E}\left[\left(\sqrt{n\sigma_n}\nabla_{\psi}\mathbb{M}^s_n(\theta_0)\right)\left(\sqrt{n}\nabla_\beta \mathbb{M}^s_n(\theta_0)\right)^{\top}\right] \\
& = \frac{1}{\sqrt{\sigma_n}}\mathbb{E}\left[\left(\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right) \times \right. \\
& \qquad \qquad \qquad \qquad \qquad \left. \left(X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right)^{\top}\right] \\
& \qquad \qquad + \frac{n-1}{\sqrt{\sigma_n}}\left[\mathbb{E}\left[\left\{(Y - X^{\top}(\beta_0 + \delta_0))^2 - (Y - X^{\top}\beta_0)^2\right\}\tilde Q K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right] \right. \\
& \qquad \qquad \qquad \qquad \times \left. \left(\mathbb{E}\left[X(X^{\top}\beta_0 - Y) + XX^{\top}\delta_0 K\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right) \right]\right)^{\top}\right] \\
& = \frac{1}{\sqrt{\sigma_n}} \times O(\sigma_n) + \frac{n-1}{\sqrt{\sigma_n}} \times O(\sigma_n^4) = o(1) \,.
\end{align*}
The proof for $\mathbb{E}\left[\left(\sqrt{n\sigma_n}\nabla_{\psi}\mathbb{M}^s_n(\theta_0)\right)\left(\sqrt{n}\nabla_\delta \mathbb{M}^s_n(\theta_0)\right)^{\top}\right]$ is similar and hence skipped. This completes the proof.
\subsection{Proof of Lemma \ref{conv-prob}}
First note that, by a simple application of the law of large numbers (and using the fact that $\|\psi^* - \psi_0\|/\sigma_n = o_p(1)$), we have:
\begin{align*}
\nabla^2 \mathbb{M}_n^{s, \gamma}(\theta^*) & = 2\begin{pmatrix}\frac{1}{n}\sum_i X_i X_i^{\top} & \frac{1}{n}\sum_i X_i X_i^{\top}K\left(\frac{Q_i^{\top}\psi^*}{\sigma_n}\right) \\ \frac{1}{n}\sum_i X_i X_i^{\top}K\left(\frac{Q_i^{\top}\psi^*}{\sigma_n}\right) & \frac{1}{n}\sum_i X_i X_i^{\top}K\left(\frac{Q_i^{\top}\psi^*}{\sigma_n}\right)
\end{pmatrix} \\
& \overset{p}{\longrightarrow} 2 \begin{pmatrix}\mathbb{E}\left[XX^{\top}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \\ \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \end{pmatrix} := 2Q^{\gamma}
\end{align*}
The proof of the fact that $\sqrt{\sigma_n}\nabla^2_{\psi \gamma}\mathbb{M}_n^s(\theta^*) = o_p(1)$ is the same as the proof of Lemma 5 of \cite{seo2007smoothed} and hence skipped. Finally, we show that
$$
\sigma_n \nabla^2_{\psi \psi}\mathbb{M}_n^s(\theta^*) \overset{p}{\longrightarrow} 2Q^{\psi}
$$
for some non-negative definite matrix $Q^{\psi}$. The proof is similar to that of Lemma 6 of \cite{seo2007smoothed}, and we conclude with:
$$
Q^{\psi} = \left(\int_{-\infty}^{\infty} -\text{sign}(t) K''(t) \ dt\right) \times \mathbb{E}\left[\delta_0^{\top} g\left(-\tilde Q^{\top}\tilde \psi_0, \tilde Q\right)\delta_0 \tilde Q \tilde Q^{\top} f_0(0 \mid \tilde Q)\right] \,.
$$
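Whenever $K'$ vanishes at $\pm\infty$, the scalar factor above has the closed form $\int_{-\infty}^{\infty} -\text{sign}(t) K''(t) \ dt = 2K'(0)$, by splitting the integral at $0$ and using the fundamental theorem of calculus on each half. A numerical sketch with the Gaussian kernel (an illustrative assumption, not necessarily the paper's $K$):

```python
import math

# Illustrative Gaussian choice: K'(t) = phi(t), so K''(t) = -t * phi(t).
# Splitting at 0: ∫ -sign(t) K''(t) dt = 2 K'(0), which for the Gaussian
# equals 2*phi(0) = 2/sqrt(2*pi) ≈ 0.7979.
def phi(t: float) -> float:
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def K_dd(t: float) -> float:
    return -t * phi(t)

def trapezoid(f, lo, hi, n):
    h = (hi - lo) / (n - 1)
    return h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n - 1)))

val = trapezoid(lambda t: -math.copysign(1.0, t) * K_dd(t), -10.0, 10.0, 100001)
print(val, 2.0 * phi(0.0))  # the two quantities agree
```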
This completes the proof. Summarizing, we have established:
\begin{align*}
\sqrt{n}\left(\hat \gamma^s - \gamma_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, \left(Q^\gamma\right)^{-1}V^\gamma \left(Q^\gamma\right)^{-1}\right) \,, \\
\sqrt{\frac{n}{\sigma_n}}\left(\hat \psi^s - \psi_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, \left(Q^\psi\right)^{-1}V^\psi \left(Q^\psi\right)^{-1}\right) \,.
\end{align*}
Moreover, the two limits above are asymptotically uncorrelated.
\section{Proof of Theorem \ref{thm:binary}}
\label{sec:supp_classification}
In this section, we present the details of the binary response model, the assumptions, a roadmap of the proof and then finally prove Theorem \ref{thm:binary}.
\noindent
\begin{assumption}
\label{as:distribution}
The below assumptions pertain to the parameter space and the distribution of $Q$:
\begin{enumerate}
\item The parameter space $\Theta$ is a compact subset of $\mathbb{R}^p$.
\item The support of the distribution of $Q$ contains an open subset around origin of $\mathbb{R}^p$ and the distribution of $Q_1$ conditional on $\tilde{Q} = (Q_2, \dots, Q_p)$ has, almost surely, everywhere positive density with respect to Lebesgue measure.
\end{enumerate}
\end{assumption}
\noindent
For notational convenience, define the following:
\begin{enumerate}
\item Define $f_{\psi} (\cdot \mid \tilde{Q})$ to be the conditional density of $Q^{\top}\psi$ given $\tilde{Q}$ for $\psi \in \Theta$. Note that the following relation holds: $$f_{\psi}(\cdot \mid \tilde{Q}) = f_{Q_1}(\cdot - \tilde{\psi}^{\top}\tilde{Q} \mid \tilde{Q}) \,,$$ where $f_{Q_1}(\cdot \mid \tilde Q)$ is the conditional density of $Q_1$ given $\tilde Q$.
\item Define $f_0(\cdot | \tilde{Q}) = f_{\psi_0}(\cdot | \tilde{Q})$ where $\psi_0$ is the unique minimizer of the population score function $M(\psi)$.
\item Define $f_{\tilde Q}(\cdot)$ to be the marginal density of $\tilde Q$.
\end{enumerate}
\noindent
The rest of the assumptions are as follows:
\begin{assumption}
\label{as:differentiability}
$f_0(y|\tilde{Q})$ is at-least once continuously differentiable almost surely for all $\tilde{Q}$. Also assume that there exists $\delta$ and $t$ such that $$\inf_{|y| \le \delta} f_0(y|\tilde{Q}) \ge t$$ for all $\tilde{Q}$ almost surely.
\end{assumption}
This assumption can be relaxed in the sense that one can allow the lower bound $t$ to depend on $\tilde{Q}$, provided that some further assumptions are imposed on $\mathbb{E}(t(\tilde{Q}))$. As this does not add anything of significance to the import of this paper, we use Assumption \ref{as:differentiability} to simplify certain calculations.
\begin{assumption}
\label{as:density_bound}
Define $m\left(\tilde{Q}\right) = \sup_{t}f_{Q_1}(t \mid \tilde{Q}) = \sup_{\psi \in \Theta} \sup_{t}f_{\psi}(t \mid \tilde{Q})$. Assume that $\mathbb{E}\left(m\left(\tilde{Q}\right)^2\right) < \infty$.
\end{assumption}
\begin{assumption}
\label{as:derivative_bound}
Define $h(\tilde{Q}) = \sup_{t} f_0'(t | \tilde{Q})$. Assume that $\mathbb{E}\left(h^2\left(\tilde{Q}\right)\right) < \infty$.
\end{assumption}
\begin{assumption}
\label{as:eigenval_bound}
Assume that $f_{\tilde{Q}}(0) > 0$ and also that the minimum eigenvalue of $\mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top}f_0(0|\tilde{Q})\right) > 0$.
\end{assumption}
\subsection{Sufficient conditions for above assumptions }
We now demonstrate some sufficient conditions for the above assumptions to hold. If the support of $Q$ is compact and both $f_{Q_1}(\cdot \mid \tilde Q)$ and $f'_{Q_1}(\cdot \mid \tilde Q)$ are uniformly bounded in $\tilde Q$, then Assumptions $(\ref{as:distribution}, \ \ref{as:differentiability}, \ \ref{as:density_bound},\ \ref{as:derivative_bound})$ follow immediately. The first part of Assumption \ref{as:eigenval_bound}, i.e. the assumption $f_{\tilde{Q}}(0) > 0$, is also fairly general and is satisfied by many standard probability distributions. The second part of Assumption \ref{as:eigenval_bound} is satisfied when $f_0(0|\tilde{Q})$ has a lower bound independent of $\tilde{Q}$ and $\tilde{Q}$ has a non-singular dispersion matrix.
We prove Theorem \ref{thm:binary} below: in the next subsection, we first provide a roadmap of the proof and then fill in the corresponding details. For the rest of this section, \emph{we choose our bandwidth $\sigma_n$ to satisfy $\frac{\log{n}}{n \sigma_n} \rightarrow 0$}.
\noindent
\begin{remark}
As our procedure requires the weaker condition $(\log{n})/(n \sigma_n) \rightarrow 0$, it is easy to see from Theorem \ref{thm:binary} that the rate of convergence can be almost as fast as $n/\sqrt{\log{n}}$.
\end{remark}
\begin{remark}
Our analysis remains valid in presence of an intercept term. Assume, without loss of generality, that the second co-ordinate of $Q$ is $1$ and let $\tilde{Q} = (Q_3, \dots, Q_p)$. It is not difficult to check that all our calculations go through under this new definition of $\tilde Q$. We, however, avoid this scenario for simplicity of exposition.
\end{remark}
\vspace{0.2in}
\noindent
{\bf Proof sketch: }We now provide a roadmap of the proof of Theorem \ref{thm:binary} in this paragraph, deferring the elaborate technical derivations to the latter part of the section.
Define the following: $$T_n(\psi) = \nabla \mathbb{M}_n^s(\psi)= -\frac{1}{n\sigma_n}\sum_{i=1}^n (Y_i - \gamma)K'\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\tilde{Q}_i$$ $$Q_n(\psi) = \nabla^2 \mathbb{M}_n^s(\psi) = -\frac{1}{n\sigma_n^2}\sum_{i=1}^n (Y_i - \gamma)K''\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\tilde{Q}_i\tilde{Q}_i^{\top}$$ As $\hat{\psi}^s$ minimizes $\mathbb{M}^s_n(\psi)$, we have $T_n(\hat{\psi}^s) = 0$. Using a one-step Taylor expansion we obtain:
\allowdisplaybreaks
\begin{align*}
T_n(\hat{\psi}^s) = T_n(\psi_0) + Q_n(\psi^*_n)\left(\hat{\psi}^s - \psi_0\right) = 0
\end{align*}
or:
\begin{equation}
\label{eq:main_eq} \sqrt{n/\sigma_n}\left(\hat{\psi}^s - \psi_0\right) = -\left(\sigma_nQ_n(\psi^*_n)\right)^{-1}\sqrt{n\sigma_n}T_n(\psi_0)
\end{equation}
for some intermediate point $\psi^*_n$ between $\hat \psi^s$ and $\psi_0$. The following lemma establishes the asymptotic properties of $T_n(\psi_0)$:
\begin{lemma}[Asymptotic Normality of $T_n$]
\label{asymp-normality}
If $n\sigma_n^{3} \rightarrow \lambda$, then
$$
\sqrt{n \sigma_n} T_n(\psi_0) \Rightarrow \mathcal{N}(\mu, \Sigma)
$$
where
$$\mu = -\sqrt{\lambda}\frac{\beta_0 - \alpha_0}{2}\left[\int_{-1}^{1} K'\left(t\right)|t| \ dt \right] \int_{\mathbb{R}^{p-1}}\tilde{Q} f_0'(0 \mid \tilde{Q}) \ dP(\tilde{Q})
$$
and
$$\Sigma = \left[a_1 \int_{-1}^{0} \left(K'\left(t\right)\right)^2 \ dt + a_2 \int_{0}^{1} \left(K'\left(t\right)\right)^2 \ dt \right]\int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} f_0(0 \mid \tilde{Q}) \ dP(\tilde{Q}) \,.
$$
Here $a_1 = (1 - \gamma)^2 \alpha_0 + \gamma^2 (1-\alpha_0), a_2 = (1 - \gamma)^2 \beta_0 + \gamma^2 (1-\beta_0)$ and $\alpha_0, \beta_0, \gamma$ are model parameters defined around equation \eqref{eq:new_loss}.
\end{lemma}
\noindent
In the case that $n \sigma_n^{3} \rightarrow 0$, we have $\lambda = 0$, and consequently:
$$\sqrt{n \sigma_n} T_n(\psi_0) \Rightarrow \mathcal{N}(0, \Sigma) \,.$$
Next, we analyze the convergence of $Q_n(\psi^*_n)^{-1}$ which is stated in the following lemma:
\begin{lemma}[Convergence in Probability of $Q_n$]
\label{conv-prob}
Under Assumptions (\ref{as:distribution} - \ref{as:eigenval_bound}), for any random sequence $\breve{\psi}_n$ such that $\|\breve{\psi}_n - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$,
$$
\sigma_n Q_n(\breve{\psi}_n) \overset{P} \rightarrow Q = \frac{\beta_0 - \alpha_0}{2}\left(\int_{-1}^{1} -K''\left(t \right)\text{sign}(t) \ dt\right) \ \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} f_0(0 \mid \tilde{Q})\right) \,.
$$
\end{lemma}
It will be shown later that the condition $\|\breve{\psi}_n - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$ needed in Lemma \ref{conv-prob} holds for the (random) sequence $\psi^*_n$. Then, combining Lemma \ref{asymp-normality} and Lemma \ref{conv-prob}, we conclude from equation \eqref{eq:main_eq} that:
$$
\sqrt{n/\sigma_n} \left(\hat{\psi}^s - \psi_0\right) \Rightarrow \mathcal{N}(0, Q^{-1}\Sigma Q^{-1}) \,.
$$
This concludes the proof of Theorem \ref{thm:binary} with $\Gamma = Q^{-1}\Sigma Q^{-1}$.
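The roadmap above can be exercised on a toy instance. The following sketch simulates a one-dimensional version of the binary response model (all parameter values, the Gaussian CDF smoothing kernel, and the grid search below are illustrative assumptions, not the paper's choices), minimizes the smoothed criterion $\mathbb{M}^s_n$, and checks that the minimizer lands near the true threshold:

```python
import math, random

random.seed(1)

# Toy binary response model with a scalar threshold (illustrative values):
#   Y | Q ~ Bernoulli(beta0) if Q + b0 > 0, else Bernoulli(alpha0).
alpha0, beta0, b0 = 0.2, 0.8, 0.3
gamma = 0.5 * (alpha0 + beta0)
n, sigma = 10000, 0.1

def K(t):  # Gaussian CDF as smoothing kernel (an assumption, not the paper's K)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

q = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 if random.random() < (beta0 if qi + b0 > 0 else alpha0) else 0.0 for qi in q]

# Smoothed criterion: (1/n) sum (Y_i - gamma) * (1 - K((Q_i + b)/sigma)),
# a smooth surrogate for (1/n) sum (Y_i - gamma) * 1{Q_i + b <= 0}.
def M_s(b):
    return sum((yi - gamma) * (1.0 - K((qi + b) / sigma)) for qi, yi in zip(q, y)) / n

grid = [i / 100.0 for i in range(-100, 101)]
b_hat = min(grid, key=M_s)
print(b_hat)  # close to the true threshold b0 = 0.3
```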
\newline
\newline
Observe that, to show $\left\|\psi^*_n - \psi_0 \right\| = o_P(\sigma_n)$, it suffices to prove that $\left\|\hat \psi^s - \psi_0 \right\| = o_P(\sigma_n)$. Towards that direction, we have the following lemma:
\begin{lemma}[Rate of convergence]
\label{lem:rate}
Under Assumptions (\ref{as:distribution} - \ref{as:eigenval_bound}),
$$
n^{2/3}\sigma_n^{-1/3} d^2_n\left(\hat \psi^s, \psi_0^s\right) = O_P(1) \,,
$$
where
$$
d_n\left(\psi, \psi_0^s\right) = \sqrt{\left[\frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \mathds{1}(\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n) + \|\psi - \psi_0^s\| \mathds{1}(\|\psi - \psi_0^s\| \ge \mathcal{K}\sigma_n)\right]}
$$
for some specific constant $\mathcal{K}$. (This constant will be mentioned precisely in the proof).
\end{lemma}
\noindent
The lemma immediately leads to the following corollary:
\begin{corollary}
\label{rate-cor}
If $n\sigma_n \rightarrow \infty$ then $\|\hat \psi^s - \psi_0^s\|/\sigma_n \overset{P} \longrightarrow 0$.
\end{corollary}
\noindent
Finally, to establish $\|\hat \psi^s - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$, all we need is that $\|\psi_0^s - \psi_0\|/\sigma_n \rightarrow 0$ as demonstrated in the following lemma:
\begin{lemma}[Convergence of population minimizer]
\label{bandwidth}
For any sequence of $\sigma_n \rightarrow 0$, we have: $\|\psi_0^s - \psi_0\|/\sigma_n \rightarrow 0$.
\end{lemma}
\noindent
Hence the final roadmap is the following: using Lemma \ref{bandwidth} and Corollary \ref{rate-cor} we establish that $\|\hat \psi^s - \psi_0\|/\sigma_n \overset{P}{\rightarrow} 0$ if $n\sigma_n \rightarrow \infty$. This, in turn, enables us to prove that $\sigma_n Q_n(\psi^*_n) \overset{P}{\rightarrow} Q$, which, along with Lemma \ref{asymp-normality}, establishes the main theorem.
\begin{remark}
\label{rem:gamma}
In the above analysis, we have assumed knowledge of $\gamma \in (\alpha_0, \beta_0)$. However, all our calculations go through if we replace $\gamma$ by an estimate (say $\bar Y$), with more tedious book-keeping. One way to simplify the calculations is to split the data into two halves, estimate $\gamma$ (via $\bar Y$) from the first half, and then use it as a proxy for $\gamma$ in the second half of the data to estimate $\psi_0$. As this procedure does not add anything of interest to the core idea of our proof, we refrain from doing so here.
\end{remark}
\subsection{Variant of quadratic loss function}
\label{loss_func_eq}
In this sub-section we argue why the loss function in \eqref{eq:new_loss} is a variant of the quadratic loss function for any $\gamma \in (\alpha_0, \beta_0)$. Assume that we know $\alpha_0, \beta_0$ and seek to estimate $\psi_0$. We start with an expansion of the quadratic loss function:
\begin{align*}
& \mathbb{E}\left(Y - \alpha_0\mathds{1}_{Q^{\top}\psi \le 0} - \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \\
& = \mathbb{E}\left(\mathbb{E}\left(\left(Y - \alpha_0\mathds{1}_{Q^{\top}\psi \le 0} - \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \mid Q\right)\right) \\
& = \mathbb{E}_{Q}\left(\mathbb{E}\left( Y^2 \mid Q \right) \right) + \mathbb{E}_{Q}\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \\
& \qquad \qquad \qquad -2 \mathbb{E}_{Q}\left(\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right) \mathbb{E}(Y \mid Q)\right) \\
& = \mathbb{E}_Q\left(\mathbb{E}\left( Y \mid Q \right) \right) + \mathbb{E}_Q\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right)^2 \qquad [\text{since } Y \in \{0,1\}, \ Y^2 = Y]\\
& \qquad \qquad \qquad -2 \mathbb{E}_Q\left(\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right) \mathbb{E}(Y \mid Q)\right)
\end{align*}
Since the first summand is just $\mathbb{E} Y$, it is irrelevant to the minimization. A cursory inspection shows that it suffices to minimize
\begin{align}
& \mathbb{E}\left(\left(\alpha_0\mathds{1}_{Q^{\top}\psi \le 0} + \beta_0 \mathds{1}_{Q^{\top}\psi > 0}\right) - \mathbb{E}(Y \mid Q)\right)^2 \notag\\
\label{eq:lse_1} & = (\beta_0 - \alpha_0)^2 \P\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right)
\end{align}
On the other hand the loss we are considering is $\mathbb{E}\left((Y - \gamma)\mathds{1}_{Q^{\top}\psi \le 0}\right)$:
\begin{align}
\label{eq:lse_2} \mathbb{E}\left((Y - \gamma)\mathds{1}_{Q^{\top}\psi \le 0}\right) & = (\beta_0 - \gamma)\P(Q^{\top}\psi_0 > 0 , Q^{\top}\psi \le 0) \notag \\
& \hspace{10em}+ (\alpha_0 - \gamma)\P(Q^{\top}\psi_0 \le 0, Q^{\top}\psi \le 0)\,,
\end{align}
which can be rewritten as:
\begin{align*}
& (\alpha_0 - \gamma)\P(Q^{\top} \psi_0 \leq 0) + (\beta_0 - \gamma)\,\P(Q^{\top} \psi_0 > 0, Q^{\top} \psi \leq 0) \\
& \qquad \qquad \qquad + (\gamma - \alpha_0)\,\P(Q^{\top} \psi_0 \leq 0, Q^{\top} \psi > 0) \,.
\end{align*}
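To illustrate the conclusion of this subsection numerically, the following Monte Carlo sketch (a toy two-dimensional design with illustrative parameter values) plugs $\mathbb{E}(Y \mid Q)$ into the risk $\psi \mapsto \mathbb{E}[(Y-\gamma)\mathds{1}_{Q^{\top}\psi \le 0}]$, evaluates it over unit directions, and checks that it dips at $\psi_0$:

```python
import math, random

random.seed(2)

# Toy design (all values illustrative): Q is a 2-D Gaussian, directions are
# psi(theta) = (cos theta, sin theta), and the true direction is at theta0.
alpha0, beta0, gamma, theta0 = 0.2, 0.8, 0.5, 0.7
n = 20000
phis = [math.atan2(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

# Conditional mean E[Y | Q]: beta0 on {Q^T psi0 > 0}, alpha0 otherwise;
# the sign of Q^T psi(theta) is the sign of cos(angle(Q) - theta).
m = [beta0 if math.cos(p - theta0) > 0 else alpha0 for p in phis]

def risk(theta):
    # empirical version of E[(E[Y|Q] - gamma) * 1{Q^T psi(theta) <= 0}]
    return sum((mi - gamma) * (1.0 if math.cos(p - theta) <= 0 else 0.0)
               for p, mi in zip(phis, m)) / n

grid = [theta0 - 1.0 + i * 0.02 for i in range(101)]
theta_hat = min(grid, key=risk)
print(abs(theta_hat - theta0))  # small: the risk is minimized near theta0
```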
By Assumption \ref{as:distribution}, for $\psi \neq \psi_0$, $\P\left(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)\right) > 0$. As an easy consequence, equation \eqref{eq:lse_1} is uniquely minimized at $\psi = \psi_0$. To see that the same is true for \eqref{eq:lse_2} when $\gamma \in (\alpha_0, \beta_0)$, note that the first summand in the equation does not depend on $\psi$, that the second and third summands are both non-negative and that at least one of these must be positive under Assumption \ref{as:distribution}.
\subsection{Linear curvature of the population score function}
Before going into the proofs of the Lemmas and the Theorem, we argue that the population score function $M(\psi)$ has linear curvature near $\psi_0$, which is useful in proving Lemma \ref{lem:rate}. We begin with the following observation:
\begin{lemma}[Curvature of population risk]
\label{lem:linear_curvature}
Under Assumption \ref{as:differentiability} we have: $$u_- \|\psi - \psi_0\|_2 \le \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \le u_+ \|\psi - \psi_0\|_2$$ for some constants $0 < u_- < u_+ < \infty$, for all $\psi \in \Theta$.
\end{lemma}
\begin{proof}
First, we show that
$$
\mathbb{M}(\psi) - \mathbb{M}(\psi_0) = \frac{(\beta_0 - \alpha_0)}{2} \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0))
$$ which follows from the calculation below:
\begin{align*}
& \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \\
& = \mathbb{E}\left((Y - \gamma)\mathds{1}(Q^{\top}\psi \le 0)\right) - \mathbb{E}\left((Y - \gamma)\mathds{1}(Q^{\top}\psi_0 \le 0)\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \mathbb{E}\left(\left\{\mathds{1}(Q^{\top}\psi \le 0) - \mathds{1}(Q^{\top}\psi_0 \le 0)\right\}\left\{\mathds{1}(Q^{\top}\psi_0 \ge 0) - \mathds{1}(Q^{\top}\psi_0 \le 0)\right\}\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \mathbb{E}\left(\left\{\mathds{1}(Q^{\top}\psi \le 0, Q^{\top}\psi_0 \ge 0) - \mathds{1}(Q^{\top}\psi \le 0, Q^{\top}\psi_0 \le 0) + \mathds{1}(Q^{\top}\psi_0 \le 0)\right\}\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \mathbb{E}\left(\left\{\mathds{1}(Q^{\top}\psi \le 0, Q^{\top}\psi_0 \ge 0) + \mathds{1}(Q^{\top}\psi \ge 0, Q^{\top}\psi_0 \le 0)\right\}\right) \\
& = \frac{\beta_0 - \alpha_0}{2} \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \,.
\end{align*}
We now analyze the probability of the wedge-shaped region, i.e.\ the region between the two hyperplanes $Q^{\top}\psi = 0$ and $Q^{\top}\psi_0 = 0$. Note that,
\allowdisplaybreaks
\begin{align}
& \P(Q^{\top}\psi > 0 > Q^{\top}\psi_0) \notag\\
& = \P(-\tilde{Q}^{\top}\tilde{\psi} < Q_1 < -\tilde{Q}^{\top}\tilde{\psi}_0) \notag\\
\label{lin1} & = \mathbb{E}\left[\left(F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right)\right)\mathds{1}\left(\tilde{Q}^{\top}\tilde{\psi}_0 \le \tilde{Q}^{\top}\tilde{\psi}\right)\right]
\end{align}
A similar calculation yields
\allowdisplaybreaks
\begin{align}
\label{lin2} \P(Q^{\top}\psi < 0 < Q^{\top}\psi_0) & = \mathbb{E}\left[\left(F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right)\mathds{1}\left(\tilde{Q}^{\top}\tilde{\psi}_0 \ge \tilde{Q}^{\top}\tilde{\psi}\right)\right]
\end{align}
Adding both sides of equations \eqref{lin1} and \eqref{lin2} we get:
\begin{equation}
\label{wedge_expression}
\P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) = \mathbb{E}\left[\left|F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{Q_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\right]
\end{equation}
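The identity \eqref{wedge_expression} can be sanity-checked by Monte Carlo in a toy two-dimensional design (the independent standard Gaussian coordinates and the particular slopes below are illustrative assumptions): the left side is estimated by the empirical frequency of sign disagreement and the right side by averaging the conditional-CDF differences:

```python
import math, random

random.seed(3)

# Toy design: Q = (Q1, Q2) with Q1 | Q2 ~ N(0,1), so F_{Q1|Q2} = Phi;
# psi = (1, b) and psi0 = (1, b0) with illustrative slopes b, b0.
b, b0, n = 0.8, 0.3, 200000

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

lhs_hits, rhs_sum = 0, 0.0
for _ in range(n):
    q2 = random.gauss(0, 1)
    q1 = random.gauss(0, 1)
    s  = (q1 + b * q2 > 0)
    s0 = (q1 + b0 * q2 > 0)
    lhs_hits += (s != s0)
    # conditional-CDF form of the wedge probability, given Q2 = q2
    rhs_sum += abs(Phi(-b * q2) - Phi(-b0 * q2))

lhs, rhs = lhs_hits / n, rhs_sum / n
print(lhs, rhs)  # the two wedge-probability expressions agree up to MC error
```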
Define $\psi_{\max} = \sup_{\psi \in \Theta}\|\psi\|$, which is finite by Assumption \ref{as:distribution}. Below, we establish the lower bound:
\allowdisplaybreaks
\begin{align*}
& \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \notag\\
& = \mathbb{E}\left[\left|F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\right] \\
& \ge \mathbb{E}\left[\left|F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\mathds{1}\left(\left|\tilde{Q}^{\top}\tilde{\psi}\right| \vee \left| \tilde{Q}^{\top}\tilde{\psi}_0\right| \le \delta\right)\right] \hspace{0.2in} [\delta \ \text{as in Assumption \ref{as:differentiability}}]\\
& \ge \mathbb{E}\left[\left|F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& \ge t \mathbb{E}\left[\left| \tilde{Q}^{\top}(\psi - \psi_0)\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& = t \|\psi - \psi_0\| \,\mathbb{E}\left[\left| \tilde{Q}^{\top}\frac{(\psi - \psi_0)}{\|\psi - \psi_0\|}\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& \ge t\|\psi - \psi_0\| \inf_{\gamma \in S^{p-1}}\mathbb{E}\left[\left| \tilde{Q}^{\top}\gamma\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] \\
& = u_-\|\psi - \psi_0\| \,.
\end{align*}
At the very end, we have used the fact that $$\inf_{\gamma \in S^{p-1}}\mathbb{E}\left[\left| \tilde{Q}^{\top}\gamma\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] > 0 \,.$$ To prove this, suppose the infimum is $0$. Then there exists $\gamma_0 \in S^{p-1}$ such that
$$\mathbb{E}\left[\left| \tilde{Q}^{\top}\gamma_0\right| \mathds{1}\left(\|\tilde{Q}\| \le \delta/\psi_{\max}\right)\right] = 0 \,,$$
as the above expectation is continuous in $\gamma$ and any continuous function on a compact set attains its infimum. Hence, $\left|\tilde{Q}^{\top}\gamma_0 \right| = 0$ almost surely on the event $\{\|\tilde{Q}\| \le \delta/\psi_{\max}\}$, which implies that $\tilde{Q}$ does not have full support, violating Assumption \ref{as:distribution} (2). This gives a contradiction.
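The positivity of this infimum can also be probed numerically. The sketch below (all choices hypothetical: $\tilde{Q} \sim N(0, I_2)$, with truncation radius $1$ standing in for $\delta/\psi_{\max}$) evaluates the expectation over a grid of directions on the unit sphere:

```python
import numpy as np

# Monte Carlo illustration that inf over the unit sphere of
# E[|Qt'gamma| 1(||Qt|| <= r)] is strictly positive when Qt has full
# support. Hypothetical design: Qt ~ N(0, I_2), truncation radius r = 1.
rng = np.random.default_rng(1)
qt = rng.standard_normal((200_000, 2))
inside = np.linalg.norm(qt, axis=1) <= 1.0

angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
gammas = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit vectors
# One expectation estimate per direction gamma on the grid.
vals = np.mean(np.abs(qt @ gammas.T) * inside[:, None], axis=0)
```

By rotational symmetry of this toy design, the values are (up to simulation noise) constant in $\gamma$ and bounded away from zero.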
\\\\
\noindent
Establishing the upper bound is relatively easier. Going back to equation \eqref{wedge_expression}, we have:
\begin{align*}
& \P(\text{sign}(Q^{\top}\psi) \neq \text{sign}(Q^{\top}\psi_0)) \notag\\
& = \mathbb{E}\left[\left|F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}\right) - F_{X_1 | \tilde{Q}}\left(-\tilde{Q}^{\top}\tilde{\psi}_0\right)\right|\right] \\
& \le \mathbb{E}\left[m(\tilde Q) \, \|\tilde Q\| \,\|\psi- \psi_0\|\right] \hspace{0.2in} [m(\cdot) \ \text{is defined in Assumption \ref{as:density_bound}}]\\
& \le u_+ \|\psi - \psi_0\| \,,
\end{align*}
as $ \mathbb{E}\left[m(\tilde Q) \|\tilde Q\|\right] < \infty$ by Assumption \ref{as:density_bound} and the sub-Gaussianity of $\tilde Q$.
\end{proof}
\subsection{Proof of Lemma \ref{asymp-normality}}
\begin{proof}
We first prove that under our assumptions $\sigma_n^{-1} \mathbb{E}(T_n(\psi_0)) \overset{n \to \infty}\longrightarrow A$, where $$A = -\frac{\beta_0 - \alpha_0}{2}\left[\int_{-\infty}^{\infty} K'\left(t\right)|t| \ dt \right] \int_{\mathbb{R}^{p-1}}\tilde{Q}f_0'(0 | \tilde{Q}) \ dP(\tilde{Q}) \,.$$ The proof is based on a Taylor expansion of the conditional density:
\allowdisplaybreaks
\begin{align*}
& \sigma_n^{-1} \mathbb{E}(T_n(\psi_0)) \\
& = -\sigma_n^{-2}\mathbb{E}\left((Y - \gamma)K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\tilde{Q}\right) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-2}\mathbb{E}\left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\tilde{Q}(\mathds{1}(Q^{\top}\psi_0 \ge 0) - \mathds{1}(Q^{\top}\psi_0 \le 0))\right) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-2}\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(\frac{z}{\sigma_n}\right)f_0(z|\tilde{Q}) \ dz - \int_{-\infty}^{0} K'\left(\frac{z}{\sigma_n}\right)f_0(z|\tilde{Q}) \ dz \right] \ dP(\tilde{Q}) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-1}\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(t\right)f_0(\sigma_n t|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right)f_0(\sigma_n t |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \\
& = -\frac{\beta_0 - \alpha_0}{2}\sigma_n^{-1}\left[\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(t\right)f_0(0|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right)f_0(0 |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \right. \\
& \qquad \qquad \qquad + \left. \int_{\mathbb{R}^{p-1}}\sigma_n \tilde{Q}\left[\int_{0}^{\infty} K'\left(t\right)tf_0'(\lambda \sigma_n t|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right) t f_0'(\lambda \sigma_n t |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \right] \hspace{0.2in} [0 < \lambda < 1]\\
& = -\frac{\beta_0 - \alpha_0}{2}\int_{\mathbb{R}^{p-1}}\tilde{Q}\left[\int_{0}^{\infty} K'\left(t\right)tf_0'(\lambda \sigma_n t|\tilde{Q}) \ dt - \int_{-\infty}^{0} K'\left(t\right)tf_0'(\lambda \sigma_nt |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \hspace{0.2in} \left[\because \int_{0}^{\infty} K'(t) \ dt = \int_{-\infty}^{0} K'(t) \ dt = \frac12\right]\\
& \underset{n \rightarrow \infty} \longrightarrow -\frac{\beta_0 - \alpha_0}{2}\left[\int_{-\infty}^{\infty} K'\left(t\right)|t| \ dt \right] \int_{\mathbb{R}^{p-1}}\tilde{Q}f_0'(0 | \tilde{Q}) \ dP(\tilde{Q}) = A
\end{align*}
Next, we prove that $\mbox{Var}\left(\sqrt{n\sigma_n}T_n(\psi_0)\right)\longrightarrow \Sigma$ as $n \rightarrow \infty$, where $\Sigma$ is as defined in Lemma \ref{asymp-normality}. Note that:
\allowdisplaybreaks
\begin{align*}
\mbox{Var}\left(\sqrt{n\sigma_n}T_n(\psi_0)\right) & = \sigma_n \mathbb{E}\left((Y - \gamma)^2\left(K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)^2\frac{\tilde{Q}\tilde{Q}^{\top}}{\sigma_n^2}\right)\right) - \sigma_n \mathbb{E}(T_n(\psi_0))\mathbb{E}(T_n(\psi_0))^{\top}
\end{align*}
As $\sigma_n^{-1}\mathbb{E}(T_n(\psi_0)) \rightarrow A$, we can conclude that $\sigma_n \mathbb{E}(T_n(\psi_0))\mathbb{E}(T_n(\psi_0))^{\top} \rightarrow 0$.
Define $a_1 = (1 - \gamma)^2 \alpha_0 + \gamma^2 (1-\alpha_0), a_2 = (1 - \gamma)^2 \beta_0 + \gamma^2 (1-\beta_0)$. For the first summand:
\allowdisplaybreaks
\begin{align*}
& \sigma_n \mathbb{E}\left((Y - \gamma)^2\left\{K'\left(\frac{Q^{\top}\psi_0}{\sigma_n}\right)\right\}^2\frac{\tilde{Q}\tilde{Q}^{\top}}{\sigma_n^2}\right) \\
& = \frac{1}{\sigma_n} \int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} \left[a_1 \int_{-\infty}^{0} \left\{K'\left(\frac{z}{\sigma_n}\right)\right\}^2 f(z|\tilde{Q}) \ dz \right. \notag \\ & \left. \qquad \qquad \qquad + a_2 \int_{0}^{\infty}\left\{K'\left(\frac{z}{\sigma_n}\right)\right\}^2 f(z|\tilde{Q}) \ dz \right] \ dP(\tilde{Q})\\
& = \int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} \left[a_1 \int_{-\infty}^{0} \left\{K'\left(t\right)\right\}^2 f(\sigma_n t|\tilde{Q}) \ dt + a_2 \int_{0}^{\infty} \left\{K'\left(t\right)\right\}^2 f(\sigma_n t |\tilde{Q}) \ dt \right] \ dP(\tilde{Q}) \\
& \underset{n \rightarrow \infty} \longrightarrow \left[a_1 \int_{-\infty}^{0} \left\{K'\left(t\right)\right\}^2 \ dt + a_2 \int_{0}^{\infty} \left\{K'\left(t\right)\right\}^2 \ dt \right]\int_{\mathbb{R}^{p-1}}\tilde{Q}\tilde{Q}^{\top} f(0|\tilde{Q}) \ dP(\tilde{Q}) \ \ \overset{\Delta} = \Sigma \, .
\end{align*}
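For a concrete kernel, the constants entering $\Sigma$ are explicit. Assuming, purely for illustration, the Gaussian CDF kernel $K = \Phi$ (so $K' = \varphi$, with $\int_{-\infty}^{\infty} \varphi(t)^2\,dt = 1/(2\sqrt{\pi})$) and $\gamma = 1/2$, the scalar factor multiplying $\int \tilde{Q}\tilde{Q}^{\top} f(0|\tilde{Q})\,dP(\tilde{Q})$ can be computed numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Kernel constants in Sigma for the (hypothetical) Gaussian CDF kernel
# K = Phi, so K' = phi, the standard normal density.
int_neg = quad(lambda t: norm.pdf(t) ** 2, -np.inf, 0)[0]
int_pos = quad(lambda t: norm.pdf(t) ** 2, 0, np.inf)[0]

# Illustrative parameter values; with gamma = 1/2, a_1 = a_2 = 1/4
# regardless of alpha_0 and beta_0.
alpha0, beta0, gamma = 0.25, 0.75, 0.5
a1 = (1 - gamma) ** 2 * alpha0 + gamma ** 2 * (1 - alpha0)
a2 = (1 - gamma) ** 2 * beta0 + gamma ** 2 * (1 - beta0)
scale = a1 * int_neg + a2 * int_pos  # multiplies E[Qt Qt' f(0|Qt)]
```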
Finally, suppose $n \sigma_n^{3} \rightarrow \lambda$. Define $W_n = \sqrt{n\sigma_n}\left[T_n(\psi) - \mathbb{E}(T_n(\psi))\right]$. Using Lemma 6 of Horowitz \cite{horowitz1992smoothed}, it is easily established that $W_n \Rightarrow N(0, \Sigma)$. Also, we have:
\allowdisplaybreaks
\begin{align*}
\sqrt{n\sigma_n}\mathbb{E}(T_n(\psi_0)) = \sqrt{n\sigma_n^{3}}\,\sigma_n^{-1}\mathbb{E}(T_n(\psi_0)) & \rightarrow \sqrt{\lambda}A = \mu
\end{align*}
As $\sqrt{n\sigma_n}T_n(\psi_0) = W_n + \sqrt{n\sigma_n}\mathbb{E}(T_n(\psi_0))$, we conclude that $\sqrt{n\sigma_n} T_n(\psi_0) \Rightarrow N(\mu, \Sigma)$.
\end{proof}
\subsection{Proof of Lemma \ref{conv-prob}}
\begin{proof}
Let $\epsilon_n \downarrow 0$ be a sequence such that $\P(\|\breve{\psi}_n - \psi_0\| \le \epsilon_n \sigma_n) \rightarrow 1$. Define $\Psi_n = \{\psi: \|\psi - \psi_0\| \le \epsilon_n \sigma_n\}$. We show that $$\sup_{\psi \in \Psi_n} \|\sigma_n Q_n(\psi) - Q\|_F \overset{P} \to 0 \,,$$ where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. We sometimes omit the subscript $F$ when there is no ambiguity. Define $\mathcal{G}_n$ to be the collection of functions:
$$
\mathcal{G}_n= \left\{g_{\psi}(y, q) = -\frac{1}{\sigma_n}(y - \gamma)\tilde q\tilde q^{\top} \left(K''\left(\frac{q^{\top}\psi}{\sigma_n}\right) - K''\left(\frac{q^{\top}\psi_0}{\sigma_n}\right)\right), \psi \in \Psi_n \right\}
$$
That the function class $\mathcal{G}_n$ has bounded uniform entropy integral (BUEI) is immediate from the fact that the class of functions $\{q \mapsto q^{\top}\psi\}$ has finite VC dimension (as the class of hyperplanes has finite VC dimension), and this VC dimension does not change upon constant scaling. Therefore $q \mapsto q^{\top}\psi/\sigma_n$ also has finite VC dimension, not depending on $n$, and hence is BUEI. As composition with a monotone function, multiplication by a fixed (parameter-free) function, and multiplication of two BUEI classes all preserve the BUEI property, we conclude that $\mathcal{G}_n$ is BUEI.
We first expand the expression in two terms:
\allowdisplaybreaks
\begin{align*}
\sup_{\psi \in \Psi_n} \|\sigma_n Q_n(\psi) - Q\| & \le \sup_{\psi \in \Psi_n} \|\sigma_n Q_n(\psi) - \mathbb{E}(\sigma_n Q_n(\psi))\| + \sup_{\psi \in \Psi_n} \| \mathbb{E}(\sigma_n Q_n(\psi)) - Q\| \\
& = \|(\mathbb{P}_n - P)\|_{\mathcal{G}_n} + \sup_{\psi \in \Psi_n}\| \mathbb{E}(\sigma_n Q_n(\psi)) - Q\| \\
& =: T_{1,n} + T_{2,n} \,.
\end{align*}
\vspace{0.2in}
\noindent
That $T_{1,n} \overset{P} \to 0$ follows from the uniform law of large numbers for a BUEI class (e.g.\ combining Theorem 2.4.1 and Theorem 2.6.7 of \cite{vdvw96}).
For the uniform convergence of the second summand $T_{2,n}$, define $\chi_n = \{\tilde{Q}: \|\tilde{Q}\| \le 1/\sqrt{\epsilon_n}\}$. Then $\chi_n \uparrow \mathbb{R}^{p-1}$. Also, for any $\psi \in \Psi_n$, if we define $\gamma_n \equiv \gamma_n(\psi) = (\psi - \psi_0)/\sigma_n$, then $|\gamma_n^{\top}\tilde{Q}| \le \sqrt{\epsilon_n}$ for all $n$ and for all $\psi \in \Psi_n, \tilde{Q} \in \chi_n$. Now,
\allowdisplaybreaks
\begin{align*}
& \sup_{\psi \in \Psi_n}\| \mathbb{E}(\sigma_n Q_n(\psi)) - Q\| \notag \\
&\qquad \qquad = \sup_{\psi \in \Psi_n}\| (\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n))-Q_1) + (\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n^c))-Q_2)\|
\end{align*}
where $$Q_1 = \frac{\beta_0 - \alpha_0}{2}\left(\int_{-\infty}^{\infty} -K''\left(t \right)\text{sign}(t) \ dt\right) \ \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\mathds{1}(\chi_n) \right)$$ $$Q_2 = \frac{\beta_0 - \alpha_0}{2}\left(\int_{-\infty}^{\infty} -K''\left(t \right)\text{sign}(t) \ dt\right) \ \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\mathds{1}(\chi_n^c) \right) \,.$$
Note that
\allowdisplaybreaks
\begin{flalign}
& \|\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n)) - Q_1\| \notag\\
& =\left\| \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) f_0(\sigma_n (t-\tilde{Q}^{\top}\gamma_n) |\tilde{Q}) \ dt \right. \right. \right. \notag \\
& \left. \left. \left. \qquad \qquad - \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t\right) f_0(\sigma_n (t - \tilde{Q}^{\top}\gamma_n) | \tilde{Q}) \ dt \right]dP(\tilde{Q})\right]\right. \notag\\ & \left. \qquad \qquad \qquad - \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\left[\int_{-\infty}^{0} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right]dP(\tilde{Q})\right] \right \|\notag\\
& =\left \| \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) (f_0(\sigma_n (t-\tilde{Q}^{\top}\gamma_n) |\tilde{Q})-f_0(0 | \tilde{Q})) \ dt \right. \right. \right.\notag\\& \qquad \qquad- \left. \left. \left. \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t\right) (f_0(\sigma_n (t - \tilde{Q}^{\top}\gamma_n) | \tilde{Q}) - f_0(0 | \tilde{Q})) \ dt \right]dP(\tilde{Q})\right]\right. \notag\\ & \qquad \qquad \qquad + \left. \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n} \tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q}) \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) \ dt - \int_{-\infty}^{0} K''\left(t \right) \ dt \right. \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. \left. + \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right]dP(\tilde{Q})\right] \right \|\notag\\
& \le \frac{\beta_0 - \alpha_0}{2}\sigma_n \int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\|h(\tilde{Q})\int_{-\infty}^{\infty}|K''(t)||t - \gamma_n^{\top}\tilde{Q}| \ dt \ dP(\tilde{Q}) \notag\\ & \qquad \qquad + \frac{\beta_0 - \alpha_0}{2} \int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\| f_0(0 | \tilde{Q}) \left[\left| \int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) \ dt - \int_{-\infty}^{0} K''\left(t \right) \ dt \right| \right. \notag \\ & \left. \qquad \qquad \qquad + \left| \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right|\right] \ dP(\tilde{Q})\notag\\
& \le \frac{\beta_0 - \alpha_0}{2}\left[\sigma_n \int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\|h(\tilde{Q})\int_{-\infty}^{\infty}|K''(t)||t - \gamma_n^{\top}\tilde{Q}| \ dt \ dP(\tilde{Q}) \right. \notag \\
& \left. \qquad \qquad \qquad + 2\int_{\chi_n}\|\tilde{Q}\tilde{Q}^{\top}\| f_0(0 | \tilde{Q}) \left|K'(0) - K'(\gamma_n^{\top}\tilde{Q})\right| \ dP(\tilde{Q})\right]\notag \\
\label{cp1}&\rightarrow 0 \hspace{0.3in} [\text{As} \ n \rightarrow \infty] \,,
\end{flalign}
by DCT and Assumptions \ref{as:distribution} and \ref{as:derivative_bound}. For the second part:
\allowdisplaybreaks
\begin{align}
& \|\mathbb{E}(\sigma_n Q_n(\psi)\mathds{1}(\chi_n^c)) - Q_2\|\notag\\
& =\left\| \frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n^c} \tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{\tilde{Q}^{\top}\gamma_n} K''\left(t \right) f_0(\sigma_n (t-\tilde{Q}^{\top}\gamma_n) |\tilde{Q}) \ dt \right. \right. \right. \notag \\
& \left. \left. \left. \qquad \qquad - \int_{\tilde{Q}^{\top}\gamma_n}^{\infty} K''\left(t\right) f_0(\sigma_n (t - \tilde{Q}^{\top}\gamma_n) | \tilde{Q}) \ dt \right]dP(\tilde{Q})\right]\right. \notag\\ & \left. \qquad \qquad \qquad -\frac{\beta_0 - \alpha_0}{2}\left[\int_{\chi_n^c} \tilde{Q}\tilde{Q}^{\top} f_0(0 |\tilde{Q})\left[\int_{-\infty}^{0} K''\left(t \right) \ dt - \int_{0}^{\infty} K''\left(t\right) \ dt \right]dP(\tilde{Q})\right] \right \|\notag\\
& \le \frac{\beta_0 - \alpha_0}{2} \int_{-\infty}^{\infty} |K''(t)| \ dt \int_{\chi_n^c} \|\tilde{Q}\tilde{Q}^{\top}\|(m(\tilde{Q}) + f_0(0|\tilde{Q})) \ dP(\tilde{Q}) \notag\\
\label{cp2} & \rightarrow 0 \hspace{0.3in} [\text{As} \ n \rightarrow \infty] \,,
\end{align}
again by DCT and Assumptions \ref{as:distribution} and \ref{as:density_bound}. Combining equations \eqref{cp1} and \eqref{cp2}, we conclude the proof.
\end{proof}
\subsection{Proof of Lemma \ref{bandwidth}}
Here we prove that $\|\psi^s_0 - \psi_0\|/\sigma_n \rightarrow 0$ where $\psi^s_0$ is the minimizer of $\mathbb{M}^s(\psi)$ and $\psi_0$ is the minimizer of $M(\psi)$.
\begin{proof}
Define $\eta = (\psi^s_0 - \psi_0)/\sigma_n$. First, we show that $\|\tilde \eta\|_2$ is $O(1)$, i.e.\ there exists a constant $\Omega_1$ such that $\|\tilde \eta\|_2 \le \Omega_1$ for all $n$:
\begin{align*}
\|\psi^s_0 - \psi_0\|_2 & \le \frac{1}{u_-} \left(\mathbb{M}(\psi^s_0) - \mathbb{M}(\psi_0)\right) \hspace{0.2in} [\text{follows from Lemma} \ \ref{lem:linear_curvature}]\\
& \le \frac{1}{u_-} \left(\mathbb{M}(\psi^s_0) - \mathbb{M}^s(\psi^s_0) + \mathbb{M}^s(\psi^s_0) - \mathbb{M}^s(\psi_0) + \mathbb{M}^s(\psi_0) - \mathbb{M}(\psi_0)\right) \\
& \le \frac{1}{u_-} \left(\mathbb{M}(\psi^s_0) - \mathbb{M}^s(\psi^s_0) + \mathbb{M}^s(\psi_0) - \mathbb{M}(\psi_0)\right) \hspace{0.2in} [\because \mathbb{M}^s(\psi^s_0) - \mathbb{M}^s(\psi_0) \le 0]\\
& \le \frac{2K_1}{u_-}\sigma_n \hspace{0.2in} [\text{from equation} \ \eqref{eq:lin_bound_1}]
\end{align*}
\noindent
As $\psi^s_0$ minimizes $\mathbb{M}^s(\psi)$:
$$\nabla \mathbb{M}^s(\psi^s_0) = -\mathbb{E}\left((Y-\gamma)\tilde{Q}K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right) = 0 \,.$$
Hence:
\begin{align*}
0 &= \mathbb{E}\left((Y-\gamma)\tilde{Q}K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\right) \\
& = \frac{(\beta_0 - \alpha_0)}{2} \mathbb{E}\left(\tilde{Q}K'\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right)\left\{\mathds{1}(Q^{\top}\psi_0 \ge 0) -\mathds{1}(Q^{\top}\psi_0 < 0)\right\}\right) \\
& = \frac{(\beta_0 - \alpha_0)}{2} \mathbb{E}\left(\tilde{Q}K'\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \tilde{\eta}^{\top} \tilde{Q}\right)\left\{\mathds{1}(Q^{\top}\psi_0 \ge 0) -\mathds{1}(Q^{\top}\psi_0 < 0)\right\}\right) \\
& = \frac{(\beta_0 - \alpha_0)}{2} \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(\frac{z}{\sigma_n} + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(z|\tilde{Q}) \ dz \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(\frac{z}{\sigma_n} + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(z|\tilde{Q}) \ dz \ dP(\tilde{Q})\right] \\
& =\sigma_n \frac{(\beta_0 - \alpha_0)}{2} \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(t + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(\sigma_n t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(t + \tilde{\eta}^{\top} \tilde{Q}\right) \ f_0(\sigma_n t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right]
\end{align*}
As $\sigma_n\frac{(\beta_0 - \alpha_0)}{2} > 0$, we may divide both sides by it and continue. Also, as we have proved that $\|\tilde \eta\| = O(1)$, there exist a subsequence $\{\eta_{n_k}\}$ and a point $c \in \mathbb{R}^{p-1}$ such that $\eta_{n_k} \rightarrow c$. Along that subsequence we have:
\begin{align*}
0 & = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(t + \tilde{\eta}_{n_k}^{\top} \tilde{Q}\right) \ f_0(\sigma_{n_k} t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(t + \tilde{\eta}_{n_k}^{\top} \tilde{Q}\right) \ f_0(\sigma_{n_k} t|\tilde{Q}) \ dt \ dP(\tilde{Q})\right]
\end{align*}
Taking limits on both sides and applying the dominated convergence theorem, we conclude:
\begin{align*}
0 & = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \int_0^{\infty} K'\left(t +c^{\top} \tilde{Q}\right) \ f_0(0|\tilde{Q}) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q} \int_{-\infty}^0 K'\left(t + c^{\top} \tilde{Q}\right) \ f_0(0|\tilde{Q}) \ dt \ dP(\tilde{Q})\right] \\
& = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \ f_0(0|\tilde{Q}) \int_{c^{\top} \tilde{Q}}^{\infty} K'\left(t\right) \ dt \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad \left. - \int_{\mathbb{R}^{p-1}}\tilde{Q}\ f_0(0|\tilde{Q}) \int_{-\infty}^{c^{\top} \tilde{Q}} K'\left(t \right) \ dt \ dP(\tilde{Q})\right] \\
& = \left[\int_{\mathbb{R}^{p-1}}\tilde{Q} \ f_0(0|\tilde{Q}) \left[1 - K(c^{\top} \tilde{Q})\right] \ dP(\tilde{Q})\right. \\
& \qquad \qquad \qquad \qquad \qquad\left. - \int_{\mathbb{R}^{p-1}}\tilde{Q}\ f_0(0|\tilde{Q}) K(c^{\top} \tilde{Q}) \ dP(\tilde{Q})\right] \\
& = \mathbb{E}\left(\tilde{Q} \left(2K(c^{\top} \tilde{Q}) - 1\right)f_0(0|\tilde{Q})\right) \,.
\end{align*}
Now, taking the inner-products of both sides with respect to $c$, we get:
\begin{equation}
\label{eq:zero_eq}
\mathbb{E}\left(c^{\top}\tilde{Q} \left(2K(c^{\top} \tilde{Q}) - 1\right)f_0(0|\tilde{Q})\right) = 0 \,.
\end{equation}
By our assumption that $K$ is a symmetric kernel with $K'(t) > 0$ for all $t \in (-1, 1)$, we easily conclude that $c^{\top}\tilde{Q} \left(2K(c^{\top} \tilde{Q}) - 1\right) \ge 0$ almost surely in $\tilde{Q}$, with equality iff $c^{\top}\tilde{Q} = 0$, which is not possible unless $c = 0$ as $\tilde{Q}$ has full support. Hence we conclude that $c = 0$. This shows that any convergent subsequence of $\{\eta_n\}$ converges to $0$, which completes the proof.
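The sign property just used, $t\,(2K(t) - 1) \ge 0$ with equality only at $t = 0$, is easy to check numerically for a concrete symmetric kernel; below we use the Gaussian CDF as a hypothetical stand-in for $K$:

```python
import numpy as np
from scipy.stats import norm

# Check that t * (2K(t) - 1) >= 0, with equality only at t = 0,
# using the Gaussian CDF K = Phi as a hypothetical symmetric kernel
# (any symmetric CDF-type kernel with K(0) = 1/2 behaves the same way).
t = np.linspace(-5, 5, 2001)
g = t * (2 * norm.cdf(t) - 1)
```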
\end{proof}
\subsection{Proof of Lemma \ref{lem:rate}}
\begin{proof}
To obtain the rate of convergence of our kernel-smoothed estimator, we use Theorem 3.4.1 of \cite{vdvw96}. There are three key ingredients that one needs to take care of in order to apply this theorem:
\begin{enumerate}
\item Consistency of the estimator (otherwise the conditions of the theorem need to hold for all $\eta$).
\item The curvature of the population score function near its minimizer.
\item A bound on the modulus of continuity in a vicinity of the minimizer of the population score function.
\end{enumerate}
Below, we establish the curvature of the population score function (item 2 above) globally, thereby obviating the need to establish consistency separately. Recall that the population score function was defined as:
$$
\mathbb{M}^s(\psi) = \mathbb{E}\left((Y - \gamma)\left(1 - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)\right)
$$
and our estimator $\hat{\psi}_n$ is the argmin of the corresponding sample version. Consider the set of functions $\mathcal{H}_n = \left\{h_{\psi}: h_{\psi}(q,y) = (y - \gamma)\left(1 - K\left(\frac{q^{\top}\psi}{\sigma_n}\right)\right)\right\}$. Next, we argue that $\mathcal{H}_n$ is a VC class of functions with fixed VC dimension. We know that the class of functions $\{(q,y) \mapsto q^{\top}\psi/\sigma_n: \psi \in \Psi\}$ has fixed VC dimension (i.e.\ not depending on $n$). Now, as a finite-dimensional VC class of functions composed with a fixed monotone function or multiplied by a fixed function remains a finite-dimensional VC class, we conclude that $\mathcal{H}_n$ is a fixed-dimensional VC class of functions with bounded envelope (as the functions considered here are bounded by 1).
Now, we establish a lower bound on the curvature of the population score function $\mathbb{M}^s(\psi)$ near its minimizer $\psi_0^s$:
$$
\mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) \gtrsim d^2_n(\psi, \psi_0^s) \,,$$ where $$d_n(\psi, \psi_0^s) = \sqrt{\frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \mathds{1}\left(\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n\right) + \|\psi - \psi_0^s\|\mathds{1}\left(\|\psi - \psi_0^s\| > \mathcal{K}\sigma_n\right)}
$$ for some constant $\mathcal{K} > 0$. The intuition behind this compound structure is the following: when $\psi$ is in a $\sigma_n$-neighborhood of $\psi_0^s$, $\mathbb{M}^s(\psi)$ behaves like a smooth quadratic function, but when it is away from the truth, $\mathbb{M}^s(\psi)$ starts resembling $\mathbb{M}(\psi)$, which induces the linear curvature.
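To make the two regimes concrete, here is a minimal sketch of the compound distance (the constant `K_const` plays the role of $\mathcal{K}$; all numbers are illustrative):

```python
import numpy as np

# Minimal sketch of the compound distance d_n used in the rate proof.
# K_const plays the role of the constant \mathcal{K}; values illustrative.
def d_n(psi, psi_n, sigma_n, K_const):
    r = np.linalg.norm(psi - psi_n)
    if r <= K_const * sigma_n:
        return np.sqrt(r ** 2 / sigma_n)   # quadratic (smooth) regime
    return np.sqrt(r)                      # linear regime, away from psi_n

psi_n = np.zeros(2)
sigma_n, K_const = 0.01, 5.0
near = d_n(np.array([0.01, 0.0]), psi_n, sigma_n, K_const)  # inside K*sigma_n
far = d_n(np.array([1.0, 0.0]), psi_n, sigma_n, K_const)    # outside
```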
\\\\
\noindent
For the linear part, we first establish that $|\mathbb{M}(\psi) - \mathbb{M}^s(\psi)| = O(\sigma_n)$ uniformly for all $\psi$. Define $\eta = (\psi - \psi_0)/\sigma_n$:
\allowdisplaybreaks
\begin{align}
& |\mathbb{M}(\psi) - \mathbb{M}^s(\psi)| \notag \\
& \le \mathbb{E}\left(\left | \mathds{1}(Q^{\top}\psi \ge 0) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right | \right) \notag\\
& = \mathbb{E}\left(\left | \mathds{1}\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \eta^{\top}\tilde{Q} \ge 0\right) - K\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \eta^{\top}\tilde{Q}\right)\right | \right) \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \left | \mathds{1}\left(t + \eta^{\top}\tilde{Q} \ge 0\right) - K\left(t + \eta^{\top}\tilde{Q}\right)\right | f_0(\sigma_n t | \tilde{Q}) \ dt \ dP(\tilde{Q}) \notag\\
& = \sigma_n \int_{\mathbb{R}^{p-1}} \int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | f_0(\sigma_n (t-\eta^{\top}\tilde{Q}) | \tilde{Q}) \ dt \ dP(\tilde{Q}) \notag \\
& = \sigma_n \int_{\mathbb{R}^{p-1}} m(\tilde{Q})\int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | \ dt \ dP(\tilde{Q}) \notag\\
& = \sigma_n \mathbb{E}(m(\tilde{Q})) \int_{-\infty}^{\infty} \left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | \ dt \notag \\
\label{eq:lin_bound_1} & = K_1 \sigma_n \hspace{0.3in} [K_1 < \infty \ \text{by Assumption \ref{as:density_bound}}] \,.
\end{align}
Here, the constant $K_1 = \mathbb{E}(m(\tilde{Q})) \left[\int_{-1}^{1}\left | \mathds{1}\left(t \ge 0\right) - K\left(t \right)\right | \ dt \right]$ does not depend on $\psi$; hence the bound is uniform over $\psi$. Next:
\begin{align*}
\mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) & = \mathbb{M}^s(\psi) - \mathbb{M}(\psi) + \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \\
& \qquad \qquad + \mathbb{M}(\psi_0) - \mathbb{M}(\psi_0^s) + \mathbb{M}(\psi_0^s) -\mathbb{M}^s(\psi_0^s) \\
& = T_1 + T_2 + T_3 + T_4
\end{align*}
\noindent
We bound each summand separately:
\begin{enumerate}
\item $T_1 = \mathbb{M}^s(\psi) - \mathbb{M}(\psi) \ge -K_1 \sigma_n$ by equation \eqref{eq:lin_bound_1}\,
\item $T_2 = \mathbb{M}(\psi) - \mathbb{M}(\psi_0) \ge u_-\|\psi - \psi_0\|$ by Lemma \ref{lem:linear_curvature}\,
\item $T_3 = \mathbb{M}(\psi_0) - \mathbb{M}(\psi_0^s) \ge -u_+\|\psi_0^s - \psi_0\| \ge -\epsilon_1 \sigma_n$ where one can take $\epsilon_1$ as small as possible, as we have established $\|\psi_0^s - \psi_0\|/\sigma_n \rightarrow 0$. This follows by Lemma \ref{lem:linear_curvature} along with Lemma \ref{bandwidth}\,
\item $T_4 = \mathbb{M}(\psi_0^s) -\mathbb{M}^s(\psi_0^s) \ge -K_1 \sigma_n$ by equation \eqref{eq:lin_bound_1}.
\end{enumerate}
Combining, we have
\allowdisplaybreaks
\begin{align*}
\mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) & \ge u_-\|\psi - \psi_0\| -(2K_1 + \epsilon_1) \sigma_n \\
& \ge ( u_-/2)\|\psi - \psi_0\| \hspace{0.2in} \left[\text{If} \ \|\psi - \psi_0\| \ge \frac{2(2K_1 + \epsilon_1)}{u_-}\sigma_n\right] \\
& \ge ( u_-/4)\|\psi - \psi_0^s\|
\end{align*}
where the last inequality holds for all large $n$ as proved in Lemma \ref{bandwidth}. Using Lemma \ref{bandwidth} again, we conclude that for any pair of positive constants $(\epsilon_1, \epsilon_2)$:
$$\|\psi - \psi_0^s\| \ge \left(\frac{2(2K_1 + \epsilon_1)}{u_-}+\epsilon_2\right)\sigma_n \Rightarrow \|\psi - \psi_0\| \ge \frac{2(2K_1 + \epsilon_1)}{u_-}\sigma_n$$ for all large $n$, which implies:
\begin{align}
& \mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) \notag \\
& \ge (u_-/4) \|\psi - \psi_0^s\| \mathds{1}\left(\|\psi - \psi_0^s\| \ge \left(\frac{2(2K_1 + \epsilon_1)}{u_-}+\epsilon_2\right)\sigma_n \right) \notag \\
& \ge (u_-/4) \|\psi - \psi_0^s\| \mathds{1}\left(\frac{\|\psi - \psi_0^s\|}{\sigma_n} \ge \left(\frac{7K_1}{u_-}\right) \right) \hspace{0.2in} [\text{for appropriate specifications of} \ \epsilon_1, \epsilon_2] \notag \\
\label{lb2} & := (u_-/4) \|\psi - \psi_0^s\| \mathds{1}\left(\frac{\|\psi - \psi_0^s\|}{\sigma_n} \ge \mathcal{K} \right)
\end{align}
\noindent
In the next part, we find the lower bound when $\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n$. For the quadratic curvature, we perform a two-step Taylor expansion. Define $\eta = (\psi - \psi_0)/\sigma_n$. We have:
\allowdisplaybreaks
\begin{align}
& \nabla^2\mathbb{M}^s(\psi) \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n^2} \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} K''\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\left\{\mathds{1}(Q^{\top}\psi_0 \le 0) - \mathds{1}(Q^{\top}\psi_0 \ge 0)\right\}\right) \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n^2} \mathbb{E}\left(\tilde{Q}\tilde{Q}^{\top} K''\left(\frac{Q^{\top}\psi_0}{\sigma_n} + \tilde{Q}^{\top}\tilde \eta \right)\left\{\mathds{1}(Q^{\top}\psi_0 \le 0) - \mathds{1}(Q^{\top}\psi_0 \ge 0)\right\}\right) \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n^2} \mathbb{E}\left[\tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{0} K''\left(\frac{z}{\sigma_n} + \tilde{Q}^{\top}\tilde \eta \right) f_0(z |\tilde{Q}) \ dz \right. \right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad -\int_{0}^{\infty} K''\left(\frac{z}{\sigma_n} + \tilde{Q}^{\top}\tilde \eta \right) f_0(z | \tilde{Q}) \ dz \right]\right] \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n} \mathbb{E}\left[\tilde{Q}\tilde{Q}^{\top} \left[\int_{-\infty}^{0} K''\left(t+ \tilde{Q}^{\top}\tilde \eta \right) f_0(\sigma_n t |\tilde{Q}) \ dt \right. \right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad - \int_{0}^{\infty} K''\left(t + \tilde{Q}^{\top}\tilde \eta \right) f_0(\sigma_n t | \tilde{Q}) \ dt \right]\right] \notag\\
& = \frac{\beta_0 - \alpha_0}{2}\frac{1}{\sigma_n} \mathbb{E}\left[\tilde{Q}\tilde{Q}^{\top} f_0(0| \tilde{Q})\left[\int_{-\infty}^{0} K''\left(t+ \tilde{Q}^{\top}\tilde \eta \right) \ dt \right. \right. \notag \\
& \left. \left. \qquad \qquad \qquad \qquad - \int_{0}^{\infty} K''\left(t + \tilde{Q}^{\top}\tilde \eta \right) \ dt \right]\right] + R \notag\\
\label{eq:quad_eq_1} & =(\beta_0 - \alpha_0)\frac{1}{\sigma_n}\mathbb{E}\left[\tilde{Q}\tilde{Q}^{\top} f_0(0| \tilde{Q})K'(\tilde{Q}^{\top}\tilde \eta)\right] + R \,.
\end{align}
As we want a lower bound on the set $\{\|\psi - \psi_0^s\| \le \mathcal{K} \sigma_n\}$, we have $\|\eta\| \le \mathcal{K}$ for all large $n$. For the rest of the analysis, define
\begin{align*}
\Lambda(v_1, v_2) = \mathbb{E}\left[|v_1^{\top}\tilde{Q}|^2 f_0(0|\tilde{Q})K'(\tilde{Q}^{\top}v_2) \right], \qquad \|v_1\| = 1, \ \|v_2\| \le \mathcal{K} \,.
\end{align*}
Clearly $\Lambda \ge 0$ and is continuous on the compact set $\{(v_1, v_2): \|v_1\| = 1, \|v_2\| \le \mathcal{K}\}$, hence its infimum over this set is attained. Suppose $\Lambda(v_1, v_2) = 0$ for some $(v_1, v_2)$. Then we have:
\begin{align*}
\mathbb{E}\left[|v_1^{\top}\tilde{Q}|^2 f_0(0|\tilde{Q})K'(\tilde{Q}^{\top}v_2) \right] = 0 \,,
\end{align*}
which further implies $|v_1^{\top}\tilde{Q}| = 0$ almost surely, violating Assumption \ref{as:eigenval_bound}. Hence the infimum is strictly positive. On the other hand, for the remainder term $R$ of equation \eqref{eq:quad_eq_1},
fix $\nu \in S^{p-1}$. Then:
\allowdisplaybreaks
\begin{align}
& \left| \nu^{\top} R \nu \right| \notag \\
& = \left|\frac{1}{\sigma_n} \mathbb{E}\left[\left(\nu^{\top}\tilde{Q}\right)^2 \left[\int_{-\infty}^{0} K''\left(t+ \tilde{Q}^{\top}\tilde \eta \right) (f_0(\sigma_n t |\tilde{Q}) - f_0(0|\tilde{Q})) \ dt \right. \right. \right. \notag \\
& \qquad \qquad \qquad \qquad \left. \left. \left. - \int_{0}^{\infty} K''\left(t + \tilde{Q}^{\top}\tilde \eta \right) (f_0(\sigma_n t |\tilde{Q}) - f_0(0|\tilde{Q})) \ dt \right]\right]\right| \notag\\
& \le \mathbb{E} \left[\left(\nu^{\top}\tilde{Q}\right)^2h(\tilde{Q}) \int_{-\infty}^{\infty} \left|K''\left(t+ \tilde{Q}^{\top}\tilde \eta \right)\right| |t| \ dt\right] \notag\\
& \le \mathbb{E} \left[\left(\nu^{\top}\tilde{Q}\right)^2h(\tilde{Q}) \int_{-1}^{1} \left|K''\left(t\right)\right| |t - \tilde{Q}^{\top}\tilde \eta | \ dt\right] \notag\\
\label{eq:quad_eq_3} & \le \mathbb{E} \left[\left(\nu^{\top}\tilde{Q}\right)^2h(\tilde{Q})(1+ \mathcal{K}\|\tilde{Q}\|) \int_{-1}^{1} \left|K''\left(t\right)\right| \ dt\right] = C_1 \hspace{0.2in} [\text{say}]
\end{align}
by Assumption \ref{as:distribution} and Assumption \ref{as:derivative_bound}. By a two-step Taylor expansion, we have:
\begin{align*}
\mathbb{M}^s(\psi) - \mathbb{M}^s(\psi_0^s) & = \frac12 (\psi - \psi_0^s)^{\top} \nabla^2\mathbb{M}^s(\psi^*_n) (\psi - \psi_0^s) \\
& \ge \left(\min_{\|v_1\| = 1, \|v_2 \| \le \mathcal{K}} \Lambda(v_1, v_2)\right) \frac{\|\psi - \psi_0^s\|^2}{2\sigma_n} - \frac{C_1\sigma_n}{2} \, \frac{\|\psi - \psi_0^s\|^2_2}{\sigma_n} \\
& \gtrsim \frac{\|\psi - \psi_0^s\|^2_2}{\sigma_n} \,
\end{align*}
This concludes the proof of the curvature.
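In a toy one-dimensional setting one can check numerically that $\inf \Lambda > 0$, which is what drives the quadratic lower bound. The sketch below makes hypothetical choices throughout: $\tilde{Q} \sim N(0,1)$, $f_0(0|\tilde{Q})$ constant (and dropped), and the Gaussian kernel $K = \Phi$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Toy (p - 1 = 1) check that inf Lambda > 0. Hypothetical choices:
# Qt ~ N(0, 1), f_0(0|Qt) constant (dropped), kernel K = Phi, so that
# Lambda(v2) is proportional to the integral of q^2 phi(q v2) phi(q).
K_const = 5.0  # plays the role of the constant \mathcal{K}

def Lam(v2):
    return quad(lambda q: q**2 * norm.pdf(q * v2) * norm.pdf(q),
                -np.inf, np.inf)[0]

vals = [Lam(v2) for v2 in np.linspace(-K_const, K_const, 41)]
```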
\\\\
\noindent
Finally, we bound the modulus of continuity:
$$\mathbb{E}\left(\sup_{d_n(\psi, \psi_0^s) \le \delta} \left|(\mathbb{M}^s_n-\mathbb{M}^s)(\psi) - (\mathbb{M}^s_n-\mathbb{M}^s)(\psi_0^s)\right|\right) \,.$$
The proof is similar to that of Lemma \ref{lem:rate_smooth} and therefore we sketch the main steps briefly. Define the estimating function $f_\psi$ as:
$$
f_\psi(Y, Q) = (Y - \gamma)\left(1 - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)
$$
and the collection of functions $\mathcal{F}_\zeta = \{f_\psi - f_{\psi_0^s}: d_n(\psi, \psi_0^s) \le \zeta\}$. That $\mathcal{F}_\zeta$ has finite VC dimension follows from the same argument used to show that $\mathcal{G}_n$ has finite VC dimension in the proof of Lemma \ref{conv-prob}. Now, to bound the modulus of continuity, we use Lemma 2.14.1 of \cite{vdvw96}, which implies:
$$
\sqrt{n}\mathbb{E}\left(\sup_{d_n(\psi, \psi_0^s) \le \zeta} \left|(\mathbb{M}^s_n-\mathbb{M}^s)(\psi) - (\mathbb{M}^s_n-\mathbb{M}^s)(\psi_0^s)\right|\right) \lesssim \mathcal{J}(1, \mathcal{F}_\zeta) \sqrt{PF_\zeta^2}
$$
where $F_\zeta(Y, Q)$ is the envelope of $\mathcal{F}_\zeta$ defined as:
\begin{align*}
F_\zeta(Y, Q) & = \sup_{d_n(\psi, \psi_0^s) \le \zeta}\left|(Y - \gamma)\left(K\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)-K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)\right| \\
& = \left|(Y - \gamma)\right| \sup_{d_n(\psi, \psi_0^s) \le \zeta} \left|K\left(\frac{Q^{\top}\psi^s_0}{\sigma_n}\right)-K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right|
\end{align*}
and $\mathcal{J}(1, \mathcal{F}_\zeta)$ is the entropy integral, which can be bounded above by a constant independent of $n$ as the class $\mathcal{F}_\zeta$ has finite VC dimension. As in the proof of Lemma \ref{lem:rate_smooth}, we consider two separate cases: (1) $\zeta \le \sqrt{\mathcal{K} \sigma_n}$ and (2) $\zeta > \sqrt{\mathcal{K} \sigma_n}$. In the first case, we have $\sup_{d_n(\psi, \psi_0^s) \le \zeta} \|\psi - \psi_0^s\| = \zeta \sqrt{\sigma_n}$. This further implies:
\begin{align*}
& \sup_{d_n(\psi, \psi_0^s) \le \zeta} \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right\}\right|^2 \\
& \le \max\left\{\left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} + \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2, \right. \\
& \qquad \qquad \qquad \qquad \left. \left|\left\{K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n}\right) - K\left(\frac{Q^{\top}\psi_0^s}{\sigma_n} - \|\tilde Q\|\frac{\zeta}{\sqrt{\sigma_n}}\right)\right\}\right|^2\right\} \\
& := \max\{T_1, T_2\} \,.
\end{align*}
Therefore, bounding $\mathbb{E}[F_\zeta^2(Y, Q)]$ reduces to bounding $\mathbb{E}[(Y- \gamma)^2 T_1]$ and $\mathbb{E}[(Y - \gamma)^2 T_2]$ separately, which, in turn, reduces to bounding $\mathbb{E}[T_1]$ and $\mathbb{E}[T_2]$, as $|Y - \gamma| \le 1$. These bounds follow from calculations similar to those in Lemma \ref{lem:rate_smooth} and are therefore skipped. In this case we finally obtain:
$$
\mathbb{E}[F_\zeta^2(Y, Q)] \lesssim \zeta \sqrt{\sigma_n} \,.
$$
The other case, when $\zeta > \sqrt{\mathcal{K} \sigma_n}$, also follows from calculations similar to those in Lemma \ref{lem:rate_smooth}, which yield:
$$
\mathbb{E}[F_\zeta^2(Y, Q)] \lesssim \zeta^2 \,.
$$
\noindent
Using this in the maximal inequality yields:
\begin{align*}
\sqrt{n}\mathbb{E}\left(\sup_{d_n(\psi, \psi_0^s) \le \zeta} \left|(\mathbb{M}^s_n - \mathbb{M}^s)(\psi) - (\mathbb{M}^s_n - \mathbb{M}^s)(\psi_0^s)\right|\right) & \lesssim \sqrt{\zeta}\sigma^{1/4}_n\mathds{1}_{\zeta \le \sqrt{\mathcal{K} \sigma_n}} + \zeta \mathds{1}_{\zeta > \sqrt{\mathcal{K} \sigma_n}} \\
& =: \phi_n(\zeta) \,.
\end{align*}
This implies (following the same argument as in Lemma \ref{lem:rate_smooth}):
$$
n^{2/3}\sigma_n^{-1/3}d_n^2(\hat \psi^s, \psi_0^s) = O_p(1) \,.
$$
Now, as $n^{2/3}\sigma_n^{-1/3} \gg \sigma_n^{-1}$, we have:
$$
\frac{1}{\sigma_n}d_n^2(\hat \psi^s, \psi_0^s) = o_p(1) \,,
$$
which further implies
\begin{align}
\label{rate1} & n^{2/3}\sigma_n^{-1/3}\left[\frac{\|\hat \psi^s - \psi_0^s\|^2}{\sigma_n} \mathds{1}(\|\hat \psi^s - \psi_0^s\| \le \mathcal{K}\sigma_n) \right. \notag \\
& \qquad \qquad \qquad \left. + \|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\|\ge \mathcal{K}\sigma_n)\right] = O_P(1)
\end{align}
This implies:
\begin{enumerate}
\item $\frac{n^{2/3}}{\sigma_n^{4/3}}\|\hat \psi^s - \psi_0^s\|^2 \mathds{1}(\|\hat \psi^s - \psi_0^s\|\le \mathcal{K}\sigma_n) = O_P(1)$, and hence, taking square roots, $\frac{n^{1/3}}{\sigma_n^{2/3}}\|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\|\le \mathcal{K}\sigma_n) = O_P(1)$;
\item $\frac{n^{2/3}}{\sigma_n^{1/3}}\|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\| \ge \mathcal{K}\sigma_n) = O_P(1)$.
\end{enumerate}
Adding the two bounds gives:
\begin{align*}
& \frac{n^{1/3}}{\sigma_n^{2/3}}\|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\| \le \mathcal{K}\sigma_n) \\
& \qquad \qquad \qquad + \frac{n^{2/3}}{\sigma_n^{1/3}}\|\hat \psi^s - \psi_0^s\| \mathds{1}(\|\hat \psi^s - \psi_0^s\| \ge \mathcal{K}\sigma_n) = O_p(1) \,,
\end{align*}
i.e.
$$
\left(\frac{n^{1/3}}{\sigma_n^{2/3}} \wedge \frac{n^{2/3}}{\sigma_n^{1/3}}\right)\|\hat \psi^s - \psi_0^s\| = O_p(1) \,.
$$
Now $n^{1/3}/\sigma_n^{2/3} \gg 1/\sigma_n$ iff $n^{1/3} \gg \sigma_n^{-1/3}$, i.e. iff $n\sigma_n \gg 1$, which holds by our assumption. Similarly, $n^{2/3}/\sigma_n^{1/3} \gg 1/\sigma_n$ iff $n \gg \sigma_n^{-1}$, i.e. again iff $n\sigma_n \gg 1$. Therefore we have:
$$
\frac{\|\hat \psi^s - \psi_0^s\|}{\sigma_n} = o_p(1) \,.
$$
This completes the proof.
\end{proof}
\section{Methodology and Theory for Continuous Response Model}
\label{sec:theory_regression}
In this section we present our analysis for the continuous response model. Without smoothing, the original estimating equation is:
$$
f_{\beta, \delta, \psi}(Y, X, Q) = \left(Y - X^{\top}\beta - X^{\top}\delta\mathds{1}_{Q^{\top}\psi > 0}\right)^2
$$
and we estimate the parameters as:
\begin{align}
\label{eq:ls_estimator}
\left(\hat \beta^{LS}, \hat \delta^{LS}, \hat \psi^{LS}\right) & = {\arg\min}_{(\beta, \delta, \psi) \in \Theta} \mathbb{P}_n f_{\beta, \delta, \psi} \notag \\
& := {\arg\min}_{(\beta, \delta, \psi) \in \Theta}\mathbb{M}_n(\beta, \delta, \psi)\,.
\end{align}
where $\mathbb{P}_n$ is the empirical measure based on i.i.d. observations $\{(X_i, Y_i, Q_i)\}_{i=1}^n$ and $\Theta$ is the parameter space. Henceforth, we assume $\Theta$ is a compact subset of ${\bbR}^{2p+d}$. We also write $\theta = (\beta, \delta, \psi)$ for the full parameter vector and denote by $\theta_0$ the true parameter vector $(\beta_0, \delta_0, \psi_0)$. Expanding the criterion in equation \eqref{eq:ls_estimator} yields:
\begin{align*}
(\hat \beta^{LS}, \hat \delta^{LS}, \hat \psi^{LS}) & = {\arg\min}_{\beta, \delta, \psi} \sum_{i=1}^n \left(Y_i - X_i^{\top}\beta - X_i^{\top}\delta\mathds{1}_{Q_i^{\top}\psi > 0}\right)^2 \\
& = {\arg\min}_{\beta, \delta, \psi} \sum_{i=1}^n \left[\left(Y_i - X_i^{\top}\beta\right)^2\mathds{1}_{Q_i^{\top}\psi \le 0} \right. \\
& \hspace{14em} \left. + \left(Y_i - X_i^{\top}\beta - X_i^{\top}\delta\right)^2\mathds{1}_{Q_i^{\top}\psi > 0} \right] \\
& = {\arg\min}_{\beta, \delta, \psi} \sum_{i=1}^n \left[\left(Y_i - X_i^{\top}\beta\right)^2 + \left\{\left(Y_i - X_i^{\top}\beta - X_i^{\top}\delta\right)^2 \right. \right. \\
& \hspace{17em} \left. \left. - \left(Y_i - X_i^{\top}\beta\right)^2\right\}\mathds{1}_{Q_i^{\top}\psi > 0} \right]
\end{align*}
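The indicator-extraction step above is a pointwise algebraic identity. The following minimal numerical check (all data and parameter values are simulated placeholders, not those of any study) confirms that the two forms of the criterion coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
Q = rng.normal(size=(n, p))
Y = rng.normal(size=n)
beta, delta, psi = rng.normal(size=p), rng.normal(size=p), rng.normal(size=p)

ind = (Q @ psi > 0).astype(float)          # the indicator 1{Q'psi > 0}
lhs = np.sum((Y - X @ beta - (X @ delta) * ind) ** 2)
r = Y - X @ beta                           # residuals without the jump term
rhs = np.sum(r ** 2 + ((r - X @ delta) ** 2 - r ** 2) * ind)
assert np.isclose(lhs, rhs)                # both criteria agree for every (beta, delta, psi)
```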
Typical empirical process calculations yield, under mild conditions:
$$
\|\hat \beta^{LS} - \beta_0\|^2 + \|\hat \delta^{LS} - \delta_0\|^2 + \|\hat \psi^{LS} - \psi_0 \|_2 = O_p(n^{-1})
$$
but inference is difficult as the limiting distribution of $\hat \psi^{LS}$ is unknown and, in any case, would be highly non-standard. Recall that even in the one-dimensional change point model with fixed jump size, the least squares change point estimator converges at rate $n$ to the truth with a non-standard limit distribution, namely the minimizer of a two-sided compound Poisson process (see \cite{lan2009change} for more details). To obtain a computable estimator with a tractable limiting distribution, we resort to a smooth approximation of the indicator function in \eqref{eq:ls_estimator} using a distribution kernel with suitable bandwidth, i.e. we replace $\mathds{1}_{Q_i^{\top}\psi > 0}$ by $K(Q_i^{\top}\psi/\sigma_n)$ for an appropriate distribution function $K$ and bandwidth $\sigma_n$:
\begin{align*}
(\hat \beta^S, \hat \delta^S, \hat \psi^S) & = {\arg\min}_{\beta, \delta, \psi} \left\{ \frac1n \sum_{i=1}^n \left[\left(Y_i - X_i^{\top}\beta\right)^2 + \left\{\left(Y_i - X_i^{\top}\beta - X_i^{\top}\delta\right)^2 \right. \right. \right. \\
& \hspace{15em} \left. \left. \left. - \left(Y_i - X_i^{\top}\beta\right)^2\right\}K\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right) \right] \right\} \\
& = {\arg\min}_{(\beta, \delta, \psi) \in \Theta} \mathbb{P}_n f^s_{(\beta, \delta, \psi)}(X, Y, Q) \\
& := {\arg\min}_{\theta \in \Theta} \mathbb{M}^s_n(\theta) \,.
\end{align*}
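For concreteness, here is a minimal numerical sketch of the smoothed criterion $\mathbb{M}^s_n$ and its minimization, taking $K = \Phi$; the data-generating parameters, sample size and the generic simplex optimizer are illustrative choices, not part of the theory:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n, p = 2000, 2
beta0 = np.array([1.0, -1.0])
delta0 = np.array([0.5, 0.5])
psi0 = np.array([1.0, 0.7])               # first coordinate fixed at 1 for identifiability

X = rng.normal(size=(n, p))
Q = rng.normal(size=(n, p))
Y = X @ beta0 + (X @ delta0) * (Q @ psi0 > 0) + 0.1 * rng.normal(size=n)

sigma_n = n ** (-0.8)                      # n*sigma_n -> infinity, n*sigma_n^2 -> 0

def M_n_s(theta):
    """Smoothed least-squares criterion; theta = (beta, delta, free part of psi)."""
    beta, delta = theta[:p], theta[p:2 * p]
    psi = np.concatenate(([1.0], theta[2 * p:]))
    r = Y - X @ beta
    K = norm.cdf(Q @ psi / sigma_n)        # smooth surrogate for 1{Q'psi > 0}
    return np.mean(r ** 2 + ((r - X @ delta) ** 2 - r ** 2) * K)

theta_init = np.zeros(2 * p + (p - 1))
fit = minimize(M_n_s, theta_init, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10})
beta_hat, delta_hat = fit.x[:p], fit.x[p:2 * p]
```

In practice a derivative-based optimizer with several restarts may be preferable, since the smoothed criterion is non-convex in $\psi$ for small $\sigma_n$.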
Define $\mathbb{M}$ and $\mathbb{M}^s$ to be the population counterparts of $\mathbb{M}_n$ and $\mathbb{M}^s_n$ respectively, defined as:
\begin{align*}
\mathbb{M}(\theta) & = \mathbb{E}\left(Y - X^{\top}\beta\right)^2 + \mathbb{E}\left(\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] \mathds{1}_{Q^{\top}\psi > 0}\right) \,, \\
\mathbb{M}^s(\theta) & = \mathbb{E}\left[(Y - X^{\top}\beta)^2 + \left\{-2(Y-X^{\top}\beta)(X^{\top}\delta) + (X^{\top}\delta)^2\right\}K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right] \,.
\end{align*}
As noted in the proof in \cite{seo2007smoothed}, the assumption $\log{n}/n\sigma_n^2 \to 0$ was only used to show:
$$
\frac{\left\|\hat \psi^s - \psi_0\right\|}{\sigma_n} = o_p(1) \,.
$$
In this paper, we show that one can achieve the same conclusion as long as $n\sigma_n \to \infty$. The rest of the proof of asymptotic normality is similar to that of \cite{seo2007smoothed}; we present it briefly for the reader's convenience. The proof is quite long and technical, so we break it into several lemmas. We first list our assumptions:
\begin{assumption}
\label{eq:assm}
\begin{enumerate}
\item Define $f_\psi(\cdot \mid \tilde Q)$ to be the conditional density of $Q^{\top}\psi$ given $\tilde Q$. (In particular, we denote by $f_0(\cdot \mid \tilde q)$ the conditional density of $Q^{\top}\psi_0$ given $\tilde Q$ and by $f_s(\cdot \mid \tilde q)$ the conditional density of $Q^{\top}\psi_0^s$ given $\tilde Q$.) Assume that there exists $F_+$ such that $\sup_t f_\psi(t \mid \tilde Q) \le F_+$ almost surely in $\tilde Q$, for all $\psi$ in a neighborhood of $\psi_0$ (in particular for $\psi_0^s$). Further assume that $f_\psi$ is differentiable with derivative bounded by $F_+$ for all $\psi$ in such a neighborhood (again, in particular for $\psi_0^s$).
\vspace{0.1in}
\item Define $g(Q) = {\sf var}(X \mid Q)$. There exist $c_-$ and $c_+$ such that $c_- \le \lambda_{\min}(g(Q)) \le \lambda_{\max}(g(Q)) \le c_+$ almost surely. Also assume that $g$ is Lipschitz in $Q$ with constant $G_+$.
\vspace{0.1in}
\item There exist $p_+ < \infty$ and $p_- > 0, r > 0$ such that:
$$
p_- \|\psi - \psi_0\| \le \mathbb{P}\left(\text{sign}\left(Q^{\top}\psi\right) \neq \text{sign}\left(Q^{\top}\psi_0\right)\right) \le p_+ \|\psi - \psi_0\| \,,
$$
for all $\psi$ such that $\|\psi - \psi_0\| \le r$.
\vspace{0.1in}
\item For all $\psi$ in the parameter space, $0 < \mathbb{P}\left(Q^{\top}\psi > 0\right) < 1$.
\vspace{0.1in}
\item Define $m_2(Q) = \mathbb{E}\left[\|X\|^2 \mid Q\right]$ and $m_4(Q) = \mathbb{E}\left[\|X\|^4 \mid Q\right]$. Assume $m_2$ and $m_4$ are bounded Lipschitz functions of $Q$.
\end{enumerate}
\end{assumption}
\subsection{Sufficient conditions for the above assumptions}
We now present some sufficient conditions for the above assumptions to hold. The first condition is essentially a condition on the conditional density of the first co-ordinate of $Q$ given all the other co-ordinates: if this conditional density is bounded and has a bounded derivative, the first assumption is satisfied, and this happens in fair generality. The second assumption requires the conditional distribution of $X$ given $Q$ to have non-degenerate variance in every direction, uniformly over $Q$. This is also a weak condition: it is satisfied, for example, if $X$ and $Q$ are independent (with $X$ having a non-degenerate covariance matrix), or if $(X, Q)$ is jointly normally distributed, to name a few cases. This condition can be weakened further by only assuming that the maximum and minimum eigenvalues of $\mathbb{E}[g(Q)]$ are bounded away from $\infty$ and $0$ respectively, at the cost of more tedious book-keeping. The third assumption is satisfied as long as $Q^{\top}\psi$ has non-zero density near the origin, while the fourth assumption merely states that, for any hyperplane under consideration, the support of $Q$ is not confined to one side of it; a simple sufficient condition is that $Q$ has a continuous density with non-zero value at the origin. The last assumption is analogous to the second one, but for conditional fourth moments, and is likewise satisfied in fair generality.
\\\\
\noindent
{\bf Kernel function and bandwidth: } We take $K(x) = \Phi(x)$ (the standard normal distribution function) for our analysis. For the bandwidth we assume $n\sigma_n^2 \to 0$ and $n \sigma_n \to \infty$, as the complementary case ($n\sigma_n^2 \to \infty$) is already established in \cite{seo2007smoothed}.
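Concretely, bandwidths of the form $\sigma_n = n^{-a}$ with $a \in (1/2, 1)$ satisfy both requirements, since $n\sigma_n = n^{1-a} \to \infty$ while $n\sigma_n^2 = n^{1-2a} \to 0$. A quick numerical illustration (the exponent $a = 0.8$ is only an example):

```python
a = 0.8  # any exponent in (1/2, 1) works
for n in [10**3, 10**5, 10**7]:
    sigma_n = n ** (-a)
    # n*sigma_n = n^{1-a} grows without bound; n*sigma_n^2 = n^{1-2a} vanishes
    print(n, n * sigma_n, n * sigma_n ** 2)
```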
\\\\
\noindent
Based on Assumption \ref{eq:assm} and our choice of kernel and bandwidth we establish the following theorem:
\begin{theorem}
\label{thm:regression}
Under Assumption \ref{eq:assm} and the above choice of kernel and bandwidth we have:
$$
\sqrt{n}\left(\begin{pmatrix} \hat \beta^s \\ \hat \delta^s \end{pmatrix} - \begin{pmatrix} \beta_0 \\ \delta_0 \end{pmatrix} \right) \overset{\mathscr{L}}{\implies} \mathcal{N}(0, \Sigma_{\beta, \delta})
$$
and
$$
\sqrt{n/\sigma_n} \left(\hat \psi^s - \psi_0\right) \overset{\mathscr{L}}{\implies} \mathcal{N}(0, \Sigma_\psi) \,,
$$
for matrices $\Sigma_{\beta, \delta}$ and $\Sigma_\psi$ mentioned explicitly in the proof. Moreover, the two estimators are asymptotically independent.
\end{theorem}
The proof of the theorem is relatively long, so we break it into several lemmas. We provide a roadmap of the proof in this section, while the detailed technical derivations of the supporting lemmas can be found in the Appendix. Let $\nabla \mathbb{M}_n^s(\theta)$ and $\nabla^2 \mathbb{M}_n^s(\theta)$ be the gradient and Hessian of $\mathbb{M}_n^s(\theta)$ with respect to $\theta$. As $\hat \theta^s$ minimizes $\mathbb{M}_n^s(\theta)$, the first order condition gives $\nabla \mathbb{M}_n^s(\hat \theta^s) = 0$. Using a one-step Taylor expansion we have:
\allowdisplaybreaks
\begin{align*}
\label{eq:taylor_first}
\nabla \mathbb{M}_n^s(\hat \theta^s) = \nabla \mathbb{M}_n^s(\theta_0) + \nabla^2 \mathbb{M}_n^s(\theta^*)\left(\hat \theta^s - \theta_0\right) = 0
\end{align*}
i.e.
\begin{equation}
\label{eq:main_eq}
\left(\hat{\theta}^s - \theta_0\right) = -\left(\nabla^2 \mathbb{M}_n^s(\theta^*)\right)^{-1} \nabla \mathbb{M}_n^s(\theta_0)
\end{equation}
for some intermediate point $\theta^*$ between $\hat \theta^s$ and $\theta_0$.
Following the notation of \cite{seo2007smoothed}, define a diagonal matrix $D_n$ of dimension $2p + d$ whose first $2p$ diagonal entries equal $1$ and whose last $d$ entries equal $\sqrt{\sigma_n}$. Then we can write:
\begin{align}
\sqrt{n}D_n^{-1}(\hat \theta^s - \theta_0) & = - \sqrt{n}D_n^{-1}\nabla^2\mathbb{M}_n^s(\theta^*)^{-1}\nabla \mathbb{M}_n^s(\theta_0) \notag \\
\label{eq:taylor_main} & = -\begin{pmatrix} \nabla^2\mathbb{M}_n^{s, \gamma}(\theta^*) & \sqrt{\sigma_n}\nabla^2\mathbb{M}_n^{s, \gamma \psi}(\theta^*) \\
\sqrt{\sigma_n}\nabla^2\mathbb{M}_n^{s, \psi \gamma}(\theta^*) & \sigma_n\nabla^2\mathbb{M}_n^{s, \psi}(\theta^*)\end{pmatrix}^{-1}\begin{pmatrix} \sqrt{n}\nabla \mathbb{M}_n^{s, \gamma}(\theta_0) \\ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\end{pmatrix}
\end{align}
where $\gamma = (\beta, \delta) \in {\bbR}^{2p}$. The following lemma establishes the asymptotic properties of $\nabla \mathbb{M}_n^s(\theta_0)$:
\begin{lemma}[Asymptotic Normality of $\nabla \mathbb{M}_n^s(\theta_0)$]
\label{asymp-normality}
Under Assumption \ref{eq:assm} we have:
\begin{align*}
\sqrt{n}\nabla \mathbb{M}_n^{s, \gamma}(\theta_0) \implies \mathcal{N}\left(0, 4V^{\gamma}\right) \,,\\
\sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0) \implies \mathcal{N}\left(0, V^{\psi}\right) \,.
\end{align*}
for some non-negative definite matrices $V^{\gamma}$ and $V^{\psi}$ which are mentioned explicitly in the proof. Furthermore, $\sqrt{n}\nabla \mathbb{M}_n^{s, \gamma}(\theta_0)$ and $\sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)$ are asymptotically independent.
\end{lemma}
\noindent
Next, we analyze the convergence of $\nabla^2 \mathbb{M}_n^s(\theta^*)$ which is stated in the following lemma:
\begin{lemma}[Convergence in Probability of $\nabla^2 \mathbb{M}_n^s(\theta^*)$]
\label{conv-prob}
Under Assumption \ref{eq:assm}, for any random sequence $\breve{\theta} = \left(\breve{\beta}, \breve{\delta}, \breve{\psi}\right)$ such that $\breve{\beta} \overset{p}{\to} \beta_0$, $\breve{\delta} \overset{p}{\to} \delta_0$ and $\|\breve{\psi} - \psi_0\|/\sigma_n \overset{p}{\to} 0$, we have:
\begin{align*}
\nabla^2_{\gamma} \mathbb{M}_n^s(\breve{\theta}) & \overset{p}{\longrightarrow} 2Q^{\gamma} \,, \\
\sqrt{\sigma_n}\nabla^2_{\psi \gamma} \mathbb{M}_n^s(\breve{\theta}) & \overset{p}{\longrightarrow} 0 \,, \\
\sigma_n \nabla^2_{\psi} \mathbb{M}_n^s(\breve{\theta}) & \overset{p}{\longrightarrow} Q^{\psi} \,.
\end{align*}
for some matrices $Q^{\gamma}, Q^{\psi}$ mentioned explicitly in the proof. This, along with equation \eqref{eq:taylor_main}, establishes:
\begin{align*}
\sqrt{n}\left(\hat \gamma^s - \gamma_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, Q^{\gamma^{-1}}V^{\gamma}Q^{\gamma^{-1}}\right) \,, \\
\sqrt{n/\sigma_n}\left(\hat \psi^s - \psi_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, Q^{\psi^{-1}}V^{\psi}Q^{\psi^{-1}}\right) \,.
\end{align*}
where as before $\hat \gamma^s = (\hat \beta^s, \hat \delta^s)$.
\end{lemma}
It will be shown later that the condition $\|\breve{\psi}_n - \psi_0\|/\sigma_n \overset{P} \rightarrow 0$ needed in Lemma \ref{conv-prob} holds for the (random) sequence $\psi^*$, the intermediate point in the Taylor expansion. Then, combining Lemma \ref{asymp-normality} and Lemma \ref{conv-prob} we conclude the proof of Theorem \ref{thm:regression}.
Observe that, to show $\left\|\psi^* - \psi_0 \right\| = o_P(\sigma_n)$, it suffices to prove that $\left\|\hat \psi^s - \psi_0 \right\| = o_P(\sigma_n)$. Towards that direction, we have the following lemma:
\begin{lemma}[Rate of convergence]
\label{lem:rate_smooth}
Under Assumption \ref{eq:assm} and our choice of kernel and bandwidth,
$$
n^{2/3}\sigma_n^{-1/3} d^2_*\left(\hat \theta^s, \theta_0^s\right) = O_P(1) \,,
$$
where
\begin{align*}
d_*^2(\theta, \theta_0^s) & = \|\beta - \beta_0^s\|^2 + \|\delta - \delta_0^s\|^2 \\
& \qquad \qquad + \frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \mathds{1}_{\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n} + \|\psi - \psi_0^s\| \mathds{1}_{\|\psi - \psi_0^s\| > \mathcal{K}\sigma_n} \,.
\end{align*}
for some specific constant $\mathcal{K}$, which will be made precise in the proof. Hence, as $n\sigma_n \to \infty$, we have $n^{2/3}\sigma_n^{-1/3} \gg \sigma_n^{-1}$, which implies $\|\hat \psi^s - \psi_0^s\|/\sigma_n \overset{P} \longrightarrow 0 \,.$
\end{lemma}
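To make the metric concrete, the following sketch evaluates $d_*^2$ directly from its definition; the numerical value of $\mathcal{K}$ passed in below is a placeholder, not the constant from the proof:

```python
import numpy as np

def d_star_sq(theta, theta0, sigma_n, K_const, p):
    """Squared distance d_*^2 from the rate lemma; theta = (beta, delta, psi)."""
    beta, delta, psi = theta[:p], theta[p:2 * p], theta[2 * p:]
    beta0, delta0, psi0 = theta0[:p], theta0[p:2 * p], theta0[2 * p:]
    gap = np.linalg.norm(psi - psi0)
    # quadratic (local) regime vs. linear (far) regime for the psi term
    psi_term = gap ** 2 / sigma_n if gap <= K_const * sigma_n else gap
    return (np.linalg.norm(beta - beta0) ** 2
            + np.linalg.norm(delta - delta0) ** 2 + psi_term)

# local regime: ||psi - psi0|| = 0.001 <= K*sigma_n = 0.02, term = 0.001^2/0.01
val_near = d_star_sq(np.array([0.0, 0.0, 0.001]), np.zeros(3), 0.01, 2.0, 1)  # = 1e-4
# far regime: ||psi - psi0|| = 0.5 > 0.02, term = 0.5
val_far = d_star_sq(np.array([0.0, 0.0, 0.5]), np.zeros(3), 0.01, 2.0, 1)     # = 0.5
```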
\noindent
The above lemma establishes $\|\hat \psi^s - \psi_0^s\|/\sigma_n = o_p(1)$ but our goal is to show that $\|\hat \psi^s - \psi_0\|/\sigma_n = o_p(1)$. Therefore, we further need $\|\psi^s_0 - \psi_0\|/\sigma_n \rightarrow 0$ which is demonstrated in the following lemma:
\begin{lemma}[Convergence of population minimizer]
\label{bandwidth}
Under Assumption \ref{eq:assm} and our choice of kernel and bandwidth, we have: $\|\psi^s_0 - \psi_0\|/\sigma_n \rightarrow 0$.
\end{lemma}
\noindent
Hence the final roadmap is the following: using Lemma \ref{bandwidth} and Lemma \ref{lem:rate_smooth}, we establish that $\|\hat \psi^s - \psi_0\|/\sigma_n = o_p(1)$ when $n\sigma_n \to \infty$. This, in turn, enables us to prove Lemma \ref{conv-prob}, i.e. $\sigma_n \nabla^2 \mathbb{M}_n^s(\theta^*) \overset{P} \rightarrow Q$, which, along with Lemma \ref{asymp-normality}, establishes the main theorem.
\section{Introduction}
The simple linear regression model assumes a uniform linear relationship between the covariate and the response, in the sense that the regression parameter $\beta$ is the same over the entire covariate domain. In practice, the situation can be more complicated: for instance, the regression parameter may differ from sub-population to sub-population within a large (super-) population. Some common techniques to account for such heterogeneity include mixed linear models, introducing an interaction effect, or fitting separate models within each sub-population, which corresponds to a supervised classification setting where the true groups (sub-populations) are \emph{a priori} known.
\newline
\newline
\indent A more difficult scenario arises when the sub-populations are unknown, in which case regression and classification must happen simultaneously. Consider the scenario where the conditional mean of $Y_i$ given $X_i$ differs across unknown sub-groups. A well-studied treatment of this problem -- the so-called change point problem -- considers a simple thresholding model where membership in a sub-group is determined by whether a real-valued observable $X$ falls to the left or right of an unknown parameter $\gamma$. More recently, there has been work on multi-dimensional covariates, namely when membership is determined by the side of a hyperplane, with unknown normal vector $\theta_0$, on which a random vector $X$ falls. A concrete example appears in \cite{wei2014latent}, who extend the linear thresholding model due to \cite{kang2011new} to general dimensions:
\begin{eqnarray}\label{eq:weimodel}
Y=\mu_1\cdot 1_{X^{\top}\theta_0\geq 0}+\mu_2\cdot 1_{X^{\top}\theta_0<0}+\varepsilon\,,
\end{eqnarray}
and studied computational algorithms and consistency of the same. This model and others with similar structure, called \emph{change plane models}, are useful in various fields of research, e.g. modeling treatment effect heterogeneity in drug treatment (\cite{imai2013estimating}), modeling sociological data on voting and employment (\cite{imai2013estimating}), or cross country growth regressions in econometrics
(\cite{seo2007smoothed}).
\newline
\newline
\indent Other aspects of this model have also been investigated. \cite{fan2017change} examined the change plane model from the statistical testing point of view, with the null hypothesis being the absence of a separating hyperplane. They proposed a test statistic, studied its asymptotic distribution and provided sample size recommendations for achieving target values of power. \cite{li2018multi} extended the change point detection problem in the multi-dimensional setup by considering the case where $X^{\top}\theta_0$ forms a multiple change point data sequence.
The key difficulty with change plane type models is the inherent discontinuity of the optimization criterion, in which the parameter of interest appears as an argument of an indicator function, rendering the optimization extremely hard. To alleviate this, one option is to kernel-smooth the indicator function, an approach adopted by Seo and Linton \cite{seo2007smoothed} in a version of the change-plane problem, motivated by earlier results of Horowitz \cite{horowitz1992smoothed} on a smoothed version of the maximum score estimator. Their model has an additive structure of the form:
\[Y_t = \beta^{\top}X_t + \delta^{\top} \tilde{X}_t \mathds{1}_{Q_t^{\top} \psi > 0} + \epsilon_t \,,\]
where $\psi$ is the (fixed) change-plane parameter, and $t$ can be viewed as a time index. Under a set of assumptions on the model (Assumptions 1 and 2 of their paper), they showed asymptotic normality of their estimator of $\psi$ obtained by minimizing a smoothed least squares criterion
that uses a differentiable distribution function $\mathcal{K}$. The rate of convergence of $\hat{\psi}$ to the truth was shown to be $\sqrt{n/\sigma_n}$ where $\sigma_n$ was the bandwidth parameter used to smooth the least squares function. As noted in their Remark 3, under the special case of i.i.d. observations, their requirement that $\log n/(n \sigma_n^2) \rightarrow 0$ translates to a maximal convergence rate of $n^{3/4}$ up to a logarithmic factor. The work of \cite{li2018multi} who considered multiple parallel change planes (determined by a fixed dimensional normal vector) and high dimensional linear models in the regions between consecutive hyperplanes also builds partly upon the methods of \cite{seo2007smoothed} and obtains the same (almost) $n^{3/4}$ rate for the normal vector (as can be seen by putting Condition 6 in their paper in conjunction with the conclusion of Theorem 3).
\\\\
While it is established that the condition $n\sigma_n^2 \to \infty$ is sufficient (up to a log factor) for achieving asymptotic normality of the smoothed estimator, there is no result in the existing literature ascertaining whether it is necessary. Intuitively speaking, the right necessary condition for asymptotic normality ought to be $n \sigma_n \to \infty$, as this ensures a growing number of observations in a $\sigma_n$-neighborhood of the true hyperplane, allowing the central limit theorem to kick in. In this paper we \emph{bridge this gap} by proving that asymptotic normality of the smoothed change plane estimator is, in fact, achievable with $n \sigma_n \to \infty$.
This implies that the best possible rate of convergence of the smoothed estimator can be arbitrarily close to $n^{-1}$, the minimax optimal rate of estimation for this problem. To demonstrate this, we focus on two change plane estimation problems, one with a continuous and another with a binary response. The continuous response model we analyze here is the following:
\begin{equation}
\label{eq:regression_main_eqn}
Y_i = \beta_0^{\top}X_i + \delta_0^{\top}X_i\mathds{1}_{Q_i^{\top}\psi_0 > 0} + {\epsilon}_i \,.
\end{equation}
for i.i.d. observations $\{(X_i, Y_i, Q_i)\}_{i=1}^n$, where the zero-mean transitory shocks satisfy ${\epsilon}_i \rotatebox[origin=c]{90}{$\models$} (X_i, Q_i)$. Our calculations can be easily extended to the case where the covariates on either side of the change hyperplane are different and $\mathbb{E}[{\epsilon} \mid X, Q] = 0$, at the cost of more tedious bookkeeping. As this generalization adds little of conceptual interest to our proof, we posit the simpler model for ease of understanding.
As the parameter $\psi_0$ is only identifiable up to its norm, we assume that its first co-ordinate is $1$ (along the lines of \cite{seo2007smoothed}), which removes one degree of freedom and makes the parameter identifiable.
\\\\
To illustrate that a similar phenomenon transpires with a binary response, we also study a canonical version of such a model, which can be briefly described as follows: the covariate $Q \sim P$, where $P$ is a distribution on $\mathbb{R}^d$, and the conditional distribution of $Y$ given $Q$ is modeled as:
\begin{equation}
\label{eq:classification_eqn}
P(Y=1|Q) = \alpha_0 \mathds{1}(Q^{\top}\psi_0 \le 0) + \beta_0\mathds{1}(Q^{\top}\psi_0 > 0)
\end{equation}
for some parameters $\alpha_0, \beta_0\in (0,1)$ and $\psi_0\in\mathbb{R}^d$ (with first co-ordinate equal to one for identifiability, as in the continuous response model), the latter being of primary interest for estimation.
This model is identifiable only up to a permutation of $(\alpha_0, \beta_0)$, so we further assume $\alpha_0 < \beta_0$. For both models, we show that $\sqrt{n/\sigma_n}(\hat \psi - \psi_0)$ converges to a zero-mean normal distribution as long as $n \sigma_n \to \infty$; the calculations for the binary model are relegated entirely to Appendix \ref{sec:supp_classification}.
\\\\
{\bf Organization of the paper:} The rest of the paper is organized as follows: In Section \ref{sec:theory_regression} we present the methodology, the statement of the asymptotic distributions and a sketch of the proof for the continuous response model \eqref{eq:regression_main_eqn}. In Section \ref{sec:classification_analysis} we briefly describe the binary response model \eqref{eq:classification_eqn} and the related assumptions, whilst the details can be found in the supplementary document. In Section \ref{sec:simulation} we present simulation results, both for the binary and the continuous response models, to study the effect of the bandwidth on the quality of the normal approximation in finite samples. In Section \ref{sec:real_data}, we present a real data analysis, examining the effect of income and urbanization on $CO_2$ emissions in different countries.
\\\\
{\bf Notations: } Before delving into the technical details, we first set up some notation. We assume from now on that $X \in {\bbR}^p$ and $Q \in {\bbR}^d$. For any vector $v$, we define $\tilde v$ as the vector consisting of all co-ordinates of $v$ except the first one. We denote by $K$ the kernel function used to smooth the indicator function. For any matrix $A$, we denote by $\|A\|_2$ (or $\|A\|_F$) its Frobenius norm and by $\|A\|_{op}$ its operator norm. For any vector, $\| \cdot \|_2$ denotes its $\ell_2$ norm.
\input{regression.tex}
\input{classification.tex}
\section{Simulation studies}
\label{sec:simulation}
In this section, we present simulation results to analyze the effect of the choice of $\sigma_n$ on the finite sample quality of the normal approximation, i.e. Berry--Esseen type behavior. If we choose a smaller $\sigma_n$, the rate of convergence is accelerated, but the normal approximation error at smaller sample sizes will be higher, as we do not have enough observations in the vicinity of the change hyperplane for the CLT to kick in. This problem is alleviated by choosing $\sigma_n$ larger, which, on the other hand, compromises the convergence rate. Ideally, a Berry--Esseen type bound would quantify this trade-off, but deriving one requires a different set of techniques and is left as an open problem. In our simulations, we generate data from the following setup:
\begin{enumerate}
\item Set $n = 50000$, $p = 3$, $\alpha_0 = 0.25$, $\beta_0 = 0.75$ and some $\theta_0 \in \mathbb{R}^p$ with first co-ordinate $= 1$.
\item Generate $X_1, \dots, X_n \sim \mathcal{N}(0, I_p)$.
\item Generate $Y_i \sim \textbf{Bernoulli}\left(\alpha_0\mathds{1}_{X_i^{\top}\theta_0 \le 0} + \beta_0 \mathds{1}_{X_i^{\top}\theta_0 > 0}\right)$.
\item Estimate $\hat \theta$ by minimizing $\mathbb{M}_n(\theta)$ (replacing $\gamma$ by $\bar Y$) based on $\{(X_i, Y_i)\}_{i=1}^n$ for different choices of $\sigma_n$.
\end{enumerate}
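A condensed sketch of Steps 1 -- 4 is given below. For speed of illustration we use a smaller sample size and an off-the-shelf simplex optimizer, so the specific numerical choices here are illustrative rather than those of the actual study; the criterion is the smoothed estimating function $(Y - \gamma)(1 - \Phi(X^{\top}\theta/\sigma_n))$ with $\gamma$ replaced by $\bar Y$:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n, p = 2000, 3                             # smaller n than in the study, for speed
alpha0, beta0 = 0.25, 0.75
theta0 = np.array([1.0, 0.5, -0.5])        # first coordinate fixed at 1
sigma_n = n ** (-0.7)

# Steps 2-3: generate covariates and Bernoulli responses
X = rng.normal(size=(n, p))
prob = np.where(X @ theta0 > 0, beta0, alpha0)
Y = rng.binomial(1, prob)
gamma = Y.mean()                           # plug-in for gamma

def M_n(theta_free):
    """Smoothed criterion; optimization is over the d-1 free coordinates."""
    theta = np.concatenate(([1.0], theta_free))
    return np.mean((Y - gamma) * (1.0 - norm.cdf(X @ theta / sigma_n)))

# Step 4: estimate theta by minimizing M_n
fit = minimize(M_n, np.zeros(p - 1), method="Nelder-Mead")
theta_hat = np.concatenate(([1.0], fit.x))
```

Repeating the data generation and fit across replications, as in Steps 2 -- 4, yields the sample of estimates used for the qqplots.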
We repeat Steps 2 -- 4 one hundred times to obtain $\hat \theta_1, \dots, \hat \theta_{100}$. Define $s_n$ to be the standard deviation of $\{\hat \theta_i\}_{i=1}^{100}$. Figures \ref{fig:co2} and \ref{fig:co3} show the qqplots of $\tilde \theta_i = (\hat \theta_i - \theta_0)/s_n$ against the standard normal for four different choices of $\sigma_n$: $n^{-0.6}, n^{-0.7}, n^{-0.8}, n^{-0.9}$.
\begin{figure}
\centering
\includegraphics[scale=0.4]{Coordinate_2}
\caption{In this figure, we present qqplots for the second co-ordinate of $\theta_0$ for the different choices of $\sigma_n$ mentioned at the top of each plot.}
\label{fig:co2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{Coordinate_3}
\caption{In this figure, we present qqplots for the third co-ordinate of $\theta_0$ for the different choices of $\sigma_n$ mentioned at the top of each plot.}
\label{fig:co3}
\end{figure}
It is evident that smaller values of $\sigma_n$ yield a poorer normal approximation. Although our theory shows that asymptotic normality holds as long as $n\sigma_n \to \infty$, in practice we recommend choosing $\sigma_n$ such that $n\sigma_n \ge 30$ for the central limit theorem to take effect.
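This rule of thumb is easy to operationalize: for a candidate bandwidth $\sigma_n = n^{-a}$ one simply checks whether $n^{1-a} \ge 30$. A small sketch (the grid of exponents is illustrative):

```python
for n in [500, 5000, 50000]:
    for a in [0.6, 0.7, 0.8, 0.9]:
        sigma_n = n ** (-a)
        ok = n * sigma_n >= 30             # rule of thumb: n*sigma_n >= 30
        print(f"n={n:6d} a={a} n*sigma_n={n * sigma_n:8.1f} ok={ok}")
```

In particular, at $n = 50000$ the choice $\sigma_n = n^{-0.9}$ violates the rule while $\sigma_n = n^{-0.6}$ satisfies it, consistent with the qqplots above.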
\section{Conclusion}
\label{sec:conclusion}
In this paper we have established that under mild assumptions the kernel-smoothed change plane estimator is asymptotically normal with the near optimal rate $n^{-1}$. To the best of our knowledge, the state of the art result in this genre of problems is due to \cite{seo2007smoothed}, who demonstrate a best possible rate of about $n^{-3/4}$ for i.i.d. data. The main difference between their approach and ours lies in the proof of Lemma \ref{bandwidth}. Our techniques are based upon modern empirical process theory, which allows us to consider much smaller bandwidths $\sigma_n$ than those in \cite{seo2007smoothed}, who appear to require larger values to achieve the result, possibly owing to their reliance on the techniques developed in \cite{horowitz1992smoothed}. Although we have established that it is possible to attain asymptotic normality with very small bandwidths, the finite sample approximation to normality (e.g. in the sense of a Berry-Esseen bound) can be poor in that regime, as is also evident from our simulations.
\section{Real data analysis}
\label{sec:real_data}
We illustrate our method using cross-country data on pollution (carbon-dioxide), income and urbanization obtained from the World Development Indicators (WDI), World Bank. The Environmental Kuznets Curve hypothesis (EKC henceforth), a popular and ongoing area of research in environmental economics, posits that at an initial stage of economic development pollution increases with economic growth, and then diminishes when society’s priorities change, leading to an inverted U-shaped relation between income (measured via real GDP per capita) and pollution. The hypothesis has led to numerous empirical papers (i) testing the hypothesis (whether the relation is inverted U-shaped for countries/regions of interest in the sample), (ii) exploring the threshold level of income at which pollution starts falling, as well as (iii) examining the countries/regions which belong to the upward rising part versus the downward sloping part of the inverted U-shape, if at all. The studies have been performed using US state level data or cross-country data (e.g. \cite{shafik1992economic}, \cite{millimet2003environmental}, \cite{aldy2005environmental}, \cite{lee2019nonparametric},\cite{boubellouta2021cross}, \cite{list1999environmental}, \cite{grossman1995economic}, \cite{bertinelli2005environmental}, \cite{azomahou2006economic}, \cite{taskin2000searching} to name a few). While some of these papers have found evidence in favor of the EKC hypothesis (inverted U-shaped income-pollution relation), others have found evidence against it (monotonically increasing or other shapes for the relation). The results often depend on countries/regions in the sample, period of analysis, as well as the pollutant studied.
\\\\
\noindent
While income-pollution remains the focal point of most EKC studies, several of them have also included urban agglomeration (UA) or some other measure of urbanization as an important control variable, especially while investigating carbon emissions.\footnote{Although income growth is connected to urbanization, countries are heterogeneous and follow different growth paths due to their varying geographical structures, population densities, infrastructure and ownership of resources, making a case for using urbanization as another control covariate in the income-pollution study. The income growth paths of oil rich UAE, manufacturing based China, service based Singapore, and low population density Canada (with vast land) are all different.} (see for example, \cite{shafik1992economic}, \cite{boubellouta2021cross} and \cite{liang2019urbanization}). The theory of ecological economics posits potentially varying effects of increased urbanization on pollution: (i) urbanization leading to more pollution (due to its close links with sanitation, dense transportation, and proximity to polluting manufacturing industries), (ii) urbanization potentially leading to less pollution based on ‘compact city theory’ (see \cite{burton2000compact}, \cite{capello2000beyond}, \cite{sadorsky2014effect}), which explains the potential benefits of increased urbanization in terms of economies of scale (for example, replacing dependence on automobiles with large scale subway systems, using multi-storied buildings instead of single unit houses, and keeping more open green space). \cite{liddle2010age}, using 17 developed countries, find a positive and significant effect of urbanization on pollution. On the contrary, using a set of 69 countries \cite{sharma2011determinants} find a negative and significant effect of urbanization on pollution, while \cite{du2012economic} find an insignificant effect of urbanization on carbon emission.
Using various empirical strategies \cite{sadorsky2014effect} conclude that the positive and negative effects of urbanization on carbon pollution may cancel out depending on the countries involved, often leaving insignificant effects on pollution. They also note that many countries are yet to achieve a sizeable level of urbanization, which presumably explains why many empirical works using less developed countries find an insignificant effect of urbanization. In summary, based on the existing literature, both the relationship between urbanization and pollution and the relationship between income and pollution appear to depend largely on the set of countries considered in the sample. This motivates us to use UA along with income in our change plane model for analyzing carbon-dioxide emission to plausibly separate the countries into two regimes.
\\\\
\noindent
Following the broad literature we use pollution emission per capita (carbon-dioxide measured in metric tons per capita) as the dependent variable and real GDP per capita (measured in 2010 US dollars), its square (as is done commonly in the EKC literature) and a popular measure of urbanization, namely urban agglomeration (UA)\footnote{The exact definition can be found in the World Development Indicators database from the World Bank website.} as covariates (in our notation $X$) in our regression. In light of the preceding discussions we fit a change plane model comprising real GDP per capita and UA (in our notation $Q$). To summarize the setup, we use the continuous response model as described in equation \eqref{eq:regression_main_eqn}, i.e.
\begin{align*}
Y_i & = X_i^{\top}\beta_0 + X_i^{\top}\delta_0\mathds{1}_{Q_i^{\top}\psi_0 > 0} + {\epsilon}_i \\
& = X_i^{\top}\beta_0\mathds{1}_{Q_i^{\top}\psi_0 \le 0} + X_i^{\top}(\beta_0 + \delta_0)\mathds{1}_{Q_i^{\top}\psi_0 > 0} + {\epsilon}_i
\end{align*}
with the per capita $CO_2$ emission in metric tons as $Y$; per capita GDP, square of per capita GDP and UA as $X$ (hence $X \in \mathbb{R}^3$); and finally, per capita GDP and UA as $Q$ (hence $Q \in \mathbb{R}^2$). Observe that $\beta_0$ represents the regression coefficients corresponding to the countries with $Q_i^{\top}\psi_0 \le 0$ (henceforth denoted by Group 1) and $(\beta_0+ \delta_0)$ represents the regression coefficients corresponding to the countries with $Q_i^{\top}\psi_0 > 0$ (henceforth denoted by Group 2). As per our convention, in the interests of identifiability we assume $\psi_{0, 1} = 1$, where $\psi_{0,1}$ is the change plane parameter corresponding to per capita GDP. Therefore the only change plane coefficient to be estimated is $\psi_{0, 2}$, the change plane coefficient for UA. For numerical stability, we scale per capita GDP by $10^{-4}$ (consequently the square of per capita GDP is scaled by $10^{-8}$)\footnote{This scaling helps in the numerical stability of the gradient descent algorithm used to optimize the least squares criterion.}. After some pre-processing (i.e. removing rows containing NA values and countries with $100\%$ UA) we estimate the coefficients $(\beta_0, \delta_0, \psi_0)$ of our model based on data from 115 countries with $\sigma_n = 0.05$ and test the significance of the various coefficients using the methodologies described in Section \ref{sec:inference}. We present our findings in Table \ref{tab:ekc_coeff}.
\begin{table}[!h]
\centering
\begin{tabular}{|c||c||c|}
\hline
Coefficients & Estimated values & p-values \\
\hline \hline
$\beta_{0, 1}$ (\text{RGDPPC for Group 1}) & 6.98555060 & 4.961452e-10 \\
$\beta_{0, 2}$ (\text{squared RGDPPC for Group 1}) & -0.43425991 & 7.136484e-02 \\
$\beta_{0, 3}$ (\text{UA for Group 1}) & -0.02613813 & 1.066065e-01
\\
$\beta_{0, 1} + \delta_{0, 1}$ (\text{RGDPPC for Group 2}) & 2.0563337 & 0.000000e+00\\
$\beta_{0, 2} + \delta_{0, 2}$ (\text{squared RGDPPC for Group 2}) & -0.1866490 & 4.912843e-04 \\
$\beta_{0, 3} + \delta_{0, 3}$ (\text{UA for Group 2}) & 0.1403171& 1.329788e-05 \\
$\psi_{0,2 }$ (\text{Change plane coeff for UA}) & -0.07061785 & 0.000000e+00\\
\hline
\end{tabular}
\caption{Table of the estimated regression and change plane coefficients along with their p-values.}
\label{tab:ekc_coeff}
\end{table}
\\\\
\noindent
From the above analysis, we find that GDP has a significantly positive effect on pollution for both groups of countries. The effect of its squared term is negative for both groups, but it is significant only for Group-2, consisting of mostly high income countries, whereas it is insignificant (at the 5\% level) for the Group-1 countries (consisting of mostly low or middle income and a few high income countries). Thus, not surprisingly, we find evidence in favor of EKC for the developed countries, but not for the mixed group. Notably, Group-1 consists of a mixed set of countries like Angola, Sudan, Senegal, India, China, Israel, UAE etc., whereas Group-2 consists of rich and developed countries like Canada, USA, UK, France, Germany etc. The urban variable, on the other hand, is seen to have an insignificant effect on Group-1, which is in keeping with \cite{du2012economic}, \cite{sadorsky2014effect}. Many of these countries are yet to achieve substantial urbanization, and this is even truer for our sample period\footnote{We use 6 years average from 2010-2015 for GDP and pollution measures. Such averaging is in accordance with the cross-sectional empirical literature using cross-country/regional data and helps avoid business cycle fluctuations in GDP. It also minimizes the impacts of outlier events such as the financial crisis or great recession period. The years that we have chosen are ones for which we could find data for the largest number of countries.}. In contrast, UA has a positive and significant effect on the Group-2 (developed) countries, which is consistent with the findings of \cite{liddle2010age}, for example. Note that UA plays a crucial role in dividing the countries into different regimes, as the estimated value of $\psi_{0,2}$ is significant. Thus, we are able to partition countries into two regimes: a mostly rich group and a mixed group.
\\\\
\noindent
Note that many underdeveloped countries and poorer regions of emerging countries are still swamped with greenhouse gas emissions from burning coal, cow dung etc., and the usage of poor exhaust systems in houses and for transport. This is especially true for rural and semi-urban areas of developing countries. So even while being less urbanized compared to developed nations, their overall pollution load is high (due to inefficient energy usage and higher dependence on fossil fuels as pointed out above) and rising with income, and they are yet to reach the descending part of the inverted U-shape for the income-pollution relation. On the contrary, for countries in Group-2, the adoption of more efficient energy and exhaust systems is common in households and transportation in general, leading to eventually decreasing pollution with increasing income (supporting EKC). Both results are in line with the existing EKC literature. Additionally, we find that the countries in Group 2 are yet to achieve ‘compact city’ and green urbanization, a stylized fact that is confirmed by the positive and significant effect of UA on pollution in our analysis.
\\\\
\noindent
There are many future potential applications of our method in economics. Similar analyses can be performed for other pollutants (such as sulfur emission, electrical waste/e-waste, nitrogen pollution etc.). While income/GDP remains a common, indeed the most crucial variable in pollution studies, other covariates (including change plane defining variables) may vary, depending on the pollutant of interest. Another potential application is identifying the determinants of family health expenses in household survey data. Families are often asked about their health expenses incurred in the past one year. An interesting case in point may be household surveys collected in India, where one finds numerous (large) joint families with several children and old people residing in the same household, and where most families are uninsured. It is often seen that health expenditure increases with income, a major factor being the costs associated with regularly performed preventative medical examinations, which are affordable only once a certain income level is reached. The important covariates here are per capita family income, family wealth, `dependency ratio' (number of children and old to the total number of people in the family) and the binary indicator of any history of major illness/hospitalizations in the family in the past year. Family income per capita and history of major illness are natural candidate covariates for defining the change plane.
\section{Binary response model}
\label{sec:classification_analysis}
Recall our binary response model in equation \eqref{eq:classification_eqn}. To estimate $\psi_0$, we resort to the following loss (without smoothing):
\begin{equation}
\label{eq:new_loss}
\mathbb{M}(\psi) = \mathbb{E}\left((Y - \gamma)\mathds{1}(Q^{\top}\psi \le 0)\right)
\end{equation}
with $\gamma \in (\alpha_0, \beta_0)$, which can be viewed as a variant of the square error loss function:
$$
\mathbb{M}(\alpha, \beta, \psi) = \mathbb{E}\left(\left(Y - \alpha\mathds{1}(Q^{\top}\psi < 0) - \beta\mathds{1}(Q^{\top}\psi > 0)\right)^2\right)\,.
$$
We establish the connection between these losses in sub-section \ref{loss_func_eq}. It is easy to prove that under fairly mild conditions (discussed later)
$\psi_0 = {\arg\min}_{\psi \in \Theta}\mathbb{M}(\psi)$, uniquely. Under the standard classification paradigm, when we know a priori that
$\alpha_0 < 1/2 < \beta_0$, we can take $\gamma = 1/2$, and in the absence of this constraint, $\bar{Y}$, which converges to some $\gamma$ between $\alpha_0$ and $\beta_0$, may be substituted in the loss function. In the rest of the paper, we confine ourselves to a known $\gamma$, and for technical simplicity, we take $\gamma = \frac{(\beta_0 + \alpha_0)}{2}$, but this assumption can be removed with more mathematical book-keeping. Thus, $\psi_0$ is estimated by:
\begin{equation}
\label{non-smooth-score}
\hat \psi = {\arg\min}_{\psi \in \Theta} \mathbb{M}_n(\psi) = {\arg\min}_{\psi \in \Theta} \frac{1}{n}\sum_{i=1}^n (Y_i - \gamma)\mathds{1}(Q_i^{\top}\psi \le 0)\,.
\end{equation}
We resort to a smooth approximation of the indicator function in \eqref{non-smooth-score} using a distribution kernel with a suitable bandwidth. The smoothed version of the population score function then becomes:
\begin{equation}
\label{eq:kernel_smoothed_pop_score}
\mathbb{M}^s(\psi) = \mathbb{E}\left((Y - \gamma)\left(1-K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right)\right)
\end{equation}
where, as in the continuous response model, we use $K(x) = \Phi(x)$, and the corresponding empirical version is:
\begin{equation}
\label{eq:kernel_smoothed_emp_score}
\mathbb{M}^s_n(\psi) = \frac{1}{n}\sum_{i=1}^n \left((Y_i - \gamma)\left(1-K\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right)\right)\right)
\end{equation}
Define $\hat{\psi}^s$ and $\psi_0^s$ to be the minimizers of the smoothed versions of the empirical (equation \eqref{eq:kernel_smoothed_emp_score}) and population (equation \eqref{eq:kernel_smoothed_pop_score}) score functions respectively. Here we only consider bandwidths satisfying $n\sigma_n \to \infty$ and $n\sigma_n^2 \to 0$. Analogous to Theorem \ref{thm:regression}, we prove the following result for the binary response model:
\begin{theorem}
\label{thm:binary}
Under Assumptions (\ref{as:distribution} - \ref{as:eigenval_bound}):
$$
\sqrt{\frac{n}{\sigma_n}}\left(\hat{\psi}^s - \psi_0\right) \Rightarrow N(0, \Gamma) \,,
$$
for some non-stochastic matrix $\Gamma$, which will be defined explicitly in the proof.
\end{theorem}
We have therefore established that in the regime $n\sigma_n \to \infty$ and $n\sigma_n^2 \to 0$, it is possible to attain asymptotic normality using a smoothed estimator for the binary response model.
\section{Inferential methods}
\label{sec:inference}
We draw inferences on $(\beta_0, \delta_0, \psi_0)$ by resorting to techniques similar to those in \cite{seo2007smoothed}. For the continuous response model, we need consistent estimators of $V^{\gamma}, Q^{\gamma}, V^{\psi}, Q^{\psi}$ (see Lemma \ref{conv-prob} for the definitions) for hypothesis testing. By virtue of the aforementioned lemma, we can estimate $Q^{\gamma}$ and $Q^{\psi}$ as follows:
\begin{align*}
\hat Q^{\gamma} & = \nabla^2_{\gamma} \mathbb{M}_n^s(\hat \theta) \,, \\
\hat Q^{\psi} & = \sigma_n \nabla^2_{\psi} \mathbb{M}_n^s(\hat \theta) \,.
\end{align*}
The consistency of the above estimators is established in the proof of Lemma \ref{conv-prob}. For the other two parameters $V^{\gamma}, V^{\psi}$ we use the following estimators:
\begin{align*}
\hat V^{\psi} & = \frac{1}{n\sigma_n^2}\sum_{i=1}^n\left(\left(Y_i - X_i^{\top}(\hat \beta + \hat \delta)\right)^2 - \left(Y_i- X_i^{\top}\hat \beta\right)^2\right)^2\tilde Q_i \tilde Q_i^{\top}\left(K'\left(\frac{Q_i^{\top}\hat \psi}{\sigma_n}\right)\right)^2 \\
\hat V^{\gamma} & = \hat \sigma^2_{\epsilon} \begin{pmatrix} \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top} & \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top}\mathds{1}_{Q_i^{\top}\hat \psi > 0} \\ \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top}\mathds{1}_{Q_i^{\top}\hat \psi > 0} & \frac{1}{n}\sum_{i=1}^n X_iX_i^{\top}\mathds{1}_{Q_i^{\top}\hat \psi > 0} \end{pmatrix}
\end{align*}
where $\hat \sigma^2_{\epsilon}$ can be obtained as $(1/n)\sum_{i=1}^n(Y_i - X_i^{\top}\hat \beta - X_i^{\top}\hat \delta \mathds{1}(Q_i^{\top}\hat \psi > 0))^2$, i.e. the residual sum of squares scaled by $1/n$. The explicit value of $V^{\gamma}$ (as derived in equation \eqref{eq:def_v_gamma} in the proof of Lemma \ref{asymp-normality}) is:
$$
V^{\gamma} = \sigma_{\epsilon}^2 \begin{pmatrix}\mathbb{E}\left[XX^{\top}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \\
\mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] & \mathbb{E}\left[XX^{\top}\mathds{1}_{Q^{\top}\psi_0 > 0}\right] \end{pmatrix}
$$
Therefore, the consistency of $\hat V^{\gamma}$ is immediate from the law of large numbers. The consistency of $\hat V^{\psi}$ follows via arguments similar to those employed in proving Lemma \ref{conv-prob}, but under somewhat more stringent moment conditions: in particular, we need $\mathbb{E}[\|X\|^8] < \infty$ and $\mathbb{E}[(X^{\top}\delta_0)^k \mid Q]$ to be Lipschitz functions of $Q$ for $1 \le k \le 8$. The inferential techniques for the classification model are similar and hence are skipped to avoid repetition.
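For concreteness, the plug-in estimator $\hat V^{\psi}$ above can be sketched in code as follows. This is our own illustrative implementation, not part of the paper's software; in particular we assume $\tilde Q_i$ collects the coordinates of $Q_i$ corresponding to the free change plane parameters (all but the first, which is fixed at $1$), and we use $K = \Phi$, so $K' = \phi$, the standard normal density.

```python
import numpy as np
from scipy.stats import norm

def vhat_psi(Y, X, Q, beta_hat, delta_hat, psi_hat, sigma_n):
    """Plug-in estimate of V^psi for the continuous response model:
    (1/(n sigma_n^2)) * sum_i [ (Y_i - X_i'(beta+delta))^2 - (Y_i - X_i'beta)^2 ]^2
                              * Qtilde_i Qtilde_i' * K'(Q_i'psi / sigma_n)^2
    with K = Phi (so K' = phi), and Qtilde_i the free coordinates of Q_i."""
    n = len(Y)
    r2 = (Y - X @ (beta_hat + delta_hat)) ** 2 - (Y - X @ beta_hat) ** 2
    w = norm.pdf(Q @ psi_hat / sigma_n)       # K'(Q_i' psi / sigma_n)
    Qt = Q[:, 1:]                             # first coordinate of psi fixed at 1
    # sum of weighted outer products Qtilde_i Qtilde_i'
    M = (r2 ** 2 * w ** 2)[:, None, None] * (Qt[:, :, None] * Qt[:, None, :])
    return M.sum(axis=0) / (n * sigma_n ** 2)
```

Since each summand is a nonnegative multiple of an outer product, the resulting matrix is positive semi-definite by construction, as a variance estimate should be.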
\section{Introduction}
The simple linear regression model assumes a uniform linear relationship between the covariate and the response, in the sense that the regression parameter $\beta$ is the same over the entire covariate domain. In practice, the situation can be more complicated: for instance, the regression parameter may differ from sub-population to sub-population within a large (super-) population. Some common techniques to account for such heterogeneity include mixed linear models, the introduction of interaction effects, or fitting different models within each sub-population; the latter corresponds to a supervised classification setting where the true groups (sub-populations) are \emph{a priori known}.
\newline
\newline
\indent A more difficult scenario arises when the sub-populations are unknown, in which case regression and classification must happen simultaneously. Consider the scenario where the conditional mean of $Y_i$ given $X_i$ differs across unknown sub-groups. A well-studied treatment of this problem -- the so-called change point problem -- considers a simple thresholding model where membership in a sub-group is determined by whether a real-valued observable $X$ falls to the left or right of an unknown parameter $\gamma$. More recently, there has been work on multi-dimensional covariates, namely when membership is determined by which side of a hyperplane with unknown normal vector $\theta_0$ the random vector $X$ falls on. A concrete example appears in \cite{wei2014latent}, who extend the linear thresholding model due to \cite{kang2011new} to general dimensions:
\begin{eqnarray}\label{eq:weimodel}
Y=\mu_1\cdot 1_{X^{\top}\theta_0\geq 0}+\mu_2\cdot 1_{X^{\top}\theta_0<0}+\varepsilon\,,
\end{eqnarray}
and study computational algorithms and the consistency of the resulting estimates. This model and others with similar structure, called \emph{change plane models}, are useful in various fields of research, e.g. modeling treatment effect heterogeneity in drug treatment (\cite{imai2013estimating}), modeling sociological data on voting and employment (\cite{imai2013estimating}), or cross country growth regressions in econometrics
(\cite{seo2007smoothed}).
\newline
\newline
\indent Other aspects of this model have also been investigated. \cite{fan2017change} examined the change plane model from the statistical testing point of view, with the null hypothesis being the absence of a separating hyperplane. They proposed a test statistic, studied its asymptotic distribution and provided sample size recommendations for achieving target values of power. \cite{li2018multi} extended the change point detection problem in the multi-dimensional setup by considering the case where $X^{\top}\theta_0$ forms a multiple change point data sequence.
The key difficulty with change plane type models is the inherent discontinuity of the optimization criteria involved: the parameter of interest appears as an argument to an indicator function, rendering the optimization extremely hard. To alleviate this, one option is to kernel-smooth the indicator function, an approach that was adopted by Seo and Linton \cite{seo2007smoothed} in a version of the change plane problem, motivated by earlier results of Horowitz \cite{horowitz1992smoothed} that dealt with a smoothed version of the maximum score estimator. Their model has an additive structure of the form:
\[Y_t = \beta^{\top}X_t + \delta^{\top} \tilde{X}_t \mathds{1}_{Q_t^{\top} \psi > 0} + \epsilon_t \,,\]
where $\psi$ is the (fixed) change-plane parameter, and $t$ can be viewed as a time index. Under a set of assumptions on the model (Assumptions 1 and 2 of their paper), they showed asymptotic normality of their estimator of $\psi$ obtained by minimizing a smoothed least squares criterion
that uses a differentiable distribution function $\mathcal{K}$. The rate of convergence of $\hat{\psi}$ to the truth was shown to be $\sqrt{n/\sigma_n}$ where $\sigma_n$ was the bandwidth parameter used to smooth the least squares function. As noted in their Remark 3, under the special case of i.i.d. observations, their requirement that $\log n/(n \sigma_n^2) \rightarrow 0$ translates to a maximal convergence rate of $n^{3/4}$ up to a logarithmic factor. The work of \cite{li2018multi} who considered multiple parallel change planes (determined by a fixed dimensional normal vector) and high dimensional linear models in the regions between consecutive hyperplanes also builds partly upon the methods of \cite{seo2007smoothed} and obtains the same (almost) $n^{3/4}$ rate for the normal vector (as can be seen by putting Condition 6 in their paper in conjunction with the conclusion of Theorem 3).
\\\\
While it is established that the condition $n\sigma_n^2 \to \infty$ is sufficient (up to a log factor) for achieving asymptotic normality of the smoothed estimator, there is no result in the existing literature that ascertains its necessity. Intuitively speaking, the right condition for asymptotic normality ought to be $n \sigma_n \to \infty$, as this ensures a growing number of observations in a $\sigma_n$-neighborhood of the true hyperplane, allowing the central limit theorem to kick in. In this paper we \emph{bridge this gap} by proving that asymptotic normality of the smoothed change plane estimator is, in fact, achievable with $n \sigma_n \to \infty$.
This implies that the best possible rate of convergence of the smoothed estimator can be arbitrarily close to $n^{-1}$, the minimax optimal rate of estimation for this problem. To demonstrate this, we focus on two change plane estimation problems, one with a continuous and another with a binary response. The continuous response model we analyze here is the following:
\begin{equation}
\label{eq:regression_main_eqn}
Y_i = \beta_0^{\top}X_i + \delta_0^{\top}X_i\mathds{1}_{Q_i^{\top}\psi_0 > 0} + {\epsilon}_i \,,
\end{equation}
for i.i.d. observations $\{(X_i, Y_i, Q_i)\}_{i=1}^n$, where the zero-mean transitory shocks ${\epsilon}_i \rotatebox[origin=c]{90}{$\models$} (X_i, Q_i)$. Our calculations can be easily extended, with more tedious bookkeeping, to the case where the covariates on either side of the change hyperplane are different and $\mathbb{E}[{\epsilon} \mid X, Q] = 0$. As this generalization adds little of conceptual interest to our proof, we posit the simpler model for ease of understanding.
As the parameter $\psi_0$ is identifiable only up to its norm, we assume that its first co-ordinate is $1$ (along the lines of \cite{seo2007smoothed}), which removes one degree of freedom and makes the parameter identifiable.
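As a concrete illustration, data from model \eqref{eq:regression_main_eqn} can be simulated as follows. This is a minimal sketch: the dimensions, parameter values, Gaussian design, and noise level are illustrative assumptions, not choices made in the paper; note the first co-ordinate of $\psi_0$ pinned to $1$ for identifiability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and parameters (not from the paper): p = 3, d = 2.
n, p, d = 2000, 3, 2
beta0 = np.array([1.0, -0.5, 0.25])
delta0 = np.array([0.5, 0.5, -1.0])
psi0 = np.array([1.0, -0.7])          # first co-ordinate fixed at 1

X = rng.normal(size=(n, p))
Q = rng.normal(size=(n, d))
eps = rng.normal(scale=0.5, size=n)   # zero-mean noise independent of (X, Q)

# Y_i = X_i'beta0 + X_i'delta0 * 1{Q_i'psi0 > 0} + eps_i
ind = (Q @ psi0 > 0).astype(float)
Y = X @ beta0 + (X @ delta0) * ind + eps
```

The regression coefficient on $X$ thus switches from $\beta_0$ to $\beta_0 + \delta_0$ as $Q$ crosses the hyperplane $\{q : q^{\top}\psi_0 = 0\}$.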
\\\\
To illustrate that a similar phenomenon transpires with binary response, we also study a canonical version of such a model, which can be briefly described as follows: the covariate $Q \sim P$, where $P$ is a distribution on $\mathbb{R}^d$, and the conditional distribution of $Y$ given $Q$ is modeled as:
\begin{equation}
\label{eq:classification_eqn}
P(Y=1|Q) = \alpha_0 \mathds{1}(Q^{\top}\psi_0 \le 0) + \beta_0\mathds{1}(Q^{\top}\psi_0 > 0)
\end{equation}
for some parameters $\alpha_0, \beta_0\in (0,1)$ and $\psi_0\in\mathbb{R}^d$ (with first co-ordinate equal to one for identifiability, as in the continuous response model), the latter being of primary interest for estimation.
This model is identifiable only up to a permutation of $(\alpha_0, \beta_0)$, so we further assume $\alpha_0 < \beta_0$. For both models, we show that $\sqrt{n/\sigma_n}(\hat \psi - \psi_0)$ converges to a zero-mean normal distribution as long as $n \sigma_n \to \infty$; the calculations for the binary model are relegated entirely to Appendix \ref{sec:supp_classification}.
\\\\
{\bf Organization of the paper:} The rest of the paper is organized as follows: In Section \ref{sec:theory_regression} we present the methodology, the statement of the asymptotic distributions and a sketch of the proof for the continuous response model \eqref{eq:regression_main_eqn}. In Section \ref{sec:classification_analysis} we briefly describe the binary response model \eqref{eq:classification_eqn} and the related assumptions, whilst the details can be found in the supplementary document. In Section \ref{sec:simulation} we present some simulation results, both for the binary and the continuous response models, to study the effect of the bandwidth on the quality of the normal approximation in finite samples. In Section \ref{sec:real_data}, we present a real data analysis where we analyze the effect of income and urbanization on $\mathrm{CO}_2$ emissions in different countries.
\\\\
{\bf Notations: } Before delving into the technical details, we first set up some notation. We assume from now on that $X \in {\bbR}^p$ and $Q \in {\bbR}^d$. For any vector $v$, we denote by $\tilde v$ the vector consisting of all its co-ordinates except the first one. We denote by $K$ the kernel function used to smooth the indicator function. For any matrix $A$, we denote by $\|A\|_2$ (or $\|A\|_F$) its Frobenius norm and by $\|A\|_{op}$ its operator norm. For any vector, $\| \cdot \|_2$ denotes its $\ell_2$ norm.
\input{regression.tex}
\input{classification.tex}
\section{Simulation studies}
\label{sec:simulation}
In this section, we present some simulation results to analyse the effect of the choice of $\sigma_n$ on the finite sample quality of the normal approximation, i.e. Berry-Esseen type behavior. If we choose a smaller $\sigma_n$, the rate of convergence is accelerated, but the normal approximation error at smaller sample sizes will be higher, as we do not have enough observations in the vicinity of the change hyperplane for the CLT to kick in. This problem is alleviated by choosing a larger $\sigma_n$, which, on the other hand, compromises the convergence rate. Ideally, a Berry-Esseen type bound would quantify this trade-off, but establishing one requires a different set of techniques and is left as an open problem. In our simulations, we generate data from the following setup:
\begin{enumerate}
\item Set $n = 50000, p = 3, \alpha_0 = 0.25, \beta_0 = 0.75$ and some $\theta_0 \in \mathbb{R}^p$ with first co-ordinate $ = 1$.
\item Generate $X_1, \dots, X_n \sim \mathcal{N}(0, I_p)$.
\item Generate $Y_i \sim \textbf{Bernoulli}\left(\alpha_0\mathds{1}_{X_i^{\top}\theta_0 \le 0} + \beta_0 \mathds{1}_{X_i^{\top}\theta_0 > 0}\right)$.
\item Estimate $\hat \theta$ by minimizing $\mathbb{M}_n(\theta)$ (replacing $\gamma$ by $\bar Y$) based on $\{(X_i, Y_i)\}_{i=1}^n$ for different choices of $\sigma_n$.
\end{enumerate}
We repeat Steps 2--4 a hundred times to obtain $\hat \theta_1, \dots, \hat \theta_{100}$. Define $s_n$ to be the standard deviation of $\{\hat \theta_i\}_{i=1}^{100}$. Figures \ref{fig:co2} and \ref{fig:co3} show Q-Q plots of $\tilde \theta_i = (\hat \theta_i - \theta_0)/s_n$ against the standard normal for four different choices of $\sigma_n$: $n^{-0.6}, n^{-0.7}, n^{-0.8}, n^{-0.9}$.
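The simulation steps above can be sketched in code. Since the precise form of $\mathbb{M}_n(\theta)$ for the binary model is given in the appendix, the objective below is a hypothetical smoothed criterion of the same flavor (with $\gamma$ replaced by $\bar Y$ and $K = \Phi$); the smaller sample size, the particular $\theta_0$, the starting point, and the use of `scipy.optimize.minimize` are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Smaller sample size than the paper's n = 50000, for a quick illustration.
n, p = 5000, 3
alpha0, beta0 = 0.25, 0.75
theta0 = np.array([1.0, -0.5, 0.75])      # first co-ordinate fixed at 1
sigma_n = n ** (-0.7)

X = rng.normal(size=(n, p))
Y = rng.binomial(1, np.where(X @ theta0 > 0, beta0, alpha0))

# Hypothetical smoothed criterion: with gamma replaced by Ybar, it rewards
# putting the observations with above-average response on the side X'theta > 0.
Ybar = Y.mean()
def Mn(theta_rest):
    theta = np.concatenate(([1.0], theta_rest))
    K = norm.cdf(X @ theta / sigma_n)     # K = Phi, as in the paper
    return -np.mean((Y - Ybar) * K)

x0 = np.array([0.5, 0.5])
res = minimize(Mn, x0, method="Nelder-Mead")
theta_hat = np.concatenate(([1.0], res.x))
```

Repeating the generation and estimation steps and standardizing the estimates, as in Steps 2--4, then yields the Q-Q plots studied in this section.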
\begin{figure}
\centering
\includegraphics[scale=0.4]{Coordinate_2}
\caption{In this figure, we present Q-Q plots for estimating the second co-ordinate of $\theta_0$ with different choices of $\sigma_n$, mentioned at the top of each plot.}
\label{fig:co2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.4]{Coordinate_3}
\caption{In this figure, we present Q-Q plots for estimating the third co-ordinate of $\theta_0$ with different choices of $\sigma_n$, mentioned at the top of each plot.}
\label{fig:co3}
\end{figure}
It is evident that smaller values of $\sigma_n$ yield a poorer normal approximation. Although our theory shows that asymptotic normality holds as long as $n\sigma_n \to \infty$, in practice we recommend choosing $\sigma_n$ such that $n\sigma_n \ge 30$ for the central limit theorem to take effect.
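Since the bandwidths used here are of the form $\sigma_n = n^{-a}$, the recommendation $n\sigma_n \ge 30$ amounts to the arithmetic constraint $n^{1-a} \ge 30$. A quick sketch of the implied feasible exponents (the sample sizes are illustrative):

```python
import math

# For sigma_n = n^(-a), n * sigma_n = n^(1-a), so n * sigma_n >= 30
# requires a <= 1 - log(30) / log(n).
for n in [1000, 10000, 50000]:
    a_max = 1 - math.log(30) / math.log(n)
    print(f"n = {n:6d}: n * sigma_n >= 30 requires a <= {a_max:.3f}")
```

For instance, at $n = 50000$ the exponents $a = 0.6$ and $a = 0.7$ fall on opposite sides of this threshold, consistent with the deterioration visible in the Q-Q plots as $a$ grows.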
\input{Real_data_analysis.tex}
\section{Conclusion}
\label{sec:conclusion}
In this paper we have established that under some mild assumptions the kernel-smoothed change plane estimator is asymptotically normal with near optimal rate $n^{-1}$. To the best of our knowledge, the state of the art result in this genre of problems is due to \cite{seo2007smoothed}, who demonstrate a best possible rate of about $n^{-3/4}$ for i.i.d. data. The main difference between their approach and ours lies in the proof of Lemma \ref{bandwidth}. Our techniques are based upon modern empirical process theory, which allows us to consider much smaller bandwidths $\sigma_n$ than those in \cite{seo2007smoothed}, who appear to require larger values to achieve their result, possibly owing to their reliance on the techniques developed in \cite{horowitz1992smoothed}. Although we have established that asymptotic normality is achievable with very small bandwidths, we believe that the finite sample approximation to normality (e.g. in Berry-Esseen distance) could then be poor, which is also evident from our simulations.
\section{Methodology and Theory for Continuous Response Model}
\label{sec:theory_regression}
In this section we present our analysis for the continuous response model. Without smoothing, the original estimating equation is:
$$
f_{\beta, \delta, \psi}(Y, X, Q) = \left(Y - X^{\top}\beta - X^{\top}\delta\mathds{1}_{Q^{\top}\psi > 0}\right)^2
$$
and we estimate the parameters as:
\begin{align}
\label{eq:ls_estimator}
\left(\hat \beta^{LS}, \hat \delta^{LS}, \hat \psi^{LS}\right) & = {\arg\min}_{(\beta, \delta, \psi) \in \Theta} \mathbb{P}_n f_{\beta, \delta, \psi} \notag \\
& := {\arg\min}_{(\beta, \delta, \psi) \in \Theta}\mathbb{M}_n(\beta, \delta, \psi)\,.
\end{align}
where $\mathbb{P}_n$ is the empirical measure based on i.i.d. observations $\{(X_i, Y_i, Q_i)\}_{i=1}^n$ and $\Theta$ is the parameter space. Henceforth, we assume $\Theta$ is a compact subset of ${\bbR}^{2p+d}$. We also write $\theta = (\beta, \delta, \psi)$ for all the parameters collected into a single vector, and $\theta_0$ denotes the true parameter vector $(\beta_0, \delta_0, \psi_0)$. Some rearrangement of equation \eqref{eq:ls_estimator} leads to the following:
\begin{align*}
(\hat \beta^{LS}, \hat \delta^{LS}, \hat \psi^{LS}) & = {\arg\min}_{\beta, \delta, \psi} \sum_{i=1}^n \left(Y_i - X_i^{\top}\beta - X_i^{\top}\delta\mathds{1}_{Q_i^{\top}\psi > 0}\right)^2 \\
& = {\arg\min}_{\beta, \delta, \psi} \sum_{i=1}^n \left[\left(Y_i - X_i^{\top}\beta\right)^2\mathds{1}_{Q_i^{\top}\psi \le 0} \right. \\
& \hspace{14em} \left. + \left(Y_i - X_i^{\top}\beta - X_i^{\top}\delta\right)^2\mathds{1}_{Q_i^{\top}\psi > 0} \right] \\
& = {\arg\min}_{\beta, \delta, \psi} \sum_{i=1}^n \left[\left(Y_i - X_i^{\top}\beta\right)^2 + \left\{\left(Y_i - X_i^{\top}\beta - X_i^{\top}\delta\right)^2 \right. \right. \\
& \hspace{17em} \left. \left. - \left(Y_i - X_i^{\top}\beta\right)^2\right\}\mathds{1}_{Q_i^{\top}\psi > 0} \right]
\end{align*}
Typical empirical process calculations yield, under mild conditions:
$$
\|\hat \beta^{LS} - \beta_0\|_2^2 + \|\hat \delta^{LS} - \delta_0\|_2^2 + \|\hat \psi^{LS} - \psi_0 \|_2 = O_p(n^{-1})
$$
but inference is difficult as the limit distribution is unknown, and in any case, would be a highly non-standard distribution. Recall that even in the one-dimensional change point model with fixed jump size, the least squares change point estimator converges at rate $n$ to the truth with a non-standard limit distribution, namely a minimizer of a two-sided compound Poisson process (see \cite{lan2009change} for more details). To obtain a computable estimator with a tractable limiting distribution, we resort to a smooth approximation of the indicator function in \eqref{eq:ls_estimator} using a distribution kernel with a suitable bandwidth: we replace $\mathds{1}_{Q_i^{\top}\psi > 0}$ by $K(Q_i^{\top}\psi/\sigma_n)$ for an appropriate distribution function $K$ and bandwidth $\sigma_n$, i.e.
\begin{align*}
(\hat \beta^S, \hat \delta^S, \hat \psi^S) & = {\arg\min}_{\beta, \delta, \psi} \left\{ \frac1n \sum_{i=1}^n \left[\left(Y_i - X_i^{\top}\beta\right)^2 + \left\{\left(Y_i - X_i^{\top}\beta - X_i^{\top}\delta\right)^2 \right. \right. \right. \\
& \hspace{15em} \left. \left. \left. - \left(Y_i - X_i^{\top}\beta\right)^2\right\}K\left(\frac{Q_i^{\top}\psi}{\sigma_n}\right) \right] \right\} \\
& = {\arg\min}_{(\beta, \delta, \psi) \in \Theta} \mathbb{P}_n f^s_{(\beta, \delta, \psi)}(X, Y, Q) \\
& := {\arg\min}_{\theta \in \Theta} \mathbb{M}^s_n(\theta) \,.
\end{align*}
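For a concrete picture of the smoothed criterion, the sketch below simulates from model \eqref{eq:regression_main_eqn} and minimizes $\mathbb{M}^s_n$ by profiling: for fixed $\psi$, the criterion is quadratic in $(\beta, \delta)$ and can be solved by weighted least squares, and the remaining free co-ordinate of $\psi$ is found by a grid search. The grid-plus-profiling scheme, the dimensions, and all parameter values are illustrative assumptions, not the paper's computational procedure; we take $K = \Phi$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Illustrative setup (not from the paper): p = 2 regression covariates, d = 2.
n, p, d = 4000, 2, 2
beta0 = np.array([1.0, -1.0])
delta0 = np.array([2.0, 1.0])
psi0 = np.array([1.0, -0.5])              # first co-ordinate fixed at 1
sigma_n = n ** (-0.7)

X = rng.normal(size=(n, p))
Q = rng.normal(size=(n, d))
Y = X @ beta0 + (X @ delta0) * (Q @ psi0 > 0) + rng.normal(scale=0.3, size=n)

def profiled(psi2):
    """Value of M_n^s at psi = (1, psi2), profiling out (beta, delta).

    M_n^s = mean[(1 - K)(Y - X b)^2 + K (Y - X b - X d)^2] is quadratic in
    (b, d) for fixed psi, hence solvable by weighted least squares on a
    stacked design."""
    K = norm.cdf(Q @ np.array([1.0, psi2]) / sigma_n)
    w0, w1 = np.sqrt(1.0 - K), np.sqrt(K)
    A = np.vstack([w0[:, None] * np.hstack([X, np.zeros_like(X)]),
                   w1[:, None] * np.hstack([X, X])])
    b = np.concatenate([w0 * Y, w1 * Y])
    gamma, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.sum((A @ gamma - b) ** 2) / n, gamma

grid = np.linspace(-2.0, 2.0, 401)
psi2_hat = grid[int(np.argmin([profiled(t)[0] for t in grid]))]
_, gamma_hat = profiled(psi2_hat)
beta_hat, delta_hat = gamma_hat[:p], gamma_hat[p:]
```

Consistent with the theory developed below, the error in $\hat\psi$ shrinks at a much faster rate than that of $(\hat\beta, \hat\delta)$ as $n$ grows.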
Define $\mathbb{M}$ and $\mathbb{M}^s$ to be the population counterparts of $\mathbb{M}_n$ and $\mathbb{M}_n^s$ respectively, which are given by:
\begin{align*}
\mathbb{M}(\theta) & = \mathbb{E}\left(Y - X^{\top}\beta\right)^2 + \mathbb{E}\left(\left[-2\left(Y - X^{\top}\beta\right)X^{\top}\delta + (X^{\top}\delta)^2\right] \mathds{1}_{Q^{\top}\psi > 0}\right) \,, \\
\mathbb{M}^s(\theta) & = \mathbb{E}\left[(Y - X^{\top}\beta)^2 + \left\{-2(Y-X^{\top}\beta)(X^{\top}\delta) + (X^{\top}\delta)^2\right\}K\left(\frac{Q^{\top}\psi}{\sigma_n}\right)\right] \,.
\end{align*}
As noted in the proof of \cite{seo2007smoothed}, the assumption $\log n/(n\sigma_n^2) \to 0$ was only used to show:
$$
\frac{\left\|\hat \psi^s - \psi_0\right\|}{\sigma_n} = o_p(1) \,.
$$
In this paper, we show that the same conclusion can be achieved as long as $n\sigma_n \to \infty$. The rest of the proof of asymptotic normality is similar to that of \cite{seo2007smoothed}; we present it briefly for the convenience of the reader. The proof is quite long and technical, so we break it into several lemmas. We first list our assumptions:
\begin{assumption}
\label{eq:assm}
\begin{enumerate}
\item Define $f_\psi(\cdot \mid \tilde Q)$ to be the conditional density of $Q^{\top}\psi$ given $\tilde Q$ (in particular, $f_0(\cdot \mid \tilde q)$ denotes the conditional density of $Q^{\top}\psi_0$ given $\tilde Q$, and $f_s(\cdot \mid \tilde q)$ that of $Q^{\top}\psi_0^s$ given $\tilde Q$). Assume that there exists $F_+$ such that $\sup_t f_\psi(t \mid \tilde Q) \le F_+$ almost surely in $\tilde Q$, for all $\psi$ in a neighborhood of $\psi_0$ (in particular for $\psi_0^s$). Further assume that $f_\psi$ is differentiable with derivative bounded by $F_+$, again for all $\psi$ in a neighborhood of $\psi_0$ (in particular for $\psi_0^s$).
\vspace{0.1in}
\item Define $g(Q) = {\sf var}(X \mid Q)$. There exist $c_-$ and $c_+$ such that $c_- \le \lambda_{\min}(g(Q)) \le \lambda_{\max}(g(Q)) \le c_+$ almost surely. Also assume that $g$ is Lipschitz in $Q$ with constant $G_+$.
\vspace{0.1in}
\item There exists $p_+ < \infty$ and $p_- > 0, r > 0$ such that:
$$
p_- \|\psi - \psi_0\| \le \mathbb{P}\left(\text{sign}\left(Q^{\top}\psi\right) \neq \text{sign}\left(Q^{\top}\psi_0\right)\right) \le p_+ \|\psi - \psi_0\| \,,
$$
for all $\psi$ such that $\|\psi - \psi_0\| \le r$.
\vspace{0.1in}
\item For all $\psi$ in the parameter space $0 < \mathbb{P}\left(Q^{\top}\psi > 0\right) < 1$.
\vspace{0.1in}
\item Define $m_2(Q) = \mathbb{E}\left[\|X\|^2 \mid Q\right]$ and $m_4(Q) = \mathbb{E}\left[\|X\|^4 \mid Q\right]$. Assume that $m_2$ and $m_4$ are bounded Lipschitz functions of $Q$.
\end{enumerate}
\end{assumption}
\subsection{Sufficient conditions for above assumptions }
We now present some sufficient conditions for the above assumptions to hold. The first condition is essentially a condition on the conditional density of the first co-ordinate of $Q$ given all the other co-ordinates: if this conditional density is bounded and has a bounded derivative, then the first assumption is satisfied. This holds in fair generality. The second assumption states that the conditional distribution of $X$ given $Q$ has non-degenerate variance in every direction, uniformly over $Q$. This is also a very weak condition: it is satisfied, for example, if $X$ and $Q$ are independent (with $X$ having a non-degenerate covariance matrix), or if $(X, Q)$ is jointly normally distributed, to name a few cases. This condition can be weakened further by assuming only that the maximum and minimum eigenvalues of $\mathbb{E}[g(Q)]$ are bounded away from $\infty$ and $0$ respectively, at the cost of more tedious book-keeping. The third assumption is satisfied as long as $Q^{\top}\psi$ has non-zero density near the origin, while the fourth assumption merely states that the support of $Q$ is not confined to one side of any hyperplane; a simple sufficient condition for this is that $Q$ has a continuous density with non-zero value at the origin. The last assumption is analogous to the second, but for the conditional fourth moment, and is likewise satisfied in fair generality.
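The third assumption can also be checked numerically for a concrete design. The sketch below (Gaussian $Q$ in $\mathbb{R}^3$ with an illustrative $\psi_0$, both our assumptions rather than choices from the paper) estimates the sign-flip probability by Monte Carlo and confirms the linear scaling in $\|\psi - \psi_0\|$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo check of the two-sided bound in the third assumption for
# Q ~ N(0, I_3): P(sign(Q'psi) != sign(Q'psi0)) should be of order ||psi - psi0||.
psi0 = np.array([1.0, -0.5, 0.25])
v = np.array([0.0, 1.0, 2.0])
v -= (v @ psi0) / (psi0 @ psi0) * psi0    # perturbation orthogonal to psi0
v /= np.linalg.norm(v)

Q = rng.normal(size=(200_000, 3))
s0 = np.sign(Q @ psi0)

ratios = []
for t in [0.05, 0.1, 0.2]:
    flip = np.mean(np.sign(Q @ (psi0 + t * v)) != s0)
    ratios.append(flip / t)
    print(f"||psi - psi0|| = {t:.2f}: flip probability / t = {flip / t:.3f}")
```

The near-constant ratios are consistent with the assumption; indeed, for Gaussian $Q$ the flip probability equals the angle between $\psi$ and $\psi_0$ divided by $\pi$, which is linear in $\|\psi - \psi_0\|$ to first order.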
\\\\
\noindent
{\bf Kernel function and bandwidth: } We take $K(x) = \Phi(x)$ (the distribution function of a standard normal random variable) for our analysis. For the bandwidth we assume $n\sigma_n^2 \to 0$ and $n \sigma_n \to \infty$, as the other case (i.e. $n\sigma_n^2 \to \infty$) is already covered in \cite{seo2007smoothed}.
\\\\
\noindent
Based on Assumption \ref{eq:assm} and our choice of kernel and bandwidth we establish the following theorem:
\begin{theorem}
\label{thm:regression}
Under Assumption \ref{eq:assm} and the above choice of kernel and bandwidth we have:
$$
\sqrt{n}\left(\begin{pmatrix} \hat \beta^s \\ \hat \delta^s \end{pmatrix} - \begin{pmatrix} \beta_0 \\ \delta_0 \end{pmatrix} \right) \overset{\mathscr{L}}{\implies} \mathcal{N}(0, \Sigma_{\beta, \delta})
$$
and
$$
\sqrt{n/\sigma_n} \left(\hat \psi^s - \psi_0\right) \overset{\mathscr{L}}{\implies} \mathcal{N}(0, \Sigma_\psi) \,,
$$
for matrices $\Sigma_{\beta, \delta}$ and $\Sigma_\psi$ specified explicitly in the proof. Moreover, the two limits are asymptotically independent.
\end{theorem}
The proof of the theorem is relatively long, so we break it into several lemmas. We provide a roadmap of the proof in this section, while the detailed technical derivations of the supporting lemmas can be found in the Appendix. Let $\nabla \mathbb{M}_n^s(\theta)$ and $\nabla^2 \mathbb{M}_n^s(\theta)$ be the gradient and Hessian of $\mathbb{M}_n^s(\theta)$ with respect to $\theta$. As $\hat \theta^s$ minimizes $\mathbb{M}_n^s(\theta)$, the first order condition gives $\nabla \mathbb{M}_n^s(\hat \theta^s) = 0$. Using a one-step Taylor expansion we have:
\allowdisplaybreaks
\begin{align*}
\nabla \mathbb{M}_n^s(\hat \theta^s) = \nabla \mathbb{M}_n^s(\theta_0) + \nabla^2 \mathbb{M}_n^s(\theta^*)\left(\hat \theta^s - \theta_0\right) = 0 \,,
\end{align*}
i.e.
\begin{equation}
\label{eq:main_eq}
\left(\hat{\theta}^s - \theta_0\right) = -\left(\nabla^2 \mathbb{M}_n^s(\theta^*)\right)^{-1} \nabla \mathbb{M}_n^s(\theta_0)
\end{equation}
for some intermediate point $\theta^*$ between $\hat \theta^s$ and $\theta_0$.
Following the notation of \cite{seo2007smoothed}, define $D_n$ to be the diagonal matrix of dimension $(2p+d) \times (2p+d)$ whose first $2p$ diagonal entries equal $1$ and whose last $d$ diagonal entries equal $\sqrt{\sigma_n}$. Then we can write:
\begin{align}
\sqrt{n}D_n^{-1}(\hat \theta^s - \theta_0) & = - \sqrt{n}D_n^{-1}\nabla^2\mathbb{M}_n^s(\theta^*)^{-1}\nabla \mathbb{M}_n^s(\theta_0) \notag \\
\label{eq:taylor_main} & = -\begin{pmatrix} \nabla^2\mathbb{M}_n^{s, \gamma}(\theta^*) & \sqrt{\sigma_n}\nabla^2\mathbb{M}_n^{s, \gamma \psi}(\theta^*) \\
\sqrt{\sigma_n}\nabla^2\mathbb{M}_n^{s, \psi \gamma}(\theta^*) & \sigma_n\nabla^2\mathbb{M}_n^{s, \psi}(\theta^*)\end{pmatrix}^{-1}\begin{pmatrix} \sqrt{n}\nabla \mathbb{M}_n^{s, \gamma}(\theta_0) \\ \sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)\end{pmatrix}
\end{align}
where $\gamma = (\beta, \delta) \in {\bbR}^{2p}$. The following lemma establishes the asymptotic properties of $\nabla \mathbb{M}_n^s(\theta_0)$:
\begin{lemma}[Asymptotic Normality of $\nabla \mathbb{M}_n^s(\theta_0)$]
\label{asymp-normality}
Under Assumption \ref{eq:assm} we have:
\begin{align*}
\sqrt{n}\nabla \mathbb{M}_n^{s, \gamma}(\theta_0) \implies \mathcal{N}\left(0, 4V^{\gamma}\right) \,,\\
\sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0) \implies \mathcal{N}\left(0, V^{\psi}\right) \,,
\end{align*}
for some n.n.d. matrices $V^{\gamma}$ and $V^{\psi}$, which are specified explicitly in the proof. Furthermore, $\sqrt{n}\nabla \mathbb{M}_n^{s, \gamma}(\theta_0)$ and $\sqrt{n\sigma_n}\nabla \mathbb{M}_n^{s, \psi}(\theta_0)$ are asymptotically independent.
\end{lemma}
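The passage from the one-step Taylor expansion to the block form in \eqref{eq:taylor_main} uses only the algebraic identity $\sqrt{n}\,D_n^{-1}H^{-1}g = (D_n H D_n)^{-1}\sqrt{n}\,D_n g$, valid for any invertible Hessian $H$ and gradient $g$. A quick numeric sanity check of this identity (our own illustration, not part of the proof):

```python
import numpy as np

# Check: sqrt(n) D^{-1} H^{-1} g  ==  (D H D)^{-1} sqrt(n) D g,
# which is why the block matrix D_n nabla^2 D_n appears in eq. (taylor_main).
rng = np.random.default_rng(0)
p, d, n, sigma_n = 2, 3, 400, 0.1

dim = 2 * p + d
D = np.diag(np.concatenate([np.ones(2 * p), np.sqrt(sigma_n) * np.ones(d)]))

A = rng.standard_normal((dim, dim))
H = A @ A.T + dim * np.eye(dim)       # a generic positive-definite "Hessian"
g = rng.standard_normal(dim)          # a generic "gradient"

lhs = np.sqrt(n) * np.linalg.inv(D) @ np.linalg.inv(H) @ g
rhs = np.linalg.inv(D @ H @ D) @ (np.sqrt(n) * D @ g)
assert np.allclose(lhs, rhs)
```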
\noindent
Next, we analyze the convergence of $\nabla^2 \mathbb{M}_n^s(\theta^*)$ which is stated in the following lemma:
\begin{lemma}[Convergence in Probability of $\nabla^2 \mathbb{M}_n^s(\theta^*)$]
\label{conv-prob}
Under Assumption \ref{eq:assm}, for any random sequence $\breve{\theta} = \left(\breve{\beta}, \breve{\delta}, \breve{\psi}\right)$ such that $\breve{\beta} \overset{p}{\to} \beta_0$, $\breve{\delta} \overset{p}{\to} \delta_0$ and $\|\breve{\psi} - \psi_0\|/\sigma_n \overset{p}{\to} 0$, we have:
\begin{align*}
\nabla^2_{\gamma} \mathbb{M}_n^s(\breve{\theta}) & \overset{p}{\longrightarrow} 2Q^{\gamma} \,, \\
\sqrt{\sigma_n}\nabla^2_{\psi \gamma} \mathbb{M}_n^s(\breve{\theta}) & \overset{p}{\longrightarrow} 0 \,, \\
\sigma_n \nabla^2_{\psi} \mathbb{M}_n^s(\breve{\theta}) & \overset{p}{\longrightarrow} Q^{\psi} \,,
\end{align*}
for some matrices $Q^{\gamma}, Q^{\psi}$ specified explicitly in the proof. This, along with equation \eqref{eq:taylor_main}, establishes:
\begin{align*}
\sqrt{n}\left(\hat \gamma^s - \gamma_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, \left(Q^{\gamma}\right)^{-1}V^{\gamma}\left(Q^{\gamma}\right)^{-1}\right) \,, \\
\sqrt{n/\sigma_n}\left(\hat \psi^s - \psi_0\right) & \overset{\mathscr{L}}{\implies} \mathcal{N}\left(0, \left(Q^{\psi}\right)^{-1}V^{\psi}\left(Q^{\psi}\right)^{-1}\right) \,.
\end{align*}
where as before $\hat \gamma^s = (\hat \beta^s, \hat \delta^s)$.
\end{lemma}
It will be shown later that the condition $\|\breve{\psi} - \psi_0\|/\sigma_n \overset{p}{\to} 0$ needed in Lemma \ref{conv-prob} holds for the (random) sequence $\psi^*$, the intermediate point in the Taylor expansion. Combining Lemma \ref{asymp-normality} and Lemma \ref{conv-prob} then concludes the proof of Theorem \ref{thm:regression}.
Observe that, to show $\left\|\psi^* - \psi_0 \right\| = o_P(\sigma_n)$, it suffices to prove that $\left\|\hat \psi^s - \psi_0 \right\| = o_P(\sigma_n)$. Towards that direction, we have the following lemma:
\begin{lemma}[Rate of convergence]
\label{lem:rate_smooth}
Under Assumption \ref{eq:assm} and our choice of kernel and bandwidth,
$$
n^{2/3}\sigma_n^{-1/3} d^2_*\left(\hat \theta^s, \theta_0^s\right) = O_P(1) \,,
$$
where
\begin{align*}
d_*^2(\theta, \theta_0^s) & = \|\beta - \beta_0^s\|^2 + \|\delta - \delta_0^s\|^2 \\
& \qquad \qquad + \frac{\|\psi - \psi_0^s\|^2}{\sigma_n} \mathds{1}_{\|\psi - \psi_0^s\| \le \mathcal{K}\sigma_n} + \|\psi - \psi_0^s\| \mathds{1}_{\|\psi - \psi_0^s\| > \mathcal{K}\sigma_n} \,.
\end{align*}
for some specific constant $\mathcal{K}$, which will be specified precisely in the proof. Hence, as $n\sigma_n \to \infty$, we have $n^{2/3}\sigma_n^{-1/3} \gg \sigma_n^{-1}$, which implies $\|\hat \psi^s - \psi_0^s\|/\sigma_n \overset{p}{\longrightarrow} 0 \,.$
\end{lemma}
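For concreteness, the distance $d_*$ above can be computed as follows (a hypothetical helper with our own names, not code from the paper); note the switch between the locally quadratic regime in $\psi$ within radius $\mathcal{K}\sigma_n$ and the linear regime beyond it:

```python
import numpy as np

# Sketch of d_*^2 from the rate-of-convergence lemma; names are ours.
def d_star_sq(beta, delta, psi, beta0, delta0, psi0, sigma_n, K=1.0):
    r = np.linalg.norm(np.asarray(psi) - np.asarray(psi0))
    base = np.linalg.norm(np.asarray(beta) - np.asarray(beta0)) ** 2 \
         + np.linalg.norm(np.asarray(delta) - np.asarray(delta0)) ** 2
    if r <= K * sigma_n:
        return base + r ** 2 / sigma_n   # quadratic regime near psi_0^s
    return base + r                      # linear regime far from psi_0^s
```

The two regimes match at the boundary $r = \mathcal{K}\sigma_n$ only when $\mathcal{K}=1$; in general the distance is simply piecewise defined.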
\noindent
The above lemma establishes $\|\hat \psi^s - \psi_0^s\|/\sigma_n = o_P(1)$, but our goal is to show that $\|\hat \psi^s - \psi_0\|/\sigma_n = o_P(1)$. Therefore, we further need $\|\psi^s_0 - \psi_0\|/\sigma_n \rightarrow 0$, which is demonstrated in the following lemma:
\begin{lemma}[Convergence of population minimizer]
\label{bandwidth}
Under Assumption \ref{eq:assm} and our choice of kernel and bandwidth, we have: $\|\psi^s_0 - \psi_0\|/\sigma_n \rightarrow 0$.
\end{lemma}
\noindent
Hence the final roadmap is the following: using Lemma \ref{bandwidth} and Lemma \ref{lem:rate_smooth} we establish that $\|\hat \psi^s - \psi_0\|/\sigma_n = o_P(1)$ if $n\sigma_n \to \infty$. This, in turn, enables us to prove Lemma \ref{conv-prob}, i.e. $\sigma_n \nabla^2_{\psi} \mathbb{M}_n^s(\theta^*) \overset{p}{\rightarrow} Q^{\psi}$, which, along with Lemma \ref{asymp-normality}, establishes the main theorem.
\section{Introduction}
Unstructured documents contain a vast amount of knowledge that can be useful information for responding to users in goal-oriented dialog systems.
The shared task at the first DialDoc Workshop focuses on grounding and generating agent responses in such systems.
Therefore, two subtasks are proposed: given a dialog, extract the relevant information for the next agent turn from a document, and generate a natural language agent response based on the dialog context and grounding document.
In this paper, we present our submissions to both subtasks.
In the first subtask, we focus on modeling spans directly using a biaffine classifier and restricting the model's output to valid spans.
We notice that replacing BERT with alternative language models results in significant improvements.
For the second subtask, we notice that providing a generation model with an entire, possibly long, grounding document often leads to models struggling to generate factually correct output.
Hence, we split the task into two consecutive stages, where a grounding span is first selected by our method for the first subtask and then provided to the generation model.
With these approaches, we report strong improvements over the baseline in both subtasks.
Additionally, we experimented with marginalizing over all spans in order to be able to account for the uncertainty of the span selection model during generation.
\section{Related Work}
Recently, multiple datasets and challenges concerning conversational question answering have been proposed.
For example, \citet{saeidi2018} introduced ShARC, a dataset containing ca.\ 32k utterances which include follow-up questions on user requests that cannot be answered directly based on the given dialog and grounding.
Similarly, the CoQA dataset \cite{reddy2019} provides 127k questions with answers and grounding obtained from human conversations.
Closer related to the DialDoc shared task, the task in the first track of DSTC~9 \cite{kim2020} was to generate agent responses based on relevant knowledge in task-oriented dialog.
However, the considered knowledge has the form of FAQ documents, where snippets are much shorter than those considered in this work.
Pre-trained language models such as BART \cite{lewis2020bart} or RoBERTa \cite{liu2019} have recently become a successful tool for different kinds of natural language understanding tasks, such as question answering (QA), where they obtain state-of-the-art results \cite{liu2019, clarkELECTRAPreTrainingText2020}.
Naturally, they have recently also found their way into task-oriented dialog systems \cite{lewis2020bart}, where they are either used as end-to-end systems \cite{budzianowskiHelloItGPT22019, hamEndtoEndNeuralPipeline2020} or as components for a specific subtask \cite{he2021learning}.
\section{Task Description}
The task of dialog systems is to generate an appropriate system response $u_{T+1}$ to a user turn $u_T$ and preceding dialog context $u_1^{T-1} \coloneqq u_1, ..., u_{T-1}$.
In a document-grounded setting, $u_{T+1}$ is based on knowledge from a set of relevant documents $D^\prime \subseteq D$, where $D$ denotes all knowledge documents.
\citet{feng2020doc2dial} identify three tasks relevant to such systems, namely 1) user utterance understanding; 2) agent response prediction; 3) relevant document identification.
The shared task deals with the second task and assumes the result of the third task to be known.
They further split this task into \emph{agent response grounding prediction} and \emph{agent response generation}.
More specifically, one subtask focuses on identifying the grounding of $u_{T+1}$ and the second subtask on generating $u_{T+1}$.
In both subtasks exactly one document $d \in D$ is given.
Each document consists of multiple sections, whereby each section consists of a title and the content.
In the doc2dial dataset, the latter is split into multiple subspans.
In the following, we refer to these given subspans as \emph{phrases} in order to avoid confusing them with arbitrary spans in the document.
\paragraph{Agent Response Grounding Prediction}
\label{sec:response_grounding}
The first subtask is to identify a span in a given document that grounds the agent response \(u_{T+1}\).
It is formulated as a span selection task where the aim is
to return a tuple $(a_s, a_e)$ of start and end position of the relevant span within the grounding document $d$ based on the dialog history $u_1^T$.
In the context of the challenge, these spans always correspond to one of the given phrases in the documents.
\paragraph{Agent Response Generation}
The goal of response generation is to provide the user with a system response $u_{T+1}$ that is based on the dialog context $u_1^T$ and document $d$ and fits naturally into the preceding dialog.
\section{Methods}
\subsection{Baselines}
\paragraph{Agent Response Grounding Prediction}
For the first subtask, \citet{feng2020doc2dial} fine-tune BERT for question answering as proposed by \citet{devlin2019bert}.
Therefore, a start and end score for each token is calculated by a linear projection from the last hidden states of the model.
These scores are normalized using a softmax over all tokens to obtain probabilities for the start and end positions.
In order to obtain the probability of a specific span, the probabilities of the start and end positions are multiplied.
If the length of the documents exceeds the maximum length supported by the model, a sliding window with stride over the document is used and each window is passed to the model.
In training, if the correct span is not included in the window, the span consisting only of the beginning-of-sequence token is used as the target.
In decoding, the scores of all windows are combined to find the best span.
\paragraph{Agent Response Generation} The baseline provided for the shared task uses a pre-trained BART model \cite{lewis2020bart} to generate agent responses.
The model is fine-tuned on the tasks training data by minimizing the cross-entropy of the reference tokens.
As input, it is provided with the dialog context, title of the document, and the grounding document separated by special tokens.
Inputs longer than the maximum sequence length supported by the model (1,024 tokens for BART) are truncated.
Effectively, this means that parts of the document are removed that may include the information relevant to the response.
An alternative to truncating the document would be to truncate the dialog context (i.e. removing the oldest turns which may be less relevant than the document).
We did not experiment with this approach in this work and always included the full dialog context in the input.
For decoding beam search with a beam size of 4 is used.
\subsection{Agent Response Grounding Prediction}
\paragraph{Phrase restriction}
In contrast to standard QA tasks, in this task, possible start and end positions of spans are restricted to phrases in the document.
This motivated us to also restrict the possible outputs of the model to these positions.
That is, instead of applying the softmax over all tokens, it is applied only over tokens corresponding to the start or end positions of a phrase,
so that only these positions are considered in training and decoding.
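A minimal sketch of this restricted softmax (our own illustration, not the shared-task code): positions outside the allowed set are masked to $-\infty$ before normalization, so they receive zero probability mass.

```python
import numpy as np

def restricted_softmax(logits, valid_positions):
    """Softmax over only the token positions that start (or end) a phrase;
    all other positions get probability exactly zero."""
    logits = np.asarray(logits, dtype=float)
    mask = np.full_like(logits, -np.inf)
    mask[list(valid_positions)] = 0.0
    z = logits + mask
    z -= z[list(valid_positions)].max()   # shift for numerical stability
    e = np.exp(z)                          # exp(-inf) -> 0 for masked slots
    return e / e.sum()

p = restricted_softmax([2.0, 5.0, 1.0, 3.0], valid_positions=[0, 3])
# only positions 0 and 3 (the phrase boundaries) carry probability mass
assert p[1] == 0.0 and p[2] == 0.0
```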
\paragraph{Span-based objective}
The training objective for QA assumes that the probability of the start and end position are conditionally independent.
Previous work \cite{fajcikRethinkingObjectivesExtractive2020} indicates that directly modeling the joint probability of start and end position can improve performance.
Hence, to model this joint probability, we use a biaffine classifier as proposed by \citet{dozatDeepBiaffineAttention2017} for dependency parsing.
\paragraph{Ensembling} In our submission, we use an ensemble of multiple models for the prediction of spans to capture their uncertainty.
More precisely, we use Bayesian Model Averaging \cite{bma}, where the probability of a span $a = (a_s, a_e)$ is obtained by marginalizing the joint probability of span and model over all models $H$ as:
\begin{align}
p\left(a \mid u_1^T, d\right) &= \sum_{h \in H} p_h\left(a \mid u_1^T, d\right) \cdot p\left(h\right)
\end{align}
The model prior $p\left(h\right)$ is obtained by applying a softmax function over the logarithm of the F1 scores obtained on a validation set.
Furthermore, we approximate the span posterior distribution $p_h\left(a \mid u_1^T, d\right)$ by an n-best list of size 20.
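Since applying a softmax to the logarithms of the F1 scores simply normalizes the scores to sum to one, the combination can be sketched as follows (our own illustration; each model contributes its n-best span posterior as a dictionary):

```python
import numpy as np
from collections import defaultdict

def bma_combine(nbest_per_model, f1_scores):
    """Bayesian Model Averaging over per-model span posteriors p_h(a),
    with the model prior p(h) = softmax(log F1) = F1 / sum(F1)."""
    f1 = np.asarray(f1_scores, dtype=float)
    prior = f1 / f1.sum()                 # softmax over log-F1 scores
    combined = defaultdict(float)
    for p_h, w in zip(nbest_per_model, prior):
        for span, p in p_h.items():
            combined[span] += w * p       # p(a) = sum_h p_h(a) p(h)
    return dict(combined)

scores = bma_combine(
    [{(0, 4): 0.7, (5, 9): 0.3}, {(0, 4): 0.4, (5, 9): 0.6}],
    f1_scores=[0.75, 0.25],
)
```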
\subsection{Agent Response Generation}
\paragraph{Cascaded Response Generation}
One main issue with the baseline approach is that the model appears to be unable to identify the relevant knowledge when provided with long documents.
Additionally, due to the truncation, the input of the model may not even contain the relevant parts of the document.
To solve this issue, we propose to model the problem by cascading span selection and response generation.
This way, we only have to provide the comparatively short grounding span to the model instead of the full document.
This allows the model to focus on generating an appropriate utterance and less on identifying relevant grounding information.
Similar to the baseline, we fine-tune BART \cite{lewis2020bart}.
In training, we provide the model with the dialog context $u_1^T$ concatenated with the document title and reference span, each separated by a special token.
In decoding, the reference span is not available and we use the span predicted by our span selection model as input.
\paragraph{Marginalization over Spans}
Conditioning on only the ground truth span creates a mismatch between training and inference time since the ground truth span is not available at test time but has to be predicted.
This leads to errors occurring in span selection being propagated in response generation.
Further, the generation model is unable to take the uncertainty of the span selection model into account.
Similar to \citet{lewisRetrievalAugmentedGenerationKnowledgeIntensive2020} and \citet{Thulke2021rag} we propose to marginalize over all spans $S$.
We model the response generation as:
\begin{align*}
p\left( \hat{u} = u_{T+1} \mid u_1^T; d \right) &= \\
\prod_{i=1}^N \sum_{s \in S} \: & p\left(\hat{u}_i, s \mid \hat{u}_1^{i-1}; u_1^T; d \right)
\end{align*}
where the joint probability may be factorized into a span selection model $p\left( s \mid u_1^T; d \right)$ and a generation model $p\left( u_{T+1} \mid u_1^T,s; d \right)$ corresponding to our models for each subtask.
For efficiency, we approximate $S$ by the top 5 spans which we renormalize to maintain a probability distribution.
The generation model is then trained with cross-entropy using an n-best list obtained from the separately trained selection model.
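The per-token marginalization over the renormalized top spans can be sketched as follows (our own illustration, with hypothetical names):

```python
import numpy as np

def marginalized_token_prob(span_probs, token_probs_given_span):
    """p(u_i | ...) = sum_s p(s) * p(u_i | s), where p(s) is renormalized
    over the kept top-k spans so the truncated list is a distribution."""
    p_s = np.asarray(span_probs, dtype=float)
    p_s = p_s / p_s.sum()                 # renormalize the truncated n-best
    return float(np.dot(p_s, np.asarray(token_probs_given_span, dtype=float)))

# two candidate spans with unnormalized selector scores 0.6 and 0.2
p = marginalized_token_prob([0.6, 0.2], [0.9, 0.1])
```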
A potential extension which we did not yet try is to train both models jointly.
\begin{table*}[h]
\centering
\caption{Results of our best system on test and validation set.}
\label{table:main_results}
\begin{tabular}{|l|r|r|r|r|l|r|r|}
\hline
\multicolumn{5}{|l|}{Subtask 1} & \multicolumn{3}{l|}{Subtask 2} \\
& \multicolumn{2}{|r|}{test} & \multicolumn{2}{|r|}{val} & & test & val \\
\hline
model & F1 & EM & F1 & EM & model & \multicolumn{2}{|r|}{BLEU} \\ \hline\hline
baseline & 67.9 & 51.5 & 70.8 & 56.3 & baseline (ours) & 28.1 & 32.9 \\
RoBERTa & 73.2 & 58.3 & 77.3 & 65.6 & cascaded (RoBERTa) & 39.1 & 39.6 \\
ensemble & \textbf{75.9} & \textbf{63.5} & \textbf{78.8} & \textbf{68.4} & cascaded (ensemble) & \textbf{40.4} & \textbf{41.5} \\
\hline
\end{tabular}
\end{table*}
\section{Data}
The shared task uses the doc2dial dataset \cite{feng2020doc2dial} which contains 4,793 annotated dialogs based on a total of 487 documents.
All documents were obtained from public government service websites and stem from the four domains \textit{Social Security Administration (ssa)}, \textit{Department of Motor Vehicles (dmv)}, \textit{United States Department of Veterans Affairs (va)}, and \textit{Federal Student Aid (studentaid)}.
In the shared task, each document is associated with exactly one domain and is annotated with sections and phrases.
The latter is described by a start and end index within the document and associated with a specific section that has a title and text.
Each dialog is based on one document and contains a set of turns.
Turns are taken either by a \textit{user} or an \textit{agent} and described by a dialog act and a list of grounding reference phrases in the document.
The training set of the shared task contains 3,474 dialogs with in total 44,149 turns.
In addition to the training set, the shared task organizers provide a validation set with 661 dialogs and a testdev set with 198 dialogs which include around 30\% of the dialogs from the final test set.
The final test set includes an additional domain of unseen documents and comprises a total of 787 dialogs.
Documents are rather long, have a median length of 817.5, and an average length of 991 tokens (using the BART subword vocabulary).
Thus, in many cases, truncation of the input is required.
\section{Experiments}
We base our implementation\footnote{Our code is made available at \url{https://github.com/ndaheim/dialdoc-sharedtask-21}} on the provided baseline code of the shared task\footnote{Baseline code is available at \url{https://github.com/doc2dial/sharedtask-dialdoc2021}}.
Furthermore, we use the workflow manager Sisyphus \cite{peterSisyphusWorkflowManager2018} to organize our experiments.
For the first subtask, we use the base and large variants of RoBERTa \cite{liu2019} and ELECTRA \cite{clarkELECTRAPreTrainingText2020} instead of BERT large uncased.
In the second subtask, we use BART base instead of the large variant, which was used in the baseline code, since even after reducing the batch size to one, we were not able to run the baseline with a maximum sequence length of 1024 on our Nvidia GTX 1080 Ti and RTX 2080 Ti GPUs due to memory constraints.
All models are fine-tuned with an initial learning rate of 3e-5.
Base variants are trained for 10 epochs and large variants for 5 epochs.
We include agent follow-up turns in our training data, i.e. such turns $u_t$ made by agents, where the preceding turn $u_{t-1}$ was already taken by the agent.
Similar to other agent turns, i.e. where the preceding turn was taken by the user, these turns are annotated with their grounding span and can be used as additional samples in both tasks.
In the baseline implementation, these are excluded from training and evaluation.
To maintain comparability, we do not include them in the validation or test data.
For evaluation, we use the same evaluation metrics as proposed in the baseline.
In the first subtask, exact match (EM), i.e. the percentage of exact matches between the predicted and reference span (after lowercasing and removing punctuation, articles, and whitespace) and the token-level F1 score is used.
The second subtask is evaluated using SacreBLEU \cite{post2018call}.
\begin{table}[bh!]
\centering
\caption{Ablation analysis of our systems for subtask 1 on the validation set. The best single model results are underlined.}
\label{table:ablation_task1}
\begin{tabular}{|l|r|r|r|}
\hline
model & F1 & EM & EM@5 \\
\hline\hline
baseline (BERT large) & 70.8 & 56.3 & 68.2 \\
ELECTRA large & 75.1 & 63.1 & 79.5 \\
RoBERTa large & \underline{77.3} & \underline{65.6} & 82.1 \\
\; -- phrase restriction & 77.0 & 65.1 & 79.7\\
\; \hphantom{--} -- follow-up turns & 76.5 & 64.5 & 80.9 \\
\; -- follow-up turns & 75.7 & 63.2 & 80.3\\
RoBERTa base & 74.8 & 63.1 & 79.5 \\
\; + span-based & 73.6 & 62.5 & \underline{83.0} \\
ensemble & \textbf{78.8} & \textbf{68.4} & \textbf{85.0} \\
\hline
\end{tabular}
\end{table}
\subsection{Results}
\Cref{table:main_results} summarizes our main results and submission to the shared task.
The first line shows the results obtained by reproducing the baseline provided by the organizers (using BART base for Subtask 2).
We note that these results differ from the ones reported in \citet{feng2020doc2dial} due to slightly different data conditions in the shared task and their paper.
The second line shows the results of our best single model.
In Subtask 1, we obtained our best results by using RoBERTa large, trained additionally on agent follow-up turns, and by restricting the model to phrases occurring in the document.
Using an ensemble of this model, an ELECTRA large model trained with the same approach, and a RoBERTa base model trained with the span-based objective, we achieve our best result.
In the second subtask, our cascaded approach using this model and BART base significantly outperforms the baseline by over 10\% absolute in BLEU.
Using the results of the ensemble in Subtask 2 also translates to a significant improvement in BLEU, which indicates a strong influence of the agent response grounding prediction task.
\subsection{Ablation Analysis}
\paragraph{Agent Response Grounding Prediction}
\Cref{table:ablation_task1} gives an overview of our ablation analysis for the first subtask.
In addition to F1 and EM, we report the EM@5 which we define as the percentage of turns where an exact match is part of the 5-best list predicted by the model.
This metric gives an indication of the quality of the n-best list produced by the model.
Both RoBERTa and ELECTRA large outperform BERT large concerning F1 and EM with RoBERTa large performing best.
Removing agent follow-up turns in training consistently degrades the results for both models.
Restricting the predictions of the model to valid phrases during training and evaluation gives consistent improvements in the EM and EM@5 scores.
Training RoBERTa base using the span-based objective, we observe degradations in F1 and EM but observe an improvement in EM@5 which indicates that it better models the distribution across phrases.
Due to instabilities during training, we were not able to train a large model with the span-based objective.
Additionally, we only did experiments with the biaffine classifier discussed in \Cref{sec:response_grounding}.
It would be interesting to compare the results with other span-based objectives as the ones proposed by \citet{fajcikRethinkingObjectivesExtractive2020}.
\paragraph{Agent Response Generation}
\begin{table}
\centering
\caption{Ablation analysis of our systems for subtask 2 on the validation set.}
\label{table:ablation_task2}
\begin{tabular}{|l|r|}
\hline
model & BLEU \\
\hline\hline
baseline (ours) & 32.9 \\
span marginalization & 38.4 \\
cascaded (RoBERTa large) & \underline{39.6} \\
\; + section title & 39.6 \\
\; \hphantom{+} + extended context & 39.5 \\
cascaded (ensemble) & 41.2 \\
\; + follow-up turns & 41.2 \\
\; \hphantom{+} + beam-size 6 & 41.3 \\
\; \hphantom{+} \hphantom{+} + repetition-penalty & \textbf{41.5} \\
\hline
cascaded (ground truth) & 46.2 \\
\hline
\end{tabular}
\end{table}
\Cref{table:ablation_task2} shows an ablation study of our results in response generation.
The results show that our cascaded approach outperforms the baseline by a large margin.
Further experiments with additional context, such as the title of a section or a window of 10 tokens to each side of the span, do not give improvements.
This indicates that the selected spans seem to be sufficient to generate suitable responses.
Furthermore, marginalizing over multiple spans leads to degradations, which might be because training is based on an n-best list from an uncertain model.
We observe our best results when using only the predicted span and a beam size of 6.
Furthermore, we add a repetition penalty of 1.2 \cite{keskar2019ctrl} to discourage repetitions in generated responses.
Finally, the last line of the table reports the results of the cascaded method when using ground truth spans instead of the spans predicted by a model.
That is, a perfect model for the first subtask would additionally improve the results by 4.7 points absolute in BLEU.
\section{Conclusion}
In this paper, we have described our submissions to both subtasks of the first DialDoc shared task.
In the first subtask, we have experimented with restricting the set of spans that can be predicted to valid phrases, which yields constant improvements in terms of EM.
Furthermore, we have employed a model to directly hypothesize entire spans and shown the benefits of combining multiple models using Bayesian Model Averaging.
In the second subtask, we have shown how cascading span selection and response generation improves results when compared to providing an entire document in generation.
We have compared marginalizing over spans to just using a single span for generation, with which we obtain our best results in the shared task.
\section*{Acknowledgements}
This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 694537, project ``SEQCLAS''). The work reflects only the authors' views and the European Research Council Executive Agency (ERCEA) is not responsible for any use that may be made of the information it contains.
\bibliographystyle{acl_natbib}
\section{Introduction}
The existence of gauge/string dualities has been anticipated from the very beginning of these fields, based on the striking similarity
between the large $N$ 't Hooft limit and the genus expansion in string theory. Further insights were revealed with the discovery of
D-branes, their description in terms of the DBI action, and their identification as sources of the known p-brane solutions in supergravity.
However, it was not until Maldacena conjectured his decoupling limit that significant progress was made and the AdS/CFT correspondence
emerged \cite{Aharony:1999ti} as a manifestation of the gauge/string duality, providing a holographic description of supergravity on
$AdS_{5}\times S^5$ space in terms of the $\cal N$=4 SUSY Yang-Mills theory living at the asymptotic boundary. The observed correspondence
was conjectured to hold for the full string theory on any asymptotically $AdS_5\times S^5$ background. One of the most remarkable
features of the correspondence is that it is a strong/weak coupling correspondence, and that it can give us tools to explore the strongly coupled
regime of the Yang-Mills theory.
In recent years progress has been made towards the study of matter in the fundamental representation in the context of the AdS/CFT correspondence. One way
to achieve this is by introducing space-filling flavor D7-branes in the probe limit \cite{Karch:2002sh}; in order to keep the probe
limit valid, the condition $N_{f} \ll N_{c}$ is imposed. The fundamental strings stretched between the stack of $N_{c}$ D3-branes and
the $N_{f}$ flavor D7-branes give rise to an $\cal N$=2 hypermultiplet: the separation of the D3- and D7-branes in the transverse
directions corresponds to the mass of the hypermultiplet, the classical shape of the D7-brane encodes the value of the fermionic
condensate, and its quantum fluctuations describe the light meson spectrum of the theory \cite{Kruczenski:2003be}. This technique for
introducing fundamental matter has been widely employed in different backgrounds. Of particular interest was the study of non-supersymmetric backgrounds and phenomena such as spontaneous chiral symmetry breaking. These phenomena were first studied in this context in \cite{Babington:2003vm}, where the authors developed an appropriate numerical technique. In recent years this approach has received further development and has proven itself a powerful tool for the exploration of confining gauge theories, in particular for the description of their thermodynamic properties or for the building of phenomenological models relevant to QCD \cite{Kruczenski:2003uq}-\cite{Myers:2007we}.
The paper is organized as follows:\\
In the second section we review the method, employed in \cite{Filev:2007gb}, of introducing an external magnetic field to the theory. We describe the basic properties of the D7-brane embedding and the thermodynamic properties of the dual gauge theory, in particular the dependence of the fermionic condensate on the bare quark mass. We describe the spontaneous chiral symmetry breaking caused by the external magnetic field and comment on the spiral structure in the condensate vs. bare quark mass diagram.
The third section contains our main results and splits into two parts:\\
The first part is dedicated to the detailed study of the spiral structure described in \cite{Filev:2007gb}. We perform analysis similar to the one considered in \cite{Frolov:2006tc} for the study of merger transitions and calculate the critical exponents of the bare quark mass and the fermionic condensate. We also describe the discrete self-similarity of the spiral and calculate the scaling factor characterizing it.
In the second part of this section we consider the meson spectrum of the states corresponding to the spiral. First we study the critical embedding corresponding to the center of the spiral and reveal an infinite tower of tachyonic states organized in a decreasing geometrical series. Next we consider the dependence of the meson spectrum on the bare quark mass and confirm the expectations based on thermodynamic considerations that only the lowest branch of the spiral is stable. We observe that at each turn of the spiral there is one new tachyonic state. We comment on the self-similar structure of the spectrum and calculate the critical exponent of the meson mass. We also consider the spectrum corresponding to the lowest branch of the spiral and for a large bare quark mass reproduce the result for pure ${\cal N}=2$ Supersymmetric Yang Mills Theory obtained in \cite{Kruczenski:2003be}.
We end with a short discussion of our results and the possible directions of future study.
\section{Fundamental matter in external magnetic field}
In this section we briefly review the method of introducing an external magnetic field to the theory considered in \cite{Filev:2007gb} and the basic properties of the D7-brane probe in this background. We also review the properties of the corresponding dual theory and the effect that the external magnetic field has on it.
\subsection{Basic Configuration}
Let us consider the $AdS_{5} \times S^5$ geometry describing the near horizon geometry of a stack of $N_{c}$ extremal D3-branes.
\begin{eqnarray}
ds^2=\frac{u^2}{R^2}(-dx_{0}^2+d\vec x^2)+R^2\frac{du^2}{u^2}+R^2d\Omega_{5}^2,\label{AdS}\\
g_{s}C_{(4)}=\frac{u^4}{R^4}dx^0\wedge dx^1\wedge dx^2 \wedge dx^3,\nonumber\\
e^\Phi=g_s,\nonumber\\
R^4=4\pi g_{s}N_{c}\alpha'^2\nonumber\ .
\end{eqnarray}
In order to introduce fundamental matter we first rewrite the metric in the following form :
\begin{eqnarray}
ds^2&=&\frac{\rho^2+L^2}{R^2}[ - dx_0^2 + dx_1^2 +dx_2^2 + dx_3^2 ]+\frac{R^2}{\rho^2+L^2}[d\rho^2+\rho^2d\Omega_{3}^2+dL^2+L^2d\phi^2],\nonumber\\
d\Omega_{3}^2&=&d\psi^2+\cos^2\psi d\beta^2+\sin^2\psi d\gamma^2, \label{geometry1}
\end{eqnarray}
where $\rho, \psi, \beta,\gamma$ and $L,\phi$ are polar coordinates in the transverse ${\cal R}^4$ and ${\cal R}^2$ respectively, satisfying: $u^2=\rho^2+L^2$.
Next we use $x_{0,1,2,3},\rho,\psi,\beta,\gamma$ to parametrise the world volume of the D7-brane and consider the following ansatz
\cite{Karch:2002sh} for its embedding:
\begin{eqnarray}
\phi\equiv const,\nonumber\\
L\equiv L(\rho)\ . \label{ansatzEmb}
\end{eqnarray}
This leads to the following form of the induced metric:
\begin{equation}
d\tilde s=\frac{\rho^2+L(\rho)^2}{R^2}[ - dx_0^2 + dx_1^2 +dx_2^2
+dx_3^2]+\frac{R^2}{\rho^2+L(\rho)^2}[(1+L'(\rho)^2)d\rho^2+\rho^2d\Omega_{3}^2]\ .\label{inducedMetric}
\end{equation}
Now let us consider the NS part of the general DBI action:
\begin{eqnarray}
S_{DBI}=-\frac{\mu_{7}}{g_s}\int\limits_{{\cal M}_{8}}d^{8}\xi det^{1/2}(P[G_{ab}+B_{ab}]+2\pi\alpha' F_{ab})\ . \label{DBI}
\end{eqnarray}
Here $\mu_{7}=[(2\pi)^7\alpha'^4]^{-1}$ is the D7-brane tension, $P[G_{ab}]$ and $P[B_{ab}]$ are the induced metric and $B$-field on the D7-brane's world volume, while $F_{ab}$ is its gauge field. A simple way to
introduce a magnetic field is to consider a pure gauge $B$-field along the ``flat'' directions
$x_{0}-x_{3}$ of the geometry, corresponding to the D3-branes' world volume:
\begin{equation}
B^{(2)}= Hdx_{2}\wedge dx_{3}\ . \label{ansatz}
\end{equation}
The constant $H$ is proportional to the magnetic component of the EM field. Note that since the $B$-field is pure gauge, $dB=0$, the corresponding background is still a solution to the supergravity equations. On the other hand, the gauge field $F_{ab}$ enters at the next order in the $\alpha'$ expansion compared to the metric and the $B$-field components. Therefore, to study the classical embedding of the D-brane one can keep only the $G_{ab}+B_{ab}$ part of the DBI action. It was argued in \cite{Filev:2007gb} that one can consistently satisfy the constraints imposed on the classical embedding by integrating out $F_{ab}$. The resulting effective Lagrangian is:
\begin{equation}
{\cal L}=-\frac{\mu_{7}}{g_s}\rho^3\sin\psi\cos\psi\sqrt{1+L'^2}\sqrt{1+\frac{R^4H^2}{(\rho^2+L^2)^2}}\ .
\label{lagrangian}
\end{equation}
The equation of motion for the profile $L_0(\rho)$ of the D7-brane is given by:
\begin{equation}
\partial_{\rho}\left(\rho^3\frac{L_0'}{\sqrt{1+L_0'^2}}\sqrt{1+\frac{R^4H^2}{(\rho^2+L_0^2)^2}}\right)+ \frac{\sqrt{1+L_0'^2}}{\sqrt{1+\frac{R^4H^2}{(\rho^2+L_0^2)^2}}}\frac{2\rho^3L_0R^4H^2}{(\rho^2+L_0^2)^3}=0\ .
\label{eqnMnL}
\end{equation}
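Equation (\ref{eqnMnL}) is the Euler--Lagrange equation of (\ref{lagrangian}). As a cross-check (an illustration of ours, not part of the original derivation), the following sympy sketch treats $L$, $L'$, $L''$ as independent symbols and verifies the two quoted terms:

```python
import sympy as sp

# l, lp, lpp stand for L, L', L'' treated as independent symbols
rho, l, lp, lpp, R, H = sp.symbols('rho l lp lpp R H', positive=True)

u2 = rho**2 + l**2
W = sp.sqrt(1 + R**4*H**2/u2**2)
Lag = rho**3*sp.sqrt(1 + lp**2)*W   # overall -mu_7 sin(psi)cos(psi)/g_s dropped

def Dtot(f):
    # total rho-derivative along a trajectory L(rho)
    return sp.diff(f, rho) + lp*sp.diff(f, l) + lpp*sp.diff(f, lp)

# Euler-Lagrange expression d/drho (dLag/dL') - dLag/dL
EL = Dtot(sp.diff(Lag, lp)) - sp.diff(Lag, l)

# the two terms of the equation of motion as quoted in the text
claimed = (Dtot(rho**3*lp/sp.sqrt(1 + lp**2)*W)
           + sp.sqrt(1 + lp**2)/W*2*rho**3*l*R**4*H**2/u2**3)

assert sp.simplify(EL - claimed) == 0
```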
As expected, for $(L_0^2+\rho^2) \to \infty$ or $H \to 0$ we recover the equation for the pure AdS$_{5}\times S^5$ background \cite{Karch:2002sh}:
\begin{eqnarray*}
\partial_{\rho}\left(\rho^3\frac{L_0'}{\sqrt{1+L_0'^2}}\right)=0\ .
\end{eqnarray*}
Therefore the solutions to equation (\ref{eqnMnL}) have the following behavior at infinity:
\begin{equation}
L_0(\rho)=m+\frac{c}{\rho^2}+\dots,
\end{equation}
where the parameters $m$ (the asymptotic separation of the D7- and D3-branes) and $c$ (the degree of bending of the D7-brane) are related to the bare quark mass $m_{q}=m/2\pi\alpha'$ and the fermionic condensate $\langle\bar\psi\psi\rangle\propto -c$, respectively \cite{Kruczenski:2003uq}. We provide a derivation of these relations in Appendix A. As we shall see below, the presence of the external magnetic field and its effect on the dual SYM theory generate a non-vanishing fermionic condensate; furthermore, the theory exhibits chiral symmetry breaking.
Now notice that $H$ enters (\ref{lagrangian}) only through the combination $H^2R^4$. The other natural scale is the asymptotic separation $m$. It turns out that the different physical configurations can be studied in terms of the ratio $\tilde m^2={m^2}/{(H R^2)}$: once the $\tilde m$ dependence of our solutions is known, the $m$ and $H$ dependence follows. Indeed, let us introduce dimensionless variables {\it via}:
\begin{eqnarray}
\rho=R\sqrt{H}\tilde\rho\ , \quad
L_0=R\sqrt{H}\tilde L\ , \quad
L_0'(\rho)=\tilde L'(\tilde\rho)\ .\label{cordchange}
\end{eqnarray}
The equation of motion (\ref{eqnMnL}) then takes the form:
\begin{equation}
\partial_{\tilde\rho}\left(\tilde\rho^3\frac{\tilde L'}{\sqrt{1+{\tilde L}'^2}}\sqrt{1+\frac{1}{(\tilde\rho^2+\tilde L^2)^2}}\right)+ \frac{\sqrt{1+\tilde L'^2}}{\sqrt{1+\frac{1}{(\tilde\rho^2+\tilde L^2)^2}}}\frac{2\tilde\rho^3\tilde L}{(\tilde\rho^2+\tilde L^2)^3}=0\ .
\label{eqnMnLD}
\end{equation}
The solutions for $\tilde L(\tilde\rho)$ can again be expanded as:
\begin{equation}
\tilde L(\tilde\rho)=\tilde m+\frac{\tilde c}{\tilde\rho^2}+\dots, \label{ExpansionD}
\end{equation}
and using the transformation (\ref{cordchange}) we get:
\begin{equation}
c=\tilde c R^3H^{3/2} \label{Hdepend}\ .
\end{equation}
\subsection{Properties of the Solution}
The properties of the solution have been explored in \cite{Filev:2007gb}, both numerically and analytically, when possible. Let us briefly review the main results.
For weak magnetic field $H$ and non-zero bare quark mass $m$ it was shown that the theory develops a fermionic condensate:
\begin{equation}
\langle\bar\psi\psi\rangle \propto -c =-\frac{R^4}{4m}H^2\ , \label{condSmA}
\end{equation}
or using dimensionless variables:
\begin{equation}
\tilde c=\frac{1}{4\tilde m} \label{1/m}\ .
\end{equation}
The case of a strong magnetic field $H$ can be explored by solving equation (\ref{eqnMnLD}) numerically. It is convenient to impose initial conditions in the IR, as has been discussed recently in the literature \cite{Albash:2006ew}, \cite{Albash:2006bs}; we used the boundary condition $\tilde L'(\tilde\rho)\vert_{\tilde\rho=0}=0$ and employed shooting techniques to generate the D7-brane embeddings for a wide range of $\tilde m$. Having done so, we expanded the solutions for $\tilde L(\tilde\rho)$ numerically as in equation (\ref{ExpansionD}) and generated the points in the $(\tilde m,-\tilde c)$ plane corresponding to the solutions. The resulting plot is presented in figure~\ref{fig:fig1}.
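The shooting procedure can be sketched as follows (an illustrative reimplementation of ours, not the original code). A short computation brings (\ref{eqnMnLD}) to the explicit second-order form $\tilde L''=-(1+\tilde L'^2)\left[\frac{3\tilde L'}{\tilde\rho}+\frac{2(\tilde L-\tilde\rho\tilde L')}{(\tilde\rho^2+\tilde L^2)^3+(\tilde\rho^2+\tilde L^2)}\right]$, which is what is integrated below:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(rho, y):
    # second-order form of the dimensionless equation of motion
    L, Lp = y
    u2 = rho**2 + L**2
    Lpp = -(1.0 + Lp**2)*(3.0*Lp/rho + 2.0*(L - rho*Lp)/(u2**3 + u2))
    return [Lp, Lpp]

def shoot(L_in, rho_max=60.0, eps=1e-6):
    # IR boundary conditions: L(0) = L_in, L'(0) = 0
    sol = solve_ivp(rhs, [eps, rho_max], [L_in, 0.0],
                    rtol=1e-10, atol=1e-12, dense_output=True)
    # read off m and c from L ~ m + c/rho^2 at two large radii
    r1, r2 = 0.7*rho_max, rho_max
    L1, L2 = sol.sol(r1)[0], sol.sol(r2)[0]
    c = (L1 - L2)/(1.0/r1**2 - 1.0/r2**2)
    m = L2 - c/r2**2
    return m, c

m, c = shoot(5.0)   # large separation: expect c close to 1/(4m), eq. (1/m)
```

For a large initial separation the weak-field result (\ref{1/m}) is recovered to better than a percent; scanning $\tilde L(0)$ toward zero traces out the points of figure \ref{fig:fig1}.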
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{f1.pdf}
\caption{\small The black line corresponds to (\ref{1/m}); one can observe that the analytic result is valid for large $\tilde m$.
It is also evident that for $\tilde m=0$, $\langle\bar\psi\psi\rangle\neq0$. The corresponding value of the condensate is $\tilde c_{\rm cr}=0.226$.} \label{fig:fig1}
\end{figure}
As one can see, there is a non-zero fermionic condensate at zero bare quark mass and hence the chiral symmetry is spontaneously broken. The corresponding value of the condensate is $\tilde c_{\rm cr}=0.226$. It is also evident that the analytical expression for the condensate (\ref{1/m}) obtained above is valid for large $\tilde m$, as expected. Now using equation (\ref{Hdepend}) we can deduce the dependence of $c_{\rm cr}$ on $H$:
\begin{equation}
c_{\rm cr}=\tilde c_{\rm cr}R^3H^{3/2}=0.226R^3H^{3/2}\ . \label{Ccr}
\end{equation}
Another interesting feature of our phase diagram is the spiral behavior near the origin of the $(\tilde m,-\tilde c)$-plane, which can be seen in figure \ref{fig:spiral-revisited}. Note that the spiral presented in this figure has two arms; we have used the fact that any two points in the $(\tilde m,-\tilde c)$ plane related by reflection with respect to the origin describe the same physical state. A similar spiraling feature has been observed in ref.~\cite{Albash:2006bs}, where the authors argued that only the lowest branch of the spiral, corresponding to positive values of $m$, is the stable one (corresponding to the lowest energy state). The spiral behavior near the origin signals an instability of the embedding corresponding to $L_0\equiv 0$. If we trace the curve of the diagram in figure \ref{fig:spiral-revisited} starting from large $m$, as we go to smaller values of $m$ we reach zero bare quark mass at a non-vanishing value of the fermionic condensate, set by $c_{\rm cr}$. If we continue tracing along the diagram, one can verify numerically that all other points correspond to embeddings of the D7-brane which intersect the origin of the transverse plane at least once. After further study of the right arm of the spiral, one finds that the part of the diagram corresponding to negative values of $\tilde m$ represents solutions for the D7-brane embedding which intersect the origin of the transverse plane an odd number of times, while the positive part of the spiral represents solutions which intersect the origin of the transverse plane an even number of times. The lowest positive branch corresponds to solutions which do not intersect the origin of the transverse plane and is the stable one, while the upper branches have $2, 4$, {\it etc.}, intersection points and are ruled out after evaluation of the free energy.
Indeed, let us explore the stability of the spiral by calculating the regularized free energy of the system. We identify the free energy of the dual gauge theory \cite{Erdmenger:2007bn}, \cite{Albash:2007bk} with the Wick rotated and regularized on-shell action of the D7-brane:
\begin{eqnarray}
&&F=2\pi^2N_fT_{D7}R^4H^2\tilde I_{D7}\ ,\\
&&\tilde I_{D7}=\int\limits_{0}^{\tilde\rho_{max}}d\tilde\rho\left({\tilde\rho}^3\sqrt{1+\frac{1}{({\tilde\rho}^2+{\tilde L}^2)^2}}\sqrt{1+{\tilde L}'^2}-\tilde\rho\sqrt{{\tilde\rho}^4+1}\right)\ .\label{freeenergy}
\end{eqnarray}
The second term in the integrand of (\ref{freeenergy}) corresponds to the subtracted free energy of the $\tilde L(\tilde\rho)\equiv 0$ embedding and serves as a regulator.
Now we can evaluate numerically the integral in (\ref{freeenergy}) for the first several branches of the spiral. The corresponding plot is presented in figure \ref{fig:free-energy}. Note that we have plotted $\tilde I_{D7}$ versus $|\tilde m|$, since the bare quark mass depends only on the absolute value of the parameter $\tilde m$. The lowest curve on the plot corresponds to the lowest positive branch of the spiral; as one can see, it has the lowest energy and thus corresponds to the stable phase of the theory.
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{free-energy.pdf}
\caption{\small The lowest lying curve corresponds to the positive $\tilde m$ part of the lowest branch of the spiral, suggesting that this is the stable phase of the theory. }
\label{fig:free-energy}
\end{figure}
In the next section we will provide a more detailed analysis of the spiral structure from figure \ref{fig:spiral-revisited} and explore the discrete self-similarity associated to it.
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{f2.pdf}
\caption{\small A magnification of figure \ref{fig:fig1} to show the spiral behavior near the origin of the $(\tilde m,-\tilde c)$-plane. We have added the second (left) arm of the spiral, reflecting the $(\tilde m, -\tilde c)\to (-\tilde m,\tilde c)$ symmetry of the diagram.}
\label{fig:spiral-revisited}
\end{figure}
\section{Criticality and spontaneous chiral symmetry breaking}
\subsection{The Spiral Revisited}
In this section we analyze the spiral structure described in \cite{Filev:2007gb}. The technique that we employ is similar to the one used in \cite{Frolov:2006tc} and \cite{Mateos:2006nu}, where the authors studied merger transitions in brane--black-hole systems.
Let us explore the asymptotic form of the equation of motion of the D7-brane probe (\ref{eqnMnLD}) in the near horizon limit $\tilde \rho^2+\tilde L^2\to 0$. To this end we change coordinates to:
\begin{eqnarray}
\tilde\rho\to\lambda\hat\rho;~~~ \tilde L\to \lambda\hat L\ ,
\label{rescaling}
\end{eqnarray}
and consider the limit $\lambda\to0$. The resulting equation of motion is:
\begin{equation}
\partial_{\hat\rho}(\frac{\hat\rho^3}{\hat\rho^2+\hat L^2}\frac{\hat L'}{\sqrt{1+\hat L'^2}})+2\sqrt{1+\hat L'^2}\frac{\hat\rho^3\hat L}{(\hat\rho^2+\hat L^2)^2}=0\ .
\label{rescaled}
\end{equation}
Equation (\ref{rescaled}) enjoys the scaling symmetry:
\begin{equation}
\hat \rho\to \mu\hat\rho;~~~\hat L\to \mu\hat L\ .
\end{equation}
This means that if $\hat L=f(\hat\rho)$ is a solution to the equation of motion, then $\frac{1}{\mu}f(\mu\hat\rho)$ is also a solution.
Next we focus on the region of the parametric space close to the trivial $L\equiv 0$ embedding by considering the expansion:
\begin{equation}
\hat L=0+(2\pi\alpha')\hat\chi
\end{equation}
and linearizing the equation of motion. The resulting equation is:
\begin{equation}
\hat\rho\partial_{\hat\rho}(\hat\rho\partial_{\hat\rho}\hat\chi)+2\hat\chi=0
\end{equation}
and has the solution:
\begin{equation}
\hat\chi=A\cos(\sqrt{2}\ln\hat\rho)+B\sin(\sqrt{2}\ln\hat\rho)\ .
\label{linafter}
\end{equation}
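That (\ref{linafter}) indeed solves the linearized equation can be checked symbolically (an illustrative sympy sketch of ours):

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
A, B = sp.symbols('A B')

chi = A*sp.cos(sp.sqrt(2)*sp.log(rho)) + B*sp.sin(sp.sqrt(2)*sp.log(rho))

# rho d/drho (rho d/drho chi) + 2 chi should vanish identically
lhs = rho*sp.diff(rho*sp.diff(chi, rho), rho) + 2*chi
assert sp.simplify(lhs) == 0
```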
Now under the scaling symmetry $\hat\rho\to\mu\hat\rho$ the constants of integration $A$ and $B$ transform as:
\begin{equation}
\begin{pmatrix}A \\ B\\ \end{pmatrix}\to\frac{1}{\mu}\begin{pmatrix}\cos\sqrt{2}\ln\mu &\sin\sqrt{2}\ln\mu \\ -\sin\sqrt{2}\ln\mu &\cos\sqrt{2}\ln\mu \end{pmatrix}\begin{pmatrix}A\\ B \end{pmatrix}\ .
\label{scaling1}
\end{equation}
The above transformation defines a class of solutions represented by a logarithmic spiral in the parametric space $(A,B)$, generated by some $(A_{in},B_{in})$. The discrete symmetry $\chi\to-\chi$ implies that $(-A_{in},-B_{in})$ is also a solution, and therefore the curve of solutions in the parametric space is a double spiral, symmetric with respect to the origin. As we are going to show, there is a linear map from the parametric space $(A,B)$ to the plane $(\tilde m,-\tilde c)$, which explains the spiral structure that is the subject of our study.
To show this, let us consider the linearized equation of motion before taking the $\lambda\to 0$ limit:
\begin{eqnarray}
\tilde\rho\sqrt{1+\tilde\rho^4}\partial_{\tilde\rho}(\tilde\rho\sqrt{1+\tilde\rho^4}\partial_{\tilde\rho}\tilde\chi)+2\tilde\chi=0;~~~
\tilde\chi=\lambda\hat\chi\ ,
\end{eqnarray}
with the solution:
\begin{equation}
\tilde\chi=\tilde A\cos\sqrt{2}\ln\frac{\tilde\rho}{\sqrt{1+\sqrt{1+\tilde\rho^4}}}+\tilde B\sin\sqrt{2}\ln\frac{\tilde\rho}{\sqrt{1+\sqrt{1+\tilde\rho^4}}}\ .
\label{linbefore}
\end{equation}
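The reason (\ref{linbefore}) works is that its argument $s(\tilde\rho)=\ln\left(\tilde\rho/\sqrt{1+\sqrt{1+\tilde\rho^4}}\right)$ satisfies $\tilde\rho\sqrt{1+\tilde\rho^4}\,\partial_{\tilde\rho}s=1$, so the equation collapses to $\partial_s^2\tilde\chi+2\tilde\chi=0$. A quick numerical check of this identity (illustrative):

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
s = sp.log(rho/sp.sqrt(1 + sp.sqrt(1 + rho**4)))

# rho*sqrt(1+rho^4)*ds/drho should equal 1 identically
expr = rho*sp.sqrt(1 + rho**4)*sp.diff(s, rho) - 1
for r0 in (sp.Rational(3, 10), 1, sp.Rational(5, 2), 10):
    assert abs(float(expr.subs(rho, r0))) < 1e-10
```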
Expanding at infinity:
\begin{eqnarray}
\tilde\chi=\tilde m+\frac{\tilde c}{\tilde\rho^2}+\dots=\tilde A-\frac{\tilde B}{\sqrt{2}}\frac{1}{\tilde\rho^2}+\dots,
\end{eqnarray}
we get:
\begin{equation}
\begin{pmatrix}\tilde m \\ \tilde c\end{pmatrix}=\begin{pmatrix}\tilde A\\-{\tilde B}/{\sqrt{2}}\end{pmatrix}\ .
\end{equation}
Now if we match our solution (\ref{linbefore}) with the solution (\ref{linafter}) in the $\tilde\rho\to 0$ limit, we should identify $(\tilde A,\tilde B)$ with the parameters $(A, B)$. Combining the rescaling property of $(A,B)$ with the linear map to $(\tilde m,-\tilde c)$, we find that the embeddings close to the trivial embedding $L\equiv 0$ are represented in the $(\tilde m,-\tilde c)$ plane by a double spiral defined {\it via} the transformation:
\begin{equation}
\begin{pmatrix}\tilde m\\ \tilde c\\ \end{pmatrix}\to\frac{1}{\mu}\begin{pmatrix}\cos\sqrt{2}\ln\mu &-\sqrt{2}\sin\sqrt{2}\ln\mu \\ \frac{1}{\sqrt{2}}\sin\sqrt{2}\ln\mu & \cos\sqrt{2}\ln\mu \end{pmatrix}\begin{pmatrix}\tilde m\\ \tilde c \end{pmatrix}\ .
\label{scaling2}
\end{equation}
Note that the spiral is double because we have the symmetry $(\tilde m,-\tilde c)\to(-\tilde m,\tilde c)$. This implies that in order to have similar configurations at scales $\mu_{1}$ and $\mu_{2}$ we should have:
\begin{equation}
\cos\sqrt{2}\ln\mu_1=\pm\cos\sqrt{2}\ln\mu_2
\end{equation}
and hence :
\begin{equation}
\sqrt{2}\ln\frac{\mu_2}{\mu_{1}}=-n\pi,
\end{equation}
which is equivalent to:
\begin{equation}
\frac{\mu_2}{\mu_1}=e^{-n\pi/\sqrt{2}}=q^n\ .
\end{equation}
Therefore the discrete self-similarity is described by a rescaling by a factor of:
\begin{equation}
q=e^{-\pi/\sqrt{2}}\approx 0.10845\ .
\label{q}
\end{equation}
This number will appear again in the next subsection, where we study the meson spectrum; as one may expect, the meson spectrum also has a self-similar structure.
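A quick numerical illustration (ours, not from the original analysis): for $\mu=e^{\pi/\sqrt{2}}$ the matrix in (\ref{scaling2}) collapses to $-q\cdot\mathbb{1}$, so one half-period maps a point of the diagram to the opposite arm, shrunk by the factor $q$:

```python
import numpy as np

q = np.exp(-np.pi/np.sqrt(2))
mu = 1.0/q                     # one half-period of the spiral
t = np.sqrt(2)*np.log(mu)      # equals pi

M = (1.0/mu)*np.array([[np.cos(t),           -np.sqrt(2)*np.sin(t)],
                       [np.sin(t)/np.sqrt(2), np.cos(t)]])

v = np.array([0.3, -0.1])      # some point (m, c) near the origin
assert np.allclose(M @ v, -q*v)
```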
It is interesting to confirm numerically the self-similar structure of the spiral and to calculate the critical exponents of the bare quark mass and the fermionic condensate. It is convenient to use the separation of the D3- and D7-branes at $\tilde\rho=0$, $\tilde L_{in}=\tilde L(0)$, as an order parameter. There is a discrete set of initial separations $\tilde L_{in}$, corresponding to the points $H_0, H_1, H_2, \dots$ in figure \ref{fig:spiral-revisited}, for which the corresponding D7-brane embeddings asymptote to $\tilde m=\tilde L_{\infty}=0$ as $\tilde\rho\to\infty$. The trivial $\tilde L\equiv 0$ embedding has ${\tilde L}_{in}=0$ and is the only one with a vanishing fermionic condensate $(\tilde c=0)$; the rest of the states have a non-zero $\tilde c$, and hence chiral symmetry is spontaneously broken. Each such point determines a separate branch of the spiral on which $\tilde c=\tilde c(\tilde m)$ is a single-valued function. On the other hand, each such branch has both a positive-$\tilde m$ and a negative-$\tilde m$ part. The symmetry of the double spiral from figure \ref{fig:spiral-revisited} suggests that the states with negative $\tilde m$ are equivalent to positive-$\tilde m$ states but with an opposite sign of $\tilde c$. This implies that the positive and negative $\tilde m$ parts of each branch correspond to two different phases of the theory, with opposite signs of the condensate. As we can see from figure \ref{fig:free-energy}, the lowest positive branch of the spiral has the lowest free energy and thus corresponds to the stable phase of the theory. In the next subsection we will analyze the stability of the spiral further by studying the light meson spectrum of the theory near the critical $\tilde L\equiv 0$ embedding.
Here we are going to show that both the bare quark mass $\tilde m$ and the fermionic condensate $\tilde c$ have critical exponent one as $\tilde L_{in} \to 0$. Indeed, let us consider the scaling properties (\ref{scaling1}), (\ref{scaling2}). If we start from some $\tilde L_{in}^0$ and transform to $\tilde L_{in}=\frac{1}{\mu} \tilde L_{in}^0$, we can solve for $\mu$; using equation (\ref{scaling2}), we can then verify that the bare quark mass and the fermionic condensate approach zero linearly as $\tilde L_{in}\to0$. To verify our analysis numerically, we generated plots of $\tilde m/\tilde L_{in}$ vs. $\sqrt{2}\log{\tilde L_{in}}/2\pi$ and $\tilde c/\tilde L_{in}$ vs. $\sqrt{2}\log{\tilde L_{in}}/2\pi$, presented in figure \ref{fig:mspiral}.
\begin{figure}[p]
\centering
\includegraphics[width=11cm]{f31.pdf}
\includegraphics[width=11cm]{f32.pdf}
\caption{\small The red curves represent a fit with trigonometric functions of unit period. For small $\tilde L_{in}$ the fit is very good, while for large $\tilde L_{in}$ we recover the results for the pure $AdS_{5}\times S^5$ space, namely $\tilde L=const$, $\tilde c=0$. The plots also verify that the critical exponents of $\tilde m$ and $\tilde c$ are equal to one.}
\label{fig:mspiral}
\end{figure}
The red curves in these figures represent a fit with trigonometric functions of unit period; as one can see, the fit is very good as $\tilde L_{in}\to 0$. On the other hand, for large $\tilde L_{in}$ we obtain the results for a pure $AdS_{5}\times S^5$ space, namely $\tilde L=const$, $\tilde c=0$. It is also evident from the plots that the critical exponents of $\tilde m$ and $\tilde c$ are equal to one.
\subsection{The Meson Spectrum}
In this section we explore the light meson spectrum of the theory, corresponding to quadratic fluctuations of the D7-brane embedding. In particular, we consider the spectrum corresponding to the fluctuations of $\tilde L$. The equations of motion of the fluctuation modes were derived in \cite{Filev:2007gb}, where it was shown that the vector and the scalar spectrum mix due to the non-zero magnetic field. Some interesting effects, such as Zeeman splitting of the states and a characteristic $\sqrt{m}$ dependence of the meson spectrum, have been reported. However, the analysis performed in \cite{Filev:2007gb} covers only the fluctuations along $\phi$, for the lowest positive branch of the spiral from figure \ref{fig:spiral-revisited} (the one corresponding to the point $H_0$). Here we extend the analysis of the spectrum to all branches of the spiral (points $H_1, H_2,\dots$ in figure \ref{fig:spiral-revisited}) and show that the ground states of all inner branches of the spiral are tachyonic, proving that the phases described by these branches are unstable, as opposed to metastable. Our analysis reveals the self-similar structure of the spectrum, and we obtain the critical exponents of the tachyonic spectrum as one approaches the critical $\tilde L\equiv 0$ embedding. This section is organized as follows:
First, we study the spectrum of the $\tilde L\equiv 0$ embedding in the spirit of the analysis provided in \cite{Hoyos:2006gb}. We perform both a numerical and an analytical study and show that the spectrum contains infinitely many tachyonic states approaching zero in a decreasing geometric series, reflecting the self-similar structure of the meson spectrum.
Next, we study the spectrum as a function of the bare quark mass and show that at each turn of the spiral one of the energy levels becomes tachyonic. Similar behavior has been recently reported in \cite{Mateos:2007vn}. We show that as we approach the critical $\tilde L\equiv 0$ embedding the spectrum becomes tachyonic, and the corresponding critical exponent is two. We also present plots showing the spiraling of the spectrum as one approaches criticality.
Finally, we provide an analysis of the spectrum of the stable branch of the spiral and comment on the small-$\tilde m$ behavior of the spectrum, which is consistent with the spontaneous chiral symmetry breaking scenario.
\subsubsection{The critical $\tilde L\equiv 0$ embedding}
In this section we study the $\tilde L\equiv 0$ embedding and, in particular, the spectrum of the fluctuations along the $\tilde L$ coordinate. Let us go back to dimensionful coordinates and consider the following change of coordinates in the transverse ${\cal R}^6$ space:
\begin{eqnarray}
\rho=u\cos\theta\ ,\\
L=u\sin\theta\nonumber\ .
\end{eqnarray}
In these coordinates the trivial embedding corresponds to $\theta\equiv 0$, and in order to study the quadratic fluctuations we perform the expansion:
\begin{eqnarray}
\theta=0+(2\pi\alpha')\delta\theta(t,u)\ ,\\
\delta\theta=e^{-i\Omega t}h(u)\ .
\end{eqnarray}
Note that in order to study the mass spectrum we restrict the D7-brane to fluctuate only in time; in a sense this corresponds to going to the rest frame. Due to the presence of the magnetic field, the scalar spectrum couples to the vector one; however, for the fluctuations along $\theta$ the coupling depends on the momenta in the $(x_2,x_3)$ plane, which is why considering the rest frame is particularly convenient.
Our analysis follows closely the one in \cite{Hoyos:2006gb}, where the authors calculated the quasinormal modes of the D7-brane embedding in the AdS-black hole background by imposing an in-going boundary condition at the horizon of the black hole. Our case is the $T\to0$ limit and the horizon is extremal; however, the $\theta\equiv 0$ embedding can still have quasinormal excitations with imaginary frequencies, corresponding to a real wave function, so that there is no flux of particles falling into the zero temperature horizon.
The resulting equation of motion is:
\begin{equation}
h''+\left(\frac{3}{u}+\frac{2 u^3}{u^4+R^4H^2}\right)h'+\left(\frac{R^4}{u^4}\Omega^2+\frac{3}{u^2}\right)h=0\ .
\end{equation}
It is convenient to introduce the following dimensionless quantities:
\begin{equation}
z=\frac{R}{u}\sqrt{H};~~~\omega=\frac{\Omega R}{\sqrt{H}}\ ,
\end{equation}
and make the substitution \cite{Hoyos:2006gb}
\begin{equation}
h(z)=\sigma(z)f(z);~~~\frac{\sigma'(z)}{\sigma(z)}=\frac{1}{2z}+\frac{1}{z(1+z^4)}\ ,
\end{equation}
leading to the equation for the new variable $f(z)$:
\begin{equation}
f''(z)+\left(\omega^2-V(z)\right)f(z)=0\ ,
\label{Shr}
\end{equation}
where the effective potential is given by:
\begin{equation}
V(z)=\frac{3}{4z^2}\frac{(1+3z^4)(1-z^4)}{(1+z^4)^2}\ .
\label{Potential}
\end{equation}
The potential in (\ref{Potential}) behaves as $\frac{3}{4z^2}$ for $z\to 0$ and as $-\frac{9}{4z^2}$ for $z\to\infty$, and is presented in figure \ref{fig:Potential}. As discussed in \cite{Hoyos:2006gb}, if the potential becomes negative the imaginary part of the frequency may become negative. Furthermore, the shape of the potential suggests that there might be bound states with negative $\omega^2$. To obtain the spectrum we look for regular solutions of (\ref{Shr}), imposing an in-falling boundary condition at the horizon ($z\to\infty$).
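The quoted asymptotics of the potential are easily verified (an illustrative sympy check of ours):

```python
import sympy as sp

z = sp.symbols('z', positive=True)
V = sp.Rational(3, 4)/z**2*(1 + 3*z**4)*(1 - z**4)/(1 + z**4)**2

# V ~ 3/(4 z^2) for z -> 0 and V ~ -9/(4 z^2) for z -> infinity
assert sp.limit(V*z**2, z, 0) == sp.Rational(3, 4)
assert sp.limit(V*z**2, z, sp.oo) == -sp.Rational(9, 4)

# the potential changes sign at z = 1 and dips negative beyond it
assert V.subs(z, 1) == 0
```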
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{f4.pdf}
\caption{\small A plot of the effective potential $V(z)$ given in equation (\ref{Potential}). }
\label{fig:Potential}
\end{figure}
The asymptotic form of the equation of motion at $z\to\infty$ is that of the harmonic oscillator:
\begin{equation}
f''(z)+\omega^2f(z)=0\ ,
\end{equation}
with solutions $e^{\pm i\omega z}$; the in-falling boundary condition implies that we should choose the positive sign. In our case the corresponding spectrum turns out to be tachyonic, and hence the exponents are real; the in-falling boundary condition then simply means that we have selected the solution regular at the horizon $z\to \infty$. We look for a solution of the form:
\begin{equation}
f(z)=e^{+i\omega z}S(z)\ .
\end{equation}
The resulting equation of motion for $S(z)$ is:
\begin{equation}
(-3-6z^4+9z^8)S(z)+4z^2(1+z^4)^2\left(2i\omega S'(z)+S''(z)\right)=0\ .
\label{eqnS}
\end{equation}
Next we study equation (\ref{eqnS}) numerically. After solving the asymptotic form of the equation at the horizon, we impose the following boundary conditions at $z=1/\epsilon$, where $\epsilon$ is a numerically small number, typically $\epsilon=10^{-9}$:
\begin{equation}
S(1/\epsilon)=1-\frac{9i\epsilon}{8\omega};~~~S'(1/\epsilon)=\frac{9i\epsilon^2}{8\omega}\ ,
\label{initialcond}
\end{equation}
After that we explore the solution for a wide range of $\omega=i\omega_{I}$. We look for regular solutions with $|S(\epsilon)|\approx 0$; this condition follows from the requirement that $h \propto z^3$ as $z\to 0$. It turns out that regular solutions exist for a discrete set of positive $\omega_{I}\ll1$. The results for the first six modes are presented in table \ref{tab:1}.
\begin{table}[h]
\caption{\small The first six tachyonic frequencies of the critical $\tilde L\equiv 0$ embedding. The last column contains the ratio of consecutive frequencies.}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$n$&$\omega_{I}^{(n)}$&$\omega_{I}^{(n)}/\omega_{I}^{(n-1)}$\\\hline
0&$2.6448\times10^{-1}$&-\\\hline
1&$2.8902\times10^{-2}$&0.10928\\\hline
2&$3.1348\times10^{-3}$&0.10846\\\hline
3&$3.3995\times10^{-4}$&0.10845\\\hline
4&$3.6865\times10^{-5}$&0.10844\\\hline
5&$3.9967\times10^{-6}$&0.10841\\\hline
\end{tabular}
\end{center}
\label{tab:1}
\end{table}%
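The geometric trend in the table can be checked directly against the factor (\ref{q}) (an illustrative check, values transcribed from table \ref{tab:1}):

```python
import math

# omega_I^(n) transcribed from the table
omega_I = [2.6448e-1, 2.8902e-2, 3.1348e-3, 3.3995e-4, 3.6865e-5, 3.9967e-6]
q = math.exp(-math.pi/math.sqrt(2))

ratios = [omega_I[n]/omega_I[n - 1] for n in range(1, len(omega_I))]
# successive ratios agree with q ~ 0.10845 to better than one percent
assert all(abs(r - q)/q < 0.01 for r in ratios)
```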
The data suggest that as $\omega_{I}\to 0$ the states organize into a decreasing geometric series with a factor $q\approx0.1084$. Up to four significant digits, this is the number from equation (\ref{q}), which determines the period of the spiral. We can show this analytically. To this end, let us consider the rescaling of the variables in equation (\ref{eqnS}) given by:
\begin{eqnarray}
z=\lambda \hat z;~~~\hat\omega=\lambda\omega;~~~\lambda\to\infty\ .
\end{eqnarray}
This leads to:
\begin{equation}
9\hat S(\hat z)+4\hat z^2(2i\hat\omega\hat S'(\hat z)+\hat S''(\hat z))+O({\lambda}^{-4})=0\ .
\label{small}
\end{equation}
The solution consistent with the boundary conditions at infinity (\ref{initialcond}) is found to be:
\begin{equation}
\hat S(\hat z)=\frac{1+i}{2} e^{-i\frac{\pi}{\sqrt{2}}}e^{-i\hat z\hat\omega}\sqrt{\pi\hat z\hat\omega}H_{i\sqrt{2}}^{(1)}(\hat z\hat\omega);~~~\hat\omega=i\hat\omega_{I}\ ,
\label{Shat}
\end{equation}
where $H_{i\sqrt{2}}^{(1)}$ is the Hankel function of the first kind. Our next assumption is that in the $\omega_I\to 0$ limit this asymptotic form of the equation describes the spectrum well enough. To quantize the spectrum we consider some $\hat z_0=z_0/\lambda\ll1$, with $1\ll z_0\ll\lambda$, so that the simplified form of equation (\ref{small}) is applicable, and impose:
\begin{equation}
\hat S(\hat z_0)=0\ .
\label{Shat2}
\end{equation}
Using $\hat z\hat\omega=iz\omega_I$, this boils down to:
\begin{equation}
H_{i\sqrt{2}}^{(1)}(i\omega_I z_0)=0\ .
\end{equation}
Now using that $\omega_I z_0\ll1$ for sufficiently small $\omega_I$, we can make the expansion:
\begin{equation}
H_{i\sqrt{2}}^{(1)}(i\omega_I z_0)\approx-A_1\left((\omega_I z_0)^{i\sqrt{2}}-(\omega_I z_0)^{-i\sqrt{2}}\right)+iA_2\left((\omega_I z_0)^{i\sqrt{2}}+(\omega_I z_0)^{-i\sqrt{2}}\right)\ ,
\end{equation}
where $A_1$ and $A_2$ are real numbers defined {\it via}:
\begin{equation}
A_1+iA_2=-\frac{i}{\pi}\,(i/2)^{-i\sqrt{2}}\,\Gamma(i\sqrt{2})\ .
\end{equation}
This boils down to:
\begin{equation}
\cos(\sqrt{2}\ln(\omega_I z_0)+\phi)=0;~~~\phi\equiv\pi/2-\arg(A_1+iA_2)\ .
\label{20}
\end{equation}
The first equation in (\ref{20}) leads to:
\begin{equation}
\omega_I^{(n)}=\frac{1}{z_0}e^{-\frac{\pi/2+\phi}{\sqrt{2}}}e^{-n\frac{\pi}{\sqrt{2}}}=\omega_I^{(0)}q^n\ ,
\label{geom}
\end{equation}
suggesting that:
\begin{equation}
q=e^{-\frac{\pi}{\sqrt{2}}}\approx 0.10845\ .
\label{an_q}
\end{equation}
This is the number given in (\ref{q}). Note that the value of $z_0$ is a free parameter, which we can fix by matching equation (\ref{geom}) to the data in table \ref{tab:1}. On the other hand, $\hat S(\hat z)$ given in equation (\ref{Shat}) depends only on $\hat z\hat\omega=i\omega_I z$, and therefore once we have fixed $z_0$ we are left with a function of $\omega_I$ whose zeroes determine the spectrum, equation (\ref{Shat2}). It is interesting to compare it to the numerically obtained plot of $|S(\epsilon)|$ vs. $\omega_I$ that we have used to determine the spectrum numerically. The result is presented in figure \ref{fig:spectrum}, where we have used the $n=3$ entry from table \ref{tab:1} to fix $z_0$. One can see the good agreement between the spectrum determined by equation (\ref{Shat2}) (the red curve in figure \ref{fig:spectrum}) and the numerically determined one (the dotted blue curve).
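As an independent check, one can verify numerically that (\ref{Shat}) solves (\ref{small}); the overall constant is irrelevant for this purpose. The sketch below (ours, illustrative) uses mpmath, which implements Bessel functions of complex order, and evaluates the residual of (\ref{small}) at a sample point:

```python
import mpmath as mp

mp.mp.dps = 30
nu = 1j*mp.sqrt(2)
w = mp.mpc(0, '0.01')              # purely imaginary hat-omega

def S(z):
    # the solution (Shat), up to an overall constant
    return mp.exp(-1j*z*w)*mp.sqrt(mp.pi*z*w)*mp.hankel1(nu, z*w)

z0 = mp.mpf(3)
residual = 9*S(z0) + 4*z0**2*(2j*w*mp.diff(S, z0) + mp.diff(S, z0, 2))
assert abs(residual) < 1e-9
```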
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{f5.pdf}
\caption{\small The dotted blue curve corresponds to the numerical solution of equation (\ref{eqnS}), while the thick red curve is the one determined by equation (\ref{Shat2}). The plots are scaled to match along the vertical axis.}
\label{fig:spectrum}
\end{figure}
\subsubsection{The spectrum near criticality}
In this subsection we study the light meson spectrum of the states forming the spiral structure in the $(\tilde m, -\tilde c)$ plane, figure \ref{fig:spiral-revisited}. In particular, we focus on the study of the fluctuations along $L$. The corresponding equation of motion was derived in \cite{Filev:2007gb}. The effect of the magnetic field $H$ is to mix the vector and the scalar parts of the spectrum. However, if we consider the rest frame by allowing the fluctuations to depend only on the time direction of the D3-branes' world volume, the equation of motion for the fluctuations along $L$ decouples from the vector spectrum. To this end we expand:
\begin{eqnarray}
L=L_0(\rho)+(2\pi\alpha')\chi(\rho,t)\ ,\\
\chi=h(\rho)\cos{M t}\nonumber\ .
\end{eqnarray}
Here $L_0(\rho)$ is the profile of the D7-brane's classical embedding. The resulting equation of motion for $h(\rho)$ is:
\begin{eqnarray}
&\partial_\rho(g\frac{h'}{(1+L_0'^2)^2})+\left(g\frac{R^4}{(\rho^2+L_0^2)^2}\frac{M^2}{1+L_0'^2}-\frac{\partial^2 g}{\partial L_0^2}+\partial_\rho(\frac{\partial g}{\partial L_0}\frac{L_0'}{1+L_0'^2})\right)h=0\ ,\\
{\rm where}\quad&g(\rho,L_0,L_0')=\rho^3\sqrt{1+{L_0}'^2}\sqrt{1+\frac{R^4H^2}{(\rho^2+L_0^2)^2}}\ .\nonumber
\end{eqnarray}
It is convenient to introduce the dimensionless variables:
\begin{equation}
\tilde h=\frac{h}{R\sqrt{H}};~~\tilde L_{0}=\frac{L_0}{R\sqrt{H}};~ \tilde\rho=\frac{\rho}{R\sqrt{H}};~\tilde M=\frac{M R}{\sqrt{H}}\ ,
\label{dimensionless}
\end{equation}
leading to:
\begin{eqnarray}
&\partial_{\tilde\rho}(\tilde g\frac{\tilde h'}{(1+\tilde L_0'^2)^2})+\left(\tilde g\frac{1}{(\tilde\rho^2+\tilde L_0^2)^2}\frac{\tilde M^2}{1+\tilde L_0'^2}-\frac{\partial^2 \tilde g}{\partial \tilde L_0^2}+\partial_{\tilde\rho}(\frac{\partial \tilde g}{\partial \tilde L_0}\frac{\tilde L_0'}{1+\tilde L_0'^2})\right)\tilde h=0\ ,\label{fluctPsi}\\
{\rm with}\quad&\tilde g(\tilde\rho,\tilde L_0,\tilde L_0')=\tilde\rho^3\sqrt{1+{\tilde L_0}'^2}\sqrt{1+\frac{1}{(\tilde\rho^2+\tilde L_0^2)^2}}\nonumber\ .
\end{eqnarray}
We study the normal modes of the D7-brane described by equation (\ref{fluctPsi}) by imposing Neumann boundary conditions at $\tilde\rho=0$. Since our analysis is numerical, we solve the equation of motion (\ref{fluctPsi}) in terms of a power series for small $\tilde\rho$ and impose the appropriate initial conditions for the numerical solution at $\tilde\rho=\epsilon$, where $\epsilon$ is some very small number. In order to quantize the spectrum, we look for numerical solutions which are normalizable and go as $1/\tilde\rho^2$ at infinity.
Let us study the dependence of the meson spectrum $\tilde M$ on the bare quark mass $\tilde m$, for the states corresponding to the spiral structure from figure \ref{fig:spiral-revisited}. A plot of the spectrum of the first three energy levels is presented in figure \ref{fig:spectrum-spiral}. The classification of the states in terms of the quantum number $n$ is justified, because at large $\tilde m$ the equation of motion for the fluctuations asymptotes to the equation of motion for the pure $AdS_5\times S^5$ space, considered in \cite{Kruczenski:2003be}, where the authors obtained the spectrum in a closed form. Note that the diagram has a left-right symmetry. This is because we plotted the spectrum for both arms of the spiral in order to emphasize its self-similar structure; physically, only one side of the diagram is sufficient.
\begin{figure}[htbp]
\centering
\includegraphics[width=12cm]{61.pdf}
\includegraphics[width=12cm]{62.pdf}
\caption{\small A plot of the meson spectrum corresponding to the two arms of the spiral structure at the origin of the $(\tilde m,-\tilde c)$ plane. The ground state ($n=0$) becomes tachyonic for the inner branches of the spiral, while only the lowest branch is tachyon free. The tachyonic sector of the diagram reveals the self-similar structure of the spectrum.}
\label{fig:spectrum-spiral}
\end{figure}
Let us trace the blue curve corresponding to the $n=0$ state starting from the right-hand side. As $\tilde m$ decreases the mass of the meson decreases and at $\tilde m=0$ it has some non-zero value. This part of the diagram corresponds to the lowest positive branch of the spiral from figure \ref{fig:spiral-revisited} (the vicinity of point $H_0$).
It is satisfying to see that the lowest positive $\tilde m$ branch of the spiral is tachyon free and therefore stable under quantum fluctuations. Note that, although the negative $\tilde m$ part of the lowest branch has no tachyonic modes in its fluctuations along $L$, it has a higher free energy (as can be seen from figure \ref{fig:free-energy}) and is thus at best metastable.
One can also see that the spectrum drops to zero and becomes tachyonic exactly at the point where we start exploring the upper branch of the spiral. This proves that all inner branches correspond to a true instability of the theory and cannot be reached by supercooling. As we go deeper into the spiral, the $n=0$ spectrum remains tachyonic and spirals to some critical value. The dashed line denoted by $\omega_I^{(0)}$ in figure \ref{fig:spiral-revisited} corresponds to the first entry in table \ref{tab:1}. As one can see, this is the critical value approached by the spectrum.
Now let us comment on the $n=1,2$ levels of the spectrum, represented by the red and green curves, respectively. As one can see, the $n=1$ spectrum becomes tachyonic when we reach the third branch of the spiral (the vicinity of point $H_2$ in figure \ref{fig:spiral-revisited}) and after that follows the same pattern as the $n=0$ level, spiraling to the second entry $\omega_I^{(1)}$ from table \ref{tab:1}. The $n=2$ level has a similar behavior, but it becomes tachyonic at the next turn of the spiral and it approaches the next entry from table \ref{tab:1}. A similar feature was reported recently in \cite{Mateos:2007vn}, where the authors studied topology changing transitions. The above analysis suggests that at each turn of the spiral there is one new tachyonic state appearing. It also suggests that the structure of the $n$-th level is similar to the structure of the $(n+1)$-th level, and in the $n\to\infty$ limit this similarity becomes an exact discrete self-similarity. The last feature is apparent from the tachyonic sector of the diagram in the second plot in figure \ref{fig:spectrum-spiral}: the blue, red and green curves are related by an approximate scaling symmetry. The analysis of the spectrum of the critical $L\equiv 0$ embedding suggests that this symmetry becomes exact in the $n\to\infty$ limit, with a scaling factor of $q$ given in equation (\ref{q}).
It is interesting to analyze the way the meson mass $\tilde M$ approaches its critical value and compute the corresponding critical exponent. Let us denote the critical value of $\tilde M$ by $\tilde M_*$ and consider the bare quark mass $\tilde m$ as an order parameter, denoting its critical value by $\tilde m_*$. We are interested in calculating the critical exponent $\alpha$ defined by:
\begin{equation}
| \tilde M-\tilde M_* |\propto|\tilde m-\tilde m_*|^\alpha\ .
\label{critexp}
\end{equation}
We will provide a somewhat heuristic argument that $\alpha=2$ and will confirm this numerically. To begin with let us consider the energy density of the gauge theory $\tilde E$ as a function of the bare quark mass $\tilde m$. Now let us consider a state close to the critical one, characterized by:
\begin{equation}
\tilde M=\tilde M_*+\delta\tilde M;~~~\tilde m=\tilde m_*+\delta\tilde m;~~~ \tilde E=\tilde E_*+\delta\tilde E\ .
\end{equation}
Next we assume that, as we approach criticality, the variations of $\tilde E$ and $\tilde M$ are proportional to the variation of the energy scale, and hence $\delta \tilde E\propto\delta\tilde M$. Therefore we have:
\begin{equation}
\frac{\delta \tilde M}{\delta \tilde m}\propto \frac{\delta\tilde E}{\delta \tilde m}\propto \tilde c\ ,
\label{relation}
\end{equation}
where $\tilde c$ is the fermionic condensate. The second relation in (\ref{relation}) was argued in \cite{Kruczenski:2003uq}. In the previous section we argued that the critical exponent of the condensate is one and since the critical embedding has a zero condensate it follows that $\tilde c\propto |\tilde m-\tilde m_{*}|$. Therefore we have:
\begin{equation}
\frac{\delta \tilde M}{\delta \tilde m}\propto \alpha|\tilde m-\tilde m_*|^{\alpha-1}\propto |\tilde m-\tilde m_*|
\end{equation}
and hence $\alpha=2$.
Now let us go back to figure \ref{fig:spectrum-spiral}. As we discussed above, for each energy level $n$ the tachyonic spectrum spirals to the critical value $\omega_I^{(n)}$, corresponding to the center of the spiral. If we focus on the $\tilde m=0$ axis, we can see that for each level we have a tower of tachyonic states at zero bare quark mass, corresponding to the different branches of the spiral. Let us denote by $\tilde M_k^{(n)}$ the imaginary part of the meson spectrum corresponding to the $k$-th tachyonic state of the $n$-th energy level, at zero bare quark mass $\tilde m$. As we go deeper into the spiral, $k\to\infty$ and $\tilde M_k^{(n)}\to \tilde M_*^{(n)}$; the data in figure \ref{fig:spectrum-spiral} suggests that $\tilde M_*^{(n)}=\omega_I^{(n)}$. On the other hand, if the meson spectrum has a critical exponent of two, one can show that for large $k$:
\begin{equation}
\frac{\tilde M_k^{(n)}-\tilde M_*^{(n)}}{\tilde M_{k-1}^{(n)}-\tilde M_*^{(n)}}=q^2\ ,
\label{geom2}
\end{equation}
where $q$ is given by equation (\ref{q}). We can solve for $\tilde M_{*}^{(n)}$:
\begin{equation}
\tilde M_{*}^{(n)}=\tilde M_{k-1}^{(n)}+\frac{\tilde M_k^{(n)}-\tilde M_{k-1}^{(n)}}{1-q^2}\ .
\label{M*}
\end{equation}
Now, assuming that for $k=1,2$ the approximate geometric series defined {\it via} (\ref{geom2}) is already exact, we calculate numerically $\tilde M_1^{(n)}$ and $\tilde M_2^{(n)}$ for the $n=0,1,2$ levels and compare the value of $\tilde M_{*}^{(n)}$ obtained from equation (\ref{M*}) to the first three entries in table \ref{tab:1}. The results are presented in table \ref{tab:2}.
\begin{table}[h]
\caption{Critical values $\tilde M_{*}^{(n)}$ of the meson spectrum estimated from equation (\ref{M*}), compared to the quasi-normal mode frequencies $\omega_I^{(n)}$ from table \ref{tab:1}.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$n$&$\tilde M_{1}^{(n)}$&$\tilde M_{2}^{(n)}$&$\tilde M_{*}^{(n)}$&$\omega_I^{(n)}$\\\hline
0&$2.7530\times10^{-1}$&$2.6460\times10^{-1}$&$2.6447\times10^{-1}$&$2.6448\times10^{-1}$\\\hline
1&$3.0162\times10^{-2}$&$2.8917\times10^{-2}$&$2.8902\times10^{-2}$&$2.8902\times10^{-2}$\\\hline
2&$3.2715\times10^{-3}$&$3.1363\times10^{-3}$&$3.1347\times10^{-3}$&$3.1348\times10^{-3}$\\\hline
\end{tabular}
\end{center}
\label{tab:2}
\end{table}%
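As a quick illustration, equation (\ref{M*}) can be applied directly to the numbers in the tables. The value of $q$ from equation (\ref{q}) is not reproduced here, so, for the purpose of this sketch, we infer $q\approx\omega_I^{(1)}/\omega_I^{(0)}\approx 0.109$ from table \ref{tab:1}, an assumption consistent with the level-to-level self-similarity discussed above:

```python
# (M_1, M_2, omega_I) for the levels n = 0, 1, 2, from tables 1 and 2
levels = [(2.7530e-1, 2.6460e-1, 2.6448e-1),
          (3.0162e-2, 2.8917e-2, 2.8902e-2),
          (3.2715e-3, 3.1363e-3, 3.1348e-3)]

# assumed scaling factor: ratio of successive omega_I^{(n)}
q = levels[1][2]/levels[0][2]

for M1, M2, omega in levels:
    M_star = M1 + (M2 - M1)/(1.0 - q**2)   # geometric-series extrapolation
    print(M_star, omega)                   # agree to about four digits
```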
One can see that up to four significant digits the critical value of the meson spectrum is given by the imaginary part of the quasi-normal modes presented in table \ref{tab:1}. This supports the above argument that the meson spectrum has a critical exponent of two. Another way to justify this is to generate a plot of the meson spectrum similar to the one presented in figure \ref{fig:mspiral} for the bare quark mass $\tilde m$ and the fermionic condensate $\tilde c$. Notice that $\tilde M$ approaches criticality from above, while the parameter $\tilde m$ oscillates around the critical value $\tilde m_*=0$. This suggests to use $\tilde M$ as an order parameter and to generate a plot of $\tilde m/(\tilde M-\tilde M_*)^2$ vs. $\sqrt{2}\log{|\tilde M-\tilde M_*|}/{2\pi}$. Note that according to equation (\ref{geom2}) the plot should represent a periodic function of unit period. The resulting plot for the $n=0$ level, using $\tilde M_*^{(0)}$ from table \ref{tab:2} as a critical value, is presented in figure \ref{fig:messpiral}.
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{f7.pdf}
\caption{\small A plot of the bare quark mass vs. the meson spectrum, in an appropriate parameterization, determined by the critical exponents of $\tilde m$ and $\tilde M$. The discrete self-similar structure of the spectrum is manifested by the periodicity of the plotted function.}
\label{fig:messpiral}
\end{figure}
\subsubsection{The stable branch of the spiral}
In this subsection we consider the spectrum corresponding to the states far from the origin of the $(\tilde m, -\tilde c)$ plane, namely the outermost branch of the spiral ending at point $H_0$ in figure \ref{fig:spiral-revisited}. The fluctuations of the D7-brane corresponding to the massless scalar $\phi$ were studied in \cite{Filev:2007gb}, and some features consistent with spontaneous chiral symmetry breaking, such as a characteristic $\sqrt{m}$ behavior \cite{Gell-Mann:1968rz}, were reported.
Here we complement the analysis by presenting the results for the fluctuations along the $\tilde L$ coordinate. Since this is the massive field in the spontaneous chiral symmetry breaking scenario, we expect a $\sqrt{const+\tilde m}$ behavior of the meson spectrum for small values of $\tilde m$. Note that such a behavior simply means that the spectrum of the $\tilde L$ fluctuations has a mass gap at zero bare quark mass and that the slope of the spectrum vs. the bare quark mass function is finite. It is satisfying that our results are in accord with these expectations.
To obtain the spectrum, we numerically solve equation (\ref{fluctPsi}), imposing Neumann boundary conditions at $\tilde\rho=0$. A plot of the first five energy levels is presented in figure \ref{fig:final}. As one can see, at large $\tilde m$ the spectrum approximates that of the pure ${\cal N}=2$ flavored Yang-Mills theory studied in \cite{Kruczenski:2003be}, where the dependence of the meson spectrum on the bare quark mass was obtained in a closed form:
\begin{equation}
M_0=\frac{2m}{R^2}\sqrt{(n+l+1)(n+l+2)}\ .
\label{spectrAds}
\end{equation}
Here $l$ is the quantum number corresponding to the angular modes along the internal $S^3$ sphere wrapped by the D7 brane and is zero in our case. After introducing the dimensionless variables defined in (\ref{dimensionless}), equation (\ref{spectrAds}) boils down to:
\begin{equation}
\tilde M_0= 2\sqrt{(n+1)(n+2)}\tilde m\ .
\label{dmlAdS}
\end{equation}
The black dashed lines in figure \ref{fig:final} represent equation (\ref{dmlAdS}). The fact that the meson spectrum asymptotes to the one described by (\ref{dmlAdS}) justifies the use of the quantum number $n$ to classify the meson spectrum. One can also see that as expected the spectrum at zero bare quark mass has a mass gap.
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{f8.pdf}
\caption{\small A plot of the meson spectrum corresponding to the stable branch of the spiral. The black dashed lines correspond to equation (\ref{dmlAdS}), one can see that for large $\tilde m$ the meson spectrum asymptotes to the result for pure $AdS_5\times S^5$ space. One can also see that at zero bare quark mass $\tilde m$ there is a mass gap in the spectrum.
}
\label{fig:final}
\end{figure}
\section{Conclusion}
In this paper we performed a detailed analysis of the spiral structure at the origin of the condensate vs. bare quark mass diagram. We revealed the discrete self-similar behavior of the theory near criticality and calculated the corresponding critical exponents for the bare quark mass, the fermionic condensate and the meson spectrum.
Our study of the meson spectrum confirmed the expectations based on thermodynamic considerations that the lowest positive $\tilde m$ branch of the spiral corresponds to a stable phase of the theory, and that the inner branches are real instabilities, characterized by a tachyonic ground state, which cannot be reached by supercooling. The lowest negative $\tilde m$ branch of the spiral is tachyon free and thus could be metastable.
The supercooling mentioned above could be attempted by considering the finite temperature background, namely the AdS black hole geometry, in the presence of an external magnetic field. We could prepare the system in the phase corresponding to the trivial $\tilde L\equiv 0$ embedding and then take the $T\to0$ limit. If some of the inner branches of the spiral were metastable, the theory could end up in the corresponding phase. The study of the finite temperature case is of particular interest. Due to the additional scale introduced by the temperature, the theory has two dimensionless parameters and is described by a two-dimensional phase diagram. The effect of the temperature is to restore the chiral symmetry, and it competes with that of the external magnetic field. On the other hand, the magnetic field affects the melting of the mesons \cite{Albash:2007bk}.
\section{Acknowledgments}
V. Filev would like to thank: T. Albash, C. V. Johnson, A. Kundu and R. Rashkov for useful comments and discussions. This work was supported in part by the US Department of Energy.
\newpage
\chapter{Further Considerations on Boson Sampling with Ultracold Atoms}
\label{appendix:appendix_d}
\section{Pair distribution in the optical lattice}\label{pair_distribution}
To estimate the rate of two-body collisions in section~\ref{ScalingAndErrors}, we need to know the probability of finding $k$ pairs of atoms occupying the same lattice sites, regardless of their spin states. To calculate this probability, we assume that the quantum circuit realises a Haar-random unitary matrix $U$ and, importantly, that the system is in a linear superposition of all possible states, where the probabilities of all states are the same. This second assumption, namely the uniform distribution (on average) over all possible bosonic configurations, has been proven for the output modes of the quantum circuit~\cite{Arkhipov12} and, for convenience, it is assumed valid in this work for the intermediate stages of the quantum circuit too. This will be justified by exact numerical simulations in section~\ref{subsec:HamilModel}.
We can calculate the probability of finding exactly $k_2$ pairs by simply counting the number of configurations with $k_2$ pairs of atoms in the same site, and dividing it by the overall number of possible bosonic configurations.
The latter is given by the multiset coefficient $\big(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\big)$. To determine the number of configurations containing $k_2$ pairs, we should first consider that there are $\binom{M/2}{k_2}$ different ways in which $k_2$ pairs of atoms can be arranged in $M/2$ sites. Since for each doubly occupied site there are three possible spin configurations, $|2\rangle_{\downarrow}|0\rangle_{\uparrow}$,
$|1\rangle_{\downarrow}|1\rangle_{\uparrow}$, and $|0\rangle_{\downarrow}|2\rangle_{\uparrow}$, the previous number of combinations should be multiplied by $3^{k_2}$ to obtain the total number of configurations. Secondly, we should consider that there are $\binom{M/2-k_2}{N-2k_2}$ combinations in which the remaining $N-2k_2$ atoms could be arranged in the remaining $M/2-k_2$ sites. Since for each singly occupied site there are only two possible spin configurations, $|1\rangle_{\downarrow}|0\rangle_{\uparrow}$ and $|0\rangle_{\downarrow}|1\rangle_{\uparrow}$, the previous number of combinations should be multiplied by $2^{N-2k_2}$ to obtain the total number of configurations in which the $N-2k_2$ single atoms could be arranged. Thus, the probability of finding $k_2$ pairs of atoms in distinct sites is:
\begin{equation}\label{NPairProb1}
P_\text{pair}(N,M;k_2)=\frac{3^{k_2}\binom{M/2}{k_2}2^{N-2k_2}\binom{M/2-k_2}{N-2k_2}}{\big(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\big)}\Big.
\end{equation}
Equation~(\ref{NPairProb1}) only takes into account particle pairs, neglecting states that have more than two particles in a site. Those can be taken into account by defining a more general expression $P(k_2,k_3,k_4,...)$, corresponding to the probability of having $k_2$ pairs, $k_3$ trios, $k_4$ quartets and so on. For simplicity, we will consider the case of only pairs and trios. The number of configurations in which $k_3$ trios can be placed in $M/2$ sites is given by $\binom{M/2}{k_3}$. This number has to be multiplied by $4^{k_3}$, as $4$ different states can represent a trio in a site: $|3\rangle_{\downarrow}|0\rangle_{\uparrow}$,
$|2\rangle_{\downarrow}|1\rangle_{\uparrow}$, $|1\rangle_{\downarrow}|2\rangle_{\uparrow}$ and $|0\rangle_{\downarrow}|3\rangle_{\uparrow}$. Then, the number of configurations in which $k_2$ pairs can be placed in the remaining $M/2-k_3$ sites is given by $\binom{M/2-k_3}{k_2}$. Finally, the number of configurations in which the remaining $N-2k_2-3k_3$ particles can be placed in $N-2k_2-3k_3$ sites is given by $\binom{M/2-k_2-k_3}{N-2k_2-3k_3}$. The probability of having $k_2$ pairs and $k_3$ trios is then given by
\begin{equation}\label{NPairProb2}
P(k_2,k_3)=4^{k_3}\binom{M/2}{k_3}3^{k_2}\binom{M/2-k_3}{k_2}2^{N-2k_2-3k_3}\binom{M/2-k_2-k_3}{N-2k_2-3k_3}\Big/\bigg(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\bigg).
\end{equation}
In the limit of large $N$, both Eqs.~(\ref{NPairProb1}) and (\ref{NPairProb2}) converge to a Poissonian distribution
\begin{equation}\label{eq:SitePoissonian}
\lim_{N\to\infty} P_\text{pair}(N,M;k)=\frac{\lambda^{k}}{e^{\lambda}k!},
\end{equation}
with average value $\lambda=3/(2c)$. Here, the constant factor $c$ denotes the ratio $c=M/N^2$, which in the main text has been simply assumed equal to 1. As an example, the probability associated to the collision-free subspace (the subspace of states where all atoms are at different sites), i.e. $P_\text{pair}(N,M;0)$, tends to $e^{-3/(2c)}$ for large $N$.
To prove how Eq.~(\ref{NPairProb2}) tends to Eq.~(\ref{eq:SitePoissonian}) for large $N$, it is useful to recall that
\begin{equation}\label{MultisetProd}
\bigg(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\bigg)=\frac{M^N}{N!}\prod_{a=0}^{N-1}(1+a/M)\approx \exp{(\sum_{a=0}^{N-1}a/M)}\approx\exp{(N^2/2M)},
\end{equation}
and
\begin{equation}\label{BinomialProd}
\binom{M}{N} =\frac{M^N}{N!}\prod_{a=0}^{N-1}(1-a/M)\approx \exp{(-\sum_{a=0}^{N-1}a/M)}\approx\exp{(-N^2/2M)},
\end{equation}
for $N\ll M$. Using these relations, Eq.~(\ref{NPairProb2}) can be rewritten as
\begin{equation}\label{AlmostLimit}
P(k_2,k_3)\approx \frac{1}{k_2!k_3!}\bigg(\frac{2N^3}{M^2}\bigg)^{k_3}\bigg(\frac{3N^2}{2M}\bigg)^{k_2}e^{-3N^2/2M}.
\end{equation}
For an $M=cN^2$ scaling, the prefactor of the $k_3$ term, which is of order $N^3/M^2$, tends to zero, which means that the probability of having trios vanishes for large $N$. Of course, the same happens with the probability of having four or more particles in a site. For $k_3=0$, the probability of having $k_2$ pairs can be written as
\begin{equation}\label{Poissonian2}
P_\textrm{ pair}(k_2)=P(k_2,0)\approx \frac{1}{k_2!}\bigg(\frac{3}{2c}\bigg)^{k_2}e^{-3/2c},
\end{equation}
which equals Eq.~(\ref{eq:SitePoissonian}).
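This limit can be checked directly against the exact counting formula; the following minimal sketch (assuming $N=100$ and $c=1$, i.e. $M=N^2$, values chosen here purely for illustration) evaluates Eq.~(\ref{NPairProb1}) exactly and compares it with the Poissonian values:

```python
from fractions import Fraction
from math import comb, exp

def multiset(M, N):
    """Multiset coefficient: number of ways to place N bosons in M modes."""
    return comb(M + N - 1, N)

def p_pair(N, M, k):
    """Eq. (NPairProb1): probability of exactly k doubly occupied sites
    (and no higher occupancies), for N atoms in M/2 double-well sites."""
    if 2*k > N:
        return 0.0
    num = 3**k * comb(M//2, k) * 2**(N - 2*k) * comb(M//2 - k, N - 2*k)
    return float(Fraction(num, multiset(M, N)))

N = 100
M = N**2                               # c = 1, so lambda = 3/2
print(p_pair(N, M, 0), exp(-1.5))      # collision-free probability
print(p_pair(N, M, 1), 1.5*exp(-1.5))  # one pair
```

Already at $N=100$ the exact values agree with the Poissonian ones to about one percent, and the total weight $\sum_{k_2}P_\text{pair}$ falls slightly below one, reflecting the vanishing probability of triply (or higher) occupied sites.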
\section{Derivation of average values of $V$ and $V^2$}\label{App:AvVs}
In the following we present a derivation of the average value of $V$ and $V^2$ evaluated in a uniform state, $|\psi\rangle_u={D}^{-1/2}\sum_{d=1}^D|d\rangle $, where $D=\big(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\big)$. It may be useful to recall the form of $V$,
\begin{equation}
V=\frac{\Gamma_\textrm{ tb}}{4}\sum_{m=1}^M\hat{n}_m(\hat{n}_m-1)
+\frac{\Gamma_\textrm{ tb}}{2}\sum_{s=1}^{M/2}\hat{n}_{2s-1}\hat{n}_{2s}.
\end{equation}
Then, $\langle V\rangle_u$ is
\begin{equation}
\langle V\rangle_u=\frac{\Gamma_\textrm{ tb}}{4}M\langle \hat{n}_m(\hat{n}_m-1)\rangle_u
+\frac{\Gamma_\textrm{ tb}}{2}\frac{M}{2}\langle\hat{n}_{2s-1}\hat{n}_{2s}\rangle_u,
\end{equation}
where we used that, because of the symmetry of the uniform state, $\langle\sum_{m=1}^M \hat{A}_m\rangle_u=M\langle \hat{A}_m\rangle_u$.
To evaluate $\langle \hat{n}_m\rangle_u$, we can count the number of states that have $k$ bosons in mode $m$. This is just the number of configurations in which one can put the remaining $N-k$ particles in the other $M-1$ modes, which is given by $\big(\hspace{-2.5pt}\binom{M-1}{N-k} \hspace{-2.5pt}\big)$. $\langle \hat{n}_m\rangle_u$ is then given by $\sum_{k=0}^Nk p_k$, where $p_k=\big(\hspace{-2.5pt}\binom{M-1}{N-k} \hspace{-2.5pt}\big)/\big(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\big)$ is the probability of having $k$ particles in mode $m$. In the same way, $\langle \hat{n}_m^2\rangle_u$ is given by $\sum_{k=0}^Nk^2 p_k$. To evaluate $\langle \hat{n}_m\hat{n}_{m'}\rangle_u$, one needs the probability of having $k$ particles in mode $m$ while having $k'$ particles in mode $m'$, which is given by $p_{k,k'}=\big(\hspace{-2.5pt}\binom{M-2}{N-k-k'} \hspace{-2.5pt}\big)/\big(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\big)$. $\langle \hat{n}_m\hat{n}_{m'}\rangle_u$ is then given by $\sum_{k=0}^N\sum_{k'=0}^{N-k}kk' p_{k,k'}$. Using all this,
\begin{equation}\label{avVsimple}
\langle V\rangle_u=\frac{\Gamma_\textrm{ tb}}{4}M \sum_{k=0}^Np_k k(k-1)
+\frac{\Gamma_\textrm{ tb}}{2}\frac{M}{2} \sum_{k=0}^N\sum_{k'=0}^{N-k}p_{k,k'}kk' .
\end{equation}
To make the calculation of the summations in Eq.~(\ref{avVsimple}) easier, we can make use of
\begin{equation}\label{BirthdayApprox}
\bigg(\hspace{-2.5pt}\binom{M}{N-k} \hspace{-2.5pt}\bigg)/\bigg(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\bigg)\sim \frac{N^k}{(M+N)^k}
\end{equation}
when $k\ll N,M$~\cite{Arkhipov12}. Then, $p_k\approx a_1\lambda_1^k$ and $p_{k,k'}\approx a_2\lambda_2^{(k+k')}$, where $a_j=\prod_{i=1}^j\frac{M-i}{M+N-i}$ and $\lambda_j=\frac{N}{M+N-j}$. The summations can be then reduced to geometric series, for example,
\begin{equation}
\sum^N_{k=0} k(k-1) p_k=a_1\lambda_1^2\frac{\partial^2}{\partial \lambda_1^2}\sum_{k=0}^N \lambda_1^k\approx a_1\lambda_1^2 \frac{\partial^2}{\partial \lambda_1^2}\frac{1}{1-\lambda_1}=\frac{2a_1\lambda_1^2}{(1-\lambda_1)^3}.
\end{equation}
Now, assuming that $M=cN^2$, one can calculate the limit at large $N$ of $2Ma_1\lambda_1^2/(1-\lambda_1)^3$, which gives $2/c$. The rest of the summations are done using Wolfram Mathematica, obtaining $\langle V\rangle_u=3\Gamma_\textrm{ tb}/(4c)$.
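This average can be cross-checked by evaluating the occupancy sums exactly at finite $N$; a minimal sketch (assuming $c=1$ and the illustrative value $N=60$, for which $\langle V\rangle_u/\Gamma_\textrm{ tb}$ is already within a few percent of the asymptotic value $3/(4c)=3/4$):

```python
from fractions import Fraction
from math import comb

def multiset(M, N):
    """Multiset coefficient: N bosons in M modes."""
    return comb(M + N - 1, N)

def avg_V(N, M):
    """Exact <V>_u / Gamma_tb = (M/4) <n(n-1)>_u + (M/4) <n n'>_u,
    with p_k and p_{k,k'} evaluated from the uniform state."""
    D = multiset(M, N)
    single = sum(k*(k - 1)*multiset(M - 1, N - k) for k in range(2, N + 1))
    pair = sum(k*kp*multiset(M - 2, N - k - kp)
               for k in range(1, N + 1) for kp in range(1, N - k + 1))
    return M/4.0*float(Fraction(single + pair, D))

N = 60
M = N**2                     # c = 1
print(avg_V(N, M))           # tends to 3/(4c) = 0.75 for large N
```

The same exact counting also confirms that $p_k$ is properly normalized, i.e. $\sum_{k}\big(\hspace{-2.5pt}\binom{M-1}{N-k}\hspace{-2.5pt}\big)=\big(\hspace{-2.5pt}\binom{M}{N}\hspace{-2.5pt}\big)$.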
To calculate $\langle V^2\rangle_u$, the same approach is followed. First, we write $V^2$,
\begin{eqnarray}
V^2&=&\frac{\Gamma_\textrm{ tb}^2}{16}\sum_{m=1}^M\sum_{m'=1}^M\hat{n}_m(\hat{n}_m-1)\hat{n}_{m'}(\hat{n}_{m'}-1) \nonumber\\
&+&\frac{\Gamma_\textrm{ tb}^2}{4}\sum_{s=1}^{M/2}\sum_{s'=1}^{M/2}\hat{n}_{2s-1}\hat{n}_{2s}\hat{n}_{2s'-1}\hat{n}_{2s'} \\
&+&\frac{\Gamma_\textrm{ tb}^2}{4}\sum_{m=1}^M\sum_{s=1}^{M/2}\hat{n}_m(\hat{n}_m-1)\hat{n}_{2s-1}\hat{n}_{2s}.\nonumber
\end{eqnarray}
The first term can be written as
\begin{eqnarray}
&&\langle \sum_{m=1}^M\sum_{m'=1}^M\hat{n}_m(\hat{n}_m-1)\hat{n}_{m'}(\hat{n}_{m'}-1) \rangle_u \nonumber \\
&=&M\langle \hat{n}_m^2(\hat{n}_m-1)^2\rangle_u +M(M-1)\langle \hat{n}_m(\hat{n}_m-1) \hat{n}_{m'}(\hat{n}_{m'}-1)\rangle_u \\
&=&M\sum_{k=0}^N p_k k^2 (k-1)^2 + M(M-1)\sum_{k=0}^N \sum_{k'=0}^{N-k}p_{k,k'}k(k-1)k'(k'-1). \nonumber
\end{eqnarray}
Similarly for the second term,
\begin{eqnarray}
&&\langle \sum_{s=1}^{M/2}\sum_{s'=1}^{M/2}\hat{n}_{2s-1}\hat{n}_{2s}\hat{n}_{2s'-1}\hat{n}_{2s'} \rangle_u \nonumber \\
&=&\frac{M}{2}\langle \hat{n}_{2s-1}^2\hat{n}_{2s}^2 \rangle_u +\frac{M}{2}(\frac{M}{2}-1)\langle \hat{n}_{2s-1}\hat{n}_{2s}\hat{n}_{2s'-1}\hat{n}_{2s'} \rangle_u \\
&=&\frac{M}{2}\sum_{k=0}^N\sum_{k'=0}^{N-k}p_{k,k'}k^2k'^2 +\frac{M}{2}(\frac{M}{2}-1)\sum_{k=0}^N\sum_{k'=0}^{N'}\sum_{k''=0}^{N''}\sum_{k'''=0}^{N'''}p_{k,k',k'',k'''}kk'k''k''',\nonumber
\end{eqnarray}
where $N'=N-k$, $N''=N-k-k'$, $N'''=N-k-k'-k''$, and $p_{k,k',k'',k'''}=\big(\hspace{-2.5pt}\binom{M-4}{N-k-k'-k''-k'''} \hspace{-2.5pt}\big)/\big(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\big)$, which can be approximated to $p_{k,k',k'',k'''}\approx a_4\lambda_4^{(k+k'+k''+k''')}$. The third term is:
\begin{eqnarray}
&&\langle \sum_{m=1}^M\sum_{s=1}^{M/2}\hat{n}_m(\hat{n}_m-1) \hat{n}_{2s-1}\hat{n}_{2s}\rangle_u \nonumber \\
&=&M\langle \hat{n}_m^2(\hat{n}_m-1)\hat{n}_{m'}\rangle_u +\frac{M}{2}(M-1)\langle \hat{n}_m(\hat{n}_m-1) \hat{n}_{m'}\hat{n}_{m''}\rangle_u \\
&=&M\sum_{k=0}^N\sum_{k'=0}^{N'} p_{k,k'} k^2 (k-1)k' + \frac{M}{2}(M-1)\sum_{k=0}^N \sum_{k'=0}^{N'}\sum_{k''=0}^{N''} p_{k,k',k''}k(k-1)k'k'', \nonumber
\end{eqnarray}
where $p_{k,k',k''}=\big(\hspace{-2.5pt}\binom{M-3}{N-k-k'-k''} \hspace{-2.5pt}\big)/\big(\hspace{-2.5pt}\binom{M}{N} \hspace{-2.5pt}\big)$, which can be approximated as $p_{k,k',k''}\approx a_3\lambda_3^{(k+k'+k'')}$. Carrying out these summations using Wolfram Mathematica, assuming $M=cN^2$, and calculating the limit at large $N$ yields $\langle V^2\rangle_u=\big(3/(2c) + 9/(4c^2)\big)\Gamma^2_\textrm{ tb}/4$.
\begin{figure}[t]
\centering\includegraphics*[width=1\columnwidth]{figures/Figures_D/MathematicaCode.pdf}
\caption{Wolfram Mathematica code to calculate the summations.\label{mathematica_code}}
\end{figure}
\chapter{Further Considerations on Quantum Sensing with NV Centers}
\label{appendix:appendix_e}
\section{Finding $\Omega(t)$ from $F(t)$}\label{FindOmega}
The term representing the MW driving in Eq.~(\ref{simulations}) is $\frac{\Omega(t)}{2} (|1\rangle\langle 0| e^{i\phi} +\textrm{ H.c.})$, and the associated propagator for, e.g., the $m$th $\pi$-pulse is
\begin{equation}
U_t = \exp{[-i\int_{t_m}^{t_m+t_{\pi}} \frac{\Omega(s)}{2} (|1\rangle\langle 0| e^{i\phi} +\textrm{ H.c.}) \ ds]}.
\end{equation}
During the $m$th $\pi$-pulse, i.e. in a certain time between $t_m$ and $t_m + t_{\pi}$, $U_t $ has the following effect on the electron spin $\sigma_z$ operator (in the following, $\sigma_\phi = |1\rangle\langle 0| e^{i\phi} +\textrm{ H.c.}$)
\begin{eqnarray}
e^{i\int_{t_m}^{t_m+t} \frac{\Omega(s)}{2} \sigma_\phi \ ds} \sigma_z e^{-i\int_{t_m}^{t_m+t} \frac{\Omega(s)}{2} \sigma_\phi \ ds} = e^{\big(i\int_{t_m}^{t_m+t} \Omega(s) \ ds \big) \sigma_\phi } \sigma_z \nonumber \\ =\cos{\bigg(\int_{t_m}^{t_m+t} \Omega(s) \ ds \bigg)} \sigma_z + i \sin{\bigg(\int_{t_m}^{t_m+t} \Omega(s) \ ds \bigg)} \sigma_{\phi}\sigma_z.
\end{eqnarray}
In this manner, it is clear that $F(t) = \cos{\big(\int_{t_m}^{t_m+t} \Omega(s) \ ds \big)}$. The other spin component, i.e. the one going with $ \sin{\big(\int_{t_m}^{t_m+t} \Omega(s) \ ds \big)}$, does not participate in the joint NV-nucleus dynamics for sequences with alternating pulses~\cite{Lang17} such as the XY8 $\equiv$ XYXYYXYX pulse sequence we are using in chapter~\ref{chapter:chapter_4}. A valid argument to neglect this term is that its period is twice the period of the pulse sequence, i.e. $2T$, and, thus, it is kept off resonance. Now, assuming that $F(t)$ and $\Omega(t)$ are differentiable, one can invert the expression $F(t) = \cos{\big(\int_{t_m}^{t_m+t} \Omega(s) \ ds \big)}$ and find
\begin{equation}
\Omega(t) = \frac{\partial}{\partial t} \arccos[F(t)]=-\frac{\dot{F}(t)}{\sqrt{1-F(t)^2}}.
\end{equation}
The latter corresponds to Eq.~(\ref{modulatedOmega}).
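As a consistency check of this inversion, consider the top-hat modulation $F(t)=\cos{[\pi(t-t_m)/t_\pi]}$ used in section~\ref{calcth}; Eq.~(\ref{modulatedOmega}) must then return the constant Rabi frequency $\Omega=\pi/t_\pi$ of an ordinary $\pi$-pulse. A minimal numerical sketch (with the illustrative choice $t_m=0$, $t_\pi=1$):

```python
import math

t_m, t_pi = 0.0, 1.0
F = lambda t: math.cos(math.pi*(t - t_m)/t_pi)   # top-hat modulation

dt = 1e-6
omegas = []
for i in range(1, 1000):          # sample the open interval (0, t_pi)
    t = i*1e-3
    dF = (F(t + dt) - F(t - dt))/(2.0*dt)        # numerical dF/dt
    omegas.append(-dF/math.sqrt(1.0 - F(t)**2))  # Omega(t) from F(t)

print(min(omegas), max(omegas), math.pi/t_pi)    # all three agree
```

The recovered $\Omega(t)$ is flat over the whole pulse, so its time integral over $[0,t_\pi]$ is the expected pulse area $\pi$.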
\section{Calculation of $f_l$ coefficients}
\subsection{Coefficients for extended pulses}\label{calcext}
The analytical expression for the coefficients $f_l$ is given by
\begin{equation}
f_{l}=\frac{2}{T}\int_{0}^{T} F(s) \cos{\Big(\frac{2 \pi l s}{T}\Big)} \ ds,
\end{equation}
where $T=2 \pi /\omega_\textrm{ M}$. With a rescaling of the integration variable given by $s=xT/2$, this is rewritten as
\begin{equation}\label{equation22}
f_{l}=\int_{0}^{2} F(x) \cos{(\pi l x)} \ dx.
\end{equation}
Under the shift $x\to x+1$, the function inside the integral either changes sign or stays invariant, depending on $l$ being even or odd. This can be easily demonstrated by using $F(x+1)=-F(x)$ and $\cos{[\pi l (x+1)]}=\cos{(\pi l)}\cos{(\pi l x)}$. Thus, if $l$ is even, the contributions from $[0,1]$ and $[1,2]$ cancel and the integral vanishes. In any case, one can work out a general expression for Eq.~(\ref{equation22}). First, we can divide the integral in two parts,
\begin{equation}
f_{l}=\int_{0}^{1} F(x) \cos{(\pi l x)} \ dx +\int_{1}^{2} F(x) \cos{(\pi l x)} \ dx,
\end{equation}
and substitute $x$ for $x+1$ in the second integral. Using the symmetry properties specified above, the equation reduces to
\begin{equation}
f_{l}=(1-\cos{(\pi l)})\int_{0}^{1} F(x) \cos{(\pi l x)} \ dx.
\end{equation}
Now, from $x=0$ to $1$, $F(x)$ can be divided in three parts defined by $\tau_m\equiv 2 t_{m}/T$ and $1-\tau_m$, the times corresponding to the beginning and the end of the pulse,
\begin{equation}\label{integrals}
\int_{0}^{\tau_m} F(x) \cos{(\pi l x)} \ dx +\int_{\tau_m}^{1-\tau_m} F(x) \cos{(\pi l x)} \ dx+\int_{1-\tau_m}^{1} F(x) \cos{(\pi l x)} \ dx,
\end{equation}
and the integral in the middle is zero for the extended pulses. This leaves us with the first and third integrals for which $F(x)$ is $1$ and $-1$ respectively, obtaining
\begin{equation}
f_{l}^\textrm{ m}=(1-\cos{(\pi l)})\Bigg\{\int_{0}^{\tau_m} \cos{(\pi l x)} \ dx -\int_{1-\tau_m}^{1} \cos{(\pi l x)} \ dx\Bigg\},
\end{equation}
that leads to
\begin{equation}
f_{l}^\textrm{ m}=\frac{1}{\pi l }(1-\cos{(\pi l)})\Bigg\{\sin{(\pi l \tau_m)} + \sin{(\pi l (1-\tau_m))} \Bigg\}.
\end{equation}
Using $\sin{[\pi l (1-\tau_m)]}=-\sin{(\pi l \tau_m)}\cos(\pi l)$ and $\sin^2{\theta}=(1-\cos{(2\theta)})/2$, the expression for $f_{l}^\textrm{ m}$ reduces to
\begin{equation}
f_{l}^\textrm{ m}=\frac{4}{\pi l }\sin^4{(\pi l/2)}\sin{(\pi l \tau_m)}.
\end{equation}
Now, by using the relation $T=4t_m+2t_\pi$, $f_{l}^\textrm{ m}$ becomes
\begin{equation}\label{eqbat}
f_{l}^\textrm{ m}=\frac{4}{\pi l }\sin^4{(\pi l/2)}\sin{\Big(\pi l \Big(\frac{1}{2}+\frac{t_\pi}{T}\Big)\Big)},
\end{equation}
where $t_{\pi}$ is the duration of a $\pi$-pulse. Eq.~(\ref{eqbat}) is equivalent to Eq.~(\ref{modulatedf}) in the main text. To prove that, one may use the trigonometric identity $\sin(\theta+\pi l /2)=\sin{(\theta)}\cos{(\pi l /2)} +\cos{(\theta)}\sin{(\pi l /2)}$ which leads us to
\begin{equation}\label{eqlast}
f_{l}^\textrm{ m}=\frac{4}{\pi l }\cos{\Bigg(\pi \frac{t_{\pi}}{T/l}\Bigg)}\sin{(\pi l /2)},
\end{equation}
as $\sin^4{(\pi l /2)}\cos{(\pi l /2)}=0$ and $\sin^5{(\pi l /2)}=\sin{(\pi l /2)}$.
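Eq.~(\ref{eqlast}) can be checked by numerically integrating Eq.~(\ref{equation22}) for the piecewise-constant extended-pulse modulation; a minimal sketch (using $\tau_\pi\equiv 2t_\pi/T$, so that $T=4t_m+2t_\pi$ gives $\tau_m=(1-\tau_\pi)/2$, and an illustrative value $\tau_\pi=0.2$):

```python
import math

def F_ext(x, tau_pi):
    """Extended-pulse modulation on one period 0 <= x < 2:
    +1 before the pulse, 0 during it, -1 after, and F(x+1) = -F(x)."""
    if x >= 1.0:
        return -F_ext(x - 1.0, tau_pi)
    tau_m = (1.0 - tau_pi)/2.0
    if x < tau_m:
        return 1.0
    if x < 1.0 - tau_m:
        return 0.0
    return -1.0

def f_l_numeric(l, tau_pi, n=100000):
    """Midpoint rule for f_l = int_0^2 F(x) cos(pi l x) dx."""
    h = 2.0/n
    return h*sum(F_ext((i + 0.5)*h, tau_pi)*math.cos(math.pi*l*(i + 0.5)*h)
                 for i in range(n))

def f_l_formula(l, tau_pi):
    """Eq. (eqlast), with t_pi/T = tau_pi/2."""
    return 4.0/(math.pi*l)*math.cos(math.pi*l*tau_pi/2.0)*math.sin(math.pi*l/2.0)

for l in (1, 2, 3):
    print(l, f_l_numeric(l, 0.2), f_l_formula(l, 0.2))
```

The even harmonic $l=2$ comes out compatible with zero, as required by the symmetry argument above.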
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{figures/Figures_E/Subfig.pdf}
\caption{Plot of $F(x)$ and $\cos{(\pi l x)}$ (where $l=13$) functions between $x=0$ and $x=2$, corresponding to $t=0$ and $t=T$ respectively. }
\label{Subfig}
\end{figure}
\subsection{Coefficients for top-hat pulses}\label{calcth}
To calculate the $f_l$ coefficients in the case of top-hat pulses, we just need to add the contribution of the second integral in Eq.~(\ref{integrals}), which is not zero for top-hat pulses. The value of $F(s)$ during the pulse is $F(s)=\cos{[\pi(s-t_m)/t_\pi]}$, where $t_{p}=t_m+t_\pi/2$ is the center of the pulse. With the rescaling of the integration variable introduced in the previous section, this is rewritten as
\begin{equation}
F(s)=\cos{[\pi(x-\tau_m)/\tau_\pi]},
\end{equation}
where $\tau_\pi=2 t_\pi /T $. So, we need to solve the following integral
\begin{equation}
\int_{\tau_m}^{1-\tau_m} F(x) \cos{(\pi l x)} \ dx=\int_{\tau_m}^{1-\tau_m} \cos{[\pi(x-\tau_m)/\tau_\pi]}\cos{(\pi l x)} \ dx
\end{equation}
which is not zero. To solve the integral, we shift the origin to the pulse centre $\tau_p=\tau_m+\tau_\pi/2=1/2$ via the change of variable $x=y+1/2$. The integral is then centred at zero and reads
\begin{eqnarray}
\int_{-\tau_{\pi}/2}^{\tau_{\pi}/2} \cos{[\pi y/\tau_{\pi}+\pi/2]} \cos{[\pi l (y+1/2)]} \ dy\nonumber\\=-\int_{-\tau_{\pi}/2}^{\tau_{\pi}/2} \sin{(\pi y/\tau_{\pi})} \cos{[\pi l (y+1/2)]} \ dy,
\end{eqnarray}
which using $\cos[\pi l (y+1/2)]=\cos{(\pi l y)}\cos{(\pi l /2)} - \sin{(\pi l y)}\sin{(\pi l /2)}$ becomes
\begin{eqnarray}
\sin{(\pi l /2)}\int_{-\tau_{\pi}/2}^{\tau_{\pi}/2} \sin{(\pi y/\tau_{\pi})} \sin{(\pi l y)} \ dy \nonumber\\- \cos{(\pi l /2)}\int_{-\tau_{\pi}/2}^{\tau_{\pi}/2} \sin{(\pi y/\tau_{\pi})} \cos{(\pi l y)} \ dy.
\end{eqnarray}
The second integral vanishes for symmetry reasons, i.e., $\int_{-a}^{a}F(x)dx=0$ if $F(-x)=-F(x)$. By symmetry again, as its integrand is even, the first integral equals
\begin{equation}
2\sin{(\pi l /2)}\int_{0}^{\tau_{\pi}/2} \sin{(\pi y/\tau_{\pi})} \sin{(\pi l y)} \ dy,
\end{equation}
which using trigonometric identities reads
\begin{equation}
\sin{(\pi l /2)}\Big\{\int_{0}^{\tau_{\pi}/2} \cos{[\pi y(l-1/\tau_{\pi})]} \ dy -\int_{0}^{\tau_{\pi}/2} \cos{[\pi y(l+1/\tau_{\pi})]} \ dy\Big\}.
\end{equation}
Solving the integrals one gets
\begin{equation}
\frac{-1}{\pi}\sin{(\pi l /2)}\cos{(\pi l\tau_{\pi}/2)}\Big\{ \frac{1}{l-1/\tau_{\pi}} + \frac{1}{l+1/\tau_{\pi}} \Big\},
\end{equation}
which is simplified to
\begin{equation}
\frac{2l\tau_\pi^2}{\pi(1-l^2\tau_{\pi}^2)}\sin{(\pi l /2)}\cos{(\pi l\tau_{\pi}/2)}.
\end{equation}
It is straightforward to prove that the sum of the three integrals in Eq.~(\ref{integrals}) gives
\begin{equation}
f_{l}^\textrm{ th}=\frac{4 \sin{(\pi l /2)}\cos{(\pi l t_{\pi}/T)}}{\pi l(1-4l^2t_{\pi}^2/T^2)},
\end{equation}
which corresponds to the expression given in the main text.
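The closed form for $f_{l}^\textrm{ th}$ can be cross-checked numerically. The snippet below assumes the normalisation $f_l=(1-\cos{(\pi l)})\int_0^1F(x)\cos{(\pi l x)}\,dx$, which reproduces the modulated-pulse limit $t_\pi\rightarrow0$ of the previous section, and compares a direct quadrature with the formula:

```python
import numpy as np

def F_tophat(x, tau_pi):
    # modulation function for a top-hat pi pulse centred at x = 1/2:
    # +1 before the pulse, a cosine ramp during it, -1 after it
    tau_m = (1 - tau_pi)/2
    return np.where(x < tau_m, 1.0,
                    np.where(x <= 1 - tau_m,
                             np.cos(np.pi*(x - tau_m)/tau_pi), -1.0))

def f_th_numeric(l, tau_pi, n=200000):
    # midpoint-rule evaluation of (1 - cos(pi l)) * int_0^1 F(x) cos(pi l x) dx
    x = (np.arange(n) + 0.5)/n
    return (1 - np.cos(np.pi*l))*np.sum(F_tophat(x, tau_pi)*np.cos(np.pi*l*x))/n

def f_th_formula(l, tau_pi):
    # closed form for f_l^th with tau_pi = 2 t_pi / T
    return 4*np.sin(np.pi*l/2)*np.cos(np.pi*l*tau_pi/2)/(np.pi*l*(1 - l**2*tau_pi**2))

for l in (1, 3, 5, 7):
    assert abs(f_th_numeric(l, 0.1) - f_th_formula(l, 0.1)) < 1e-6
```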
\section{Energy delivery}\label{energydelivery}
The Poynting vector, which describes the energy flux of an electromagnetic wave, is given by
\begin{equation}
\vec{P}=\frac{1}{\mu_{0}} \vec{E} \times \vec{B},
\end{equation}
where $\mu_{0}$ is the vacuum permeability, and $\vec{E}$ and $\vec{B}$ are the electric and magnetic field vectors at the region of interest, i.e. the NV center. The latter, being a nanoscale object, is sufficiently small compared with the wavelength of the MW radiation that a plane-wave description of the radiation can be assumed, so the magnetic field can be written as
\begin{equation}
\vec{B}=\vec{B}_{0}(t)\cos{(\vec{k}\cdot \vec{x}-\omega t +\phi)},
\end{equation}
where $\vec{k}$ is the wavevector and $\omega$ the frequency of the microwave field. We also assume an extra time dependence $B_{0}(t)$ whose time scale is several orders of magnitude larger than the period $2\pi/\omega$. From Maxwell's equations in vacuum one finds that, for such a magnetic field, $\vec{k}\cdot \vec{B}=0$, $\vec{k}\cdot \vec{E}=0$, and $\vec{E}\cdot \vec{B}=0$. From the equation $\vec{\nabla}\times\vec{B}=\frac{1}{c^2}\partial \vec{E}/\partial t$, it follows that
\begin{equation}\label{electric}
\vec{E}=c^2\int\!dt \ (\vec{\nabla}\times\vec{B})=-c^2\int\! dt \ (\vec{k}\times\vec{B_0}(t)) \sin{(\vec{k}\cdot \vec{x}-\omega t +\phi)}.
\end{equation}
We choose $\vec{B}$ to be perpendicular to the NV axis ($z$ axis), specifically, along the $x$ axis. The control Hamiltonian is then
\begin{equation}
H_{c}(t)=-\gamma_e\vec{B}\cdot \vec{S}=\gamma_eB_{x}(t)S_x \cos{(\omega t - \phi)},
\end{equation}
where $\vec{S}$ corresponds to the spin of the NV center, $\gamma_e$ is the gyromagnetic ratio of the electron and $\vec{x}=0$ the position of the NV. To recover Eq.~(\ref{controlHamil}) of the main text, we require that $\sqrt{2} \Omega(t)=\gamma_e B_x(t)$. The magnetic field vector at $\vec{x}=0$ is then
\begin{equation}
\vec{B}(t)=\frac{\sqrt{2}\Omega(t)}{\gamma_e} \cos{(\omega t -\phi)} \hat{x}
\end{equation}
and the electric field is, from Eq.~(\ref{electric}),
\begin{equation}
\vec{E}(t)=\frac{\sqrt{2}\omega c}{\gamma_e}\int dt \Omega(t) \sin{(\omega t -\phi)} \ \hat{k}\times \hat{x},
\end{equation}
which, using the dispersion relation $\omega=c k$ that follows from the wave equation $\partial^2 \vec{E}/\partial t^2=c^2\nabla^2 \vec{E}$, can be rewritten as
\begin{equation}\label{elec2}
\vec{E}(t)=\frac{\sqrt{2}}{k\gamma_e} \frac{\partial}{\partial t}\Big[ \Omega(t) \sin{(\omega t -\phi)} \Big]\ \hat{x}\times \hat{k}.
\end{equation}
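For a constant envelope $\Omega$, Eqs.~(\ref{electric}) and (\ref{elec2}) agree up to the orientation flip $\hat{k}\times\hat{x}=-\hat{x}\times\hat{k}$. This sign bookkeeping can be verified symbolically; a minimal sketch (scalar amplitudes only, with the common factor $\gamma_e^{-1}$ dropped):

```python
import sympy as sp

t, w, phi, Omega, c, k = sp.symbols('t omega phi Omega c k', positive=True)

# scalar amplitude along k_hat x x_hat from the time integral of Eq. (electric),
# for constant Omega; the oscillatory antiderivative fixes the integration constant
E_integral = sp.sqrt(2)*w*c*sp.integrate(Omega*sp.sin(w*t - phi), t)

# scalar amplitude along x_hat x k_hat from Eq. (elec2)
E_deriv = sp.sqrt(2)/k*sp.diff(Omega*sp.sin(w*t - phi), t)

# the two differ only by the sign absorbed in k_hat x x_hat = -(x_hat x k_hat)
assert sp.simplify(E_integral.subs(w, c*k) + E_deriv.subs(w, c*k)) == 0
```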
\subsection{The case of top-hat $\pi$ pulses}
For top-hat pulses we have that $\partial\Omega(t)/\partial t=0$ during the pulse; thus, the energy delivered per unit area for top-hat pulses is
\begin{equation}
E^\textrm{ th}(t_{\pi})=\int_0^{t_\pi}dt |\vec{P}(t)|=\frac{c}{\mu_0}\frac{2}{\gamma_{e}^2}\int_0^{t_\pi}dt \ \Omega^2\cos^2(\omega t-\phi)
\end{equation}
which gives
\begin{equation}\label{eth}
E^\textrm{ th}(t_{\pi})=\frac{c}{\mu_0}\frac{\Omega^2}{\gamma_e^2} \Big\{t_\pi+\frac{1}{2\omega}\big[\sin{(2\omega t_\pi -2\phi)}+\sin{(2\phi)}\big]\Big\}.
\end{equation}
The oscillatory part of the formula is upper bounded by $\omega^{-1}$, which is several orders of magnitude smaller than $t_\pi$ and thus negligible. As $t_{\pi}=\pi/\Omega$, Eq.~(\ref{eth}) can be rewritten as
\begin{equation}\label{Stophat}
E^\textrm{ th}(t_{\pi})\approx \frac{\pi c}{\mu_0}\frac{ \Omega}{\gamma_e^2},
\end{equation}
meaning that the energy increases linearly with the Rabi frequency.
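The linear scaling of Eq.~(\ref{Stophat}) can be checked by direct numerical integration of the flux in Eq.~(\ref{eth}); a dimensionless sketch, with assumed values for $\Omega$ and $\omega$ and the prefactor $c/\mu_0\gamma_e^2$ dropped:

```python
import numpy as np

def E_tophat(Omega, omega, phi=0.3, n=200001):
    # midpoint-rule integral of 2*Omega^2*cos^2(omega t - phi) over a pi pulse
    # of duration t_pi = pi/Omega (prefactor c/(mu_0 gamma_e^2) dropped)
    t_pi = np.pi/Omega
    t = (np.arange(n) + 0.5)*t_pi/n
    return np.sum(2*Omega**2*np.cos(omega*t - phi)**2)*t_pi/n

# assumed values: 10 MHz Rabi frequency, 3 GHz carrier
Omega, omega = 2*np.pi*10e6, 2*np.pi*3e9
exact = np.pi*Omega   # Eq. (Stophat) without the c/(mu_0 gamma_e^2) prefactor
assert abs(E_tophat(Omega, omega) - exact)/exact < 1e-2
```

The residual deviation is of order $\Omega/\omega$, i.e. the oscillatory term neglected in Eq.~(\ref{Stophat}).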
\subsection{The case of extended $\pi$ pulses}
To study the case of an extended $\pi$ pulse, we need to calculate both terms in Eq.~(\ref{elec2}), which are nonzero in general. The complete expression is given by
\begin{equation}\label{Sextended}
E^\textrm{ ext}(t_{\pi})=\frac{c}{\mu_0}\frac{2}{\gamma_e^2}\int_{0}^{t_\pi}\! \!dt \ \Bigg[ \Omega^2(t)\cos^2{(\omega t -\phi)} +\frac{1}{\omega} \Omega(t)\frac{\partial\Omega(t)}{\partial t}\cos{(\omega t -\phi)} \sin{(\omega t -\phi)} \Bigg] .
\end{equation}
As a final comment, for all cases simulated in the main text we find that the second term on the right-hand side of Eq.~(\ref{Sextended}) is negligible, so that it can be written as
\begin{equation}
E^\textrm{ ext}(t_{\pi})\approx \frac{c}{\mu_0}\frac{2}{\gamma_e^2}\int_{0}^{t_\pi}\! \!dt \ \Bigg[ \Omega^2(t)\cos^2{(\omega t -\phi)} \Bigg] .
\end{equation}
\subsection{Equivalent top-hat Rabi frequency}
To calculate the constant Rabi frequency leading to top-hat pulses with the same energy as extended pulses, one has to set $E^\textrm{ th}(t_{\pi}) = E^\textrm{ ext}(t_{\pi})$ and solve for the constant $\Omega$. With Eqs.~(\ref{Stophat}, \ref{Sextended}) one can easily find that
\begin{equation}
\Omega=\frac{\mu_0 \gamma_e^2}{\pi c} E^\textrm{ ext}(t_{\pi}).
\end{equation}
\chapter{Further Considerations on the Selective Interactions of the QRM}
\label{appendix:appendix_c}
\section{Dyson series of the Rabi-Stark Hamiltonian}\label{app:QRSDyson}
The Rabi-Stark Hamiltonian as written in Eq.~(\ref{QRS1}) is
\begin{equation}\label{SQRS1}
H=\frac{\omega_0}{2}\sigma_z +\omega a^\dag a + \gamma a^\dag a \sigma_z + g(\sigma_+ +\sigma_-)(a+a^\dag)
\end{equation}
where $\sigma_z,\sigma_+,\sigma_-$ are operators of the two-level system and $a^\dagger$ and $a$ are infinite-dimensional creation and annihilation operators of the bosonic field. Using the ket-bra notation, the two-level matrices are $\sigma_+=|e \rangle\langle g|$, $\sigma_-=|g \rangle\langle e|$ and $\sigma_z=|e \rangle\langle e|-|g \rangle\langle g|$, where $|e\rangle$ and $|g\rangle$ are the excited and ground states of the two-level system, respectively. On the other hand, the bosonic operators can be written as
\begin{eqnarray}\label{SBosOp}
a^\dagger=\sum_{n=0}^{\infty}\sqrt{n+1}|n+1\rangle\langle n| \\
a=\sum_{n=0}^{\infty}\sqrt{n+1}|n\rangle\langle n+1|
\end{eqnarray}
where $|n\rangle$ is the $n$-th Fock state. With this notation, the Hamiltonian in Eq.~(\ref{SQRS1}) can be rewritten as
\begin{eqnarray}\label{SQRS2}
H=\sum_{n=0}^{\infty}\omega_n^e|e \rangle\langle e|\otimes |n \rangle\langle n|+\omega^g_{n}|g \rangle\langle g|\otimes |n \rangle\langle n|\nonumber\\+\Omega_n(|e \rangle\langle g|+\textrm{H.c.})\otimes(|n+1\rangle\langle n| +\textrm{H.c.})
\end{eqnarray}
where $\omega_n^e=(\omega +\gamma )n+\omega_0/2$, $\omega_n^g=(\omega -\gamma )n-\omega_0/2$ and $\Omega_n=g\sqrt{n+1}$. We can move to an interaction picture with respect to the diagonal part of Eq.~(\ref{SQRS2}), in which the non-diagonal elements rotate as
\begin{eqnarray}\label{SNonDiag2}
|e \rangle\langle g|\otimes |n+1 \rangle\langle n| &\rightarrow& |e \rangle\langle g| \otimes |n+1 \rangle\langle n| e^{i(\omega^e_{n+1}-\omega_n^g)t} \\
|g \rangle\langle e|\otimes |n+1 \rangle\langle n| &\rightarrow& |g \rangle\langle e| \otimes |n+1 \rangle\langle n| e^{-i(\omega^e_n-\omega_{n+1}^g)t} \\
|e \rangle\langle g|\otimes |n \rangle\langle n+1| &\rightarrow& |e \rangle\langle g| \otimes |n \rangle\langle n+1| e^{i(\omega^e_{n}-\omega_{n+1}^g)t} \\
|g \rangle\langle e|\otimes |n \rangle\langle n+1| &\rightarrow& |g \rangle\langle e| \otimes |n \rangle\langle n+1| e^{-i(\omega^e_{n+1}-\omega_{n}^g)t}
\end{eqnarray}
where $\delta^+_{n}=\omega_{n+1}^e-\omega_n^g=\omega+[\omega_0+\gamma(2n+1)]$ and $\delta^-_{n}=\omega_{n+1}^g-\omega_{n}^e=\omega-[\omega_0+\gamma(2n+1)]$. The Hamiltonian in the interaction picture can be then rewritten as
\begin{equation}\label{SQRSIntPic}
H_I(t)=\sum_{n=0}^{\infty}\Omega_n(\sigma_+ e^{i\delta^+_{n} t}+\sigma_- e^{i\delta^-_{n} t})\otimes |n+1\rangle\langle n| +\textrm{H.c.}
\end{equation}
which corresponds to Eq.~(\ref{QRSIntPic}).
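The matrix-element bookkeeping behind Eq.~(\ref{SQRS2}) can be checked numerically on a truncated Fock space; a minimal sketch with assumed parameter values:

```python
import numpy as np

N = 12                                    # Fock-space truncation (assumed)
w0, w, gam, g = 1.3, 1.0, 0.2, 0.05       # omega_0, omega, gamma, g (assumed values)

a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator, Eq. (SBosOp)
sz = np.diag([1.0, -1.0])                 # |e><e| - |g><g|
splus = np.array([[0.0, 1.0], [0.0, 0.0]])  # sigma_+ = |e><g|
I2, IN = np.eye(2), np.eye(N)

# Rabi-Stark Hamiltonian, Eq. (SQRS1), in the basis |e,n>, |g,n>
H = (w0/2)*np.kron(sz, IN) + w*np.kron(I2, a.T @ a) \
    + gam*np.kron(sz, a.T @ a) + g*np.kron(splus + splus.T, a + a.T)

# diagonal elements reproduce omega_n^e and omega_n^g of Eq. (SQRS2)
for n in range(N):
    assert np.isclose(H[n, n], (w + gam)*n + w0/2)           # <e,n|H|e,n>
    assert np.isclose(H[N + n, N + n], (w - gam)*n - w0/2)   # <g,n|H|g,n>

# off-diagonal couplings reproduce Omega_n = g*sqrt(n+1)
for n in range(N - 1):
    assert np.isclose(H[n + 1, N + n], g*np.sqrt(n + 1))     # <e,n+1|H|g,n>
```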
\subsection{Second-order Hamiltonian}\label{subapp:QRSSecondOrder}
The second order Hamiltonian that corresponds to Eq.~(\ref{SQRSIntPic}) is given by~\cite{Sakurai94}
\begin{equation}\label{S2Dyson}
H^{(2)}(t)=-i\int_0^t dt' H_I(t)H_I(t').
\end{equation}
We can write $H_{I}(t)$ as
\begin{equation}\label{SredHI}
H_I(t)=\sum_{n=0}^{\infty} \Omega_n \Big(S_{n}(t)|n+1\rangle\langle n| +S^\dagger_{n}(t)|n\rangle\langle n+1|\Big),
\end{equation}
and, then,
\begin{eqnarray}\label{S2orderQRS}
H^{(2)}(t)=-i\sum_{n,n'}\Omega_{n}\Omega_{n'}\Big(S_{n}(t)|n+1\rangle\langle n| +S^\dagger_{n}(t)|n\rangle\langle n+1|\Big)\nonumber \\ \times \int_0^tdt'\Big(S_{n'}(t')|n'+1\rangle\langle n'| +S^\dagger_{n'}(t')|n'\rangle\langle n'+1|\Big),
\end{eqnarray}
which gives $H^{(2)}=H_A^{(2)}+H_B^{(2)}$, where
\begin{eqnarray}\label{S2orderQRS_2}
H^{(2)}_A(t)=-i\sum_{n}\Omega_{n}^2\Big(S_{n}(t)\int_0^tdt'S^\dagger_{n}(t')\Big)|n+1\rangle\langle n+1| \nonumber\\ +\Omega^2_{n}\Big(S^\dagger_{n}(t)\int_0^tdt'S_{n}(t')\Big)|n\rangle\langle n|
\end{eqnarray}
gives diagonal elements and
\begin{eqnarray}\label{S2orderQRS_3}
H^{(2)}_B(t)=-i\sum_{n}\Omega_{n}\Omega_{n+1}\Big(S_{n+1}(t)\int_0^tdt'S_{n}(t')\Big)|n+2\rangle\langle n| \nonumber\\+\Omega_{n}\Omega_{n+1}\Big(S^\dagger_{n}(t)\int_0^tdt'S^\dagger_{n+1}(t')\Big)|n\rangle\langle n+2|
\end{eqnarray}
is related to two-photon processes. Calculating the two-level operator products we obtain
\begin{eqnarray}\label{S2orderSpinOp}
S_{n}(t)\int_0^tdt'S^\dagger_{n}(t')&=&\frac{i}{\delta^+_n}\sigma_+\sigma_- + \frac{i}{\delta^-_n}\sigma_-\sigma_+ \nonumber\\ &-&\frac{i}{\delta^+_n}\sigma_+\sigma_-e^{i\delta^+_n t} - \frac{i}{\delta^-_n}\sigma_-\sigma_+e^{i\delta^-_n t} \\
S^\dagger_{n}(t)\int_0^tdt'S_{n}(t')&=&-\frac{i}{\delta^+_n}\sigma_-\sigma_+ - \frac{i}{\delta^-_n}\sigma_+\sigma_- \nonumber\\ &+& \frac{i}{\delta^+_n}\sigma_-\sigma_+e^{-i\delta^+_n t} + \frac{i}{\delta^-_n}\sigma_+\sigma_-e^{-i\delta^-_n t} \\
S_{n+1}(t)\int_0^tdt'S_{n}(t')&=& -\frac{i}{\delta^-_{n}}\sigma_+\sigma_-(e^{i(\delta^+_{n+1} +\delta^-_{n})t}-e^{i\delta^+_{n+1}t}) \nonumber\\&-& \frac{i}{\delta^+_{n}}\sigma_-\sigma_+(e^{i(\delta^-_{n+1} +\delta^+_{n})t} -e^{i\delta^-_{n+1}t})\\
S^\dagger_{n}(t)\int_0^tdt'S^\dagger_{n+1}(t')&=& \frac{i}{\delta^-_{n+1}}\sigma_-\sigma_+(e^{-i(\delta^+_{n} +\delta^-_{n+1})t} -e^{-i\delta^+_{n}t}) \nonumber\\ &+& \frac{i}{\delta^+_{n+1}}\sigma_+\sigma_-(e^{-i(\delta^-_{n} +\delta^+_{n+1})t} -e^{i\delta^-_{n}t}).
\end{eqnarray}
We can ignore the terms oscillating with $\pm\delta^{\pm}_n$, as these frequencies correspond to resonances of the first-order Hamiltonian and one-photon processes. Keeping the other terms we have that
\begin{eqnarray}\label{S2orderQRS_4}
H^{(2)}_A\approx\sum_{n}\Omega_{n}^2\Big(\frac{\sigma_+\sigma_-}{\delta_n^+} +\frac{\sigma_-\sigma_+}{\delta_n^-}\Big)|n+1\rangle\langle n+1| -\Omega^2_{n}\Big(\frac{\sigma_-\sigma_+}{\delta^+_n} + \frac{\sigma_+\sigma_-}{\delta^-_n}\Big)|n\rangle\langle n|
\end{eqnarray}
and
\begin{eqnarray}\label{S2orderQRS_5}
H^{(2)}_B(t)\approx \sum_{n}-\Omega_{n}\Omega_{n+1}\Big(\frac{\sigma_+\sigma_-}{\delta^-_{n}}e^{i(\delta^+_{n+1} +\delta^-_{n})t} + \frac{\sigma_-\sigma_+}{\delta^+_{n}}e^{i(\delta^-_{n+1} +\delta^+_{n})t} \Big)|n+2\rangle\langle n| \nonumber\\
+\Omega_{n}\Omega_{n+1}\Big(\frac{\sigma_-\sigma_+}{\delta^-_{n+1}}e^{-i(\delta^+_{n} +\delta^-_{n+1})t} + \frac{\sigma_+\sigma_-}{\delta^+_{n+1}}e^{-i(\delta^-_{n} +\delta^+_{n+1})t} \Big)|n\rangle\langle n+2|.
\end{eqnarray}
The two-photon transition terms in Eq.~(\ref{S2orderQRS_5}) oscillate with frequencies $\delta^+_n+\delta^-_{n+1}=2\omega-2\gamma$ and $\delta^+_{n+1}+\delta^-_{n}=2\omega+2\gamma$, which vanish only at the points of the spectral collapse. Thus, we do not expect to see two-photon transitions in the regime where the Hamiltonian is bounded from below. The terms in Eq.~(\ref{S2orderQRS_4}) induce an additional Stark shift that can displace the resonance conditions of the higher-order processes, as we will see later. The Hamiltonian can be simplified to
\begin{equation}\label{S2orderQRS_6}
H^{(2)}_A\approx\sum_{n}\Bigg\{\Big(\frac{\Omega_{n-1}^2}{\delta_{n-1}^+} -\frac{\Omega_{n}^2}{\delta_n^-}\Big)\sigma_+\sigma_- +\Big(\frac{\Omega_{n-1}^2}{\delta_{n-1}^-} -\frac{\Omega_{n}^2}{\delta_n^+}\Big)\sigma_-\sigma_+ \Bigg\}|n\rangle\langle n|.
\end{equation}
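The collapse frequencies quoted below Eq.~(\ref{S2orderQRS_5}) follow directly from the definitions of $\delta^\pm_n$; a short symbolic check:

```python
import sympy as sp

n, w, w0, gam = sp.symbols('n omega omega_0 gamma')

dp = lambda m: w + (w0 + gam*(2*m + 1))   # delta^+_n
dm = lambda m: w - (w0 + gam*(2*m + 1))   # delta^-_n

# two-photon oscillation frequencies quoted below Eq. (S2orderQRS_5)
assert sp.simplify(dp(n) + dm(n + 1) - (2*w - 2*gam)) == 0
assert sp.simplify(dp(n + 1) + dm(n) - (2*w + 2*gam)) == 0
```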
\subsection{Third-order Hamiltonian}\label{subapp:QRSThirdOrder}
The third order Hamiltonian is calculated by
\begin{equation}\label{S3Dyson}
H^{(3)}(t)=(-i)^2\int_0^t dt' \int_0^{t'}dt'' H_I(t)H_I(t')H_I(t'').
\end{equation}
Following the same notation of the previous section, the third order Hamiltonian is
\begin{eqnarray}\label{S3orderQRS}
H^{(3)}(t)=-\sum_{n, n', n''}\Omega_{n}\Omega_{n'}\Omega_{n''}\Big(S_{n}(t)|n+1\rangle\langle n| +\textrm{H.c.}\Big)\nonumber\\ \times\int_0^t dt' \Big(S_{n'}(t')|n'+1\rangle\langle n'| +\textrm{H.c.}\Big) \int_0^{t'}dt'' \Big(S_{n''}(t'')|n''+1\rangle\langle n''| +\textrm{H.c.}\Big).
\end{eqnarray}
If we focus on the three-photon resonances, these are contained in the following Hamiltonian
\begin{eqnarray}\label{S3orderQRS_A}
H_A^{(3)}(t)=-\sum_{n}\Omega_{n}\Omega_{n+1}\Omega_{n+2}S_{n+2}(t) \nonumber\\ \times\int_0^t dt' S_{n+1}(t') \Big(\int_0^{t'}dt'' S_{n}(t'')\Big)|n+3\rangle\langle n| +\textrm{H.c}.
\end{eqnarray}
The contribution of the two-level operators can be easily calculated by noticing that from
\begin{eqnarray}\label{S3order_TL1}
S_{n+2}(t) S_{n+1}(t') S_{n}(t'')&=&(\sigma_+ e^{i\delta^+_{n+2} t}+\sigma_- e^{i\delta^-_{n+2} t})\\&\times&(\sigma_+ e^{i\delta^+_{n+1} t'}+\sigma_- e^{i\delta^-_{n+1} t'}) (\sigma_+ e^{i\delta^+_{n} t''}+\sigma_- e^{i\delta^-_{n} t''}),\nonumber
\end{eqnarray}
only the following two terms are not zero (notice that $\sigma_\pm^2=0$)
\begin{equation}\label{S3order_TL2}
S_{n+2}(t) S_{n+1}(t') S_{n}(t'')=\sigma_+ e^{i\delta^+_{n+2} t} e^{i\delta^-_{n+1} t'}e^{i\delta^+_{n} t''} +\sigma_- e^{i\delta^-_{n+2} t} e^{i\delta^+_{n+1} t'}e^{i\delta^-_{n} t''}.
\end{equation}
After calculating the integral we obtain that the Hamiltonian is
\begin{eqnarray}\label{S3orderQRS_A1}
H_A^{(3)}(t)=\sum_{n=0}^{\infty}\Omega_{n}\Omega_{n+1}\Omega_{n+2}\Bigg\{\frac{1}{\delta_n^+(\delta_{n+1}^-+\delta_n^+)}\Big(e^{i\delta^{(3)}_{+n}t} - e^{i\delta_{n+2}^+t}\Big) \nonumber\\+ \frac{1}{\delta^+_n\delta_n^-}\Big(e^{i2(\omega+\gamma)t} - e^{i\delta_{n+2}^+t}\Big) \Bigg\}\sigma_+|n+3\rangle\langle n| +\textrm{H.c} \nonumber\\
+\Omega_{n}\Omega_{n+1}\Omega_{n+2}\Bigg\{\frac{1}{\delta_n^-(\delta_{n+1}^++\delta_n^-)}\Big(e^{i\delta^{(3)}_{-n}t} - e^{i\delta_{n+2}^-t}\Big) \nonumber
\\+ \frac{1}{\delta^-_n\delta_n^+}\Big(e^{i2(\omega-\gamma)t} - e^{i\delta_{n+2}^-t}\Big) \Bigg\}\sigma_-|n\rangle\langle n+3| +\textrm{H.c}
\end{eqnarray}
where $\delta^{(3)}_{n+}=\delta_{n+2}^++\delta_{n+1}^{-}+\delta_n^+=2\omega+\delta^+_{n+1}$ and $\delta^{(3)}_{n-}=\delta_{n+2}^-+\delta_{n+1}^{+}+\delta_n^-=2\omega+\delta^-_{n+1}$. Ignoring the resonances $\delta^+_{n+2}$ and $\delta^-_{n+2}$, which correspond to the first-order processes, and $2(\omega\pm\gamma)$, which is only zero at the point of the spectral collapse, we are left with
\begin{eqnarray}\label{S3orderQRS_A2}
H_A^{(3)}(t)=\sum_{n=0}^{\infty}\Omega_{n}\Omega_{n+1}\Omega_{n+2}\Bigg\{\frac{1}{2\delta_n^+(\omega-\gamma)}\sigma_+e^{i\delta^{(3)}_{n+}t} \nonumber\\+ \frac{1}{2\delta_n^-(\omega+\gamma)}\sigma_-e^{i\delta^{(3)}_{n-}t} \Bigg\}|n+3\rangle\langle n| +\textrm{H.c},
\end{eqnarray}
where the 3-photon JC and anti-JC resonances are easily identified as $\delta_{n+}^{(3)}=0$ and $\delta_{n-}^{(3)}=0$ respectively. In a simplified way, Eq.~(\ref{S3orderQRS_A2}) is rewritten as
\begin{equation}\label{S3orderQRS_A3}
H_A^{(3)}(t)=\sum_{n=0}^{\infty}(\Omega^{(3)}_{n+}\sigma_+e^{i\delta^{(3)}_{n+}t} + \Omega_{n-}^{(3)}\sigma_-e^{i\delta^{(3)}_{n-}t})|n+3\rangle\langle n| +\textrm{H.c},
\end{equation}
where $\Omega^{(3)}_{n+}=g^3\sqrt{(n+3)!/n!}/[2\delta^+_n(\omega-\gamma)]$ and $\Omega^{(3)}_{n-}=g^3\sqrt{(n+3)!/n!}/[2\delta^-_n(\omega+\gamma)]$, as shown in section~\ref{subsect:MultiPhoton}.
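The detuning identities used above can be verified symbolically; a short check of $\delta^{(3)}_{n\pm}=2\omega+\delta^\pm_{n+1}$:

```python
import sympy as sp

n, w, w0, gam = sp.symbols('n omega omega_0 gamma')

dp = lambda m: w + (w0 + gam*(2*m + 1))   # delta^+_n
dm = lambda m: w - (w0 + gam*(2*m + 1))   # delta^-_n

# three-photon detunings of the JC and anti-JC terms
d3p = dp(n + 2) + dm(n + 1) + dp(n)
d3m = dm(n + 2) + dp(n + 1) + dm(n)
assert sp.simplify(d3p - (2*w + dp(n + 1))) == 0
assert sp.simplify(d3m - (2*w + dm(n + 1))) == 0
```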
\section{Derivation of the Rabi-Stark Hamiltonian in trapped ions}\label{app:QRSTI}
In this section we will explain in detail how to go from Eq.~(\ref{Scheme2}) to Eq.~(\ref{TIQRS}) in the main text. Equation~(\ref{Scheme2}) reads
\begin{equation}\label{SScheme2}
H_{A}(t)=-i\frac{\eta\Omega_r}{2} a \sigma_+ e^{-i\delta_rt} -i\frac{\eta\Omega_b}{2} a^\dagger \sigma_+ e^{-i\delta_bt} + e^{i\phi_\textrm{S}}\hat{g}_{\textrm S}\sigma_++\textrm{H.c.}
\end{equation}
At this point the vibrational RWA has been applied; however, for a precise understanding of the effective dynamics we need to consider the following additional terms that have been neglected so far, so that the total Hamiltonian is $H=H_A+H_B$, where
\begin{eqnarray}\label{SSchemeRemaining}
H_{B}(t)&=&-\frac{\Omega_r}{2} \sigma_+ e^{-i(-\nu +\delta_r)t} -\frac{\Omega_b}{2} \sigma_+ e^{-i(\nu +\delta_b)t} \nonumber\\ &+& i\eta e^{i\phi_{\textrm S}}\frac{\Omega_{\textrm S}}{2}\sigma_+(ae^{-i\nu t}+a^\dagger e^{i\nu t}) +\textrm{H.c.}
\end{eqnarray}
The first two terms are the off-resonant carrier interactions of the red and blue drivings, which are usually neglected given that $\Omega_{r,b}\ll \nu$. The last term represents the coupling of the carrier driving to the motional mode. The latter does not commute with the first and second terms and, moreover, they all oscillate at similar frequencies (as $\delta_r,\delta_b\ll\nu$). The consequence is that, at second order, these terms produce interactions that cannot be neglected, as we will see in the following. The second-order effective Hamiltonian is
\begin{eqnarray}\label{SSecondOrder}
H^{(2)}(t)&=&-i\int_{0}^{t}H(t)H(t')dt' \nonumber\\&=&-i\int_{0}^{t}\Big(H_A(t)+H_B(t)\Big)\Big(H_A(t')+H_B(t')\Big)dt'.
\end{eqnarray}
We are only interested in terms arising from $\int_{0}^{t}H_B(t)H_B(t')dt'$ whose oscillating frequency is $\delta_r$ or $\delta_b$. These are
\begin{eqnarray}\label{SSecondOrderList}
\frac{\Omega_r}{2}\sigma_+e^{-i(-\nu+\delta_r)t}\int_{0}^t \!\!dt' i\eta\frac{\Omega_{\textrm S}}{2}e^{-i\phi_{\textrm S}}\sigma_- ae^{-i\nu t'} =-\eta\frac{\Omega_{\textrm S}\Omega_r}{4\nu}\sigma_+\sigma_-e^{-i\phi_{\textrm S}}e^{-i\delta_r t} a \\
\frac{\Omega_b}{2}\sigma_+e^{-i(\nu+\delta_b)t}\int_{0}^t \!\!dt' i\eta\frac{\Omega_{\textrm S}}{2}e^{-i\phi_{\textrm S}}\sigma_- a^\dagger e^{i\nu t'} =\eta\frac{\Omega_{\textrm S}\Omega_b}{4\nu}\sigma_+\sigma_-e^{-i\phi_{\textrm S}}e^{-i\delta_b t} a^\dagger \\
-i\eta\frac{\Omega_{\textrm S}}{2}e^{i\phi_{\textrm S}}\sigma_+a^\dagger e^{i\nu t}\int_0^t \!\!dt' \frac{\Omega_r}{2}\sigma_-e^{i(-\nu+\delta_r)t'}=\eta\frac{\Omega_{\textrm S}\Omega_r}{4(\nu-\delta_r)}\sigma_+\sigma_-e^{i\phi_{\textrm S}}e^{i\delta_r t} a^\dagger \\
-i\eta\frac{\Omega_{\textrm S}}{2}e^{i\phi_{\textrm S}}\sigma_+a e^{-i\nu t}\int_0^t \!\!dt'\frac{\Omega_b}{2}\sigma_-e^{i(\nu+\delta_b)t'}=-\eta\frac{\Omega_{\textrm S}\Omega_b}{4(\nu+\delta_b)}\sigma_+\sigma_-e^{i\phi_{\textrm S}}e^{i\delta_b t} a \\
-\frac{\Omega_r}{2}\sigma_-e^{i(-\nu+\delta_r)t}\int_{0}^t\!\!dt'i\eta\frac{\Omega_{\textrm S}}{2}e^{i\phi_{\textrm S}}\sigma_+ a^\dagger e^{i\nu t'} =-\eta\frac{\Omega_{\textrm S}\Omega_r}{4\nu}\sigma_-\sigma_+e^{i\phi_{\textrm S}}e^{i\delta_r t} a^\dagger \\
-\frac{\Omega_b}{2}\sigma_-e^{i(\nu+\delta_b)t}\int_{0}^t\!\!dt'i\eta\frac{\Omega_{\textrm S}}{2}e^{i\phi_{\textrm S}}\sigma_+ a e^{-i\nu t'} =\eta\frac{\Omega_{\textrm S}\Omega_b}{4\nu}\sigma_-\sigma_+e^{i\phi_{\textrm S}}e^{i\delta_b t} a \\
i\eta\frac{\Omega_{\textrm S}}{2}e^{-i\phi_{\textrm S}}\sigma_-a e^{-i\nu t}\int_0^t\!\!dt' \frac{\Omega_r}{2}\sigma_+e^{-i(-\nu+\delta_r)t'}=\eta\frac{\Omega_{\textrm S}\Omega_r}{4(\nu-\delta_r)}\sigma_-\sigma_+e^{-i\phi_{\textrm S}}e^{-i\delta_r t} a \\
i\eta\frac{\Omega_{\textrm S}}{2}e^{-i\phi_{\textrm S}}\sigma_-a^\dagger e^{i\nu t}\int_0^t\!\!dt' \frac{\Omega_b}{2}\sigma_+e^{-i(\nu+\delta_b)t'}=-\eta\frac{\Omega_{\textrm S}\Omega_b}{4(\nu+\delta_b)}\sigma_-\sigma_+e^{-i\phi_{\textrm S}}e^{-i\delta_b t} a^\dagger.
\end{eqnarray}
If we assume that $1/(\nu\pm\delta_j)\sim1/\nu $ and reorganise all the terms we get that the second-order effective Hamiltonian is
\begin{equation}\label{SSecondOrderEff}
H_B^{(2)}(t)\approx \eta\frac{\Omega_{\textrm S}\Omega_r}{4\nu} (ie^{-i\phi_{\textrm S}}e^{-i\delta_r t}a+\textrm{H.c.})\sigma_z- \eta\frac{\Omega_{\textrm S}\Omega_b}{4\nu} (ie^{-i\phi_{\textrm S}}e^{-i\delta_b t}a^\dagger+\textrm{H.c.})\sigma_z
\end{equation}
which can be incorporated into the first-order Hamiltonian in Eq.~(\ref{SScheme2}), giving
\begin{eqnarray}\label{SEffectiveQRS}
H_\textrm{ eff}(t)&=&-i(2g^{(1)}_r \sigma_+ -g^{(2)}_{r}e^{-i\phi_\textrm{S}}\sigma_z)ae^{-i\delta_rt} -i(2g_b^{(1)}\sigma_+ +g_{b}^{(2)}e^{-i\phi_\textrm{S}}\sigma_z) a^\dagger e^{-i\delta_bt} \nonumber\\ &+& \frac{\Omega_0}{2} \sigma_+e^{i\phi_{\textrm S}} -\eta^2\frac{\Omega_\textrm{S}}{2}a^\dagger a\sigma_+e^{i\phi_\textrm{S}} +\textrm{H.c.},
\end{eqnarray}
where $g_{r,b}^{(1)}=\eta\Omega_{r,b}/4$, $g_{r,b}^{(2)}=\eta\Omega_\textrm{S}\Omega_{r,b}/4\nu$ and $\Omega_0=\Omega_\textrm{S}(1-\eta^2/2)$. Now, if we assume $\phi_\textrm{S}=0$ or $\pi$ and move to a frame with respect to $\frac{\Omega_\textrm{DD}}{2}\sigma_{x}$, we obtain (below $\Omega\equiv\Omega_\textrm{DD}$ for clarity)
\begin{eqnarray}\label{SEffectiveQRS2}
H_\textrm{eff}^{I}&=& \frac{\omega_0^{R}}{2}\sigma_+ \mp \eta^2\frac{\Omega_\textrm{S}}{2}a^\dagger a\sigma_+ -i\Big(g^{(1)}_r (\sigma_x+i\sigma_ye^{- i\Omega t\sigma_x}) \mp g^{(2)}_{r}\sigma_ze^{- i\Omega t\sigma_x}\Big)ae^{-i\delta_rt} \nonumber\\
&-&i\Big(g_b^{(1)}(\sigma_x+i\sigma_ye^{- i\Omega t\sigma_x}) \pm g_{b}^{(2)}\sigma_ze^{- i\Omega t\sigma_x}\Big) a^\dagger e^{-i\delta_bt} +\textrm{H.c.},
\end{eqnarray}
where $\Omega_\textrm{DD}=\pm\Omega_0-\omega_0^\textrm{ R}$ for $\phi_\textrm{S}=0$ and $\phi_\textrm{S}=\pi$ respectively. Using that
\begin{eqnarray}\label{SRotatingPaulis}
\sigma_ye^{- i\Omega t\sigma_x}=\cos{(\Omega_\textrm{DD}t)} \sigma_y- \sin{(\Omega_\textrm{DD}t)}\sigma_z =\tilde{\sigma}_+e^{ i\Omega_\textrm{DD}t} + \tilde{\sigma}_-e^{- i\Omega_\textrm{DD}t}\\
\sigma_ze^{- i\Omega t\sigma_x}=\cos{(\Omega_\textrm{DD}t)} \sigma_z+ \sin{(\Omega_\textrm{DD}t)}\sigma_y = -i(\tilde{\sigma}_+e^{ i\Omega_\textrm{DD}t} - \tilde{\sigma}_-e^{- i\Omega_\textrm{DD}t}),
\end{eqnarray}
where $\tilde{\sigma}_{\pm}=(\sigma_y\pm i\sigma_z)/2$, and that the detunings are chosen to be $\delta_r=\Omega_\textrm{DD}+\omega^\textrm{ R}$ and $\delta_b=\Omega_\textrm{DD}-\omega^\textrm{ R}$, Eq.~(\ref{SEffectiveQRS2}) is rewritten as
\begin{eqnarray}\label{SEffectiveQRS3}
H_\textrm{eff}^{I}&=& \frac{\omega_0^\textrm{ R}}{2}\sigma_+ \mp \eta^2\frac{\Omega_{\textrm S}}{2}a^\dag a\sigma_+ \nonumber\\ &+& \Big(g^{(1)}_r (-i\sigma_x+[\tilde{\sigma}_+e^{i\Omega_\textrm{DD}t} +\textrm{H.c.}]) \pm g^{(2)}_{r}(\tilde{\sigma}_+e^{i\Omega_\textrm{DD}t} -\textrm{H.c.})\Big)ae^{- i\Omega_\textrm{DD} t} e^{- i\omega^\textrm{ R} t} \nonumber\\
&+& \Big(g_b^{(1)}(-i\sigma_x+[\tilde{\sigma}_+e^{ i\Omega_\textrm{DD}t} + \textrm{H.c.}]) \mp g_{b}^{(2)}(\tilde{\sigma}_+e^{ i\Omega_\textrm{DD}t} - \textrm{H.c.})\Big) a^\dagger e^{- i\Omega_\textrm{DD} t}e^{ i\omega^\textrm{ R} t} \nonumber\\ &+&\textrm{H.c.}
\end{eqnarray}
where all terms rotating with $\pm\Omega_\textrm{DD}$ or higher can be neglected by the RWA. After the approximation we have
\begin{eqnarray}\label{SEffectiveQRS4}
H_\textrm{eff}^{I}= \frac{\omega_0^\textrm{ R}}{2}\sigma_+\mp \eta^2\frac{\Omega_\textrm{S}}{2}a^\dag a\sigma_+ &+& (g^{(1)}_r \pm g^{(2)}_{r})\tilde{\sigma}_+ a e^{- i\omega^\textrm{ R} t} \nonumber\\ &+&(g_b^{(1)} \mp g_{b}^{(2)})\tilde{\sigma}_+ a^\dagger e^{ i\omega^\textrm{ R} t} +\textrm{H.c.}
\end{eqnarray}
which in a rotating frame with respect to $-\omega^\textrm{ R}a^\dagger a$ is
\begin{equation}\label{SEffectiveQRS5}
H_\textrm{eff}^{II}= \frac{\omega^\textrm{ R}_0}{2}\sigma_x +\omega^\textrm{ R} a^\dagger a + g_{\textrm{JC}}(\tilde{\sigma}_+ a+\tilde{\sigma}_- a^\dagger) +g_{\textrm{aJC}}(\tilde{\sigma}_+ a^\dagger+\tilde{\sigma}_- a) \mp \eta^2\frac{\Omega_\textrm{S}}{2}a^\dagger a\sigma_x
\end{equation}
where $g_{\textrm{JC}}=\eta\Omega_r(1\pm\Omega_{\textrm S}/\nu)/4$ and $g_{\textrm{aJC}}=\eta\Omega_b(1\mp\Omega_\textrm{S}/\nu)/4$, depending on the choice of the phase $\phi_\textrm{S}=0$ or $\phi_\textrm{S}=\pi$.
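As a consistency check of the mapping, note that for $g_{\textrm{JC}}=g_{\textrm{aJC}}$ Eq.~(\ref{SEffectiveQRS5}) differs from the Rabi-Stark Hamiltonian of Eq.~(\ref{SQRS1}) only by a constant spin rotation ($\sigma_x\rightarrow\sigma_z$, $\sigma_y\rightarrow\sigma_x$), so both truncated matrices share the same spectrum. A numerical sketch with assumed parameter values:

```python
import numpy as np

N = 20                                   # Fock truncation (assumed)
w0, wR, gt, gam = 0.9, 1.0, 0.12, 0.07   # assumed values; gam plays the role of -eta^2 Omega_S/2

a = np.diag(np.sqrt(np.arange(1, N)), 1)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])
I2, IN = np.eye(2), np.eye(N)
num, x = a.T @ a, a + a.T

# Eq. (SEffectiveQRS5) with g_JC = g_aJC = gt; note sigma~+ + sigma~- = sigma_y
H2 = (w0/2)*np.kron(sx, IN) + wR*np.kron(I2, num) \
     + gt*np.kron(sy, x) + gam*np.kron(sx, num)

# Rabi-Stark Hamiltonian, Eq. (SQRS1), with the same parameters
H1 = (w0/2)*np.kron(sz, IN) + wR*np.kron(I2, num) \
     + gam*np.kron(sz, num) + gt*np.kron(sx, x)

# a spin rotation acts only on the two-level part, so the spectra coincide exactly
assert np.allclose(np.linalg.eigvalsh(H1), np.linalg.eigvalsh(H2))
```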
\chapter{Further Considerations on the Pulsed DD Two-Qubit Gate}
\label{appendix:appendix_a}
\section{Initial approximations}\label{app:InitialApp}
\subsubsection{Two-Level Approximation}
In this section we numerically argue that the presence of the additional hyperfine levels of the $^{171}$Yb$^+$ ion, the fluctuations of the magnetic field, and the effect of fast rotating terms do not threaten the gate fidelities claimed in section~\ref{sect:1_PDD}. For numerical simplicity we have considered a single four-level system and computed the fidelity of the propagator after a sequence of 20 $\pi$-pulses is applied. We find that the error (infidelity) is on the order of $10^{-5}$, hence one order of magnitude below the gate errors reported in Table~\ref{table1}. Therefore, we conclude that the presence of the additional levels, counter-rotating terms, and the fluctuations of the magnetic field have a negligible effect on the final fidelity of the gate to the order claimed in section \ref{subsect:Tailored}. We now detail the parameters and conditions in our numerical simulations.
In the hyperfine ground state of the $^{171}$Yb$^+$ ion, transitions can be selected with the appropriate polarisation of the control fields. However, experimental imperfections might generate unwanted leakage of population from the qubit states to other states. On the other hand, fluctuations of the magnetic field may result in imperfect $\pi$-pulses, which may also damage the performance of the gate. To account for these experimental imperfections we simulate the following 4-level Hamiltonian
\begin{eqnarray}
H_{4l}&=&E_0 |0\rangle \langle 0 | + E_1|1\rangle \langle 1 | + E_2 |2\rangle \langle 2 | + E_3 |3\rangle \langle 3 |
+ X(t) |1\rangle \langle 1 | -X(t) |3\rangle \langle 3 | \nonumber \\
&+& \Omega(t) ( | 0 \rangle \langle 1| + \epsilon_\perp | 0 \rangle \langle 2 | + \epsilon_\perp | 0 \rangle \langle 3 | +\textrm{H.c.} ) \cos{[\omega t + \phi(t)]},
\end{eqnarray}
where the energies of the hyperfine levels, $E_i$, are those corresponding to a $^{171}$Yb$^+$ ion in a magnetic field of $100$ G, and the qubit is encoded in the levels $ \{|0 \rangle, | 1 \rangle \}$. The function $X(t)$ represents a fluctuating magnetic field, which shifts the magnetically sensitive levels $|1\rangle $ and $|3\rangle$ in opposite directions. Numerically, we have constructed this function as an OU process~\cite{Gillespie96}, where the parameters have been chosen such that the qubit levels, in the absence of any pulses, show a coherence decaying exponentially with a $T_2$ coherence time of $3$ ms, as experimentally observed~\cite{Wineland98}. In particular, this corresponds to the values $\tau_B=50\mu$s for the correlation time and $c_d=2/(\tau_B T_2)$ for the diffusion constant of the OU process. $\Omega(t)$ is a step function taking exclusively the values $\Omega$ and $0$, and $\epsilon_\perp$ is a small number representing the leakage of the qubit population through unwanted transitions. For the numerical analysis we have used unfavourable values for this set of parameters. More specifically, the Rabi frequency was assigned a value of $\Omega=(2\pi)\times20$ MHz, which is already twice the maximum value used in all the other simulations throughout the analysis, and therefore has a larger probability of exciting other, undesired, hyperfine transitions. Moreover, the simulations were performed for the longest sequence discussed in section \ref{subsect:Tailored}, which lasts $80\mu$s.
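The OU noise $X(t)$ described above can be generated with the standard exact update rule for an OU process; the following is a minimal sketch (field amplitude in arbitrary units), using the stationary standard deviation $\sqrt{c_d\tau_B/2}$ as a check:

```python
import numpy as np

def ou_trajectory(n_steps, dt, tau, c, rng, x0=0.0):
    """Exact OU update: x_{k+1} = x_k e^{-dt/tau} + sqrt(c*tau/2*(1-e^{-2dt/tau}))*N(0,1)."""
    mu = np.exp(-dt/tau)
    sigma = np.sqrt(0.5*c*tau*(1 - mu**2))
    x = np.empty(n_steps)
    x[0] = x0
    for k in range(1, n_steps):
        x[k] = x[k - 1]*mu + sigma*rng.standard_normal()
    return x

tau_B, T2 = 50e-6, 3e-3        # correlation time and target T2 (values from the text)
c_d = 2/(tau_B*T2)             # diffusion constant, c_d = 2/(tau_B T2)
rng = np.random.default_rng(0)
x = ou_trajectory(200_000, tau_B/10, tau_B, c_d, rng)

# the stationary standard deviation of the OU process is sqrt(c*tau/2)
sigma_stat = np.sqrt(0.5*c_d*tau_B)
assert abs(x[2000:].std() - sigma_stat)/sigma_stat < 0.1
```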
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{figures/Figures_A/InfidVSeps}
\caption{a) State infidelity after an AXY-4 sequence, consisting of 20 $\pi$-pulses and a total time of $80\mu$s, vs the strength of the leakage of population to other spectator levels. Each point is the average of the infidelities of 100 runs of the sequence in the presence of stochastic fluctuations of the magnetic field. b) State infidelity as a consequence of coupling to a radial mode of the kind in Eq.~(\ref{radial}). We observe a growing infidelity for larger values of $\beta$, with $\beta=\Delta_r/\eta_1\nu_1$, for the case of $g_B=150$ T/m and two-qubit gate phases $\varphi=\pi/4$ (squares) and $\varphi=\pi/8$ (circles). In c) we use $g_B= 300$ T/m and, again, squares for $\varphi=\pi/4$ and circles for $\varphi=\pi/8$. For panels b) and c) we used $\psi_{4}=|{\textrm e}\rangle \otimes (|{\textrm g}\rangle - i |{\textrm e}\rangle) + |{\textrm g}\rangle \otimes |{\textrm g}\rangle$ (up to normalisation) as the initial state.}
\label{fig:Infidelity}
\end{figure}
We compare the propagator resulting from our simulations to the identity, which is what one would expect after an even number of $\pi$-pulses, 20 in our case, and we compute a value for the fidelity according to the definition
\begin{equation}
F_{A,B}=\frac{|\textrm{Tr}(AB^\dag)|}{\sqrt{\textrm{Tr}(AA^{\dag}) \textrm{Tr} (BB^\dag)}},
\end{equation}
where $F_{A,B}$ is the fidelity between operators $A$ and $B$. To account for the stochastic effects of the OU process that models the fluctuations of the magnetic field, we have averaged the resulting fidelities over $100$ runs of our numerical simulator. In Fig.~\ref{fig:Infidelity} we show the value of the infidelity, $1-F$, for a number of values of $\epsilon_\perp$. We can see that the error grows non-linearly with the strength of the leakage $\epsilon_\perp$ due to polarisation errors in the control fields. However, for alignment errors below $20\%$ ($\epsilon_\perp=0.2$) we obtain that the infidelity is smaller than $10^{-4}$. Hence, for polarisation errors below $20\%$, the effect of additional hyperfine levels, magnetic field fluctuations, and fast counter-rotating terms should only be visible in the fifth significant figure of the gate fidelity, and should not alter the $99.9\%$ fidelity claimed in section~\ref{subsect:Tailored}.
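The operator fidelity defined above can be implemented directly; a minimal sketch with $2\times2$ test operators:

```python
import numpy as np

def fidelity(A, B):
    """Operator fidelity F_{A,B} = |Tr(A B^dag)| / sqrt(Tr(A A^dag) Tr(B B^dag))."""
    num = abs(np.trace(A @ B.conj().T))
    den = np.sqrt(np.trace(A @ A.conj().T).real*np.trace(B @ B.conj().T).real)
    return num/den

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

assert np.isclose(fidelity(I2, I2), 1.0)             # identical operators
assert np.isclose(fidelity(X, Z), 0.0)               # orthogonal Paulis
assert np.isclose(fidelity(X, np.exp(1j*0.7)*X), 1)  # insensitive to a global phase
```

Note that this definition is insensitive to global phases of the propagator, which is the relevant notion when comparing to the identity.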
\subsubsection{Coupling with radial modes}
In this section we study the influence of the motional radial modes of the ion in our proposal. To account for the effect of a given radial mode $d$, Hamiltonian~(\ref{Hamiltonianbare}) needs to be complemented with a term of the form
\begin{eqnarray}\label{radial}
\nu_r d^{\dag}d + \Delta_r (d + d^\dag) [ \sigma_1^z + \sigma_2^z].
\end{eqnarray}
Because of computational restrictions, in this analysis we consider only one radial mode and assume no motional decoherence. Term~(\ref{radial}) is justified by the unavoidable presence of some remanent magnetic field gradient in the radial direction, which leads to the coupling $\Delta_r$ that we model as a fraction of the coupling $\eta_1\nu_1$ in Hamiltonian~(\ref{Hamiltonianbare}) of the main text, i.e. $\Delta_r= \beta\eta_1\nu_1$. We have compared the states evolved under Hamiltonian~(\ref{Hamiltonianbare}) in the main text with and without the coupling term in Eq.~(\ref{radial}), and computed the infidelity between them. In Fig.~\ref{fig:Infidelity} we show the results for different values of $\beta$. The value $\nu_r =(2\pi)\times2.5$ MHz was used in the simulations, and the initial state for the qubits was chosen to be $\psi_4$ in Table~\ref{table1}, while a thermal state with 2 phonons was used as the initial state of the radial mode. We observe that even for large values of $\beta$, the impact of the radial mode is negligible: on the order of $10^{-5}$ for values of $\beta$ up to 0.4, already larger than what is expected experimentally.
\section{Two hyperfine ions under a magnetic field gradient}\label{app:twoions}
The Hamiltonian of the relevant hyperfine levels of the two-qubit system (composed, in our case, of two $^{171}$Yb$^+$ ions) under a $z$ dependent magnetic field can be expressed as
\begin{eqnarray}\label{model}
H = \nu_1 a^\dag a + \nu_2 c^\dag c &+& [\omega_{\textrm{e}} + \gamma_e B(z_1)/2] |{\textrm{e}}\rangle \langle {\textrm{e}}|_1 + \omega_{\textrm{g}} |{\textrm{g}} \rangle \langle {\textrm{g}}|_1 \nonumber\\
&+& [\omega_{\textrm{e}} + \gamma_e B(z_2)/2] |{\textrm{e}}\rangle \langle {\textrm{e}}|_2 + \omega_{\textrm{g}} |{\textrm{g}} \rangle \langle {\textrm{g}}|_2.
\end{eqnarray}
If we assume that the ions, which interact through direct Coulomb force, perform only small oscillations around their equilibrium positions, $z_j=z_j^0+q_j$, and we expand $B(z_{j})$ to the first order in $q_j$, then $B(z_j)=B_{j} + g_{B} \ q_{j}$, where $B_j\equiv B(z_j^0)$ and $g_B\equiv \partial B/{\partial z_j}\big|_{z_j=z_j^0}$. With this, and up to an energy displacement, the Hamiltonian~(\ref{model}) reads
\begin{eqnarray}\label{Hamil2}
H &=& \frac{1}{2}[\omega_{\textrm{e}} + \gamma_e B_1/2 - \omega_{\textrm g}] \ \sigma_1^z + \frac{1}{2}[\omega_{\textrm{e}} + \gamma_e B_2/2 - \omega_{\textrm g}] \ \sigma_2^z \\
&+& \nu_1 a^{\dag}a + \nu_2 c^{\dag}c +\frac{\gamma_e g_{B}}{4}(q_1+q_2) +\frac{\gamma_e g_{B}}{4}(q_1\sigma_1^z+q_2\sigma_2^z), \nonumber
\end{eqnarray}
where we have used the relations $|{\textrm{e}}\rangle \langle {\textrm{e}}|_j=\frac{1}{2}(\mathds{1}+\sigma_{\! j} ^z)$ and $|{\textrm{g}}\rangle \langle {\textrm{g}}|_j=\frac{1}{2}(\mathds{1}-\sigma_{\! j}^z)$. At this point, it may be useful to recall that the displacements of the ions from their equilibrium positions, $q_1$ and $q_2$, can be expressed in terms of the collective normal modes, $Q_1$ and $Q_2$, as
\begin{eqnarray}\label{quantizedpositions}
q_1&=& \frac{Q_1 - Q_2}{\sqrt{2}}= \sqrt{\frac{\hbar}{4M\nu_1}} \big(a+a^\dag\big) - \sqrt{\frac{\hbar}{4M\nu_2}} \big(c+c^\dag\big), \nonumber \\
q_2&=& \frac{Q_1 + Q_2}{\sqrt{2}}= \sqrt{\frac{\hbar}{4M\nu_1}} \big(a+a^\dag\big) + \sqrt{\frac{\hbar}{4M\nu_2}} \big(c+c^\dag\big),
\end{eqnarray}
$M$ being the mass of each ion. Using these relations, which follow the prescription in~\cite{James98}, Eq.~(\ref{Hamil2}) can be rewritten as
\begin{eqnarray}\label{Hamil3}
H &=& \frac{1}{2}[\omega_{\textrm{e}} + \gamma_e B_1/2 - \omega_{\textrm g}] \sigma_1^z + \eta_1\nu_1(a + a^\dag) \sigma_1^z - \eta_2\nu_2 (c + c^\dag) \sigma_1^z\nonumber\\
&+& \frac{1}{2}[\omega_{\textrm{e}} + \gamma_e B_2/2 - \omega_{\textrm g}] \sigma_2^z + \eta_1\nu_1(a + a^\dag) \sigma_2^z + \eta_2\nu_2(c + c^\dag) \sigma_2^z\nonumber\\
&+& \nu_1 a^{\dag}a + \nu_2 c^{\dag}c + \frac{\gamma_e g_{B}}{4} \sqrt{\frac{\hbar}{M \nu_1}} (a + a^\dag),
\end{eqnarray}
where we have defined $\eta_{1,2} \equiv \frac{\gamma_e g_{B}}{8\nu_{1,2}}\sqrt{\frac{\hbar}{M \nu_{1,2}}}$ as the coupling strengths between the qubits and the normal modes. The last term in Eq.~(\ref{Hamil3}) can be absorbed by the redefined bosonic operator $b = a + 2\eta_1$, which results in the Hamiltonian
\begin{eqnarray}\label{simplifiyedJorge}
\nonumber H= \nu_1 b^\dag b + \nu_2 c^\dag c &+& \frac{\omega_1}{2}\sigma_1^z + \eta_1\nu_1 (b+b^\dag) \sigma_1^z - \eta_2\nu_2(c+c^\dag)\sigma_1^z\\
&+& \frac{\omega_2}{2}\sigma_2^z + \eta_1\nu_1 (b+b^\dag) \sigma_2^z + \eta_2\nu_2(c+c^\dag)\sigma_2^z\end{eqnarray}
where $\omega_{1,2} \equiv \omega_{\textrm{e}} - \omega_{\textrm g} - 4\eta_1^2\nu_1 + \gamma_e B_{1,2}/2$. Furthermore, we can easily compute the quantity $\omega_{2} - \omega_{1}=\gamma_e (B_{2} - B_{1})/2=\gamma_e g_B (z^0_2-z^0_1)/2 = \gamma_e g_B \Delta z/2$.
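The couplings $\eta_{1,2}$ are easy to evaluate with standard constants. The script below is a sketch of our own; the trap numbers ($g_B = 150$ T/m, $\nu_1/(2\pi) = 150$ kHz, $\nu_2=\sqrt{3}\,\nu_1$) follow the values used elsewhere in the text, and the check exploits the exact scaling $\eta_1/\eta_2 = (\nu_2/\nu_1)^{3/2} = 3^{3/4}$.

```python
import numpy as np

hbar = 1.054571817e-34          # J s
u    = 1.66053906660e-27        # kg, atomic mass unit
M    = 171 * u                  # mass of 171Yb+
gamma_e = 2 * np.pi * 28.024e9  # rad/s/T, electronic gyromagnetic ratio
g_B  = 150.0                    # T/m, magnetic field gradient
nu1  = 2 * np.pi * 150e3        # rad/s, center-of-mass mode
nu2  = np.sqrt(3) * nu1         # breathing mode

def eta(nu):
    """Qubit-mode coupling eta = gamma_e g_B / (8 nu) * sqrt(hbar / (M nu))."""
    return gamma_e * g_B / (8 * nu) * np.sqrt(hbar / (M * nu))

eta1, eta2 = eta(nu1), eta(nu2)
# With nu2 = sqrt(3) nu1 the couplings obey eta1/eta2 = 3**(3/4) exactly
```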
\section{The interaction Hamiltonian}\label{app:IntHamil}
A bichromatic MW field with frequencies $\omega_j$ and phase $\phi$ will be applied to the system described by Eq.~(\ref{simplifiyedJorge}). The action of such a MW field on the ions is described by the following Hamiltonian
\begin{equation}\label{MWField}
H_c(t)=\sum_{j=1}^2\Omega_j(t) (\sigma_1^x + \sigma_2^x)\cos(\omega_j t - \phi)
\end{equation}
where $\Omega_j$ is the Rabi frequency associated with the intensity of the MW field of frequency $\omega_j$. If we add this term to Hamiltonian~(\ref{simplifiyedJorge}) and move to an interaction picture with respect to $H_0=\frac{\omega_1}{2} \sigma_1^z + \frac{\omega_2}{2} \sigma_2^z + \nu_1 b^{\dag}b + \nu_2 c^{\dag}c$, the complete Hamiltonian in the interaction picture is given by
\begin{eqnarray}\label{MWField2}
H^{\textrm I}(t)&=&
\eta_1\nu_1b e^{-i \nu_1 t} \sigma_1^z - \eta_2\nu_2ce^{-i\nu_2 t} \sigma_1^z + \eta_1\nu_1b e^{-i \nu_1 t} \sigma_2^z +\eta_2\nu_2ce^{-i\nu_2 t} \sigma_2^z \nonumber \\
&+& \big[\sum_{j=1}^2\Omega_j(t)\cos(\omega_j t - \phi) \big](\sigma_1^+ e^{i \omega_1 t} + \sigma_2^+ e^{i \omega_2 t} ) +\textrm{H.c.}
\end{eqnarray}
where we have used the relations $e^{i\theta a^\dagger a} a e^{-i\theta a^\dagger a}=ae^{-i\theta}$ and $e^{i\theta \sigma^z} \sigma^+ e^{-i\theta \sigma^z}=\sigma^+e^{i2\theta}$. Rewriting the last term leads to
\begin{eqnarray}\label{MWField3}
H^{\textrm I}(t)&=&\eta_1\nu_1b e^{-i \nu_1 t} \sigma_1^z - \eta_2\nu_2ce^{-i\nu_2 t} \sigma_1^z + \eta_1\nu_1b e^{-i \nu_1 t} \sigma_2^z +\eta_2\nu_2ce^{-i\nu_2 t} \sigma_2^z \nonumber \\
&+& \frac{\Omega_1(t)}{2}\Big[\sigma_1^+ e^{i \phi}(e^{i2(\omega_1 t -\phi)} + 1) + \sigma_2^+( e^{i(\omega_1+ \omega_2) t}e^{-i\phi} + e^{-i(\omega_1-\omega_2)t}e^{i\phi}) \Big] \nonumber\\
&+& \frac{\Omega_2(t)}{2}\Big[\sigma_1^+( e^{i(\omega_2+ \omega_1) t}e^{-i\phi} + e^{-i(\omega_2-\omega_1)t}e^{i\phi}) + \sigma_2^+ e^{i \phi}(e^{i2(\omega_2 t -\phi)} + 1)\Big] \nonumber\\
&+&\textrm{H.c.}
\end{eqnarray}
At this point we can safely neglect the terms that rotate with frequencies $\pm|2\omega_1|,\pm|2\omega_2|$ and $\pm|\omega_1+\omega_2|$ by invoking the RWA: because $|\omega_1|,|\omega_2| \gg \Omega_1,\Omega_2$, these terms have a negligible effect on the evolution of the system. On the other hand, terms that rotate with frequencies $\pm|\omega_2-\omega_1|$ would have a significant effect on the evolution. However, this effect is suppressed at the end of the two-qubit gate because of the design of the pulse sequence; how this elimination occurs is covered in section \ref{subsect:Tailored} and appendix \ref{app:PulseP}. Hence, we can assume that these terms have no effect on the system and neglect them, so that the Hamiltonian is
\begin{eqnarray}\label{MWField4}
H^{\textrm I}(t)&=& \eta_1\nu_1(b e^{-i \nu_1 t} + b^\dag e^{i\nu_1 t}) \sigma_1^z - \eta_2\nu_2(ce^{-i\nu_2 t} + c^\dag e^{i \nu_2 t}) \sigma_1^z \\ \nonumber &+& \eta_1\nu_1(b e^{-i \nu_1 t} + b^\dag e^{i\nu_1 t}) \sigma_2^z + \eta_2\nu_2(ce^{-i\nu_2 t} + c^\dag e^{i \nu_2 t}) \sigma_2^z\\
\nonumber &+& \frac{\Omega_1(t)}{2}(\sigma_1^+ e^{i \phi} + \sigma_1^- e^{-i\phi}) + \frac{\Omega_2(t)}{2}(\sigma_2^+ e^{i \phi} + \sigma_2^- e^{-i\phi}),
\end{eqnarray}
which corresponds to Eq.~(\ref{casi}) in the main text.
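The two conjugation identities invoked in this derivation can be verified numerically on a truncated Fock space and a single spin. The sketch below is self-contained; the truncation size and the angle are arbitrary test choices.

```python
import numpy as np
from numpy.linalg import norm

N = 12                                   # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1) # annihilation operator
theta = 0.73

# e^{i theta a^dag a} a e^{-i theta a^dag a} = a e^{-i theta}; exact even when
# truncated, since conjugation by a diagonal matrix acts entry-wise
R = np.diag(np.exp(1j * theta * np.arange(N)))
lhs_boson = R @ a @ R.conj().T
assert norm(lhs_boson - a * np.exp(-1j * theta)) < 1e-12

# e^{i theta sz} s+ e^{-i theta sz} = s+ e^{i 2 theta}
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # |e><g| in the basis (|e>, |g>)
Rz = np.diag(np.exp(1j * theta * np.array([1.0, -1.0])))
lhs_spin = Rz @ sp @ Rz.conj().T
assert norm(lhs_spin - sp * np.exp(2j * theta)) < 1e-12
```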
\section{The time-evolution operator}\label{app:TimeEvol}
An analytical expression for the time-evolution operator exists for any Hamiltonian of the form
\begin{eqnarray}\label{Hdd2}
H^\textrm{II}(t) = \sum_{j=1}^N\sum_{m=1}^M f_{j}(t)\eta_{jm}\nu_m (a_m e^{-i\nu_mt} + a_m^\dag e^{i\nu_mt}) \sigma_j^z.
\end{eqnarray}
The Hamiltonian for our ion system, undergoing a sequence of instantaneous $\pi$-pulses, is given by
\begin{eqnarray}\label{Hdd}
H^{\textrm II}(t) &=& f_{1}(t)\eta_1\nu_1 (b e^{-i\nu_1t} + b^\dag e^{i\nu_1t}) \ \sigma_1^z \nonumber\\
&-& f_{1}(t)\eta_2\nu_2 (c e^{-i\nu_2t} + c^\dag e^{i\nu_2t})\ \sigma_1^z\nonumber\\ &+& f_{2}(t)\eta_1\nu_1 (b e^{-i\nu_1t} + b^\dag e^{i\nu_1t}) \ \sigma_2^z \nonumber\\
&+& f_{2}(t)\eta_2\nu_2 (c e^{-i\nu_2t} + c^\dag e^{i\nu_2t})\ \sigma_2^z,
\end{eqnarray}
and therefore belongs to this category with $a_1=b$, $a_2=c$ and $\eta_{11}=\eta_{21}=\eta_1$, $\eta_{12}=-\eta_{22}=-\eta_2$.
The time evolution operator of a time dependent Hamiltonian is given by the Dyson series or equivalently by the Magnus expansion:
\begin{eqnarray}\label{Magnus}
U(t) = \exp{\big\{\Omega^\textrm{M}_1(t) +\Omega^\textrm{M}_2(t)+\Omega^\textrm{M}_3(t)+...\big\}},
\end{eqnarray}
where (in general for $t_0\neq0$)
\begin{eqnarray}\label{MagnusTerms}
\Omega^\textrm{M}_1(t,t_0) &=& -i\int_{t_0}^t dt_1H(t_1)\nonumber\\
\Omega^\textrm{M}_2(t,t_0) &=& -\frac{1}{2}\int_{t_0}^{t}dt_1\int_{t_0}^{t_1} dt_2 [H(t_1),H(t_2)] \\
\Omega^\textrm{M}_3(t,t_0) &=& \frac{i}{6} \int_{t_0}^{t}dt_1\int_{t_0}^{t_1} dt_2 \int_{t_0}^{t_2} dt_3 \Big([H(t_1),[H(t_2),H(t_3)]]\nonumber\\ &+& [H(t_3),[H(t_2),H(t_1)]] \Big),\nonumber
\end{eqnarray}
and so on. In our case, $\Omega^\textrm{M}_k$ terms for $k>2$ are zero because $[H(s),[H(s'),H(s'')]]=0$. The first term $\Omega^\textrm{M}_1$ can be written as
\begin{eqnarray}\label{Magnus1}
\Omega^\textrm{M}_1(t,t_0) =-i \sum_{j,m}\eta_{jm}\big[a_m G_{jm}(t,t_0) +a_m^\dag G_{jm}^*(t,t_0)\big]\sigma_j^z
\end{eqnarray}
where
\begin{eqnarray}\label{Gfunc}
G_{jm}(t,t_0) =\nu_m\int_{t_0}^t dt' f_j(t')e^{-i\nu_m t'}.
\end{eqnarray}
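Since $f_j$ is piecewise constant, flipping sign only at the pulse times, $G_{jm}$ reduces to a sum of closed-form segment integrals between consecutive pulses. The sketch below (helper name and pulse times are our own) implements this and cross-checks it against a brute-force Riemann sum.

```python
import numpy as np

def G_seg(pulse_times, t0, t, nu):
    """G(t, t0) = nu * int_{t0}^{t} f(t') e^{-i nu t'} dt' for a modulation
    function f that starts at +1 and flips sign at each pulse time."""
    edges = [t0] + [tk for tk in pulse_times if t0 < tk < t] + [t]
    total, sign = 0.0 + 0.0j, 1.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # nu * int_lo^hi e^{-i nu t'} dt' = i (e^{-i nu hi} - e^{-i nu lo})
        total += sign * 1j * (np.exp(-1j * nu * hi) - np.exp(-1j * nu * lo))
        sign = -sign
    return total

# Brute-force cross-check with a midpoint Riemann sum
nu, t0, t = 2 * np.pi * 1.3, 0.0, 2.0
pulses = [0.3, 0.9, 1.4]
n = 200000
ts = t0 + (np.arange(n) + 0.5) * (t - t0) / n
f = (-1.0) ** np.searchsorted(pulses, ts)   # f(t') = (-1)^(pulses before t')
num = nu * np.sum(f * np.exp(-1j * nu * ts)) * (t - t0) / n
assert abs(G_seg(pulses, t0, t, nu) - num) < 1e-3
```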
The second term can be calculated to be
\begin{eqnarray}\label{Magnus2}
\Omega^\textrm{M}_2(t,t_0) &=& -\frac{1}{2}\int_{t_0}^{t}dt_1\int_{t_0}^{t_1} dt_2 [H(t_1),H(t_2)] = -\frac{i}{2}\int_{t_0}^{t}dt_1 \big[H(t_1),\Omega^\textrm{M}_1(t_1,t_0)] \nonumber\\
&=& -\frac{i}{2}\int_{t_0}^{t}dt_1\sum_{jm}\sum_{j'm'}(-i)\eta_{jm}\eta_{j'm'}\nu_m\nonumber\\
&\times&[f_j(a_me^{-i\nu_mt_1}+a_{m}^\dag e^{i\nu_m t_1})\sigma_j^z,(a_{m'}G_{j'm'}+a_{m'}^\dag G^*_{j'm'} )\sigma_{j'}^z] \nonumber\\
&=& -\frac{i}{2}\int_{t_0}^{t}dt_1\sum_{jj'}\sum_{m}(-i)\eta_{jm}\eta_{j'm}\nu_{m}\nonumber\\ &\times&\big(f_jG^*_{j'm}e^{-i\nu_m t_1}[a_m,a_m^\dag]+ f_{j}G_{j'm}e^{i\nu_m t_1}[a_m^\dag,a_m]\big)\sigma_j^z\sigma^z_{j'},
\end{eqnarray}
which, using $[a_m,a_{m'}^\dag]=\delta_{m,m'}$ and $(\sigma_j^z)^2=\mathds{1}$ becomes
\begin{eqnarray}\label{Magnus22}
\Omega^\textrm{M}_2(t,t_0) &=& i\varphi(t,t_0)\sigma_1^z\sigma_2^z + K(t,t_0)\mathds{1},
\end{eqnarray}
where the phase $\varphi$ is a time dependent function given by
\begin{eqnarray}\label{GenPhase2}
\varphi(t,t_0)=\sum_m \Im{ \int_{t_0}^{t}dt'\eta_{1m}\eta_{2m}\nu_m\big\{f_1(t')G_{2m}(t',t_0)+f_2(t')G_{1m}(t',t_0)\big\}e^{i\nu_m t'}}.
\end{eqnarray}
Here, $\Im$ indicates the imaginary part. We have ignored the term $K(t,t_0)$, as it only contributes a global phase. Finally, one can easily check that the time-evolution operator can be written as
\begin{equation}\label{timevol}
U(t,t_0) =U_s(t,t_0)U_c(t,t_0)
\end{equation}
where
\begin{equation}\label{twogate}
U_s(t,t_0)= \exp \bigg\{-\! i \sum_{j,m}\eta_{jm} \left[ a_m G_{jm}(t,t_0) + a_m^\dag G_{jm}^*(t,t_0)\right]\sigma_j^z \bigg\},
\end{equation}
and
\begin{equation}
U_c(t,t_0)=\exp \left[i \varphi(t,t_0) \sigma_1^z \sigma_2^z\right].
\end{equation}
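The factorisation $U=U_sU_c$ can be tested numerically in its simplest instance: a single qubit coupled to one mode with $f_1(t)=1$ (no pulses), where $U_c$ reduces to a global phase and $G(t,0)=-i(1-e^{-i\nu t})$. The sketch below (truncated Fock space, exponential midpoint integrator; all parameter values are arbitrary test choices of ours) compares the Magnus result with direct integration.

```python
import numpy as np

N = 15                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
eta, nu, T = 0.1, 1.0, 1.7
sz = np.diag([1.0, -1.0])

def H(t):
    hb = eta * nu * (a * np.exp(-1j * nu * t) + a.conj().T * np.exp(1j * nu * t))
    return np.kron(sz, hb)                # (eta nu)(a e^{-i nu t} + h.c.) sigma_z

def expmh(Hmat, dt):
    """exp(-i Hmat dt) for Hermitian Hmat via eigendecomposition."""
    w, V = np.linalg.eigh(Hmat)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

# Direct integration with the exponential midpoint rule
dt, U = 1e-3, np.eye(2 * N, dtype=complex)
for k in range(int(round(T / dt))):
    U = expmh(H((k + 0.5) * dt), dt) @ U

# Magnus result: U_s = exp{-i eta (a G + a^dag G*) sigma_z}
G = -1j * (1 - np.exp(-1j * nu * T))
Us = expmh(np.kron(sz, eta * (a * G + a.conj().T * np.conj(G))), 1.0)

psi0 = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.eye(N)[0])  # |+> (x) |0>
overlap = abs(np.vdot(Us @ psi0, U @ psi0))   # global phase dropped
assert overlap > 0.999
```

For a single qubit the second Magnus term is proportional to the identity, so comparing state overlaps up to a global phase is the right test.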
\section{Properties of the $G_{jm}(t)$ and $\varphi(t)$ functions}\label{app:Properties}
\begin{figure*}[t!]
\centering
\includegraphics[width=1\linewidth]{figures/Figures_A/NormAXY4.pdf}
\caption{ Modulation function $f(x)$ corresponding to the first two blocks of the AXY-4 pulse sequence. Time has been normalised by the characteristic time of the sequence $\tau$ ($x=t/\tau$); likewise, $\tilde{\tau}_a=\tau_a/\tau$ and $\tilde{\tau}_b=\tau_b/\tau$.}\label{fig:AXY2}
\end{figure*}
Searching for all the different sequences that fulfil the conditions $G_{jm}(T_\textrm{G})=0$ and $\varphi(T_\textrm{G})\neq0$ becomes easier once we identify the variables that actually define the problem. The sequence function $f_1(t)=f_2(t)=f(t)$ is completely defined by the four parameters $\tau_a,\tau_b,\tau$, and $n_\textrm{B}$, for the case of an AXY-$n_\textrm{B}$ sequence. The duration of the sequence is determined by only two of them: $\tau$ and $n_\textrm{B}$ ($T_\textrm{G}=n_\textrm{B}\tau$). It is useful to rescale the domain of the $f$ function using $\tau$, the characteristic time of the sequence, as $t=x\tau$. Then, the property $f(t+\tau)=-f(t)$ becomes $f(x+1)=-f(x)$, and $\tau_a,\tau_b$ may be redefined as $\tilde{\tau}_a=\tau_a/\tau$ and $\tilde{\tau}_b=\tau_b/\tau$. The AXY-4 sequence with this time rescaling is shown in Fig.~\ref{fig:AXY2}. With this change of variable, the $G_{jm}$ functions at time $T_\textrm{G}$ read
\begin{equation}\label{Gchange}
G_{jm}(T_\textrm{G})=\nu_m\int_{0}^{T_\textrm{G}}\!dt \ f(t)\ e^{-i\nu_m t}=\nu_m\tau\int_{0}^{M}\!dx \ f(x) \ e^{-i\nu_m \tau x}.
\end{equation}
Now, if we relate $\tau$ and $\nu$ as
\begin{equation}\label{nutau}
\nu \tau=2\pi r \ \ \mbox{with} \ \ r\in \mathbb{N},
\end{equation}
these functions become independent of the frequency $\nu$,
\begin{eqnarray}\label{Gtilde}
G_{j1}(T_\textrm{G})=\nu_1\tau \int_{0}^{M}\!dx \ f(x)\ e^{-i2\pi r x}=2\pi r\int_{0}^{M}\!dx \ f(x)\ e^{-i2\pi r x} \\
G_{j2}(T_\textrm{G})=\nu_2\tau \int_{0}^{M}\!dx \ f(x)\ e^{-i2\pi\sqrt{3} r x}=2\pi \sqrt{3}r\int_{0}^{M}\!dx \ f(x)\ e^{-i2\pi\sqrt{3} r x}.
\end{eqnarray}
This simplification also works for the $\varphi(t)$ function, which can be redefined in terms of $\tilde{\varphi}_1(t)$ and $\tilde{\varphi}_2(t)$ as
\begin{eqnarray}\label{varphi}
\varphi(t)&=&\eta_1^2\tilde{\varphi}_1(t)-\eta_2^2\tilde{\varphi}_2(t)=\eta_1^2(\tilde{\varphi}_1(t)-\frac{1}{3\!\sqrt{3}}\tilde{\varphi}_2(t))=\eta_1^2\tilde{\varphi}(t),
\end{eqnarray}
where
\begin{eqnarray}\label{phitilde}
\tilde{\varphi}_1(T_\textrm{G})=(2\pi r)^2 \ \Im{\int_{0}^{M}\!dx \int_{0}^{x}\!dy \ [f_1(x)f_2(y)+f_2(x)f_1(y)] \ e^{i2\pi r(x-y)}} \\
\tilde{\varphi}_2(T_\textrm{G})=(2\pi \sqrt{3}r)^2 \ \Im{ \int_{0}^{M}\! dx\int_{0}^{x}\!dy \ [f_1(x)f_2(y)+f_2(x)f_1(y)] \ e^{i2\pi \sqrt{3}r(x-y)}},
\end{eqnarray}
or
\begin{eqnarray}\label{phitildefull}
\tilde{\varphi}(T_\textrm{G})=(2\pi r)^2 \ \Im\bigg\{\int_{0}^{M}\!dx \int_{0}^{x}\!dy \ [f_1(x)f_2(y)+f_2(x)f_1(y)] \nonumber \\ \times \ \Big(e^{i2\pi r(x-y)}-\frac{1}{\sqrt{3}}e^{i2\pi \sqrt{3}r(x-y)}\Big)\bigg\}.
\end{eqnarray}
Now, it is clear that the functions $G_{j1}(T_\textrm{G})$, $G_{j2}(T_\textrm{G})$, and $\tilde{\varphi}(T_\textrm{G})$ only depend on $\tilde{\tau}_a$, $\tilde{\tau}_b$, $n_\textrm{B}$ and $r$. Therefore, the functions plotted in Fig.~\ref{Gplot} (corresponding to the cases $n_\textrm{B}=4$, $r=1,2,3$) do not depend on the frequencies $\nu_m$, but only on $\tilde{\tau}_a$ and $\tilde{\tau}_b$.
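The frequency independence implied by Eq.~(\ref{nutau}) can be checked directly: evaluating $G$ for two different mode frequencies, with $\tau$ adjusted so that $\nu\tau=2\pi r$ in both cases, must give identical results. A sketch with hypothetical rescaled pulse positions (the list `xs` below is illustrative, not an actual AXY-4 pattern):

```python
import numpy as np

def G_total(xs, nB, nu, r):
    """nu * int_0^{nB tau} f(t) e^{-i nu t} dt with nu*tau = 2 pi r and sign
    flips at t = tau * x for x in xs (f starts at +1)."""
    tau = 2 * np.pi * r / nu
    edges = [0.0] + [tau * x for x in xs] + [nB * tau]
    total, sign = 0.0 + 0.0j, 1.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        total += sign * 1j * (np.exp(-1j * nu * hi) - np.exp(-1j * nu * lo))
        sign = -sign
    return total

# Hypothetical rescaled pulse positions within nB = 4 periods
xs = [0.11, 0.38, 0.62, 0.89, 1.11, 1.38, 1.62, 1.89,
      2.11, 2.38, 2.62, 2.89, 3.11, 3.38, 3.62, 3.89]
g1 = G_total(xs, 4, nu=1.0, r=2)
g2 = G_total(xs, 4, nu=2.5, r=2)
assert abs(g1 - g2) < 1e-10   # depends only on (xs, nB, r), not on nu
```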
\section{Pulse propagator}\label{app:PulseP}
Our strategy to produce fast $\pi$-pulses on a certain ion qubit and, at the same time, eliminate undesired effects on the off-resonant qubit consists in appropriately choosing the Rabi frequency of the driving. When a $\pi$-pulse is applied, say on the first qubit, we have the following Hamiltonian
\begin{equation}
H = \frac{\omega_1}{2} \sigma_1^z + \frac{\omega_2}{2} \sigma_2^z + \Omega_1 \cos(\omega_1 t - \phi) (\sigma_1^x + \sigma_2^x ).
\end{equation}
In a rotating frame with respect to $ \frac{\omega_1}{2} \sigma_1^z + \frac{\omega_2}{2} \sigma_2^z$ and after eliminating fast rotating terms, the above Hamiltonian reads
\begin{eqnarray}\label{ctH}
H = \frac{\Omega_1}{2} \sigma_1^{\phi} + \frac{\Omega_1}{2}[ \sigma_2^+ e^{i\phi} e^{i\delta_2t} + \textrm{H.c.}],
\end{eqnarray}
where $\sigma_1^{\phi} =\sigma_1^+ e^{i\phi} + \sigma_1^- e^{-i\phi}$ and $\delta_2 = \omega_2 - \omega_1$. At this level one can argue that the second term on the right-hand side of the above equation is negligible only when the Rabi frequency is small. This unavoidably limits the value of $\Omega_1$ and, consequently, how fast our decoupling pulses can be. In this respect, note that for a value of $\omega_2 - \omega_1\approx (2\pi)\times45$ MHz, which is even larger than the ones used in section~\ref{sect:1_PDD}, we find that $\Omega_1$ should be significantly smaller than $(2\pi)\times1$ MHz if we want to eliminate the crosstalk between ions during 20 $\pi$-pulses, see Fig.~\ref{crosstalkfigure}.
To eliminate this restriction, we can use the following expression
\begin{eqnarray}\label{prop}
U_{[t:t_0]} &\equiv& \hat{T}e^{-i\int_{t_0}^t H(s) ds} = U_0 \tilde{U}_{[t:t_0]} \\ \nonumber
&\equiv& e^{-iH_\delta (t-t_0)} \hat{T}e^{-i\int_{t_0}^t U^{\dag}_0 (H(s) - H_\delta) U_0ds} ,
\end{eqnarray}
where $\hat{T}$ is the time-ordering operator and $H_\delta=-(\delta_2/2) \sigma_2^z$, and find the time-evolution operator for Eq.~(\ref{ctH}) in a generic time interval $(t, t_0)$. This is
\begin{equation}
U_{[t:t_0]} = e^{-i \frac{\Omega_1}{2} \sigma_1^{\phi} (t-t_0) } e^{i\frac{\delta_2}{2} \sigma_2^{z} (t-t_0)} e^{-i \gamma (t-t_0) \hat{n}_0\cdot\vec{\sigma}_2} ,
\end{equation}
where $\gamma = \frac{1}{2}\sqrt{ \Omega_1^2 + \delta_2^2 }$ and
\begin{equation}\label{unitcross}
\hat{n}_{0}=\frac{1}{2\gamma}\Big[\Omega_1\cos{(\phi-\varphi_0)},-\Omega_1\sin{(\phi+\varphi_0)}, \delta_2\Big],
\end{equation}
with $\varphi_0=\delta_2 t_0 /2$.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{figures/Figures_A/Crosstalk.pdf}
\caption{Fidelity after 20 $\pi$-pulses between the propagators associated to the Hamiltonians $H = \frac{\Omega_1}{2} \sigma_1^{\phi} + \frac{\Omega_1}{2}[ \sigma_2^+ e^{i\phi} e^{i\delta_2t} + {\textrm H.c.}]$ and $H = \frac{\Omega_1}{2} \sigma_1^{\phi}$ as a function of the Rabi frequency $\Omega_1$. We can observe how the fidelity decays because of the failure of the RWA. For this plot we have taken $\delta_2\approx (2\pi)\times 45$ MHz, a value even larger than the ones we use in section \ref{subsect:Tailored}.}
\label{crosstalkfigure}
\end{figure}
Note that the first and second terms in Eq.~(\ref{ctH}) commute, which allows us to apply relation~(\ref{prop}) only to the part $\frac{\Omega_1}{2}[ \sigma_2^+ e^{i\phi} e^{i\delta_2t} + \textrm{H.c.} ]$. Finally, in order to realise a $\pi$-pulse we will set $(t-t_0) = t^{(1)}_{\pi} \equiv \frac{\pi}{\Omega_1}$, which gives rise to
\begin{eqnarray}\label{piuno}
U_{t_\pi}^{(1)} = e^{-i(\Omega_1/2) \sigma_1^{\phi} t_\pi } e^{i(\delta_2/2) \sigma_2^{z} t_\pi} e^{-i \gamma t_\pi (\hat{n}_0\cdot\vec{\sigma}_2)}.
\end{eqnarray}
In the same manner, for a $\pi$-pulse (with $ t^{(2)}_{\pi} \equiv \frac{\pi}{\Omega_2}$) in resonance with the second ion we would have
\begin{eqnarray}\label{pidos}
U_{t_\pi}^{(2)} = e^{-i (\Omega_2/2) \sigma_2^{\phi} t_\pi } e^{i(\delta_1/2) \sigma_1^{z} t_\pi} e^{-i \gamma t_\pi(\hat{n}_0\cdot\vec{\sigma}_1)}.
\end{eqnarray}
Equations~(\ref{piuno}) and~(\ref{pidos}) contain the terms corresponding to the $\pi$-pulses in which we are interested ($\exp{[-i \frac{\Omega_1}{2} \sigma_1^{\phi} t_\pi]}$ and $\exp{[-i \frac{\Omega_2}{2} \sigma_2^{\phi} t_\pi]}$) plus the crosstalk contributions we want to get rid of. To eliminate the terms $\exp{[-i\gamma t_\pi (\hat{n}_0\cdot\vec{\sigma}_2)]}$ and $\exp{[-i \gamma t_\pi (\hat{n}_0\cdot\vec{\sigma}_1)]}$, we adjust the Rabi frequencies $\Omega_{1,2}$ such that
\begin{equation}
\gamma t_\pi = \frac{1}{2} \sqrt{ (\Omega_{1,2})^2 + (\delta_{2,1})^2 }\ \frac{\pi}{\Omega_{1,2}} = \pi k, \ \ \mbox{with} \ \ k\in \mathbb{Z}.
\end{equation}
In this case we have that $\exp{[-i \gamma t_\pi(\hat{n}_0\cdot\vec{\sigma}_2)]} = \exp{[-i \gamma t_\pi(\hat{n}_0\cdot\vec{\sigma}_1)]} = \pm \mathds{1}$, i.e. the unwanted terms contribute only as a global phase. Hence, only pure dephasing terms remain in both pulses, $\exp{[i\frac{\delta_2}{2} \sigma_2^{z} t_\pi]}$ and $\exp{[i\frac{\delta_1}{2} \sigma_1^{z} t_\pi]}$, which will be cancelled by our tailored MW sequences as explained in section \ref{subsect:Tailored}.
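The condition can be verified explicitly: solving $\gamma t_\pi=\pi k$ for the Rabi frequency gives $\Omega_{1}=\delta_2/\sqrt{4k^2-1}$, and the residual rotation then equals $(-1)^k\mathds{1}$. In the sketch below we take a representative unit vector of the form~(\ref{unitcross}) with both trigonometric phases equal (an assumption of ours; any unit $\hat{n}_0$ leads to the same conclusion):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

delta, k = 2 * np.pi * 45e6, 3            # detuning and integer k
Omega = delta / np.sqrt(4 * k ** 2 - 1)   # enforces gamma * t_pi = pi * k
t_pi = np.pi / Omega
gamma = 0.5 * np.sqrt(Omega ** 2 + delta ** 2)
assert abs(gamma * t_pi - np.pi * k) < 1e-6

# exp(-i gamma t_pi n.sigma) = cos(gamma t_pi) I - i sin(gamma t_pi) n.sigma
phi0 = 0.4                                # arbitrary phase in n_hat
nvec = np.array([Omega * np.cos(phi0), -Omega * np.sin(phi0), delta]) / (2 * gamma)
nsigma = nvec[0] * sx + nvec[1] * sy + nvec[2] * sz
w, V = np.linalg.eigh(nsigma)
U = (V * np.exp(-1j * gamma * t_pi * w)) @ V.conj().T
assert np.allclose(U, (-1) ** k * np.eye(2), atol=1e-6)
```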
\section{Heating rates estimation}\label{app:Heating}
To estimate the $\Gamma_{b,c}$ parameters, we rely on the data provided in~\cite{Weidt16}, as well as on the scaling relations one can extract from~\cite{Brownnutt15}. More specifically, we take as reference values (for the center-of-mass mode) $\dot{n}_\textrm{com}^\textrm{ref} = 41$ phonons/second at a frequency $\nu^\textrm{ref}_1/(2\pi) = 427$ kHz, and (for the breathing mode) $\dot{n}_\textrm{bre}^\textrm{ref} = 7$ phonons/second at a frequency $\nu^\textrm{ref}_2/(2\pi) = 459$ kHz~\cite{Weidt16}. These values correspond to a configuration at room temperature ($T^\textrm{ref}=300$ K) with an ion-electrode distance of $d_\textrm{i-e}^\textrm{ref} \approx 310 \ \mu$m, which would give rise to a magnetic field gradient $g_B = 23.6$ T/m.
Our operating conditions require, for the first studied case, an ion-electrode distance of $d_\textrm{i-e}\approx 150 \ \mu$m to generate a magnetic field gradient of $g_B = 150$ T/m, with $\nu_1=\nu$ and $\nu_2 = \sqrt{3} \nu$ where $\nu/(2\pi) = 150$ kHz, while we consider low temperatures of $T=50$ K. In this situation one can derive new values for $\dot{n}_\textrm{com}$ and $\dot{n}_\textrm{bre}$ using scaling relations~\cite{Brownnutt15}, which in our case read
\begin{eqnarray}\label{scaling1}
\dot{n}_\textrm{com}\approx \dot{n}_\textrm{com}^\textrm{ref} \ \bigg(\frac{\nu_1^\textrm{ref}}{\nu_1}\bigg)^2 \bigg(\frac{d_\textrm{i-e}^\textrm{ref}}{d_\textrm{i-e}}\bigg)^4 \bigg(\frac{T^\textrm{ref}}{T}\bigg)^{-2.13},
\end{eqnarray}
and
\begin{eqnarray}\label{scaling2}
\dot{n}_\textrm{bre}\approx \dot{n}_\textrm{bre}^\textrm{ref} \ \bigg(\frac{\nu_2^\textrm{ref}}{\nu_2}\bigg)^2 \bigg(\frac{d_\textrm{i-e}^\textrm{ref}}{d_\textrm{i-e}}\bigg)^4 \bigg(\frac{T^\textrm{ref}}{T}\bigg)^{-2.13}.
\end{eqnarray}
Then, one can use that, when close to the motional ground state, we have~\cite{Brownnutt15}
\begin{equation}
\dot{n}_{\textrm{com,bre}} = \Gamma_{b,c} \ \bar{N}_{b, c},
\end{equation}
that together with the definition of $\bar{N}_{b, c} \equiv N^\textrm{thermal}_{b,c} =1/(e^{\hbar \nu_{1,2}/k_{\textrm B} T}-1)$, allows us to obtain the values for $\Gamma_{b,c}$.
In the second studied case, a magnetic field gradient of $g_B = 300 \frac{\textrm T}{\textrm m}$ would require locating the ions closer to the electrodes, which would induce more heating. We estimate a distance according to the relation $d_\textrm{i-e} = \sqrt{\frac{150}{300}} \ 150 \ \mu$m $\approx 106 \ \mu$m, which assumes a dependence $g_B \sim \frac{1}{d_\textrm{i-e}^2}$. This new distance can be used in Eqs.~(\ref{scaling1}) and~(\ref{scaling2}) to derive new values for the heating rates $\Gamma_{b,c}$.
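The estimates of this appendix can be organised in a short script (reference numbers as quoted above; the helper names are ours):

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

# Reference heating data quoted in the text (center-of-mass mode)
ndot_ref, nu_ref = 41.0, 2 * np.pi * 427e3     # phonons/s, rad/s
d_ref, T_ref = 310e-6, 300.0                   # m, K

def ndot(nu, d, T):
    """Heating rate rescaled from the reference point via Eq. (scaling1):
    ndot ~ nu^-2, d^-4, T^2.13."""
    return (ndot_ref * (nu_ref / nu) ** 2 * (d_ref / d) ** 4
            * (T_ref / T) ** (-2.13))

nu1, d, T = 2 * np.pi * 150e3, 150e-6, 50.0    # first studied case
ndot_com = ndot(nu1, d, T)                     # ~1.3e2 phonons/s
Nbar = 1.0 / (np.exp(hbar * nu1 / (kB * T)) - 1.0)   # thermal occupation
Gamma_b = ndot_com / Nbar                      # decoherence rate in 1/s
```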
\chapter{Quantum Sensing with NV Centers}
\label{chapter:chapter_4}
\thispagestyle{chapter}
The ability to manipulate a small quantum system using electromagnetic radiation, and to initialize and read out its quantum state, can be used to gain control over nearby systems. This is the case of nanoscale NMR, whose goal is to detect and control magnetic field emitters (such as nuclear spins) with high frequency and spatial resolution~\cite{Rondin14,Schmitt17,Boss17,Zopes18a,Glenn18, Zopes18b}. The NV center is a prominent quantum-sensor candidate owing to its long coherence times in room-temperature environments. Its electronic spin is controlled with MW radiation, while initialization and read-out are done with optical fields~\cite{Schirhagl14, Wu16}. At room temperature, the decay time $T_1$ of the NV center is of the order of milliseconds~\cite{Degen17}. Coherence times due to dephasing, $T_2$, are typically shorter, owing to the interaction between the $^{13}$C nuclei in the diamond and the NV~\cite{Maze08}. Here, DD techniques have played an important role, extending the coherence time $T_2$ up to the decay time~$T_1$~\cite{Souza12}. On the other hand, DD techniques are also employed to couple the NV to a target signal, e.g. classical electromagnetic radiation~\cite{Taylor08} or the hyperfine fields emitted by nuclear spins~\cite{Abobeih18,Muller14b, Wang16a}. In particular, DD techniques generate filters that allow the passage of signals with only specific frequencies~\cite{Haase18}. It is the accuracy of this filter that determines the fidelity of the detection and control of the target signal. Continuous or pulsed (stroboscopic) DD schemes are typically considered. While the former requires fulfilling the Hartmann-Hahn condition~\cite{Hartmann62, Casanova19}, the latter uses the time spacing between $\pi$ pulses to induce a rotation frequency in the NV matching that of the target signal~\cite{Taminiau12}. 
Pulsed DD schemes have advantages over continuous methods, such as the enhanced frequency selectivity achieved by using large harmonics of the generated modulation function~\cite{Taminiau12, Casanova15}. Another advantage is the demonstrated robustness against control errors of certain pulse sequences, such as those of the XY family~\cite{Maudsley86, Gullion90, Souza11, Wang17a, Arrazola18}. However, the use of large harmonics makes DD sequences sensitive to environmental noise, and leads to signal overlaps which hinder spectral readout~\cite{Casanova15}. As we will show, these issues can be minimised by applying a large static magnetic field $B_z$. Unfortunately, the performance of pulsed DD techniques under large $B_z$ deteriorates unless the $\pi$ pulses are fast, i.e. highly energetic, compared with the nuclear Larmor frequencies (note these are proportional to $B_z$). This represents a serious disadvantage, especially when DD sequences act on biological samples, since fast $\pi$ pulses require high MW power, causing damage as a result of the induced heating~\cite{Cao20}.
In this chapter, we propose a design of amplitude-modulated decoupling pulses that solves these problems and achieves tuneable, hence highly selective, NV-nuclei interactions. This can be done without fast $\pi$ pulses, i.e. with low MW power, even at large magnetic fields. We use an NV center in diamond to illustrate our method, although the method is general and thus applicable to arbitrary hybrid spin systems. Furthermore, our protocol can be incorporated into standard pulsed DD sequences such as the widely used XY8 sequence, demonstrating its flexibility. We note that a different approach, based on a specific continuous DD method~\cite{Casanova19}, has been proposed to operate with NV centers under large $B_z$ fields.
\section{Model: NV center for nanoscale NMR}
We consider an NV center coupled to nearby nuclear spins. This is described by
\begin{equation}\label{original}
H = D S_z^2 - \gamma_e B_z S_z -\sum_j \omega_\textrm{ L} I_j^z + S_z \sum_j \vec{A}_j \cdot \vec{I}_j,
\end{equation}
where $D = (2\pi)\times 2.87$ GHz is the so-called zero-field splitting, $\gamma_e = -(2\pi)\times 28.024$ GHz/T is the electronic gyromagnetic ratio of the NV center, and $B_z$ is the intensity of the magnetic field applied in the NV axis (the $z$ axis). The nuclear Larmor frequency is $\omega_\textrm{ L} = \gamma_n B_z$ with $\gamma_n$ the nuclear gyromagnetic ratio. $S_z =|1\rangle\langle 1|-|-\!1\rangle\langle -1|$ is the spin-$1$ operator representing the NV center, $|1\rangle$ and $|\!-\!\!1\rangle$ being two of the three hyperfine levels of the NV. The nuclear spin-$1/2$ operators are $I_j^\alpha = 1/2 \ \sigma_j^{\alpha}$ ($\alpha = x,y,z$) and $\vec{A}_j$ is the hyperfine vector mediating the NV-nucleus coupling. The latter is given by
\begin{equation}\label{hypervec}
\vec{A}_j=\frac{\mu_0\gamma_e\gamma_n\hbar}{4\pi r_j^3}\bigg[\hat{z}-\frac{3(\hat{z}\cdot\vec{r}_j)\vec{r}_j}{r^2_j} \bigg],
\end{equation}
where $\vec{r}_j$ is the position vector of nucleus $j$ with respect to the origin (the vacancy site), and $\mu_0=4\pi\times10^{-7} \ \textrm{ T}\cdot \textrm{m/A}$ is the vacuum permeability.
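Eq.~(\ref{hypervec}) is easy to evaluate. The sketch below (our own helper; the standard $^{13}$C gyromagnetic ratio $\gamma_n=(2\pi)\times 10.705$ MHz/T is assumed as an example) also checks the dipolar geometry: a nucleus on the NV axis couples with $-2$ times the strength of an in-plane nucleus at the same distance.

```python
import numpy as np

mu0, hbar = 4 * np.pi * 1e-7, 1.054571817e-34
gamma_e = -2 * np.pi * 28.024e9     # rad s^-1 T^-1, as in the text
gamma_n = 2 * np.pi * 10.705e6      # rad s^-1 T^-1, 13C (standard value)

def A_vec(r):
    """Hyperfine vector of Eq. (hypervec) for a nucleus at position r
    (origin at the vacancy site, z along the NV axis)."""
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    zhat = np.array([0.0, 0.0, 1.0])
    pref = mu0 * gamma_e * gamma_n * hbar / (4 * np.pi * rn ** 3)
    return pref * (zhat - 3 * np.dot(zhat, r) * r / rn ** 2)

d = 1.0e-9                      # 1 nm distance
A_axis = A_vec([0.0, 0.0, d])   # nucleus on the NV axis
A_plane = A_vec([d, 0.0, 0.0])  # nucleus in the transverse plane
assert np.allclose(A_axis, -2 * A_plane)   # dipolar angular factor
```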
The internal state of the NV is manipulated with an external MW field polarized in the $x$ axis. The Hamiltonian corresponding to such field is
\begin{equation}\label{controlHamil}
H_\textrm{ c} = \sqrt{2} \Omega(t) S_x \cos{[\omega t -\phi]},
\end{equation}
where $S_x = \frac{1}{\sqrt 2} (|1\rangle\langle 0| +|\!-\!1\rangle\langle 0| + \textrm{ H.c.})$, $\phi$ is the pulse phase, and $\omega$ is the MW driving frequency. If this frequency equals the natural frequency of the $|1\rangle \leftrightarrow |0\rangle$ transition ($D+ \gamma_e B_z/2$), Eq.~(\ref{original}) is, in the rotating frame defined by $D S_z^2 - \gamma_e B_z S_z$,
\begin{equation}\label{simulations}
H = \sum_j \omega_j \ \hat{\omega}_j\cdot \vec{I}_j + \frac{\sigma_z}{2}\sum_j \vec{A}_j\cdot \vec{I}_j + \frac{\Omega(t)}{2} (|1\rangle\langle 0| e^{i\phi} + \textrm{ H.c.}),
\end{equation}
where $\sigma_z=|1\rangle\langle1|-|0\rangle\langle0|$, and $\hat{\omega}_j = \vec{\omega}_j / |\vec{\omega}_j|$ with $\vec{\omega}_j = \omega_\textrm{ L} \hat{z} - \frac{1}{2} \vec{A}_j$ (note that $|\vec{\omega}_j| = \omega_j$). It is worth mentioning that the use of the $|1\rangle \leftrightarrow |0\rangle$ transition translates into a position-dependent shift of the Larmor frequency of the nuclei, $\omega_j \approx \omega_\textrm{ L}- \frac{1}{2}A_j^z$. For some applications, it may be convenient to avoid this shift using the $|1\rangle \leftrightarrow |-1\rangle$ transition instead\footnote{For an example check article 14 from the list of publications.}.
In pulsed DD methods resonance is achieved by the stroboscopic application of the MW driving leading to periodic $\pi$ rotations in the NV electronic spin. In an interaction picture with respect to $\frac{\Omega(t)}{2} (|1\rangle\langle 0| e^{i\phi} + \textrm{ H.c.})$, the stroboscopic application of these MW pulses is described as
\begin{equation}
H = -\sum_j \omega_j \ \hat{\omega}_j\cdot \vec{I}_j +F(t) \frac{\sigma_z}{2}\sum_j \vec{A}_j\cdot \vec{I}_j,
\end{equation}
with the modulation function $F(t)$ periodically taking the values $+1$ or $-1$, depending on the number of $\pi$ pulses applied to the NV. Notice that a similar approach was used in section~\ref{sect:1_PDD} in the context of trapped ions.
A common assumption of standard DD techniques is that $\pi$ pulses are nearly instantaneous, thus highly energetic. However, in real cases we deal with finite-width pulses, such that a time $t_{\pi} = \frac{\pi}{\Omega}$ is needed to produce a $\pi$ pulse (if $\Omega$ is constant during the pulse). This has adverse consequences on the NV-nuclei dynamics, such as the appearance of spurious resonances~\cite{Lorentz15, Haase16, Lang17} or the drastic reduction of the NMR sensitivity at large $B_z$~\cite{Casanova18MW}. In Ref.~\cite{Casanova18MW} a strategy for signal recovery is presented. Here, we extend this approach to achieve selective nuclear interactions. In the following, we demonstrate that the introduction of extended pulses with tailored $\Omega$ leads to tunable NV-nuclei interactions with low-power MW radiation.
\section{DD with instantaneous MW pulses}
We consider the widely used XY8 = XYXYYXYX scheme, X (Y) being a $\pi$ angle rotation around the $x$ ($y$) axis of the Bloch sphere corresponding to states $|0\rangle$ and $|1\rangle$. The sequential application of XY8 on the NV leads to a periodic, even, $F(t)$ that can be expanded in harmonics as $F(t)=\sum_n f_n \cos{(n\omega_\textrm{ M} t)}$, where $f_n = 2/T\int_0^T F(s) \cos{(n \omega_\textrm{ M} s)} ds$, and $\omega_\textrm{ M} = \frac{2\pi}{T}$ with $T$ the period of $F(t)$. See an example of $F(t)$ in the inset of Fig.~\ref{instantaneous}~(a). In an interaction picture with respect to $-\sum_j \omega_j \ \hat{\omega}_j\cdot \vec{I}_j $, Eq.~(\ref{simulations}) reads\footnote{To prove it, one can use the following: $e^{i a\left(\hat{n} \cdot \vec{\sigma}\right)} ~ \vec{\sigma}~ e^{-i a\left(\hat{n} \cdot \vec{\sigma}\right)} = \vec{\sigma} \cos (2a) + \hat{n} \times \vec{\sigma} ~\sin (2a)+ \hat{n} ~ (\hat{n} \cdot \vec{\sigma}) ~ (1 - \cos (2a))~$.}
\begin{equation}\label{modulated}
H = \sum_{n,j} \frac{f_n \cos{(n \omega_\textrm{ M} t)} \sigma_z}{2} \bigg[A_{j}^{x} I^x_j \cos{(\omega_j t)} + A_{j}^{y} I^y_j \sin{(\omega_j t)} + A_{j}^{z} I^z_j\bigg],
\end{equation}
where $A_{j}^{x,y,z} = |\vec{A}_{j}^{x,y,z}|$ with $\vec{A}_{j}^{x} = \vec{A}_{j} - (\vec{A}_{j}\cdot \hat{\omega}_j) \ \hat{\omega}_j$, $\vec{A}_{j}^{y} = \hat{\omega}_j\times \vec{A}_{j}$, $\vec{A}_{j}^{z} = (\vec{A}_{j}\cdot \hat{\omega}_j) \ \hat{\omega}_j$, and $I^x_j = \vec{I}_j \cdot \hat{x}_j$, $I^y_j = \vec{I}_j \cdot \hat{y}_j$, $I^z_j = \vec{I}_j \cdot \hat{z}_j$ with $\hat{x}_j = \vec{A}_{j}^{x}/ A_{j}^{x}$, $\hat{y}_j = \vec{A}_{j}^{y}/ A_{j}^{y}$ and $\hat{z}_j = \vec{A}_{j}^{z}/ A_{j}^{z}$. Notice that the Cartesian basis vectors have been redefined for each nucleus $j$.
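As a quick numerical sanity check (a sketch, not part of the original analysis), the Fourier coefficients $f_n$ of the ideal square-wave modulation produced by instantaneous $\pi$ pulses can be computed by direct quadrature; in magnitude they should approach $4/(\pi n)$ for odd $n$ and vanish for even $n$:

```python
import numpy as np

# Sanity check: for instantaneous pi pulses the XY8 modulation is an even
# square wave, F(t) = sign(cos(w_M t)).  Its Fourier cosine coefficients
# f_n = (2/T) \int_0^T F(s) cos(n w_M s) ds should match 4 sin(pi n/2)/(pi n).
def f_n(n, num=200000):
    T = 2 * np.pi                          # work in units where w_M = 1
    s = (np.arange(num) + 0.5) * T / num   # midpoint quadrature grid
    F = np.sign(np.cos(s))
    return (2 / T) * np.sum(F * np.cos(n * s)) * (T / num)

# Approximately [1.2732, 0.0, -0.4244, 0.0, 0.2546], i.e. 4/pi, 0, -4/(3 pi), ...
print([round(f_n(n), 4) for n in range(1, 6)])
```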
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{figures/Figures_4/fig1.pdf}
\caption{Signal (black-solid) harvested with instantaneous $\pi$ pulses, $ B_z = 500$ G in (a) (b) and $B_z = 1$ T in (c) (d). Circles and triangles are the theoretically expected values for $\langle \sigma_x \rangle$. In (a) we select $l=13, 15$ and their signals are clearly separated. In (b) we use $l=33, 35$ and observe a spectral overlap (green arrow). In (c) (d) the spectral overlap is removed owing to a large $B_z$, while the signal (black-solid) matches the theoretically expected values. Final sequence time for (a) (c) is $\approx 0.5$ ms, and $\approx 1.2$ ms for (b) (d).}
\label{instantaneous}
\end{figure}
Now, one selects a harmonic in the expansion of $F(t)$ and the period $T$ to create a resonant interaction of the NV with a target nucleus (namely the $k$th nucleus). To this end, in Eq.~(\ref{modulated}) we set $n=l$, and choose $T$ such that $l \omega_\textrm{ M} \approx \omega_k$. After eliminating fast rotating terms we get
\begin{eqnarray}\label{big}
H &\approx& \frac{f_l A_k^x}{4}\sigma_z [I_k^{-} e^{i(\omega_k - l\omega_\textrm{ M}) t} + \textrm{ H.c.}]\nonumber\\
&+&\sum_{j\neq k} \frac{f_l A_j^x}{4}\sigma_z [I_j^{-} e^{i(\omega_j - l\omega_\textrm{ M}) t} + \textrm{ H.c.}]\nonumber\\
&+&\sum_{n\neq l} \sum_j \frac{f_n A_j^x}{4}\sigma_z [I_j^{-}e^{i(\omega_j - n\omega_\textrm{ M})t} + \textrm{ H.c.}].
\end{eqnarray}
By inspecting the first line of (\ref{big}), one finds that nuclear spin addressing at the $l$th harmonic is achieved when
\begin{equation}\label{resonancecond}
l \omega_\textrm{ M} = l \frac{2\pi}{T}= \omega_k.
\end{equation}
With this resonance condition, the first line in (\ref{big}) becomes the resonant term $\frac{f_l A_k^x}{4}\sigma_z I_k^x$, while detuned contributions (those in the second and third lines) average out under the RWA. More specifically, with Eq.~(\ref{resonancecond}) at hand we can remove the second line in Eq.~(\ref{big}) if
\begin{equation}\label{cond1}
|\omega_j - \omega_k| \gg f_l A_j^x/4.
\end{equation}
Detuned contributions corresponding to harmonics with $n\neq l$ are in the third line of~(\ref{big}). These can be neglected if
\begin{equation}\label{cond2}
|\omega_j - \frac{n}{l} \omega_k| \approx \omega_\textrm{ L} |l-n|/l \gg f_n A_j^x/4 \ \ \forall n \neq l.
\end{equation}
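To get a feeling for the orders of magnitude involved, the following sketch evaluates conditions~(\ref{cond1}) and~(\ref{cond2}) for two hypothetical $^{13}$C nuclei at $B_z = 1$ T. All hyperfine values are illustrative assumptions, and condition~(\ref{cond2}) is only checked for the nearest competing odd harmonic rather than for all $n$:

```python
import numpy as np

# Illustrative check of the RWA conditions for the l = 13 harmonic at B_z = 1 T.
# The hyperfine couplings below are assumed values, not taken from the thesis.
gamma_C = 2 * np.pi * 10.708e6        # 13C gyromagnetic ratio (rad s^-1 T^-1)
Bz = 1.0                              # static magnetic field (T)
wL = gamma_C * Bz                     # Larmor frequency
l = 13
f_l = 4 / (np.pi * l)                 # |f_l| for instantaneous pulses, l odd
Ax = 2 * np.pi * 10e3                 # assumed A_j^x (rad/s)
Az_k, Az_j = 2 * np.pi * 5e3, 2 * np.pi * 20e3   # assumed A^z values (rad/s)

w_k = wL - 0.5 * Az_k                 # shifted frequencies, w_j ~ wL - A_j^z/2
w_j = wL - 0.5 * Az_j

# Condition (cond1): spectral separation between target and spectator nucleus,
# here demanded with a safety margin of 10.
cond1 = abs(w_j - w_k) > 10 * f_l * Ax / 4
# Condition (cond2), checked only for the nearest competing odd harmonic n = l - 2.
f_near = 4 / (np.pi * (l - 2))
cond2 = wL * 2 / l > 10 * f_near * Ax / 4
print(cond1, cond2)                   # both hold at this large B_z
```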
To strengthen condition~(\ref{cond1}), one can reduce the value of $f_l$ by selecting a large harmonic (see later), while condition~(\ref{cond2}) is better satisfied for large values of $B_z$ since $\omega_\textrm{ L}\propto B_z$. After neglecting the off-resonant terms in Eq.~(\ref{big}), the Hamiltonian is
\begin{equation}
H= \frac{f_lA_k^x}{4}\sigma_z I_k^x.
\end{equation}
For the above Hamiltonian the dynamics can be exactly solved, and the evolution of $\langle \sigma_x \rangle$ (when the initial state is $\rho=|+\rangle\langle +| \otimes \frac{1}{2} \mathbb{I}$, i.e. we consider the nucleus in a thermal state) reads
\begin{equation}
\langle \sigma_x \rangle = \cos{\bigg(\frac{f_lA_k^{x}}{4}t\bigg)}.
\end{equation}
This is the ideal signal retrieved by perfect nuclear addressing when $\omega_\textrm{ M}=\omega_k/l$, and it is represented by the depth markers (circles or triangles) in each panel of Fig.~\ref{instantaneous}.
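The closed-form signal above can be cross-checked by exact numerical evolution of the two-spin Hamiltonian $\frac{f_l A_k^x}{4}\sigma_z I_k^x$. Below is a minimal sketch in arbitrary units, with the assumed coupling $g$ standing for $f_l A_k^x/4$:

```python
import numpy as np
from scipy.linalg import expm

# Check that H = g sigma_z I_x with rho_0 = |+><+| (x) 1/2 (thermal nucleus)
# reproduces <sigma_x>(t) = cos(g t).  Units are arbitrary.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
Ix = sx / 2                            # spin-1/2 operator I_x
g = 0.1                                # stands for f_l A_k^x / 4 (assumed value)
H = g * np.kron(sz, Ix)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho0 = np.kron(np.outer(plus, plus.conj()), np.eye(2) / 2)
Sx = np.kron(sx, np.eye(2))

def signal(t):
    U = expm(-1j * H * t)
    return float(np.real(np.trace(Sx @ U @ rho0 @ U.conj().T)))

print(signal(7.0), np.cos(g * 7.0))    # the two numbers coincide
```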
Assuming instantaneous $\pi$ pulses, standard DD sequences with constant $\Omega$~\cite{Maudsley86, Gullion90, Souza11} lead to $|f_l| = \frac{4}{\pi l}$ for $l$ odd, and $f_l = 0$ for $l$ even. Thus, large harmonics (i.e. with large $l$) reinforce condition~(\ref{cond1}) as they lead to a smaller value for $f_l$. In Fig.~\ref{instantaneous} we compute the signal corresponding to the NV observable $\langle\sigma_x\rangle$ in a sample that contains 150 $^{13}$C nuclei ($\gamma_{^{13}\textrm{ C}} = (2\pi)\times 10.708$ MHz/T). To obtain sufficient spectral resolution we use large harmonics. Figure~\ref{instantaneous} (a) shows the signal for $l=13, 15$ and the theoretically expected values for $\langle\sigma_x\rangle$ (triangles for $l=13$ and circles for $l=15$) that would appear if perfect single nuclear addressing were achieved. We observe that the computed signal does not match the theoretically expected values. In addition to an imperfect fulfillment of conditions~(\ref{cond1}, \ref{cond2}), this is also a consequence of using large harmonics since, for large $l$, the period $T=2\pi l/\omega_k$ and the spacing between $\pi$ pulses grow, see inset in Fig.~\ref{instantaneous}(a), spoiling the efficient elimination of the $\sigma_z A_{j}^{z} I^z_j$ terms in Eq.~(\ref{modulated}) by the RWA. The inset of Fig.~\ref{instantaneous} (a) sketches the pulse structure we repeatedly apply ($20$ times in (a) and (b), while in (c) and (d) that structure is used 400 times) to get the signals in Fig.~\ref{instantaneous}; red blocks are instantaneous $\pi$ pulses, while their associated $F(t)$ is in blue. Working with even larger harmonics introduces the problem of spectral overlaps. These appear when the signal associated to a certain harmonic contains resonance peaks corresponding to other harmonics. In Fig.~\ref{instantaneous} (b) one can see (green arrow) how a peak of $l=35$ (green circle) mixes with the signal of $l=33$ (orange triangle).
This is an additional disadvantage, since it makes the interpretation of the spectrum challenging.
Condition~(\ref{cond2}) is strengthened using a large $B_z$. This also implies a larger resonance frequency (namely $\omega_k$) for each nucleus. Addressing a large $\omega_k$ is beneficial since the period $T$ (note that, in resonance, $T=2\pi l/\omega_k$) and the interpulse spacing get shorter, resulting in a better cancellation of the $\sigma_z A_{j}^{z} I^z_j$ terms. In Fig.~\ref{instantaneous} (c) (d), we use a large $B_z=1$ T and the spectral overlap is removed, while the computed signal matches the theoretically expected values (blue triangles).
Unfortunately, treating $\pi$ pulses as instantaneous is not correct in situations with large $B_z$, since nuclei have time to evolve during the $\pi$ pulse execution, leading to a signal drop~\cite{Casanova18MW}. Hence, if one cannot deliver enough MW power to the sample, the results in Fig.~\ref{instantaneous} (c) and (d) are not achievable.
\section{A solution with extended pulses}
In realistic situations $\pi$ pulses are finite, thus the value of $f_l=2/T\int_0^T F(s) \cos{(l \omega_\textrm{ M} s)} ds$ has to be computed by considering the intrapulse contribution. This is (for a generic $m$th pulse) $2/T \int_{t_m}^{t_m+t_{\pi}} F(s) \cos{(l\omega_\textrm{ M} s)} \ ds$, with $t_\pi$ being the $\pi$ pulse time and $t_m$ the instant at which we start applying MW radiation, see Fig.~\ref{fig:modulatedfun}. In addition, the $F(t)$ function must satisfy the following conditions: outside the $\pi$ pulse regions $F(t)=\pm 1$, while $F(t)$ is bounded as $-1 \leq F(t) \leq 1$ $\forall t$, Fig.~\ref{fig:modulatedfun}.
Now, we present a design for $F(t)$ that satisfies the above conditions, cancels intrapulse contributions, and leads to tunable NV-nuclei interactions. In particular, for the $m$th pulse
\begin{equation}\label{modulatedF}
F(t) = \cos{\big[\pi(t - t_{m})/t_\pi \big]} + \sum_{q}\alpha_{q}(t) \sin{\big[ q l \omega_\textrm{ M}(t - t_{p}) \big]}.
\end{equation}
Here, $\alpha_{q}(t)$ are functions to be adjusted (see later) and $t_p=t_m+t_{\pi}/2$ is the central point of the $m$th pulse, Fig.~\ref{fig:modulatedfun}. We modulate $F(t)$ in the intrapulse region such that (for the $m$th pulse) $\int_{t_m}^{t_m+t_{\pi}} F(s) \cos{(l\omega_\textrm{ M} s)} \ ds =0$, i.e. $F(t)$ cancels the intrapulse contribution. Once we have $F(t)$, we find that the associated Rabi frequency is
\begin{equation}\label{modulatedOmega}
\Omega(t) = \frac{\partial}{\partial t} \arccos[F(t)]=-\frac{\dot{F}(t)}{\sqrt{1-F(t)^2}},
\end{equation}
provided $F(t)$ is differentiable. See appendix~\ref{FindOmega} for additional details. Now, the value of the $f_l$ coefficient obtained with the \textit{ modulated} $F(t)$ in Eq.~(\ref{modulatedF}) (from now on denoted $f_l^\textrm{ m}$) depends only on the integral outside the $\pi$ pulse regions. This can be calculated, leading to
\begin{equation}\label{modulatedf}
f_l^\textrm{ m} = \frac{4}{\pi l }\cos{\bigg(\pi \frac{t_{\pi}}{T/l}\bigg)}\sin{(\pi l /2)},
\end{equation}
which is our main result. For the derivation, see appendix~\ref{calcext}. By modifying the ratio between $t_\pi$ (the extended $\pi$ pulse length) and $T/l$ we can select a value for $f^\textrm{ m}_l$ and achieve tunable NV-nuclei interactions. According to Eq.~(\ref{modulatedf}), $f_l^\textrm{ m}$ can take any value between $-\frac{4}{l\pi}$ and $\frac{4}{l\pi}$, see solid-black curve in Fig.~\ref{whole}(a). In addition, owing to the periodic character of Eq.~(\ref{modulatedf}), one can get an arbitrary value (between $-\frac{4}{l\pi}$ and $\frac{4}{l\pi}$) for $f_l^\textrm{ m}$ even with large $t_{\pi}$. This allows for highly extended $\pi$ pulses, thus a low delivered MW power. On the contrary, for standard $\pi$ pulses in the form of \textit{ top-hat} functions (i.e. generated with constant $\Omega$) one finds (see appendix~\ref{calcth} for the calculation)
\begin{equation}\label{tophatcoeff}
f_l^\textrm{ th} = \frac{4\sin{(\pi l /2)}\cos{(\pi l t_{\pi}/T)}}{\pi l (1-4l^2t_{\pi}^2/T^2 )}.
\end{equation}
Unlike $f^\textrm{ m}_l$, the coefficient $f_l^\textrm{ th}$ decreases for growing $t_{\pi}$; note that $ |f_l^\textrm{ th}| \propto [t_{\pi}/(T/l)]^{-2}$. This behaviour can be observed in Fig.~\ref{whole}(a), curve over the yellow area. Hence, standard top-hat pulses cannot operate with a large $t_{\pi}$, as this leads to a strong decrease of $f_l^\textrm{ th}$, thus to signal loss.
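Both coefficients are easy to compare numerically. The sketch below evaluates Eqs.~(\ref{modulatedf}) and~(\ref{tophatcoeff}) for $l=13$ over a range of ratios $t_\pi/(T/l)$, confirming that $f_l^\textrm{ m}$ retains the full amplitude $4/(\pi l)$ while $f_l^\textrm{ th}$ decays for extended pulses:

```python
import numpy as np

# Compare the modulated and top-hat coefficients as a function of the ratio
# r = t_pi/(T/l), for l = 13.  The grid avoids the removable point r = 1/2
# where the top-hat formula is 0/0.
l = 13
r = 0.005 + 0.01 * np.arange(800)
f_m = 4 / (np.pi * l) * np.cos(np.pi * r) * np.sin(np.pi * l / 2)
f_th = 4 * np.sin(np.pi * l / 2) * np.cos(np.pi * r) / (np.pi * l * (1 - 4 * r**2))

# The modulated coefficient keeps oscillating with full amplitude 4/(pi l) ...
print(np.abs(f_m).max(), 4 / (np.pi * l))
# ... while the top-hat one decays roughly as r^-2 for highly extended pulses:
print(np.abs(f_th[r > 6]).max())
```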
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figures/Figures_4/Fig2.pdf}
\caption{Upper panel, one period of $F(t)$ (solid-blue) including the intrapulse behavior, and the $\cos{(l\omega_\textrm{ M}t)}$ function. Extended $\pi$ pulses span a time $t_{\pi}$ (intrapulse regions appear marked in red). In this example, $l=13$ and $t_{\pi}\approx4.5\times(T/l)$. Solid-black, behavior of $F(t)$ when standard top-hat pulses are applied. Bottom panel, train of modulated $\Omega(t)$ leading to $F(t)$. }
\label{fig:modulatedfun}
\end{figure}
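The relation between $F(t)$ and $\Omega(t)$ can also be illustrated numerically. In this sketch (assumed units) we keep only the cosine term of Eq.~(\ref{modulatedF}), i.e. $\alpha_q=0$, for which $\arccos F$ grows linearly inside the pulse, so the recovered Rabi frequency is constant and the total pulse area equals $\pi$:

```python
import numpy as np

# Recover Omega(t) = d/dt arccos[F(t)] inside one extended pulse.  With only
# the envelope F(t) = cos(pi (t - t_m)/t_pi), arccos F = pi (t - t_m)/t_pi,
# hence Omega = pi/t_pi and the pulse area is pi (a pi pulse), as expected.
t_m, t_pi = 0.0, 1.0                          # assumed units
t = np.linspace(t_m + 1e-4, t_m + t_pi - 1e-4, 10001)
F = np.cos(np.pi * (t - t_m) / t_pi)
Omega = np.gradient(np.arccos(np.clip(F, -1.0, 1.0)), t)
area = np.sum(Omega) * (t[1] - t[0])          # ~ pi
print(Omega[5000], area)
```

With the full modulation ($\alpha_q\neq 0$) the same two lines yield the non-trivial $\Omega(t)$ train shown in the bottom panel of Fig.~\ref{fig:modulatedfun}.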
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{figures/Figures_4/Fig3.pdf}
\caption{ (a) $f_l^\textrm{ m}$ (black-solid) and $f_l^\textrm{ th}$ (curve on the yellow area) as a function of the ratio $t_{\pi}/(T/l)$ for $l=13$. (b) $\langle \sigma_x\rangle$ (curves over dark and clear areas) for the conditions discussed in the main text. Inset, $\langle \sigma_x\rangle$ computed with top-hat pulses. For all numerical simulations in (b) we assume a $1\%$ of error in $\Omega(t)$~\cite{Cai12}.}
\label{whole}
\end{figure}
To show the performance of our theory, we select a Gaussian form for $\alpha_{1}(t) = a_1 e^{-(t-t_p)^2/2c_1^2}$ and set $\alpha_{q}(t)=0$, $\forall q>1$. See one example of a modulated $F(t)$ in Fig.~\ref{fig:modulatedfun} (solid-blue) as well as the behavior of $F(t)$ if common top-hat $\pi$ pulses are used (solid-black). Once we choose the $t_{\pi}$, $l$, and $c_1$ parameters that define the shape of $F(t)$, we select the remaining constant $a_1$ such that it cancels the intrapulse contribution, i.e. $\int_{t_m}^{t_m+t_{\pi}} F(s) \cos{(l\omega_\textrm{ M} s)} \ ds =0$. By inspecting Eq.~(\ref{modulatedF}) one easily finds that a natural choice for $a_1$ is given by
\begin{equation}\label{a1ratio}
a_1=-\frac{\int_{t_m}^{t_m+t_\pi} \cos{\big[\pi(s - t_{m})/t_\pi \big]} \cos{(l\omega_\textrm{ M} s)} \ ds}{\int_{t_m}^{t_m+t_\pi} e^{-\frac{(s-t_p)^2}{2c_1^2}}\sin{\big[ l \omega_\textrm{ M}(s - t_{p}) \big]} \cos{(l\omega_\textrm{ M} s)} \ ds}.
\end{equation}
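Equation~(\ref{a1ratio}) is straightforward to evaluate by numerical quadrature. The sketch below uses illustrative parameters (including an arbitrary pulse start time $t_m$; in practice one should also verify that the resulting $F(t)$ stays within $[-1,1]$) and checks that the intrapulse contribution indeed cancels:

```python
import numpy as np

# Numerical evaluation of a_1 for assumed parameters: l = 13, t_pi = 6 (T/l),
# c_1 = 0.07 t_pi, and an arbitrary pulse start time t_m.
l, T = 13, 1.0
wM = 2 * np.pi / T
t_pi = 6 * T / l
t_m = 0.2 * T
t_p = t_m + t_pi / 2
c1 = 0.07 * t_pi

s = np.linspace(t_m, t_m + t_pi, 200001)
ds = s[1] - s[0]
cos_l = np.cos(l * wM * s)
base = np.cos(np.pi * (s - t_m) / t_pi)
gauss = np.exp(-((s - t_p) ** 2) / (2 * c1**2)) * np.sin(l * wM * (s - t_p))

a1 = -np.sum(base * cos_l) / np.sum(gauss * cos_l)   # discretized Eq. (a1ratio)
F = base + a1 * gauss
residual = np.sum(F * cos_l) * ds      # intrapulse contribution, ~0 by design
print(a1, residual)
```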
In Fig.~\ref{whole}(b) we simulate a sample containing 5 protons\footnote{ Note that the presence of extended pulses precludes the use of numerical techniques, as those in \cite{Maze08}, to simulate large nuclear samples. Hence, because of machine restrictions, here we focus on a sample that includes five nuclei and the NV.} at an average distance from the NV of $\approx 2.46$ nm. Numerical simulations have been performed starting from Eq.~(\ref{simulations}) without further assumptions. The 5-H target cluster has the hyperfine vectors (note $\gamma_\textrm{ H} = (2\pi)\times 42.577$ MHz/T) $\vec{A}_1 = (2\pi)\times[-1.84, -3.19, -11.02]$, $\vec{A}_2 = (2\pi)\times[2.38, 5.04, -8.78]$, $\vec{A}_3 = (2\pi)\times[8.09, 2.66, -1.02]$, $\vec{A}_4 = (2\pi)\times[4.26, 2.46, 3.48]$, and $\vec{A}_5 = (2\pi)\times[4.07, 1.00, -7.09]$ kHz. We simulate two different sequences, leading to two signals, using our extended $\pi$ pulses under a large magnetic field $B_z=1$ T. Vertical panels with yellow squares mark the theoretically expected resonance positions and signal contrast. For the first computed signal, curve over the dark area in Fig.~\ref{whole}(b), we apply an XY8 sequence where the phase of each X (Y) extended pulse is $\phi=0$ ($\phi=\pi/2$). The modulated Rabi frequency $\Omega(t)$ is selected such that it leads to $f_{13}^\textrm{ m}=4/(13\pi) = 0.0979$ for $l=13$ (note this corresponds to the maximum value for $f_{13}^\textrm{ m}$) with a pulse length $t_{\pi} = 6 \times(T/l)$. In addition, we take the width of the Gaussian function $\alpha_{1}(t)$ as $c_1 = 0.07 t_{\pi}$. The scanning frequency $\omega_\textrm{ M}$ spans around $\gamma_\textrm{ H} B_{z}/l$ for $l=13$, see horizontal axis in Fig.~\ref{whole}(b). After repeating the XY8 sequence $400$ times, i.e. 3200 extended $\pi$ pulses leading to a final sequence time of $t_{f} \approx 0.488$ ms, we get the signal over the dark area.
As we observe in Fig.~\ref{whole}(b), this sequence does not resolve all nuclear resonances of the 5-H cluster.
\pagebreak
To overcome this situation, we make use of the tunability of our method, and simulate a second sequence with extended $\pi$ pulses, leading to the signal over the clear area in Fig.~\ref{whole}~(b). This has been computed with a smaller value for $f_{13}^\textrm{ m} = 0.0979/3 = 0.0326$, which is achieved with $t_{\pi} \approx 6.4\times(T/l)$, i.e. a slightly longer $\pi$ pulse than those in the preceding situation, and $c_1=0.07 t_{\pi}$. As the $f_{13}^\textrm{ m}$ coefficient is now smaller, we have repeated the XY8 sequence $400\times 3$ times (i.e. 9600 pulses) to get the same contrast as in the previous case. The final time of the sequence is $t_{f} \approx 1.5$ ms. As we observe in Fig.~\ref{whole}(b), our method faithfully resolves all resonances in the 5-H cluster, and reproduces the theoretically expected signal contrast. It is worth noting that the tunability offered by our method will be of help for different quantum algorithms with NV centers~\cite{Ajoy15, Perlin18, Casanova16, Casanova17}.
\section{MW power and nuclear signal comparison}
In the inset of Fig.~\ref{whole}(b) we plot the signals one would get using standard top-hat pulses with the same average power as our extended pulses in Fig.~\ref{whole} (b). We use that the energy of each top-hat and extended $\pi$ pulse, $E^\textrm{ th}(t_\pi)$ and $E^\textrm{ ext}(t_\pi)$, is $\propto \int \Omega^2(s) ds$, where the integral extends over the $\pi$ pulse duration (top-hat or extended). For an explicit derivation of the energy relations see appendix~\ref{energydelivery}. The solid-orange signal in the inset has been computed with a XY8 sequence containing 3200 top-hat $\pi$ pulses with a constant $\Omega \approx (2\pi)\times 18.2$ MHz. For this value of $\Omega$, a top-hat $\pi$ pulse contains the same average power as each extended $\pi$ pulse used to compute the signal over the dark area in Fig.~\ref{whole}(b), i.e. $E^\textrm{ th}(t_\pi)= E^\textrm{ ext}(t_\pi)$. Unlike our method, the sequence with standard top-hat $\pi$ pulses produces a signal with almost no contrast. Note that the vertical axis of the inset in Fig.~\ref{whole}(b) has a maximum depth value of 0.98, and the highest contrast achieved with top-hat pulses falls below 0.99. The dashed signal in the inset has been obtained with top-hat $\pi$ pulses with $\Omega\approx(2\pi)\times 4.68$ MHz. Again, this is done to ensure we use the same average power as the sequence leading to the curve over the clear area in Fig.~\ref{whole}(b). In this last case, we observe that the signal harvested with standard top-hat $\pi$ pulses does not show any appreciable contrast. These results indicate that our method, using pulses with modulated amplitude, is able to achieve tunable electron-nuclear interactions, while regular top-hat pulses with equivalent MW power fail to resolve these interactions.
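The scaling behind this comparison can be summarized in a few lines (assumed units): for a top-hat pulse, the $\pi$-area constraint $\Omega t_\pi = \pi$ fixes $E^\textrm{ th} \propto \pi^2/t_\pi$, so stretching the pulse directly lowers the delivered energy:

```python
import numpy as np

# Back-of-the-envelope energy bookkeeping, E ∝ int Omega(s)^2 ds over the pulse.
# A top-hat pi pulse obeys Omega * t_pi = pi, hence E_th = pi^2 / t_pi.
def tophat_energy(t_pi):
    omega = np.pi / t_pi
    return omega**2 * t_pi            # = pi^2 / t_pi

E_short = tophat_energy(0.1)          # reference pi-pulse duration (assumed)
E_long = tophat_energy(6 * 0.1)       # 6x longer pulse of equal (pi) area
print(E_long / E_short)               # -> 1/6
```

This is why extended pulses with the modulated $F(t)$ can keep the full $f_l^\textrm{ m}$ amplitude at a fraction of the MW energy required by short top-hat pulses.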
In summary, in this chapter we presented a general method to design extended $\pi$ pulses which are energetically efficient, and can be incorporated into stroboscopic DD techniques such as the widely used XY8 sequence. Our method leads to tunable, hence selective, interactions between an NV quantum sensor and nuclear spins at large static magnetic fields, which represents optimal conditions for nanoscale NMR.
\chapter{Introduction}
\label{chapter:chapter_0}
\thispagestyle{chapter}
Quantum mechanics, which describes the natural processes that take place at the atomic scale, is one of the most successful theories in physics. From the very beginning, light-matter interaction, or the interaction between atoms and electromagnetic fields, has played a major role in the development of the quantum theory. The quantisation of energy was first proposed by Max Planck in 1900 to describe the electromagnetic spectral distribution produced by a thermal source~\cite{Planck00}. In 1905, Einstein explained the photoelectric effect~\cite{Einstein05}, and, by 1917, he had developed a model for light-matter interaction that accounted for the absorption and stimulated or spontaneous emission of light by atoms~\cite{Einstein17}. The quantum theory was further formalised by Erwin Schr\"{o}dinger in 1926, when he proposed a wave equation that correctly predicted the spectral lines of the hydrogen atom~\cite{Schrodinger26}. Around the same years, Paul Dirac made the first attempt to quantise the electromagnetic field~\cite{Dirac27}. However, it was not until the late 1940's that a theory for the interaction between quantised light and matter became available~\cite{Feynman85}, namely quantum electrodynamics (QED), accurately predicting the Lamb shift measured in the hydrogen microwave (MW) spectrum~\cite{Lamb47}.
The rapid growth of MW and radio-frequency technology for telecommunications in the first half of the twentieth century led to the invention of the maser in 1953~\cite{Gordon55}. The maser produces coherent MW radiation through amplification by stimulated emission. This, along with other discoveries, such as optical pumping~\cite{Kastler50}, paved the way to the construction of the first laser in 1960~\cite{Taylor00}. The laser helped to formalise the theory of optical coherence~\cite{Glauber63a,Glauber63b}, and enabled coherently controlled interactions between light and matter. In the mid 70's, the ability to trap charged atoms~\cite{Paul90} was combined with lasers, leading to the first laser cooling protocol~\cite{Wineland75,Hansch75}. During the 80's, laser cooling and trapping techniques for neutral atoms were developed~\cite{Phillips98}, quantum jumps were observed~\cite{Nagourney86,Sauter86,Bergquist86} and ground-state cooling of a single trapped ion was achieved~\cite{Diedrich89}. Lasers also made it possible to study Rydberg atoms in optical or MW cavity resonators, a research field called cavity QED~\cite{Haroche89}. The interaction between these atoms with highly-excited electronic states and cavity modes leads to changes in the atomic properties, e.g. the enhancement or suppression of the spontaneous emission rate\footnote{The enhancement of the spontaneous emission rate is called the Purcell effect, and was previously discovered by Edward M. Purcell in the context of nuclear magnetic resonance~\cite{Purcell46}.}~\cite{Hulet85,Jhe87}. In 1992, the so-called strong-coupling (SC) regime between light and matter was achieved for a single atom~\cite{Thompson92}, allowing the study of the interaction between atoms and photons at the level of single quanta~\cite{Brune96}. Light-matter interaction in the SC regime allows the formation of hybrid modes called polaritons, which share properties of both light and matter.
These play a role, for example, in the localisation of light at small volumes via surface plasmons for subwavelength imaging~\cite{Vasa19}.
In a parallel effort, in the 1930's, Alan Turing laid the foundations of the theory of computation when he showed that a machine with enough memory and following a very limited set of instructions, the so-called Turing machine, could efficiently perform any algorithmic process~\cite{Turing37}. This idea, along with the discovery of the first transistor\footnote{The Nobel Prize in Physics 1956 was awarded jointly to William B. Shockley, John Bardeen and Walter H. Brattain ``for their researches on semiconductors and their discovery of the transistor effect".}, and later, the MOSFET~\cite{Lojek07} in 1959, led to the exponential development of modern digital computers~\cite{Moore65}. Algorithms started to be classified in terms of the amount of resources (time and memory) needed to find a solution using a Turing machine~\cite{Arora09}. For some problems, like multiplication, this amount of resources increases polynomially with the size of the input number, making them easy to solve. Other problems are hard to solve, for example, factoring integer numbers, for which no algorithm is known that scales polynomially with the input size. Another important issue was that, in the model of computation described by Turing, logical operations are irreversible. In 1961, Landauer used Shannon's information theory~\cite{Shannon48} to state that each of these irreversible operations would increase entropy, setting a fundamental energy cost to each operation~\cite{Landauer61}. Landauer's principle motivated the search for reversible computation models~\cite{Bennett73}, and, in the early 80's, the idea of a quantum Turing machine was proposed and developed by Benioff~\cite{Benioff80}, Manin~\cite{Manin80} and Deutsch~\cite{Deutsch85}. This quantum computer would store the information in two-level quantum systems, called quantum bits or qubits. Due to the linearity of the Schr\"odinger equation, this model of computation would be reversible.
Furthermore, Feynman suggested that the quantum computer would be a natural testbed for simulating quantum physics~\cite{Feynman82}, while this would typically require an exponential amount of resources in a classical Turing machine~\cite{Poplavskii75}. In this way, quantum simulation appeared as the first practical application of a quantum computer. During the same years, the first ideas for secure communication using quantum mechanical variables were introduced~\cite{Wiesner83,Bennett14}, originating a research field that is known today as quantum cryptography~\cite{Bennett92}. In 1994, Peter Shor proposed a quantum algorithm for efficiently factoring integer numbers~\cite{Shor94}, which boosted what has been later called ``the second quantum revolution" by Dowling and Milburn~\cite{Dowling03}.
By that time, the control of individual quantum systems was already possible\footnote{In 2012, Serge Haroche and David J. Wineland received the Nobel Prize ``for ground-breaking experimental methods that enable measuring and manipulation of individual quantum systems".}, and the race to build a quantum computer started. A few months after the announcement of Shor's algorithm, Cirac and Zoller proposed a method to implement the controlled-NOT gate using two trapped ions~\cite{Cirac95}. Based on that proposal, the first two-qubit gate was realized by the group led by Wineland~\cite{Monroe95}. In the years that followed, numerous proposals for the physical realization of qubits and gates using photons~\cite{Chuang95,Knill01}, ions~\cite{Sorensen99,Sorensen00,Solano99,Jonathan00}, atoms in cavity QED~\cite{Pellizzari95} or optical lattices~\cite{Brennen99}, anyons~\cite{Kitaev03}, semiconductors~\cite{Loss98}, nuclear magnetic resonance (NMR)~\cite{Cory97,Gershenfeld97}, crystallographic defects in diamond~\cite{Cappellaro09}, and superconducting circuits~\cite{Nakamura99} were introduced. Besides, new quantum algorithms were developed~\cite{Grover96,Harrow09}, and the first protocols for quantum error correction were introduced~\cite{Shor95,Steane96}. Ideally, a quantum computer ought to be built from qubits that interact strongly among them, yet are isolated from the environment to avoid any source of decoherence. On the other hand, we should be able to control and measure the system from the exterior, which requires the aforementioned isolation to be highly selective. In this regard, quantum error correction~\cite{Gottesman09} provides pathways to build fault-tolerant quantum computers with qubit errors below an acceptable given threshold~\cite{Campbell17}. In exchange, one needs to encode the quantum information of a logical qubit in many physical qubits.
Despite the impressive progress made increasing both the number of qubits and their coherence properties~\cite{Monz11,Barends14,Kaufman15,Ebert15,Lekitsch17,Popkin16,Wang16c,Barredo18,Zeng17,Bernien17,Kandala17,Wang20,Bermudez17,Zajac17}, the present-day quantum technology, sometimes referred to as noisy intermediate-scale quantum (NISQ) technology~\cite{Preskill18}, remains insufficient for a physical realisation of a fault-tolerant universal quantum computer~\cite{Wilczek16}. This has stimulated the development of alternative models of quantum computation adapted to NISQ devices, such as digital-analog quantum computation~\cite{Parra20}, variational quantum eigensolvers~\cite{McClean16}, constant-depth quantum circuits \cite{Terhal04}, temporally unstructured quantum circuits~\cite{Shepherd09}, quantum circuits with identical noninteracting bosons (also known as boson-sampling devices)~\cite{Aaronson11}, random circuits of coupled qubits~\cite{Boixo18} or quantum annealers \cite{Das08}. Because of their reduced experimental complexity, all these models of quantum computation are promising candidates to show a speedup over classical algorithms in NISQ architectures, a feat often named as quantum supremacy or quantum advantage~\cite{Preskill13,Papageorgiou13,Harrow17}. In fact, last year, quantum supremacy was claimed for the first time by the group led by Martinis using random circuits of superconducting qubits~\cite{Arute19}.
Arguably the most important application of quantum computation is quantum simulation~\cite{Cirac12,BlochDalibard12,Blatt12,AspuruGuzik12,Houck12,VanHoucke12,Georgescu14,Gross17}, which consists of reproducing relevant quantum models using controllable quantum systems. The idea was first proposed by Manin~\cite{Manin80} and Feynman~\cite{Feynman82} in the early 80's, and was further formalised by Lloyd~\cite{Lloyd96}, who theoretically proved that quantum computers can efficiently simulate any local quantum system. Because the dimension of the Hilbert space grows exponentially with the number of constituents of the system, the classical simulation of quantum many-body phenomena is, in general, extremely inefficient, and, often, impossible~\cite{Laughlin00}.
On the contrary, quantum simulators can efficiently simulate models in quantum field theory~\cite{Byrnes06,Cirac10,Casanova11,Mazza12,Jordan12,Hauke13}, quantum chemistry~\cite{Kassal11,Yung14,ArguelloLuengo19}, condensed matter~\cite{Jotzu14,Yan20,Muniz20,Tang20}, or even quantum gravity~\cite{Keren19}. The universal simulator described by Lloyd was later called digital quantum simulator, and it is different from analog~\cite{Britton12,Parsons16,Zhang17} or digital-analog~\cite{Arrazola16,Lamata18} quantum simulators which are non-universal platform-dependent models of computation designed to mimic the behaviour of specific quantum systems. These, however, are better adapted to NISQ systems, and have already been used to realise physical predictions beyond the reach of classical methods~\cite{Trotzky12}. Quantum computers and simulators are better than classical computers at the task of simulating nature~\cite{Bernstein97}, and, thus, are expected to become an important tool for scientific discovery~\cite{Alexeev19}.
The same extraordinary sensitivity to external agents that makes the construction of a quantum computer challenging can be exploited to build quantum sensors. Quantum sensing is the use of quantum objects or quantum coherence to measure a physical quantity~\cite{Degen17}. It also refers to the use of entanglement to improve the precision of a measurement beyond classical limits~\cite{Giovannetti11}. Examples of quantum sensors include atomic clocks~\cite{Brewer19}, which serve as the best time and frequency standards, superconducting quantum interference devices and thermal vapours of atoms, which make the most precise magnetometers~\cite{Dang10}, nitrogen-vacancy (NV) centers in diamond as magnetic sensors at the nanoscale~\cite{Maletinsky12,Rondin12}, trapped ions as electric-field and force sensors~\cite{Maiwald09,Biercuk10}, or squeezed states of light for gravitational wave detection~\cite{Abadie11}. Among quantum technologies, quantum sensors have the greatest potential for practical applications in the near term. Quantum correlations among photons can be exploited to achieve target detection in unfavourable scenarios with bright background noise and a low-reflectivity target. Extending this sensing scheme, called quantum illumination~\cite{Tan08,Lloyd08,Zhang13}, to the MW regime is believed to be the way forward in the construction of the first quantum radar~\cite{Barzanjeh15}. Another example of practical applications is given by atomic clocks. Clocks in relative motion or at different gravitational potentials experience time differently. Quantifying this time difference can then be useful to determine the structure of massive objects in geophysics and hydrology~\cite{Chou10,McGrew18}. Furthermore, quantum sensing may also play a crucial role in scientific discovery.
For example, creating quantum superpositions with massive particles~\cite{Bose99,Chang10,RomeroIsart11b,Scala13,Pedernales20,Hall98,Fein19} is useful to test collapse models, which state that the Schr\"odinger equation is an approximation that breaks down with large enough masses delocalised above a critical distance~\cite{RomeroIsart11a,Bassi13}, or investigate the quantum nature of gravity, for instance, observing gravity-mediated entanglement~\cite{Bose17,Marletto17}.
Quantum science and technology is a burgeoning research field with promising applications that will have a tremendous impact on society. Light-matter interactions are at the heart of quantum platforms, and are essential to exploit the quantum behaviour of these systems. As an example, MW radiation or laser fields can be applied to quantum systems based on trapped atoms for the sake of high-fidelity quantum information processing. Existing models of light-matter interaction provide us with the theoretical framework needed to explore new forms of manipulating quantum states, and, helped by numerical simulations, search for methods that achieve, e.g., energy-efficient quantum sensing or robust quantum logic operations. Conversely, controlled quantum systems can be used to study models of light-matter interaction without conventional approximations, in regimes where classical numerical methods break down. In summary, this thesis focuses on the design of electromagnetic radiation patterns capable of tailoring light-matter interactions in quantum systems to achieve fast and robust quantum information processing, energy-efficient quantum sensing or the simulation of quantum models whose dynamics can be engineered.
\section{What you will find in this thesis}
In this thesis, we design new ways of controlling light-matter interactions for specific applications in quantum computing, quantum simulation and quantum sensing with trapped ions, ultracold atoms in optical lattices, and NV centers. For that, \textbf{ in chapter~\ref{chapter:chapter_0}} we first review central models of light-matter interaction, namely the semiclassical and quantum Rabi models. We also explain basic concepts of dynamical decoupling (DD), such as the spin echo or the dressed-state approach. Finally, we present the quantum platforms studied in the thesis, and review their importance in the current ecosystem of quantum technologies.
\textbf{ In chapter~\ref{chapter:chapter_1}}, we propose two different methods to realise high-fidelity entangling gates with trapped ions using MW fields. The first uses pulsed MW radiation to generate fast gates with experimental parameters achievable in the near term. The second method applies to current experimental regimes and uses continuous radiation patterns to drive the gate. As opposed to lasers, MW sources are easy to control and can be incorporated into scalable trap designs, which makes MW-driven trapped ions a leading approach to scale up trapped-ion quantum processors. Our gate designs account for the main sources of decoherence present in those processors, and, using pulsed and continuous DD techniques, we show how to minimise their effect, reaching fidelities above $99.9\%$ in realistic experimental scenarios.
\textbf{ In chapter~\ref{chapter:chapter_2}}, we explore two models of light-matter interaction and study their dynamics in different parameter regimes. In the case of the Rabi-Stark model, we find selective multiphoton interactions in the SC and USC regimes. Using time-dependent perturbation theory, we develop an analytical framework to explain these interactions. In the case of the nonlinear quantum Rabi model, we find that its dynamical behaviour is confined to certain regions of the Fock space, dividing the latter into different sections. Combining this property with dissipation, we design a method to generate large-$n$ Fock states with trapped ions. We also provide methods to simulate both the Rabi-Stark and the nonlinear quantum Rabi models with a laser-driven trapped ion.
\textbf{ In chapter~\ref{chapter:chapter_3}}, we introduce a method to realise boson sampling with ultracold atoms in optical lattices. The control of such system is achieved using both MW and laser fields, and combining pulsed and continuous radiation patterns. Boson sampling is a model of quantum computation with potential to demonstrate quantum supremacy in the near future. Using simple error-scaling models, we estimate how the experimental errors should scale with the number of bosons (atoms, in this case) in order to show advantage with respect to the best classical algorithms. We benchmark our error model with exact numerical simulations of non-Hermitian Hamiltonian models that include particle loss.
\textbf{ In chapter~\ref{chapter:chapter_4}}, we present a design of amplitude-modulated MW pulses able to achieve selective NV-nuclei interactions at strong magnetic fields. Working with strong magnetic fields can enhance the resolution of NMR spectra; however, if the MW power is not increased accordingly, it leads to a decay of the NMR signal. On the other hand, working with a high MW power may not be convenient, especially when dealing with biological samples that could be damaged when exposed to a large amount of radiation. Our method circumvents these issues by using amplitude-modulated pulses that enhance the resolution of the measured signals.
\section{Models for light-matter interaction}\label{sec:Intro}
\subsection{The Rabi model}\label{subsec:RabiModel}
The Rabi model, or the semiclassical Rabi model, was introduced by Isaac Rabi in 1937 to describe the effect of oscillating magnetic fields on atoms with nuclear spin~\cite{Rabi37}. Despite being originally introduced in the context of NMR, the Rabi model is also the simplest model that describes the interaction between a two-level atom and a classical electromagnetic field. Think of a spin-$1/2$ particle under the influence of a static magnetic field $\vec{B}_0=B_0\hat{z}$, where $\hat{z}$ is the unit vector in the $z$ direction. The energy of such a particle is described by the following Hamiltonian
\begin{equation}\label{Rabi1}
H_0=-\vec{\mu}\cdot \vec{B}_0=\frac{\hbar\omega_0}{2}\sigma_z,
\end{equation}
where $\vec{\mu}=-\gamma\frac{\hbar}{2} \vec{\sigma}$, $\gamma$ being the particle's gyromagnetic ratio, $\hbar$ the reduced Planck constant, and $\sigma_{x,y,z}$ the Pauli matrices. In NMR, $\omega_0=-\gamma B_0$ is known as the Larmor frequency, and, according to Eq.~(\ref{Rabi1}), any spin state will rotate around the $z$ axis with a period $2\pi/\omega_0$. When an orthogonal gyrating magnetic field $\vec{B}(t)=B[\cos{(\omega t)} \hat{x}+\sin{(\omega t)}\hat{y}]$ is applied, we obtain the Rabi model
\begin{equation}\label{Rabi2}
H_\textrm{ R}=\frac{\omega_0}{2}\sigma_z + \frac{\Omega}{2}(\sigma_+e^{-i\omega t}+ \sigma_-e^{i\omega t}),
\end{equation}
where $\sigma_\pm=(\sigma_x\pm i\sigma_y)/2$ and $\Omega=\gamma B/2$. In Eq.~(\ref{Rabi2}) and from now on, all Hamiltonians are redefined as $H \rightarrow H/\hbar$, and are thus given in units of angular frequency. To solve the dynamics given by Eq.~(\ref{Rabi2}), we move to an interaction picture~\cite{Sakurai94} with respect to $H=\frac{\omega}{2}\sigma_z$, $H_\textrm{ R}^I=e^{iHt}H_\textrm{ R}e^{-iHt}-H$, obtaining
\begin{equation}\label{Rabi3}
H_\textrm{ R}^I=\frac{\Delta}{2}\sigma_z + \frac{\Omega}{2}\sigma_x,
\end{equation}
where $\Delta=\omega_0-\omega$ is the detuning with respect to the Larmor frequency. At resonance, Eq.~(\ref{Rabi3}) describes the rotation of the spin around the $\hat{x}$ axis at a frequency $\Omega$, called the Rabi frequency. In Fig.~\ref{fig:IntSch}(b), the evolution of state $|\!\!\downarrow\rangle$ ($\sigma_z|\!\!\downarrow\rangle=-|\!\!\downarrow\rangle$) is shown, in terms of the average values of $\sigma_{x,y,z}$\footnote{Quantum mechanics is a probabilistic theory and it predicts average values of observables. The measurement of these average values requires several experimental runs to collect statistical data.} for a time $\pi/\Omega$. This operation, where the population of state $|\!\!\downarrow\rangle$ is transferred to state $|\!\!\uparrow\rangle$, is known as a $\pi$ pulse.
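As an illustration of this dynamics, the $\pi$ pulse can be reproduced numerically by exponentiating the resonant Hamiltonian of Eq.~(\ref{Rabi3}). The following Python sketch (illustrative only; arbitrary units with $\hbar=1$ and an arbitrary choice of $\Omega$) propagates the initial state $|\!\!\downarrow\rangle$ for a time $\pi/\Omega$ and checks that the population is fully transferred to $|\!\!\uparrow\rangle$:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices in the {|up>, |down>} basis
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega = 2 * np.pi * 1.0                 # Rabi frequency (arbitrary units)
Delta = 0.0                             # resonant drive
H = 0.5 * Delta * sz + 0.5 * Omega * sx # interaction-picture Hamiltonian

down = np.array([0, 1], dtype=complex)  # |down>, with sz|down> = -|down>

t_pi = np.pi / Omega                    # duration of a pi pulse
psi = expm(-1j * H * t_pi) @ down       # propagate |down> through the pulse

p_up = abs(psi[0])**2                   # population of |up> after the pulse
print(p_up)                             # -> 1.0 (full inversion)
```

At resonance the propagator reduces to $e^{-i(\pi/2)\sigma_x}=-i\sigma_x$, which exchanges the two populations up to a global phase.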
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{figures/Figures_0/InteractionSchrodinger.pdf}
\caption{a) A particle with spin precessing under a static magnetic field $\vec{B}_0$ and an oscillating transverse field $\vec{B}(t)$. b) A $\pi$ pulse in the Bloch sphere. Evolution of initial state $|\!\!\downarrow\rangle$, according to the resonant Rabi model and during a time $\pi/\Omega$, is shown, both in the interaction and Schr\"odinger pictures, with $\Omega=\omega_0/10$.}\label{fig:IntSch}
\end{figure}
The Rabi model describes the evolution of a two-level system under an oscillating magnetic field; however, it does not account for environmental effects that cause decoherence. In 1946, Felix Bloch introduced phenomenological equations that describe the dynamics of the spin including those effects~\cite{Bloch46}, characterised by two coherence times $T_1$ and $T_2$. The former, called the relaxation time, is related to population decay from the $|\!\!\uparrow\rangle$ to the $|\!\!\downarrow\rangle$ state, while the latter represents the unpredictability (after a time $T_2$) of the relative phase between the $|\!\!\downarrow\rangle$ and $|\!\!\uparrow\rangle$ states. Like the Rabi model, the Bloch equations are valid to describe the evolution of a two-level atom driven by a coherent electromagnetic field and affected by decoherence, in which case they take the name of Maxwell-Bloch or optical Bloch equations~\cite{Arecchi65}.
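The phenomenological decay described by $T_1$ and $T_2$ can be visualised by integrating the Bloch equations directly. The sketch below (a minimal illustration; the values of $\omega_0$, $T_1$, $T_2$ and the equilibrium magnetisation $M_0$ are chosen arbitrarily) starts with the magnetisation tipped into the $xy$ plane and shows the exponential loss of transverse coherence at the rate $1/T_2$ together with the recovery of the longitudinal component at the rate $1/T_1$:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega0, T1, T2, M0 = 2 * np.pi * 5.0, 2.0, 0.5, 1.0  # illustrative values

def bloch(t, M):
    """Bloch equations with the static field along z (free evolution)."""
    Mx, My, Mz = M
    return [omega0 * My - Mx / T2,
            -omega0 * Mx - My / T2,
            (M0 - Mz) / T1]

# start with the spin tipped into the xy plane (i.e. after a pi/2 pulse)
sol = solve_ivp(bloch, (0, 3.0), [1.0, 0.0, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-12)

t = 1.0
Mx, My, Mz = sol.sol(t)
print(np.hypot(Mx, My))   # transverse coherence, decays as exp(-t/T2)
print(Mz)                 # longitudinal recovery, M0*(1 - exp(-t/T1))
```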
\subsubsection{The rotating wave approximation}
Instead of the gyrating field used in Eq.~(\ref{Rabi2}), it is more typical to consider a time-varying field of the form $\vec{B}(t)=B\cos(\omega t)\hat{x}$. In such case, Eq.~(\ref{Rabi3}) reads
\begin{equation}\label{Rabi4}
H_\textrm{ R}^I=\frac{\Delta}{2}\sigma_z + \frac{\Omega}{2}\sigma_x +\frac{\Omega}{2}(\sigma_+e^{-i2\omega t} +\sigma_-e^{i2\omega t}).
\end{equation}
If the intensity of the transverse field is small compared to the static magnetic field, then $\omega\approx\omega_0\gg\Omega$, and the last terms in Eq.~(\ref{Rabi4}), called counter-rotating terms, can be neglected under the rotating-wave approximation (RWA). A measurable effect of those counter-rotating terms is a shift of the Larmor frequency by an amount $\Omega^2/4\omega_0$, known as the Bloch-Siegert shift\footnote{This shift was first measured by Felix Bloch and Arnold J. F. Siegert in 1940~\cite{Bloch40}.}. To derive this, one has to use time-dependent perturbation theory, see for example, Eqs.~(\ref{Magnus}) and (\ref{MagnusTerms}) in appendix~\ref{app:TimeEvol}.
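The size of the error introduced by the RWA can be estimated numerically. The sketch below (an illustration with an arbitrarily chosen ratio $\Omega/\omega_0=0.1$) propagates the same initial state under the full laboratory-frame Hamiltonian with a linearly polarised drive and under its RWA counterpart, and records the largest difference between the two excited-state populations during a $\pi$ pulse; this residual deviation is the footprint of the counter-rotating terms:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+
sm = sp.T.conj()                                 # sigma_-

omega0 = 1.0
Omega = 0.1 * omega0        # moderately strong drive (illustrative)
omega = omega0              # resonant drive frequency

def H_full(t):   # linearly polarised field: keeps the counter-rotating terms
    return 0.5 * omega0 * sz + Omega * np.cos(omega * t) * sx

def H_rwa(t):    # rotating-wave approximation of the same drive
    return 0.5 * omega0 * sz + 0.5 * Omega * (sp * np.exp(-1j * omega * t)
                                              + sm * np.exp(1j * omega * t))

dt = 2 * np.pi / omega / 400            # many steps per drive period
n = round(np.pi / Omega / dt)           # propagate for one pi pulse

psi_f = np.array([0, 1], dtype=complex) # |down> in both cases
psi_r = psi_f.copy()
dev = 0.0
for k in range(n):                      # midpoint-rule time stepping
    t = k * dt
    psi_f = expm(-1j * H_full(t + dt / 2) * dt) @ psi_f
    psi_r = expm(-1j * H_rwa(t + dt / 2) * dt) @ psi_r
    dev = max(dev, abs(abs(psi_f[0])**2 - abs(psi_r[0])**2))

print(abs(psi_f[0])**2)   # close to 1: near-perfect inversion
print(dev)                # small residual from the counter-rotating terms
```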
\subsubsection{Dressed states}
Dressed states are electronic or nuclear-spin states whose energy has been shifted by the interaction with the radiation field. A typical example is the so-called light shift~\cite{Foot07}, obtained when the absolute value of the detuning $|\Delta|$ is much larger than the Rabi frequency $\Omega$, yet $|\Delta|\ll\omega_0$. The effective Hamiltonian in this regime is
\begin{equation}\label{HamilLight}
H^I_\textrm{ light}\approx\frac{\Omega^2}{4\Delta}\sigma_z,
\end{equation}
which describes a change of the energy difference between states $|\!\!\downarrow\rangle$ and $|\!\!\uparrow\rangle$, positive or negative depending on the sign of the detuning $\Delta$. When the Rabi frequency $\Omega$ (which is proportional to the intensity of the radiation field) changes with the position $\vec{x}$, the shift can induce a force on the particle.
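This expression can be checked against exact diagonalisation of Eq.~(\ref{Rabi3}): for $|\Delta|\gg\Omega$, the exact level splitting $\sqrt{\Delta^2+\Omega^2}$ exceeds the bare splitting $\Delta$ by $\approx\Omega^2/(2\Delta)$, i.e. each level is shifted by $\Omega^2/(4\Delta)$, in agreement with Eq.~(\ref{HamilLight}). A minimal numerical sketch (with illustrative parameter values):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

Omega = 0.02        # weak drive
Delta = 1.0         # |Delta| >> Omega (off-resonant regime)

H = 0.5 * Delta * sz + 0.5 * Omega * sx
E = np.sort(np.linalg.eigvalsh(H))      # exact dressed-state energies

# exact splitting minus the bare splitting: twice the light shift per level
light_shift = (E[1] - E[0]) - Delta
print(light_shift)   # ~ Omega**2/(2*Delta), i.e. Omega**2/(4*Delta) per level
```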
Another kind of dressed state is also described by Eq.~(\ref{Rabi3}). When $\Delta=0$, an energy difference of $\Omega$ between states $|\pm\rangle=|\!\!\uparrow\rangle\pm |\!\!\downarrow\rangle$ (up to normalisation and in the interaction picture) is produced. The introduction of a weaker field $\vec{B}(t)=\tilde{B}[\cos{(\tilde{\omega} t)} \hat{x}+\sin{(\tilde\omega t)}\hat{y}]$, represented by
\begin{equation}\label{Rabi5}
H^I_\textrm{ R}=\frac{\Omega}{2}\sigma_x + \frac{\tilde{\Omega}}{2}(\sigma_+e^{i\tilde{\Delta} t}+ \sigma_-e^{-i\tilde{\Delta} t}),
\end{equation}
where $\tilde{\Omega}=\gamma \tilde{B}/2$ and $\tilde{\Delta}=\omega_0-\tilde{\omega}$, can lead to transitions between states $|+\rangle$ and $|-\rangle$ when $\tilde{\Delta}=\pm\Omega$. In atomic physics, these two transitions at $\omega_0\pm\Omega$ and the central transition at $\omega_0$ can be observed, and are known as the Mollow triplet~\cite{Mollow69,Schuda74}. Both the light shift and the Mollow triplet are often referred to in the literature as the alternating current (AC) Stark effect~\cite{Foot07} or the Autler-Townes effect~\cite{Autler55}.
States dressed with resonant driving fields are naturally protected from small shifts in $\omega_0$~\cite{Puebla16} and form the basis of continuous DD techniques~\cite{Bermudez12,Lemmer13}.
\subsubsection{Spin echo}
The spin echo or Hahn echo\footnote{The spin echo was first observed by Erwin L. Hahn in 1950~\cite{Hahn50}.} is a method by which a revival of the spin's coherence occurs after the application of a $\pi$ pulse. It is also the most common method to combat dephasing caused by uncontrolled shifts of $\omega_0$ due to fluctuations of the intensity of the magnetic field. For a simple analysis, let us assume we start with the spin in state $|\!\!\uparrow\rangle+|\!\!\downarrow\rangle$. According to Eq.~(\ref{Rabi1}), the state evolves as $e^{i\omega_0t}|\!\!\uparrow\rangle+e^{-i\omega_0t}|\!\!\downarrow\rangle$; however, in a rotating frame with respect to $\frac{\omega_0}{2}\sigma_z$, the state stays as $|\!\!\uparrow\rangle+|\!\!\downarrow\rangle$. If the Larmor frequency undergoes an unknown shift $\delta$, at a time $t$, the state would be $e^{i\delta t}|\!\!\uparrow\rangle+e^{-i\delta t}|\!\!\downarrow\rangle$. This would imply the loss of information about the relative phase between the two states, and therefore, a loss of coherence\footnote{More precisely, the loss of coherence occurs if $\delta$ changes stochastically with each experimental run.}. The application of a $\pi$ pulse produces a population exchange between both states, transforming the state into $e^{-i\delta t}|\!\!\uparrow\rangle+e^{i\delta t}|\!\!\downarrow\rangle$. After a time $t$, the state would be $e^{i\delta t}e^{-i\delta t}|\!\!\uparrow\rangle+e^{-i\delta t}e^{i\delta t}|\!\!\downarrow\rangle=|\!\!\uparrow\rangle+|\!\!\downarrow\rangle$, and the relative phase produced by the unknown shift disappears, recovering the coherence. It is important to remark that the spin echo is the basis of pulsed DD techniques~\cite{Carr54,Meiboom58,Souza12}.
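The refocusing argument above can be verified with a small Monte Carlo simulation. The sketch below (illustrative only; the shift $\delta$ is drawn from an arbitrary Gaussian distribution on each simulated run) compares the run-averaged coherence after a free evolution of total duration $2t$ with and without an intermediate $\pi$ pulse:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
t = 1.0
pi_pulse = expm(-1j * (np.pi / 2) * sx)   # instantaneous pi pulse about x

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |up> + |down>

overlaps_free, overlaps_echo = [], []
for _ in range(200):                      # average over random shifts delta
    delta = rng.normal(0.0, 2.0)          # unknown Larmor shift of this run
    U = expm(-1j * 0.5 * delta * sz * t)  # free evolution for a time t
    overlaps_free.append(plus.conj() @ (U @ U @ plus))          # no echo
    overlaps_echo.append(plus.conj() @ (U @ pi_pulse @ U @ plus))

print(abs(np.mean(overlaps_free)))   # dephased: well below 1
print(abs(np.mean(overlaps_echo)))   # the echo refocuses the phase
```

Without the echo, each run accumulates a random relative phase $2\delta t$ and the averaged overlap decays; with the $\pi$ pulse in the middle, every run returns (up to a global phase) to the initial state regardless of $\delta$.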
\subsection{The quantum Rabi model}
The quantum Rabi model (QRM) describes the interaction between a two-level atom and a quantised mode of the electromagnetic field. The model was first introduced by Jaynes and Cummings in 1963~\cite{Jaynes63}, however, its predictions were not measured until the late 80's in the context of cavity QED~\cite{Rempe87,Haroche89,Raimond01}. Besides, the QRM describes the basic interactions that take place in trapped ions~\cite{Leibfried03}, superconducting circuits~\cite{Clarke08} or semiconducting quantum dots~\cite{Bera10}. The Hamiltonian of the model is
\begin{equation}\label{QRM}
H_\textrm{ QRM}=\frac{\omega^\textrm{ R}_0}{2}\sigma_z+\omega^\textrm{ R} a^\dagger a + g (a+a^\dagger)(\sigma_++\sigma_-),
\end{equation}
where $\omega_0^\textrm{ R}$ is the natural frequency of the two-level atom, $\omega^\textrm{ R}$ and $a^\dagger (a)$ are the frequency and the creation (annihilation) operators of the electromagnetic mode, and $g$ quantifies the strength of the coupling between the atom's dipole moment and the bosonic field mode. See Fig.~\ref{fig:QRM} for a simple sketch of the system. Near resonance, that is, when $\omega^\textrm{ R}\approx\omega_0^\textrm{ R}$, and if $g\ll\omega^\textrm{ R}$, the terms $a\sigma_-$ and $a^\dagger\sigma_+$ can be eliminated by the RWA, retrieving the so-called Jaynes-Cummings (JC) model
\begin{equation}\label{JCM}
H_\textrm{ JC}=\frac{\omega_0^\textrm{ R}}{2}\sigma_z+\omega^\textrm{ R} a^\dagger a + g (a\sigma_++a^\dagger\sigma_-).
\end{equation}
The JC model is analytically solvable and, when $\omega^\textrm{ R}=\omega_0^\textrm{ R}$, it describes a periodic population exchange between states $|e,n\rangle$ and $|g,n+1\rangle$ at a rate given by $\Omega=g\sqrt{n+1}$~\cite{Jaynes63,Paul63}, as shown in Fig.~\ref{fig:QRM}(b). Here, $|g\rangle$ and $|e\rangle$ represent the ground and excited state of the atom, and $|n\rangle$ represents the $n$-th Fock state of the bosonic mode. Notice that the frequency of these oscillations $\Omega$, also called Rabi oscillations, depends on the number of photons $n$ in the cavity. At $n=0$, an excited atom may emit a photon to an empty cavity mode, and absorb it afterwards\footnote{These oscillations are known as vacuum Rabi oscillations~\cite{Loudon00}.}. The JC model also predicts the collapses and revivals of atomic state populations when the field mode is in a coherent state\footnote{These collapses and revivals were first observed by Gerhard Rempe, Herbert Walther and Norbert Klein in 1987~\cite{Rempe87}.}~\cite{Gerry04}.
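The $\sqrt{n+1}$ scaling of the exchange rate can be checked by exact propagation of Eq.~(\ref{JCM}) in a truncated Fock space. The sketch below (illustrative parameters: resonant case $\omega^\textrm{ R}=\omega_0^\textrm{ R}$ with $g/\omega^\textrm{ R}=0.01$, as in Fig.~\ref{fig:QRM}) verifies that the population of $|e,n\rangle$ is fully transferred to $|g,n+1\rangle$ after a time $\pi/(2g\sqrt{n+1})$:

```python
import numpy as np
from scipy.linalg import expm

N = 10                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma_+ : |g> -> |e>
sm = sp.T.conj()                                 # sigma_-

w = 1.0            # resonant case: w0 = w
g = 0.01 * w
I2, IN = np.eye(2), np.eye(N)

# Jaynes-Cummings Hamiltonian on the tensor product (spin x Fock) space
H = (0.5 * w * np.kron(sz, IN) + w * np.kron(I2, a.T.conj() @ a)
     + g * (np.kron(sp, a) + np.kron(sm, a.T.conj())))

def p_exchange(n, t):
    """Population of |g, n+1> starting from |e, n> after a time t."""
    psi0 = np.zeros(2 * N, dtype=complex)
    psi0[n] = 1.0                         # |e> spans the first block
    psi = expm(-1j * H * t) @ psi0
    return abs(psi[N + n + 1])**2         # |g, n+1> lives in the second block

# full exchange after a quarter period of the oscillation sin^2(g*sqrt(n+1)*t)
for n in [0, 3]:
    print(p_exchange(n, np.pi / (2 * g * np.sqrt(n + 1))))   # -> ~1.0
```

Since the JC coupling connects $|e,n\rangle$ only to $|g,n+1\rangle$, each pair forms a closed two-level subspace and the truncation introduces no error for the Fock states probed here.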
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{figures/Figures_0/QRMFigure.pdf}
\caption{a) Sketch of an atom inside a cavity, where $\Gamma$ and $\kappa$ are the relaxation rates of the atom and the cavity, respectively. b) Time evolution of state populations for $g/\omega^\textrm{ R}=0.01$, $\omega_0^\textrm{ R}=\omega^\textrm{ R}$ according to the QRM. For initial states $|e,0\rangle$ (up) and $|e,3\rangle$ (down), periodic population exchange is shown, with states $|g,1\rangle$ and $|g,4\rangle$ and at rates $g$ and $2g$ respectively.}\label{fig:QRM}
\end{figure}
The experimental observation of Rabi oscillations requires $\Omega$ to be larger than the relaxation rates ($1/T_1$) of both the atom and the cavity, known as the SC regime~\cite{Haroche85,Gallas85}. In contrast to the weak-coupling regime, where substantial changes in the atomic properties (e.g. the Purcell effect) can already be observed, the SC regime allows the formation of hybrid dressed states, different from the ones introduced in section~\ref{subsec:RabiModel}, called polaritonic states or simply polaritons, which share both light and matter character~\cite{Vasa19}. In the case of the resonant JC model, these are represented by $|g,n+1\rangle\pm|e,n\rangle$ eigenstates, up to normalisation.
Based on previous work by Dicke~\cite{Dicke54}, Tavis and Cummings studied the interaction of $N$ atoms with a single mode of the electromagnetic field, predicting that the frequency of (now collective) Rabi oscillations can increase by a factor of $\sqrt{N}$~\cite{Tavis68}. In fact, the first cavity QED experiments that achieved the SC regime were done with more than one atom~\cite{Raizen89,Zhu90,Bernardot92}. Since then, the effective coupling strength $g$ of light-matter interactions has progressively increased, reaching the ultrastrong coupling (USC)~\cite{Gunter09,Niemczyk10,Rossatto17,Kockum19,Forn19} regime ($g/\omega^\textrm{ R}\gtrsim0.1$) or
the deep-strong coupling (DSC)~\cite{Yoshihara17,Bayer17,Mueller20} regime ($g/\omega^\textrm{ R}\gtrsim1$) with superconducting circuits, Landau polaritons or plasmon polaritons. These experimental developments have motivated the study of the full QRM in Eq.~(\ref{QRM})~\cite{Braak16}, as the RWA is no longer justified. Despite being the simplest representation of quantum light-matter interaction, an exact analytical solution for the QRM was only proposed recently, in 2011~\cite{Braak11}. Unlike the JC model, the QRM dynamics does not show clear features until it reaches the DSC regime, where periodic collapses and revivals of the qubit initial-state survival probability are predicted~\cite{Casanova10}.
\section{Quantum technologies}
Quantum technologies that aim to achieve quantum computing or quantum sensing need a well-defined qubit or quantum information register, and the ability to initialise and measure its state. In the case of quantum sensing, this qubit should also interact with the physical quantity of interest. For quantum computing, the design of the system should be scalable to a large number of qubits, while maintaining long coherence times and the ability to perform single- and two-qubit gates. In the following, we describe three of the most promising quantum platforms for developing both quantum sensing and quantum computing, which are also the ones studied in this thesis.
\subsubsection{Trapped ions}
Charged atomic ions are trapped and suspended in vacuum using oscillating electromagnetic fields~\cite{Brown86,Leibfried03}. The effective harmonic potential created by these fields confines the ion in all directions, with trapping frequencies of the order of a few MHz. In linear traps, the ions arrange in a linear configuration along the $z$ direction, where the harmonic force is weaker than in the other directions (see Fig.~\ref{fig:IntroIons}(a) for an illustration of a linear trap). Two electronic states connected by an electric quadrupole transition, or two hyperfine states connected by Raman or MW transitions, can serve as a qubit, see Figs.~\ref{fig:IntroIons}(a)~and~(b). In both cases, an electric dipole transition to a radiative state is used for cooling the ion's motion\footnote{Ground-state cooling with trapping frequencies of the order of one megahertz corresponds to a temperature of tens of $\mu$K.}, initialisation and read-out of the qubit state~\cite{Wineland98}. For one of the qubit states, driving the radiative transition induces spontaneous emission of photons that can be collected by a camera, realising the measurement of the qubit state.
Coherent operations are typically realised with lasers, which allow one to manipulate the qubit state and, moreover, to entangle internal (electronic) and external (vibrational) degrees of freedom via the so-called sideband transitions. It is the control of this qubit-boson interaction that allows one to realise entangling gates among different ions in a few microseconds~\cite{Schafer18} and with the highest fidelity achieved so far in quantum technologies~\cite{Ballance16,Gaebler16}. Coherence times can range from milliseconds to minutes using decoherence-free subspaces and DD~\cite{Wang20}. A dispersive coupling with the collective motional modes also permits engineering spin-spin interactions governed by the effective Hamiltonian $H=\sum_{i,j}J_{ij}\sigma_i^x\sigma_j^x$~\cite{Porras04,Schneider12,Bohnet16}, which is useful to study many-body quantum phenomena like quantum phase transitions~\cite{Zhang17,Jurcevic17,Cui20} or quantum chaos~\cite{Grass13,Sieberer19}. Moreover, trapped ions have also proven to be an excellent testbed to simulate relativistic quantum mechanics~\cite{Lamata08,Gerritsma10,Wittemer19} or quantum field theories~\cite{Martinez16,Zhang18}.
In this thesis, we study trapped ions as quantum simulators for generalised QRMs beyond the SC regime, and also as a scalable platform to build quantum computers. In the case of the latter, we combine laser-free entangling operations driven by MW radiation with DD techniques to achieve high-fidelity entangling gates. Finally, trapped ions can also be excellent quantum sensors of electric fields. This can be exploited to measure minute forces with a sensitivity of $1~ \textrm{ yN}/\sqrt{\textrm{Hz}}$~\cite{Maiwald09,Biercuk10}, or for high-precision mass spectrometry\footnote{Check articles 8, 9, 10, and 12 from the list of publications}.
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{figures/Figures_0/IntroIons.pdf}
\caption{a) Sketch of a linear radiofrequency trap similar to the one shown in Ref.~\cite{Leibfried03}. In the middle, a chain of 5 ions scattering blue light. b) Simplified level scheme of an optical qubit using a $^{40}$Ca$^+$ ion~\cite{Nagerl00}. The metastable transition $S\leftrightarrow D$ at $729$ nm is used as a qubit with lifetime $T_1\sim1$ s. The radiative transition $S\leftrightarrow P$ at $397$ nm is used for read-out. c) Simplified level scheme of a hyperfine qubit using a $^9$Be$^+$ ion~\cite{Monroe95}. Off-resonant excitation of the radiative $S\leftrightarrow P$ transition at $313$ nm is used to produce coherent population exchange between two hyperfine levels of the $S$ subspace.}\label{fig:IntroIons}
\end{figure}
\subsubsection{Ultracold atoms in optical lattices}
Neutral atoms can be trapped in periodic optical potentials, called optical lattices, created by the interference of two counter-propagating laser beams. The frequency of the laser fields is far detuned from the frequency of an electric dipole transition of the atom. As a result, a light shift, similar to the one presented in section~\ref{subsec:RabiModel}, is induced, with the Rabi frequency proportional to the intensity of the laser field. The interference between the two beams creates a periodic spatial pattern for the intensity ($\Omega\cos{(kx)}$ in one dimension), which induces a force on the atom, called the optical dipole force. This force is used to trap atoms in minima of the light potential $V(x)=\Omega^2/\Delta\cos^2{(kx)}$ (see Fig.~\ref{fig:IntroNV}(a) for an illustration), while the photon scattering rate is highly reduced for a large detuning $\Delta$. Atoms trapped in optical potentials need to be cooled down to temperatures below tens of $\mu$K, achievable with laser cooling techniques\footnote{In 1997, Steven Chu, Claude Cohen-Tannoudji and William D. Phillips received the Nobel prize for their developments of methods to cool and trap atoms using laser light.}. Using different hyperfine states of the atoms, these can be initialised in predefined arrays, and, similar to trapped ions, the measurement is done via resonance fluorescence.
Ultracold atoms in optical lattices are the leading technology for analog quantum simulation of many-body physics~\cite{BlochDalibard12,Gross17}. The periodic form of the optical potentials resembles the structure of real crystal lattices, and the interaction among the atoms can be controlled with light fields. Important many-body systems, such as Bose-Hubbard or Fermi-Hubbard models, can be simulated~\cite{Greiner02}, mechanisms such as thermalisation~\cite{Langen13,Langen15,Kaufman16} or many-body localisation~\cite{Anderson58,Choi16} observed, or fundamental theories for high-energy and condensed-matter physics studied via synthetic gauge fields~\cite{Montvay94}.
Quantum computation can also be pursued with optical lattices. Qubits can be encoded in long-lived hyperfine states and entangling gates can be realised via the strong dipole-dipole interactions among Rydberg atoms. However, and despite the significant progress, several challenges such as the generation of a high-fidelity two-qubit gate remain unsolved~\cite{Saffman16}.
In this thesis, we build on previous experiments of quantum interference of non-interacting atoms in optical lattices, such as the realisation of Hong-Ou-Mandel interference~\cite{RobensThesis}, to propose a scalable method to realise multiparticle quantum interference experiments, and more specifically, boson sampling.
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{figures/Figures_0/IntroNV.pdf}
\caption{a) A neutral particle trapped in one minimum of the optical potential $V(x)$ (in red). Also, a simplified level scheme of the $^{133}$Cs atom, where the off-resonant excitation of the $S\leftrightarrow P$ transition by the standing wave is responsible for the dipole force. b) NV center in a diamond lattice, with $^{12}$C (grey) and $^{13}$C (blue) atoms nearby. A static magnetic field is applied parallel to the NV axis, and a transverse MW field is depicted in red. c) Level scheme of the NV center, where coloured (black) arrows indicate radiative (nonradiative) transitions.}\label{fig:IntroNV}
\end{figure}
\subsubsection{Nitrogen-vacancy centers in diamond}
The NV center is a point defect in diamond with a particular set of properties that makes it suitable for the study and exploitation of quantum phenomena~\cite{Walker79}. The defect is formed when a carbon atom in the diamond lattice is substituted by a nitrogen and an adjacent lattice site presents a vacancy~\cite{Aharonovich11,Suter17}, see Fig.~\ref{fig:IntroNV}(b), and can be produced by several techniques~\cite{Doherty13}. The NV$^-$ or NV center's\footnote{Two other charge states exist, NV$^+$ and NV$^0$, whose properties differ from those of the negatively charged defect.} electronic level structure is formed by two triplet states $^3A_2$ (ground) and $^3E$ (excited) and two intermediate singlet states $^1A_1$ and $^1E$, see Fig.~\ref{fig:IntroNV}(c). A qubit is usually encoded in the $m_s=0$ and one of the $m_s=\pm1$ states of the subspace $^3A_2$, separated in energy by $D=(2\pi)\times2.88$ GHz, called the zero-field splitting. Also, a magnetic field $B_z$ along the direction of the NV axis breaks the degeneracy between the states $m_s=\pm1$ by an amount $\gamma_eB_z$. Operations within this subspace are carried out using MW radiation, while optical transitions are used for initialisation and measurement of the NV. When the electron is excited to the $^3E$ subspace through a spin-conserving radiative transition, the excited states can decay to the ground state directly or via the intermediate states. The latter occurs with a higher probability for the states $|^3E,\pm1\rangle$, and the intermediate states are long-lived ($\sim250$ ns) compared to the states in $^3E$. This results in a spin-dependent fluorescence signal, which is stronger for the $|^3A_2,0\rangle$ state. Most importantly, the decay via the intermediate states changes the spin state from $m_s=\pm1$ to $m_s=0$ and vice versa, and, because of the preference for one of the pathways, this can be used for the initialisation or polarisation of the spin state.
The coherent control of the NV center is possible at room temperature, with coherence times on the order of milliseconds~\cite{Degen17} and electron spin polarisation above $90\%$~\cite{Jelezko06}. Because of this, NV centers have been proposed as quantum sensors for high-resolution scanning probe microscopy~\cite{Chernobrod05,Balasubramanian08,Degen08}, magnetometry~\cite{Maze08,Taylor08,Cole09}, thermometry~\cite{Hodges13,Kucsko13,Neumann13,Toyli13}, pressure sensing~\cite{Doherty14}, or to be used as biomarkers with living organisms~\cite{Fu07}. In this thesis, we focus on the possibility of realising nanoscale NMR with NV centers. This enables the control and measurement of magnetic field emitters (such as nuclear spins) with sub-$100$~nm resolution, with applications for the imaging of nanometric magnetic structures~\cite{Balasubramanian08,Maletinsky12,Rondin12} or the optical polarisation of magnetic nuclei for high-resolution magnetic resonance imaging~\cite{Schwartz18}.
NV centers also have interesting applications for quantum information processing. At low temperatures, the fidelity of initialisation and read-out increases significantly~\cite{Robledo11}, and $T_1$ coherence times approaching $10^3$~s have been reported~\cite{Abobeih18} at $\approx 3.7$~K. Coherence times due to dephasing, $T_2$, depend mainly on the number of $^{13}$C nuclei near the NV~\cite{Maze08}, and, with a small amount of these impurities, $T_2$ can be increased up to $T_1$ with DD techniques~\cite{Souza12}. Both the NV centers and the nearby $^{13}$C nuclei can be used as quantum information registers~\cite{Neumann10,Liu18}, with current experiments being able to achieve high-fidelity gates~\cite{Rong15} and registers of up to $10$ qubits~\cite{Bradley19}. Still, the deterministic fabrication of these solid-state devices is challenging, as it requires a three-dimensional precision better than $10$ nm~\cite{Scarabelli16}. Alternative approaches to scalability consider NV centers coupled to superconducting circuits~\cite{Zhu11} or photons~\cite{Togan10,Kalb17}. Using the latter, entanglement between electron spins separated by $1.3$ km has been achieved~\cite{Hensen15}.
\chapter{Quantum Logic with MW-Driven Trapped Ions}
\label{chapter:chapter_1}
\thispagestyle{chapter}
The control and manipulation of the quantum information of individual atoms via electromagnetic fields has been extremely successful with trapped ions, which has established trapped-ion quantum technology as a leading candidate to build reliable quantum simulators and computers~\cite{Nielsen10,Haffner08,Ladd10,Blatt12,Cirac12}. In the near future, these could solve computational problems in a more efficient manner than classical devices by exploiting the quantum correlations among their atomic constituents. To this end, the systematic generation of single-qubit and two-qubit gates with high fidelity is crucial. The latter has been achieved by using laser light that couples the internal (atomic) and external (vibrational) degrees of freedom of the ions, leading to fast single-qubit gates and two-qubit gates of high fidelity~\cite{Ballance16,Gaebler16,Schafer18}. Nevertheless, scaling laser-based quantum processors while maintaining high fidelities represents a hard technological challenge, since it requires the precise and simultaneous control of multiple laser sources.
An alternative approach to laser-driven systems was proposed by Mintert and Wunderlich~\cite{Mintert01}. This involves the use of MW fields together with magnetic field gradients to create interactions among the internal states of the ions. Unlike lasers, the control of MW sources is comparatively easy, and their introduction in scalable trap designs is less demanding~\cite{Lekitsch17}. In addition, MW-driven quantum gates do not use any optical transition. This avoids spontaneous emission from the atomic states that define the qubit, which is an unavoidable limiting factor for laser-driven quantum gates~\cite{Plenio97,Gaebler16,Ballance16}. After the initial proposal in~\cite{Mintert01}, the use of MW schemes has been pursued in two distinct fashions, using either static magnetic field gradients~\cite{Mintert01,Weidt15,Piltz16,Welzel19}, or MW radiation in the near-field~\cite{Ospelkaus08,Ospelkaus11,Hahn19,Zarantonello19}. In the former, a static magnetic field gradient provides spatial field variations on the size of the wave packet of the ion, coupling the atomic and vibrational degrees of freedom. This coupling is combined with MW fields in the far-field regime, which are used to modulate the interaction. In the latter, oscillating MWs in the near-field regime are used as the generators of the qubit-boson interaction. Notice that MW fields in the near field naturally provide the required spatial field variations without the need to add a static magnetic field gradient. In addition, a method has recently been proposed that couples the ionic internal and external degrees of freedom by combining oscillating magnetic field gradients with MW fields in the far-field regime~\cite{Srinivas19,Sutherland19}.
Typically, ion qubits are encoded in hyperfine atomic states which are sensitive to magnetic field fluctuations. These represent the main source of decoherence, and have to be removed to achieve quantum information processing with high fidelity. To this end, pulsed and continuous DD techniques have been introduced~\cite{Jonathan00,Szwer11,Piltz13,Casanova15,Puebla16,Puebla17,Arrazola18,Wang19}. In particular, the
creation of dressed states has proved useful~\cite{Timoney11,Bermudez12,Lemmer13,Mikelsons15,Cohen15,Wolk17,Webb18} and has led to the best reported gate fidelities ($>98\%$) with MW fields in the far-field regime~\cite{Weidt16}. On the other hand, the best near-field MW gates, with fidelities of $99.7\%$~\cite{Harty16,Zarantonello19}, use a driving field on the carrier transition or MW amplitude modulation to protect the gate with respect to the main sources of error.
In this chapter we propose two different methods to achieve high-fidelity two-qubit gates using MW fields in the far-field regime combined with a static magnetic field gradient. In both cases, DD techniques are used to design gates which are robust against fluctuations of the MW and magnetic fields. The method presented in section \ref{sect:1_PDD} is specifically designed to work with a strong qubit-boson coupling and large MW power, leading to gate times of tens of microseconds. On the other hand, the gate scheme in section \ref{sect:Slow_ions} works with a lower qubit-boson coupling and MW power, making it more accessible experimentally, and achieving gate times on the order of a millisecond.
\section{Pulsed DD for fast and robust two-qubit gates}
\label{sect:1_PDD}
Fast trapped-ion entangling gates require an effectively large qubit-boson coupling, which is usually obtained by increasing either the Lamb-Dicke (LD) parameter, i.e. the original qubit-boson coupling, or the intensity of the field driving the qubit. Experiments using far-field MW with static magnetic field gradients work with a LD parameter of $\eta \sim0.01$ and less than a hundred kilohertz of MW Rabi frequency. This leads to gate times on the order of a millisecond. Reference~\cite{Cohen15}, for example, proposes to use hundreds of kilohertz of MW Rabi frequency, obtaining gate times of a few hundreds of microseconds. In this section we propose to boost the speed of two-qubit gates by increasing the magnetic field gradient and thus the LD parameter to $\eta\sim0.1$. This complicates the application of schemes where the qubit-qubit interaction is mediated by a single motional mode, as the spectroscopic discrimination of the remaining modes is no longer possible. The use of multiple modes has also been explored theoretically~\cite{Duan04} and experimentally~\cite{Mizrahi13} for laser-based systems.
In this section, we propose a scheme leading to fast and high-fidelity two-qubit gates through a specifically designed sequence of MW $\pi$-pulses acting in the presence of a magnetic field gradient. Our method employs the two vibrational modes in the axial direction of the two-ion chain, leading to gate times approaching the inverse of the trap frequency. On top of that, the sequence uses pulsed DD to protect the qubits from uncontrolled noise sources. The high speed and robustness of our scheme result in two-qubit gates of high fidelity even in the presence of motional heating. Our detailed numerical simulations show that the state of the art in MW trapped-ion technology allows for two-qubit gates sufficiently fast to pave the way for scalable quantum computers.
\subsection{System: two $^{171}$Yb$^+$ ions}\label{subsect:system}
We consider a setup consisting of two $^{171}$Yb$^+$ ions in a MW quantum computer module~\cite{Lekitsch17}. For each ion, we define our qubit between the lowest energy state ${|{\textrm g}\rangle\equiv\{F=0, m_F=0\}}$ and the first excited state with positive magnetic moment ${| {\textrm e} \rangle\equiv\{F=1, m_F=1\}}$ in the hyperfine manifold, see Fig.~\ref{fig:Fig1}. The conditions under which transitions to other hyperfine levels can be safely neglected are covered in appendix~\ref{app:InitialApp}. The motion of different ions is coupled via direct Coulomb interaction and can be collectively described by two harmonic normal modes in each of the three spatial dimensions. In the following, we will restrict our analysis to the normal modes in the axial direction $\hat{z}$, namely the center-of-mass and the breathing modes, which are independent of the radial ones. That is, we assume that the magnetic field gradient in the radial direction is negligible compared to that in the axial direction. This configuration is described by the Hamiltonian
\begin{eqnarray}
\label{Hamiltonianbare}
\nonumber H= \nu_1 a^\dag a + \nu_2 c^\dag c &+& [\omega_{\textrm e} +\gamma_e B(z_1)/2] | {\textrm e} \rangle \langle {\textrm e} |_1 + \omega_{\textrm g} | {\textrm g} \rangle \langle {\textrm g} |_1 \\
&+& [\omega_{\textrm e} + \gamma_e B(z_2)/2] | {\textrm e} \rangle \langle {\textrm e} |_2 + \omega_{\textrm g} |{\textrm g} \rangle \langle {\textrm g} |_2.
\end{eqnarray}
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{figures/Figures_1/Scheme_1.pdf}
\caption{Two trapped $^{171}$Yb$^+$ ions under a magnetic field gradient. Hyperfine levels of the two ions are shown. The magnetic field $B(z)$ removes the degeneracy of the $F=1$ manifold separating the $\{F=1, m_F=\pm1\}$ and $\{F=1, m_F=0\}$ levels of both ions by an amount of $\pm\gamma_e B(z_j)/2$ respectively. }\label{fig:Fig1}
\end{figure}
Here, $\gamma_e = (2\pi)\times 2.8$ MHz/Gauss is the gyromagnetic ratio of the electron, $a(a^\dag)$ and $c(c^\dag)$ are the bosonic annihilation (creation) operators of the motional modes, which have frequencies ${\nu_1=\nu}$ and ${\nu_2=\sqrt{3}\nu}$, respectively~\cite{James98}. Typical values for $\nu/(2\pi)$ range from hundreds of kilohertz to one or two megahertz, and $B(z_j)$ is the magnetic field at the position of ion $j$. We consider a magnetic field gradient that leads to a linearly growing $B(z_j)$ term, $\partial B/\partial z\!=\!g_B$. Then, by expressing the ion coordinates in terms of the vibrational normal modes~\cite{James98} and suitably shifting the zero-point energy of the qubits, the Hamiltonian in Eq.~(\ref{Hamiltonianbare}) can be rewritten as
\begin{eqnarray}
\label{modelHamiltonian}
\nonumber H= \nu_1 b^\dag b + \nu_2 c^\dag c &+& \frac{\omega_1}{2}\sigma_1^z + \eta_1\nu_1 (b+b^\dag) \sigma_1^z - \eta_2\nu_2(c+c^\dag)\sigma_1^z\\
&+& \frac{\omega_2}{2}\sigma_2^z + \eta_1\nu_1(b+b^\dag) \sigma_2^z + \eta_2\nu_2(c+c^\dag)\sigma_2^z.
\end{eqnarray}
Here, the operators of the center-of-mass mode have been redefined as $b=a+2\eta_1$, where $\eta_m= \frac{\gamma_e g_B}{8\nu_m} \sqrt{\frac{\hbar}{M \nu_m}}$ is the effective LD parameter which quantifies the coupling strength between the qubits and the $m$-th motional mode, and $M$ is the mass of each ion. The qubit energy splittings are $\omega_j= \omega_{\textrm e} - \omega_{\textrm g} - 4\eta_1^2\nu_1 + \gamma_e B_j/2$, where $B_j\equiv B(z_j^0)$ is the magnetic field at the equilibrium position of the $j$-th ion. For a detailed derivation of the Hamiltonian in Eq.~(\ref{modelHamiltonian}), please refer to appendix~\ref{app:twoions}.
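As a quick numerical sanity check (our own sketch, not part of the thesis code), the expression for $\eta_m$ reproduces the effective LD parameters quoted later in this section, namely $\eta_1\approx0.069$ for $g_B=150$ T/m with $\nu=(2\pi)\times150$ kHz, and $\eta_1\approx0.078$ for $g_B=300$ T/m with $\nu=(2\pi)\times220$ kHz:

```python
import math

# Physical constants (CODATA values); gamma_e as quoted in the text.
hbar = 1.054571817e-34            # J s
amu = 1.66053906660e-27           # kg
M = 171 * amu                     # mass of a 171Yb+ ion
gamma_e = 2 * math.pi * 2.8e10    # rad s^-1 T^-1, i.e. (2pi) x 2.8 MHz/Gauss

def eta(g_B, nu_m):
    """Effective LD parameter eta_m = gamma_e*g_B/(8*nu_m) * sqrt(hbar/(M*nu_m))."""
    return gamma_e * g_B / (8 * nu_m) * math.sqrt(hbar / (M * nu_m))

print(round(eta(150, 2 * math.pi * 150e3), 3))   # 0.069 (first setting below)
print(round(eta(300, 2 * math.pi * 220e3), 3))   # 0.078 (second setting below)
```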
In the presence of the magnetic field gradient $g_B$, the energy splitting of the two ion-qubits differs by $\omega_2 - \omega_1=\gamma_e g_B \Delta z/2$, where $\Delta z$ is the distance between the equilibrium positions of the ions. Later, we will see that $\omega_2- \omega_1= \frac{\gamma_e g_B}{2} \left( \frac{2e^2}{4\pi \varepsilon_0 M \nu^2}\right)^{1/3}$ is a quantity on the order of tens of megahertz for the parameters considered here. This energy difference, combined with a specifically designed MW-pulse sequence that efficiently cancels crosstalk effects (see section~\ref{subsect:Tailored}), will allow us to address each qubit individually. We consider a bichromatic electromagnetic field of frequencies $\omega_j$ and phases $\phi$ as described by the Hamiltonian
\begin{eqnarray}\label{control}
\nonumber H_c(t)&=&\Omega_1(t) (\sigma_1^x + \sigma_2^x)\cos(\omega_1 t - \phi)\\
&+& \Omega_2(t) (\sigma_1^x + \sigma_2^x) \cos(\omega_2 t - \phi).
\end{eqnarray}
Under the influence of such MW control fields our system Hamiltonian would be given by
\begin{eqnarray}\label{casi}
H^{\textrm I}(t)&=& \eta_1\nu_1(b e^{-i \nu_1 t} + b^\dag e^{i\nu_1 t}) \sigma_1^z - \eta_2\nu_2(ce^{-i\nu_2 t} + c^\dag e^{i \nu_2 t}) \sigma_1^z\\
\nonumber &+& \eta_1\nu_1(b e^{-i \nu_1 t} + b^\dag e^{i\nu_1 t}) \sigma_2^z + \eta_2\nu_2(ce^{-i\nu_2 t} + c^\dag e^{i \nu_2 t}) \sigma_2^z\\
\nonumber &+& \frac{\Omega_1(t)}{2}(\sigma_1^+ e^{i \phi} + \sigma_1^- e^{-i\phi}) + \frac{\Omega_2(t)}{2}(\sigma_2^+ e^{i \phi} + \sigma_2^- e^{-i\phi}).
\end{eqnarray}
The Hamiltonian above is posed in a rotating frame with respect to $H_0= \nu_1 b^\dag b + \nu_2 c^\dag c + \frac{\omega_1}{2}\sigma^z_1 + \frac{\omega_2}{2}\sigma_2^z$. The non-resonant components of the MW driving have been eliminated under the RWA, whose validity will be later confirmed by detailed numerical simulations using the first-principles Hamiltonian. In this respect, we want to remark that terms that rotate at a rate similar to $2 \omega_j$ (which corresponds to a frequency of tens of gigahertz for the $^{171}$Yb$^{+}$ ion, see for example~\cite{Olmschenk07}) can be safely neglected by invoking the RWA, see appendix~\ref{app:InitialApp}. The slower terms, rotating at a rate of $|\omega_2- \omega_1|$ (on the order of tens of MHz for our simulated conditions), lead to off-resonant couplings and require a specific treatment covered in section \ref{subsect:Tailored}.
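To make the "tens of MHz" scale concrete, the splitting difference $\omega_2-\omega_1$ can be evaluated from the formula given earlier. This is a numerical sketch of ours (standard constants assumed, not thesis code); it reproduces the values $(2\pi)\times 25.7$ MHz and $(2\pi)\times 39.8$ MHz used later in the simulations:

```python
import math

hbar = 1.054571817e-34
amu = 1.66053906660e-27
M = 171 * amu                     # 171Yb+ mass
e = 1.602176634e-19               # elementary charge, C
eps0 = 8.8541878128e-12           # vacuum permittivity, F/m
gamma_e = 2 * math.pi * 2.8e10    # rad s^-1 T^-1

def splitting_diff(g_B, nu):
    """omega_2 - omega_1 = (gamma_e*g_B/2)*(2 e^2/(4 pi eps0 M nu^2))^(1/3), in rad/s."""
    dz = (2 * e**2 / (4 * math.pi * eps0 * M * nu**2)) ** (1 / 3)  # ion separation
    return gamma_e * g_B * dz / 2

for g_B, nu_hz in [(150, 150e3), (300, 220e3)]:
    d = splitting_diff(g_B, 2 * math.pi * nu_hz) / (2 * math.pi)
    print(f"g_B = {g_B} T/m, nu = (2pi) x {nu_hz/1e3:.0f} kHz -> {d/1e6:.1f} MHz")
```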
Now we move to a rotating frame with respect to $\frac{\Omega_1(t)}{2}(\sigma_1^+ e^{i \phi} + \sigma_1^- e^{-i\phi}) + \frac{\Omega_2(t)}{2}(\sigma_2^+ e^{i \phi} + \sigma_2^- e^{-i\phi})$. The Rabi frequencies $\Omega_{1,2}(t)$ will be switched on and off, i.e. the driving is applied stroboscopically in the form of $\pi$-pulses, leading to
\begin{eqnarray}
\label{TheHamiltonian}
\nonumber H^\textrm{II}(t)&=& f_{1}(t) \sigma_1^z[\eta_1\nu_1 b e^{-i \nu_1 t} - \eta_2\nu_2ce^{-i\nu_2 t} + \textrm{ H.c.}] \\
&+& f_{2}(t)\sigma_2^z[\eta_1\nu_1b e^{-i \nu_1 t} + \eta_2\nu_2ce^{-i\nu_2 t} + \textrm{ H.c.}],
\end{eqnarray}
where the modulation functions $f_{j}(t)$ take the values $\pm 1$ depending on the number of $\pi$-pulses applied to the $j$-th ion. More specifically, for an even (odd) number of pulses we have $f_{j} =1 (-1)$. The idealised description in Eq.~(\ref{TheHamiltonian}) assumes instantaneous $\pi$-pulses, which is a good approximation if the Rabi frequencies are much larger than any other frequency in Eq.~(\ref{TheHamiltonian}). Nevertheless, to match realistic experimental conditions, our numerical simulations will consider sequences of finite $\pi$-pulses in the form of top-hat functions of length $t_{\pi} = \pi/\Omega$.
The Schr\"odinger equation corresponding to Eq.~(\ref{TheHamiltonian}) is analytically solvable and leads to the propagator $U(t)=U_s(t) U_c(t)$ where
\begin{equation}\label{solution1}
U_s(t)= \exp{\left[-i \sum_{j=1}^{2} \{\eta_1 G_{j1}(t) b +(-1)^j \eta_2 \ G_{j2}(t) c+ \textrm{ H.c.}\}\sigma_j^z\right]},
\end{equation}
and
\begin{equation}\label{solutionphase}
U_c(t)=\exp \left[i \varphi(t) \sigma_1^z \sigma_2^z\right],
\end{equation}
see appendix~\ref{app:TimeEvol} for the derivation. The $G_{jm}(t)$ functions in $U_s(t)$ are
\begin{equation}\label{Gfuncs}
G_{jm}(t)=\nu_m \int_0^t dt' f_j(t') e^{-i\nu_m t'},
\end{equation}
while the achieved two-qubit phase $\varphi(t)$ in Eq.~(\ref{solutionphase}) is
\begin{eqnarray}\label{phase}
\varphi(t)=\eta_1^2[ \tilde{\varphi}_1(t)-\frac{1}{3\!\sqrt{3}} \tilde{\varphi}_2(t)]=\eta_1^2 \tilde{\varphi}(t),
\end{eqnarray}
where
\begin{equation}\label{normphase}
\tilde{\varphi}_m(t)=\nu_m \ \! \Im \ \!\! {\int_{0}^t \!\!\! \ dt'} \big[ f_1(t')G_{2m}(t')+f_2(t')G_{1m}(t') \big] \ e^{i\nu_m t'},
\end{equation}
and $\Im$ denotes the imaginary part of the subsequent integral. One can demonstrate that, at the end of the sequence, $\tilde{\varphi}(t)$ does not depend on the values of $\eta_{1,2}$ and $\nu_{1,2}$ but only on the ratio between the mode frequencies, $\nu_2/\nu_1=\sqrt{3}$ (appendix \ref{app:PulseP}). Hence, the study of $\tilde{\varphi}(t)$ covers all situations regardless of the values of $\eta_{1,2}$ and $\nu_{1,2}$.
From the solution $U(t)$, it is clear that a $\pi$-pulse sequence of duration $T_{\textrm G}$, satisfying conditions
\begin{eqnarray}
\label{conditions}
G_{jm}(T_{\textrm G})=0, \ \ \varphi(T_{\textrm G})\neq 0,
\end{eqnarray}
results in a phase gate between the two qubits and leaves the hyperfine levels of the ions decoupled from their motion. To accomplish these two conditions, we will design a specific MW pulse sequence that, in addition, will eliminate the dephasing noise due to magnetic field fluctuations or frequency offsets on the registers. Note that, if not averaged out, the latter would spoil the generation of a high-fidelity two-qubit gate.
\subsection{The AXY-$n$ MW sequence}\label{subsect:MW}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{figures/Figures_1/AXY.pdf}
\caption{ AXY-4 pulse sequence. (a) Each composite pulse includes five $\pi$-pulses with tuneable distances among them. (b) Zoom on the composite X and Y pulses with the corresponding pulse phases in $H_c(t)$. (c) Modulation function associated with the composite pulses.}\label{AXYblock}
\end{figure}
In order to satisfy Eqs.~(\ref{conditions}) we propose to use variations of the adaptive XY-$n$ (AXY-$n$)
family of DD sequences introduced in Ref.~\cite{Casanova15} for nanoscale nuclear magnetic
resonance~\cite{Wu16,Wang16a, Casanova16,Wang17a, Casanova17}. Unlike previously used pulsed ion-trap
DD schemes~\cite{Piltz13}, AXY-$n_\textrm{ B}$ consists of $n_\textrm{ B}$ blocks of 5 non-equally separated $\pi$-pulses, as depicted
in Fig.~\ref{AXYblock} for the AXY-4 case, where the inter-pulse spacing can be arbitrarily tuned while the sequence remains robust~\cite{Casanova15}. Each $\pi$-pulse is applied along an axis in the $x$-$y$ plane of the Bloch sphere of each qubit, rotated by an angle $\phi$ with respect to the $x$ axis.
We define two blocks: the X block, made of 5 $\pi$-pulses along the axes corresponding to ${\vec\phi^x\equiv\{\phi^x_1,\phi^x_2,\phi^x_3,\phi^x_4,\phi^x_5\}=\{ \frac{\pi}{6}, \frac{\pi}{2}, 0 ,\frac{\pi}{2},\frac{\pi}{6}\}} + \zeta$, with $\zeta$ an arbitrary constant phase, and the Y block, with rotations along the same axes but shifted by a $\pi/2$ phase, i.e. ${\vec{\phi}^y=\{ \frac{\pi}{6} + \frac{\pi}{2}, \pi, \frac{\pi}{2} ,\pi,\frac{\pi}{6} + \frac{\pi}{2}\}} + \zeta$. The sequence then has $n_\textrm{ B}$ consecutive X and Y blocks with the same, tuneable, inter-pulse spacing. For example, the AXY-$4$ sequence is XYXY. As illustrated in Fig.~\ref{AXYblock}{(b)}, each block is symmetric and has a duration $\tau$. Therefore, within a five-pulse block the time of application of the first and second pulses, $\tau_a$ and $\tau_b$ where $\tau_a<\tau_b<\tau/2$, together with $\tau$ define the whole sequence.
At the end of any AXY-$n_\textrm{ B}$ sequence of length $n_\textrm{ B}\tau$, where $n_\textrm{ B}$ is an even integer, the function $G_{jm}(n_\textrm{ B}\tau)$ is zero for values of $\tau$ that are a multiple of the oscillation period of mode $m$, that is, for $\nu_m \tau=2\pi r$ with $r\in \mathbb{N}$. This is due to the translational symmetry of the $f_{j}(t)$ functions, for which $f_j(t'+\tau)=-f_j(t')$ and $f_j(t'+2\tau)=f_j(t')$ hold, meaning that
\begin{eqnarray}
G_{jm}(n_\textrm{ B}\tau) &=&\nu_m\int_{0}^{n_\textrm{ B}\tau}dt^{\prime}f_{j}(t^{\prime})e^{-i\nu_{m}t^{\prime}} \\
&=&\sum_{p=0}^{n_\textrm{ B}/2-1}\nu_m\int_{0}^{\tau}dt' f_j(t')\Big(e^{-i\nu_m[t'+2p\tau]}-e^{-i\nu_m[t'+(2p+1)\tau]}\Big)=0\nonumber
\end{eqnarray}
if $\nu_m \tau$ is a multiple of $2\pi$ and $n_\textrm{ B}$ is even.
This means that a qubit can be left in a product state with a specific motional mode $m$ regardless of the values of $\tau_a$ and $\tau_b$. Unfortunately, the two motional modes in our system have incommensurate oscillation frequencies (note that $\nu_2/\nu_1=\sqrt{3}$), which makes it impossible to find a $\tau$ that, independently of $\tau_a$ and $\tau_b$, decouples the qubits from both vibrational modes.
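This behaviour is easy to verify numerically. The following sketch (our own, not the simulation code of this thesis) evaluates $G_{jm}(t)$ piecewise-exactly for an AXY-4 sequence with $\tau=2\pi/\nu_1$, assuming the symmetric pulse placement $\tau_a$, $\tau_b$, $\tau/2$, $\tau-\tau_b$, $\tau-\tau_a$ within each block described above: $G_{j1}(4\tau)$ vanishes for arbitrary intra-block spacings, while $G_{j2}(4\tau)$ generically does not, since $\nu_2=\sqrt{3}\nu_1$.

```python
import cmath, math

def axy_pulse_times(tau, ta, tb, n_blocks=4):
    """Pulse times of n_blocks symmetric 5-pulse blocks (ta < tb < tau/2),
    with the assumed placement ta, tb, tau/2, tau-tb, tau-ta in each block."""
    base = [ta, tb, tau / 2, tau - tb, tau - ta]
    return [p * tau + t for p in range(n_blocks) for t in base]

def G(nu, t_end, pulses):
    """G(t) = nu * int_0^t f(t') exp(-i nu t') dt', with f = +/-1 flipping sign
    at each pulse; every constant segment is integrated exactly."""
    edges = [0.0] + sorted(pulses) + [t_end]
    total, sign = 0j, 1
    for a, b in zip(edges[:-1], edges[1:]):
        total += sign * (cmath.exp(-1j * nu * a) - cmath.exp(-1j * nu * b)) / 1j
        sign = -sign
    return total

nu1 = 2 * math.pi * 150e3            # center-of-mass mode
nu2 = math.sqrt(3) * nu1             # breathing mode
tau = 2 * math.pi / nu1              # one center-of-mass period (r = 1)
ta, tb = 0.17 * tau, 0.31 * tau      # arbitrary intra-block spacings
pulses = axy_pulse_times(tau, ta, tb)

print(abs(G(nu1, 4 * tau, pulses)))  # ~0: qubit decoupled from mode 1
print(abs(G(nu2, 4 * tau, pulses)))  # generically nonzero for mode 2
```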
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/Figures_1/GPlot.pdf}
\caption{Absolute value of $G_{j2}(t)$ after an AXY-$4$ sequence as a function of $\tau_a$ and $\tau_b$ ($\tau_a<\tau_b<\tau/2$), for (a): $\tau=1\times2\pi/\nu_1$, {(c)}: $\tau=2\times2\pi/\nu_1$, (e): $\tau=3\times2\pi/\nu_1$. The dark blue regions show the $\tau_a$ and $\tau_b$ values that correspond to a complete decoupling of the qubits with the modes at the end of the sequence. The phases $\tilde{\varphi}(t)$ are represented in {(b)}, {(d)}, {(f)} by the red panels.}
\label{Gplot}
\end{figure}
An AXY-4 sequence of a duration $4\tau$ such that $\tau=2\pi r/\nu_1$ makes $G_{j1}(4\tau)=0$ for any choice of $\tau_a$ and $\tau_b$, while we will numerically look for the values of $\tau_a$ and $\tau_b$ that minimise $|G_{j2}(4\tau)|$. For the sake of simplicity in the presentation of this part, we consider $f_1(t)=f_2(t)$, i.e. the same sequence is simultaneously applied to both qubits, leading to $G_{1m}=G_{2m}$. However, when considering real pulses, we will not use simultaneous driving in order to efficiently eliminate crosstalk effects, which leads to an optimal performance of the method, see section~\ref{subsect:Tailored}. In Fig.~\ref{Gplot}{(a)} we give a contour colour plot of $|G_{j2}(4\tau)|$ with $\tau=2\pi/\nu_1$ for all combinations of $\tau_a$ and $\tau_b$. The dark blue regions represent the values of $\tau_a$ and $\tau_b$ that minimise the $|G_{j2}(4\tau)|$ functions. Then, any pair of $\tau_{a,b}$ in that region defines a valid sequence for a two-qubit phase gate. In Fig.~\ref{Gplot}{(b)}, we give the corresponding value of $\tilde{\varphi}(4\tau)$ for the resulting two-qubit gate (red panels). In Figs.~\ref{Gplot}{(c)}, \ref{Gplot}{(d)} and \ref{Gplot}{(e)}, \ref{Gplot}{(f)} the same procedure is shown for $\tau=2\times2\pi/\nu_1$ and $\tau=3\times2\pi/\nu_1$, respectively, i.e. for values $r=2$ and $r=3$, obtaining several combinations of $\tau_a$ and $\tau_b$ that result in a phase gate. Finally, to recover the actual phase $\varphi(4\tau)$, we multiply $\tilde{\varphi}(4\tau)$ by $\eta_1^2=\frac{\hbar\gamma_e^2g_B^2}{64 M \nu^3}$, according to Eq.~(\ref{phase}), showing the dependence of the total phase $\varphi$ on $\nu$ and $g_B$.
\subsection{Tailored sequences and results}\label{subsect:Tailored}
We will benchmark the performance of our MW-pulse scheme by means of detailed numerical simulations. The total Hamiltonian governing the dynamics is $H+H_c$. In a rotating frame with respect to $H_0$ and after neglecting terms that rotate at a speed of tens of GHz (see appendices~\ref{app:InitialApp} and \ref{app:IntHamil} for more details), the effective Hamiltonian reads
\begin{eqnarray}\label{simstart}
\nonumber H^{\textrm I}(t)&=& \eta_1\nu_1 (b e^{-i\nu_1 t}+b^\dag e^{+i\nu_1 t}) \sigma_1^z - \eta_2\nu_2(c e^{-i\nu_2 t}+c^\dag e^{+i\nu_2 t})\sigma_1^z\\
\nonumber &+& \eta_1\nu_1(b e^{-i\nu_1 t}+b^\dag e^{+i\nu_1 t}) \sigma_2^z +\eta_2\nu_2(c e^{-i\nu_2 t}+c^\dag e^{+i\nu_2 t})\sigma_2^z\\
\nonumber &+& \frac{\Omega_1(t)}{2} \sigma_1^{\phi} + \frac{\Omega_1(t)}{2} (\sigma_2^+ e^{i\delta_2 t} e^{i\phi} + \textrm{H.c}.)\\
&+& \frac{\Omega_2(t)}{2} \sigma_2^{\phi} + \frac{\Omega_2(t)}{2} (\sigma_1^+ e^{i\delta_1 t} e^{i\phi} + \textrm{H.c}.).
\end{eqnarray}
Here, $\sigma_j^{\phi} = \sigma_j^+ e^{i\phi} + \sigma_j^- e^{-i\phi}$, and the last two lines contain both the resonant terms giving rise to the $\pi$-pulses, i.e. $ \frac{\Omega_1(t)}{2} \sigma_1^{\phi}$ and $ \frac{\Omega_2(t)}{2} \sigma_2^{\phi}$, as well as crosstalk contributions of each $\pi$-pulse on the off-resonant ion. The latter are $\frac{\Omega_1(t)}{2} (\sigma_2^+ e^{i\delta_2t} e^{i\phi} + \textrm{H.c}.)$ and $\frac{\Omega_2(t)}{2} (\sigma_1^+ e^{i\delta_1t} e^{i\phi} + \textrm{H.c}.)$, where $\delta_2 = - \delta_1 = \omega_2 - \omega_1$. We use Eq.~(\ref{simstart}) as the starting point of our simulations without any further assumptions. In addition, our numerical simulations include motional decoherence described by a Lindblad equation accounting for an environment at a temperature of 50 K as well as static errors on $\Omega_{1,2}$, $\omega_{1,2}$, and $\nu$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\linewidth]{figures/Figures_1/ComSeq.pdf}
\caption{{(a)} Pulse sequence on the first (second) ion, upper (bottom) panel. The first (second) ion is driven with an AXY-4 sequence, red (yellow) blocks represent $\pi$-pulses. Each pulse on the second ion is separated by $\Delta t$ from the pulses acting on the first ion. {(b)} Zoom on two pulses. We can observe the propagators leading to $\pi$-pulses, i.e. $\exp{[-i \frac{\Omega_{1,2}}{2} \sigma_{1,2}^{\phi} t_\pi ]}$
(red and yellow blocks), and their unwanted side effects on the adjacent ion, $\exp{[i\frac{\delta_{2,1}}{2} \sigma_{2,1}^{z} t_\pi]}$ (empty blocks). }
\label{combinedAXY}
\end{figure}
To get rid of crosstalk effects, we use a DD scheme acting non-simultaneously on both ions that, at the same time, meets the conditions in Eqs.~(\ref{conditions}) and gives rise to a tuneable phase gate between the ions. In this respect, one can demonstrate that a term like
\begin{equation}
H=\frac{\Omega_1(t)}{2} \sigma_1^{\phi} + \frac{\Omega_1(t)}{2} (\sigma_2^+ e^{i\delta_2 t} e^{i\phi} + \textrm{H.c}.),
\end{equation}
for a final time $t^{(1)}_\pi = \frac{\pi}{\Omega_1}$, i.e. the required time for a $\pi$-pulse on the first ion, has the associated propagator
\begin{equation}
\label{crosstalk}
U_{t^{(1)}_\pi}= e^{-i \frac{\Omega_1}{2} \sigma_1^{\phi} t_\pi } e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi},
\end{equation}
if and only if the Rabi frequency $\Omega_1$ satisfies
\begin{equation}
\Omega_1=\frac{\delta_2}{\sqrt{4k^2-1}}, \quad \mbox{with} \ k\in \mathbb{N}.
\end{equation}
See appendix~\ref{app:PulseP} for a demonstration of this. In the same manner, the term $ \frac{\Omega_2(t)}{2} \sigma_2^{\phi} + \frac{\Omega_2(t)}{2} (\sigma_1^+ e^{i\delta_1 t} e^{i\phi} + \textrm{H.c}.)$
gives rise to $U_{t^{(2)}_\pi} = e^{-i \frac{\Omega_2}{2} \sigma_2^{\phi} t_\pi } e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi}$ under the conditions $t^{(2)}_\pi = \frac{\pi}{\Omega_2}$ and
$\Omega_2=|\delta_1|/\sqrt{4{k}^2-1}$, with $k\in \mathbb{N}$. Hence, when the MW driving is applied non-simultaneously over the registers, one can clearly argue that a $\pi$-pulse on the first ion induces a dephasing-like propagator on the second ion (i.e. $e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}$) and vice versa. It turns out that our DD sequence successfully eliminates such undesired contributions.
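The origin of this condition on the Rabi frequency can be checked with a short numerical integration (a sketch of ours, separate from the analytical proof in the appendix): when $\Omega=\delta/\sqrt{4k^2-1}$, the generalized Rabi frequency $\sqrt{\Omega^2+\delta^2}=2k\Omega$ completes $k$ full rotations during $t_\pi$, so the off-resonant ion only acquires the dephasing $e^{i\delta t_\pi\sigma^z/2}$ up to a global phase. A minimal check for $k=1$ and $\phi=0$:

```python
import numpy as np

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                 # sigma^-

def step(H, dt):
    """exp(-i H dt) for a traceless Hermitian 2x2 H (uses H^2 = a^2 * identity)."""
    a = np.sqrt(np.trace(H @ H).real / 2)
    return np.cos(a * dt) * np.eye(2) - 1j * (np.sin(a * dt) / a) * H

def crosstalk_fidelity(delta, k, steps=4000):
    """|Tr(target^dag U)| / 2 for the off-resonant drive; 1.0 = equal up to phase."""
    omega = delta / np.sqrt(4 * k**2 - 1)
    t_pi = np.pi / omega
    dt = t_pi / steps
    U = np.eye(2, dtype=complex)
    for n in range(steps):
        t = (n + 0.5) * dt                        # midpoint rule
        H = omega / 2 * (sp * np.exp(1j * delta * t) + sm * np.exp(-1j * delta * t))
        U = step(H, dt) @ U
    target = np.diag([np.exp(1j * delta * t_pi / 2), np.exp(-1j * delta * t_pi / 2)])
    return abs(np.trace(target.conj().T @ U)) / 2

print(crosstalk_fidelity(delta=1.0, k=1))        # close to 1.0
```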
Two blocks of our non-simultaneous AXY-$n_\textrm{ B}$ sequence are depicted in Fig.~\ref{combinedAXY}{(a)}, where one has to select $\tau$, $\tau_{a,b}$ and $\Delta t$. While $\tau$ and $\tau_{a,b}$ define the sequence acting on the first ion, a temporal translation $\Delta t$ of each $\pi$-pulse sets the sequence on the second ion. Note that $\Delta t$ must satisfy $\Delta t>t_{\pi}$ to ensure that there is no pulse overlap, see Fig.~\ref{combinedAXY}{(b)}. As we said before, the construction in Fig.~\ref{combinedAXY}{(a)} eliminates the dephasing terms $e^{i\frac{\delta_{2,1}}{2} \sigma_{2,1}^{z} t_\pi}$. For example, the propagator for the first ion after a XY block $U^{(1)}_\textrm{XY}$, upper panel in Fig.~\ref{combinedAXY}{(a)}, reads
\begin{eqnarray}\label{firstprop}
U^{(1)}_\textrm{XY}&=&\bigg[ e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi}\sigma_1^{\phi_5^y}\bigg]\bigg[e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi}\sigma_1^{\phi_4^y}\bigg]\bigg[e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi}\sigma_1^{\phi_3^y}\bigg] \bigg[ e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi}\sigma_1^{\phi_2^y}\bigg]\bigg[ e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi}\sigma_1^{\phi_1^y}\bigg]\nonumber\\&&
\bigg[e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi}\sigma_1^{\phi_5^x}\bigg] \bigg[ e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi}\sigma_1^{\phi_4^x}\bigg] \bigg[e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi} \sigma_1^{\phi_3^x} \bigg]\bigg[e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi} \sigma_1^{\phi_2^x} \bigg]\bigg[e^{i\frac{\delta_1}{2} \sigma_1^{z} t_\pi} \sigma_1^{\phi_1^x}\bigg]\nonumber \\
&=& \sigma_1^{\phi_5^y} \sigma_1^{\phi_4^y} \sigma_1^{\phi_3^y} \sigma_1^{\phi_2^y} \sigma_1^{\phi_1^y} \sigma_1^{\phi_5^x} \sigma_1^{\phi_4^x} \sigma_1^{\phi_3^x} \sigma_1^{\phi_2^x} \sigma_1^{\phi_1^x},
\end{eqnarray}
where the last equality can be achieved using $\{\sigma_1^z, \sigma_1^{\phi^{x,y}}\} = 0$. Equation~(\ref{firstprop}) describes a situation without motional degrees of freedom. However, the cancellation of the dephasing terms is still valid if one includes the spin-motion coupling terms because they depend on $\sigma^z_{1,2}$, see Eq.~(\ref{Hamiltonianbare}), and the operators $e^{i\frac{\delta_{1,2}}{2} \sigma_{1,2}^z t_{\pi}}$ commute with them, leading to the same cancellation.
In the same manner, one can find the propagator for the second ion $U^{(2)}_\textrm{ XY}$, see lower panel in Fig.~\ref{combinedAXY}{(a)}. This propagator reads
\begin{eqnarray}\label{secondprop}
U^{(2)}_\textrm{XY}&=&\bigg[\sigma_2^{\phi_5^y} e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}\bigg]\bigg[\sigma_2^{\phi_4^y} e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}\bigg]\bigg[\sigma_2^{\phi_3^y} e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}\bigg] \bigg[\sigma_2^{\phi_2^y} e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}\bigg]\bigg[ \sigma_2^{\phi_1^y} e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}\bigg] \nonumber\\ &&
\bigg[\sigma_2^{\phi_5^x} e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}\bigg] \bigg[\sigma_2^{\phi_4^x} e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}\bigg] \bigg[\sigma_2^{\phi_3^x} e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi} \bigg]\bigg[\sigma_2^{\phi_2^x} e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}\bigg]\bigg[ \sigma_2^{\phi_1^x}e^{i\frac{\delta_2}{2} \sigma_2^{z} t_\pi}\bigg]\nonumber \\
&=& \sigma_2^{\phi_5^y} \sigma_2^{\phi_4^y} \sigma_2^{\phi_3^y} \sigma_2^{\phi_2^y} \sigma_2^{\phi_1^y} \sigma_2^{\phi_5^x} \sigma_2^{\phi_4^x} \sigma_2^{\phi_3^x} \sigma_2^{\phi_2^x} \sigma_2^{\phi_1^x}.
\end{eqnarray}
We can see that after an XY block there is no contribution of dephasing-like operators, see the last lines in Eqs.~(\ref{firstprop}) and~(\ref{secondprop}). Hence, a sequence XYXY applied to both ions following the scheme in Fig.~\ref{combinedAXY}{(a)} will also share this property, with the additional advantage of being robust against control errors~\cite{Casanova15}.
After simulating the application of a non-simultaneous AXY-4 sequence, we show the results (infidelities) in Table~\ref{table1}. It is noteworthy that our numerical results have been calculated including motional decoherence. More specifically, we have added to the dynamics governed by the Hamiltonian in Eq.~(\ref{simstart}) a dissipative term of the form, see for example~\cite{Brownnutt15},
\begin{eqnarray}\label{MotionalHeating}
D(\rho) &=& \frac{\Gamma_b}{2}\Big\{(\bar{N}_b+1) (2 b \rho b^{\dag} - b^{\dag} b \rho - \rho b^{\dag} b) + \bar{N}_b (2 b^\dag \rho b - b b^{\dag} \rho - \rho b b^{\dag})\Big\}\\
&+&\frac{\Gamma_c}{2}\Big\{(\bar{N}_c+1) (2 c \rho c^{\dag} - c^{\dag} c \rho - \rho c^{\dag} c) + \bar{N}_c (2 c^\dag \rho c - c c^{\dag} \rho - \rho c c^{\dag})\Big\}, \nonumber
\end{eqnarray}
where an estimation of the values for the heating rates $\Gamma_{b,c}$ is given in appendix~\ref{app:Heating} for each of the specific examples considered here, while $\bar{N}_{b,c} =1/(e^{\hbar \nu_{1,2}/k_{\textrm B} T}-1)$, where we have considered a temperature of $T=50$~K.
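For the sub-MHz mode frequencies considered here, a 50 K environment is deep in the high-temperature regime, $\hbar\nu_{1,2}\ll k_\textrm{B}T$, so $\bar N_{b,c}\approx k_\textrm{B}T/\hbar\nu_{1,2}$ is of order $10^6$-$10^7$ phonons. A quick check of ours (standard constants assumed, taking $\nu_1=(2\pi)\times150$ kHz as in the first example below):

```python
import math

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K

def nbar(nu, T):
    """Thermal occupation of a mode of frequency nu (rad/s) at temperature T (K)."""
    return 1.0 / math.expm1(hbar * nu / (kB * T))

T = 50.0
nu1 = 2 * math.pi * 150e3            # center-of-mass mode
nu2 = math.sqrt(3) * nu1             # breathing mode
for name, nu in [("center-of-mass", nu1), ("breathing", nu2)]:
    print(f"{name}: N ~ {nbar(nu, T):.3g} (high-T limit {kB * T / (hbar * nu):.3g})")
```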
\begin{table}
\centering
\caption{Infidelities (I) for two-qubit gates after the application of 20 imperfect MW pulses on each ion, according to our AXY-4 protocol, for several initial states, $\psi_j$, and different experimental conditions, see main text. We focus on $\pi/4$ and $\pi/8$ entangling phase gates; however, our method is general and can achieve any phase. Initial states, up to normalisation, are $\psi_{1}=|\textrm{g}\rangle \otimes (|\textrm{g}\rangle + |\textrm{e}\rangle)$, $\psi_{2}= (|\textrm{g}\rangle + |\textrm{e}\rangle) \otimes (|\textrm{g}\rangle + |\textrm{e}\rangle)$, $\psi_{3}=|\textrm{g}\rangle \otimes (|\textrm{g}\rangle + i |\textrm{e}\rangle) + |\textrm{e}\rangle \otimes |\textrm{e}\rangle$, $\psi_{4}=|\textrm{e}\rangle \otimes (|\textrm{g}\rangle - i |\textrm{e}\rangle) + |\textrm{g}\rangle \otimes |\textrm{g}\rangle$, and $\psi_{5}=|\textrm{e}\rangle \otimes (|\textrm{g}\rangle - i |\textrm{e}\rangle) + |\textrm{g}\rangle \otimes (|\textrm{g}\rangle + i |\textrm{e}\rangle)$.}
\label{table1}
\begin{tabular}{{ |c | c | c | c | c| c|}}
\hline
I ($\times 10^{-4}$) &exp$( i\frac{\pi}{4} \sigma_1^z \sigma_2^z)$ &exp$( i\frac{\pi}{8} \sigma_1^z \sigma_2^z)$&exp$( i\frac{\pi}{4} \sigma_1^z \sigma_2^z)$ &exp$( i\frac{\pi}{8} \sigma_1^z \sigma_2^z)$\\
& $\eta_1 =0.069 $& $\eta_1=0.069 $ &$\eta_1=0.078$ & $\eta_1=0.078$\\
&$T_{\textrm G} = 80 \ \mu$s&$T_{\textrm G} = 80 \ \mu$s&$T_{\textrm G} = 36.3 \ \mu$s&$T_{\textrm G} = 36.3 \ \mu$s \\
\hline
$\psi_1$ & $1.172$ &$0.128$ & $2.060$ &$0.144$ \\
\hline
$\psi_2$ & $2.229 $ &$0.136 $ & $4.905$ & $0.304 $ \\
\hline
$\psi_3$ & $3.052$ &$0.116 $ & $5.899 $ & $0.371$ \\
\hline
$\psi_4$ & $ 4.631 $ &$0.172 $ & $5.946$ & $0.413$ \\
\hline
$\psi_5$ & $3.250$ &$0.110 $ & $4.635 $ & $0.293$ \\
\hline
\end{tabular}
\end{table}
We computed the gate infidelity for the following situations. Firstly, we simulated the gates exp$( i\frac{\pi}{4} \sigma_1^z \sigma_2^z)$ and exp$( i\frac{\pi}{8} \sigma_1^z \sigma_2^z)$, second and third columns in Table~\ref{table1}, with a gate time of $80 \ \mu$s for a magnetic field gradient of $g_B=150$ T/m~\cite{Weidt16}. We designed the MW sequence such that $\tau= 3\times2\pi r/\nu_1$, leading to a gate time which is 12 times the period of the center-of-mass mode. Other relevant parameters are $\nu_1=\nu_2/\sqrt{3} = (2\pi)\times 150$ kHz, a $\pi$-pulse time of $\approx 75$ ns, which implies a Rabi frequency of $\Omega_1=\Omega_2=\Omega\approx (2\pi)\times 6.63$ MHz, and $\omega_2 - \omega_1 = (2\pi)\times 25.7$ MHz, while we have chosen $\Delta t$ as 1.05 times the $\pi$-pulse time. Both bosonic modes are initially in a thermal state\footnote{A thermal state of a bosonic mode is defined as $\rho_T=\sum_{n=0}^{\infty} \frac{ \bar{n}^n}{(\bar{n}+1)^{n+1}}|n\rangle\langle n|$.} with $0.2$ phonons each~\cite{Weidt15}. In addition to heating processes with rates $\Gamma_b \bar{N}_b \approx (2\pi) \times 133$ Hz and $\Gamma_c \bar{N}_c \approx (2\pi) \times 9$~Hz (appendix~\ref{app:Heating}), our simulations include a Rabi frequency mismatch of $1\%$, a trap frequency shift of $0.1\%$, and an energy shift of $(2\pi)\times 20$ kHz on both ions.
Secondly, we also target the gates exp$( i\frac{\pi}{4} \sigma_1^z \sigma_2^z)$ and exp$( i\frac{\pi}{8} \sigma_1^z \sigma_2^z)$, fourth and fifth columns in Table~\ref{table1}, but now with $g_B=300$ T/m. The gate time is $36.3 \ \mu$s, i.e. 8 times the oscillation period of the center-of-mass mode, whose frequency is $\nu=\nu_1=\nu_2/\sqrt{3} = (2\pi)\times 220$ kHz. Other parameters are $\Omega\approx (2\pi)\times 10$ MHz, a $\pi$-pulse time of $\approx 49$ ns, and $\omega_2 - \omega_1 = (2\pi)\times 39.8$ MHz, while the energy shift upon the ions, the errors on Rabi and trap frequencies, $\Delta t$, and the initial bosonic states are the same as in the previous case. Because of the new value of $g_B$, the
heating rates had to be recalculated, leading to $\Gamma_b \bar{N}_b \approx (2\pi) \times 248$ Hz and $\Gamma_c \bar{N}_c \approx (2\pi) \times 16$ Hz.
In Table~\ref{table1} we find that, even in the presence of the errors we have included, our method leads to fast two-qubit gates with fidelities exceeding 99.9\%. Finally, we note that higher values of $g_B$ will result in faster gates.
In summary, we have demonstrated that pulsed DD schemes are efficient generators of fast and robust two-qubit gates. Our MW sequence forces the two motional modes in a certain direction to cooperate and makes the gate fast and robust against external noise sources including motional heating.
\section{Hybrid MW radiation patterns for high-fidelity quantum gates}
\label{sect:Slow_ions}
In this section, we present a method to generate two-qubit gates among trapped ions that combines pulsed and continuous MW radiation patterns in the far-field regime. As opposed to the previous method, this one is designed to be applied with a small LD parameter $\eta\sim0.01$, and it is directly applicable in chains with more than two ions as the gate is mediated by a single motional mode. Similar to the previous method, this scheme is also protected against magnetic fluctuations, errors on the delivered MW fields, and crosstalk effects caused by the use of long wavelength MW radiation. Moreover, our protocol is flexible since it runs with arbitrary values of the MW power. In particular, inspired by results in Refs.~\cite{Casanova18MW,Casanova19}, this method involves phase-modulated drivings, phase flips, and refocusing $\pi$ pulses leading to high-fidelity entangling gates within current experimental limitations. We numerically test the performance of our gates in the presence of magnetic fluctuations of different intensities, deviations on the MW Rabi frequencies, as well as under motional heating. We demonstrate the achievement of fidelities largely exceeding $99\%$ in realistic experimental scenarios, while values larger than $99.9\%$ are reachable with small improvements.
\subsection{Method: bichromatic gate with continuous DD}\label{subsect:method}
As in the previous section, we consider two $^{171}$Yb$^+$ ions sitting next to each other in the longitudinal direction $z$ of a linear harmonic trap. We define a qubit using two states of the 6s$^2S_{1/2}$ hyperfine manifold. These are $|{\textrm g}\rangle\equiv\{F=0,m_F=0\}$, and $|{\textrm e}\rangle\equiv\{F=1,m_F=1\}$. Due to the Zeeman effect, the frequency of the $j$th qubit is $\omega_j=\omega_0 + \gamma_e B(z^0_j)/2$, where $\omega_0=(2\pi)\times12.6$ GHz, $\gamma_e=(2\pi)\times 2.8$ MHz/Gauss, see~\cite{Olmschenk07}, and $z^0_j$ is the equilibrium position of the ion. The presence of a constant magnetic field gradient $\partial B/\partial z=g_B$ in the $z$ direction results in different values of $\omega_j$ for each qubit, which allows individual control on each ion with MW fields~\cite{Arrazola18, Piltz14}. The Hamiltonian of the system can be written as
\begin{equation}\label{Hsys}
H = \frac{\omega_1}{2}\sigma_1^z +\frac{\omega_2}{2}\sigma_2^z +\nu b^\dagger b + \eta \nu (b+b^\dagger)S_z \, ,
\end{equation}
where $S_z=\sigma^z_1+\sigma_2^z$, $b^\dagger$ ($b$) is the creation (annihilation) operator corresponding to the center-of-mass mode, $\nu$ is the trap frequency, and $\eta=\frac{\gamma_eg_B}{8\nu}\sqrt{\frac{\hbar}{M\nu}}$ is the LD parameter that quantifies the strength of the qubit-boson interaction.
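As a quick consistency check (illustrative only, with textbook values for the physical constants), the LD parameter can be evaluated for the parameters used later in this section, $g_B=20.9$ T/m and $\nu=(2\pi)\times138$~kHz, which should reproduce the quoted $\eta\approx 0.011$:

```python
import math

HBAR = 1.054571817e-34          # J s
AMU = 1.66053906660e-27         # kg
M_YB = 171 * AMU                # mass of a 171Yb+ ion (approximate)
GAMMA_E = 2 * math.pi * 2.8e10  # gyromagnetic ratio: (2 pi) x 2.8 MHz/Gauss, in rad/s/T

def lamb_dicke(g_b, nu):
    """eta = gamma_e * g_B / (8 nu) * sqrt(hbar / (M nu)), with nu in rad/s."""
    return GAMMA_E * g_b / (8 * nu) * math.sqrt(HBAR / (M_YB * nu))

eta = lamb_dicke(20.9, 2 * math.pi * 138e3)  # g_B = 20.9 T/m, nu = (2 pi) x 138 kHz
```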
Bichromatic MW drivings, at detuning $\delta$, can be applied to both ions. Then, Hamiltonian~(\ref{Hsys}) in a rotating frame with respect to $H_0=\frac{\omega_1}{2}\sigma_1^z +\frac{\omega_2}{2}\sigma_2^z +\nu b^\dagger b$ reads (see appendix~\ref{app:InteractionPic} for additional details about the involved interaction pictures)
\begin{equation}\label{HMS}
H= \eta \nu (be^{-i\nu t}+b^\dagger e^{i\nu t})S_z + \Omega\cos{(\delta t)}S_x.
\end{equation}
For the sake of clarity, we have omitted the presence of the breathing mode in Eqs.~(\ref{Hsys}) and~(\ref{HMS}), as well as the crosstalk terms in Eq.~(\ref{HMS}). However, these will be included in our numerical simulations to demonstrate that they have a negligible impact on our scheme. Furthermore, in appendix~\ref{app:CompleteHamil} one can find a complete description of the system Hamiltonian. Now, we move to a second rotating frame with respect to $\Omega\cos{(\delta t)}S_x$ (this is known as the bichromatic interaction picture~\cite{Sutherland19,Roos08,Sutherland20}) and use the Jacobi-Anger expansion ($e^{iz\sin{(\theta)}} = \sum_{n=-\infty}^{+\infty} J_n(z) \ e^{i n\theta}$, with $J_n(z)$ being Bessel functions of the first kind) to obtain
\begin{equation}\label{HMSI}
H= \eta \nu (be^{-i\nu t}+\textrm{H.c.})\Big\{J_0\Big(\frac{2\Omega}{\delta}\Big)S_z+2J_1\Big(\frac{2\Omega}{\delta}\Big)\sin{(\delta t)}S_y\Big\}.
\end{equation}
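The quality of the Jacobi-Anger truncation used above is easy to check numerically. The following self-contained sketch (with Bessel functions evaluated from their power series; illustrative values for the argument) compares $e^{iz\sin\theta}$ with its $|n|\le 1$ truncation for a small argument $z=2\Omega/\delta$:

```python
import cmath
import math

def bessel_j(n, z, terms=20):
    """Bessel function of the first kind J_n(z), integer n, via its power series."""
    if n < 0:
        return (-1) ** (-n) * bessel_j(-n, z, terms)
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (z / 2) ** (2 * k + n) for k in range(terms))

def jacobi_anger(z, theta, n_max):
    """Truncated Jacobi-Anger expansion of exp(i z sin(theta))."""
    return sum(bessel_j(n, z) * cmath.exp(1j * n * theta)
               for n in range(-n_max, n_max + 1))

z, theta = 0.2, 1.3   # z = 2*Omega/delta is small when Omega << delta
exact = cmath.exp(1j * z * math.sin(theta))
approx1 = jacobi_anger(z, theta, 1)   # the |n| <= 1 truncation used in the text
```

For $z=0.2$ the truncation error is of order $J_2(z)\sim z^2/8$, i.e. below the percent level, which justifies keeping only the $J_0$ and $J_1$ terms when $\Omega\ll\delta$.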
Note that we keep only terms up to first order in the Jacobi-Anger expansion, since higher-order terms do not contribute significantly when $\Omega\ll\delta$. If we choose $\delta=\nu+\xi$ with $\xi\ll \nu$, and neglect all terms that rotate with $\nu$ by invoking the RWA, we find the gate Hamiltonian
\begin{equation}\label{HMSII}
H_{\textrm G} = i\eta\nu J_1\Big(\frac{2\Omega}{\delta}\Big) \Big\{b^\dagger e^{-i\xi t} -\textrm {H.c.}\Big\} S_y \approx i\frac{\eta\nu\Omega}{\delta} \Big\{b^\dagger e^{-i\xi t} -\textrm {H.c.}\Big\} S_y,
\end{equation}
where we used $J_1(x)\approx x/2$ for small $x$. For evolution times $t_n=2\pi n_\textrm{RT}/\xi$, where $n_\textrm{RT} \in \mathbb{N}$, the time-evolution operator associated with Eq.~$(\ref{HMSII})$ is
\begin{equation}\label{HMSUnitary}
U_{\textrm G}(t_n)= \exp{(i\theta_n S_y^2)}
\end{equation}
with $\theta_n=2\pi n_\textrm{RT} \eta^2 \nu^2J_1^2(2\Omega/\delta)/\xi^2\approx 2\pi n_\textrm{RT} \eta^2 \Omega^2/\xi^2$~\cite{Sorensen99,Sorensen00,Solano99}. By tuning the parameters such that $\theta_n=\pi/8$, the propagator $U_{\textrm G}$ evolves the initial (separable) state $|\textrm{g,g}\rangle$ into the maximally entangled Bell state $\frac{1}{\sqrt{2}} (|\textrm{g,g}\rangle+i|\textrm{e,e}\rangle)$.
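The claim that $U_{\textrm G}=\exp{(i\frac{\pi}{8} S_y^2)}$ maps $|\textrm{g,g}\rangle$ to a maximally entangled state can be verified directly in the four-dimensional Hilbert space. The sketch below (illustrative; $|\textrm{g}\rangle$ is encoded as the first basis state, and phase conventions may differ from the text) checks that the reduced single-qubit state has purity $1/2$:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)
Sy = np.kron(sy, I2) + np.kron(I2, sy)   # S_y = sigma_y^(1) + sigma_y^(2)

def expm_herm(theta, h):
    """exp(i * theta * h) for a Hermitian matrix h, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(h)
    return vecs @ np.diag(np.exp(1j * theta * vals)) @ vecs.conj().T

u_gate = expm_herm(np.pi / 8, Sy @ Sy)          # U_G = exp(i pi/8 S_y^2)
psi = u_gate @ np.array([1, 0, 0, 0], complex)  # initial state |g,g>

# Partial trace over qubit 2; purity Tr(rho_1^2) = 1/2 signals maximal entanglement.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho1 = np.einsum('ijkj->ik', rho)
purity = np.trace(rho1 @ rho1).real
```

Only the $|\textrm{g,g}\rangle$ and $|\textrm{e,e}\rangle$ amplitudes are populated, consistent (up to a global phase and sign conventions) with the Bell state quoted in the text.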
In order to protect this gate scheme from magnetic field fluctuations of the kind $\frac{\epsilon_1(t)}{2}\sigma_1^z+\frac{\epsilon_2(t)}{2}\sigma_2^z$ ($\epsilon_{1,2}(t)$ being stochastic functions) we introduce an additional MW driving that will suppress their effect. We select a MW driving such that it enters in Eq.~(\ref{HMS}) as a carrier term of the form $\frac{\Omega_\textrm{DD}}{2}S_y$ leading to
\begin{equation}\label{HMSDD}
H= \eta \nu (be^{-i\nu t}+b^\dagger e^{i\nu t})S_z + \Omega\cos{(\delta t)}S_x + \frac{\Omega_\textrm{DD}}{2}S_y.
\end{equation}
In the bichromatic picture, Eq.~(\ref{HMSDD}) reads (note that in the following, we adopt the convention $J_{0,1}\Big(\frac{2\Omega}{\delta}\Big) \equiv J_{0,1}$)
\begin{eqnarray}\label{HMSIDD}
H= \eta \nu (be^{-i\nu t}+b^\dagger e^{i\nu t})\Big\{J_0S_z+2J_1\sin{(\delta t)}S_y\Big\}\nonumber\\
+\frac{\Omega_\textrm{DD}}{2}\Big\{J_0S_y-2J_1\sin{(\delta t)}S_z\Big\}.
\end{eqnarray}
The new driving $\frac{\Omega_\textrm{DD}}{2}S_y$ leads to the appearance of the second line in Eq.~(\ref{HMSIDD}).
Here, the $\frac{\Omega_\textrm{DD}}{2}J_0 S_y$ term is responsible for removing magnetic field fluctuations, while $J_1\Omega_\textrm{DD}\sin{(\delta t)}S_z$ interferes with the gate and has to be eliminated. This term can be neglected under a RWA only if $\Omega_\textrm{DD}\ll\delta$; thus, its presence limits the range of applicability of our method, since larger values of $\Omega_\textrm{DD}$ are desirable to better remove the effect of magnetic field fluctuations. To overcome this problem, we introduce in all MW drivings, i.e. those leading to the terms $\Omega\cos{(\delta t)}S_x$ and $\frac{\Omega_\textrm{DD}}{2}S_y$ in Eq.~(\ref{HMSDD}), a time-dependent phase that eliminates $J_1\Omega_\textrm{DD}\sin{(\delta t)}S_z$.
This time-dependent phase reads
\begin{equation}\label{TP}
\phi(t)=4\frac{\Omega_\textrm{DD} J_1}{\delta J_0}\sin^2{(\delta t/2)}.
\end{equation}
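A quick numerical sanity check of this choice (with illustrative parameter values; $\Omega_\textrm{DD}$ taken from the caption of Fig.~\ref{fig:StaticErrors}) confirms that the $S_z$ coefficient generated by the frame change, $J_0\dot{\phi}(t)/2$, exactly matches the interfering term $\Omega_\textrm{DD}J_1\sin{(\delta t)}$ it is meant to cancel:

```python
import math

def bessel_j(n, z, terms=20):
    """Bessel function of the first kind J_n(z), integer n, via its power series."""
    if n < 0:
        return (-1) ** (-n) * bessel_j(-n, z, terms)
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + n))
               * (z / 2) ** (2 * k + n) for k in range(terms))

delta = 2 * math.pi * 150e3        # detuning (illustrative)
omega = 0.1 * delta                # Omega << delta
omega_dd = 2 * math.pi * 49e3      # carrier Rabi frequency (value from the figure)
j0, j1 = bessel_j(0, 2 * omega / delta), bessel_j(1, 2 * omega / delta)

def phi(t):
    return 4 * omega_dd * j1 / (delta * j0) * math.sin(delta * t / 2) ** 2

t, dt = 1.7e-6, 1e-10
phidot = (phi(t + dt) - phi(t - dt)) / (2 * dt)   # numerical derivative
lhs = j0 * phidot / 2                             # S_z coefficient from the frame change
rhs = omega_dd * j1 * math.sin(delta * t)         # interfering term to be cancelled
```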
\begin{figure}[t!]
\centering
\includegraphics[width=1\textwidth]{figures/Figures_1/StaticErrors.pdf}
\caption{Resilience to constant errors for $g_{B}=20.9$ T/m and $\nu=(2\pi)\times138$~kHz ($\eta=0.011$) and $\Omega=(2\pi)\times26$~kHz. (a) Bell state fidelity for $\Omega_\textrm{DD}=0$ (blue dashed curve) and $\Omega_\textrm{DD}=(2\pi)\times49$~kHz (green solid curve). Right and left panels show the cases with and without phase modulation respectively. (b) Bell state fidelity for $n_{\textrm PF}=1$ (solid green curve) and for $n_{\textrm PF}=0$ (red dashed curve). (c) Bell state fidelity with respect to constant shifts in $\Omega(t)$.}\label{fig:StaticErrors}
\end{figure}
The presence of $\phi(t)$ changes Hamiltonian~(\ref{HMSDD}) to (see appendix~\ref{app:CompleteHamil})
\begin{equation}\label{HMSDDTP}
H= \eta \nu (be^{-i\nu t}+b^\dagger e^{i\nu t})S_z + \Omega\cos{(\delta t)}S^\parallel_\phi + \frac{\Omega_\textrm{DD}}{2}S^{\bot}_\phi,
\end{equation}
with $S^\parallel_\phi \equiv S^+e^{i\phi(t)}+\textrm{H.c.}$ and $S^\bot_\phi \equiv -iS^+e^{i\phi(t)}+\textrm{H.c.}$ In a rotating frame with respect to $-\frac{\dot{\phi}(t)}{2}S_z$ we find
\begin{equation}\label{HMSDDTPbt}
H=\Big\{ \eta \nu (be^{-i\nu t}+\textrm{H.c.})+\frac{\dot{\phi}(t)}{2}\Big\}S_z + \Omega\cos{(\delta t)}S_x+ \frac{\Omega_\textrm{DD}}{2}S_y.
\end{equation}
Now, in the bichromatic interaction picture, the previous Hamiltonian transforms as
\begin{eqnarray}\label{HMSIDDI}
\tilde{H}&=&\eta\nu (be^{-i\nu t}+\textrm{H.c.})\Big\{J_0S_z+2J_1\sin{(\delta t)}S_y\Big\} \\
&+&\frac{\tilde{\Omega}_\textrm{DD}}{2}S_y -\Omega_\textrm{DD}J_1^2/J_0\cos{(2\delta t)}S_y, \nonumber
\end{eqnarray}
where $\tilde{\Omega}_\textrm{DD}=J_0\Omega_\textrm{DD}(1+2J_1^2/J_0^2)$. Here we can see that, due to the action of $\phi(t)$, the interfering $J_1\Omega_\textrm{DD}\sin{(\delta t)}S_z$ term is removed. Instead, in Eq.~(\ref{HMSIDDI}) we find the term $\Omega_\textrm{DD}J_1^2/J_0\cos{(2\delta t)}S_y$, which has a small coupling constant ($\Omega_\textrm{DD}J_1^2/J_0$) and commutes with the gate Hamiltonian~(\ref{HMSII}). In Fig.~\ref{fig:StaticErrors}(a) (left panel) we show the obtained Bell state fidelity without phase modulation, i.e. by using Hamiltonian~(\ref{HMSDD}), for different values of a constant energy deviation $\epsilon$ in the qubit resonance frequencies. The blue dashed curve corresponds to the case $\Omega_\textrm{DD}=0$, where the scheme does not offer protection against $\epsilon$, while the green solid curve incorporates the driving leading to the carrier $\frac{\Omega_\textrm{DD}}{2}S_y$. Fig.~\ref{fig:StaticErrors}(a) (right panel) shows the case with phase modulation in Eq.~(\ref{HMSDDTP}), which achieves larger fidelities.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{figures/Figures_1/MWScheme_2.pdf}
\caption{Scheme of the control parameters. (a) Bichromatic driving $\Omega(t)=\Omega\cos{(\delta t)}$. (b) Phase modulation of the bichromatic driving. The change of sign during the second half of the evolution is due to the phase flip of the carrier driving. Here, $\phi_m=\max{(|\phi(t)|)}$. (c) Carrier driving acting equivalently on both ions except in the middle, where each ion undergoes a $180^{\circ}$ (red) and $360^{\circ}$ (green) rotation respectively. (d) Phase modulation of the DD field. During the second part of the evolution, the phase is flipped from $-\pi/2$ to $\pi/2$. } \label{fig:MWScheme}
\end{figure}
Our method can be further improved, since the first term in Eq.~(\ref{HMSIDDI}), i.e. $\eta\nu (be^{-i\nu t}+\textrm{H.c.}) J_0S_z$, leads to undesired accumulative effects that we can correct. These effects can be calculated in a rotating frame with respect to $\tilde{\Omega}_\textrm{DD}S_y/2$ by computing the second-order Hamiltonian, which reads
\begin{equation}\label{HMSDDIIPrecise}
\tilde{H}\approx H_{\textrm G} - g_{\tilde{\Omega}}(2b^\dagger b +1)S_y - \frac{g_{\nu}}{2}(S_x^2+S_z^2) ,
\end{equation}
where $g_{\tilde{\Omega}}=\frac{\tilde{\Omega}_\textrm{DD}\eta^2J_0^2}{1-\tilde{\Omega}_\textrm{DD}^2/\nu^2}$ and $g_\nu=\frac{\nu\eta^2J_0^2}{1-\tilde{\Omega}_\textrm{DD}^2/\nu^2}$, see appendix~\ref{app:SecondOrder}. Although small, the terms $ g_{\tilde{\Omega}}(2b^\dagger b +1)S_y $ and $ \frac{g_{\nu}}{2}(S_x^2+S_z^2)$ spoil a superior gate performance. Hence, we will eliminate them by introducing refocusing techniques. In particular, to nearly remove the $g_{\tilde{\Omega}}(2b^\dagger b +1)S_y$ term, we divide the evolution in two parts, and flip the phase of the carrier driving during the second part of the evolution. In this respect, a scheme of the control parameters can be seen in Fig.~\ref{fig:MWScheme}. This phase flip causes a change in the sign of $\Omega_\textrm{DD}$ (i.e. $\Omega_\textrm {DD} \rightarrow -\Omega_\textrm{DD}$), which acts as a refocusing of unwanted shifts in $S_y$. This strategy is also valid to minimise the errors due to constant shifts in $\Omega_\textrm{DD}$, as can be seen in Fig.~\ref{fig:StaticErrors}(b). Note that the phase flip of the carrier forces us to also change the sign of $\phi(t)$, since Eq.~(\ref{TP}) should hold during the implementation of the gate. As we will see later, performing a large number of phase flips ($n_\textrm{PF}$) will further suppress fluctuations on the carrier driving, while it also limits the possible values for $\Omega_\textrm{DD}$, see appendix~\ref{app:InteractionPic} for additional details.
A partial refocusing of the term $\frac{g_{\nu}}{2}(S_x^2+S_z^2)$ in Eq.~(\ref{HMSDDIIPrecise}) is also possible by rotating one of the qubits in the middle and at the end of the gate via $\pi$ pulses. In particular, if these $\pi$ pulses are performed along the $y$ axis, i.e., each $\pi$ pulse equals $\exp{(i\pi/2\sigma_1^y)}$, the $S_x^2$ and $S_z^2$ operators change their sign simultaneously, while $S_y^2$ remains unchanged. The combined action of phase flips and $\pi$ pulses allows us to approximate Eq.~(\ref{HMSDDIIPrecise}) as $\tilde{H}\approx H_{\textrm G}$. It is noteworthy that off-resonant vibrational modes would contribute accumulative factors similar to the last term in Eq.~(\ref{HMSDDIIPrecise}). These are then refocused by the two $\pi$ pulses, as we will show in our numerical simulations. As our method removes undesired effects due to additional vibrational modes, it is directly applicable to produce entangling gates between any two ions in a large chain. In addition, one can always concatenate sequences of two-qubit operations leading to multi-qubit gates~\cite{Nielsen10}.
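The refocusing property of the $\pi$ pulses can be verified at the operator level. The sketch below (illustrative only) checks that conjugation by $R=\exp{(i\pi/2\sigma_1^y)}$ leaves $S_y^2$ invariant while flipping the sign of the traceless parts of $S_x^2$ and $S_z^2$, i.e. $R S_{x,z}^2 R^\dagger + S_{x,z}^2 = 4\,\mathbb{1}$, so that these contributions average out over the two halves of the gate up to an irrelevant identity term:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2)

def collective(s):
    """Collective operator S = sigma^(1) + sigma^(2) on the two-qubit space."""
    return np.kron(s, I2) + np.kron(I2, s)

Sx2 = collective(sx) @ collective(sx)
Sy2 = collective(sy) @ collective(sy)
Sz2 = collective(sz) @ collective(sz)

R = np.kron(1j * sy, I2)   # exp(i pi/2 sigma_y) = i sigma_y, acting on qubit 1 only

def conj_by(u, a):
    return u @ a @ u.conj().T
```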
For completeness in our analysis, in Fig.~\ref{fig:StaticErrors}(c) we plot the Bell state fidelity with respect to constant shifts in $\Omega(t)$. Note that our scheme also shows robustness with respect to this kind of error, since a shift in $\Omega(t)$ rotates with frequency $\delta$, which diminishes its effect.
\subsection{Optimal refocusing and results} To demonstrate the performance of our method in realistic experimental scenarios, we calculate the Bell state fidelity with fluctuating errors in both magnetic and driving fields, as well as in the presence of motional heating. Furthermore, our simulations include crosstalk terms, and the off-resonant breathing mode (the initial state of both motional modes is a thermal state with an average number of phonons $\bar{n}=1$). The results are shown in Fig.~\ref{fig:FluctuatingErrors} for two different parameter regimes. These are $g_{B}=20.9$ T/m, $\nu=(2\pi)\times138$~kHz and $\Omega= (2\pi)\times37$~kHz in the left panel, and $g_{B}=38.5$ T/m, $\nu=(2\pi)\times207$~kHz and $\Omega= (2\pi)\times26.6$~kHz in the right panel (note both regimes have a LD parameter $\eta=0.011$).
Blue squares indicate Bell state infidelities obtained with our method. Here, a total number of 31 phase flips were employed. Notice that, for ${\Omega}_\textrm{DD}=0$, fidelities are below $99\%$ even without fluctuating errors. We identify that this is due to the crosstalk of the MW driving fields at Rabi frequency $\Omega$, since these induce frequency shifts in the off-resonant qubits. If $\Omega_\textrm{DD} \neq 0$, these energy shifts are cancelled and we find fidelities ranging from $99.9\%$ to $99.99\%$. Note that these values are obtained using moderate MW radiation power, rather than with Rabi frequencies on the order of MHz as in section~\ref{sect:1_PDD}. The parameters in the right panel are more favourable for several reasons: first, the Rabi frequency is smaller than in the left case and the magnetic field gradient is larger; both reduce crosstalk effects. Second, a larger trap frequency lowers the effect of the off-resonant mode. Finally, a smaller $\Omega/\delta$ ratio is also preferable to avoid any effect of higher-order Bessel functions. In this respect, note that we always truncate the Jacobi-Anger expansion to the first order, see Eq.~(\ref{HMSIDDI}). Black squares indicate the infidelities obtained without phase modulation. As expected, phase modulation is crucial to remove energy shifts induced by ${\Omega}_\textrm{DD}$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\textwidth]{figures/Figures_1/FluctuatingErrors.pdf}
\caption{Logarithm of the Bell state infidelity as a function of $\Omega_\textrm{DD}$, for $g_{B}=20.9$ T/m, $\nu=(2\pi)\times138$~kHz and $\Omega= (2\pi)\times37$~kHz (left panel), and $g_{B}=38.5$ T/m, $\nu=(2\pi)\times207$~kHz and $\Omega= (2\pi)\times26.6$~kHz (right panel). Blue and black squares take into account crosstalk effects and the presence of the off-resonant vibrational mode, with and without phase modulation, respectively. Other curves include motional heating of the center-of-mass mode, and fluctuations of the magnetic field as well as the driving fields. The red, green and purple squares correspond to different error parameters $(\tau,T_2)=(0.05,0.5)$, $(0.1,1)$, and $(0.2,2)$ ms that characterise magnetic field fluctuations.}\label{fig:FluctuatingErrors}
\end{figure}
Other curves take into account the heating of the center-of-mass mode and fluctuating errors in the magnetic field and MW drivings. The effect of the former is introduced in our model with a dissipative term of the form described in Eq.~(\ref{MotionalHeating}) for two modes, where in this case a temperature of $T=300$~K was chosen. In the left panel, we consider a heating rate of $\dot{\bar{n}}\approx300$ ph/s~\cite{Weidt15,Weidt16,Brownnutt15}. For the right panel, we consider a more favourable scenario with $\dot{\bar{n}}\approx200$ ph/s.
Magnetic and MW fluctuations are introduced via an Ornstein-Uhlenbeck (OU) stochastic process~\cite{Gillespie96}. Each point in Fig.~\ref{fig:FluctuatingErrors} corresponds to 100 realisations. The OU process is characterised by the correlation time $\tau_{B}$ and coherence time $T_2$ for the magnetic field fluctuations, while $\tau_\Omega$ and the relative amplitude error $\delta_{\Omega}$ are used for the MW driving field fluctuations. For the driving fields, we choose a correlation time of $\tau_\Omega=500~\mu$s and a relative amplitude error of $\delta_{\Omega}=0.5\%$ in the left panel, and a correlation time of $\tau_\Omega=1$~ms and $\delta_{\Omega}=0.25\%$ in the right panel~\cite{Cohen17}. Different strengths for the magnetic field fluctuations are given by the red, green and purple squares, with parameters $(\tau,T_2)=(0.05,0.5)$, $(0.1,1)$, and $(0.2,2)$~ms, respectively.
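As an illustration of the noise model (a sketch only; the mapping between the OU parameters and $T_2$ follows~\cite{Gillespie96} and is not reproduced here), an OU trajectory with correlation time $\tau$ and stationary standard deviation $\sigma$ can be sampled exactly with the standard discrete update rule:

```python
import numpy as np

def ou_trajectory(n_steps, dt, tau, sigma, rng):
    """Exact discrete-time sampling of an Ornstein-Uhlenbeck process:
    x_{n+1} = x_n * exp(-dt/tau) + sigma * sqrt(1 - exp(-2 dt/tau)) * N(0, 1)."""
    decay = np.exp(-dt / tau)
    kick = sigma * np.sqrt(1.0 - decay ** 2)
    x = np.empty(n_steps)
    x[0] = sigma * rng.standard_normal()   # draw the start from the stationary state
    for n in range(1, n_steps):
        x[n] = decay * x[n - 1] + kick * rng.standard_normal()
    return x

rng = np.random.default_rng(0)
# Illustrative numbers: tau = 0.05 ms, as in the worst-case magnetic-noise setting.
traj = ou_trajectory(200_000, dt=1e-7, tau=0.05e-3, sigma=1.0, rng=rng)
```

The exact update avoids the time-step bias of a naive Euler-Maruyama discretisation, which matters when $dt$ is not much smaller than $\tau$.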
Our numerical simulations predict fidelities above $99\%$ even for the worst case, corresponding to $(\tau,T_2)=(0.05,0.5)$ ms, and with current heating rates (left panel). In a more optimistic experimental scenario with $(\tau,T_2)=(0.2,2)$ ms, our protocol leads to fidelities larger than $99.9\%$ for distinct values of $\Omega_\textrm{DD}$ (right panel).
In this section, we have proposed a method that combines phase-modulated continuous MW drivings with phase flips and refocusing $\pi$ pulses to produce high-fidelity entangling gates. Numerical simulations including the main sources of decoherence show that Bell-state preparation fidelities exceeding 99\% are possible within current experimental limitations, while fidelities larger than 99.9\% are achievable with further experimental improvements.
\chapter{Boson Sampling with Ultracold Atoms}
\label{chapter:chapter_3}
\thispagestyle{chapter}
The realization of quantum supremacy is a milestone on the path towards fault-tolerant quantum computing. On the one hand, it demonstrates that quantum speedup is indeed possible, which serves to rule out the existence of some unknown fundamental physical principle capable of hindering the realization of a quantum computer~\cite{Bassi13}. On the other hand, it constitutes the first violation of the so-called extended Church-Turing thesis~\cite{Elthan97}, which states that any physically realizable computational model can be efficiently simulated by a classical Turing machine. The latter can only be claimed if the computational complexity of the task performed by the quantum machine is known. For example, the outperformance of classical computers by quantum annealers or quantum simulators would not serve as a violation of the extended Church-Turing thesis, as the computational complexity of the problem being solved is not well defined. In this sense, a rigorous test of quantum supremacy has to be based on solid assumptions about the computational complexity of the accomplished task~\cite{Aaronson17}.
Quantum sampling problems enjoy the fact that their computational complexity can be precisely assessed~\cite{Terhal04}. These problems consist in generating samples according to a probability distribution associated with the output of a quantum circuit. Moreover, these models can be implemented by quantum computers without using error-correcting codes~\cite{Lund17}, dramatically reducing their experimental cost in comparison to other quantum algorithms. In particular, the so-called boson-sampling problem~\cite{Aaronson11,Gard15,Lund17} has received a lot of attention because non-interacting particles, for example photons, suffice for its realization.
Boson sampling is the problem of sampling from the probability distribution of the outcomes generated when $N$ bosons are injected into a linear interferometer with $M$ modes. Both initial and final states are written in the Fock basis, and the linear interferometer is described by an $M\times M$ unitary matrix. The probability amplitude of each outcome state is proportional to the permanent of an $N\times N$ submatrix, whose rows and columns relate the initial state to that particular output state. The permanent appears because all $N!$ different physical paths contribute to the same outcome, see Fig.~\ref{BSProblem}(b). Even if it looks similar to the determinant, whose computation is efficient by classical means, calculating the permanent of a complex-valued matrix is a $\#$P-hard computational problem~\cite{Valiant79}, meaning that the simulation of the boson sampling problem is believed to be extremely inefficient for a classical computer. It has been conjectured that sampling, even approximately, from the probability distribution of a boson sampler is already a $\#$P-hard problem~\cite{Aaronson11,Aaronson11b}. This, along with boson sampling being robust against experimental imperfections~\cite{Rohde12a,Rohde12b,Arkhipov15,Drummond16,Dittel18}, has motivated the rapid progress of boson-sampling machines.
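To make the contrast with the determinant concrete, the permanent can be computed with Ryser's inclusion-exclusion formula, which is still exponential in the matrix size. The following sketch is a textbook implementation, not an optimized one:

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i sum_{j in S} a[i][j]."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For the all-ones $n\times n$ matrix this returns $n!$, mirroring the $N!$ interfering paths of Fig.~\ref{BSProblem}(b); unlike Gaussian elimination for the determinant, no polynomial-time analogue is known.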
\begin{figure}[t]
\centering
\includegraphics*[width=1\columnwidth]{figures/Figures_3/BSproblem.png}
\caption{a) Given an $M\times M$ Haar random unitary matrix $U$, a probability distribution can be defined by the modulus squared permanents of $N\times N$ submatrices. Boson sampling consists in sampling from this probability distribution $P_{BS}$. b) In an experimental boson sampler, $P_{BS}$ contains the probabilities of all possible outcomes of the machine, given an $N$-boson initial state. For example, the $3\times 3$ submatrix $A_S$ formed by the 9 elements in red squares in a) relates the initial state $a_1^\dagger a_2^\dagger a_3^\dagger|0\rangle$ with the output state $a_1^\dagger a_3^\dagger a_6^\dagger|0\rangle$. As a result of quantum interference among $N!$ physical paths, the probability of this output state is given by the modulus squared permanent of $A_S$. \label{BSProblem}}
\end{figure}
Up to now, quantum boson sampling has been realized using photonic quantum circuits~\cite{Broome13,Tillmann13,Spring13,Crespi13,Carolan14,Spagnolo14,Tillmann15,Carolan15,Loredo17,Wang17c,Wang18}, with the current record being 20 input and 14 output photons in 60 modes~\cite{Wang19_BS}. Although significant conceptual and technological advances have been made in scaling up these photonic devices to a larger number of photons~\cite{Motes14,Lund14,Bentivegna15,Aaronson16b,He17,Hamilton17,Wang18}, the simultaneous control of more than twenty photons remains, so far, experimentally inaccessible. In addition, the progress of classical methods to simulate boson sampling has been significant in the last few years~\cite{Neville17,Wu18,Lundow19}. These two facts complicate a near-term quantum-supremacy test by a photonic boson-sampling machine. On the other hand, alternative methods for doing boson sampling with trapped ions~\cite{Shen14}, superconducting circuits~\cite{Peropadre16} or optical lattices~\cite{Deshpande18,Muraleedharan19} have been proposed. However, up to now only a proof-of-principle experiment using coupled vibrational modes of a single ion has been realized~\cite{Shen17}.
In this chapter, we present a scalable method to implement boson sampling using ultracold atoms in state-dependent optical lattices. In our scheme, atoms cooled into their vibrational ground state play the role of indistinguishable bosons, while both the lattice sites and the internal state of the atoms serve as the bosonic modes. Polarization-synthesized optical lattices can be used to realize state-dependent lattice-shift operations~\cite{Robens16pol,Robens17}, which allows bringing together spatially separated modes. In addition, pairwise interactions among these modes, analogous to the beamsplitters used in photonic devices, can be achieved via the combination of MW radiation with site-resolved optical pulses. The latter is the basic building block of our proposal, and has already been demonstrated in Ref.~\cite{RobensThesis}, where the Hong-Ou-Mandel interference between two optically trapped atoms is reported.
The chapter is organized as follows: In section~\ref{PSOLforQC} we describe the method to realize quantum circuits with ultracold atoms in polarization-synthesized optical lattices. In section~\ref{ScalingAndErrors} we discuss the case of the boson-sampling quantum circuit and the scaling of such a device to tens of atoms. We study how two-body collisions affect the rate at which valid samples are generated, and also how they change the form of the final probability distribution. In section~\ref{subsec:additional}, we consider other sources of error such as dephasing or imperfect ground-state cooling.
\section{Quantum circuits with spin-dependent optical lattices}\label{PSOLforQC}
In this section we present ultracold atoms in state-dependent optical lattices as a scalable architecture to realize discrete-time quantum circuits and, in particular, boson sampling. The idea is based on the fact that any $M\times M$ unitary matrix $U$ can be implemented by a particular $M$-mode linear interferometer that allows only nearest-neighbour coupling among the modes~\cite{Reck94,Clements16}. Linear interferometers of noninteracting atoms are an interesting alternative to photonic interferometers because of the ability to control a large number of particles in atomic systems. Moreover, these systems can exploit controlled coherent collisions~\cite{Chin10} among particles at the same site. These nonlinear processes could increase the amount of quantum correlations in the output states~\cite{Brunner18}, possibly making the classical simulation of such a sampler even harder than in the linear case.
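The decomposition of Refs.~\cite{Reck94,Clements16} builds $U$ from $2\times 2$ blocks acting on neighbouring modes. The sketch below (using one common beamsplitter parametrization; conventions vary between references) composes a Clements-style alternating mesh of random nearest-neighbour couplings and checks that the result is unitary:

```python
import numpy as np

def beamsplitter(m, k, theta, phi):
    """Embed a 2x2 beamsplitter on neighbouring modes (k, k+1) into m modes."""
    t = np.eye(m, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    t[k, k] = np.exp(1j * phi) * c
    t[k, k + 1] = -s
    t[k + 1, k] = np.exp(1j * phi) * s
    t[k + 1, k + 1] = c
    return t

rng = np.random.default_rng(1)
m = 6
u = np.eye(m, dtype=complex)
for layer in range(m):                    # m layers suffice for an m-mode mesh
    for k in range(layer % 2, m - 1, 2):  # alternate even- and odd-pair couplings
        bs = beamsplitter(m, k, rng.uniform(0, np.pi / 2), rng.uniform(0, 2 * np.pi))
        u = bs @ u
```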
\subsubsection{Preparing an array of identical atoms}
Interference between two laser beams is routinely used to create arrays of optical micropotentials called optical lattices. Atoms can be trapped in those lattice sites and repositioned one by one using state-dependent moving potentials~\cite{Kim16,Barredo16,Endres16,Robens17,Kumar18}. As a result, these atoms can be prepared in states where the position of each atom in the lattice is predefined. Currently, the record number of sorted atoms is $N=111$~\cite{OhldeMello19}, well above the $N=20$ photons achieved with photonic devices~\cite{Wang19_BS}. Furthermore, ideas exist to increase this number up to $1000$ for the case of two-dimensional state-dependent optical lattices~\cite{Robens17}. After being rearranged, the atoms can be cooled down to their vibrational ground state by means of sideband cooling. Using this technique, a purity or ground-state probability of up to $90\%$ has been demonstrated~\cite{Kumar18}, the main limitation being the small trap frequency along at least one of the confining directions. In this respect, probabilities close to unity are expected in deep three-dimensional optical lattices.
Alternative techniques to create low-entropy atom ensembles exist, which involve the preparation of a Mott insulator state and the subsequent selection of atoms at predefined lattice sites. With this method, a purity of $99\%$ per site has been experimentally achieved for $12$ atoms~\cite{Lukin19,Rispoli19}.
\begin{figure*}[t]
\includegraphics*[width=\linewidth]{figures/Figures_3/BosonSamplingCircuit.png}
\caption{\label{BSwithPSOLScheme}
Illustration of the quantum-circuit scheme based on ultracold atoms in polarization-synthesized optical lattices. Each lattice site hosts two modes of the quantum circuit, represented by two atomic internal states, $|{\uparrow}\rangle$ and $|{\downarrow}\rangle$. A representative initial state is shown in the figure, where every second mode is occupied. Atoms are displaced by state-dependent shift operations, while their internal states are coupled using MW radiation and light shifts. Inset: A combination of local addressing pulses and MW pulses realizes the equivalent of a photonic beamsplitter. }
\end{figure*}
\subsubsection{Wiring the quantum circuit with polarization-synthesized optical lattices}
For the realization of discrete-time quantum circuits, we consider polarization-synthesized optical lattices~\cite{Robens17}. These consist of two independent, polarization-selective optical potentials, each trapping atoms depending on the polarization of the transition associated with their internal state. For that, two states of the 6s$^2S_{1/2}$ hyperfine manifold of the $^{133}$Cs atom can be used, e.g., $|\!\!\uparrow\rangle=|F=4,m_F=4\rangle$ and $|\!\!\downarrow\rangle=|F=3,m_F=3\rangle$. The lattice wavelength is set at $\lambda_L=870$ nm. Then, due to their different polarizabilities~\cite{Deutsch98,Grimm00,LeKien13}, atoms in $|\!\!\uparrow\rangle$ or $|\!\!\downarrow\rangle$ experience only one of the two periodic potentials:
\begin{equation}
V_{\uparrow,\downarrow}(x) = V^0_{\uparrow,\downarrow}\cos^2\{2\pi/\lambda_L[x-x_{\uparrow,\downarrow}(t)]\},
\end{equation}
where the positions $x_{\uparrow,\downarrow}(t)$ of the two lattices can be independently controlled with subnanometer precision by a fast polarization synthesizer~\cite{Robens16pol}. Moreover, the depths $V^0_{\uparrow,\downarrow}$ of the micropotentials are sufficiently large to suppress tunnelling of atoms to neighbouring sites.
In Fig.~\ref{BSwithPSOLScheme} we illustrate how different lattice sites can be ``wired'' by using fast, state-dependent shifts of the optical lattice, generating discrete-time quantum circuits with ultracold atoms. In our scheme, the modes of the quantum circuit are represented by both the different lattice sites and the two internal states. Thus, with our method, $M/2$ lattice sites are sufficient to represent $M$ modes. The shift operation preserves the coherence between the two internal states~\cite{Robens15}, and its duration can be as short as the trapping period, which is about $3\,\mu$s. The geometry of the circuit depicted in Fig.~\ref{BSwithPSOLScheme} is equivalent to that realized with photonic systems, except that in our case a single spatial dimension is enough, in contrast to the two spatial dimensions used in photonic circuits~\cite{Carolan15}. As in photonic circuits, the evolution of an $M$-mode interferometer coupled pairwise at discrete time steps is represented by an $M{\times}M$ unitary matrix $U$~\cite{Reck94,Clements16}. The matrix representation of a general pairwise interaction acting at time $t$ on lattice site $s$ is
\begin{equation}
\label{eq:basiccoupling}
T(t,s)=\left(\begin{array}{cc} e^{-i\phi}\cos(\theta/2)&-\sin(\theta/2)\\e^{-i\phi}\sin(\theta/2)&\cos(\theta/2) \end{array}\right),
\end{equation}
where $\phi$ is a phase imprinted onto only the $|\!\!\uparrow\rangle$ mode, and $\theta$ is the angle by which the pseudospin (i.e., the two coupled modes, $|\!\!\uparrow\rangle$ and $|\!\!\downarrow\rangle$) is rotated around the $y$ axis of the Bloch sphere. Interestingly, a general $M{\times}M$ unitary matrix $U$ can be represented by the product of $M(M{-}1)/2$ independent operations $T(t,s)$~\cite{Reck94,Clements16}. Notice that, as each operation is characterized by two parameters, the whole protocol is defined by $M(M{-}1)$ independent parameters; this matches the number of free parameters of a generic $M\times M$ unitary matrix up to $M$ phase shifts applied to the $M$ output modes, which are irrelevant for most applications. In principle, all these operations can be implemented in $M$ time steps, resulting in an $M$-step circuit depth~\cite{Clements16}.
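As an illustration of this counting, the mesh construction can be sketched numerically. The following fragment is an illustrative sketch, not experimental control code; the brick-wall ordering of the couplings is one possible arrangement of the type discussed in Refs.~\cite{Reck94,Clements16}, and the angles are drawn at random rather than obtained by decomposing a target unitary. It builds an $M\times M$ unitary from $M(M-1)/2$ nearest-neighbour operations $T(t,s)$ arranged in $M$ time steps:

```python
import numpy as np

def T(theta, phi):
    # 2x2 coupling matrix of the pairwise interaction T(t,s)
    return np.array([[np.exp(-1j*phi)*np.cos(theta/2), -np.sin(theta/2)],
                     [np.exp(-1j*phi)*np.sin(theta/2),  np.cos(theta/2)]])

def embed(M, m, theta, phi):
    # T acting on modes (m, m+1) of an M-mode identity
    G = np.eye(M, dtype=complex)
    G[m:m+2, m:m+2] = T(theta, phi)
    return G

def brickwall_mesh(M, rng):
    # M time steps alternating between even and odd mode pairs;
    # in total M(M-1)/2 two-mode couplings, each with two free angles
    U = np.eye(M, dtype=complex)
    n_ops = 0
    for t in range(M):
        for m in range(t % 2, M - 1, 2):
            U = embed(M, m, *rng.uniform(0, 2*np.pi, 2)) @ U
            n_ops += 1
    return U, n_ops

rng = np.random.default_rng(0)
M = 8
U, n_ops = brickwall_mesh(M, rng)
assert n_ops == M*(M-1)//2                       # M(M-1)/2 couplings in depth M
assert np.allclose(U.conj().T @ U, np.eye(M))    # the product is unitary
```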
\subsubsection{Arbitrary mode coupling by light pulses}\label{sec:mode_coupling}
Two modes in the same lattice site can be coupled by combining site-resolved light pulses with global MW pulses, realizing $T(t,s)$. More specifically, site-resolved light pulses are used to imprint a local differential phase shift between the two hyperfine states, $|\!\!\uparrow\rangle$ and $|\!\!\downarrow\rangle$, while global MW radiation results in a global Hadamard operation. The latter can be implemented by a $\pi/2$ MW pulse, rotating the pseudospin by $90$ degrees around the $x$ axis of the Bloch sphere. This is represented by a Hadamard-like transformation $H_{2\times 2} = \exp(-i \sigma_x \pi/4)$. Differential light shifts can be realized with cesium atoms~\cite{Deutsch98,Grimm00,LeKien13}. In addition, these pulses can be focused so that they act only on the target sites~\cite{Weitenberg11,Preiss15,Robens16mbg}, allowing the realization of independent rotations around the $z$ axis, $A(\varphi_s)=\exp(-i \sigma_z \varphi_s/2)$, where the angle $\varphi_s$ is controlled by the product of the laser intensity and the pulse duration. The duration of the global MW pulse is about $150\,\mu$s; however, it can be shortened to tens of microseconds simply by increasing the MW power. The local laser pulses, on the other hand, can be realized in about $10\,\mu$s using approximately $1\,\mu$W of laser power per addressed lattice site. Moreover, for the latter, the probability of an error due to light scattering is of the order of $10^{-5}$. As the main advantage, this method avoids using site-resolved Raman pulses to couple the two modes.
The described quantum gates are sufficient to build a generic $T(t,s)$ operation, as shown by the following formula:
\begin{equation}\label{eq:decomposition}
T'(t,s)= e^{i\phi/2}\,T(t,s)= H_{2\times 2}^\dagger A(\theta) H_{2\times 2} A(\phi).
\end{equation}
Notice that the phase factor $e^{i\phi/2}$ is global only within the pseudospin subspace and, thus, we should account for it when building the $M\times M$ unitary. However, it can be shown that the algorithm of Clements \emph{et al.}~\cite{Clements16} can be easily adapted to employ $T'(t,s)$, instead of $T(t,s)$, as the basic building block of the quantum circuit. Thus, the control of this global phase is not necessary for the purpose at hand. The inset of Fig.~\ref{BSwithPSOLScheme} illustrates the application of a $T(t,s)$ operation between the $|\!\!\uparrow\rangle$ mode in site $s$ and the $|\!\!\downarrow\rangle$ mode in site $s+1$. First, a spin-dependent displacement operation brings the $|\!\!\downarrow\rangle$ mode from site $s+1$ to site $s$. Then, the quantum gates described above are applied, and the operation is completed by shifting the $|\!\!\downarrow\rangle$ mode back to site $s+1$.
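The decomposition~(\ref{eq:decomposition}) can be checked numerically. The short sketch below (illustrative only, using standard Pauli-matrix exponentials) verifies $H_{2\times 2}^\dagger A(\theta) H_{2\times 2} A(\phi)=e^{i\phi/2}\,T(t,s)$ for random angles:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def expm2(A):
    # matrix exponential of a small matrix via eigendecomposition
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

H2 = expm2(-1j*sx*np.pi/4)              # Hadamard-like MW pulse
A = lambda ang: expm2(-1j*sz*ang/2)     # local light-shift rotation A(ang)

def T(theta, phi):
    return np.array([[np.exp(-1j*phi)*np.cos(theta/2), -np.sin(theta/2)],
                     [np.exp(-1j*phi)*np.sin(theta/2),  np.cos(theta/2)]])

rng = np.random.default_rng(1)
for _ in range(100):
    theta, phi = rng.uniform(0, 2*np.pi, 2)
    lhs = np.exp(1j*phi/2) * T(theta, phi)
    rhs = H2.conj().T @ A(theta) @ H2 @ A(phi)
    assert np.allclose(lhs, rhs)        # Eq. (decomposition) holds
```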
\subsubsection{Site- and state-resolved detection of individual atoms}
Being able to measure the output state is a necessary condition for any useful quantum computation. A fluorescence image is captured to measure the final state~\cite{Alberti15,Robens16mbg}. Using a high-resolution objective lens, the positions of the atoms in the optical lattice can be reconstructed with a fidelity exceeding $99\%$~\cite{Robens16b}.
This fluorescence technique provides information about the occupation of the lattice sites; however, to discriminate between the two modes in the same lattice site, a spin-sensitive detection scheme is needed. For that, as demonstrated in Ref.~\cite{Robens16b}, a long-distance state-dependent shift can be performed, pulling apart atoms in different states, $|\!\!\uparrow\rangle$ and $|\!\!\downarrow\rangle$, that were initially in the same lattice site.
Ideally, one should also be able to detect how many atoms occupy each site. Unfortunately, standard fluorescence imaging produces pairwise atom losses, allowing only the parity of the occupation number to be measured. To solve this, one could spread the atoms along lattice sites in the perpendicular direction before imaging, for example through a ballistic expansion. Similar methods are employed for number-resolving photodetection~\cite{Hadfield09,Carolan14} and have recently been adapted to optical lattices~\cite{Omran15,Lukin19}. Alternatively, one could exploit interaction blockade to induce occupation-dependent tunnelling to distinct sites of a multilayer optical lattice~\cite{Preiss15b}.
\section{Scaling of atomic boson sampling}\label{ScalingAndErrors}
Quantum circuits with ultracold atoms, described in section~\ref{PSOLforQC}, provide a way to implement boson sampling, as an alternative to photonic devices. Currently, atomic systems allow controlling a hundred particles distributed over hundreds of lattice sites, which makes these systems highly attractive for scaling up boson sampling and achieving quantum supremacy. The initial state could be given by $N$ atoms uniformly distributed along the first $N$ sites, $|\psi_0\rangle =\prod_{s=1}^{N}\hat{a}^{\dagger}_{2s-1} |0\rangle$, where $|0\rangle$ is the vacuum of all $M$ modes. If the generated unitary $U$ is chosen randomly according to the Haar measure, then sampling from the output probability distribution $P(n_1,n_2,..., n_{M})=|\langle n_1,n_2,...,n_{M}|\hat{U}|\psi_0\rangle|^2$ is believed to be a hard task for classical computers~\cite{Aaronson11}. Although the formal mathematical proof of hardness requires $N\leq M^{1/6}$, typically a more feasible condition, namely $N\leq M^{1/2}$, is assumed sufficient.
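For intuition (and for benchmarking at small sizes), these output probabilities can be evaluated by brute force using the standard permanent formula $P(n_1,\dots,n_M)=|\mathrm{Per}(U_n)|^2/(n_1!\cdots n_M!)$, where $U_n$ is the $N\times N$ submatrix of $U$ built from the input columns and the output rows repeated according to the occupations; the exponential cost of this evaluation is precisely what makes the problem hard. A minimal sketch follows, with arbitrary illustrative choices of $M$, $N$, and input modes:

```python
import math
import numpy as np
from itertools import permutations, combinations_with_replacement

def permanent(A):
    # naive permanent; fine for the tiny N used here
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def output_probability(U, in_modes, occ):
    # P(n) = |Per(U_n)|^2 / prod(n_j!) for a single-occupancy input
    rows = [m for m, nm in enumerate(occ) for _ in range(nm)]
    sub = U[np.ix_(rows, in_modes)]
    return abs(permanent(sub))**2 / np.prod([math.factorial(nm) for nm in occ])

# Haar-random unitary via QR of a complex Gaussian matrix
rng = np.random.default_rng(2)
M, N = 6, 3
Q, R = np.linalg.qr(rng.normal(size=(M, M)) + 1j*rng.normal(size=(M, M)))
U = Q * (np.diag(R) / np.abs(np.diag(R)))   # fix column phases

in_modes = [0, 2, 4]                        # one atom in every second mode
total = sum(output_probability(U, in_modes,
                               [combo.count(m) for m in range(M)])
            for combo in combinations_with_replacement(range(M), N))
assert abs(total - 1.0) < 1e-10             # probabilities sum to one
```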
In order to propose a quantum-supremacy demonstration experiment based on the boson sampling problem, it is necessary to take experimental imperfections into account. In this section we study how particle loss may, on the one hand, decrease the rate at which valid samples can be generated and, on the other hand, change the complexity of the probability distribution we are sampling from.
\subsection{A simple model for the sampling rate}
The repetition rate of the experiment is directly related to the time necessary to prepare the initial state, perform all the interference operations, and measure the final state. The initial state can be prepared efficiently, in about $100$ ms for up to $100$ atoms~\cite{Robens17}; we therefore assume a fixed time $t_\textrm{in}$ for carrying out this initial step. The time required to perform all interference operations, however, directly depends on the number of modes in the system, which grows as $M=N^2$ as a requirement of the problem itself. As already mentioned, any unitary transformation of dimension $M\times M$ can be realized by $M(M-1)/2$ two-dimensional unitary operations~\cite{Reck94,Clements16}, of which, on average, $(M-1)/2$ can be performed in parallel, i.e., at the same time (see Fig.~\ref{BSwithPSOLScheme}). The final-state measurement is done in a single operation lasting about $t_\textrm{det}=50$ ms, which detects the position and spin of the atoms by fluorescence imaging.
All this suggests that the processing time scales as $t_\textrm{pr}\equiv R_\textrm{pr}^{-1} \approx N^2 t_\textrm{op}+t_\textrm{in} +t_\textrm{det}$ with the number of particles $N$. Here, $t_\textrm{op}$ is the time required to perform an interference operation, while $t_\textrm{in}$ and $t_\textrm{det}$ are the initialization and measurement times, respectively. In addition, we are mainly interested in detecting the output states that have at most one atom in each mode, since the probability of those events is the one predicted to be the hardest for classical algorithms. Thus, the states belonging to this subspace, called the collision-free subspace, are post-selected from all the output states. With a quadratic scaling of the number of modes, approximately a fraction $e^{-1}$ of the final states are of this kind~\cite{Arkhipov12}.
The above analysis describes the ideal situation where all atoms are prepared in the ground state of motion, none of them is lost in the experiment, and the measurement is carried out with 100\% efficiency. However, in a realistic scenario, one should account for experimental errors that affect the rate at which post-selected samples are generated. Following a similar approach as the one presented in Refs.~\cite{Wang17c,Neville17}, we estimate the sampling rate as
\begin{equation}\label{atomrate}
R=\frac{1}{e}R_\textrm{pr}\eta_\textrm{d}^{N}P_\textrm{surv},
\end{equation}
where $P_\textrm{surv}$ is the survival probability of the $N$ atoms and $\eta_\textrm{d}$ is the detection efficiency per atom. The latter is already well controlled, with a best reported detection efficiency of $99\%$~\cite{Robens16b}. The survival probability of the $N$ atoms during the complete evolution depends on the processing time $t_\textrm{pr}$. In principle, an atom may escape from the trap due to background-vapor collisions. The single-atom survival probability that accounts for this effect is given by the exponential formula $\exp{(-t_\textrm{pr}/\tau_\textrm{bg})}$, where $\tau_\textrm{bg}$ is the mean lifetime of a single atom before it is lost by background collisions.
Also, the presence of more than one atom in the same lattice site may cause the loss of one of the atoms, or even of both, due to inelastic collisions (e.g., spin-exchanging collisions). For this type of collisions, the survival probability of a pair of atoms decays in time as $(1+(t_\textrm{pr}-t_\textrm{in})/\tau_\textrm{tb})^{-1}$ \cite{Roberts00}, where $\tau_\textrm{tb}$ is the mean lifetime associated with two-body collisions. One then has to consider the probability of having a particle pair in the same lattice site at each step of the process. If we start from a configuration with all atoms placed in different lattice sites, this probability can be taken to be zero until $t_\textrm{in}$. During the interference operations and the measurement, however, this probability cannot be ignored. To approximate its value, we consider the bosons to be uniformly distributed at all times. We can then write the survival probability of $N$ atoms per time step $\tau$ as
\begin{equation}\label{SurvProbStep}
P_\textrm{step}(\tau)=e^{-N\tau/\tau_\textrm{bg}}\sum_{k=0}^{N/2} P_\textrm{pair}(k)\Big[1+\frac{\tau}{\tau_\textrm{tb}}\Big]^{-k},
\end{equation}
where $P_\textrm{pair}(k)$ is the probability of finding $k$ pairs distributed over different sites. For example, $k=0$ corresponds to the case where all atoms are in distinct lattice sites. $P_\textrm{pair}(k)$ can be calculated analytically, leading to $(3/2)^k/(e^{3/2}k!)$ for large $N$; see appendix~\ref{pair_distribution} for details. Equation~(\ref{SurvProbStep}) does not take into account states with more than two particles in a lattice site. For a more accurate description, $P_\textrm{pair}(k)$ can be substituted by $P(k_2,k_3,k_4,k_5)$, the probability of having $k_2$ pairs, $k_3$ trios, $k_4$ quartets, and so on. In Fig.~\ref{BSEstimations}(a), different configurations are shown for $N=4$ and $M=12$. However, as proved in appendix~\ref{pair_distribution}, this generalized probability tends to the same Poissonian distribution $(3/2)^{k_2}/(e^{3/2}k_2!)$ at large $N$. With this model, the total survival probability can be written as
\begin{equation}\label{SurvProb}
P_\textrm{surv}(N)=P_\textrm{step}(t_\textrm{op})^M\,P_\textrm{step}(t_\textrm{det}).
\end{equation}
Later, we will benchmark this model against exact numerical simulations for a small number of particles $N$. Until then, we assume the model to be correct and compare it with other models that describe the scaling of the boson sampling problem in photonic circuits or classical computers.
\begin{table}
\centering
\caption{Experimental parameters}
\label{table2}
\begin{tabular}{l c c}
\hline
\hline
 & Conservative & State of the art \\
\hline
Spin addressing + lattice shift time ($t_\textrm{op}$) & $150\,\mu$s $+\,20\,\mu$s & $30\,\mu$s $+\,3\,\mu$s \\
\hline
Initialization time ($t_\textrm{in}$) & $0.75$ s & $0.1$ s \\
\hline
Detection time ($t_\textrm{det}$) & $60$ ms & $30$ ms \\
\hline
Detection efficiency ($\eta_\textrm{d}$) & 0.99 & 0.999 \\
\hline
One-body loss lifetime ($\tau_\textrm{bg}$) & $60$ s & $360$ s \\
\hline
Two-body loss lifetime ($\tau_\textrm{tb}$) & $40$ ms & $400$ ms \\
\hline
\hline
\end{tabular}
\end{table}
Conservative and state-of-the-art values for the experimental parameters can be found in Table~\ref{table2}. For the two-body losses, we take $\tau_\textrm{tb}=400$ ms as the state-of-the-art value, which should be achievable using Feshbach resonances~\cite{Chin04}. These two cases are represented by the two solid blue curves in Fig.~\ref{BSEstimations}(b), where the lower and upper curves correspond to the current and best cases, respectively. We have also included an additional line (dotted blue) representing the case with the best reported parameters except for the mean lifetime of two-body collisions, which is assumed to remain $\tau_\textrm{tb}=40$ ms. Notice that the upper blue lines meet the solid grey line (representing the classical computing rate) around $N\gtrsim30$, thus predicting quantum advantage for $N\gtrsim30$ particles.
\begin{figure}[t]
\centering
\includegraphics*[width=\columnwidth]{figures/Figures_3/ScalingFigure.pdf}
\caption{ a) Examples of states with $M=12$ and $N=4$. $P(0,0,0)$ is the ratio between the number of states with zero pairs, trios, and quartets and the total number of states. $P(1,0,0)$ corresponds to the proportion of states with a single particle pair and zero trios and quartets. $P(2,0,0)$ and $P(0,1,0)$ correspond to states with two particle pairs (and zero trios and quartets) and to states with a single particle trio (and zero pairs and quartets), respectively. b) Scaling of boson sampling machines. The sampling rate versus the number of particles is shown for atomic (blue), photonic (purple), and classical (grey) boson sampling devices. If experimental parameters are improved, both photonic and atomic devices may surpass the classical algorithms, at $N\approx 23$ and $N\approx 30$, respectively. In the case of the photonic device, the dashed curve allows $\approx 30\%$ of the photons to be lost. \label{BSEstimations}}
\end{figure}
Photonic boson sampling devices also suffer from particle loss, which is actually the main limitation when scaling up. Although the rate at which indistinguishable photons are created is $R_0=76$ MHz, the rate at which valid experimental samples are generated in current setups drops to $R=295$ Hz for 5-photon boson sampling~\cite{Wang19_BS}. If the number of modes $M$ scales quadratically with $N$, the sampling rate of a photonic experiment can be characterized by
\begin{equation}
R=\frac{1}{e}\frac{R_0}{N}\eta^N,
\end{equation}
where $R_0/N$ is the rate at which an $N$-photon initial state can be prepared and $\eta=\eta_\textrm{f} \eta_\textrm{c}^d$ is the survival probability of a single photon during the experiment. The latter is a product of a fixed survival probability $\eta_\textrm{f}$, which accounts for errors that do not increase with the number of particles $N$, and the transmission probability of the photon through the circuit, $\eta_\textrm{c}^d$, which depends on the transmission probability per unit length $\eta_\textrm{c}$ and the circuit depth $d$, which increases quadratically with $N$ ($d=M=N^2$ for a square circuit~\cite{Clements16}). If photon loss is allowed in the model, the sampling rate becomes
\begin{equation}
R=\frac{1}{e}\frac{R_0}{N}\binom{N}{N-k_l}\eta^{N-k_l}(1-\eta)^{k_l},
\end{equation}
where $k_l$ is the number of lost photons. In boson sampling with lost photons, allowing the loss of $2$ photons increases $R$ by the combinatorial factor $\binom{N}{N-2}=N(N-1)/2$ with respect to the lossless case.
From the $R=295$ Hz count rate reported in Ref.~\cite{Wang19_BS} for 5-photon boson sampling, and taking into account that the transmission rate of the $60\times60$ optical circuit is $98.7\%$, and thus $\eta_\textrm{c}=0.987^{1/60}$, we can extract a fixed survival probability of $\eta_\textrm{f}=0.14$. According to Ref.~\cite{Wang18}, the single-photon source, interferometer, and detection efficiencies can be increased up to $0.8$, $0.9$, and $0.9$, respectively, yielding $\eta_\textrm{f}=0.65$. In Fig.~\ref{BSEstimations}(b), the lower purple curves (both solid and dashed) assume $\eta_\textrm{f}=0.14$ and a circuit transmission rate of $\eta_\textrm{c}=0.987^{1/60}$, while the upper curves assume $\eta_\textrm{f}=0.65$ and a perfect transmission rate. With the best parameters, quantum advantage could be achieved around $N\approx23$ with a $30\%$ photon loss.
Finally, the time required by the Metropolised independence sampling algorithm introduced in Ref.~\cite{Neville17} to produce a valid sample scales as
\begin{equation}\label{computerrate}
1/R=100\,\tilde{a}\,N^2\,2^N,
\end{equation}
where $\tilde{a}$ relates to the speed of the classical computer. This value has been reported to be $\tilde{a}=3\times10^{-15}$ s in the case of the Tianhe-2 supercomputer~\cite{Wu18}. For the case of a regular computer, we take $\tilde{a}=3\times10^{-9}$ s. When comparing with the photonic boson sampler with photon loss, we should substitute $N$ by $N-k_l$ in Eq.~(\ref{computerrate}). The latter corresponds to the dotted grey line in Fig.~\ref{BSEstimations}(b), which crosses the photonic sampling rate at around $N\approx 23$.
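For reference, Eq.~(\ref{computerrate}) can be evaluated directly; a trivial sketch, using the $\tilde{a}$ values quoted above:

```python
def classical_rate(N, a_tilde):
    # Metropolised independence sampler: 1/R = 100 * a * N^2 * 2^N
    return 1.0 / (100 * a_tilde * N**2 * 2**N)

a_super, a_desktop = 3e-15, 3e-9      # Tianhe-2 vs a regular computer
R30 = classical_rate(30, a_super)     # a few samples per second at N = 30
assert 3.3 < R30 < 3.6
assert classical_rate(30, a_super) > classical_rate(30, a_desktop)
# beyond the N^2 prefactor, each extra particle roughly halves the rate
assert classical_rate(31, a_super) < 0.55 * classical_rate(30, a_super)
```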
Since both $t_\textrm{pr}$ and $d$ scale quadratically with the number of particles, the atomic and photonic sampling-rate formulas scale worse than the classical algorithm. However, as shown in Fig.~\ref{BSEstimations}(b), at meaningful time scales (i.e., for all practical purposes) both experimental machines are expected to sample much faster than classical supercomputers. In the case of the atomic device, we find that sampling faster than a classical supercomputer is possible around $N\gtrsim 30$, both for $\tau_\textrm{tb}=40$ ms and $\tau_\textrm{tb}=400$ ms.
One further aspect to consider is that two-body losses may change the form of the probability distribution we are sampling from. If that is the case, we need to quantify how far we are from the probability distribution that corresponds to the boson sampling problem. For that, in the next section we present a Hamiltonian model that allows us to quantify the distance between the two probability distributions in terms of the state fidelity.
\subsection{Hamiltonian model for particle loss}\label{subsec:HamilModel}
In this section, we use the second-quantization formalism of quantum mechanics to build a Hamiltonian model representing the boson sampling problem with particle loss. The system contains $N$ particles in $M$ bosonic modes and is subject to incoherent one-body and two-body losses. As we will see, these losses can be represented by an anti-Hermitian part of the system Hamiltonian.
Using second quantization, the free Hamiltonian corresponding to a one-dimensional optical lattice of $M/2$ sites (for simplicity, we assume the number of modes to be even), where each site can hold atoms in two different atomic states, is written as
\begin{equation}
H_0=\sum_{s=1}^{M/2} \left(\omega_\uparrow a^\dagger_{2s-1,\uparrow}a_{2s-1,\uparrow} + \omega_\downarrow a^\dagger_{2s,\downarrow}a_{2s,\downarrow}\right),
\end{equation}
where $a^\dagger_j$ ($a_j$) is the creation (annihilation) operator associated with the $j$-th mode. From now on, for simplicity, we write $a_{j,\uparrow}\equiv a_{j}$ for odd $j$ and $a_{j,\downarrow}\equiv a_{j}$ for even $j$, so that odd and even mode indices are associated with the $|\!\!\uparrow\rangle$ and $|\!\!\downarrow\rangle$ atomic states, respectively.
Coherent population exchange between the $|\!\!\uparrow\rangle$ and $|\!\!\downarrow\rangle$ states can be achieved using a MW field with frequency $\omega_\uparrow-\omega_\downarrow$, giving rise to Rabi oscillations. The effective Hamiltonian associated with this process is, in an interaction picture with respect to $H_0$,
\begin{equation}\label{BSHamil}
H_{x}=\sum_{s=1}^{M/2} \frac{\Omega_0}{2} (a^\dagger_{2s-1}a_{2s}e^{i\varphi_0} + a_{2s-1}a^\dagger_{2s}e^{-i\varphi_0} ),
\end{equation}
where $\Omega_0$ and $\varphi_0$ are the Rabi frequency and the phase associated with the MW driving. As anticipated in section~\ref{PSOLforQC}, local differential light shifts can be produced by focused laser beams, which make it possible to shift the energies defined by $H_0$. This can be represented by the following Hamiltonian,
\begin{equation}\label{shiftHamil}
H_{z}=\sum_{s=1}^{M/2} \frac{\Omega_s}{2} (a^\dagger_{2s-1}a_{2s-1} - a^\dagger_{2s}a_{2s}),
\end{equation}
where the value of $\Omega_s$ can be different for each site $s$. Combining the evolution under Hamiltonians (\ref{BSHamil}) and (\ref{shiftHamil}), any $2\times2$ unitary transformation between on-site modes can be realized. The time-evolution operator corresponding to this generic operation is
\begin{equation}\label{OneTimeEvol}
\hat{U}_{t}=e^{-iH^{(\varphi=\pi)}_{x}\tau/4} e^{-iH^{\vec{\theta}}_{z}\tau/4} e^{-iH^{(\varphi=0)}_{x}\tau/4} e^{-iH^{\vec{\phi}}_{z}\tau/4},
\end{equation}
where $\tau=2\pi/\Omega_0$, with $\Omega_{s}\tau/4=\theta_s$ in $\exp{[-iH^{\vec{\theta}}_{z}\tau/4]}$ and $\Omega_{s}\tau/4=\phi_s$ in $\exp{[-iH^{\vec{\phi}}_{z}\tau/4]}$. The operation described in Eq.~(\ref{OneTimeEvol}) applies simultaneously to all $M/2$ lattice sites and is characterized by $M$ independent parameters, $\vec{\theta}_t=(\theta_1,\theta_2,...,\theta_{M/2})$ and $\vec{\phi}_t=(\phi_1,\phi_2,...,\phi_{M/2})$. These parameters describe $M/2$ arbitrary unitary operations, each as defined in Eq.~(\ref{eq:decomposition}), between all on-site modes. An example is shown in Fig.~\ref{CircuitFigure} for the case $M=6$. As explained previously, the ability to shift the optical potential associated with one of the atomic states allows us to connect neighbouring modes of the circuit. Thus, the following Hamiltonians can also be implemented:
\begin{equation}\label{BSHamilp}
H'_{x}=\sum_{s=1}^{M/2-1} \frac{\Omega_0}{2} (a^\dagger_{2s}a_{2s+1}e^{i\varphi_0} + a_{2s}a^\dagger_{2s+1}e^{-i\varphi_0} ),
\end{equation}
and
\begin{equation}\label{shiftHamilp}
H'_{z}=\sum_{s=1}^{M/2-1} \frac{\Omega_s}{2} (a^\dagger_{2s}a_{2s} - a^\dagger_{2s+1}a_{2s+1}).
\end{equation}
\begin{figure}[t]
\centering
\includegraphics*[width=\columnwidth]{figures/Figures_3/Circuit.pdf}
\caption{ Implementation of a $6\times6$ unitary operation in $6$ time steps. At each time step $t$, a $2\times 2$ unitary transformation is applied on each site $s$, characterized by $\theta_s^t$ and $\phi_s^t$. Red and blue lines represent bosonic modes associated with the states $|\!\!\uparrow\rangle$ and $|\!\!\downarrow\rangle$. Green circles represent the light shift generated by a focused laser beam, which allows an independent operation to be performed on each lattice site. \label{CircuitFigure}}
\end{figure}
Up to now, we have assumed the number of modes $M$ in the circuit to be even. If $M$ is odd, the sums in Eqs.~(\ref{BSHamil}), (\ref{shiftHamil}), (\ref{BSHamilp}), and (\ref{shiftHamilp}) run up to $(M-1)/2$. Proceeding as in Eq.~(\ref{OneTimeEvol}) with Hamiltonians (\ref{BSHamilp}) and (\ref{shiftHamilp}), we are able to implement $M/2$ two-dimensional unitary operations among neighbouring modes. Concatenating these operations as
\begin{equation}\label{AllTimeEvol}
\hat{U}=\hat{U}_M \hat{U}_{M-1}...\hat{U}_{2}\hat{U}_1,
\end{equation}
we are able to transform the $M$ modes of the system according to a Haar-random unitary matrix $U$. The unitaries $U$ and $\hat{U}$ should not be confused: while $U$ is an $M\times M$ unitary matrix, $\hat{U}$ is an infinite-dimensional unitary operator acting on the Hilbert space of quantum states. The two are related by the formula
\begin{equation}\label{ModeTransformation}
\hat{U}^\dagger a^\dagger_{j} \hat{U} =\sum_{i=1}^MU_{ji}a^\dagger_i,
\end{equation}
which describes how the $j$-th mode transforms after all operations in Eq.~(\ref{AllTimeEvol}).
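Relation~(\ref{ModeTransformation}) can be verified explicitly on a small example. The sketch below builds truncated Fock-space operators for the two modes of a single site, evolves with the one-site term of Eq.~(\ref{BSHamil}), and checks the mode transformation acting on the vacuum; the values of $\Omega_0$, $\varphi_0$, and the evolution time are arbitrary illustrative choices:

```python
import numpy as np

d = 3                                        # Fock cutoff per mode
a = np.diag(np.sqrt(np.arange(1, d)), 1)     # annihilation operator, d x d
I = np.eye(d)
a1 = np.kron(a, I)                           # mode 2s-1 (spin up)
a2 = np.kron(I, a)                           # mode 2s   (spin down)

Omega0, phi0, t = 1.0, 0.7, 1.3
H = 0.5*Omega0*(np.exp(1j*phi0)*a1.conj().T @ a2
                + np.exp(-1j*phi0)*a2.conj().T @ a1)   # one-site MW coupling

w, V = np.linalg.eigh(H)
Uhat = V @ np.diag(np.exp(-1j*w*t)) @ V.conj().T       # many-body operator

h = 0.5*Omega0*np.array([[0, np.exp(1j*phi0)],
                         [np.exp(-1j*phi0), 0]])       # single-particle matrix
wh, Vh = np.linalg.eigh(h)
U = (Vh @ np.diag(np.exp(-1j*wh*t)) @ Vh.conj().T).conj()  # 2x2 mode matrix

vac = np.zeros(d*d, dtype=complex)
vac[0] = 1.0                                 # two-mode vacuum state
for j, aj in enumerate((a1, a2)):
    lhs = Uhat.conj().T @ aj.conj().T @ Uhat @ vac     # Udag a_j^dag U |0>
    rhs = sum(U[j, i] * ai.conj().T @ vac for i, ai in enumerate((a1, a2)))
    assert np.allclose(lhs, rhs)             # mode transformation holds
```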
To model the effect of particle loss we can use a Lindblad master equation of the following form
\begin{equation}\label{Lindblad}
\dot{\rho}=-i[H,\rho] +\sum_b \Gamma_b \mathcal{L}_b(\rho),
\end{equation}
where
\begin{equation}\label{Lindblad2}
\mathcal{L}_b(\rho)=F_b \rho F_b^\dagger - \frac{1}{2}\{F^\dagger_b F_b,\rho\},
\end{equation}
and $F_b$ represents a jump operator acting on the system and producing the loss. The jump operator corresponding to one-particle loss due to collisions with the background gas is an annihilation operator $a_m$ acting independently on each mode. Thus, the superoperator corresponding to this process is
\begin{equation}\label{background}
\mathcal{L}_\textrm{bg}(\rho)=\sum^M_{m=1}\Big( a_m \rho a_m^\dagger - \frac{1}{2}\{\hat{n}_m,\rho\}\Big),
\end{equation}
with a loss rate given by $\Gamma_\textrm{bg}=1/\tau_\textrm{bg}$. For two-body collisions, we need to consider two scenarios. On the one hand, two particles in the same mode can collide; the jump operator is $a^2_m/\sqrt{2}$ and the superoperator is
\begin{equation}\label{twobody1}
\mathcal{L}_\textrm{tb1}(\rho)=\frac{1}{2}\sum^M_{m=1}\Big( a^2_m \rho (a_m^\dagger)^2 - \frac{1}{2}\{\hat{n}_m(\hat{n}_m-1),\rho\}\Big),
\end{equation}
with loss rate $\Gamma_\textrm{tb}=1/\tau_\textrm{tb}$. On the other hand, two particles in neighbouring modes of the same site can collide, in which case the jump operator is $a_{2s-1}a_{2s}$, where $s$ is the site index. This jump operator changes to $a_{2s}a_{2s+1}$ when the lattice configuration is shifted to produce the interactions described in Eqs.~(\ref{BSHamilp}) and (\ref{shiftHamilp}). Thus, two different superoperators act depending on the lattice configuration, namely
\begin{equation}\label{twobody2}
\mathcal{L}_\textrm{tb2}(\rho)=\sum^{M/2}_{s=1}\Big( a_{2s-1}a_{2s} \rho a^\dagger_{2s-1}a^\dagger_{2s} - \frac{1}{2}\{\hat{n}_{2s-1}\hat{n}_{2s},\rho\}\Big),
\end{equation}
or
\begin{equation}\label{twobody2p}
\mathcal{L}'_\textrm{tb2}(\rho)=\sum^{M/2-1}_{s=1}\Big( a_{2s}a_{2s+1} \rho a^\dagger_{2s}a^\dagger_{2s+1} - \frac{1}{2}\{\hat{n}_{2s}\hat{n}_{2s+1},\rho\}\Big),
\end{equation}
both with decay rate $\Gamma_\textrm{tb}=1/\tau_\textrm{tb}$.
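These superoperators can be integrated directly for a minimal system. The sketch below (a single mode with $H=0$ and arbitrary illustrative rates) evolves $|2\rangle\langle2|$ under the one-body and on-site two-body channels and checks that the two-atom population decays at rate $2\Gamma_\textrm{bg}+\Gamma_\textrm{tb}$, while the full Lindblad evolution preserves the trace:

```python
import numpy as np

d = 4                                        # Fock cutoff (occupations 0..3)
a = np.diag(np.sqrt(np.arange(1, d)), 1)     # annihilation operator

g_bg, g_tb = 0.3, 1.1                        # rates 1/tau_bg, 1/tau_tb (arbitrary)
jumps = [(g_bg, a), (g_tb, a @ a / np.sqrt(2))]

def lindblad_rhs(rho):
    # pure dissipation: sum_b Gamma_b (F rho F^dag - {F^dag F, rho}/2)
    out = np.zeros_like(rho)
    for g, F in jumps:
        out += g * (F @ rho @ F.conj().T
                    - 0.5 * (F.conj().T @ F @ rho + rho @ F.conj().T @ F))
    return out

rho = np.zeros((d, d), dtype=complex)
rho[2, 2] = 1.0                              # start from |2><2|
steps, dt = 1000, 1e-3
for _ in range(steps):                       # 4th-order Runge-Kutta integration
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5*dt*k1)
    k3 = lindblad_rhs(rho + 0.5*dt*k2)
    k4 = lindblad_rhs(rho + dt*k3)
    rho += dt/6 * (k1 + 2*k2 + 2*k3 + k4)
t = steps * dt

# <2|F^dag F|2> = 2 for F = a and = 1 for F = a^2/sqrt(2), so the
# two-atom population decays at total rate 2*g_bg + g_tb, while the
# full Lindblad map keeps tr(rho) = 1
assert abs(rho[2, 2].real - np.exp(-(2*g_bg + g_tb)*t)) < 1e-6
assert abs(np.trace(rho).real - 1.0) < 1e-8
```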
In the boson sampling problem, the number of particles is conserved; in other words, the Hamiltonian commutes with $\hat{N}=\sum_m\hat{n}_m$, the total-particle-number operator. In the master equation, the elements that describe population transfer from the $N$-particle subspace to subspaces with fewer particles are the first terms in Eqs.~(\ref{background}), (\ref{twobody1}), (\ref{twobody2}), and (\ref{twobody2p}). If we are only interested in the $N$-particle subspace, we can safely ignore these terms, as they do not affect this subspace. In doing so, we lose the trace preservation of the master equation, which now describes a loss of probability, $\textrm{tr}(\rho)\leq1$. In exchange, the description of the problem becomes simpler, and the dynamics of the lossy system is given by the non-Hermitian Hamiltonian
\begin{equation}
\tilde{H}=H-iV -i\frac{\Gamma_\textrm{bg}}{2}\hat{N},
\end{equation}
where $V$ changes from
\begin{equation}
V=\frac{\Gamma_\textrm{tb}}{4}\sum_{m=1}^M\hat{n}_m(\hat{n}_m-1)
+\frac{\Gamma_\textrm{tb}}{2}\sum_{s=1}^{M/2}\hat{n}_{2s-1}\hat{n}_{2s}
\end{equation}
to
\begin{equation}
V'=\frac{\Gamma_\textrm{ tb}}{4}\sum_{m=1}^M\hat{n}_m(\hat{n}_m-1)
+\frac{\Gamma_\textrm{ tb}}{2}\sum_{s=1}^{M/2-1}\hat{n}_{2s}\hat{n}_{2s+1}
\end{equation}
depending on the lattice configuration. It can be proven that, at each step, the anti-Hermitian part commutes with the Hermitian part, i.e. $[H,V]=[H',V']=0$. In addition, we have that $[\hat{N},V]=[\hat{N},V']=[\hat{N},H]=[\hat{N},H']=0$; thus, the time evolution operator corresponding to the time step $t$ can be written as $\hat{U}_t e^{-V\tau} e^{-\Gamma_\textrm{ bg}\tau\hat{N}/2}$. At the end, the time evolution operator can be written as
\begin{equation}\label{LossyTimeEvol}
\hat{U} e^{-M\tau\Gamma_\textrm{ bg}\hat{N}/2}e^{-V_{M}\tau}e^{-V_{M-1}\tau}...e^{-V_{2}\tau} e^{-V_{1}\tau}
\end{equation}
where $V_t$ is given by $\hat{U}^\dagger_{t-1,1}V\hat{U}_{t-1,1}$ when $t$ is odd, and by $\hat{U}^\dagger_{t-1,1}V'\hat{U}_{t-1,1}$ when $t$ is even, with $\hat{U}_{t,1}=\prod_{j=1}^{t}\hat{U}_j$. The probability of having an $N$-particle state at the output is then given by
\begin{equation}
p=e^{-M\tau\Gamma_\textrm{ bg}N}\langle\psi_0|\big(\prod_{t=1}^M e^{-V_{t}\tau}\big)^\dagger \prod_{t=1}^M e^{-V_{t}\tau} |\psi_0\rangle,
\end{equation}
where $|\psi_0\rangle$ is the initial state of the system, and $\hat{N}|\psi_0\rangle=N|\psi_0\rangle$. Notice that the exponential decay produced by uncorrelated particle loss $\exp{(-N\Gamma_\textrm{ bg}M\tau)}$ has the same form as in Eq.~(\ref{SurvProbStep}). Using Eq.~(\ref{LossyTimeEvol}), we can also write the fidelity between the final states produced by the cases with and without losses. This is given by\footnote{Typically, the fidelity between two pure states can be written as $|\langle\psi_1|\psi_2\rangle|^2$. However, if these are not normalized, the correct expression is $|\langle\psi_1|\psi_2\rangle|^2/(\langle\psi_1|\psi_1\rangle\langle\psi_2|\psi_2\rangle)$.}
\begin{equation}\label{generalfidelity}
F=|\langle\psi_0| \prod_{t=1}^M e^{-V_{t}\tau} |\psi_0\rangle|^2 \bigg/\langle\psi_0|\big(\prod_{t=1}^M e^{-V_{t}\tau}\big)^\dagger \prod_{t=1}^M e^{-V_{t}\tau} |\psi_0\rangle.
\end{equation}
From Eq.~(\ref{generalfidelity}), we learn that uncorrelated particle loss affects neither the state fidelity nor the form of the final probability distribution. In consequence, one can claim that homogeneous one-body losses do not change the complexity associated to the probability distribution of the boson sampling problem. Unfortunately, one cannot make the same claim for two-body losses, which is why we need to know how both the fidelity and the success probability scale with the number of particles $N$. These quantities can be numerically calculated for a small number of particles; however, their calculation becomes intractable for more than five particles because the Hilbert space dimension increases exponentially. Thus, it is convenient to have analytical expressions for $F$ and $p$. For that, we need to do some approximations.
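As a sanity check of Eqs.~(\ref{LossyTimeEvol}) and (\ref{generalfidelity}), the sketch below evaluates $p$ and $F$ for a toy non-Hermitian evolution. The dimension, the loss operators $V_t$ and the rates are illustrative placeholders, not the ones of the lattice model:

```python
import numpy as np
from scipy.linalg import expm

def lossy_p_and_F(psi0, Vs, tau, gamma_bg, N):
    """Success probability p and fidelity F after applying prod_t exp(-V_t tau)."""
    psi = psi0.copy()
    for V in Vs:
        psi = expm(-V * tau) @ psi
    norm2 = np.vdot(psi, psi).real              # <psi0|(prod)^dag (prod)|psi0>
    p = np.exp(-len(Vs) * tau * gamma_bg * N) * norm2
    F = abs(np.vdot(psi0, psi))**2 / norm2      # fidelity for unnormalized states
    return p, F

rng = np.random.default_rng(1)
d, M, tau = 6, 4, 0.05
psi0 = rng.normal(size=d) + 1j * rng.normal(size=d)
psi0 /= np.linalg.norm(psi0)
# positive-semidefinite V_t -> pure decay, so p <= 1 is guaranteed
Vs = [(lambda A: A @ A.T)(rng.normal(size=(d, d))) for _ in range(M)]
p, F = lossy_p_and_F(psi0, Vs, tau, gamma_bg=0.1, N=3)
# with V_t = 0, only the uncorrelated one-body decay survives and F = 1
p0, F0 = lossy_p_and_F(psi0, [np.zeros((d, d))] * M, tau, gamma_bg=0.1, N=3)
```

The zero-loss case reproduces the statement of the text that homogeneous one-body losses rescale $p$ by $e^{-M\tau\Gamma_\textrm{ bg}N}$ without degrading the fidelity.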
\subsubsection{Weak two-body losses}
If the two-body loss rate is small, meaning $\Gamma_\textrm{ tb} M\tau\ll1$, we can expand all exponentials to the second order, and we get that the success probability is
\begin{equation}
p\approx e^{-M\Gamma_\textrm{ bg}\tau N}\Big\{1-2\tau\sum_{t}^M\langle V_t \rangle +\tau^2[\sum_{t}^M\langle V_t^2\rangle +\sum_{t, t'}^M\langle V_t V_{t'}\rangle]\Big\} +O(\tau^3)
\end{equation}
and the fidelity is
\begin{equation}
F\approx 1-\tau^2\Big\{\sum_{t, t'}^M\langle V_t V_{t'}\rangle -\langle V_t \rangle\langle V_{t'}\rangle\Big\} +O(\tau^3).
\end{equation}
In Fig.~\ref{Fig:AvVs}, we can see the form of $\langle V_t\rangle$ and $\langle V_t V_{t'}\rangle$ for two different initial states and for $N=3$. For a uniform initial state $|\psi_u\rangle$, where all configurations have the same probability amplitude, $\langle V_t\rangle$ and $\langle V_t V_{t'}\rangle$ maintain, on average, the same value throughout the process. With an anti-bunched initial state, where the average distance among atoms is maximal, $\langle V_t\rangle$ and $\langle V_t V_{t'}\rangle$ are zero at the beginning and increase with each step $t$. When increasing the number of particles, this behaviour is not expected to change, as long as the number of modes scales quadratically with $N$. Thus, one can assume the following lower bounds for $p$ and $F$
\begin{equation}\label{lowerprob}
p \gtrsim e^{-M\Gamma_\textrm{ bg}\tau N}\Big\{1-2M \tau \langle V \rangle_u +M\tau^2(1 +M)\langle V^2\rangle_u\Big\} +O(\tau^3)
\end{equation}
and
\begin{equation}\label{lowerfidelity}
F \gtrsim 1-M^2\tau^2\Big\{\langle V^2\rangle_u -\langle V \rangle^2_u\Big\} +O(\tau^3),
\end{equation}
where we assume that $\langle V_t\rangle\lesssim \langle V \rangle_u$ and $\langle V_t V_{t'}\rangle \lesssim\langle V^2\rangle_u$.
\begin{figure}[t]
\centering
\includegraphics*[width=1\columnwidth]{figures/Figures_3/AvVs.pdf}
\caption{Numerically calculated $\langle V_t\rangle$ and $\langle V_t V_{t'}\rangle$ for $N=3$ and $M=9$. (a) Mean value of $\langle V_t\rangle$ for 30 different random unitaries. Solid and dashed lines correspond to uniform and anti-bunched initial states, respectively. (b) and (c) show the mean value of $\langle V_t V_{t'}\rangle$ for 30 unitaries and for the cases of uniform and anti-bunched initial states, respectively. \label{Fig:AvVs} }
\end{figure}
Now, one can analytically calculate the values of $\langle V\rangle_u$ and $\langle V^2\rangle_u$, which, for large $N$, give $\langle V\rangle_u=3\Gamma_\textrm{ tb}/4$ and $\langle V^2\rangle_u=(3/2 + 9/4)\Gamma^2_\textrm{ tb}/4$, see appendix \ref{App:AvVs} for more details. Using these results, Eqs.~(\ref{lowerprob}) and (\ref{lowerfidelity}) give
\begin{equation}\label{lowerprob2}
p \gtrsim e^{-M\Gamma_\textrm{ bg}\tau N}\Big\{1-3M \Gamma_\textrm{ tb}\tau/2 +15 M^2\Gamma^2_\textrm{ tb}\tau^2(1 +1/M)/16\Big\} +O(\tau^3)
\end{equation}
and
\begin{equation}\label{lowerfidelity2}
F \gtrsim 1-3M^2\Gamma^2_\textrm{ tb}\tau^2/8 +O(\tau^3).
\end{equation}
From the above equations, we can infer that when increasing the number of particles, $\Gamma_\textrm{ tb}\tau$ has to scale as $1/M$, either by decreasing $\tau$ or $\Gamma_\textrm{ tb}$, if we want to maintain a constant value for $F$ and $p$. Notice that maintaining a constant value for the fidelity would imply that the complexity of the generated sample is kept constant~\cite{Aaronson16b}.
According to Eq.~(\ref{lowerfidelity2}), achieving quantum advantage at $N\approx30$ with $99\%$ state fidelity would require $\tau_\textrm{ tb}$ to be around five thousand times the operation time $\tau$. For example, if $\tau \sim 100\mu$s, then $\tau_\textrm{ tb}\sim 500$ms. This value is, in principle, achievable using Feshbach resonances~\cite{Chin04}. However, it may well be that high fidelities are not necessary to achieve quantum supremacy. Advantage with respect to classical computers may be possible in a regime where $M\Gamma_\textrm{ tb}\tau \gtrsim1$, as long as sampling from the generated probability distribution, different from the one generated by standard boson sampling, is a hard task. This is why it is interesting to try to extend the previous analysis to the strong-loss case.
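The numbers above follow directly from Eq.~(\ref{lowerfidelity2}); a quick arithmetic check, assuming $M=N^2$ steps/modes as in the rest of the chapter:

```python
import math

N, F_target = 30, 0.99          # quantum-advantage regime with 99% state fidelity
M = N**2                        # number of modes/steps, assumed to scale as N^2
# Eq. (lowerfidelity2): F ~ 1 - 3 M^2 (Gamma_tb tau)^2 / 8
gamma_tb_tau = math.sqrt(8 * (1 - F_target) / 3) / M
ratio = 1 / gamma_tb_tau        # tau_tb / tau
tau = 100e-6                    # operation time of 100 us, as in the text
tau_tb = ratio * tau            # required two-body lifetime, in seconds
print(f"tau_tb/tau ~ {ratio:.0f}  ->  tau_tb ~ {tau_tb*1e3:.0f} ms")
```

This reproduces the quoted figures: a ratio of roughly five and a half thousand, i.e. $\tau_\textrm{ tb}\sim 500$ ms for $\tau\sim100\,\mu$s.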
\subsubsection{Strong two-body losses}
\begin{figure}[t]
\centering\includegraphics*[width=1\columnwidth]{figures/Figures_3/Probs.pdf}
\caption{(a) and (b) Success probability per time step $p_t$, where $p=\prod_{t}p_t$, for different decay rates $M\Gamma_\textrm{ tb}\tau=1$ (upper lines), $M\Gamma_\textrm{ tb}\tau=5$ (middle lines), and $M\Gamma_\textrm{ tb}\tau=20$ (lower lines), $N=3$, and for a uniform and antibunched initial state, respectively. Blue lines represent the exact calculation, where each point is calculated after 30 different random unitaries. Dashed and dotted green lines represent the results of the models with algebraic and exponential decay, respectively. (c) Success probability versus $M\Gamma_\textrm{ tb}\tau$. One can see how the algebraic model (dashed green) is in very good agreement with the exact case with uniform initial state (solid blue). \label{Fig:Probs} }
\end{figure}
In the following, we try to estimate how the success probability would scale in a scenario where losses are strong and $M\Gamma_\textrm{ tb} \tau\ll1$ does not hold true. Exact numerical simulations can be used to calculate the behaviour of the success probability throughout the process. As in the case with weak losses, the probability loss per time step is more or less constant for uniform initial states, see Fig.~\ref{Fig:Probs}(a). To estimate this value, we assume we have a uniform state at each time step; then, according to the model presented in the previous section, the probability loss per time step would be given by\footnote{The loss rate of 3 particles (a particle trio) on the same site is given by $3\Gamma_\textrm{ tb}$. Notice that there are 3 different ways to form a pair; thus, the probability of a two-body collision is 3 times higher.}
\begin{equation}\label{simplemodel2}
P_\textrm{ step}=e^{-N\Gamma_\textrm{ bg}\tau}\sum_{k_3=0}^{N/3}\sum_{k_2=0}^{(N-3k_3)/2} P(k_2,k_3)\Big[1+3\Gamma_\textrm{ tb}\tau\Big]^{-k_3}\Big[1+\Gamma_\textrm{ tb}\tau\Big]^{-k_2},
\end{equation}
where $P(k_2,k_3)$ gives the probability of having $k_2$ pairs and $k_3$ trios. We ignore the cases where $4$ or more particles are in a lattice site, since these probabilities are zero for $N=3$. Notice that Eq.~(\ref{simplemodel2}) does not correspond to applying the loss Hamiltonian evolution operator to the uniform state for a time $\tau$, i.e. $e^{-N\Gamma_\textrm{ bg}\tau}\langle e^{-V\tau}\rangle_u$, which would end up with a similar result with exponential decays $\exp{(-\Gamma_\textrm{ tb}\tau)}$ instead of algebraic ones $(1+\Gamma_\textrm{ tb}\tau)^{-1}$. Interestingly, the formula with the algebraic decay matches the exact result better, as can be seen in Fig.~\ref{Fig:Probs}(c). This tells us that the average state in the lossy case is not really uniform, as it is in the case without any loss. On the other hand, the excellent agreement between our model and the exact results justifies our model and its use in Fig.~\ref{BSEstimations}(b).
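Eq.~(\ref{simplemodel2}) can be evaluated exactly by enumerating the Fock configurations of the uniform state. A minimal sketch follows; the identification of $P(k_2,k_3)$ with the fraction of configurations containing $k_2$ pairs and $k_3$ trios is our reading of the model:

```python
import math
from itertools import combinations_with_replacement
from collections import Counter

def p_step(N, M, gamma_tb_tau, gamma_bg_tau):
    """Eq. (simplemodel2) for a uniform state: enumerate all Fock
    configurations of N bosons in M modes and classify them by the number
    of doubly (k2) and triply (k3) occupied sites."""
    configs = list(combinations_with_replacement(range(M), N))
    total = len(configs)                        # C(N+M-1, N) configurations
    acc = 0.0
    for c in configs:
        occ = Counter(c)
        k2 = sum(1 for v in occ.values() if v == 2)
        k3 = sum(1 for v in occ.values() if v == 3)
        acc += (1 + 3*gamma_tb_tau)**(-k3) * (1 + gamma_tb_tau)**(-k2) / total
    return math.exp(-N * gamma_bg_tau) * acc

P = p_step(N=3, M=9, gamma_tb_tau=0.1, gamma_bg_tau=0.0)
```

In the lossless limit ($\Gamma_\textrm{ tb}=\Gamma_\textrm{ bg}=0$) the probability per step is exactly 1, as it should be.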
According to this model, advantage with respect to classical computers cannot be reached with the current experimental values. However, sampling faster than a classical supercomputer would be possible around $N\approx34$ with $\tau_\textrm{ tb}\approx 40$ms and $\tau\approx 33\mu$s. In this regime, $M\Gamma_\textrm{ tb}\tau\approx 1$, and most likely the generated probability distribution is different from the one required by standard boson sampling. Unfortunately, determining whether it is still hard to sample from that probability distribution is beyond the scope of this work.
\subsection{Additional considerations}\label{subsec:additional}
Particle loss is the main experimental imperfection affecting the sampling rate. However, there are other sources of error that, without affecting the sampling rate, can change the form of the final probability distribution, i.e. the final state. Undesired differential light shifts may appear due to fluctuations of the magnetic field or imperfect light polarization of the optical lattice~\cite{Alberti14}. These light shifts will induce errors in the application of the $2\times2$ unitary transformations, which could deviate the total $M\times M$ unitary transformation from the ideal Haar random unitary. In this regard, Leverrier \emph{et al.}~\cite{Leverrier15} proved that the hardness of the final probability distribution is guaranteed, as long as the error introduced by each operation scales as $1/N^2$. A uniform light shift can be represented in our Hamiltonian model just by including a term like $H_\textrm{ shift}=\epsilon\Omega_0\sum_{s=1}^{M/2}\hat{n}_{2s}$ throughout the whole process. For $N=3$, a light shift with $\epsilon=8\times10^{-3}$ yields $F\approx 99\%$. Maintaining this fidelity at $N\approx30$ would then require $\epsilon \sim 8 \times10^{-5}$, which, for $\Omega_0\approx(2\pi)\times3$ MHz, would correspond to an uncontrolled light shift of $(2\pi)\times 240$ Hz.
\pagebreak
Another relevant issue is the degree of distinguishability among the atoms. The boson sampling problem assumes that the particles are indistinguishable. In our case, two atoms on the same mode will be indistinguishable only if they share the same motional state. For that, ground-state cooling of all vibrational degrees of freedom is required. Moreover, the transport of atoms could change their motional state by inducing unwanted excitations. In this respect, Refs.~\cite{Rohde12a,Shchesnovich15} prove that quantum advantage is also possible with partially distinguishable particles. More precisely, Shchesnovich~\cite{Shchesnovich15} shows that, while increasing $N$, the final state fidelity (which measures how close we are to the boson sampling probability distribution) would remain constant, as long as the fidelity between the vibrational states of two atoms scales as $1-O(1/N)$.
Summarizing, in this chapter we have proposed a method to realize boson sampling with ultracold atoms in optical lattices. Using simple error models, we compare the scaling of atomic, photonic and classical boson sampling machines up to tens of particles, and conclude that, in the near future, atomic and photonic machines could achieve quantum advantage with $N\approx30$ and $N\approx23$ particles, respectively. We benchmark the atomic error model using exact numerical simulations of a Hamiltonian model that includes one-body and two-body losses. These two models are in good agreement for $N=3$ with weak and strong two-body losses\footnote{Numerical simulations of the Hamiltonian model become too demanding for $N>4$.}. Finally, with strong two-body losses one may still sample faster than a classical supercomputer; however, the probability distribution one is sampling from does not correspond to the boson sampling probability distribution, and further work is needed to prove whether it is still hard to sample from that probability distribution.
\chapter{Quantum Simulation of Light-Matter Interactions}
\label{chapter:chapter_2}
\thispagestyle{chapter}
Understanding the interactions that emerge among two-level atoms (qubits) and bosonic field modes is of major importance for the development of quantum technologies. The qubit-boson interaction governs the dynamics of distinct quantum platforms such as cavity QED~\cite{Haroche89,Raimond01}, trapped ions~\cite{Leibfried03} or superconducting circuits~\cite{Clarke08}, which can achieve the SC regime. Here, the qubit-boson Rabi coupling $g$ is usually much smaller than the field frequency, but larger than the coupling to the environment. In these conditions, the JC model~\cite{Jaynes63} that appears after applying the RWA provides an excellent description of the system. In the last decade, experiments in circuit QED have achieved couplings well above the USC regime ($g/\omega\gtrsim0.1$)~\cite{Yoshihara17}, hindering a perturbative treatment of the QRM. In the DSC regime ($g/\omega\gtrsim1$), the full QRM has to be considered, and the associated physics is different from the one described by the JC model~\cite{Casanova10}.
In this chapter, we study two different extensions of the QRM, namely the Rabi-Stark model and the nonlinear QRM. In the former, a Stark coupling term is added to the QRM, which leads to multi-photon selective interactions in the SC and USC regimes. The latter is natural to the implementation of the QRM in trapped ions, when moving beyond the LD regime.
\section{Selective interactions in the Rabi-Stark model}
The QRM with a Stark coupling term, named the Rabi-Stark model~\cite{Eckle17} (a denomination we also use in the following), was first considered by Grimsmo and Parkins~\cite{Grimsmo13,Grimsmo14}. The study of its energy spectrum~\cite{Eckle17,Maciejewski14,Xie19} has revealed some interesting features such as a spectral collapse or a first-order phase transition~\cite{Xie19}, which connects it with the two-photon QRM~\cite{Travenec12,Maciejewski15,Travenec15,Felicetti18_1,Felicetti18_2,Cong19,Xie17} or with the anisotropic QRM~\cite{Xie14}. On the other hand, dynamical features of the JC model with a Stark coupling term have been studied in the past~\cite{Pellizzari94,Solano00,Franca01,Solano05,Franca05,Prado13}. The Stark coupling is useful to restrict the resonance condition and the Rabi oscillations to a preselected JC doublet, leaving the other doublets in a dispersive regime. This selectivity has found applications for state preparation and reconstruction of the bosonic modes in cavity QED~\cite{Pellizzari94,Franca01} or trapped ions~\cite{Solano00,Solano05,Franca05}. In light of the above, the dynamical study of the full QRM with a Stark coupling term in the SC and USC regimes is well justified.
In this section, we study the dynamical behaviour of the QRM with a Stark term, i.e. the Rabi-Stark model, and show that the interplay between the Stark and Rabi couplings gives rise to selective $k$-photon interactions in the SC and USC regimes. Note that, previously, $k$-photon (or multiphoton) resonances have been investigated in the linear QRM~\cite{Ma15,Garziano15}, driven linear qubit-boson couplings~\cite{Nha00,Chough00,Klimov04,Casanova18QRM,Puebla19_1} or nonlinear couplings~\cite{Shore93,Vogel95}, and recently have found applications for quantum information science~\cite{Macri18,Boas19}. In our case, $k$-photon transitions appear as higher-order processes of the linear QRM, while the Stark coupling is responsible for the selective nature of these interactions. This section is organised as follows: In section~\ref{subsect:OnePhoton} the Rabi-Stark model is introduced and we review the selective one-photon interactions that appear. In section~\ref{subsect:MultiPhoton} we use time-dependent perturbation theory to characterise the emergent $k$-photon interactions whose strength scales as $(g/\omega)^k$. Finally, in section~\ref{subsect:QRSTI} we introduce a method to simulate the Rabi-Stark model in a wide parameter regime using a single trapped ion. Moreover, we validate our proposal with numerical simulations which show an excellent agreement between the dynamics of the Rabi-Stark model and the one achieved by the trapped-ion simulator.
\subsection{Selectivity in one-photon interactions}\label{subsect:OnePhoton}
The Hamiltonian of the Rabi-Stark model is
\begin{equation}\label{QRS1}
H=\frac{\omega_0}{2}\sigma_z +\omega a^\dag a + \gamma a^\dag a \sigma_z + g(\sigma_+ +\sigma_-)(a+a^\dag)
\end{equation}
where $\omega_0$ is the frequency of the qubit or two-level system, $\omega$ is the frequency of the bosonic field, and $\gamma$ and $g$ are the couplings of the Stark and Rabi terms, respectively. Note that the Stark term is diagonal in the bare basis $\big\{|{\textrm e}\rangle,|{\textrm g}\rangle\big\}\otimes|n\rangle$ (where $\sigma_z|{\textrm e}\rangle=|{\textrm e}\rangle$, $\sigma_z|{\textrm g}\rangle=-|{\textrm g}\rangle$ and $a^\dagger a|n\rangle=n|n\rangle$), and it can be interpreted as a qubit energy shift that depends on the bosonic state. If we move to an interaction picture with respect to the first three terms in Eq.~(\ref{QRS1}), the system Hamiltonian reads (see appendix~\ref{app:QRSDyson} for additional details)
\begin{equation}\label{QRSIntPic}
H_I(t)=\sum_{n=0}^{\infty}\Omega_n(\sigma_+ e^{i\delta^+_{n} t}+\sigma_- e^{i\delta^-_{n} t})|n\!+\!1 \rangle\langle n| + \textrm{H.c.}
\end{equation}
where $\Omega_n=g\sqrt{n+1}$, $\delta_n^+=\omega+\omega^0_n$ and $\delta_n^-=\omega-\omega_n^0$, with $\omega_n^0=\omega_0+\gamma(2 n +1)$. If $\gamma=0$, these detunings are independent of the state $n$, and, for $|\delta^+| \gg \Omega_n$ and $\delta^-=\omega-\omega_0=0$ ($|\delta^-| \gg \Omega_n$ and $\delta^+=\omega+\omega_0=0$), a resonant JC (anti-JC) Hamiltonian is recovered when fast rotating terms are averaged out by invoking the RWA. In these conditions, the dynamics leads to Rabi oscillations between the states $|{\textrm e},n\rangle \leftrightarrow |{\textrm g},n+1\rangle$ ($|{\textrm g},n\rangle \leftrightarrow |{\textrm e},n+1\rangle$) for every $n$, and at a rate proportional to $\Omega_n$. These interactions are not selective as they apply to all Fock states in the same manner.
\pagebreak
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{figures/Figures_2/OnePhoton.pdf}
\caption{One-photon selective interactions of the Rabi-Stark model. Hamiltonian (\ref{QRS1}) acts during a time $t=\pi/2\Omega_n$ and we calculate $\langle a^\dagger a\rangle$ for different ratios of $\omega_0/\omega$ and initial states $|\textrm{e},0\rangle$ (blue), $|\textrm{e},1\rangle$ (orange), $|\textrm{e},2\rangle$ (purple) and $|\textrm{e},3\rangle$ (green) with fixed couplings $\gamma/\omega=-0.25$ and $g/\omega=0.02$ (solid lines). If $\gamma=0$, all JC peaks would be at $\omega-\omega_0=0$ (dashed lines).}\label{fig:QRSOnePhoton}
\end{figure}
The presence of a nonzero Stark coupling $\gamma$ makes these detunings dependent on $n$, allowing us to identify a resonance condition for a selected Fock state $n=N_0$, while the rest of Fock states stay out of resonance. From Eq.~(\ref{QRSIntPic}) we note that if $\delta^{-}_{N_0}=0$ ($\delta^{+}_{N_0}=0$) and $|\delta^{-}_{n\neq N_0}|\gg \Omega_{n\neq N_0}$ ($|\delta^{+}_{n\neq N_0}|\gg \Omega_{n\neq N_0}$), the dynamics of Hamiltonian~(\ref{QRS1}) will produce a resonant one-photon JC (anti-JC) interaction only in the subspace $\{|{\textrm e}\rangle,|{\textrm g}\rangle\}\otimes\{|N_0\rangle,|N_0+1\rangle\}$. This is observed in Fig.~\ref{fig:QRSOnePhoton}, where resonance peaks appear for initial states $|{\textrm e},n\rangle$ with different number $n$. Here, a one-photon Rabi oscillation occurs if $\omega-\omega_0=\gamma(2n+1)$, i.e. $\delta^{-}_{n}=0$. In Fig.~\ref{fig:QRSOnePhoton}, we vary $(\omega-\omega_0)/\omega$ in the $x$ axis for fixed $\gamma/\omega=-0.25$ and $g/\omega=0.02$, and meet this resonance condition for $n=0,1,2,3$ that correspond to the four peaks on the left side (solid lines). The other two peaks on the right correspond to $\delta^{+}_{n}=0$ resonances leading to one-photon anti-JC interactions for $n=1,2$.
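The selective one-photon resonance can be reproduced numerically from Eq.~(\ref{QRS1}) alone. The sketch below, in which the Fock-space cutoff and parameter values are illustrative, drives the $n=1$ JC resonance $\omega-\omega_0=\gamma(2n+1)$ and checks the transfer $|{\textrm e},1\rangle\to|{\textrm g},2\rangle$:

```python
import numpy as np
from scipy.linalg import expm

def rabi_stark_H(omega0, omega, gamma, g, ncut):
    """Rabi-Stark Hamiltonian, Eq. (QRS1), in a truncated Fock basis."""
    a = np.diag(np.sqrt(np.arange(1, ncut)), 1)     # annihilation operator
    n = a.T @ a
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    return (0.5 * omega0 * np.kron(sz, np.eye(ncut))
            + omega * np.kron(np.eye(2), n)
            + gamma * np.kron(sz, n)
            + g * np.kron(sx, a + a.T))

# selective resonance delta_n^- = 0 for n0 = 1: omega0 = omega - gamma*(2 n0 + 1)
omega, gamma, g, n0, ncut = 1.0, -0.25, 0.02, 1, 12
omega0 = omega - gamma * (2 * n0 + 1)
H = rabi_stark_H(omega0, omega, gamma, g, ncut)

# half Rabi period of |e, n0> <-> |g, n0+1>, Omega_n = g sqrt(n0+1)
t = np.pi / (2 * g * np.sqrt(n0 + 1))
psi0 = np.zeros(2 * ncut)
psi0[n0] = 1.0                                      # |e, n0> (|e> block first)
psi = expm(-1j * H * t) @ psi0
P_g_n1 = abs(psi[ncut + n0 + 1])**2                 # population of |g, n0+1>
```

At $t=\pi/2\Omega_{n_0}$ the population lands almost entirely in $|{\textrm g},2\rangle$, up to small dispersive corrections from the off-resonant terms.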
\subsection{Multi-photon interactions}\label{subsect:MultiPhoton}
As revealed previously, besides one-photon transitions, the Rabi-Stark Hamiltonian produces selective $k$-photon interactions. Unlike the selective one-photon interactions, which appear due to the interplay between the Stark term and the rotating or counter-rotating terms, these selective multi-photon interactions are a direct consequence of the interplay between the Stark term and both the rotating and counter-rotating terms. Calculating the Dyson series for Eq.~(\ref{QRSIntPic}), we obtain that the second order Hamiltonian is
\begin{equation}\label{QRSSecondOrder1}
H_I^{(2)}=\sum_{n=0}^{\infty}\big(\Delta_n^{\textrm e}\sigma_+\sigma_- +\Delta_n^{\textrm g}\sigma_-\sigma_+ \big)|n\rangle\langle n|
\end{equation}
where $\Delta_n^{\textrm e}=\Omega^2_{n-1}/\delta^+_{n-1}-\Omega^2_{n}/\delta_n^-$ and $\Delta_n^{\textrm g}=\Omega^2_{n-1}/\delta^-_{n-1}-\Omega^2_n/\delta_n^+$, plus a time-dependent part oscillating with frequencies $\delta_{n+1}^++\delta_n^-=2\omega+2\gamma$, $\delta_{n+1}^-+\delta_n^+=2\omega-2\gamma$, and $\delta_{n}^\pm,\delta_{n+1}^\pm$ that is averaged out due to the RWA (see appendix~\ref{subapp:QRSSecondOrder} for the derivation of Eq.~(\ref{QRSSecondOrder1})).
The third order Hamiltonian leads to three-photon transitions described by the following Hamiltonian (see appendix~\ref{subapp:QRSThirdOrder} for the derivation)
\begin{equation}\label{QRSThirdOrder1}
H_I^{(3)}(t)=\sum_{n=0}^{\infty}\big(\Omega^{(3)}_{n+}e^{i\delta^{(3)}_{n+}t}\sigma_++\Omega^{(3)}_{n-}e^{i\delta^{(3)}_{n-}t}\sigma_-\big)|n\!+\!3\rangle \langle n| + \textrm{H.c.},
\end{equation}
where $\Omega^{(3)}_{n\pm}=g^3\sqrt{(n+3)!/n!}/2\delta^\pm_n(\omega\mp\gamma)$ and $\delta^{(3)}_{n\pm}=\delta^\pm_{n+2}+\delta^\mp_{n+1}+\delta_n^\pm=2\omega+\delta_{n+1}^\pm$. According to this, a JC type three-photon process occurs for $|{\textrm e},N_0\rangle$ if $\delta^{(3)}_{N_0-}=0$ producing population exchange between the states $|{\textrm e}, N_0\rangle\leftrightarrow|{\textrm g}, N_0+3\rangle$. For the state $|{\textrm g}, N_0\rangle$, anti-JC-type transitions to the state $|{\textrm e}, N_0+3\rangle$ occur when $\delta^{(3)}_{N_0+}=0$. In the following we will check the validity of these effective Hamiltonians by numerically calculating the dynamics of Hamiltonian~(\ref{QRS1}).
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{figures/Figures_2/ThreePhoton.pdf}
\caption{Three-photon selective interactions of the Rabi-Stark model. (a) Resonance spectrum of anti-JC-like three-photon process for state $|{\textrm g},5\rangle$. After a time $t=\pi/2\Omega^{(3)}_{5+}$, $\langle a^\dagger a\rangle$ is shown for $\gamma/\omega=-0.4$. The peaks appear shifted from $\delta_{5+}^{(3)}=0$ (dashed line) at $\tilde{\delta}_{5+}^{(3)}=0$ which corresponds to the dark curve in the $xy$ plane representing the lower values of $\log_{10}{|\delta^{(3)}_{5+}|}$. (b) Time evolution of populations $P_{{\textrm g},4}$ (solid) and $P_{{\textrm e},7}$ (dashed) for initial state $|{\textrm g},4\rangle$ (green) and populations $P_{{\textrm g},5}$ and $P_{{\textrm e},8}$ for initial state $|{\textrm g},5\rangle$ (red) for $g/\omega=0.05$ (up) and $g/\omega=0.1$ (down).}
\label{fig:QRSThreePhoton}
\end{figure}
In Fig.~\ref{fig:QRSThreePhoton}(a) we let the system evolve for a time $t=\pi/2\Omega^{(3)}_{5+}$ for a fixed value of $\gamma/\omega=-0.4$ and calculate the average number of photons $\langle a^\dagger a\rangle$ for different values of $\omega_0/\omega$ and couplings $g/\omega$. We do this for the initial state $|{\textrm g},N_0=5\rangle$, near the resonance point $\delta^{(3)}_{5+}=3\omega+\omega_0+13\gamma=0$. We observe that resonances do not appear when $\delta^{(3)}_{5+}=0$, see dashed line on the left, owing to a resonance frequency shift that depends on the value of $g$. To explain this, we go to an interaction picture with respect to Hamiltonian~(\ref{QRSSecondOrder1}); then, the oscillation frequencies in Eq.~(\ref{QRSThirdOrder1}) will be shifted to $\tilde{\delta}^{(3)}_{n+}=\delta^{(3)}_{n+}+\Delta^\textrm{e}_{n+3}-\Delta_n^\textrm{g}$ and $\tilde{\delta}^{(3)}_{n-}=\delta^{(3)}_{n-}+\Delta^\textrm{g}_{n+3}-\Delta_n^\textrm{e}$. In the $xy$ plane of Fig.~\ref{fig:QRSThreePhoton}(a) we make a grayscale colour plot of $\log_{10}{|\tilde{\delta}^{(3)}_{5+}|}$ as a function of $\omega_0$ and $g$ and see that the minimum of $\tilde{\delta}^{(3)}_{5+}$ (dark line) is in very good agreement with the point in which the three-photon resonance appears (the logarithm scale is used to better distinguish the zeros of $\tilde{\delta}^{(3)}_{5+}$).
To show that the three-photon interaction applies only to the preselected subspace, in Fig.~\ref{fig:QRSThreePhoton}(b) we plot the evolution of initial states $|{\textrm g},4\rangle$ and $|{\textrm g},5\rangle$. As expected, the latter exchanges population with the state $|{\textrm e},8\rangle$ while the former remains constant. Besides, for $g/\omega=0.05$ (upper figure), the transition is slower but most of the population is transferred to $|{\textrm e},8\rangle$ at time $t=\pi/2\Omega^{(3)}_{5+}$. For $g/\omega=0.1$ (lower figure) the exchange rate is much faster but the transfer is not so efficient.
In this context, higher-order selective interactions will be produced by the Rabi-Stark model and could in principle be tracked by the calculation of higher-order Hamiltonians. However, being high-order processes, their strength decreases with the order $k$, since $\Omega^{(k)}/\omega\propto (g/\omega)^k$. Then, high-order processes require longer times to be observed, which may exceed the coherence times of the system. In any case, we find it interesting to study the case of higher $k$. Following the same procedure as for calculating Eqs.~(\ref{QRSSecondOrder1}) and (\ref{QRSThirdOrder1}), we conclude that, for even $k$, the $k$-th order Hamiltonian will not produce selective interactions, as they will average out as a consequence of the RWA. For odd $k$, the $k$-th order Hamiltonian predicts a $k$-photon transition of the form
\begin{equation}\label{QRSkthOrder1}
H_I^{(k)}(t)=\sum_{n=0}^{\infty}\big(\Omega^{(k)}_{n+}e^{i\delta^{(k)}_{n+}t}\sigma_++\Omega^{(k)}_{n-}e^{i\delta^{(k)}_{n-}t}\sigma_-\big)|n\!+\!k\rangle\langle n| + \textrm{H.c.},
\end{equation}
where
\begin{equation}\label{QRSdet}
\delta^{(k)}_{n\pm}=\sum_{s=0,2,\ldots}^{k-3}\big(\delta_{n+s}^{\pm}+\delta_{n+s+1}^{\mp}\big)+\delta^{\pm}_{n+k-1}=(k-1)\omega+\delta^{\pm}_{n+(k-1)/2}
\end{equation}
and
\begin{equation}\label{QRSRabiFreq}
\Omega^{(k)}_{n\pm}=\frac{g^k}{(k-1)!!(\omega\mp\gamma)^{\frac{k-1}{2}}}\sqrt{\frac{(n+k)!}{n!}}\prod^{k-2}_{s=1,3...}\frac{1}{\delta^{(s)}_{n\pm}}.
\end{equation}
Using Eqs.~(\ref{QRSdet}) and (\ref{QRSRabiFreq}) and with the help of numerical simulations, it is easy to find $k$-photon processes to validate the effective Hamiltonian (\ref{QRSkthOrder1}). Here, numerical simulations are required as the analytic calculation of the exact resonance frequencies of higher-order processes rapidly becomes challenging. For example, for tracking a JC-type five-photon interaction for $N_0$, we use the condition $\delta^{(5)}_{N_0-}=0$ to retrieve an approximate value for the qubit frequency of $\omega^c_0= 5\omega-\gamma(2N_0+5)$. Then, we calculate the time evolution governed by Hamiltonian~(\ref{QRS1}) for a time $t=\pi/2\Omega^{(5)}_{N_0-}$ and plot $\langle\sigma_+\sigma_-\rangle$ for different values of $\omega_0$ close to $\omega_0^c$ until we find a peak corresponding to the resonant five-photon interaction.
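Eqs.~(\ref{QRSdet}) and (\ref{QRSRabiFreq}) can be cross-checked against the explicit three-photon expressions below Eq.~(\ref{QRSThirdOrder1}). The short script below does this for illustrative parameter values:

```python
import math

omega, omega0, gamma, g = 1.0, -3.0, 0.9, 0.1    # illustrative values

def w0(n):
    return omega0 + gamma * (2 * n + 1)          # omega_n^0

def delta(n, s):
    return omega + s * w0(n)                     # delta_n^+ (s=+1), delta_n^- (s=-1)

def delta_k(n, s, k):
    """k-photon detuning, Eq. (QRSdet)."""
    return (k - 1) * omega + delta(n + (k - 1) // 2, s)

def omega_k(n, s, k):
    """k-photon coupling, Eq. (QRSRabiFreq)."""
    dfact = math.prod(range(k - 1, 0, -2)) if k > 1 else 1    # (k-1)!!
    amp = math.sqrt(math.factorial(n + k) / math.factorial(n))
    prod = math.prod(1.0 / delta_k(n, s, sq) for sq in range(1, k - 1, 2))
    return g**k * amp * prod / (dfact * (omega - s * gamma)**((k - 1) // 2))

# k = 3 must reduce to Omega^(3)_{n,pm} = g^3 sqrt((n+3)!/n!) / (2 delta_n^pm (omega -+ gamma))
n = 5
three = (g**3 * math.sqrt(math.factorial(n + 3) / math.factorial(n))
         / (2 * delta(n, +1) * (omega - gamma)))
# and delta^(3)_{n,pm} must equal the alternating sum delta_{n+2} + delta_{n+1} + delta_n
alt = delta(n + 2, +1) + delta(n + 1, -1) + delta(n, +1)
```

The same alternating-sum identity holds for $k=5$, which is the case explored in Fig.~\ref{fig:QRSFivePhoton}.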
\begin{figure}[t!]
\centering
\includegraphics[width=0.78\linewidth]{figures/Figures_2/FivePhoton.pdf}
\caption{Five-photon selective interactions of the Rabi-Stark model. a) On the left, $\langle\sigma_+\sigma_-\rangle$ is shown after a time $t=\pi/2\Omega^{(5)}_{2-}$, for different values of $\omega_0/\omega$ around $\omega_0^c=5\omega-\gamma(2\times2+5)$ and initial state $|{\textrm g},7\rangle$. Here, $g/\omega=0.1$ and $\gamma/\omega=0.9$. On the right, time evolution of populations $P_{{\textrm e},2}$ and $P_{{\textrm g},7}$ for initial state $|{\textrm g},7\rangle$ and populations $P_{{\textrm e},1}$ and $P_{{\textrm g},6}$ for initial state $|{\textrm g},8\rangle$, for $\omega_0/\omega=-3.227$. b) and c) The same procedure with initial state $|{\textrm g},8\rangle$ and $|{\textrm g},9\rangle$, where the peaks appear for $\omega_0/\omega=-5.072$ and $\omega_0/\omega=-6.918$. }
\label{fig:QRSFivePhoton}
\end{figure}
As an example, in Fig.~\ref{fig:QRSFivePhoton} we show these resonances for $N_0=2$, $3$ and $4$, with $g/\omega=0.1$ and $\gamma/\omega=0.9$. We find resonance peaks for $\omega_0/\omega=-3.227$, $-5.072$ and $-6.918$, which are close to the ones obtained with the approximate formula, $\omega_0^c/\omega=-3.1$, $-4.9$ and $-6.7$. In comparison with the three-photon processes, five-photon transitions are slower, and the population transfer to the preselected state is partial for $g/\omega=0.1$. It is interesting to note that the revival of the initial state as well as the selectivity condition are maintained at the beginning of the USC regime. Note that for $\omega_0/\omega=-3.227$, an exchange between states $|{\textrm g},7\rangle \leftrightarrow |{\textrm e},2\rangle$ occurs while the neighbouring states $ |{\textrm g},6\rangle$ and $|{\textrm e},1\rangle$ are completely out of resonance. In this respect, with larger coupling constants such as $g/\omega\approx0.3$ one would still get signatures of selectivity, but the interaction would no longer be a JC (or anti-JC) type $k$-photon interaction, as it would involve states outside the selected JC (or anti-JC) doublet. In Fig.~\ref{fig:QRSFivePhoton} the population transfer from $|{\textrm g},N_0+5\rangle$ to $|{\textrm e},N_0\rangle$ is already partial, and interestingly, the remaining population goes to states $|{\textrm g},N_0+1\rangle$ and $|{\textrm g},N_0-1\rangle$.
To experimentally verify our predictions regarding the selective $k$-photon interactions of the Rabi-Stark model, in the next section we propose an experimental implementation of the model.
\subsection{Implementation with trapped ions}\label{subsect:QRSTI}
Trapped ions are excellent quantum simulators~\cite{Leibfried03,Blatt12}, with experiments implementing the one-photon QRM~\cite{Pedernales15,Puebla16,Lv18} and proposals for the two-photon QRM~\cite{Felicetti15,Puebla17}. In the following, we propose a route to simulate the Rabi-Stark model using a single trapped ion.
The Hamiltonian of a single trapped ion interacting with co-propagating laser beams labeled with $j$ can be written, in an interaction picture with respect to the free energy Hamiltonian $H_0=\frac{\omega_I}{2}\sigma_z+\nu a^\dagger a$, as~\cite{Leibfried03}
\begin{equation}\label{TIHamil}
H=\sum_{j}\frac{\Omega_j}{2} \sigma^+e^{i\eta(ae^{-i\nu t}+a^\dag e^{i\nu t})}e^{-i(\omega_j-\omega_I)t}e^{i\phi_j} +\textrm {H.c.}.
\end{equation}
Here $\Omega_j$ is the Rabi frequency, $\eta$ and $a^\dagger (a)$ are the LD parameter and the creation (annihilation) operator acting on vibrational phonons, $\nu$ is the trap frequency, $\omega_j-\omega_I$ is the detuning of the laser frequency $\omega_j$ with respect to the carrier frequency $\omega_I$, and $\phi_j$ accounts for the phase of the laser.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{figures/Figures_2/QRSTI.pdf}
\caption{Selective one-photon and three-photon interactions with a trapped ion. a) Time evolution of the mean number of phonons and populations $P_{+,2}$ (solid) and $P_{-,3}$ (dashed) starting from state $|+,2\rangle$ for $g/\omega^{\textrm R}=0.05$, $\gamma/\omega^{\textrm R}=-0.4$ and $\omega_0^{\textrm R}/\omega^{\textrm R}=3$. b) Time evolution of the mean number of phonons and populations $P_{+,3}$ (solid) and $P_{-,0}$ (dashed) starting from state $|+,3\rangle$ for $g/\omega^{\textrm R}=0.3$, $\gamma/\omega^{\textrm R}=-0.1$ and $\omega_0^{\textrm R}/\omega^{\textrm R}=-2.4385$. Solid green lines evolve according to Eq.~(\ref{TIQRS}) while black squares evolve according to Eq.~(\ref{TIHamil}).}
\label{fig:QRSTI}
\end{figure}
As a possible implementation of the Rabi-Stark model we consider two drivings acting near the first red and blue sidebands, and a third one on resonance with the carrier interaction $\omega_{\textrm S}=\omega_I$. The Hamiltonian in the LD regime, $\eta \sqrt{\langle a^\dagger a\rangle}\ll 1$, and after the vibrational RWA, reads
\begin{equation}\label{Scheme2}
H_\textrm{LD}=-ig_ra \sigma^+ e^{-i\delta_rt} -ig_b a^\dagger \sigma^+ e^{-i\delta_bt} - \hat{g}_{\textrm S}\sigma^++\textrm{H.c.}
\end{equation}
where $\omega_{r,b}=\omega_I\mp \nu +\delta_{r,b}$, $g_{r,b}=\eta\Omega_{r,b}/2$, $\phi_{r,b,{\textrm S}}=-\pi$ and $\hat{g}_{\textrm S}=\frac{\Omega_{\textrm S}}{2}(1-\eta^2/2)-\frac{\Omega_{\textrm S}}{2}\eta^2a^\dag a=\frac{\Omega_0}{2}-\gamma^{\textrm R} a^\dagger a$. Dependence of the carrier interaction on the phonon number appears when considering the expansion of $e^{i\eta(a+a^\dag)}$ up to the second order in $\eta$. At this point, if $\delta_r=-\delta_b=\omega^{\textrm R}$, Eq.~(\ref{Scheme2}) can already be mapped to a Rabi-Stark model in a frame rotated by $-\omega^{\textrm R}a^\dagger a$. However, the engineered Hamiltonian cannot explore all regimes of the model, as $\Omega_0$ and $\gamma^{\textrm R}$ cannot be independently tuned, thus restricting the Hamiltonian to regimes where $\gamma^{\textrm R}\ll\Omega_0$. This issue can be solved by moving to an interaction picture with respect to $\frac{\Omega_\textrm{DD}}{2}\sigma_x - \omega^{\textrm R}a^\dagger a$, where $\Omega_\textrm{DD}=-(\Omega_0+\omega_0^{\textrm R})$, and by shifting the detunings by $\delta_{r,b}=\Omega_\textrm{DD}\pm\omega^{\textrm R}$. The resulting Hamiltonian after neglecting terms oscillating at $\Omega_\textrm{DD}$ is
\begin{equation}\label{TIQRS}
H^{II}_\textrm{LD}=\frac{\omega_0^{\textrm R}}{2}\sigma_x+\omega^{\textrm R} a^\dagger a + g^{\textrm R}\sigma_y(a+a^\dagger)+\gamma^{\textrm R} a^\dagger a\sigma_x.
\end{equation}
Here $g^{\textrm R}=(\eta\Omega_r/4)(1-\epsilon_{\textrm S})$ if $\Omega_b=\Omega_r(1-\epsilon_{\textrm S})/(1+\epsilon_{\textrm S})$ with $\epsilon_{\textrm S}=\Omega_{\textrm S}/\nu$. See appendix~\ref{app:QRSTI} for a detailed derivation of Eq.~(\ref{TIQRS}). Notice that Eqs.~(\ref{QRS1}) and (\ref{TIQRS}) are equivalent by simply changing the qubit basis. For the latter, the diagonal basis is given by $\{|+\rangle,|-\rangle\}\otimes|n\rangle$, where $\sigma_x|\pm\rangle=\pm|\pm\rangle$. The parameters of the model are now $\omega^{\textrm R}_0=-(\Omega_0+\Omega_\textrm{DD})$, $\omega^{\textrm R}=(\delta_r-\delta_b)/2$, and $\gamma^{\textrm R}=\eta^2\Omega_{\textrm S}/2$. Regimes where $\gamma^{\textrm R}<0$ can be also reached by taking $\phi_{\textrm S}=0$, however, the frequency of the rotating frame changes to $\Omega_\textrm{DD}=\Omega_0-\omega_0^{\textrm R}$. Moreover, in this case $g^{\textrm R}=(\eta\Omega_r/4)(1+\epsilon_{\textrm S})$ if $\Omega_b=\Omega_r(1+\epsilon_{\textrm S})/(1-\epsilon_{\textrm S})$.
In the following, we verify the feasibility of the proposal by comparing the dynamics generated by the Hamiltonian~(\ref{TIHamil}) with that of the Rabi-Stark model in Eq.~(\ref{TIQRS}). The results are shown in Figs.~\ref{fig:QRSTI}(a) and~\ref{fig:QRSTI}(b) for one-photon and three-photon oscillations respectively. The experimental parameters we use in Fig.~\ref{fig:QRSTI}(a) are $\nu=(2\pi)\times4.98$ MHz for the trapping frequency, $\eta=0.1$ for the LD parameter and $\Omega_{\textrm S}=(2\pi)\times120$ kHz for the carrier driving, leading to a Stark coupling of $|\gamma^{\textrm R}|=(2\pi)\times0.6$ kHz. We consider a Stark coupling of $\gamma^{\textrm R}/\omega^{\textrm R}=-0.4$, a Rabi coupling of $g^{\textrm R}/\omega^{\textrm R}=0.05$, and $\omega_0^{\textrm R}=\omega^{\textrm R}-\gamma^{\textrm R}(2N_0+1)$ with $N_0=2$. To achieve this regime the experimental parameters are $\Omega_r=(2\pi)\times2.94$ kHz, $\Omega_b=(2\pi)\times3.08$ kHz, and $\Omega_\textrm{DD}=(2\pi)\times114.86$ kHz. We observe that with an initial state $|+,2\rangle$, there is an exchange of population with the state $|-,3\rangle$.
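The quoted drive strengths can be cross-checked against the parameter mapping above; the sketch below (a consistency check, using the $\gamma^{\textrm R}<0$ branch of the mapping, for which $g^{\textrm R}=(\eta\Omega_r/4)(1+\epsilon_{\textrm S})$, $\Omega_b=\Omega_r(1+\epsilon_{\textrm S})/(1-\epsilon_{\textrm S})$ and $\Omega_\textrm{DD}=\Omega_0-\omega_0^{\textrm R}$) reproduces $\Omega_r$, $\Omega_b$ and $\Omega_\textrm{DD}$ to the quoted precision:

```python
# All frequencies in kHz, quoted as omega/(2*pi).
nu, eta, Omega_S, N0 = 4980.0, 0.1, 120.0, 2

gamma_R = eta**2 * Omega_S / 2               # |gamma^R| = 0.6 kHz
omega_R = gamma_R / 0.4                      # from |gamma^R / omega^R| = 0.4
g_R = 0.05 * omega_R                         # from g^R / omega^R = 0.05
omega0_R = omega_R + gamma_R * (2 * N0 + 1)  # omega^R - gamma^R (2 N0 + 1), gamma^R < 0

eps_S = Omega_S / nu
Omega_r = 4 * g_R / (eta * (1 + eps_S))       # ~2.93 kHz (quoted: 2.94)
Omega_b = Omega_r * (1 + eps_S) / (1 - eps_S) # ~3.07 kHz (quoted: 3.08)
Omega_0 = Omega_S * (1 - eta**2 / 2)
Omega_DD = Omega_0 - omega0_R                 # ~114.9 kHz (quoted: 114.86)
```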
In Fig.~\ref{fig:QRSTI}(b) we show that selective three-photon oscillations of the Rabi-Stark model can be observed in some milliseconds. Starting from $|+,3\rangle$, we can observe coherent population exchange with state $|-,0\rangle$. Here, the LD parameter is $\eta=0.05$ and the parameters of the model are $\gamma^{\textrm R}/\omega^{\textrm R}=-0.1$, $g^{\textrm R}/\omega^{\textrm R}=0.3$ and $\omega_0^{\textrm R}/\omega^{\textrm R}=-2.4385$, for which we require $\Omega_r=(2\pi)\times35.2$ kHz, $\Omega_b=(2\pi)\times36.9$ kHz, and $\Omega_\textrm{DD}=(2\pi)\times123.5$ kHz. Although in the previous case we have focused on the Rabi-Stark model in the SC and USC regimes, it is worth mentioning that our method is still valid for larger ratios of $g/\omega$. Thus, our method represents a simple and versatile route to simulate the Rabi-Stark model in all important parameter regimes.
In this section, we studied the dynamics of the Rabi-Stark model in the SC and USC regimes and characterised the novel $k$-photon interactions that appear by using time-dependent perturbation theory. Due to the Stark coupling term, these $k$-photon interactions are selective: their resonance frequency depends on the state of the bosonic mode. Finally, with the support of detailed numerical simulations, we proposed an implementation of the Rabi-Stark model with a single trapped ion. The numerical simulations show an excellent agreement between the dynamics of the trapped-ion system and the Rabi-Stark model.
\section{Nonlinear quantum Rabi model in trapped ions}
The study of the nonlinear QRM covered in this section is also a study of the nonlinear behaviour of a single trapped ion when it is far away from the LD regime. While in the past, research beyond the LD regime was mainly focused on the nonlinear JC model~\cite{Vogel95,MatosFilho96_1,MatosFilho96_2,Stevens98}, its implications for laser cooling~\cite{Morigi97,Morigi99,Foster09} and its possible application to the simulation of Franck-Condon physics~\cite{Hu11} have also been investigated. To set the stage for the subsequent analysis, in section~\ref{subsect:JCmodels} we first briefly review the JC model and take it as a reference to show the difference with the nonlinear JC model. The appearance of nonlinear terms in the Hamiltonian suppresses the collapses and revivals of a coherent-state evolution that are typical of the linear case. In section~\ref{subsect:antiJCmodel}, we investigate how the nonlinear anti-JC model, which appears as the counterpart of the nonlinear JC model, can be combined with controlled depolarising noise to generate arbitrary $n$-phonon Fock states. Moreover, the latter can in principle be done without a precise control of pulse duration or shape, and without the requirement of a previous high-fidelity preparation of the motional ground state. Finally, in section~\ref{subsect:NQRMTI}, we propose the quantum simulation of the nonlinear quantum Rabi model by simultaneous off-resonant nonlinear JC and anti-JC interactions.
\subsection{JC models in trapped ions}\label{subsect:JCmodels}
The Hamiltonian describing a laser-cooled two-level ion trapped in a harmonic potential and driven by a monochromatic laser field can be expressed as
\begin{equation}\label{IonHamil}
H=\frac{\omega_I}{2}\sigma_z+\nu a^{\dag}a+\frac{\Omega}{2}\sigma^x[e^{i(\eta(a+a^\dag)-\omega_\textrm{ L} t+\phi)}+\textrm{H.c.}],
\end{equation}
where $\omega_I$ is the two-level transition frequency, $\sigma_z,\sigma^x$ are Pauli matrices associated with this two-level system, $\Omega$ is the Rabi frequency, $\omega_\textrm{ L}$ is the driving laser frequency, and $\phi$ is the phase of the laser field.
In the LD regime, moving to an interaction picture with respect to $H_0=\frac{\omega_I}{2}\sigma_z+\nu a^{\dag}a$, and after the application of the so-called optical RWA, the Hamiltonian in Eq.~(\ref{IonHamil}) can be written as~\cite{Leibfried03}
\begin{equation}\label{NQRMLDregime}
H_\textrm{int}^\textrm{LD}=\frac{\Omega}{2}\sigma^+[1+i\eta(ae^{-i\nu t}+a^\dag e^{i\nu t})]e^{i(\phi-\delta t)}+\textrm{H.c.},
\end{equation}
where $\delta=\omega_\textrm{ L}-\omega_I$ is the laser detuning and the condition $\eta \ll1$ allows us to keep only the zeroth- and first-order terms in the expansion of $\exp{[i\eta(a+a^\dag)]}$. When $\delta=-\nu$ and $\Omega\ll \nu$, after applying the vibrational RWA, the dynamics of such a system is described by the JC Hamiltonian, $H_\textrm{JC}=i g_r (\sigma^+ a - \sigma^-a^{\dag})$, where $g_r=\eta\Omega/2$ and $\phi=0$. This JC model is analytically solvable and generates population exchange between states $|\textrm{g},n\rangle \leftrightarrow |\textrm{e},n\!-\!1\rangle$ with rate $\Omega_{n,n-1}=\eta\Omega\sqrt{n}$. On the other hand, when the detuning is chosen to be $\delta=\nu$, the effective model is instead described by the anti-JC model $H_\textrm{aJC}=i g_b (\sigma^+ a^\dag - \sigma^-a)$, where $g_b=\eta\Omega/2$, which generates population transfer between states $|\textrm{g},n\rangle \leftrightarrow |\textrm{e},n\!+\!1\rangle$ with rate $\Omega_{n,n+1}=\eta\Omega\sqrt{n+1}$.
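The $\sqrt{n}$ scaling of $\Omega_{n,n-1}$ can be read off the dressed-state splitting of the resonant JC Hamiltonian: each doublet $\{|\textrm{g},n\rangle,|\textrm{e},n\!-\!1\rangle\}$ is split by $2g_r\sqrt{n}=\eta\Omega\sqrt{n}$. A minimal numerical sketch (truncated Fock space):

```python
import numpy as np

def jc_hamiltonian(g, nmax):
    """Resonant JC interaction H_JC = i g (sigma^+ a - sigma^- a^dag),
    with |e> = (1, 0), |g> = (0, 1) and nmax Fock states."""
    a = np.diag(np.sqrt(np.arange(1, nmax)), 1)     # annihilation operator
    sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma^+
    return 1j * g * (np.kron(sp, a) - np.kron(sp.T, a.T))

g, nmax = 0.5, 20
evals = np.sort(np.linalg.eigvalsh(jc_hamiltonian(g, nmax)))
# Nonzero eigenvalues come in pairs +/- g*sqrt(n): each doublet is split by
# 2 g sqrt(n), i.e. the population-exchange rate Omega_{n,n-1} = eta*Omega*sqrt(n).
```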
\begin{figure}[t!]
\centering
{\includegraphics[width=0.9 \textwidth]{figures/Figures_2/NLZeros.pdf}}
\caption{(a) Logarithm of the absolute value of the operator $f_1(\hat{n})$ evaluated for different Fock states $|n\rangle$ and LD parameters $\eta$. Dark (blue) regions represent cases where $f_1(\hat{n})|n\rangle\approx 0$. (b) Nonlinear function $f_1(n)$ for a fixed value of the LD parameter $\eta=0.5$ (oscillating blue curve); the horizontal orange line marks zero.\label{fig:NLZeros}}
\end{figure}
When the trapped-ion system is beyond the LD regime, the simplification of the exponential term described above is not justified and Eq.~(\ref{NQRMLDregime}) reads
\begin{eqnarray}\label{BLDregime}
H_\textrm{int}=\frac{\Omega}{2}\sigma^+ e^{i\eta(a^\dag e^{i\nu t}+a e^{-i\nu t})-i(\delta t-\phi)}+\textrm{H.c.}\label{IntHam}.
\end{eqnarray}
When $\delta=-\nu$ and $\Omega \ll \nu$, after applying the vibrational RWA, the effective Hamiltonian describing the system is given by the nonlinear JC model~\cite{Vogel95}, which can be expressed as
\begin{eqnarray}
H_\textrm{nJC}=ig_r[\sigma^+ f_1(\hat{n}) a - \sigma^- a^\dag f_1(\hat{n})],
\end{eqnarray}
where the nonlinear function $f_1$~\cite{Vogel95} is given by
\begin{equation}\label{NLfunc}
f_1(\hat{n})=e^{-\eta^2/2}\sum_{l=0}^{\infty}\frac{(-\eta^2)^l}{l!(l+1)!}a^{\dag l} a^l,
\end{equation}
with $a^{\dag l} a^l=\hat{n}!/(\hat{n}-l)!$. The dynamics of this model can also be solved analytically and, like the linear JC model, leads to population exchange between states $|\textrm{g},n\rangle \leftrightarrow |\textrm{e},n\!-\!1\rangle$. However, in this case the Rabi frequencies are $\tilde{\Omega}_{n,n-1}= |f_1(n-1)|\Omega_{n,n-1}=\eta\Omega\sqrt{n} |f_1(n-1)|$, where $f_1(n)$ corresponds to the value of the diagonal operator $f_1$ evaluated on the Fock state $|n\rangle$, i.e. $f_1(n)\equiv\langle f_1(\hat{n})\rangle_n$.
If the detuning in Eq.~(\ref{BLDregime}) is chosen to be $\delta=\nu$, and $\Omega \ll \nu$, then the application of the vibrational RWA yields the nonlinear anti-JC model,
\begin{eqnarray}
H_\textrm{naJC}=ig_b[\sigma^+a^\dag f_1(\hat{n}) - \sigma^- f_1(\hat{n})a ],
\end{eqnarray}
which, like the linear anti-JC model, generates population exchange between states $|\textrm{g},n\rangle \leftrightarrow |\textrm{e},n\!+\!1\rangle$ with rate $\tilde{\Omega}_{n,n+1}=|f_1(n)|\Omega_{n,n+1}=\eta\Omega\sqrt{n+1} |f_1(n)|$. The nonlinear function $f_1$ depends on the LD parameter $\eta$ and on the Fock state $| n \rangle$ on which it acts. The LD regime is recovered when $\eta\sqrt{\langle(a+a^\dagger)^2\rangle}\ll 1$. In this regime, $|f_1(n)|\approx1$, and thus the dynamics corresponds to that of the linear models.
Beyond the LD regime, the nonlinear function $f_1$, which has an oscillatory behaviour both in $n\in\mathbb{N}$ and $\eta\in\mathbb{R}$, needs to be taken into account. In Fig.~\ref{fig:NLZeros}(a), we plot the logarithm of the absolute value of $f_1(n,\eta)$ for different values of $n$ and $\eta$, where the dark regions represent lower values of $\log{(|f_1(n,\eta)|)}$, i.e. values for which $f_1\approx 0$. This oscillatory behaviour can also be seen in Fig.~\ref{fig:NLZeros}(b), where we plot the value of $f_1$ as a function of the Fock state number $n$ for $\eta=0.5$. For this specific case, we can see that the function is close to zero around $n=14$ and $n=48$, meaning that for $\eta=0.5$, the rate of population exchange between states $|\textrm{g},15\rangle \leftrightarrow |\textrm{e},14\rangle$ and $|\textrm{g},49\rangle \leftrightarrow |\textrm{e},48\rangle$ will vanish for the nonlinear JC model. The same will happen to the exchange rate between states $|\textrm{g},14\rangle \leftrightarrow |\textrm{e},15\rangle$ and $|\textrm{g},48\rangle \leftrightarrow |\textrm{e},49\rangle$ for the nonlinear anti-JC model.
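The zeros of $f_1$ quoted above are easy to reproduce. The sketch below evaluates Eq.~(\ref{NLfunc}) with a numerically stable term recurrence, $t_{l+1}=t_l\,(-\eta^2)(n-l)/[(l+1)(l+2)]$ with $t_0=1$, which avoids computing large factorials explicitly:

```python
from math import exp

def f1(n, eta):
    """f1(n) = e^{-eta^2/2} sum_l (-eta^2)^l / (l! (l+1)!) * n!/(n-l)!,
    accumulated term by term via the stable recurrence above."""
    x, term, total = eta**2, 1.0, 1.0
    for l in range(n):
        term *= -x * (n - l) / ((l + 1) * (l + 2))
        total += term
    return exp(-x / 2) * total

# For eta = 0.5, f1 changes sign near n = 14 and again near n = 48,
# and eta = 0.4518 places a zero at n = 17 (used later for Fock-state generation).
```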
\begin{figure}[t!]
\centering
{\includegraphics[width=0.9 \textwidth]{figures/Figures_2/JaynesCollapse.pdf}}
\caption{Average value of $\sigma_z$ operator versus time for a coherent initial state $|\alpha=\sqrt{30}\rangle$ after (a) linear JC and (b) nonlinear JC evolution, both with the same coupling strength $g_r$ and $\eta=0.5$ for the nonlinear case. As shown in (a), there exists an approximate collapse and subsequent revival in the JC model dynamics, while for the nonlinear JC model this is not the case. \label{JaynesCollapse}}
\end{figure}
We observe approximate collapses and revivals for an initial coherent state\footnote{A coherent state is defined as $|\alpha\rangle=e^{-|\alpha|^2/2}\sum_{n=0}^\infty\frac{\alpha^n}{\sqrt{n!}}|n\rangle$.} with an average number of photons $|\alpha|^2=30$ by evolving with the JC model, as shown in Ref.~\cite{Gerry04}; see Fig.~\ref{JaynesCollapse}(a). Here, we plot $\langle\sigma^z(t)\rangle=\langle \psi(t)|\sigma^z|\psi(t)\rangle$ for a state that evolves according to the JC model. Comparing with the same case for the nonlinear JC model with $\eta=0.5$, depicted in Fig.~\ref{JaynesCollapse}(b), we observe that in the latter the collapses and revivals vanish and the dynamics is more irregular. This seems natural given that the phenomenon of revival takes place whenever the most significant components of the quantum state, after some evolution time, oscillate in phase again, which is less likely if the dynamics is nonlinear. Notice that we let the nonlinear JC model evolve for a longer time, since the nonlinear function $f_1$ effectively slows down the evolution.
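The collapse and revival of the linear case can be reproduced from the exact JC solution: assuming the atom starts in $|\textrm{e}\rangle$ (our assumption for this sketch) over the coherent field, $\langle\sigma^z(t)\rangle=\sum_n p_n\cos(2g_r\sqrt{n+1}\,t)$ with Poissonian weights $p_n$, and the first revival is expected near $g_rt=2\pi\sqrt{\bar n}\approx 34$ for $\bar n=30$:

```python
import math

def sigma_z(gt, nbar=30.0, nmax=150):
    """Exact <sigma_z(t)> of the resonant JC model, atom initially excited,
    field in a coherent state with mean photon number nbar."""
    p, total = math.exp(-nbar), 0.0
    for n in range(nmax):
        total += p * math.cos(2.0 * math.sqrt(n + 1.0) * gt)
        p *= nbar / (n + 1)          # Poisson recurrence p_{n+1} = p_n * nbar/(n+1)
    return total

collapse = abs(sigma_z(10.0))        # well inside the collapse region: ~0
# scan for the revival near g t = 2*pi*sqrt(nbar) ~ 34.4
revival = max(abs(sigma_z(30.0 + 0.1 * k)) for k in range(100))
```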
\subsection{Fock-state generation with a dissipative nonlinear anti-JC model}\label{subsect:antiJCmodel}
In this section we study the possibility of using the dynamics of the nonlinear anti-JC model introduced in the previous section, together with depolarising noise, to generate high-number Fock states in a dissipative manner. In particular, the depolarising noise that we consider corresponds to the spontaneous relaxation of the internal two-level system of the ion. Such a dissipative process, combined with the dynamics of the JC model in the LD regime (linear JC model), is routinely exploited in trapped-ion setups for the implementation of sideband cooling. It is worth mentioning that the effect of nonlinearities on sideband cooling protocols, which arise outside the LD regime, has also been studied~\cite{Morigi97,Morigi99}.
\begin{figure}[t!]
\centering
{\includegraphics[width=1 \linewidth]{figures/Figures_2/FockStatePrep.pdf}}
\caption{(a) The nonlinear function $f_1$ evaluated at different Fock states $n$, for the case of $\eta=0.4518$ (decreasing blue curve). Zero value (horizontal orange line). For this value of the LD parameter, $f_1|17\rangle=0$. (b) Phonon statistics of the initial thermal state with $\langle n\rangle=1$ (c) Time evolution of the average value of the number operator $\hat{n}$ starting from the state in (b) and following the evolution for the preparation of Fock state $| 17 \rangle$, that is during a nonlinear anti-JC model with spontaneous decay of the two-level system. (d) Phonon statistics at the end of the protocol, $t=100\times2\pi/g_b$, with all the population concentrated in Fock state $|17\rangle$. \label{fig:FockStatePrep}}
\end{figure}
Our method works as follows: we start in the ground state of both the motional and the internal degrees of freedom $|\textrm{g},0\rangle$ (as we will show later, our protocol also works when we start outside the motional ground state, as long as the population of Fock states higher than the target Fock state is negligible). Acting with the nonlinear anti-JC Hamiltonian we induce a population transfer from state $|\textrm{g},0\rangle$ to state $|\textrm{e},1\rangle$, while at the same time the depolarising noise transfers population from $|\textrm{e}, 1\rangle$ to $|\textrm{g}, 1\rangle$. The simultaneous action of both processes ``heats" the motional state, progressively transferring the population of the system from one Fock state to the next. Eventually, all the population accumulates in a state $|\textrm{g},n\rangle$ with $f_1(n)=0$, where the propagation of population through the chain of Fock states is blocked, as the transfer rate between states $|\textrm{g},n\rangle$ and $|\textrm{e},n+1\rangle$ vanishes, $\tilde{\Omega}_{n,n+1}=0$. We point out that the condition $f_1(n)=0$ can always be achieved by tuning the LD parameter to a suitable value, i.e. for every Fock state $|n\rangle$ with $n>0$ there exists a value of the LD parameter $\eta$ for which $f_1(n,\eta)=0$. As an example, we choose the LD parameter $\eta=0.4518$, for which $f_1(17)=0$, and simulate our protocol using the master equation
\begin{eqnarray}\label{LabMasterEq}
\dot{\rho}=-i[H_\textrm{naJC},\rho]+\frac{\Gamma}{2} (2\sigma^-\rho \sigma^+ - \sigma^+ \sigma^-\rho-\rho \sigma^+ \sigma^-),
\end{eqnarray}
where $\Gamma=2g_{b}$ is the decay rate of the internal state.
In Fig.~\ref{fig:FockStatePrep} we numerically show how our protocol is able to generate the motional Fock state $ |17 \rangle$, starting from a thermal state with $\langle n\rangle=1$. In other words, one can obtain large final Fock states starting from an imperfectly cooled motional state, by a suitable tuning of the LD parameter. As an advantage of our method compared to previous approaches~\cite{Meekhof96}, we do not need a fine control over the Rabi frequencies or pulse durations, given that the whole wave-function, for an arbitrary initial state with motional components smaller than $n$, will converge to the target Fock state $|n\rangle$. We want to point out that this protocol relies only on the precision to which the LD parameter can be set, which in turn depends on the precision to which the wave number $k$ and the trap frequency $\nu$ can be controlled. These parameters enjoy a great stability in trapped-ion setups~\cite{Johnson16}, and therefore we deem the generation of high-number Fock states as a promising application of the nonlinear anti-JC model dynamics.
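The working of the protocol can be checked on a scaled-down example. The sketch below integrates the master equation~(\ref{LabMasterEq}) with a fixed-step RK4 integrator for an illustrative low barrier, $\eta=\sqrt{3-\sqrt{3}}\approx1.126$, for which $f_1(2)=0$ exactly (the choice $n=2$, unlike the $n=17$ of the figure, keeps the integration short); population starting in $|\textrm{g},0\rangle$ should end up trapped in the dark state $|\textrm{g},2\rangle$:

```python
import numpy as np

def f1_vals(nmax, eta):
    """f1(n) from the nonlinear-function series, via the stable term recurrence
    t_{l+1} = t_l * (-eta^2)(n-l)/((l+1)(l+2))."""
    x, out = eta**2, []
    for n in range(nmax):
        term = tot = 1.0
        for l in range(n):
            term *= -x * (n - l) / ((l + 1) * (l + 2))
            tot += term
        out.append(np.exp(-x / 2) * tot)
    return np.array(out)

nmax, gb = 8, 1.0
eta = np.sqrt(3.0 - np.sqrt(3.0))              # illustrative: f1(2) = 0
F = np.diag(f1_vals(nmax, eta))
a = np.diag(np.sqrt(np.arange(1, nmax)), 1)
sp = np.array([[0, 1], [0, 0]], dtype=complex)              # sigma^+, |e> = (1,0)
H = 1j * gb * (np.kron(sp, a.T @ F) - np.kron(sp.T, F @ a)) # nonlinear anti-JC
L = np.kron(sp.T, np.eye(nmax))                             # qubit decay sigma^-
Gamma = 2.0 * gb
Ld = L.conj().T
LdL = Ld @ L

def drho(r):
    """Right-hand side of the Lindblad master equation."""
    return (-1j * (H @ r - r @ H)
            + Gamma / 2 * (2 * L @ r @ Ld - LdL @ r - r @ LdL))

rho = np.zeros((2 * nmax, 2 * nmax), dtype=complex)
rho[nmax, nmax] = 1.0                          # start in |g, 0>
dt = 0.05
for _ in range(8000):                          # evolve to t = 400 / gb
    k1 = drho(rho); k2 = drho(rho + dt / 2 * k1)
    k3 = drho(rho + dt / 2 * k2); k4 = drho(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

pop = np.real(np.diag(rho))
p_g2 = pop[nmax + 2]                           # population of the dark state |g, 2>
```

At the end of the integration essentially all the population sits in $|\textrm{g},2\rangle$, mirroring the behaviour of Fig.~\ref{fig:FockStatePrep} for the $n=17$ target.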
\subsection{Nonlinear quantum Rabi model}\label{subsect:NQRMTI}
Here we propose to implement the nonlinear quantum Rabi model (NQRM) in all its parameter regimes via the use of the Hamiltonian in Eq.~(\ref{IntHam}). We consider off-resonant first-order red and blue sideband drivings with the same coupling $\Omega$ and corresponding detunings $\delta_r$, $\delta_b$. The interaction Hamiltonian after the optical RWA reads~\cite{Leibfried03,Pedernales15},
\begin{eqnarray}
H_\textrm{int} = \sum\limits_{j=r,b}\frac{\Omega_j}{2}\sigma^+e^{i\eta(a^{\dag}e^{i \nu t}+a e^{-i \nu t})}e^{-i(\delta_j t-\phi_j)}+\textrm{H.c.},
\end{eqnarray}
where $\omega_r=\omega_I-\nu+\delta_r$ and $\omega_b=\omega_I+\nu+\delta_b$, with $\delta_r,\delta_b\ll \nu \ll \omega_I$ and $\Omega_r=\Omega_b \ll \nu$. We consider the system beyond the LD regime and set the laser-field phases to $\phi_{r,b}=0$. If we invoke the vibrational RWA, i.e. neglect terms that rotate with frequencies in the order of $\nu$, the remaining terms read
\begin{equation}
H_\textrm{int}=ig^\textrm{ R}\sigma^+\big(f_1ae^{-i \delta_r t}+a^{\dag}f_1e^{-i \delta_b t}\big)+\textrm{H.c.},
\end{equation}
where $g^\textrm{ R}=\eta\Omega_r/2$ and $f_1\equiv f_1(\hat{n})$ was introduced in Eq.~(\ref{NLfunc}). The latter corresponds to an interaction-picture Hamiltonian of the NQRM with respect to the free Hamiltonian $H_0=-\frac{1}{4}(\delta_b+\delta_r)\sigma_z +\frac{1}{2}(\delta_r-\delta_b)a^\dag a$. Therefore, undoing the interaction-picture transformation, we have
\begin{equation}\label{NQRM}
H_\textrm{nQRM}=\frac{\omega_0^{\textrm R}}{2}\sigma_z+\omega^{\textrm R} a^{\dag}a+i g^\textrm{ R} (\sigma^+ - \sigma^-)(f_1a+a^{\dag}f_1),
\end{equation}
where $\omega_0^{\textrm R}=-\frac{1}{2}(\delta_r+\delta_b)$ and $\omega^{\textrm R}=\frac{1}{2}(\delta_r-\delta_b)$. Equation~(\ref{NQRM}) represents the general form of the NQRM, where $\omega_0^{\textrm R}$ is the level splitting of the simulated two-level system, $\omega^{\textrm R}$ is the frequency of the simulated bosonic mode and $g^\textrm{ R}$ is the coupling strength between them, which in turn is modulated by the nonlinear function $f_1(\hat{n},\eta)$. The different regimes of the NQRM are characterised by the relation among these four parameters. First, in the LD regime, i.e. $\eta\sqrt{\langle (a+a^{\dag})^2 \rangle}\ll 1$, Eq.~(\ref{NQRM}) can be approximated by the linear QRM~\cite{Pedernales15}. Beyond the LD regime, in a parameter regime where $|\omega^{\textrm R}-\omega_0^{\textrm R}|\ll g^\textrm{ R}\ll|\omega^{\textrm R}+\omega_0^{\textrm R}|$, the RWA can be applied. This would imply neglecting terms that rotate at frequency $\omega^{\textrm R}+\omega_0^{\textrm R}$ in an interaction picture with respect to $H_0$, leading to the nonlinear JC model studied in section~\ref{subsect:JCmodels}. On the other hand, the nonlinear anti-JC model would be recovered in a regime where $|\omega^{\textrm R}+\omega_0^{\textrm R}|\ll g^\textrm{ R}\ll|\omega^{\textrm R}-\omega_0^{\textrm R}|$. It is worth mentioning that the latter is only possible if the frequency of the two-level system and the frequency of the mode have opposite signs. The USC and DSC regimes are defined as $0.1\lesssim g^\textrm{ R}/\omega^{\textrm R} \lesssim 1$ and $g^\textrm{ R}/\omega^{\textrm R}\gtrsim1$ respectively, and in these regimes the RWA does not hold anymore.
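The division of the Hilbert space by the barrier can be verified directly by exact diagonalization of Eq.~(\ref{NQRM}). The sketch below uses an illustrative barrier at $n=2$ ($\eta=\sqrt{3-\sqrt{3}}$, for which $f_1(2)=0$, rather than the $n=7$ used in the figures) and checks that unitary evolution from $|\textrm{g},0\rangle$ in the DSC regime never populates Fock states above the barrier:

```python
import numpy as np

nmax = 12
eta = np.sqrt(3.0 - np.sqrt(3.0))     # illustrative choice: f1(2) = 0, barrier at n = 2
x = eta**2
f1 = np.empty(nmax)
for n in range(nmax):
    term = tot = 1.0
    for l in range(n):                # stable term recurrence for the f1 series
        term *= -x * (n - l) / ((l + 1) * (l + 2))
        tot += term
    f1[n] = np.exp(-x / 2) * tot

F = np.diag(f1)
a = np.diag(np.sqrt(np.arange(1, nmax)), 1)
sp = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma^+, |e> = (1, 0)
g, w, w0 = 4.0, 1.0, 0.0                          # DSC regime, g/omega = 4, omega0 = 0
H = (w0 / 2) * np.kron(np.diag([1.0, -1.0]), np.eye(nmax)) \
    + w * np.kron(np.eye(2), np.diag(np.arange(nmax))) \
    + 1j * g * np.kron(sp - sp.T, F @ a + a.T @ F)   # i g (s+ - s-)(f1 a + a^dag f1)

evals, V = np.linalg.eigh(H)
psi0 = np.zeros(2 * nmax, dtype=complex)
psi0[nmax] = 1.0                                  # initial state |g, 0>

max_leak = max_spread = 0.0
for t in np.linspace(0.0, 20.0, 101):
    psi = V @ (np.exp(-1j * evals * t) * (V.conj().T @ psi0))
    pop = np.abs(psi) ** 2
    max_leak = max(max_leak, pop[3:nmax].sum() + pop[nmax + 3:].sum())
    max_spread = max(max_spread, 1.0 - pop[nmax])
# max_leak stays at numerical zero (no population above the barrier n = 2),
# while max_spread is of order one (nontrivial dynamics below the barrier).
```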
\begin{figure}[]
{\includegraphics[width=1 \linewidth]{figures/Figures_2/NQRMEvolution.png}}
\caption{Fidelity with respect to the initial state $P(t)=|\langle\psi_0|\psi(t)\rangle|^2$ versus time is shown in the upper figures. In the lower figures, the phonon statistics are shown at different times, where $\tilde{t}=g^\textrm{ R}t/2\pi$. In (a), $|0,\textrm{g}\rangle$ is chosen as the initial state, while in (b) and (c) the initial state is $|\alpha\!=\!1,\textrm{g}\rangle$. In (a), the evolution occurs under the NQRM with LD parameter $\eta=0.67898$, where $f_1|7\rangle=0$, $g^\textrm{ R}/\omega^\textrm{R}=4$ and $\omega_0^\textrm{R}=0$. Observing the phonon statistics we see how Fock states with $n>7$ never get populated. In (b), the state evolves under the linear QRM. In (c), the state evolves under the NQRM with LD parameter $\eta=0.57838$, where $f_1|10\rangle=0$, $g^\textrm{ R}/\omega^\textrm{R}=3.7$ and $\omega_0^\textrm{R}=0$. \label{fig:NQRMEvolution}}
\end{figure}
As an example, here we investigate the NQRM in the DSC regime with initial Fock state $|0, \textrm{g}\rangle$, where $|0\rangle$ is the ground state of the bosonic mode, and $|\textrm{g} \rangle$ stands for the ground state of the two-level system. In Fig.~\ref{fig:NQRMEvolution}(a), we study the case for $\eta=0.67898$, where $f_1|7\rangle=0$, $g^\textrm{ R}/\omega^\textrm{R}=4$ and $\omega_0^\textrm{R}=0$. More specifically, a quantum simulation of the model in this regime can be achieved with the following detunings and Rabi frequency: $\delta_r=2\pi\times11.31$ kHz, $\delta_b=-2\pi\times11.31$ kHz, $g^\textrm{ R}=2\pi\times45.24$ kHz and $\Omega_r=2\pi\times 133.26$ kHz. In Ref.~\cite{Casanova10}, it was shown that the linear QRM exhibits collapses and revivals and a round trip of the phonon-number wave-packet along the chain of Fock states when in the DSC regime. Here, we observe that in the nonlinear case, Fig.~\ref{fig:NQRMEvolution}(a), collapses and revivals do not present the same clear structure, having a more irregular evolution. Most interestingly, the system dynamics never surpasses the Fock state $|n\rangle$ for which $f_1(n)=0$. Regarding the simulated regime of the nonlinear QRM, we point out that the nonlinear term also contributes to the coupling strength. Therefore, to keep the NQRM in the DSC regime, the ratio $g^\textrm{ R}/\omega^\textrm{R}$ should be larger than that for the linear QRM, since $f_1(n)<1$ for all $n$. Summarising, our result illustrates that the Hilbert space is effectively divided into two subspaces by the NQRM, namely those spanned by the Fock states below and above Fock state $| n \rangle$. We denote the Fock number $n$, where $f_1|n\rangle=0$, as ``the barrier'' of the NQRM.
To benchmark the effect of the barrier, we also provide simulations starting from an initial coherent state with $\alpha=1$, whose average phonon number is $\langle n \rangle=|\alpha|^2=1$, and make the comparison between the QRM and the NQRM in the DSC regime. For the parameter regime $g^\textrm{ R}/\omega^\textrm{R}=2$ and $\omega_0^\textrm{R}=0$, the fidelity with respect to the initial coherent state in the linear QRM undergoes periodic collapses and full revivals, as can be seen in Fig.~\ref{fig:NQRMEvolution}(b). In the lower figures of Fig.~\ref{fig:NQRMEvolution}(b), we observe a round trip of the phonon-number wave packet, similar to what was shown in Ref.~\cite{Casanova10} for the linear QRM starting from a Fock state. The NQRM, on the other hand, has an associated dynamics that is aperiodic and more irregular, as shown in Fig.~\ref{fig:NQRMEvolution}(c), and never crosses the motional barrier produced by the corresponding $f_1(n)=0$. This suggests that the NQRM could be employed as a motional filter, determined by the location of the barrier with respect to the initial state distribution. Here, by filter we mean that the population of Fock states above a given threshold can be suppressed. For the simulation we choose the LD parameter $\eta=0.57838$, for which $f_1|10\rangle=0$, far from the centre of the distribution of the initial coherent state as well as from most of its width. The simulated parameter regime corresponds to the DSC regime with $g^\textrm{ R}/\omega^\textrm{R}=3.7$ and $\omega_0^\textrm{R}=0$. This case could also be simulated with trapped ions, with detunings $\delta_r=2\pi\times11.31$ kHz and $\delta_b=-2\pi\times11.31$ kHz, and a Rabi frequency $\Omega_r=2\pi\times 133.26$ kHz. As for the case of the initial Fock state $|0,\textrm{g}\rangle$, the evolution of the NQRM in the coherent-state case, depicted in Fig.~\ref{fig:NQRMEvolution}(c), never exceeds the barrier.
In summary, in this section we have proposed the implementation of the nonlinear QRM in arbitrary coupling regimes with trapped-ion quantum simulators. The nonlinear term that appears in our model is characteristic of the region beyond the LD regime, and causes a blockade of motional propagation at $|n\rangle$ whenever $f_1(\hat{n})|n\rangle=0$. In order to compare our model with the linear QRM, we have plotted the evolution of the population of the internal degrees of freedom of the ion under the linear JC and nonlinear JC models and observed that for the latter the collapses and revivals disappear. Also, we have proposed a method for generating large Fock states in a dissipative manner, making use of the nonlinear anti-JC model and the spontaneous decay of the two-level system. Finally, we have studied the dynamics of the linear and nonlinear full QRM in the DSC regime and noticed that the nonlinear case can act as a motional filter.
\chapter*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
During the development of this thesis I have been educated in the exciting field of quantum technologies, and been able to make a scientific contribution to it. Along the way, I have met great people and visited amazing cities like Boston, Munich or Shanghai. All this would not have been possible without QUTIS and its leader, Prof.~Enrique Solano, who from the very beginning has supported and guided me. His eagerness to avoid comfort zones and keep evolving, his determination to give his best every day, and his sporadic musical recommendations have always been inspiring. I consider him a true brilliant-minded rebel. If this thesis has another father, it is Dr.~Jorge Casanova. Jorge was leaving when I entered QUTIS; fortunately, he taught me all I needed to know in order to carry out my bachelor thesis. He put me on the right track, and occasionally came back from Ulm to give me a good push. For that I am deeply grateful. I still have to figure out where those ideas come from!
In QUTIS I have always felt valued and appreciated, from the more senior members to the younger ones. I had the opportunity to work under the supervision of Prof.~Lucas Lamata and Prof.~Enrique Rico, and I would like to thank them for their work and guidance. I would also like to thank Prof.~I\~nigo Egusquiza, who, lucky for us, has always been around whenever we needed an oracle. Of course, the younger group mates have also contributed to the positive work environment, and all that I have learned from long conversations with Dr.~Laura Garc\'ia-\'Alvarez, Dr.~Urtzi Las Heras and Dr.~Unai \'Alvarez-Rodr\'iguez I subconsciously keep in my mind. I want to mention my office mates and travel companions Adrian Parra and Dr.~Julen S.~Pedernales, who have suffered and enjoyed my concerns and reflections on a daily basis. Thank you guys.
\begin{CJK}{UTF8}{gbsn}
Throughout this thesis, I had the opportunity to visit other top-level research groups that received me warmly. I want to thank Prof.~Xi Chen for inviting me, more than once, to Shanghai University. In such a different country, the unprecedented hospitality that I received was key to making my trip to China one of the best experiences of my life. From Shanghai, I want to especially thank Dr.~Xiao-Hang Cheng, Dr.~Lei Cong and Lijuan Dong, my closest collaborators and good friends. 谢谢. From the beginning, I have maintained a long-standing collaboration with the group of Prof.~Daniel Rodr\'iguez in Granada. I want to thank Prof.~Rodr\'iguez, Manu, and Fran for that, and for giving me the opportunity to see your lab evolving, while always answering my questions patiently. Much\'isimas gracias a todos. I wish to see the 7 Tesla Penning trap at its best soon! For this thesis, the collaboration with the group of Dr.~Andrea Alberti and Prof.~Dieter Meschede at the University of Bonn was very important. I want to thank Dr.~Alberti for bringing the topic of boson sampling to my table, for hosting me in his group and for helping me with the math. I also want to thank Dr.~Carsten Robens for his hospitality (both in Bonn and Boston), his guidance and for being always supportive. Danke sch\"on. In addition, I would like to thank Prof.~Kihwan Kim for hosting me at Tsinghua University in Beijing, Prof.~Gerhard Kirchmair for inviting me to the IQOQI in Innsbruck, and Prof.~Tobias Sch\"atz for receiving me at the University of Freiburg, as well as their group members, who made me enjoy the stays.
\end{CJK}
Last but not least, I want to thank GNT for turning lunchtime into an exciting religious ceremony. I declare myself a devoted GNTer with a single golden rule: 12:50 sharp. I am thankful to Dr.~Pablo Jimeno for the \LaTeX\ template and to Dr.~Sof\'ia Mart\'inez-Garaot for the style of the references. Finally, I want to thank the natural mechanism that has made me a terrible soccer player, as this has allowed me to focus on important things such as mathematics, music, and cinema, and, of course, my deepest appreciation goes to my friends and family for all the unconditional support.
\chapter*{List of symbols}
\addcontentsline{toc}{section}{List of symbols}
\vspace{0.1cm}
\textbf{ \ \ \ Physical constants:}
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$\hbar$ & Reduced Planck constant $1.054571817\times10^{-34}$ m$^2$ kg / s \\
$e$ & Electric charge of the electron $1.602176634\times10^{-19}$ C \\
$c$ & Speed of light $299792458$ m/s \\
$\varepsilon_0$ & Vacuum permittivity $8.8541878128(13) \times10^{-12}$ F/m \\
$\mu_0$ & Vacuum permeability $4\pi\times10^{-7}$ T m/A
\end{tabular}
\vspace{0.5cm}
\textbf{ Chapter 1: \nameref{chapter:chapter_0}}
\begin{tabular}{p{0.185\textwidth}p{0.70\textwidth}ll}
$B_0, B_z, \vec{B}(t)$ & Intensity of static magnetic field, time-varying magnetic field \\
$\vec{\mu}$ & Magnetic dipole moment \\
$\gamma$ & Gyromagnetic ratio \\
$\sigma_{x,y,z}, \sigma_{+} (\sigma_{-})$ & Pauli matrices, creation (annihilation) operator for a two-level system \\
$\omega_0, \omega_0^\textrm{ R}, \omega^\textrm{ R}$ & Larmor frequency, frequency of electronic transition, frequency of MW or light field \\
$\Omega,\tilde{\Omega}$ & Rabi frequency \\
$\omega,\tilde{\omega}$ & Frequency of $\vec{B}(t)$ field \\
$\Delta,\tilde{\Delta}$ & Detuning between field and qubit frequency \\
$|\!\!\uparrow\rangle,|\!\!\downarrow\rangle$ & ``Up" and ``down" states of the magnetic dipole \\
$T_1, T_2$ & Depolarization time, dephasing time\\
$\delta$ & Unknown shift of the Larmor frequency \\
$g$ & Dipole-field coupling strength (in units of angular frequency) \\
$a^\dagger , a$ & Bosonic creation and annihilation operators \\
$ |g\rangle, |e\rangle$ & ``Ground" and ``excited" electronic states \\
$\Gamma$ & Electronic relaxation (depolarization) rate\\
$\kappa$ & Cavity photon-loss rate \\
$|n\rangle$ & Fock state number $n$ \\
$P_{e,n}, P_{g,n}$ & Population of $|e,n\rangle$ and $|g,n\rangle$ states \\
$J_{i,j}$ & Coupling between spins $i$ and $j$ (in units of angular frequency)\\
$k$ & Wavenumber
\end{tabular}
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$x$ & Position of the atom \\
$V(x)$ & Potential energy \\
$S_{1/2}, P_{1/2}, D_{5/2}$ & Electronic subspaces of the atom \\
$D$ & Zero-field splitting \\
$^3A_2, ^3\!\!E, ^1\!\!A_1, ^1\!\!E$ & Electronic subspaces of the NV
\end{tabular}
\vspace{0.5cm}
\textbf{ Chapter 2: \nameref{chapter:chapter_1}}
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$\eta, \eta_m$ & Effective LD parameter $\eta=\frac{\gamma_eg_B}{8\nu}\sqrt{\frac{\hbar}{M\nu}}$, $\eta_m= \frac{\gamma_e g_B}{8\nu_m} \sqrt{\frac{\hbar}{M \nu_m}}$ \\
$\omega_e,\omega_g, \omega_j$ & Energy of excited and ground states (in units of angular frequency), $j$-th qubit frequency \\
$z^0_j, \Delta z$ & Equilibrium position of ion $j$, distance between equilibrium positions \\
$B(z)$ & Intensity of the magnetic field as a function of the position $z$ \\
$\gamma_e$ & Electronic gyromagnetic ratio \\
$M$ & Mass of the ion \\
$a^\dagger, c^\dagger (a,c)$ & Creation (annihilation) operators of the center-of-mass and breathing modes \\
$\nu, \nu_1 ,\nu_2$ & Frequencies of the trap, center-of-mass and breathing modes \\
$g_B$ & Magnetic field gradient \\
$b^\dagger (b)$ & Redefined creation (annihilation) operator of center-of-mass mode \\
$\Omega_j, \phi$ & Rabi frequency and phase of the MW driving with frequency $\omega_j$ \\
$f_j$ & Modulation function representing the effect of $\pi$ pulses on the $j$-th qubit \\
$G_{jm}$ & Function quantifying the displacement of ion $j$ in the phase space \\
$U_s, U_c$ & Time-evolution operators of spin-force and two-qubit gate \\
$\varphi, \theta_n$ & Accumulated gate phase \\
$\varphi_m$ & Gate phase accumulated by mode $m$ \\
$\tilde{\varphi}, \tilde{\varphi}_m$ & Rescaled $\varphi, \varphi_m$ \\
$T_\textrm{ G}$ & Gate final time \\
$\vec{\phi^x},\vec{\phi^y}$ & List of phases in X and Y pulse blocks \\
$\tau, \tau_a (\tau_b)$ & Duration of the pulse block, time of execution of the first (second) pulse of the block \\
$n_\textrm{ B}, n_\textrm{ RT}, n_\textrm{ PF}$ & Number of blocks applied, phase-space round trips, phase flips \\
$r$ & Number of periods $2\pi/\nu_1$ in $\tau$ \\
$\delta_1,\delta_2$ & Difference between qubit frequencies $\delta_2 = - \delta_1 = \omega_2 - \omega_1$ \\
$t_\pi$ & $\pi$-pulse time \\
$\bar{N}_b,\bar{N}_c $ & Average number of phonons for thermal states of the center of mass and breathing modes \\
$\Delta t$ & Time between MW pulse in ion 1 and ion 2
\end{tabular}
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$U^{(1)}_\textrm{XY}, U^{(2)}_\textrm{XY}$ & Time-evolution operator of an XY block acting on the first (second) ion subspace \\
$\rho$ & Density matrix \\
$\Gamma_b,\Gamma_c $ & Heating rates of center-of-mass and breathing modes \\
$T $ & Temperature of the trap electrodes \\
$\delta$ & Detuning with respect to $\omega_j$ \\
$S_{\alpha}$ & Collective spin-$1/2$ operator, e.g. $S_{\alpha}=\sigma_1^{\alpha}+\sigma_2^{\alpha}$ \\
$\Omega, \Omega_\textrm{ DD}, \tilde{\Omega}_\textrm{DD}$ & Rabi frequency of bichromatic field, Rabi frequency of DD field, rescaled $\Omega_\textrm{DD}$ \\
$J_n(z)$ & Bessel function of the first kind \\
$\xi$ & Detuning with respect to the sideband frequency $\delta=\nu+\xi$\\
$t_n$ & Time after $n_\textrm{ RT}$ phase-space round trips \\
$\epsilon_j(t)$ & Energy fluctuation in the $j$-th qubit (in units of angular frequency) \\
$\phi(t), \dot{\phi},\phi_\textrm{ DD} $ & Time-varying phase of bichromatic driving, time-derivative of $\phi(t)$, phase of DD field \\
$g_{\tilde{\Omega}}, g_{\nu}$ & Coupling strengths of effective second-order terms \\
$\bar{n}$ & Average number of phonons of initial state \\
$\dot{\bar{n}}$ & Rescaled heating rate of center-of-mass mode $\dot{\bar{n}}=\Gamma_b\bar{N}_b$ \\
$\tau_{B}, T_2$ & Correlation time of magnetic field fluctuations, dephasing time induced by magnetic field fluctuations \\
$\tau_{\Omega}, \delta_{\Omega}$ & Correlation time of MW field fluctuations, relative amplitude of MW field fluctuations
\end{tabular}
In the corresponding appendices:
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$|0\rangle, |1\rangle , |2\rangle , |3\rangle $ & States of the hyperfine subspace \\
$E_0, E_1, E_2, E_3 $ & Energies of hyperfine states \\
$X(t)$ & Change of the qubit frequency due to magnetic field fluctuations \\
$\epsilon_\perp$ & Relative amplitude (in terms of $\Omega(t)$) of the MW field inducing transitions outside the qubit subspace \\
$c_d$ & Diffusion constant of the OU process \\
$\nu_r, \Delta_r$ & Radial trapping frequency, radial coupling strength of qubit-mode interaction \\
$d^\dagger (d)$ & Creation (annihilation) operators of a collective radial mode \\
$\beta$ & Ratio between axial and radial qubit-mode coupling strengths \\
$q_j$ & $j$-th ion's displacement around the equilibrium position \\
$Q_j$ & Normal mode coordinates \\
$a^\dagger_m (a_m)$ & Normal mode creation (annihilation) operator where $a_1=b$ and $a_2=c$ \\
$\Omega_k^\textrm{ M}$ & $k$-th order of the Magnus expansion \\
$\tilde{\tau}_a, \tilde{\tau}_b, x$ & $\tau_a, \tau_b, t$ in terms of $\tau$ \\
$\gamma, \hat{n}_{0}, \varphi_0$ & Frequency, unit vector and phase characterizing crosstalk defined before Eq.~(\ref{unitcross}), in Eq.~(\ref{unitcross}) and after Eq.~(\ref{unitcross}) respectively
\end{tabular}
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$\dot{n}_\textrm{com}^\textrm{ref}, \dot{n}_\textrm{bre}^\textrm{ref} $ & Reference center-of-mass and breathing mode heating rates \\
$\nu^\textrm{ref}_1, \nu^\textrm{ref}_2$ & Reference center-of-mass and breathing mode frequencies \\
$T^\textrm{ref}, d_\textrm{ i-e}^\textrm{ref} $ & Reference temperature, reference ion-electrode distance \\
$d_\textrm{ i-e}$ & Ion-electrode distance \\
$N-1$ & Number of trap periods in $2\pi/\xi$ \\
$m$ & Number of $2\pi/\tilde{\Omega}_\textrm{ DD}$ periods in $t_n$ \\
$\tilde{\Omega}$ & Equivalent to $\tilde{\Omega}_\textrm{ DD}$ \\
$\tilde{S}_{\pm}$ & Redefined spin operators $\tilde{S}_{\pm}=\frac{1}{2}(S_z\pm i S_x)$
\end{tabular}
\vspace{0.5cm}
\textbf{ Chapter 3: \nameref{chapter:chapter_2}}
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$\omega_0, \omega, \omega_n^0, \omega_I$ & Frequency of qubit, frequency of bosonic mode, $\omega_n^0=\omega_0+\gamma(2 n +1)$, ion-qubit frequency \\
$ a^\dagger (a)$ & Creation (annihilation) operator of a light mode in the QRM and, also, of a vibrational mode of a single trapped ion \\
$g, \gamma$ & Coupling strengths of Rabi and Stark terms (in units of angular frequency) \\
$\Omega_n, \Omega_{n,n+1}$ & Rabi frequency of the QRM \\
$\delta_n^-, \delta_n^+$ & Frequency of rotating and counter-rotating terms \\
$\Delta_n^{\textrm e}, \Delta_n^{\textrm g}$ & Second-order energy shift associated with the $|e\rangle$ and $|g\rangle$ state \\
$\Omega^{(3)}_{n-} (\Omega^{(3)}_{n+})$ & Rabi frequency of third-order JC-like (anti-JC-like) interaction \\
$\delta^{(3)}_{n-}, \delta^{(3)}_{n+}$ & Frequency of third-order rotating and counter-rotating terms \\
$\tilde{\delta}^{(3)}_{n-}, \tilde{\delta}^{(3)}_{n+}$ & Frequency of third-order rotating and counter-rotating terms, corrected up to second-order shifts \\
$\Omega^{(k)}_{n-} (\Omega^{(k)}_{n+})$ & Rabi frequency of $k$-order JC-like (anti-JC-like) interaction \\
$\delta^{(k)}_{n-}, \delta^{(k)}_{n+}$ & Frequency of $k$-order rotating and counter-rotating terms \\
$\omega^c_0$ & Approximate value of resonance frequency \\
$\Omega_{\textrm{ S},r,b}, \omega_{\textrm{ S},r,b}, \phi_{\textrm{ S},r,b}$ & Rabi frequency, frequency, and phase of carrier, red-detuned and blue-detuned drivings \\
$g_{r}, g_{b},\hat{g}_\textrm{ S}$ & Coupling strength of first red sideband, first blue sideband and carrier interactions \\
$\Omega_0$ & $\Omega_0\equiv \Omega_\textrm{ S}(1-\eta^2/2)$ \\
$\omega_0^\textrm{ R}, \omega^\textrm{ R}$ & Frequency of simulated qubit and simulated bosonic mode \\
$g^\textrm{ R}, \gamma^\textrm{ R}$ & Coupling strengths of simulated qubit-boson interaction and simulated Stark interaction \\
$\Omega, \omega_\textrm{ L}, \phi, \delta$ & Rabi frequency, frequency, phase and detuning of a generic laser driving \\
$\hat{f}_1(\hat{n}), f_1(\hat{n})$ & Nonlinear operator, definition in Eq.~(\ref{NLfunc}), nonlinear operator evaluated in Fock state $|n\rangle$ \\
$\tilde{\Omega}_{n,n+1}$ & Rabi frequency of the NQRM, $|f_1(n)|\Omega_{n,n+1}$ \\
$\langle n\rangle$ & Average number of phonons of initial state \\
$\alpha$ & Complex number characterizing a coherent state, where $|\alpha|$ and ${\arg}(\alpha)$ are called ``amplitude" and ``phase" respectively \\
\end{tabular}
In the corresponding appendix:
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$\omega_n^e, \omega_n^g$ & $\omega_n^e=(\omega +\gamma )n+\omega_0/2, \omega_n^g=(\omega -\gamma )n-\omega_0/2$ \\
$S_{n}(t)$ & $S_{n}(t)\equiv\sigma_+ e^{i\delta^+_{n} t}+\sigma_- e^{i\delta^-_{n} t}$ \\
$g_{r,b}^{(1)}$ & Coupling strength of red and blue sideband (in units of angular frequency) \\
$g_{r,b}^{(2)}$ & Second-order coupling strength of red and blue sideband (in units of angular frequency) \\
$\tilde{\sigma}_{\pm}$ & $\tilde{\sigma}_{\pm}\equiv(\sigma_y\pm i\sigma_z)/2$ \\
$g_\textrm{ JC}, g_\textrm{ aJC}$ & Coupling strength of JC and anti-JC terms (in units of angular frequency)
\end{tabular}
\vspace{0.5cm}
\textbf{ Chapter 4: \nameref{chapter:chapter_3}}
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$N, M$ & Number of bosons, number of modes \\
$t, \tau$ & Natural number describing discrete time steps, actual duration of a time step (spin-addressing operation) \\
$a^\dagger_m (a_m), \hat{n}_m, n_m$ & Creation (annihilation) operator of bosonic mode $m$, number operator at mode $m$, number of particles in mode $m$ \\
$\hat{N}$ & Total number operator $\hat{N}=\sum_m\hat{n}_m$ \\
$U, U_{ij}$ & Haar random unitary matrix, matrix element ($i$-th row and $j$-th column) \\
$P_{BS}$ & Boson sampling probability distribution \\
$|{\uparrow}\rangle, |{\downarrow}\rangle$ & Atomic hyperfine states \\
$\lambda_L$ & Optical lattice wavelength \\
$V_{\uparrow,\downarrow}(x)$ & Potential energy due to optical trapping \\
$x_{\uparrow,\downarrow}(t)$ & ``Position" of $|\!\!\uparrow\rangle$ and $|\!\!\downarrow\rangle$ lattices \\
$T(s,t)$ & $2\times 2$ unitary matrix, building block of $U$ \\
$H_{2\times 2}, A(\theta), A(\phi)$ & $2\times 2$ unitary matrices, building blocks of $T(s,t)$ \\
$|\psi_0\rangle, |0\rangle, |n_m\rangle$ & Initial state, vacuum state of all $M$ modes, Fock state of $m$-th mode\\
$|\psi_u\rangle$ & Uniform initial state \\
$P(n_1,n_2,..., n_{M})$ & Probability of generic final configuration $n_1,n_2,..., n_{M}$ \\
$\hat{U}$ & Boson sampling time-evolution operator \\
$t_\textrm{ in}, t_\textrm{ op}, t_\textrm{ det}, t_\textrm{ pr}$ & Time required for initial state preparation, interference operation, measurement, and the whole process \\
$R_\textrm{ pr} (R_0), R$ & Generation-rate of atomic (photonic) experimental samples, generation-rate of valid experimental samples \\
$\eta_\textrm{ d}, \eta, \eta_\textrm{f}, \eta_\textrm{c} $ & Atomic detection efficiency, single-photon survival probability, single-photon fixed survival probability, single-photon survival probability per unit of length of circuit \\
$\tau_\textrm{ bg}, \tau_\textrm{ tb}$ & One-body loss lifetime, two-body loss lifetime
\end{tabular}
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$P_\textrm{surv, step, pair}(k)$ & Total survival probability of $N$-body sample, survival probability per time step $\tau$, probability of finding $k$ particle pairs (a pair of particles in the same lattice site) \\
$d$ & Length of photonic circuit \\
$ k_l$ & Number of lost photons \\
$\tilde{a}$ & Real positive value quantifying the speed of a classical computer \\
$\omega_{\uparrow}, \omega_{\downarrow}$ & Energy of $|\!\!\uparrow\rangle$ and $|\!\!\downarrow\rangle$ states (in units of angular frequency) \\
$\Omega_0 (\varphi_0), \Omega_s$ & Rabi frequency (phase) of MW driving, Rabi frequency of light field at site $s$ \\
$\hat{U}, \hat{U}_t, \hat{U}_{t,1} $ & Boson sampling time-evolution operator, time-evolution operator corresponding to step $t$, time-evolution operator from time step $1$ to $t$, i.e. $\hat{U}_{t,1}=\prod_{j=1}^{t}\hat{U}_j$ \\
$\theta_s^t, \phi_s^t$ & Phases characterizing $T(s,t)$ transformation \\
$\vec{\theta}_t, \vec{\phi}_t$ & Lists of phases at time step $t$ \\
$\mathcal{L}_b(\rho), \Gamma_b, F_b$ & Generic Lindblad superoperator, decay rate, and jump operator \\
$\Gamma_\textrm{ bg}, \Gamma_\textrm{ tb}$ & One-body-loss rate, two-body-loss rate \\
$F$ & Fidelity with respect to the boson sampling final state \\
$p$ & Survival probability after all interference operations \\
$V_t$ & Two-body-loss Hamiltonian at time step $t$, in an interaction picture with respect to the boson-sampling Hamiltonian, i.e. $\hat{U}^\dagger_{t-1,1}V\hat{U}_{t-1,1}$ or $\hat{U}^\dagger_{t-1,1}V'\hat{U}_{t-1,1}$, depending on whether $t$ is odd or even \\
$\epsilon$ & Fluctuation in energy difference $\omega_\uparrow-\omega_\downarrow$, in terms of $\Omega_0$
\end{tabular}
In the corresponding appendix:
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$k_2, k_3, k_4$ & Number of pairs, trios, and quartets \\
$P(k_2,k_3,k_4)$ & Probability of having $k_2$ pairs, $k_3$ trios, and $k_4$ quartets \\
$c$ & Ratio between $M$ and $N^2$ \\
$D, d, |d\rangle$ & Total number of configurations, index for a possible configuration, state of a possible configuration \\
$p_k, p_{k,k'}$ & Ratio between number of configurations where $k$ atoms are at mode $m$ and $D$, ratio between number of configurations where $k$ atoms are at mode $m$ while $k'$ atoms at mode $m'$ ($m\neq m'$), and $D$ \\
$a_j$ & $a_j\equiv\prod_{i=1}^j\frac{M-i}{M+N-i}$ \\
$\lambda_j$ & $\lambda_j\equiv\frac{N}{M+N-j}$
\end{tabular}
\newpage
\textbf{ Chapter 5: \nameref{chapter:chapter_4}}
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$S_{x,y,z}$ & Spin operators for spin-$1$ \\
$|0\rangle (|1\rangle), |+\rangle$ & Ground (excited) state of the NV qubit, superposition state $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ \\
$\omega_\textrm{ n}, \omega_{j} $ & Nuclear Larmor frequency, Larmor frequency of $j$-th nucleus accounting for a shift due to the coupling with the NV \\
$I_j^{\alpha}$ & $j$-th nucleus spin-$1/2$ operator, $I_j^{\alpha}=1/2\sigma_j^{\alpha}$ with $\alpha=x,y,z,+,-$ \\
$\gamma_\textrm{ n}$ & Nuclear gyromagnetic ratio \\
$\vec{A}_j$ & Vector characterizing the NV-nucleus coupling, definition in Eq.~(\ref{hypervec}) \\
$\vec{r}_j$ & Position vector of $j$-th nucleus, taking the vacancy site as the origin \\
$\omega(t), \omega, \phi$ & Rabi frequency, frequency and phase of MW field \\
$\hat{\omega}_j$ & Unit vector representing the new precession axis of nucleus $j$ \\
$F(t)$ & Modulation function representing the effect of $\pi$ pulses\\
$T$ & Period of $F(t)$ \\
$n, l$ & Number of the harmonic in the Fourier series, number of the harmonic chosen such that $l \omega_\textrm{ M} \approx \omega_k$ \\
$\omega_k$ & Larmor frequency of the nucleus in which we want to induce resonance \\
$f_n, f^\textrm{ m}_n$ and $f^\textrm{ th}_n$ & Fourier coefficient corresponding to the $n$-th harmonic, Fourier coefficient with modulated and top-hat pulses defined in Eq.~(\ref{modulatedf}) and Eq.~(\ref{tophatcoeff}) \\
$\omega_\textrm{ M}$ & Angular frequency associated with period $T$ \\
$\hat{x}_j, \hat{y}_j, \hat{z}_j$ & Unit vector of new Cartesian basis defined below Eq.~(\ref{modulated}) \\
$t_m, t_p$ & The instant we start applying the $m$th pulse, central point of the $m$th pulse $t_p=t_m+t_\pi/2$ \\
$\alpha_q(t)$ & Real, positive, time-varying function, $q$ being a natural number \\
$a_1, c_1$ & Free parameters of Gaussian function $\alpha_1(t)$ \\
$\gamma_{^{13}\textrm{ C}}, \gamma_\textrm{ H}$ & ${^{13}\textrm{ C}}$ nuclear gyromagnetic ratio, $^{1}\textrm{ H}$ nuclear (proton) gyromagnetic ratio \\
$t_f$ & Final time of the sequence \\
$E^\textrm{ th}$ and $E^\textrm{ ext}$ & Energy delivered by top-hat and extended (modulated) $\pi$ pulses
\end{tabular}
In the corresponding appendix:
\begin{tabular}{p{0.185\textwidth}p{0.7\textwidth}ll}
$\tau_m, \tau_\pi$ & $t_m$ and $t_\pi$ in terms of $T/2$ \\
$x, y$ & Integrating variables \\
$\vec{P}, \vec{E}, \vec{B}$ & Poynting vector, electric field and magnetic field of MW driving \\
$\vec{k}$ & Wavevector of MW driving \\
$\vec{x}$ & Position of the NV center \\
$B_0(t)$ & Time-varying amplitude of magnetic field \\
\end{tabular}
\chapter*{Abstract}
\addcontentsline{toc}{section}{Abstract}
Quantum mechanics is at the heart of many of the technological and scientific milestones of the last century, such as the laser, the integrated circuit, or the magnetic resonance imaging scanner. However, only a few decades have passed since we gained the ability to coherently manipulate the quantum states encoded in physical registers of specific quantum platforms. Understanding the light-matter interaction mechanisms that govern the dynamics of these systems is crucial for manipulating these registers and scaling up their quantum volume, which is a must for building a full-fledged quantum computer with vast implications in physics, chemistry, economics, and beyond. In 2019, the group of Prof.~John Martinis at Google achieved a technological milestone when they performed an experiment with 53 superconducting quantum bits (qubits). Although this can still be considered a small quantum system, the Google group claimed that, to solve the problem run by their quantum setup, the best classical computers would need thousands of years, and that, in consequence, they had achieved quantum supremacy. This is a matter of debate, as, shortly after, IBM argued that their classical computers could run the algorithm in a few days. Nevertheless, it is believed that noisy intermediate-scale quantum devices are capable of outperforming the best classical computers in specific tasks such as calculating properties of many-body quantum systems. In this regard, quantum simulators are special-purpose quantum computers that are expected to boost our understanding of, e.g., high-temperature superconductors or light-matter interactions beyond perturbative regimes. A different application of quantum technologies is that of quantum sensors. These quantum devices, which can be manipulated with suitable radiation patterns, enable the measurement of physical quantities with unprecedented spatial resolution. 
This procedure is known as quantum sensing, a field that promises, among other things, a better understanding of biological systems. In light of the above, it is clear that the endeavour of investigating light-matter interactions and finding optimal scenarios for their manipulation is of major importance for quantum technology and its emerging applications.
In this Thesis, we develop novel proposals for efficient quantum information processing and quantum sensing, quantum simulation of generalized light-matter interactions beyond the strong-coupling regime, and quantum supremacy experiments with neutral atoms. In particular, we propose two different methods to generate quantum logic gates with trapped ions driven by microwave radiation. One is designed for current setups, while the other assumes experimental parameters reachable in the near term. We demonstrate that both methods are robust against the main sources of decoherence in these systems. Moreover, our quantum gates work without laser radiation, which is an advantage for scaling up trapped-ion quantum processors, as optical tables are replaced by microwave antennas that can be easily integrated in microtrap arrays. We also study different models of light-matter interaction, more specifically the Rabi-Stark and the nonlinear quantum Rabi models, and propose a method for their implementation using laser-driven trapped ions, where quantum simulations of light-matter interaction have already been realized. In these models, we discover interesting properties such as the appearance of selective multi-photon interactions or the blockade of population distribution in the Hilbert space. Furthermore, we propose how ultracold atoms in optical lattices can be controlled with microwave and laser radiation in order to realize the boson sampling problem, a model of computation capable of showing quantum supremacy with tens of particles. Taking into account experimental error sources, such as particle losses, we estimate that, using neutral atoms in spin-dependent optical lattices and within realistic conditions, quantum supremacy could be achieved with tens of atoms. Finally, we develop a method to achieve selective interactions between a nitrogen-vacancy center and nearby carbon-13 atoms. 
These interactions are obtained by suitably designed microwave pulse sequences, and can be used to perform nuclear magnetic resonance at the nanoscale, with applications in the biological sciences. Compared to other methods, ours is energy-efficient, and thus less invasive and better suited to biological samples.
All in all, in this Thesis we design radiation patterns capable of creating effective light-matter interactions suited to applications in quantum computing, quantum simulation and quantum sensing. In this manner, the results presented here significantly expand our knowledge of the control of light-matter interactions, and provide optimal scenarios for current quantum devices to generate the next generation of quantum applications. Moreover, we introduce novel methods to simulate generalized light-matter interactions beyond perturbative regimes. Thereby, we believe our results will boost the construction of better quantum sensors, quantum simulators and trapped-ion quantum processors, as well as the first experimental realization of quantum supremacy using neutral atoms.
\chapter*{Laburpena}
\chapter{Conclusions}
\label{chap_conclusions}
\thispagestyle{chapter}
In this Thesis we have designed protocols that tailor light-matter interactions for specific applications in quantum platforms such as trapped ions, ultracold atoms in optical lattices, and NV centers. In short, we have used DD techniques to design quantum operations that are robust against errors in environmental and control fields, achieving high-fidelity quantum logic in trapped ions and energy-efficient NMR at the nanoscale with NV centers in diamond. We have also studied generalised models of light-matter interaction, leading to the discovery of selective $k$-photon interactions in the Rabi-Stark model and a proposal for preparing non-classical quantum states using the NQRM. Moreover, we have shown how the appropriate tailoring of interactions among ultracold atoms in optical lattices could lead to solving the boson sampling problem faster than the best supercomputers, thus demonstrating quantum supremacy. More specifically:
In chapter~\ref{chapter:chapter_1}, we proposed two different methods to generate robust high-fidelity two-qubit gates with MW-driven trapped ions. On the one hand, we have demonstrated that pulsed DD schemes are efficient generators of fast and robust two-qubit gates. In particular, our MW sequence makes all motional modes cooperate, which results in a faster gate. In addition, our sequence is specifically designed to be robust against fluctuations in the magnetic and MW fields. As a result, it achieves fidelities larger than $99.9\%$ even including these realistic sources of decoherence. On the other hand, we proposed a different method that uses phase-modulated continuous DD, combined with phase flips and refocusing $\pi$ pulses, to produce entangling gates with high fidelity. Contrary to the previous case, here we considered low-power MW radiation, which matches current experimental scenarios. In particular, we demonstrated that fidelities exceeding $99\%$ in the preparation of maximally entangled Bell states are possible within current experimental limitations. Moreover, we also showed that fidelities larger than $99.9\%$ are reachable with minimal experimental improvements. Summarising, with the help of DD methods, we have developed two gate schemes that can arguably improve the fidelities of current entangling operations up to the threshold required for the application of quantum error correction techniques. Finally, in the same direction, one could study whether amplitude-modulated pulses, like the ones used in chapter~\ref{chapter:chapter_4}, can bring extra robustness to our gates.
\pagebreak
In chapter~\ref{chapter:chapter_2}, we have studied two different models of light-matter interaction and proposed means for their quantum simulation with a laser-driven trapped ion. First, we considered the Rabi-Stark model, both in the SC and USC regimes. Here, we discovered $k$-photon interactions whose resonance frequency depends on the state of the bosonic mode. We identified that this selective behaviour is caused by the Stark term, and developed an analytical framework to characterise these selective interactions. Second, we proposed the NQRM as a natural extension of the QRM in trapped-ion systems. The nonlinear term $f_1(\hat{n})$, which appears when moving outside the LD regime, causes the blockade of motional-state propagation at $|n\rangle$ whenever $f_1(\hat{n})|n\rangle=0$. In the SC regime, we compared the linear and nonlinear Jaynes-Cummings models, and observed that the collapses and revivals of coherent states disappear in the nonlinear case. As an application of the model, we proposed a method to generate large-$n$ Fock states in a dissipative manner, making use of the nonlinear anti-JC model and the spontaneous decay of the two-level system. Regarding this, we find it interesting to study how the time needed to prepare the Fock state scales with the number $n$\footnote{We thank Prof. Jonathan P. Home for bringing up this suggestion.}. Finally, we showed how the Rabi-Stark model and the NQRM can be implemented using a single trapped ion. Natural follow-ups of these works could involve proposals to realise these models in cavity or circuit QED systems, or their extension to more qubits.
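The blockade condition above can be illustrated with a short numerical sketch. Assuming the standard nonlinear JC (de Matos Filho-Vogel) closed form of the coupling outside the LD regime, $f_1(n)\propto e^{-\eta^2/2}\,L_n^{1}(\eta^2)/(n+1)$ with $L_n^1$ a generalized Laguerre polynomial (this form is taken from the nonlinear JC literature, not reproduced from Eq.~(\ref{NLfunc})), the LD parameter that blocks propagation at a chosen $|n\rangle$ is a root of $L_n^1$:

```python
# Sketch (assumed de Matos Filho-Vogel form, not code from this thesis):
# outside the LD regime the Fock-state coupling is
#   f_1(n) ~ exp(-eta^2/2) * L_n^1(eta^2) / (n + 1),
# so the motional blockade f_1(n)|n> = 0 occurs when eta^2 is a root of L_n^1.
import numpy as np
from scipy.special import eval_genlaguerre
from scipy.optimize import brentq

def f1(n, eta):
    """Nonlinear coupling evaluated on Fock state |n> (up to an overall constant)."""
    return np.exp(-eta**2 / 2) * eval_genlaguerre(n, 1, eta**2) / (n + 1)

def blockade_eta(n):
    """Smallest LD parameter eta > 0 with f_1(n) = 0, i.e. the first root of L_n^1."""
    g = lambda x: eval_genlaguerre(n, 1, x)      # root search in x = eta^2
    xs = np.linspace(1e-6, 4.0 * n + 6.0, 4000)  # all n roots of L_n^1 lie below ~4n+6
    for a, b in zip(xs[:-1], xs[1:]):
        if g(a) * g(b) < 0:                      # sign change brackets a root
            return np.sqrt(brentq(g, a, b))
    raise RuntimeError("no sign change found")
```

For instance, `blockade_eta(1)` returns $\sqrt{2}$, since $L_1^1(x)=2-x$; sweeping $\eta$ across such a value turns the blockade at the corresponding $|n\rangle$ on and off.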
In chapter~\ref{chapter:chapter_3}, we have introduced a method to realise boson sampling with ultracold atoms in optical lattices. The group of Prof. Dieter Meschede and Dr. Andrea Alberti had previously demonstrated discrete-time random walks~\cite{Robens16b} and two-particle quantum interference~\cite{RobensThesis} using MW pulses combined with spin-dependent optical potentials. Boson sampling, equivalent to a multiparticle quantum walk, is hard to simulate classically, as proved by Aaronson and Arkhipov~\cite{Aaronson11}. On the one hand, we have studied the effect of particle loss in the boson sampling problem. In particular, we studied how correlated two-body losses hinder the rate at which valid experimental samples can be generated. For low particle loss, we proved that the samples are close enough to the boson sampling probability distribution (BSPD). Interestingly, we also observed that sampling faster than a classical supercomputer is possible for strong particle losses; however, the probability distribution we are then sampling from does not resemble the BSPD. It is still an open question whether it is hard to sample from the probability distribution generated by a boson sampler with strong two-body losses. We also estimated how small other experimental errors, such as fluctuations of the magnetic field or imperfect ground-state cooling, have to be when increasing the number of bosons, in order to keep sampling from the BSPD. We note that, in principle, DD techniques could be used to relax the constraints on the fluctuating errors. As a final remark, we notice that one of the advantages of using atoms for boson sampling may come from the point of view of verification. That is, verifying that the results delivered by the boson sampler are correct, even in regimes where the results cannot be retrieved with classical supercomputers. 
In this regard, a fermionic species could in principle be loaded into the optical lattice instead of bosonic atoms. Applying the same operations to the fermionic atoms would also result in a sampling problem (fermion sampling). In this case, however, the final probability distribution can be calculated in a time that scales polynomially with the number of particles. One could then verify that the machine samples correctly from the fermionic probability distribution, and expect that the same will happen in the bosonic case.
In chapter~\ref{chapter:chapter_4}, we presented a general method to design extended MW pulses that achieve tuneable, and hence selective, interactions between an NV quantum sensor and nuclear spins at large static magnetic fields. The latter represent optimal conditions for nanoscale NMR, enhancing the precision with which signals from magnetic field emitters can be retrieved by the NV center. At large magnetic fields, the Larmor frequencies increase and may surpass the Rabi frequency associated with the applied MW radiation. This induces a reduction in the contrast of the NMR spectra, a problem that is solved by our amplitude-modulated extended $\pi$ pulses. Our method avoids having to raise the Rabi frequency to the value of the Larmor frequencies, making it energetically efficient. Furthermore, the method is general and can be incorporated into any stroboscopic DD technique, such as the widely used XY8 sequence. During the course of this thesis, we have also proposed the use of amplitude-modulated pulses for double quantum magnetometry\footnote{Check article 14 from the list of publications}. In that case, a more complex two-tone stroboscopic driving is required, achieving a larger spectral signal.
All in all, this thesis explores new avenues in the control of quantum systems through light-matter interactions shaped by specifically designed radiation patterns. We expect that the results presented here will boost the experimental generation of MW-driven two-qubit operations with fidelities well above the threshold required for quantum error correction, the development of better quantum sensors performing NMR at the nanoscale, and the realisation of the first quantum supremacy experiment using trapped atoms. Moreover, our results will help in the development of quantum technologies which, besides leading to technological progress, will certainly be key assets to unveil the unsolved questions of nature.
\chapter*{List of abbreviations}
\addcontentsline{toc}{section}{List of abbreviations}
\begin{enumerate}[leftmargin=5cm]
\item [\textbf{BSPD}]{Boson-Sampling Probability Distribution}
\item [\textbf{DSC}]{Deep-Strong Coupling}
\item [\textbf{DD}]{Dynamical Decoupling}
\item [\textbf{JC}]{Jaynes-Cummings}
\item [\textbf{LD}]{Lamb-Dicke}
\item [\textbf{MW}]{Microwave}
\item [\textbf{NV}]{Nitrogen-Vacancy}
\item [\textbf{NISQ}]{Noisy Intermediate-Scale Quantum}
\item [\textbf{NQRM}]{Nonlinear Quantum Rabi Model}
\item [\textbf{NMR}]{Nuclear Magnetic Resonance}
\item [\textbf{OU}]{Ornstein-Uhlenbeck}
\item [\textbf{QED}]{Quantum Electrodynamics}
\item [\textbf{QRM}]{Quantum Rabi Model}
\item [\textbf{RWA}]{Rotating-Wave Approximation}
\item [\textbf{SC}]{Strong Coupling}
\item [\textbf{USC}]{Ultrastrong Coupling}
\end{enumerate}
\chapter*{List of publications}
\addcontentsline{toc}{section}{List of publications}
This thesis is based on the following publications:
\\
\textbf{Chapter 2: \nameref{chapter:chapter_1}}
\begin{enumerate}
\item {I. Arrazola, J. Casanova, J. S. Pedernales, Z.-Y. Wang, E. Solano, and M. B. Plenio, \\
\textit{ Pulsed dynamical decoupling for fast and robust two-qubit gates on trapped ions},\\
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.97.052312}{Physical Review A \textbf{97}, 052312 (2018).}}
\item {I. Arrazola, M. B. Plenio, E. Solano, and J. Casanova \\
\textit{ Hybrid Microwave-Radiation Patterns for High-Fidelity Quantum Gates with Trapped Ions},\\
\href{https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.13.024068}{Physical Review Applied \textbf{13}, 024068 (2020).}}
\end{enumerate}
\textbf{Chapter 3: \nameref{chapter:chapter_2}}
\begin{enumerate}[resume]
\item { X.-H. Cheng, I. Arrazola, J. S. Pedernales, L. Lamata, X. Chen, and E. Solano\\
\textit{ Nonlinear quantum Rabi model in trapped ions},\\
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.97.023624}{Physical Review A \textbf{97}, 023624 (2018).}}
\item { L. Cong, S. Felicetti, J. Casanova, L. Lamata, E. Solano, and I. Arrazola \\
\textit{ Selective interactions in the quantum Rabi model},\\
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.101.032350}{Physical Review A \textbf{101}, 032350 (2020).}}
\end{enumerate}
\textbf{Chapter 5: \nameref{chapter:chapter_4}}
\begin{enumerate}[resume]
\item { I. Arrazola, E. Solano, and J. Casanova \\
\textit{ Selective hybrid spin interactions with low radiation power},\\
\href{https://journals.aps.org/prb/abstract/10.1103/PhysRevB.99.245405}{Physical Review B \textbf{99}, 245405 (2019).}}
\end{enumerate}
Other articles published during the course of this thesis but not included in it are:
\begin{enumerate}[resume]
\item { I. Arrazola, J. S. Pedernales, L. Lamata, and E. Solano \\
\textit{ Digital-Analog Quantum Simulation of Spin Models in Trapped Ions},\\
\href{https://www.nature.com/articles/srep30534}{Scientific Reports \textbf{6}, 30534 (2016).}}
\item { X.-H. Cheng, I. Arrazola, J. S. Pedernales, L. Lamata, X. Chen, and E. Solano \\
\textit{ Switchable particle statistics with an embedding quantum simulator},\\
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.022305}{Physical Review A \textbf{95}, 022305 (2017).}}
\item { F. Dom\'{i}nguez, I. Arrazola, J. Dom\'{e}nech, J. S. Pedernales, L. Lamata, E. Solano, and D. Rodr\'{i}guez \\
\textit{ A Single-Ion Reservoir as a High-Sensitive Sensor of Electric Signals},\\
\href{https://www.nature.com/articles/s41598-017-08782-5}{Scientific Reports \textbf{ 7}, 8336 (2017).}}
\item {F. Dom\'{i}nguez, M. J. Guti\'{e}rrez, I. Arrazola, J. Berrocal, J. M. Cornejo, J. J. Del Pozo, R. A. Rica, S. Schmidt, E. Solano, and D. Rodr\'{i}guez \\
\textit{ Motional studies of one and two laser-cooled trapped ions for electric-field sensing applications},\\
\href{https://www.tandfonline.com/doi/abs/10.1080/09500340.2017.1406157}{Journal of Modern Optics \textbf{65}, 613 (2018).}}
\item {M. J Guti\'errez, J. Berrocal, J. M Cornejo, F. Dom\'{i}nguez, J. J. Del Pozo, I. Arrazola, J. Ba\~nuelos, P. Escobedo, O. Kaleja, L. Lamata, R. A. Rica, S. Schmidt, M. Block, E. Solano, and D. Rodr\'{i}guez \\
\textit{ The TRAPSENSOR facility: an open-ring 7 tesla Penning trap for laser-based precision experiments}, \\
\href{https://iopscience.iop.org/article/10.1088/1367-2630/aafa45}{New Journal of Physics \textbf{21} 023023 (2019).}}
\item {R. Puebla, G. Zicari, I. Arrazola, E. Solano, M. Paternostro, and J. Casanova \\
\textit{ Spin-boson model as a simulator of non-Markovian multiphoton Jaynes-Cummings models}, \\
\href{https://www.mdpi.com/2073-8994/11/5/695}{Symmetry \textbf{11}, 695 (2019).}}
\item {M. J. Guti\'{e}rrez, J. Berrocal, F. Dom\'{i}nguez, I. Arrazola, M. Block, E. Solano, and D. Rodr\'{i}guez \\
\textit{ Dynamics of an unbalanced two-ion crystal in a Penning trap for application in optical mass spectrometry}, \\
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.100.063415}{Physical Review A \textbf{100}, 063415 (2019).}}
\item {T. Xin, S. Wei, J. Cui, J. Xiao, I. Arrazola, L. Lamata, X. Kong, D. Lu, E. Solano, and G. Long \\
\textit{ Quantum algorithm for solving linear differential equations: Theory and experiment}, \\
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.101.032307}{Physical Review A \textbf{101}, 032307 (2020).}}
\item {C. Munuera-Javaloy, I. Arrazola, E. Solano, and J. Casanova \\
\textit{ Double quantum magnetometry at large static magnetic fields}, \\
\href{https://journals.aps.org/prb/abstract/10.1103/PhysRevB.101.104411}{Physical Review B \textbf{101}, 104411 (2020).}}
\item {J.-N. Zhang, I. Arrazola, J. Casanova, L. Lamata, K. Kim, and E. Solano \\
\textit{ Probabilistic eigensolver with a trapped-ion quantum processor}, \\
\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.101.052333}{Physical Review A \textbf{101}, 052333 (2020).}}
\end{enumerate}
\section{Introduction}
Recently, increasing experimental and theoretical attention has been given
to topological aspects of condensed matter physics \cite{Wen2019,*Sachdev2019}.
In one-dimensional (1D) systems, an early and essential role of topology was revealed by the
so-called \textit{Haldane conjecture}\cite{nobelhaldane,PhysRevLett.50.1153,*Haldane1983}:
the ground state of integer (half-integer) spin chains is gapped (gapless).
In fact, the conjecture was experimentally verified in spin-1 chains \cite{Buyers1986,*Tun1991};
further, density matrix renormalization group (DMRG) studies confirmed that
the \textit{bulk} gapped ground state displays spin-1/2 fractionalized \textit{edge} states
in open chains \cite{White1992,*White1993}.
Topological insulators \cite{Hasan2010} share with these systems some general aspects \cite{Chen2011,Chen2013,Verresen2018}:
an insulating bulk and a conducting surface (edge states) are intrinsically connected, a phenomenon known as the bulk-boundary correspondence. The Su-Schrieffer-Heeger (SSH) dimerized model \cite{Su1979} and trimer models \cite{MartinezAlvarez2019}, including a diamond chain \cite{Pelegri2019}, are examples of models that
manifest the bulk-boundary correspondence in regions of their parameter space.
In addition, the phonon structures arising from mechanical isostatic \cite{Kane2014} and Maxwell \cite{Mao2018}
lattices can be understood
within the analogous framework of the topological band theory of electronic systems, including the bulk-boundary correspondence.
Also, chiral magnonic edge states in ferromagnetic skyrmion crystals controlled by
magnetic fields were reported \cite{PhysRevResearch.2.013231}.
Besides, we mention that the association of a two-dimensional Chern number with a one-dimensional system was also
suggested for photonic quasicrystals \cite{Kraus2012}, and fermionic systems in quasi-periodic optical
superlattices \cite{MartinezAlvarez2019,Lang2012}.
Gapped ground states of spin chains, either with spin-1 or more complex unit cells with spin-1/2 sites, imply plateaus
in the magnetization ($m$) curves as a function of the magnetic field ($h$): $m(h)$.
This is a topological quantization of the magnetization in the
presence of $h$, analogous to the quantum Hall effect \cite{OYAPrl97}.
Recently, this issue was investigated in modulated spin
chains \cite{Hu2014,*Hu2015}, with particular attention to the edge states of open systems.
On the other hand, a magnetization plateau at 1/3 of the saturation magnetization (1/3 -- plateau) has been observed
in several model systems.
The isotropic $AB_2$ chain exhibits a ferrimagnetic ground state
\cite{Macedo1995,Tian1996,AlcarazandMa,PRL97Raposo,*PRB99Raposo} and the 1/3 -- plateau
in $m(h)$ \cite{PhysA2005,Coutinho-Filho2008}.
The topological nature of the ground state manifests in topological Wess-Zumino terms of the non-linear sigma
model \cite{PRL97Raposo,*PRB99Raposo} or through its representation on a valence-bond
state basis \cite{Kolezhuk1997}. Likewise, the spin-(1/2,1) and spin-(1/2,5/2) alternating spin chains
also exhibit a ferrimagnetic ground state, together with the 1/3 -- plateau \cite{AlcarazandMa,PhysRevB.55.8894,Maisinger1998,DaSilva2017}, and the 2/3 -- plateau \cite{Tenorio2011}, respectively.
Besides, we mention the 1/3 -- plateau state of the quantum spin-1/2 XX diamond chain
in a magnetic field \cite{Verkholyak2011}.
Further, in the
phase diagram of anisotropic spin models, the 1/3 -- plateau closes in a transition of the
Kosterlitz-Thouless (KT) type \cite{Kosterlitz1973,*Kosterlitz1974,*Kosterlitz2016,*nobelkosterlitz} as the
anisotropy changes \cite{YamamotoPRB99,Solid15Liu}. The KT transition is also observed in anisotropic ferrimagnetic
branched chains \cite{Verissimo2019,Karlova2019}.
On the experimental side, the 1/3 -- plateau was observed in materials with three spin-1/2 sites
per unit cell (diamond chain): the mineral azurite Cu$_3$(CO$_3$)$_2$(OH)$_2$
\cite{Kikuchi2005,Rule2008,Aimo2009,Rule2011,Jeschke2011}; and the compounds copper hydroxydiphosphate
Cu$_3$(P$_2$O$_6$OH)$_2$ \cite{Hase2006}, and alumoklyuchevskite K$_3$Cu$_3$AlO$_2$(SO$_4$)$_4$
\cite{Morita2017,*Fujihala2017}. Also, the 2/3 -- plateau was observed in
a new mixed spin-(1/2,5/2) chain in a charge-transfer salt
(4-Br-$o$-MePy-V)FeCl$_4$ \cite{Yamaguchi2020}.
In this work, DMRG and exact diagonalization (ED) results
for open and closed anisotropic Heisenberg-$AB_2$ chains, respectively, unveil a very rich phase diagram and
related notable features. In particular, in open chains we identify a secondary plateau associated with
edge and extended magnon excitations from the 1/3--plateau. We stress that the edge magnon states that emerge
from this plateau are many-body quantum states.
As one approaches the symmetry-protected [translational and $U(1)$ symmetries] topological quantum KT transition, the bulk penetration of
the edge states is enhanced,
their degeneracy is broken, and the squeezed chain effect is observed. Further, at the KT transition and beyond, the
bulk magnon gap closes, while the edge states mix with the continuum and the Luttinger liquid (LL) excitations
dominate the scenario.
In Sec. \ref{sec:pd}, we discuss the topology and phase diagram of the anisotropic Heisenberg-$AB_2$ chain,
and a precise determination of the KT transition point.
considered in Sec. \ref{sec:edge}, while
gapped and gapless excitations around the topological KT transition are discussed in Sec. \ref{sec:bands}.
The boundary scattering length for the 1/3 -- plateau and the magnon-magnon scattering length for the
fully polarized (FP) -- plateau magnons are reported in Sec. \ref{sec:sca}. A summary and conclusions are found in Sec. \ref{sec:summary}.
\section{Topology and Phase diagram}
\label{sec:pd}
The anisotropic Heisenberg model on the $AB_2$ chain in an applied magnetic field $h$ reads:
\begin{eqnarray}
H &=&\sum_{i=1}^{N_c}[S^x_{A,i}(S^x_{B,i}+S^x_{B,i-1})+S^y_{A,i}(S^y_{B,i}+S^y_{B,i-1})\nonumber\\
& &+\lambda S^z_{A,i}(S^z_{B,i}+S^z_{B,i-1})]-hS^z,
\label{eq:ham}
\end{eqnarray}
where $S^{x,y,z}_{B,i}=S^{x,y,z}_{B_1,i}+S^{x,y,z}_{B_2,i}$, $N_c$ is the number of unit cells of the system, the exchange couplings in the
$xy$ plane define the unit of energy, $\lambda$
is the exchange coupling in the $z$-direction, and
$S^z=\sum_{i=1}^{N_c}(S^z_{A,i}+S^z_{B_1,i}+S^z_{B_2,i})$
is the $z$ component of the total spin of the system, as illustrated in Fig. \ref{fig:mag}(a).
We use DMRG to study open chains of $N_c$ unit cells,
with one $A$ site at each boundary, retaining 243 states per block and performing 12 sweeps in each calculation, such that the highest discarded weight was of order $10^{-9}$. We also study closed systems with $N_c=10$ and $N_c=12$ through ED.
The magnetization curves are obtained from the lowest energy in each total spin
$S^z$ sector at $h=0$: $E(S^z)$,
since the Zeeman term in the Hamiltonian (\ref{eq:ham}) implies $E_{h}(S^z)=E(S^z)-hS^z$
for $h\neq0$. In a finite size system, the $m(h)$ curve is composed of finite size
steps of width $\Delta h(S^z)$ at total spin $S^z$. Considering
$h_{S^z+}$ and $h_{S^z-}$ as the extreme points of these steps, such that
$\Delta h(S^z)=h_{S^z+}-h_{S^z-}$, we thus have $h_{S^z\pm}=\pm[E(S^z\pm1)-E(S^z)]$.
If $S^z$ is not at a thermodynamic-limit magnetization plateau state, we have
$\Delta h(S^z)\rightarrow 0$ as $N_c\rightarrow\infty$, otherwise $\Delta h(S^z)\neq 0$
as $N_c\rightarrow\infty$.
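As an illustration of this step construction (a minimal exact-diagonalization sketch for a small closed chain, not the DMRG code used for the results below; all function and variable names are ours), one can extract $E(S^z)$ and the finite-size step boundaries $h_{S^z\pm}$ directly:

```python
import numpy as np

# Illustrative ED sketch (not the paper's DMRG code): closed AB2 chain with
# N_c = 2 unit cells (6 spins-1/2) and the XXZ couplings of Eq. (1) at h = 0.
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+ for a single spin-1/2
sz = np.array([[0.5, 0.0], [0.0, -0.5]])  # S^z
I2 = np.eye(2)

def site_op(op, site, n):
    """Embed a single-site operator at `site` in an n-spin Hilbert space."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == site else I2)
    return out

def ab2_hamiltonian(nc, lam):
    # site labels: A_i = 3i, B1_i = 3i + 1, B2_i = 3i + 2 (periodic in i)
    n = 3 * nc
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(nc):
        a = 3 * i
        j = (i - 1) % nc
        for b in (3 * i + 1, 3 * i + 2, 3 * j + 1, 3 * j + 2):
            Spa, Spb = site_op(sp, a, n), site_op(sp, b, n)
            # S^x S^x + S^y S^y = (S^+ S^- + S^- S^+)/2
            H += 0.5 * (Spa @ Spb.T + Spa.T @ Spb)
            H += lam * site_op(sz, a, n) @ site_op(sz, b, n)
    return H

def lowest_energies(nc, lam):
    """Lowest energy E(S^z) in each total-S^z sector."""
    n = 3 * nc
    H = ab2_hamiltonian(nc, lam)
    sz_tot = sum(site_op(sz, s, n) for s in range(n)).diagonal()
    E = {}
    for val in sorted(set(np.round(sz_tot, 8))):
        idx = np.where(np.isclose(sz_tot, val))[0]
        E[val] = np.linalg.eigvalsh(H[np.ix_(idx, idx)])[0]
    return E

E = lowest_energies(2, 1.0)  # isotropic point, lambda = 1
# finite-size step of the 1/3 -- plateau (S^z = N_c/2 = 1):
h_minus, h_plus = E[1.0] - E[0.0], E[2.0] - E[1.0]
print(h_minus, h_plus)
```

For $N_c=2$ and $\lambda=1$ this yields $h_-=E(1)-E(0)=0$, consistent with the lower critical field of the 1/3 -- plateau quoted below, together with a finite step width $E(2)-E(1)$, the finite-size precursor of the plateau.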
\begin{figure}
\includegraphics*[width=0.47\textwidth]{fig1.eps}
\caption{(a) Schematic representation of the anisotropic Heisenberg Hamiltonian on the AB$_2$ spin-1/2 chain,
under a magnetic field $h$. DMRG results for the open AB$_2$ chain with $N_c=121$ unit cells:
(b) Magnetization per unit cell $m(h)$ for $1\geq\lambda\geq 0.1$ (left panel) and
$0.0\geq\lambda\geq -0.9$ (right panel), in
steps of $\Delta \lambda=0.1$. Inset of the left panel:
$m(h)$ for $\lambda=1.0$ in the vicinity of the 1/3 -- plateau bounded by $h_-=0$ and $h_{+}=1.76$, with
a step at $h_0=1.28$;
(c) Phase diagram: the color code refers to the $m$ values in (b). The exact critical line $h_s$ bounds
the FP -- plateau, while $h_-$, $h_0$, and $h_+$ are related to the 1/3 -- plateau. The gapped phases,
with dynamical exponent $z=2$, are separated by the gapless Luttinger liquid (LL) phase with $z=1$. The 1/3--plateau closes at
a Kosterlitz-Thouless (KT) transition: $\lambda_{KT}=-0.419\pm0.004$ and $h_{KT}=0.290\pm0.002$.}
\label{fig:mag}
\end{figure}
In Fig. \ref{fig:mag}(b) we present DMRG results ($N_c=121$) for $m(h)$, with the
anisotropy in the interval $-0.9\leq \lambda \leq1$.
The $m(h)$ curves display the FP -- plateau at the thermodynamic-limit (bulk) saturation
magnetization $m_s=3/2$, a plateau slightly below the bulk 1/3 -- plateau at $m_s/3=1/2$, and
a secondary plateau, as shown in the inset for $\lambda=1.0$. The fields $h_-$, $h_0$ and $h_+$ define the
width of the plateaus: the secondary one is associated with edge and extended magnon excitations from the 1/3--plateau. Here, these excitations will be examined in detail around the KT transition, in which case LL excitations also take place. In fact, in Fig. \ref{fig:mag}(c), a rich
$h$-$\lambda$ phase diagram exhibits the various phases that play a significant role in our analysis.
In bulk, without broken translational symmetry, the possible occurrence of a plateau in $m(h)$ must satisfy
the topological criterion \cite{OYAPrl97}:
\begin{equation}
S_{c}-m=\text{integer},
\end{equation}
where $S_c$ is the maximum spin of a unit cell.
In our model, $S_{c}=3/2$, $m=1/2$ for the 1/3 -- plateau and $m=3/2$ for the FP -- plateau.
Also, this topological criterion can be
related \cite{Hu2014,Hu2015} to a Chern number $C_m$ defined in the two-dimensional parameter space
of an associated periodically modulated closed system under a twisted boundary condition.
Indeed, an $m$-plateau obeys the relation:
\begin{equation}
C_m=-(S_c-m),
\end{equation}
for $m\geq0$, with $C_{m}=-C_{-m}$ for $m<0$, i.e., $h<0$, not shown in Fig. \ref{fig:mag}. Thus, the FP -- plateau has a Chern
number $C_{3/2}=0$ and is a trivial insulating state;
while the 1/3--plateau is a topological insulator with $C_{1/2}=-1$.
In Sec. \ref{secsec:fpsca}, we present a detailed discussion of the trivial
insulating FP -- plateau state.
In our open finite-size chain, a remarkable feature is the presence of edge states, leading to the splitting of the 1/3 -- plateau
into two plateaus. Consider, for example, the isotropic case shown in the inset of Fig. \ref{fig:mag}(b). The bulk
1/3 -- plateau has extreme points
at $h_{-}=0$ and $h_{+}=1.76$ (for both spin-(1/2,1) \cite{PhysRevB.57.13610} and AB$_2$ \cite{PhysA2005} chains). However, in
the open finite-size system and $h_0\leq h<h_{+}$, the
magnon excitations occupy edge states inside the gap between the lower and upper bulk
band states and give rise to the two plateaus in $m(h)$. The transition between these
two plateaus occurs at $h_0=1.28$ for $\lambda=1$.
The phase diagram of the AB$_2$-chain with $N_c=121$ unit cells is shown in Fig. \ref{fig:mag}(c).
The extreme lines of the bulk plateaus, $h_-(\lambda)$, $h_+(\lambda)$, and $h_s(\lambda)$, are quantum critical
lines separating
a gapped insulating phase from the gapless LL phase, with dynamic critical exponent $z=2$ and $z=1$, respectively.
The FP -- plateau
is bounded by $h_s(\lambda)=\frac{3\lambda}{2}+\frac{1}{2}\sqrt{8+\lambda^2}$,
since the energy of the exact Goldstone mode (a $\Delta S^z=-1$ magnon) associated with this line reads:
$\varepsilon_{\text{FP}}(k)=-\frac{3\lambda}{2}-\frac{1}{2}\sqrt{\lambda^2+8\cos^2(k/2)}+h$.
Therefore, for $h$ close to $h_s(\lambda)$, a high-dilute regime of magnons is verified,
with the following low-lying excitation energy:
\begin{equation}
\varepsilon(k)=-\mu+\frac{v^2 k^2}{2h_s},
\label{eq:diluteregime}
\end{equation}
where $\mu=h_s-h$ and the spin-wave velocity is
\begin{equation}
v=\frac{1}{\sqrt{2\left(1-\frac{3\lambda}{2h_{s}}\right)}}.
\label{eq:vfp}
\end{equation}
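The quadratic form of Eq. (\ref{eq:diluteregime}) with the velocity (\ref{eq:vfp}) can be checked numerically against the exact band $\varepsilon_{\text{FP}}(k)$; the following is an illustrative sketch (the finite-difference step $dk$ is an arbitrary choice of ours):

```python
import numpy as np

# Consistency check (illustrative): at h = h_s, the curvature of the exact
# FP magnon band at k = 0 must equal v^2 / h_s, as implied by Eqs. (4)-(5).
def band_check(lam, dk=1e-4):
    hs = 1.5 * lam + 0.5 * np.sqrt(8.0 + lam ** 2)
    v = 1.0 / np.sqrt(2.0 * (1.0 - 3.0 * lam / (2.0 * hs)))
    eps = lambda k: -1.5 * lam - 0.5 * np.sqrt(lam ** 2 + 8.0 * np.cos(k / 2) ** 2) + hs
    curv = (eps(dk) - 2.0 * eps(0.0) + eps(-dk)) / dk ** 2  # numerical d^2 eps / dk^2
    return curv, v ** 2 / hs

for lam in (1.0, 0.0, -0.4):
    print(lam, *band_check(lam))
```

For all three values of $\lambda$ the finite-difference curvature agrees with $v^2/h_s$ to the accuracy of the stencil.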
In addition, the 1/3 -- plateau is bounded by the critical lines $h_{-}(\lambda)$ and $h_{+}(\lambda)$, with a width
$\Delta(\lambda)=h_+(\lambda)-h_-(\lambda)$. The plateau width $\Delta (\lambda)$ is the \textit{bulk gap} that
separates the two regions of the gapless LL phase: one with $m<1/2$, and the other with $m>1/2$, for the
same value of $\lambda$.
On the other hand, the low-energy theory of magnons in a gapped system under a magnetic field is
that of a Lieb-Liniger \cite{Lieb1963} Bose fluid with $\delta$-function interactions \cite{Affleck91}. In addition, in the
high dilute
regime of magnons, the theory is equivalent to a Tonks-Girardeau \cite{Tonks,*Girardeau} Bose system with a hard-core
repulsion \cite{Affleck91} or a fermionic system \cite{Tsvelik90,Affleck91,Montenegro-Filho2008,Tenorio2011}.
Thereby, in the high-dilute regime $h\rightarrow h_-\text{ or }h_+$, the low-energy magnon excitations
from the 1/3--plateau have dispersion relations as in Eq. (\ref{eq:diluteregime}), with $\mu=\pm (h-h_{\pm})$.
For $h\lesssim h_{-}$, the magnons carry spin $\Delta S^z=-1$, while
for $h\gtrsim h_{+}$, the excitations carry spin $\Delta S^z=+1$.
The $\Delta S^z=-1$ excitations can thus be understood as holes, in the
reciprocal $q$-space, in a filled band of $\Delta S^z =+1$ hard-core magnons, and the bulk gap $\Delta(\lambda)$
is the particle-hole gap. The plateau closes at the KT quantum critical point:
$\lambda_{KT}=-0.419\pm0.004$ and $h_{KT}=0.290\pm0.002$, estimated through the procedure described below.
\subsection{Kosterlitz-Thouless transition point: $\lambda_{KT}$ and $h_{KT}$}
In the LL gapless phase shown in Fig. \ref{fig:mag}(c), the transverse spin correlation function
should obey the asymptotic power-law behavior given by \cite{giamarchi2003quantum}
\begin{equation}
\Gamma(r)\sim \frac{1}{r^\frac{1}{2K}},
\label{eqG}
\end{equation}
where $r$ is
the distance between spins and $K$ is the Luttinger liquid parameter, which
depends on $h$ (or $m$) and $\lambda$.
At the Kosterlitz-Thouless transition, the magnetization has
the fixed value $m=1/2$ and the transition is induced by changing $\lambda$.
In this case, $K=2$ at the critical point $\lambda=\lambda_{KT}$.
We estimate the value of $\lambda_{KT}$ through a method successfully used to estimate the KT
transition points in a one-dimensional Bose-Hubbard model in Ref. \cite{Kuhner}.
In our case, the procedure consists of identifying the values of $\lambda$
at which $K=2$ for $m=1/2$ in finite-size systems, and extrapolating the results to $N_c\rightarrow\infty$.
We calculate the transverse spin correlation functions as
\begin{equation}
\Gamma(r)\equiv\langle\langle S^+(l)S^-(l+r)\rangle\rangle_l,
\end{equation}
where $\langle\langle \ldots \rangle\rangle_l$ indicates the quantum expectation value
averaged over the cell position $l$, i.e., over all pairs
of cells separated by a distance $r$, in order to minimize the effects of the
open boundaries of the chain.
\begin{figure}
\includegraphics*[width=0.43\textwidth]{fig2.eps}
\includegraphics*[width=0.43\textwidth]{fig2de.eps}
\caption{Critical $\lambda$ of the Kosterlitz-Thouless transition: $\lambda_{KT}$.
(a) Transverse spin correlation functions $\Gamma(r)=\langle\langle S^+(l)S^-(l+r)\rangle\rangle_l$
between $A$ spins as a function of distance $r$ for $\lambda=-0.5$ at the magnetization ($m$) of
the 1/3--plateau: $m=(1/2)-(1/2N_c)$, for the number of unit cells indicated. For a given system size,
$\Gamma(r)$ is calculated by averaging over all pairs of spins separated by the distance $r$. (b) Luttinger liquid
exponent $K$ as a function of $1/N_c$
for the three system sizes shown in (a) and $\lambda=-0.5$. The value of $K$ is determined by fitting $\Gamma(r)$
to the expected long-distance power-law behavior $1/r^{1/2K}$ through the indicated intervals of $r$.
Full lines are linear extrapolations of $K$ to $N_c\rightarrow\infty$, considering the two largest system sizes.
(c) Extrapolated value of $K$ as a function of $\lambda$ for each fitting interval
indicated in (b). The critical $\lambda$ is estimated from the minimum and maximum values of
$\lambda$ at which $K=2$, within the set of investigated fitting intervals.
(d) 1/3 -- plateau width $\Delta(\lambda)_{N_c}$ as a function of $1/N_c$ for
the indicated values of $\lambda$; dashed lines are fits to a polynomial expression.
(e) ($\bullet$) $\Delta(\lambda)$ from (d) as a function of $\lambda$. The full line is the
fit of these data to the essential-singularity formula $A\exp{\left(B/\sqrt{\lambda-\lambda_{KT}}\right)}$.}
\label{fig:lc}
\end{figure}
In Fig. \ref{fig:lc}(a), we show $\Gamma(r)$ between $A$ spins for $\lambda=-0.5$ and $N_c=121,181,\text{ and }241$,
at $m=1/2-(1/2N_c)$.
For each system size, we fit the data in different intervals of $r$ to the asymptotic expression in Eq. (\ref{eqG}). The following intervals were considered for $r$: $[1,8]$; $[1,16]$; $[1,60]$; $[16,32]$;
and $[32,48]$ for values of $\lambda$ around the KT transition. In particular, in Fig. \ref{fig:lc}(b) we show $K$ as
a function of the system size for $\lambda=-0.5$ and the chosen $r$-intervals. We see that a straight line is a good scaling
function for $K$ in all studied $r$-intervals. Hence, we fit a linear function to the
data of the two largest system sizes in order to obtain a reliable extrapolated value of $K$, i.e.,
one with very little dispersion. Indeed, for the case shown in Fig. \ref{fig:lc}(b), $\lambda=-0.5$, the extrapolated
value of $K$ is $2.218 \pm 0.006$. In Fig. \ref{fig:lc}(c), we show the extrapolated values of $K$ as
a function of $\lambda$ for each of the chosen $r$-intervals. The KT critical value of $\lambda$:
\begin{equation}
\lambda_{KT}=-0.419\pm0.004,
\end{equation}
is estimated by considering the minimum and maximum values of $\lambda$ at which $K=2$, in all chosen $r$-intervals.
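The two numerical steps of this estimate (the power-law fit of $\Gamma(r)$ and the linear extrapolation in $1/N_c$) can be sketched as below; the correlation data and the finite-size values of $K$ are synthetic placeholders, generated only to illustrate the pipeline (the placeholder pair is chosen to land near the extrapolated value quoted above for $\lambda=-0.5$):

```python
import numpy as np

def fit_K(r, gamma):
    """Fit Gamma(r) ~ r^(-1/(2K)) on a log-log scale and return K."""
    slope = np.polyfit(np.log(r), np.log(gamma), 1)[0]  # slope = -1/(2K)
    return -1.0 / (2.0 * slope)

# Synthetic correlations with a known Luttinger parameter K0:
K0 = 2.2
r = np.arange(1, 61)
gamma = r ** (-1.0 / (2.0 * K0))
print(fit_K(r[15:32], gamma[15:32]))  # recovers ~2.2 from the [16,32] window

# Linear extrapolation of finite-size estimates K(N_c) to N_c -> infinity,
# using the two largest sizes (placeholder values, not DMRG data):
Nc = np.array([181.0, 241.0])
K_finite = np.array([2.230, 2.227])
K_inf = np.polyval(np.polyfit(1.0 / Nc, K_finite, 1), 0.0)
print(K_inf)
```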
The bulk gap $\Delta (\lambda)$ vanishes following an essential-singularity form
\begin{equation}
\Delta(\lambda)=A\exp{\frac{B}{\sqrt{\lambda-\lambda_{KT}}}},\label{essenSin}
\end{equation}
where $A$ and $B$ are constants. In Fig. \ref{fig:lc}(d) we show a scaling analysis of the plateau width
for some values of $\lambda$ in the gapped phase. In Fig. \ref{fig:lc}(e), we present the extrapolated
values of the bulk gap as a function of $\lambda$, together with their fit to expression (\ref{essenSin}).
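A minimal sketch of such a fit (using \texttt{scipy.optimize.curve\_fit} on synthetic gap data, with $\lambda_{KT}$ held fixed at the estimate above; the values of $A$ and $B$ here are arbitrary choices of ours, not the fitted constants of Fig. \ref{fig:lc}(e)):

```python
import numpy as np
from scipy.optimize import curve_fit

LAMBDA_KT = -0.419  # held fixed at the estimate obtained above

def gap(lam, A, B):
    """Delta(lambda) = A * exp(B / sqrt(lambda - lambda_KT)); B < 0 so Delta -> 0."""
    return A * np.exp(B / np.sqrt(lam - LAMBDA_KT))

# Synthetic gap data generated with known (A, B), to illustrate the fit:
lam = np.linspace(-0.35, 0.4, 12)
delta = gap(lam, 1.3, -0.8)

popt, _ = curve_fit(gap, lam, delta, p0=(1.0, -1.0))
print(popt)  # ~ [1.3, -0.8]
```

In the actual analysis, the extrapolated $\Delta(\lambda)$ data of Fig. \ref{fig:lc}(e) replace the synthetic values.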
\begin{figure}
\includegraphics*[width=0.40\textwidth]{fig3.eps}
\caption{Critical $h$ of the KT transition: $h_{KT}$.
Extreme fields of the finite-size 1/3 -- plateau magnetization: $m=(1/2)-(1/2N_c)$, as
a function of $1/N_c$ for $\lambda=-0.415$ and $\lambda=-0.423$, which are the estimated minimum and
maximum values of $\lambda$ at the KT transition.
For each value of $\lambda$, we use a linear extrapolation in $1/N_c$ to evaluate
the thermodynamic-value of $h$ for $m=1/2$. The critical field is estimated
as the average of the extrapolated values.}
\label{fig:hc}
\end{figure}
The value of the critical field $h_{KT}$ can be estimated by a scaling analysis of the extreme fields $h_{-}$ and $h_{+}$ of the
finite-size 1/3 -- plateau magnetization at $m=1/2-(1/2N_c)$. In Fig. \ref{fig:hc}, we
present $h_{-}$ and $h_{+}$ as a function of system size for the minimum and maximum values of $\lambda_{KT}$: -0.415 and -0.423.
In both cases, an excellent linear scaling function fits the data for $h_-$ and $h_+$.
For $\lambda=-0.415$, the extrapolated values of $h_{-}$ and $h_{+}$
differ by $7\times10^{-5}$; while for $\lambda=-0.423$, the difference is $5\times10^{-5}$. We estimate the critical
field of the KT transition, $h_{KT}$, as the range from the extrapolated value of $h_{-}$ at $\lambda=-0.423$ to
the extrapolated value of $h_{+}$ at $\lambda=-0.415$, thus obtaining:
\begin{equation}
h_{KT}=0.290\pm0.002.
\end{equation}
The $AB_2$ anisotropic chain is invariant under the exchange of the two $B$ sites of a unit cell, so
the Hamiltonian does not connect the singlet and triplet states of these pairs. The localized singlet pairs appear
in higher-energy states of the system that are activated by neither the magnetic field nor the anisotropy.
Thus, the $h\text{ vs. }\lambda$ phase diagram of the $AB_2$ anisotropic chain is the same as that
of the alternating spin-(1/2,1) anisotropic chain \cite{YamamotoPRB99,Solid15Liu},
and we can compare the results for this chain with our estimates for $\lambda_{KT}=-0.419\pm0.004$ and $h_{KT}=0.290\pm0.002$.
These values disagree with the ones suggested for the anisotropic alternating chain
in Ref. \cite{Solid15Liu}, $\lambda=-0.53$ and $h=0.23$, obtained by observing the behavior
of the two-site entanglement calculated with the infinite time-evolving block-decimation (iTEBD)
algorithm. On the other hand, the values estimated in Ref. \cite{YamamotoPRB99} through a finite-size analysis of the central charge and plateau size, $\lambda=-0.41\pm0.01$ and $h=0.293$, are compatible with our more precise results.
\section{Edge magnon excitations of the gapped 1/3 -- plateau}
\label{sec:edge}
In our open chain, the topological quantum
phase transition
from the insulating ($z=2$) to the metallic ($z=1$) phase manifests itself in the penetration of the
edge (surface) states into the bulk \cite{Griffith2018,*Rufo2019}. We start by discussing
the magnon edge states associated with the topological insulator at the 1/3 -- plateau in
the open AB$_2$-chain of size $N_c=121$ and $\lambda=0.4$. In Fig. \ref{fig:edgea}(a) we present $m(h)$ in the vicinity of the 1/3 -- plateau ($m=0.5$ in the thermodynamic limit).
In this finite-size system, the $m$-states that characterize the 1/3 -- plateau phase are
labeled by \cn{1} ($m=60/121$), \cn{2} ($m=61/121$), and \cn{3} ($m=62/121$); while the first extended state
above the plateau is labeled by \cn{4} ($m=63/121$).
As $m$ changes from a state $\cn{i}$ to a state $\cn{f}$, the change in the average distribution of $\Delta S^z=+1$ magnons on
sites $A$, $\langle n_A\rangle$, and sites $B=B_1+B_2$, $\langle n_B\rangle$, is calculated through
$\langle n_X\rangle_{\cn{i}\rightarrow\cn{f}}=\langle S^z_X\rangle_{f}-\langle S^z_X\rangle_{i}$,
with $X=A\text{ or }B$, as shown in the panels of Fig. \ref{fig:edgea}(b).
In panel $\cn{1}\rightarrow\cn{2}$, the magnon distribution indicates that a magnon added to the
state $\cn{1}$ is localized at the left edge of the chain; while
a second magnon added to $\cn{1}$, panel $\cn{1}\rightarrow\cn{3}$, is localized at the right edge. Thus, the
distributions of one- and two-magnon states above $\cn{1}$ indicate the presence of localized states at
both edges of the chain, as implied by the inversion
symmetry of the finite-size chain relative to its center, with the density on $A$ sites higher than that on $B$ sites.
Concerning the three-magnon state, panel $\cn{1}\rightarrow\cn{4}$ in Fig. \ref{fig:edgea}(b), the magnon
distribution evidences that the third magnon occupies a metallic state, which extends throughout the bulk.
Indeed, panel $\cn{3}\rightarrow\cn{4}$ in Fig. \ref{fig:edgea}(b) presents the distribution of
this one-magnon extended state, which is clearly isolated from the edge states. In Appendix \ref{sec:appendixA}
we show that the magnetization and magnon distributions for an even number of unit cells and the same
boundary conditions have the same physical features, while using a boundary condition with a $B_1,B_2$ pair
at one end gives rise to only one edge state. Further, in Appendix \ref{sec:appendixB} we present the
average local magnetizations along the chain, from which the magnon distributions were calculated.
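The distribution calculation above amounts to a per-cell difference of local magnetization profiles. A minimal sketch of this bookkeeping (illustrative only; the profiles below are hypothetical placeholders, not DMRG data):

```python
# Sketch of <n_X>_{i->f} = <S^z_X>_f - <S^z_X>_i per unit cell.
# All numerical profiles here are hypothetical placeholders.

def magnon_distribution(sz_initial, sz_final):
    """Per-cell change in <S^z>; a Delta S^z = +1 magnon added to the
    initial state appears as a positive density in the result."""
    return [f - i for i, f in zip(sz_initial, sz_final)]

# Toy 4-cell profiles: one magnon sitting mostly on the left edge.
sz_state1 = [0.45, 0.50, 0.50, 0.45]   # <S^z_A> per cell, state (1)
sz_state2 = [1.15, 0.70, 0.60, 0.45]   # <S^z_A> per cell, state (2)
n = magnon_distribution(sz_state1, sz_state2)
# sum(n) equals the number of added magnons (here 1).
```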
\begin{figure}
\includegraphics*[width=0.47\textwidth]{fig4.eps}
\caption{DMRG results for $m(h)$ and the average magnon distribution along the AB$_2$ open chain with $N_c=121$, at $\lambda=0.4$.
(a) $m(h)$ in the vicinity of the 1/3 -- plateau displaying the indicated $m$-states: \cn{1} ($m=60/121$),
\cn{2} ($m=61/121$), and \cn{3} ($m=62/121$); and the first gapless $m$-state above the plateau (onset of the continuum):
\cn{4} ($m=63/121$).
(b) Average magnon distribution at sites $A$, $\langle n_A\rangle$, and $B$,
$\langle n_B\rangle\equiv\langle n_{B_1}\rangle+\langle n_{B_2}\rangle$, as a function of cell
position $l-1$. Excitations $\cn{1}\rightarrow\cn{2}$,
$\cn{1}\rightarrow\cn{3}$, and $\cn{1}\rightarrow\cn{4}$ create 1, 2, and 3 magnons above the $m$-state \cn{1};
while $\cn{3}\rightarrow\cn{4}$ creates one magnon in the $m$-state \cn{3}.}
\label{fig:edgea}
\end{figure}
Now, we shall focus on the very interesting behavior of edge and bulk magnon excitations as the 1/3 -- plateau gets
closer to the KT critical point $(\lambda_{KT},h_{KT})$. In Fig. \ref{fig:edgeb} (semi-log plots), we
present the average distributions of one ($\cn{1}\rightarrow\cn{2}$) and two ($\cn{1}\rightarrow\cn{3}$) magnon
excitations above $\cn{1}$, as well as the isolated one-magnon extended state (excitation $\cn{3}\rightarrow\cn{4}$),
for $\lambda=0.1$, $0.0$, and $-0.1$, corresponding to the first, second, and third columns, respectively. For $\lambda=0.1$
(first column) the one-magnon state is exponentially localized at the right edge, while the two-magnon state displays
one localized magnon at each edge, similarly to the $\lambda=0.4$ case in Fig. \ref{fig:edgea}(b). Thus, left and right
edge states are still degenerate. However, at $\lambda=0$ (second column), the gap between the two edge states
[$\equiv \Delta h=6\times 10^{-4}$, as shown in Fig. \ref{fig:edgebb}(a)] is open and the one-magnon state
displays a symmetrical density on \textit{both edges} of the chain due to hybridization, thus leading to
bulk penetration. Also, the two-magnon state exhibits similar behavior with a small dip
at the center of the chain. Further, as shown in Fig. \ref{fig:edgeb}, as the bulk gap $\Delta(\lambda)$
(width of the 1/3 -- plateau) decreases the localization length $\xi$ of
the edge states
increases, since $\xi(\lambda)\sim 1/\Delta(\lambda)$, and the edge state becomes more extended. In fact,
for $\lambda=-0.1$,
the density profiles of the one- and two-magnon edge states are very extended,
with the density at the boundaries approaching their values in bulk. Using data
from the excitation $\cn{1}\rightarrow\cn{3}$ in Fig. \ref{fig:edgeb} for $\lambda=0.1$, $0.0$, and $-0.1$,
we have estimated the values of the localization length: $\xi=7.4,~18,\text{ and }41$, respectively.
On the other hand, for $\lambda=0.1$, the weight at the boundaries of the isolated one-magnon extended state, excitation $\cn{3}\rightarrow\cn{4}$,
is much higher than the practically negligible weight in the
$\lambda=0.4$ case [see Fig. \ref{fig:edgea}(b)].
In fact, as the gap closes, the insulating bulk is squeezed, as
shown in Fig. \ref{fig:edgeb} by the decrease in the distance between the two minima in the $\cn{3}\rightarrow\cn{4}$
excitation, and also by the increasing penetration of the edge states for the two-magnon
$\cn{1}\rightarrow\cn{3}$ state.
Notably, far enough from the boundaries, the bulk wavefunction
of the $\cn{3}\rightarrow\cn{4}$ one-magnon state is that of a squeezed chain of size $L-2a_b$, where $a_b$ is
the boundary scattering length of an effective
repulsive potential \cite{Sca2}. A more detailed quantitative discussion
is presented in Sec. \ref{secsec:bsca}.
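The localization lengths quoted above follow from a log-linear least-squares fit of the density tail over a fixed window (the caption of Fig. \ref{fig:edgeb} uses $30\leq x\leq 40$). The procedure can be sketched as follows; this is illustrative code on synthetic data, not the actual analysis scripts:

```python
# Sketch: estimating the localization length xi by a least-squares fit
# of log <n_A>(x) to c - x/xi over a window of cell positions.
# Synthetic exponential data stand in for the DMRG output.
import math

def fit_localization_length(xs, ns):
    """Ordinary least squares for log n = c - x/xi; returns xi."""
    ys = [math.log(n) for n in ns]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -1.0 / slope

xi_true = 7.4                      # the value quoted for lambda = 0.1
xs = list(range(30, 41))           # fitting window 30 <= x <= 40
ns = [math.exp(-x / xi_true) for x in xs]
xi_fit = fit_localization_length(xs, ns)
```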
\begin{figure}
\includegraphics*[width=0.47\textwidth]{fig5.eps}
\caption{DMRG results for the average magnon distributions $\langle n_A\rangle$ and $\langle n_B\rangle$ along the AB$_2$ open chain
with $N_c=121$, as the KT transition gets closer. A semi-log scale is used in the figures. The panel columns are data for
$\lambda=-0.1,~0.0,~\text{ and }0.1$, from left to right; while
panel lines show $\langle n_A\rangle$ and $\langle n_B\rangle$ for the $\cn{1}\rightarrow\cn{2}$, $\cn{1}\rightarrow\cn{3}$, and
$\cn{3}\rightarrow\cn{4}$ excitations. The localization length $\xi$ shown
in the second line of the panels is obtained by fitting the data of $\langle n_A\rangle$ in the range
$30\leq x \leq 40$ to $\text{e}^{-x/\xi}$, with cell position $x=l-1$.}
\label{fig:edgeb}
\end{figure}
\section{Gapped and gapless excitations around the topological KT transition}
\label{sec:bands}
In Fig. \ref{fig:edgebb}(a), we show $m(h)$ in the vicinity of the 1/3 -- plateau
for the indicated values of $\lambda$ and using the same state labeling of Figs. \ref{fig:edgea} and \ref{fig:edgeb}.
A remarkable feature is the \textit{breaking of the degeneracy} between states
$\cn{2}$ and $\cn{3}$ for $\lambda\sim 0.0$ (black curve), as one decreases $\lambda$ from
$\lambda=0.1$ (green curve), in accord with the magnon distribution in Fig. \ref{fig:edgeb}.
In fact, for $\lambda=0.0$, there is a gap of size $6\times10^{-4}$ between these states,
implying a $m$-step of width $\Delta h =6\times10^{-4}$ in the $m$-state $\cn{2}$.
Further, the width of the $m$-step increases (decreases) at the $m$-state $\cn{2}$ ($\cn{1}$) as
the gap closes and all states become part of the continuum at the KT critical point $(\lambda_{KT},h_{KT})$ in
the thermodynamic limit.
Accordingly, in our finite-size system we observe uniformity in the values of the widths of the
$m$-steps, as shown in Fig. \ref{fig:edgebb}(a) for $\lambda=-0.5$ (blue curve),
a signature of a gapless LL phase.
\begin{figure}
\includegraphics*[width=0.47\textwidth]{fig6.eps}
\caption{(a) DMRG results for $m(h)$ in the vicinity of the 1/3--plateau of the AB$_2$ open
chain with $N_c=121$ for the indicated values of $\lambda$
and the indicated $m$-states: \cn{1} ($m=60/121$),
\cn{2} ($m=61/121$), \cn{3} ($m=62/121$), and \cn{4} ($m=63/121$), as in Fig. \ref{fig:edgea}. Notably,
for $\lambda=0.0$, there is a finite-size step of size $\Delta h = 6\times 10^{-4}$ at the $m$-state \cn{2}.
(b) Upper and lower band energies for
$\Delta S^z =+1$ magnons of wave-vector $q$, with $h$ at the center of the
1/3 -- plateau, $(h_{+}+h_{-})/2$, for $\lambda=0.4$ (\textcolor{red}{$\blacktriangle$}) and
$-0.5$ (\textcolor{blue}{$\bullet$}), using ED results from $N_c=10$ and $N_c=12$ under
closed boundary conditions.
We also indicate the two-fold degenerate magnon edge states (\textcolor{red}{\textbf{--}}) below the bottom
of the magnon upper band, using DMRG for $\lambda=0.4$ and an open chain with $N_c=121$.
}
\label{fig:edgebb}
\end{figure}
In addition, a remarkable topological change in the dispersion relation of the low-energy magnetic
excitations takes place around the KT critical point.
There are two kinds of bulk magnetic excitations from the 1/3 -- plateau: one carrying a spin $\Delta S^z=+1$, which
increases the 1/3 -- plateau total spin $S^{z}_{1/3}$ by one unit; and the other, carrying a spin $\Delta S^z =-1$,
which decreases $S^{z}_{1/3}$ by one unit. The excitations with $\Delta S^z=-1$ can be understood as a hole, in the
reciprocal $q$-space, in a filled band of $\Delta S^z =+1$ excitations. The magnetic field acts as a chemical
potential: for $h=h_{-}$ the lower band
is filled and the upper one is empty; increasing $h$, the magnetization does not change (plateau region) up to
$h=h_+$, at which the upper band starts to be filled. Defining $E_{1/3}$ as the total energy at the 1/3 -- plateau
magnetization and $h=0$, the energies $\varepsilon_{\pm}(q)$ of the upper ($+$) and lower ($-$) bands are given
by
\begin{equation}
\varepsilon_{\pm}(q)=\pm [E_{\pm}(q)-E_{1/3}]-h,
\end{equation}
where $E_{+}(q)$ and $E_{-}(q)$ are the lowest total energy
at the sector $q$ for $S^z=S^{z}_{1/3}+1$ and $S^z=S^{z}_{1/3}-1$, respectively, with $h=0$.
In Fig. \ref{fig:edgebb}(b) we show $\varepsilon_{\pm}$ for a closed system with $N_c=10$ and 12, and $h=(h_++h_-)/2$
for $\lambda=0.4$ (gapped magnon in the 1/3 -- plateau phase) and $\lambda=-0.5$ (gapless spinon in the LL phase).
The expected \cite{Tsvelik90,Affleck91,Sorensen1993,Sachdev1994,PhysRevB.55.58} long-wavelength behavior is
also sketched with full lines.
For $h_{-} < h < h_{+}$ (inside the 1/3 -- plateau), the excitations should obey a quadratic dispersion
relation \cite{Tsvelik90,Affleck91,Sorensen1993,Sachdev1994,PhysRevB.55.58}
\begin{equation}
\varepsilon_{\pm}(q)\rightarrow h_{\pm}\pm\frac{v_{\pm}^2}{2h_{\pm}}q^2-h\text{ as }q\rightarrow 0,
\end{equation}
where $v_{\pm}$ are the spin-wave velocities (see discussion in Sec. \ref{sec:pd}).
For $\lambda=0.4$, shown in Fig. \ref{fig:edgebb}(b),
a fitting (full lines) gives $v^2/2h \approx 0.61$ (0.62) for the upper (lower) band.
On the other hand, in the gapless LL phase, the upper and lower bands are joined at $q=0$, and
the excitations follow a linear dispersion relation
\begin{equation}
\varepsilon_{\pm}(q)\rightarrow \pm v_s|q|\text{ as }q\rightarrow0,
\end{equation}
where $v_s$ is the spinon velocity.
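The quadratic coefficient $v^2/2h$ quoted for $\lambda=0.4$ comes from fitting the band bottom to the dispersion above. The fit can be sketched as follows; the band energies, gap, and coefficient below are synthetic/assumed values for illustration, not the ED data:

```python
# Sketch: extracting the quadratic coefficient c = v^2/(2 h_+) of the
# magnon dispersion eps_+(q) ~ gap + c q^2 at small q, by a
# least-squares fit of (eps - gap) against q^2 through the origin.

def quadratic_coefficient(qs, eps, gap):
    """Fit eps(q) - gap = c * q^2 (no intercept); returns c."""
    num = sum((e - gap) * q * q for q, e in zip(qs, eps))
    den = sum(q ** 4 for q in qs)
    return num / den

gap = 0.35                   # hypothetical h_+ - h at the plateau center
c_true = 0.61                # the fitted v^2/2h quoted for the upper band
qs = [0.1 * k for k in range(1, 6)]
eps = [gap + c_true * q * q for q in qs]   # synthetic band energies
c_fit = quadratic_coefficient(qs, eps, gap)
```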
\section{Boundary and magnon-magnon scattering lengths}
\label{sec:sca}
\subsection{Boundary scattering length for magnon excitations from the 1/3 -- plateau magnetization}
\label{secsec:bsca}
\begin{figure}
\includegraphics*[width=0.47\textwidth]{fig7.eps}
\caption{Average magnon density $\langle n_l \rangle$ per unit cell along the chain for
the extended one-magnon excitation in the 1/3--plateau state. ({\color{red}$\bullet$}) DMRG results for $N_c=121$. The full line is a fit of the DMRG data to the continuum-limit expression for the probability density (far from the boundaries) of a particle in a box with a finite potential at the boundaries: $A\sin^2[\pi(x-a_b)/(N_c-2a_b)]$,
where $a_b$ parameterizes the interaction with the boundaries, $A$ is a fitting parameter, and
$x=l-1$. The fitting is done using the data in the range $x=45\ldots75$.
}
\label{fig:magnondens}
\end{figure}
Here, we consider the average density profile of the isolated extended magnon excitation, obtained
from the magnetization change $\cn{3}\rightarrow\cn{4}$, as described in Sec. \ref{sec:edge}.
In our open chain, the bulk magnon
lives on a squeezed chain with size \cite{Sca2} $N_c-2a_b$, where the
\textit{boundary scattering length} $a_b$
accounts for the repulsive ($a_b>0$) boundary potentials. Thereby, far enough from the boundaries,
the bulk single-particle wavefunctions in the open chain can be written as \cite{Sca2}
\begin{equation}
\psi_p(x)= \sqrt{A}\sin\left[\frac{p\pi (x-a_b)}{(N_c-2a_b)}\right],
\label{eq:singlep}
\end{equation}
where $p=1,~2,\ldots$ and $A$ is a constant. In Fig. \ref{fig:magnondens} we fit the DMRG data for the chain
with $N_c=121$ unit cells to the expression in Eq. (\ref{eq:singlep}) with $p=1$,
and obtain $a_b=0.6$, $8.0$, and $18.0$, for $\lambda=1.0$, 0.1, and $-0.1$, respectively.
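Since $a_b$ enters Eq. (\ref{eq:singlep}) nonlinearly, a simple way to perform this fit is a one-dimensional scan over $a_b$, fixing the amplitude by linear least squares at each trial value. A sketch on synthetic data (the density profile and amplitude are placeholders; only the window and $a_b=8.0$ echo the values quoted above):

```python
# Sketch: estimating the boundary scattering length a_b by fitting the
# bulk magnon density to A sin^2[pi (x - a_b)/(N_c - 2 a_b)] over a
# central window, via a 1-d scan over a_b with A set by least squares.
import math

def fit_ab(xs, ns, nc, ab_grid):
    best = None
    for ab in ab_grid:
        model = [math.sin(math.pi * (x - ab) / (nc - 2 * ab)) ** 2
                 for x in xs]
        amp = sum(m * n for m, n in zip(model, ns)) / sum(m * m for m in model)
        resid = sum((n - amp * m) ** 2 for m, n in zip(model, ns))
        if best is None or resid < best[0]:
            best = (resid, ab)
    return best[1]

nc, ab_true, amp_true = 121, 8.0, 0.02       # synthetic "truth"
xs = list(range(45, 76))                      # central fitting window
ns = [amp_true * math.sin(math.pi * (x - ab_true) / (nc - 2 * ab_true)) ** 2
      for x in xs]
ab_fit = fit_ab(xs, ns, nc, [0.5 * k for k in range(0, 41)])
```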
\subsection{Fully polarized plateau: insulator with trivial topology, and magnon-magnon scattering length}
\label{secsec:fpsca}
The fully polarized plateau is an example of a topologically trivial insulator,
with a Chern number $C_{3/2}=0$ (see discussion in Sec. \ref{sec:pd}). Thus, in an open chain, the fully polarized state does not have edge states.
Below we present the bulk magnon excitations from the fully polarized plateau, including the linear correction
for the square-root law, and discuss the magnon density profile for two magnons in an open chain.
\begin{figure}
\includegraphics*[width=0.47\textwidth]{fig8.eps}
\caption{Dilute magnon regime and the scattering length $a$, DMRG results for $N_c=121$.
(a) Magnon density $\langle n_l\rangle$ along the chain for two magnons added to the FP state for the indicated values of $\lambda$.
(b) Average magnon density per unit cell $n$ for the FP -- plateau: $n=m_{FP}-m$, with $m_{FP}=(3/2)+(1/2N_c)$,
as a function of $\mu^{1/2}$, where $\mu=h_s-h$ is the effective chemical potential and $h_s$ is the saturation field. Inset: scattering length $a$ derived from a fitting of the DMRG results to the expression of the effective fermion model with a linear correction: $n/\mu^{1/2}=\beta -\frac{4}{3}a\beta^2\mu^{1/2}$, with $\beta$ and the scattering length $a$ as fitting parameters.}
\label{fig:bulk}
\end{figure}
In Fig. \ref{fig:bulk}(a) we present the two-particle average magnon density along the chain for the fully
polarized plateau, $\langle n_l\rangle$. For comparison, we show the free fermion density for two fermions in a
chain of size $N_c-1$ and vanishing boundary condition:
\begin{equation}
\frac{2}{N_c-1}\left[\sin^2\left(\frac{\pi x}{N_c-1}\right)+\sin^2\left(\frac{2\pi x}{N_c-1}\right)\right],
\label{eq:twofermion}
\end{equation}
with $x=l-1$. We notice the absence of edge states in this case for $-0.9\leq\lambda\leq1.0$. A tiny departure
from the free fermion result is observed as $\lambda\rightarrow -1$, the critical ferromagnetic point. The average magnon density increases at the boundaries with a decrease in the central region as $\lambda\rightarrow -1$. We explain it
by noticing that if a $\Delta S^z = -1$ magnon is at a boundary $A$ site, with the other sites fully polarized, the value of the longitudinal term of the energy is $-\lambda$. If the magnon is not at a boundary site, this energy term is $-4\lambda$ (at a $B_1$ or $B_2$ site) or $-2\lambda$ (at an $A$ site). Hence, for $\lambda<0$ the effect of the boundaries is represented by an attractive potential at the chain ends. However, while in Fig. \ref{fig:magnondens} we can observe a crossover between the profiles at the center and at the boundaries of the
chain, this crossover is not evidenced in the density profiles shown in Fig. \ref{fig:bulk}(a).
In the highly dilute limit of magnons near the $h_s(\lambda)$ line, the bulk magnon density per unit cell is given by
\begin{equation}
n=\sqrt{\frac{2 h_s \mu}{\pi^2 v^2}},
\end{equation}
with $n=m_{FP}-m$, $\mu=h_s-h$, and $v$ in Eq. (\ref{eq:vfp}).
Including the first linear correction \cite{Sca1,Sca2,affleck2004,Affleck2005,Sca3} to the square-root law,
the magnon density becomes
\begin{equation}
n=\sqrt{\frac{2 h_s}{\pi^2 v^2}}\sqrt{\mu}-a\frac{4}{3}\frac{2 h_s}{\pi^2 v^2}\mu,
\label{eq:magnondens}
\end{equation}
where $a$ is the magnon-magnon scattering length, which can be positive or negative. For an infinite hard-core potential, $a>0$ and is equal to the core size, while $a<0$ for a repulsive delta-function potential. Hence,
from the effective low-energy theory, we expect $a<0$.
In Fig. \ref{fig:bulk}(b), we show DMRG data for $n$ normalized by $\mu^{1/2}$ as a function of $\mu^{1/2}$ for
$N_c=121$. The magnetization values shown range from $m=m_{FP}-(3/N_c)$ (three magnons) to $m=1$
(one magnon per unit cell). In order to obtain $a$ as a function of $\lambda$,
we compare the DMRG data with the expression in Eq. (\ref{eq:magnondens}).
In fact, from Eq. (\ref{eq:magnondens}), we find
\begin{equation}
\frac{n}{\mu^{1/2}}=\beta-a\frac{4}{3}\beta^2 \mu^{1/2},
\label{eq:fita}
\end{equation}
with
\begin{equation}
\beta(\lambda)=\sqrt{\frac{2h_s}{\pi^2 v^2}}.
\label{eq:beta}
\end{equation}
We fit the full set of DMRG data in Fig. \ref{fig:bulk}(b) to Eq. (\ref{eq:fita}), for each $\lambda$ value,
by considering $\beta$ and $a$ as fitting parameters. Indeed, the relative departure between the values
of $\beta$ from the fitting and the ones obtained from Eq. (\ref{eq:beta}) ranges from 5\% to 10\%.
In Fig. \ref{fig:bulk}(b), we observe that $n/\mu^{1/2}$ is almost constant for $\lambda=0.9$, implying
the prevalence of the square-root behavior for these magnetization values. The scattering length $a$, shown
in the Inset, is $\approx 0$ for $\lambda=0.9$, and the hard-core boson or free fermion model is thus the best
effective theory. Notice that the value of $a$ decreases smoothly as $\lambda$ decreases
and takes only negative values as expected for a $\delta$-function potential.
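The extraction of $a$ from Eq. (\ref{eq:fita}) is an ordinary linear fit in $s=\sqrt{\mu}$, with $\beta$ the intercept and $a=-3b_1/4\beta^2$ from the slope $b_1$. A sketch on synthetic data (the values of $\beta$ and $a$ below are assumed for illustration, not fitted results from the paper):

```python
# Sketch: magnon-magnon scattering length a from the dilute-limit law
# n/sqrt(mu) = beta - (4/3) a beta^2 sqrt(mu), via linear least squares
# in s = sqrt(mu).  Synthetic data with assumed beta and a values.

def fit_scattering_length(ss, ys):
    """Linear least squares y = b0 + b1*s; returns (beta, a)."""
    n = len(ss)
    sbar, ybar = sum(ss) / n, sum(ys) / n
    b1 = (sum((s - sbar) * (y - ybar) for s, y in zip(ss, ys))
          / sum((s - sbar) ** 2 for s in ss))
    b0 = ybar - b1 * sbar
    return b0, -3.0 * b1 / (4.0 * b0 * b0)

beta_true, a_true = 1.2, -0.8            # hypothetical values
ss = [0.05 * k for k in range(1, 11)]    # sqrt(mu) sample points
ys = [beta_true - (4.0 / 3.0) * a_true * beta_true ** 2 * s for s in ss]
beta_fit, a_fit = fit_scattering_length(ss, ys)
```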
\section{Summary and conclusions}
\label{sec:summary}
In summary, we use the density matrix renormalization group to discuss the phase diagram of
the anisotropic AB$_2$ chain with an applied magnetic field. In particular, we reveal the locus of
the magnon edge states, observed in finite size systems, inside the gap of the topological 1/3 -- plateau state.
Besides, we use the transverse spin correlation functions to
estimate the critical point of the Kosterlitz-Thouless transition, $\lambda_{KT}=-0.419\pm0.004$ and $h_{KT}=0.290\pm0.002$, reaching a better precision than previously known results. We also display the magnon distribution in
the edge states and in the first extended state above the gap. Further, we follow the penetration of the edge states
in the bulk as the 1/3 -- plateau gap closes. The gap closing is also
accompanied by an effective squeezing of the chain, parameterized by a boundary scattering length. Considering the bulk states, we also use exact diagonalization to show the topological change in the dispersion relation of the excitations
in the vicinity of the Kosterlitz-Thouless transition point. Furthermore, we study the topologically trivial
fully polarized plateau state. Since this insulating state is trivial, we show that the boundary magnon distributions in this case are distinct from those of the excitations from the topological 1/3 -- plateau state.
Particularly, we estimate the magnon-magnon scattering length as a function of the anisotropy
and confirm that it provides a good correction (linear) to the square-root singularity in the
dilute regime of magnons.
We expect that the reported features of the quantum many-body edge and extended states, and the rich
phase diagram of the anisotropic Heisenberg AB$_2$ chain in a magnetic field, notably
the KT transition and the topological change of the excitations, will stimulate theoretical and experimental
investigations in quasi-one-dimensional compounds exhibiting topological 1/3 magnetization plateaus, including
ultra-cold optical lattice analogs.
\begin{acknowledgments}
We acknowledge support from Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior (CAPES),
Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq), and Funda\c{c}\~ao de Amparo \`a Ci\^encia e
Tecnologia do Estado de Pernambuco (FACEPE), Brazilian agencies, including the PRONEX Program which is funded by
CNPq and FACEPE, APQ-0602-1.05/14.
\end{acknowledgments}
\makeatletter
\def\section{\@startsection{section}{1}%
\z@{1.1\linespacing\@plus\linespacing}{.8\linespacing}%
{\normalfont\Large\scshape\centering}}
\makeatother
\theoremstyle{plain}
\newtheorem*{hypab}{Hypothesis Ab}
\newtheorem*{hyp3}{Hypothesis (I)}
\newtheorem*{hyp4}{Hypothesis D5}
\newtheorem*{target}{Target Theorem}
\newtheorem*{thmA}{Theorem A}
\newtheorem*{thmB}{Theorem B}
\newtheorem*{thmC}{Theorem C}
\newtheorem*{T1}{Theorem 1}
\newtheorem*{T2}{Theorem 2}
\newtheorem*{T3}{Theorem 3}
\newtheorem*{MT}{Main Theorem}
\newtheorem*{MH}{Main Hypothesis}
\newtheorem*{nonexistence}{Nonexistence Theorem}
\newtheorem*{conj*}{Root Groups Conjecture}
\newtheorem*{thm1.2}{(1.2) Theorem}
\newtheorem*{thm1.3}{(1.3) Theorem}
\newtheorem*{thm1.4}{(1.4) Theorem}
\newtheorem*{prop*}{Proposition}
\newtheorem{Thm}{Theorem}
\newtheorem{conj}{Conjecture}
\newtheorem{prop}{Proposition}[section]
\newtheorem{question}[conj]{Question}
\newtheorem{thm}[prop]{Theorem}
\newtheorem{cor}[prop]{Corollary}
\newtheorem{lemma}[prop]{Lemma}
\newtheorem{hyp1}[prop]{Hypothesis}
\theoremstyle{definition}
\newtheorem{Def}[prop]{Definition}
\newtheorem*{Def*}{Definition}
\newtheorem{Defs}[prop]{Definitions}
\newtheorem{Defsnot}[prop]{Definitions and notation}
\newtheorem{example}[prop]{Example}
\newtheorem{notrem}[prop]{Notation and a remark}
\newtheorem{notdef}[prop]{Notation and definitions}
\newtheorem{notation}[prop]{Notation}
\newtheorem*{notation*}{Notation}
\newtheorem{remark}[prop]{Remark}
\newtheorem{remarks}[prop]{Remarks}
\newtheorem*{rem}{Remark}
\DeclareMathOperator{\tld}{\sim\!}
\DeclareMathOperator{\inv}{inv}
\newcommand{\mouf}{\mathbb{M}}
\newcommand{\cala}{\mathcal{A}}
\newcommand{\calb}{\mathcal{B}}
\newcommand{\calc}{\mathcal{C}}
\newcommand{\cald}{\mathcal{D}}
\newcommand{\cale}{\mathcal{E}}
\newcommand{\calf}{\mathcal{F}}
\newcommand{\calg}{\mathcal{G}}
\newcommand{\calh}{\mathcal{H}}
\newcommand{\calj}{\mathcal{J}}
\newcommand{\calk}{\mathcal{K}}
\newcommand{\call}{\mathcal{L}}
\newcommand{\calm}{\mathcal{M}}
\newcommand{\caln}{\mathcal{N}}
\newcommand{\calo}{\mathcal{O}}
\newcommand{\calot}{\tilde{\mathcal{O}}}
\newcommand{\calp}{\mathcal{P}}
\newcommand{\calq}{\mathcal{Q}}
\newcommand{\calr}{\mathcal{R}}
\newcommand{\cals}{\mathcal{S}}
\newcommand{\calt}{\mathcal{T}}
\newcommand{\calu}{\mathcal{U}}
\newcommand{\calv}{\mathcal{V}}
\newcommand{\calw}{\mathcal{W}}
\newcommand{\calx}{\mathcal{X}}
\newcommand{\calz}{\mathcal{Z}}
\newcommand{\solv}{\mathcal{SLV}}
\newcommand{\cc}{\mathbb{C}}
\newcommand{\ff}{\mathbb{F}}
\newcommand{\bbg}{\mathbb{G}}
\newcommand{\LL}{\mathbb{L}}
\newcommand{\kk}{\mathbb{K}}
\newcommand{\mm}{\mathbb{M}}
\newcommand{\bmm}{\bar{\mathbb{M}}}
\newcommand{\nn}{\mathbb{N}}
\newcommand{\oo}{\mathbb{O}}
\newcommand{\pp}{\mathbb{P}}
\newcommand{\qq}{\mathbb{Q}}
\newcommand{\rr}{\mathbb{R}}
\newcommand{\vv}{\mathbb{V}}
\newcommand{\zz}{\mathbb{Z}}
\newcommand{\fraka}{\mathfrak{a}}
\newcommand{\frakA}{\mathfrak{A}}
\newcommand{\frakm}{\mathfrak{m}}
\newcommand{\frakB}{\mathfrak{B}}
\newcommand{\frakC}{\mathfrak{C}}
\newcommand{\frakh}{\mathfrak{h}}
\newcommand{\frakI}{\mathfrak{I}}
\newcommand{\frakM}{\mathfrak{M}}
\newcommand{\N}{\mathfrak{N}}
\newcommand{\frakO}{\mathfrak{O}}
\newcommand{\frakP}{\mathfrak{P}}
\newcommand{\T}{\mathfrak{T}}
\newcommand{\frakU}{\mathfrak{U}}
\newcommand{\frakV}{\mathfrak{V}}
\newcommand{\ga}{\alpha}
\newcommand{\gb}{\beta}
\newcommand{\gc}{\gamma}
\newcommand{\gC}{\Gamma}
\newcommand{\gCt}{\tilde{\Gamma}}
\newcommand{\gd}{\delta}
\newcommand{\gD}{\Delta}
\newcommand{\gre}{\epsilon}
\newcommand{\gl}{\lambda}
\newcommand{\gL}{\Lambda}
\newcommand{\gLt}{\tilde{\Lambda}}
\newcommand{\gm}{\mu}
\newcommand{\gn}{\nu}
\newcommand{\gro}{\omega}
\newcommand{\gO}{\Omega}
\newcommand{\gvp}{\varphi}
\newcommand{\gr}{\rho}
\newcommand{\gs}{\sigma}
\newcommand{\gS}{\Sigma}
\newcommand{\gt}{\tau}
\newcommand{\gth}{\theta}
\newcommand{\gTH}{\Theta}
\newcommand{\rmA}{\mathrm{A}}
\newcommand{\rmD}{\mathrm{D}}
\newcommand{\rmI}{\mathrm{I}}
\newcommand{\rmi}{\mathrm{i}}
\newcommand{\rmL}{\mathrm{L}}
\newcommand{\rmO}{\mathrm{O}}
\newcommand{\rmS}{\mathrm{S}}
\newcommand{\rmU}{\mathrm{U}}
\newcommand{\rmV}{\mathrm{V}}
\newcommand{\nsg}{\trianglelefteq}
\newcommand{\rnsg}{\trianglerighteq}
\newcommand{\At}{A^{\times}}
\newcommand{\Dt}{D^{\times}}
\newcommand{\Ft}{F^{\times}}
\newcommand{\Kt}{K^{\times}}
\newcommand{\kt}{k^{\times}}
\newcommand{\ktv}{k_v^{\times}}
\newcommand{\Pt}{P^{\times}}
\newcommand{\Qt}{Q^{\times}}
\newcommand{\calqt}{\calq^{\times}}
\newcommand{\Rt}{R^{\times}}
\newcommand{\Lt}{L^{\times}}
\newcommand{\charc}{{\rm char}}
\newcommand{\df}{\stackrel{\text{def}} {=}}
\newcommand{\lr}{\longrightarrow}
\newcommand{\ra}{\rightarrow}
\newcommand{\hr}{\hookrightarrow}
\newcommand{\lra}{\longrightarrow}
\newcommand{\sminus}{\smallsetminus}
\newcommand{\lan}{\langle}
\newcommand{\ran}{\rangle}
\newcommand{\Ab}{{\rm Ab}}
\newcommand{\Aut}{{\rm Aut}}
\newcommand{\End}{{\rm End}}
\newcommand{\Ker}{{\rm Ker\,}}
\newcommand{\im}{{\rm Im\,}}
\newcommand{\Hom}{{\rm Hom}}
\newcommand{\FRAC}{\rm FRAC}
\newcommand{\Nrd}{\rm Nrd}
\newcommand{\PSL}{\rm PSL}
\newcommand{\SL}{{\rm SL}}
\newcommand{\GL}{{\rm GL}}
\newcommand{\GF}{{\rm GF}}
\newcommand{\R}{{\rm R}}
\newcommand{\Sz}{\rm Sz}
\newcommand{\St}{\rm St}
\newcommand{\Sym}{{\rm Sym}}
\newcommand{\tr}{{\rm tr}}
\newcommand{\tor}{{\rm tor}}
\newcommand{\PGL}{{\rm PGL}}
\newcommand{\ch}{\check}
\newcommand{\s}{\star}
\newcommand{\bu}{\bullet}
\newcommand{\dS}{\dot{S}}
\newcommand{\onto}{\twoheadrightarrow}
\newcommand{\HH}{\widebar{H}}
\newcommand{\NN}{\widebar{N}}
\newcommand{\GG}{\widebar{G}}
\newcommand{\gCgC}{\widebar{\gC}}
\newcommand{\gLgL}{\widebar{\gL}}
\newcommand{\hG}{\widehat{G}}
\newcommand{\hN}{\widehat{N}}
\newcommand{\hgC}{\widehat{\gC}}
\newcommand{\linv}{\varprojlim}
\newcommand{\In}{{\rm In}_K}
\newcommand{\Inc}{{\rm Inc}_K}
\newcommand{\Inv}{{\rm Inv}}
\newcommand{\tit}{\textit}
\newcommand{\tbf}{\textbf}
\newcommand{\tsc}{\textsc}
\newcommand{\hal}{\frac{1}{2}}
\newcommand{\half}{\textstyle{\frac{1}{2}}}
\newcommand{\restr}{\upharpoonright}
\newcommand{\rmk}[1]{\noindent\tbf{#1}}
\newcommand{\widebar}[1]{\overset{\mskip1mu\hrulefill\mskip1mu}{#1}
\vphantom{#1}}
\newcommand{\widedots}[1]{\overset{\mskip1mu\dotfill\mskip1mu}{#1}
\vphantom{#1}}
\newcommand{\llr}{\Longleftrightarrow}
\newcommand{\tem}{{\bf S}}
\newcommand{\tnem}{{\bf NS}}
\numberwithin{equation}{section}
\hyphenation{Tim-mes-feld}
\begin{document}
\title[]{Almost regular involutory automorphisms of uniquely $2$-divisible groups}
\author[Yoav Segev]{Yoav Segev}
\address{
Department of Mathematics \\
Ben-Gurion University \\
Beer-Sheva 84105 \\
Israel}
\email{[email protected]}
\keywords{almost regular involutory automorphism, uniquely $2$-divisible group.}
\subjclass[2000]{Primary: 20E36}
\begin{abstract}
We prove that a uniquely $2$-divisible group that admits an almost
regular involutory automorphism is solvable.
\end{abstract}
\date{\today}
\maketitle
\section{Introduction}
Recall that an automorphism $\nu$ of a group $H$ is
called {\it involutory} if $\nu\ne id$ and $\nu^2=id$.
The automorphism $\nu$ is called {\it almost regular}, if $C_H(\nu)$ is finite.
Recall that a group $U$ is {\it uniquely $2$-divisible}
if for each $u\in U$ there exists a unique $v\in U$ such that $v^2=u$.
Note that in particular a uniquely $2$-divisible group contains no involutions
(i.e.~elements of order $2$).
The purpose of this note is to use the techniques
introduced in the impressive paper \cite{Sh} of Shunkov,
where he proves that a periodic group that admits an almost regular
involutory automorphism is virtually solvable (i.e.~it has
a solvable subgroup of finite index). We prove:
\begin{thm}\label{thm main}
Let $U$ be a uniquely $2$-divisible group. If $U$
admits an involutory almost regular automorphism,
then $U$ is solvable.
\end{thm}
\noindent
Our main motivation for dealing with automorphisms of uniquely $2$-divisible
groups comes from questions about the root groups of special Moufang sets,
and those tend to be uniquely $2$-divisible, see, e.g., \cite{S}.
Indeed, using Theorem \ref{thm main} it immediately follows that
\begin{cor}
Let $\mouf(U,\gt)$ be a special Moufang set. If the Hua
subgroup contains an involution $\nu$ such that $C_U(\nu)$
is finite, then $U$ is abelian.
\end{cor}
\begin{proof}
If $U$ contains involutions, then $U$ is abelian by \cite[Theorem 5.5, p.~782]{DST}.
If $U$ does not contain involutions, then by \cite[Proposition 4.6, p.~5840]{DS},
$U$ is uniquely $2$-divisible, and then by Theorem \ref{thm main} and by the main theorem of \cite{SW},
$U$ is abelian.
\end{proof}
The proof of Theorem \ref{thm main} is obtained as follows.
First note that if $U$ is finite, then $U$ has odd order,
so by the Feit-Thompson theorem $U$ is solvable.
Hence we may assume that $U$ is infinite.
We let $A$ be a maximal abelian subgroup of $U$ (with respect to inclusion) inverted by $\nu$
(i.e.~each element of $A$ is inverted by $\nu$). In Lemma \ref{lem exofA}(2)
we show that we can take $A$ to be infinite. We then show that
for elements $u_1,\dots, u_n\in U$, the involutions
$u_1\nu u_1^{-1},\dots, u_n\nu u_n^{-1}$ in the semi-direct product
$U\rtimes \lan \nu\ran$ invert a subgroup $D\le A$ with $|A:D|<\infty$
(Proposition \ref{prop A:Au}). The next step is to show that
$C_U(D)/D$ is finite and solvable (Lemma \ref{lem CUD}).
Since $K:=\lan \nu u_1\nu u_1^{-1},\dots, \nu u_n\nu u_n^{-1}\ran\le C_U(D)$,
the subgroup $K$ is solvable and $K/Z(K)$ is finite.
Next let $S:=\{x\in U\mid x^{\nu}=x^{-1}\}$.
It is easy to see that an element $y\in U$ is in $S$ iff $y=\nu u\nu u^{-1}$,
for some $u\in U$, so by the above each finitely generated subgroup $H$
of $R:=\lan S\ran$ is solvable and satisfies: $H/Z(H)$ is finite.
Hence $R'$ is periodic (Proposition \ref{prop <S>}).
Using the above mentioned result of Shunkov, we see that $R'$
is solvable, so $R$ is solvable.
As is well known (see \cite{K})
$U=R C_U(\nu)$ and $R\nsg U$. Since $C_U(\nu)$ is finite and uniquely $2$-divisible
it has odd order. By the Feit-Thompson theorem, $C_U(\nu)$
is solvable and this at last shows that $U$
is solvable.
We remark that it is possible that with the aid of the Theorem
on page 286 of \cite{HM},
one can get even more delicate information on $U$, but we do not need
that, so we do not pursue this avenue further.
\section{Notation and preliminary results}\label{sec not}
\begin{notation}\label{not A}
\begin{enumerate}
\item
Throughout this note $U$ is an infinite uniquely $2$-divisible group and
$\nu\in\Aut(U)$ is an involutory automorphism which is almost regular.
\item
We denote by $G$ the semi-direct product of $U$ by $\nu$
and we identify $U$ and $\nu$ with their images in $G$.
We let $\Inv(G)$ denote the set of involutions of $G$.
\item
We let $S:=\{x\in U\mid x^{\nu}=x^{-1}\}$.
\item
The letter $A$ always denotes a fixed infinite maximal
(with respect to inclusion) abelian subgroup of $U$ which is
inverted by $\nu$ (i.e.~all of whose elements are
inverted by $\nu$). The existence of $A$ is guaranteed by
Lemma \ref{lem exofA}(2) and by Zorn's lemma.
\item
For each $u\in U$ we denote by $A_u$ the subgroup of $A$
inverted by $u\nu u^{-1}$.
\end{enumerate}
\end{notation}
\begin{remarks}\label{rem basic}
\begin{enumerate}
\item
Notice that $A$ is uniquely $2$-divisible.
\item
Note also that for any $u\in U$, the subgroup $A_u$ is
uniquely $2$-divisible.
\item
It is easy to check that $S=\{\nu\nu^x\mid x\in U\}$.
\end{enumerate}
\end{remarks}
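As a quick check of Remark \ref{rem basic}(3), the inclusion $\{\nu\nu^x\mid x\in U\}\subseteq S$ is a one-line computation (note that $s:=\nu\nu^x$ lies in $U$, being a product of two elements of the coset $U\nu$; the reverse inclusion uses unique $2$-divisibility, as in the lemma below):

```latex
% One-line check that s := \nu\nu^x is inverted by \nu, so s \in S:
\[
s^{\nu} \;=\; \nu(\nu\nu^{x})\nu \;=\; \nu^{x}\nu
\;=\; (\nu\nu^{x})^{-1} \;=\; s^{-1},
\qquad x\in U .
\]
```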
\begin{lemma}[\cite{N}, Lemma 4.1, p.~239]\label{lem neumann}
Let the group $H$ be the union of finitely many, let us say $n$,
cosets of subgroups $C_1, C_2,\dots, C_n:$
\[
H=\textstyle{\bigcup}_{i=1}^n C_ig_i.
\]
Then the index of (at least) one of these subgroups in $H$ does not exceed $n$.
\end{lemma}
\begin{cor}\label{cor neumann}
Let the group $H$ be the union of finitely many, let us say $n$,
subsets $S_1, S_2,\dots, S_n:$
\[
H=\textstyle{\bigcup}_{i=1}^n S_i.
\]
For each $i$ set $C_i:=\lan ab^{-1}\mid a, b\in S_i\ran$. Then
the index of (at least) one of the subgroups $C_1,\dots, C_n$ in $H$ does not exceed $n$.
\end{cor}
\begin{proof}
For each $i=1,\dots, n$, pick an arbitrary $g_i\in S_i$. Notice that $S_i\subseteq C_ig_i$ for
all $i$, so $H=\bigcup_{i=1}^nC_ig_i$ and the Corollary follows from Lemma \ref{lem neumann}.
\end{proof}
\begin{lemma}
\begin{enumerate}
\item
All involutions in $G$ are conjugate;
\item
$S=\{\nu \gt\mid \gt\in\Inv(G)\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\gt\in\Inv(G)$. Then $\gt =x\nu,$ for some $x\in U$.
Since $\gt$ is an involution $x\in S$. Let $y\in U$ be the unique
element with $y^2=x$, then $y\in S$ and $\gt=x\nu =y^2\nu=y\nu y^{-1}$.
This shows (1). Part (2) is Remark \ref{rem basic}(3).
\end{proof}
\begin{lemma}\label{lem basic}
Let $D$ be an abelian uniquely $2$-divisible subgroup of $U$. Then
\begin{enumerate}
\item
$C_U(D)/D$ is a uniquely $2$-divisible group.
\item
If $D$ is inverted by $\nu,$
then $\nu D$ is an almost regular
involutory automorphism of $C_U(D)/D$.
\item
Assume that $D$ is inverted by $\nu$ and let
$E/D$ be a subgroup of $C_U(D)/D$
which is inverted by $\nu D$. Then
$E$ is inverted by $\nu$, so, in particular, $E$ is
abelian.
\end{enumerate}
\end{lemma}
\begin{proof}
(1):\quad
Set $C:=C_U(D)$. Assume that $a, b\in C$ and $a^2D=b^2D$.
Let $x, y\in D$ with $a^2x=b^2y$ and let
$u, v\in D$ with $u^2=x$ and $v^2=y$.
Then $a^2u^2=b^2v^2$ and since $a, b$
commute with $u, v$ we see that $(au)^2=(bv)^2$,
hence $au=bv$ so $aD=bD$.
Furthermore, let $aD\in C/D$ and let $b\in U$ with $b^2=a$. Then $b\in C$, since $(b^d)^2=(b^2)^d=a^d=a=b^2$ for every $d\in D$ and square roots in $U$ are unique; thus $bD$ is the square root of $aD$ in $C/D$.
\medskip
\noindent
(2):\quad
Clearly $\nu D$ is an involutory automorphism of $C/D$
(acting via conjugation). Assume that $aD\in C/D$
centralizes $\nu D$. Then $\nu^a =\nu d$, for some $d\in D$.
Let $x\in D$ with $x^2=d$. Then $\nu$ inverts $x$ and we see
that $\nu^a=\nu^x$ and $ax^{-1}\in C_U(\nu)$. It follows that
$C_{C/D}(\nu D)=C_C(\nu)D/D,$ and since $\nu$ is almost regular,
so is $\nu D$.
\medskip
\noindent
(3):\quad
Let $xD\in C/D$ be an element inverted by $\nu D$. Then
$x^{\nu}=x^{-1}d$, for some $d\in D$, and conjugating by $\nu$
we see that $x=x^{-\nu}d^{-1}$ which implies that $x^{\nu}=x^{-1}d^{-1}$.
Thus $d=d^{-1}$ so $d=1$.
Now let $e\in E$. Then, by hypothesis, $eD$ is inverted by $\nu D$,
so $e^{\nu}=e^{-1}$.
\end{proof}
\section{The proof of Theorem \ref{thm main}}
\begin{lemma}\label{lem exofA}
Let $D$ be an abelian subgroup of $U$ (we allow $D=1$) such
that $D$ is inverted by $\nu$ and such that $C_U(D)$ is infinite.
Assume that
\[
(S\cap C_U(D))\sminus D\ne\emptyset.
\]
Then
\begin{enumerate}
\item
there exists an element $w\in C_U(D)\sminus D$ which is inverted
by $\nu$ and such that $C_U(\lan D, w\ran)$ is infinite;
\item
there exists an infinite abelian subgroup of $U$ which is inverted by $\nu$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1):\quad
Set $V:=C_U(D)$. Then $V$ is an infinite uniquely $2$-divisible group, and
$\nu$ acts on $V$,
so without loss we may assume that $U=V$ and that $D\le Z(U)$.
Pick $b\in S\sminus D$ (note that $b$ exists by hypothesis),
and write $b=\nu\gt$ with $\gt\in\Inv(G)$.
Let
\[
u\in U \text{ with } u^{-2}=\nu \gt,
\]
and note that since $u$ is inverted by both $\nu$ and $\gt,$ we have
\[
\nu=\gt^u.
\]
We now find an element $h\in C_U(\gt)$ such that $hu$ is inverted by infinitely
many involutions of $G$. Note that $hu\notin D$; indeed, if $h=1$, then $hu=u$
and since $b\notin D$ also $u\notin D$. Otherwise if $hu\in D$ and $h\ne 1,$ then
\[
u^{-1}h^{-1}=(hu)^{\gt}=h^{\gt}u^{\gt}=hu^{-1},
\]
and it follows that $u$ inverts $h$ which is not possible in
a uniquely $2$-divisible group.
Since all involutions
in $G$ are conjugate, conjugating $hu$ by an appropriate
element we may assume that $\nu$ inverts $hu$
and since $hu$ is inverted by infinitely many involutions
we see that $C_U(hu)$ is infinite and taking $w=hu$ we are done.
It remains to show the existence of $h$. For each $a\in S$, let
\[
s_a:=\nu \gt^{a} \text{ and } \ell_a^{-2}=s_a.
\]
It is easy to check that since $\ell_a$ is inverted by $\nu$ and $\gt^a$, we have
$\gt^{a\ell_a}=\nu$. Hence
\[
\gt^{a\ell_a}=\gt^u,\text{ and hence }h_a:=a\ell_au^{-1}\in C_U(\gt).
\]
It follows that $\ell_a=a^{-1}h_au$. Since both $\ell_a$ and $a$ are inverted
by $\nu$ we get after conjugating by $\nu$ that $\ell_a^{-1}=a(h_a u)^{\nu}=(h_au)^{-1}a$.
Notice now that $a\nu\in\Inv(G)$
and it follows that
\[
(h_au)^{a\nu}=(h_au)^{-1}.
\]
By hypothesis the set $\{h_a\mid a\in S\}$ is finite since it is
contained in $C_U(\gt)$. Further, the set $S$ is infinite.
This implies the existence of $h\in C_U(\gt)$ such
that the number of involutions $a\nu$ that invert $hu$ is infinite.
This proves (1).
\medskip
\noindent
(2):\quad
If $D$ is finite and $C_U(D)$ is infinite, then $(S\cap C_U(D))\sminus D\ne\emptyset$.
Hence part (2) follows from (1) by starting with $D=1$ and iterating the process as long as the subgroup
$\lan D, w\ran$ is finite.
\end{proof}
\begin{lemma}\label{lem finiteB}
Let $B$ be a finitely generated abelian subgroup of $U$ which is inverted by $\nu$.
Then $A$ contains a subgroup $A_1$ of finite index such that $\lan A_1, B\ran$
is abelian.
\end{lemma}
\begin{proof}
Let $\calb$ be a finite set of generators for $B$ and set
$A_1:=\bigcap_{b\in \calb}A_b$. By Proposition \ref{prop A:Au} and since $\calb$
is finite $|A:A_1|<\infty$. Further, for each $b\in \calb$, $\nu$ and $b\nu b^{-1}$ invert
$A_1$, so $b^2=b\nu b^{-1}\nu\in C_U(A_1)$ (recall that $\nu$ inverts $b$).
Since $U$ is uniquely $2$-divisible, $b\in C_U(A_1)$. Hence $\calb\subseteq C_U(A_1)$, so $B\le C_U(A_1)$ and the lemma holds.
\end{proof}
\begin{lemma}\label{lem CUD}
Let $D$ be a uniquely $2$-divisible subgroup of $A$ of finite index. Then
$C_U(D)/D$ is finite and solvable.
\end{lemma}
\begin{proof}
Set $C:=C_U(D)$ and $\widebar C:=C/D$. Assume that $\widebar C$ is infinite. By Lemma \ref{lem basic}(1),
$\widebar C$ is uniquely $2$-divisible,
and by hypothesis $\widebar A:=A/D$ is a finite subgroup of $\widebar C$.
Let $\widebar \cala$ be an infinite maximal abelian subgroup
of $\widebar C$ inverted by $\nu D$. The existence of $\widebar \cala$
is guaranteed by Lemma \ref{lem basic}(2) and by Lemma \ref{lem exofA}(2) (with $\widebar C$ in place of $U$).
By Lemma \ref{lem finiteB} (with $\widebar C$ in place of $U$ and
$\widebar A$ in place of $B$), there exists a subgroup $\widebar {\cala_1}\le \widebar \cala$ of finite index
such that $\widebar {\cala_2}:=\lan\widebar {\cala_1}, \widebar A\ran$ is abelian.
Note that $\widebar {\cala_2}$ is inverted by $\nu D$, so by Lemma \ref{lem basic}(3),
the inverse image $\cala_2$ of $\widebar {\cala_2}$ in $C_U(D)$ is an abelian subgroup
inverted by $\nu$. Clearly $\cala_2$ properly contains $A$. This contradicts the maximality
of $A$ and shows that $\widebar C$ is finite.
Let $\cald\le C$ be a maximal central subgroup of $C$ which
is inverted by $\nu$. Of course $\cald\ge D$. Further, it
is clear that $\cald$ is a uniquely $2$-divisible group.
Suppose $t\cald$ is an involution in $C/\cald$. Then $t^2\in\cald$,
so also $t\in\cald$ and we see that $C/\cald$ has odd order.
By the Feit-Thompson theorem, $C/\cald$ is solvable, and the proof
of the lemma is complete.
\end{proof}
\begin{lemma}\label{lem xs}
Let $x\in U$ and let $s\in U$ be the unique element
such that $s^{-2}=\nu x^{-1}\nu x$. Then $xs\in C_U(\nu)$.
\end{lemma}
\begin{proof}
Notice that $s$ is inverted by $\nu$ and $\nu^x$. Hence
\[
1=s^2\nu \nu^x=\nu s^{-2}\nu^x=\nu s^{-1}\nu^xs,
\]
so the lemma holds.
\end{proof}
\begin{prop}\label{prop A:Au}
Let $A$ be as in Notation \ref{not A}(4) and let $u\in U$.
Let $A_u$ be as in Notation \ref{not A}(5). Then $|A:A_u|<\infty$.
\end{prop}
\begin{proof}
For each $a\in A$ consider the element
\[
\nu \nu^{au},\quad a\in A.
\]
This element is in $U$. Let $s\in U$ with $s^{-2}=\nu\nu^{au}$.
By Lemma \ref{lem xs} we get that
\begin{equation}\label{eq va}
v_a:=aus\in C_U(\nu).
\end{equation}
Now set
\[
\calm_a:=\{b\in A\mid v_b=v_a\}.
\]
Notice that since $|C_U(\nu)|<\infty,$
\begin{equation}\label{eq calma}
\text{the set }\{\calm_c\mid c\in A\}\text{ is finite and }A=\textstyle{\bigcup}_{c\in A}\calm_c.
\end{equation}
By equation \eqref{eq va} we get $s^{-1}=v_a^{-1}au$ and conjugating
by $\nu$, noticing that $\nu$ inverts $a$ and $s$ and centralizes $v_a$, we see that $s^{-1}=u^{-\nu}av_a$.
So we get the equality
\[
v_a^{-1}au=u^{-\nu}av_a,
\]
from which it follows that
\begin{equation}\label{eq seq}
u^{-1}\nu bv_au^{-1}=\nu v_a^{-1}b,\quad \forall b\in\calm_a.
\end{equation}
Let $c\in \calm_a$. Then, as in equation \eqref{eq seq}, we get that
$u^{-1}\nu cv_au^{-1}=\nu v_a^{-1}c$ and this together with equation \eqref{eq seq}
yields
\[
uv_a^{-1}c^{-1}bv_au^{-1}=c^{-1}b,\quad \forall b, c\in\calm_a.
\]
Since $\nu$ inverts $c^{-1}b\in A$, it follows that $uv_a^{-1}\nu v_au^{-1}=u\nu u^{-1}$
inverts $c^{-1}b$. We thus can conclude that
\begin{equation}\label{eq unuu-1 inverts}
u\nu u^{-1}\text{ inverts }\lan bc^{-1}\mid b, c\in\calm_a\ran,\quad\forall a\in A.
\end{equation}
By equation \eqref{eq calma} and by Corollary \ref{cor neumann}
one of the groups $\lan bc^{-1}\mid b, c\in\calm_a\ran$ has finite index
in $A$, so $|A:A_u|<\infty$ as asserted.
\end{proof}
\begin{prop}\label{prop <S>}
Let $R:=\lan S\ran$, then
\begin{enumerate}
\item
$R'$ is a periodic group;
\item
$R$ is solvable.
\end{enumerate}
\end{prop}
\begin{proof}
(1):\quad
We first show that
for elements $u_1,\dots, u_n\in U$
the subgroup $K:=\lan \nu u_1\nu u_1^{-1},\dots, \nu u_n\nu u_n^{-1}\ran$
is solvable, and $K/Z(K)$ is finite. By Remark \ref{rem basic}(3),
this will show that
\smallskip
\begin{itemize}
\item[(*)]
if $H$ is a f.g.~subgroup of $R,$ then $H$ is solvable,\\
and $H/Z(H)$ is finite.
\end{itemize}
\smallskip
Let $D:=\bigcap_{i=1}^n A_{u_i}$. By the definition
of $A_{u_i}$ and by Proposition \ref{prop A:Au}, $|A:D|<\infty$ and $D$ is inverted
by $\nu, u_1\nu u_1^{-1},\dots, u_n\nu u_n^{-1}$.
Also, by Remark \ref{rem basic}(2), $D$ is uniquely $2$-divisible.
By Lemma \ref{lem CUD}, $C_U(D)/D$ is finite and solvable, so since
$K\le C_U(D)$, we see that $K/Z(K)$ is finite and solvable.
Hence (*) holds.
Next let $g\in R'$. Then there exists a finitely generated subgroup
$H$ of $R$ such that $g\in H'$. By (*) and by \cite[(33.9), p.~168]{A},
$H'$ is finite, so the order of $g$ is finite.
This completes the proof of part (1).
\medskip
\noindent
(2):\quad
By (1), $R'$ is a periodic group and since $R$ is $\nu$-invariant,
$\nu$ is an almost regular automorphism of $R'$. By the main result
of Shunkov in \cite{Sh}, $R'$ is virtually solvable. But by (*),
$R'$ is also locally solvable, so this shows that $R'$ is solvable and
hence so is $R$.
\end{proof}
\medskip
\noindent
\begin{proof}[Proof of Theorem \ref{thm main}.]
By Proposition \ref{prop <S>}, $\lan S\ran$
is solvable.
By \cite[(3.4), p.~281]{K} (see also
\cite[Lemma 2.1(1) and Lemma 2.2(1)]{S}), $U=\lan S\ran C_U(\nu)$
and $\lan S\ran\nsg U$.
Since $C_U(\nu)$ is a finite uniquely $2$-divisible group,
it has odd order, and hence it is solvable by the Feit--Thompson theorem.
Hence $U$ is solvable.
\end{proof}
\subsection*{Acknowledgment.} I would like to
thank Pavel Shumyatsky for several fruitful
email exchanges.
\section{Introduction}\label{sec1}
Let $u(t,x)$ denote the solution to the stochastic heat equation
\begin{equation}\label{sec1-eq1.-1}
\frac{\partial}{\partial
t}u=\frac12\frac{\partial^2}{\partial x^2}u+\frac{\partial^2}{\partial
t\partial x}X(t,x),\quad t\geq 0, x\in {\mathbb R}
\end{equation}
with initial condition $u(0,x)\equiv 0$, where $\dot{X}$ is a
time-space white noise on $[0,\infty)\times {\mathbb R}$. That is,
$$
u(t,x)=\int_0^t\int_{\mathbb{R}}p(t-r,x-y)X(dr,dy),
$$
where $p(t,x)=\frac1{\sqrt{2\pi t}}e^{-\frac{x^2}{2t}}$ is the heat kernel. Then the processes $(t,x)\mapsto u(t,x)$, $t\mapsto u(t,\cdot)$ and $x\mapsto u(\cdot,x)$ are Gaussian. Swanson~\cite{Swanson} showed that
\begin{equation}\label{sec1-eq1.0}
E\left[u(t,x)u(s,x)\right]=\frac1{\sqrt{2\pi}}
\left((t+s)^{1/2}-|t-s|^{1/2}\right),\qquad t,s\geq 0,
\end{equation}
and the process $t\mapsto u(t,x)$ has a nontrivial quartic
variation. This shows that for every $x\in {\mathbb R}$, the process $t\mapsto u(t,x)$ coincides, up to a multiplicative constant, with a bi-fractional Brownian motion with parameters $H=K=\frac12$, and it is not a semimartingale, so a stochastic integral with respect to the process $t\mapsto u(t,x)$ cannot be defined in the classical It\^o sense. Surveys and further literature on bi-fractional Brownian motion can be found in Houdr\'e and Villa~\cite{Hou}, Kruk {\em et al}~\cite{Kruk}, Lei and Nualart~\cite{Lei-Nualart}, Russo and Tudor~\cite{Russo-Tudor}, Tudor and Xiao~\cite{Tudor-Xiao} and Yan {\em et al}~\cite{Yan4}, and the references therein. It is important to note that for a large class of parabolic SPDEs, one obtains better regularity results when the solution $u$ is viewed as a process $t\mapsto u(t,\cdot)$ taking values in a Sobolev space, rather than for each fixed $x$. Denis~\cite{Denis} and Krylov~\cite{Krylov} considered a class of stochastic partial differential equations driven by a multidimensional Brownian motion and showed that the solution is a Dirichlet process. These results inspire one to consider stochastic calculus with respect to the solution to the stochastic heat equation. It is well known that many authors have studied It\^o analysis questions for the solutions of some stochastic partial differential equations and introduced the related It\^o and Tanaka formulas (see, for example, Da Prato {\em et al}~\cite{Da Prato}, Deya and Tindel~\cite{Deya-Tindel}, Gradinaru {\em et al}~\cite{Grad5}, Lanconelli~\cite{Lanconelli1,Lanconelli2}, Le\'on and Tindel~\cite{Leon-Tindel}, Nualart and Vuillermot~\cite{Nua5}, Ouahhabi and Tudor~\cite{Ouahhabi-Tudor}, Pardoux~\cite{Pardoux}, Torres {\em et al}~\cite{Torres}, Tudor~\cite{Tudor}, Tudor and Xiao~\cite{Tudor-Xiao2}, Zambotti~\cite{Zambotti}, and the references therein). Almost all of these studies considered only the process in time, and there is little discussion of the process $x\mapsto u(\cdot,x)$.
This paper is an attempt to study stochastic analysis questions of the solution $u(t,x)$.
On the other hand, we shall see (in Section~\ref{sec4}) that
the process $x\mapsto u(\cdot,x)$ admits a nontrivial finite quadratic variation which coincides with that of classical Brownian motion on any finite interval, and moreover we shall also see (in Section~\ref{sec4-1}) that the forward integral of some adapted processes with respect to $x\mapsto u(\cdot,x)$ coincides with ``It\^o's integral''. As a noise, the stochastic process $u=\{u(t,x), t\geq 0,x\in {\mathbb R}\}$ is {\em very rough} in time and it is not white in space. However, the process $x\mapsto u(\cdot,x)$ admits some characteristics similar to Brownian motion. These results, together with the work of Swanson~\cite{Swanson}, show that the process $u=\{u(t,x)\}$, regarded as a noise, has the following special features:
\begin{itemize}
\item It is very rough in time and similar to fractional Brownian motion with Hurst index $H=\frac14$, but it has not stationary increments.
\item It is not white in space, but its quadratic variation coincides with the classical Brownian motion and it is not self-similar.
\item The process in space variable is not a semimartingale, but the forward integral of some adapted processes with respect to the process in space variable coincides with "It\^o's integral".
\item The process $u=\{u(t,x)\}$ admits a simple representation via Wiener integral with respect to Brownian sheet.
\item Though the process $u=\{u(t,x)\}$ is Gaussian, as a noise, its time and space parts are farraginous. We can not decompose its covariance as the product of two independent parts. This is very different from fractional noise and white noise. In fact, we have
\begin{align*}
Eu(t,x)u(s,y)=\frac1{\sqrt{2\pi}}\int_0^s\frac1{\sqrt{t+s-2r}}
\exp\left\{-\frac{(x-y)^2}{2(t+s-2r)}\right\}dr
\end{align*}
for all $t\geq s>0$ and $x,y\in {\mathbb R}$.
\end{itemize}
Therefore, it seems interesting to study the integrals
$$
\int_{\mathbb R}f(x)u(t,dx),\quad\int_0^tf(s)u(ds,x),\quad\int_0^t\int_{\mathbb R}f(s,x)u(ds,dx),
$$
and some related stochastic (partial) differential equations. For example, one can consider the following "iterated" stochastic partial differential equations:
$$
\frac{\partial}{\partial t}u^j=\frac12\frac{\partial^2}{\partial x^2}u^j+f(u^j)+\frac{\partial^2}{\partial t\partial x}u^{j-1}(t,x),\quad t\geq 0, x\in {\mathbb R},\quad j=1,2,\ldots,
$$
where $u^0$ is a time-space white noise. Of course, one can also consider some sample path properties and singular integrals associated with the solution process $u=\{u(t,x),t\geq 0,x\in {\mathbb R}\}$. We will carry out these projects in some forthcoming works. In the present paper our starting point is to study the quadratic variations of the two processes $x\mapsto u(\cdot,x)$ and $t\mapsto u(t,\cdot)$. Our objective is to study the quadratic covariations of $x\mapsto u(\cdot,x)$ and $t\mapsto u(t,\cdot)$; moreover, we shall also introduce some generalized It\^o formulas associated with $\{u(\cdot,x),x\in {\mathbb R}\}$ and $\{u(t,\cdot),t\geq 0\}$, respectively, and consider their local times and Bouleau--Yor identities.
To expound our aim, let us start with a basic definition. An elementary calculation shows that (see Section~\ref{sec2})
\begin{equation}\label{sec1-eq1.1}
E[(u(t,x)-u(s,y))^2]= \frac1{\sqrt{2\pi}}\left(\sqrt{2|t-s|}+\Delta(s,t,x-y)\right)
\end{equation}
for all $t,s>0$ and $x,y\in {\mathbb R}$, where
$$
\Delta(s,t,z)=\int_0^{s\wedge t}\left(
\frac1{\sqrt{2(t-r)}}-\frac{2}{\sqrt{t+s-2r}}
\exp\left\{-\frac{z^2}{2(t+s-2r)}\right\}+\frac1{\sqrt{2(s-r)}} \right)dr
$$
for $t,s>0$ and $z\in {\mathbb R}$. This simple estimate inspires us to consider the following limits:
\begin{equation}
\begin{split}
\lim_{\varepsilon\to 0}\frac1{\varepsilon}E[(u(t,x+\varepsilon)&-u(t,x))^2]
=\frac1{\sqrt{\pi}}\lim_{\varepsilon\to 0}\frac1{\varepsilon}\int_0^{t}
\frac1{\sqrt{r}}\left(1-e^{-\frac{\varepsilon^2}{4r}}\right)dr\\
&=\frac2{\sqrt{\pi}}\int_0^{\infty}
\frac1{s^2}\left(1-e^{-\frac{s^2}{4}}\right)ds=1
\end{split}
\end{equation}
and
\begin{equation}
\lim_{\varepsilon\to 0}\frac1{\sqrt{\varepsilon}} E[(u(t+\varepsilon,x)-u(t,x))^2] =\sqrt{\frac2{\pi}}
\end{equation}
for all $t\geq 0$ and $x\in {\mathbb R}$. That is,
$$
\lim_{\delta\to 0}\lim_{\varepsilon\to 0}\frac1{\sqrt{\varepsilon}+\delta} E[(u(t+\varepsilon,x+\delta)-u(t,x))^2]=1,
$$
$$
\lim_{\varepsilon\to 0}\lim_{\delta\to 0}\frac1{\sqrt{\varepsilon}+\delta} E[(u(t+\varepsilon,x+\delta)-u(t,x))^2]=\sqrt{\frac2{\pi}}
$$
for all $t\geq 0$ and $x\in {\mathbb R}$. However, it is easy to see that the limit
\begin{align*}
\lim\limits_{\substack{\varepsilon\to 0\\ \delta\to 0}}\frac1{\sqrt{\varepsilon}+\delta} E[(u(t+\varepsilon,x+\delta)-u(t,x))^2]
\end{align*}
does not exist for any $t>0$ and $x\in {\mathbb R}$: taking $\varepsilon=k\delta^2$ with $k>0$ yields limits that depend on the value of $k$. Thus, the next definition is natural.
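Both iterated limits, as well as the integral identity used in the space direction, can be verified numerically. The following sketch (ours, not part of the argument; the helper \texttt{simpson} is our own) checks them at $t=1$, using the covariance~\eqref{sec1-eq1.0} for the time direction:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

# Space direction: (2/sqrt(pi)) * int_0^infty s^{-2} (1 - e^{-s^2/4}) ds = 1.
g = lambda s: 0.25 if s == 0.0 else -math.expm1(-s * s / 4.0) / (s * s)
# integrate up to 50 and add the analytic tail int_50^infty s^{-2} ds = 1/50
val = (2.0 / math.sqrt(math.pi)) * (simpson(g, 0.0, 50.0, 50000) + 1.0 / 50.0)
assert abs(val - 1.0) < 1e-6

# Time direction: (1/sqrt(eps)) E[(u(t+eps,x) - u(t,x))^2] -> sqrt(2/pi),
# using E[u(t,x)u(s,x)] = ((t+s)^{1/2} - |t-s|^{1/2}) / sqrt(2 pi).
t, eps = 1.0, 1e-6
second_moment = (math.sqrt(2.0 * (t + eps)) + math.sqrt(2.0 * t)
                 - 2.0 * (math.sqrt(2.0 * t + eps) - math.sqrt(eps))) / math.sqrt(2.0 * math.pi)
time_val = second_moment / math.sqrt(eps)
assert abs(time_val - math.sqrt(2.0 / math.pi)) < 1e-3
```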
\begin{definition}
Denote $B:=\{B_t:=u(t,\cdot),t\geq 0\}$ and $W:=\{W_x:=u(\cdot,x),x\in {\mathbb R}\}$. Let $I_x=[0,x]$ for $x\geq 0$ and $I_x=[x,0]$ for $x\leq 0$. Define the integrals
\begin{align*}
I_\delta^1(f,x,t)&=\frac1{\delta}\int_{I_x} \left\{f(W_{y+\delta})-f(W_y)\right\}(W_{y+\delta}-W_y)dy,\\
I_\varepsilon^2(f,x,t)&=\frac1{\sqrt{\varepsilon}}\int_0^t \left\{f(B_{s+\varepsilon})-f(B_s)\right\}
(B_{s+\varepsilon}-B_s)\frac{ds}{2\sqrt{s}}
\end{align*}
for all $t\geq 0,x\in {\mathbb R},\varepsilon,\delta>0$, where $f$ is a measurable function on ${\mathbb R}$. The limits $\lim\limits_{\delta\to 0}I_\delta^1(f,x,t)$ and $\lim\limits_{\varepsilon\to 0}I_\varepsilon^2(f,x,t)$ are called the partial quadratic covariations (PQC, in short) in space and in time, respectively, of $f(u)$ and $u$, provided these limits exist in probability. We denote them by $[f(W),W]^{(SQ)}_x$ and $[f(B),B]^{(TQ)}_t$, respectively.
\end{definition}
Clearly, we have (see Section~\ref{sec4})
$$
[f(W),W]^{(SQ)}_x=\int_{I_x}f'(W_y)dy
$$
and $[W,W]^{(SQ)}_x=|x|$ for all $f\in C^1({\mathbb R}),t>0,x\in {\mathbb R}$. We also have (see Section~\ref{sec6})
$$
[f(B),B]^{(TQ)}_t=\int_0^tf'(B_s)\frac{ds}{\sqrt{2\pi s}}
$$
and $[B,B]^{(TQ)}_t=\sqrt{\frac{2}{\pi}t}$ for all $f\in C^1({\mathbb R}),t\geq 0,x\in {\mathbb R}$. These say that the process $W=\{W_x=u(\cdot,x),x\in {\mathbb R}\}$ admits a nontrivial finite quadratic variation in any finite interval $I_x$. This is also a main motivation to study the solution of~\eqref{sec1-eq1.-1}.
This paper is organized as follows. In Section~\ref{sec2}, we establish some technical estimates associated with the solution, and as applications we introduce Wiener integrals with respect to the two processes $B=\{B_t=u(t,\cdot),t\geq 0\}$ and $W=\{W_x=u(\cdot,x),x\in {\mathbb R}\}$, respectively. In Section~\ref{sec4} we show that the quadratic variation $[W,W]^{(SQ)}$ exists in $L^2(\Omega)$ and equals $|x|$ on every finite interval $I_x$. For a given $t>0$, by estimating in $L^2$
$$
\frac1{\varepsilon}\int_{I_x}f(W_{y+\varepsilon})
(W_{y+\varepsilon}-W_{y})dy\quad (x\in {\mathbb R})
$$
and
$$
\frac1{\varepsilon}\int_{I_x}f(W_y)(W_{y+\varepsilon}-W_y)dy
\quad (x\in {\mathbb R})
$$
for all $\varepsilon>0$, respectively, we construct a Banach space ${\mathscr H}_t$ of measurable functions such that the PQC $[f(W),W]^{(SQ)}$ in space exists in $L^2(\Omega)$ for all $f\in {\mathscr H}_t$, and in particular we have
$$
[f(W),W]_x^{(SQ)}=\int_{I_x}f'(W_y)dy
$$
provided $f\in C^1({\mathbb R})$. In Section~\ref{sec4-1}, as an application of Section~\ref{sec4}, we show that the It\^o formula
\begin{align*}
F(W_x)=F(W_0)+\int_{I_x}f(W_y)\delta W_y+\frac1{2}[f(W),W]_x^{(SQ)}
\end{align*}
holds for all $t>0,x\in {\mathbb R}$, where the integral $\int_{I_x}f(W_y)\delta W_y$ denotes the Skorohod integral and $F$ is an absolutely continuous function with derivative $F'=f\in {\mathscr H}_t$. In order to prove the above It\^o formula, we first establish a standard It\^o type formula
\begin{equation}\label{sec1-1-eq4.80011}
F(W_x)=F(W_0)+\int_{I_x}F'(W_y)\delta W_y +\frac1{2}\int_{I_x}F''(W_y)dy
\end{equation}
for all $F\in C^2({\mathbb R})$ satisfying some suitable conditions. It is important to note that the Gaussian process
$W=\{W_x=u(\cdot,x),x\in {\mathbb R}\}$ does not satisfy the condition in Al\'os {\em et al}~\cite{Nua1} since
\begin{align*}
E\left[u(t,x)^2\right]=\sqrt{\frac{t}{\pi}},\quad \frac{d}{dx}E\left[u(t,x)^2\right]=0
\end{align*}
for all $t\geq 0$ and $x\in {\mathbb R}$, so we need to give a direct proof of formula~\eqref{sec1-1-eq4.80011}. Moreover, we also show that the forward integral (see Russo and Vallois~\cite{Russo-Vallois2,Russo-Vallois3})
$$
\int_{I_x}f(W_y)d^{-}W_y:={\rm ucp}\lim_{\varepsilon\downarrow 0}\frac1{\varepsilon}\int_{I_x}f(W_y) \left(W_{y+\varepsilon}-W_y\right)dy
$$
coincides with the Skorohod integral $\int_{I_x}f(W_y)\delta W_y$, if $f$ satisfies the growth condition
\begin{equation}
|f(y)|\leq Ce^{\beta {y^2}},\quad y\in {\mathbb R}
\end{equation}
with $0\leq \beta<\frac{\sqrt{\pi}}{4\sqrt{t}}$, where the notation ${\rm ucp}\lim$ denotes the uniform convergence in probability on each compact interval. This is very similar to Brownian motion, but the process $W=\{W_x=u(\cdot,x),x\in {\mathbb R}\}$ is not a semimartingale. In Section~\ref{sec4-2} we consider some questions associated with the local time
$$
{\mathscr L}^t(x,a)=\int_0^x\delta(W_y-a)dy
$$
of the process $W=\{W_x=u(\cdot,x),x\geq 0\}$. In particular, we show that the Bouleau-Yor type identity
$$
[f(W),W]^{(SQ)}_x=-\int_{\mathbb {R}}f(v){\mathscr L}^t(x,dv)
$$
holds for all $f\in {\mathscr H}_t$. In Section~\ref{sec6} we consider some analysis questions associated with the quadratic covariation of the process $B=\{B_t=u(t,\cdot),t\geq 0\}$.
\section{Some basic estimates and divergence integrals}\label{sec2}
In this section we establish the divergence integral and some technical estimates associated with the solution
$$
u(t,x)=\int_0^t\int_{\mathbb{R}}p(t-r,x-y)X(dr, dy),\quad t\geq 0,x\in {\mathbb R},
$$
where $p(t,x)=\frac1{\sqrt{2\pi t}}e^{-\frac{x^2}{2t}}$ is the heat kernel. Throughout this paper we let $C$ stand for a positive constant depending only on its subscripts, whose value may change from one appearance to the next; the same convention applies to $c$. Moreover, the notation $F\asymp G$ means that there are positive constants $c_1$ and $c_2$ such that
$$
c_1G(x)\leq F(x)\leq c_2G(x)
$$
in the common domain of definition for $F$ and $G$.
The first objective of this section is to establish some basic estimates for the solution process $\{u(t,x),t\geq 0,x\in {\mathbb R}\}$. We have
\begin{align*}
R_{x,y}(s,t):&=Eu(t,x)u(s,y)=\int_0^s\int_{\mathbb{R}} p(t-r,x-z)p(s-r,y-z)dzdr\\
&=\frac1{2\pi}
\int_0^s\int_{\mathbb{R}}\frac1{\sqrt{(t-r)(s-r)}}
\exp\left\{-\frac{(x-z)^2}{2(t-r)}-\frac{(y-z)^2}{2(s-r)}\right\}
dzdr\\
&=\frac1{\sqrt{2\pi}}\int_0^s\frac1{\sqrt{t+s-2r}}
\exp\left\{-\frac{(x-y)^2}{2(t+s-2r)}\right\}dr
\end{align*}
for all $t\geq s>0$ and $x,y\in {\mathbb R}$. Denote
$$
\Delta(s,t,u)=\int_0^s\left(
\frac1{\sqrt{2(t-r)}}-\frac{2}{\sqrt{t+s-2r}}
\exp\left\{-\frac{u^2}{2(t+s-2r)}\right\}+\frac1{\sqrt{2(s-r)}} \right)dr
$$
for $t\geq s$. Then, we have
\begin{equation}\label{sec2-eq2.1}
E[(u(t,x)-u(s,y))^2]= \frac1{\sqrt{2\pi}}\left(\sqrt{2(t-s)}+\Delta(s,t,x-y)\right)
\end{equation}
for all $t>s>0$ and $x,y\in {\mathbb R}$.
\begin{lemma}\label{lem2.1}
For all $t\geq s>0$ and $x,y\in {\mathbb R}$ we have
\begin{equation}\label{sec2-eq2.2}
E\left[(u(t,x)-u(s,y))^2\right]\leq C\left(\sqrt{t-s}+|x-y|\right).
\end{equation}
\end{lemma}
\begin{proof}
Notice that
\begin{align*}
\int_0^s\frac{2}{\sqrt{t+s-2r}}&\left(
1-\exp\left\{-\frac{u^2}{2(t+s-2r)}\right\}\right)dr\\ &=2|u|\int_{\frac{|u|}{\sqrt{t+s}}}^{\frac{|u|}{\sqrt{t-s}}}
\left(1-e^{-\frac{r^2}{2}}\right)\frac{dr}{r^2}\leq
2|u|\int_0^{+\infty}
\left(1-e^{-\frac{r^2}{2}}\right)\frac{dr}{r^2}=|u|\sqrt{2\pi}
\end{align*}
for all $t\geq s>0$ and $u\in {\mathbb R}$. We get
\begin{align*}
\Delta(s,t,u)&=\int_0^s\left(
\frac1{\sqrt{2(t-r)}}-\frac{2}{\sqrt{t+s-2r}}
+\frac1{\sqrt{2(s-r)}}\right)dr\\
&\qquad+\int_0^s\frac{2}{\sqrt{t+s-2r}}\left(
1-\exp\left\{-\frac{u^2}{2(t+s-2r)}\right\}\right)dr\\
&=\sqrt{2t}+\sqrt{2s}+(2-\sqrt{2})\sqrt{t-s}-2\sqrt{t+s}\\
&\qquad+\int_0^s\frac{2}{\sqrt{t+s-2r}}\left(
1-\exp\left\{-\frac{u^2}{2(t+s-2r)}\right\}\right)dr\\
&\leq C\left((3-\sqrt{2})\sqrt{t-s}+|u|\right)
\end{align*}
for all $t\geq s>0,u\in {\mathbb R}$ by the next estimate:
\begin{align*}
0\leq \sqrt{2t}+\sqrt{2s}&+2\sqrt{t-s}-2\sqrt{t+s}\\
&=2\sqrt{t-s} +\left(\sqrt{2t}-\sqrt{t+s}\right)+\left(\sqrt{2s}-\sqrt{t+s}\right)\leq 3\sqrt{t-s},
\end{align*}
since $\sqrt{2t}-\sqrt{t+s}\leq\sqrt{2t-(t+s)}=\sqrt{t-s}$ and $\sqrt{2s}\leq \sqrt{t+s}$.
It follows from~\eqref{sec2-eq2.1} that
\begin{equation}\label{sec2-eq2.4}
E[(u(t,x)-u(s,y))^2]\leq C\left(\sqrt{t-s}+|x-y|\right)
\end{equation}
for all $t\geq s>0,x,y\in {\mathbb R}$. This completes the proof.
\end{proof}
\begin{lemma}
For all $t,s,r>0$ and $x\in {\mathbb R}$ we have
\begin{align*}
|E\left[u(r,x)(u(t,x)-u(s,x))\right]|&\leq C\sqrt{|t-s|}.
\end{align*}
\end{lemma}
\begin{proof}
For all $t,s,r>0$ and $x\in {\mathbb R}$, we have
\begin{align*}
E[u(r,x)(u(t,x)&-u(s,x))]=Eu(r,x)u(t,x)-Eu(r,x)u(s,x)\\
&=R_{x,x}(r,t)-R_{x,x}(r,s)\\
&=\frac1{\sqrt{2\pi}}\left(\sqrt{t+r}-\sqrt{|t-r|}-\sqrt{s+r} +\sqrt{|s-r|}\right),
\end{align*}
which gives
\begin{align*}
|E[u(r,x)&(u(t,x)-u(s,x))]|\\
&\leq \left|\sqrt{t+r}-\sqrt{s+r}\right| +\left|\sqrt{|t-r|}-\sqrt{|s-r|}\right|\leq 3\sqrt{|t-s|}
\end{align*}
for all $t,s,r>0$ and $x\in {\mathbb R}$.
\end{proof}
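The elementary estimate at the end of this proof can be probed on a grid; the following check (ours, illustrative) confirms it at sample points:

```python
# Grid check (ours) of the estimate used in the proof:
# |sqrt(t+r) - sqrt(|t-r|) - sqrt(s+r) + sqrt(|s-r|)| <= 3 sqrt(|t-s|).
import itertools
import math

grid = [0.1, 0.5, 1.0, 2.0, 3.7, 5.0]
for t, s, r in itertools.product(grid, repeat=3):
    lhs = abs(math.sqrt(t + r) - math.sqrt(abs(t - r))
              - math.sqrt(s + r) + math.sqrt(abs(s - r)))
    assert lhs <= 3.0 * math.sqrt(abs(t - s)) + 1e-12
```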
\begin{lemma}\label{lem2.3}
For all $t>0$ and $x,y,z\in {\mathbb R}$ we have
\begin{align*}
|E\left[u(t,x)(u(t,y)-u(t,z))\right]|&\leq C|y-z|.
\end{align*}
\end{lemma}
\begin{proof}
For all $t>0$ and $x,y,z\in {\mathbb R}$, we have
\begin{align*}
E[&u(t,x)(u(t,y)-u(t,z))]=Eu(t,x)u(t,y)-Eu(t,x)u(t,z)\\
&=R_{x,y}(t,t)-R_{x,z}(t,t)\\
&=\frac1{2\sqrt{\pi}}\left(\int_0^t\frac1{\sqrt{t-r}}
\exp\left\{-\frac{(x-y)^2}{4(t-r)}\right\}dr
-\int_0^t\frac1{\sqrt{t-r}} \exp\left\{-\frac{(x-z)^2}{4(t-r)}\right\}dr\right)\\
&=\frac{\sqrt{t}}{\sqrt{\pi}}\left(|x-y| \int_{|x-y|}^{+\infty}\frac1{s^2}
e^{-\frac{s^2}{4t}}ds
-|x-z|\int_{|x-z|}^{+\infty}\frac1{s^2}
e^{-\frac{s^2}{4t}}ds\right).
\end{align*}
Consider the function $f:{\mathbb R}_{+}\to {\mathbb R}_{+}$ defined by
$$
f(x)=x\int_x^{+\infty}\frac1{s^2}e^{-\frac{s^2}{4t}}ds
=e^{-\frac{x^2}{4t}}-
\frac{x}{2t}\int_x^\infty e^{-\frac{r^2}{4t}}dr
$$
Then, by the mean value theorem, we have
\begin{align*}
\left|f(u)-f(v)\right|& =\frac1{2t}|u-v|\int_\xi^{+\infty}e^{-\frac{s^2}{4t}}ds\\
&\leq \frac1{2t}|u-v| \int_0^{+\infty}e^{-\frac{s^2}{4t}}ds
\leq \frac{\sqrt{\pi}}{2\sqrt{t}}|u-v|
\end{align*}
for all $u,v\geq 0$ and some $\xi$ between $u$ and $v$. It follows that
$$
|E[u(t,x)(u(t,y)-u(t,z))]|\leq \frac12||x-y|-|x-z||\leq \frac12|y-z|
$$
for all $t>0$ and $x,y,z\in {\mathbb R}$.
\end{proof}
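The change of variables in this proof rests on the Gaussian-tail identity $\int_0^t r^{-1/2}e^{-a^2/(4r)}\,dr=2\sqrt{t}\,a\int_a^{\infty}s^{-2}e^{-s^2/(4t)}\,ds$, which can be cross-checked numerically (our sketch; the helper \texttt{simpson} is ours):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

t = 1.0
for a in (0.3, 1.0, 2.0):
    lhs = simpson(lambda r: 0.0 if r == 0.0 else math.exp(-a * a / (4.0 * r)) / math.sqrt(r),
                  0.0, t, 20000)
    # the tail of the right-hand integrand beyond s = 60 is negligible here
    rhs = 2.0 * math.sqrt(t) * a * simpson(
        lambda s: math.exp(-s * s / (4.0 * t)) / (s * s), a, 60.0, 20000)
    assert abs(lhs - rhs) < 1e-6
```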
\begin{lemma}\label{lem2.4}
For all $t>s>t'>s'>0$ and $x\in {\mathbb R}$ we have
\begin{equation}\label{sec2-eq2.6}
|E\left[(u(t,x)-u(s,x))(u(t',x)-u(s',x))\right]|
\leq \frac{C(t'-s')\sqrt{t-s}}{\sqrt{ts(s-s')(t-t')}}
\end{equation}
\end{lemma}
\begin{proof}
For all $t>s>t'>s'>0$ and $x\in {\mathbb R}$ we have
\begin{align*}
E[(u(t,x)&-u(s,x))(u(t',x)-u(s',x))]\\ &=R_{x,x}(t,t')-R_{x,x}(s,t')-R_{x,x}(t,s')+R_{x,x}(s,s')\\
&=\frac1{\sqrt{2\pi}}\left(
\sqrt{t+t'}-\sqrt{t-t'}-\sqrt{s+t'}+\sqrt{s-t'}\right.\\
&\qquad\qquad\left.-\sqrt{t+s'}+\sqrt{t-s'} +\sqrt{s+s'}-\sqrt{s-s'}\right).
\end{align*}
Consider the function
$$
f(x)=\sqrt{t+x}-\sqrt{t-x}-\sqrt{s+x}+\sqrt{s-x}
$$
with $x\in [0,s]$. Then, we have
\begin{align*}
E[(u(t,x)-u(s,x))(u(t',x)-u(s',x))] =\frac1{\sqrt{2\pi}}\left(f(t')-f(s')\right),
\end{align*}
and by the mean value theorem
\begin{align*}
\left|f(t')-f(s')\right|&=\frac12(t'-s')
\left|\frac1{\sqrt{t+\xi}}-\frac1{\sqrt{t-\xi}}
-\frac1{\sqrt{s+\xi}}+\frac1{\sqrt{s-\xi}}\right|\\
&\leq \frac12(t'-s')\left(\frac{\sqrt{t+\xi}-\sqrt{s+\xi}}{
\sqrt{t+\xi}\sqrt{s+\xi}}+\frac{\sqrt{t-\xi}-\sqrt{s-\xi}}{ \sqrt{s-\xi}\sqrt{t-\xi}}\right)\\
&\leq \frac{C(t'-s')\sqrt{t-s}}{\sqrt{ts(s-s')(t-t')}}
\end{align*}
for some $s'\leq \xi\leq t'$, which shows the lemma.
\end{proof}
\begin{lemma}\label{lem2.5}
For all $t>0$ and $x>y>x'>y'$ we have
\begin{equation}\label{sec2-eq2.7}
|E\left[(u(t,x)-u(t,y))(u(t,x')-u(t,y'))\right]|
\leq \frac1{2\sqrt{t\pi}}(x-y)(x'-y')e^{-\frac{(y-x')^2}{4t}}.
\end{equation}
\end{lemma}
\begin{proof}
We have
\begin{align*}
E[&(u(t,x)-u(t,y))(u(t,x')-u(t,y'))]\\
&=R_{x,x'}(t,t)-R_{x,y'}(t,t)-R_{y,x'}(t,t)+R_{y,y'}(t,t)\\ &=\frac1{2\sqrt{\pi}}\left(\int_0^t\frac1{\sqrt{r}}
\exp\left\{-\frac{(x-x')^2}{4r}\right\}dr
-\int_0^t\frac1{\sqrt{r}}
\exp\left\{-\frac{(x-y')^2}{4r}\right\}dr\right.\\
&\qquad\qquad\left.-\int_0^t\frac1{\sqrt{r}}
\exp\left\{-\frac{(y-x')^2}{4r}\right\}dr
+\int_0^t\frac1{\sqrt{r}}
\exp\left\{-\frac{(y-y')^2}{4r}\right\}dr\right)\\
&=\frac{\sqrt{t}}{\sqrt{\pi}}\left((x-x')\int_{(x-x')}^{+\infty}\frac1{s^2}
e^{-\frac{s^2}{4t}}ds
-(x-y')\int_{(x-y')}^{+\infty}\frac1{s^2}
e^{-\frac{s^2}{4t}}ds\right.\\
&\qquad\qquad\left.-(y-x')\int_{(y-x')}^{+\infty}\frac1{s^2}
e^{-\frac{s^2}{4t}}ds+(y-y')\int_{(y-y')}^{+\infty}\frac1{s^2}
e^{-\frac{s^2}{4t}}ds\right)
\end{align*}
for all $t>0$ and $x>y>x'>y'$. Similar to the proof of Lemma~\ref{lem2.3} we define the function $f:{\mathbb R}_{+}\to {\mathbb R}_{+}$ by
$$
f(x)=x\int_x^{+\infty}\frac1{s^2}e^{-\frac{s^2}{4t}}ds.
$$
Then, we have
\begin{align*}
|E[&(u(t,x)-u(t,y))(u(t,x')-u(t,y'))]|\\
&=\frac{\sqrt{t}}{\sqrt{\pi}}\left|f(x-x')-f(x-y')-f(y-x') +f(y-y')\right|\\
&=\frac{\sqrt{t}}{\sqrt{\pi}}(x-y)\left|f'(\xi-x')-f'(\xi-y')\right|\\
&=\frac1{2\sqrt{t}\sqrt{\pi}}(x-y)\left|
\int_{\xi-x'}^{+\infty}e^{-\frac{s^2}{4t}}ds -\int_{\xi-y'}^{+\infty}e^{-\frac{s^2}{4t}}ds\right|\\
&=\frac1{\sqrt{\pi}}(x-y)
\int_{\xi-x'}^{\xi-y'}\frac1{2\sqrt{t}}e^{-\frac{s^2}{4t}}ds\\
&\leq \frac1{2\sqrt{t\pi}}(x-y)(x'-y')e^{-\frac{(\xi-x')^2}{4t}}\\
&\leq \frac1{2\sqrt{t\pi}}(x-y)(x'-y')e^{-\frac{(y-x')^2}{4t}}
\end{align*}
for some $\xi\in [y,x]$, by the mean value theorem. This completes the proof.
\end{proof}
\begin{lemma}\label{lem2.6}
For all $t>s>0$ and $x\in {\mathbb R}$ denote
$\sigma^2_{t,x}=E\left[u(t,x)^2\right]$, $\sigma^2_{s,x}=E\left[u(s,x)^2\right]$, $\mu_{t,s,x}=E\left[u(t,x)u(s,x)\right]$. Then we have
\begin{equation}\label{sec2-eq2.8}
\frac1{\pi}\sqrt{s(t-s)}\leq \sigma^2_{t,x}\sigma^2_{s,x}-\mu_{t,s,x}^2\leq \frac3{\pi}\sqrt{s(t-s)}.
\end{equation}
\end{lemma}
\begin{proof}
Given $t>s>0$ and $x\in {\mathbb R}$. We have
\begin{align*}
\sigma^2_{t,x}\sigma^2_{s,x}-\mu_{t,s,x}^2& =\frac{1}{2\pi}\left(2\sqrt{ts}-
\left((t+s)^{1/2}-(t-s)^{1/2}\right)^2\right)\\
&=\frac{1}{\pi}\left(\sqrt{ts}-t+\sqrt{t^2-s^2}\right)\\
&=\frac{t}{\pi}\left(\sqrt{z}+\sqrt{1-z^2}-1\right)
\end{align*}
with $z=\frac{s}{t}$. For the lower bound, note that
$$
\left(\sqrt{z}+\sqrt{1-z^2}\right)^2=1+z-z^2+2\sqrt{z(1-z)(1+z)}
\geq 1+z-z^2+2\sqrt{z(1-z)}=\left(1+\sqrt{z(1-z)}\right)^2,
$$
and hence $\sqrt{z}+\sqrt{1-z^2}-1\geq \sqrt{z(1-z)}$
for all $0\leq z\leq 1$. Conversely, we have also that
\begin{align*}
0\leq \sqrt{z}+\sqrt{1-z}-1&=\sqrt{z(1-z)}+ \sqrt{z}+\left(\sqrt{1-z}-\sqrt{z(1-z)}\right)-1\\
&=\sqrt{z(1-z)}+ \sqrt{z}+\sqrt{1-z}\left(1-\sqrt{z}\right)-1\\
&\leq \sqrt{z(1-z)}+\left(\sqrt{z}-z\right)\leq 2\sqrt{z(1-z)},
\end{align*}
which gives
\begin{align*}
\sqrt{z}+\sqrt{1-z^2}-1&=\sqrt{z}+\sqrt{1-z+z-z^2}-1\\
&\leq \sqrt{z}+\sqrt{1-z}-1+\sqrt{z(1-z)}\leq 3\sqrt{z(1-z)}.
\end{align*}
This completes the proof.
\end{proof}
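The two-sided bound $\sqrt{z(1-z)}\leq \sqrt{z}+\sqrt{1-z^2}-1\leq 3\sqrt{z(1-z)}$ used above can also be confirmed numerically on a grid (an illustration only, not part of the proof):

```python
import math

# check the two-sided bound from Lemma 2.6 on a grid of z in (0, 1)
for k in range(1, 1000):
    z = k / 1000.0
    g = math.sqrt(z) + math.sqrt(1.0 - z * z) - 1.0   # middle expression
    w = math.sqrt(z * (1.0 - z))                      # comparison function
    assert w <= g + 1e-12          # lower bound with constant 1
    assert g <= 3.0 * w + 1e-12    # upper bound with constant 3
```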
\begin{lemma}\label{lem2.7}
For all $t>0$ and $x>y$ denote
$\mu_{t,x,y}=E\left[u(t,x)u(t,y)\right]$. Then, under the conditions of Lemma~\ref{lem2.6} we have
\begin{equation}\label{sec2-eq2.9}
\sigma^2_{t,x}\sigma^2_{t,y}-\mu_{t,x,y}^2\asymp \frac{(x-y)t}{\sqrt{t}+x-y}.
\end{equation}
In particular, we have
\begin{equation}\label{sec2-eq2.10}
0\leq \sigma^2_{t,z}-\mu_{t,x,y}\asymp \frac{(x-y)\sqrt{t}}{\sqrt{t}+x-y}
\end{equation}
for all $t>0$ and $x,y,z\in {\mathbb R}$.
\end{lemma}
\begin{proof}
Fix $t>0$ and $x>y$. We have
\begin{align*}
\sigma^2_{t,x}\sigma^2_{t,y}-\mu_{t,x,y}^2&
=\frac{t}{\pi}-\frac{1}{4\pi}
\left(\int_0^t\frac1{\sqrt{r}}
\exp\left\{-\frac{(x-y)^2}{4r}\right\}dr\right)^2\\
&=\frac{1}{4\pi}\left(4t-\left(\int_0^t\frac1{\sqrt{r}}
\exp\left\{-\frac{(x-y)^2}{4r}\right\}dr\right)^2\right)\\
&=\frac{1}{4\pi}\int_0^t\frac{dr}{\sqrt{r}}
\left(1-\exp\left\{-\frac{(x-y)^2}{4r}\right\}\right)\\
&\qquad\cdot
\int_0^t\frac{dr}{\sqrt{r}}
\left(1+\exp\left\{-\frac{(x-y)^2}{4r}\right\}\right)\\
&\asymp \sqrt{t}\int_0^t\frac{dr}{\sqrt{r}}
\left(1-\exp\left\{-\frac{(x-y)^2}{4r}\right\}\right)\\
&=2\sqrt{t}(x-y)\int_{\frac{x-y}{\sqrt{t}}}^\infty
\left(1-e^{-\frac{s^2}{4}}\right)\frac{ds}{s^2}.
\end{align*}
As in Lemma~\ref{lem2.3}, we define the function $f:{\mathbb R}_{+}\to {\mathbb R}_{+}$ by
$$
f(z)=z\int_z^{+\infty}\frac1{s^2} \left(1-e^{-\frac{s^2}{4}}\right)ds.
$$
Then $f$ is continuous in ${\mathbb R}_{+}$ and
$$
\lim_{z\to 0}\frac{f(z)}{z}=\int_0^{+\infty}\frac1{s^2} \left(1-e^{-\frac{s^2}{4}}\right)ds=\frac12\int_0^{+\infty} e^{-\frac{s^2}{4}}ds=\frac{\sqrt{\pi}}2
$$
and
$$
\lim_{z\to \infty}\frac{f(z)}{\frac{z}{1+z}}=1,
$$
which show that the functions $z\mapsto \frac{f(z)}{\frac{z}{1+z}}$ and $z\mapsto \frac{\frac{z}{1+z}}{f(z)}$
are bounded on $(0,\infty)$, that is,
$$
\frac{x-y}{\sqrt{t}}\int_{\frac{x-y}{\sqrt{t}}}^\infty
\left(1-e^{-\frac{s^2}{4}}\right)\frac{ds}{s^2}\asymp \frac{\frac{x-y}{\sqrt{t}}}{1+\frac{x-y}{\sqrt{t}}} =\frac{x-y}{\sqrt{t}+x-y}.
$$
This shows that
\begin{align*}
\sigma^2_{t,x}\sigma^2_{t,y}-\mu_{t,x,y}^2
&\asymp \frac{(x-y)t}{\sqrt{t}+x-y},
\end{align*}
and the lemma follows.
\end{proof}
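As a numerical sanity check of the two limits in this proof (an illustration only), integration by parts gives the closed form $f(z)=1-e^{-\frac{z^2}{4}}+\frac{z\sqrt{\pi}}{2}\,\mathrm{erfc}\big(\frac{z}{2}\big)$, which can be compared against a direct quadrature and against $\frac{z}{1+z}$:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def f(z):
    # closed form of f(z) = z * ∫_z^∞ s^{-2}(1-e^{-s^2/4}) ds, via integration by
    # parts: ∫_z^∞ s^{-2}(1-e^{-s^2/4}) ds = (1-e^{-z^2/4})/z + (√π/2) erfc(z/2)
    return 1.0 - math.exp(-z * z / 4.0) + (z / 2.0) * SQRT_PI * math.erfc(z / 2.0)

# cross-check the closed form against a Riemann sum at z = 1
# (the tail beyond s = 60 is approximated by ∫_60^∞ s^{-2} ds = 1/60)
h = 1e-4
riemann = sum((1.0 - math.exp(-(1.0 + i * h) ** 2 / 4.0)) / (1.0 + i * h) ** 2
              for i in range(590000)) * h + 1.0 / 60.0
assert abs(f(1.0) - riemann) < 1e-3

# f(z)/z -> sqrt(pi)/2 as z -> 0, while f(z)/(z/(1+z)) stays bounded above and below
assert abs(f(1e-6) / 1e-6 - SQRT_PI / 2.0) < 1e-3
for z in [0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 50.0]:
    assert 0.5 < f(z) / (z / (1.0 + z)) < 2.0
```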
The second objective of this section is to discuss Skorohod integrals associated with the solution process $u=\{u(t,x),t\geq 0,x\in {\mathbb R}\}$. From the above discussion we know that the processes $B=\{B_t=u(t,\cdot),t\geq 0\}$ and $W=\{W_x=u(\cdot,x),x\in {\mathbb R}\}$ are neither semimartingales nor Markov processes, so many of the powerful techniques of stochastic analysis are not available when dealing with them. However, since they are Gaussian processes, one can develop a stochastic calculus of variations with respect to them. We refer to Al\'os {\em et al}~\cite{Nua1}, Nualart~\cite{Nua4} and the references therein for more details on the stochastic calculus of Gaussian processes.
Let ${\mathcal E}_t$ and ${\mathcal E}_x$ be, respectively, the sets of linear combinations of the elementary functions $\{1_{I_x},x\in \mathbb{R}\}$ and $\{1_{[0,t]},0\leq t\leq T\}$. Let $\mathcal{H}_t$ and ${\mathcal H}_{\ast}$ be the Hilbert spaces defined as the closures of ${\mathcal E}_t$ and ${\mathcal E}_x$ with respect to the inner products
$$
\langle 1_{I_x},1_{I_y} \rangle_{{\mathcal H}_t}=\frac1{2\sqrt{\pi}}\int_0^t\frac1{\sqrt{s}}
\exp\left\{-\frac{(x-y)^2}{4s}\right\}ds=
\frac{|x-y|}{\sqrt{\pi}}\int_{\frac{|x-y|}{\sqrt{t}}}^\infty \frac1{s^2}e^{-\frac{s^2}{4}}ds
$$
and
$$
\langle 1_{[0,t]},1_{[0,s]} \rangle_{{\mathcal H}_\ast}=\frac1{\sqrt{2\pi}}
\left((t+s)^{1/2}-|t-s|^{1/2}\right),
$$
respectively. The maps $1_{I_x}\mapsto W_x$ and $1_{[0,t]}\mapsto B_t$ are isometries between ${\mathcal E}_t$, ${\mathcal E}_x$ and the Gaussian spaces generated by $\{W_x,x\in {\mathbb R}\}$ and $\{B_t,t\geq 0\}$, respectively, and they can be extended to $\mathcal{H}_t$ and ${\mathcal H}_{\ast}$, respectively. We denote these extensions by
$$
\varphi\mapsto W(\varphi)=\int_{\mathbb R}\varphi(y)dW_y
$$
and
$$
\psi\mapsto B(\psi)=\int_0^T\psi(s)dB_s,
$$
respectively.
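The second expression for $\langle 1_{I_x},1_{I_y} \rangle_{{\mathcal H}_t}$ follows from the first by the substitution $s\mapsto |x-y|^2/s^2$. As a numerical sanity check (an illustration only), a quadrature of the first expression can be compared with the second, evaluated through the closed form $\int_b^{\infty}s^{-2}e^{-\frac{s^2}{4}}ds=\frac{e^{-b^2/4}}{b}-\frac{\sqrt{\pi}}{2}\,\mathrm{erfc}\big(\frac{b}{2}\big)$:

```python
import math

def lhs(t, a):
    # midpoint rule for (1/(2 sqrt(pi))) * ∫_0^t s^{-1/2} e^{-a^2/(4s)} ds
    n = 100000
    h = t / n
    tot = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        tot += math.exp(-a * a / (4.0 * s)) / math.sqrt(s)
    return tot * h / (2.0 * math.sqrt(math.pi))

def rhs(t, a):
    # (a/sqrt(pi)) * ∫_{a/sqrt(t)}^∞ s^{-2} e^{-s^2/4} ds, via the closed form
    # obtained by integration by parts
    b = a / math.sqrt(t)
    integral = (math.exp(-b * b / 4.0) / b
                - (math.sqrt(math.pi) / 2.0) * math.erfc(b / 2.0))
    return a / math.sqrt(math.pi) * integral

for (t, a) in [(1.0, 0.5), (2.0, 1.0), (0.5, 2.0)]:
    assert abs(lhs(t, a) - rhs(t, a)) < 1e-4
```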
Denote by ${\mathcal S}_t$ and ${\mathcal S}_{\ast}$ the sets of smooth functionals of the form
$$
F_t=f(W(\varphi_1),W(\varphi_2),\ldots, W(\varphi_n))
$$
and
$$
F_\ast=f(B(\psi_1),B(\psi_2),\ldots, B(\psi_n)),
$$
where $f\in C^{\infty}_b({\mathbb R}^n)$ ($f$ and all its
derivatives are bounded), $\varphi_i\in {\mathcal H}_t$ and $\psi_i\in {\mathcal H}_{\ast}$. The {\it derivative operators} $D^t$ and $D^{\ast}$ (the Malliavin derivatives) of functionals $F_t$ and $F_\ast$ of the above forms are defined as
$$
D^tF_t=\sum_{j=1}^n\frac{\partial f}{\partial
x_j}(W(\varphi_1),W(\varphi_2),
\ldots,W(\varphi_n))\varphi_j
$$
and
$$
D^{\ast}F_\ast=\sum_{j=1}^n\frac{\partial f}{\partial
x_j}(B(\psi_1),B(\psi_2),\ldots,B(\psi_n))\psi_j,
$$
respectively. These derivative operators $D^t,D^{\ast}$ are then closable from $L^2(\Omega)$ into $L^2(\Omega;{\mathcal H}_t)$ and $L^2(\Omega;{\mathcal H}_\ast)$, respectively. We denote by ${\mathbb D}^{t,1,2}$ and ${\mathbb D}^{\ast,1,2}$ the closures of ${\mathcal S}_t$ and ${\mathcal S}_{\ast}$ with respect to the norms
$$
\|F_t\|_{t,1,2}:=\sqrt{E|F_t|^2+E\|D^{t}F_t\|^2_{{\mathcal H}_t}}
$$
and
$$
\|F_{\ast}\|_{\ast,1,2}:=\sqrt{E|F_{\ast}|^2+E\|D^{\ast}F_{\ast} \|^2_{{\mathcal H}_\ast}},
$$
respectively. The {\it divergence integrals} $\delta^{t}$ and $\delta^{\ast}$ are the adjoints of the derivative operators $D^t$ and $D^{\ast}$, respectively. That is, we say that random variables $v\in L^2(\Omega;{\mathcal H}_t)$ and $w\in L^2(\Omega;{\mathcal H}_\ast)$ belong to the domains of the
divergence operators $\delta^{t}$ and $\delta^{\ast}$, respectively, denoted by ${\rm {Dom}}(\delta^t)$ and ${\rm {Dom}}(\delta^\ast)$ if
$$
E\left|\langle D^{t}F_t,v\rangle_{{\mathcal H}_t}\right|\leq
c\|F_t\|_{L^2(\Omega)}, E\left|\langle D^{\ast}F_\ast,w\rangle_{{\mathcal H}_\ast}\right|\leq
c\|F_\ast\|_{L^2(\Omega)},
$$
respectively, for all $F_t\in {\mathcal S}_t$ and $F_\ast\in {\mathcal S}_\ast$. In these cases $\delta^{t}(v)$ and $\delta^{\ast}(w)$ are defined by the duality relationships
\begin{align}\label{sec2-2-eq2.1}
E\left[F_t\delta^t(v)\right]&=E\langle D^{t}F_t,v\rangle_{{\mathcal H}_t},\\ \label{sec2-2-eq2.2}
E\left[F_\ast\delta^\ast(w)\right]&=E\langle D^{\ast}F_\ast,w\rangle_{{\mathcal H}_\ast},
\end{align}
respectively, for any $v\in {\mathbb D}^{t,1,2}$ and $w\in {\mathbb D}^{\ast,1,2}$. We have ${\mathbb D}^{t,1,2}\subset {\rm {Dom}}(\delta^t)$ and ${\mathbb D}^{\ast,1,2}\subset {\rm {Dom}}(\delta^\ast)$. We will use the notations
$$
\delta^t(v)=\int_{\mathbb R}v_y\delta W_y,\qquad \delta^\ast(w)=\int_0^Tw_s\delta B_s
$$
to express the Skorohod integrals, and the indefinite Skorohod integrals are defined as
$$
\int_{I_x}v_y\delta W_y=\delta^t(v1_{I_x}),\quad \int_0^tw_s\delta B_s=\delta^\ast(w1_{[0,t]}),
$$
respectively. Denote
$$
\tilde{D}\in \{D^t,D^\ast\},\quad \tilde{\delta}\in \{\delta^t,\delta^\ast\},\quad \tilde{{\mathbb D}}^{1,2}\in \{{\mathbb D}^{t,1,2},{\mathbb D}^{\ast,1,2}\}.
$$
We can localize the domains of the operators $\tilde{D}$ and $\tilde{\delta}$. If $\mathbb{L}$ is a class of random variables (or processes) we denote by $\mathbb{L}_{\rm loc}$ the set of random variables $F$ such that there exists a sequence $\{(\Omega_n, F^n), n\geq 1\}\subset {\mathscr F}\times \mathbb{L}$ with the following properties:
\begin{itemize}
\item [(i)] $\Omega_n\uparrow \Omega$, a.s.;
\item [(ii)] $F=F^n$ a.s. on $\Omega_n$.
\end{itemize}
If $F\in \tilde{\mathbb{D}}^{1,2}_{\rm loc}$ and $(\Omega_n, F^n)$ localizes $F$ in $\tilde{\mathbb{D}}^{1,2}$, then $\tilde{D}F$ is defined without ambiguity by $\tilde{D}F=\tilde{D}F^n$ on $\Omega_n$, $n\geq 1$. Then, if $v\in \tilde{\mathbb{D}}^{1,2}_{\rm loc}$, the divergence $\tilde{\delta}(v)$ is defined as a random variable determined by the conditions
$$
\tilde{\delta}(v)|_{\Omega_n}=\tilde{\delta}(v^n)|_{\Omega_n}\qquad {\rm { for\;\; all\;\;}} n\geq 1,
$$
where $(\Omega_n, v^n)$ is a localizing sequence for $v$; the value may depend on the choice of the localizing sequence.
\section{The quadratic covariation of process $\{u(\cdot,x),x\in {\mathbb R}\}$}\label{sec4}
In this section, we study the existence of the
PQC $[f(u(t,\cdot)),u(t,\cdot)]^{(SQ)}$. Recall that
$$
I_\varepsilon^2(f,x,t)=\frac1{\varepsilon} \int_{I_x}\left\{f(u(t,y+\varepsilon))-f(u(t,y))\right\}
(u(t,y+\varepsilon)-u(t,y))dy
$$
for $\varepsilon>0$ and $x\in {\mathbb R}$, and
\begin{equation}\label{sec4-eq4.1}
[f(u(t,\cdot)),u(t,\cdot)]^{(SQ)}_x=\lim_{\varepsilon\downarrow
0}I_\varepsilon^2(f,x,t),
\end{equation}
provided the limit exists in probability. In this section we fix a time parameter $t>0$ and recall that
$$
W=\{W_x=u(\cdot,x),x\in {\mathbb R}\}.
$$
Recall that the local H\"{o}der index $\gamma_0$
of a continuous paths process $\{X_t: t\geq 0\}$ is the supremum of the exponents $\gamma$ verifying, for any $T>0$:
$$
P(\{\omega: \exists L(\omega)>0, \forall s,t \in[0,T],
|X_t(\omega)-X_s(\omega)|\leq L(\omega)|t-s|^\gamma\})=1.
$$
Recently, Gradinaru-Nourdin~\cite{Grad3} introduced the following
very useful result:
\begin{lemma}\label{Grad-Nourdin}
Let $g:{\mathbb R}\to {\mathbb R}$ be a function satisfying
\begin{equation}\label{eq4.2-Gradinaru--Nourdin}
|g(x)-g(y)|\leq C|x-y|^a(1+x^2+y^2)^b,\quad (C>0,0<a\leq 1,b>0),
\end{equation}
for all $x,y\in {\mathbb R}$ and let $X$ be a locally H\"older
continuous paths process with index $\gamma\in (0,1)$. Assume that
$V$ is a bounded variation continuous paths process. Set
$$
X^{g}_\varepsilon(t)=\int_0^tg\left(\frac{X_{s+\varepsilon}-X_s
}{\varepsilon^\gamma}\right)ds
$$
for $t\geq 0$, $\varepsilon>0$. If for each $t\geq 0$, as
$\varepsilon\to 0$,
\begin{equation}\label{condition}
\|X^{g}_\varepsilon(t)-V_t\|_{L^2}^2=O(\varepsilon^\alpha)
\end{equation}
with $\alpha>0$, then, $\lim_{\varepsilon\to
0}X^{g}_\varepsilon(t)=V_t$ almost surely, for any $t\geq 0$, and if $g$ is non-negative, for any continuous stochastic process $\{Y_t:\;t\geq
0\}$,
\begin{equation}
\lim_{\varepsilon\to 0}
\int_0^tY_sg\left(\frac{X_{s+\varepsilon}-X_s}{\varepsilon^\gamma} \right)ds
=\int_0^tY_sdV_s,
\end{equation}
almost surely, uniformly in $t$ on each compact interval.
\end{lemma}
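Before applying the lemma to $W$, it may help to see the prototype case (an illustration with discretization parameters chosen by us, not taken from the text): for a simulated Brownian path, $g(y)=y^2$, $\gamma=1/2$ and $V_t=t$, so $\frac1\varepsilon\int_0^1(B_{s+\varepsilon}-B_s)^2ds$ should be close to $1$:

```python
import math
import random

# prototype of the lemma for Brownian motion: g(y) = y^2, gamma = 1/2, V_t = t
random.seed(7)
n = 20000                # grid points of [0, 1]
dt = 1.0 / n
m = 100                  # eps = m * dt
eps = m * dt
B = [0.0]
for _ in range(n + m):   # simulate B on [0, 1 + eps]
    B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))
# (1/eps) * ∫_0^1 (B_{s+eps} - B_s)^2 ds, discretized on the grid
approx = sum((B[k + m] - B[k]) ** 2 for k in range(n)) * dt / eps
assert abs(approx - 1.0) < 0.5
```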
According to the lemma above, we obtain the following proposition.
\begin{proposition}\label{prop4.1}
Let $f\in C^1({\mathbb R})$. We have
\begin{equation}\label{sec4-eq3.5}
[f(W),W]^{(SQ)}_x=\int_{I_x}f'(W_y)dy
\end{equation}
and in particular, we have
$$
[W,W]^{(SQ)}_x=|x|
$$
for all $x\in {\mathbb R}$.
\end{proposition}
\begin{proof}
In fact, the H\"older continuity of the process $W=\{W_x=u(\cdot,x),x\in {\mathbb R}\}$ yields
\begin{align*}
\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}
\int_{I_x}o\left(|W_{y+\varepsilon}-W_y|\right) (W_{y+\varepsilon}-W_y)dy=0
\end{align*}
for all $x\in {\mathbb R}$, almost surely. It follows that
\begin{align*}
\lim_{\varepsilon\downarrow 0}&\frac1{\varepsilon}
\int_{I_x}\left\{f(W_{y+\varepsilon})-f(W_y)\right\}
(W_{y+\varepsilon}-W_y)dy\\
&=\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}
\int_{I_x}
f'(W_y)(W_{y+\varepsilon}-W_y)^2dy
\end{align*}
almost surely. Thus, to end the proof we need to show that
$$
[W,W]^{(SQ)}_x =|x|
$$
almost surely. That is, it suffices to show that
\begin{equation}
\left\|W^\varepsilon(x)-|x|
\right\|_{L^2}^2 =O(\varepsilon^\alpha)
\end{equation}
with some $\alpha>0$, as $\varepsilon\to 0$, by the above lemma, where
$$
W^\varepsilon(x)=\frac1{\varepsilon} \int_{I_x}(W_{y+\varepsilon}-W_y)^2dy.
$$
We have
$$
E\left|W^\varepsilon(x)-|x| \right|^2=\frac{1}{\varepsilon^2}\int_{I_x}\int_{I_x}
B_\varepsilon(y,z)dydz
$$
for $x\in {\mathbb R}$ and $\varepsilon>0$, where
\begin{align*}
B_\varepsilon(y,z):&=E\left[\left(
(W_{y+\varepsilon}-W_y)^2
-\varepsilon\right)\left(
(W_{z+\varepsilon}-W_z)^2-\varepsilon \right)\right]\\
&=E(W_{y+\varepsilon}-W_y)^2(W_{z+\varepsilon}-W_z)^2
+\varepsilon^2\\
&\qquad-\varepsilon
E\left((W_{y+\varepsilon}-W_y)^2+(W_{z+\varepsilon}-W_z)^2\right).
\end{align*}
Recall that
\begin{align*}
E[(W_{y+\varepsilon}&-W_y)^2]= \frac1{\sqrt{\pi}}\int_0^t\frac{1}{\sqrt{t-r}}\left(
1-\exp\left\{-\frac{\varepsilon^2}{4(t-r)}\right\}\right)dr\\
&=\frac1{\sqrt{\pi}}\int_0^t\frac{1}{\sqrt{r}}\left(
1-e^{-\frac{\varepsilon^2}{4r}}\right)dr\\
&=\sqrt{\frac2{\pi}}\varepsilon \int_{\frac{\varepsilon}{\sqrt{2t}}}^\infty\frac{1}{s^2}\left(
1-e^{-\frac{s^2}{2}}\right)ds \equiv \phi_{t,y}(\varepsilon)+\varepsilon,
\end{align*}
where
\begin{align*}
\phi_{t,y}(\varepsilon)&=\sqrt{\frac2{\pi}}\varepsilon \int_{\frac{\varepsilon}{\sqrt{2t}}}^\infty\frac{1}{s^2}\left(
1-e^{-\frac{s^2}{2}}\right)ds-\varepsilon.
\end{align*}
Noting that
\begin{align*}
E[(W_{y+\varepsilon}&-W_y)^2(W_{z+\varepsilon}-W_z)^2]\\
&=
E\left[(W_{y+\varepsilon}-W_y)^2\right]
E\left[(W_{z+\varepsilon}-W_z)^2\right]\\
&\hspace{1cm}+2\left(E\left[(W_{y+\varepsilon}-W_y)
(W_{z+\varepsilon}-W_z)\right]\right)^{2}
\end{align*}
for all $\varepsilon>0$ and $y,z\in I_x$, we get
\begin{align*}
B_\varepsilon(y,z)&=\phi_{t,y}(\varepsilon)\phi_{t,z}(\varepsilon) +2(\mu_{y,z})^2
\end{align*}
where $\mu_{y,z}:=E\left[(W_{y+\varepsilon}-W_y)
(W_{z+\varepsilon}-W_z)\right]$.
Now, let us estimate the above function $\varepsilon\mapsto \phi_{t,y}(\varepsilon)$. We have
\begin{align*}
\phi_{t,y}(\varepsilon)
&=\sqrt{\frac2{\pi}}\varepsilon\left(\int_{\frac{\varepsilon}{ \sqrt{2t}}}^\infty\frac{1}{s^2}\left(
1-e^{-\frac{s^2}{2}}\right)ds-\sqrt{\frac{\pi}2}\right)\\
&=\sqrt{\frac2{\pi}}\varepsilon\left(\int_{\frac{\varepsilon}{ \sqrt{2t}}}^\infty\frac{1}{s^2}\left(
1-e^{-\frac{s^2}{2}}\right)ds-\int_0^\infty \frac{1}{s^2}\left(
1-e^{-\frac{s^2}{2}}\right)ds\right)\\
&=-\sqrt{\frac2{\pi}}\varepsilon
\int_0^{\frac{\varepsilon}{\sqrt{2t}}}\frac{1}{s^2}\left(
1-e^{-\frac{s^2}{2}}\right)ds\sim -\frac1{2\sqrt{t\pi}}\varepsilon^2\quad (\varepsilon\to 0)
\end{align*}
by the fact
$$
\frac1{\sqrt{2\pi}} \int_0^\infty \frac1{s^2}\left(1-e^{-\frac{s^2}{2}}\right)ds=\frac1{\sqrt{2\pi}} \int_0^\infty e^{-\frac{s^2}{2}}ds=\frac12,
$$
which gives
\begin{align*}
\frac{1}{\varepsilon^2}&\int_{I_x}\int_{I_x} |\phi_{t,y}(\varepsilon)\phi_{t,z}(\varepsilon)|dydz\sim
\frac1{4t\pi}\varepsilon^2x^2\quad (\varepsilon\to 0).
\end{align*}
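As a numerical sanity check of this rate (an illustration only, not part of the proof), integration by parts gives $\int_b^{\infty}s^{-2}\big(1-e^{-\frac{s^2}{2}}\big)ds=\frac{1-e^{-b^2/2}}{b}+\sqrt{\frac{\pi}{2}}\,\mathrm{erfc}\big(\frac{b}{\sqrt{2}}\big)$, from which $\phi_{t,y}(\varepsilon)$ can be evaluated exactly and $|\phi_{t,y}(\varepsilon)|/\varepsilon^2\to \frac1{2\sqrt{t\pi}}$ verified, together with the constant identity used above:

```python
import math

def phi(eps, t):
    # phi_{t,y}(eps) = sqrt(2/pi) * eps * I(eps/sqrt(2t)) - eps, where
    # I(b) = ∫_b^∞ s^{-2}(1-e^{-s^2/2}) ds
    #      = (1-e^{-b^2/2})/b + sqrt(pi/2) * erfc(b/sqrt(2))   (by parts)
    b = eps / math.sqrt(2.0 * t)
    integral = ((1.0 - math.exp(-b * b / 2.0)) / b
                + math.sqrt(math.pi / 2.0) * math.erfc(b / math.sqrt(2.0)))
    return math.sqrt(2.0 / math.pi) * eps * integral - eps

# the constant identity: (1/sqrt(2*pi)) * ∫_0^∞ s^{-2}(1-e^{-s^2/2}) ds = 1/2
h, n = 1e-3, 50000
riemann = sum((1.0 - math.exp(-((i + 0.5) * h) ** 2 / 2.0)) / ((i + 0.5) * h) ** 2
              for i in range(n)) * h + 1.0 / (n * h)   # tail ~ ∫_T^∞ s^{-2} ds
assert abs(riemann / math.sqrt(2.0 * math.pi) - 0.5) < 1e-3

# phi is negative and |phi(eps)| ~ eps^2/(2 sqrt(t*pi)) as eps -> 0
t = 1.0
for eps in [1e-1, 1e-2, 1e-3]:
    assert phi(eps, t) < 0.0
    assert abs(abs(phi(eps, t)) / eps ** 2
               - 1.0 / (2.0 * math.sqrt(t * math.pi))) < 1e-2
```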
It follows from Lemma~\ref{lem2.4} that there is a
constant $\alpha>0$ such that
\begin{align*}
\lim_{\varepsilon\downarrow 0}\frac{1}{
\varepsilon^{1+\alpha}}\int_{I_x}\int_{I_x}B_\varepsilon(y,z)dydz=0
\end{align*}
for all $t>0$ and $x\in {\mathbb R}$, which gives the desired estimate
$$
\left\|W^\varepsilon(x)-|x|\right\|_{L^2}^2=O\left(
\varepsilon^\alpha\right)\qquad (\varepsilon\to 0)
$$
for all $x\in {\mathbb R}$ and some $\alpha>0$.
Notice that $g(y)=y^2$ satisfies the
condition~\eqref{eq4.2-Gradinaru--Nourdin}. We obtain the proposition by taking $Y_y=f'(W_y)$ for $y\in {\mathbb R}$.
\end{proof}
Now, we discuss the existence of the PQC $[f(W),W]^{(SQ)}$. Consider the decomposition
\begin{equation}\label{sec4-eq4.000000}
\begin{split}
I_\varepsilon^1(f,x,t)&=\frac1{\varepsilon}\int_{I_x} f(W_{y+\varepsilon})(W_{y+\varepsilon}-W_y)dy\\
&\hspace{2cm}-\frac1{\varepsilon}\int_{I_x} f(W_y)(W_{y+\varepsilon}-W_y)dy\\
&\equiv I_\varepsilon^{1,+}(f,x,t)-I_\varepsilon^{1,-}(f,x,t)
\end{split}
\end{equation}
for $\varepsilon>0$, and define the set
$$
{\mathscr H}_t=\{f\,:\,{\text {$f$ is a Borel function on ${\mathbb R}$ with $\|f\|_{{\mathscr H}_t}<\infty$}}\}
$$
where
\begin{align*}
\|f\|_{{\mathscr H}_t}^2:&=\frac{|x|}{\sqrt[4]{4\pi t}}\int_{\mathbb R}|f(z)|^2\left(\sqrt{t}+z^2\right) e^{-\frac{\sqrt{\pi}z^2}{2\sqrt{t}}}dz.
\end{align*}
Then ${\mathscr H}_t=L^2({\mathbb R},\mu(dz))$ with
$$
\mu(dz)=\left(\frac{|x|}{\sqrt[4]{4\pi t}}\left(\sqrt{t}+z^2\right) e^{-\frac{\sqrt{\pi}z^2}{2\sqrt{t}}}\right)dz
$$
and $\mu({\mathbb R})=C|x|<\infty$, which implies that the set ${\mathscr E}$ of elementary functions of the form
$$
f_\triangle(z)=\sum_{i}f_{i}1_{(x_{i-1},x_{i}]}(z)
$$
is dense in ${\mathscr H}_t$, where $f_i\in {\mathbb R}$ and $\{x_i,0\leq i\leq l\}$ is a finite sequence of real numbers such that $x_i<x_{i+1}$. Moreover, ${\mathscr H}_t$ includes all Borel functions $f$ satisfying the condition
\begin{equation}
|f(z)|\leq Ce^{\beta {z^2}},\quad z\in {\mathbb R}
\end{equation}
with $0\leq \beta<\frac{\sqrt{\pi}}{4\sqrt{t}}$.
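To illustrate this membership criterion numerically (with the arbitrary choices $t=1$ and $\beta=0.9\cdot\frac{\sqrt{\pi}}{4\sqrt{t}}$, not taken from the text), note that for $f(z)=e^{\beta z^2}$ the integral defining $\|f\|_{{\mathscr H}_t}^2$ reduces to a Gaussian integral with exponent $2\beta-\frac{\sqrt{\pi}}{2\sqrt{t}}<0$:

```python
import math

# the weight in mu(dz) decays like exp(-gamma z^2) with gamma = sqrt(pi)/(2 sqrt(t)),
# so f(z) = exp(beta z^2) has finite H_t-norm exactly when 2*beta < gamma
t = 1.0
gamma = math.sqrt(math.pi) / (2.0 * math.sqrt(t))
threshold = gamma / 2.0          # = sqrt(pi)/(4 sqrt(t))
beta = 0.9 * threshold           # our sample choice, strictly below the threshold
c = gamma - 2.0 * beta           # positive, so the integral converges

def weighted_integral(T=200.0, n=400000):
    # midpoint rule for ∫_{-T}^{T} e^{-c z^2} (sqrt(t) + z^2) dz
    h = 2.0 * T / n
    tot = 0.0
    for i in range(n):
        z = -T + (i + 0.5) * h
        tot += math.exp(-c * z * z) * (math.sqrt(t) + z * z)
    return tot * h

# Gaussian closed form: ∫ e^{-c z^2}(sqrt(t)+z^2) dz = sqrt(pi/c)(sqrt(t)+1/(2c))
exact = math.sqrt(math.pi / c) * (math.sqrt(t) + 1.0 / (2.0 * c))
assert abs(weighted_integral() - exact) / exact < 1e-3
```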
\begin{theorem}\label{th3.1}
Let $f\in {\mathscr H}_t$. Then, the PQC $[f(W), W]^{(SQ)}$ exists in $L^2(\Omega)$ and
\begin{align}
E\left|[f(W), W]^{(SQ)}_x\right|^2\leq C \|f\|_{{\mathscr H}_t}^2
\end{align}
for all $x\in {\mathbb R}$.
\end{theorem}
In order to prove the theorem, we claim that the following two statements hold for $f\in {\mathscr H}_t$:
\begin{itemize}
\item [(1)] for any $\varepsilon>0$ and $x\in {\mathbb R}$, $I_\varepsilon^{1,\pm}(f,x,\cdot)\in L^2(\Omega)$. That is,
\begin{align*}
&E\left|I_\varepsilon^{1,-}(f,x,\cdot)\right|^2\leq C \|f\|_{{\mathscr H}_t}^2,\\
&E\left|I_\varepsilon^{1,+}(f,x,\cdot)\right|^2\leq C \|f\|_{{\mathscr H}_t}^2.
\end{align*}
\item [(2)] $I_\varepsilon^{1,-}(f,x,t)$ and $I_\varepsilon^{1,+}(f,x,t)$ are Cauchy sequences in $L^2(\Omega)$ for all $t>0$ and $x\in {\mathbb R}$. That is,
\begin{equation*}
E\left|I_{\varepsilon_1}^{1,-}(f,x,t)-I_{\varepsilon_2}^{1,-}(f,x,t) \right|^2\longrightarrow 0,
\end{equation*}
and
\begin{equation*}
E\left|I_{\varepsilon_1}^{1,+}(f,x,t) -I_{\varepsilon_2}^{1,+}(f,x,t)\right|^2
\longrightarrow 0
\end{equation*}
for all $x\in {\mathbb R}$, as $\varepsilon_1,\varepsilon_2\downarrow 0$.
\end{itemize}
We split the proof of the two statements into two parts.
\begin{proof}[Proof of the statement (1)]
Recall that $W_x:=u(\cdot,x)$. We have
\begin{align*}
E|I_\varepsilon^{1,-}(f,x,\cdot)|^2&=\frac{1}{\varepsilon^2}
\int_{I_x}\int_{I_x}dydy'E\left[f({W_y})f({W_{y'}}) (W_{y+\varepsilon}-{W_y}) (W_{y'+\varepsilon}-{W_{y'}})\right]
\end{align*}
for all $\varepsilon>0$ and $x\in {\mathbb R}$. Now, let us estimate the expression
$$
\Phi_{\varepsilon_1,\varepsilon_2}(y,y'):
=E\left[f({W_y})f({W_{y'}})(W_{y+\varepsilon_1}-{W_y}) (W_{y'+\varepsilon_2}-{W_{y'}})\right]
$$
for all $\varepsilon_1,\varepsilon_2>0$ and $y,y'\in {\mathbb R}$. To estimate the above expression it is enough, by density, to assume that $f\in {\mathscr E}$; moreover, by smooth approximation we may further assume that $f$ is infinitely differentiable with compact support. It follows from the duality relationship~\eqref{sec2-2-eq2.1} that
\begin{equation}\label{sec7-eq7.9}
\begin{split}
\Phi_{\varepsilon_1,\varepsilon_2}(y,y') &=E\left[f({W_y})f({W_{y'}})(W_{y+\varepsilon_1} -{W_y})\int_{y'}^{y'+\varepsilon_2}\delta W(l)\right]\\
&=E\left[{W_y}(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[f'({W_y})f({W_{y'}})(W_{y+\varepsilon_1}-{W_y})\right]\\
&\quad+E\left[{W_{y'}}(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[f({W_y})f'({W_{y'}})(W_{y+\varepsilon_1}-{W_y})\right]\\
&\quad+E\left[(W_{y+\varepsilon_1}-{W_y}) (W_{y'+\varepsilon_2}-{W_{y'}})\right]
E\left[f({W_y})f({W_{y'}})\right]\\
&=E\left[{W_y}(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[{W_y}(W_{y+\varepsilon_1}-{W_y})\right] E\left[f''({W_y})f({W_{y'}})\right]\\
&\quad+E\left[{W_y}(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[{W_{y'}}(W_{y+\varepsilon_1}-{W_y})\right]
E\left[f'({W_y})f'({W_{y'}})\right]\\
&\quad+E\left[{W_{y'}}(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[{W_y}(W_{y+\varepsilon_1}-{W_y})\right]
E\left[f'({W_y})f'({W_{y'}})\right]\\
&\quad+E\left[{W_{y'}}(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[{W_{y'}}(W_{y+\varepsilon_1}-{W_y})\right]
E\left[f({W_y})f''({W_{y'}})\right]\\
&\quad+E\left[(W_{y+\varepsilon_1}-{W_y}) (W_{y'+\varepsilon_2}-{W_{y'}})\right]
E\left[f({W_y})f({W_{y'}})\right]\\
&\equiv \sum_{j=1}^5\Psi_j(y,y',\varepsilon_1,\varepsilon_2)
\end{split}
\end{equation}
for all $y,y'\in {\mathbb R}$ and $\varepsilon_1,\varepsilon_2>0$. In order to end the proof we need to estimate
$$
\Lambda_j:=\frac{1}{\varepsilon^2}\int_{I_x}\int_{I_x} \Psi_j(y,y',\varepsilon,\varepsilon)dydy',\quad j=1,2,3,4,5
$$
for all $\varepsilon>0$ small enough.
For $j=5$, from the fact
\begin{align*}
|E&\left[(W_{y+\varepsilon}-{W_y}) (W_{y'+\varepsilon}-{W_{y'}})\right]|\leq \varepsilon
\end{align*}
for $0<|y-y'|\leq \varepsilon$, we have
\begin{align*}
\frac{1}{\varepsilon^2}\int_{\substack{|y-y'|\leq \varepsilon\\ y,y'\in I_x}}&|\Psi_5(y,y',\varepsilon,\varepsilon)|dydy' \leq \frac{1}{\varepsilon}\int_{\substack{|y-y'|\leq \varepsilon\\ y,y'\in I_x}}
E\left|f({W_y})f({W_{y'}})\right|dydy'\\
&\leq \frac{1}{2\varepsilon}\int_{\substack{|y-y'|\leq \varepsilon\\ y,y'\in I_x}}
E\left[|f({W_y})|^2+|f({W_{y'}})|^2\right]dydy'\\
&\leq \frac{1}{\varepsilon}\int_{\substack{|y-y'|\leq \varepsilon\\ y,y'\in I_x}}E\left|f({W_y})\right|^2dydy'\\
&\leq \int_{I_x}E\left|f({W_y})\right|^2dy\leq C\|f\|_{{\mathscr H}_t}^2
\end{align*}
for all $\varepsilon>0$ and $x\in {\mathbb R}$. Moreover, for $|y-y'|>\varepsilon$ we have
\begin{align*}
|E&\left[(W_{y+\varepsilon}-{W_y}) (W_{y'+\varepsilon}-{W_{y'}})\right]|\leq \frac1{4\sqrt{t\pi}}\varepsilon^2 e^{-\frac{|y-y'-\varepsilon|^2}{4t}}
\end{align*}
by~\eqref{sec2-eq2.7}, which deduces
\begin{align*}
\frac{1}{\varepsilon^2}&\int_{\substack{|y-y'|>\varepsilon\\ y,y'\in I_x}}|\Psi_5(y,y',\varepsilon,\varepsilon)|dydy'\\
&\leq
\frac1{4\sqrt{t\pi}}\int_{\substack{|y-y'|>\varepsilon\\ y,y'\in I_x}}E\left|f({W_y})f({W_{y'}})\right|
e^{-\frac{|y-y'-\varepsilon|^2}{4t}}dydy'\\
&\leq \frac1{8\sqrt{t\pi}}\int_{\substack{|y-y'|>\varepsilon\\ y,y'\in I_x}}
E\left[|f({W_y})|^2+|f({W_{y'}})|^2\right] e^{-\frac{|y-y'-\varepsilon|^2}{4t}}dydy'\\
&\leq \frac1{4\sqrt{t\pi}}\int_{\substack{|y-y'|>\varepsilon\\ y,y'\in I_x}}E\left|f({W_y})\right|^2 e^{-\frac{|y-y'-\varepsilon|^2}{4t}}dydy'\\
&\leq \frac1{2}\int_{I_x}
E\left|f({W_y})\right|^2dy\int_{-\infty}^\infty
\frac1{\sqrt{2\pi(2t)}}
e^{-\frac{|y-y'-\varepsilon|^2}{4t}}dy'\\
&=\frac1{2}\int_{I_x}E\left|f({W_y})\right|^2dy \leq C\|f\|_{{\mathscr H}_t}^2
\end{align*}
for all $\varepsilon>0$ and $x\in {\mathbb R}$. This shows that
$$
\Lambda_5=\frac{1}{\varepsilon^2}\left|\int_{I_x}\int_{I_x} \Psi_5(y,y',\varepsilon,\varepsilon)dydy'\right|\leq C\|f\|_{{\mathscr H}_t}^2
$$
for all $\varepsilon>0$ and $x\in {\mathbb R}$.
Next, let us estimate $\sum\limits_{j=1}^4\Lambda_j$. We have
\begin{align*}
E\left[f''({W_y})f({W_{y'}})\right]&=\int_{\mathbb{R}^2}
f(x)f(x')\frac{\partial^{2}}{\partial x^2}
\varphi(x,x')dxdx'\\
&=\int_{\mathbb{R}^2} f(x)f(x')\left\{\frac1{\rho^4}(\sigma^2_{t,y'}x-\mu_{t,y,y'}
x')^2-\frac{\sigma^2_{t,y'}}{\rho^2}\right\}\varphi(x,x')dxdx'
\end{align*}
and
\begin{align*}
E[&f'({W_y})f'({W_{y'}})]=\int_{\mathbb{R}^2}
f(x)f(x')\frac{\partial^{2}}{\partial x\partial
x'}\varphi(x,x')dxdx'\\
&=\int_{\mathbb{R}^2} f(x)f(x')\left\{\frac1{\rho^4}(\sigma^2_{t,y}x'-\mu_{t,y,y'}
x)(\sigma^2_{t,y'}x-\mu_{t,y,y'} x')+\frac{\mu_{t,y,y'}}{\rho^2}\right\}\varphi(x,x')dxdx',
\end{align*}
where $\rho^2=\sigma^2_{t,y}\sigma^2_{t,y'}-\mu_{t,y,y'}^2$ and
$\varphi(x,x')$ is the density function of $({W_y},{W_{y'}})$. That is,
$$
\varphi(x,x')=\frac1{2\pi\rho}\exp\left\{-\frac{1}{2\rho^2}\left(
\sigma^2_{t,y'}x^2-2\mu_{t,y,y'}xx'+\sigma^2_{t,y}{x'}^2\right) \right\}.
$$
Combining this with the identity
\begin{align*}
(\sigma^2_{t,y}x'-\mu_{t,y,y'}
x)&(\sigma^2_{t,y'}x-\mu_{t,y,y'} x')\\
&=\rho^2x'\left(x-\frac{\mu_{t,y,y'}}{\sigma^2_{t,y'}}x'\right) -\mu_{t,y,y'}\sigma^2_{t,y'}\left(x -\frac{\mu_{t,y,y'}}{\sigma^2_{t,y'}}x'\right)^2,
\end{align*}
we get
\begin{align*}
E[f''({W_y})&f({W_{y'}})]+E[f'({W_y})f'({W_{y'}})]\\
&=\frac{\mu_{t,y,y'}-\sigma^2_{t,y'}}{\rho^2}\int_{\mathbb{R}^2} f(x)f(x') \varphi(x,x')dxdx'\\
&\qquad+\frac1{\rho^2}\int_{\mathbb{R}^2} f(x)f(x')
x'\left(x-\frac{\mu_{t,y,y'}}{\sigma^2_{t,y'}}x'\right) \varphi(x,x')dxdx'\\
&\qquad+\frac{\sigma^2_{t,y'}}{\rho^4}\left(\sigma^2_{t,y'}- \mu_{t,y,y'}\right)\int_{\mathbb{R}^2} f(x)f(x')
\left(x-\frac{\mu_{t,y,y'}}{\sigma^2_{t,y'}}x'\right)^2 \varphi(x,x')dxdx'\\
&\equiv \Upsilon_1+\Upsilon_2+\Upsilon_3.
\end{align*}
A straightforward calculation shows that
\begin{align*}
\int_{\mathbb{R}^2}|f(x')|^2 &\left(x-\frac{\mu_{t,y,y'}}{\sigma^2_{t,y'}}x'\right)^{2m} \varphi(x,x')dxdx'\\
&=C_m\left(\frac{\rho^2}{\sigma^2_{t,y'}}\right)^m \int_{\mathbb{R}}|f(x')|^2\frac1{\sqrt{2\pi}\sigma_{t,y'}} e^{-\frac{{x'}^2}{2\sigma^2_{t,y'}}}dx'\\
&\leq C_m\left(\frac{\rho^2}{\sqrt{t}}\right)^m \int_{\mathbb{R}}|f(x)|^2\frac1{\sqrt[4]{t}} e^{-\frac{\sqrt{\pi}x^2}{2\sqrt{t}}}dx
\end{align*}
for all $m\geq 1$ and
\begin{align*}
\int_{\mathbb{R}^2}&|f(x)x'|^2 \varphi(x,x')dxdx'\\
&=\int_{\mathbb{R}}|f(x)|^2\frac1{\sqrt{2\pi}\sigma_{t,y}} e^{-\frac{{x}^2}{2\sigma^2_{t,y}}}dx
\int_{\mathbb{R}}|x'|^2\frac{\sigma_{t,y}}{\sqrt{2\pi}\rho} e^{-\frac{\sigma^2_{t,y}}{2\rho^2}
\left(x'-\frac{\mu_{t,y,y'}}{\sigma^2_{t,y}}x\right)^2}dx'\\
&=\int_{\mathbb{R}}|f(x)|^2\frac1{\sqrt{2\pi}\sigma_{t,y}} e^{-\frac{{x}^2}{2\sigma^2_{t,y}}}dx
\left(\frac{\rho^2}{\sigma^2_{t,y}}+
\frac{\mu_{t,y,y'}^2}{\sigma^4_{t,y}}x^2\right)\\
&\leq C\frac1{\sqrt[4]{\pi t}}\int_{\mathbb{R}}|f(x)|^2 e^{-\frac{\sqrt{\pi}x^2}{2\sqrt{t}}}\left(\sqrt{t}+x^2\right)dx
\end{align*}
since $\sigma^2_{t,y}=\sigma^2_{t,y'}=\sqrt{\frac{t}{\pi}}$. It follows that
\begin{align*}
|\Upsilon_1|&\leq \frac{\left|\mu_{t,y,y'}-\sigma^2_{t,y'}\right|}{\rho^2} \int_{\mathbb{R}^2}|f(x)f(x')|\varphi(x,x')dxdx'\leq \frac{C}{\sqrt{t}}\int_{\mathbb{R}}|f(x)|^2\frac1{\sqrt[4]{t}} e^{-\frac{\sqrt{\pi}x^2}{2\sqrt{t}}}dx
\end{align*}
by Lemma~\ref{lem2.7},
\begin{align*}
|\Upsilon_2|&\leq \frac1{\rho^2}\left(\int_{\mathbb{R}^2}|f(x)x'|^2
\varphi(x,x')dxdx'\int_{\mathbb{R}^2}|f(x')|^2
\left|x-\frac{\mu_{t,y,y'}}{\sigma^2_{t,y'}}x'\right|^2 \varphi(x,x')dxdx'\right)^{\frac12}\\
&\leq C\frac{1}{\rho\sqrt[4]{t}}\int_{\mathbb{R}}|f(x)|^2 \frac1{\sqrt[4]{t}} e^{-\frac{\sqrt{\pi}x^2}{2\sqrt{t}}}\left(\sqrt{t}+x^2\right)dx
\end{align*}
and
\begin{align*}
|\Upsilon_3|&\leq
\frac{\sigma^2_{t,y'}}{\rho^4}\left|\sigma^2_{t,y'}- \mu_{t,y,y'}\right|\\
&\quad\cdot\left( \int_{\mathbb{R}^2}|f(x)|^2\varphi(x,x')dxdx'
\int_{\mathbb{R}^2}
|f(x')|^2\left(x-\frac{\mu_{t,y,y'}}{\sigma^2_{t,y'}}x'\right)^4 \varphi(x,x')dxdx'\right)^{1/2}\\
&\leq \frac{C\left|\sigma^2_{t,y'}-\mu_{t,y,y'}\right|}{\rho^2}
\int_{\mathbb{R}}|f(x)|^2\frac1{\sqrt[4]{t}} e^{-\frac{\sqrt{\pi}x^2}{2\sqrt{t}}}dx.
\end{align*}
Thus, we get the estimate
\begin{equation}\label{sec3-eq3.999}
\begin{split}
|E[f''({W_y})f({W_{y'}})]&+E[f'({W_y})f'({W_{y'}})]|\leq |\Upsilon_1|+|\Upsilon_2|+|\Upsilon_3|\\
&\leq \frac{C}{\rho}\int_{\mathbb{R}}|f(x)|^2 \frac1{\sqrt[4]{t}} e^{-\frac{\sqrt{\pi}x^2}{2\sqrt{t}}}\left(\sqrt{t}+x^2\right)dx
\end{split}
\end{equation}
and
\begin{equation}\label{sec3-eq3.1000}
|E\left[f''({W_y})f({W_{y'}})\right]|\leq \frac{C}{|y-y'|}\int_{\mathbb{R}}|f(x)|^2 \frac1{\sqrt[4]{t}} e^{-\frac{\sqrt{\pi}x^2}{2\sqrt{t}}}dx
\end{equation}
by Lemma~\ref{lem2.7} and Lemma~\ref{lem2.3}. Now, we can estimate $\sum\limits_{j=1}^4\Lambda_j$. We have
\begin{align*}
\sum_{j=1}^4&\Psi_j(y,y',\varepsilon,\varepsilon)\\
&=E\left[{W_y}(W_{y'+\varepsilon}-{W_{y'}})\right] E\left[({W_y}-{W_{y'}})(W_{y+\varepsilon}-{W_y})\right] E\left[f''({W_y})f({W_{y'}})\right]\\
&\;\;+E\left[{W_y}(W_{y'+\varepsilon}-{W_{y'}})\right] E\left[{W_{y'}}(W_{y+\varepsilon}-{W_y})\right]\\ &\qquad\qquad\qquad\cdot\left(E\left[f'({W_y})f'({W_{y'}})\right] +E\left[f''({W_y})f({W_{y'}})\right]\right)\\
&\;\;+E\left[{W_{y'}}(W_{y'+\varepsilon}-{W_{y'}})\right] E\left[{W_y}(W_{y+\varepsilon}-{W_y})\right]\\
&\qquad\qquad\qquad\cdot\left(E\left[f'({W_y})f'({W_{y'}})\right]+
E\left[f({W_y})f''({W_{y'}})\right]\right)\\
&\;\;+E\left[{W_{y'}}(W_{y'+\varepsilon}-{W_{y'}})\right] E\left[({W_{y'}}-{W_y})(W_{y+\varepsilon}-{W_y})\right]
E\left[f({W_y})f''({W_{y'}})\right].
\end{align*}
Combining this with~\eqref{sec3-eq3.999},~\eqref{sec3-eq3.1000}, Lemma~\ref{lem2.3} and Lemma~\ref{lem2.5}, we get
\begin{align*}
\left|\sum\limits_{j=1}^4\Lambda_j\right| \leq \frac{1}{\varepsilon^2}\int_{I_x}\int_{I_x} \left|\sum\limits_{j=1}^4\Psi_j(y,y',\varepsilon,\varepsilon)
\right|dydy'\leq C\|f\|_{{\mathscr H}_t}^2
\end{align*}
for all $\varepsilon>0$ and $x\in {\mathbb R}$. This shows that
\begin{align*}
E\left|I_\varepsilon^{1,-}(f,x,\cdot)\right|^2\leq C \|f\|_{{\mathscr H}_t}^2.
\end{align*}
Similarly, one can show the estimate
\begin{align*}
E\left|I_\varepsilon^{1,+}(f,x,\cdot)\right|^2\leq C \|f\|_{{\mathscr H}_t}^2,
\end{align*}
and the first statement follows.
\end{proof}
\begin{proof}[Proof of the statement (2)]
Without loss of generality we assume that $\varepsilon_1>\varepsilon_2$. We prove only the first convergence and similarly one can prove the second convergence. We have
\begin{align*}
E\bigl|&I_{\varepsilon_1}^{1,-}(f,x,t) -I_{\varepsilon_2}^{1,-}(f,x,t) \bigr|^2\\
&=\frac1{\varepsilon_1^2}\int_{I_x}\int_{I_x}Ef(W_y)f(W_{y'})
(W_{y+\varepsilon_1}-W_{y})(W_{y'+\varepsilon_1}-W_{y'})dydy'\\
&\qquad-2
\frac1{\varepsilon_1\varepsilon_2}\int_{I_x}\int_{I_x} Ef(W_y)f(W_{y'})
(W_{y+\varepsilon_1}-W_{y})(W_{y'+\varepsilon_2}-W_{y'})dydy'\\
&\qquad+\frac1{\varepsilon_2^2}\int_{I_x}\int_{I_x}Ef(W_y)f(W_{y'})
(W_{y+\varepsilon_2}-W_{y})(W_{y'+\varepsilon_2}-W_{y'})dydy'\\
&\equiv \frac1{\varepsilon_1^2\varepsilon_2}\int_{I_x}\int_{I_x}
\left\{\varepsilon_2\Phi_{y,y'}(1,\varepsilon_1)-\varepsilon_1
\Phi_{y,y'}(2,\varepsilon_1,\varepsilon_2)\right\}dydy'\\
&\qquad+
\frac1{\varepsilon_1\varepsilon_2^2}\int_{I_x}\int_{I_x}\left\{
\varepsilon_1\Phi_{y,y'}(1,\varepsilon_2)-\varepsilon_2
\Phi_{y,y'}(2,\varepsilon_1,\varepsilon_2)\right\}dydy',
\end{align*}
for all $\varepsilon_1,\varepsilon_2>0$ and $x\in {\mathbb R}$,
where
$$
\Phi_{y,y'}(1,\varepsilon) =E\left[f(W_y)f(W_{y'})(W_{y+\varepsilon}
-W_y)(W_{y'+\varepsilon}-W_{y'})\right],
$$
and
$$
\Phi_{y,y'}(2,\varepsilon_1,\varepsilon_2) =E\left[f(W_y)f(W_{y'})(W_{y+\varepsilon_1}
-W_y)(W_{y'+\varepsilon_2}-W_{y'})\right].
$$
To end the proof it is enough, by density, to assume that $f\in {\mathscr E}$; moreover, by smooth approximation we may further assume that $f$ is infinitely differentiable with compact support. It follows from~\eqref{sec7-eq7.9} that
\begin{align*}
\Phi_{y,y'}(1,\varepsilon)=
\sum_{j=1}^5\Psi_j(y,y',\varepsilon,\varepsilon),\quad
\Phi_{y,y'}(2,\varepsilon_1,\varepsilon_2)=
\sum_{j=1}^5\Psi_j(y,y',\varepsilon_1,\varepsilon_2),
\end{align*}
which give
\begin{align*}
\varepsilon_j&\Phi_{y,y'}(1,\varepsilon_i)-\varepsilon_i
\Phi_{y,y'}(2,\varepsilon_1,\varepsilon_2)\\
&=A_{y,y'}(1,\varepsilon_i,j)E\left[f''({W_y})f({W_{y'}})\right] +A_{y,y'}(2-1,\varepsilon_i,j)E\left[f'({W_y})f'({W_{y'}})\right]\\
&\quad+A_{y,y'}(3,\varepsilon_i,j)E\left[f({W_y})f''({W_{y'}}) \right]+A_{y,y'}(2-2,\varepsilon_i,j)E\left[f'({W_y})f'({W_{y'}}) \right]\\
&\quad+A_{y,y'}(4,\varepsilon_i,j)E\left[f({W_y})f({W_{y'}})\right]
\end{align*}
with $i,j\in \{1,2\}$ and $i\neq j$, where
\begin{align*}
A_{y,y'}(1,\varepsilon,j):=\varepsilon_j &E\left[{W_y}(W_{y'+\varepsilon} -{W_{y'}})\right] E\left[{W_y}(W_{y+\varepsilon}-{W_y})\right]\\
&\qquad-\varepsilon E\left[{W_y}(W_{y'+\varepsilon_2} -{W_{y'}})\right]E\left[{W_y}(W_{y+\varepsilon_1}-{W_y})\right],\\
A_{y,y'}(2-1,\varepsilon,j):=
\varepsilon_j&E\left[{W_y}(W_{y'+\varepsilon}-{W_{y'}})\right] E\left[{W_{y'}}(W_{y+\varepsilon}-{W_y})\right]\\
&\qquad-\varepsilon E\left[{W_y}(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[{W_{y'}}(W_{y+\varepsilon_1}-{W_y})\right],\\
A_{y,y'}(2-2,\varepsilon,j):=\varepsilon_j& E\left[{W_{y'}}(W_{y'+\varepsilon}-{W_{y'}})\right] E\left[{W_y}(W_{y+\varepsilon}-{W_y})\right]\\
&\qquad-\varepsilon E\left[{W_{y'}}(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[{W_y}(W_{y+\varepsilon_1}-{W_y})\right],\\
A_{y,y'}(3,\varepsilon,j):=
\varepsilon_j&E\left[{W_{y'}}(W_{y'+\varepsilon}-{W_{y'}})\right] E\left[{W_{y'}}(W_{y+\varepsilon}-{W_y})\right]\\
&\qquad-\varepsilon E\left[{W_{y'}}(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[{W_{y'}}(W_{y+\varepsilon_1}-{W_y})\right],\\
A_{y,y'}(4,\varepsilon,j):=\varepsilon_j& E\left[(W_{y+\varepsilon}-{W_y}) (W_{y'+\varepsilon}-{W_{y'}})\right]
-\varepsilon E\left[(W_{y+\varepsilon_1}-{W_y}) (W_{y'+\varepsilon_2}-{W_{y'}})\right]
\end{align*}
for all $\varepsilon_1,\varepsilon_2>0$ and $y,y'\in {\mathbb R}$. Now, we claim that the following convergences hold:
\begin{align}\label{sec3-eq3.12}
\frac1{\varepsilon_i^2\varepsilon_j}\int_{I_x}\int_{I_x}
\left\{\varepsilon_j\Phi_{y,y'}(1,\varepsilon_i)-\varepsilon_i
\Phi_{y,y'}(2,\varepsilon_1,\varepsilon_2)\right\}dydy'
\longrightarrow 0
\end{align}
with $i,j\in \{1,2\}$ and $i\neq j$, as $\varepsilon_1,\varepsilon_2\to 0$. To see this, we decompose
\begin{align*}
\varepsilon_j&\Phi_{y,y'}(1,\varepsilon_i)-\varepsilon_i
\Phi_{y,y'}(2,\varepsilon_1,\varepsilon_2)\\
&=\left\{A_{y,y'}(1,\varepsilon_i,j) -A_{y,y'}(2-1,\varepsilon_i,j)\right\} E\left[f''({W_y})f({W_{y'}})\right]\\
&\qquad+A_{y,y'}(2-1,\varepsilon_i,j)
\left\{E\left[f'({W_y})f'({W_{y'}})\right] +E\left[f''({W_y})f({W_{y'}})\right]\right\}\\
&\qquad+\left\{A_{y,y'}(3,\varepsilon_i,j) -A_{y,y'}(2-2,\varepsilon_i,j)\right\}E\left[f({W_y})f''({W_{y'}}) \right]\\
&\qquad+A_{y,y'}(2-2,\varepsilon_i,j)\left\{ E\left[f'({W_y})f'({W_{y'}})\right]+
E\left[f({W_y})f''({W_{y'}}) \right]\right\}\\
&\qquad+A_{y,y'}(4,\varepsilon_i,j)E\left[f({W_y})f({W_{y'}})\right]
\end{align*}
with $i,j\in \{1,2\}$ and $i\neq j$. By symmetry, it suffices to prove the convergence~\eqref{sec3-eq3.12} for $i=1$ and $j=2$.
{\bf Step I.} The following convergence holds:
\begin{align}\label{sec3-eq3.13}
\frac1{\varepsilon_1^2\varepsilon_2}\int_{I_x}\int_{I_x}
A_{y,y'}(4,\varepsilon_1,2)E\left[f({W_y})f(W_{y'})\right] dydy'
\longrightarrow 0,
\end{align}
as $\varepsilon_1,\varepsilon_2\to 0$. We have
\begin{align*}
A_{y,y'}(4,&\varepsilon_1,2)=\varepsilon_2 E\left[(W_{y+\varepsilon_1}-{W_y}) (W_{y'+\varepsilon_1}-{W_{y'}})\right]\\
&\qquad\qquad-\varepsilon_1E\left[(W_{y+\varepsilon_1}-{W_y}) (W_{y'+\varepsilon_2}-{W_{y'}})\right]\\
&=\varepsilon_2\left\{EW_{y+\varepsilon_1}W_{y'+\varepsilon_1}
-E{W_y}W_{y'+\varepsilon_1}-EW_{y+\varepsilon_1}{W_{y'}} +E{W_y}{W_{y'}}\right\}\\
&\qquad\qquad-\varepsilon_1\left\{EW_{y+\varepsilon_1}W_{y'+\varepsilon_2}
-E{W_y}W_{y'+\varepsilon_2}-EW_{y+\varepsilon_1}{W_{y'}} +E{W_y}{W_{y'}}\right\}\\
&=\frac1{2\sqrt{\pi}}\varepsilon_2\left(\int_0^t\frac2{\sqrt{r}}
e^{-\frac{(y-y')^2}{4r}}dr
-\int_0^t\frac1{\sqrt{r}}
e^{-\frac{(y-y'-\varepsilon_1)^2}{4r}}dr-\int_0^t\frac1{\sqrt{r}}
e^{-\frac{(y+\varepsilon_1-y')^2}{4r}}dr\right)\\
&\qquad\qquad-\frac1{2\sqrt{\pi}}\varepsilon_1\left(\int_0^t\frac1{\sqrt{r}}
e^{-\frac{(y-y'+\varepsilon_1-\varepsilon_2)^2}{4r}}dr
-\int_0^t\frac1{\sqrt{r}} e^{-\frac{(y-y'-\varepsilon_2)^2}{4r}}dr\right.\\
&\qquad\qquad\left.-\int_0^t\frac1{\sqrt{r}}
e^{-\frac{(y+\varepsilon_1-y')^2}{4r}}dr
+\int_0^t\frac1{\sqrt{r}}e^{-\frac{(y-y')^2}{4r}}dr\right).
\end{align*}
Consider the following function (see Section~\ref{sec2}), extended to all of ${\mathbb R}$ via the last expression:
$$
f(x)=x\int_{x}^\infty\frac1{s^2}e^{-\frac{s^2}2}ds =e^{-\frac{x^2}2}-x\int_{x}^\infty e^{-\frac{s^2}2}ds.
$$
Then we have
\begin{align*}
A_{y,y'}&(4,\varepsilon_1,2) =\frac{\sqrt{t}}{\sqrt{\pi}}\varepsilon_2\left(
2f(\frac{y-y'}{\sqrt{2t}})-f(\frac{y-y'-\varepsilon_1}{\sqrt{2t}})
-f(\frac{y+\varepsilon_1-y'}{\sqrt{2t}})\right)\\
&-\frac{\sqrt{t}}{\sqrt{\pi}} \varepsilon_1\left(f(\frac{y-y'+\varepsilon_1-\varepsilon_2}{ \sqrt{2t}})-f(\frac{y-y'-\varepsilon_2}{\sqrt{2t}})
-f(\frac{y+\varepsilon_1-y'}{\sqrt{2t}})+
f(\frac{y-y'}{\sqrt{2t}})\right).
\end{align*}
Notice that, by Taylor's expansion
$$
f(x)=1-\sqrt{\frac{\pi}{2}}x+\frac12x^2-\frac1{4!}x^4+o(x^4).
$$
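As a numerical sanity check (outside the formal argument), the following Python sketch verifies both the integration-by-parts identity defining $f$ and the fourth-order Taylor expansion above, using the closed form $\int_x^\infty e^{-s^2/2}ds=\sqrt{\pi/2}\,\operatorname{erfc}(x/\sqrt2)$:

```python
import math

def f_closed(x):
    # f(x) = e^{-x^2/2} - x * int_x^inf e^{-s^2/2} ds, via the erfc tail formula
    tail = math.sqrt(math.pi / 2.0) * math.erfc(x / math.sqrt(2.0))
    return math.exp(-x * x / 2.0) - x * tail

def f_integral(x):
    # f(x) = x * int_x^inf s^{-2} e^{-s^2/2} ds, by trapezoidal quadrature on [x, 10]
    n, a, b = 200000, x, 10.0
    h = (b - a) / n
    g = lambda s: math.exp(-s * s / 2.0) / (s * s)
    return x * h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

def f_taylor(x):
    # 1 - sqrt(pi/2) x + x^2/2 - x^4/4!
    return 1.0 - math.sqrt(math.pi / 2.0) * x + x * x / 2.0 - x ** 4 / 24.0

# the two representations of f agree (for x > 0 the integrand is bounded)
assert abs(f_closed(0.7) - f_integral(0.7)) < 1e-6
# the fourth-order Taylor polynomial matches near 0
for x in (0.01, 0.05, 0.1):
    assert abs(f_closed(x) - f_taylor(x)) < 1e-6
```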
One gets
\begin{align*}
2f(\frac{y-y'}{\sqrt{2t}})&-f(\frac{y-y'-\varepsilon_1}{\sqrt{2t}})
-f(\frac{y-y'+\varepsilon_1}{\sqrt{2t}})\\
&=\frac1{4t}\left\{2(y-y')^2-(y-y'-\varepsilon_1)^2 -(y-y'+\varepsilon_1)^2\right\}\\
&\qquad-\frac1{4\times 4!t^2} \left\{2(y-y')^4-(y-y'-\varepsilon_1)^4 -(y-y'+\varepsilon_1)^4\right\}+\alpha_1\\
&=-\frac1{2t}\varepsilon_1^2+\frac1{4\times 4!t^2} \left(12(y-y')^2\varepsilon_1^2+2\varepsilon_1^4\right)+\alpha_1
\end{align*}
with $\alpha_1=\frac1{4t^2} o\left(12(y-y')^2\varepsilon_1^2+2\varepsilon_1^4\right)$ and
\begin{align*}
&f(\frac{y-y'+\varepsilon_1-\varepsilon_2}{ \sqrt{2t}})-f(\frac{y-y'-\varepsilon_2}{\sqrt{2t}})
-f(\frac{y-y'+\varepsilon_1}{\sqrt{2t}})+
f(\frac{y-y'}{\sqrt{2t}})\\
&=\frac1{4t}\left\{(y-y'+\varepsilon_1-\varepsilon_2)^2 -(y-y'-\varepsilon_2)^2 -(y-y'+\varepsilon_1)^2+(y-y')^2\right\}\\
&\qquad-\frac1{4\times 4!t^2} \left\{(y-y'+\varepsilon_1-\varepsilon_2)^4-(y-y'-\varepsilon_2)^4 -(y-y'+\varepsilon_1)^4+(y-y')^4\right\}+\alpha_2\\
&=-\frac1{2t}\varepsilon_1\varepsilon_2+\frac1{4\times 4!t^2} \left\{12(y-y')^2\varepsilon_1\varepsilon_2 +12(y-y')\varepsilon_1\varepsilon_2(\varepsilon_1-\varepsilon_2) +\varepsilon_1\varepsilon_2\left(4\varepsilon_1^2-6\varepsilon_1\varepsilon_2+4\varepsilon_2^2\right) \right\}+\alpha_2
\end{align*}
with $\alpha_2=\frac1{4t^2} o\left(12(y-y')^2\varepsilon_1\varepsilon_2 +12(y-y')\varepsilon_1\varepsilon_2(\varepsilon_1-\varepsilon_2) +\varepsilon_1\varepsilon_2\left(4\varepsilon_1^2-6\varepsilon_1\varepsilon_2+4\varepsilon_2^2\right)\right)$. It follows that
\begin{align*}
\frac1{\varepsilon_1^2\varepsilon_2}&|A_{y,y'}(4,\varepsilon_1,2)| \leq \frac{C}{t^{3/2}}\left(\varepsilon_1
+\frac{o(\varepsilon_1\varepsilon_2)}{\varepsilon_1\varepsilon_2}
+\frac{o(\varepsilon_1^2)}{\varepsilon_1^2}\right)(|x|^2+|x|+1)
\end{align*}
for all $0<\varepsilon_2<\varepsilon_1<1$ and $y,y'\in I_x$, which shows that the convergence~\eqref{sec3-eq3.13} holds since $f\in {\mathscr H}_t$.
{\bf Step II.} The following convergence holds:
\begin{equation}\label{sec3-eq3.14}
\begin{split}
\frac1{\varepsilon_1^2\varepsilon_2}\int_{I_x}\int_{I_x} &\left\{A_{y,y'}(1,\varepsilon_1,2) -A_{y,y'}(2-1,\varepsilon_1,2)\right\}\\
&\qquad\qquad\cdot E\left[f''({W_y})f({W_{y'}})\right] dydy'
\longrightarrow 0,
\end{split}
\end{equation}
as $\varepsilon_1,\varepsilon_2\to 0$. Keeping the notations in Step I, we have
\begin{align*}
f(\frac{y-y'-\varepsilon}{\sqrt{2t}})&-f(\frac{y-y'}{\sqrt{2t}}) =\frac{\sqrt{\pi}}{2\sqrt{t}}\varepsilon-\frac{1}{4t}\left(2(y-y')-\varepsilon\right) \varepsilon\\
&-\frac1{4\times 4!t^2}\left(-4(y-y')^3+6\varepsilon(y-y')^2-4\varepsilon^2(y-y') +\varepsilon^3\right)\varepsilon
\end{align*}
for all $\varepsilon>0$ and
\begin{align*}
\Delta_1:&=\varepsilon_2\left(f(\frac{y-y'-\varepsilon_1}{\sqrt{2t}}) -f(\frac{y-y'}{\sqrt{2t}})\right)
-\varepsilon_1\left(
f(\frac{y-y'-\varepsilon_2}{\sqrt{2t}}) -f(\frac{y-y'}{\sqrt{2t}})\right)\\
&=\varepsilon_1\varepsilon_2(\varepsilon_1-\varepsilon_2)
\left\{\frac1{4t}-\frac{3(y-y')^2}{2\times 4!t^2}
+\frac{y-y'}{4!t^2}(\varepsilon_1+\varepsilon_2)
-\frac1{4\times 4!t^2}(\varepsilon_1^2+\varepsilon_1\varepsilon_2+\varepsilon_2^2)
\end{align*}
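The leading term of $\Delta_1$ can be checked numerically. The sketch below (not part of the proof) evaluates $\Delta_1$ at $y=y'$ via the closed form of $f$ and confirms that $\Delta_1/(\varepsilon_1\varepsilon_2(\varepsilon_1-\varepsilon_2))\approx \frac1{4t}$ for small $\varepsilon_1,\varepsilon_2$:

```python
import math

def f(x):
    # auxiliary function from Section 2: e^{-x^2/2} - x * int_x^inf e^{-s^2/2} ds
    tail = math.sqrt(math.pi / 2.0) * math.erfc(x / math.sqrt(2.0))
    return math.exp(-x * x / 2.0) - x * tail

def delta1(a, e1, e2, t):
    # Delta_1 = e2*(f((a-e1)/s) - f(a/s)) - e1*(f((a-e2)/s) - f(a/s)), s = sqrt(2t)
    s = math.sqrt(2.0 * t)
    return e2 * (f((a - e1) / s) - f(a / s)) - e1 * (f((a - e2) / s) - f(a / s))

t, e1, e2 = 1.0, 1e-2, 5e-3
ratio = delta1(0.0, e1, e2, t) / (e1 * e2 * (e1 - e2))
# at y = y' the bracket reduces to 1/(4t), up to O(eps^2) corrections
assert abs(ratio - 1.0 / (4.0 * t)) < 1e-3
```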
It follows from~\eqref{sec2-eq2.7} that
\begin{align*}
|A_{y,y'}&(1,\varepsilon_1,2)-A_{y,y'}(2-1,\varepsilon_1,2)|\\
&=|\varepsilon_2E\left[{W_y}(W_{y'+\varepsilon_1} -{W_{y'}})\right] E\left[({W_y}-W_{y'})(W_{y+\varepsilon_1}-{W_y})\right]\\
&\qquad-\varepsilon_1 E\left[W_y(W_{y'+\varepsilon_2}-{W_{y'}})\right] E\left[(W_y-W_{y'})(W_{y+\varepsilon_1}-{W_y})\right]|\\
&=|E\left[(W_y-W_{y'})(W_{y+\varepsilon_1}-{W_y})\right]|\\
&\qquad\qquad\cdot\left|\varepsilon_2E\left[{W_y}(W_{y'+\varepsilon_1} -{W_{y'}})\right]-\varepsilon_1 E\left[W_y(W_{y'+\varepsilon_2}-{W_{y'}})\right]\right|\\
&=|E\left[(W_y-W_{y'})(W_{y+\varepsilon_1}-{W_y})\right]||\Delta_1|\\
&\leq C|y-y'|\varepsilon_1^2\varepsilon_2(\varepsilon_1-\varepsilon_2)
\left\{\frac1{t}+\frac1{t^2}({|x|^2}+|x|+1)\right\}
\end{align*}
for all $0<\varepsilon_2<\varepsilon_1<1$ and $y,y'\in I_x$, which implies that
\begin{align*}
\frac1{\varepsilon_1^2\varepsilon_2}&\int_{I_x}\int_{I_x} \left|A_{y,y'}(1,\varepsilon_1,2) -A_{y,y'}(2-1,\varepsilon_1,2)\right| |E\left[f''({W_y})f({W_{y'}})\right]|dydy'\\
&\leq C(\varepsilon_1-\varepsilon_2)
\left\{\frac1{t}+\frac1{t^2}({|x|^2}+|x|+1)\right\}
\int_{I_x}\int_{I_x}|y-y'||E\left[f''({W_y})f({W_{y'}})\right] |dydy'\\
&\leq C(\varepsilon_1-\varepsilon_2)
\left\{\frac1{t}+\frac1{t^2}({|x|^2}+|x|+1)\right\}
\|f\|^2_{{\mathscr H}_t}\longrightarrow 0,
\end{align*}
as $\varepsilon_1,\varepsilon_2\to 0$, since $f\in {\mathscr H}_t$. Similarly, one can show that the following convergence holds:
\begin{align}\label{sec3-eq3.15}
\frac1{\varepsilon_1^2\varepsilon_2}\int_{I_x}\int_{I_x} \left\{A_{y,y'}(3,\varepsilon_1,2) -A_{y,y'}(2-2,\varepsilon_1,2)\right\} E\left[f({W_y})f''({W_{y'}})\right]dydy'
\longrightarrow 0,
\end{align}
as $\varepsilon_1,\varepsilon_2\to 0$.
{\bf Step III.} The following convergence holds:
\begin{equation}\label{sec3-eq3.16}
\begin{split}
\frac1{\varepsilon_1^2\varepsilon_2}&\int_{I_x}\int_{I_x}
A_{y,y'}(2-1,\varepsilon_1,2)\\
&\qquad \cdot\left\{E\left[f'({W_y})f'({W_{y'}})\right] +E\left[f''({W_y})f({W_{y'}})\right]\right\}dydy'
\longrightarrow 0,
\end{split}
\end{equation}
as $\varepsilon_1,\varepsilon_2\to 0$. By Step II and Lemma~\ref{lem2.3}, we have
\begin{align*}
|A_{y,y'}(2-1,\varepsilon_1,2)|&= |E\left[{W_{y'}}(W_{y+\varepsilon_1}-{W_y})\right]|\\
&\qquad\cdot\left|
\varepsilon_2E\left[{W_y}(W_{y'+\varepsilon_1}-{W_{y'}})\right] -\varepsilon_1 E\left[{W_y}(W_{y'+\varepsilon_2}-{W_{y'}})\right] \right|\\
&=|E\left[{W_{y'}}(W_{y+\varepsilon_1}-{W_y})\right]||\Delta_1|\\
&\leq C\varepsilon_1^2\varepsilon_2(\varepsilon_1-\varepsilon_2)
\left\{\frac1{t}+\frac1{t^2}({|x|^2}+|x|+1)\right\}
\end{align*}
for all $0<\varepsilon_2<\varepsilon_1<1$ and $y,y'\in I_x$, which implies that
\begin{align*}
\frac1{\varepsilon_1^2\varepsilon_2}\int_{I_x}\int_{I_x}
&|A_{y,y'}(2-1,\varepsilon_1,2)| \left|E\left[f'({W_y})f'({W_{y'}})\right] +E\left[f''({W_y})f({W_{y'}})\right]\right|dydy'\\
&\leq
C(\varepsilon_1-\varepsilon_2)
\left\{\frac1{t}+\frac1{t^2}({|x|^2}+|x|+1)\right\}
\|f\|^2_{{\mathscr H}_t}\longrightarrow 0,
\end{align*}
as $\varepsilon_1,\varepsilon_2\to 0$ by~\eqref{sec3-eq3.999}, since $f\in {\mathscr H}_t$. Similarly, one can show that the following convergence holds:
\begin{equation}\label{sec3-eq3.17}
\begin{split}
\frac1{\varepsilon_1^2\varepsilon_2}&\int_{I_x}\int_{I_x}
A_{y,y'}(2-2,\varepsilon_1,2)\\
&\qquad\cdot\left\{E\left[f'({W_y})f'({W_{y'}})\right] +E\left[f({W_y})f''({W_{y'}})\right]\right\}dydy'
\longrightarrow 0,
\end{split}
\end{equation}
as $\varepsilon_1,\varepsilon_2\to 0$. Thus, we have proved the second statement.
\end{proof}
\begin{corollary}\label{cor4-1.1}
Let $f,f_1,f_2,\ldots \in {\mathscr H}_t$ such that $f_n\to f$ in ${\mathscr H}_t$. Then, the convergence
\begin{align}\label{sec7-eq7.6}
[f_n(W),W]^{(SQ)}_x\longrightarrow [f(W), W]^{(SQ)}_x
\end{align}
holds in $L^2(\Omega)$ for all $x\in {\mathbb R}$.
\end{corollary}
\section{The It\^o formula for the process $\{u(\cdot,x),x\in {\mathbb R}\}$}\label{sec4-1}
In this section, as an application of the previous section, we discuss the It\^o calculus of the process $W=\{W_x=u(\cdot,x),x\in {\mathbb R}\}$, where the time parameter $t>0$ is fixed. For a continuous process $X$ admitting a finite quadratic variation $[X,X]$, Russo and Vallois~\cite{Russo-Vallois2,Russo-Vallois3} introduced the following It\^o formula:
$$
F(X_t)=F(X_0)+\int_0^tF'(X_s)d^{-}X_s+\frac1{2}\int_0^tF''(X_s) d[X,X]_s
$$
for all $F\in C^2({\mathbb R})$, where
$$
\int_0^tF'(X_s)d^{-}X_s:={\rm ucp}\lim_{\varepsilon\downarrow 0}\frac1{\varepsilon}\int_0^tF'(X_s) \left(X_{s+\varepsilon}-X_s\right)ds
$$
is called the forward integral, where ${\rm ucp}\lim$ denotes uniform convergence in probability on each compact interval. We refer to Russo and Vallois~\cite{Russo-Vallois2,Russo-Vallois3} and the references therein for more details on the stochastic calculus of continuous processes with finite quadratic variation. It follows from the previous section (the quadratic variation of $\{W_x,x\in {\mathbb R}\}$ equals $|x|$ for all $x\in {\mathbb R}$) that
\begin{equation}\label{sec4-1-eq4.1}
F(W_x)=F(W_0)+\int_{I_x}F'(W_y)d^{-}W_y+\frac1{2}\int_{I_x}F''(W_y)dy
\end{equation}
for all $F\in C^2({\mathbb R})$. Thus, by a smooth approximation argument, we obtain the following It\^o type formula.
\begin{theorem}\label{th4-1.1}
Let $f\in {\mathscr H}_t$ be left continuous. If $F$ is an absolutely continuous function with the derivative $F'=f$,
then the following It\^o type formula holds:
\begin{equation}\label{sec4-1-eq4.1-1}
F(W_x)=F(W_0)+\int_{I_x}f(W_y)d^{-}W_y +\frac1{2}[f(W),W]^{(SQ)}_x.
\end{equation}
\end{theorem}
Clearly, this is an analogue of the F\"ollmer-Protter-Shiryayev formula. It is an improvement in terms of the hypothesis on $f$, and it is also quite interesting in itself. Further details and related work can be found in Eisenbaum~\cite{Eisen1,Eisen2}, Feng--Zhao~\cite{Feng,Feng3}, F\"ollmer {\it et al}~\cite{Follmer}, Moret--Nualart~\cite{Moret}, Peskir~\cite{Peskir1}, Rogers--Walsh~\cite{Rogers2},
Russo--Vallois~\cite{Russo2,Russo-Vallois2,Russo-Vallois3},
Yan {\it et al}~\cite{Yan7,Yan2}, and the references therein. It is well known that when a process is a semimartingale, the forward integral coincides with the It\^o integral. The following theorem shows that the two integrals also coincide for the process $W=\{W_x,x\in {\mathbb R}\}$, even though $W$ is not a semimartingale.
\begin{theorem}\label{th4-1.2}
Let $f$ be left continuous. If $F$ is an absolutely continuous function with the derivative $F'=f$ satisfying the condition
\begin{equation}\label{sec4-1-eq4.8011}
|F(y)|,|f(y)|\leq Ce^{\beta {y^2}},\quad y\in {\mathbb R}
\end{equation}
with $0\leq \beta<\frac{\sqrt{\pi}}{4\sqrt{t}}$, then the following It\^o type formula holds:
\begin{equation}\label{sec4-1-eq4.2}
F(W_x)=F(W_0)+\int_{I_x}f(W_y)\delta W_y +\frac1{2}[f(W_{\cdot}),W_{\cdot}]^{(SQ)}_x.
\end{equation}
\end{theorem}
According to the two theorems above we get the next relationship:
\begin{equation}\label{sec4-1-eq4.3}
\int_{I_x}f(W_y)\delta W_y=\int_{I_x}f(W_y)d^{-}W_y,
\end{equation}
if $f$ satisfies the growth condition~\eqref{sec4-1-eq4.8011}.
\begin{proof}[Proof of Theorem~\ref{th4-1.1}]
If $f\in C^1({\mathbb R})$, then this is It\^o's formula since
$$
\left[f(W),W\right]^{(SQ)}_x=\int_{I_x}f'(W_y)dy.
$$
For $f\not\in C^1({\mathbb R})$, by a localization argument we may
assume that the function $f$ is uniformly bounded. In fact, for any
$k\geq 0$ we may consider the set
$$
\Omega_k=\left\{\sup_{x\in {\mathbb R}}|W_x|<k\right\}
$$
and let $f^{[k]}$ be a measurable function such that $f^{[k]}=f$ on
$[-k,k]$ and $f^{[k]}$ vanishes outside this interval. Then $f^{[k]}$ is uniformly bounded and $f^{[k]}\in {\mathscr H}_t$ for every $k\geq 0$. Set $\frac{d}{dx}F^{[k]}=f^{[k]}$ and $F^{[k]}=F$ on $[-k,k]$. If the theorem is true for all uniformly bounded functions in ${\mathscr H}_t$, then we get the desired formula
$$
F^{[k]}(W_x)=F^{[k]}(W_0)+\int_{I_x}
f^{[k]}(W_y)d^{-}W_y+\frac12[f^{[k]}(W),W]^{(SQ)}_x
$$
on the set $\Omega_k$. Letting $k$ tend to infinity we deduce the It\^o formula~\eqref{sec4-1-eq4.1}.
Let now $F'=f\in {\mathscr H}_t$ be uniformly bounded and left
continuous. Consider the function $\zeta$ on ${\mathbb R}$ by
\begin{equation}
\zeta(x):=
\begin{cases}
ce^{\frac1{(x-1)^2-1}}, &{\text { $x\in (0,2)$}},\\
0, &{\text { otherwise}},
\end{cases}
\end{equation}
where $c$ is a normalizing constant such that $\int_{\mathbb
R}\zeta(x)dx=1$. Define the mollifiers
\begin{equation}\label{sec4-eq00-4}
\zeta_n(x):=n\zeta(nx),\qquad n=1,2,\ldots
\end{equation}
and the sequence of smooth functions
$$
F_n(x):=\int_{\mathbb R}F(x-{y})\zeta_n(y)dy,\quad x\in {\mathbb R}.
$$
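The normalization of the mollifiers can be checked numerically. The following sketch (an illustration only; the approximate value of $c$ is an observed number, not used elsewhere) verifies that each $\zeta_n$ integrates to $1$ and is supported on $(0,2/n)$:

```python
import math

def zeta_unnorm(x):
    # the bump exp(1/((x-1)^2 - 1)) on (0,2), before normalization
    if 0.0 < x < 2.0:
        return math.exp(1.0 / ((x - 1.0) ** 2 - 1.0))
    return 0.0

# compute the normalizing constant c numerically (midpoint rule on (0,2))
n = 100000
h = 2.0 / n
mass = sum(zeta_unnorm((i + 0.5) * h) for i in range(n)) * h
c = 1.0 / mass  # observed: mass is roughly 0.444

def zeta_m(x, m):
    # the mollifier zeta_m(x) = m * zeta(m x)
    return m * c * zeta_unnorm(m * x)

# zeta_m integrates to 1 over its support (0, 2/m); here m = 4
m, k = 4, 50000
step = (2.0 / m) / k
total = sum(zeta_m((i + 0.5) * step, m) for i in range(k)) * step
assert abs(total - 1.0) < 1e-3
```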
Then $F_n\in C^\infty({\mathbb R})$ for all
$n\geq 1$ and the It\^{o} formula
\begin{equation}\label{sec3-eq3-Ito-1}
F_n(W_x)=F_n(W_0)+\int_{I_x}f_n(W_y)d^{-}W_y+
\frac12\int_{I_x}f'_n(W_y)dy
\end{equation}
holds for all $n\geq 1$, where $f_n=F_n'$. Moreover, by using Lebesgue's dominated convergence theorem, one can prove that as $n\to \infty$, for each $x$,
$$
F_n(x)\longrightarrow F(x),\quad f_n(x)\longrightarrow f(x),
$$
and $\{f_n\}\subset {\mathscr H}_t$, $f_n\to f$ in ${\mathscr H}_t$. It follows that
\begin{align*}
\frac12\int_{I_x}f'_n(W_y)dy=\frac12\left[f_n(W),W\right]^{(SQ)}_x
\longrightarrow \frac12\left[f(W),W\right]^{(SQ)}_x
\end{align*}
and
$$
f_n(W_x)\longrightarrow f(W_x)
$$
in $L^2(\Omega)$ by Corollary~\ref{cor4-1.1}, as $n$ tends to infinity. It
follows that
\begin{align*}
\int_{I_x}f_n(W_y)d^{-}W_y&=F_n(W_x)-F_n(W_0)-
\frac12[f_n(W),W]^{(SQ)}_x\\
&\longrightarrow F(W_x)-F(W_0)-
\frac12[f(W),W]^{(SQ)}_x
in $L^2(\Omega)$, as $n$ tends to infinity. This completes the proof since the forward integral is closed in $L^2(\Omega)$.
\end{proof}
Now, similarly to the proof of Theorem~\ref{th4-1.1}, one can prove Theorem~\ref{th4-1.2}. But first we need to establish the following standard It\^o type formula:
\begin{equation}\label{sec4-1-eq4.8}
F(W_x)=F(W_0)+\int_{I_x}F'(W_y)\delta W_y +\frac1{2}\int_{I_x}F''(W_y)dy
\end{equation}
for all $F\in C^2({\mathbb R})$ satisfying the condition
\begin{equation}
|F(y)|,|F'(y)|,|F''(y)|\leq Ce^{\beta {y^2}},\quad y\in {\mathbb R}
\end{equation}
with $0\leq \beta<\frac{\sqrt{\pi}}{4\sqrt{t}}$. It is important to note that a standard It\^o formula has been given for a large class of Gaussian processes in Al\'os {\em et al}~\cite{Nua1}. However, the process $x\mapsto u(\cdot,x)$ does not satisfy the condition in Al\'os {\em et al}~\cite{Nua1} since
\begin{align*}
E\left[u(t,x)^2\right]=\sqrt{\frac{t}{\pi}},\quad \frac{d}{dx}E\left[u(t,x)^2\right]=0
\end{align*}
for all $t\geq 0$ and $x\in {\mathbb R}$. So, we need to give the proof of the formula~\eqref{sec4-1-eq4.8} in order to prove Theorem~\ref{th4-1.2}.
\begin{lemma}\label{lem4-1.1}
Let $x\in {\mathbb R}$ and let $x^n_j=\frac{jx}{n}; j=0,1,\ldots,n$. Then we have
\begin{equation}
\sum_{j=1}^n\left(W_{x^n_j}-W_{x^n_{j-1}}\right)^2\longrightarrow |x|,
\end{equation}
in $L^2$, as $n$ tends to infinity.
\end{lemma}
\begin{proof}
The proof is similar to that of Proposition~\ref{prop4.1}, so we omit it.
\end{proof}
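The lemma can also be checked numerically at the level of second moments. Assuming the stationary covariance $C(h)=\frac1{2\sqrt{\pi}}\int_0^t r^{-1/2}e^{-h^2/(4r)}dr$ read off from the computations of Section~\ref{sec3}, the expected sum of squared increments over a uniform partition of $I_x$ is close to $|x|$; a minimal Python sketch:

```python
import math

def cov(h, t, n=50000):
    # C(h) = (1/(2 sqrt(pi))) int_0^t r^{-1/2} e^{-h^2/(4r)} dr, with r = v^2
    s = math.sqrt(t)
    step = s / n
    total = sum(math.exp(-h * h / (4.0 * ((i + 0.5) * step) ** 2)) for i in range(n))
    return total * step / math.sqrt(math.pi)

t, x, n = 1.0, 1.0, 400
d = x / n
# E[(W_{y+d} - W_y)^2] = 2*(C(0) - C(d)) for a stationary covariance C
qv = n * 2.0 * (cov(0.0, t) - cov(d, t))
# the expected sum of squared increments approximates |x|
assert abs(qv - abs(x)) < 0.05
```

Note that $C(0)=\sqrt{t/\pi}$, matching $E[u(t,x)^2]$ above.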
\begin{proof}[Proof of~\eqref{sec4-1-eq4.8}]
Let us fix $x\in {\mathbb R}$ and let $\pi\equiv \{x^n_j=\frac{jx}{n}; j=0,1,\ldots,n\}$ be a partition of $I_x$. Clearly, the growth condition on $F$, $F'$ and $F''$ implies that
\begin{equation}\label{sec4-1-eq4.8100}
E\left[\sup_{x\in {\mathbb R}}|G(W_x)|^p\right]\leq
c^pE\left[e^{p\beta\sup_{x\in {\mathbb R}}|W_x|^2}\right]<\infty
\end{equation}
for some constant $c>0$ and all $p<\frac{\sqrt{\pi}}{2\beta\sqrt{t}}$, where $G\in \{F,F',F''\}$. In particular, the estimate~\eqref{sec4-1-eq4.8100} holds for $p=2$. Using Taylor expansion, we have
\begin{equation}\label{sec3-eq3.3}
\begin{split}
F(W_x)&=F(W_0)+\sum^{n}_{j=1}
F^{'}(W_{x^n_{j-1}})(W_{x^n_{j}}-W_{x^n_{j-1}})\\
&\qquad+\frac{1}{2}\sum^{n}_{j=1}F^{''}
(W_{j}(\theta_j))(W_{x^n_{j}}-W_{x^n_{j-1}})^{2}\\
&\equiv F(W_0)+I^n +J^n
\end{split}
\end{equation}
where
$W_{j}(\theta_j)=W_{x^n_{j-1}}+\theta_j(W_{x^n_{j}}-W_{x^n_{j-1}})$
with $\theta_j\in (0,1)$ being a random variable. By~\eqref{sec2-eq2.1} we have
\begin{align*}
I^n&=\sum^n_{j=1}F^{'}(W_{x^n_{j-1}})(\delta^{t}(1_{(x^n_{j-1},
x^n_{j}]}))\\
&=\delta^{t}\left(\sum^n_{j=1}F^{'}(W_{x^n_{j-1}})1_{(x^n_{j-1},
x^n_{j}]}(\cdot)\right)
+\sum^n_{j=1}F^{''}(W_{x^n_{j-1}})\langle1_{(0,x^n_{j-1}]},
1_{(x^n_{j-1}, x^n_{j}]}\rangle_{\mathcal{H}_t}\\
& \equiv I^{n}_1+I^{n}_2.
\end{align*}
Now, in order to end the proof we claim that the following convergences in $L^2$ hold:
\begin{align}\label{sec4-1-eq4.11}
I^n_2\longrightarrow-\frac12\int_{I_x}F^{''}(W_y)dy,\\
I^{n}_1\longrightarrow\int_{I_x}F'(W_y)\delta W_y,\\
J^n\longrightarrow\int_{I_x}F^{''}(W_y)dy,
\end{align}
as $n$ tends to infinity.
To prove the first convergence, it is enough to establish that
\begin{align*}
\Lambda_n:=E\left|I^n_2+\frac12\sum_{j=1}^n F^{''}(W_{x^n_{j-1}})(x^n_j-x^n_{j-1})\right|^2\longrightarrow 0,
\end{align*}
as $n$ tends to infinity. By Minkowski inequality we have
\begin{align*}
\sqrt{\Lambda_n}&=\left(E\left|\sum^n_{j=1}F^{''}(W_{x^n_{j-1}}) \left\{\langle1_{(0,x^n_{j-1}]},
1_{(x^n_{j-1}, x^n_{j}]}\rangle_{\mathcal{H}_t}+\frac12 (x^n_j-x^n_{j-1})\right\}\right|^2\right)^{1/2}\\
&\leq C\sum^n_{j=1}\left|\langle1_{(0,x^n_{j-1}]},
1_{(x^n_{j-1}, x^n_{j}]}\rangle_{\mathcal{H}_t}+\frac12 (x^n_j-x^n_{j-1})\right|\\
&=C\sum^n_{j=1}\left|\frac1{2\sqrt{\pi}}\int_0^t \frac1{\sqrt{r}}\left(1-e^{-\frac1{4r}(x^n_j-x^n_{j-1})^2}\right)dr -\frac12 (x^n_j-x^n_{j-1})\right|\\
&=C\sum_{j=1}^n|x^n_j-x^n_{j-1}|
\left|\frac1{\sqrt{2\pi}} \int_{\frac{x^n_j-x^n_{j-1}}{\sqrt{2t}}}^{\infty} \frac1{s^2}\left(1-e^{-\frac{s^2}{2}}\right)ds-\frac12\right|\\
&=C\sum_{j=1}^n|x^n_j-x^n_{j-1}|
\left|\frac1{\sqrt{2\pi}} \int_0^{\frac{x^n_j-x^n_{j-1}}{\sqrt{2t}}} \frac1{s^2}\left(1-e^{-\frac{s^2}{2}}\right)ds\right|\\
&=C|x|\frac1{\sqrt{2\pi}} \int_0^{\frac{|x|}{n\sqrt{2t}}} \frac1{s^2}\left(1-e^{-\frac{s^2}{2}}\right)ds\longrightarrow 0,
\end{align*}
as $n$ tends to infinity by the fact
$$
\frac1{\sqrt{2\pi}} \int_0^\infty \frac1{s^2}\left(1-e^{-\frac{s^2}{2}}\right)ds=\frac1{\sqrt{2\pi}} \int_0^\infty e^{-\frac{s^2}{2}}ds=\frac12.
$$
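This last identity (integration by parts together with $\int_0^\infty e^{-s^2/2}ds=\sqrt{\pi/2}$) can be verified numerically; a quick Python check, outside the formal argument:

```python
import math

def g(s):
    # the integrand (1 - e^{-s^2/2})/s^2, which extends continuously by g(0) = 1/2
    return (1.0 - math.exp(-s * s / 2.0)) / (s * s)

# midpoint rule on (0, b]; beyond b the integrand is essentially s^{-2}
n, b = 200000, 40.0
h = b / n
val = sum(g((i + 0.5) * h) for i in range(n)) * h
val += 1.0 / b  # tail: int_b^inf s^{-2} ds = 1/b, since 1 - e^{-s^2/2} is about 1 there
# (1/sqrt(2 pi)) * integral equals 1/2
assert abs(val / math.sqrt(2.0 * math.pi) - 0.5) < 1e-3
```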
Now, we prove the third convergence. We have
\begin{align*}
\Lambda_n(2):&=E\left|J^n-\int_{I_x} F^{''}(W_{y})dy\right|\\
&=E\left|\frac{1}{2}\sum^{n}_{j=1}F^{''}
(W_{j}(\theta_j))(W_{x^n_{j}}-W_{x^n_{j-1}})^{2}-\int_{I_x} F^{''}(W_{y})dy\right|.
\end{align*}
Suppose that $n\geq m$, and for any $j=1,\ldots,n$ denote by $x^{m(n)}_j$ the point of the $m$th partition that is closest to $x^n_j$ from the left. Then we obtain
\begin{align*}
\Lambda_n(2)&\leq \frac{1}{2}E\left|\sum^{n}_{j=1}\left(F^{''}
(W_{j}(\theta_j))-F^{''}(W_{x^{m(n)}_j})\right) (W_{x^n_{j}}-W_{x^n_{j-1}})^{2}\right|\\
&\qquad+\frac12E\left|\sum_{k=1}^mF^{''}(W_{x^{m(n)}_j})\sum_{
\{j:x^{m(n)}_{k-1}\leq x^{m(n)}_{j-1}<x^{m(n)}_{k}\}}
\left((W_{x_j^n}-W_{x^n_{j-1}})^2-(x_j^n-x^n_{j-1})\right)\right|\\
&\qquad+E\left|\sum_{k=1}^m\int_{x^m_{k-1}}^{x^m_k} \left(F^{''}(W_{x^m_{k-1}})-F^{''}(W_y)\right)dy\right|\\
&\equiv \frac{1}{2}\Lambda_n(2,1)+\frac{1}{2}\Lambda_n(2,2)+\Lambda_n(2,3).
\end{align*}
Clearly, we have that $\Lambda_n(2,2)\to 0$ ($n,m\to \infty$) by Lemma~\ref{lem4-1.1} and the estimate~\eqref{sec4-1-eq4.8100},
\begin{align*}
\Lambda_n(2,3)&\leq |x|E\sup_{|z-y|\leq \frac{|x|}{m}}|F^{''}
(W_{z})-F^{''}(W_{y})|\longrightarrow 0\quad(m\to \infty)
\end{align*}
and
\begin{align*}
\Lambda_n(2,1)&=E\left|\sum^{n}_{j=1}\left(F^{''}
(W_{j}(\theta_j))-F^{''}(W_{x^{m(n)}_j})\right) (W_{x^n_{j}}-W_{x^n_{j-1}})^{2}\right|\\
&\leq CE\left\{
\sup_{|z-y|\leq \frac{|x|}{n}}|F^{''}
(W_{z})-F^{''}(W_{y})|
\sum^{n}_{j=1}(W_{x^n_{j}}-W_{x^n_{j-1}})^{2}\right\}\\
&\leq C\left\{E
\sup_{|z-y|\leq \frac{|x|}{n}}|F^{''}
(W_{z})-F^{''}(W_{y})|^2
E\left(\sum^{n}_{j=1}(W_{x^n_{j}}-W_{x^n_{j-1}})^{2}\right)^2 \right\}^{1/2}\\
&\leq C|x|\left\{E\sup_{|z-y|\leq \frac{|x|}{n}}|F^{''}
(W_{z})-F^{''}(W_{y})|^2\right\}^{1/2}\longrightarrow 0 \quad(n\to \infty)
\end{align*}
by~\eqref{sec2-eq2.2} and the estimate~\eqref{sec4-1-eq4.8100}. Thus, we obtain the third convergence, i.e., $J^n\to \int_{I_x}F^{''}(W_y)dy$ in $L^1$.
Finally, to end the proof we show that the second convergence:
$$
I^{n}_1=\delta^{t}\left(\sum^n_{j=1}F^{'}(W_{x^n_{j-1}}) 1_{(x^n_{j-1},
x^n_{j}]}(\cdot)\right)\longrightarrow\int_{I_x}F'(W_y)\delta W_y \quad (n\to \infty).
$$
We need to show that
$$
A_n:=\sum^n_{j=1}F^{'}(W_{x^n_{j-1}})1_{(x^n_{j-1},x^n_{j}]}(\cdot)
\longrightarrow F^{'}(W_{\cdot})1_{I_x}(\cdot)
$$
in $L^2(\Omega;{\mathcal H}_t)$, as $n$ tends to infinity. We have
\begin{align*}
E\|A_n&-F^{'}(W_{\cdot})1_{I_x}(\cdot)\|^2_{{\mathcal H}_t}\\
&=E\Bigl\|\sum^n_{j=1}\left(F^{'}(W_{x^n_{j-1}})-F^{'}(W_{\cdot}) \right) 1_{(x^n_{j-1},x^n_{j}]}(\cdot)
\Bigr\|^2_{{\mathcal H}_t}\\
&=E\sum^{n}_{j,l=1}\int^{x^n_{j}}_{x^n_{j-1}}
\int_{x^n_{l-1}}^{x^n_{l}}\left|F^{'}(W_{x^n_{j-1}})-F^{'}(W_y) \right|\left|F^{'}(W_{x^n_{l-1}})-F^{'}(W_z)\right|\phi(y,z)dydz\\
&\leq E\sup_{|y-z|\leq \frac{|x|}n}
\left|F^{'}(W_y)-F^{'}(W_z)\right|^2
\int_{I_x}\int_{I_x}\phi(y,z)dydz\\
&=E\sup_{|y-z|\leq \frac{|x|}n}
\left|F^{'}(W_y)-F^{'}(W_z)\right|^2
\int_{I_x}\int_{I_x}\frac1{2\sqrt{\pi t}}e^{-\frac{(y-z)^2}{4t}} dydz\\
&\leq E\sup_{|y-z|\leq \frac{|x|}n}
\left|F^{'}(W_y)-F^{'}(W_z)\right|^2
\int_{I_x}dz\int_{\mathbb R}\frac1{2\sqrt{\pi t}}e^{-\frac{(y-z)^2}{4t}}dy\\
&\leq |x|E\sup_{|y-z|\leq \frac{|x|}n}
\left|F^{'}(W_y)-F^{'}(W_z)\right|^2\longrightarrow 0,
\end{align*}
as $n$ tends to infinity, by the estimate~\eqref{sec4-1-eq4.8100}. This proves the desired convergence of $A_n$ in $L^2(\Omega;{\mathcal H}_t)$, and the above steps show that
\begin{align*}
I^n_1=F(W_x)-F(W_0)-I^n_2-J^n\longrightarrow F(W_x)-F(W_0)-\frac12\int_{I_x}F''(W_y)dy
\end{align*}
in $L^2(\Omega)$, as $n$ tends to infinity. This completes the proof since the Skorohod integral $\delta^t$ is closed in $L^2(\Omega)$.
\end{proof}
\section{The Bouleau-Yor identity of $\{u(\cdot,x),x\geq 0\}$}\label{sec4-2}
In this section, we consider the local time of the process $\{W_x=u(\cdot,x),x\geq 0\}$. Our main objective is to prove that the integral
$$
\int_{\mathbb R}g(a){\mathscr L}^t(x,da)
$$
is well-defined and that the identity
\begin{equation}\label{sec4-2-eq0}
\int_{\mathbb R}g(a){\mathscr L}^t(x,da)=-[g(u(t,\cdot)),u(t,\cdot)]_x^{(SQ)}
\end{equation}
holds for all $g\in {\mathscr H}_t$ and $t>0$, where
$$
{\mathscr L}^t(a,x)=\int_{I_x}\delta(u(t,y)-a)dy
$$
is the local time of $\{W_x=u(\cdot,x),x\geq 0\}$. The identity~\eqref{sec4-2-eq0} is called the Bouleau-Yor identity. Further work on this topic can be found in Bouleau-Yor~\cite{Bouleau}, Eisenbaum~\cite{Eisen1,Eisen2}, F\"ollmer {\it
et al}~\cite{Follmer}, Feng--Zhao~\cite{Feng,Feng3},
Peskir~\cite{Peskir1}, Rogers--Walsh~\cite{Rogers2},
Yan {\it et al}~\cite{Yan7,Yan2}, and the references therein.
Recall that for any closed interval $I\subset {\mathbb R}_{+}$ and for any $a\in {\mathbb R}$, the local time $L(a,I)$ of $u$ is defined as the density of the occupation measure $\mu_I$ defined by
$$
\mu_I(A)=\int_I1_A(W_x)dx.
$$
It can be shown (see Geman and Horowitz~\cite{Geman}, Theorem 6.4) that the following occupation density formula holds:
$$
\int_Ig(W_x,x)dx=\int_{\mathbb R}da\int_Ig(a,x)L(a,dx)
$$
for every Borel function $g(a,x)\geq 0$ on $I\times {\mathbb R}$. Thus, some estimates in Section~\ref{sec2} together with Theorem 21.9 in Geman-Horowitz~\cite{Geman} imply that the following result holds.
\begin{corollary}
The local time ${\mathscr L}^t(a,x):=L(a, [0,x])$ of $W=\{W_x=u(\cdot,x),x\geq 0\}$ exists and ${\mathscr L}^t\in L^2(\lambda\times P)$ for all $x\geq 0$ and $(a,x)\mapsto {\mathscr L}^t(a,x)$ is jointly continuous, where $\lambda$ denotes Lebesgue measure. Moreover, the occupation formula
\begin{equation}\label{sec4-2-eq1}
\int_0^t\psi(W_x,x)dx=\int_{\mathbb R}da\int_0^t\psi(a,x) {\mathscr L}^t(a,dx)
\end{equation}
holds for every continuous and bounded function $\psi(a,x):{\mathbb
R}\times {\mathbb R}_{+}\rightarrow {\mathbb R}$ and any $x\geq 0$.
\end{corollary}
\begin{lemma}
For any $f_\triangle=\sum_jf_j1_{(a_{j-1},a_j]}\in {\mathscr E}$, we define
$$
\int_{\mathbb R}f_\triangle(y){\mathscr
L}^t(dy,x):=\sum_jf_j\left[{\mathscr L}^t(a_j,x)-{\mathscr
L}^{t}(a_{j-1},x)\right].
$$
Then the integral is well-defined and
\begin{equation}\label{sec4-2-eq2}
\int_{\mathbb R}f_{\triangle}(y)\mathscr{L}^t(dy,x)=
-\bigl[f_\triangle(W),W\bigr]^{(SQ)}_x
\end{equation}
almost surely, for all $x\geq 0$.
\end{lemma}
\begin{proof}
For the function $f_\triangle(y)=1_{(a,b]}(y)$ we define the sequence of smooth functions $f_n,\;n=1,2,\ldots$ by
\begin{align}
f_n(y)&=\int_{\mathbb
R}f_\triangle(y-z)\zeta_n(z)dz=\int_a^b\zeta_n(y-z)dz
\end{align}
for all $y\in \mathbb R$, where $\zeta_n,n\geq 1$ are the so-called mollifiers given in~\eqref{sec4-eq00-4}. Then $\{f_n\}\subset
C^{\infty}({\mathbb R})\cap {\mathscr H}_t$ and $f_n$ converges to $f_\triangle$ in ${\mathscr H}_t$, as $n$ tends to infinity. It follows from the occupation formula that
\begin{align*}
[f_n(W),W]^{(SQ)}_x& =\int_0^xf'_n(W_y)dy\\
&=\int_{\mathbb R}f_n'(y){\mathscr L}^t(y,x)dy=\int_{\mathbb
R}\left(\int_a^b\zeta_n'(y-z)dz\right){\mathscr L}^t(y,x)dy\\
&=-\int_{\mathbb R}{\mathscr
L}^t(y,x)\left(\zeta_n(y-b)-\zeta_n(y-a)\right)dy\\
&=\int_{\mathbb R}{\mathscr L}^{t}(y,x)\zeta_n(y-a)dy
-\int_{\mathbb R}{\mathscr L}^{t}(y,x)\zeta_n(y-b)dy\\
&\longrightarrow {\mathscr L}^{t}(a,x)-{\mathscr L}^{t}(b,x)
\end{align*}
almost surely, as $n\to \infty$, by the continuity of $y\mapsto
{\mathscr L}^{t}(y,x)$. On the other hand, we see also that there exists a subsequence $\{f_{n_k}\}$ such that
$$
[f_{n_k}(W),W]^{(SQ)}_x\longrightarrow [1_{(a,b]}(W),W]^{(SQ)}_x
$$
for all $x\geq 0$, almost surely, as $k\to \infty$ since $f_n$ converges to $f_\triangle$ in ${\mathscr H}_t$. It follows that
$$
[1_{(a,b]}(W),W]^{(SQ)}_x=\left({\mathscr L}^{t}(a,x)-{\mathscr L}^{t}(b,x)
\right)
$$
for all $x\geq 0$, almost surely. Thus, the identity
$$
\sum_jf_j[{\mathscr L}^{t}(a_j,x)-{\mathscr L}^{t}(a_{j-1},x)]=
-[f_\triangle(W),W]^{(SQ)}_x
$$
follows from the linearity property, and the lemma follows.
\end{proof}
As a direct consequence of the above lemma, for every $f\in {\mathscr H}_t$ if
$$
\lim_{n\to \infty}f_{\triangle,n}(y)=\lim_{n\to
\infty}g_{\triangle,n}(y)=f(y)
$$
in ${\mathscr H}_t$, where $\{f_{\triangle,n}\},\{g_{\triangle,n}\}\subset {\mathscr E}$, we then have that
\begin{align*}
\lim_{n\to \infty}\int_{\mathbb
R}&f_{\triangle,n}(y){\mathscr L}^{t}(dy,x)
=-\lim_{n\to \infty}[f_{\triangle,n}(W),W]^{(SQ)}_x= -[f(W),W]^{(SQ)}_x\\
&=-\lim_{n\to \infty}[g_{\triangle,n}(W),W]^{(SQ)}_x=\lim_{n\to
\infty}\int_{\mathbb R}g_{\triangle,n}(y){\mathscr L}^{t}(dy,x)
\end{align*}
in $L^2(\Omega)$. Thus, by the denseness of ${\mathscr
E}$ in ${\mathscr H}_t$ we can define
$$
\int_{\mathbb R}f(y){\mathscr L}^{t}(dy,x):=\lim_{n\to
\infty}\int_{\mathbb R}f_{\triangle,n}(y){\mathscr L}^{t}(dy,x)
$$
for any $f\in {\mathscr H}_t$, where $\{f_{\triangle,n}\}\subset
{\mathscr E}$ and
$$
\lim_{n\to \infty}f_{\triangle,n}=f
$$
in ${\mathscr H}_t$. These considerations are enough to prove the following theorem.
\begin{theorem}\label{th4-2.1}
For any $f\in {\mathscr H}_t$, the integral
$$
\int_{\mathbb R}f(y){\mathscr L}^{t}(dy,x)
$$
is well-defined in $L^2(\Omega)$ and the Bouleau-Yor type identity
\begin{equation}\label{sec4-2-eq4}
[f(W),W]^{(SQ)}_x=-\int_{\mathbb R}f(y)\mathscr{L}^{t}(dy,x)
\end{equation}
holds, almost surely, for all $x\geq 0$.
\end{theorem}
\begin{corollary}[Tanaka formula]
For any $a\in {\mathbb R}$ we have
\begin{align*}
(W_x-a)^{+}=(W_0-a)^{+}+\int_0^x{1}_{\{W_y>a\}} \delta W_y+\frac12{\mathscr L}^{t}(a,x),\\
(W_x-a)^{-}=(W_0-a)^{-}-\int_0^x{1}_{\{W_y<a\}}
\delta W_y+\frac12{\mathscr L}^{t}(a,x),\\
|W_x-a|=|W_0-a|+\int_0^x{\rm sign}(W_y-a)\delta W_y+{\mathscr L}^{t}(a,x).
\end{align*}
\end{corollary}
\begin{proof}
Take $F(y)=(y-a)^{+}$. Then $F$ is absolutely continuous and
$$
F(y)=\int_{-\infty}^y1_{(a,\infty)}(z)dz.
$$
It follows from the identity~\eqref{sec4-2-eq2} and the It\^o formula~\eqref{sec4-1-eq4.2} that
\begin{align*}
{\mathscr
L}^{t}(a,x)&=[1_{(a,+\infty)}(W),W]^{(SQ)}_x\\
&=2(W_x-a)^{+}-2(W_0-a)^{+}-2 \int_0^x{1}_{\{W_y>a\}}\delta W_y
\end{align*}
for all $x\geq 0$, which gives the first identity. In the same
way one can obtain the second identity, and by subtracting the last identity from the previous one, we get the third identity.
\end{proof}
According to Theorem~\ref{th4-2.1}, we get an analogue of the It\^o formula (Bouleau-Yor type formula).
\begin{corollary}\label{cor4-2.3}
Let $f\in {\mathscr H}_t$ be a left continuous function with right limits. If $F$ is an absolutely continuous function with $F'=f$, then the following It\^o type formula holds:
\begin{equation}\label{sec4-2-eq5}
F(W_x)=F(W_0)+\int_0^xf(W_y)\delta W_y-\frac12\int_{\mathbb R}f(y){\mathscr L}^{t}(dy,x).
\end{equation}
\end{corollary}
Recall that if $F$ is the difference of two convex functions, then
$F$ is an absolutely continuous function with derivative of bounded
variation. Thus, the It\^o-Tanaka formula
\begin{align*}
F(W_x)&=F(0)+\int_0^xF^{'}(W_y)\delta W_y+\frac12\int_{\mathbb R}{\mathscr L}^{t}(y,x)F''(dy)\\
&\equiv F(0)+\int_0^xF^{'}(W_y)\delta W_y-\frac12\int_{\mathbb R}F'(y){\mathscr L}^{t}(dy,x)
\end{align*}
holds.
\section{The quadratic covariation of the process $\{u(t,\cdot),t\geq 0\}$}\label{sec6}
In this section, we study the existence of the
PQC $[f(u(\cdot,x)),u(\cdot,x)]^{(TQ)}$. Recall that
$$
I_\varepsilon^2(f,x,t)=\frac1{\sqrt{\varepsilon}}\int_0^t \left\{f(u(s+\varepsilon,x))-f(u(s,x))\right\}
(u(s+\varepsilon,x)-u(s,x))\frac{ds}{2\sqrt{s}}
$$
for $\varepsilon>0,t\geq 0$ and $x\in {\mathbb R}$, and
\begin{equation}\label{sec6-eq6.1}
[f(u(\cdot,x)),u(\cdot,x)]^{(TQ)}_t=\lim_{\varepsilon\downarrow
0}I_\varepsilon^2(f,x,t),
\end{equation}
provided the limit exists in probability. In this section, we establish the existence of the PQC $[f(u(\cdot,x)),u(\cdot,x)]^{(TQ)}$ together with the corresponding It\^o and Tanaka formulas. Recall that
\begin{align*}
E\left[u(t,x)^2\right]=\sqrt{\frac{t}{\pi}}
\end{align*}
for all $t\geq 0$ and $x\in {\mathbb R}$. Denote
$$
B_t=u(t,\cdot),\quad t\in [0,T].
$$
It follows from Al\'os {\em et al}~\cite{Nua1} that the It\^o formula
\begin{equation}\label{sec6-eq6.2}
f(B_t)=f(0)+\int_0^tf'(B_s)\delta B_s+\frac1{2\sqrt{2}}\int_0^tf''(B_s)\frac{ds}{\sqrt{2\pi s}}
\end{equation}
holds for all $t\in [0,T]$ and $f\in C^2({\mathbb R})$ satisfying the condition
\begin{equation}\label{sec6-eq6.2000}
|f(x)|,|f'(x)|,|f''(x)|\leq Ce^{\beta {x^2}},\quad x\in {\mathbb R}
\end{equation}
with $0\leq \beta<\frac{\sqrt{\pi}}{4\sqrt{T}}$.
\begin{proposition}\label{lem6.1}
Let $f\in C^1({\mathbb R})$. We have
\begin{equation}\label{sec6-eq6.3}
[f(B),B]^{(TQ)}_t=\int_0^tf'(B_s)\frac{ds}{\sqrt{2\pi s}}
\end{equation}
and in particular, we have
$$
[B,B]^{(TQ)}_t =\sqrt{\frac{2t}{\pi}}
$$
for all $t\geq 0$.
\end{proposition}
\begin{proof}
The proof is similar to that of Proposition~\ref{prop4.1}. It is enough to show that, for each $t\geq 0$,
\begin{equation}\label{sec6-eq6.4}
\left\|B^\varepsilon_t-\sqrt{\frac{2t}{\pi}}
\right\|_{L^2}^2 =O(\varepsilon^\alpha)
\end{equation}
with some $\alpha>0$, as $\varepsilon\to 0$, by Lemma~\ref{Grad-Nourdin}, where
$$
B^\varepsilon_t=\frac1{\sqrt{\varepsilon}} \int_0^t(B_{s+\varepsilon}-B_s)^2d\sqrt{s}.
$$
We have
$$
E\left|B^\varepsilon_t-\sqrt{\frac{2t}{\pi}}\right|^2 =\frac{1}{\varepsilon}\int_0^t\int_0^t
A_\varepsilon(s,r)d\sqrt{s}d\sqrt{r}
$$
for $t\geq 0$ and $\varepsilon>0$, where
\begin{align*}
A_\varepsilon(s,r):&=E\left[\left(
(B_{s+\varepsilon}-B_s)^2
-\sqrt{\frac{2\varepsilon}{\pi}}\right)\left(
(B_{r+\varepsilon}-B_r)^2
-\sqrt{\frac{2\varepsilon}{\pi}}\right)\right]\\
&=E(B_{s+\varepsilon}-B_s)^2(B_{r+\varepsilon}-B_r)^2
+\frac{2\varepsilon}{\pi}\\
&\qquad-\sqrt{\frac{2\varepsilon}{\pi}}
E\left((B_{s+\varepsilon}-B_s)^2+(B_{r+\varepsilon}-B_r)^2\right).
\end{align*}
Define the function $\phi_s:\;{\mathbb R}_{+}\to {\mathbb R}_{+}$ by
$$
\phi_s(x) =\frac1{\sqrt{2\pi}}\left(\sqrt{2(s+x)}
-2\sqrt{2s+x}+\sqrt{2s}\right)
$$
for every $s>0$. Then, we have
\begin{align*}
E\left[(B_{s+\varepsilon}-B_s)^2\right]&= \frac1{\sqrt{2\pi}}\left(\sqrt{2(s+\varepsilon)}
-2\sqrt{2s+\varepsilon}+2\sqrt{\varepsilon}+\sqrt{2s}\right) =\phi_s(\varepsilon)+\sqrt{\frac{2\varepsilon}{\pi}}
\end{align*}
for all $s>0$. Noting that
\begin{align*}
E[(&B_{s+\varepsilon}-B_s)^2(B_{r+\varepsilon}-B_r)^2]\\
&=E\left[(B_{s+\varepsilon}-B_s)^2\right]
E\left[(B_{r+\varepsilon}-B_r)^2\right] +2\left(E\left[(B_{s+\varepsilon}-B_s)
(B_{r+\varepsilon}-B_r)\right]\right)^{2}
\end{align*}
for all $r,s\geq 0$ and $\varepsilon>0$, we get
\begin{align*}
A_\varepsilon(s,r)&=\phi_s(\varepsilon)\phi_r(\varepsilon) +2(\mu_{s,r})^2
\end{align*}
where $\mu_{s,r}:=E\left[(B_{s+\varepsilon}-B_s)
(B_{r+\varepsilon}-B_r)\right]$. Now, let us estimate the function
$$
\phi_s(\varepsilon) =\frac1{\sqrt{2\pi}}\left(\sqrt{2(s+\varepsilon)}
-2\sqrt{2s+\varepsilon}+\sqrt{2s}\right).
$$
Clearly, one can see that
$$
\lim_{x\to 0}\frac{1-2\sqrt{1-x/2}+\sqrt{1-x}}{x^2}=-\frac1{16}
$$
and the continuity of the function $x\mapsto 1-2\sqrt{1-x/2}+\sqrt{1-x}$ implies that
\begin{align*}
|\phi_s(\varepsilon)|&= \frac1{\sqrt{2\pi}}\sqrt{2(s+\varepsilon)}\,\left|1-2\sqrt{1-x/2}+\sqrt{1-x}\right|\leq C\frac{\varepsilon^2}{(s+\varepsilon)^{3/2}}\leq C\frac{\varepsilon^{\frac12+\beta}}{(s+\varepsilon)^\beta}
\end{align*}
with $x=\frac{\varepsilon}{s+\varepsilon}$ and $0<\beta<\frac12$, which gives
\begin{align*}
\frac{1}{\varepsilon}&\int_0^t\int_0^t |\phi_s(\varepsilon)\phi_r(\varepsilon)|d\sqrt{s}d\sqrt{r}\leq Ct^{1-2\beta}\varepsilon^{2\beta}.
\end{align*}
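The constant $-\frac1{16}$ in the limit used above follows from the second-order Taylor expansions
\begin{align*}
\sqrt{1-x/2}&=1-\frac{x}{4}-\frac{x^2}{32}+O(x^3),\qquad
\sqrt{1-x}=1-\frac{x}{2}-\frac{x^2}{8}+O(x^3),
\end{align*}
which give
$$
1-2\sqrt{1-x/2}+\sqrt{1-x}=\frac{x^2}{16}-\frac{x^2}{8}+O(x^3)=-\frac{x^2}{16}+O(x^3).
$$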
It follows from Lemma~\ref{lem2.4} that there is a
constant $\alpha>0$ such that
\begin{align*}
\lim_{\varepsilon\downarrow 0}\frac{1}{
\varepsilon^{1+\alpha}}\int_0^t\int_0^tA_\varepsilon(s,r) d\sqrt{s}d\sqrt{r}=0
\end{align*}
for all $t>0$, which gives the desired estimate
$$
\left\|B^\varepsilon_t-\sqrt{\frac{2t}{\pi}} \right\|_{L^2}^2=O\left(
\varepsilon^\alpha\right)\qquad (\varepsilon\to 0)
$$
for each $t\geq 0$ and some $\alpha>0$. This completes the proof.
\end{proof}
Consider the decomposition
\begin{equation}\label{sec6-eq6.5}
\begin{split}
I_\varepsilon^2(f,x,t)&=\frac1{\sqrt{\varepsilon}}\int_0^t f(B_{s+\varepsilon}) (B_{s+\varepsilon}-B_s)\frac{ds}{2\sqrt{s}}\\
&\hspace{2cm}-\frac1{\sqrt{\varepsilon}}\int_0^t f(B_s)(B_{s+\varepsilon}-B_s)\frac{ds}{2\sqrt{s}}\\
&\equiv I_\varepsilon^{2,+}(f,x,t)-I_\varepsilon^{2,-}(f,x,t)
\end{split}
\end{equation}
for $\varepsilon>0$, and by estimating the two terms on the right-hand side above in $L^2(\Omega)$ one can construct the following Banach space:
$$
{\mathscr H}_{\ast}=\{f\,:\,{\text { Borel functions on ${\mathbb R}$ such that $\|f\|_{{\mathscr H}_{\ast}}<\infty$}}\},
$$
where
\begin{align*}
\|f\|_{{\mathscr H}_{\ast}}^2:&=\frac{1}{2\sqrt[4]{4\pi}}\int_0^T\int_{\mathbb
R}|f(z)|^2e^{-\frac{z^2\sqrt{\pi}}{2\sqrt{s}}} \frac{dzds}{s^{3/4}}\equiv \int_0^TE|f(B_s)|^2\frac{ds}{2\sqrt{s}}.
\end{align*}
Clearly, ${\mathscr H}_\ast=L^2({\mathbb R},\mu(dz))$ with
$$
\mu(dz)=\left(
\frac{1}{2\sqrt[4]{4\pi}}\int_0^T e^{-\frac{z^2\sqrt{\pi}}{2\sqrt{s}}} \frac{ds}{s^{3/4}}\right)dz,
$$
and ${\mathscr H}_\ast$ includes all functions $f$ satisfying the condition
$$
|f(x)|\leq Ce^{\beta {x^2}},\quad x\in {\mathbb R}
$$
with $0\leq \beta<\frac{\sqrt{\pi}}{4\sqrt{T}}$. In the same way as in the proof of Theorem~\ref{th3.1}, and by smooth approximation, one can obtain the following result.
\begin{theorem}\label{th6.1}
The PQC $[f(B),B]^{(TQ)}$ exists and
\begin{align}\label{sec6-eq6.6}
E\left|[f(B),B]^{(TQ)}_t\right|^2\leq C \|f\|_{{\mathscr H}_{\ast}}^2
\end{align}
for all $f\in {\mathscr H}_{\ast}$ and $t\in [0,T]$. Moreover, if $F$ is an absolutely continuous function such that
$$
|F(x)|,|F'(x)|\leq Ce^{\beta {x^2}},\quad x\in {\mathbb R}
$$
with $0\leq \beta<\frac{\sqrt{\pi}}{4\sqrt{T}}$, then the following It\^o type formula holds:
\begin{equation}
F(B_t)=F(0)+\int_0^tF'(B_s)\delta B_s+\frac1{2\sqrt{2}}[F'(B),B]^{(TQ)}_t
\end{equation}
for all $t\in [0,T]$.
\end{theorem}
Recall that Russo and Tudor~\cite{Russo-Tudor} have shown that
$B=\{B_t=u(t,\cdot),t\geq 0\}$ admits a local time $L(t,a)\in L^2(\lambda\times P)$ such that $(a,t)\mapsto L(a,t)$ is jointly continuous, where $\lambda$ denotes Lebesgue measure, since $B$ is a bi-fractional Brownian motion for every $x\in {\mathbb R}$. Define the weighted local time ${\mathscr L}$ of $B$ by
\begin{align*}
{\mathscr L}(x,t)&=\int_0^t\frac{1}{2\sqrt{\pi s}}d_sL(s,x)\equiv \int_0^t\delta(B_s-x)\frac{ds}{2\sqrt{\pi s}}
\end{align*}
for $t\geq 0$ and $x\in {\mathbb R}$, where $\delta$ is the Dirac delta function. Then, the occupation formula
\begin{equation}\label{sec6-eq6.8}
\int_0^t\psi(B_s,s)\frac{ds}{2\sqrt{\pi s}}=\int_{\mathbb R}da\int_0^t\psi(a,s){\mathscr L}(a,ds)
\end{equation}
holds for every continuous and bounded function $\psi:{\mathbb
R}\times {\mathbb R}_{+}\rightarrow {\mathbb R}$ and any $t\geq 0$. As in Section~\ref{sec4-2}, we can show that the integral
$$
\int_{\mathbb R}f_\triangle(x){\mathscr
L}(dx,t):=\sum_jf_j\left[{\mathscr L}(a_j,t)-{\mathscr
L}(a_{j-1},t)\right]
$$
is well-defined and
\begin{equation}\label{sec6-eq6.9}
\int_{\mathbb R}f_\triangle(x)\mathscr{L}(dx,t)=
-\frac1{\sqrt{2}}[f_\triangle(B),B]^{(TQ)}_t
\end{equation}
almost surely, for all $f_\triangle=\sum_jf_j1_{(a_{j-1},a_j]}\in {\mathscr E}$. By the denseness of ${\mathscr
E}$ in ${\mathscr H}_{\ast}$ one can define
$$
\int_{\mathbb R}f(x){\mathscr L}(dx,t):=\lim_{n\to
\infty}\int_{\mathbb R}f_{\triangle,n}(x){\mathscr L}(dx,t)
$$
for any $f\in {\mathscr H}_{\ast}$, where $\{f_{\triangle,n}\}\subset {\mathscr E}$ and
$$
\lim_{n\to \infty}f_{\triangle,n}=f
$$
in ${\mathscr H}_{\ast}$. Moreover, the Bouleau-Yor type formula
\begin{equation}\label{sec6-eq6.10}
[f(B),B]^{(TQ)}_t=-\sqrt{2}\int_{\mathbb R}f(x)\mathscr{L}(dx,t)
\end{equation}
holds, almost surely, for all $f\in {\mathscr H}_{\ast}$.
\begin{corollary}[Tanaka formula]
For any $x\in {\mathbb R}$ we have
\begin{align*}
|B_t-x|=|x|+\int_0^t{\rm sign}(B_s-x)\delta B_s+{\mathscr L}(x,t).
\end{align*}
\end{corollary}
\section{Introduction}
\IEEEPARstart{T}{he} problem of coordinating task-level and motion-level
operations for multi-robot systems arises in many real-world scenarios. A
simple example is an automated-warehouse system where heavy robots move
inventory pods in a space inhabited by humans. The robots may have to avoid
close proximity to humans and each other; they may have to compete for
resources with each other; and, yet, they have to work toward a common
objective~\cite{kiva}. Another example is airport surface operations where
towing vehicles autonomously navigate to aircraft and tow them to their
destinations~\cite{airporttug16}. This task-level coordination has to be done
in conjunction with the motion-level coordination of action primitives so that
each robot has a kinematically feasible plan.
The coordination of task-level and motion-level operations for multi-robot
systems requires a large search space. Current technologies are inadequate for
addressing the complexity of the problem, which becomes even worse since we
have to take imperfections in plan execution into account. For example,
exogenous events may not be included in the domain model. Even if they are,
they can often be modeled only probabilistically~\cite{MaAAAI17}.
In this article, we present an overview of our hierarchical framework for the
long-term autonomy of multi-robot systems. Our framework combines techniques
from automated artificial intelligence (AI) planning, temporal reasoning and
robotics. Figure~\ref{fig:architecture} shows its architecture for a small
example.
The plan-generation phase uses a state-of-the-art AI
planner~\cite{CohenUK16,MaAAMAS16} for causal reasoning about the task-level
actions of the robots, independent of their kinematic constraints to achieve
scalability. It then identifies the dependencies between the preconditions and
effects of the actions in the generated plan and compiles them into a temporal
plan graph (TPG), which encodes their partial temporal order. Finally, it
annotates the TPG with quantitative information that captures some kinematic
constraints associated with executing the actions. This converts the TPG into
a simple temporal network (STN) from which a plan (including its execution
schedule) can be generated in polynomial time that takes some of the kinematic
constraints of the robots into account (for simplicity called a kinematically
feasible plan in the following), namely by exploiting the slack in the
STN. The term ``slack'' refers to the existence of an entire class of plans
consistent with the STN, allowing us to narrow down the class of plans to a
single kinematically feasible plan. A similar notion of slack is well studied
for STNs in general in the temporal-reasoning community.
The plan-execution phase also exploits the slack in the STN, namely for
absorbing any imperfect plan execution to avoid time-consuming re-planning in
many cases.
\begin{figure*}
\center
\includegraphics[width=\textwidth]{architecture}
\caption{Architecture of our hierarchical framework. First, we discretize
the continuous MAPF problem in time and space and use an AI planner to
solve the resulting NP-hard problem. Then, we solve the STN for the
resulting discrete MAPF plan in polynomial time to generate a
kinematically feasible plan that provides guaranteed safety distances
among robots under the assumption of perfect plan execution. Control uses
specialized robot controllers during plan execution to exploit the slack
in the plan to try to absorb any imperfect plan execution. If this does
not work, partial dynamic re-planning re-solves a suitably modified STN in
polynomial time. Only if this does not work either, partial dynamic
re-planning re-solves a suitably modified MAPF problem more slowly.}
\label{fig:architecture}
\end{figure*}
We use a multi-robot path-planning problem as a case study to present the key
ideas behind our framework and demonstrate it both in simulation and on real
robots.
\section{Plan Generation}
We use a state-of-the-art AI planner for reasoning about the causal
interactions among actions. In the multi-agent path-finding (MAPF) problem,
which is well studied in AI, robotics and theoretical computer science, the
causal interactions are studied oblivious to the kinematic constraints of the
robots. We are given a graph with vertices (that correspond to locations) and
unit-length edges between them. Each edge connects two different vertices and
corresponds to a narrow passageway between the corresponding locations in
which robots cannot pass each other. Given a set of robots with assigned start
vertices and targets (goal vertices), we have to find collision-free paths for
the robots from their start vertices to their targets (where the robots
remain) that minimize the makespan (or some other measure of the cost, such as
the flowtime). At each timestep, a robot can either wait at its current vertex
or traverse a single edge. Two robots collide when they are at the same vertex
at the same timestep or traverse the same edge at the same timestep in
opposite directions.
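As a concrete illustration of the two collision conditions above, the following sketch (our own illustrative code with hypothetical paths, not taken from the article) checks whether two discrete MAPF paths collide:

```python
def has_collision(path_a, path_b):
    """Check the two discrete MAPF collision conditions between two
    equal-length paths, each given as a list of vertices (one per timestep)."""
    assert len(path_a) == len(path_b)
    for t in range(len(path_a)):
        # Vertex collision: both robots occupy the same vertex at timestep t.
        if path_a[t] == path_b[t]:
            return True
        # Edge collision: both robots traverse the same edge during the same
        # timestep in opposite directions (a "swap").
        if t > 0 and path_a[t - 1] == path_b[t] and path_a[t] == path_b[t - 1]:
            return True
    return False
```

A MAPF plan is collision-free exactly when this check returns `False` for every pair of robots, with paths of robots that reach their targets early padded with wait actions at the target.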
The MAPF problem is NP-hard to solve optimally or bounded sub-optimally since
it is NP-hard to approximate within any constant factor (called the
sub-optimality guarantee) less than 4/3 \cite{MaAAAI16}. Yet, powerful MAPF
planners have recently been developed in the AI community that can find
(optimal or bounded sub-optimal) collision-free plans for hundreds of robots at
the cost of ignoring the kinematic constraints of real robots
\cite{MaAAMAS16,CohenUK16,MaAAAI17,MaAAMAS17}. We report on two of our own
contributions to such MAPF planners below.
\subsection{Consistency and Predictability of Motion}
For many real-world multi-robot systems, the consistency and predictability of
robot motions is important (especially in work spaces shared by humans and
robots), which is not taken into account by existing MAPF planners. We have
shown that we can adapt AI planners, such as the bounded-sub-optimal MAPF
planner Enhanced Conflict-Based Search (ECBS)~\cite{ECBS}, to generate paths
that include edges from a user-provided set of edges (called highways)
whenever the sub-optimality guarantee allows it, which makes the robot motions
more consistent and thus predictable. The highways can be an arbitrary set of
edges and thus be chosen to suit the humans. For example, highways need to be
created only in the part of the environment where the consistency of robot
motions is important. Furthermore, highways provide only suggestions but not
restrictions. Poorly chosen highways do not make a MAPF instance unsolvable
although they can make the MAPF planner less efficient. On the other hand,
well chosen highways typically speed up the MAPF planner because they avoid
front-to-front collisions between robots that travel in opposite directions.
Our version of the ECBS planner with highways either inflates the heuristic
values or the edge costs non-uniformly in a way that encourages path finding
to return paths that include the edges of the highways
\cite{DBLP:conf/socs/CohenUK15}. For example, we can place highways in an
automated-warehouse system along the narrow passageways between the storage
locations as shown by the red arrows in Figure~\ref{fig:arrows}. We have also
developed an approach for learning good highways automatically
\cite{CohenUK16}. It is based on the insight that solving the MAPF problem
optimally is NP-hard but computing the minimum-cost paths for all robots
independently is fast, by employing a graphical model that uses the
information in these paths heuristically to generate good highways
automatically.
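A minimal sketch of the non-uniform edge-cost variant (our own illustrative Dijkstra search, not the actual ECBS implementation): moves along a directed highway edge keep unit cost, while all other moves are inflated by a factor $w \ge 1$, which biases, but never forces, paths onto the highways:

```python
import heapq

def biased_shortest_path(edges, highways, start, goal, w=2.0):
    """Dijkstra search on an undirected unit-length graph in which a move
    along a directed highway edge costs 1 and any other move costs w >= 1."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, []).append(u)
    dist = {start: 0.0}
    queue = [(0.0, start, [start])]
    while queue:
        d, u, path = heapq.heappop(queue)
        if u == goal:
            return path
        if d > dist.get(u, float("inf")):
            continue
        for v in graph.get(u, []):
            cost = 1.0 if (u, v) in highways else w  # inflate off-highway moves
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                heapq.heappush(queue, (d + cost, v, path + [v]))
    return None
```

With $w$ tied to the sub-optimality guarantee of the planner, highway-following paths are preferred whenever the guarantee allows it; with $w=1$ the bias disappears, and poorly chosen highways can never make an instance unsolvable.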
\begin{figure}
\center
\includegraphics[width=\columnwidth]{kiva_arrows}
\caption{Environment of a simulated automated-warehouse system where robots
need to swap sides from Area1 to Area2 and vice versa. The red arrows show
user-suggested edges to traverse (called highways). Highways make the
resulting plan more predictable and speed up planning.}\label{fig:arrows}
\end{figure}
\subsection{Target Assignment and Path Finding}
For the MAPF problem, the assignments of robots to targets are pre-determined,
and robots are thus not exchangeable. In practice, however, the assignments of
robots to targets are often not predetermined. For example, consider two
robots in an automated-warehouse system that have to deliver two inventory
pods to the same packing station. It does not matter which robot arrives first
at the packing station, and their places in the arrival queue of the packing
station are thus not pre-determined. We therefore define the combined target
assignment and path finding (TAPF) problem for teams of robots as a
combination of the target-assignment and path-finding problems. The TAPF
problem is a generalization of the MAPF problem where the robots are
partitioned into equivalence classes (called teams). Each team is given the
same number of unique targets as there are robots in the team. We have to
assign the robots to the targets and find collision-free paths for the robots
from their start vertices to their targets in a way such that each robot moves
to exactly one target given to its team, all targets are visited and the
makespan is minimized. Any robot in a team can be assigned to any target of
the team, and robots in the same team are thus exchangeable. However, robots
in different teams are not exchangeable. Figure~\ref{fig:TAPF} shows a TAPF
instance with two teams of robots.
\begin{figure}
\center
\includegraphics[height=50pt]{env} \includegraphics[height=50pt]{example}
\caption{Left: TAPF instance with two teams: Team 1 (in pink) and Team 2 (in
green). The circles on the left are robots. The circles in light colors on
the right are targets given to the team of the same color. Right: Graph
representation of the TAPF instance. Team 1 consists of a single robot
with start vertex $A$ and target $H$. Team 2 consists of two robots with
start vertices $E$ and $F$, respectively, and targets $D$ and
$I$.}\label{fig:TAPF}
\end{figure}
The TAPF problem is NP-hard to solve optimally or bounded sub-optimally for
more than one team \cite{MaAAAI16}. TAPF planners have two advantages over
MAPF planners: 1. Optimal TAPF plans often have smaller makespans than optimal
MAPF plans for TAPF instances since optimal TAPF plans optimize the
assignments of robots to targets. 2. State-of-the-art TAPF planners compute
collision-free paths for all robots on a team very fast and thus often scale
to a larger number of robots than state-of-the-art MAPF planners. We have
developed the optimal TAPF planner Conflict-Based Min-Cost Flow
(CBM)~\cite{MaAAMAS16}, which combines heuristic search-based MAPF planners
\cite{DBLP:journals/ai/SharonSFS15} and flow-based MAPF planners
\cite{YuLav13STAR} and scales to TAPF instances with dozens of teams and
hundreds of robots.
\subsection{Generation of Kinematically Feasible Plans}
MAPF/TAPF planners generate plans using idealized models that do not take the
kinematic constraints of actual robots into account. For example, they gain
efficiency by not taking velocity constraints into account and instead
assuming that all robots always move with the same nominal speed in perfect
synchronization with each other. However, it is communication-intensive for
robots to remain perfectly synchronized as they follow their paths, and their
individual progress will thus typically deviate from the plan. Two robots can
collide, for example, if one robot already moves at large speed while another
robot accelerates from standstill. Slowing down all robots results in large
makespans and is thus undesirable.
We have thus developed MAPF-POST, a novel approach that makes use of a simple
temporal network (STN)~\cite{Dechter1991} to postprocess a MAPF/TAPF plan in
polynomial time and create a kinematically feasible
plan~\cite{HoenigICAPS16,HoenigIROS16}. MAPF-POST utilizes information about
the edge lengths and maximum translational and rotational velocities of the
robots to translate the plan into a temporal plan graph (TPG) and augment the
TPG with additional nodes that guarantee safety distances among the
robots. Figure~\ref{fig:STN} shows an example. Then, it translates the
augmented TPG into an STN by associating bounds with arcs in the augmented TPG
that express non-uniform edge lengths or velocity limits (due to kinematic
constraints of the robots or safety concerns). It then obtains an execution
schedule from the STN by minimizing the makespan or maximizing the safety
distance via graph-based optimization or linear programming. The execution
schedule specifies when each robot should arrive in each location of the plan
(called arrival times). The kinematically feasible plan is a list of locations
(that specify way-points for the robots) with their associated arrival times.
See \cite{HoenigIROS16} for more details.
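The scheduling step can be illustrated with a minimal STN solver (our own sketch, not the MAPF-POST implementation): each temporal constraint $lo \le t_j - t_i \le hi$ between two events contributes two edges to a distance graph, and Bellman-Ford then yields the earliest and latest consistent arrival times — their difference is the slack exploited during execution — or detects an inconsistent network:

```python
def stn_schedule(n, constraints):
    """Solve a simple temporal network over events 0..n-1, with event 0 as
    the temporal origin.  constraints: list of (i, j, lo, hi) meaning
    lo <= t_j - t_i <= hi.  Returns (earliest, latest) arrival times with
    t_0 = 0, or None if the STN is inconsistent (negative cycle)."""
    INF = float("inf")
    # Distance-graph encoding: t_j - t_i <= hi  -> edge i->j with weight hi,
    #                          t_i - t_j <= -lo -> edge j->i with weight -lo.
    edges = []
    for i, j, lo, hi in constraints:
        edges.append((i, j, hi))
        edges.append((j, i, -lo))

    def bellman_ford(edge_list):
        d = [INF] * n
        d[0] = 0.0
        for _ in range(n - 1):
            for u, v, w in edge_list:
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
        for u, v, w in edge_list:
            if d[u] + w < d[v]:  # still improving: negative cycle
                return None
        return d

    latest = bellman_ford(edges)                          # t_i <= d(0, i)
    rev = bellman_ford([(v, u, w) for u, v, w in edges])  # d(i, 0)
    if latest is None or rev is None:
        return None
    return [-x for x in rev], latest                      # t_i >= -d(i, 0)
```

Minimizing the makespan then amounts to picking the earliest schedule, while any execution delay that stays within the earliest-latest window of every event can be absorbed without re-planning.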
\begin{figure}
\small
\centering
\footnotesize
\begin{tabular}{c|cccc}
Robot &$t=1$ &$t=2$ &$t=3$ &$t=4$\\
\hline
$1$ (in Team 1) &$A \to B$ &$B \to F$ &$F \to G$ &$G \to H$ \\
$2$ (in Team 2) &$E \to F$ &$F \to G$ &$G \to H$ &$H \to I$ \\
$3$ (in Team 2) &$F \to G$ &$G\to H$ &$H \to C$ &$C \to D$ \\
\end{tabular}
\includegraphics[width=\columnwidth]{stp.pdf}\\
\vspace{0.1cm}
\includegraphics[width=\columnwidth]{tpgWithMarkers.pdf}
\caption{Top: TAPF plan produced by the optimal TAPF planner CBM for the TAPF
instance in Figure \ref{fig:TAPF}. Middle: TPG for the TAPF plan. Each node
$l_i^j$ in the TPG represents the event ``robot $j$ arrives at vertex $l$''
at timestep $i$. The arcs indicate temporal precedences between
events. Bottom: Augmented TPG.}
\label{fig:STN}
\end{figure}
\section{Plan Execution}
The robots will likely not be able to follow the execution schedule perfectly,
resulting in plan deviations. For example, our planner takes velocity
constraints into account but does not capture higher-order kinematic
constraints, such as acceleration limits. Also, robots might be forced to slow
down due to unforeseen exogenous events, such as floors becoming slippery due
to water spills. In such cases, the plan has to be adjusted quickly during
plan execution.
Frequent re-planning could address these plan deviations but is time-consuming
(and thus impractical) due to the NP-hardness of the MAPF/TAPF problem.
Instead, control uses specialized robot controllers to exploit the slack in
the plan to try to absorb any imperfect plan execution. If this does not work,
partial dynamic re-planning re-solves a suitably modified STN in polynomial
time. Only if this does not work either, partial dynamic re-planning re-solves
a suitably modified MAPF problem more slowly.
\subsection{Control}
A robot controller takes the current state and goal as input and computes the
motor output. For example, the state of a differential drive robot can be its
position and heading, and the motor output is the velocities of the two
wheels. The goal is the execution schedule, assuming a constant movement
velocity between two consecutive way-points (called the constant velocity
assumption). Robots cannot execute such motion directly because they cannot
change their velocities instantaneously and might not be able to move
sideways. The actual safety distance during plan execution is thus often
smaller than the one predicted during planning, which is why we recommend to
maximize the safety distance during planning rather than the makespan. We use
robot controllers that try to minimize the effect of the above limitations.
For differential drive robots, we use the fact that turning in place is often
much faster than moving forward. Furthermore, we adjust the robot velocities
dynamically based on the time-to-go to reach the next way-point. It is
especially important to monitor progress toward locations that correspond to
nodes whose slacks are small. Robots could be alerted of the importance of
reaching these bottleneck locations in a timely manner. Similar control
techniques can be used for other robots as well, such as drones, as long as no
aggressive maneuvers are required.
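The time-to-go adjustment can be sketched as follows (an illustrative rule, not the actual controller): the commanded forward speed is the remaining distance to the next way-point divided by the time left until its scheduled arrival, saturated at the robot's maximum speed:

```python
def commanded_speed(remaining_dist, time_to_go, v_max):
    """Speed set-point that reaches the next way-point at its scheduled
    arrival time, saturated at the maximum velocity v_max."""
    if time_to_go <= 0.0:
        return v_max  # behind schedule: drive as fast as the limits allow
    return min(v_max, remaining_dist / time_to_go)
```

A robot ahead of schedule slows down automatically; a robot behind schedule saturates at `v_max`, and any residual delay must then be absorbed by the slack of downstream nodes or trigger re-planning.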
\subsection{Partial Dynamic Re-planning}
If control is insufficient to achieve the arrival times given in the execution
schedule, we adjust the arrival times by re-solving a suitably modified STN,
resulting in a new execution schedule. Only if this does not work either, we
re-solve a suitably modified MAPF problem, resulting in a new kinematically
feasible plan. If probabilistic models of delays and other deviations from the
nominal velocities are available, they could be used to determine the
probabilities that each location will be reached in a certain time interval
and trigger re-planning only if one or more of these probabilities become
small~\cite{HoenigICAPS16}.
\section{Experiments}
We have implemented our approach in C++ using the boost library for advanced
data structures, such as graphs. Experiments can be executed on three
abstraction levels, namely (a) an agent simulation, (b) a robot simulation and
(c) real robots:
\begin{itemize}
\item The agent simulation uses the constant velocity assumption and is
fast. It can be used to verify the code and create useful statistics for the
runtime, minimum distance between any two robots and average time until any
robot reaches its target, among others. It can also be used for scalability
experiments with hundreds of robots in cluttered environments.
\item The robot simulation adds realism because it uses a physics engine
(instead of the constant velocity assumption) and realistic robot
controllers for the simulated robots to follow the execution schedule. We
use V-REP as robot simulation for differential drive robots, robots with
omni-directional wheels, flying robots and spider-like robots.
\item Real robots are the ultimate testbed. We use a team of eight iRobot
Create2 differential drive robots~\cite{HoenigIROS16}.
\end{itemize}
In the following, we discuss two example use cases on a 2.1 GHz Intel Core
i7-4600U laptop computer with 12 GB RAM. Each example is solved within 10
seconds of computation time and also shown in our supplemental video at
\url{http://idm-lab.org/project-p.html}.
\subsection{Automated Warehouse}
In the automated-warehouse use case, we model two robot teams. The first team
consists of ten KUKA youBot robots, which are robots with omni-directional
wheels capable of carrying (only) small boxes. The second team consists of two
Pioneer P3DX robots, which are differential-drive robots capable of carrying
(only) large boxes. The robots have to pick up small and large color-coded
boxes and bring them to a target of the same color. We split the task into two
parts. First, each robot has to move to an appropriately sized box and pick
it up. Second, it has to move to a target of the same color. The first part is
a TAPF instance with two teams, one for each robot type. The second part is a
TAPF instance with four teams, one for each color.
We use the robot simulation on a 2D grid. Figure~\ref{fig:exp:warehouse} shows
a screen-shot after the first part has already been executed, and the robots
are at different pick-up locations. The KUKA robots use their grippers to pick
small boxes from shelves while the Pioneer robots receive the large boxes from
a conveyor belt. The robots then need to move to the targets on the left and
right side of the warehouse, respectively.
\begin{figure}
\center
\includegraphics[width=\columnwidth]{warehouse}
\caption{Simulated automated-warehouse environment. The in-set in the
top-left corner shows an overhead view. The robots are at different
pick-up locations and need to deliver the color-coded boxes to the left
and right side, respectively.}\label{fig:exp:warehouse}
\end{figure}
\subsection{Formation Changes}
Formations are useful for convoys, surveillance operations and artistic
shows. The task of switching from one formation to another, perhaps in a
cluttered environment, is a TAPF problem. In the formation-change use case, we
model a team of 32 identical quadcopters that start in a building with five
open doors. The robots have to spell the letters U -- S -- C outside the
building, which is a special TAPF instance where all robots are exchangeable
(also called an anonymous MAPF instance \cite{YuLav13STAR}).
We use the robot simulation on a 3D grid. Figure~\ref{fig:exp:formations}
shows a screen-shot of the goal formation.
\begin{figure}
\center
\includegraphics[width=\columnwidth]{formations}
\caption{Simulated formation-change environment. 32 quadcopters start
inside the glass building at the bottom of the picture and need to
coordinate the usage of the four exit doors in order to create the
depicted goal formation spelling the letters U -- S --
C.}\label{fig:exp:formations}
\end{figure}
\section{Conclusions}
We presented an overview of our hierarchical framework for coordinating
task-level and motion-level operations in multi-robot systems using the
multi-robot path-planning problem as a case study. We use a state-of-the-art
AI planner for causal reasoning. The AI planner exploits the problem structure
to address the combinatorics of the multi-robot path-planning problem but is
oblivious to the kinematic constraints of the robots. We make the plan
kinematically feasible by identifying the causal dependencies among its
actions and embedding them in an STN. We then use the slack in the STN to
create a kinematically feasible plan and absorb any imperfect plan execution
to avoid time-consuming re-planning in many cases. For more information on our
research, see \url{http://idm-lab.org/project-p.html}.
\section*{Acknowledgments}
Our research was supported by ARL under grant number W911NF-14-D-0005, ONR
under grant numbers N00014-14-1-0734 and N00014-09-1-1031, NASA via Stinger
Ghaffarian Technologies and NSF under grant numbers 1409987 and 1319966. The
views and conclusions contained in this document are those of the authors and
should not be interpreted as representing the official policies, either
expressed or implied, of the sponsoring organizations, agencies or the
U.S. government.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
The formation of Active Regions (ARs) is often associated with the emergence of magnetic flux (EMF) from the solar interior \citep[e.g.][]{Parker_1955}. Many explosive phenomena observed on the Sun, such as flaring events and CMEs, are associated with ARs. In fact, it has been observed that a single AR can produce several CMEs in a recurrent manner \citep[e.g. ][]{Nitta_etal2001, Zhang_etal2008,Wang_etal2013}.
Solar eruptions have been studied extensively in the past. Observational studies have reported on the pre-eruptive phase of the eruption \citep[e.g.][]{Canou_Amari2010,Vourlidas_etal2012,Syntelis_etal2016}, the triggering of the eruptions \citep[e.g.][]{Zuccarello_etal2014,Reeves_etal2015,Chintzoglou_etal2015} and the propagation of the erupting structures in the interplanetary medium \citep[e.g.][]{Colaninno_etal2013} and towards the Earth \citep[e.g.][]{Patsourakos_etal2016}.
Often, eruptions are associated with the formation of a twisted magnetic field structure, which is commonly referred to as a magnetic flux rope (FR) \citep[e.g.][]{Cheng_etal2011,Green_etal2011,Zhang_etal2012,Patsourakos_etal2013}.
Still, various aspects of the formation, destabilization and eruption of FRs remain under debate.
Numerical models studying the formation of magnetic FRs in the solar atmosphere have extensively demonstrated the role of shearing, rotation and reconnection of fieldlines in the buildup of magnetic twist. As an example, magnetic flux emergence experiments \citep[e.g.][]{Magara_etal2001,Fan_2009,Archontis_Torok2008} have shown that shearing motions along a polarity inversion line (PIL), can lead to reconnection of sheared fieldlines and the gradual formation of FRs, which may erupt in a confined or ejective manner \citep[e.g. ][]{Archontis_etal2012}.
Furthermore, experiments where rotational motions are imposed at the photospheric boundary \citep[symmetric and asymmetric driving of polarities, ][]{DeVore_etal2008,Aulanier_etal2010} have shown that the shearing motions can form a pre-eruptive FR and destabilize the system.
Once a FR is formed, it may erupt in an ejective manner towards outer space \citep[e.g. ][]{Leake_etal2014} or remain confined, for instance, by a strong overlying field \citep[e.g. ][]{Leake_etal2013}. There are two main proposed mechanisms, which might be responsible for the triggering and/or driving of the eruption of magnetic FRs. One is the non-ideal process of magnetic reconnection and the other is the action of an ideal MHD instability.
One example of reconnection which leads to the eruption of a magnetic FR, is the well-known tether-cutting mechanism \citep{Moore_etal1980,Moore_etal1992}. During this process, the footpoints of sheared fieldlines reconnect along a PIL, forming a FR. The FR slowly rises dragging in magnetic field from the sides and a current sheet is formed underneath the FR. Eventually, fast reconnection of the fieldlines that envelope the FR occurs at the current sheet. Then, the upward reconnection outflow assists the further rise of the FR. In this way, an imbalance is achieved between a) the upward magnetic pressure and tension force and b) the downward tension force of the envelope fieldlines. This leads to an ejective eruption of the FR. Another example is the so-called break-out reconnection, between the envelope field and a pre-existing magnetic field. If the relative orientation of the two fields is antiparallel, (external) reconnection between them becomes very effective when they come into contact
\citep[e.g. ][]{Antiochos_etal1999, Karpen_etal2012, Archontis_etal2012, Leake_etal2014}.
This reconnection releases the downward magnetic tension of the envelope field and the FR can ``break-out'', experiencing an ejective eruption. We should highlight that the relative orientation and field strengths of the interacting magnetic systems are important parameters that affect the eruption of the FR.
In previous studies, it has been shown that depending on the value of these parameters, the rising FR could experience an ejective eruption or be confined by the envelope field or even become annihilated by the interaction with the pre-existing magnetic field \citep[e.g. ][]{Galsgaard_etal2007, Archontis_etal2012, Leake_etal2014}.
Solar eruptions can also be triggered by ideal processes. For instance, the helical kink instability \citep[][]{Anzer_1968,Torok_etal2004}, which occurs when the twist of the FR exceeds a critical value that depends on the configuration of the FR (e.g. cylindrical, toroidal) and the line-tying effect \citep[e.g. ][]{Hood_Priest_1981,Torok_etal2004}. During the instability, the axis of the rising FR develops a helical shape. The eruption of the helical magnetic field could be ejective or confined, depending e.g. on how strong the overlying magnetic field is \citep{Torok_Kliem_2005}.
Another crucial parameter, which affects the eruption of a FR, is how the external constraining magnetic field drops with height.
This is related to the so-called torus instability \citep{Bateman_1978,Kliem_etal2006}. In this model, a toroidal current ring with major radius $R$ is placed inside an external magnetic field. This external magnetic field drops along the direction of the major radius as $R^{-n}$.
Due to the current ring's curvature, a hoop force acts on the current ring. This force is directed away from the center of the torus.
An inwards Lorentz force acts on the current ring due to the external magnetic field.
Previous studies \citep{Bateman_1978,Kliem_etal2006} showed that, if the decrease rate of the external field (i.e. $n=- \partial \ln B_\mathrm{external} / \partial \ln R$) exceeds a critical value ($n_{crit}=1.5$), the current ring becomes unstable. The decrease rate of the external field is commonly referred to as the torus or decay index.
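The underlying force balance can be sketched as follows (a schematic, thin-ring version of the standard argument, with $F_\mathrm{hoop}$ the outward hoop force and $I$ the ring current):
\begin{equation}
F(R) \simeq F_\mathrm{hoop}(R) - I\, B_\mathrm{ex}(R), \qquad n \equiv -\frac{\partial \ln B_\mathrm{ex}}{\partial \ln R},
\end{equation}
so that an outward displacement of the ring grows when $B_\mathrm{ex}$ (and hence the inward strapping force) decreases with $R$ faster than the hoop force, i.e. when $n$ exceeds $n_{crit}$.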
The range of values of the critical torus index is still under debate. For instance, studies of emerging flux tubes with an initial arch-like configuration, have reported higher values of the torus index \citep[$n=1.7-2$, ][]{Fan_etal2007,Fan_2010}.
\citet{An_Magara_2013}, in a flux emergence simulation of a straight, horizontal flux tube, reported values of the torus index well above 2.
\citet{Demoulin_etal2010} have found that the torus index can vary depending on a range of parameters, such as the thickness of the current channel (the axial current of a twisted FR is a current channel). In cases of thin current channels, the index was found to be 1 (1.5) for straight (circular) channels. Also, the FR expansion during its eruption affects the critical value of torus instability. For thick channels, the critical index for circular and straight channels does not vary much. It takes values ranging from 1.1-1.3 (with expansion of the FR) and 1.2-1.5 (without expansion). \citet{Zuccarello_etal2015} investigated the role of line-tying effects on the eruption. They performed a series of simulations with a setup similar to \citet{Aulanier_etal2010}, but with different velocity drivers at the photosphere. They found that the critical index did not depend greatly on the pre-eruptive photospheric motions, and it was found to take values within the range of 1.1-1.3.
In this paper, we show the results of a simulation of magnetic flux emergence, which occurs dynamically from the solar interior to the outer solar atmosphere. We focus on the formation of magnetic FRs in the emerging flux region and their possible eruption. In particular, we show how reconnection leads to the formation of the FRs and how and why these FRs erupt. We find that the emergence of a single sub-photospheric magnetic flux tube can drive recurrent eruptions, which are produced due to the combined action of the torus instability and reconnection of the envelope fieldlines in a tether-cutting manner. We find that, at least in the first eruption, the fast ejection phase of the torus unstable FR is triggered by tether-cutting reconnection.
A geometrical extrapolation of the size of the eruptions showed that they can develop into large-scale structures, with a size comparable to small CMEs. The plasma density and temperature distributions reveal that the structure of the erupting field consists of three main parts: a ``core'', a ``cavity'' and a ``front edge'', which is reminiscent of the ``three-part'' structure of CMEs.
We find that the plasma, at the close vicinity of the ``core'', is hotter and denser when the envelope fieldlines reconnect with themselves in a tether-cutting manner during the eruption. The same area appears to be cooler and less dense, when the envelope fieldlines reconnect with some other neighboring (e.g. sheared J-like) fieldlines.
In Sec.~\ref{sec:initial_conditions} we describe the initial conditions of our simulations. Sec.~\ref{sec:overview} is an overview of the dynamics occurring in our simulation leading to four recurrent eruptions. In Sec.~\ref{sec:eruptions_mechanims} we show the morphology of the magnetic field (before, during and after the eruptions) and the triggering mechanism of these eruptions. In Sec.~\ref{sec:temp_etc} we show the distribution of various properties of the erupting fields, such as density, temperature, velocity and current profiles.
In Sec.~\ref{sec:extrapolation} we perform an extrapolation of the size of the erupting structures. In Sec.~\ref{sec:conclusions} we summarize the results.
\section{Numerical Setup}
\label{sec:initial_conditions}
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{stratification.pdf}
\caption{ Initial stratification of the background atmosphere in our simulation, in dimensionless units (temperature (T), density ($\rho$), magnetic pressure ($P_m$) and gas pressure ($P_g$)).
}
\label{fig:stratification}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{4_plot.pdf}
\caption{Top: magnetic field line morphology and temperature distribution at the $xz$-midplane during the four eruptions of the simulation, at $t=73, 85, 116, 194$~min (for panels a, b, c and d respectively). Bottom: The same in a top-view. Two sets of fieldlines are shown: yellow (traced from the FR center) and green (traced from the envelope field). The horizontal $xy$-plane shows the distribution of $B_{z}$ at the photosphere (white:positive $B_{z}$, black:negative $B_{z}$, from -300~G to 300~G).}
\label{fig:4_plot}
\end{figure*}
To perform the simulations, we numerically solve the 3D time-dependent, resistive, compressible MHD equations in Cartesian geometry using the Lare3D code of \citet{Arber_etal2001}. The equations in dimensionless form are:
\begin{align}
&\frac{\partial \rho}{\partial t}+ \nabla \cdot (\rho \mathbf{v}) =0 ,\\
&\frac{\partial (\rho \mathbf{v})}{\partial t} = - \nabla \cdot (\rho \mathbf{v v}) + (\nabla \times \mathbf{B}) \times \mathbf{B} - \nabla P + \rho \mathbf{g} + \nabla \cdot \mathbf{S} , \\
&\frac{ \partial ( \rho \epsilon )}{\partial t} = - \nabla \cdot (\rho \epsilon \mathbf{v}) -P \nabla \cdot \mathbf{v}+ Q_\mathrm{joule}+ Q_\mathrm{visc}, \\
&\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v}\times \mathbf{B})+ \eta \nabla^2 \mathbf{B},\\
&\epsilon =\frac{P}{(\gamma -1)\rho},
\end{align}
where $\rho$, $\mathbf{v}$, $\mathbf{B}$ and $P$ are the density, velocity vector, magnetic field vector and gas pressure. Gravity is included. We assume a perfect gas with a ratio of specific heats $\gamma=5/3$. Viscous heating $Q_\mathrm{visc}$ and Joule dissipation $Q_\mathrm{joule}$ are also included.
We use explicit anomalous resistivity that increases linearly when the current density exceeds a critical value $J_c$:
\begin{equation}
\eta=\begin{cases}
\eta_{b}, & \text{if $\left|J\right|<J_{c}$}.\\
\eta_{b}+\eta_{0}\left(\frac{\left|J\right|}{J_{c}}-1\right), & \text{if $\left|J\right|>J_{c}$}.
\end{cases}
\end{equation}
where $\eta_b=0.01$ is the background resistivity, $J_c=0.005$ is the critical current density and $\eta_0=0.01$ is the coefficient of the anomalous term.
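The switch-on profile above can be sketched in a few lines of Python (an illustration using the quoted dimensionless values; the simulation itself implements this inside Lare3D):

```python
import numpy as np

def anomalous_eta(J, eta_b=0.01, eta_0=0.01, J_c=0.005):
    """Anomalous resistivity (dimensionless units): the background value
    eta_b below the critical current J_c, rising linearly with |J| above it."""
    J = np.abs(np.asarray(J, dtype=float))
    return np.where(J < J_c, eta_b, eta_b + eta_0 * (J / J_c - 1.0))
```

Note that the profile is continuous at $J=J_c$, where both branches give $\eta_b$.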
We use normalization based on the photospheric values of density $\rho_\mathrm{c}=1.67 \times 10^{-7}\ \mathrm{g}\ \mathrm{cm}^{-3}$, length $H_\mathrm{c}=180 \ \mathrm{km}$
and magnetic field strength $B_\mathrm{c}=300 \ \mathrm{G}$. From these, we get pressure $P_\mathrm{c}=7.16\times 10^3\ \mathrm{erg}\ \mathrm{cm}^{-3}$, temperature $T_\mathrm{c}=5100~\mathrm{K}$, velocity $v_\mathrm{0}=2.1\ \mathrm{km} \ \mathrm{s}^{-1}$ and time $t_\mathrm{0}=85.7\ \mathrm{s}$.
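These derived units follow from the base units through the standard MHD normalization. A short Python check (Gaussian cgs; a sketch only — the temperature unit is omitted, since it also depends on the assumed mean molecular weight, which is not quoted here):

```python
import math

# Base photospheric units quoted in the text (Gaussian cgs).
rho_c = 1.67e-7      # density, g cm^-3
H_c = 180e5          # length, cm (180 km)
B_c = 300.0          # magnetic field, G

# Derived units: pressure from the magnetic pressure of B_c, velocity as
# the Alfven speed for (B_c, rho_c), and time as the crossing time H_c/v_0.
P_c = B_c**2 / (4.0 * math.pi)                 # ~7.16e3 erg cm^-3
v_0 = B_c / math.sqrt(4.0 * math.pi * rho_c)   # ~2.1e5 cm s^-1
t_0 = H_c / v_0                                # ~86 s
```

These reproduce the quoted $P_\mathrm{c}$, $v_\mathrm{0}$ and $t_\mathrm{0}$ to within the rounding of the quoted values.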
The computational box has a size of $64.8\times64.8\times64.8 \ \mathrm{Mm}$ in the $x$, $y$, $z$ directions, in a $417\times417\times417$ grid. We assume periodic boundary conditions in the $y$ direction. Open boundary conditions are imposed at the two $yz$-plane boundaries and at the top of the numerical box.
The domain consists of an adiabatically stratified sub-photospheric layer at $-7.2\ \mathrm{Mm}\le z < 0 \ \mathrm{Mm}$, an isothermal photospheric-chromospheric layer at $0 \ \mathrm{Mm} \le z < 1.8 \ \mathrm{Mm} $, a transition region at $1.8 \ \mathrm{Mm} \le z < 3.2 \ \mathrm{Mm}$ and an isothermal corona at $3.2 \ \mathrm{Mm} \le z < 57.6 \ \mathrm{Mm}$.
We assume a field-free atmosphere in hydrostatic equilibrium. The initial distributions of temperature ($T$), density ($\rho$) and gas pressure ($P_\mathrm{g}$) are shown in Fig. \ref{fig:stratification}.
We place a straight, horizontal FR at $z=-2.1 \ \mathrm{Mm}$. The axis of the FR is oriented along the $y$-direction, so the transverse direction is along $x$ and height is in the $z$-direction.
The magnetic field of the FR is:
\begin{align}
B_{y} &=B_\mathrm{0} \exp(-r^2/R^2), \\
B_{\phi} &= \alpha r B_{y}
\end{align}
where $R=450$~km is a measure of the FR's radius, $r$ is the radial distance from the FR's axis and $\alpha= 0.4$ ($0.0023$~km$^{-1}$) is a measure of the twist per unit length.
The magnetic field's strength is $B_0=3150$~G.
Its magnetic pressure ($P_m$) is over-plotted in Fig.~\ref{fig:stratification}.
Initially the FR is in pressure equilibrium. The FR is destabilized by imposing a density deficit along its axis, similar to the work by \citet{Archontis_etal2004}:
\begin{equation}
\Delta \rho = \frac{p_\mathrm{t}(r)}{p(z)} \rho(z) \exp(-y^2/\lambda^2),
\label{eq:deficit}
\end{equation}
where $p$ is the external pressure and $p_\mathrm{t}$ is the total pressure within the FR. The parameter $\lambda$ is the length scale of the buoyant part of the FR. We use $\lambda=5$ ($0.9$~Mm).
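In dimensionless code units (dividing by $B_\mathrm{c}=300$~G and $H_\mathrm{c}=180$~km), the initial rope field can be sketched as follows (an illustrative Python snippet, not the Lare3D setup code; the axis depth $z=-2.1$~Mm becomes $\approx -11.67$ in these units):

```python
import numpy as np

# Flux-rope parameters in code units (values from the text:
# B0 = 3150/300 = 10.5, R = 450/180 = 2.5, alpha = 0.4).
B0, R, ALPHA = 10.5, 2.5, 0.4

def rope_field(x, z, z_axis=-11.67):
    """Gaussian axial field B_y(r) and azimuthal field B_phi = alpha*r*B_y
    at transverse position x and height z (r is the distance to the axis)."""
    r = np.hypot(x, z - z_axis)
    B_y = B0 * np.exp(-r**2 / R**2)
    return B_y, ALPHA * r * B_y
```

On the axis this gives $B_y=B_0$ and $B_\phi=0$; at $r=R$ the axial field has dropped to $B_0/e$.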
\section{Recurrent Eruptions}
\subsection{Overall evolution: a brief overview}
\label{sec:overview}
In the following, we briefly describe the overall evolution of the emerging flux region during the running time of the simulation. At $t$=25~min, the crest of the sub-photospheric FR reaches the photosphere. It takes 10~min for the magnetic buoyancy instability criterion \citep[see][]{Acheson1979, Archontis_etal2004} to be satisfied, and thus, for the first magnetic flux elements to emerge at and above the solar surface. Eventually, the emerging magnetized plasma expands as it rises, due to the magnetic pressure inside the tube and the decreasing gas pressure of the background stratified atmosphere. Because of the expansion, the outermost expanding fieldlines adopt a fan-like configuration, forming an envelope field that surrounds all the upcoming magnetized plasma. As we discuss later in this paper, the characteristics and dynamical evolution of this envelope field play an important role towards understanding the eruptions coming from the emerging flux region.
At the photosphere, the emergence of the field forms a bipolar region with a strong PIL. Similarly to previous studies \citep[e.g.][]{Manchester_2001,Archontis_Torok2008, Leake_etal2013}, we find that the combined action of shearing, driven by the Lorentz force along the PIL, and reconnection of the sheared fieldlines, leads to the formation of a new magnetic FR, which eventually erupts towards the outer space. In fact, this is an ongoing process, which leads to the formation and eruption of several FRs during the evolution of the system. Since these FRs are formed after the initial flux emergence at the photosphere, we will refer to them as the post-emergence FRs.
Fig.~\ref{fig:4_plot} shows the temperature distribution (vertical $xz$-midplane) and selected fieldlines at the times of four successive eruptions in our simulation (panels a-d). The temperature distribution delineates the (bubble-shaped) volume of the erupting field, which is filled by cool and hot plasma. In Sec.~\ref{sec:temp_etc}, we discuss the physical properties (e.g. temperature, density) of the erupting plasma in more detail. The fieldlines are drawn in order to show a first view of the shape of the envelope field (green) and the core of the erupting FRs (yellow). Notice the strongly azimuthal nature of the envelope field and the S-shaped configuration of the FR's fieldlines in the first eruption (Fig.~\ref{fig:4_plot}a, top view).
In the following eruptions, the orientation of the envelope field changes (in a counter-clockwise manner, Fig.~\ref{fig:4_plot}b-d, top view).
The morphology of the fieldlines during the four eruptions is discussed in detail, in Sec.~\ref{sec:eruptions_mechanims}. We find that all the eruptions are fully ejective (i.e. they exit the numerical domain from the top boundary).
To further describe the overall dynamical evolution of the eruptions, we calculate the total magnetic and kinetic energy (black and red line respectively, Fig.~\ref{fig:energy}) above the mid-photosphere ($z=1.37$~Mm). The first maximum of kinetic energy at $t=45.7$~min corresponds to the initial emergence of the field. Then, we find four local maxima of the magnetic and kinetic energies, which correspond to the four eruptions (e.g. kinetic energy peaks at $t=74.3,\, 85.7,\, 117.1,\, 194.3$~min, marked by vertical lines in the figure). As expected, the magnetic (kinetic) energy decreases (increases) after each eruption. Notice that this is less pronounced for the magnetic energy in the first eruption because of the continuous emergence of magnetic flux, which increases the total amount of magnetic energy above the mid-photosphere. Also, the local maximum of the kinetic energy at $t=205.7$~min corresponds mainly to the fast reconnection upflow underneath the erupting FR, which is about to exit the numerical domain.
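As an illustration of this diagnostic, volume-integrated energies above a given height can be computed from gridded data as follows (a Python sketch; the array layout and cell volume are assumptions, not those of the actual analysis code):

```python
import numpy as np

def energies_above(rho, v, B, z, z_min=1.37, dV=1.0):
    """Kinetic and magnetic energy summed over all cells above z_min
    (Gaussian cgs: e_kin = rho v^2 / 2, e_mag = B^2 / 8 pi).
    rho: (nx, ny, nz); v, B: (3, nx, ny, nz); z: (nz,) cell heights."""
    mask = z >= z_min
    v2 = np.sum(v**2, axis=0)[..., mask]
    B2 = np.sum(B**2, axis=0)[..., mask]
    E_kin = 0.5 * np.sum(rho[..., mask] * v2) * dV
    E_mag = np.sum(B2) * dV / (8.0 * np.pi)
    return E_kin, E_mag
```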
In a similar way, we compute the self-helicity (Fig.~\ref{fig:helicity}). For a single twisted flux tube, the self-helicity is assumed to correspond to the twist within the flux tube.
For the calculation we used the method described in \citet{Moraitis_etal2014}. Overall, we find that the temporal evolution of the self-helicity is similar to that of the kinetic energy (e.g. they reach local maxima at the same time), which indicates that the erupted field is twisted. We also find that between the eruptions, self helicity increases because of the gradual build up of the twist of the post-emergence FRs.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Emag_Ekin_zmin55.pdf}
\caption{ Magnetic (black) and kinetic (red) energy above the middle of the photospheric-chromospheric layer ($z=$1.37~Mm). Vertical black lines mark the kinetic energy maxima related to the four eruptions.}
\label{fig:energy}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{helicity_zmin55.pdf}
\caption{Self-helicity above the middle of the photospheric-chromospheric layer ($z=$1.37~Mm). Vertical black lines mark the kinetic energy maxima related to the four eruptions.}
\label{fig:helicity}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.70\textwidth]{j_plots_vertical_4_small.pdf}
\caption{Side and top views of the shape of selected fieldlines at $t$=56~min (a,c) and $t$=64~min (b,d). The horizontal slice shows the distribution of $B_{z}$ (in black and white, from -300~G to 300~G) at $z=0.7$~Mm. Yellow arrows represent the photospheric velocity field scaled by magnitude. Photospheric vorticity is shown by the red contours. Purple isosurface shows $\left| J/B \right|>0.3$.}
\label{fig:jplot}
\end{figure*}
\subsection{Flux rope formation and eruption mechanisms}
\label{sec:eruptions_mechanims}
\subsubsection{First eruption}
\label{sec:eruption1}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{eruption1.pdf}
\caption{Field line morphology of the first eruption at $t$= 59 (a), 69 (b), 74 (c-f)~min. Green lines are traced from the top of the envelope field. Red lines are envelope fieldlines traced above the FR (c,d,e). Blue lines are J-shaped lines. Yellow lines are traced from the FR center. Purple isosurface is $\left| J/B \right|>0.3$. \textbf{(c-e)}: $t$=74~min eruption from side, front and top view. \textbf{(d)}: Arrows show the two concave-upwards segments of the W-like (red) fieldlines. \textbf{(f)}: Close up of (c). Cyan lines illustrate the post-reconnection arcade.}
\label{fig:eruption1}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{var_time_eruption1_units.pdf}
\caption{First eruption's key parameters with time.
\textbf{(a):} Height-time profile of FR center (black) and FR velocity (h-t derivative, blue). The insert shows a close-up of the height time profile for $t=58-68$~min.
\textbf{(b):} Torus index measured at the FR center. The highlighted region shows an estimated range of values for the occurrence of a torus instability.
\textbf{(c):} Ratio of mean tension ($T_z$) over its initial value ($T_{z_0}$). Mean tension is measured from the apex of the FR to the top of the envelope field.
\textbf{(d):} Maximum $V_z$ of the reconnection outflow. Vertical lines mark the times of the possible onset of the torus instability (first line) and the tether-cutting reconnection of the envelope field (second line).}
\label{fig:vars_eruption1}
\end{figure}
The formation of the post-emergence FR occurs in the low atmosphere due to the combination of: a) shearing and converging motions along the PIL, b) rotation of the polarities of the emerging flux region and c) reconnection of the sheared and rotated fieldlines along the PIL.
Firstly, we would like to focus on the role of shearing along the PIL and the rotation of the polarities during the pre-eruptive phase. For this reason, we present a side view (Fig.~\ref{fig:jplot}a,b) and a top view (Fig.~\ref{fig:jplot}c,d) of a close-up of the emerging flux region. We plot the sheared arcade fieldlines (blue), the $\left| J/B \right|$ isosurface and the photospheric $B_z$ component of the magnetic field (black/white plane). On the photospheric plane, we also plot the planar component of the velocity field vector (yellow arrows) and the $\omega_z$ component of vorticity (red contours).
The visualization of the velocity field reveals: a) the shearing motion along the PIL (the yellow arrows are almost antiparallel on the two sides of the PIL) and b) the converging motions towards the PIL and close to the two main polarities, due to their rotation.
These motions (shear and rotation) are also apparent in the vertical component of the vorticity (red contours). Notice that $\omega_z$ is strong close to the two polarities, where the rotation is fast. Along and sideways of the PIL there is only an apparent ``vorticity'', arising from the velocity gradient developed by the shearing.
The footpoints of the sheared arcade fieldlines are rooted at both sides of the PIL (e.g. blue lines in Fig.~\ref{fig:jplot}a,c). Due to the shearing, their footpoints move towards the two polarities where they undergo rotation (e.g. see the footpoints of the blue fieldlines, which go through the red contours close to the two opposite polarities, Fig.~\ref{fig:jplot}b,d).
Due to rotation, the sheared fieldlines adopt the characteristic hook-shaped edge, forming J-like loops. The isosurface of high values of $\left| J/B\right|$ shows the formation of a strong current between the J-like loops. When the J-like fieldlines reconnect at the current sheet, new twisted fieldlines are formed, with an overall sigmoidal shape.
Figure~\ref{fig:eruption1} is a visualization of a series of selected fieldlines during the slow-rise (panels a and b) and the fast-rise (panels c-f) phase of the first eruption. In a similar manner to Fig.~\ref{fig:jplot}, Fig.~\ref{fig:eruption1}a shows the sheared fieldlines (blue) and the $\left| J/B \right|$ isosurface (purple).
Reconnection between the sheared fieldlines forms a new set of longer fieldlines (yellow), which connect the distant footpoints of the sheared fieldlines. Thus, the longer fieldlines produce a magnetic loop above the PIL. As time goes on (panel b), further reconnection between the J-like sheared fieldlines (blue) form another set of fieldlines, which wrap around the magnetic loop, producing the first (post-emergence) magnetic FR. The red and green fieldlines are not reconnected fieldlines. They have been traced from arbitrary heights above the yellow fieldlines. They belong to the emerging field, which has expanded into the corona. In that respect, they create an envelope field for the new magnetic FR.
Eventually, the envelope fieldlines just above the FR (e.g. red lines, Fig.~\ref{fig:eruption1}b) are stretched vertically and their lower segments come into contact and reconnect at the flare current sheet underneath the FR in a tether-cutting manner. Hereafter, for simplicity, we call the reconnection between envelope fieldlines as EE-TC reconnection (i.e. Envelope Envelope - Tether Cutting reconnection). This reconnection occurs in a fast manner, triggering an explosive acceleration of the FR. During this process, the plasma temperature at the flare current sheet reaches values up to 6~MK. The rapid eruption is followed by a similar type of reconnection of the outermost fieldlines of the envelope field (green lines, Figs.~\ref{fig:eruption1}c). Fig.~\ref{fig:eruption1}c,d,e show the side, front and top view of the fieldline morphology at $t$=74~min. Fig.~\ref{fig:eruption1}f is a close up of the reconnection site underneath the erupting FR.
Notice that, due to EE-TC reconnection, the red fieldlines are wrapped around the central region of the erupting field (yellow fieldlines). They make at least two turns around the axis, becoming part of the erupting FR. During the eruption, these fieldlines may reconnect more than once, and thus, have more than two full turns around the axis.
The close-up in Fig.~\ref{fig:eruption1}f shows that a post-reconnection arcade (light blue fieldlines) is formed below the flare current sheet. At the top of the arcade, the plasma is compressed and the temperature increases up to 10~MK.
The time evolution of the post-emergence FR can be followed by locating its axis at different times. To find the axis, we use a vertical 2d cut (at the middle of the FR, along its length), which is perpendicular to the fieldlines of the FR.
Then we locate the maximum of the normal component of the magnetic field ($B_n$) on this 2d plane for every snapshot.
We have also found that the location of the axis of the FR is almost identical to the location of maximum plasma density within the central region of the FR.
The latter can be used as an alternative tracking-method for the location of the FR's axis.
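This tracking step amounts to a 2D maximization on the cut; a minimal Python sketch (the grids and array layout are illustrative):

```python
import numpy as np

def locate_axis(Bn, x, z):
    """Return the (x, z) position of the FR axis on a vertical 2D cut,
    taken as the location of the maximum normal field component B_n
    (the maximum plasma density can be used in the same way)."""
    i, k = np.unravel_index(np.argmax(Bn), Bn.shape)
    return x[i], z[k]
```

Repeating this for every snapshot yields the height-time profile of the FR.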
Using the above method(s), we are able to plot the height-time profile of the erupting FR (see Fig.~\ref{fig:vars_eruption1}a black line) and its derivative (blue line).
The h-t profile shows a phase of gradual upward motion (slow-rise phase), followed by an exponential period (fast-rise phase). The terminal velocity before the FR exits the numerical box is 170~km~s$^{-1}\,$. During the eruptive phase, the FR is not highly twisted, nor does its axis develop the characteristic helical deformation that results from the kink instability. As a result, the kink instability does not seem to play a role in this case. To study whether the torus instability is at work, we follow the torus index calculation method of \citet{Fan_etal2007, Aulanier_etal2010}.
We first estimate the external (envelope) field by calculating the potential magnetic field ($B_p$).
This is done based on the calculations made to derive the helicity \citep[details in][]{Moraitis_etal2014}.
To solve the Laplace equation for the calculation of the potential field, it is assumed that both the magnetic field and the potential field have the same normal component at the boundaries (Neumann conditions). The lower $xy$-plane boundary is the photosphere at $z=0.51$~Mm and the rest of the boundaries are the sides of the numerical domain. Having calculated $B_p$, we then compute the torus index as $n=-z \partial \ln B_p / \partial z$.
Then, we find the value of the torus index at the position of the FR center by measuring the value of $n$ along the h-t profile. We plot the results in Fig.~\ref{fig:vars_eruption1}b.
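Given a vertical profile of the potential-field strength above the FR, the index can be evaluated numerically; the following Python sketch is illustrative only (the calculation in the paper uses the 3D potential field described above):

```python
import numpy as np

def torus_index(Bp, z):
    """Torus (decay) index n = -z * d(ln Bp)/dz on a 1D vertical
    profile Bp(z) of the potential (envelope) field strength."""
    return -z * np.gradient(np.log(Bp), z)
```

For a field decaying as $B_p \propto z^{-n}$, this returns $n$ at every height.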
According to the height-time profile (black line and inset in Fig.~\ref{fig:vars_eruption1}a), we find that the FR enters an exponential rise phase just after $t=61.4$~min (first vertical line). The torus index at this time is $n=1.81$, which lies within the estimated range of values for the occurrence of the torus instability (see Introduction and the highlighted region in Fig.~\ref{fig:vars_eruption1}b). Therefore, we anticipate that the FR in our simulation becomes torus unstable at $t\geq 61.4$~min.
We should highlight that the envelope fieldlines above the FR start to reconnect in a TC manner at $t\geq 67.9$~min (second vertical line, Fig.~\ref{fig:vars_eruption1}). As a result, the mean tension of the envelope fieldlines (Fig.~\ref{fig:vars_eruption1}c) decreases while the FR height and velocity increase dramatically (Fig.~\ref{fig:vars_eruption1}a). We also find that the fast reconnection jet ($V_z$ up to $550$~km~s$^{-1}\,$), which is ejected upward from the flare current sheet, transfers momentum to the FR and contributes to its acceleration (Fig.~\ref{fig:vars_eruption1}d).
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{eruption2.pdf}
\caption{ Field line morphology of the second eruption at $t$=74, 79, 83, 85, 87~min. Green lines are traced from the top of the post-reconnection arcade field of the first eruption. Red lines are envelope fieldlines traced above the FR (b,c,d,e,f). Blue lines are J-shaped lines. Yellow lines are traced from the FR center. Purple isosurface is $\left| J/B \right|>0.3$. Gray lines are fieldlines from the first eruption (now acting as external field).
\textbf{(d):} Closeup of (c) showing the EJ-TC reconnection. \textbf{(f)}: Arrows show the two hook-shaped segments of the fieldlines (red lines). }
\label{fig:eruption2}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{var_time_eruption2_units.pdf}
\caption{Second eruption's key parameters with time.
\textbf{(a):} Height-time profile of FR center (black) and FR velocity (h-t derivative, blue). \textbf{(b):} Torus index measured along the height-time profile. The highlighted region shows an estimated range of values for the occurrence of a torus instability. \textbf{(c):} Maximum $\left| J/B \right|$ along the CS. \textbf{(d):} Maximum reconnection outflow. \textbf{(e):} Ratio of mean tension ($T_z$) over its initial value ($T_{z_0}$).
Vertical lines mark the times of the possible onset of the torus instability (first line) and the EJ-TC reconnection (second line).
}
\label{fig:vars_eruption2}
\end{figure}
\subsubsection{Second eruption}
\label{sec:eruption2}
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{eruption3.pdf}
\caption{ Field line morphology of the third eruption at $t$=102, 106, 114, 117~min. \textbf{(a:)} J-like loops (blue) and sea-serpent fieldlines (dark green).
\textbf{(b,c,d):} similar to Fig.~\ref{fig:eruption1} and Fig.~\ref{fig:eruption2}.}
\label{fig:eruption3}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{var_time_eruption3_units.pdf}
\caption{Third eruption's key parameters with time.
\textbf{(a):} Height-time profile of FR center (black) and FR velocity (h-t derivative, blue). \textbf{(b):} Torus index measured along the height-time profile. The highlighted region shows an estimated range of values for the occurrence of a torus instability. \textbf{(c):} Maximum $\left| J/B \right|$ along the CS. \textbf{(d):} Maximum reconnection outflow. \textbf{(e):} Ratio of mean tension ($T_z$) over its initial value ($T_{z_0}$). Vertical lines mark the times of the possible onset of the torus instability (first line) and the EJ-TC reconnection (second line).
\label{fig:vars_eruption3}
\end{figure}
In the following, we focus on the dynamics of the second eruption. Fig.~\ref{fig:eruption2}a and Fig.~\ref{fig:eruption2}b are close-ups of the area underneath the first erupting FR at $t=74$~min and $t=79$~min respectively. In a similar manner to the formation of the first FR, the second FR (yellow fieldlines, Fig.~\ref{fig:eruption2}b) is formed due to reconnection between J-loops (blue fieldlines). The post-reconnection arcade (green and red fieldlines in Fig.~\ref{fig:eruption2}a, Fig.~\ref{fig:eruption2}b), which was formed after the first eruption (cyan lines, Fig.~\ref{fig:eruption1}f), overlies the yellow fieldlines and, thus, it acts as an envelope field for the second FR. Above and around this envelope field, there are fieldlines (grey) which belong to the first eruptive flux system but they have not exited the numerical domain yet. Hereafter, we refer to this field as the external, pre-existing field.
As the second post-emergence FR moves upwards, the envelope fieldlines are stretched vertically and their footpoints move towards the current sheet (pink isosurface). However, they do not reconnect in an EE-TC manner. Instead, the lower segments of the envelope fieldlines reconnect with the J-like loops. Hereafter, for simplicity, we refer to this as EJ-TC reconnection (i.e. Envelope-J Tether Cutting reconnection). This difference is due to the different orientation of the envelope fieldlines. As we have previously shown (green lines, Fig.~\ref{fig:4_plot}b top view), the envelope fieldlines in the second eruption do not have a strongly azimuthal nature. They are mainly oriented along the $y$-direction. Therefore, their lower segments come closer to the J-like loops and reconnect with them (e.g. bottom right red lines, Fig.~\ref{fig:eruption2}c).
To better illustrate the EJ-TC reconnection, in Fig.~\ref{fig:eruption2}d we show a close-up of this region. Here, the envelope fieldlines (green) reconnect with the J-like loops (blue) to form the hook-shaped fieldlines (red).
Eventually, this process occurs at both footpoints of the envelope fieldlines, forming new fieldlines such as the red ones in Fig.~\ref{fig:eruption2}e.
Notice that these new reconnected fieldlines are winding around the footpoints of the rising FR and, therefore, they become part of the erupting field.
In general, the EJ-TC reconnection removes flux from the envelope field and adds flux to the FR. Also, the downward tension of the envelope field decreases during EJ-TC reconnection.
Before the FR exits the box (Fig.~\ref{fig:eruption2}f) most of the envelope field has been subject to EJ-TC reconnection.
We should highlight that we do not find evidence of EE-TC reconnection during the second eruption.
EJ-TC and EE-TC reconnection produce fieldlines with a different shape. In the first eruption, the EE-TC reconnected fieldlines (red, Fig.~\ref{fig:eruption1}c-e) are ejected towards the FR center, adopting a ``W-shaped'' configuration. The concave-upward segments of the W-like fieldlines (arrows, Fig.~\ref{fig:eruption1}d) bring dense plasma from the low atmosphere into the central region of the FR. In the second eruption, the EJ-TC reconnected fieldlines have hook-like segments at their footpoints (arrows, Fig.~\ref{fig:eruption2}f). In this case, the tension of the reconnected fieldlines ejects hot and dense plasma sideways (mainly along the $y$-direction) and not towards the center of the FR. Thus, due to the different way that the envelope fieldlines reconnect, the temperature and density distributions within the erupting field show profound differences between the first and the following eruptions. This is discussed in more detail in Sec.~\ref{sec:temp_etc}.
We now plot the h-t profile and its derivative for the second FR (black and blue lines, Fig.~\ref{fig:vars_eruption2}a).
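The velocity curve here is simply the time derivative of the height-time profile. A minimal sketch of this calculation, using illustrative sample points rather than the simulation data:

```python
import numpy as np

# Illustrative height-time samples (t in minutes, h in Mm); not the simulation data.
t = np.array([74.0, 76.0, 78.0, 80.0, 82.0, 84.0])
h = np.array([5.0, 6.2, 8.0, 11.0, 16.5, 25.0])

# Central differences give the FR rise velocity in Mm/min.
v = np.gradient(h, t)

# Convert to km/s: 1 Mm/min = 1000 km / 60 s.
v_kms = v * 1000.0 / 60.0
```

For an exponentially rising FR, the derivative grows monotonically, which is how the slow-rise and fast-rise phases are distinguished in the figure.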
To calculate the torus index, we consider the potential magnetic field $B_p$.
As discussed earlier, the calculation of the potential field takes into account all the boundaries of the numerical domain.
This means that the potential field solution will not approximate the envelope field everywhere. It will approximate the envelope field up to a height where the solution of the Laplace equation will be strongly influenced by the lower boundary (photosphere).
Above that height, the potential solution will be influenced by the upper boundary and will describe the external field.
We therefore examine how the potential field values vary with height, expecting them not to change drastically in the region of the envelope field.
We find that the potential field solution does not describe the envelope field accurately above a certain height (different for different snapshots), whereas below that height it describes the envelope field well. This transition happens around $z\approx$15-20~Mm. Therefore, when we calculate the torus index, we do not take into account values obtained when the FR is located above $z$=15~Mm.
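For reference, the torus (decay) index is $n=-\mathrm{d}\ln B_p/\mathrm{d}\ln z$. A minimal numerical sketch of its evaluation, using an illustrative power-law profile $B_p\propto z^{-1.5}$ in place of the actual potential-field solution, and applying the $z$=15~Mm cutoff discussed above:

```python
import numpy as np

# Illustrative potential-field profile B_p(z) ~ z^(-1.5); in practice B_p comes
# from the Laplace solution constrained by all boundaries of the domain.
z = np.linspace(3.0, 25.0, 100)     # height in Mm
Bp = 50.0 * z**-1.5                 # field strength, arbitrary units

# Decay index n = -d ln(Bp) / d ln(z), evaluated by finite differences.
n = -np.gradient(np.log(Bp), np.log(z))

# Keep only heights where the potential solution tracks the envelope field.
valid = z <= 15.0
n_valid = n[valid]
```

For an exact power law the index is constant (here 1.5); for the simulated envelope field $n$ increases with height, and the FR is flagged as torus unstable once $n$ enters the estimated critical range.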
According to the h-t profile, we find that the FR enters the exponential rise phase at $t=79.3$~min (first vertical line, Fig.~\ref{fig:vars_eruption2}b). The torus index at this time is $n=1.22$ and lies in the estimated range of values for the occurrence of a torus instability.
During this phase, the maximum $\left|J/B\right|$ does not increase dramatically (Fig.~\ref{fig:vars_eruption2}c). The current sheet becomes more elongated and the reconnection outflow becomes more enhanced after $t=81$~min (Fig.~\ref{fig:vars_eruption2}d).
When the EJ-TC reconnection starts, we find that the tension above the FR starts to decrease drastically (second vertical line, Fig.~\ref{fig:vars_eruption2}e). Also, after the initiation of the EJ-TC reconnection, the current density of the current sheet becomes more enhanced.
Due to the above, one possible scenario is that the torus instability is responsible for the onset of the exponential phase of the h-t profile, and the EJ-TC reconnection occurs {\it during} the rapid rise of the FR. Another possible scenario is that both processes are at work during the eruptive phase and it is the interplay between them, which leads to the fast eruption of the FR.
In terms of the energy, we have found that the kinetic energy of the second eruption is larger than that of the first eruption (red line, Fig.~\ref{fig:energy}).
This difference is not necessarily associated with the different TC reconnection processes. For instance, the downward magnetic tension of the envelope field above the second FR is weaker. As a result, the upward motion of the FR is faster. Also, the photospheric unsigned magnetic flux increases between the two eruptions due to the continuous emergence. Thus, there is more available flux at the photosphere for the second eruption. Similarly, the magnetic energy in the corona (black line, Fig.~\ref{fig:energy}) increases between the two eruptions, indicating that more energy is available for the second eruption.
\subsubsection{Third and fourth eruption}
After the second FR exits the numerical domain, the overall fieldline morphology is similar to the first post-eruption phase. There is an external field, a post-reconnection arcade that acts as an envelope field and also the J-like loops.
At the photosphere-chromosphere, we also find sea-serpent fieldlines (dark green lines, Fig.~\ref{fig:eruption3}a), similar to the previous works of \citet{Fan_2009, Archontis_etal2013}.
Most of these fieldlines originate from the partial emergence of the sub-photospheric field at different locations along the PIL.
These fieldlines reconnect at many sites along the PIL during the early FR formation. Still, the major role in the FR formation is played by the reconnection of J-like loops (blue and yellow lines, Fig.~\ref{fig:eruption3}b).
In comparison to the second eruption, we find that the morphology of the external field is different. The second eruption (with a kinetic energy peak at $t$=87~min) happened right after the first eruption (with a kinetic energy peak at $t$=72~min). Thus, the external field that the second eruption had to push through was more horizontal (Fig.~\ref{fig:eruption2}c, gray lines are almost parallel to the photosphere). The third eruption (during which the kinetic energy takes its maximum value at $t$=119~min) happens after the second FR exits the numerical box. As a result, the external field is more vertical with respect to the photosphere and, consequently, it has a very small downward tension (gray lines, Fig.~\ref{fig:eruption3}b).
EJ-TC reconnection occurs also during the third eruption (Fig.~\ref{fig:eruption3}c). However, we find that only some of the envelope fieldlines reconnect in both their footpoints (Fig.~\ref{fig:eruption3}c,d), before they exit the numerical domain. The implication of this difference will be discussed in Sec.~\ref{sec:temp_etc}.
We do not find evidence of EE-TC reconnection during the third eruption.
Regarding the torus instability, we should mention that at $t\approx100-104$~min, the FR is located very close to the photosphere, at heights $z\approx1.5-3$~Mm. We find that the value of $B_p$ (and hence $n$) at these heights depends on the choice of the lower boundary (i.e. the exact height of the photospheric layer, which is used to calculate the potential field). Thus, the values of the torus index for heights up to $z\approx3$~Mm differ between these solutions. Above that height, all the solutions converge. We conjecture that the main reason for the change in the values of $B_p$ and $n$ is the build-up of a complex external field after each eruption.
However, from the height-time profile (Fig.~\ref{fig:vars_eruption3}a), we find that for $t\simeq 104$~min the FR is located just above $z\approx3$~Mm, where the value of the torus index is well defined. Also, we find that $n\geq 1$ for $t > 104$~min (first vertical line, Fig.~\ref{fig:vars_eruption3}b). This is an indication (although not conclusive) that the torus instability might be associated with the onset of the eruption.
Notice that during the time period $t\approx104-110$~min, there is no direct evidence that effective reconnection (e.g. EJ-TC reconnection) is responsible for the driving of the eruption. Fig.~\ref{fig:vars_eruption3}c, d, e show that the reconnection upflow underneath the flux rope undergoes only a small increase (due to reconnection between J-like fieldlines) and $J/B$ experiences a limited drop. The tension of the envelope fieldlines decreases mainly because of the 3D-expansion and not because of vigorous EJ-TC reconnection. Therefore, due to the above limitations, we cannot reach a definite conclusion about the exact contribution of reconnection at the onset of the eruption in this initial phase.
In contrast, for $t > 110$~min, there is a clear correlation between the increase of the reconnection outflow and $J/B$ and the decrease of the tension. This is due to effective EJ-TC reconnection, which releases the tension of the envelope field and boosts the acceleration of the erupting field. A preliminary comparison between the second and third eruptions shows that the maximum values of the current and reconnection outflow are similar, while the length of the CS and the extent of the jet are much smaller. The fourth eruption is very similar to the third eruption.
\subsection{Temperature, density, velocity and current}
\label{sec:temp_etc}
\begin{figure*}
\centering
\includegraphics[width=0.93\textwidth]{temperature_rho_j_vz.pdf}
\caption{Density (first row), temperature (second row), $V_z$ (third row) and $\sqrt{\left| J/B \right|}$ (fourth row) measured at the $xz$-midplane, for the first (first column), second (second column), third (third column) and fourth (fourth column) eruption. Asterisks mark the location of the FR center.}
\label{fig:temp_etc}
\end{figure*}
There are some remarkable similarities and differences between the four eruptions, as illustrated in
Fig.~\ref{fig:temp_etc}. All panels in this figure are 2D-cuts, at the vertical $xz$-midplane, at times just before the erupting structures exit the numerical domain.
The density distribution (first row) shows that all eruptions adopt an overall bubble-like configuration, due to the expansion of the magnetic field as it rises into larger atmospheric heights. We notice that the erupting field consists of three main features, which are common in all eruptions. For simplicity, we mark them only in the first eruption (panel a1). These features are: (a) the inner-most part of the bubble, which is located at and around the center of the erupting field (marked by asterisk), filled with dense plasma - we refer to this part as the ``core'' of the eruption, (b) the low density area that immediately surrounds the ``core'' - we refer to this as the ``cavity'' and it is the result of the cool adiabatic expansion of the rising magnetic field and (c) the ``front'' of the erupting structure, which is a thin layer of dense material that envelops the ``cavity'' and it demarcates the outskirts of the erupting field. To some extent, the shape of the eruptions in our simulations is reminiscent of the ``three-part'' structure of the observed small-scale prominence eruptions \citep[mini or micro CMEs e.g.][]{Innes_etal2010b,Raouafi_etal2010,Hong_etal2011} and/or CMEs \citep[e.g.][]{Reeves_etal2015}. Because of this, hereafter, we refer to the simulated eruptions as CME-like eruptions.
Now, by looking at the temperature distribution (second row), we notice that there is a mixture of cold and hot plasma within the erupting field (in all cases, b1-b4). In fact, in the first eruption, there is a noticeable column of hot plasma, which extends vertically from $x=0$~Mm, $z=10$~Mm up to $z=40$~Mm. Thus, in this case the ``core'' of the erupting field appears to be hot, with a temperature of about 8~MK. On the contrary, the ``core'' of the following eruptions is cool (5,000-20,000~K) and dense, but is surrounded by hot (0.5-2~MK) plasma. In all cases, the origin of the hot plasma is the reconnection process occurring at the flare current sheet underneath the erupting field.
The distribution of $\sqrt{\left| J/B \right|}$ is shown at the fourth row in Fig.~\ref{fig:temp_etc} (d1-d4).
The flare current sheet is the vertical structure with high values of $\sqrt{\left| J/B \right|}$, and is located at around $x=0$~Mm and between $z=12$~Mm and $z=25$~Mm. The velocity distribution (panels c1-c4) shows that a bi-directional flow is emitted from the flare current sheet. This flow is a fast reconnection jet, which transfers the hot plasma upwards (into the erupting field) and downwards (to the flare arcade located below $z=10$~Mm).
Thus, a marked difference between the first and the following eruptions is that, in the first eruption, the upward reconnection jet shoots the hot plasma vertically into the ``core'' of the erupting field, while in the following eruptions, the upward jet only reaches lower heights, arriving below the ``core''.
In the latter cases, the jet is diverted sideways at heights below the center of the erupting FR, adopting a Y-shaped configuration (e.g. see c2-c4). In the first eruption, the EE-TC reconnection creates fieldlines which have a highly bent concave-upward shape (i.e. towards the central region of the erupting bubble, see red lines in Fig.~\ref{fig:eruption1}d). It is the strong (upward) tension of these fieldlines that ejects the hot plasma to large heights and into the ``core'' of the field. In the following eruptions, the tension force that accelerates the hot jet upflow is weaker. This is because the reconnected fieldlines of the jet are the result of reconnection between Js (e.g. see blue lines in Fig.~\ref{fig:eruption2}e), which are not as vertically stretched as the envelope fieldlines during the EE-TC reconnection. Thus, the upward tension of the reconnected fieldlines at the flare current sheet is weaker. Therefore, the hot reconnection jet is not strong enough to reach large atmospheric heights and to heat the central region of the erupting field. When it reaches close to the heavy core of the erupting FR, it is diverted sideways (where the pressure is lower) and the embedded hot plasma runs along the reconnected fieldlines.
In general, the temperature distribution within the overall volume of the erupting field correlates well with the distribution of $\sqrt{\left| J/B \right|}$, which implies that heating occurs mainly at sites with strong currents. As we mentioned above, one such area is the flare current sheet underneath the erupting FR. Another example is the heating that occurs at a thin current layer formed between the ``core'' and the ``cavity'' of the erupting bubble (e.g., see panel b1). This current layer is formed after the onset of the fast-rise phase of the erupting FR. The uppermost fieldlines of the erupting core are moving upwards with a higher speed than the fieldlines within the ``cavity'', which rise due to the expansion of the emerging field. Thus, at the interface between the two sets of fieldlines, the plasma is compressed and heated locally (up to 1~MK). This process occurs in all cases (panels d1-d4), although it is more clearly visible in the first eruption (panel b1). The reconfiguration of the field after the first eruption leads to a more complex fieldline morphology, distribution of $\sqrt{\left| J/B \right|}$ and heating within the rising magnetized volume (panels b1-b4).
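The $\sqrt{\left|J/B\right|}$ diagnostic can be evaluated directly from the magnetic field on the grid. A schematic example, assuming a uniform grid and an illustrative 2.5D Harris-sheet-like field (not the simulation output), in units where $\mu_0=1$:

```python
import numpy as np

# Illustrative uniform grid (x, z) in Mm; the real B comes from the MHD output.
nx = nz = 64
dx = dz = 0.5
x = np.arange(nx) * dx
X = np.broadcast_to(x[:, None], (nx, nz))

# Harris-sheet-like field: Bz reverses across x = x_c, forming a current layer;
# the guide field By keeps |B| = 1 everywhere.
xc = x[nx // 2]
Bx = np.zeros((nx, nz))
Bz = np.tanh(X - xc)
By = 1.0 / np.cosh(X - xc)

# In 2.5D, the out-of-plane current is J_y = dBx/dz - dBz/dx (mu_0 = 1).
Jy = np.gradient(Bx, dz, axis=1) - np.gradient(Bz, dx, axis=0)

Bmag = np.sqrt(Bx**2 + By**2 + Bz**2)
ratio = np.sqrt(np.abs(Jy) / Bmag)   # the sqrt(|J/B|) quantity discussed in the text
```

The ratio peaks at the field-reversal layer, which is why thin current sheets stand out so clearly in the $\sqrt{\left|J/B\right|}$ panels.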
\subsection{Geometrical extrapolation}
\label{sec:extrapolation}
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{fig_extrapolation.pdf}
\caption{ \textbf{(a):} Geometrical extrapolation based on the position of the flanks of the magnetic volume during its eruption. \textbf{(b):} Same as (a) but shown in the 3D volume of the numerical domain. \textbf{(c):} The extrapolated size of the erupting volume at 0.6~R$_\odot$ above the solar surface. The black box has the physical size of the simulation box.}
\label{fig:extrapolation}
\end{figure*}
Coronagraphic observations of CMEs show that they usually exhibit a constant angular width (i.e. the flanks of the erupting structure move upward, along two approximately straight lines) \citep[e.g.][]{Moore_etal2007}.
Based on that, we perform a geometrical extrapolation of the size of the first eruption.
For this, we find the location of the flanks of the structure at consecutive times and fit a straight line. Firstly, we mark the location of the flank of the erupting structure at a time $t_i$, when the flank is very distinguishable (diamond on the left flank, Fig.~\ref{fig:extrapolation}a). Next, we select the flank location prior to $t_i$ (marked with $t_{i-6},t_{i-5}$ etc.) and after (marked with $t_{i+1}$), and fit a straight line through these points (blue line). We then do the same for the other flank. The point where they intersect is approximately the height of the initiation. These extrapolated lines are also plotted in the 3D volume of our numerical box for better visualization (Fig.~\ref{fig:extrapolation}b).
After we find these lines, we extrapolate them to 0.6~$R_\odot$. For size comparison, we plot them on the solar limb (blue lines, Fig.~\ref{fig:extrapolation}c). The box at the bottom of the extrapolations shows the size of our numerical box. It is clear that, although the eruptions originate from a small-scale region, they grow in size and may well evolve into considerably larger-scale events.
We should highlight that the above method is a first order approximation regarding the spatial evolution of the first eruption, assuming that the erupting field will continue to rise and expand even after it leaves the numerical domain.
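The extrapolation described above amounts to fitting a straight line through each flank, intersecting the two lines to estimate the initiation height, and evaluating their separation at 0.6~R$_\odot$. A schematic implementation with made-up flank coordinates (the actual positions are measured from the simulation):

```python
import numpy as np

# Illustrative (x, z) flank positions in Mm at consecutive snapshots;
# the measured values from the simulation are not reproduced here.
left_x  = np.array([-2.0, -4.0, -7.0, -10.0])
left_z  = np.array([16.0, 22.0, 31.0,  40.0])
right_x = np.array([ 2.0,  4.0,  7.0,  10.0])
right_z = np.array([16.0, 22.0, 31.0,  40.0])

# Fit z = a*x + b through each flank.
aL, bL = np.polyfit(left_x,  left_z,  1)
aR, bR = np.polyfit(right_x, right_z, 1)

# The intersection of the two lines approximates the initiation height.
x0 = (bR - bL) / (aL - aR)
z0 = aL * x0 + bL

# Width of the erupting volume extrapolated to 0.6 solar radii.
z_target = 0.6 * 696.0                 # 0.6 R_sun in Mm (R_sun ~ 696 Mm)
width = (z_target - bR) / aR - (z_target - bL) / aL
```

The constant-angular-width assumption enters only through the straight-line fits; any acceleration or additional expansion beyond the numerical domain would modify the estimate.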
The maximum value of the magnetic energy in the simulated eruptions is $1\times10^{28}$~erg and the kinetic energy varies in the range $3\times10^{26}-1.5\times10^{27}$~erg. Based on the size of our numerical box and the aforementioned values of energies, the eruptions in this simulation could describe the formation and ejection of small scale CME-like events. Most CMEs have typical values of kinetic energies around $10^{28}-10^{30}$~erg \citep{Vourlidas_etal2010}.
\section{Summary and discussion}
\label{sec:conclusions}
In this work we studied the formation and triggering of recurrent eruptions in an emerging flux region using numerical simulations. The initial emergence of the sub-photospheric flux tube formed a bipolar region at the photosphere. The combination of shearing motions and the rotation of the two opposite polarities formed J-like fieldlines, which reconnected to create a FR that eventually erupted ejectively towards the outer solar atmosphere.
In total, four successive eruptions occurred in the simulation.
We found that the strength of the magnetic envelope field above the eruptive FRs dropped fast enough so that the FRs became torus unstable.
The initial slow-rise phase of the first FR started due to the torus instability. The rising FR pushed the envelope field upwards. The fieldlines of the envelope field reconnected in a tether-cutting manner and, as a result, the tension of the overlying field dropped in an exponential way.
At that time, the FR entered the fast-rise phase. The fieldlines formed due to the reconnection of Js turned about once around the axis of the FR, while the fieldlines resulting from the tether-cutting of the envelope field turned at least twice around the axis of the FR. The reconnected fieldlines that were released downwards formed a post-reconnection arcade.
After the eruption of the first FR, reconnection of J-like fieldlines continued to occur and another FR was formed, which eventually erupted. This process of FR formation occurred two more times in a similar manner. In all cases, the post-reconnection arcade acted as a new ``envelope'' field for the next FR. We found that the envelope field was decaying fast enough to favor torus instability. The envelope fields between the second, third and fourth eruption differed mostly at the height where the FRs became torus unstable ($n\approx1-2$).
However, we should highlight that our calculation of the torus index is approximate because the envelope field evolves dynamically (e.g. it undergoes expansion). The derivation of the torus instability criteria in previous analytical studies took into account perturbations of a static configuration. Thus, a more accurate estimate of the torus index in our simulations would be to let the envelope field relax at each time step and then calculate $n$. This can only be done if the driver of the system could be stopped, letting the overall magnetic flux system reach an equilibrium \citep[e.g.][]{Zuccarello_etal2015}.
However, in our dynamical simulations, there is a certain amount of available magnetic flux, which can emerge to the photosphere and above. The driver of the evolution of the system (i.e. magnetic flux emergence) cannot be stopped before the available magnetic flux is exhausted. Therefore, on this basis, we study the {\it continuous} evolution of the system.
Still, in our experiments, the magnitude of the current inside the envelope field is at least ten times lower than the one in the FR core, so we expect that the envelope field is not far away from the potential state.
The removal of the downward tension of the envelope field is important for the erupting FRs. In the first eruption, the removal of the envelope tension occurred through the reconnection of the envelope field with other envelope fieldlines (EE-TC reconnection). In the other three eruptions, the envelope field reconnected with J-like fieldlines (EJ-TC reconnection). The differences between EE-TC reconnection and the EJ-TC reconnection were found to be significant for the density and temperature distribution within the erupting structure. After the EE-TC reconnection, the reconnected fieldlines underneath the erupting FR adopted a W-like shape, with two upward concave regions (red lines Fig.~\ref{fig:eruption1}d, see arrows). After the EJ-TC reconnection, the lower segments of the reconnected fieldlines adopted a hook-like shape (red lines, Fig.~\ref{fig:eruption2}f, see arrows).
In the case of EE-TC reconnection, the upward tension of the reconnected fieldlines (as illustrated by the upward-stretched segments in the middle of the W-shaped fieldlines) pushed hot plasma from the flare current sheet into the erupting field via a hot and fast collimated jet. Due to this process, the temperature of the central region of the erupting FR changed during the eruption, from low to high values (b1, Fig.~\ref{fig:temp_etc}).
In the case of EJ-TC reconnection, the plasma transfer from the flare current sheet to the erupting field was mainly driven by the reconnection of Js, and therefore the resulting reconnection jet was not as collimated as in the EE-TC reconnection. In the second eruption, this post-reconnection hot jet collided with the FR and became diverted into two side jets (a2 and c2, Fig.~\ref{fig:temp_etc}). In the third and fourth eruption, the jets were not fast enough to enter the region of the erupting core of the field (a3 and c3, a4 and d4, Fig.~\ref{fig:temp_etc}).
Thus, the study of the temperature distribution revealed that due to EE-TC reconnection, the erupting field develops a ``3-part'' structure consisting of a hot front ``edge'', a cold ``cavity'', and a hot and dense ``core''. In the following eruptions, the temperature of the plasma within the central region of the FRs remained low. Therefore, we suggest that observations of erupting FRs which are heated, e.g. from $10^{3}$~K to $10^{6}$~K, \textit{during} their eruptive phase might indicate that EE-TC reconnection is at work. We should mention that heat conduction is not included in our simulation.
Therefore, the exact value of the temperature within the erupting field may change if heat conduction were to be included in the numerical experiment.
Overall, we report that the physical mechanism behind the formation of recurrent ejective eruptions in our flux emergence simulation is a combination of torus unstable FRs and the onset of tether-cutting of the overlying field through a flare current sheet.
Both the EE-TC reconnection and the EJ-TC reconnection were found to remove the downward tension of the overlying field and thus assisting the eruptions. In the first eruption, it is likely that torus instability occurs first, and the rapid exponential rise phase of the erupting FR comes after the EE-TC reconnection. For the other eruptions, where the structure of the magnetic field above the FR has a more intricate morphology, it is difficult to conclude which process is responsible for the onset of the various phases of the eruptions.
Comparing our results with previous studies, the formation of all the FRs in our simulation is due to the reconnection of sheared J-like fieldlines, in a similar manner to earlier simulations \citep[e.g. ][]{Aulanier_etal2010,Archontis_etal2012,Leake_etal2013,Leake_etal2014}.
It is also interesting to note that the velocity and current profile of our first eruption (Fig.~\ref{fig:temp_etc}c1, d1) are very similar morphologically to the ones produced from the flare reconnection in the breakout simulation of \citet{Karpen_etal2012}, who used a (different) 2.5D adaptive grid code.
Such similarities indicate that the resulting morphologies might be generic and indicative of the EE-TC reconnection.
\citet{Moreno-Insertis_etal2013} performed a flux emergence simulation of a highly twisted flux tube into a magnetized atmosphere and found recurrent eruptions. In comparison to our simulation, the sub-photospheric flux tube in the work of \citet{Moreno-Insertis_etal2013} had a higher magnetic field strength ($B_{0}=3.8$~kG), a greater length of the buoyant part of the flux tube ($\lambda$=20 in comparison to our $\lambda$=5) and was located closer to the photosphere ($z=-1.7$~Mm). In their work, their first FR is formed, similarly to our simulation, by the reconnection of sheared-arcade fieldlines. The higher $\lambda$ leads to the formation of a more elongated emerging FR and a longer sigmoid. The eruption mechanism, though, is very different. It involves reconnection between the sheared-arcade fieldlines and the open fieldlines of the ambient field. Also, it involves reconnection of the sheared arcade with a magnetic system produced from the reconnection of the ambient field with the initial emerging envelope field. Their second and third eruption are off-centered eruptions of segments of the initial flux tube, that eventually become confined by the overlying field. In our case, the flux tube axis emerges only up to 2-3 pressure scale heights above the photosphere ($z$=0) and the erupting FRs are all formed due to reconnection of J-loops.
\citet{Murphy_etal2011} discussed possible heating mechanisms for the dynamic heating of CMEs, one of which is heating from the CME flare current sheet. Taking into account the results of previous studies \citep[e.g. ][]{Lin_etal2004}, they reported that the reconnection hot upward jets from the flare current sheet could reach the cool central region of the erupting FR and heat it. In fact, this leads to some mixing of hot and cool plasma within the central erupting volume.
From the two different tether-cutting reconnections found in our simulation, only the EE-TC reconnection allows effective transfer of hot plasma from the flare current sheet into the FR central region, by the reconnection outflow.
This might account for a process similar to the afore-mentioned mixing of hot and cold plasma, as suggested by \citet{Lin_etal2004}.
On the other hand, during EJ-TC reconnection, hot plasma is mainly found at the periphery of the central region of the FR.
The physical size of our simulated emerging flux region was 23.4~Mm, and the size of the FRs was up to 64.8~Mm (the length of the $y$-axis). The height of our numerical box was 57.6~Mm. The kinetic energies of the eruptions were $3\times10^{26}-1.5\times10^{27}$~erg and the magnetic energies around $1\times10^{28}$~erg.
These values suggest that our numerical experiment describes an emerging flux region, which hosts relatively low energy eruptions in comparison to CMEs. Based on the sizes and the energetics, these eruptions can describe the formation and eruption of small-scale eruptive events. For instance, such an eruption, in terms of physical size though not magnetic configuration, was reported by \citet{Raouafi_etal2010,Reeves_etal2015}. Still, the results on the plasma transfer for the different flare reconnections (EE-TC reconnection and EJ-TC reconnection) should be scale invariant.
Having reproduced a CME-like configuration (a1 and b1, Fig.~\ref{fig:eruption1}) we extrapolated the expansion of the flanks of the erupting ``bubble'' and estimated its size in 0.6~R$_\odot$. We found that these eruptions have the potential to become comparable to small-sized CMEs (Fig.~\ref{fig:extrapolation}c), but with one order of magnitude lower kinetic energy.
We aim to study the parameters that would increase the energies of the produced eruptions. For this, in our next paper, we will present the results of a parametric study on the magnetic field strength of the sub-photospheric flux tube, examining the differences in energetics, physical size and recurrence of the eruptions.
\acknowledgments
The Authors would like to thank the Referee for the constructive comments.
This project has received funding from the Science and Technology Facilities Council (UK) through the consolidated grant ST/N000609/1.
This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program ``Education and Lifelong Learning'' of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales Investing in knowledge society through the European Social Fund.
The authors acknowledge support by the Royal Society.
This work was supported by computational time granted from the Greek Research \& Technology Network (GRNET) in the National HPC facility - ARIS.
This work used the DIRAC 1, UKMHD Consortium machine
at the University of St Andrews and the DiRAC Data Centric system at
Durham University, operated by the Institute for Computational Cosmology on
behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment
was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC
capital grant ST/H008519/1, and STFC DiRAC Operations grant ST/K003267/1
and Durham University. DiRAC is part of the National E-Infrastructure.
\bibliographystyle{apj}
\section{Introduction}
The formation of Active Regions (ARs) is often associated with the emergence of magnetic flux (EMF) from the solar interior \citep[e.g.][]{Parker_1955}. Many explosive phenomena observed on the Sun, such as flaring events and CMEs, are associated with ARs. In fact, it has been observed that a single AR can produce several CMEs in a recurrent manner \citep[e.g. ][]{Nitta_etal2001, Zhang_etal2008,Wang_etal2013}.
Solar eruptions have been studied extensively in the past. Observational studies have reported on the pre-eruptive phase of the eruption \citep[e.g.][]{Canou_Amari2010,Vourlidas_etal2012,Syntelis_etal2016}, the triggering of the eruptions \citep[e.g.][]{Zuccarello_etal2014,Reeves_etal2015,Chintzoglou_etal2015} and the propagation of the erupting structures in the interplanetary medium \citep[e.g.][]{Colaninno_etal2013} and towards the Earth \citep[e.g.][]{Patsourakos_etal2016}.
Often, eruptions are associated with the formation of a twisted magnetic field structure, which is commonly referred to as a magnetic flux rope (FR) \citep[e.g.][]{Cheng_etal2011,Green_etal2011,Zhang_etal2012,Patsourakos_etal2013}.
Still, various aspects of the formation, destabilization and eruption of FRs remain under debate.
Numerical models studying the formation of magnetic FRs in the solar atmosphere have extensively demonstrated the role of shearing, rotation and reconnection of fieldlines in the buildup of magnetic twist. As an example, magnetic flux emergence experiments \citep[e.g.][]{Magara_etal2001,Fan_2009,Archontis_Torok2008} have shown that shearing motions along a polarity inversion line (PIL), can lead to reconnection of sheared fieldlines and the gradual formation of FRs, which may erupt in a confined or ejective manner \citep[e.g. ][]{Archontis_etal2012}.
Furthermore, experiments where rotational motions are imposed at the photospheric boundary \citep[symmetric and asymmetric driving of polarities, ][]{DeVore_etal2008,Aulanier_etal2010} have shown that the shearing motions can form a pre-eruptive FR and destabilize the system.
Once a FR is formed, it may erupt in an ejective manner towards outer space \citep[e.g. ][]{Leake_etal2014} or remained confined, for instance, by a strong overlying field \citep[e.g. ][]{Leake_etal2013}. There are two main proposed mechanisms, which might be responsible for the triggering and/or driving of the eruption of magnetic FRs. One is the non-ideal process of magnetic reconnection and the other is the action of an ideal MHD instability.
One example of reconnection which leads to the eruption of a magnetic FR, is the well-known tether-cutting mechanism \citep{Moore_etal1980,Moore_etal1992}. During this process, the footpoints of sheared fieldlines reconnect along a PIL, forming a FR. The FR slowly rises dragging in magnetic field from the sides and a current sheet is formed underneath the FR. Eventually, fast reconnection of the fieldlines that envelope the FR occurs at the current sheet. Then, the upward reconnection outflow assists the further rise of the FR. In this way, an imbalance is achieved between a) the upward magnetic pressure and tension force and b) the downward tension force of the envelope fieldlines. This leads to an ejective eruption of the FR. Another example is the so-called break-out reconnection, between the envelope field and a pre-existing magnetic field. If the relative orientation of the two fields is antiparallel, (external) reconnection between them becomes very effective when they come into contact
\citep[e.g. ][]{Antiochos_etal1999, Karpen_etal2012, Archontis_etal2012, Leake_etal2014}.
This reconnection releases the downward magnetic tension of the envelope field and the FR can ``break-out'', experiencing an ejective eruption. We should highlight that the relative orientation and field strengths of the interacting magnetic systems are important parameters that affect the eruption of the FR.
In previous studies, it has been shown that depending on the value of these parameters, the rising FR could experience an ejective eruption or be confined by the envelope field or even become annihilated by the interaction with the pre-existing magnetic field \citep[e.g. ][]{Galsgaard_etal2007, Archontis_etal2012, Leake_etal2014}.
Solar eruptions can also be triggered by ideal processes. For instance, the helical kink instability \citep[][]{Anzer_1968,Torok_etal2004}, which occurs when the twist of the FR exceeds a critical value that depends on the configuration of the FR (e.g. cylindrical, toroidal) and the line-tying effect \citep[e.g. ][]{Hood_Priest_1981,Torok_etal2004}. During the instability, the axis of the rising FR develops a helical shape. The eruption of the helical magnetic field could be ejective or confined, depending e.g. on how strong the overlying magnetic field is \citep{Torok_Kliem_2005}.
Another crucial parameter, which affects the eruption of a FR, is how the external constraining magnetic field decreases with height.
This is related to the so-called torus instability \citep{Bateman_1978,Kliem_etal2006}. In this model, a toroidal current ring with major radius $R$ is placed inside an external magnetic field, which drops along the direction of the major radius as $R^{-n}$.
Due to the current ring's curvature, a hoop force acts on the current ring. This force is directed away from the center of the torus.
An inwards Lorentz force acts on the current ring due to the external magnetic field.
Previous studies \citep{Bateman_1978,Kliem_etal2006} showed that, if the decrease rate of the external field (i.e. $n=- \partial \ln B_\mathrm{ext} / \partial \ln R$) exceeds a critical value ($n_{crit}=1.5$), the current ring becomes unstable. The decrease rate of the external field is commonly referred to as the torus or decay index.
The range of values of the critical torus index is still under debate. For instance, studies of emerging flux tubes with an initial arch-like configuration, have reported higher values of the torus index \citep[$n=1.7-2$, ][]{Fan_etal2007,Fan_2010}.
\citet{An_Magara_2013}, in a flux emergence simulation of a straight, horizontal flux tube, reported values of torus index well above 2.
\citet{Demoulin_etal2010} have found that the torus index can vary depending on a range of parameters, such as the thickness of the current channel (the axial current of a twisted FR is a current channel). In cases of thin current channels, the index was found to be 1 (1.5) for straight (circular) channels. Also, the FR expansion during its eruption affects the critical value of torus instability. For thick channels, the critical index for circular and straight channels does not vary much. It takes values ranging from 1.1-1.3 (with expansion of the FR) and 1.2-1.5 (without expansion). \citet{Zuccarello_etal2015} investigated the role of line-tying effects on the eruption. They performed a series of simulations with a setup similar to \citet{Aulanier_etal2010}, but with different velocity drivers at the photosphere. They found that the critical index did not depend greatly on the pre-eruptive photospheric motions, and it was found to take values within the range of 1.1-1.3.
In our paper, we show the results of a simulation of magnetic flux emergence, which occurs dynamically from the solar interior to the outer solar atmosphere. We focus on the formation of magnetic FRs in the emerging flux region and their possible eruption. In particular, we show how reconnection leads to the formation of the FRs and how / why these FRs erupt. We find that the emergence of a single sub-photospheric magnetic flux tube can drive recurrent eruptions, which are produced due to the combined action of the torus instability and reconnection of the envelope fieldlines in a tether-cutting manner. We find that, at least in the first eruption, the fast ejection phase of the torus unstable FR is triggered by tether-cutting reconnection.
A geometrical extrapolation of the size of the eruptions showed that they can develop into large-scale structures, with a size comparable to small CMEs. The plasma density and temperature distributions reveal that the structure of the erupting fields consist of three main parts: a ``core'', a ``cavity'' and a ``front edge'', which is reminiscent of the ``three-part'' structure of CMEs.
We find that the plasma, at the close vicinity of the ``core'', is hotter and denser when the envelope fieldlines reconnect with themselves in a tether-cutting manner during the eruption. The same area appears to be cooler and less dense, when the envelope fieldlines reconnect with some other neighboring (e.g. sheared J-like) fieldlines.
In Sec.~\ref{sec:initial_conditions} we describe the initial conditions of our simulations. Sec.~\ref{sec:overview} is an overview of the dynamics occurring in our simulation leading to four recurrent eruptions. In Sec.~\ref{sec:eruptions_mechanims} we show the morphology of the magnetic field (before, during and after the eruptions) and the triggering mechanism of these eruptions. In Sec.~\ref{sec:temp_etc} we show the distribution of various properties of the erupting fields, such as density, temperature, velocity and current profiles.
In Sec.~\ref{sec:extrapolation} we perform an extrapolation of the size of the erupting structures. In Sec.~\ref{sec:conclusions} we summarize the results.
\section{Numerical Setup}
\label{sec:initial_conditions}
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{stratification.pdf}
\caption{Initial stratification of the background atmosphere in our simulation, in dimensionless units: temperature ($T$), density ($\rho$), magnetic pressure ($P_m$) and gas pressure ($P_g$).
}
\label{fig:stratification}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{4_plot.pdf}
\caption{Top: magnetic field line morphology and temperature distribution at the $xz$-midplane during the four eruptions of the simulation, at $t=73, 85, 116, 194$~min (for panels a, b, c and d respectively). Bottom: The same in a top-view. Two sets of fieldlines are shown: yellow (traced from the FR center) and green (traced from the envelope field). The horizontal $xy$-plane shows the distribution of $B_{z}$ at the photosphere (white:positive $B_{z}$, black:negative $B_{z}$, from -300~G to 300~G).}
\label{fig:4_plot}
\end{figure*}
To perform the simulations, we numerically solve the 3D time-dependent, resistive, compressible MHD equations in Cartesian geometry using the Lare3D code of \citet{Arber_etal2001}. The equations in dimensionless form are:
\begin{align}
&\frac{\partial \rho}{\partial t}+ \nabla \cdot (\rho \mathbf{v}) =0 ,\\
&\frac{\partial (\rho \mathbf{v})}{\partial t} = - \nabla \cdot (\rho \mathbf{v v}) + (\nabla \times \mathbf{B}) \times \mathbf{B} - \nabla P + \rho \mathbf{g} + \nabla \cdot \mathbf{S} , \\
&\frac{ \partial ( \rho \epsilon )}{\partial t} = - \nabla \cdot (\rho \epsilon \mathbf{v}) -P \nabla \cdot \mathbf{v}+ Q_\mathrm{joule}+ Q_\mathrm{visc}, \\
&\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v}\times \mathbf{B})+ \eta \nabla^2 \mathbf{B},\\
&\epsilon =\frac{P}{(\gamma -1)\rho},
\end{align}
where $\rho$, $\mathbf{v}$, $\mathbf{B}$ and $P$ are the density, velocity, magnetic field and gas pressure, respectively. Gravity is included. We assume a perfect gas with a ratio of specific heats $\gamma=5/3$. Viscous heating $Q_\mathrm{visc}$ and Joule dissipation $Q_\mathrm{joule}$ are also included.
We use explicit anomalous resistivity that increases linearly when the current density exceeds a critical value $J_c$:
\begin{equation}
\eta=\begin{cases}
\eta_{b}, & \text{if $\left|J\right|<J_{c}$}.\\
\eta_{b}+\eta_{0}\left(\frac{\left|J\right|}{J_{c}}-1\right), & \text{if $\left|J\right|>J_{c}$}.
\end{cases}
\end{equation}
where $\eta_b=0.01$ is the background resistivity, $J_c=0.005$ is the critical current density and $\eta_0=0.01$.
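The piecewise resistivity above reduces to a one-line function; the following is a minimal sketch in Python (function and variable names are ours, not taken from the Lare3D source):

```python
def anomalous_resistivity(J, eta_b=0.01, eta_0=0.01, J_c=0.005):
    """Background resistivity below J_c; linear increase above it."""
    if abs(J) < J_c:
        return eta_b
    return eta_b + eta_0 * (abs(J) / J_c - 1.0)
```

For example, at twice the critical current density the resistivity doubles to $\eta_b+\eta_0=0.02$.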
We use normalization based on the photospheric values of density $\rho_\mathrm{c}=1.67 \times 10^{-7}\ \mathrm{g}\ \mathrm{cm}^{-3}$, length $H_\mathrm{c}=180 \ \mathrm{km}$
and magnetic field strength $B_\mathrm{c}=300 \ \mathrm{G}$. From these, we get pressure $P_\mathrm{c}=7.16\times 10^3\ \mathrm{erg}\ \mathrm{cm}^{-3}$, temperature $T_\mathrm{c}=5100~\mathrm{K}$, velocity $v_\mathrm{0}=2.1\ \mathrm{km} \ \mathrm{s}^{-1}$ and time $t_\mathrm{0}=85.7\ \mathrm{s}$.
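These derived units are mutually consistent if one assumes the standard Gaussian-cgs relations (pressure unit $B_\mathrm{c}^2/4\pi$, velocity unit equal to the corresponding Alfv\'en speed, time unit $H_\mathrm{c}/v_0$); a quick numerical check of this assumption:

```python
import math

rho_c = 1.67e-7   # g cm^-3, photospheric density unit
H_c   = 180e5     # cm, length unit (180 km)
B_c   = 300.0     # G, magnetic field unit

P_c = B_c**2 / (4.0 * math.pi)                 # pressure unit, ~7.16e3 erg cm^-3
v_0 = B_c / math.sqrt(4.0 * math.pi * rho_c)   # velocity unit, ~2.1 km s^-1
t_0 = H_c / v_0                                # time unit, ~86 s
```

The small residuals against the quoted values come only from rounding in the paper's figures.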
The computational box has a size of $64.8\times64.8\times64.8 \ \mathrm{Mm}$ in the $x$, $y$, $z$ directions, in a $417\times417\times417$ grid. We assume periodic boundary conditions in the $y$ direction. Open boundary conditions are imposed at the two $yz$-plane boundaries and at the top of the numerical box.
The domain consists of an adiabatically stratified sub-photospheric layer at $-7.2\ \mathrm{Mm}\le z < 0 \ \mathrm{Mm}$, an isothermal photospheric-chromospheric layer at $0 \ \mathrm{Mm} \le z < 1.8 \ \mathrm{Mm} $, a transition region at $1.8 \ \mathrm{Mm} \le z < 3.2 \ \mathrm{Mm}$ and an isothermal corona at $3.2 \ \mathrm{Mm} \le z < 57.6 \ \mathrm{Mm}$.
We assume a field-free atmosphere in hydrostatic equilibrium. The initial distributions of temperature ($T$), density ($\rho$) and gas pressure ($P_\mathrm{g}$) are shown in Fig. \ref{fig:stratification}.
We place a straight, horizontal FR at $z=-2.1 \ \mathrm{Mm}$. The axis of the FR is oriented along the $y$-direction, so the transverse direction is along $x$ and height is in the $z$-direction.
The magnetic field of the FR is:
\begin{align}
B_{y} &=B_\mathrm{0} \exp(-r^2/R^2), \\
B_{\phi} &= \alpha r B_{y}
\end{align}
where $R=450$~km is a measure of the FR's radius, $r$ is the radial distance from the FR's axis and $\alpha= 0.4$ ($0.0023$~km$^{-1}$) is a measure of the twist per unit length.
The magnetic field's strength is $B_0=3150$~G.
Its magnetic pressure ($P_m$) is over-plotted in Fig.~\ref{fig:stratification}.
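Since $\alpha=0.4$ is the twist per unit length $H_\mathrm{c}=180$~km, the dimensional value $0.0023$~km$^{-1}$ follows directly; a short sketch of the field profile in dimensional units (function names are illustrative, not from the simulation code):

```python
import math

B0, R, H_c = 3150.0, 450.0, 180.0   # G, km, km
alpha = 0.4                          # dimensionless twist per unit length H_c

def B_axial(r):
    """Axial (y) component: Gaussian in the radial distance r (km)."""
    return B0 * math.exp(-r**2 / R**2)

def B_azimuthal(r):
    """Azimuthal component: uniform twist, B_phi = (alpha/H_c) * r * B_y."""
    return (alpha / H_c) * r * B_axial(r)

alpha_dim = alpha / H_c              # km^-1, ~0.0023
```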
Initially, the FR is in pressure equilibrium. The FR is destabilized by imposing a density deficit along its axis, similar to the work by \citet{Archontis_etal2004}:
\begin{equation}
\Delta \rho = \frac{p_\mathrm{t}(r)}{p(z)} \rho(z) \exp(-y^2/\lambda^2),
\label{eq:deficit}
\end{equation}
where $p$ is the external pressure and $p_\mathrm{t}$ is the total pressure within the FR. The parameter $\lambda$ is the length scale of the buoyant part of the FR. We use $\lambda=5$ ($0.9$~Mm).
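The deficit is largest at the tube's mid-length ($y=0$) and decays over the scale $\lambda$, so only the central section of the tube becomes buoyant; schematically (a toy evaluation of the equation above, with arbitrary pressure and density values):

```python
import math

def density_deficit(p_t, p, rho, y, lam):
    """Density deficit along the tube axis: maximal at y = 0."""
    return (p_t / p) * rho * math.exp(-y**2 / lam**2)
```

At $y=\lambda$ the deficit has dropped by a factor $e$ relative to its mid-tube value.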
\section{Recurrent Eruptions}
\subsection{Overall evolution: a brief overview}
\label{sec:overview}
In the following, we briefly describe the overall evolution of the emerging flux region during the running time of the simulation. At $t$=25~min, the crest of the sub-photospheric FR reaches the photosphere. It takes 10~min for the magnetic buoyancy instability criterion \citep[see][]{Acheson1979, Archontis_etal2004} to be satisfied and, thus, for the first magnetic flux elements to emerge at and above the solar surface. Eventually, the emerging magnetized plasma expands as it rises, due to the magnetic pressure inside the tube and the decreasing gas pressure of the background stratified atmosphere. Because of the expansion, the outermost expanding fieldlines adopt a fan-like configuration, forming an envelope field that surrounds all the upcoming magnetized plasma. As we discuss later in this paper, the characteristics and dynamical evolution of this envelope field play an important role towards understanding the eruptions coming from the emerging flux region.
At the photosphere, the emergence of the field forms a bipolar region with a strong PIL. Similarly to previous studies \citep[e.g.][]{Manchester_2001,Archontis_Torok2008, Leake_etal2013}, we find that the combined action of shearing, driven by the Lorentz force along the PIL, and reconnection of the sheared fieldlines, leads to the formation of a new magnetic FR, which eventually erupts towards outer space. In fact, this is an ongoing process, which leads to the formation and eruption of several FRs during the evolution of the system. Since these FRs are formed after the initial flux emergence at the photosphere, we will refer to them as the post-emergence FRs.
Fig.~\ref{fig:4_plot} shows the temperature distribution (vertical $xz$-midplane) and selected fieldlines at the times of four successive eruptions in our simulation (panels a-d). The temperature distribution delineates the (bubble-shaped) volume of the erupting field, which is filled by cool and hot plasma. In Sec.~\ref{sec:temp_etc}, we discuss the physical properties (e.g. temperature, density) of the erupting plasma in more detail. The fieldlines are drawn in order to show a first view of the shape of the envelope field (green) and the core of the erupting FRs (yellow). Notice the strongly azimuthal nature of the envelope field and the S-shaped configuration of the FR's fieldlines in the first eruption (Fig.~\ref{fig:4_plot}a, top view).
In the following eruptions, the orientation of the envelope field changes (in a counter-clockwise manner, Fig.~\ref{fig:4_plot}b-d, top view).
The morphology of the fieldlines during the four eruptions is discussed in detail, in Sec.~\ref{sec:eruptions_mechanims}. We find that all the eruptions are fully ejective (i.e. they exit the numerical domain from the top boundary).
To further describe the overall dynamical evolution of the eruptions, we calculate the total magnetic and kinetic energy (black and red line respectively, Fig.~\ref{fig:energy}) above the mid-photosphere ($z=1.37$~Mm). The first maximum of kinetic energy at $t=45.7$~min corresponds to the initial emergence of the field. Then, we find four local maxima of the magnetic and kinetic energies, which correspond to the four eruptions (e.g. kinetic energy peaks at $t=74.3,\, 85.7,\, 117.1,\, 194.3$~min, marked by vertical lines in the figure). As expected, the magnetic (kinetic) energy decreases (increases) after each eruption. Notice that this is less pronounced for the magnetic energy in the first eruption because of the continuous emergence of magnetic flux, which increases the total amount of magnetic energy above the mid-photosphere. Also, the local maximum of the kinetic energy at $t=205.7$~min corresponds mainly to the fast reconnection upflow underneath the erupting FR, which is about to exit the numerical domain.
In a similar way, we compute the self-helicity (Fig.~\ref{fig:helicity}). For a single twisted flux tube, the self-helicity is assumed to correspond to the twist within the flux tube.
For the calculation we used the method described in \citet{Moraitis_etal2014}. Overall, we find that the temporal evolution of the self-helicity is similar to that of the kinetic energy (e.g. they reach local maxima at the same time), which indicates that the erupted field is twisted. We also find that, between the eruptions, the self-helicity increases because of the gradual buildup of the twist of the post-emergence FRs.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Emag_Ekin_zmin55.pdf}
\caption{Magnetic (black) and kinetic (red) energy above the middle of the photospheric-chromospheric layer ($z=$1.37~Mm). Vertical black lines mark the kinetic energy maxima related to the four eruptions.}
\label{fig:energy}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{helicity_zmin55.pdf}
\caption{Self-helicity above the middle of the photospheric-chromospheric layer ($z=$1.37~Mm). Vertical black lines mark the kinetic energy maxima related to the four eruptions.}
\label{fig:helicity}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.70\textwidth]{j_plots_vertical_4_small.pdf}
\caption{Side and top views of the shape of selected fieldlines at $t$=56~min (a,c) and $t$=64~min (b,d). The horizontal slice shows the distribution of $B_{z}$ (in black and white, from -300~G to 300~G) at $z=0.7$~Mm. Yellow arrows represent the photospheric velocity field scaled by magnitude. Photospheric vorticity is shown by the red contours. Purple isosurface shows $\left| J/B \right|>0.3$.}
\label{fig:jplot}
\end{figure*}
\subsection{Flux rope formation and eruption mechanisms}
\label{sec:eruptions_mechanims}
\subsubsection{First eruption}
\label{sec:eruption1}
\begin{figure*}
\centering
\includegraphics[width=0.75\textwidth]{eruption1.pdf}
\caption{Field line morphology of the first eruption at $t$= 59 (a), 69 (b), 74 (c-f)~min. Green lines are traced from the top of the envelope field. Red lines are envelope fieldlines traced above the FR (c,d,e). Blue lines are J-shaped lines. Yellow lines are traced from the FR center. Purple isosurface is $\left| J/B \right|>0.3$. \textbf{(c-e)}: $t$=74~min eruption from side, front and top view. \textbf{(d)}: Arrows show the two concave-upwards segments of the W-like (red) fieldlines. \textbf{(f)}: Close up of (c). Cyan lines illustrate the post-reconnection arcade.}
\label{fig:eruption1}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{var_time_eruption1_units.pdf}
\caption{First eruption's key parameters with time.
\textbf{(a):} Height-time profile of FR center (black) and FR velocity (h-t derivative, blue). The inset shows a close-up of the height-time profile for $t=58-68$~min.
\textbf{(b):} Torus index measured at the FR center. The highlighted region shows an estimated range of values for the occurrence of a torus instability.
\textbf{(c):} Ratio of mean tension ($T_z$) over its initial value ($T_{z_0}$). Mean tension is measured from the apex of the FR to the top of the envelope field.
\textbf{(d):} Maximum $V_z$ of the reconnection outflow. Vertical lines mark the times of the possible onset of the torus instability (first line) and the tether-cutting reconnection of the envelope field (second line).}
\label{fig:vars_eruption1}
\end{figure}
The formation of the post-emergence FR occurs in the low atmosphere due to the combination of: a) shearing and converging motions along the PIL, b) rotation of the polarities of the emerging flux region and c) reconnection of the sheared and rotated fieldlines along the PIL.
Firstly, we would like to focus on the role of shearing along the PIL and the rotation of the polarities during the pre-eruptive phase. For this reason, we present a side view (Fig.~\ref{fig:jplot}a,b) and a top view (Fig.~\ref{fig:jplot}c,d) of a close-up of the emerging flux region. We plot the sheared arcade fieldlines (blue), the $\left| J/B \right|$ isosurface and the photospheric $B_z$ component of the magnetic field (black/white plane). On the photospheric plane, we also plot the planar component of the velocity field vector (yellow arrows) and the $\omega_z$ component of vorticity (red contours).
The visualization of the velocity field reveals: a) the shearing motion along the PIL (the yellow arrows are almost antiparallel on the two sides of the PIL) and b) the converging motions towards the PIL and close to the two main polarities, due to their rotation.
These motions (shear and rotation) are also apparent in the vertical component of the vorticity (red contours). Notice that $\omega_z$ is strong close to the two polarities, where the rotation is fast. Along and sideways of the PIL, the apparent ``vorticity'' is produced by the shearing flow rather than by actual rotation.
The footpoints of the sheared arcade fieldlines are rooted at both sides of the PIL (e.g. blue lines in Fig.~\ref{fig:jplot}a,c). Due to the shearing, their footpoints move towards the two polarities where they undergo rotation (e.g. see the footpoints of the blue fieldlines, which go through the red contours close to the two opposite polarities, Fig.~\ref{fig:jplot}b,d).
Due to rotation, the sheared fieldlines adopt the characteristic hook-shaped edge, forming J-like loops. The isosurface of high values of $\left| J/B\right|$ shows the formation of a strong current between the J-like loops. When the J-like fieldlines reconnect at the current sheet, new twisted fieldlines are formed, with an overall sigmoidal shape.
Figure~\ref{fig:eruption1} is a visualization of a series of selected fieldlines during the slow-rise (panels a and b) and the fast-rise (panels c-f) phase of the first eruption. In a similar manner to Fig.~\ref{fig:jplot}, Fig.~\ref{fig:eruption1}a shows the sheared fieldlines (blue) and the $\left| J/B \right|$ isosurface (purple).
Reconnection between the sheared fieldlines forms a new set of longer fieldlines (yellow), which connect the distant footpoints of the sheared fieldlines. Thus, the longer fieldlines produce a magnetic loop above the PIL. As time goes on (panel b), further reconnection between the J-like sheared fieldlines (blue) form another set of fieldlines, which wrap around the magnetic loop, producing the first (post-emergence) magnetic FR. The red and green fieldlines are not reconnected fieldlines. They have been traced from arbitrary heights above the yellow fieldlines. They belong to the emerging field, which has expanded into the corona. In that respect, they create an envelope field for the new magnetic FR.
Eventually, the envelope fieldlines just above the FR (e.g. red lines, Fig.~\ref{fig:eruption1}b) are stretched vertically and their lower segments come into contact and reconnect at the flare current sheet underneath the FR in a tether-cutting manner. Hereafter, for simplicity, we refer to the reconnection between envelope fieldlines as EE-TC reconnection (i.e. Envelope-Envelope Tether-Cutting reconnection). This reconnection occurs in a fast manner, triggering an explosive acceleration of the FR. During this process, the plasma temperature at the flare current sheet reaches values up to 6~MK. The rapid eruption is followed by a similar type of reconnection of the outermost fieldlines of the envelope field (green lines, Fig.~\ref{fig:eruption1}c). Fig.~\ref{fig:eruption1}c,d,e show the side, front and top view of the fieldline morphology at $t$=74~min. Fig.~\ref{fig:eruption1}f is a close-up of the reconnection site underneath the erupting FR.
Notice that, due to EE-TC reconnection, the red fieldlines are wrapped around the central region of the erupting field (yellow fieldlines). They make at least two turns around the axis, becoming part of the erupting FR. During the eruption, these fieldlines may reconnect more than once, and thus, have more than two full turns around the axis.
The close-up in Fig.~\ref{fig:eruption1}f shows that a post-reconnection arcade (light blue fieldlines) is formed below the flare current sheet. At the top of the arcade, the plasma is compressed and the temperature increases up to 10~MK.
The time evolution of the post-emergence FR can be followed by locating its axis at different times. To find the axis, we use a vertical 2D cut (at the middle of the FR, along its length), which is perpendicular to the fieldlines of the FR.
Then we locate the maximum of the normal component of the magnetic field ($B_n$) on this 2D plane for every snapshot.
We have also found that the location of the axis of the FR is almost identical to the location of maximum plasma density within the central region of the FR.
The latter can be used as an alternative tracking-method for the location of the FR's axis.
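The tracking-plus-differentiation procedure can be sketched as follows (a minimal illustration with NumPy; the axis is taken as the maximum of $|B_n|$ on the 2D cut, and the velocity as a finite-difference derivative of the height-time profile — names are ours, not from the analysis code):

```python
import numpy as np

def axis_height(Bn, z):
    """Height of the FR axis: location of max |B_n| on a 2D (x, z) cut."""
    _, iz = np.unravel_index(np.argmax(np.abs(Bn)), Bn.shape)
    return z[iz]

def rise_velocity(t, h):
    """FR velocity as the centred finite difference of the h-t profile."""
    return np.gradient(h, t)
```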
Using the above method(s), we are able to plot the height-time profile of the erupting FR (see Fig.~\ref{fig:vars_eruption1}a black line) and its derivative (blue line).
The h-t profile shows a phase of gradual upward motion (slow-rise phase), followed by an exponential rise (fast-rise phase). The terminal velocity before the FR exits the numerical box is 170~km~s$^{-1}\,$. During the eruptive phase, the FR is not highly twisted, nor does its axis develop the characteristic helical deformation that results from the kink instability. As a result, the kink instability does not seem to play a role in this case. To study whether the torus instability is at work, we follow the torus index calculation method of \citet{Fan_etal2007, Aulanier_etal2010}.
We first estimate the external (envelope) field by calculating the potential magnetic field ($B_p$).
This is done based on the calculations made to derive the helicity \citep[details in][]{Moraitis_etal2014}.
To solve the Laplace equation for the calculation of the potential field, it is assumed that both the magnetic field and the potential field have the same normal component at the boundaries (Neumann conditions). The lower $xy$-plane boundary is the photosphere at $z=0.51$~Mm and the rest of the boundaries are the sides of the numerical domain. Having calculated $B_p$, we then compute the torus index as $n=-z \partial \ln B_p / \partial z$.
Then, we find the value of the torus index at the position of the FR center by measuring the value of $n$ along the h-t profile. We plot the results in Fig.~\ref{fig:vars_eruption1}b.
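For a power-law external field $B_p \propto z^{-n}$ this logarithmic derivative returns $n$ exactly, which provides a simple sanity check of the computation (a sketch with synthetic data, not the simulation output):

```python
import numpy as np

def torus_index(z, Bp):
    """Torus (decay) index n = -z d(ln Bp)/dz = -d(ln Bp)/d(ln z)."""
    return -np.gradient(np.log(Bp), np.log(z))

z = np.linspace(5.0, 50.0, 200)   # heights, arbitrary units
Bp = z**(-1.81)                   # synthetic external field
n = torus_index(z, Bp)            # ~1.81 at every height
```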
According to the height-time profile (black line and inset in Fig.~\ref{fig:vars_eruption1}a), we find that the FR enters an exponential rise phase just after $t=61.4$~min (first vertical line). The torus index at this time is $n=1.81$, which lies within the estimated range of values for the occurrence of the torus instability (see Introduction and the highlighted region in Fig.~\ref{fig:vars_eruption1}b). Therefore, we anticipate that the FR in our simulation becomes torus unstable at $t\geq 61.4$~min.
We should highlight that the envelope fieldlines above the FR start to reconnect in a TC manner at $t\geq 67.9$~min (second vertical line, Fig.~\ref{fig:vars_eruption1}). As a result, the mean tension of the envelope fieldlines (Fig.~\ref{fig:vars_eruption1}c) decreases while the FR height and velocity increase dramatically (Fig.~\ref{fig:vars_eruption1}a). We also find that the fast reconnection jet ($V_z$ up to $550$~km~s$^{-1}\,$), which is ejected upward from the flare current sheet, transfers momentum to the FR and contributes to its acceleration (Fig.~\ref{fig:vars_eruption1}d).
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{eruption2.pdf}
\caption{ Field line morphology of the second eruption at $t$=74, 79, 83, 85, 87~min. Green lines are traced from the top of the post-reconnection arcade field of the first eruption. Red lines are envelope fieldlines traced above the FR (b,c,d,e,f). Blue lines are J-shaped lines. Yellow lines are traced from the FR center. Purple isosurface is $\left| J/B \right|>0.3$. Gray lines are fieldlines from the first eruption (now acting as external field).
\textbf{(d):} Closeup of (c) showing the EJ-TC reconnection. \textbf{(f)}: Arrows show the two hook-shaped segments of the fieldlines (red lines). }
\label{fig:eruption2}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{var_time_eruption2_units.pdf}
\caption{Second eruption's key parameters with time.
\textbf{(a):} Height-time profile of FR center (black) and FR velocity (h-t derivative, blue) \textbf{(b):} Torus index measured along the height-time profile. The highlighted region shows an estimated range of values for the occurrence of a torus instability. \textbf{(c):} Maximum $\left| J/B \right|$ along the CS. \textbf{(d):} Maximum reconnection outflow. \textbf{(e):} Ratio of mean tension ($T_z$) over its initial value ($T_{z_0}$).
Vertical lines mark the times of the possible onset of the torus instability (first line) and the EJ-TC reconnection (second line).
}
\label{fig:vars_eruption2}
\end{figure}
\subsubsection{Second eruption}
\label{sec:eruption2}
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{eruption3.pdf}
\caption{ Field line morphology of the third eruption at $t$=102,106, 114, 117~min. \textbf{(a:)} J-like loops (blue) and sea-serpent fieldlines (dark green).
\textbf{(b,c,d):} similar to Fig.~\ref{fig:eruption1} and Fig.~\ref{fig:eruption2}.}
\label{fig:eruption3}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.90\columnwidth]{var_time_eruption3_units.pdf}
\caption{Third eruption's key parameters with time.
\textbf{(a):} Height-time profile of FR center (black) and FR velocity (h-t derivative, blue) \textbf{(b):} Torus index measured along the height-time profile. The highlighted region shows an estimated range of values for the occurrence of a torus instability. \textbf{(c):} Maximum $\left| J/B \right|$ along the CS. \textbf{(d):} Maximum reconnection outflow. \textbf{(e):} Ratio of mean tension ($T_z$) over its initial value ($T_{z_0}$). Vertical lines mark the times of the possible onset of the torus instability (first line) and the EJ-TC reconnection (second line).}
\label{fig:vars_eruption3}
\end{figure}
In the following, we focus on the dynamics of the second eruption. Fig.~\ref{fig:eruption2}a and Fig.~\ref{fig:eruption2}b are close-ups of the area underneath the first erupting FR at $t=74$~min and $t=79$~min respectively. In a similar manner to the formation of the first FR, the second FR (yellow fieldlines, Fig.~\ref{fig:eruption2}b) is formed due to reconnection between J-loops (blue fieldlines). The post-reconnection arcade (green and red fieldlines in Fig.~\ref{fig:eruption2}a, Fig.~\ref{fig:eruption2}b), which was formed after the first eruption (cyan lines, Fig.~\ref{fig:eruption1}f), overlies the yellow fieldlines and, thus, it acts as an envelope field for the second FR. Above and around this envelope field, there are fieldlines (grey) which belong to the first eruptive flux system but they have not exited the numerical domain yet. Hereafter, we refer to this field as the external, pre-existing field.
As the second post-emergence FR moves upwards, the envelope fieldlines are stretched vertically and their footpoints move towards the current sheet (pink isosurface). However, they do not reconnect in an EE-TC manner. Instead, the lower segments of the envelope fieldlines reconnect with the J-like loops. Hereafter, for simplicity, we refer to this as EJ-TC reconnection (i.e. Envelope-J Tether Cutting reconnection). This difference is due to the different orientation of the envelope fieldlines. As we have previously shown (green lines, Fig.~\ref{fig:4_plot}b top view), the envelope fieldlines in the second eruption do not have a strongly azimuthal nature. They are mainly oriented along the $y$-direction. Therefore, their lower segments come closer to the J-like loops and reconnect with them (e.g. bottom right red lines, Fig.~\ref{fig:eruption2}c).
To better illustrate the EJ-TC reconnection, in Fig.~\ref{fig:eruption2}d we show a close-up of this region. Here, the envelope fieldlines (green) reconnect with the J-like loops (blue) to form the hook-shaped fieldlines (red).
Eventually, this process occurs at both footpoints of the envelope fieldlines, forming new fieldlines such as the red ones in Fig.~\ref{fig:eruption2}e.
Notice that these new reconnected fieldlines are winding around the footpoints of the rising FR and, therefore, they become part of the erupting field.
In general, the EJ-TC reconnection removes flux from the envelope field and adds flux to the FR. Also, the downward tension of the envelope field decreases during EJ-TC reconnection.
Before the FR exits the box (Fig.~\ref{fig:eruption2}f) most of the envelope field has been subject to EJ-TC reconnection.
We should highlight that we do not find evidence of EE-TC reconnection during the second eruption.
EJ-TC and EE-TC reconnection produce fieldlines with a different shape. In the first eruption, the EE-TC reconnected fieldlines (red, Fig.~\ref{fig:eruption1}c-e) are ejected towards the FR center, adopting a ``W-shaped'' configuration. The concave-upward segments of the W-like fieldlines (arrows, Fig.~\ref{fig:eruption1}d) bring dense plasma from the low atmosphere into the central region of the FR. In the second eruption, the EJ-TC reconnected fieldlines have hook-like segments at their footpoints (arrows, Fig.~\ref{fig:eruption2}f). In this case, the tension of the reconnected fieldlines ejects hot and dense plasma sideways (mainly along the $y$-direction) and not towards the center of the FR. Thus, due to the different way that the envelope fieldlines reconnect, the temperature and density distributions within the erupting field show profound differences between the first and the following eruptions. This is discussed in more detail in Sec.~\ref{sec:temp_etc}.
We plot now the h-t profile and its derivative for the second FR (black and blue lines, Fig.~\ref{fig:vars_eruption2}a).
To calculate the torus index, we consider the potential magnetic field $B_p$.
As discussed earlier, the calculation of the potential field takes into account all the boundaries of the numerical domain.
This means that the potential field solution will not approximate the envelope field everywhere. It will approximate the envelope field up to a height where the solution of the Laplace equation will be strongly influenced by the lower boundary (photosphere).
Above that height, the potential solution will be influenced by the upper boundary and will describe the external field.
So, we examine the values of the potential field along height, expecting them not to change drastically in the region of the envelope field.
We find that the potential field solution does not describe the envelope field accurately above a certain height (which differs between snapshots), while below that height it describes the envelope field well. This transition happens around $z\approx$15-20~Mm. Therefore, when we calculate the torus index, we disregard its values whenever the FR is located above $z$=15~Mm.
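The torus (decay) index used here, $n = -\,\mathrm{d}\ln B_p/\mathrm{d}\ln z$, can be estimated from a sampled potential-field profile by finite differences. A minimal sketch follows; the power-law $B_p(z)$ below is synthetic and for illustration only, whereas in practice the input would be the extrapolated potential field evaluated along the h-t profile:

```python
import numpy as np

# Decay (torus) index n(z) = -(z / B_p) dB_p/dz = -d ln B_p / d ln z,
# estimated by central finite differences on a sampled profile.
def torus_index(z, Bp):
    return -np.gradient(np.log(Bp), np.log(z))

z = np.linspace(5.0, 15.0, 200)   # heights (Mm) where the potential field is trusted
Bp = 40.0 * z**-1.5               # synthetic B_p(z), arbitrary units
n = torus_index(z, Bp)            # constant n = 1.5 for this power law
```

For this synthetic $B_p \propto z^{-1.5}$ profile the index is constant at $n=1.5$; for the simulation data, $n$ varies with height and is read off at the FR position.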
According to the h-t profile, we find that the FR enters the exponential rise phase at $t=79.3$~min (first vertical line, Fig.~\ref{fig:vars_eruption2}b). The torus index at this time is $n=1.22$ and lies in the estimated range of values for the occurrence of a torus instability.
During this phase, the maximum $\left|J/B\right|$ does not increase dramatically (Fig.~\ref{fig:vars_eruption2}c). The current sheet becomes more elongated and the reconnection outflow becomes more enhanced after $t=81$~min (Fig.~\ref{fig:vars_eruption2}d).
When the EJ-TC reconnection starts, we find that the tension above the FR starts to decrease drastically (second vertical line, Fig.~\ref{fig:vars_eruption2}e). Also, after the initiation of the EJ-TC reconnection, the current density of the current sheet becomes more enhanced.
Due to the above, one possible scenario is that the torus instability is responsible for the onset of the exponential phase of the h-t profile, and the EJ-TC reconnection occurs {\it during} the rapid rise of the FR. Another possible scenario is that both processes are at work during the eruptive phase and it is the interplay between them that leads to the fast eruption of the FR.
In terms of the energy, we have found that the kinetic energy of the second eruption is larger than that of the first eruption (red line, Fig.~\ref{fig:energy}).
This difference is not necessarily associated with the different TC reconnection processes. For instance, the downward magnetic tension of the envelope field above the second FR is less. As a result, the upward motion of the FR is faster. Also, the photospheric unsigned magnetic flux increases between the two eruptions due to the continuous emergence. Thus, there is more available flux at the photosphere for the second eruption. Similarly, the magnetic energy in the corona (black line, Fig.~\ref{fig:energy}) increases between the two eruptions, indicating that more energy is available for the second eruption.
\subsubsection{Third and fourth eruption}
After the second FR exits the numerical domain, the overall fieldline morphology is similar to the first post-eruption phase. There is an external field, a post-reconnection arcade that acts as an envelope field and also the J-like loops.
At the photosphere-chromosphere, we also find sea serpent fieldlines (dark green lines, Fig.~\ref{fig:eruption3}a), similar to the previous work by \citet{Fan_2009, Archontis_etal2013}.
Most of these fieldlines originate from the partial emergence of the sub-photospheric field at different locations along the PIL.
These fieldlines reconnect at many sites along the PIL during the early FR formation. Still, the major role in the FR formation is played by the reconnection of J-like loops (blue and yellow lines, Fig.~\ref{fig:eruption3}b).
In comparison to the second eruption, we find that the morphology of the external field is different. The second eruption (with a kinetic energy peak at $t$=87~min) happened right after the first eruption (with a kinetic energy peak at $t$=72~min). Thus, the external field that the second eruption had to push through was more horizontal (Fig.~\ref{fig:eruption2}c, gray lines are almost parallel to the photosphere). The third eruption (during which the kinetic energy takes its maximum value at $t$=119~min) happens after the second FR exits the numerical box. As a result, the external field is more vertical with respect to the photosphere and, consequently, it has a very small downward tension (gray lines, Fig.~\ref{fig:eruption3}b).
EJ-TC reconnection occurs also during the third eruption (Fig.~\ref{fig:eruption3}c). However, we find that only some of the envelope fieldlines reconnect in both their footpoints (Fig.~\ref{fig:eruption3}c,d), before they exit the numerical domain. The implication of this difference will be discussed in Sec.~\ref{sec:temp_etc}.
We do not find evidence of EE-TC reconnection during the third eruption.
Regarding the torus instability, we should mention that at $t\approx100-104$~min, the FR is located very close to the photosphere, at heights $z\approx1.5-3$~Mm. We find that the value of $B_p$ (and hence $n$) at these heights depends on the choice of the lower boundary (i.e. the exact height of the photospheric layer, which is used to calculate the potential field). Thus, the values of the torus index for heights up to $z\approx3$~Mm differ between these solutions. Above that height, all the solutions converge. We conjecture that the main reason for the change in the values of $B_p$ and $n$ is the build-up of a complex external field after each eruption.
However, from the height-time profile (Fig.~\ref{fig:vars_eruption3}a), we find that for $t\simeq 104$~min the FR is located just above $z\approx3$~Mm, where the value of the torus index is well defined. Also, we find that $n\geq 1$ for $t > 104$~min (first vertical line, Fig.~\ref{fig:vars_eruption3}b). This is an indication (although not conclusive) that the torus instability might be associated with the onset of the eruption.
Notice that during the time period $t\approx104-110$~min, there is no direct evidence that effective reconnection (e.g. EJ-TC reconnection) is responsible for the driving of the eruption. Fig.~\ref{fig:vars_eruption3}c, d, e show that the reconnection upflow underneath the flux rope undergoes only a small increase (due to reconnection between J-like fieldlines) and $J/B$ experiences a limited drop. The tension of the envelope fieldlines decreases mainly because of the 3D-expansion and not because of vigorous EJ-TC reconnection. Therefore, due to the above limitations, we cannot reach a definite conclusion about the exact contribution of reconnection at the onset of the eruption in this initial phase.
In contrast, for $t > 110$~min, there is a clear correlation between the increase of the reconnection outflow and $J/B$ and the decrease of the tension. This is due to effective EJ-TC reconnection, which releases the tension of the envelope field and boosts the acceleration of the erupting field. A preliminary comparison between the second and third eruptions shows that the maximum values of the current and reconnection outflow are similar, while the length of the CS and the extent of the jet are much smaller. The fourth eruption is very similar to the third eruption.
\subsection{Temperature, density, velocity and current}
\label{sec:temp_etc}
\begin{figure*}
\centering
\includegraphics[width=0.93\textwidth]{temperature_rho_j_vz.pdf}
\caption{Density (first row), temperature (second row), $V_z$ (third row) and $\sqrt{\left| J/B \right|}$ (fourth row) measured at the $xz$-midplane, for the first (first column), second (second column), third (third column) and fourth (fourth column) eruption. Asterisks mark the location of the FR center.}
\label{fig:temp_etc}
\end{figure*}
There are some remarkable similarities and differences between the four eruptions, as illustrated in
Fig.~\ref{fig:temp_etc}. All panels in this figure are 2D-cuts, at the vertical $xz$-midplane, at times just before the erupting structures exit the numerical domain.
The density distribution (first row) shows that all eruptions adopt an overall bubble-like configuration, due to the expansion of the magnetic field as it rises into larger atmospheric heights. We notice that the erupting field consists of three main features, which are common in all eruptions. For simplicity, we mark them only in the first eruption (panel a1). These features are: (a) the inner-most part of the bubble, which is located at and around the center of the erupting field (marked by asterisk), filled with dense plasma - we refer to this part as the ``core'' of the eruption, (b) the low density area that immediately surrounds the ``core'' - we refer to this as the ``cavity'' and it is the result of the cool adiabatic expansion of the rising magnetic field and (c) the ``front'' of the erupting structure, which is a thin layer of dense material that envelops the ``cavity'' and it demarcates the outskirts of the erupting field. To some extent, the shape of the eruptions in our simulations is reminiscent of the ``three-part'' structure of the observed small-scale prominence eruptions \citep[mini or micro CMEs e.g.][]{Innes_etal2010b,Raouafi_etal2010,Hong_etal2011} and/or CMEs \citep[e.g.][]{Reeves_etal2015}. Because of this, hereafter, we refer to the simulated eruptions as CME-like eruptions.
Now, by looking at the temperature distribution (second row), we notice that there is a mixture of cold and hot plasma within the erupting field (in all cases, b1-b4). In fact, in the first eruption, there is a noticeable column of hot plasma, which extends vertically from $x=0$~Mm, $z=10$~Mm up to $z=40$~Mm. Thus, in this case the ``core'' of the erupting field appears to be hot, with a temperature of about 8~MK. On the contrary, the ``core'' of the following eruptions is cool (5,000-20,000~K) and dense, but is surrounded by hot (0.5-2~MK) plasma. In all cases, the origin of the hot plasma is the reconnection process occurring at the flare current sheet underneath the erupting field.
The distribution of $\sqrt{\left| J/B \right|}$ is shown at the fourth row in Fig.~\ref{fig:temp_etc} (d1-d4).
The flare current sheet is the vertical structure with high values of $\sqrt{\left| J/B \right|}$, and is located at around $x=0$~Mm and between $z=12$~Mm and $z=25$~Mm. The velocity distribution (panels c1-c4) shows that a bi-directional flow is emitted from the flare current sheet. This flow is a fast reconnection jet, which transfers the hot plasma upwards (into the erupting field) and downwards (to the flare arcade located below $z=10$~Mm).
Thus, a marked difference between the first and the following eruptions is that, in the first eruption, the upward reconnection jet shoots the hot plasma vertically into the ``core'' of the erupting field, while in the following eruptions, the upward jet only reaches lower heights, arriving below the ``core''.
In the latter cases, the jet is diverted sideways at heights below the center of the erupting FR, adopting a Y-shaped configuration (e.g. see c2-c4). In the first eruption, the EE-TC reconnection creates fieldlines which have a highly bent concave-upward shape (i.e. towards the central region of the erupting bubble, see red lines in Fig.~\ref{fig:eruption1}d). It is the strong (upward) tension of these fieldlines that ejects the hot plasma to large heights and into the ``core'' of the field. In the following eruptions, the tension force that accelerates the hot jet upflow is weaker. This is because the reconnected fieldlines of the jet are the result of reconnection between Js (e.g. see blue lines in Fig.~\ref{fig:eruption2}e), which are not as vertically stretched as the envelope fieldlines during the EE-TC reconnection. Thus, the upward tension of the reconnected fieldlines at the flare current sheet is weaker. Therefore, the hot reconnection jet is not strong enough to reach large atmospheric heights and to heat the central region of the erupting field. When it reaches close to the heavy core of the erupting FR, it is diverted sideways (where the pressure is lower) and the embedded hot plasma runs along the reconnected fieldlines.
In general, the temperature distribution within the overall volume of the erupting field correlates well with the distribution of $\sqrt{\left| J/B \right|}$, which implies that heating occurs mainly at sites with strong currents. As we mentioned above, one such area is the flare current sheet underneath the erupting FR. Another example is the heating that occurs at a thin current layer formed between the ``core'' and the ``cavity'' of the erupting bubble (e.g., see panel b1). This current layer is formed after the onset of the fast-rise phase of the erupting FR. The uppermost fieldlines of the erupting core are moving upwards with a higher speed than the fieldlines within the ``cavity'', which rise due to the expansion of the emerging field. Thus, at the interface between the two sets of fieldlines, the plasma is compressed and it is heated locally (up to 1~MK). This process occurs in all cases (panels, d1-d4), although it is more clearly visible in the first eruption (panel b1). The reconfiguration of the field after the first eruption leads to a more complex fieldline morphology, distribution of $\sqrt{\left| J/B \right|}$ and heating within the rising magnetized volume (panels b1-b4).
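The diagnostic $\sqrt{\left|J/B\right|}$ shown in Fig.~\ref{fig:temp_etc} can be illustrated on a 2D slice with a short finite-difference sketch, using $\vec{J}=\nabla\times\vec{B}/\mu_0$. The field below is synthetic and for illustration only; in the simulation the full 3D current is evaluated on the numerical grid:

```python
import numpy as np

# |J|/|B| on an (x, z) slice with J = curl(B)/mu0. For a slice field
# B = (Bx(x,z), 0, Bz(x,z)), only the y-component of the current survives:
# Jy = dBx/dz - dBz/dx (divided by mu0). Synthetic field for illustration.
mu0 = 4.0e-7 * np.pi
x = np.linspace(-1.0, 1.0, 128)
z = np.linspace(0.0, 2.0, 128)
X, Z = np.meshgrid(x, z, indexing="ij")
Bx = np.cos(np.pi * Z)
Bz = np.sin(np.pi * X)
Jy = (np.gradient(Bx, z, axis=1) - np.gradient(Bz, x, axis=0)) / mu0
JB = np.sqrt(np.abs(Jy) / np.maximum(np.hypot(Bx, Bz), 1e-12))
```

High values of `JB` pick out thin current layers such as the flare current sheet, which is why this quantity is used to visualize current concentrations in the figure.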
\subsection{Geometrical extrapolation}
\label{sec:extrapolation}
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{fig_extrapolation.pdf}
\caption{ \textbf{(a):} Geometrical extrapolation based on the position of the flanks of the magnetic volume during its eruption. \textbf{(b):} Same as (a) but shown in the 3D volume of the numerical domain. \textbf{(c):} The extrapolated size of the erupting volume at 0.6~R$_\odot$ above the solar surface. The black box has the physical size of the simulation box.}
\label{fig:extrapolation}
\end{figure*}
Coronagraphic observations of CMEs show that they usually exhibit a constant angular width (i.e. the flanks of the erupting structure move upward, along two approximately straight lines) \citep[e.g.][]{Moore_etal2007}.
Based on that, we perform a geometrical extrapolation of the size of the first eruption.
For this, we find the location of the flanks of the structure at consecutive times and fit a straight line. Firstly, we mark the location of the flank of the erupting structure at a time $t_i$, when the flank is very distinguishable (diamond on the left flank, Fig.~\ref{fig:extrapolation}a). Next, we select the flank location prior to $t_i$ (marked with $t_{i-6},t_{i-5}$ etc.) and after (marked with $t_{i+1}$), and fit a straight line through these points (blue line). We then do the same for the other flank. The point where they intersect is approximately the height of the initiation. These extrapolated lines are also plotted in the 3D volume of our numerical box for better visualization (Fig.~\ref{fig:extrapolation}b).
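The flank-line fit and intersection described above can be sketched numerically. The flank coordinates below are synthetic, for illustration only; the real input would be the tracked flank positions at the times $t_{i-6},\dots,t_{i+1}$:

```python
import numpy as np

# Fit a straight line x(z) to the tracked positions of each flank and
# intersect the two lines to estimate the initiation height.
# Synthetic flank coordinates (Mm), for illustration only.
z_track = np.array([10.0, 20.0, 30.0, 40.0])   # flank heights
x_left = np.array([-4.0, -9.0, -14.0, -19.0])  # left-flank x positions
x_right = np.array([4.0, 9.0, 14.0, 19.0])     # right-flank x positions

ml, cl = np.polyfit(z_track, x_left, 1)        # x = m*z + c for each flank
mr, cr = np.polyfit(z_track, x_right, 1)
z_init = (cr - cl) / (ml - mr)                 # height where the lines cross
```

Extrapolating the same two fitted lines upward, instead of downward, gives the estimated width of the erupting volume at any chosen height, e.g. 0.6~R$_\odot$.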
After we find these lines, we extrapolate them to 0.6~$R_\odot$. For size comparison, we plot them on the solar limb (blue lines, Fig.~\ref{fig:extrapolation}c). The box at the bottom of the extrapolations shows the size of our numerical box. It is clear that although the eruptions originate from a small-scale region, they grow in size, and it is not unlikely that they may evolve into considerably larger-scale events.
We should highlight that the above method is a first order approximation regarding the spatial evolution of the first eruption, assuming that the erupting field will continue to rise and expand even after it leaves the numerical domain.
The maximum value of the magnetic energy in the simulated eruptions is $1\times10^{28}$~erg and the kinetic energy varies in the range $3\times10^{26}-1.5\times10^{27}$~erg. Based on the size of our numerical box and the aforementioned values of energies, the eruptions in this simulation could describe the formation and ejection of small scale CME-like events. Most CMEs have typical values of kinetic energies around $10^{28}-10^{30}$~erg \citep{Vourlidas_etal2010}.
\section{Summary and discussion}
\label{sec:conclusions}
In this work we studied the formation and triggering of recurrent eruptions in an emerging flux region using numerical simulations. The initial emergence of the sub-photospheric flux tube formed a bipolar region at the photosphere. The combination of shearing motions and the rotation of the two opposite polarities formed J-like fieldlines, which reconnected to create a FR that eventually erupted ejectively towards the outer solar atmosphere.
In total, four successive eruptions occurred in the simulation.
We found that the strength of the magnetic envelope field above the eruptive FRs dropped fast enough so that the FRs became torus unstable.
The initial slow-rise phase of the first FR started due to the torus instability. The rising FR pushed the envelope field upwards. The fieldlines of the envelope field reconnected in a tether-cutting manner and, as a result, the tension of the overlying field dropped in an exponential way.
At that time, the FR entered the fast-rise phase. The fieldlines formed due to the reconnection of Js wound about once around the axis of the FR, while the fieldlines resulting from the tether-cutting of the envelope field wound at least twice around it. The reconnected fieldlines that were released downwards formed a post-reconnection arcade.
After the eruption of the first FR, reconnection of J-like fieldlines continued to occur and another FR was formed, which eventually erupted. This process of FR formation occurred two more times in a similar manner. In all cases, the post-reconnection arcade acted as a new ``envelope'' field for the next FR. We found that the envelope field was decaying fast enough to favor torus instability. The envelope fields between the second, third and fourth eruption differed mostly at the height where the FRs became torus unstable ($n\approx1-2$).
However, we should highlight that our calculation of the torus index is approximate because the envelope field evolves dynamically (e.g. it undergoes expansion). The derivation of the torus instability criteria in previous analytical studies took into account perturbations of a static configuration. Thus, a more accurate estimate of the torus index in our simulations would be to let the envelope field relax at each time step and then calculate $n$. This can only be done if the driver of the system could be stopped, letting the overall magnetic flux system reach an equilibrium \citep[e.g.][]{Zuccarello_etal2015}.
However, in our dynamical simulations, there is a certain amount of available magnetic flux, which can emerge to the photosphere and above. The driver of the evolution of the system (i.e. magnetic flux emergence) cannot be stopped before the available magnetic flux is exhausted. Therefore, on this basis, we study the {\it continuous} evolution of the system.
Still, in our experiments, the magnitude of the current inside the envelope field is at least ten times lower than the one in the FR core, so we expect that the envelope field is not far away from the potential state.
The removal of the downward tension of the envelope field is important for the erupting FRs. In the first eruption, the removal of the envelope tension occurred through the reconnection of the envelope field with other envelope fieldlines (EE-TC reconnection). In the other three eruptions, the envelope field reconnected with J-like fieldlines (EJ-TC reconnection). The differences between EE-TC reconnection and the EJ-TC reconnection were found to be significant for the density and temperature distribution within the erupting structure. After the EE-TC reconnection, the reconnected fieldlines underneath the erupting FR adopted a W-like shape, with two upward concave regions (red lines Fig.~\ref{fig:eruption1}d, see arrows). After the EJ-TC reconnection, the lower segments of the reconnected fieldlines adopted a hook-like shape (red lines, Fig.~\ref{fig:eruption2}f, see arrows).
In the case of EE-TC reconnection, the upward tension of the reconnected fieldlines (as illustrated by the upward-stretched segments in the middle of the W-shaped fieldlines) pushed hot plasma from the flare current sheet into the erupting field via a hot and fast collimated jet. Due to this process, the temperature of the central region of the erupting FR changed during the eruption, from low to high values (b1, Fig.~\ref{fig:temp_etc}).
In the case of EJ-TC reconnection, the plasma transfer from the flare current sheet to the erupting field was mainly driven by the reconnection of Js, and therefore the resulting reconnection jet was not as collimated as in the EE-TC reconnection. In the second eruption, this post-reconnection hot jet collided with the FR and became diverted into two side jets (a2 and c2, Fig.~\ref{fig:temp_etc}). In the third and fourth eruption, the jets were not fast enough to enter the region of the erupting core of the field (a3 and c3, a4 and c4, Fig.~\ref{fig:temp_etc}).
Thus, the study of the temperature distribution revealed that, due to EE-TC reconnection, the erupting field develops a ``3-part'' structure consisting of a hot front ``edge'', a cold ``cavity'', and a hot and dense ``core''. In the following eruptions, the temperature of the plasma within the central region of the FRs remained low. Therefore, we suggest that observations of erupting FRs which are heated, e.g. from $10^{3}$~K to $10^{6}$~K, \textit{during} their eruptive phase might indicate that EE-TC reconnection is at work. We should mention that heat conduction is not included in our simulation.
Therefore, the exact value of the temperature within the erupting field may change if heat conduction were to be included in the numerical experiment.
Overall, we report that the physical mechanism behind the formation of recurrent ejective eruptions in our flux emergence simulation is a combination of torus unstable FRs and the onset of tether-cutting of the overlying field through a flare current sheet.
Both the EE-TC reconnection and the EJ-TC reconnection were found to remove the downward tension of the overlying field and thus to assist the eruptions. In the first eruption, it is likely that torus instability occurs first, and the rapid exponential rise phase of the erupting FR comes after the EE-TC reconnection. For the other eruptions, where the structure of the magnetic field above the FR has a more intricate morphology, it is difficult to conclude which process is responsible for the onset of the various phases of the eruptions.
Comparing our results with previous studies, the formation of all the FRs in our simulation is due to the reconnection of sheared J-like fieldlines, in a similar manner to earlier simulations \citep[e.g. ][]{Aulanier_etal2010,Archontis_etal2012,Leake_etal2013,Leake_etal2014}.
It is also interesting to note that the velocity and current profile of our first eruption (Fig.~\ref{fig:temp_etc}c1, d1) are very similar morphologically to the ones produced from the flare reconnection in the breakout simulation of \citet{Karpen_etal2012}, who used a (different) 2.5D adaptive grid code.
Such similarities indicate that the resulting morphologies might be generic and indicative of the EE-TC reconnection.
\citet{Moreno-Insertis_etal2013} performed a flux emergence simulation of a highly twisted flux tube into a magnetized atmosphere and found recurrent eruptions. In comparison to our simulation, the sub-photospheric flux tube in the work of \citet{Moreno-Insertis_etal2013} had a higher magnetic field strength ($B_{0}=3.8$~kG), a greater length of the buoyant part of the flux tube ($\lambda$=20 in comparison to our $\lambda$=5) and was located closer to the photosphere ($z=-1.7$~Mm). In their work, their first FR is formed, similarly to our simulation, by the reconnection of sheared-arcade fieldlines. The higher $\lambda$ leads to the formation of a more elongated emerging FR and a longer sigmoid. The eruption mechanism, though, is very different. It involves reconnection between the sheared-arcade fieldlines and the open fieldlines of the ambient field. Also, it involves reconnection of the sheared arcade with a magnetic system produced from the reconnection of the ambient field with the initial emerging envelope field. Their second and third eruptions are off-centered eruptions of segments of the initial flux tube, that eventually become confined by the overlying field. In our case, the flux tube axis emerges only up to 2-3 pressure scale heights above the photosphere ($z$=0) and the erupting FRs are all formed due to reconnection of J-loops.
\citet{Murphy_etal2011} discussed possible heating mechanisms for the dynamic heating of CMEs, one of which is heating from the CME flare current sheet. Taking into account the results of previous studies \citep[e.g. ][]{Lin_etal2004}, they reported that the reconnection hot upward jets from the flare current sheet could reach the cool central region of the erupting FR and heat it. In fact, this leads to some mixing of hot and cool plasma within the central erupting volume.
From the two different tether-cutting reconnections found in our simulation, only the EE-TC reconnection allows effective transfer of hot plasma from the flare current sheet into the FR central region, by the reconnection outflow.
This might account for a process similar to the afore-mentioned mixing of hot and cold plasma, as suggested by \citet{Lin_etal2004}.
On the other hand, during EJ-TC reconnection, hot plasma is mainly found at the periphery of the central region of the FR.
The physical size of our simulated emerging flux region was 23.4~Mm, and the size of the FRs was up to 64.8~Mm (the length of the $y$-axis). The height of our numerical box was 57.6~Mm. The kinetic energies of the eruptions were $3\times10^{26}-1.5\times10^{27}$~erg and the magnetic energies around $1\times10^{28}$~erg.
These values suggest that our numerical experiment describes an emerging flux region which hosts relatively low energy eruptions in comparison to CMEs. Based on the sizes and the energetics, these eruptions can describe the formation and eruption of small scale eruptive events. For instance, such an eruption (in terms of physical size, though not magnetic configuration) was reported by \citet{Raouafi_etal2010,Reeves_etal2015}. Still, the results on the plasma transfer for the different flare reconnections (EE-TC reconnection and EJ-TC reconnection) should be scale invariant.
Having reproduced a CME-like configuration (a1 and b1, Fig.~\ref{fig:temp_etc}), we extrapolated the expansion of the flanks of the erupting ``bubble'' and estimated its size at 0.6~R$_\odot$. We found that these eruptions have the potential to become comparable to small-sized CMEs (Fig.~\ref{fig:extrapolation}c), but with one order of magnitude lower kinetic energy.
We aim to investigate the parameters that would increase the energies of the produced eruptions. For this, in our next paper, we will present the results of a parametric study on the magnetic field strength of the sub-photospheric flux tube, focusing on the differences in energetics, physical size and recurrence of the eruptions.
\acknowledgments
The Authors would like to thank the Referee for the constructive comments.
This project has received funding from the Science and Technology Facilities Council (UK) through the consolidated grant ST/N000609/1.
This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program ``Education and Lifelong Learning'' of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales Investing in knowledge society through the European Social Fund.
The authors acknowledge support by the Royal Society.
This work was supported by computational time granted from the Greek Research \& Technology Network (GRNET) in the National HPC facility - ARIS.
This work used the DIRAC 1, UKMHD Consortium machine
at the University of St Andrews and the DiRAC Data Centric system at
Durham University, operated by the Institute for Computational Cosmology on
behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment
was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC
capital grant ST/H008519/1, and STFC DiRAC Operations grant ST/K003267/1
and Durham University. DiRAC is part of the National E-Infrastructure.
\bibliographystyle{apj}
The aim of this section is to derive explicitly the perturbed phase-space distribution $f_1( \vec{x}, \vec{v})$ and the predicted number of stars $N_{star}(\theta)$ in~\eqref{BKT_gen} and~\eqref{Numbe}, respectively. We first focus on the slightly simpler point mass approximation before evaluating the same quantities for the Plummer sphere profile.
\subsubsection{Point mass approximation}
The point mass approximation implies $\rho(r) = M_\text{sh} \delta^3(r)$ and $\Phi(r) = - {G M_\text{sh} / r}$. Substituting this and \eqref{f0} into \eqref{BKT_gen} yields directly
\es{f1_BKT_PM}{
f_1( \vec{x}, \vec{v}) &= -{2 G M_\text{sh} \over \pi^{3/2} v_0^5} {e^{-(\vec{v} + \vec{v}_\text{sh})^2 / v_0^2}} { ( \vec{v} + {\bf v_\text{sh}} ) \cdot \left( {\bf \hat x} - {\bf \hat v} \right) \over v \, x \left(1 - {\bf \hat x} \cdot {\bf \hat v} \right)} \,.
}
For $N_{star}(\theta)$ it is beneficial, instead of integrating \eqref{f1_BKT_PM} over the ROI, to rewrite \eqref{Numbe} in terms of the density profile $\rho(r)$,
\es{BKT_gen0}{
N_\text{star}({\bm {\theta}}) &\equiv \int_\text{ROI} \! d^3\vec{x} \, d^3\vec{v}
\left[ f_0(\vec{v})({\bm \theta})
+ \int_{0}^\infty {du \over u^2} \nabla_y \Phi( {\bf y}) \cdot \nabla_{v} f_0({\bf v}) \right]_{ {\bf y} = {\bf x} - {\bf v} / u } \\
&= \int_\text{ROI} \! d^3\vec{x} \left[ n_0 +\int_{0}^\infty {du \over u^3} \int d^3 {\bf v} \nabla_y^2 \Phi( {\bf y}) f_0({\bf v}) \right]_{ {\bf y} = {\bf x} - {\bf v} / u }\\
&= \int_\text{ROI} \! d^3\vec{x} \left[ n_0 + 4 \pi G \int_{0}^\infty {du \over u^3} \int d^3 {\bf v} \rho( |{\bf x} - {\bf v} / u|) f_0({\bf v}) \right]\,.}
In the last step we used the relation $\nabla^2_y \Phi( {\bf y}) = 4 \pi G \rho(y)$. For a spherical ROI with radius $R$ around the subhalo, together with \eqref{f0}, the equation above yields
\es{BKT_genPoint}{
N_\text{star}({\bm {\theta}}) &= \int_\text{ROI} \! d^3\vec{x}\, n_0 \left[ 1 +
{2 G M_\text{sh} \over {x \, v_0^2}} e^{ - \left[ 1 - ({ \bf \hat x} \cdot {\bf \hat v_\text{sh}})^2 \right]{v_\text{sh}^2 / v_0^2} }\ \text{erfc} \left( {v_\text{sh} \over v_0 }{ \bf \hat x} \cdot {\bf \hat v_\text{sh} } \right) \right] \\
&\approx \frac{4}{3}\pi R^3 n_0 \left[ 1+\frac{3 G M_{sh}}{R\, v_0 \, v_\text{sh}} \; F\left({v_\text{sh} \over v_0}\right)\right],
}
where $F(x)$ is the Dawson integral $F(x) \equiv e^{-x^2} \int_0^x \!
e^{y^2} dy$. In the last step the $\text{erfc}$ factor averages to $1$ over the spherical ROI: the remaining integrand is even under ${\bf \hat x} \cdot {\bf \hat v_\text{sh}} \to -{\bf \hat x} \cdot {\bf \hat v_\text{sh}}$, and since $\text{erfc}(z) + \text{erfc}(-z) = 2$, only the symmetric part of $\text{erfc}$ contributes to the angular average.
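As a numerical sanity check (illustrative only; NumPy and SciPy assumed available), one can verify both that $F$ matches \texttt{scipy.special.dawsn} and that the $\text{erfc}$ factor indeed averages to $1$ over the sphere, reproducing the Dawson-integral form:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn, erfc

# The Dawson integral F(x) = exp(-x^2) * Int_0^x exp(y^2) dy
# is available as scipy.special.dawsn.
def dawson_direct(x):
    val, _ = quad(lambda y: np.exp(y**2), 0.0, x)
    return np.exp(-x**2) * val

# Angular average of the overdensity term over the spherical ROI,
# with a = v_sh / v0 and mu = x_hat . v_sh_hat.  Because the
# exponential prefactor is even in mu and erfc(z) + erfc(-z) = 2,
# erfc effectively averages to 1 and the result is F(a) / a.
def angular_average(a):
    integrand = lambda mu: np.exp(-(1.0 - mu**2) * a**2) * erfc(a * mu)
    val, _ = quad(integrand, -1.0, 1.0)
    return 0.5 * val
```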
\subsubsection{Plummer sphere profile}
Using the potential and density profile for the Plummer sphere given in \eqref{eq:rho-plummer}, we can directly reproduce \eqref{eq:f1-plummer} by evaluating the integral in \eqref{BKT_gen}.
For $N_\text{star}(\theta)$ we can again use the expression given in \eqref{BKT_gen0}. However, because the density profile is more complex, we need to additionally express $\rho(|\vec{x} - \vec{v}/u|)$ and $f_0(\vec{v})$ in terms of their Fourier transforms
\es{PlummerFourier}{
\tilde{\rho}(\vec{p}) &\equiv \int\!d^3\vec{x} \, \rho(\vec{x}) e^{i \vec{p}\cdot\vec{x}}
= M p \, r_c K_1(p r_c) \,, \\
\tilde{f_0}(\vec{p}) &\equiv \int\!d^3\vec{v} \, f_0(\vec{v}) e^{i \vec{p}\cdot\vec{v}}
= e^{-\frac{1}{4} p^2 v_0^2 + i \vec{p}\cdot\vec{v}_\text{sh}} \,,
}
where $K_1(x)$ is the modified Bessel function of the second kind. This yields
\es{PlummerFourierNStar}{
N_\text{star}({\bm {\theta}}) &= \int_\text{ROI} \! d^3\vec{x} \left[ n_0 + 4 \pi G \int \! d^3\vec{w}\ d^3\vec{p}\ d^3\vec{k} \
\int \! \frac{du}{u^3} \tilde{f}_0(\vec p) \tilde{\rho}(\vec k)
e^{-i u \vec{p}\cdot(\vec{w} - \vec{v}_\text{sh}/u + \vec{x})}
e^{-i \vec{k}\cdot \vec{w}} \right] \\
&= \frac{4}{3}\pi R^3 n_0+
\frac{2 G}{\pi} \int \! d^3\vec{w}\ d^3\vec{p}\ d^3\vec{k}\int \! \frac{du}{u^3}
\ \tilde{f}_0(\vec p) \tilde{\rho}(\vec k)
\frac{\sin(p R u) - p R u \cos(p R u)}{p^3} e^{-i u \vec{p}\cdot (\vec{w} - \vec{v}_\text{sh}/u)}
e^{-i \vec{k}\cdot \vec{w}} \\
&= \frac{4}{3}\pi R^3 n_0+
\frac{2 G}{\pi} \int \! d^3\vec{p}\int \! \frac{du}{u^3}
\tilde{f}_0(\vec p) \tilde{\rho}(u \vec p)
\frac{\sin(p R u) - p R u \cos(p R u)}{p^3}
e^{i \vec{p}\cdot \vec{v}_\text{sh}} \,.
}
Evaluating first the integral over $u$ and then the integral over $\vec{p}$ reproduces \eqref{eq:f1-norm-plummer-2},
where $F(x)$ is again the Dawson integral $F(x) \equiv e^{-x^2} \int_0^x \!
e^{y^2} dy$.
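The closed form in \eqref{PlummerFourier} can be checked numerically. The sketch below is illustrative only: it assumes the standard Plummer normalization $\rho(r)=\tfrac{3M}{4\pi r_c^3}(1+r^2/r_c^2)^{-5/2}$ for \eqref{eq:rho-plummer} and that SciPy is available. It reduces the 3-D Fourier transform to a radial integral and compares it with $M p\, r_c K_1(p r_c)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k1

M, rc = 1.0, 1.0  # arbitrary units for the check

def rho(r):
    # Standard Plummer density profile (assumed normalization).
    return 3.0 * M / (4.0 * np.pi * rc**3) * (1.0 + (r / rc) ** 2) ** (-2.5)

def rho_tilde_numeric(p):
    # Spherically symmetric 3-D Fourier transform reduced to
    # rho~(p) = (4 pi / p) Int_0^inf r rho(r) sin(p r) dr.
    integrand = lambda r: 4.0 * np.pi / p * r * rho(r) * np.sin(p * r)
    val, _ = quad(integrand, 0.0, np.inf, limit=400)
    return val

def rho_tilde_closed(p):
    # Closed form quoted in the text: M p r_c K_1(p r_c).
    return M * p * rc * k1(p * rc)
```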
\subsection{Asimov limit}
To derive the leading order expression for the likelihood profile in \eqref{Asimov_Limit} it is easiest to start with the binned likelihood in \eqref{LL_phase_binned}. We have to take the logarithm of this quantity,
\es{Log1}{
\log p(d | {\mathcal M}, {\bm \theta}) &\approx \sum_{i=1}^{N_\text{bins}}\left[ -n_i(\theta) + N_i\log n_i(\theta) - N_i(\log N_i-1)\right] \,,
}
which in the continuous limit reads
\es{Log2}{
\log p(d | {\mathcal M}, {\bm \theta}) &= \int \! d^3\vec{x} \, d^3\vec{v}\left[ -f(\vec{x}, \vec{v})(\theta) + f_0(\vec{v})\log f(\vec{x}, \vec{v})(\theta) - f_0(\vec{v})(\log f_0(\vec{v})-1)\right] \\
&\approx \int \! d^3\vec{x} \, d^3\vec{v}\left[ -f(\vec{x}, \vec{v})(\theta) + f_0(\vec{v})\left(
\log f_0(\vec{v})+\frac{f_1(\vec{x}, \vec{v})(\theta)}{f_0(\vec{v})}-\frac{f_1^2(\vec{x}, \vec{v})(\theta)}{2f_0^2(\vec{v})}
\right) - f_0(\vec{v})(\log f_0(\vec{v})-1)\right] \\
&= - \int \! d^3\vec{x} \, d^3\vec{v} \frac{f_1^2(\vec{x}, \vec{v})(\theta)}{2f_0(\vec{v})} \,.
}
In the second line we expanded $\log f(\vec{x}, \vec{v})(\theta)\equiv \log[f_0(\vec{v}) +f_1(\vec{x}, \vec{v})(\theta)]$ for
$f_1(\vec{x}, \vec{v})(\theta)\ll f_0(\vec{v})$. Substituting \eqref{Log2} into \eqref{LL} reproduces \eqref{Asimov_Limit}.
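The quadratic truncation in \eqref{Log2} can be checked pointwise: for fixed $f_0$, the exact per-cell contribution $-f + f_0\log f - f_0(\log f_0 - 1)$ with $f = f_0 + f_1$ should differ from $-f_1^2/(2f_0)$ only at third order in $f_1$. A minimal numerical check (with illustrative values):

```python
import math

def exact_term(f0, f1):
    # Per-cell contribution to log p in eq. (Log2), with f = f0 + f1.
    f = f0 + f1
    return -f + f0 * math.log(f) - f0 * (math.log(f0) - 1.0)

def quadratic_term(f0, f1):
    # Leading-order (Asimov) approximation: -f1^2 / (2 f0).
    return -(f1**2) / (2.0 * f0)

f0 = 2.0  # arbitrary positive value for the check
errs = [abs(exact_term(f0, f1) - quadratic_term(f0, f1))
        for f1 in (1e-1, 1e-2, 1e-3)]
# The error shrinks like f1^3 (here f1^3 / (3 f0^2)).
```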
When we write out \eqref{Asimov_Limit} for a Plummer sphere profile, we obtain \eqref{Asimov_Plummer_limit} with
\es{TS_asimov_PS}{
{\mathcal I}(\epsilon_v,\epsilon_r) &= \frac{1}{\sqrt{\pi}} \int_{-1}^{1} {d(\cos \theta_x) \over 2} \int_0^\infty d \tilde v \, \int_{-1}^{1} {d (\cos \theta_v) \over 2} \int_0^{2 \pi} {d \phi_v \over 2 \pi} \int_{0}^{1} {d\tilde r}
{\tilde v^4 e^{- \tilde v^2} \over 1 + \epsilon_v^2 \tilde v^2 - 2 \epsilon_v \tilde v \cos \theta_v} \left[ {{\bf \hat v} \cdot \left( a \vec{\hat{v'}} - \vec{\hat{x}} \right) \over a\left(a - {\bf \hat x} \cdot {\bf \hat v'} \right)} \right]^{2} \,,
}
where $a\equiv\sqrt{1 + \epsilon_r^2/\tilde r^2}$, $\tilde r\equiv r/R$, $\tilde v\equiv v/v_0$, and $\hat{\vec{v}}'\equiv (\vec{v}-\vec{v}_\text{sh})/\sqrt{v^2+v^2_\text{sh}-2\vec{v}\cdot\vec{v}_\text{sh}}$. Note that the integral over $\tilde r$ can be performed analytically, but due to the length of the expression we refrain from quoting it explicitly. All other integrals have to be evaluated numerically. In Fig.~\ref{fig:I} we illustrate this function
over a range of relevant $\{\epsilon_v, \epsilon_r\}$.
\begin{figure}[htb]
\includegraphics[width=0.5\columnwidth]{Ievex.pdf}
\vspace{-.20cm}
\caption{The function $\mathcal{I}(\epsilon_v, \epsilon_r)$ entering
the Asimov likelihood profile in~\eqref{Asimov_Plummer_limit} for
the Plummer sphere model. The parameters are $\epsilon_v = v_0 / v_\text{sh}$ and
$\epsilon_r = r_s / R$. Larger values of $\mathcal{I}(\epsilon_v, \epsilon_r)$
indicate better sensitivity to DM subhalos.
}
\vspace{-0.15in}
\label{fig:I}
\end{figure}
\subsection{6-D vs. 3-D kinematic data}
In order to assess the advantage of using the full 6-D kinematic data, as opposed to 3-D information, we need the un-binned likelihood functions based on only the velocity or the number density. They can be written analogously to \eqref{LL_phase}:
\es{LL_phase_velo}{
p_\text{velocity}(d | {\mathcal M}, {\bm \theta}) &=
\prod_{k=1}^{\bar N_\text{star}} \, \frac{f(\vec{x}_k, \vec{v}_k)({\vec{\theta}})}{n(\vec{x}_k)(\vec{\theta})} \,,
}
and
\es{LL_phase_number}{
p_\text{number}(d | {\mathcal M}, {\bm \theta}) &= e^{- N_\text{star}({\vec{\theta}})}
\prod_{k=1}^{\bar N_\text{star}} \, n(\vec{x}_k)({\vec{\theta}}) \,,
}
where the data set $d$ is now restricted to $\{\vec{v}_k\}$ and $\{\vec{x}_k\}$, respectively.
We show an example of the Asimov likelihood profile, under the null hypothesis, comparing the 6-D and 3-D distributions in Fig.~\ref{Fig: TScompare}. We take $v_0 \,= \,100~ \text{km/s}$, $n_0 \,= \,5\times10^3 \, \text{kpc}^{-3}$, $M_{sh} \,= \,2\times10^7 \, \text{M}_\odot$, $v_\text{sh} \,= \,200 ~ \text{km/s}$, and ROI radius $R \, =\, 3 ~ \text{kpc}$.
As shown in Fig.~\ref{Fig: TScompare}, the likelihood profile using the full phase-space information is not much different from that obtained with only velocity information. This indicates that one can work with the simplified likelihood function, which does not require a complete sample of stars, and obtain similar sensitivity to DM subhalos. On the other hand, the likelihood function that only uses the stellar number density data is significantly less sensitive to DM subhalos, by almost one order of magnitude in mass.
\begin{figure*}[htb]
\includegraphics[width=0.5\columnwidth]{TScompare.pdf}
\vspace{-.20cm}
\caption{ Likelihood profiles obtained using the Asimov dataset under the null hypothesis, with parameters as in Fig.~\ref{fig:phase}, for the 6-D and 3-D likelihood functions. }
\vspace{-0.15in}
\label{Fig: TScompare}
\end{figure*}
\end{document}
\section{Introduction}
Consider the following question: a learner receives an $iid$ training set $S$ drawn from a distribution parametrized by $\theta^*$.
There is a teacher who knows $\theta^*$.
Can the teacher select a subset from $S$ so the learner estimates $\theta^*$ better from the subset than from $S$?
This question is distinct from training set reduction (see e.g.~\cite{garcia2012prototype,zeng2005smo,Wilson2000}) in that the teacher can use the knowledge of $\theta^*$ to carefully design the subset.
It is, in fact, a coding problem: Can the teacher approximately encode $\theta^*$ using items in $S$ for a known decoder, which is the learner?
As such, the question is not a machine learning task but rather a machine teaching one~\cite{Zhu2018Overview,Goldman1995Complexity,Zhu2015Machine}.
This question is relevant for several nascent applications.
One application is in understanding blackbox models such as deep nets.
Often, observation of a blackbox model is limited to its predicted label $y=\theta^*(x)$ given input $x$.
One way to interpret a blackbox model is to locally train an interpretable model with data points $S$ labeled by the blackbox model around the region of interest~\cite{ribeiro2016should}.
We, however, ask for more: to reduce the size of the training set $S$ for the local learner \emph{while} making the learner approximate the blackbox better. The reduced training set itself also serves as representative examples of local model behavior.
Another application is in education.
Imagine a teacher who has a teaching goal $\theta^*$.
This is a reasonable assumption in practice: e.g. a geology teacher has the knowledge of the actual decision boundaries between rock categories.
However, the teacher is constrained to teach with a given textbook (or a set of courseware) $S$.
To the extent that the student is quantified mathematically, the teacher wants to select pages in the textbook with the guarantee that the student learns better from those pages than from gulping the whole book.
But is the question possible?
The following example says yes.
Consider learning a threshold classifier on the interval $[-1,1]$, with true threshold at $\theta^*=0$.
Let $S$ have $n$ items drawn uniformly from the interval and labeled according to $\theta^*$.
Let the learner be a hard margin SVM, which
places the estimated threshold in the middle of the inner-most pair in $S$ with different labels:
$\hat\theta_S = (x_- + x_+)/2$
where $x_-$ is the largest negative training item and $x_+$ the smallest positive training item in $S$.
It is well known that $|\hat\theta_S-\theta^*|$ converges at a rate of $1/n$: the intuition being that the average space between adjacent items is $O(1/n)$.
\begin{figure}[h]
\centerline{\includegraphics[width=.5\textwidth]{symmetric.pdf}}
\caption{The original training set $S$ with $n=6$ items (circles and stars; green=negative, purple=positive), and the most-symmetric training set (stars) the teacher selects.}
\label{fig:symmetric}
\end{figure}
The teacher knows everything but cannot tell $\theta^*$ directly to the learner.
Instead, it can select the \emph{most-symmetric pair} in $S$ about $\theta^*$ and give them to the learner as a two-item training set.
We will prove later that the risk of learning from the most-symmetric pair is $O(1/n^2)$; that is, learning from the selected subset surpasses learning from $S$.
Thus we observe something interesting: the teacher can turn a larger training set $S$ into a smaller and better subset for the midpoint classifier.
We call this phenomenon \textbf{super-teaching}.
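A small simulation makes the two rates visible (a sketch, not a proof; NumPy assumed): the midpoint learner's risk on all of $S$ shrinks roughly like $1/n$, while its risk on the most-symmetric pair shrinks roughly like $1/n^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_risks(n, trials=2000):
    """Average risk |theta_hat - theta*| with theta* = 0, for
    (a) the midpoint learner on all of S and
    (b) the same learner taught only the most-symmetric pair."""
    full, pair = [], []
    for _ in range(trials):
        x = rng.uniform(-1.0, 1.0, n)
        neg, pos = x[x < 0], x[x >= 0]
        if neg.size == 0 or pos.size == 0:
            continue  # one-class sample; the learner outputs +/-1 either way
        full.append(abs(neg.max() + pos.min()) / 2.0)
        # Most-symmetric pair: minimize |(x_- + x_+)/2| over labeled pairs.
        pair.append(np.abs(neg[:, None] + pos[None, :]).min() / 2.0)
    return np.mean(full), np.mean(pair)

r_full_100, r_pair_100 = avg_risks(100)
r_full_400, r_pair_400 = avg_risks(400)
# Expect r_full ~ 1/n and r_pair ~ 1/n^2.
```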
\section{Formal Definition of Super Teaching}
\label{sec:def}
Let $\setfont{Z}$ be the data space: for unsupervised learning $\setfont{Z}=\setfont{X}$, while for supervised learning $\setfont{Z}=\setfont{X} \times \setfont{Y}$.
Let $p_\setfont{Z}$ be the underlying distribution over $\setfont{Z}$.
We take a function view of the learner: a learner $A$ is a function $A: \cup_{n=0}^\infty \setfont{Z}^n \mapsto \Theta$, where $\Theta$ is the learner's hypothesis space.
The notation $\cup_{n=0}^\infty \setfont{Z}^n$ defines the ``set of (potentially non-$iid$) training sets'',
namely multisets of any size whose elements are in $\setfont{Z}$.
Given any training set $T \in \cup_{n=0}^\infty \setfont{Z}^n$, we assume $A$ returns a unique hypothesis $A(T) \triangleq \hat\theta_T \in \Theta$.
The learner's risk $R(\theta)$ for $\theta\in\Theta$ is defined as:
\begin{equation}
R(\theta)=\E{p_\setfont{Z}}{\ell(\theta(x), y)}, \mbox{ or }
R(\theta)= \|\theta-\theta^*\|_2.
\label{eq:R}
\end{equation}
The former is for prediction tasks where $\ell()$ is a loss function and $\theta(x)$ denotes the prediction on $x$ made by model $\theta$;
the latter is for parameter estimation where we assume a realizable model $p_\setfont{Z} = p_{\theta^*}$ for some $\theta^* \in \Theta$.
We now introduce a clairvoyant teacher $B$ who has full knowledge of $p_\setfont{Z}, A, R$.
The teacher is also given an $iid$ training set $S=\{z_1, \ldots, z_n\} \sim p_\setfont{Z}$.
If the teacher teaches $S$ to $A$, the learner will incur a risk $R(A(S)) \triangleq R(\hat\theta_S)$.
The teacher's goal is to judiciously select a subset $B(S) \subset S$ to act as a ``super teaching set'' for the learner so that $R(\hat\theta_{B(S)}) < R(\hat\theta_S)$.
Of course, to do so the teacher must utilize her knowledge of the learning task, thus the subset is actually a function $B(S, p_\setfont{Z}, A, R)$.
In particular, the teacher knows $p_\setfont{Z}$ already, and this sets our problem apart from machine learning.
For readability we suppress these extra parameters in the rest of the paper.
We formally define super teaching as follows.
\begin{definition}[Super Teaching]
\label{def:superteaching}
$B$ is a super teacher for learner $A$ if~$\forall\delta>0, \exists N$ such that $\forall n\ge N$
\begin{equation}
\P{S}{R(\hat\theta_{B(S)}) \le c_n R(\hat\theta_S)} > 1-\delta,
\end{equation}
where
$S \stackrel{iid}{\sim} p_\setfont{Z}^n, B(S)\subset S$, and $c_n \le 1$ is a sequence we call super teaching ratio.
\end{definition}
Obviously, $c_n=1$ can be trivially achieved by letting $B(S)=S$ so we are interested in small $c_n$.
There are two fundamental questions:
(1) Do super teachers provably exist?
(2) How to compute a super teaching set $B(S)$ in practice?
We answer the first question positively by exhibiting super teaching on two learners:
maximum likelihood estimator for the mean of a Gaussian in section~\ref{sec:Gaussian},
and
1D large margin classifier in section~\ref{sec:midpoint}.
Guarantees on super teaching for general learners remain future work.
Nonetheless, empirically we can find a super teaching set for many general learners:
we formulate the second question as a mixed-integer nonlinear program in section~\ref{sec:MINLP}.
Empirical experiments in section~\ref{sec:exp} demonstrate that one can find a good $B(S)$ effectively.
\section{Analysis on Super Teaching for the MLE of Gaussian mean}
\label{sec:Gaussian}
In this section, we present our first theoretical result on super teaching, when the learner $A_{MLE}$ is the maximum likelihood estimator (MLE) for the mean of a Gaussian.
Let $\setfont{Z}=\setfont{X}=\setfont{R}$, $\Theta=\setfont{R}$, $p_\setfont{Z}(x)=\mathcal{N}(\theta^*, 1)$.
Given a sample $S$ of size $n$ drawn from $p_\setfont{Z}$,
the learner computes the MLE for the mean: $\hat \theta_S =A_{MLE}(S)= \frac{1}{n}\sum_{i=1}^n x_i$.
We define the risk as $R(\hat \theta_S)=|\hat \theta_S-\theta^*|$.
The teacher we consider is the optimal $k$-subset teacher $B_k$, which uses the best subset of size $k$ to teach:
\begin{equation}
B_k(S) \in \ensuremath{\mbox{argmin}}_{T \subset S, |T|=k} R(\hat\theta_T).
\end{equation}
To build intuition, it is well-known that the risk of $A_{MLE}$ under $S$ is $O(1/\sqrt{n})$ because the variance under $n$ items shrinks like $1/n$.
Now consider $k=1$.
Since the teacher $B_1$ knows $\theta^*$, under our setting the best teaching strategy is for her to select the item in $S$ closest to $\theta^*$, which forms the singleton teaching set $B_1(S)$.
One can show that with large probability this closest item is $O(1/n)$ away from $\theta^*$
(the central part of a Gaussian density is essentially uniform).
Therefore, we already see a super teaching ratio of $c_n = n^{-\frac{1}{2}}$.
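This intuition is easy to check by simulation (a sketch; NumPy assumed): the single item closest to $\theta^*$ beats the full-sample mean, and its risk shrinks roughly like $1/n$ rather than $1/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_risks(n, trials=4000, theta_star=0.0):
    """Average risk of (a) the MLE on all of S and (b) the single
    item closest to theta*, i.e. the teaching set B_1(S)."""
    x = rng.normal(theta_star, 1.0, size=(trials, n))
    risk_mle = np.abs(x.mean(axis=1) - theta_star)   # ~ n^(-1/2)
    risk_b1 = np.abs(x - theta_star).min(axis=1)     # ~ n^(-1)
    return risk_mle.mean(), risk_b1.mean()

r_mle_100, r_b1_100 = avg_risks(100)
r_mle_1000, r_b1_1000 = avg_risks(1000)
```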
More generally, our main result below shows that $B_k$ achieves a super teaching ratio $c_n = O(n^{-k+\frac{1}{2}})$:
\begin{restatable}{theorem}{Amuthm}
\label{thm:Amuthm}
Let $B_k$ be the optimal $k$-subset teacher.
$\forall \epsilon\in(0,\frac{2k-1}{4}), \forall\delta\in(0, 1)$, $\exists N(k, \epsilon, \delta)$ such that $\forall n\ge N(k, \epsilon, \delta)$, $\P{}{R(\hat \theta_{B_k(S)})\le c_nR(\hat \theta_S)}>1-\delta$, where $c_n=\frac{k^{k-\epsilon}}{\sqrt{k}}n^{-k+\frac{1}{2}+2\epsilon}$.
\end{restatable}
Toward proving the theorem,
\footnote{Remark: we introduced an auxiliary variable $\epsilon$ which controls the implicit tradeoff between $c_n$, how much super teaching helps, and $N$, how soon super teaching takes effect.
When $\epsilon\rightarrow 0$ the teaching ratio $c_n$ approaches $O(n^{-k+\frac{1}{2}})$, but as we will see $N(k, \epsilon, \delta)\rightarrow\infty$.
Similarly, $k$ also affects the tradeoff: the teaching ratio is smaller as we enlarge $k$, but
$N(k, \epsilon, \delta)$ increases.}
we first recall the standard rate $R(\hat\theta_S) \approx n^{-\frac{1}{2}}$ if $A_{MLE}$ learns from the whole training set $S$:
\begin{restatable}{proposition}{thmAsyGaussianPool}
\label{thm:AsyGaussianPool}
Let $S$ be an $n$-item $iid$ sample drawn from $ \mathcal{N}(\theta^*, 1)$. $\forall \epsilon>0$, $\forall\delta\in(0,1)$, $\exists N_1(\epsilon, \delta)$ such that $\forall n\ge N_1$,
\begin{equation}
\P{}{n^{-\frac{1}{2}-\epsilon}<R(\hat \theta_S)<n^{-\frac{1}{2}+\epsilon}}>1-\delta.
\end{equation}
\end{restatable}
\begin{proof}
$R(\hat \theta_S)=|\hat \theta_S-\theta^*|$ and $\hat \theta_S-\theta^*\sim \mathcal{N}(0, n^{-1})=\sqrt{\frac{n}{2\pi}}e^{-\frac{nx^2}{2}}$. Let $\alpha=n^{-\frac{1}{2}-\epsilon}$ and $\beta=n^{-\frac{1}{2}+\epsilon}$.
We have
\begin{equation}
\begin{aligned}
\label{lower}
&\P{}{R(\hat\theta_S)\le\alpha}=2\int_{0}^\alpha \sqrt{\frac{n}{2\pi}}e^{-\frac{nx^2}{2}}dx\\
&<2\int_{0}^\alpha \sqrt{\frac{n}{2\pi}}dx=2\alpha\sqrt{\frac{n}{2\pi}}=\sqrt{\frac{2}{\pi}}n^{-\epsilon},
\end{aligned}
\end{equation}
\begin{equation}
\label{upper}
\begin{aligned}
&\P{}{R(\hat\theta_S)\ge\beta}=2\int_{\beta}^\infty \sqrt{\frac{n}{2\pi}}e^{-\frac{nx^2}{2}}dx\\
&<2\int_{\beta}^\infty \frac{x}{\beta}\sqrt{\frac{n}{2\pi}}e^{-\frac{nx^2}{2}}dx=\int_{\beta^2}^\infty \frac{1}{\beta}\sqrt{\frac{n}{2\pi}}e^{-\frac{ny}{2}}dy\\
&=\frac{1}{\beta}\sqrt{\frac{2}{n\pi}}e^{-\frac{n\beta^2}{2}}<\frac{1}{\beta}\sqrt{\frac{2}{n\pi}}=\sqrt{\frac{2}{\pi}}n^{-\epsilon}.
\end{aligned}
\end{equation}
Thus
$\P{}{\alpha< R(\hat\theta_S)<\beta}
=1-\P{}{R(\hat\theta_S)\le \alpha}-\P{}{R(\hat\theta_S)\ge \beta}
>1-2\sqrt{\frac{2}{\pi}}n^{-\epsilon}$.
Let $N_1(\epsilon, \delta)=(\frac{1}{\delta}\sqrt{\frac{8}{\pi}})^{\frac{1}{\epsilon}}$, then $\forall n\ge N_1$, $\P{}{\alpha< R(\hat\theta_S)<\beta}>1-\delta$.
\end{proof}
We now work out the risk of $A_{MLE}$ if it learns from the optimal $k$-subset teacher $B_k$.
\thmref{thm:Gaussian} says that this risk is very small and sharply concentrated around $R(\hat\theta_{B_k(S)}) \approx n^{-k}$. To prove~\thmref{thm:Gaussian}, we first give the following lemma.
\begin{lemma}
Denote $C^n_k=\begin{pmatrix} n\\k\end{pmatrix}$. Let the index set $I=\{1,2,...n\}$ where $n\ge 4k$. Among all subsets of size $k$, there are at most $4^kC^{2k}_kC^n_{2k-1}$ ordered pairs of subsets that are overlapping but not identical.
\label{lem:intersect}
\end{lemma}
\begin{proof}
Let $I_1$ and $I_2$ be two subsets of size $k$ and they overlap on $t$ indexes. Then the total number of distinct indexes that appear in $I_1\cup I_2$ is $2k-t$. There are $C^n_{2k-t}$ ways of choosing such $2k-t$ indexes. Next we determine which $t$ indexes are overlapping ones. We have $C^{2k-t}_t$ ways of choosing such $t$ indexes. Finally we have $C^{2k-2t}_{k-t}$ ways of selecting half of the non-overlapping indexes and attribute them to $I_1$. Thus in total we have $O_t=C^n_{2k-t}C^{2k-t}_tC^{2k-2t}_{k-t}$ ordered pairs of subsets that overlap on $t$ indexes. By our assumption $n\ge4k$ we have $C^n_{2k-t}\le C^n_{2k-1}$. Also note that $C^{2k-t}_t<C^{2k}_t$ and $C^{2k-2t}_{k-t}<C^{2k}_k$, thus $O_t<C^n_{2k-1}C^{2k}_tC^{2k}_k$. Therefore the total number of ordered pairs of subsets that are overlapping but not identical is
\begin{equation}
\begin{aligned}
&O=\sum_{t=1}^{k-1}O_t<\sum_{t=1}^{k-1}C^n_{2k-1}C^{2k}_tC^{2k}_k\\
&<\sum_{t=0}^{2k}C^n_{2k-1}C^{2k}_tC^{2k}_k=4^kC^{2k}_kC^n_{2k-1}.
\end{aligned}
\end{equation}
\end{proof}
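The counting argument in \lemref{lem:intersect} can be verified by brute force for small $n$ and $k$ (an illustrative sketch): enumerate all ordered pairs of $k$-subsets, count those that overlap without being identical, and compare against both the exact sum $\sum_{t=1}^{k-1} O_t$ and the stated upper bound:

```python
from itertools import combinations
from math import comb

def overlapping_pairs(n, k):
    # Brute-force count of ordered pairs (I1, I2) of k-subsets of
    # {0, ..., n-1} with 1 <= |I1 & I2| <= k - 1.
    subsets = [set(c) for c in combinations(range(n), k)]
    return sum(1 for I1 in subsets for I2 in subsets
               if 1 <= len(I1 & I2) <= k - 1)

def exact_count(n, k):
    # Sum of O_t = C(n, 2k-t) C(2k-t, t) C(2k-2t, k-t) over t = 1..k-1.
    return sum(comb(n, 2 * k - t) * comb(2 * k - t, t)
               * comb(2 * k - 2 * t, k - t) for t in range(1, k))

def upper_bound(n, k):
    # The lemma's bound: 4^k C(2k, k) C(n, 2k-1).
    return 4**k * comb(2 * k, k) * comb(n, 2 * k - 1)
```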
Now we prove the risk of the optimal $k$-subset teacher.
\begin{restatable}{theorem}{thmAsyGaussian}
\label{thm:Gaussian}
Let $B_k$ be the optimal $k$-subset teacher. Let $S$ be an $n$-item $iid$ sample drawn from $\mathcal{N}(\theta^*, 1)$. $\forall \epsilon\in(0, k), \forall \delta\in(0,1)$, $\exists N_2(k, \epsilon, \delta)$ such that $ \forall n\ge N_2$,
\begin{equation}
\P{}{\frac{1}{\sqrt{k}}(\frac{k}{n})^{k+\epsilon}<R(\hat \theta_{B_k(S)})<\frac{1}{\sqrt{k}}(\frac{k}{n})^{k-\epsilon}}>1-\delta.
\end{equation}
\end{restatable}
\begin{proof}
Let $I\subseteq\{1,2,...,n\}$ and $|I|=k$, define $\gamma_I=\frac{1}{\sqrt{k}}\sum_{i\in I}(x_i-\theta^*)$. Let $S_I$ denote the subset indexed by $I$.
Note that
$\hat\theta_{S_I}=\frac{1}{k}\sum_{i\in I}x_i$ and
$R(\hat\theta_{S_I})=|\hat\theta_{S_I}-\theta^*|=|\frac{1}{k}\sum_{i\in I}x_i-\theta^*|=\frac{1}{\sqrt{k}}|\gamma_I|$.
Also note that $R(\hat \theta_{B_k(S)})=\inf_{I}R(\hat\theta_{S_I})=\frac{1}{\sqrt{k}}\inf_{I}|\gamma_I|$.
Thus to prove \thmref{thm:Gaussian} it suffices to prove
\begin{equation}
\P{}{(\frac{k}{n})^{k+\epsilon}<\inf_{I}|\gamma_I|<(\frac{k}{n})^{k-\epsilon}}\rightarrow1.
\end{equation}
Let $\alpha=(\frac{k}{n})^{k+\epsilon}$ and $\beta=(\frac{k}{n})^{k-\epsilon}$. We first prove the lower bound. Note that $\gamma_I$ has the same distribution for all $I$. Thus by the union bound,
\begin{equation}
\begin{aligned}
\P{}{\inf_{I}|\gamma_I|\le \alpha}=\P{}{\exists I: |\gamma_I|\le\alpha}\le C^n_k\P{}{|\gamma_{I_1}|\le\alpha},
\end{aligned}
\end{equation}
where $I_1=\{1,2,...,k\}$. Since $\gamma_{I_1}\sim \mathcal{N}(0, 1)$, we have
\begin{equation}
\P{}{|\gamma_{I_1}|\le\alpha}=\int_{-\alpha}^{\alpha}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx<\sqrt{\frac{2}{\pi}}\alpha.
\end{equation}
Note that $C^n_k\le (\frac{en}{k})^k$. Thus,
\begin{equation}
\P{}{\inf_{I}|\gamma_I|\le \alpha}<(\frac{en}{k})^k\sqrt{\frac{2}{\pi}}\alpha=\sqrt{\frac{2}{\pi}}e^k(\frac{k}{n})^\epsilon\rightarrow 0.
\end{equation}
Thus $\exists N_2^{'}(k,\epsilon, \delta)$ such that $\forall n\ge N_2^{'}$,
\begin{equation}\label{lower:gaussian}
\P{}{\inf_{I}|\gamma_I|\le \alpha}<\frac{\delta}{2}.
\end{equation}
To show the upper bound, we define $t_I=\ind{|\gamma_I|<\beta}$, where $\ind{}$ is the indicator function. Let $T=\sum_{I}t_I$. Then it suffices to show $\lim_{n\rightarrow \infty}\P{}{T=0}=0$.
Note that
\begin{equation}
\label{evbound}
\begin{aligned}
&\P{}{T=0}=\P{}{T-\E{}{T}=-\E{}{T}}\\
&\le\P{}{(T-\E{}{T})^2\ge(\E{}{T})^2}\le\frac{\V{T}}{(\E{}{T})^2},
\end{aligned}
\end{equation}
where the last inequality follows from the Markov inequality.
Now we lower bound $\E{}{T}$.
\begin{equation}
\begin{aligned}
&\E{}{T}=\E{}{\sum_{I}t_I}=\sum_{I}\E{}{t_I}=C^{n}_k\E{}{t_{I_1}}\\
&=C^{n}_k\P{}{|\gamma_{I_1}|<\beta}=C^{n}_k\int_{-\beta}^{\beta}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx.\\
\end{aligned}
\end{equation}
Note that $\epsilon<k$, thus $\beta<1$. For $x\in(-\beta,\beta)$, $\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}>\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}}=\frac{1}{\sqrt{2\pi e}}$. Also note that $C^{n}_k>(\frac{n}{k})^k$, thus
\begin{equation}
\label{ebound}
\E{}{T}>(\frac{n}{k})^k\frac{1}{\sqrt{2\pi e}}2\beta=\sqrt{\frac{2}{\pi e}}(\frac{n}{k})^\epsilon.
\end{equation}
Now we upper bound $\V{T}$.
\begin{equation}
\V{T}=\sum_{I, I^{'}}\Cov{t_I}{t_{I^{'}}}=\sum_{I, I^{'}, |I\cap I^{'}|\ge1}\Cov{t_I}{t_{I^{'}}}.
\end{equation}
Note that for Bernoulli random variable $t_I$, $\V{t_I}\le \E{}{t_I}$. Thus if $I=I^{'}$, then
\begin{equation}
\begin{aligned}
&\Cov{t_I}{t_{I^{'}}}=\V{t_I}\le \E{}{t_I}=\P{}{|\gamma_I|<\beta}\\
&=\int_{-\beta}^{\beta}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx<\frac{1}{\sqrt{2\pi}}2\beta=\sqrt{\frac{2}{\pi}}(\frac{k}{n})^{k-\epsilon}.
\end{aligned}
\end{equation}
Otherwise $1 \le |I\cap I^{'}| \le k-1$, that is, $I$ and $I^{'}$ overlap but are not identical; then
\begin{equation}
\begin{aligned}
\Cov{t_I}{t_{I^{'}}}&=\E{}{t_It_{I^{'}}}-\E{}{t_I}\E{}{t_{I^{'}}}\le \E{}{t_It_{I^{'}}}\\
&=\P{}{|\gamma_I|<\beta, |\gamma_{I^{'}}|<\beta}.
\end{aligned}
\end{equation}
Note that $\gamma_I$ and $\gamma_{I^{'}}$ are jointly Gaussian with covariance
\begin{equation}
\begin{aligned}
\Cov{\gamma_I}{\gamma_{I^{'}}}&=\frac{1}{k}\sum_{i\in I, i^{'}\in I^{'}}\Cov{x_i-\theta^*}{x_{i^{'}}-\theta^*}\\
&=\frac{1}{k}\sum_{i\in I,i^{'}\in I^{'},i=i^{'}}1=\frac{|I\cap I^{'}|}{k}\triangleq\rho,
\end{aligned}
\end{equation}
where $\frac{1}{k}\le\rho\le\frac{k-1}{k}$. The joint PDF of two standard normal distributions $x, y$ with covariance $\rho$ is
\begin{equation}
f(x,y)=\frac{1}{2\pi\sqrt{1-\rho^2}}e^{-\frac{x^2-2\rho xy+y^2}{2(1-\rho^2)}}.
\end{equation}
Note that $f(x, y)\le\frac{1}{2\pi\sqrt{1-\rho^2}}$, thus
\begin{equation}
\begin{aligned}
&\P{}{|\gamma_I|<\beta, |\gamma_{I^{'}}|<\beta}\le\iint\displaylimits_{|x|<\beta, |y|<\beta}\frac{1}{2\pi\sqrt{1-\rho^2}}dxdy\\
&=\frac{1}{2\pi\sqrt{1-\rho^2}}(2\beta)^2=\frac{2}{\pi\sqrt{1-\rho^2}}\beta^2.
\end{aligned}
\end{equation}
Since $\frac{2}{\pi\sqrt{1-\rho^2}}\le\frac{2}{\pi\sqrt{1-(\frac{k-1}{k})^2}}\le\frac{2}{\pi\sqrt{\frac{k}{k^2}}}=\frac{2\sqrt{k}}{\pi}$, thus
\begin{equation}
\P{}{|\gamma_I|<\beta, |\gamma_{I^{'}}|<\beta}\le\frac{2\sqrt{k}\beta^2}{\pi}=\frac{2\sqrt{k}}{\pi}(\frac{k}{n})^{2k-2\epsilon}.
\end{equation}
According to \lemref{lem:intersect}, there are at most $4^kC^{2k}_kC^n_{2k-1}$ pairs of $I$ and $I^{'}$ such that $1\le|I\cap I^{'}|\le k-1$. Thus,
\begin{equation}\label{vbound}
\begin{aligned}
&\V{T}=\sum_{I}\V{t_I}+\sum_{I\neq I^{'}, |I\cap I^{'}|\ge1}\Cov{t_I}{t_{I^{'}}}\\
&\le C^n_k\sqrt{\frac{2}{\pi}}(\frac{k}{n})^{k-\epsilon}+4^kC^{2k}_kC^n_{2k-1}\frac{2\sqrt{k}}{\pi}(\frac{k}{n})^{2k-2\epsilon}\\
&\le \sqrt{\frac{2}{\pi}}(\frac{en}{k})^k(\frac{k}{n})^{k-\epsilon}+4^kC^{2k}_k(\frac{en}{2k-1})^{2k-1}\frac{2\sqrt{k}}{\pi}(\frac{k}{n})^{2k-2\epsilon}\\
&=\sqrt{\frac{2}{\pi}}e^k(\frac{n}{k})^\epsilon+\frac{4\sqrt{k}}{\pi}C^{2k}_k(\frac{2ek}{2k-1})^{2k-1}(\frac{n}{k})^{2\epsilon-1}.\\
\end{aligned}
\end{equation}
Now plug~\eqref{vbound} and~\eqref{ebound} into~\eqref{evbound}, we have
\begin{equation}
\begin{aligned}
&\P{}{T=0}\le a_1(k)(\frac{n}{k})^{-\epsilon}+a_2(k)(\frac{n}{k})^{-1}\rightarrow 0,\\
\end{aligned}
\end{equation}
where $a_1=\sqrt{\frac{\pi}{2}}e^{k+1}$ and $a_2(k)=2\sqrt{k}eC^{2k}_k(\frac{2ek}{2k-1})^{2k-1}$. Thus $\exists N_2^{''}(k, \epsilon,\delta)$ such that $\forall n\ge N_2^{''}$,
\begin{equation}\label{upper:gaussian}
\P{}{\inf_{I}|\gamma_I|\ge \beta}<\frac{\delta}{2}.
\end{equation}
Let $N_2(k, \epsilon, \delta)=\max\{N_2^{'}(k, \epsilon,\delta), N_2^{''}(k, \epsilon,\delta)\}$, combining~\eqref{lower:gaussian} and ~\eqref{upper:gaussian} concludes the proof.
\end{proof}
Now we can conclude super-teaching by comparing~\thmref{thm:Gaussian} and~\propref{thm:AsyGaussianPool}:
\begin{proof}[\textbf{Proof of~\thmref{thm:Amuthm}}]
Let $\alpha=\frac{1}{\sqrt{k}}(\frac{k}{n})^{k-\epsilon}$ and $\beta=n^{-\frac{1}{2}-\epsilon}$.
By \propref{thm:AsyGaussianPool}, $\forall \epsilon\in(0, \frac{2k-1}{4}),\forall\delta\in(0,1)$, $\exists N_1(\epsilon,\frac{\delta}{2})$ such that $\forall n\ge N_1$, $\P{}{R(\hat \theta_S)>\beta}> 1-\frac{\delta}{2}$.
By \thmref{thm:Gaussian}, $\exists N_2(k, \epsilon,\frac{\delta}{2})$ such that $\forall n\ge N_2$, $\P{}{R(\hat \theta_{B_k(S)})<\alpha}>1-\frac{\delta}{2}$. Let $c_n=\frac{k^{k-\epsilon}}{\sqrt{k}}n^{-k+\frac{1}{2}+2\epsilon}$.
Since $\epsilon<\frac{2k-1}{4}$, $c_n$ is a decreasing sequence in $n$ with $\lim_{n\rightarrow\infty}c_n=0$. Let $N_3(k, \epsilon)$ be the first integer such that $c_{N_3}\le1$.
Let $N(k, \epsilon, \delta)=\max\{N_1(\epsilon,\frac{\delta}{2}), N_2(k, \epsilon,\frac{\delta}{2}), N_3(k, \epsilon)\}$. By a union bound $\forall n\ge N(k, \epsilon, \delta)$, $\P{}{R(\hat\theta_{B_k(S)})<\alpha, R(\hat\theta_S)\ge \beta}>1-\delta$.
Since $\frac{\alpha}{\beta}=c_n$, we have $\P{}{R(\hat\theta_{B_k(S)})\le c_n R(\hat\theta_S)}>1-\delta$, where $c_n\le c_{N_3}\le1$.
\end{proof}
\section{Analysis on Super Teaching for 1D Large Margin Classifier}
\label{sec:midpoint}
We present our second theoretical result, this time on teaching a 1D large margin classifier. Let $\setfont{X}=[-1,1]$, $\setfont{Y}=\{-1,1\}$, $\Theta=[-1,1]$, $\theta^*=0$,
$p_{\setfont{Z}}(x, y)=p_{\setfont{Z}}(x)p_{\setfont{Z}}(y\mid x)$ where $p_{\setfont{Z}}(x)=U(\setfont{X})$ and $p_{\setfont{Z}}(y=1\mid x)=\ind{x\ge\theta^*}$. Let
$x_- \triangleq \max_{i:y_i=-1} x_i$ and
$x_+ \triangleq \min_{i:y_i=+1} x_i$
be the inner-most pair of opposite labels in $S$ if they exist.
We formally define the large margin classifier $A_{lm}(S)$ as
\begin{equation}
\hat\theta_S = A_{lm}(S)=\left\{
\begin{array}{ll}
(x_- + x_+)/2 & \mbox{ if $x_-$, $x_+$ exist} \\
-1 & \mbox{ if $S$ all positive} \\
1 & \mbox{ if $S$ all negative.}
\end{array}
\right.
\end{equation}
The risk is defined as $R(\hat\theta_S)=|\hat\theta_S-\theta^*|=|\hat\theta_S|$. The teacher we consider is the most-symmetric teacher, who selects the most-symmetric pair about $\theta^*$ in $S$ and gives it to the learner.
We define the most-symmetric teacher $B_{ms}$:
\begin{equation}
B_{ms}(S) = \left\{
\begin{array}{ll}
\{(s_-,-1), (s_+,1)\} & \mbox{ if } s_-,s_+ \mbox{ exist}, \\
\{(x_1,y_1)\} & \mbox{ otherwise.}
\end{array}
\right.
\label{eq:Bms}
\end{equation}
where $(s_-,s_+) \in \ensuremath{\mbox{argmin}}_{(x,-1),(x',1) \in S} |\frac{x+x'}{2}-\theta^*|$.
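A direct implementation of $B_{ms}$ enumerates all opposite-label pairs (a sketch; names are ours, ties broken arbitrarily):

```python
import itertools

def most_symmetric_teacher(S, theta_star=0.0):
    """B_ms: the opposite-label pair whose midpoint is closest to
    theta_star, or a single item when S is single-class."""
    neg = [x for x, y in S if y == -1]
    pos = [x for x, y in S if y == +1]
    if not neg or not pos:
        return [S[0]]
    s_minus, s_plus = min(
        itertools.product(neg, pos),
        key=lambda p: abs((p[0] + p[1]) / 2.0 - theta_star))
    return [(s_minus, -1), (s_plus, +1)]
```

On the sample $\{(-0.5,-1),(0.3,1),(-0.9,-1),(0.8,1)\}$ the teacher selects the pair $(-0.9, 0.8)$ with midpoint $-0.05$, beating the midpoint $-0.1$ of the inner-most pair that the learner obtains from the full sample.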
Our main result shows that learning from the whole set $S$ achieves the well-known $O(1/n)$ risk, but surprisingly $B_{ms}$ achieves $O(1/n^2)$ risk, and therefore attains an approximate super teaching ratio of $c_n=O(n^{-1})$.
\begin{restatable}{theorem}{Ampthm}
\label{thm:Ampthm}
Let $S$ be an $n$-item $iid$ sample drawn from $p_{\setfont{Z}}$. Then $\forall \delta\in(0,1)$, $\exists N(\delta)$ such that $\forall n\ge N$, $\P{}{R(\hat\theta_{B_{ms}(S)})\le c_nR(\hat\theta_S)}>1-\delta$, where $c_n=\frac{32}{n\delta}\ln\frac{6}{\delta}$.
\end{restatable}
Before proving~\thmref{thm:Ampthm}, we first show that $B_{ms}$ is an optimal teacher for the large margin classifier.
\begin{restatable}{proposition}{propBms}
$B_{ms}$ is an optimal teacher for the large margin classifier $\hat\theta_S$.
\end{restatable}
\begin{proof}
We show $R(\hat\theta_{B_{ms}(S)})\le R(\hat\theta_{B(S)})$ for any $ B$ and any $ S$.
If $|B_{ms}(S)|=1$, then $S$ is either all positive or all negative.
In both cases $R(\hat\theta_{B(S)})=1$ for any $B$ by definition.
Thus $R(\hat\theta_{B_{ms}(S)})\le R(\hat\theta_{B(S)})$.
Otherwise $|B_{ms}(S)|=2$; if $B(S)$ is all positive or all negative, we have $R(\hat\theta_{B(S)})=1$ and thus $R(\hat\theta_{B_{ms}(S)})\le R(\hat\theta_{B(S)})$. Otherwise let $x^B_-, x^B_+$ be the inner-most pair of $B(S)$. Since $x^B_-, x^B_+\in S$, then by definition of $B_{ms}$, $R(\hat\theta_{B_{ms}(S)})=|\frac{s_-+s_+}{2}-\theta^*|\le |\frac{x^B_-+x^B_+}{2}-\theta^*|=R(\hat\theta_{B(S)})$.
\end{proof}
Now we show that learning on the whole $S$ incurs $O(n^{-1})$ risk. First, we give the following lemma for the exact tail probability of $R(\hat\theta_S)$.
\begin{restatable}{lemma}{lemAmp}
\label{lem:Amp}
For the large margin classifier $\hat\theta_S$, we have
\begin{equation}\label{eq:Amptail}
\P{}{R(\hat\theta_S)>\epsilon}=\left\{
\begin{aligned}
&(1-\epsilon)^n+(\epsilon)^n &&\text{ $0<\epsilon\le\frac{1}{2}$}\\
&(\frac{1}{2})^{n-1} &&\text{ $\frac{1}{2}<\epsilon<1$}\\
&0 &&\mbox{ $\epsilon=1$}.
\end{aligned}
\right.
\end{equation}
\end{restatable}
The proof for~\lemref{lem:Amp} is in the appendix.
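The exact tail~\eqref{eq:Amptail} can be checked against a Monte Carlo simulation of $p_{\setfont{Z}}$; the following sketch (names are ours, not part of the proof) agrees with the formula on the $0<\epsilon\le\frac{1}{2}$ branch:

```python
import random

def tail_exact(eps, n):
    """Exact P(R(theta_hat_S) > eps) from the lemma, valid for 0 < eps <= 1/2."""
    return (1 - eps) ** n + eps ** n

def tail_mc(eps, n, trials, seed=0):
    """Monte Carlo estimate of the same tail probability under p_Z:
    x ~ U[-1, 1], y = +1 iff x >= 0, learner A_lm."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.uniform(-1, 1) for _ in range(n)]
        neg = [x for x in xs if x < 0]
        pos = [x for x in xs if x >= 0]
        if not neg or not pos:
            risk = 1.0  # single-class sample: theta_hat = +/-1
        else:
            risk = abs(max(neg) + min(pos)) / 2.0
        hits += risk > eps
    return hits / trials
```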
Now we show that $R(\hat\theta_S)$ is $O(n^{-1})$.
\begin{restatable}{theorem}{thmAmp}
\label{thm:Amp}
Let $S$ be an $n$-item $iid$ sample drawn from $p_{\setfont{Z}}$. Then $\forall \delta\in(0,1)$ and $\forall n\ge2$,
\begin{equation}
\P{}{R(\hat\theta_S)>\frac{\delta}{n}}>1-\delta.
\end{equation}
\end{restatable}
\begin{proof}
According to~\lemref{lem:Amp}, for $\epsilon\le\frac{1}{2}$, we have
\begin{equation}\label{lowerlem2}
\P{}{R(\hat\theta_S)>\epsilon}>(1-\epsilon)^n>1-n\epsilon.
\end{equation}
Note that $n\ge2$, thus $\frac{\delta}{n}\le\frac{1}{2}$. Let $\epsilon=\frac{\delta}{n}$ in~\eqref{lowerlem2} we have
\begin{equation}
\begin{aligned}
&\P{}{R(\hat\theta_S)>\frac{\delta}{n}}>1-n\frac{\delta}{n}=1-\delta.
\end{aligned}
\end{equation}
\end{proof}
Now we work out the risk of the most symmetric teacher $B_{ms}$.
To bound the risk of $B_{ms}$ we need the following key lemma, which shows that the sample complexity with the teacher is $O(\epsilon^{-1/2})$.
\begin{restatable}{lemma}{lemAmshighP}
\label{lem:Ams_highP}
Let $n=4m$, where $m$ is an integer. Let $S$ be an $n$-item $iid$ sample drawn from $p_{\setfont{Z}}$. $\forall\epsilon>0, \forall\delta\in(0,1)$, $\exists \setfont{M}(\epsilon, \delta)=\max\{\frac{3e}{\ln4-1}\ln\frac{3}{\delta}, (\frac{1}{\epsilon}\ln\frac{3}{\delta})^{\frac{1}{2}}\}$ such that $\forall m\ge\setfont{M} (\epsilon, \delta)$, $\P{}{R(\hat\theta_{B_{ms}(S)})\le\epsilon}>1-\delta$.
\end{restatable}
\begin{proof}
We give a proof sketch and the details are in the appendix. Let $S_1=\{x\mid (x, 1)\in S\}$ and $S_2=\{x\mid (x, -1)\in S\}$ respectively. Then we have $|S_1|+|S_2|=4m$. Define event $E_1:\{|S_1|\ge m\land |S_2|\ge m\}$. Given that $m\ge \frac{3e}{\ln4-1}\ln\frac{3}{\delta}$, one can show $P(E_1)>1-\frac{\delta}{3}$. Since $|S_1|+|S_2|=4m$, either $|S_1|\ge2m$ or $|S_2|\ge2m$. Without loss of generality we assume $|S_1|\ge2m$. We then divide the interval [0, 1] equally into $N=\lfloor m^2(\ln\frac{3}{\delta})^{-1} \rfloor$ segments. The length of each segment is $\frac{1}{N}=O(\frac{1}{m^2})$ as Figure~\ref{segments_copy} shows.
\begin{figure}[H]
\centering
\includegraphics[width=3in,height=0.5in]{segments.pdf}
\caption{Dividing $[0,1]$ equally into $N$ segments.}
\label{segments_copy}
\end{figure}
Let $N_o$ be the number of segments that are occupied by the points in $S_1$. Note that $N_o$ is a random variable.
Let $E_2$ be the event that $N_o\ge m$. Then one can show $P(E_2)>1-\frac{\delta}{3}$. By union bound, we have $P(E_1, E_2)>1-\frac{2\delta}{3}$. Let $E_3$ be the following event: there exists a point $x_2$ in $S_2$ such that $-x_2$, the flipped point, lies in the same segment as some point $x_1$ in $S_1$. One can show that $P(E_3\mid E_1, E_2)>1-\frac{\delta}{3}$. Thus $P(E_3)\ge P(E_1, E_2, E_3)=P(E_3\mid E_1, E_2)P(E_1, E_2)\ge (1-\frac{\delta}{3})(1-\frac{2\delta}{3})> 1-\delta$. If $E_3$ happens, then $|x_1+x_2|=|x_1-(-x_2)|\le\frac{1}{N}$. Note that $m\ge(\frac{1}{\epsilon}\ln\frac{3}{\delta})^{\frac{1}{2}}$ and $N=\lfloor m^2(\ln\frac{3}{\delta})^{-1} \rfloor\ge \frac{m^2}{2}(\ln\frac{3}{\delta})^{-1} $, thus $\frac{1}{N}\le \frac{2}{m^2}\ln\frac{3}{\delta}\le2\epsilon$. Therefore $R(\hat \theta_{B_{ms}(S)})=|\frac{s_-+s_+}{2}|\le|\frac{x_1+x_2}{2}|\le\epsilon$.
\end{proof}
Rewriting $\epsilon$ in~\lemref{lem:Ams_highP} as a function of $n$, we have the following theorem.
\begin{restatable}{theorem}{thmAmsRes}
\label{thm:Ams}
Let $S$ be an $n$-item $iid$ sample drawn from $p_{\setfont{Z}}$, then $\exists N_1(\delta)=\frac{12e}{\ln4-1}\ln\frac{3}{\delta}$ such that $\forall n\ge N_1$,
\begin{equation}
\P{}{R(\hat\theta_{B_{ms}(S)})\le\frac{16}{n^2}\ln\frac{3}{\delta}}>1-\delta.
\end{equation}
\end{restatable}
\begin{proof}
Note that if $n\ge N_1(\delta)=\frac{12e}{\ln4-1}\ln\frac{3}{\delta}$, then $m=\frac{n}{4}\ge\frac{3e}{\ln4-1}\ln\frac{3}{\delta}$, thus the minimum $\epsilon$ that satisfies $m\ge\setfont{M} (\epsilon, \delta)$ is $\frac{1}{m^2}\ln\frac{3}{\delta}=\frac{16}{n^2}\ln\frac{3}{\delta}$.
\end{proof}
Now we can conclude super teaching:
\begin{proof}[\textbf{Proof of~\thmref{thm:Ampthm}}]
According to~\thmref{thm:Ams}, $\exists N_1(\frac{\delta}{2})$ such that $\forall n\ge N_1$, $\P{}{R(\hat\theta_{B_{ms}(S)})\le\frac{16}{n^2}\ln\frac{6}{\delta}}>1-\frac{\delta}{2}$. Note that $N_1\ge2$, thus according to~\thmref{thm:Amp}, $\forall n\ge N_1$, $\P{}{R(\hat\theta_S)>\frac{\delta}{2n}}>1-\frac{\delta}{2}$. Let $c_n=\frac{32}{n\delta}\ln\frac{6}{\delta}$ and $N_2(\delta)=\frac{32}{\delta}\ln\frac{6}{\delta}$ so that $c_{N_2}=1$. Let $N(\delta)=\max\{N_1(\delta), N_2(\delta)\}$. By union bound, $\forall n\ge N$, with probability at least $1-\delta$, we have both $R(\hat\theta_S)>\frac{\delta}{2n}$ and $R(\hat\theta_{B_{ms}(S)})\le\frac{16}{n^2}\ln\frac{6}{\delta}$, which gives $\P{}{R(\hat\theta_{B_{ms}(S)})\le c_nR(\hat\theta_S)}>1-\delta$, where $c_n\le c_{N_2}=1$.
\end{proof}
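The super teaching ratio of~\thmref{thm:Ampthm} is also easy to observe empirically. The sketch below (names are ours) estimates the median of $R(\hat\theta_{B_{ms}(S)})/R(\hat\theta_S)$ over simulated samples; note the ratio never exceeds $1$, since the inner-most pair of $S$ is itself a candidate pair for $B_{ms}$:

```python
import random
import statistics

def simulate_ratio(n, trials, seed=1):
    """Median of R(theta_hat_{B_ms(S)}) / R(theta_hat_S) over iid
    samples from p_Z (x ~ U[-1, 1], y = sign of x, theta* = 0)."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(trials):
        xs = [rng.uniform(-1, 1) for _ in range(n)]
        neg = [x for x in xs if x < 0]
        pos = [x for x in xs if x >= 0]
        if not neg or not pos:
            continue  # single-class sample: both learners fail identically
        risk_full = abs(max(neg) + min(pos)) / 2.0
        # risk after teaching with the most symmetric pair:
        risk_taught = min(abs(a + b) / 2.0 for a in neg for b in pos)
        if risk_full > 0:
            ratios.append(risk_taught / risk_full)
    return statistics.median(ratios)
```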
\section{An MINLP Algorithm for Super Teaching}
\label{sec:MINLP}
Although the problem of proving super teaching ratios for a specific learner is interesting, we now focus on an algorithm to find a super teaching set for general learners \emph{given} a training set $S$.
That is, we find a subset $B(S) \subset S$ so that $R(\hat\theta_{B(S)}) < R(\hat\theta_S)$.
We start by formulating super teaching as a subset selection problem.
To this end, we introduce binary indicator variables $b_1, \ldots, b_n$ where $b_i=1$ means $z_i \in S$ is included in the subset.
We consider learners $A$ that can be defined via convex empirical risk minimization:
\begin{equation}\label{learner}
A(S) \triangleq \ensuremath{\mbox{argmin}}_{\theta \in \Theta} \sum_{i=1}^n \tilde\ell(\theta, z_i) + \frac{\lambda}{2} \|\theta\|^2.
\end{equation}
For simplicity we assume there is a unique global minimum which is returned by $\ensuremath{\mbox{argmin}}$.
Note that we use $\tilde\ell$ in~\eqref{learner} to denote the (surrogate) convex loss used by $A$ in performing empirical risk minimization.
For example, $\tilde\ell$ may be the negative log likelihood for logistic regression.
$\tilde\ell$ is potentially different from $\ell$ (e.g. the 0-1 loss) used by the teacher to define the teaching risk $R$ in~\eqref{eq:R}.
We formulate super teaching as the following bilevel combinatorial optimization problem:
\begin{eqnarray}
&&\min_{b\in\{0,1\}^n,\hat\theta\in\Theta} R(\hat\theta)\\
\mbox{s.t. } && \hat\theta = \ensuremath{\mbox{argmin}}_{\theta \in \Theta} \sum_{i=1}^n b_i \tilde\ell(\theta, z_i) + \frac{\lambda}{2} \|\theta\|^2\label{eq:ML}.
\end{eqnarray}
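For very small $n$, the bilevel problem can be solved exactly by enumerating all $2^n$ subsets, which is a useful sanity check on any solver. The sketch below (names are ours) does this for one-dimensional ridge regression, whose lower-level problem has the closed form $\hat\theta=\sum_i x_iy_i/(\sum_i x_i^2+\lambda/2)$ by the first-order condition:

```python
import itertools

def ridge_1d(data, lam=0.1):
    """argmin_theta sum_i (x_i*theta - y_i)^2 + (lam/2)*theta^2."""
    sxx = sum(x * x for x, _ in data)
    sxy = sum(x * y for x, y in data)
    return sxy / (sxx + lam / 2.0)

def best_subset(S, theta_star, lam=0.1):
    """Exact bilevel solution by brute force: the nonempty subset
    whose ridge estimate is closest to theta_star."""
    best = None
    for r in range(1, len(S) + 1):
        for sub in itertools.combinations(S, r):
            risk = abs(ridge_1d(sub, lam) - theta_star)
            if best is None or risk < best[0]:
                best = (risk, list(sub))
    return best
```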
Under mild conditions, we may replace the lower level optimization problem (i.e. the machine learning problem~\eqref{eq:ML}) by its first order optimality (KKT) conditions:
\begin{eqnarray}
\min_{b\in\{0,1\}^n,\hat\theta\in\Theta} &&R(\hat\theta) \label{eq:MINLP}\\
\mbox{s.t. } && \sum_{i=1}^n b_i \nabla_\theta \tilde\ell(\hat\theta, z_i) + {\lambda}\hat\theta = 0. \nonumber
\end{eqnarray}
This reduces the bilevel problem but the constraint is nonlinear in general, leading to
a mixed-integer nonlinear program (MINLP), for which effective solvers exist.
We use the MINLP solver in NEOS~\cite{czyzyk1998neos}.
\section{Simulations}
\label{sec:exp}
We now apply the framework in section~\ref{sec:MINLP} to logistic regression and ridge regression, and show that the solver indeed selects a super-teaching subset that is far better than the original training set $S$.
\subsection{Teaching Logistic Regression $A_{lr}$}
Let $\setfont{X}=\setfont{R}^d$, $\Theta=\setfont{R}^d$, $\theta^* = (\frac{1}{\sqrt{d}},..., \frac{1}{\sqrt{d}})$, $p_{\setfont{Z}}(x)=\mathcal{N}(0, I)$. Let $p_{\setfont{Z}}(y=1\mid x)=\ind{x^\top\theta^*>0}$, which is deterministic given $x$.
Logistic regression estimates $\hat\theta_S=A_{lr}(S)$ with~\eqref{learner},
where
$\lambda=0.1$ and
$\tilde\ell(z_i)=\log (1+\exp(-y_ix_i^\top\theta))$.
In contrast, the teacher's risk is defined to be the expected 0-1 loss: $R(\hat\theta)=\E{p_{\setfont{Z}}}{\ind{\hat\theta(x)\neq y}}$, where $\hat\theta(x)$ is the label of $x$ predicted by $\hat\theta$.
Since $p_{\setfont{Z}}$ is symmetric about the origin, the risk can be rewritten in terms of the angle between $\hat\theta$ and $\theta^*$: $R(\hat\theta)=\arccos(\frac{\hat\theta^\top\theta^*}{||\hat\theta||\cdot||\theta^*||})/\pi$.
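The angle identity can be verified against a Monte Carlo estimate of the expected 0-1 loss; a sketch (names are ours):

```python
import math
import random

def angle_risk(theta_hat, theta_star):
    """R(theta_hat) = arccos(cosine of the angle between
    theta_hat and theta_star) / pi."""
    dot = sum(a * b for a, b in zip(theta_hat, theta_star))
    na = math.sqrt(sum(a * a for a in theta_hat))
    nb = math.sqrt(sum(b * b for b in theta_star))
    return math.acos(dot / (na * nb)) / math.pi

def mc_01_risk(theta_hat, theta_star, trials=200000, seed=0):
    """Monte Carlo estimate of E[1{predicted label != y}] with
    x ~ N(0, I) and y = +1 iff x^T theta_star > 0."""
    rng = random.Random(seed)
    d = len(theta_star)
    errs = 0
    for _ in range(trials):
        x = [rng.gauss(0, 1) for _ in range(d)]
        y = 1 if sum(a * b for a, b in zip(x, theta_star)) > 0 else -1
        yhat = 1 if sum(a * b for a, b in zip(x, theta_hat)) > 0 else -1
        errs += y != yhat
    return errs / trials
```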
Instantiating~\eqref{eq:MINLP} we have
\begin{eqnarray}
\min_{b\in\{0,1\}^n,\hat\theta\in\setfont{R}^d} &&\arccos(\frac{\hat\theta^\top\theta^*}{||\hat\theta||\cdot||\theta^*||})/\pi \label{eq:Logistic}\\
\mbox{s.t. } && \lambda\hat\theta-\sum_{i=1}^n \frac{b_iy_ix_i}{1+\exp(y_ix_i^\top\hat\theta)} = 0. \nonumber
\end{eqnarray}
We run experiments to study the effectiveness and scalability of the NEOS MINLP solver on~\eqref{eq:Logistic}, specifically with respect to the training set size $n=|S|$ and dimension $d$.
In the first set of experiments we fix $d=2$ and vary $n=16, 64, 256$ and $1024$.
For each $n$ we run 10 trials.
In each trial we draw an $n$-item $iid$ sample $S \sim p_{\setfont{Z}}$ and call the solver on~\eqref{eq:Logistic}.
The solver's solution to $b_1 \ldots b_n$ indicates the super teaching set $B(S)$.
We then compute an empirical version of the super teaching ratio:
$$\hat c_n=R(\hat\theta_{B(S)})/R(\hat\theta_S).$$
\tabcolsep=0.09cm
\begin{table}[ht]
\small
\centering
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
& \multicolumn{3}{|c| }{Logistic Regression} & \multicolumn{3}{ |c| }{Ridge Regression} \\
\hline
$n=|S|$ &$\hat c_n $ & $|B(S)|/n$ & time (s) & $\hat c_n $ & $|B(S)|/n$ &time (s)\\
\hline
16 & 8.5e-4&0.50&3.4e-1
&7.8e-3&0.50&6.3e-1\\
64 &1.3e-3&0.69&3.5e+0
&7.5e-3&0.70&5.8e+0\\
256 &6.3e-3&0.67&6.0e+1
&5.6e-3&0.84&1.4e+2 \\
1024 &1.3e-2&0.86&1.4e+3
&4.1e-3&0.92& 3.3e+3\\
\hline
\end{tabular}
\caption{Super teaching as $n$ changes.}\label{Ep:empc}
\end{table}
\begin{table}[ht]
\small
\centering
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
& \multicolumn{3}{|c| }{Logistic Regression} & \multicolumn{3}{ |c| }{Ridge Regression} \\
\hline
$d$ &$\hat c_n$ & $|B(S)|/n$ & time (s) & $\hat c_n $ & $|B(S)|/n$ &time (s)\\
\hline
2 & 3.1e-3 &0.67 &5.4e-1 &
3.3e-3 &0.55 &6.6e+0\\
4 & 2.4e-3 &0.44&8.5e+1 &
7.2e-3 &0.53 &5.8e+1 \\
8 & 1.8e-1&0.39 &4.1e+0 &
1.5e-1 &0.47 &6.0e+0 \\
16 & 5.6e-1&0.42 &5.1e+0 &
4.3e-1 &0.59 &9.3e+0 \\
32 & 8.2e-1&0.58 &1.0e+1 &
6.4e-1 &0.86 &3.0e+0 \\
\hline
\end{tabular}
\caption{Super teaching as $d$ changes.}\label{Ep:empd}
\end{table}
In the left half of Table~\ref{Ep:empc} we report the median of the following quantities over 10 trials: $\hat c_n$, the fraction of the training items selected for super teaching $|B(S)|/n$, and the NEOS server running time.
The main result is that $\hat c_n\ll1$ for all $n$, which means the solver indeed selects a super-teaching set $B(S)$ that is far better than the original $iid$ training set $S$.
Therefore, MINLP is a valid algorithm for finding a super teaching set.
Second, we note that the solver tends to select a large subset since the median $|B(S)|/n \ge 1/2$.
This is interesting as it is known that when $S$ is dense, one can select extremely sparse super teaching sets, as small as a few items, to teach effectively~\cite{JMLR:v17:15-630}.
Understanding the different regimes remains future work.
Finally, the running time grows fast with $n$.
For example, when $n=1024$ it takes around half an hour to solve~\eqref{eq:Logistic}.
Future work needs to address this bottleneck in applying MINLP to large problems.
In the second set of experiments we fix $n=32$ and vary $d = 2, 4, 8, 16, 32$.
The left half of Table~\ref{Ep:empd} shows the results.
The empirical teaching ratio $\hat c_n$ is still below 1 in all cases, showing super teaching.
But as the dimension of the problem increases $\hat c_n$ deteriorates toward 1.
Nonetheless, even when $d=n$ we still see a median super teaching ratio of 0.82; the corresponding super teaching set $B(S)$ contains fewer training items ($58\%$ of $n$) than the dimension.
It is interesting that the MINLP algorithm intentionally created a ``high dimensional'' learning problem (i.e., with higher dimension $d$ than number of selected training items $|B(S)|$) to achieve better teaching, knowing that the learner $A_{lr}$ is regularized.
The running time does not change dramatically.
\subsection{Teaching Ridge Regression $A_{rr}$}
Let $\setfont{X}=\setfont{R}^d$, $\Theta=\setfont{R}^d$, $\theta^* = (\frac{1}{\sqrt{d}},..., \frac{1}{\sqrt{d}})$, $p_{\setfont{Z}}(x)=\mathcal{N}(0, I)$,
$p_{\setfont{Z}}(y\mid x)=\mathcal{N}(y; x^\top\theta^*, 0.1)$.
Let the teaching risk be the parameter difference: $R(\hat\theta)=\|\hat\theta-\theta^*\|$.
Given a sample $S$ with $n$ $iid$ items drawn from $p_{\setfont{Z}}$,
ridge regression estimates $\hat\theta_S=A_{rr}(S)$ with $\lambda=0.1$ and
$\tilde\ell(z_i)=(x_i^\top\theta-y_i)^2$.
The corresponding MINLP is:
\begin{eqnarray}
\min_{b\in\{0,1\}^n,\hat\theta\in\setfont{R}^d} &&||\hat\theta-\theta^*|| \label{eq:Ridge}\\
\mbox{s.t. } && \lambda\hat\theta+2\sum_{i=1}^n b_i(x_i^\top\hat\theta- y_i) x_i = 0. \nonumber
\end{eqnarray}
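For any fixed $b$, the stationarity constraint in~\eqref{eq:Ridge} is linear in $\hat\theta$ and can be solved in closed form, $\hat\theta=2\sum_i b_ix_iy_i/(\lambda+2\sum_i b_ix_i^2)$ in one dimension, which gives a quick consistency check on candidate solutions. A one-dimensional sketch (names are ours):

```python
def ridge_theta(S, b, lam=0.1):
    """Closed-form solution (d = 1) of the stationarity condition
    lam*theta + 2*sum_i b_i*(x_i*theta - y_i)*x_i = 0."""
    sxx = sum(bi * x * x for bi, (x, _) in zip(b, S))
    sxy = sum(bi * x * y for bi, (x, y) in zip(b, S))
    return 2.0 * sxy / (lam + 2.0 * sxx)

def stationarity_residual(S, b, theta, lam=0.1):
    """Left-hand side of the KKT constraint; zero at the optimum."""
    return lam * theta + 2.0 * sum(
        bi * (x * theta - y) * x for bi, (x, y) in zip(b, S))
```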
We run the same set of experiments.
Tables ~\ref{Ep:empc} and~\ref{Ep:empd} show the results,
which are qualitatively similar to teaching logistic regression.
Again, we see the empirical super teaching ratio $\hat c_n \ll 1$, indicating the presence of super teaching.
\begin{figure}[ht]
\centering
\begin{subfigure}[t]{0.45\columnwidth}
\centering
\includegraphics[width=.9\columnwidth]{logistic}
\caption{logistic regression}
\label{fig:logistic}
\end{subfigure}
~
\begin{subfigure}[t]{0.45\columnwidth}
\centering
\includegraphics[width=.9\columnwidth]{ridge}
\caption{ridge regression}
\label{fig:ridge}
\end{subfigure}%
\caption{Typical trials from the MINLP algorithm}
\label{fig:example}
\end{figure}
Finally, Figure~\ref{fig:example} visualizes one typical trial each for teaching logistic regression and ridge regression.
$S$ consists of both dark and light points, with the dark ones representing $B(S)$ optimized by MINLP.
The dashed line shows $\hat\theta_S$, while the solid lines show $\hat\theta_{B(S)}$.
The ground truth
($x_1+x_2=0$ in logistic regression, $y=x$ in ridge regression)
essentially overlaps with the solid lines.
Specifically, the super taught models $\hat\theta_{B(S)}$ have negligible risks of 2.5e-4 and 3.3e-3, whereas models $\hat\theta_S$ trained from the whole $iid$ sample $S$ incur much larger risks of 0.03 and 0.16, respectively.
\section{Related Work}
\label{sec:relatedwork}
There have been several research threads in different communities aimed at reducing a data set while maintaining its utility.
The first thread is training set reduction~\cite{garcia2012prototype,zeng2005smo,Wilson2000}, which during training time prunes items in $S$ in an attempt to improve the learned model.
The second thread is coresets~\cite{har2011geometric,2017arXiv170306476B}, a summary of $S$ such that models learned on the summary are provably competitive with models learned on the full data set $S$.
But as they do not know the target model $p_{\setfont{Z}}$ or $\theta^*$, these methods cannot truly achieve super teaching.
The third thread is curriculum learning~\cite{icml2009_006} which showed that smart initialization is useful for nonconvex optimization.
In contrast, our teacher can directly encode the true model and therefore obtain faster rates.
The final thread is sample compression~\cite{floyd1995sample}, where a compression function chooses a subset $T\subset S$ and a reconstruction function to form a hypothesis.
Our present work has some similarity with compression, which allows increased accuracy since compression bounds can be used as regularization~\cite{kontorovich2017nearest}.
The theoretical study of machine teaching has focused on the teaching dimension, i.e. the minimum training set size needed to exactly teach a target concept $\theta^*$~\cite{Goldman1995Complexity,Shinohara1991Teachability,Zhu2017NoLearner,JMLR:v18:16-460,Liu2016Teaching,Zhu2015Machine,JMLR:v15:doliwa14a,zhu2013machine,Zilles2011Models,Balbach2009Recent,982362,conf/colt/AngluinK97,Goldman1996Teaching,DBLP:journals/jcss/Mathias97,Balbach2006Teaching,Balbach:2008:MTU:1365093.1365255,Kobayashi2009Complexity,journals/ml/AngluinK03,conf/colt/RivestY95,Hegedus1995Generalized,journals/ml/Ben-DavidE98}. Most of the prior work assumed a synthetic teaching setting where $S$ is the whole item space, which is often unrealistic. Liu \textit{et al.} considered approximate teaching in the finite $S$ setting~\cite{Liu2017Iterative}, though their analysis focused on a specific SGD learner.
Our super teaching setting applies to arbitrary learners, and we allow approximate teaching -- namely we do not require the teacher to teach exactly the target model, which is infeasible in our pool-based teaching setting with a finite $S$.
Machine teaching applications include education~\cite{Clement2016edm,Patil2014Optimal,singla2014near,NIPS2013_4887,Cakmak2011Mixed,Rafferty:2011:FTP:2026506.2026545}, computer security~\cite{Alfeld2017Explicit,Alfeld2016Data,Mei2015Machine}, and interactive machine learning~\cite{Suh2016Label,AAAI124954,Khan2011How}.
By establishing the existence of super-teaching, the present paper can guide the process of finding a more effective training set for these applications.
\section{Discussions and Conclusion}
\label{discuss:superteach}
We presented super-teaching: when the teacher already knows the target model, she can often choose from a given training set a smaller subset that trains a learner better.
We proved this for two learners, and provided an empirical algorithm based on mixed integer nonlinear programming to find a super teaching set.
However, much needs to be done on the theory of super teaching.
We give two counterexamples to illustrate that not all learners are super-teachable.
\begin{example}[MLE of interval]
Let $\setfont{X}=[0, \theta^*]$, where $\theta^*\in \setfont{R}^+$. $p_{\setfont{Z}}(x)=U(\setfont{X})$. Given a $n$-item training set $S$, the MLE for $\theta^*$ is $\hat\theta_S=A_{int}(S)=\max_{i=1:n}x_i$. The risk is defined as $R(\hat\theta_S)=|\hat\theta_S-\theta^*|$.
We show $A_{int}$ is not super-teachable.
$\hat\theta_{B(S)}=\max_{x_i\in B(S)}x_i\le \max_{x_i\in S}x_i=\hat\theta_S$.
Since $\hat\theta_S \le \theta^*$, $R(\hat\theta_{B(S)})=|\hat\theta_{B(S)}-\theta^*|\ge |\hat\theta_S-\theta^*|=R(\hat\theta_S)$.
\end{example}
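The argument can be checked exhaustively on a small sample (a sketch; names are ours): removing items can only lower the maximum, so no subset beats the full sample.

```python
import itertools

def mle_interval(xs):
    """MLE of theta* for x ~ U[0, theta*]: the sample maximum."""
    return max(xs)

def best_teaching_risk(S, theta_star):
    """Risk of the best nonempty teaching subset of S."""
    return min(
        abs(mle_interval(sub) - theta_star)
        for r in range(1, len(S) + 1)
        for sub in itertools.combinations(S, r))
```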
We can generalize this to a classification setting, and show that neither the least nor the greatest consistent hypothesis is super-teachable:
\begin{example}[Consistent learners]\label{ex:consistent}
Let $\setfont{X}=[x_{\min}, x_{\max}] \subset \setfont{Z}$ be an interval over the integer grid.
The hypothesis space is $\Theta=\{[a,b]\subseteq\setfont{X}: \mbox{$y=1$ in $[a,b]$ and $-1$ outside}\}$.
$\theta^*=[a^*, b^*] \in \Theta$.
$p_{\setfont{Z}}$ is uniform on $\setfont{X}$ and noiseless $y$ labeled according to $\theta^*$.
The risk $R(\hat\theta_S)$ is the size of the symmetric difference between the two intervals $\hat\theta_S$ and $\theta^*$, normalized by $x_{\max} - x_{\min}$.
Given a sample $S$, the least consistent learner $A_{lc}$ learns the tightest interval over positive items in $S$:
$\hat\theta^{lc}_S=A_{lc}(S) \triangleq \left[\min_{\substack{i=1:n\\y_i=1}} x_i, \max_{\substack{i=1:n\\y_i=1}} x_i \right].$
$\hat\theta^{lc}_S=\emptyset$ if $S$ does not contain positive items.
The greatest consistent learner $A_{gc}$ extends the hypothesis interval
in both directions as much as possible before hitting negative points in $S$.
If $S$ has no positive we define $\hat\theta^{gc}_S=\emptyset$, too.
\begin{restatable}{proposition}{Consistent}
Neither $A_{lc}$ nor $A_{gc}$ is super-teachable.
\end{restatable}
\begin{proof}
We first show $A_{lc}$ is not super-teachable. Note that $A_{lc}$ learns the tightest interval consistent with $S$, thus we always have $\hat\theta^{lc}_S\subseteq \theta^*$. Now we show that $\hat\theta^{lc}_{B(S)}\subseteq \hat\theta^{lc}_S$ is always true so that $R(\hat\theta^{lc}_S)\le R(\hat\theta^{lc}_{B(S)})$ follows.
If $\theta^*=\emptyset$, then trivially $\hat\theta^{lc}_{B(S)}= \hat\theta^{lc}_S=\emptyset$.
Now assume $\theta^*\neq\emptyset$.
If $\exists (x, 1)\in B(S)$, let $[a_1, b_1]=\hat\theta^{lc}_{B(S)}$. Note that $\hat\theta^{lc}_S\neq \emptyset$ because $B(S)\subseteq S$ and thus $S$ has at least one positive point. Let $\hat\theta^{lc}_S=[a_2,b_2]$.
Now $a_1=\min\{x\mid (x, 1)\in B(S)\}\ge \min\{x\mid (x, 1)\in S\}=a_2$, and $b_1=\max\{x\mid (x, 1)\in B(S)\}\le \max\{x\mid (x, 1)\in S\}=b_2$. Thus we have $\hat\theta^{lc}_{B(S)}\subseteq \hat\theta^{lc}_S$.
If $\nexists (x, 1)\in B(S)$, $\hat\theta^{lc}_{B(S)}=\emptyset$ and $\hat\theta^{lc}_{B(S)}\subseteq \hat\theta^{lc}_S$ is always true.
Thus $\hat\theta^{lc}_{B(S)}\subseteq \hat\theta^{lc}_S\subseteq \theta^*$ for any $B$ and any $S$.
The proof for $A_{gc}$ is similar by showing $\theta^*\subseteq \hat\theta^{gc}_S\subseteq \hat\theta^{gc}_{B(S)}$.
\end{proof}
\end{example}
This leads to an open question: which family of learners are super teachable?
We offer a conjecture here:
we speculate that MLEs (and the derived MAP estimates or regularized empirical risk minimizers) which satisfy the asymptotic normality conditions~\cite{white1982maximum} are super teachable.
This conjecture is motivated by its similarity to the proof in section~\ref{sec:Gaussian}.
Also note that the two counterexamples are classic examples of MLE that do \emph{not} satisfy the asymptotic normality conditions.
Another open question concerns the optimal super-teaching subset size $k$ for a given training set of size $n$. For example, our result on teaching the MLE of Gaussian mean indicates that the rate improves as $k$ grows.
However, our analysis only applies to a fixed $k$.
Further research is needed to identify the optimal $k$.
\textbf{Acknowledgments}: R.N. acknowledges support by NSF IIS-1447449 and CCF-1740707. P.R. is supported in part by grants NSF DMS-1712596, NSF DMS-TRIPODS-1740751, DARPA W911NF-16-1-0551, ONR N00014-17-1-2147 and a grant from the MIT NEC Corporation.
X.Z. is supported in part by NSF CCF-1704117, IIS-1623605, CMMI-1561512, DGE-1545481, and CCF-1423237.
\section{Introduction}
\vspace*{-0.5pt}
Recent high precision experiments to verify the Standard Model of
electroweak interactions require, on the side of the theory,
high-precision calculations resulting in the evaluation of higher
loop diagrams. For specific processes thousands of multiloop Feynman
diagrams do contribute, and it turns out impossible to perform these
calculations by hand. That makes the request for automatization a
high-priority task. In this direction, several program packages are
elaborated \cite{GRACE,FeynmArts,CompHep,Vermaseren}. It appears
absolutely necessary that various groups produce their own solutions
of handling this problem: various ways will be of different
efficiency, have different domains of applicability, and last but not
least, should eventually allow for completely independent checks of
the final results. This point of view motivated us to seek our own
way of automatic evaluation of Feynman diagrams. We have in mind only
higher loop calculations (no multipoint functions).
There exist several kinds of methods for evaluating multiloop Feynman
diagrams with masses. The most direct and universal method is based
on the Feynman parameter representation and subsequent numerical
Monte-Carlo integration \cite{Fujimoto}. The universality is its most
essential advancement. Great progress has been achieved in
implementing this method into combined automatic systems like GRACE
\cite{GRACE}. But the convergence of the Monte-Carlo integration is
rather slow, and there is no way to estimate the actual numerical
error. When the subintegral expression has kinematical singularities,
the error may significantly exceed the estimate \cite{MC-error}.
In a semi-analytic method some integrations are done exactly in terms
of special functions, and the dimension of numerical integration
becomes lower \cite{Kreimer}, \cite{self-energy}. The advantage of
this method is the possibility to deal with tensor structures.
However, an essential drawback is the difficulty of consistently
performing renormalizations, because one has to stay in the integer
dimension.
The method \cite{Pade} of Taylor expansion of a diagram in external
momenta, analytic continuation and Pad{\'e}-like approximations
allows one to recover the behaviour of the function in the whole
complex plane of momentum variables. However, it would require the
knowledge of rather many expansion coefficients in order to get a
sufficiently precise estimate at the threshold.
Therefore, we are going to use the asymptotic expansions of Feynman
diagrams in small/large momenta and masses, where the orders of
expansion in a small ratio of parameters are completely collected
even at the threshold. We are interested in the low-energy physics of
the Standard Model. In this situation there exists a natural small
parameter: the ratio of the characteristic scale of the process to
the scale of the weak interaction, defined by $M_Z$. This provides a
good convergence for asymptotic expansions of physical observables
and, in a number of cases, makes already the leading order sufficient
for the existing precision of the experiments. On the other hand,
one can use the dimensional renormalization, and then there are no
problems with the $R$ operation.
We demonstrate here the functioning of a C program (TLAMM) for the
evaluation of the two-loop anomalous magnetic moment (AMM) of the
muon ${\frac{1}{2}(g-2)}_{\mu}$. This piloting C program must read
the diagrams generated by QGRAF \cite{QGRAF} for a given physical
process, generate the FORM \cite{FORM} source code, start the FORM
interpreter, read and sum up the results for the class of diagrams
under consideration. Here, for the purpose of demonstration, we
apply TLAMM to a closed subclass of diagrams of the Standard Model
which we refer to as a ``toy'' model.
Recent papers have reduced the theoretical uncertainty of the muon
AMM by partially calculating the two-loop electroweak contributions
\cite{Kuraev}, \cite{Czarnecki}. In some cases the following
approximations were used: terms suppressed by $(1-4 \sin^2 \Theta_W
)$ were omitted; the fermion masses of the first two families were
set to zero; diagrams with two or more scalar couplings to the muon,
suppressed by the ratio $\frac{m^2_{\mu}}{M^2_Z}$, were discarded;
the Kobayashi-Maskawa matrix was assumed to be unity; the mass of the
Higgs particle was assumed large as compared to $M_{Z}$.
All of these approximations, except possibly the last one, are well
justified and give rise to small corrections only. We consider it of
great interest to study also the case $M_H \sim M_{Z}$. To perform
this calculation is our main physical motivation. Apart from that,
for technical reasons, it may be interesting to study the functioning
of TLAMM by calculating all $1832$ two-loop diagrams without any
approximation.
The calculation of the AMM of the muon reduces, after differentiation
and contractions with projection operators, to diagrams of
the propagator type with external momentum on the muon mass shell
(for details see \cite{projector}).
The applicability of the asymptotic expansion \cite{asymptotic},
\cite{Smirnov} in the limit of large masses has to be investigated for
diagrams evaluated on the muon mass shell ({\em i.e.},
$p^2=-m_{\mu}^2$). Some diagrams already had a threshold at the muon
mass shell before the expansion. In other diagrams, this threshold
appears in some terms of the expansion. In dimensional
regularization, threshold singularities (like any other infrared
singularities if they are strong enough) manifest themselves as poles
in $\varepsilon$ (in $4-2\varepsilon$ dimensions). They ought to
cancel for the total AMM. We check this for a closed subset of
diagrams in the toy model.
\section{Large-mass expansion}
The asymptotic expansion in the limit of large masses is defined
\cite{Smirnov} as
\begin{equation}
F_G(q, M ,m, \varepsilon) \stackrel{M \to \infty}{\sim }
\sum_{\gamma} F_{G/\gamma}(q,m,\varepsilon) \circ
T_{q^{\gamma}, m^{\gamma}}
F_{\gamma}(q^{\gamma}, M ,m^{\gamma}, \varepsilon)
\end{equation}
\noindent
where $G$ is the original graph, $\gamma$'s are subgraphs involved in
the asymptotic expansion, $G/\gamma$ denotes shrinking $\gamma$ to a
point; $F_{\gamma}$ is the Feynman integral corresponding to
$\gamma$; $ T_{q_{\gamma}, m_{\gamma}} $ is the Taylor operator
expanding the integrand in small masses $\{ m_{\gamma} \}$ and small
external momenta $\{ q_{\gamma} \}$ of the subgraph $\gamma$ (before
integration); ``$ \circ $'' inserts the subgraph expansion in the
numerator of the integrand $F_{G/{\gamma}}$. The sum goes over all
subgraphs $\gamma$ which (a) contain all lines with large masses, and
(b) are one-particle irreducible relative to lines with light or zero
masses.
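The simplest instance of the Taylor operator $T$ is the expansion of a single heavy-mass propagator, $\frac{1}{k^2+M^2}=\frac{1}{M^2}\sum_{j\ge0}\bigl(-\frac{k^2}{M^2}\bigr)^j$. The numerical sketch below (names are ours) illustrates how fast the partial sums converge when $k^2\ll M^2$:

```python
def heavy_propagator_series(k2, M2, order):
    """Partial sum of the large-mass expansion of 1/(k^2 + M^2):
    (1/M^2) * sum_{j=0}^{order} (-k^2/M^2)^j."""
    return sum((-k2 / M2) ** j for j in range(order + 1)) / M2
```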
The following types of integrals occur in the asymptotic expansion of
the muon AMM in the Standard Model:
\begin{enumerate}
\item
two-loop tadpole diagrams with various heavy masses on internal
lines;
\item
two-loop self-energy diagrams, involving contributions from fermions
lighter or heavier than the muon, with the external momentum on
the muon mass shell;
\item
two-loop self-energy diagrams with two or three muon lines and
the external momentum on the muon mass shell;
\item
various products of a one-loop self-energy diagram on shell and a one-loop
tadpole with a heavy mass.
\end{enumerate}
Almost all of these diagrams can be evaluated analytically using the
package SHELL2 \cite{SHELL2}. For our calculation we have modified
this package in the following way:
\begin{enumerate}
\item
There are no restrictions on the indices of the lines
(powers of scalar denominators).
\item
More recurrence relations are used, and the dependence on the space-time
dimension is always explicitly reducible to powers of linear factors.
\item
A new algorithm for the simplification of these rational fractions is
implemented. These modifications
substantially reduce the execution time (in some cases, to about
a hundredth of the original).
\item
New programs for evaluating two-loop tadpole integrals with different
masses are added.
\item
New programs were written for the asymptotic expansion of one-loop
self-energy diagrams (relevant for renormalization)
in the large-mass limit.
\end{enumerate}
\section{The toy model}
As the first step, we concentrate on a ``toy'' model, a ``slice'' of
the Standard Model, involving a light charged spinor $\Psi$, the
photon $A_\mu$, and a heavy neutral scalar field $\Phi$. The scalar
has triple $\left( g \right)$ and quartic $\left( \lambda \right)$
self-interactions, and the Yukawa coupling $\left( y \right)$ to the
spinor. The Lagrangian of the toy model reads (in the Euclidean
space-time)
\begin{eqnarray}
{\it L} & = & \frac{1}{2} \partial_\mu \Phi \partial^\mu \Phi
+ \frac{1}{2} M^2 :\Phi^2: - \frac{g}{3!} :\Phi^3:
- \frac{\lambda}{4!} :\Phi^4:
+ \frac{1}{4} \left(\partial_\mu A_\nu - \partial_\nu A_\mu \right)^2
\nonumber \\ &&
+ \frac{1}{2 \alpha} \left( \partial_\mu A^\mu \right)^2
+ \bar{\Psi} \left( \hat{\partial} + m \right) \Psi
+ i e \bar{\Psi} \hat A \Psi - y \Phi \bar{\Psi} \Psi
\label{toy-model}
\end{eqnarray}
\noindent
where $e$ is the electric charge and $\alpha$ is a gauge fixing parameter.
The main aims of the present investigation are the following:
\begin{enumerate}
\item
Verification of the consistency of the large-mass asymptotic expansion
with the external momentum on the mass shell of a small mass.
In particular, we check the cancellation of all
threshold singularities that appear in individual diagrams
and manifest themselves as infrared poles in $\varepsilon$.
\item
Estimation of the influence of a heavy neutral scalar particle on the AMM
of the muon in the framework of the Standard Model.
\item
Verification of gauge independence (we use the covariant
gauge with an arbitrary parameter $\alpha$).
\end{enumerate}
In what follows we analyze in some detail the diagrams contributing
to the AMM of the fermion in our toy model and specify the
renormalization procedure. Apart from counterterms, $40$ diagrams
contribute to the two-loop AMM of the fermion. After performing the
Dirac and Lorentz algebra, all diagrams can be reduced to some set of
scalar prototypes. A prototype defines the arrangement of massive and
massless lines in a diagram. Individual integrals are specified by
the powers of the scalar denominators, called indices of the lines.
From the point of view of the asymptotic expansion method the
topology of the diagram is essential. All diagrams of the toy model
that contribute to the two-loop AMM can be classified in terms of $9$
prototypes (we omit the pure QED diagrams). These prototypes and
their corresponding subgraphs $\gamma$ involved in the asymptotic
expansion, are given in Fig.\ref{fig3}. In dimensional regularization, the
last subgraphs vanish in cases $1, 4, 7$, and $8$, owing to massless
tadpoles.
\begin{figure}[t]
\vskip 80mm
\centerline{\vbox{\epsfysize=55mm \epsfbox{p.eps}}}
\caption{\label{fig3} The prototypes and their subgraphs
contributing to the large mass expansion.
Thick, thin and dashed lines correspond to
the heavy-mass, light-mass, and massless propagators, respectively.
Dotted lines indicate the lines omitted in the subgraph $\gamma$.}
\end{figure}
\section{Program TLAMM}
All diagrams are generated in symbolic form by means of QGRAF
\cite{QGRAF}. For automatic calculation we have created a special
piloting program written in C. This program, called TLAMM,
\begin{enumerate}
\item
reads QGRAF output;
\item
creates a file containing the complete FORM program for calculating
each diagram;
\item
executes FORM;
\item
reads FORM output, picks out the result of the
calculation, and builds the total sum of all diagrams in a single
file which can be processed by FORM.
\end{enumerate}
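A minimal sketch of this piloting workflow is given below (the real TLAMM is written in C; the file layout, the diagram separator, and the per-diagram FORM stub here are our own assumptions):

```python
import subprocess
import tempfile
from pathlib import Path

def generate_form_programs(qgraf_output: str, workdir: Path):
    # Step 2: write one self-contained FORM program per diagram read
    # from the QGRAF output (diagrams assumed blank-line separated).
    programs = []
    diagrams = [d for d in qgraf_output.split("\n\n") if d.strip()]
    for i, diagram in enumerate(diagrams, start=1):
        frm = workdir / f"d{i:04d}.frm"
        frm.write_text(f"* diagram {i}\n{diagram}\n.end\n")
        programs.append(frm)
    return programs

def run_form(programs, form_binary="form"):
    # Steps 3-4: execute FORM for each diagram and collect the raw
    # outputs, from which the per-diagram results would be extracted
    # and accumulated into a single total sum.
    return [subprocess.run([form_binary, p.name], cwd=p.parent,
                           capture_output=True, text=True).stdout
            for p in programs]

workdir = Path(tempfile.mkdtemp())
programs = generate_form_programs("* dia 1 stub\n\n* dia 2 stub", workdir)
```

In the real program the FORM code is assembled from the local prototype, the Feynman rules, and only those identifiers actually needed for the diagram at hand.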
The program has its own internal notation for topologies. In the
$g-2$ problem of a lepton in the Standard Model there are four
different topologies (see Fig.\ref{4top}). Line number 1 is always assumed
to correspond to a fermion line (a lepton or a neutrino); therefore,
topologies {\tt b} and {\tt c} are distinguished.
\begin{figure}[ht]
\centerline{\vbox{\epsfysize=85mm \epsfbox{ab_run.eps}}}
\caption{\label{4top}Two-loop topologies existing for the AMM of the
lepton in the Standard Model.}
\end{figure}
\noindent
All diagrams are classified according to their {\tt local
prototypes}. The notation consists of a letter (the topology) and
five (or four in the {\tt B} case) integer numbers specifying the masses
on the lines:
\begin{itemize}
\item 0 for zero mass: $\gamma$, $\nu$;
\item 1 for a mass less than $m_\mu$: $e, u, d$;
\item 2 for $m_\mu$;
\item 3 for an intermediate mass between $m_\mu$ and $M_W$: $s, \tau, c, b$;
\item 4 for a heavy mass: $W, Z, t, H$.
\end{itemize}
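This mass-code scheme is encoded straightforwardly; the dictionary and helper below are a hypothetical sketch (the exact prototype string format used by TLAMM is not specified here):

```python
# Mass-scale codes from the local-prototype notation.
MASS_CODE = {
    "gamma": 0, "nu": 0,               # 0: massless
    "e": 1, "u": 1, "d": 1,            # 1: lighter than the muon
    "mu": 2,                           # 2: the muon mass
    "s": 3, "tau": 3, "c": 3, "b": 3,  # 3: between m_mu and M_W
    "W": 4, "Z": 4, "t": 4, "H": 4,    # 4: heavy
}

def local_prototype(topology: str, lines):
    # Topology letter followed by one mass code per internal line.
    return topology + "".join(str(MASS_CODE[p]) for p in lines)
```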
\noindent
In the Feynman gauge ($\alpha=1$), all pseudo-Higgs particles and the
Faddeev-Popov ghosts are heavy, except for one massless ghost.
Each local prototype is calculated by means of the asymptotic
expansion in heavy masses. For each diagram the corresponding FORM
subroutine is called according to the local prototype.
Identifiers for vertices and propagators and the explicit Feynman
rules are read from separate files and then inserted into the FORM
program. Because the number of identifiers needed for the
calculation of all diagrams together may exceed the FORM capacity,
the piloting program TLAMM retains for each diagram only those
involved in its calculation.
All initial settings are defined in a configuration file. The latter
contains information on the file names, identifiers of topologies,
the distribution of momenta, and the description of the model in
terms of a notation that is an extension of QGRAF's.
The program carries out the complete verification of all input files
except the QGRAF output.
There exist several options which allow one to process only the diagrams
\begin{itemize}
\item explicitly listed by number;
\item of a given prototype;
\item of a specified topology.
\end{itemize}
\noindent
There are also some debugging options.
The asymptotic expansion of each prototype is performed by a separate
FORM program. For efficiency of the algorithm the following points are
essential:
\begin{enumerate}
\item
The result of the calculation is presented as a series in small
parameters. Care is taken to avoid the production of unnecessarily high
powers in intermediate results.
\item
For the evaluation of the Feynman integrals, it is necessary to reduce scalar
products of momenta in the numerator to the square combinations which
are present in the denominator. Most efficiently this is done by means of
recurrence relations proposed by Tarasov \cite{Tarasov}.
\end{enumerate}
As a demonstration of the functioning of TLAMM, the unrenormalized
(``bare'') contributions of all the two-loop diagrams to the anomalous
magnetic moment of the fermion in the toy model are presented in
Figs.\ref{fig4}--\ref{fig8} to the leading order in $m^2/M^2$. In
the presented expressions the factor of $(2 \pi)^{-4}$ is implied.
During the calculations, each loop was divided by
$\Gamma(1+\varepsilon)$ rather than multiplied by
$\Gamma(1-\varepsilon)$ as is done in the $\overline{\rm{MS}}$
definition.
\begin{figure}[ht]
\vspace*{-52mm}
\centerline{\vbox{\epsfysize=285mm \epsfbox{1tabl.eps}}}
\vspace*{-62mm}
\caption{\label{fig4}
The two-loop QED contributions to the AMM in the arbitrary gauge.}
\end{figure}
\begin{figure}[ht]
\vspace{-35mm}
\centerline{\vbox{\epsfysize=290mm \epsfbox{2tabl.eps}}}
\vspace{-80mm}
\caption{\label{fig5}
The two-loop AMM contributions proportional to $ y^4 $.}
\end{figure}
\begin{figure}[ht]
\vspace{-37mm}
\centerline{\vbox{\epsfysize=290mm \epsfbox{3tabl.eps}}}
\vspace{-82mm}
\caption{\label{fig6} The two-loop AMM contributions
proportional to $ e^2y^2 $ (to be continued).}
\end{figure}
\begin{figure}[htb]
\vbox{
\vspace*{-60mm}
\centerline{\vbox{\epsfysize=290mm \epsfbox{4tabl.eps}}}
\vspace*{-150mm}
}
\caption{\label{fig7}
The two-loop AMM contributions proportional to $ e^2y^2 $
(continued).}
\end{figure}
\begin{figure}[htb]
\vbox{
\vspace*{-45mm}
\centerline{\vbox{\epsfysize=290mm \epsfbox{5tabl.eps}}}
\vspace*{-180mm}
}
\caption{\label{fig8}
The two-loop AMM contributions involving the triple interaction $ g $ .}
\end{figure}
\section{The results of the calculations}
Initially, we use the minimal subtraction scheme for the parameters of
the Lagrangian (\ref{toy-model}). Afterwards, it is more convenient to
re-express the running masses $m_R$ and $M_R^2$ in terms of the
physical pole masses $m, M^2$ of the particles
(at the one-loop level, relevant here),
\begin{eqnarray}
m_R (\mu^2 )
& = & m \Big \{ 1
- \frac{e_R^2}{16 \pi^2} \Big( 4 - 3 \ln \frac{m^2}{\mu^2} \Big)
+ \frac{y_R^2}{16 \pi^2} \Big[
\frac{5}{4} - \frac{3}{2} \ln \frac{M^2}{\mu^2}
\nonumber \\*
& + &
\left( \frac{m^2}{M^2} \right)
\Big( \frac{1}{6} - \ln \frac{M^2}{m^2} \Big)
+ \left( \frac{m^2}{M^2} \right)^2
\Big( \frac{7}{8} - \frac{3}{2} \ln \frac{M^2}{m^2}
\Big)
\nonumber \\*
& + &
\left( \frac{m^2}{M^2} \right)^3
\Big( \frac{47}{20} - 3 \ln \frac{M^2}{m^2} \Big)
+ \cdots \Big] \Big \},
\\
M_R^2 ( \mu ^2 ) & = & M^2 \Big \{ 1
+ \frac{y_R^2}{16 \pi^2} \Big[ 4 - 2 \ln \frac{M^2}{\mu^2}
- \left( \frac{m^2}{M^2} \right)
\Big( 16 - 12 \ln \frac{M^2}{\mu^2} \Big)
\nonumber \\*
& - &
\left( \frac{m^2}{M^2} \right)^2
\Big( 18 + 12 \ln \frac{M^2}{m^2} \Big)
+
\left( \frac{m^2}{M^2} \right)^3
\Big( \frac{4}{3} - 8 \ln \frac{M^2}{m^2} \Big)
+ \cdots \Big]
\nonumber \\*
& + &
\frac{g_R^2}{16 \pi^2 M^2} \Big( 1 - \frac{\pi}{2 \sqrt{3}}
- \frac{1}{2} \ln \frac{M^2}{\mu^2} \Big)
\Big \},
\end{eqnarray}
\noindent
thus passing to the on-shell renormalization of the masses.
The scalar-field tadpoles never contribute, owing to the normal
ordering of the Lagrangian. In any case, their contributions would be
of no consequence once everything is expressed in terms of the pole
masses. As a result, the quartic interaction constant $\lambda$ drops
out of the two-loop anomalous magnetic moment of the fermion.
We have calculated the asymptotic expansion up to the $7$th order in the
ratio $m^2/M^2$ and have convinced ourselves that all orders of the
expansion are free from on-shell singularities and involve only the
logarithms of the masses. For brevity, we present just the two leading
orders
\begin{eqnarray}
a_\mu & = &
\frac{e_R^2}{4 \pi^2} \left[ \frac{1}{2} \right]
+ \frac{e_R^4}{16 \pi^4} \left[
\frac{197}{144} +\left( \frac{1}{2}-3 \ln \left( 2 \right) \right)
\zeta \left(2 \right) + \frac{3}{4} \zeta \left(3 \right)
+ \frac{1}{6} \ln \frac{m^2}{\mu^2} \right]
\nonumber \\[3mm]
&&
+ \frac{y_R^2}{16 \pi^2}
\left(\frac{m}{M}\right)^2\left [
- \frac{7}{3}+2\ln \frac{M^2}{m^2}
\right ]
\nonumber \\[3mm]
&&
+\frac{e_R^2 y_R^2}{64 \pi^4}
\left(\frac{m}{M}\right)^2\left [
\frac{335}{27}
+ \frac{121}{9} \ln\frac{M^2}{\mu^2}
- \frac{179}{18} \ln\frac{m^2}{\mu^2}
\right.
\nonumber \\[3mm]
&&
\left.
-\frac{13}{2} \ln^2 \frac{M^2}{\mu^2}
-\frac{7}{2}\ln^2 \frac{m^2}{\mu^2}
+ 10 \ln \frac{M^2}{\mu^2} \ln \frac{m^2}{\mu^2}
-29 \zeta \left(2 \right)
\right ]
\nonumber \\[3mm]
&&
+\frac{y_R^3 }{256 \pi^4 } \left(\frac{g_R m}{M^2}\right)
\left [ 2 - \frac{4}{3} \zeta \left(2 \right)
+ \left(\frac{m}{M}\right)^2
\left\{\frac{46}{3}-\frac{189}{2}S_2
+6\ln \frac{M^2}{m^2}
\right\} \right ]
\nonumber \\[3mm]
&&
+\frac{y_R^2}{256 \pi^4}
\left(\frac{g_R m}{M^2}\right)^2\left [
- \frac{5}{3} + \frac{45}{2}S_2
-\frac{13}{6}\frac{\pi}{\sqrt{3}}
-\left(2-\frac{\pi}{\sqrt{3}}\right)
\ln \frac{M^2}{m^2}
\right ]
\nonumber \\[3mm]
&&
+\frac{y_R^4}{256 \pi^4}
\left(\frac{m}{M}\right)^2\left [
\frac{103}{6}
+13\ln \frac{m^2}{\mu^2}
-\frac{74}{3} \ln \frac{M^2}{\mu^2}
+10\ln \frac{M^2}{\mu^2} \ln \frac{M^2}{m^2}
\right ],~
\label{aR}
\end{eqnarray}
\noindent
where
$S_2 = \frac{4}{9 \sqrt{3}} {\rm Cl}_2
\left( \frac{\pi}{3} \right) = 0.2604341$
with ${\rm Cl}_2$ the Clausen function;
$e_R, y_R$ and $g_R$ are renormalized (running) coupling constants.
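The numerical value of $S_2$ can be reproduced directly from the defining series of the Clausen function, ${\rm Cl}_2(\theta)=\sum_{n\ge 1}\sin(n\theta)/n^2$; this numerical check is ours:

```python
import math

def clausen2(theta, n_terms=200_000):
    # Cl_2(theta) = sum_{n>=1} sin(n*theta) / n**2; direct summation
    # converges slowly but safely (oscillation makes the tail tiny).
    return sum(math.sin(n * theta) / n ** 2 for n in range(1, n_terms + 1))

S2 = 4.0 / (9.0 * math.sqrt(3.0)) * clausen2(math.pi / 3.0)
```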
In quantum electrodynamics it is customary to express the running charge
of the electron $e_R(\mu^2)$ in terms of the experimentally measurable
physical charge $e$. The latter is defined by the nonrelativistic
Thomson limit of Compton scattering, that is, as the product
of the on-shell vertex function at zero photon momentum
[with the projection operator $\left(-i \hat{p} + m \right)$
$\gamma_\mu$
$\left(-i \hat{p} + m \right)$~] and the on-shell wave-function
renormalization constant (the residue of the fermion propagator at
its pole). Both quantities are renormalization- and gauge-invariant
but contain an infrared singularity which cancels in the product.
Performing an analogous procedure for the charge of the muon in our
toy model to the one-loop order we get
\begin{equation}
e_R^2 (\mu^2 ) =
e^2 \left(1 - \frac{1}{3} \frac{e^2}{4 \pi^2} \ln \frac{m^2}{\mu^2}
\right).
\end{equation}
The physical Yukawa charge could also be defined in terms of the
on-shell Yukawa vertex. However, its evaluation is a difficult task in
itself. On the other hand, in the Standard Model the Yukawa charge is
usually related to the mass of the fermion rather than kept as an
independent parameter. As a compromise, to define the physical Yukawa
charge $y$, we use the following one-loop expression for the running
charge:
\begin{equation}
y_R^2 ( \mu^2 ) =
y^2 \left(1 + \frac{3}{2} \frac{e^2}{4 \pi^2} \ln \frac{m^2}{\mu^2}
- 5 \frac{y^2}{16 \pi^2} \ln \frac{M^2}{\mu^2} \right).
\end{equation}
\noindent
The calculation through the on-shell vertex would generally give a
finite correction (independent of $\mu^2$) to this formula.
The running of the triple scalar interaction is inessential in the
approximation that we consider. In terms of the physical charges,
the anomalous magnetic moment (\ref{aR}) becomes
\begin{eqnarray}
a_\mu & = &
\frac{e^2}{4 \pi^2} \left[ \frac{1}{2} \right]
+ \frac{e^4}{16 \pi^4}\left[
\frac{197}{144} +\left( \frac{1}{2}-3 \ln \left( 2 \right) \right)
\zeta \left(2 \right) + \frac{3}{4} \zeta \left(3 \right) \right]
\nonumber \\[3mm]
&&
+ \frac{y^2}{16 \pi^2}
\left(\frac{m}{M}\right)^2\left[
- \frac{7}{3}+2\ln \frac{M^2}{m^2}
\right]
\nonumber \\[3mm]
&&
+\frac{e^2y^2}{64 \pi^4}
\left(\frac{m}{M}\right)^2\left[
\frac{335}{27}+\frac{121}{9}\ln \frac{M^2}{m^2}
-\frac{13}{2}\ln^2 \frac{M^2}{m^2}
-29 \zeta \left(2 \right)
\right]
\nonumber \\[3mm]
&&
+\frac{y^3 }{256 \pi^4 } \left(\frac{g m}{M^2}\right)
\left[ 2 - \frac{4}{3} \zeta \left(2 \right)
+ \left(\frac{m}{M}\right)^2
\left\{\frac{46}{3}-\frac{189}{2}S_2
+6\ln \frac{M^2}{m^2}
\right\} \right]
\nonumber \\[3mm]
&&
+\frac{y^2}{256 \pi^4}
\left(\frac{g m}{M^2}\right)^2\left[
- \frac{5}{3}+\frac{45}{2}S_2
-\frac{13}{6}\frac{\pi}{\sqrt{3}}
-\left(2-\frac{\pi}{\sqrt{3}}\right)
\ln \frac{M^2}{m^2}
\right ]
\nonumber \\[3mm]
&&
+\frac{y^4}{256 \pi^4}
\left(\frac{m}{M}\right)^2\left[
\frac{103}{6}-13\ln \frac{M^2}{m^2}
\right].
\end{eqnarray}
The Standard-model motivated values for the Yukawa and triple scalar
interactions are
\begin{equation}
y = - \frac{1}{2} \frac{e}{\sin \Theta_W} \frac{m_\mu}{M_W},
\hspace*{15mm}
g = - \frac{e}{\sin \Theta_W} \frac{3 M_H^2}{2 M_W}.
\end{equation}
\noindent
Then the estimated influence of the heavy neutral scalar particle
on the anomalous magnetic moment of the muon
(beyond the pure QED contribution) is
\begin{eqnarray}
\Delta a_\mu & = &
\frac{e^2}{4 \pi^2} \frac{1}{\sin^2 \Theta_W}\left( \frac{m_\mu}{M_W}
\right)^2
\left(\frac{m_\mu}{M_H}\right)^2
\Biggl[
\Biggl(-\frac{7}{48}+\frac{1}{8}\ln\left(\frac{M_H}{m_\mu}\right)^2
\Biggr)
\nonumber \\[3mm]
& + &
\frac{e^2}{4 \pi^2}
\Biggl(
\frac{335}{432}+\frac{121}{144}\ln\left(\frac{M_H}{m_\mu}\right)^2
-\frac{13}{32}\ln^2\left(\frac{M_H}{m_\mu}\right)^2
-\frac{29}{16} \zeta \left(2 \right)
\Biggr)
\nonumber \\[3mm]
& + &
\frac{e^2}{4 \pi^2}
\frac{1}{\sin^2 \Theta_W}\left( \frac{M_H}{M_W} \right)^2
\Biggl( -\frac{9}{256} - \frac{1}{64} \zeta \left(2 \right)
+ \frac{405}{512}S_2
\nonumber \\*
& &
- \frac{39}{256} \frac{\pi}{\sqrt{3}}
+ \frac{9}{128} \left( \frac{\pi}{\sqrt{3}} - 1 \right)
\ln\left(\frac{M_H}{m_\mu}\right)^2
\Biggr)
\nonumber \\[3mm]
& + &
\frac{e^2}{4 \pi^2}
\frac{1}{\sin^2 \Theta_W}\left( \frac{m_\mu}{M_W} \right)^2
\Biggl(
\frac{379}{1536}-\frac{567}{512}S_2
+ \frac{5}{256} \ln\left(\frac{M_H}{m_\mu}\right)^2
\Biggr) \Biggr].
\end{eqnarray}
\noindent
This contribution is strongly suppressed and seems unlikely to be
observable in future experiments.
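To make the suppression concrete, the leading term of the bracket can be evaluated numerically; the input values below are illustrative Standard-Model-like numbers chosen by us, not taken from the text:

```python
import math

alpha = 1.0 / 137.036              # fine-structure constant, e^2/(4*pi)
e2_over_4pi2 = alpha / math.pi     # e^2/(4*pi^2)
sin2_theta_w = 0.23                # sin^2(Theta_W), illustrative
m_mu, M_W, M_H = 0.10566, 80.4, 125.0   # masses in GeV, illustrative

# Leading term only: the first round bracket of the full expression,
# times the overall prefactor and mass-ratio suppression factors.
log_ratio = math.log((M_H / m_mu) ** 2)
delta_a = (e2_over_4pi2 / sin2_theta_w
           * (m_mu / M_W) ** 2 * (m_mu / M_H) ** 2
           * (-7.0 / 48.0 + log_ratio / 8.0))
```

With these inputs the result is of order $10^{-14}$, many orders of magnitude below the experimental sensitivity for $a_\mu$.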
We conclude that
\begin{enumerate}
\item
the gauge independence of the two-loop contribution to the anomalous
magnetic moment of the fermion has been verified;
\item
all threshold singularities in the on-shell asymptotic expansion
of the diagrams contributing to the AMM cancel;
\item
the correction due to the heavy neutral scalar is
suppressed by the ratio of $m_\mu^4$ to the product of the squared heavy masses.
\end{enumerate}
\noindent
{\bf Acknowledgments}
This work was supported in part by the RFFI grant \# 96-02-17531,
by Volkswagenstiftung and by Bundesministerium
f\"ur Forschung und Technologie.
\section{Introduction}\label{sec:introduction}
The Internet of Things (IoT) is making rapid inroads into people's everyday lives.
The new breed of devices such as the Amazon Echo, Philips Hue lights and the Nest thermostat allows users to build advanced functionality into their homes.
Using simple configuration tools, users can easily modify their homes and add new devices or reconfigure old ones.
Similar developments are under way in the context of automation for commercial buildings.
Today, automation systems for commercial buildings are installed by specialists and typically only configured once during the deployment phase.
Reconfiguring building automation (BA) systems after installation requires considerable effort from trained specialists.
Moreover, the engineering of BA systems is mostly static: adding or removing devices requires the involvement of highly qualified personnel, such as electricians and BA specialists, for deploying and (re-)~configuring the devices and the system.
Expressing dynamic behavior, where devices are added or removed during the operation of the system, is even more difficult.
Sometimes, not even the installation plans of a BA system are available after installation, which means that a \enquote{reverse engineering} of the system becomes necessary for reconfiguration.
Current building automation systems are also centralized to a large degree, with one or more central controllers transferring and converting signals.
The growing computation power of sensors and actuators will make these controllers superfluous, and will allow control algorithms to move into sensors and actuators.
Also, controllers form a single point of failure for the system, where the failure of a controller renders building parts inoperable.
Traditional building automation systems are a form of \emph{orchestration}, where a central controller \emph{orchestrates} the interaction of components.
In this paper, we move towards a \emph{choreography} of sensors and actuators, where the actions of each participant are coordinated not by a single controller but in a distributed fashion.
This transition leads to a number of challenges in management and operation of the system.
We tackle these challenges in this paper by developing a mechanism for the dynamic and automated management of IoT choreographies at runtime.
Our approach is based on semantic technology to describe the structure and configuration of a system based on so-called \emph{Recipes}.
A recipe defines the data flow between IoT devices, so-called \emph{Offerings}, as an abstract template.
We introduce runtime configuration of recipes and allow the definition of communication links to be expressed as rules, so-called \emph{Offering Selection Rules}.
These rules are evaluated at runtime whenever devices are added or removed from the IoT system, in order to keep the recipe choreography running and automatically incorporate new devices.
We illustrate this approach at hand of a use case example from the building domain that is referred to throughout the rest of the paper.
This use case validates the system design and demonstrates the advantages of dynamic choreographies.
Our use case for evaluation is as follows: A recipe defines the interaction between multiple switches, office lights and motion sensors.
The office lights are controlled by motion sensors and are switched on if motion is detected at any one sensor, but only if any of the switches is enabled.
We will demonstrate in the rest of this paper how our framework allows the centralized creation and decentralized operation of such a system, allowing integration of new devices at runtime.
The remainder of this paper is structured as follows: Section~\ref{sec:background-related} provides background to our work and outlines related work.
In Section~\ref{sec:off--recip}, we describe how services are described in our automation system.
Section~\ref{sec:building-choreography} describes how the selection of service components can be restricted.
In Section~\ref{sec:dynam-chor}, we define the process by which devices are added into the network and how offering selection rules are evaluated to build a choreography.
In Section~\ref{sec:perf-evaluation}, we provide a performance evaluation of the central orchestration component.
We conclude this paper in Section~\ref{sec:conclusions--future} and illustrate future avenues of research.
\section{Background \& Related Work}\label{sec:background-related}
Traditional building automation is based on a static configuration created with highly specialized development tools.
The per-application (room lighting, room shading, etc.) \emph{room controller} (RC) provides functionality by accessing connected sensors and actors.
All services available from the controller are preloaded on the controller, and may be parametrized via tools provided by vendors.
When adding new devices, they need to be physically connected to the controller, and the controller needs to be parametrized to use the newly connected devices via the provisioning software (Siemens \enquote{ABT Site}\footnote{\url{http://www.buildingtechnologies.siemens.com/bt/global/en/buildingautomation-hvac/building-automation/building-automation-and-control-system-europe-desigo/room-automation/pages/room-automation.aspx}} tool, or KNX Association \enquote{ETS tool}\footnote{\url{https://www.knx.org/in/software/ets/about/index.php}}).
This configuration process means that room automation functionality is limited to preconfigured functionality on the controller, and that the implementation of dynamic services is difficult.
Additionally, this means that information on the building configuration is only available off-line in the provisioning software's configuration file, not on-line in the running system.
Research on building automation tools has led to some advances in the field: Model-based tools have not gained wide acceptance, but represent the current state of the art in building automation system tool research~\cite{butzin_model_2014}.
Model-based tools allow the configuration and management of BA systems on a higher level.
However, they do not provide the underlying technology required to realize dynamic choreographies, as our approach, which integrates both the tools and the underlying platform, does.
The semantic approach taken by Thuluva et al.~\cite{thuluva_semantic-based_2017} extends automation systems to support their engineering and operation.
While these approaches provide functionality for distributed operation of services, no dynamic configuration of the system is supported without user involvement.
In the commercial sphere, \enquote{lightweight} automation services have become popular.
The foremost example here is probably \enquote{If This, Then That} (IFTTT)\footnote{\url{http://ifthisthenthat.com}}, which allows the limited composition of web services and IoT devices with a user-friendly interface.
IFTTT and similar commercial automation services are however limited in their integration with other services.
Integrating external services is difficult due to the inability of these services to export automation descriptions for use in other tools.
By describing services with semantic technology, our system simplifies the creation of external tools to interact with our system.
The \enquote{recipe} concept is a composition language for automation components~\cite{thuluva_recipes_2017}.
Service composition consists of discovering services and connecting them to each other.
In the context of \emph{web} services, there has been intense research activity on composition approaches~\cite{sheng_web_2014}, such as WSDL~\cite{Martin2007} or REST-based techniques~\cite{kopecky2008hrests,verborgh2011efficient}.
Other offerings for service composition are Node-RED\footnote{\url{http://nodered.org}} and FlowHub\footnote{\url{http://flowhub.io}}, which allow the creation of data-flow based compositions.
Flow-based research systems include Calvin~\cite{persson_calvin_2015} and Distributed Node-RED~\cite{giang_developing_2015}.
However, the underlying models used by flow-based composition platforms are not expressive enough to ensure an error-free composition.
The semantic descriptions used in our framework contain enough information to prevent incorrect service compositions.
Apart from the mechanics of composition, there has also been research on dynamically adapting systems.
Using a system of rules allows the automation system to automatically adapt to changing circumstances for autonomous management~\cite{burkert_technical_2015} in a similar vein to our rule-based approach.
\section{Offerings \& Recipes}\label{sec:off--recip}
In the following, we present the recipe and offering models.
These models have been elaborated as part of the BIG IoT project~\cite{broring_enabling_2017} and were initially introduced in our previous work~\cite{thuluva_recipes_2017}.
Here, we provide an update of these models.
This overview is needed to describe the extensions of these models for dynamic choreographies and making them runtime-ready in the following sections.
\enquote{Recipes} define templates for compositions of \emph{ingredients} and their \emph{interaction}s.
Ingredients are placeholders for \emph{offerings}, devices and services that process and transform data.
Interactions describe the dataflow between these ingredients.
An example recipe is shown in figure~\ref{fig:light-control-recipe} describing a lighting control system.
A lighting controller takes input from brightness sensors, calculates the output brightness through an algorithm (averaging, for example) and outputs the calculated value to the connected lights, but only if one of the switches is switched on.
Inputs and outputs have both a name and a type.
The type is used for matching offerings with ingredients.
This process will be described in Section~\ref{sec:building-choreography}.
\begin{figure}
\centering
\input{figures/recipe.tikz}
\caption{\label{fig:light-control-recipe} A lighting control recipe with sensors, switches and lights.}
\end{figure}
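To fix ideas, a recipe can be represented by a small data model; the following sketch is our own simplification (names and type URIs are illustrative), not the semantic format actually used by the platform:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Ingredient:
    # Placeholder for an offering, matched by category and typed ports.
    name: str
    category: str
    inputs: List[Tuple[str, str]] = field(default_factory=list)   # (port, type URI)
    outputs: List[Tuple[str, str]] = field(default_factory=list)  # (port, type URI)

@dataclass
class Recipe:
    # Abstract composition template: ingredients plus dataflow interactions,
    # each interaction wiring one output port to one input port.
    ingredients: List[Ingredient]
    interactions: List[Tuple[str, str, str, str]]  # (src, out, dst, in)

recipe = Recipe(
    ingredients=[
        Ingredient("brightnessSensor", "schema:sensing",
                   outputs=[("brightness", "xsd:float")]),
        Ingredient("lightingController", "schema:control",
                   inputs=[("brightness", "xsd:float")],
                   outputs=[("dimValue", "xsd:float")]),
        Ingredient("officeLight", "schema:lighting",
                   inputs=[("brightness", "xsd:float")]),
    ],
    interactions=[
        ("brightnessSensor", "brightness", "lightingController", "brightness"),
        ("lightingController", "dimValue", "officeLight", "brightness"),
    ],
)
```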
Offerings describe service or device instances, and how to access these services or devices.
Offerings are specified in a semantic format by the so-called \enquote{offering description}.
Offering descriptions contain information on the in- and outputs of an offering as well as information on how to access the underlying service or device (providing the offering implementation).
An excerpted offering description for our switch-sensor-controller-light example is shown in listing~\ref{lst:ofdesc}.
\begin{listing}[t]
\begin{minted}[fontsize=\scriptsize,numbers=left,numbersep=5pt,xleftmargin=10pt]{json}
{
"localId": "officeLightOffering",
"category": "schema:lighting",
"endpoints": [{
"uri":
"coap://127.0.0.1:5683/LuminaireController",
"endpointType": "COAP_PUT",
"acceptType": "APPLICATION_XML",
"contentType": "APPLICATION_XML"}],
"requestTemplate":
"<dimmableValue>@@brightness@@</dimmableValue>",
"responseMapping": null,
"inputData": [{
"name": "brightness",
"valueType": "xsd:float"}],
"outputData": [],
"extent": {"city": "Munich"}
}
\end{minted}
\caption{\label{lst:ofdesc} Example offering description for a CoAP-enabled office light.}
\end{listing}
The offering description contains functional as well as non-functional properties.
Functional properties describe the implementation of the offering (e.g. which web service endpoint this offering accesses and the protocol and payload of the request), while non-functional properties describe installation-specific metadata about the offering (such as the price or location of the offering).
Non-functional and functional properties thus correspond to offering \enquote{interface} and \enquote{implementation}, respectively.
In detail, the offering description contains the following information:
The \texttt{inputData} and \texttt{outputData} (lines 14 and 18) functional properties contain information on the types of input and output that this offering consumes and produces.
They are visible in the recipe in figure~\ref{fig:light-control-recipe} as type annotations on the input and output nodes.
Type annotations are URIs referencing for example a term in the schema.org~\cite{guha_schema._2016} or QUDT~\cite{hodgson2011qudt} ontologies.
Additionally, a category is used to classify the offering, for example, into \enquote{smart building} or \enquote{transportation} categories.
While being useful for users during the creation of recipes, type and category properties are also used in the basic matchmaking algorithm described in the next section.
The internal properties \texttt{endpoints}, \texttt{requestTemplate} and \texttt{responseMapping} (lines 4, 11 and 13, respectively) specify how this offering accesses the underlying service or device.
The endpoint describes the address under which the web service implementing this offering is reachable.
To define and parse communication payloads, the BIG IoT library can be used~\cite{schmid_architecture_2017}\footnote{Available at \url{https://gitlab.com/BIG-IoT/lib-java}}.
The BIG IoT library allows interpolation of input values into URLs, URL queries and request bodies, while the response can be parsed into output values via a simple parser that can be parametrized per offering description.
Supported protocols for endpoints are HTTP and CoAP, with POST, PUT and GET methods supported for both protocols.
Additionally, the asynchronous OBSERVE option is supported for CoAP.
Payloads can have XML or JSON format.
For example, the offering in listing~\ref{lst:ofdesc} allows dimming a light via an XML payload over CoAP, as defined in the \texttt{endpointType} and \texttt{requestTemplate}.
Finally, non-functional properties (\texttt{extent} line 19, in this example) contain information about the offering that support their discovery and selection restriction beyond the basic matching algorithm.
Both algorithms (basic matching on functional properties and advanced matching on non-functional properties) will be described in the next section.
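The \texttt{requestTemplate} mechanism can be sketched as simple placeholder substitution; the following is our own minimal reimplementation of the idea, not the BIG IoT library itself:

```python
import json
import re

def fill_request_template(template, input_values):
    # Replace each @@name@@ placeholder with the corresponding input value.
    def substitute(match):
        name = match.group(1)
        if name not in input_values:
            raise KeyError(f"no value supplied for input '{name}'")
        return str(input_values[name])
    return re.sub(r"@@(\w+)@@", substitute, template)

# Excerpt of the offering description from the listing above:
offering = json.loads("""
{
  "requestTemplate": "<dimmableValue>@@brightness@@</dimmableValue>",
  "inputData": [{"name": "brightness", "valueType": "xsd:float"}]
}
""")
payload = fill_request_template(offering["requestTemplate"], {"brightness": 0.75})
```

The resulting payload would then be sent to the endpoint with the method and content type declared in the offering description.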
The duality between offerings and ingredients is central to our system: It allows us to utilize a recipe as choreography descriptions independent of concrete implementations.
A recipe is concrete enough that users can successfully create it as a blueprint using our tools, yet generic enough that it can be implemented and run using a wide variety of service implementations without requiring modification of the recipe.
In the next section, we describe the process of turning a recipe into a runnable instance.
\section{Instantiating Recipes}\label{sec:building-choreography}
\enquote{Instantiating} a recipe refers to the process of replacing ingredients with offerings, resulting in an executable recipe.
A recipe may be instantiated multiple times with different offerings, depending on the requirements.
To instantiate a recipe, suitable offerings are selected by their external properties described in Section~\ref{sec:off--recip}.
Then, extra restrictions called \enquote{offering selection rules} can optionally be applied. Finally, a recipe can be executed as a choreography, as described in Section~\ref{sec:dynam-chor}.
The matching algorithm to select suitable offerings works as follows: For each ingredient in the recipe, the database is searched for offerings that can replace this ingredient. Replacement is governed by the following algorithm:
Let $i$ be an ingredient, and $o$ an offering.
We also define $\func{category}()$, $\func{inputs}()$ and $\func{outputs}()$ to access the so-named properties of the offering and ingredient descriptions in Section~\ref{sec:off--recip}.
Furthermore, we use the \enquote{subclass of} operator $\sqsubseteq$ to express subclass relations.
$o$ can replace $i$ iff:
\begin{itemize}
\item The category of the offering is a subclass of the category of the ingredient: $\func{category}(o) \sqsubseteq \func{category}(i)$.
\item For each input of the offering, the ingredient has at least one input with the same or subclassed type: $\forall in_o \in \func{inputs}(o): \exists in_i \in \func{inputs}(i): in_i \sqsubseteq in_o$.
\item For each output of the offering, the ingredient has at least one output with the same or superclassed type: $\forall out_o \in \func{outputs}(o): \exists out_i \in \func{outputs}(i): out_o \sqsubseteq out_i$.
\end{itemize}
Note that this allows offerings to have fewer inputs than the ingredient, as well as more outputs.
Superfluous outputs and inputs are ignored.
In order to instantiate a recipe, each of the ingredients in the recipe has to be filled by at least one offering.
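The matching conditions above can be sketched as follows. This is a hypothetical illustration, not the system's actual implementation: types and categories are modeled as plain strings, and \texttt{subclass\_of} stands in for a real ontology lookup (e.g. against schema.org), with a toy taxonomy as an assumption.

```python
# Toy taxonomy: child type -> parent type (an assumption for illustration).
SUBCLASSES = {
    "DimmableLight": "Light",
    "RoomTemperature": "Temperature",
}

def subclass_of(a, b):
    """True iff type a equals b or is a (transitive) subclass of b."""
    while a is not None:
        if a == b:
            return True
        a = SUBCLASSES.get(a)
    return False

def can_replace(offering, ingredient):
    """Check the three matching conditions from the algorithm above."""
    # 1. offering category must be a subclass of the ingredient category
    if not subclass_of(offering["category"], ingredient["category"]):
        return False
    # 2. every offering input must be satisfiable by some ingredient input
    for in_o in offering["inputs"]:
        if not any(subclass_of(in_i, in_o) for in_i in ingredient["inputs"]):
            return False
    # 3. every offering output must fit some ingredient output
    for out_o in offering["outputs"]:
        if not any(subclass_of(out_o, out_i) for out_i in ingredient["outputs"]):
            return False
    return True
```

Note that an offering with fewer inputs than the ingredient passes condition 2 vacuously, mirroring the remark that superfluous inputs and outputs are ignored.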
This purely type-based matching is very generic, but also rather limited.
Realizing simple use cases such as \enquote{control all lights in room 3 via any switch in the same room} would require defining categories specifically for this application scenario (e.g., defining the category type \enquote{lighting in room 3}).
To address this and to keep recipes generic, we introduce the concept of \enquote{offering selection rules} (OSRs), which allow users to specify offerings that should participate in a recipe in fine-grained detail.
These rules are attached to an ingredient and specify additional requirements on its non-functional properties that an offering needs to provide to be considered for filling this ingredient.
Offering selection rules are evaluated on the non-functional properties of an offering (see Section~\ref{sec:off--recip}).
Non-functional properties can include location of the component, owner of the component or the energy efficiency of this component.
Because the offering description is specified in a semantic format, non-functional properties can be extended easily.
OSRs can query these properties for equality or inequalities to a literal value.
Multiple OSRs can be composed using boolean operators \texttt{AND} and \texttt{OR}.
Additionally, the cardinality of an ingredient can be specified using OSRs.
This means that the minimum and maximum number of offerings replacing an ingredient can be limited.
Using this set of OSRs, it is possible to constrain recipes in complicated ways going beyond the basic matching algorithm.
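As a minimal sketch of how such rules could be modeled, the following represents leaf OSRs as comparisons of a non-functional property against a literal, composed with \texttt{AND}/\texttt{OR}. The property names (\texttt{location}, \texttt{owner}, \texttt{public}) are illustrative assumptions, not taken from the paper.

```python
import operator

# Supported comparison operators for leaf rules.
OPS = {"=": operator.eq, "!=": operator.ne,
       "<": operator.lt, ">": operator.gt}

def leaf(prop, op, value):
    """A leaf OSR: compare one non-functional property to a literal."""
    return lambda offering: OPS[op](offering["properties"].get(prop), value)

def AND(*rules):
    return lambda offering: all(r(offering) for r in rules)

def OR(*rules):
    return lambda offering: any(r(offering) for r in rules)

# "in room A, AND (owned by facilities OR marked public)"
rule = AND(leaf("location", "=", "roomA"),
           OR(leaf("owner", "=", "facilities"),
              leaf("public", "=", True)))
```

An offering is then considered for an ingredient only if its non-functional properties satisfy the composed rule.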
The light recipe from figure~\ref{fig:light-control-recipe} could for example be constrained to only match lights, sensors and switches in room A, with the controller not being constrained to a certain location, but to a cardinality of one.
Instantiating the recipe would then result in one controller being connected to all sensors, all lights and all switches in room A.
By adding a different set of OSRs to the system, the recipe might be constrained to room B.
This functionality is provided in current automation systems by defining templates that describe a single deployment and then instantiating these templates multiple times, once for each room.
Templates are not held available during the runtime of the system, and thus cannot be reevaluated for dynamic operation of the system.
Using OSRs, the policy that led to the system's current configuration is always accessible and available.
The policy can thus be reevaluated on system changes, something that is not possible with the template-based approach.
\begin{figure}[t]
\centering
\input{figures/recipes-osrs.tikz}
\caption{\label{fig:relat-recip-osrs} Relation of Recipes and OSRs}
\end{figure}
Conceptually, we have implemented these concepts as follows: Recipes, offerings, ingredients and OSRs are stored in an Apache Jena triple store\footnote{\url{https://jena.apache.org/}} as a semantic graph.
The objects in this semantic graph are \emph{recipes}, \emph{recipe runtime configurations}, \emph{ingredient runtime configurations} and finally \emph{offering selection rules}.
The relations between those concepts are shown in figure~\ref{fig:relat-recip-osrs}.
Recipes are designed using the recipe design tool described by Thuluva et al.~\cite{thuluva_recipes_2017} and are stored in the central repository.
Recipes are then instantiated by creating a \enquote{recipe runtime configuration} (RRC) for this recipe.
Each RRC describes a specific instantiation of a recipe.
An RRC is an installation-specific instantiation of a recipe, because it can (and often will) contain restrictions on non-functional properties (such as the location) of offerings.
Recipes, on the other hand, describe installation-independent patterns of interaction.
The RRC can contain per-recipe OSRs such as cardinality, and always contains per-ingredient information called \enquote{ingredient runtime configuration} (IRC).
Each RRC contains multiple IRCs, one for each ingredient in the recipe.
IRCs contain runtime information describing the current cardinality of the ingredient, as well as the offerings which are currently implementing the ingredient and finally, offering selection rules (OSRs) restricting the set of offerings that can replace this ingredient.
The specification of a service composition as a recipe refined by a set of queries allows the creation of dynamic systems.
We describe the realisation of such systems in the next section.
\section{Design for Enabling IoT Choreographies}\label{sec:dynam-chor}
The concept of OSRs allows the addition of offerings into a running system without manual intervention.
\begin{figure}
\centering
\input{figures/new-device.tikz}
\caption{\label{fig:incorp-new-devic} Incorporation of new devices into orchestration}
\end{figure}
To realize this functionality, our system consists of three parts, as seen in figure~\ref{fig:incorp-new-devic}:
\begin{itemize}
\item A component for computing choreographies (\enquote{controller})
\item A triple store for data storage
\item An \enquote{engine} running on participating components
\end{itemize}
The controller is the central component of our system.
The controller instantiates recipes, and handles the addition and removal of offerings using the triple store in the background for persistent data storage.
The \enquote{engine} implements a gateway and enables devices to participate in the system.
This is done by accepting input from other engines running on other components, passing this input to the offering implementation via the mechanisms described in Section~\ref{sec:off--recip}, parsing the output of the implementation, and sending it to the next offerings currently part of the recipe.
Thus, recipes are turned into distributed choreographies.
In the future, the engine will run on smart sensors and actuators directly, and enable direct integration of these devices.
Currently, the engine runs on Raspberry Pi single-board computers connected to hardware devices.
It is crucial to note that the controller is \emph{not} a single point of failure.
Without the controller, all recipes continue operating; only the addition and removal of components are impacted.
The workflow for realizing dynamic choreographies is shown in figure~\ref{fig:incorp-new-devic}.
When a new component is connected to the network, the engine registers at the controller with its offering description (OD) (step 1).
This offering description includes all the information necessary for deciding whether the component should be part of a choreography.
The offering description is added to the triple store.
In step 2, the controller finds all matching IRCs for an offering by computing the matching between the IRC's ingredient and the new offering with the algorithm in Section~\ref{sec:building-choreography}.
Then, for each IRC matching the new offering, all associated OSRs are evaluated.
This is done by serializing the OSRs to SPARQL queries~\cite{harris_sparql_2013} and running them on the triple store.
This results in a number of IRCs that the new component will be added to.
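The paper only states that OSRs are serialized to SPARQL and run on the triple store; as an illustration of what such a serialization might look like, the following builds a \texttt{SELECT} query with a \texttt{FILTER} from a conjunction of property constraints. The property IRIs and graph shape are assumptions.

```python
def osr_to_sparql(constraints):
    """constraints: list of (property_iri, operator, literal) triples,
    combined conjunctively. Returns a SPARQL SELECT query string."""
    patterns, filters = [], []
    for i, (prop, op, value) in enumerate(constraints):
        var = f"?v{i}"
        # bind each constrained property of the candidate offering
        patterns.append(f"?offering <{prop}> {var} .")
        filters.append(f"{var} {op} \"{value}\"")
    return ("SELECT ?offering WHERE {\n  "
            + "\n  ".join(patterns)
            + "\n  FILTER(" + " && ".join(filters) + ")\n}")

query = osr_to_sparql([("ex:location", "=", "roomA"),
                       ("ex:owner", "!=", "guest")])
```

Running such a query against the triple store yields the offerings eligible for the corresponding ingredient.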
\begin{listing}[t]
\centering
\begin{minted}[fontsize=\scriptsize,numbers=left,numbersep=5pt,xleftmargin=10pt]{json}
{"offering": "bigiot:light-control",
"recipeRuntimeConfiguration":
"bigiot:rrc1",
"outputs": {
"brightness": {
"http://lamp1/input":
"on\_off",
"http://lamp2/input":
"on\_off"
}},
"inputs": {
"bigiot:sensor1": ["sensorin"],
"bigiot:switch1": ["switchin"]}}
\end{minted}
\caption{\label{lst:example-inter-descr} Example interaction descriptor for a lighting controller connected to two lights and one sensor and switch.}
\end{listing}
From this information, \emph{interaction descriptors} (InDes) are generated in step 3.
Each interaction descriptor describes the communication behavior of one device as part of a choreography.
InDes are derived from the recipe by finding all other offerings that an offering should communicate with and accept input from.
An example interaction descriptor is shown in listing~\ref{lst:example-inter-descr}.
An interaction descriptor contains information on where to send the outputs of this offering (lines 5--12) and which inputs to expect (lines 13--16).
Finally, interaction descriptors are sent to each device participating in the choreography (step 4), in order to inform it of its communication partners.
Based on the information contained in the InDes, each component has the knowledge to participate in a choreography (step 5).
Thus, each choreography can now run autonomously, with the new component integrated into it.
This process works analogously for offering removal.
When a device unregisters or fails, its offering description is removed from the triple store.
The controller finds all IRCs that contained this offering and removes it.
Optionally, the removed offering may be replaced by another offering already available in the triple store.
In the next section, we will quantitatively evaluate the computational cost of OSR resolution.
\section{Evaluation}\label{sec:perf-evaluation}
To evaluate the scalability of our implementation, we measured the performance of the controller when adding new components.
The computational factor dominating the addition of new components is the matching and resolution of OSRs.
To find the set of ingredients that the new component can replace, all OSRs need to be evaluated.
Thus, it is expected that the computation time for the addition of new components scales with the number of RRCs in the system.
In order to evaluate this, we measured the time between the addition of a component into the database and the conclusion of OSR computation, when a list of all suitable choreographies is available.
To check the scalability of the controller, we measured performance with an increasing number of RRCs ranging from 7 to 700 using a set of 7 OSRs instantiated $n$ times for $n$ from 1 to 100.
The number of components or recipes in the system does not influence the matching performance, since only RRCs are checked for a match with the new component.
Testing was done on a machine with 8 GB of RAM and a 2.4 GHz i5 mobile Intel processor with 4 threads.
\begin{figure}
\centering
\begin{tikzpicture}
\datavisualization [scientific axes, visualize as smooth line, x axis={label=Number of OSRs}, y axis={label=Time in ms}]
data [separator=\space] {
x y
7 12.9215
14 15.6139
21 20.0287
28 27.2350
35 31.3954
42 35.8314
49 42.5043
56 46.2951
63 48.3529
70 59.2845
77 69.5230
84 68.3311
91 74.4902
98 84.5599
105 87.2165
112 98.2285
119 107.4427
126 109.0138
133 119.6255
140 122.4260
147 135.6594
154 143.8361
161 159.4636
168 163.5537
175 168.0322
182 181.7566
189 190.3558
196 187.4866
203 207.3416
210 213.5650
217 227.9465
224 230.9730
231 254.8398
238 244.1707
245 281.6536
252 294.9530
259 295.0532
266 312.7786
273 284.6041
280 309.7454
287 323.1019
294 335.3475
301 341.3630
308 358.6607
315 364.7434
322 389.7746
329 372.3820
336 397.3633
343 415.7228
350 452.4144
357 399.4467
364 460.6495
371 470.7582
378 451.8396
385 479.2675
392 490.6598
399 489.7594
406 539.3505
413 528.6679
420 518.2328
427 608.9891
434 593.3963
441 526.4272
448 622.3448
455 598.6910
462 586.1265
469 678.3524
476 637.5745
483 685.8329
490 711.6073
497 619.3662
504 746.1184
511 717.8471
518 698.1797
525 761.7330
532 691.2753
539 802.3693
546 797.0221
553 783.4881
560 780.3362
567 841.1189
574 824.4469
581 875.3933
588 843.2348
595 900.6881
602 868.5855
609 952.2195
616 902.3861
623 957.4786
630 970.1259
637 990.6656
644 994.4331
651 1026.4359
658 1102.7518
665 1008.3464
672 1133.2290
679 1091.9723
686 1133.1709
693 1151.3241
700 1175.6033
};
\end{tikzpicture}
\caption{\label{fig:evaluation-plot} Performance evaluation of controller}
\end{figure}
With the controller being the only central component of the system, its performance dominates that of the complete system, and is therefore a suitable indicator of the overall performance.
As can be seen in figure~\ref{fig:evaluation-plot}, the controller scales adequately: computation time exceeds one second only at about 650 RRCs.
The system scales quadratically in the number of RRCs, but with low constant factors.
Overall, the controller performance only becomes unacceptable for our purposes with very large systems of more than 650 components.
\section{Conclusions \& Future Work}\label{sec:conclusions--future}
In this paper, we present a concept, implementation and evaluation for running dynamic IoT choreographies.
These dynamic choreographies provide an approach that is novel in IoT environments and particularly useful in the domain of building automation systems.
By expressing service compositions as recipes together with selection rules, IoT components can be dynamically updated and recomposed.
This allows the automatic integration of new components into existing compositions without requiring user interaction.
The choreography approach removes single points of failure and leverages the computation power of network nodes.
The system is reasonably efficient and scales acceptably with growing numbers of devices.
The quadratic scaling behavior is problematic with very large systems, but performance tuning of the triple store will improve the scaling behavior of the system.
Additionally, the recipe concept is limited in the compositions it can express, only allowing the composition of REST services, without the ability to add custom properties or scripts.
By extending the recipe context in the future, we will be able to express a wider range of automation services, and further reduce the reliance on centralized control algorithms.
Additionally, we are working on using the OSR mechanism as a building block for reliability of orchestrations.
This is a crucial task, since failure of single IoT components may remain unnoticed (or noticed quite late) in distributed workflows.
Using the OSR concept, failures in the orchestration can be automatically corrected if suitable components are available.
This research will make our system ready for more complex deployments.
\printbibliography
\end{document}
\section{Introduction}
In order to minimize the need for annotated resources (produced through manual annotation, or by manual check of automatic annotation), several research efforts have focused on building Natural Language Processing (NLP) tools based on unsupervised or semi-supervised approaches \cite{Collins99,Klein05,Goldberg10}. For example, NLP tools based on cross-language projection of linguistic annotations achieved good performance in the early 2000s \cite{Yaro01}. The key idea of annotation projection can be summarized as follows: through word alignment in parallel text corpora, the annotations are transferred from the \textit{source} (resource-rich) language to the \textit{target} (under-resourced) language, and the resulting annotations are used for supervised training in the target language. However, automatic word alignment errors \cite{Fras07} limit the performance of these approaches.
Our work is built upon these previous contributions and observations. We explore the possibility of using Recurrent Neural Networks (RNN) to build multilingual NLP tools for resource-poor languages analysis. The major difference with previous works is that we do not explicitly use word alignment information. Our only assumption is that parallel sentences (source-target) are available and that the source part is annotated. In other words, we try to infer annotations in the target language from sentence-based alignments only. While most NLP researches on RNN have focused on monolingual tasks\footnote{Exceptions are the recent propositions on Neural Machine Translation \cite{Cho14,Suts14}}
and sequence labeling \cite{Coll11,Grav12}, this paper, however, considers the problem of learning multilingual NLP tools using RNN.
\blfootnote{
%
%
%
\hspace{-0.65cm}
This work is licensed under a Creative Commons
Attribution 4.0 International License.
License details:
\url{http://creativecommons.org/licenses/by/4.0/}
}
\textbf{Contributions} In this paper, we investigate the effectiveness of RNN architectures --- Simple RNN (SRNN) and Bidirectional RNN (BRNN) --- for multilingual sequence labeling tasks without using any word alignment information. Two NLP tasks are considered: Part Of Speech (POS) tagging and Super Sense (SST) tagging ~\cite{Ciaramita06}. Our RNN architectures demonstrate very competitive results on unsupervised training for new target languages. In addition, we show that the integration of POS information in
RNN models is useful to build multilingual coarse-grain semantic (Super Senses) taggers.
For this, a simple and efficient way to take into account low-level linguistic information for more complex sequence labeling RNN is proposed.
\textbf{Methodology} For training our multilingual RNN models, we just need as input a parallel (or multi-parallel) corpus between a resource-rich language and one or many under-resourced languages. Such a parallel corpus can be manually obtained (clean corpus) or automatically obtained (noisy corpus).
To show the potential of our approach, we investigate two sequence labeling tasks: cross-language POS tagging and multilingual Super Sense Tagging (SST). For the SST task, we measure the impact of the parallel corpus quality with manual or automatic translations of the SemCor \cite{Miller93} translated from English into Italian (manually and automatically) and French (automatically).
\textbf{Outline}
The remainder of the paper is organized as follows. Section \ref{Related Work} reviews related work. Section \ref{Approach} describes our cross-language annotation projection approaches based on RNN. Section \ref{Experiments} presents the empirical study and associated results. We finally conclude the paper in Section \ref{Conclusion}.
\section{Related Work}
\label{Related Work}
Cross-lingual projection of linguistic annotations was pioneered by ~\newcite{Yaro01} who created new monolingual resources by transferring annotations from resource-rich languages onto resource-poor languages through the use of word alignments. The resulting (noisy) annotations are used in conjunction with robust learning algorithms to build cheap unsupervised NLP tools ~\cite{Pado09}. This approach has been successfully used to transfer several linguistic annotations between languages (efficient learning of POS taggers ~\cite{Das11,Duon13} and accurate projection of word senses ~\cite{Bent04}). Cross-lingual projection requires a parallel corpus and word alignment between source and target languages. Many automatic word alignment tools are available, such as GIZA++ which implements IBM models ~\cite{Och00}. However, the noisy (non perfect) outputs of these methods is a serious limitation for the annotation projection based on word alignments ~\cite{Fras07}.
\begin{comment}
Examples include: learning good POS taggers ~\cite{Das11,Duon13,Tack13a,Wisn14}, transfer of named entity annotations ~\cite{Kim12} and syntactic constituents ~\cite{Jian11}, the projection of word senses ~\cite{Bent04,Van14} and semantic role labeling ~\cite{Pado07}
~\newcite{Jaba12} have effectively used this technique for porting spoken language understanding system from French to Italian and Arabic.
Cross-lingual projection requires a parallel corpus and word alignment between source and target languages. Many automatic word alignment tools are available, such as GIZA++ ~\cite{Och00}. However, the non perfect quality of word alignment methods constitutes a serious limitation for the annotation projection approach ~\cite{Fras07}.
\end{comment}
To deal with this limitation, recent studies based on cross-lingual representation learning methods have been proposed to avoid using such pre-processed and noisy alignments for label projection. First, these approaches learn language-independent features, across many different languages ~\cite{Durr12,Al-R13,Tack13a,Luon15,Gouw15a,Gouw15b}. Then, the induced representation space is used to train NLP tools by exploiting labeled data from the source language and apply them in the target language. Cross-lingual representation learning approaches have achieved good results in different NLP applications such as cross-language SST and POS tagging ~\cite{Gouw15a}, cross-language named entity recognition ~\cite{Tack12}, cross-lingual document classification and lexical translation task ~\cite{Gouw15b}, cross language dependency parsing ~\cite{Durr12,Tack13a}
and cross-language semantic role labeling ~\cite{Tito12}.
Our approach, described in the next section, is inspired by these works since we also try to induce a common language-independent feature space (cross-lingual word embeddings). Unlike ~\newcite{Durr12} and ~\newcite{Gouw15a}, who use bilingual lexicons, and unlike ~\newcite{Luon15} who use word alignments between the source and target languages\footnote{to train a bilingual representation regardless of the task}, our common multilingual representation is very agnostic. We use a simple (multilingual) vector representation based on the occurrence of source and target words in a parallel corpus and we let the RNN learn the best internal representations (corresponding to the hidden layers) specific to the task (SST or POS tagging).
In this work, we learn a cross-lingual POS tagger (multilingual POS tagger if a multilingual parallel corpus is used) based on a recurrent neural network (RNN) on the source labeled text and apply it to tag target language text. We explore simple and bidirectional RNN architectures (SRNN and BRNN respectively). Starting from the intuition that low-level linguistic information is useful to learn more complex taggers, we also introduce three new
RNN variants to take into account external (POS) information in multilingual SST.
\begin{comment}
Finally, we should note that several works have investigated how to apply neural networks to NLP applications ~\cite{Fede93,Hend04,Beng06,Miko10,Coll11}. While ~\newcite{Fede93} work was one of the earliest attempts to develop a part-of-speech tagger based on a special type of neural network, ~\newcite{Beng06} and ~\newcite{Miko10} applied neural networks to build language models. ~\newcite{Coll11} employed a deep learning framework for multitask learning including POS tagging, named entity recognition, language modeling and semantic role labeling. ~\newcite{Hend04} proposed training methods for learning a
parser based on neural networks.
\end{comment}
\section{Unsupervised Approach Overview}
\label{Approach}
To avoid projecting label information from deterministic and error-prone word alignments, we propose to represent the word alignment information intrinsically in a recurrent neural network architecture. The idea consists in implementing a recurrent neural network as a multilingual sequence labeling tool (we investigate POS tagging and SST tagging).
Before describing our cross-lingual (multilingual if a multi-parallel corpus is used) neural network tagger, we present the simple cross-lingual projection method, considered as our baseline in this work.
\subsection{Baseline Cross-lingual Annotation Projection}
\label{Cross_lingual}
We use direct transfer as a baseline system which is similar to the method described in ~\cite{Yaro01}.
First we tag the source side of the parallel corpus using the available supervised tagger. Next, we align words in the parallel corpus to find out corresponding source and target words. Tags are then projected to the (resource-poor) target language. The target language tagger is trained using any machine learning approach (we use TNT tagger ~\cite{Bran00} in our experiments).
\begin{figure*}[tb!]
\centering \includegraphics[scale=0.45]{Approche_figure.png}
\vspace*{-0.2cm}
\caption{\label{Method} Overview of the proposed model architecture for inducing multilingual RNN taggers.}
\vspace*{-0.3cm}
\end{figure*}
\vspace{-0.15cm}
\subsection{Proposed Approach}
\vspace{-0.15cm}
\label{Our Approach}
We propose a method for learning multilingual sequence labeling tools based on RNN, as it can be seen in Figure \ref{Method}. In our approach, a parallel or multi-parallel corpus between a resource-rich language and one or many under-resourced languages is used to extract common (multilingual) and agnostic words representations. These representations, which rely on sentence level alignment only, are used with the source side of the parallel/multi-parallel corpus to learn a neural network tagger in the source language. Since a common representation of source and target words is chosen, this neural network tagger is truly multilingual and can be also used to tag texts in target language(s).
\vspace{-0.1cm}
\subsubsection{Common Words Representation}
\vspace{-0.1cm}
In our \textit{agnostic} representation, we associate to each word (in the source \textit{and} target vocabularies) a common vector representation, namely ${V_{wi}, i = 1,...,N}$, where $N$ is the number of parallel sentences (bi-sentences) in the parallel corpus. If $w$ appears in the $i$-th bi-sentence of the parallel corpus, then $V_{wi}=1$.
The idea is that, in general, a source word and its target translation appear together in the same bi-sentences, so their vector representations are close. We can then use the RNN tagger, initially trained on the source side, to tag the target side (because of our {\em common vector representation}). This simple representation does not require multilingual word alignments, and it lets the RNN learn the optimal internal representation needed for the annotation task (for instance, the hidden layers of the RNN can be considered as multilingual embeddings of the words).
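The construction of these binary occurrence vectors can be sketched as follows; the toy corpus is an illustrative assumption.

```python
def occurrence_vectors(bisentences):
    """Build the common representation: for each word (source or target),
    a binary vector of length N marking the bi-sentences it occurs in.
    bisentences: list of (source_tokens, target_tokens) pairs."""
    n = len(bisentences)
    vecs = {}
    for i, (src, tgt) in enumerate(bisentences):
        for w in set(src) | set(tgt):
            vecs.setdefault(w, [0] * n)[i] = 1
    return vecs

# Toy English-French parallel corpus of N = 2 bi-sentences.
corpus = [(["the", "cat"], ["le", "chat"]),
          (["the", "dog"], ["le", "chien"])]
V = occurrence_vectors(corpus)
```

In this toy example, \texttt{cat} and its translation \texttt{chat} receive the identical vector $[1, 0]$, which is exactly the property the RNN exploits to transfer annotations across languages.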
\subsubsection{Recurrent Neural Networks}
There are two major architectures of neural networks: Feedforward ~\cite{Beng03} and Recurrent Neural Networks (RNN) ~\cite{Schm92,Miko10}. ~\newcite{Sund13} showed that language models based on recurrent architecture achieve better performance than language models based on feedforward architecture. This is due to the fact that recurrent neural networks do not use a context of limited size. This property led us to use, in our experiments, the Elman recurrent architecture ~\cite{Elman90}, in which recurrent connections occur at the hidden layer level.
We consider in this work two Elman RNN architectures (see Figure \ref{RNN_Architectures}): \textit{Simple} RNN (SRNN) and \textit{Bidirectional} RNN (BRNN). In addition, to be able to include low-level linguistic information in our architecture designed for more complex sequence labeling tasks, we propose three new
RNN variants to take into account external (POS) information for multilingual Super Sense Tagging (SST).
\begin{figure*}
\centering \includegraphics[scale=0.23]{RNN_Architectures.png}
\vspace*{-0.2cm}
\caption{\label{RNN_Architectures} High level schema of RNN used in our work.}
\vspace*{-0.2cm}
\end{figure*}
\vspace{0.2cm}\hspace{-0.4cm}\textbf{A.\hspace{0.2cm} Simple RNN}
\vspace{0.1cm}
\hspace{-0.5cm} In the \textit{simple} Elman RNN (SRNN), the recurrent connection is a loop at the hidden layer level. This connection allows the SRNN to use, at the current time step, the hidden layer's states from previous time steps. In other words, the hidden layer of the SRNN represents the entire previous history and not just the $n-1$ previous inputs; thus the model can theoretically represent a long context.
The architecture of the SRNN considered in this work is shown in Figure \ref{RNN_Architectures}. In this architecture, we have 4 layers: an input layer, a forward layer (also called recurrent or context layer), a compression hidden layer and an output layer. All neurons of the input layer are connected to every neuron of the forward layer by the weight matrices $I_F$ and $R_F$, the weight matrix $H_F$ connects all neurons of the forward layer to every neuron of the compression layer, and all neurons of the compression layer are connected to every neuron of the output layer by the weight matrix $O$.
The input layer consists of a vector $w(t)$ that represents the current word $w_t$ in our common words representation (all input neurons corresponding to current word $w_t$ are set to 0 except those that correspond to bi-sentences containing $w_t$, which are set to 1), and of vector $f(t-1)$ that represents output values in the forward layer from the previous time step. We name $f(t)$ and $c(t)$ the current time step hidden layers (our preliminary experiments have shown better performance using these two hidden layers instead of one hidden layer), with variable sizes (usually 80-1024 neurons) and sigmoid activation function. These hidden layers represent our common language-independent feature space and inherently capture word alignment information. The output layer $y(t)$, given the input $w(t)$ and $f(t-1)$ is computed with the following steps :
\begin{equation}
f(t)=\Sigma (w(t) . I_F(t) + f(t-1). R_F(t))\\
\end{equation}
\begin{equation}
c(t)=\Sigma (f(t) . H_F(t))\\
\end{equation}
\begin{equation}
y(t)=\Gamma(c(t) . O(t))\\
\end{equation}
$\Sigma$ and $\Gamma$ are the sigmoid and the softmax functions, respectively. The softmax activation function is used to normalize the values of output neurons to sum up to 1. After the network is trained, the output $y(t)$ is a vector representing a probability distribution over the set of tags. The current word $w_t$ (in input) is tagged with the most probable output tag.
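A NumPy sketch of this forward step, implementing equations (1)--(3), is given below. The layer sizes and random weights are illustrative assumptions; in the paper the hidden layers have 80--1024 neurons.

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, C, T = 6, 4, 3, 5   # input dim, forward, compression, nb. of tags
I_F = rng.normal(size=(N, H)) * 0.1   # input -> forward
R_F = rng.normal(size=(H, H)) * 0.1   # forward(t-1) -> forward(t)
H_F = rng.normal(size=(H, C)) * 0.1   # forward -> compression
O = rng.normal(size=(C, T)) * 0.1     # compression -> output

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
softmax = lambda x: np.exp(x - x.max()) / np.exp(x - x.max()).sum()

def srnn_step(w_t, f_prev):
    """One SRNN time step: returns new forward state and tag distribution."""
    f_t = sigmoid(w_t @ I_F + f_prev @ R_F)   # eq. (1)
    c_t = sigmoid(f_t @ H_F)                  # eq. (2)
    y_t = softmax(c_t @ O)                    # eq. (3)
    return f_t, y_t

f = np.zeros(H)
w = np.zeros(N); w[[1, 3]] = 1   # word occurring in bi-sentences 1 and 3
f, y = srnn_step(w, f)
# y is a probability distribution over the tagset; argmax gives the tag
```

The forward state \texttt{f} is carried from one step to the next, which is how the recurrent connection encodes the unbounded left context.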
For many sequence labeling tasks, it is beneficial to have access to future in addition to the past context. So, it can be argued that our SRNN is not optimal for sequence labeling, since the network ignores future context and tries to optimize the output prediction given the previous context only. This SRNN is thus penalized compared with our baseline projection based on TNT ~\cite{Bran00} which considers both left and right contexts.
To overcome the limitations of SRNN, a simple extension of the SRNN architecture --- namely Bidirectional recurrent neural network (BRNN) ~\cite{Schu97} --- is used to ensure that context at previous and future time steps will be considered.
\vspace{0.3cm}\hspace{-0.4cm}\textbf{B.\hspace{0.2cm} Bidirectional RNN}
\vspace{0.2cm}
\hspace{-0.5cm} An unfolded BRNN architecture is given in Figure \ref{RNN_Architectures}. The basic idea of the BRNN is to present each training sequence forwards and backwards to two separate recurrent hidden layers (forward and backward hidden layers) and then merge the results. This structure provides the compression and output layers with complete past and future context for every point in the input sequence. Note that without the backward layer, this structure simplifies to an SRNN.
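A minimal sketch of this idea, assuming abstract per-step transition functions and merging the two hidden states per time step by concatenation (the exact merge used in the BRNN is not detailed here, so the concatenation is an assumption for illustration):

```python
def bidirectional_states(seq, step_fwd, step_bwd, h0_f, h0_b):
    """Run one recurrent pass left-to-right and one right-to-left,
    then merge the hidden states per time step by concatenation."""
    fwd, h = [], h0_f
    for x in seq:               # forward pass over the sequence
        h = step_fwd(x, h)
        fwd.append(h)
    bwd, h = [], h0_b
    for x in reversed(seq):     # backward pass over the sequence
        h = step_bwd(x, h)
        bwd.append(h)
    bwd.reverse()               # re-align backward states with positions
    return [f + b for f, b in zip(fwd, bwd)]  # list concat per position

# toy transition: hidden state accumulates the inputs seen so far
step = lambda x, h: [x + h[0]]
merged = bidirectional_states([1, 2, 3], step, step, [0], [0])
```

Each merged state thus summarizes the whole past and the whole future of its position, which is what the compression and output layers receive.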
\vspace{0.3cm}\hspace{-0.4cm}\textbf{C.\hspace{0.2cm} RNN Variants}
\vspace{0.2cm}
\hspace{-0.5cm} As mentioned in the introduction, we propose three new RNN variants to take into account low level (POS) information in a higher level (SST) annotation task. The question addressed here is: at which layer of the RNN should this low level information be included to improve SST performance? As shown in Figure \ref{RNN_POS}, the POS information can be introduced either at the input layer, at the forward layer (forward and backward layers for the BRNN) or at the compression layer. In all these RNN variants, the POS of the current word is also represented with a vector ($POS(t)$). Its dimension corresponds to the number of POS tags in the tagset (the universal tagset of ~\newcite{Petr12} is used). We use a one-\textit{hot} vector representation where only the value at the index of the current tag is set to 1 (all other values are 0).
\begin{figure}
\begin{center}
\includegraphics[scale=0.30]{RNN_POS.png}
\caption{\label{RNN_POS} SRNN variants with POS information at three levels: (a) input layer, (b) forward layer, (c) compression layer.}
\end{center}
\end{figure}
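As a sketch of these variants, the $POS(t)$ vector can be built as a one-hot vector over the 12-tag universal tagset and appended to the chosen layer's input vector; the helper names below are hypothetical:

```python
# The 12 tags of the Petrov et al. (2012) universal tagset
UNIVERSAL_TAGS = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "DET",
                  "ADP", "NUM", "CONJ", "PRT", ".", "X"]

def one_hot_pos(tag, tagset):
    # POS(t): all zeros except a 1 at the index of the current tag
    return [1.0 if t == tag else 0.0 for t in tagset]

def with_pos(layer_input, pos_vec):
    # inject POS information at a chosen layer by appending POS(t)
    # to that layer's input vector (input, forward or compression layer)
    return layer_input + pos_vec

# toy example: a 2-dimensional layer input extended with POS(t) for "VERB"
v = with_pos([0.2, 0.7], one_hot_pos("VERB", UNIVERSAL_TAGS))
```

The weight matrix feeding the chosen layer then simply grows by the tagset size.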
\subsubsection{Network Training}
The first step in our approach is to train the neural network, given a parallel corpus (training corpus) and a validation corpus (distinct from the training data) in the source language. In typical applications, the source language is a resource-rich language (which already has an efficient tagger or manually tagged resources). Our RNN models are trained by stochastic gradient descent using the usual back-propagation and back-propagation through time algorithms \cite{Rume85}. We learn our RNN models with an iterative process on the tagged source side of the parallel corpus. After each epoch (iteration) of training, the validation data is used to compute the per-token accuracy of the model. If the per-token accuracy has increased, training continues with a new epoch; otherwise, the learning rate is halved at the start of the new epoch. Finally, if the per-token accuracy no longer increases, training is stopped to prevent over-fitting. Generally, convergence takes 5--10 epochs, starting with a learning rate ${\alpha} = 0.1$.
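The training schedule described above can be sketched as follows; `train_epoch` and `evaluate` are hypothetical hooks standing in for one gradient pass and for the per-token validation accuracy:

```python
def fit(train_epoch, evaluate, alpha=0.1, max_epochs=50):
    """Validation-driven schedule: keep training while accuracy improves,
    halve the learning rate once it stops improving, stop when halving
    no longer helps (one plausible reading of the schedule above)."""
    best, halving = evaluate(), False
    for _ in range(max_epochs):
        if halving:
            alpha /= 2.0        # halve the rate at the start of the epoch
        train_epoch(alpha)
        acc = evaluate()        # per-token accuracy on validation data
        if acc > best:
            best = acc          # still improving: continue
        elif halving:
            break               # no improvement even after halving: stop
        else:
            halving = True      # start halving the learning rate
    return best, alpha
```

Starting from $\alpha = 0.1$, the rate decays only after validation accuracy plateaus, which guards against over-fitting.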
The second step consists in using the trained model as a target language tagger (using our common vector representation). It is important to note that if we train on a multilingual parallel corpus with \textit{N} languages ($N>2$), the same trained model will be able to tag all the \textit{N} languages.
Hence, our approach assumes that the word order in the source and target languages is similar. Between some languages, such as English and French, the word order in contexts containing nouns is reversed most of the time. For example, \textit{the European Commission} would be translated into \textit{la Commission europ\'eenne}. In order to deal with these word order constraints,
we also combine the RNN model with the cross-lingual projection model in our experiments.
\subsection{Dealing with out-of-vocabulary words}
For words absent from the initial parallel corpus, the vector representation is a vector of zero values. Consequently, during testing, the RNN model will use only the context information to tag the OOV words found in the test corpus. To deal with these OOV words\footnote{words which do not have a known vector representation}, we use the CBOW model of ~\newcite{Miko13distributed} to replace each OOV word by its closest known word in the current OOV word context. Once the closest word is found, its common vector representation is used (instead of the vector of zero values) at the input of the RNN.
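A rough sketch of this fallback, with toy vectors in place of real CBOW embeddings: average the vectors of the OOV word's context words (CBOW-style) and pick the in-vocabulary word whose embedding is closest by cosine similarity; that word's common representation then replaces the all-zero input vector:

```python
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return num / den if den else 0.0

def cbow_context_vector(context_words, emb):
    """Average the embeddings of the known context words (CBOW-style)."""
    vecs = [emb[w] for w in context_words if w in emb]
    n = len(vecs)
    return [sum(col) / n for col in zip(*vecs)]

def closest_known(context_words, emb):
    """emb: dict mapping in-vocabulary words to their embedding vectors."""
    ctx = cbow_context_vector(context_words, emb)
    return max(emb, key=lambda w: cosine(ctx, emb[w]))

# toy example with orthogonal 2-d embeddings
emb = {"chat": [1.0, 0.0], "maison": [0.0, 1.0]}
replacement = closest_known(["chat"], emb)
```

The actual system uses trained CBOW vectors rather than these toy values, but the selection logic is the same nearest-neighbour idea.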
\subsection{Combining Simple Cross-lingual Projection and RNN Models}
\label{Combined Model}
Since the simple cross-lingual projection model \textit{M1} and RNN model \textit{M2} use different strategies for tagging (TNT is based on Markov models while RNN is a neural network), we assume that these two models can be complementary. To keep the benefits of each approach, we explore how to combine them with linear interpolation.
Formally, the probability to tag a given word \textit{w} is computed as
\begin{equation}
\small
P_{M12}(t|w)=(\mu P_{M1}(t|w,C_{M1})+(1-\mu)P_{M2}(t|w,C_{M2}))
\label{ProbCombEq}
\end{equation}
where, $C_{M1}$ and $C_{M2}$ are the context of $w$ considered by \textit{M1} and \textit{M2} respectively. The relative importance of each model is adjusted through the interpolation parameter $\mu$. The word $w$ is tagged with the most probable tag, using the function $f$ described as
\begin{equation}
f(w)= \arg\max_{t}(P_{M12}(t|w))
\label{FoncCombEq}
\end{equation}
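The interpolation and the argmax decision above can be sketched as follows (the toy tag distributions are illustrative):

```python
def combine(P1, P2, mu):
    """Linear interpolation of the two models' tag posteriors:
    P_M12(t|w) = mu * P_M1(t|w) + (1 - mu) * P_M2(t|w)."""
    tags = set(P1) | set(P2)
    return {t: mu * P1.get(t, 0.0) + (1.0 - mu) * P2.get(t, 0.0)
            for t in tags}

def best_tag(P1, P2, mu):
    # f(w) = argmax_t P_M12(t|w)
    P = combine(P1, P2, mu)
    return max(P, key=P.get)

# toy posteriors from the projection model (P1) and the RNN model (P2)
P1 = {"NOUN": 0.6, "VERB": 0.4}
P2 = {"NOUN": 0.3, "VERB": 0.7}
```

With $\mu$ close to 1 the projection model dominates; with $\mu$ close to 0 the RNN does.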
\vspace{-0.1cm}
\section{Experiments}
\label{Experiments}
\vspace{-0.1cm}
Our models are evaluated on two labeling tasks: Cross-language Part-Of-speech (POS) tagging and Multilingual Super Sense Tagging (SST).
\vspace{-0.1cm}
\subsection{Multilingual POS Tagging}
\vspace{-0.1cm}
We applied our method to build RNN POS taggers for four target languages (French, German, Greek and Spanish), with English as the source language.
In order to determine the effectiveness of our common words representation described in section 3.2.1, we also investigated the use of state-of-the-art bilingual word embeddings (using MultiVec Toolkit \cite{Bera16}) as input to our RNN.
\vspace{-0.1cm}
\subsubsection{Dataset}
\vspace{-0.1cm}
For French as a target language, we used a training set of $10,000$ parallel sentences, a validation set of $1000$ English sentences, and a test set of $1000$ French sentences, all extracted from the ARCADE II English-French corpus ~\cite{vero08}. The test set is tagged with the French {\it TreeTagger} \cite{Schmid95} and then manually checked.
For German, Greek and Spanish as target languages, we used training and validation data extracted from the Europarl corpus ~\cite{Koeh05}, which are a subset of the training data used in ~\cite{Das11,Duon13}. This choice allows us to compare our results with those of ~\cite{Das11,Duon13,Gouw15a}. The training set contains $65,000$ bi-sentences; a validation set of $10,000$ bi-sentences is also available. For testing, we use the same test corpora as ~\cite{Das11,Duon13,Gouw15a} (bi-sentences from the CoNLL shared tasks on dependency parsing ~\cite{Buch06}). The evaluation metric ({\it per-token} accuracy) and the ~\newcite{Petr12} \textit{universal tagset}
are used for evaluation.
\begin{table*}[!t]
\centering
\small
\begin{tabular}{|l||c|c||c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\backslashbox[25mm]{\textbf{Model}}{\textbf{Lang.}}}& \multicolumn{2}{c||}{\textbf{French}} & \multicolumn{2}{c|}{\textbf{German}} & \multicolumn{2}{c|}{\textbf{Greek}} & \multicolumn{2}{c|}{\textbf{Spanish}} \\ \cline{2-9}
& All words & OOV & All words & OOV & All words & OOV & All words & OOV \\ \hline
Simple Projection & 80.3 & 77.1 & 78.9 &73.0 &77.5 &72.8 & 80.0 & 79.7 \\
\hline
SRNN MultiVec & 75.0 & 65.4 & 70.3 & 68.8 & 71.1 & 65.4 & 73.4 & 62.4 \\
\hline
\hline
SRNN & 78.5 & 70.0 &76.1 &76.4 & 75.7 & 70.7 & 78.8 &72.6 \\
\hline
BRNN & 80.6 & 70.9 & 77.5 & 76.6 & 77.2 & 71.0 & 80.5 & 73.1 \\
\hline
BRNN - OOV & 81.4 & 77.8 & 77.6 & 77.8 & 77.9 & 75.3 & 80.6 & 74.7 \\
\hline
Projection + SRNN & 84.5 & 78.8 & 81.5 & 77.0 &78.3 & 74.6 & 83.6 &81.2 \\
\hline
Projection + BRNN & 85.2 & 79.0 & 81.9 & 77.1 & 79.2 & 75.0 & 84.4 & 81.7 \\
\hline
Projection + BRNN - OOV & \textbf{85.6 } & \textbf{80.4 } & 82.1 & \textbf{78.7 } & 79.9 & \textbf{ 78.5 } & \textbf{ 84.4 } & \textbf{ 81.9 } \\
\hline
\hline
(Das, 2011) & --- & --- & 82.8 & --- & \textbf{82.5 } & --- & 84.2 & --- \\
\hline
(Duong, 2013) & --- & --- & \textbf{85.4 } & --- & 80.4 & --- & 83.3 & --- \\
\hline
(Gouws, 2015a) & --- & --- & 84.8 & --- & --- & --- & 82.6 & --- \\
\hline
\end{tabular}
\caption[The LOF caption]{\label{Tab-RNN-POS} Token-level POS tagging accuracy for Simple Projection, SRNN using MultiVec bilingual word embeddings as input, RNN\protect\footnotemark, Projection+RNN and methods of Das \& Petrov (2011), Duong et al (2013) and Gouws \& S{\o}gaard (2015).}
\end{table*}
For training, the English (source) sides of the training corpora (ARCADE II and Europarl) and of the validation corpora are tagged with the English {\it TreeTagger} toolkit. Using the mapping provided by ~\newcite{Petr12}, we map the TreeTagger and CoNLL tagsets to the common \textit{Universal Tagset}.
In order to build our baseline unsupervised tagger (based on a Simple Cross-lingual Projection -- see section \ref{Cross_lingual}), we also tag the target side of the training corpus, with tags projected from the English side through word alignments established by GIZA++. After tag projection, a target language POS tagger based on the TNT approach ~\cite{Bran00} is trained.
\footnotetext{For RNN models, only one (same) system is used to tag German, Greek and Spanish}
The combined model is built for each considered language using cross-validation on the test corpus. First, the test corpus is split into 2 equal parts and, on each part, we estimate the interpolation parameter $\mu$ (Equation \ref{ProbCombEq}) that maximizes the {\em per-token} accuracy score. Then each part of the test corpus is tagged using the combined model tuned
on the other part, and vice versa (standard cross-validation procedure).
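A sketch of this two-fold tuning procedure, where `score` is a hypothetical hook returning the per-token accuracy of the combined model on a given part for a given $\mu$:

```python
def tune_mu(score, part, grid=None):
    """Grid search for the interpolation weight mu on one part
    of the test corpus, maximizing per-token accuracy."""
    grid = grid or [i / 20.0 for i in range(21)]  # mu in {0.0, 0.05, ..., 1.0}
    return max(grid, key=lambda mu: score(part, mu))

def cross_validated_accuracy(score, part_a, part_b):
    mu_a = tune_mu(score, part_a)   # tuned on part A, applied to part B
    mu_b = tune_mu(score, part_b)   # tuned on part B, applied to part A
    return (score(part_b, mu_a) + score(part_a, mu_b)) / 2.0
```

Each half is thus always scored with a $\mu$ estimated on the other half, so the reported accuracy never uses a weight tuned on the data being scored.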
We trained MultiVec bilingual word embeddings on the parallel Europarl corpus between English and each of the target languages considered.
\subsubsection{Results and discussion}
Table \ref{Tab-RNN-POS} reports the results obtained for unsupervised POS tagging. We note that the POS tagger based on the bidirectional RNN (BRNN) performs better than the simple RNN (SRNN), which means that both past and future contexts help select the correct tag.
Table \ref{Tab-RNN-POS} also shows the performance before
and after applying our procedure for handling OOVs in BRNNs: after replacing OOVs by their closest known words using CBOW, the tagging accuracy significantly increases.
As shown in the same table, our RNN models' accuracy is close to that of the simple projection tagger. It achieves results comparable to those of ~\newcite{Das11} and ~\newcite{Duon13} (who used the full Europarl corpus while we use only a $65,000$-sentence subset of it) and to ~\newcite{Gouw15a} (who used extra resources such as Wiktionary and Wikipedia). Interestingly, RNN models learned using our common words representation (section 3.2.1) seem to perform significantly better than RNN models using MultiVec bilingual word embeddings.
It is also important to note that only one single SRNN and BRNN tagger applies to German, Greek and Spanish; so this is a truly multilingual POS tagger!
Finally, as for several other NLP tasks such as language modelling or machine translation (where standard and NN-based models are generally combined in order to obtain optimal results), the combination of standard and RNN-based approaches (\textit{Projection+\_}) seems necessary to further optimize POS tagging accuracies.
\subsection{Multilingual SST}
In order to measure the impact of the parallel corpus quality on our method, we also learn our SST models on the multilingual parallel corpus MultiSemCor (MSC), which is the result of manual or automatic translation of SemCor
from English into Italian and French.
\subsubsection{Dataset}
\textbf{SemCor} SemCor \cite{Miller93}
is a subset of the Brown Corpus \cite{Kucera79} labeled with \textit{WordNet} \cite{Fellbaum98} senses.
\hspace{-0.4cm}\textbf{MultiSemCor} The English-Italian MultiSemcor (MSC-IT-1) corpus is a manual translation of the English SemCor to Italian \cite{Bent04}. As we already mentioned, we are also interested in measuring the impact of the parallel corpus quality on our method. For this we use two translation systems: (a) Google Translate to translate the English SemCor to Italian (MSC-IT-2) and French (MSC-FR-2). (b) LIG machine translation system ~\cite{Besacier12} to translate the English SemCor to French (MSC-FR-1).
\hspace{-0.4cm}\textbf{Training corpus} SemCor is labeled with \textit{WordNet} synsets. However, because we train models for SST, we convert the SemCor synset annotations to super senses. We learn our models using the four different versions of MSC (MSC-IT-1,2 and MSC-FR-1,2), with the modified SemCor on the source side.
\hspace{-0.4cm}\textbf{Test Corpus}
To evaluate our models, we used the SemEval 2013 Task 12 (Multilingual Word Sense Disambiguation) \cite{Navigli13} test corpora, which are available in 5 languages (English, French, German, Spanish and Italian) and labeled with \textit{BabelNet} ~\cite{Navigli12} senses. We map BabelNet senses to WordNet synsets, then WordNet synsets are mapped to super senses.
\subsubsection{SST Systems Evaluated}
The goals of our SST experiments are twofold: first, to investigate the effectiveness of using POS information to build a multilingual super sense tagger; second, to measure the impact of the parallel corpus quality (manual or automatic translation) on our RNN models (SRNN, BRNN and our proposed variants). To summarize, we build four super sense taggers based on the baseline cross-lingual projection (see section \ref{Cross_lingual}) using the four versions of MultiSemCor (MSC-IT-1, MSC-IT-2, MSC-FR-1, MSC-FR-2) described above. Then we use the same four versions to train our multilingual SST models based on SRNN and BRNN. For learning our multilingual SST models based on the RNN variants proposed in part (C) of section 3.2.2, we also tag SemCor using \textit{TreeTagger} (the POS tagger proposed by ~\newcite{Schmid95}).
\subsubsection{Results and discussion}
Our models are evaluated on the SemEval 2013 Task 12 test corpora. Results are directly comparable with those of systems which participated in this evaluation campaign. We report two SemEval 2013 (unsupervised) system results for comparison:
\begin{itemize}
\item \textbf{MFS Semeval 2013}: the most frequent sense baseline provided by SemEval 2013 for Task 12;
it is a strong baseline, obtained by using an external resource (the WordNet most frequent sense).
\item \textbf{GETALP}: a fully unsupervised WSD system proposed by ~\newcite{Schwab12ant}, based on an Ant-Colony algorithm.
\end{itemize}
The DAEBAK! \cite{Navigli10} and UMCC-DLSI \cite{Gutierrez11} systems also participated in SemEval 2013 Task 12. However, they use a supervised approach
\footnote{DAEBAK! and UMCC-DLSI for SST have obtained: 68.1\% and 72.5\% on Italian; 59.8\% and 67.6 \% on French}.
Table \ref{Tab-RNN-SST} shows the results obtained by our RNN models and by two SemEval 2013 WSD systems. SRNN-POS-X and BRNN-POS-X refer to our RNN variants: \textit{In} means input layer, \textit{H1} means first hidden layer and \textit{H2} means second hidden layer.
We achieve the best performance on Italian using the clean MSC-IT-1 corpus, while noisy training corpora degrade SST performance. The best results are obtained with the combination of simple projection and RNN, which confirms (as for POS tagging) that both approaches are complementary.
We also observe that the RNN approach seems more robust than simple projection on noisy corpora. This is probably due to the fact that no word alignments are required in our cross-language RNN. Moreover, BRNN-POS-H2-OOV achieves the best performance, which shows that integrating POS information in RNN models and dealing with OOV words are both useful for building efficient multilingual super sense taggers. Finally, it is worth mentioning that integrating low level (POS) information late (at the last hidden layer) seems to be the best option in our case.
\begin{table*}[!t]
\centering
\small
\begin{tabular}{|l|l||c|c||c|c|}
\hline
\multicolumn{2}{|c||}{\textbf{Model} } & \multicolumn{2}{c||}{\textbf{Italian}} & \multicolumn{2}{c|}{\textbf{French}} \\ \cline{1-6}
\multirow{3}{*}{\begin{sideways}Baseline\end{sideways}} & & \textbf{MSC-IT-1} & \textbf{MSC-IT-2} & \textbf{MSC-FR-1} & \textbf{MSC-FR-2} \\
& & \textbf{trans man.} & \textbf{trans. auto} & \textbf{trans. auto} & \textbf{trans auto.}\\
\cline{2-6}
& Simple Projection & 61.3 & 45.6 & 42.6 & 44.5 \\
\hline
\hline
\multirow{9}{*}{\begin{sideways}SST Based RNN \end{sideways}}
& SRNN & 59.4 & 46.2 & 46.2 & 47.0 \\
\cline{2-6}
& BRNN & 59.7 & 46.2 & 46.0 & 47.2 \\
\cline{2-6}
& SRNN-POS-In & 61.0 & 47.0 & 46.5 & 47.3 \\
\cline{2-6}
& SRNN-POS-H1 & 59.8 & 46.5 & 46.8 & 47.4 \\
\cline{2-6}
& SRNN-POS-H2 & 63.1 & 48.7 & 47.7 & 49.8\\
\cline{2-6}
& BRNN-POS-In & 61.2 & 47.0 & 46.4 & 47.3 \\
\cline{2-6}
& BRNN-POS-H1 & 60.1 & 46.5 & 46.8 & 47.5 \\
\cline{2-6}
& BRNN-POS-H2 & 63.2 & 48.8 & 47.7 & 50 \\
\cline{2-6}
& BRNN-POS-H2 - OOV & 64.6 & 49.5 & 48.4 & 50.7 \\
\hline
\hline
\multirow{9}{*}{\begin{sideways}Combination \end{sideways}}
& Projection + SRNN & 62.0 & 46.7 & 46.5 & 47.4 \\
\cline{2-6}
& Projection + BRNN & 62.2 & 46.8 & 46.4 & 47.5 \\
\cline{2-6}
& Projection + SRNN-POS-In & 62.9 & 47.4 & 46.9 & 47.7 \\
\cline{2-6}
& Projection + SRNN-POS-H1 & 62.5 & 47.0 & 47.1 & 48.0 \\
\cline{2-6}
& Projection + SRNN-POS-H2 & 63.5 & 49.2 & 48.0 & 50.1 \\
\cline{2-6}
& Projection + BRNN-POS-In & 62.9 & 47.5 & 46.9 & 47.8 \\
\cline{2-6}
& Projection + BRNN-POS-H1 & 62.7 & 47.0 & 47.0 & 48.0 \\
\cline{2-6}
& Projection + BRNN-POS-H2 &63.6 & 49.3 & 48.0 & 50.3 \\
\cline{2-6}
& Projection + BRNN-POS-H2 - OOV &\textbf{64.7} & 49.8 & 48.6 & 51.0 \\
\hline
\hline
\multirow{2}{*}{\begin{sideways}S-E\end{sideways}} & MFS Semeval 2013 & \multicolumn{2}{c||} {60.7} & \multicolumn{2}{c|} {\textbf{52.4}} \\
\cline{2-6}
& GETALP \cite{Schwab12ant} & \multicolumn{2}{c||} {40.2} & \multicolumn{2}{c|} {34.6}\\
\hline
\end{tabular}
\caption{Super Sense Tagging (SST) accuracy for Simple Projection, RNN and their combination.\label{Tab-RNN-SST}}
\end{table*}
\section{Conclusion}
\label{Conclusion}
In this paper, we have presented an approach based on recurrent neural networks (RNN) to induce multilingual text analysis tools. We have studied Simple and Bidirectional RNN architectures on multilingual POS and SST tagging. We have also proposed new RNN variants in order to take into account low level (POS) information in a super sense tagging task. Our approach has the following advantages: (a) it uses a language-independent word representation (based only on word co-occurrences in a parallel corpus), (b) it provides truly multilingual taggers (1 tagger for N languages), and (c) it can be easily adapted to a new target language (when a small amount of supervised data is available,
a previous study ~\cite{AnonymePACLIC2015,AnonymeTALN2015} has shown the effectiveness of our method in a weakly supervised context).
Short term perspectives are to apply multi-task learning to build systems that simultaneously perform syntactic and semantic analysis. Adding out-of-language data to improve our RNN taggers is also possible (and interesting to experiment) with our common (multilingual) vector representation.
\begin{comment}
\section{Credits}
This document has been adapted from the instructions for the
COLING-2014 proceedings compiled by Joachim Wagner, Liadh Kelly
and Lorraine Goeuriot,
which are, in turn, based on the instructions for earlier ACL proceedings,
including
those for ACL-2014 by Alexander Koller and Yusuke Miyao,
those for ACL-2012 by Maggie Li and Michael
White, those for ACL-2010 by Jing-Shing Chang and Philipp Koehn,
those for ACL-2008 by Johanna D. Moore, Simone Teufel, James Allan,
and Sadaoki Furui, those for ACL-2005 by Hwee Tou Ng and Kemal
Oflazer, those for ACL-2002 by Eugene Charniak and Dekang Lin, and
earlier ACL and EACL formats. Those versions were written by several
people, including John Chen, Henry S. Thompson and Donald
Walker. Additional elements were taken from the formatting
instructions of the {\em International Joint Conference on Artificial
Intelligence}.
\section{Introduction}
\label{intro}
\blfootnote{
%
%
\hspace{-0.65cm}
Place licence statement here for the camera-ready version, see
Section~\ref{licence} of the instructions for preparing a
manuscript.
%
%
%
}
The following instructions are directed to authors of papers submitted
to COLING-2016 or accepted for publication in its proceedings. All
authors are required to adhere to these specifications. Authors are
required to provide a Portable Document Format (PDF) version of their
papers. \textbf{The proceedings are designed for printing on A4
paper.}
Authors from countries in which access to word-processing systems is
limited should contact the publication chairs,
Hitoshi Isahara and Masao Utiyama
(\texttt{[email protected], [email protected]}),
as soon as possible.
We will make additional instructions available at ``Instructions for authors'' section of
\url{http://coling2016.anlp.jp}. Please check
this website regularly.
\section{General Instructions}
Manuscripts must be in single-column format.
The title, authors' names and complete
addresses
must be centred at the top of the first page, and
any full-width figures or tables (see the guidelines in
Subsection~\ref{ssec:first}). {\bf Type single-spaced.} Start all
pages directly under the top margin. See the guidelines later
regarding formatting the first page. The manuscript should be
printed single-sided and its length
should not exceed the maximum page limit described in Section~\ref{sec:length}.
Do not number the pages.
\subsection{Electronically-available resources}
We strongly prefer that you prepare your PDF files using \LaTeX{} with
the official COLING 2016 style file (coling2016.sty) and bibliography style
(acl.bst). These files are available in coling2016.zip
at ``Instructions for authors'' section of \url{http://coling2016.anlp.jp}.
You will also find the document
you are currently reading (coling2016.pdf) and its \LaTeX{} source code
(coling2016.tex) in coling2016.zip.
You can alternatively use Microsoft Word to produce your PDF file. In
this case, we strongly recommend the use of the Word template file
(coling2016.dot) in coling2016.zip. If you have an option, we
recommend that you use the \LaTeX2e{} version. If you will be
using the Microsoft Word template, we suggest that you anonymise
your source file so that the pdf produced does not retain your
identity. This can be done by removing any personal information
from your source document properties.
\subsection{Format of Electronic Manuscript}
\label{sect:pdf}
For the production of the electronic manuscript you must use Adobe's
Portable Document Format (PDF). PDF files are usually produced from
\LaTeX{} using the \textit{pdflatex} command. If your version of
\LaTeX{} produces Postscript files, you can convert these into PDF
using \textit{ps2pdf} or \textit{dvipdf}. On Windows, you can also use
Adobe Distiller to generate PDF.
Please make sure that your PDF file includes all the necessary fonts
(especially tree diagrams, symbols, and fonts with Asian
characters). When you print or create the PDF file, there is usually
an option in your printer setup to include none, all or just
non-standard fonts. Please make sure that you select the option of
including ALL the fonts. \textbf{Before sending it, test your PDF by
printing it from a computer different from the one where it was
created.} Moreover, some word processors may generate very large PDF
files, where each page is rendered as an image. Such images may
reproduce poorly. In this case, try alternative ways to obtain the
PDF. One way on some systems is to install a driver for a postscript
printer, send your document to the printer specifying ``Output to a
file'', then convert the file to PDF.
It is of utmost importance to specify the \textbf{A4 format} (21 cm
x 29.7 cm) when formatting the paper. When working with
{\tt dvips}, for instance, one should specify {\tt -t a4}.
If you cannot meet the above requirements
for the
production of your electronic submission, please contact the
publication chairs as soon as possible.
\subsection{Layout}
\label{ssec:layout}
Format manuscripts with a single column to a page, in the manner these
instructions are formatted. The exact dimensions for a page on A4
paper are:
\begin{itemize}
\item Left and right margins: 2.5 cm
\item Top margin: 2.5 cm
\item Bottom margin: 2.5 cm
\item Width: 16.0 cm
\item Height: 24.7 cm
\end{itemize}
\noindent Papers should not be submitted on any other paper size.
If you cannot meet the above requirements for
the production of your electronic submission, please contact the
publication chairs above as soon as possible.
\subsection{Fonts}
For reasons of uniformity, Adobe's {\bf Times Roman} font should be
used. In \LaTeX2e{} this is accomplished by putting
\begin{quote}
\begin{verbatim}
\usepackage{times}
\usepackage{latexsym}
\end{verbatim}
\end{quote}
in the preamble. If Times Roman is unavailable, use {\bf Computer
Modern Roman} (\LaTeX2e{}'s default). Note that the latter is about
10\% less dense than Adobe's Times Roman font.
The {\bf Times New Roman} font, which is configured for us in the
Microsoft Word template (coling2016.dot) and which some Linux
distributions offer for installation, can be used as well.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|rl|}
\hline \bf Type of Text & \bf Font Size & \bf Style \\ \hline
paper title & 15 pt & bold \\
author names & 12 pt & bold \\
author affiliation & 12 pt & \\
the word ``Abstract'' & 12 pt & bold \\
section titles & 12 pt & bold \\
document text & 11 pt &\\
captions & 11 pt & \\
sub-captions & 9 pt & \\
abstract text & 10 pt & \\
bibliography & 10 pt & \\
footnotes & 9 pt & \\
\hline
\end{tabular}
\end{center}
\caption{\label{font-table} Font guide. }
\end{table}
\subsection{The First Page}
\label{ssec:first}
Centre the title, author's name(s) and affiliation(s) across
the page.
Do not use footnotes for affiliations. Do not include the
paper ID number assigned during the submission process.
{\bf Title}: Place the title centred at the top of the first page, in
a 15 pt bold font. (For a complete guide to font sizes and styles,
see Table~\ref{font-table}) Long titles should be typed on two lines
without a blank line intervening. Approximately, put the title at 2.5
cm from the top of the page, followed by a blank line, then the
author's names(s), and the affiliation on the following line. Do not
use only initials for given names (middle initials are allowed). Do
not format surnames in all capitals (e.g., use ``Schlangen'' not
``SCHLANGEN''). Do not format title and section headings in all
capitals as well except for proper names (such as ``BLEU'') that are
conventionally in all capitals. The affiliation should contain the
author's complete address, and if possible, an electronic mail
address. Start the body of the first page 7.5 cm from the top of the
page.
The title, author names and addresses should be completely identical
to those entered to the electronical paper submission website in order
to maintain the consistency of author information among all
publications of the conference. If they are different, the publication
chairs may resolve the difference without consulting with you; so it
is in your own interest to double-check that the information is
consistent.
{\bf Abstract}: Type the abstract between addresses and main body.
The width of the abstract text should be
smaller than main body by about 0.6 cm on each side.
Centre the word {\bf Abstract} in a 12 pt bold
font above the body of the abstract. The abstract should be a concise
summary of the general thesis and conclusions of the paper. It should
be no longer than 200 words. The abstract text should be in 10 pt font.
{\bf Text}: Begin typing the main body of the text immediately after
the abstract, observing the single-column format as shown in
the present document. Do not include page numbers.
{\bf Indent} when starting a new paragraph. Use 11 pt for text and
subsection headings, 12 pt for section headings and 15 pt for
the title.
{\bf Licence}: Include a licence statement as an unmarked (unnumbered)
footnote on the first page of the final, camera-ready paper.
See Section~\ref{licence} below for details and motivation.
\subsection{Sections}
{\bf Headings}: Type and label section and subsection headings in the
style shown on the present document. Use numbered sections (Arabic
numerals) in order to facilitate cross references. Number subsections
with the section number and the subsection number separated by a dot,
in Arabic numerals. Do not number subsubsections.
{\bf Citations}: Citations within the text appear in parentheses
as~\cite{Gusfield:97} or, if the author's name appears in the text
itself, as Gusfield~\shortcite{Gusfield:97}. Append lowercase letters
to the year in cases of ambiguity. Treat double authors as
in~\cite{Aho:72}, but write as in~\cite{Chandra:81} when more than two
authors are involved. Collapse multiple citations as
in~\cite{Gusfield:97,Aho:72}. Also refrain from using full citations
as sentence constituents. We suggest that instead of
\begin{quote}
``\cite{Gusfield:97} showed that ...''
\end{quote}
you use
\begin{quote}
``Gusfield \shortcite{Gusfield:97} showed that ...''
\end{quote}
If you are using the provided \LaTeX{} and Bib\TeX{} style files, you
can use the command \verb|\newcite| to get ``author (year)'' citations.
As reviewing will be double-blind, the submitted version of the papers
should not include the authors' names and affiliations. Furthermore,
self-references that reveal the author's identity, e.g.,
\begin{quote}
``We previously showed \cite{Gusfield:97} ...''
\end{quote}
should be avoided. Instead, use citations such as
\begin{quote}
``Gusfield \shortcite{Gusfield:97}
previously showed ... ''
\end{quote}
\textbf{Please do not use anonymous citations} and do not include
any of the following when submitting your paper for review:
acknowledgements, project names, grant numbers, and names or URLs of
resources or tools that have only been made publicly available in
the last 3 weeks or are about to be made public.
Papers that do not
conform to these requirements may be rejected without review.
These details can, however, be included in the camera-ready, final paper.
\textbf{References}: Gather the full set of references together under
the heading {\bf References}; place the section before any Appendices,
unless they contain references. Arrange the references alphabetically
by first author, rather than by order of occurrence in the text.
Provide as complete a citation as possible, using a consistent format,
such as the one for {\em Computational Linguistics\/} or the one in the
{\em Publication Manual of the American
Psychological Association\/}~\cite{APA:83}. Use of full names for
authors rather than initials is preferred. A list of abbreviations
for common computer science journals can be found in the ACM
{\em Computing Reviews\/}~\cite{ACM:83}.
The \LaTeX{} and Bib\TeX{} style files provided roughly fit the
American Psychological Association format, allowing regular citations,
short citations and multiple citations as described above.
{\bf Appendices}: Appendices, if any, directly follow the text and the
references (but see above). Letter them in sequence and provide an
informative title: {\bf Appendix A. Title of Appendix}.
\subsection{Footnotes}
{\bf Footnotes}: Put footnotes at the bottom of the page and use 9 pt
text. They may be numbered or referred to by asterisks or other
symbols.\footnote{This is how a footnote should appear.} Footnotes
should be separated from the text by a line.\footnote{Note the line
separating the footnotes from the text.}
\subsection{Graphics}
{\bf Illustrations}: Place figures, tables, and photographs in the
paper near where they are first discussed, rather than at the end, if
possible.
Colour
illustrations are discouraged, unless you have verified that
they will be understandable when printed in black ink.
{\bf Captions}: Provide a caption for every illustration; number each one
sequentially in the form: ``Figure 1. Caption of the Figure.'' ``Table 1.
Caption of the Table.'' Type the captions of the figures and
tables below the body, using 11 pt text.
Narrow graphics together with the single-column format may lead to
large empty spaces,
see for example the wide margins on both sides of Table~\ref{font-table}.
If you have multiple graphics with related content, it may be
preferable to combine them in one graphic.
You can identify the sub-graphics with sub-captions below the
sub-graphics numbered (a), (b), (c) etc.\ and using 9 pt text.
The \LaTeX{} packages wrapfig, subfig, subtable and/or subcaption
may be useful.
\subsection{Licence Statement}
\label{licence}
As in COLING-2014,
we require that authors license their
camera-ready papers under a
Creative Commons Attribution 4.0 International Licence
(CC-BY).
This means that authors (copyright holders) retain copyright but
grant everybody
the right to adapt and re-distribute their paper
as long as the authors are credited and modifications listed.
In other words, this license lets researchers use research papers for their research without legal issues.
Please refer to
\url{http://creativecommons.org/licenses/by/4.0/} for the
licence terms.
Depending on whether you use American or British English in your
paper, please include one of the following as an unmarked
(unnumbered) footnote on page 1 of your paper.
The \LaTeX{} style file (coling2016.sty) adds a command
\texttt{blfootnote} for this purpose, and usage of the command is
prepared in the \LaTeX{} source code (coling2016.tex) at the start
of Section~\ref{intro} ``Introduction''.
\begin{itemize}
%
%
\item This work is licensed under a Creative Commons
Attribution 4.0 International Licence.
Licence details:
\url{http://creativecommons.org/licenses/by/4.0/}
%
%
\item This work is licenced under a Creative Commons
Attribution 4.0 International License.
License details:
\url{http://creativecommons.org/licenses/by/4.0/}
\end{itemize}
We strongly prefer that you licence your paper as the CC license
above. However, if it is impossible for you to use that license, please
contact the publication chairs,
Hitoshi Isahara and Masao Utiyama
(\texttt{[email protected], [email protected]}),
before you submit your final version of accepted papers.
(Please note that this license statement is only related to the final versions of accepted papers.
It is not related to papers submitted for review.)
\section{Translation of non-English Terms}
It is also advised to supplement non-English characters and terms
with appropriate transliterations and/or translations
since not all readers understand all such characters and terms.
Inline transliteration or translation can be represented in
the order of: original-form transliteration ``translation''.
\section{Length of Submission}
\label{sec:length}
The maximum submission length is 8 pages (A4), plus two extra pages for
references. Authors of accepted papers will be given additional space in
the camera-ready version to reflect space needed for changes stemming
from reviewers comments.
Papers that do not
conform to the specified length and formatting requirements may be
rejected without review.
\section*{Acknowledgements}
The acknowledgements should go immediately before the references. Do
not number the acknowledgements section. Do not include this section
when submitting your paper for review.
\end{comment}
\bibliographystyle{acl}
\section{Introduction}
The classical \emph{knapsack problem} is the following: given a collection of items each with a value and a weight, and given a weight limit, find a subset of items whose total weight is at most the weight limit, and whose value is maximized. If $n$ denotes the number of items, this can be formulated as the integer program $\{\max \sum_{i=1}^n x_iv_i \mid x \in \{0,1\}^n, \sum_{i=1}^n x_i w_i \le \ell\}$, where $v_i$ denotes the value of item $i$, $w_i$ denotes the weight of item $i$, and $\ell$ denotes the weight limit.
In the more general \emph{$k$-dimensional knapsack} (or $k$-constrained knapsack) problem, there are $k$ different kinds of ``weight'' and a limit for each kind. An example for $k=3$ would be a robber who is separately constrained by the total mass, volume, and noisiness of the items he is choosing to steal. An orthogonal generalization is that the robber could take multiple copies of each item $i$, up to some prescribed limit of $d_i$ available copies. We therefore model the $k$-dimensional knapsack problem as
\begin{equation}
\{\max cx \mid x \in \Z^n, 0 \le x \le d, Ax \leq b\}\label{eq:ilpknap}\end{equation}
where $A$ is a $k$-by-$n$ matrix, $b$ is a vector of length $k$, and $d$ is a vector of length $n$, all non-negative and integral. Two special cases are common: if $d = \vo$ we call it the \emph{$0\textrm{-}1$ knapsack problem}; if $d = +\infty$, we call it the \emph{unbounded knapsack problem}.
Another natural generalization is the \emph{$k$-dimensional knapsack-cover problem},
$$\{\min cx \mid x \in \Z^n, 0 \le x \le d, Ax \geq b\}$$
which has analogous unbounded and 0-1 special cases. We sometimes call this version the \emph{covering version} and likewise \eqref{eq:ilpknap} is the \emph{packing version}.
On the positive side, for any fixed $k$, all of the above variants admit a simple pseudo-polynomial-time dynamic programming solution.
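For concreteness, here is a sketch of such a dynamic program in the simplest setting, the $0\textrm{-}1$ case with $k=1$ (a minimal illustration with names of our own choosing; the $k$-dimensional version indexes the table by all $k$ residual capacities):

```python
def knapsack_dp(values, weights, limit):
    """Pseudo-polynomial DP for 0-1 knapsack (k = 1): O(n * limit) time.

    best[w] holds the maximum value achievable with total weight <= w
    using the items processed so far.
    """
    best = [0] * (limit + 1)
    for v, w in zip(values, weights):
        # scan capacities downwards so each item is used at most once
        for cap in range(limit, w - 1, -1):
            best[cap] = max(best[cap], best[cap - w] + v)
    return best[limit]
```

The table size, and hence the running time, grows with the numeric value of the capacity, which is why this is only pseudo-polynomial.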
Chandra et al.~\cite{CHW76} gave the first PTAS (polynomial-time approximation scheme) for $k$-dimensional knapsack in 1976, and later an LP-based scheme was given by Frieze and Clarke~\cite{FC84}. See the book by Kellerer et al.~\cite[\S 9.4.2]{KPP04} for a more comprehensive literature review.
The case $k=1$ also admits a fully polynomial-time approximation scheme (FPTAS), but for $k \ge 2$ there is no FPTAS unless \PP=\NP. This was originally shown for 0-1 $k$-dimensional knapsack by Gens \& Levner~\cite{GL79} and Korte \& Schrader~\cite{KS80} (see also \cite{KPP04}) and subsequently for arbitrary $d$ by Magazine \& Chern~\cite{MC84}.
Our main result is the following:
\begin{theorem}\label{theorem:maint}
Let $k$ and $\epsilon$ be fixed. Given a $k$-dimensional knapsack (resp.~knapsack-cover) instance $\P$, there is a polynomial-sized extended LP relaxation $\L$ of $\P$ with $\OPT(\P) \ge (1-\epsilon)\OPT(\L)$ (resp.~with $\OPT(\P) \le (1+\epsilon)\OPT(\L)$).
\end{theorem}
Here ``polynomial-sized extended LP relaxation" means the following. First, $\P$ has $n$ variables. Then $\L$ must have those $n$ variables plus a polynomial number of other ones. The projection $\L'$ of $\L$ onto the first $n$ variables must contain the same integral solutions as $\P$. Finally, $\L$ and $\P$ must have the same objective function, i.e.~the objective function should ignore the extended variables.
In the proof, we will see that the LP can be constructed in polynomial time, and that a near-optimal integral solution can be obtained from an optimal extreme point fractional solution just by rounding down (resp.~up). The number of variables in the LP is $n^{O(k/\epsilon)}$ and the number of constraints is $kn^{O(k/\epsilon)}$. The \emph{integrality gap} of an IP is the worst-case ratio between the fractional and integral optimum, and therefore \prettyref{theorem:maint} can be equivalently stated as saying that $\P$ has integrality gap at most $1+\epsilon$.
Our result and the techniques we use are a generalization of a recent result of Bienstock~\cite{Bienstock08}, which dealt with the packing version for $k=1$. The key observation we contribute is that his ``filtering'' approach was also traditionally used to get a PTAS for multi-dimensional knapsack; in \emph{filtering} we exhaustively guess the $\gamma$ max-cost items in the knapsack for some constant $\gamma$.
The construction of $\L$ in \prettyref{theorem:maint} turns out to depend on the cost function $c$. A more interesting and challenging problem is to find an $\L$ which is independent of the cost function, since this gives a \emph{polyhedral approximation} $\L'$ of $\P$: e.g.~in the packing case, it implies $\L' \supset \P \supset (1-\epsilon)\L'$.
Bienstock's result~\cite{Bienstock08} actually gives an LP which does not depend on the item cost/profits $c$. We will show (in \prettyref{sec:indep}) that in the packing case, our approach can be similarly revised:
\begin{theorem}\label{theorem:maintwo}
Let $k$ and $\epsilon$ be fixed. Given a $k$-dimensional knapsack instance $\P$, there is a polynomial-sized extended LP relaxation $\L$ of $\P$ with $\OPT(\P) \ge (1-\epsilon)\OPT(\L)$, such that $\L$ does not depend on $c$.
\end{theorem}
This comes at the cost of an increase in size to $kn^{O(k^2/\epsilon)}$. For the covering case, achieving the same (a polynomial-sized extended LP relaxation independent of $c$ with integrality gap $\le 1+\epsilon$) is an interesting open problem; we elaborate at the end.
\subsection{Related Work}
Knapsack (whether packing or covering) has an FPTAS by dynamic programming, and it is well-known that dynamic programs of such a form can be solved as a shortest-path problem, which has an LP formulation. Nonetheless, there is no evident way to combine these steps to get an LP for knapsack with integrality gap $1+\epsilon$. The problem (say, for packing, which is simpler) is that the last step in the FPTAS is not merely to return the last entry of the DP table; rather, it finds the maximum scaled profit such that the minimum volume to obtain it fits inside the knapsack (and then recovers the actual solution). The naive fix is adding this volume constraint to the LP, but this makes the LP non-integral and then it is not clear how to proceed.
Bienstock \& McClosky~\cite{BM08} extend the work of Bienstock~\cite{Bienstock08} to covering problems and other settings, and also give an LP of size $n^2(1/\epsilon)^{\frac{1}{\epsilon}\log \frac{1}{\epsilon}}$ with integrality gap $1+\epsilon$ for 1-dimensional, 0-1 covering knapsack.\footnote{They use a disjunctive program; in essence, the LP guesses the most costly item in the knapsack, then for $i = 1, \dotsc, O(\frac{1}{\epsilon}\log\frac{1}{\epsilon})$ it guesses the number of items whose costs are $(1+\frac{1}{\epsilon})^{-(i, i+1]}$ times that cost, with all guesses $> \frac{1}{\epsilon}$ deemed equivalent. In particular the LP depends on the cost function. We remark that the method does not readily extend to $k$-dimensional knapsack. } There is some current work \cite{CSh08} on obtaining primal-dual algorithms (that is, not needing the ellipsoid method or interior-point subroutines) for knapsack-type covering problems with good approximation ratio and \cite{BM08} reports that the methods of \cite{CSh08} extend to a combinatorial LP-based approximation scheme for 1-dimensional covering knapsack.
Answering an open question of Bienstock~\cite{BM08} about the efficacy of automatic relaxations for the knapsack problem, Karlin et al.~\cite{KMN10} recently found that the ``Lasserre hierarchy'' of semidefinite programming relaxations, when applied to the 1-dimensional 0-1 packing knapsack problem, gives an SDP with integrality gap $1+\epsilon$ after $O(1/\epsilon^2)$ rounds.
Knapsack problems have a couple of interesting basic properties. The first contrasts with Lenstra's result~\cite{Lenstra1983} that for any fixed $k$, integer programs with $k$ constraints can be solved in polynomial time; in comparison, if we have nonnegativity constraints for every variable plus \emph{one other constraint}, we get the unbounded (1-dimensional) knapsack problem, which is \NP-hard~\cite{Lueker75}. Second, recall that for any optimization problem whose objective is integral, and whose optimal value is polynomial in the input size, any FPTAS can be used to get a pseudopolynomial-time algorithm. In contrast, 0-1 2-dimensional knapsack shows the converse is false: it has a pseudopolynomial-time algorithm, but getting an FPTAS is \NP-hard even when each profit $c_i$ is 1, e.g.~see~\cite[Thm.~9.4.1]{KPP04}.
\comment{We give two other recent developments in this field. There is a line of work in \emph{counting} the number of feasible solutions to a given $k$-dimensional knapsack problem (in which case there is no objective function $c$) and Dyer~\cite{Dyer03} recently gave a simple dynamic programming-based FPRAS (fully-polynomial time randomized approximation scheme) to count the number of feasible solutions for $k$-dimensional bounded packing knapsack. Separate from this,}
There is a line of work on maximizing constrained submodular functions. For non-monotone submodular maximization subject to $k$ linear packing constraints, the state of the art is by Lee et al.~\cite{LMNS09} who give a $(5+\epsilon)$-approximation algorithm. For monotone submodular maximization the state of the art is by Chekuri \& Vondr\'{a}k~\cite{CV09} who give a $(e/(e-1)+\epsilon)$-approximation subject to $k$ knapsack constraints and a matroid constraint. We note it is \NP-hard to obtain any factor better than $e/(e-1)$ for monotone submodular maximization over a matroid~\cite{F98}, so in this setting knapsack constraints only affect the best ratio by $\epsilon$, just like in our setting of LP-relative approximation.
\subsection{Overview}
First, we review rounding and filtering. Rounding is a standard approach to turn an optimal fractional solution into a nearly-optimal integral one, and here we lose up to $k$ times the maximum per-item profit. Filtering works well with rounding because it reduces the maximum per-item profit; the power of these ideas is already enough to get an LP-based approximation scheme~\cite{FC84}, but it uses a separate LP for each ``guess'' made in filtering. Therefore, like Bienstock~\cite{BM08}, we use disjunctive programming~\cite{Ba79} to combine all the separate LPs into a single one. The approach has some similarity to the knapsack-cover inequalities of Carr et al.~\cite{CFLP00}.
\section{Rounding and Filtering}\label{sec:knap-ptas}
We now explain the approach.
A knapsack instance \eqref{eq:ilpknap} is determined by the parameters $(A, b, c, d)$. The na\"ive LP relaxation of the knapsack problem is
\begin{equation}\{\max cx \mid x \in \R^n, 0 \le x \le d, Ax \leq b\}.\label{eq:kdimpacklp}\tag*{$\K(A, b, c, d)$}\end{equation}
In the following, \emph{fractional} means non-integral. The following lemma is standard.
\begin{lemma}
Let $x^*$ be an extreme point solution to the linear program \eqref{eq:kdimpacklp}. Then $x^*$ is fractional in at most $k$ coordinates. \label{lemma:knapstruct}
\end{lemma}
\begin{proof}
It follows from elementary LP theory that $x^* \in \R^n$ satisfies $n$ (linearly independent) constraints with equality. There are $k$ constraints of the form $A_jx \le b_j$; all other constraints are of the form $x_i \ge 0$ or $x_i \le d_i$, so at least $n-k$ of them hold with equality. Clearly $x_i \ge 0$ and $x_i \le d_i$ cannot both hold with equality for the same $i$, so $x^*_i \in \{0, d_i\}$ for at least $n-k$ distinct $i$, which gives the result.
\end{proof}
Therefore, we obtain the following primitive guarantee on a rounding strategy. Let $\lfloor \cdot \rfloor$ applied to a vector mean component-wise floor and let $c_{\max} := \max_i c_i$.
\begin{cor}\label{cor:knapround}
Let $x^*$ be an extreme point solution to the linear program \eqref{eq:kdimpacklp}. Then $c \lfloor x^* \rfloor \ge cx^* - kc_{\max}$.
\end{cor}
Now the idea is to take $x^*$ to be an optimal fractional solution, and use filtering (exhaustive guessing) to turn the additive guarantee into a multiplicative factor of $1+\epsilon$. Let $\gamma$ denote a parameter, which represents the size of a multi-set we will guess. For a non-negative vector $z$ let the notation $\lVert z \rVert_1$ mean $\sum_i z_i$. A \emph{guess} is an integral vector $g$ with $0 \le g \le d, Ag \le b$ and $\lVert g \rVert_1 \le \gamma$. It is easy to see the number of possible guesses is bounded by $(n+1)^\gamma$, and that for any constant $\gamma$ we can iterate through all guesses in polynomial time.
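The enumeration of guesses can be made concrete as follows (a minimal sketch with names of our own choosing; $A$ is passed as a list of its $k$ rows). Each guess is generated as a multiset of at most $\gamma$ item indices, then checked against the bounds $d$ and the constraints $Ag \le b$:

```python
from itertools import combinations_with_replacement

def enumerate_guesses(A, b, d, gamma):
    """Yield all guesses g: integral vectors with 0 <= g <= d,
    A g <= b and ||g||_1 <= gamma.  Multisets of at most gamma
    indices out of n items number at most (n+1)**gamma, so for
    constant gamma this enumeration is polynomial in n."""
    n = len(d)
    for size in range(gamma + 1):
        for multiset in combinations_with_replacement(range(n), size):
            g = [0] * n
            for i in multiset:
                g[i] += 1
            if any(g[i] > d[i] for i in range(n)):
                continue  # violates the per-item bound d
            if any(sum(row[i] * g[i] for i in range(n)) > bj
                   for row, bj in zip(A, b)):
                continue  # violates a knapsack constraint
            yield g
```

For instance, with $n=2$, one constraint $x_1 + x_2 \le 1$, $d = (1,1)$ and $\gamma = 1$, the three feasible guesses are $(0,0)$, $(1,0)$ and $(0,1)$.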
From now on we assume without loss of generality (by reordering items if necessary) that $c_1 \le c_2 \le \dotsb \le c_n$.
For a guess $g$ with $\lVert g \rVert_1 = \gamma$ we now define the \emph{residual knapsack problem} for $g$. The residual problem models how to optimally select the remaining objects \emph{under the restriction} that the $\gamma$ most profitable\footnote{To simplify the description, even if $c_{i+1} = c_i$ we think of item $i+1$ as more profitable than item $i$.} items chosen (counting multiplicity) are $g$. Let $\mu(g)$ denote $\min \{i \mid g_i > 0\}$.
Define $d^g$ to be the first $\mu(g)$ coordinates of $d-g$ followed by $n-\mu(g)$ zeroes, and $b^g = b - Ag$. The \emph{residual knapsack problem} for $g$ is $(A, b^g, c, d^g)$.
The residual problem for $g$ does not permit taking items with index more than $\mu(g)$ and so its $c_{\max}$ value may be thought of as $c_{\mu(g)}$ or less, which is at most $c \cdot g / \lVert g \rVert_1 = c \cdot g / \gamma$.
If a guess $g$ has $\lVert g \rVert_1 < \gamma$, define $b^g$ and $d^g$ to be all-zero. Then \prettyref{cor:knapround} gives the following.
\begin{cor}\label{cor:knapoptround}
Let $x_{\OPT}$ be an optimal integral knapsack solution for $(A, b, c, d)$. Let
$g$ be the $\gamma$ most profitable items in $x_{\OPT}$ (or all of them, if there are fewer than $\gamma$). Let $x^*$ be an optimal extreme point solution to $\K(A, b^g, c, d^g)$. Then $g + \lfloor x^* \rfloor$ is a feasible knapsack solution for $(A, b, c, d)$ with value at least $1-k/\gamma$ times optimal.
\end{cor}
\begin{proof}
We use $\OPT$ to denote $c \cdot x_{\OPT}$.
Note that $x_{\OPT}-g$ is feasible for the residual problem for $g$. Therefore $c \cdot x^* \ge \OPT - c \cdot g$. Moreover $c_{\max}$ in the residual problem for $g$ is not more than $c \cdot g / \gamma \le \frac{\OPT}{\gamma}$, so \prettyref{cor:knapround} shows that $$c \cdot \lfloor x^* \rfloor \ge c \cdot x^* - k \frac{\OPT}{\gamma} \ge \OPT - c \cdot g - k \frac{\OPT}{\gamma}$$
and consequently $\lfloor x^* \rfloor + g$ is a solution with value at least $\OPT(1-\frac{k}{\gamma})$, as needed.
\end{proof}
By taking $\gamma = k/\epsilon$ and solving $\K(A, b^g, c, d^g)$ for all possible $g$ we get the previously known PTAS for $k$-dimensional knapsack; we now refine the approach to get a single LP.
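For $k=1$ the residual LPs need no general-purpose solver: an optimal extreme point of a fractional knapsack is obtained greedily, filling items in order of value density and stopping at the first item that does not fit (that item is the single fractional coordinate, which rounding down discards). This lets the whole guess-and-round scheme be sketched compactly for the $0\textrm{-}1$, $k=1$ case (a simplified illustration with names of our own choosing; items are assumed sorted by nondecreasing value, as in the text):

```python
from itertools import combinations_with_replacement

def ptas_knapsack_01(values, weights, limit, gamma):
    """PTAS sketch for 0-1 knapsack (k = 1): guess the set g of the
    gamma most profitable chosen items, solve the residual fractional
    relaxation greedily, round down, and keep the best outcome.
    Returns a value of at least (1 - 1/gamma) times the optimum."""
    n = len(values)
    best = 0
    for size in range(gamma + 1):
        for guess in combinations_with_replacement(range(n), size):
            if len(set(guess)) != len(guess):
                continue  # 0-1 problem: no repeated items in a guess
            cap = limit - sum(weights[i] for i in guess)
            if cap < 0:
                continue  # infeasible guess
            val = sum(values[i] for i in guess)
            if size == gamma and guess:
                # residual problem: only items less profitable than the
                # cheapest guessed item, i.e. of index below mu(g)
                order = sorted(range(min(guess)),
                               key=lambda i: values[i] / weights[i],
                               reverse=True)
                room = cap
                for i in order:
                    if weights[i] <= room:
                        room -= weights[i]
                        val += values[i]
                    else:
                        break  # the fractional item; flooring drops it
            best = max(best, val)
    return best
```

Already with $\gamma = 2$ this recovers the optimum $220$ on the classical instance with values $(60,100,120)$, weights $(10,20,30)$ and limit $50$.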
\section{Disjunctive Programming}
We now review some disjunctive programming tools~\cite{Ba79}. The only result we need is that it is possible to write a compact LP for the convex hull of the union of several polytopes, provided that we have compact LPs for each one.
Suppose we have polyhedra $P^1 = \{x \in \R^n \mid A^1x \le b^1\}$ and $P^2 = \{x \in \R^n \mid A^2x \le b^2\}$. Both of these sets are convex and it is therefore easy to see that the convex hull of their union is the set
$$\textrm{conv.hull}(P^1 \cup P^2) = \{x \in \R^n \mid x = \lambda x^1 + (1-\lambda) x^2, 0 \le \lambda \le 1, A^1x^1 \le b^1, A^2x^2 \le b^2\}.$$
However, this is not a \emph{linear} program, e.g.\ since we multiply the variable $\lambda$ by the variables $x^1$. Nonetheless, it is not hard to see that the following is a linear formulation of the same set:
$$\textrm{conv.hull}(P^1 \cup P^2) = \{x \in \R^n \mid x = x^1 + x^2, 0 \le \lambda \le 1, A^1x^1 \le \lambda b^1, A^2x^2 \le (1-\lambda)b^2\}.$$
A similar construction gives the convex hull of the union of any number of polyhedra; we now apply this to the knapsack setting.
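As a toy numerical check of the linear formulation above (our own illustration, not taken from the literature): take $P^1 = [0,1]$ and $P^2 = [3,4]$ on the real line. The point $2$ lies in neither polyhedron but does lie in the convex hull of their union, and the formulation certifies this with $\lambda = 0.4$:

```python
def balas_feasible(x1, x2, lam, A1, b1, A2, b2):
    """Membership test for the linear formulation of
    conv.hull(P1 u P2) in one dimension, with P_i = {x : A_i x <= b_i}
    given row by row.  The represented point is x = x1 + x2."""
    if not 0.0 <= lam <= 1.0:
        return False
    eps = 1e-12  # tolerance for floating-point comparisons
    return (all(a * x1 <= lam * rhs + eps for a, rhs in zip(A1, b1)) and
            all(a * x2 <= (1.0 - lam) * rhs + eps for a, rhs in zip(A2, b2)))

# P1 = [0, 1] = {x <= 1, -x <= 0};  P2 = [3, 4] = {x <= 4, -x <= -3}
A1, b1 = [1.0, -1.0], [1.0, 0.0]
A2, b2 = [1.0, -1.0], [4.0, -3.0]

# x = lam*y1 + (1-lam)*y2 with y1 = 0.5 in P1, y2 = 3.0 in P2 gives x = 2.0;
# the linearized certificate is x1 = lam*y1, x2 = (1-lam)*y2.
lam, y1, y2 = 0.4, 0.5, 3.0
print(balas_feasible(lam * y1, (1 - lam) * y2, lam, A1, b1, A2, b2))  # True
print(balas_feasible(2.0, 0.0, 1.0, A1, b1, A2, b2))  # False: 2 is not in P1
```

Note how the substitution $x^i = \lambda^i y^i$ turns the nonlinear convex-combination certificate into purely linear constraints.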
The LP $\K(A, b^g, c, d^g)$ was constructed to model the left-over problem after making a guess $g$ of the $\gamma$ most profitable items; we similarly shift this LP to get $\{y = x + g \mid x \in \R^n, 0 \le x \le d^g, Ax \leq b^g\}$, which is the same set after the guessed part is added back in.
Let $\mathcal G$ denote the set of all possible guesses $g$. Then the convex hull of the union of the shifted polyhedra is given by the feasible region of the following polyhedron:
\begin{equation}\Bigl\{y \mid y = \sum_{g \in \mathcal G} y^g; \sum_{g \in \mathcal G} \lambda^g = 1; \lambda \ge \vz; \forall g: y^g = x^g + \lambda^g g, \vz \le x^g \le \lambda^gd^g, Ay^g \le \lambda^gb^g \Bigr\}.\label{eq:convun}\tag{\L}\end{equation}
We attach objective $\max c \cdot y$ to \eqref{eq:convun} to make it into an LP, and use it to prove \prettyref{theorem:maint}.
\begin{proof}[Proof of \prettyref{theorem:maint}, packing version]
Let $y$ be an optimal extreme point solution for \eqref{eq:convun}. It is straightforward to argue that any extreme point solution has $\lambda^{g^*} = 1$ for some particular $g^*$, and $\lambda^{g} = 0$ for all other $g$. Hence $y = x^{g^*} + g^*$ where $x^{g^*}$ is an optimal extreme point solution to $\K(A, b^{g^*}, c, d^{g^*})$. We now show that $\lfloor y \rfloor$ is a $(1-\epsilon)$-approximately optimal solution, re-using the previous arguments.
If $\lVert g^* \rVert_1 < \gamma$, then $x^{g^*} = 0$ so $y$ is integral, hence $y$ is an optimal knapsack solution. Otherwise, if $\lVert g^* \rVert_1 = \gamma$, then \prettyref{cor:knapround} shows that
$$c \cdot \lfloor y \rfloor = c\cdot \lfloor x^{g^*} \rfloor + c\cdot g^* \ge c \cdot x^{g^*} - k\frac{c \cdot g^*}{\gamma} + c \cdot g^* = c \cdot y - k\frac{c \cdot g^*}{\gamma} \ge (1-\epsilon) c \cdot y,$$
which completes the proof.
\end{proof}
The corresponding result for the covering version is very similar. One difference is that we round up instead of down. The other is that some guesses become inadmissible. Let $g$ be an integral vector with $0 \le g \le d, \lVert g \rVert_1 \le \gamma$; we define $\mu(g), d^g$ as before and call $g$ a \emph{guess} only if $A(g + d^g) \ge b$, in which case we set $b^g$ to be the component-wise maximum of $\vz$ and $b-Ag$.
\section{Removing the Dependence on $c$ for Packing Problems}\label{sec:indep}
In the LPs described above, for each guess $g$, we treated that guess as the set of most \emph{profitable} items. In particular, $b^g$ and $d^g$ are defined in a way that depends on $c$. We now show in the packing case, how to write a somewhat larger LP, still with integrality gap $1+\epsilon$, which is defined independently of $c$. This exactly follows the approach of Bienstock~\cite{Bienstock08}; what we will do is guess the \emph{biggest} items for each constraint, rather than the most profitable items. The technique does not seem to have an easy analogue for covering problems.
In detail, previously, we guessed the multiset $g$ of the $\gamma$ most profitable items in the solution. Instead, let us guess a $k$-tuple $(g^1, g^2, \dotsc, g^k)$ where for each $i$, $g^i$ is the set of $\gamma$ items in the solution which have largest coefficients with respect to the $i$th constraint (breaking ties in each constraint in any consistent way). What we need is that any extreme feasible solution with at most $k$ fractional values can be rounded to an integral feasible solution at a relative cost factor of at most $\epsilon$. Let the original extreme point LP solution be $x$. We round each fractional value up to the closest integer, which may cause the solution to become infeasible; call it $y = \lceil x \rceil$. Then, to regain feasibility, we go through each of the $k$ constraints and pick the $c$-smallest set of $k$ items from $y$ whose deletion causes the constraint to again become satisfied; we delete the union of these sets from $y$, obtaining $z$. Each set has $c$-cost at most $\frac{k}{\gamma} c(y)$ since for each constraint $i$, any $k$ elements from $g^i$ form an eligible set for deletion, and $g^i \subset y$ consists of at least $\gamma$ items. Thus $c(z) \ge c(y)-k\frac{k}{\gamma} c(y) = (1-k^2/\gamma)c(y) \ge (1-k^2/\gamma)c(x)$. Taking $\gamma = k^2/\epsilon$ (compared to the previous $k/\epsilon$), we get the desired result.
\section{Discussion}
We believe that the main result is a nice theoretical illustration of techniques (filtering, rounding, disjunctive programming). However, it remains to be seen if it could be given useful applications. The disjunctive programming trick is sometimes clearly impractical: if we want to write an LP-based computer program to $(1+\epsilon)$-approximately solve multidimensional knapsack instances, it is more efficient to consider the LP corresponding to each guess separately (as in \cite{FC84}) rather than solve the gigantic LP obtained by merging them together. Sometimes an LP-relative~\cite{KPP08} (or Lagrangian-preserving~\cite{CRW04,ABHK09}) approximation algorithm can be used as a subroutine in ways that a non-LP-relative one could not. However, at least in \cite{CRW04,KPP08,ABHK09}, the analysis relied on LP-relative or Lagrangian-preserving properties of the na\"ive LP; an arbitrary LP would not have fared as well, and the LP we build here seems not to be useful in this way.
Finding a compact formulation for $k$-dimensional covering knapsack with small integrality gap and such that the LP does \emph{not} depend on the objective function is an interesting open problem. For example, we are not aware of any polynomial-sized extended LP for 1-dimensional covering knapsack with constant integrality gap, in sharp contrast to the packing case. A partial result for $k$-dimensional covering knapsack is the knapsack-cover LP~\cite{CFLP00} (see also~\cite{PC10,CGK10,CCKN10} for applications); it has integrality gap at most $2k$, and while it is not polynomial size, it can be $(1+\epsilon)$-approximately separated~\cite{CFLP00} and hence $(1+\epsilon)$-approximately optimized~\cite{GK07,GLS88} in polynomial time.
From a theoretical perspective, it also seems challenging to find an LP for 2-dimensional (packing) knapsack where the size of the LP is a function of $1/\epsilon$ times a polynomial in $n$, as was done in \cite{BM08} for the 1-dimensional version.
\subsection*{Acknowledgments}
We thank Laura Sanit\`a for helpful discussions on this topic.
\section{Introduction}
\subsection{Background}
The global resurgence of vector-borne diseases is a growing concern for public health officers in many countries \cite{Gubler}. Diseases like dengue and chikungunya continue to spread all over the world, hand in hand with the spread of their associated vectors; cf. \cite{Powers:2000kl}. Thus, in the United States {\it Aedes albopictus}, the tiger mosquito, is becoming established very rapidly, while in Europe {\it Ae. albopictus} is also spreading at a fast rate---cf. \cite{Lambrechts:2010kh}. The result of this establishment is already evident: Italy and the South of France have already had documented cases of chikungunya \cite{cdc:2014}, and there is a growing number of dengue cases detected in the US \cite{anez:rios:2013}. Furthermore, dengue is now the leading cause in the US of acute febrile illness in travelers returning from Asian, South American and Caribbean countries \cite{cdc:2010}. In the particular case of dengue, the main vector, \textit{Ae. aegypti}, is anthropophilic, and it lives only in urban or semi-urban areas. It is also a very sedentary mosquito: it will usually fly no more than about five hundred meters from its birth place, except under extremely adverse conditions. These observations suggest that one should not expect dengue to spread through the diffusion of the vector.
Indeed, a number of such resurgent diseases occur in highly urban areas and are transmitted by vectors that do not disperse very far compared to other species---cf. \cite{Honorio:etal:2003} and references therein. On the other hand, in the case of an urban area with an efficient transportation system, movements from one location to another are fast. Then, for a given individual, disease transmission will most likely happen either at its home region or at its usual destination location. In this scenario, susceptible individuals can become infected in areas that are geographically apart from their residence area, and infected individuals can travel quite long distances and infect vectors in areas very distinct from where they themselves were infected. Since the disease dynamics is likely to be largely dependent on whether one has a homogeneous or a heterogeneous population, with heterogeneity favoring the establishment of epidemics---cf. \cite{Hasibeder,Levin,Smith}---this suggests that in areas with significant population movement, the epidemiological dynamics can be strongly influenced by the circulation of human hosts. The link between host circulation and the disease dynamics seems to have been first pointed out by \cite{Adams,Cosner2009,Stoddard}, in slightly different frameworks. In any case, circulation naturally segregates host and vector by their registered and current location, and it is then natural to consider the so-called meta-population models as candidates for modeling the disease dynamics. Such meta-population models can be either of multi-patch or of multi-group type. In some regimes, the latter can arise as a limit model of the former---e.g.\ in the case of fast sojourn times; cf. \cite{Adams}.
The previous discussion suggests that multi-group models might become a valuable modeling tool for understanding the disease dynamics in urban settings, and indeed there is a growing interest in the literature in these models. See \cite{Smith:etal:2014} for a recent review of such models and a discussion of their importance in epidemiological modeling, and \cite{mpolya:etal:2013} for a study in a star network. See also \cite{cambodia} for empirical studies on the impact of human movement on the disease dynamics, and \cite{Alvimetal2013} for complementary views to \cite{Adams,Cosner2009}. For a theoretical review of multi-group models, see \cite{MR1993355}.
The overall interest in these epidemic models has, in turn, raised a natural interest in understanding their qualitative dynamical properties. This has fostered a considerable literature addressing this problem, which we now briefly review.
\subsection{Disease dynamics}
From the point of view of epidemiological mathematical modeling, the first natural question about any disease-dynamics model is what are its stability features as a function of the basic reproduction number, $\mathcal{R}_0$. Following \cite{Shuai:Driessche:2013}, we say that an epidemic model has the \textit{sharp $\mathcal{R}_0$ property} if the following holds: when $\mathcal{R}_0\leq1$, the only feasible equilibrium is the so-called disease free equilibrium (DFE), and it is globally asymptotically stable (GAS); when $\mathcal{R}_0>1$, there is a single interior equilibrium, the so-called endemic equilibrium (EE), which is then GAS.
The literature on mathematical epidemiology and the sharp $\mathcal{R}_0$ property is long and large, particularly for directly transmitted diseases, but it is considerably smaller for vector-borne diseases. The development of the models for indirectly transmitted diseases can be traced back to Ross's malaria model as discussed in \cite{Ross1911}---see also the recent review in \cite{Smith1} and the classical monographs \cite{Bailey75,Dietz75}. Nevertheless, the bulk of the theory in the literature leans towards directly transmitted diseases and uniform populations---see \cite{AndMay91,Diekmann:2000} for instance. For vector-borne diseases, a very natural model is the coupling of an SIR model for the humans with an SI model for the vectors. This model is reasonable for mosquito-borne diseases, since mosquitoes do not have a well developed immunological system, while most of the arboviruses confer lifelong immunity. This model seems to have been first suggested in \cite{Bailey75,Dietz75} and it is now known as the Bailey-Dietz model.
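For the reader's convenience, one common formulation of the Bailey-Dietz model is the following (a sketch only; notation and demographic terms vary across the literature). It couples host compartments $S_h, I_h, R_h$ with vector compartments $S_v, I_v$, for constant population sizes $N_h$ and $N_v$:
\begin{align*}
\dot{S}_h &= \mu_h N_h - \frac{\beta_h}{N_h} S_h I_v - \mu_h S_h, & \dot{S}_v &= \mu_v N_v - \frac{\beta_v}{N_h} S_v I_h - \mu_v S_v,\\
\dot{I}_h &= \frac{\beta_h}{N_h} S_h I_v - (\gamma + \mu_h) I_h, & \dot{I}_v &= \frac{\beta_v}{N_h} S_v I_h - \mu_v I_v,\\
\dot{R}_h &= \gamma I_h - \mu_h R_h,
\end{align*}
where $\beta_h, \beta_v$ are the transmission rates, $\gamma$ is the host recovery rate, and $\mu_h, \mu_v$ are the birth/death rates of hosts and vectors, respectively.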
The global dynamics of this model was first studied in \cite{9656647} using a Lyapunov function argument for the stability of the DFE, while the Poincar\'e-Bendixson property for 3-D competitive systems is used to show the stability of the EE; see also \cite{MR2653590,MR2559889} for later similar studies. A global stability analysis using only Lyapunov functions has been obtained only recently---\cite{Souza2014}. See also \cite{0533.92023,Lietal:1999} for various results on global stability of epidemiological models.
In the framework of multi-group epidemic models for directly transmitted diseases, the first paper was probably by Rushton and Mauser \cite{MR0069470}, but seminal results are in Lajmanovich and Yorke \cite{LajYo76} and in the book of Hethcote and Yorke \cite{HethYor84}; see also \cite{Nold:1980}. Stability results can be found in Thieme \cite{0533.92023,MR87c:92046}; see also chapter 23 of \cite{MR1993355}. Global stability of the multi-group SIR model is due to \cite{gls:2006}, using a combinatorial argument arising from graph theory; see also \cite{gls:2008} for a more extensive presentation of their method. For indirectly transmitted diseases, the first global stability result seems to be due to \cite{Hasibeder}, who observed that a monotone dynamics argument of \cite{LajYo76} was also applicable to an SI-SI multi-group model. More recently, general global stability results were obtained by \cite{Shuai:Driessche:2013}; see also \cite{Guo:etal:2012} for results on multi-stage models. None of these results, however, cover the case of vector-borne diseases, since vector and host populations might follow different dynamics. Additional references on meta-population models for vector-borne diseases, which do not, however, study the sharp $\mathcal{R}_0$ property, are \cite{Honorio:2009uq,MBS6900} for models with heterogeneous populations and \cite{Xiao:Zou:2014} for a numerical study of a multi-patch model with spatial heterogeneities.
For higher-dimensional systems, global stability of the endemic equilibrium is usually established by finding an appropriate Lyapunov function---\cite{Hasibeder} being a notable exception. The use of Lyapunov functions to study the global dynamics of ecological and epidemiological models can be traced at least to the works in the late seventies of Goh \cite{goh:1977,goh:1978,goh:1979,goh:1980}, Harrison \cite{Harrison:1979,Harrison:1979b} and Hsu \cite{Hsu:1978}. Since then, it has been successfully used in many studies, and even rediscovered \cite{Freedman:So:1985,Koro:2001,KoroMMB04,Korobeinikov:Maini:2004,Koro:2006,MR2434863}. Recent applications of Lyapunov functions in epidemic and ecological models with meta-populations include \cite{Iggidr:etal:2006,Koro:2009,Yu:etal:2009,Li:etal:2010,Li:Shuai:2010,Ji:etal:2011,Kuniya:2011,Souza:Zubelli:2011,Sun:Shi:2011,Guo:etal:2012,Huang:etal:2012,Shuai:Diressche:2012,Magal:McCluskey:2013,Wang:2014}. See also the recent surveys on the construction and use of Lyapunov functions in models of population dynamics by \cite{Hsu:2005,MR2434863}. Additionally, there is also recent work aiming to obtain similar results for multi-group models without resorting to graph-theoretic arguments \cite{Li:etal:2012,muroya:etal:2013}. Shuai and van den Driessche~\cite{Shuai:Driessche:2013} discuss two systematic approaches (graph-theoretic and matrix-theoretic) to guide the construction of Lyapunov functions. For results towards infinite-dimensional problems, see \cite{Thieme:2011}.
In this work, we show that the sharp $\mathcal{R}_0$ property holds for a very natural multi-group extension of the Bailey-Dietz model---which has been used to model, \textit{inter alia}, the dynamics of dengue \cite{Nishiura:2006}. This extension also accommodates a large number of choices for the modeling of the infection force, including the most popular ones---see \S\ref{sec:prelims} for an additional discussion of this issue. A special case within the class of models discussed here was studied in \cite{ding:etal:2012}, which, however, presents an incorrect proof of the global stability of the endemic equilibrium\footnote{The matrix whose kernel should yield the coefficients for the Lyapunov function is actually not singular for $n>2$. For $n=2$, a careful checking shows that the claimed cancellation properties do not hold.}. This work can also be seen as an extension of the multi-group framework for directly transmitted diseases in \cite{gls:2006,gls:2008}.
\subsection{Outline}
In Section~\ref{sec:prelims} we introduce the relevant class of multi-group models and identify the relevant network structure, a bipartite graph that we term the host/vector network. This bipartite graph can be reducible even when the group network is strongly connected, which is markedly different from directly transmitted diseases. Under the assumption that the host/vector network is strongly connected, we can meaningfully define an $\mathcal{R}_0$. For the models discussed here, the existence and uniqueness of the endemic equilibrium (EE) when $\mathcal{R}_0>1$ is not obvious from the governing equations, and these issues are tackled in Section~\ref{sec:eqstab}, where the local stability is also established. We then study the global dynamics in Section~\ref{sec:global}: when $\mathcal{R}_0\leq1$, we show that the disease free equilibrium (DFE) is globally asymptotically stable. We then address the global stability of the EE, showing that it is globally asymptotically stable when $\mathcal{R}_0>1$ by means of a ``vectorial'' extension of the Lyapunov function used in \cite{Souza2014}, together with an extension of the graph-theoretical approach developed in \cite{gls:2006,gls:2008}. A discussion of the results is given in Section~\ref{sec:concl}.
\section{A class of multi-group models for vector-borne diseases}
\label{sec:prelims}
In the following, we provide the basic set up for a class of multi-group models for indirectly transmitted diseases. These models are built upon the classical single-patch/group model by \cite{Bailey75,Dietz75}, and include some of the models studied in \cite{Adams,Cosner2009} and the models studied in \cite{Alvimetal2013}.
\subsection{The basic model}
\label{ssec:onep}
We consider the classical Bailey-Dietz model:
\begin{equation}\label{1patch}
\left \{
\begin{array}{ll}
\dot S_h=&\Lambda_h- \beta_1 \, \dfrac{S_h \, I_v}{N_h} -\mu_h \, S_h \\
\dot I_h=& \beta_1 \, \dfrac{S_h \, I_v}{N_h} - \gamma_h \, I_h- \mu_h \, I_h \\
\dot R_h=& \gamma_h \, I_h-\mu_h \, R_h \\
\dot S_v =& \Lambda_v - \beta_2 \, \dfrac{S_v \, I_h}{N_h} -\mu_v \, S_v\\
\dot I_v= & \beta_2 \, \dfrac{S_v \, I_h}{N_h} -\mu_v \, I_v,
\end{array}
\right.
\end{equation}
where $S_h$, $I_h$, $R_h$ denote, as usual, the classes of susceptible, infectious and removed hosts, respectively. The subscripts $h$ and $v$ indicate that the quantity refers to the host or to the vector. Also, $N_h=S_h+I_h+R_h$ and $N_v=S_v+I_v$ are the total host and vector populations, respectively. Although they are not necessarily constant, they are taken as so in many applications.
The constant $\beta_1$ is a composite biological constant that embodies all the biological processes relating to transmission from mosquito to man, from the biting rate of the mosquitoes to the probability of developing an infection after a bite. Analogously, $\beta_2$ captures the effect of transmission from man to mosquito. The constant $\mu_h$ is the per capita human mortality, and $\gamma_h$ denotes the per capita rate at which infectious individuals recover and become permanently immune. The parameter $\Lambda_v$ is the constant recruitment of mosquitoes and $\mu_v$ is the per capita vector mortality.
Let
\[
\mathbf N = \dfrac{\Lambda_h}{\mu_h} \textbf{ and } \mathbf V=\dfrac{\Lambda_v}{\mu_v}.
\]
Using the techniques in \cite{VddWat02}, it is straightforward to see that the reproduction number of (\ref{1patch}) is
\[\mathcal R_0^2= \dfrac{ \beta_1\, \beta_2\,\mathbf V}{\mu_v\, (\mu_h+\gamma_h)\, \mathbf N}= \dfrac{ \beta_1\, \beta_2\,\mathbf m}{\mu_v\, (\mu_h+\gamma_h)}\]
with $\mathbf m= \dfrac{\mathbf V}{\mathbf N}$, the classical vectorial density. The basic reproduction ratio $\mathcal R_0$ is the same as for the classical Ross model \cite{AndMay91,MBS6900,Bailey75,Ross1911}.
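As a sanity check, the closed-form expression for $\mathcal R_0^2$ above can be compared with the spectral radius of the next-generation matrix computed directly, following \cite{VddWat02}. The sketch below uses hypothetical parameter values, chosen only for illustration.

```python
import numpy as np

# Hypothetical single-patch parameters (illustrative values only)
beta1, beta2 = 0.5, 0.4
mu_h, gamma_h, mu_v = 1 / 70.0, 0.14, 0.1
Lambda_h, Lambda_v = 100.0, 5000.0
N, V = Lambda_h / mu_h, Lambda_v / mu_v   # host and vector equilibrium sizes
m = V / N                                  # vectorial density

# Closed-form R0^2 from the text
R0_sq_closed = beta1 * beta2 * m / (mu_v * (mu_h + gamma_h))

# Next-generation construction: infected compartments (I_h, I_v),
# F = new-infection terms at the DFE, Vmat = transition terms
F = np.array([[0.0,       beta1],   # dI_h: beta1 * S_h/N_h * I_v, with S_h = N
              [beta2 * m, 0.0  ]])  # dI_v: beta2 * S_v/N_h * I_h, with S_v = V
Vmat = np.diag([mu_h + gamma_h, mu_v])
R0_ngm = max(abs(np.linalg.eigvals(F @ np.linalg.inv(Vmat))))

print(R0_ngm**2, R0_sq_closed)  # the two values coincide
```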
As for Ross's model, we will work with prevalences, defining $x_1=\dfrac{S_h}{N}$, $x_2=\dfrac{I_h}{N}$, $x_3=\dfrac{R_h}{N}$ and $y_1=\dfrac{S_v}{V}$, $y_2=\dfrac{I_v}{V}$. Then two equilibria are possible: the disease free equilibrium $( \mathbf 1, \mathbf 0, \mathbf 0,\mathbf 1,\mathbf 0)$ and, when $\mathcal R_0 >1$,
a positive endemic equilibrium $(\bar x_1, \bar x_2, \bar x_3, \bar y_1, \bar y_2)$.
The global stability of \eqref{1patch} was originally studied by \cite{9656647}, who showed that the endemic equilibrium is globally asymptotically stable when $\mathcal R_0>1$, and that the disease-free equilibrium is the global attractor when $\mathcal R_0\leq 1$, using a Lyapunov function argument for the stability of the DFE and the so-called Poincar\'{e}-Bendixson theorem for competitive systems---cf. \cite{0821.34003}. More recently, \cite{Souza2014} obtained a proof using only Lyapunov functions.
\subsection{A class of multi-group models for vector-borne diseases}
We consider that both host and vector populations are divided into $n$ groups, where each group $i$ has a host population of $N_{h,i}$ and a vector
population of $N_{v,i}$. At each node $i$, we assume a generalized form of \eqref{1patch}, allowing the susceptibles of group $i$ to have contact with mosquitoes of groups $j=1,\ldots,n$. This is specified by an infection term for the host, $\mathcal{T}_h$, of the form
\[
\mathcal{T}_{h,i}=S_{h,i}\sum_{j=1}^nL_{i,j}(N_h,N_v)I_{v,j}.
\]
Analogously, we allow susceptible mosquitoes of each group $i$ to have contact with infected hosts of groups $j=1,\ldots,n$, with an infection force for the vectors, $\mathcal{T}_v$, of the form:
\[
\mathcal{T}_{v,i}=S_{v,i}\sum_{j=1}^nM_{i,j}(N_h,N_v)I_{h,j}.
\]
These assumptions then lead to the following multi-group epidemic model:
\begin{equation}\label{npatch2}
\left \{
\begin{array}{ll}
\dot S_{h,i}=&\Lambda_{h,i}- S_{h,i} \, \sum_{j=1}^n \, L_{i,j}(N_h,N_v)I_{v,j} -\mu_{h,i} \, S_{h,i} \\[2mm]
\dot I_{h,i}=& S_{h,i} \, \sum_{j=1}^n \, L_{i,j}(N_h,N_v)I_{v,j} - \gamma_{h,i} \, I_{h,i}- \mu_{h,i} \, I_{h,i} \\[2mm]
\dot R_{h,i}=& \gamma_{h,i} \, I_{h,i}-\mu_{h,i} \, R_{h,i} \\[2mm]
\dot S_{v,i} =&\Lambda_{v,i} - S_{v,i} \, \sum_{j=1}^n \, M_{i,j}(N_h,N_v)I_{h,j} -\mu_{v,i} \, S_{v,i}\\[2mm]
\dot I_{v,i}= & S_{v,i} \, \sum_{j=1}^n \, M_{i,j}(N_h,N_v)I_{h,j} -\mu_{v,i} \, I_{v,i},
\end{array}
\right.
\end{equation}
where
\[
N_h=(N_{h,i}),\text{ with } N_{h,i}=S_{h,i}+I_{h,i}+R_{h,i}
\text{ and }
N_v=(N_{v,i}),\text{ with } N_{v,i}=S_{v,i}+I_{v,i}.
\]
The functions $L_{i,j},M_{i,j}:\mathbb{R}^n\oplus\mathbb{R}^n\to\mathbb{R}$ are assumed to be smooth and positive when $N_h,N_v$ have positive entries. These are mild assumptions, and they can accommodate a variety of functional forms for the infection force---see \cite{Wonham} for a discussion of the different conclusions implied by different assumptions on the infection force; see also \cite{Alvimetal2013} for a discussion of the different transmission forces related to dengue. These functions also encode the cross-infection information among all the groups, which will depend on the modeling assumptions that led to the multi-group structure.
\begin{remark}
Similar models have been considered in the literature. See \cite{Cosner2009} for a multi-group SIS-SI model and \cite{Adams} for a multi-group SEIR-SEI model, obtained as the fast sojourn limit of a more general model.
\end{remark}
\begin{remark}
While model \eqref{npatch2} can be easily modified to include disease-induced death, the analysis carried out in the sequel cannot be extended to such models, except in the case of constant population. However, for diseases such as dengue or chikungunya, this is not a very restrictive assumption, as their mortality is, generally, not high. Dengue can be an exception, if there are two epidemics in a row with an intermediate time spacing. In this case, enhanced immunological reaction can cause the so-called severe dengue fever, previously known as haemorrhagic dengue, which can be highly fatal if not treated appropriately \cite{who:dengue,Gubl98}.
\end{remark}
We can rewrite \eqref{npatch2} as
\begin{equation}\label{npatch3}
\left \{
\begin{array}{ll}
\dot N_{h,i}=& \Lambda_{h,i} -\mu_{h,i} \, N_{h,i} \\[2mm]
\dot N_{v,i} =& \Lambda_{v,i} -\mu_{v,i} \, N_{v,i}\\[2mm]
\dot S_{h,i}=&\Lambda_{h,i}- S_{h,i} \, \sum_{j=1}^n \, L_{i,j}(N_h,N_v)I_{v,j} -\mu_{h,i} \, S_{h,i} \\[2mm]
\dot I_{h,i}=& S_{h,i} \, \sum_{j=1}^n \, L_{i,j}(N_h,N_v)I_{v,j} - \gamma_{h,i} \, I_{h,i}- \mu_{h,i} \, I_{h,i} \\[2mm]
\dot I_{v,i}= & (N_{v,i}-I_{v,i}) \, \sum_{j=1}^n \, M_{i,j}(N_h,N_v)I_{h,j} -\mu_{v,i} \, I_{v,i}.
\end{array}
\right.
\end{equation}
In what follows, we write $S_h=(S_{h,i})$, $i=1,\ldots,n$ and similarly for $I_h$ and $I_v$. Also, let
\[
\bar{N}_h=\left(\frac{\Lambda_{h,i}}{\mu_{h,i}}\right)
\text{ and }
\bar{N}_v= \left(\frac{\Lambda_{v,i}}{\mu_{v,i}}\right).
\]
Then, it is clear that, for \eqref{npatch3}, the set
\[\Omega= \{ (S_h,I_h,I_v,N_h, N_v) \in \mathbb{R}_+^{5n} \arrowvert \;\; 0 \leq S_h+I_h \leq \bar{N}_h,\;\; 0\leq I_v \leq \bar{N}_v,\;\; 0\leq N_h\leq \bar{N}_h,\;\; 0\leq N_v\leq \bar{N}_v\}\]
is a compact absorbing and positively invariant set.
Also, notice that system \eqref{npatch3} is of triangular form, and hence its stability analysis can be considerably simplified. There are a number of results that allow for such a simplification in the study of global stability of systems of this kind~\cite{0478.93044,Thieme:1992}. For the convenience of the reader, we recall the following result:
\begin{theorem}[Vidyasagar \cite{0478.93044}, Theorem 3.1]\label{Vid:theo}
\noindent
Consider the following $\mathcal C^1$ system:
\begin{equation}\label{triang}
\left\{
\begin{array}{lr}
\dot x = f(x) & x \in \mathbb{R}^n \; , y \in \mathbb{R}^m\\
\dot y = g(x,y)& \\
\text{ \rm with an equilibrium point, }(x^*,y^*), \text{ \rm i.e.,} &\\
f(x^*)=0 \mbox{\;} \text{\rm and } g(x^*, y^*)=0 .&
\end{array}
\right.
\end{equation}
If $x^*$ is globally asymptotically stable (GAS) in $\mathbb{R}^n$ for the system $\dot x= f(x)$, and if $y^*$ is GAS in $\mathbb{R}^m$ for the system $\dot y =g(x^*,y)$, then $(x^*,y^*)$ is (locally) asymptotically stable for (\ref{triang}). Moreover, if all the trajectories of (\ref{triang}) are forward bounded, then $(x^*,y^*)$ is GAS for (\ref{triang}).
\end{theorem}
Since $(\bar{N}_h,\bar{N}_v)$ is a globally asymptotically stable equilibrium for the first two equations of \eqref{npatch3},
we can use Theorem~\ref{Vid:theo} to reduce the study of the stability properties of \eqref{npatch3} to the study of the stability of
\begin{equation}\label{npatch3b}
\left \{
\begin{array}{ll}
\dot S_{h,i}=& \displaystyle \Lambda_{h,i}- S_{h,i} \, \sum_{j=1}^n \, L_{i,j}(\bar{N}_h,\bar{N}_v)I_{v,j} -\mu_{h,i} \, S_{h,i} \\
\\
\dot I_{h,i}=& \displaystyle S_{h,i} \, \sum_{j=1}^n \, L_{i,j}(\bar{N}_h,\bar{N}_v)I_{v,j} - \gamma_{h,i} \, I_{h,i}- \mu_{h,i} \, I_{h,i} \\
\\
\dot I_{v,i}= & \displaystyle
(N_{v,i}-I_{v,i}) \, \sum_{j=1}^n \, M_{i,j}(\bar{N}_h,\bar{N}_v)
I_{h,j} -\mu_{v,i} \, I_{v,i}.
\end{array}
\right.
\end{equation}
In what follows, we shall denote by $\Lambda_h$, $\mu_h$ and $\gamma_h$ the vectors of
$\mathbb{R}^n_+$ whose components are respectively $\Lambda_{h,i}$, $\mu_{h,i}$ and $\gamma_{h,i}$. We shall also write $M=M(\bar{N}_h,\bar{N}_v)$ and $L=L(\bar{N}_h,\bar{N}_v)$.
System~\eqref{npatch3b} can then be written in the following vectorial notation:
\begin{equation}\label{npatch4}
\left\{
\begin{array}{ll}
\dot S_h=& \Lambda_h \, - \mathrm{diag}(S_h)\, L\, I_v - \mathop{\mathrm{diag}}(\mu_h) S_h\\
\\
\dot I_h=& \mathrm{diag}(S_h)\, L\, I_v - \mathop{\mathrm{diag}}(\mu_h+\gamma_h) I_h\\
\\
\dot I_v=& \mathrm{diag}(\bar N_v-I_v) \, M \, I_h -\mathop{\mathrm{diag}}(\mu_v) I_v\,,
\end{array}
\right.
\end{equation}
where for $\mathbf{v}\in\mathbb{R}^n$, $\mathop{\mathrm{diag}}(\mathbf{v})$ denotes the $n\times n$ diagonal matrix whose main diagonal is $\mathbf{v}$.
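The vectorial form \eqref{npatch4} can be integrated directly; the minimal numerical sketch below uses hypothetical parameters for $n=2$ groups, chosen only so that $\mathcal{R}_0>1$, in which case the trajectory settles on an interior equilibrium.

```python
import numpy as np

# Hypothetical parameters for n = 2 groups (illustrative values only)
Lambda_h = np.array([10.0, 20.0]); mu_h = np.array([0.02, 0.02])
gamma_h  = np.array([0.1, 0.1]);   mu_v = np.array([0.2, 0.25])
Nv_bar   = np.array([500.0, 800.0])
L = np.array([[2e-4, 1e-4], [1e-4, 2e-4]])   # host infection-force matrix
M = np.array([[2e-4, 1e-4], [1e-4, 2e-4]])   # vector infection-force matrix

def rhs(Sh, Ih, Iv):
    # Right-hand side of (npatch4) in vectorial form
    dSh = Lambda_h - Sh * (L @ Iv) - mu_h * Sh
    dIh = Sh * (L @ Iv) - (mu_h + gamma_h) * Ih
    dIv = (Nv_bar - Iv) * (M @ Ih) - mu_v * Iv
    return dSh, dIh, dIv

# Forward-Euler integration from a small initial infection
Sh, Ih, Iv = Lambda_h / mu_h, np.array([1.0, 1.0]), np.array([1.0, 1.0])
dt = 0.05
for _ in range(200_000):
    dSh, dIh, dIv = rhs(Sh, Ih, Iv)
    Sh, Ih, Iv = Sh + dt * dSh, Ih + dt * dIh, Iv + dt * dIv

residual = max(np.abs(np.concatenate(rhs(Sh, Ih, Iv))))
print(Ih, Iv, residual)  # interior state with near-zero vector field
```

The trajectory remains in the absorbing set $\Omega$ throughout, in line with the invariance noted above.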
\subsection{The Host-Vector contact network}
We shall need an assumption about the network topology in system~\eqref{npatch4}. For a matrix $A$, we write $\Gamma(A)$ for the associated graph. We begin with a definition.
\vspace{3mm}
\begin{definition}[Host-Vector Contact Network]
\label{def:hvcn}
Given nonnegative matrices $L$ and $M$, we write
\[
\mathcal{M}=\begin{pmatrix}
0&L\\
M&0
\end{pmatrix}.
\]
The graph associated to $\mathcal{M}$, $\Gamma(\mathcal{M})$, is called the host-vector contact network, or contact network for short.
\end{definition}
\vspace{3mm}
\begin{hyp}
\label{hyp:1}
The contact network is strongly connected, i.e., $\mathcal{M}$ is nonnegative and irreducible.
\end{hyp}
\vspace{3mm}
\begin{remark}
Notice that irreducibility of $L$ and $M$ is neither necessary nor sufficient for the irreducibility of $\mathcal{M}$. As an example, consider
\[
C=\begin{pmatrix}
0&1\\
1&0
\end{pmatrix}
\text{ and }
D=\begin{pmatrix}
1&0\\
1&1
\end{pmatrix};
\quad
\mathcal{M}_1=
\begin{pmatrix}
0&C\\
C&0
\end{pmatrix}
\text{ and }
\mathcal{M}_2=
\begin{pmatrix}
0&D^t\\
D&0\\
\end{pmatrix}.
\]
Then $C$ is irreducible and $D$ is reducible. Nevertheless, $\mathcal{M}_1$ is reducible and $\mathcal{M}_2$ is irreducible.
\end{remark}
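The examples in the remark can be checked numerically, using the classical criterion that a nonnegative $n\times n$ matrix $A$ is irreducible iff $(I+A)^{n-1}\gg0$; the sketch below reuses the matrices $C$, $D$, $\mathcal{M}_1$, $\mathcal{M}_2$ above.

```python
import numpy as np

def is_irreducible(A):
    # A nonnegative n x n matrix A is irreducible iff (I + A)^(n-1) >> 0
    n = A.shape[0]
    return np.all(np.linalg.matrix_power(np.eye(n) + (A > 0), n - 1) > 0)

C = np.array([[0, 1], [1, 0]])
D = np.array([[1, 0], [1, 1]])
Z = np.zeros((2, 2), dtype=int)

M1 = np.block([[Z, C], [C, Z]])      # reducible, despite C irreducible
M2 = np.block([[Z, D.T], [D, Z]])    # irreducible, despite D reducible

print(is_irreducible(C), is_irreducible(D))    # True False
print(is_irreducible(M1), is_irreducible(M2))  # False True
```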
\vspace{3mm}
The irreducibility of $\mathcal{M}$ corresponds to the strong connectivity of the associated directed bipartite graph. This is a consequence of the fact that the infection process, when considered between hosts (or vectors) themselves, is a two-step process. Thus, even when the circulation structure (the non-zero patterns of $L$ and $M$) is strongly connected, this is not necessarily the case for the host-vector contact structure of an indirectly transmitted disease, and this is a significant difference from directly transmitted ones.
In the following proposition, we give a useful characterization of the irreducibility of $\mathcal{M}$ that will be used later on:
\begin{proposition}
\label{prop:factors_irred}
$\mathcal{M}$ is irreducible if, and only if, the following conditions are satisfied:
%
\begin{enumerate}
\item Both $LM$ and $ML$ are irreducible;
\item We have that $Lv,Mv\gg0$, for some $v\gg0$ (and hence, for every $v\gg0$).
\end{enumerate}
%
Moreover, in this case, we also have that
%
\[
\rho(\mathcal{M})^2=\rho(LM)=\rho(ML),
\]
%
and that both $LM$ and $ML$ have right and left positive eigenvectors associated to $\rho(\mathcal{M})^2$.
\end{proposition}
\vspace{3mm}
\begin{proof}
Firstly, we compute
\[
\mathcal{M}^{2k}=\begin{pmatrix}
(LM)^k&0\\
0&(ML)^k
\end{pmatrix}
\quad\text{and}\quad
\mathcal{M}^{2k+1}=\begin{pmatrix}
0&L(ML)^k\\
M(LM)^k&0
\end{pmatrix}.
\]
Assume $\mathcal{M}$ is irreducible. Then there is some natural number $n$ such that
\[
(I+\mathcal{M})^{2n}=\sum_{m=0}^{2n}\binom{2n}{m}\mathcal{M}^m=
\begin{pmatrix}
\sum_{k=0}^n\binom{2n}{2k}(LM)^k&L\sum_{k=0}^{n-1}\binom{2n}{2k+1}(ML)^k\\
M\sum_{k=0}^{n-1}\binom{2n}{2k+1}(LM)^k&\sum_{k=0}^{n}\binom{2n}{2k}(ML)^k
\end{pmatrix}
\gg0.
\]
Hence, we have that
\[
(I+ML)^n,(I+LM)^n\gg0,
\]
and both $LM$ and $ML$ are irreducible, as claimed. In addition, we have $L\sum_{k=0}^{n-1}\binom{2n}{2k+1}(ML)^k \gg0$, so that $L$ applied to any column of $\sum_{k=0}^{n-1}\binom{2n}{2k+1}(ML)^k$, which is a positive vector, is positive. The argument for $M$ is similar.
Conversely, if both $LM$ and $ML$ are irreducible, then the diagonal blocks of $(I+\mathcal{M})^{2n}$ are positive for $n$ large enough. The off-diagonal blocks are then also positive since, by the second condition, $L$ and $M$ map positive vectors to positive vectors.
Finally, let $(u,v)$ be a positive eigenvector associated to $\rho(\mathcal{M})=\rho_\mathcal{M}$. Then we necessarily have $Lv=\rho_\mathcal{M} u$ and $Mu=\rho_\mathcal{M} v$. Hence $LMu=\rho^2_\mathcal{M} u$ and, similarly, $MLv=\rho^2_\mathcal{M} v$. Thus $u$ and $v$ are positive right eigenvectors of $LM$ and $ML$, respectively, associated to $\rho_\mathcal{M}^2$. The argument for left eigenvectors is analogous.
\end{proof}
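The spectral identity in the proposition is easy to illustrate numerically; in the sketch below the blocks $L$ and $M$ are randomly generated with strictly positive entries, so that irreducibility of $\mathcal{M}$ is automatic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Strictly positive blocks make the bipartite contact matrix irreducible
L = rng.uniform(0.1, 1.0, (n, n))
M = rng.uniform(0.1, 1.0, (n, n))
calM = np.block([[np.zeros((n, n)), L], [M, np.zeros((n, n))]])

rho = lambda A: max(abs(np.linalg.eigvals(A)))
print(rho(calM)**2, rho(L @ M), rho(M @ L))  # all three values coincide
```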
\begin{remark}
We observe that System~\eqref{npatch4} can be recast as a special case of the multigroup SIR model treated in~\cite{gls:2006}, as follows: Replace $N_v-I_v$ by $S_v$ in the last equation of~\eqref{npatch4}. Include the redundant equation:
\[
\dot S_v= \Lambda_v \, - \mathrm{diag}(S_v)\, M\, I_h - \mathop{\mathrm{diag}}(\mu_v) S_v.
\]
Let $\mathbf{S}=(S_1,\ldots,S_{2n})^t$, $\mathbf{I}=(I_1,\ldots,I_{2n})^t$ and set $S_i=S_{h,i}$, $S_{n+i}=S_{v,i}$, $I_i=I_{h,i}$, $I_{n+i}=I_{v,i}$, for $i=1,\ldots,n$. Further, let $\mathcal{M}$ be as given in definition~\ref{def:hvcn} and let $\Lambda=(\Lambda_h\, \Lambda_v)^t$, $\mu=(\mu_h\, \mu_v)^t $, and $\gamma=(\gamma_h\, \mathbf{0})^t$. Then $(\mathbf{S}\,\mathbf{I})^t$ satisfies
\begin{align*}
\dot{\mathbf{S}}&=\Lambda-\mathop{\mathrm{diag}}(\mathbf{S})\mathcal{M}\mathbf{I}-\mathop{\mathrm{diag}}(\mu)\mathbf{S}\\
\dot{\mathbf{I}}&=\mathop{\mathrm{diag}}(\mathbf{S})\mathcal{M}\mathbf{I}-\mathop{\mathrm{diag}}(\mu+\gamma)\mathbf{I},
\end{align*}
for which the sharp threshold property holds. Nevertheless, we shall obtain this result by considering equation \eqref{npatch4} directly, and using a related but different approach. This can be seen as an extension to indirect transmitted diseases of the framework introduced in \cite{gls:2006,gls:2008}.
\end{remark}
\section{Equilibria and local stability}
\label{sec:eqstab}
We will show that, for our multi-group vector-borne disease model,
System~\eqref{npatch2}, the results of \cite{MR87c:92046,MR1993355} carry over.
Namely, we
obtain that the DFE is locally asymptotically stable if, and only if, $\mathcal R_0 \leq
1$, and that a strongly endemic
equilibrium exists and is unique when $\mathcal R_0 >1$. This equilibrium is always locally
asymptotically stable. For global results, see Section~\ref{sec:global}.
Using the now standard techniques \cite{MR1057044,VddWat02}, we define the basic reproduction ratio as
%
\[ \mathcal R_0 = \rho(\mathcal{N}),\quad \mathcal{N}= \left (
\begin{array}{ll}
0 & \mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1} \, \mathop{\mathrm{diag}}(\bar N_h) \,L \\
\\
\mathop{\mathrm{diag}}(\mu_v)^{-1}\, \mathop{\mathrm{diag}}(\bar N_v) \, M &0
\end{array} \right ). \]
\begin{remark}
\label{rem:about_ngo}
Since $\mu_v,\mu_h\gg0$, we have from Proposition~\ref{prop:factors_irred} that $\mathcal{N}$ is irreducible if, and only if, $\mathcal{M}$ is irreducible. In particular, if Hypothesis~\ref{hyp:1} holds then $\mathcal{N}$ is irreducible and we have that
\[ \mathcal R_0^2= \rho \left ( \mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1} \, \mathop{\mathrm{diag}}(\bar N_h) \,L \, \mathop{\mathrm{diag}}(\mu_v)^{-1}\, \mathop{\mathrm{diag}}(\bar N_v) \, M \right ).
\]
\end{remark}
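The identity in the remark can be verified numerically; the parameter values in the sketch below are hypothetical.

```python
import numpy as np

# Hypothetical two-group parameters (illustrative values only)
mu_h = np.array([0.02, 0.03]); gamma_h = np.array([0.1, 0.12])
mu_v = np.array([0.2, 0.25])
Nh = np.array([500.0, 1000.0]); Nv = np.array([600.0, 900.0])
L = np.array([[2e-4, 1e-4], [1e-4, 2e-4]])
M = np.array([[3e-4, 1e-4], [2e-4, 3e-4]])

rho = lambda A: max(abs(np.linalg.eigvals(A)))

# Off-diagonal blocks of the next-generation matrix N
Lp = np.diag(1 / (mu_h + gamma_h)) @ np.diag(Nh) @ L
Mp = np.diag(1 / mu_v) @ np.diag(Nv) @ M
N_ngm = np.block([[np.zeros((2, 2)), Lp], [Mp, np.zeros((2, 2))]])

R0 = rho(N_ngm)
print(R0**2, rho(Lp @ Mp))  # R0^2 equals the spectral radius of the product
```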
\vspace{3mm}
\begin{theorem}
\label{thm:ee:exstab}
Assume that Hypothesis~\ref{hyp:1} holds. Then system~\eqref{npatch4} (and hence system~\eqref{npatch2}) has a unique endemic equilibrium if, and only if,
$\mathcal R_0>1$. Moreover, this equilibrium is locally asymptotically stable with respect to system~\eqref{npatch4}.
\end{theorem}
\vspace{3mm}
\begin{proof}
We denote by $S_h^*$, $I_h^*$ and $I_v^*$ the components
of an endemic equilibrium. Recall that the notation
$\mathds{1}$ refers to the vector of $\mathbb{R}^n_+$ whose components
are all equal to $1$. An endemic equilibrium is defined by the following relations:
\begin{subequations}
\begin{eqnarray}
\Lambda_h &= \mathop{\mathrm{diag}}(\mu_h + \, L\, I_v^* ) \, S_h^* \label{subeqnendem1}\\
\mathop{\mathrm{diag}} (\mu_h+\gamma_h) \, I^*_h&=\mathop{\mathrm{diag}}(S_h^*)\, L\, I_v^* \label{subeqnendem2}\\
\mathop{\mathrm{diag}}( \mu_v )\, I_v^*&= \mathop{\mathrm{diag}}(\bar N_v-I_v^*) \,M\,I_h^* \label{subeqnendem3}
\end{eqnarray}
\end{subequations}
\noindent From \eqref{subeqnendem1} we obtain
\[ S_h^*=\mathop{\mathrm{diag}}(\mu_h + \, L\, I_v^* )^{-1} \, \Lambda_h. \]
\noindent Rewriting \eqref{subeqnendem3} as
\[ \mathop{\mathrm{diag}}(\mu_v) \, I_v^*= \mathop{\mathrm{diag}} (M\,I_h^*)\,(\bar N_v-I_v^*), \]
and substituting for $S_h^*$ in \eqref{subeqnendem2}, we
obtain
\begin{subequations}
\begin{eqnarray}
I_h^* &= \mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1} \mathop{\mathrm{diag}}(\mu_h + \, L\, I_v^* )^{-1} \,\mathop{\mathrm{diag}}(L\,I_v^*) \,\Lambda_h\label{subeqnendem11}\\
I^*_v&= \mathop{\mathrm{diag}}(\mu_v + \, M\, I_h^* )^{-1} \, \mathop{\mathrm{diag}}(M\,I_h^*) \, \bar N_v \label{subeqnendem12}
\end{eqnarray}
\end{subequations}
\noindent Hence $(I_h^*,I_v^*)$ is a fixed point of the following
map:
\[ F(x,y) =
\begin{bmatrix}
\mathop{\mathrm{diag}}(\mu_h + \gamma_h)^{-1} \mathop{\mathrm{diag}}(\mu_h + \, L\, y )^{-1} \,\mathop{\mathrm{diag}}(L \, y) \,\Lambda_h \\
\\
\mathop{\mathrm{diag}}(\mu_v + \, M\, x )^{-1} \, \mathop{\mathrm{diag}}(M\,x) \, \bar N_v
\end{bmatrix}\]
We will use a result of Hethcote and Thieme \cite{MR87c:92046}, which we recall for the convenience of the reader:
\begin{lemma}[\mbox{Theorem 2.1 in \cite{MR87c:92046}}]
\label{thm:thieme}
Let $F(w)$ be a continuous, monotone non-decreasing, strictly
sublinear, bounded function which maps the nonnegative orthant
$\mathbb{R}^n_+= [0, \infty)^n$ into itself. Let $F(0)=0$ and $F'(0)$
exist and be irreducible. Then $F(w)$ does not have a
nontrivial fixed point on the boundary of $\mathbb{R}^n_+$. Moreover,
$F(w)$ has a positive fixed point iff $\rho(F'(0))>1$. If there
is a fixed point, then it is unique.
\end{lemma}
We have to check, for our function $F$ defined on $\mathbb{R}^n_+
\times \mathbb{R}^n_+$, the conditions of Lemma~\ref{thm:thieme}.
It is immediate that $F$ is continuous, bounded and maps the
nonnegative orthant $\mathbb{R}^n_+ \times \mathbb{R}^n_+$ into itself.
The function $F$ is monotone since the Jacobian of $F$ is
\[JF(x,y)=
\begin{bmatrix}
0 & A_1 \\
A_2 &0
\end{bmatrix}\]
with
\[ A_1= \mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}\mathop{\mathrm{diag}}(\mu_h + \, L\, y )^{-1} \, \mathop{\mathrm{diag}}(\Lambda_h) \left [I_n- \mathop{\mathrm{diag}}(\mu_h \ + \, L\, y )^{-1} \, \mathop{\mathrm{diag}}(L \, y)\right ] \,L.\]
and
\[ A_2= \mathop{\mathrm{diag}}( \bar N_v )\, \mathop{\mathrm{diag}}(\mu_v + \, M\, x )^{-1} \, \left [I_n - \mathop{\mathrm{diag}}(\mu_v + \, M\, x )^{-1} \, \mathop{\mathrm{diag}}(M\,x) \right ] \, M. \]
\noindent
Then $JF(x,y)$ is a Metzler matrix, i.e., a matrix whose off-diagonal terms are nonnegative \cite{MR94c:34067,0458.93001}. These matrices are also known as quasi-positive matrices \cite{0821.34003,MR1993355}. This proves that $F$ is monotone \cite{0821.34003,MR2182759}.
Now we have to check the strict sublinearity. We use the
equivalent definition of \cite{MR2182759}, with the standard
ordering of $\mathbb{R}^n$ and the classical notations: $x \leq y $
if $x_i \leq y_i$ for every index $i$; $x <y$ if $x\leq y$ and
$x \neq y$; $x \ll y$ if $x_i <y_i$ for every index $i$.
\noindent
$F$ is strictly sublinear if
\[0<\lambda <1, \;\; w \gg 0 \Longrightarrow \; \lambda \, F(w) \gg F(\lambda \, w).\]
\noindent For $x \gg 0$ and $y \gg 0$, since $\mathcal{M}$ is irreducible, we must have $\mathcal{M}\begin{pmatrix}
x\\y \end{pmatrix} \gg0$, and hence $L \,y \gg 0$ and
$M\, x \gg 0$. Thus, for $0<\lambda<1$, $\mu_h + \lambda \, L\, y
\ll \mu_h + \, L\, y$ and, similarly, $\mu_v + \lambda \, M\, x \ll \mu_v
+ M\, x$. This proves the strict sublinearity. Using the
formula for the Jacobian of $F$, we have
\[ JF(0
,0) =
\begin{bmatrix}
0 & \mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}\mathop{\mathrm{diag}}(\bar{N}_h)\, L \\
\\
\mathop{\mathrm{diag}}(\mu_v)^{-1}\, \mathop{\mathrm{diag}}( \bar N_v ) \, M & 0
\end{bmatrix}\]
\noindent This matrix is irreducible, since $\mathcal{M}$ is irreducible, and $\rho(JF(0,0))=\mathcal
R_0$. All the requirements of Lemma~\ref{thm:thieme} are satisfied.
This proves that there exists a unique positive endemic
equilibrium in $\mathbb{R}^n_+$ when $\mathcal R_0 >1$. Moreover,
looking at the expression of $F$, it is clear that this
equilibrium is in the compact $\Omega$.
\medskip \noindent We now prove the asymptotic stability of this positive
equilibrium. The proof is adapted from~\cite{MR87c:92046}, using Krasnosel{$'$}ski{\u\i}'s trick~\cite{MR0181881}. The difference is that we have to
vectorize this proof for the infectives of both human hosts and
vectors.
We will show that the linearized equation has no solution of
the form $X(t)=\exp(z\, t) \, X_0$ with $X_0 \in \mathbb{C}^{3n}$, $z
\in \mathbb{C}$ and $\Re z \geq 0$, where $X_0$ is an eigenvector and $z$ the
corresponding eigenvalue of the Jacobian computed at the
endemic equilibrium. Let $X_0=(U,V,W) \in \mathbb{C}^{3n}$ be such an
eigenvector for the eigenvalue $z$. Then
\begin{subequations}
\begin{align}
z \, U&= -\mathop{\mathrm{diag}}(\mu_h) \,U - \mathop{\mathrm{diag}}(L\,I_v^*) \, U - \mathop{\mathrm{diag}}(S_h^*) \, L \, W
\label{eigen1}\\
z\,V &= \mathop{\mathrm{diag}}(L\, I_v^* ) \,U - \mathop{\mathrm{diag}}(\mu_h+\gamma_h) \, V + \mathop{\mathrm{diag}}(S_h^*) \, L \, W \label{eigen2}\\
z\,W&= \mathop{\mathrm{diag}}(\bar N_v-I_v^*) \, M\, V -\mathop{\mathrm{diag}}(\mu_v) \, W - \mathop{\mathrm{diag}}(M\,I_h^*) \, W \label{eigen3}
\end{align}
\end{subequations}
\noindent Adding the sub-equations (\ref{eigen1}) and (\ref{eigen2}),
we obtain the relation
\[\mathop{\mathrm{diag}}(\mu_h+z\mathds{1}) \, U= -\mathop{\mathrm{diag}}(\mu_h+\gamma_h +z\mathds{1}) \, V. \]
\noindent Replacing $U$ in (\ref{eigen2}), and rearranging this together with (\ref{eigen3}), yields
\begin{multline} \label{krasno}
\begin{bmatrix}
\mathop{\mathrm{diag}}\left (\mathds{1} + z\mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}\mathds{1} + \mathop{\mathrm{diag}}(z\mathds{1}+\mu_h+\gamma_h)\mathop{\mathrm{diag}}(z\mathds{1}+\mu_h)^{-1}\mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}\, L\,I_v^* \right ) \, V \\
\\
\mathop{\mathrm{diag}}\left (\mathds{1}+ z\mathop{\mathrm{diag}}(\mu_v)^{-1}\mathds{1}+
\mathop{\mathrm{diag}}(\mu_v)^{-1} M\,I_h^*\right ) \, W
\end{bmatrix} = \\
\\
\begin{bmatrix}
0 & \mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}\mathop{\mathrm{diag}}(S_h^*) \, L \\
\\
\mathop{\mathrm{diag}}(\mu_v)^{-1} \mathop{\mathrm{diag}}(\bar N_v-I_v^*) \, M& 0
\end{bmatrix} \,
\begin{bmatrix}
V\\
\\
W
\end{bmatrix}
\end{multline}
The matrix
\[ H = \begin{bmatrix}
0 & \mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}\mathop{\mathrm{diag}}(S_h^*) \, L \\
\\
\mathop{\mathrm{diag}}(\mu_v)^{-1}\mathop{\mathrm{diag}}(\bar N_v-I_v^*) \, M& 0
\end{bmatrix} \]
is a nonnegative irreducible matrix, since its associated graph is isomorphic to $\Gamma(\mathcal{M})$. From equations
(\ref{subeqnendem2}) and (\ref{subeqnendem3}), we have that
\[ H \, \begin{bmatrix}
I_h^*\\
I_v^*
\end{bmatrix} = \begin{bmatrix}
I_h^*\\
I_v^*
\end{bmatrix}.\]
\noindent Note that $\begin{bmatrix}
I_h^*\\
I_v^*
\end{bmatrix}$ is the positive Perron-Frobenius vector of $H$.
\noindent We assume that $\Re z \geq 0$. Let $\eta (z)$ be the
minimum of the real part of the components of the two vectors
\[ z\mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}\mathds{1}+ \mathop{\mathrm{diag}}(z\mathds{1}+\mu_h+\gamma_h)\mathop{\mathrm{diag}}(z\mathds{1}+\mu_h)^{-1}\mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1} L\,I_v^*\]
and
\[ z\mathop{\mathrm{diag}}(\mu_v)^{-1}\mathds{1}+\mathop{\mathrm{diag}}(\mu_v)^{-1}M\,I_h^* \]
Since $\Re z \geq 0$, $I_v^* \gg 0$, $I_h^* \gg 0$, the irreducibility of $\mathcal{M}$ implies that we have $\eta(z)>0$.
Taking the absolute values in (\ref{krasno}) gives
\[[ 1+\eta(z) ] \,
\begin{bmatrix}
| V | \\
| W|
\end{bmatrix} \leq H \, \begin{bmatrix}
| V | \\
| W|
\end{bmatrix} \]
Let $r$ be the minimal number such that
\[ \begin{bmatrix}
| V | \\
| W|
\end{bmatrix} \leq r \, \begin{bmatrix}
I_h^*\\
I_v^*
\end{bmatrix}. \]
We now have
\[[ 1+\eta(z) ] \,
\begin{bmatrix}
| V | \\
| W|
\end{bmatrix} \leq H \, \begin{bmatrix}
| V | \\
| W|
\end{bmatrix} \leq r \, H \, \begin{bmatrix}
I_h^*\\
I_v^*
\end{bmatrix}= r \, \begin{bmatrix}
I_h^*\\
I_v^*
\end{bmatrix}.\]
Since $\eta(z) >0$ if $\Re z\geq 0$, we obtain a
contradiction to the minimality of $r$. Thus $\Re z <0$, which
proves the asymptotic stability at the endemic equilibrium.
\end{proof}
\section{Global Dynamics}
\label{sec:global}
In this section, we discuss a number of results concerning the global dynamics of system~\eqref{npatch4}. We begin by introducing some notation to allow an easier handling of the vector calculations.
\begin{definition}
The entry-wise product for vectors, the Hadamard product, will be denoted by $\circ$. Namely, if $(X_1,\ldots,X_n), (Y_1,\ldots,Y_n)\in\mathbb{R}^n$, then
\[
(X_1,\ldots,X_n)\circ(Y_1,\ldots,Y_n)=(X_1Y_1,\ldots,X_nY_n).
\]
For a vector $\mathbf{X}=(X_1,\ldots,X_n)\in\mathbb{R}^n$ and for $f:I\subset\mathbb{R}\to\mathbb{R}$, we shall write
%
\[
f(\mathbf{X})=(f(X_1),\ldots,f(X_n)).
\]
%
In particular, if $X=(X_1,\ldots,X_n)\gg0$, then $X^{-1}=(X_1^{-1},\ldots,X_n^{-1})$.
\end{definition}
We collect some useful facts about the manipulation of expressions involving Hadamard products in the following lemma:
\begin{lemma}
If $\mathbf{X}_1,\ldots,\mathbf{X}_m\in\mathbb{R}^n$ and $M\in M_n(\mathbb{R})$ then we have
\begin{enumerate}
\item $\mathbf{X}_1+\cdots+\mathbf{X}_m\geq m\sqrt[m]{\mathbf{X}_1\mathop{\circ}\dots\mathop{\circ} \mathbf{X}_m};$\\
\item $\mathbf{X}_1\circ(M\mathbf{X}_2)=\mathop{\mathrm{diag}}(\mathbf{X}_1)M\mathbf{X}_2=\mathop{\mathrm{diag}}(M\mathbf{X}_2)\mathbf{X}_1;$
\item if $\mathbf{X}_1=\mathbf{X}_1(t)$, and if $f$ is differentiable then $\dfrac{\mathrm{d}}{\mathrm{d} t}f(\mathbf{X}_1)=\dot{\mathbf{X}}_1\mathop{\circ} f'(\mathbf{X}_1)$.
\end{enumerate}
\end{lemma}
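The identities above are elementary but easy to misapply; the following NumPy sketch (illustrative values only) checks items (1)--(3) numerically, with $m=2$ in item (1) and $f=\exp$ in item (3):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
X1, X2 = rng.uniform(0.1, 2.0, n), rng.uniform(0.1, 2.0, n)
M = rng.uniform(0.0, 1.0, (n, n))

# (1) AM-GM for the Hadamard product, here with m = 2 factors:
assert np.all(X1 + X2 >= 2 * np.sqrt(X1 * X2) - 1e-12)

# (2) X1 o (M X2) = diag(X1) M X2 = diag(M X2) X1:
lhs = X1 * (M @ X2)
assert np.allclose(lhs, np.diag(X1) @ M @ X2)
assert np.allclose(lhs, np.diag(M @ X2) @ X1)

# (3) d/dt f(X(t)) = X'(t) o f'(X(t)), checked by central differences
#     for f = exp at a sample time t:
t, h = 0.3, 1e-6
X = lambda s: np.array([np.sin(s), np.cos(2 * s), s**2, 1.0 + s])
Xdot = lambda s: np.array([np.cos(s), -2 * np.sin(2 * s), 2 * s, 1.0])
num = (np.exp(X(t + h)) - np.exp(X(t - h))) / (2 * h)
assert np.allclose(num, Xdot(t) * np.exp(X(t)), atol=1e-6)
```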
It turns out that it is more convenient to work with system~\eqref{npatch4} in prevalence form, so that the susceptible population at the disease-free equilibrium (DFE), for both host and vector populations in each group, is unity. Let
\begin{align*}
&D_h=\mathop{\mathrm{diag}}(\bar N_h),\; D_v=\mathop{\mathrm{diag}}(\bar N_v),\\[2mm]
&(X,Y)=D_h^{-1}(S_h, I_h),\quad Z=D_v^{-1}I_v\\[2mm]
&A=LD_v\text{ and }B=MD_h.
\end{align*}
In this case system~\eqref{npatch4} reads
\begin{equation}
\label{eqn:sys_res}
\left\{
\begin{array}{ccc}
\dot{X}&=&\mu_h\circ(\mathds{1}-X)-\mathop{\mathrm{diag}}(X)AZ\\[2mm]
\dot{Y}&=&\mathop{\mathrm{diag}}(X)AZ-(\mu_h+\gamma_h)\circ Y\\[2mm]
\dot{Z}&=&\mathop{\mathrm{diag}}(\mathds{1}-Z)BY-\mu_v\circ Z
\end{array}
\right.
\end{equation}
With this notation, the DFE is $(\mathds{1},0,0)$ and we shall write the EE as $(\bar{X},\bar{Y},\bar{Z})$, with
\[
\bar{X}_i=\dfrac{S^*_{h,i}}{\bar{N}_{h,i}},\quad \bar{Y}_i=\dfrac{I^*_{h,i}}{\bar{N}_{h,i}}\quad \text{and}\quad \bar{Z}_i=\dfrac{I^*_{v,i}}{\bar{N}_{v,i}}.
\]
Notice that, since in the new coordinates we have $\mathop{\mathrm{diag}}(\bar N_h)=\mathop{\mathrm{diag}}(\bar N_v)=\mathop{\mathrm{diag}}(\mathds{1})$, the next generation operator is now given by
\[
\mathcal{N}=\begin{pmatrix}
0&\mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}A\\
\mathop{\mathrm{diag}}(\mu_v)^{-1}B&0
\end{pmatrix}.
\]
Also, the absorbing set can now be written as
\[
K=\left\{(X,Y,Z)\in\mathbb{R}^{3n} \text{ s.t. } 0\leq X+Y\leq \mathds{1},\quad 0\leq Z\leq\mathds{1} \right\}.
\]
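In these rescaled coordinates, the basic reproduction number is the spectral radius $\mathcal{R}_0=\rho(\mathcal{N})$, and the bipartite block structure gives $\mathcal{R}_0^2=\rho\big(\mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}A\,\mathop{\mathrm{diag}}(\mu_v)^{-1}B\big)$. A hedged NumPy sketch with illustrative, randomly drawn parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
mu_h = rng.uniform(0.05, 0.2, n)
gamma_h = rng.uniform(0.1, 0.4, n)
mu_v = rng.uniform(0.3, 0.6, n)
A = rng.uniform(0.0, 1.0, (n, n))
B = rng.uniform(0.0, 1.0, (n, n))

C = np.diag(1.0 / (mu_h + gamma_h)) @ A   # host block of N
D = np.diag(1.0 / mu_v) @ B               # vector block of N
N = np.block([[np.zeros((n, n)), C],
              [D, np.zeros((n, n))]])

R0 = max(abs(np.linalg.eigvals(N)))

# Since N^2 = blockdiag(CD, DC), the spectral radius satisfies
# rho(N)^2 = rho(CD):
assert np.isclose(R0**2, max(abs(np.linalg.eigvals(C @ D))))
```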
We begin with the stability of the DFE when $\mathcal{R}_0\leq1$:
\begin{theorem}
\label{thm:gs:dfe}
Assume that hypothesis \ref{hyp:1} holds and that $\mathcal R_0\leq 1$. Then the DFE is globally asymptotically stable. If $\mathcal{R}_0>1$, then the DFE is unstable.
\end{theorem}
\begin{proof}
Since $\mathcal{N}$ is irreducible, let $(\alpha, \beta)$ be a positive left eigenvector of $\mathcal{N}$ associated to the eigenvalue $\mathcal R_0$. Let
%
\[
V=\langle \alpha,Y\rangle + \langle \beta,(\mu_h+\gamma_h)\circ \mu_v^{-1}\circ Z\rangle
\quad\text{and}\quad
R=\langle\alpha,\mathop{\mathrm{diag}}(\mathds{1}-X)AZ\rangle + \langle\beta,(\mu_h+\gamma_h)\circ\mu_v^{-1}\circ\,\mathop{\mathrm{diag}}(Z)BY\rangle.
\]
%
Notice that $R\geq0$ on $K$, and that $R$ vanishes exactly on the set
\[ S_0=\{(X,Y,Z) \in K : \mathop{\mathrm{diag}}(\mathds{1}-X)AZ=\mathop{\mathrm{diag}}(Z)BY=0\}.
\]
Computing the derivative of $V$ along the flow, we have:
\begin{align*}
\dot{V}&=\langle \alpha,\dot{Y}\rangle + \langle \beta, (\mu_h+\gamma_h)\circ\mu_v^{-1}\circ\dot{Z}\rangle\\
&= \langle \alpha,\mathop{\mathrm{diag}}{(X)}AZ-\left(\mu_h+\gamma_h\right)\circ Y\rangle + \langle \beta, (\mu_h+\gamma_h)\circ\mu_v^{-1}\circ\left(\mathop{\mathrm{diag}}(\mathds{1}-Z)BY-\mu_v\circ Z\right)\rangle\\
&= \langle \alpha,AZ-\left(\mu_h+\gamma_h\right)\circ Y\rangle + \langle \beta,(\mu_h+\gamma_h)\circ\mu_v^{-1}\circ\left( BY-\mu_vZ\right)\rangle -R\\
&=\left[\mathcal R_0\langle(\mu_h+\gamma_h)\circ\beta,Z\rangle - \langle(\mu_h+\gamma_h)\circ\alpha,Y\rangle +\mathcal R_0\langle(\mu_h+\gamma_h)\circ\alpha,Y\rangle - \langle(\mu_h+\gamma_h)\circ\beta,Z\rangle\right] -R\\
&=\left(\mathcal R_0-1\right)\left[\langle(\mu_h+\gamma_h)\circ\alpha,Y\rangle +\langle(\mu_h+\gamma_h)\circ\beta,Z\rangle\right]-R\\
&\leq 0,
\end{align*}
%
provided that $\mathcal R_0\leq 1$.
Also, notice that when $\mathcal R_0<1$, we have that $\dot{V}=0$ if,
and only if, $Y=Z=0$. Since the DFE is the unique invariant
compact set in this latter case, LaSalle's invariance principle implies
that it is globally asymptotically stable. If $\mathcal R_0=1$, then $\dot{V}=0$ holds in $S_0$, which contains the set $\{(X,Y,Z) \mid Y=Z=0\}$. Nevertheless, it can then be easily verified from system~\eqref{eqn:sys_res} that the DFE is the only invariant set contained in $S_0$. Thus, the result follows once again from LaSalle's invariance principle.
If $\mathcal{R}_0>1$ and both $Y$ and $Z$ are sufficiently close to zero, then $\dot{V}(\mathds{1},Y,Z)>0$. By continuity, this also holds in a neighbourhood of $(\mathds{1},0,0)$, and hence the DFE is unstable.
\end{proof}
Before we can tackle the global stability of the endemic equilibrium, when $\mathcal R_0>1$, we need some preliminary results.
\begin{lemma}
\label{lem:tt:pev}
Assume that Hypothesis~\ref{hyp:1} holds, and let
\[
\bar{\mathcal{N}}=
\begin{pmatrix}
0&\mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}\,\mathop{\mathrm{diag}}(\bar{X})A\\
\mathop{\mathrm{diag}}(\mu_v)^{-1}\,\mathop{\mathrm{diag}}(\mathds{1}-\bar{Z})B&0
\end{pmatrix}.
\]
Then, $\bar{\mathcal{N}}$ is irreducible, $\rho(\bar{\mathcal{N}})=1$ and $\bar{\mathcal{N}}$ has a positive left eigenvector $(\xi,\eta)^t$ to $\rho(\bar{\mathcal{N}})$. In addition, let
\[
T=\mathop{\mathrm{diag}}(\mu_v)^{-1}\mathop{\mathrm{diag}}(\mu_h+\gamma_h)^{-1}\mathop{\mathrm{diag}}(\bar{X})A\mathop{\mathrm{diag}}(\mathds{1}-\bar{Z})B.
\]
Then $\rho(T)=1$, and $T^t\eta=\eta$.
\end{lemma}
\vspace{3mm}
\begin{proof}
Since Hypothesis~\ref{hyp:1} holds, we have that $\mathcal{N}$ is irreducible, and hence $\bar{\mathcal{N}}$ is irreducible. From the equilibrium relationship we also have
\[
\bar{\mathcal{N}}\begin{pmatrix}
\bar{Y}\\\bar{Z}
\end{pmatrix}
=
\begin{pmatrix}
\bar{Y}\\\bar{Z}
\end{pmatrix},
\]
and hence we have
\[
\rho(\bar{\mathcal{N}})=1.
\]
The remaining claims follow from Proposition~\ref{prop:factors_irred}.
\end{proof}
Before giving the next definition, we introduce some terminology. For a given digraph $G$, we will denote its set of vertices by $\mathcal{V}(G)$, and the set of edges of $G$ by $\mathcal{E}(G)\subset \mathcal{V}(G)\times\mathcal{V}(G)$. A $c$-edge colored multidigraph ($c$-ECM for short) is a multi-digraph where the parallel edges must have different colors---and therefore a maximum of $c$ parallel edges are allowed. If $G$ is a $c$-ECM, we will write $\mathcal{C}(G)$ for its set of colors. Thus each edge of $G$ can be uniquely described as an ordered triple $(v_1,v_2,c)\in \mathcal{E}(G)\subset\mathcal{V}(G)\times\mathcal{V}(G)\times\mathcal{C}(G)$.
\begin{definition}[Transitive Contact Multigraph]
Given a contact network $\Gamma(\mathcal{M})$, we define the \textit{transitive contact multigraph} (TCM for short) $\Gamma(\mathfrak{M})$ as the $n$-ECM of order $n$, obtained from $\Gamma(\mathcal{M})$ by taking $\mathcal{V}(\Gamma(\mathfrak{M}))=\{1,\ldots,n\}$ and defining $(i,j,k)\in\mathcal{E}(\Gamma(\mathfrak{M}))$ if $L_{i,k}M_{k,j}\not=0$.
\end{definition}
\vspace{3mm}
\begin{remark}
Notice that if we collapse all the parallel edges, then we obtain a graph isomorphic to $\Gamma(LM)$. In particular, Proposition~\ref{prop:factors_irred} then says that $\Gamma(\mathfrak{M})$ is strongly connected.
\end{remark}
\begin{remark}
If $(i,j,k)\in\mathcal{E}(\Gamma(\mathfrak{M}))$, then an infected host in group $j$ can be the origin of an infection of a host in group $i$ by infecting a vector of group $k$, which then infects the host in group $i$. Within the fast-travelling interpretation, this means that an infected host resident in region $j$ can travel to region $k$, where it infects a vector there. This infected vector will subsequently infect a susceptible host of region $i$ that travels to region $k$. See Figure~\ref{fig:exm} for an example of a host-vector contact network and the corresponding transitive contact multigraph.
\end{remark}
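The edge set of a TCM can be enumerated directly from $L$ and $M$. A hedged NumPy sketch with small illustrative travel matrices (not those of Figure~\ref{fig:exm}; indices are $0$-based in the code):

```python
import numpy as np

# Illustrative travel matrices (not the paper's example).
L = np.array([[1, 0, 1],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
M = np.array([[1, 1, 0],
              [0, 1, 0],
              [1, 0, 1]], dtype=float)
n = L.shape[0]

# (i, j, k) is an edge of the TCM iff L[i, k] * M[k, j] != 0:
edges = [(i, j, k) for i in range(n) for j in range(n) for k in range(n)
         if L[i, k] * M[k, j] != 0]

# Collapsing parallel edges (forgetting the color k) recovers the
# graph of the product L @ M:
collapsed = {(i, j) for (i, j, k) in edges}
assert collapsed == {(i, j) for i in range(n) for j in range(n)
                     if (L @ M)[i, j] != 0}
```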
\begin{figure}[htbp]
\subfloat[Host-vector contact network]{\includegraphics[scale=1.25]{fig1a}}
\hfill
\subfloat[Transitive contact multigraph]{\includegraphics[scale=1.25]{fig1b}}
\caption{In (a) we display a host-vector contact network. Within the travelling interpretation of the model, the solid lines indicate the travelling patterns of susceptible hosts (specified by the nonzero entries of $L$), while the dotted lines indicate the travelling patterns of the infected hosts (specified by the nonzero entries of $M$). Notice that, in this example, neither $L$ nor $M$ is irreducible, but $\mathcal{M}$ is. In (b) we display the corresponding TCM: the red edges indicate connections through region 1 (dotted lines in B\&W), the green edges indicate connections through region 2 (dashed lines in B\&W), and the blue edge indicates a connection through region 3 (dashed-dotted lines in B\&W).}
\label{fig:exm}
\end{figure}
We will now give a graph-theoretical interpretation of $\eta$.
\begin{proposition}
Let $\zeta=\mathop{\mathrm{diag}}(\bar{Y})\eta$. Then $\zeta$ spans the kernel of the graph Laplacian of $\mathfrak{M}$. In particular, its entries are given by (a multiple of) the principal minors along the diagonal and, therefore, each entry is equal to the sum, over all spanning trees of $\mathfrak{M}$, of the product of the weights of the corresponding tree.
\end{proposition}
\begin{proof}
From the equilibrium relations, we have
\[
T\bar{Y}=\bar{Y}
\]
and hence
\[
\tilde{T}\cdot\mathds{1}=\mathds{1},\text{ where } \tilde{T}=\mathop{\mathrm{diag}}(\bar{Y})^{-1}\,T\,\mathop{\mathrm{diag}}(\bar{Y}).
\]
Thus, we also have
\[
\tilde{T}^t\zeta=\zeta,\quad \zeta=\mathop{\mathrm{diag}}(\bar{Y})\eta.
\]
Notice now that
\begin{equation*}
I-\tilde{T}^t=
\begin{pmatrix}
1-\tilde{T}_{11}&-\tilde{T}_{21}&\cdots&-\tilde{T}_{n1}\\
-\tilde{T}_{12}&1-\tilde{T}_{22}&\cdots&-\tilde{T}_{n2}\\
\vdots&\vdots&\ddots&\vdots\\
-\tilde{T}_{1n}&-\tilde{T}_{2n}&\cdots&1-\tilde{T}_{nn}
\end{pmatrix}=
\begin{pmatrix}
\sum_{j\not=1}\tilde{T}_{1j}&-\tilde{T}_{21}&\cdots&-\tilde{T}_{n1}\\
-\tilde{T}_{12}&\sum_{j\not=2}\tilde{T}_{2j}&\cdots&-\tilde{T}_{n2}\\
\vdots&\vdots&\ddots&\vdots\\
-\tilde{T}_{1n}&-\tilde{T}_{2n}&\cdots&\sum_{j\not=n}\tilde{T}_{nj}
\end{pmatrix},
\end{equation*}
where we have used that $\tilde{T}\mathds{1}=\mathds{1}$; that is,
\[
\sum_{j=1}^n\tilde{T}_{i,j}=1,\quad i\in\{1,\ldots,n\}.
\]
Therefore $\zeta$ is in the kernel of the matrix Laplacian of $\tilde{T}^t$.
In addition, we have that
\[
\Gamma(\tilde{T}^t)=\Gamma(\tilde{T})=\Gamma(T),
\]
and the latter is isomorphic to $\mathfrak{M}$ when collapsing all the parallel edges; hence the Laplacian of $\mathfrak{M}$ with its edge directions reversed is $I-\tilde{T}^t$. Furthermore, since $\Gamma(\mathcal{M})$ is strongly connected, we have, by Proposition~\ref{prop:factors_irred}, that $\mathfrak{M}$ is also strongly connected, and thus the kernel of the associated Laplacian is one-dimensional \cite{Chung:1996}.
The other claims follow from Kirchhoff's theorem for multigraphs---cf. \cite{Bollobas:1998}.
\end{proof}
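The mechanism of the proof can be checked numerically: for a strictly positive row-stochastic matrix standing in for $\tilde{T}$, the left Perron vector lies in the kernel of $I-\tilde{T}^t$, that kernel is one-dimensional, and its entries are proportional to the principal minors of the Laplacian, as in the matrix-tree theorem. An illustrative NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
T = rng.uniform(0.1, 1.0, (n, n))
T /= T.sum(axis=1, keepdims=True)    # row-stochastic, strictly positive

Lap = np.eye(n) - T                  # so Lap.T = I - T^t

# Left Perron vector: T^t zeta = zeta (eigenvalue 1 is dominant).
w, V = np.linalg.eig(T.T)
zeta = np.real(V[:, np.argmax(np.real(w))])
zeta /= zeta.sum()

assert np.allclose(Lap.T @ zeta, 0, atol=1e-8)  # zeta in ker(I - T^t)
assert np.linalg.matrix_rank(Lap) == n - 1      # kernel is one-dimensional

# Matrix-tree theorem: zeta_i is proportional to the i-th principal
# minor of the Laplacian (delete row i and column i, take det).
minors = np.array([np.linalg.det(np.delete(np.delete(Lap, i, 0), i, 1))
                   for i in range(n)])
assert np.allclose(minors / minors.sum(), zeta, atol=1e-8)
```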
\bigskip
\bigskip
\begin{theorem}\label{thm:gs:ee}
Assume that Hypothesis~\ref{hyp:1} holds and that $\mathcal R_0>1$. Then the EE is globally asymptotically stable.
\end{theorem}
\bigskip
\begin{proof}
Let
\[
V=\langle X-\bar{X}\circ\log(X),\eta\rangle + \langle Y-\bar{Y}\circ\log(Y),\eta\rangle + \langle Z-\bar{Z}\circ\log(Z),\bar{\xi}\rangle,\quad \bar{\xi}=(\mu_h+\gamma_h)\circ\mu_v^{-1}\circ\xi,
\]
where $(\xi,\eta)^t$ is the positive left eigenvector of $\bar{\mathcal{N}}$ as discussed in Lemma~\ref{lem:tt:pev}. In particular, we have that
\[
A^t\mathop{\mathrm{diag}}(\bar{X})\eta=\mu_v\circ\bar{\xi}
\quad\text{and}\quad
B^t\mathop{\mathrm{diag}}(\mathds{1}-\bar{Z})\bar{\xi}=(\mu_h+\gamma_h)\circ\eta.
\]
Then
\begin{align*}
\dot{V}&= \langle \dot{X}\circ\left(\mathds{1}-\bar{X}\circ X^{-1}\right),\eta\rangle + \langle \dot{Y}\circ\left(\mathds{1}-\bar{Y}\circ Y^{-1}\right),\eta\rangle + \langle \dot{Z}\circ\left(\mathds{1}-\bar{Z}\circ Z^{-1}\right),\bar{\xi}\rangle\\
&=\langle \mu_h\circ(\mathds{1}-X)-\mathop{\mathrm{diag}}(X)AZ-\mu_h\circ(\mathds{1}-X)\circ\bar{X}\circ X^{-1}+\left(\mathop{\mathrm{diag}}(X)AZ\right)\circ\bar{X}\circ X^{-1},\eta\rangle\\
&\qquad +\langle\mathop{\mathrm{diag}}(X)AZ-(\mu_h+\gamma_h)\circ Y-\left(\mathop{\mathrm{diag}}(X)AZ\right)\circ \bar{Y}\circ Y^{-1} +(\mu_h+\gamma_h)\circ\bar{Y},\eta\rangle\\
&\qquad +\langle\mathop{\mathrm{diag}}(\mathds{1}-Z)BY-\mu_v\circ Z-\left(\mathop{\mathrm{diag}}(\mathds{1}-Z)BY\right)\circ\bar{Z}\circ Z^{-1}+\mu_v\circ\bar{Z},\bar{\xi}\rangle\\
&=\langle\mu_h\circ\left( \mathds{1}+\bar{X}-X-\bar{X}\circ X^{-1}\right),\eta\rangle +\langle(AZ)\circ\bar{X},\eta\rangle-\langle\mu_v\circ Z,\bar{\xi}\rangle -\langle (\mu_h+\gamma_h)\circ Y,\eta\rangle\\
&\qquad +\langle (\mu_h+\gamma_h)\circ\bar{Y},\eta\rangle - \langle\left(\mathop{\mathrm{diag}}(X)AZ\right)\circ\bar{Y}\circ Y^{-1},\eta\rangle + \langle\mathop{\mathrm{diag}}(\mathds{1}-Z)BY,\bar{\xi}\rangle\\
&\qquad -\langle\left(\mathop{\mathrm{diag}}(\mathds{1}-Z)BY\right)\circ\bar{Z}\circ Z^{-1},\bar{\xi}\rangle+\langle\mu_v\circ\bar{Z},\bar{\xi}\rangle.
\end{align*}
Now observe that
\[
\langle(AZ)\circ\bar{X},\eta\rangle=\langle\mathop{\mathrm{diag}}(\bar{X})AZ,\eta\rangle=\langle Z,A^t\mathop{\mathrm{diag}}(\bar{X})\eta\rangle=\langle\mu_v\circ Z,\bar{\xi}\rangle.
\]
Also, from the equilibrium equations:
\[
(\mu_h+\gamma_h)\circ\bar{Y}=\mu_h\circ(\mathds{1}-\bar{X})
\quad\text{and}\quad
\mathop{\mathrm{diag}}(\bar{X})A\bar{Z}=\mu_h\circ(\mathds{1}-\bar{X}).
\]
Thus,
\[
\langle\mu_v\circ\bar{Z},\bar{\xi}\rangle=\langle\bar{Z},A^t\mathop{\mathrm{diag}}(\bar{X})\eta\rangle=\langle \mu_h\circ\left(\mathds{1}-\bar{X}\right),\eta\rangle.
\]
Combining all this information, we find that
\begin{align*}
\dot{V}&=\langle \mu_h\circ\left(3\mathds{1}-\bar{X}-X-\bar{X}\circ X^{-1}\right),\eta\rangle-\langle(\mu_h+\gamma_h)\circ Y,\eta\rangle
-\langle\left(\mathop{\mathrm{diag}}(X)AZ\right)\circ\bar{Y}\circ Y^{-1},\eta\rangle\\
&\qquad + \langle\mathop{\mathrm{diag}}(\mathds{1}-Z)BY,\bar{\xi}\rangle - \langle\left(\mathop{\mathrm{diag}}(\mathds{1}-Z)BY\right)\circ\bar{Z}\circ Z^{-1},\bar{\xi}\rangle\\
&= \langle\mu_h\circ\left( 3\mathds{1}-\bar{X}-X-\bar{X}\circ X^{-1}\right),\eta\rangle + \langle\mathop{\mathrm{diag}}(\mathds{1}-\bar{Z})BY,\bar{\xi}\rangle
-\langle(\mu_h+\gamma_h)\circ Y,\eta\rangle\\
&\qquad -\langle\left(\mathop{\mathrm{diag}}(X)AZ\right)\circ\bar{Y}\circ Y^{-1},\eta\rangle + \langle\mathop{\mathrm{diag}}(\bar{Z}-Z)BY,\bar{\xi}\rangle
-\langle\left(\mathop{\mathrm{diag}}(\mathds{1}-Z)BY\right)\circ\bar{Z}\circ Z^{-1},\bar{\xi}\rangle.
\end{align*}
We also have
\begin{align*}
\langle\mathop{\mathrm{diag}}(\mathds{1}-\bar{Z})BY,\bar{\xi}\rangle&=\langle Y,B^t\mathop{\mathrm{diag}}(\mathds{1}-\bar{Z})\bar{\xi}\rangle\\
&=\langle(\mu_h+\gamma_h)\circ Y,\eta\rangle.
\end{align*}
and
\begin{align*}
&\langle\mathop{\mathrm{diag}}(\bar{Z}-Z)BY,\bar{\xi}\rangle
-\langle\left(\mathop{\mathrm{diag}}(\mathds{1}-Z)BY\right)\circ\bar{Z}\circ Z^{-1},\bar{\xi}\rangle\\
&\qquad = \left\langle\left[2\bar{Z}-Z-\bar{Z}\circ Z^{-1}\right]\circ BY,\bar{\xi}\right\rangle.
\end{align*}
Hence, we are left with
\begin{align*}
\dot{V}&=\langle\mu_h\circ\left( 3\mathds{1}-\bar{X}-X-\bar{X}\circ X^{-1}\right),\eta\rangle
+ \left\langle\left[2\bar{Z}-Z-\bar{Z}\circ Z^{-1}\right]\circ BY,\bar{\xi}\right\rangle \\
&\qquad - \langle\left(\mathop{\mathrm{diag}}(X)AZ\right)\circ\bar{Y}\circ Y^{-1},\eta\rangle .
\end{align*}
Now we write
\[
\mathds{1}=\bar{X} + \mathds{1}-\bar{X}
\quad\text{and}\quad
\mathds{1}=\bar{Z} + \mathds{1}-\bar{Z}.
\]
Then, we also have
\[
-X-\bar{X}^2\circ X^{-1}\leq -2\bar{X},
\]
and analogously $-Z-\bar{Z}^2\circ Z^{-1}\leq -2\bar{Z}$.
Therefore, we find
\begin{align*}
\dot{V}&\leq 3\langle \mu_h\circ\left(\mathds{1}-\bar{X}\right),\eta\rangle -\langle\mu_h\circ \bar{X}\circ(\mathds{1}-\bar{X})\circ X^{-1},\eta\rangle\\
&\qquad -\langle \bar{Z}\circ(\mathds{1}-\bar{Z})\circ Z^{-1}\circ(BY),\bar{\xi}\rangle
- \langle\left(\mathop{\mathrm{diag}}(X)AZ\right)\circ\bar{Y}\circ Y^{-1},\eta\rangle.
\end{align*}
Notice that the inequality above for $\dot{V}$ is strict, except when $X=\bar{X}$ and $Z=\bar{Z}$.
Since
\[
\bar{\xi}=\mathop{\mathrm{diag}}(\mu_v)^{-1}A^t\mathop{\mathrm{diag}}(\bar{X})\eta,
\]
we can then write
\begin{align*}
\dot{V}&\leq 3\langle \mu_h\circ\left(\mathds{1}-\bar{X}\right),\eta\rangle -\langle \mu_h\circ\bar{X}\circ(\mathds{1}-\bar{X})\circ X^{-1},\eta\rangle\\
&\qquad - \langle \mu_v^{-1}\circ \bar{X}\circ A\left(\bar{Z}\circ(\mathds{1}-\bar{Z})\circ Z^{-1}\circ(BY)\right),\eta\rangle
- \langle\left(\mathop{\mathrm{diag}}(X)AZ\right)\circ\bar{Y}\circ Y^{-1},\eta\rangle.
\end{align*}
Let
\[
\bar{A}=\mathop{\mathrm{diag}}(\bar{X})A\mathop{\mathrm{diag}}(\bar{Z})
\quad\text{and}\quad
\bar{B}=\mathop{\mathrm{diag}}(\mu_v)^{-1}\mathop{\mathrm{diag}}(\bar{Z})^{-1}\mathop{\mathrm{diag}}(\mathds{1}-\bar{Z})B\mathop{\mathrm{diag}}(\bar{Y}).
\]
Then
$
\bar{A}\mathds{1}=\mu_h\circ(\mathds{1}-\bar{X})
\quad\text{and}\quad
\bar{B}\mathds{1}=\mathds{1}.
$
We can then write
\begin{align*}
\dot{V}&\leq 3\langle \bar{A}\mathds{1},\eta\rangle -\langle \left(\bar{A}\mathds{1}\right)\circ\bar{X}\circ X^{-1},\eta\rangle\\
&\qquad - \langle \bar{A}\left( \bar{Z}\circ Z^{-1}\circ \left(\bar{B}\left(Y\circ\bar{Y}^{-1}\right)\right)\right),\eta\rangle
- \langle X\circ\bar{X}^{-1}\circ\left(\bar{A} \left(Z\circ\bar{Z}^{-1}\right)\right)\circ\bar{Y}\circ Y^{-1},\eta\rangle\\
&=\sum_{i=1}^n\eta_i\left[3\left(\bar{A}\mathds{1}\right)_i-\dfrac{\bar{X}_i}{X_i}\left(\bar{A}\mathds{1}\right)_i-\left(\bar{A}\left( \bar{Z}\circ Z^{-1}\circ \left(\bar{B}\left(Y\circ\bar{Y}^{-1}\right)\right)\right)\right)_i-\dfrac{X_i\bar{Y}_i}{\bar{X}_iY_i}\left(\bar{A} \left(Z\circ\bar{Z}^{-1}\right)\right)_i\right]\\
&=\sum_{i,j=1}^n\eta_i\bar{A}_{i,j}\left[3-\dfrac{\bar{X}_i}{X_i}-\dfrac{\bar{Z}_j}{Z_j}\left(\bar{B}\left(Y\circ\bar{Y}^{-1}\right)\right)_j-\dfrac{X_i\bar{Y}_iZ_j}{\bar{X}_iY_i\bar{Z}_j}\right]\\
&=\sum_{i,j,k=1}^n\eta_i\bar{A}_{i,j}\bar{B}_{j,k}\left[3-\dfrac{\bar{X}_i}{X_i}-\dfrac{\bar{Z}_jY_k}{Z_j\bar{Y}_k}-\dfrac{X_i\bar{Y}_iZ_j}{\bar{X}_iY_i\bar{Z}_j}\right]\\
&=H_n.
\end{align*}
Before proceeding, we recall that a unicyclic graph is a graph with exactly one cycle \cite{Knuth:1997}. Given the graph $\mathfrak{M}$, we shall denote by $\mathcal{D}(n,l)$ the set of unicyclic subgraphs of $\mathfrak{M}$ of order $n$ whose cycle has length $l$. Recalling that $\mathfrak{M}$ is an $n$-ECM, we notice that, in a similar way as in Guo et al.\ \cite{gls:2006,gls:2008}, we have
\begin{align*}
H_n&=\sum_{i,j,k}^n\eta_i\bar{A}_{i,j}\bar{B}_{j,k}\left[3-\dfrac{\bar{X}_i}{X_i}-\dfrac{Y_k}{\bar{Y}_k}\dfrac{\bar{Z}_j}{Z_j}-\dfrac{X_i\bar{Y}_i}{\bar{X}_iY_i}\dfrac{Z_j}{\bar{Z}_j}\right]\\
&=\sum_{l=1}^n\left\{\sum_{Q\in\mathcal{D}(n,l)} \left(\prod_{(k,h,j)\in \mathcal{E}(CQ)}\bar{A}_{k,j}\bar{B}_{j,h}\right)\right.\\
&\qquad \left.\times \sum_{(r,m,j)\in \mathcal{E}(CQ)}\left[3-\dfrac{\bar{X}_r}{X_r}-\dfrac{Y_m}{\bar{Y}_m}\dfrac{\bar{Z}_j}{Z_j}-\dfrac{X_r\bar{Y}_r}{\bar{X}_rY_r}\dfrac{Z_j}{\bar{Z}_j}\right]\right\},\\
\end{align*}
where $CQ$ denotes the unique cycle in the unicyclic graph $Q$. Along such a cycle, we have
\begin{align*}
&\sum_{(r,m,j)\in \mathcal{E}(CQ)}\left[3-\dfrac{\bar{X}_r}{X_r}-\dfrac{Y_m}{\bar{Y}_m}\dfrac{\bar{Z}_j}{Z_j}-\dfrac{X_r\bar{Y}_r}{\bar{X}_rY_r}\dfrac{Z_j}{\bar{Z}_j}\right]\\
=&3|\mathcal{E}(CQ)|-\sum_{(r,m,j)\in \mathcal{E}(CQ)}\left[\dfrac{\bar{X}_r}{X_r}+\dfrac{Y_m}{\bar{Y}_m}\dfrac{\bar{Z}_j}{Z_j}+\dfrac{X_r\bar{Y}_r}{\bar{X}_rY_r}\dfrac{Z_j}{\bar{Z}_j}\right]\\
\leq&3|\mathcal{E}(CQ)|-3|\mathcal{E}(CQ)|\left[\prod_{(r,m,j)\in \mathcal{E}(CQ)}\dfrac{Y_m\bar{Y}_r}{\bar{Y}_mY_r}\right]^{1/(3|\mathcal{E}(CQ)|)}\\
=&0,
\end{align*}
where the inequality is the AM--GM inequality applied to the $3|\mathcal{E}(CQ)|$ terms, and the last equality holds because each vertex of the cycle $CQ$ occurs exactly once as a source and once as a target, so the product telescopes to $1$.
Hence, we have that $H_n\leq0$, with equality being attained only when
\[
\dfrac{\bar{X}_r}{X_r}=\dfrac{Y_m}{\bar{Y}_m}\dfrac{\bar{Z}_j}{Z_j}=\dfrac{X_r\bar{Y}_r}{\bar{X}_rY_r}\dfrac{Z_j}{\bar{Z}_j},\quad (r,m,j)\in \mathcal{E}(CQ).
\]
But since we have $\dot{V}\leq H_n$, with equality only when $X=\bar{X}$ and $Z=\bar{Z}$, we find that $\dot{V}\leq0$, with equality attained only when, for each $Q\in\mathcal{D}(n,l)$, $l=1,\ldots,n$, we have
\[
1=\dfrac{Y_m}{\bar{Y}_m}=\dfrac{\bar{Y}_r}{Y_r},\quad (r,m,\cdot)\in \mathcal{E}(CQ).
\]
But since $CQ$ is a cycle, we have that
\[
Y_r=\bar{Y}_r,\quad r \in \mathcal{V}(CQ).
\]
Since $\mathcal{M}$ is irreducible, $\bar{A}\bar{B}$ is also irreducible by Proposition~\ref{prop:factors_irred}. Thus, any two vertices belong to some unicyclic subgraph, and hence equality holds only when
\[
Y=\bar{Y}.
\]
Therefore, $\dot{V}$ vanishes only at the EE, and the global asymptotic stability follows from LaSalle's invariance principle.
\end{proof}
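As a sanity check of the theorem (not part of the proof), one can integrate system~\eqref{eqn:sys_res} and watch trajectories settle at an interior state. A hedged forward-Euler sketch for $n=2$ groups, with illustrative parameters chosen so that $\mathcal{R}_0>1$:

```python
import numpy as np

# Illustrative parameters (n = 2 groups); these give R0 > 1.
mu_h = np.array([0.10, 0.20])
gamma_h = np.array([0.30, 0.25])
mu_v = np.array([0.50, 0.40])
A = np.array([[1.0, 0.4], [0.3, 1.2]])
B = np.array([[0.9, 0.2], [0.5, 1.1]])

P = np.diag(1 / (mu_h + gamma_h)) @ A @ np.diag(1 / mu_v) @ B
R0 = np.sqrt(max(abs(np.linalg.eigvals(P))))
assert R0 > 1

# Forward-Euler integration of the prevalence system.
X = np.array([0.9, 0.8]); Y = np.array([0.05, 0.10]); Z = np.array([0.02, 0.03])
dt = 0.02
for _ in range(200000):
    dX = mu_h * (1 - X) - X * (A @ Z)
    dY = X * (A @ Z) - (mu_h + gamma_h) * Y
    dZ = (1 - Z) * (B @ Y) - mu_v * Z
    X = X + dt * dX; Y = Y + dt * dY; Z = Z + dt * dZ

# The trajectory has settled at an interior (endemic) equilibrium.
assert np.max(np.abs(np.r_[dX, dY, dZ])) < 1e-8
assert np.all(X > 0) and np.all(Y > 0) and np.all(Z > 0)
```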
\section{Discussion}
\label{sec:concl}
We have considered a class of multi-group models for vector-borne diseases. This class is a natural extension of the classical Bailey-Dietz model and is a natural candidate for modeling the impact of fast urban movement on some vector-transmitted diseases, as for instance, in the case of dengue fever---cf. \cite{Adams,Cosner2009,Alvimetal2013}. The host-vector interaction along the network gives rise to what we call the host-vector contact network---denoted by $\Gamma(\mathcal{M})$---which has a number of distinguishing features from the networks that arise in directly transmitted diseases. The most striking one is, perhaps, that the irreducibility of the circulation topology is not sufficient to guarantee the irreducibility of the host-vector topology. In addition, we also characterize the irreducibility of $\Gamma(\mathcal{M})$ through the irreducibility of the product sub-networks. With this assumption we are able to provide a complete analysis of the dynamics, in the sense that this class of models possesses the so-called sharp $\mathcal{R}_0$ property, i.e., $\mathcal{R}_0$ is a threshold parameter with the disease-free equilibrium being both locally and globally asymptotically stable when $\mathcal{R}_0\leq 1$, and being unstable when $\mathcal{R}_0>1$. In addition, an interior equilibrium (the endemic equilibrium) that is biologically feasible, i.e., has positive coordinates, exists if and only if $\mathcal{R}_0>1$. Furthermore, when it exists, it is globally asymptotically stable.
From a mathematical point of view, these results extend previous results on directly transmitted diseases to the class considered here. The global stability of the disease-free equilibrium (which had been obtained by \cite{ding:etal:2012} for a special case, under more restrictive conditions) is a very natural extension of the argument presented in \cite{gls:2006}; see also \cite{Shuai:Driessche:2013} for a very general presentation of this argument. The existence, uniqueness and local stability of the endemic equilibrium show that the corresponding results of \cite{MR87c:92046} for sub-populations hold for this class of models. Finally, the global stability proof brings a new ingredient to the graph-theoretic framework introduced in \cite{gls:2006,gls:2008}: the identification of $\Gamma(\mathcal{M})$ with a multigraph---which we have termed a transitive contact multigraph---that is a $c$-edge colored multi-digraph containing all the information of the host-vector contact network encoded in a different way. The product of the host and vector networks can then be interpreted as a contact matrix for such a graph, which allows us to organize the calculation of the Lie derivative of the Lyapunov function within a graph-theoretical framework similar to that of \cite{gls:2006,gls:2008}.
The analysis presented here shows that, in spite of the complexity of the models in the considered class, the long-term global dynamics is very simple. This, however, does not imply that the transient dynamics of the model is simple, and further studies are necessary. As an example of this complexity, we refer to \cite{Alvimetal2013}, which provides examples of situations---included in the class analyzed here---that have a local group $\mathcal{R}_0$ less than unity, but a global $\mathcal{R}_0$ greater than unity---and hence are bound to evolve to an endemic state in the long term. While this duality of local versus global $\mathcal{R}_0$ has been observed in other contexts---see \cite{Mckenzie:etal:2012}---we believe that it should be further studied and understood in the realm of epidemic models.
1,108,101,564,948 | arxiv | \section{Introduction}
The mean-field game (MFG) framework \cite{LL061,LL062,LL07,HCM06,HCM07} models systems with a huge number of small identical rational players (agents) that play non-cooperative differential games. In this framework, a generic player aims at minimizing a cost functional that takes the distribution of the whole population as a parameter. Consequently, the problem is to find a Nash equilibrium where a generic player cannot unilaterally improve his position. For a detailed account on MFG systems we refer the reader to \cite{LCDF,CardaNotes,gll'11,befreyam'13,Gomes2014,GoBook,cardela'18,notebari}.
In this note, we introduce Fourier approximation techniques for first-order nonlocal MFG models. More precisely, we consider the system
\begin{equation}\label{eq:maingeneral}
\begin{cases}
-\partial_t u + H(x,\nabla u) = F[x,m], \\
\partial_t m - \mathrm{div}\left(m \nabla_pH(x,\nabla u)\right)=0,~(x,t) \in \mathbb{T}^d \times [0,1],\\
m(x,0)=M(x),~u(x,1)=U(x),~x\in \mathbb{T}^d.
\end{cases}
\end{equation}
Here, $u: \mathbb{T}^d \times [0,1]\to \mathbb{R}$ and $m:\mathbb{T}^d \times [0,1]\to \mathbb{R}_+$ are the unknown functions. Furthermore, $H\in C^2(\mathbb{T}^d \times \mathbb{R}^d)$ is a Hamiltonian, and $F:\mathbb{T}^d \times \mathcal{P}(\mathbb{T}^d) \to \mathbb{R}$ is a nonlocal coupling term between the Hamilton-Jacobi and Fokker-Planck equations. Above, $\mathbb{T}^d$ is the $d$-dimensional flat torus, and $\mathcal{P}(\mathbb{T}^d)$ is the space of Borel probability measures on $\mathbb{T}^d$. Next, $U\in C^2(\mathbb{T}^d)$ and $M\in L^\infty(\mathbb{T}^d)\cap \mathcal{P}(\mathbb{T}^d)$ (with a slight abuse of notation, we identify absolutely continuous measures with their densities) are terminal and initial conditions for $u$ and $m$, respectively.
In \eqref{eq:maingeneral}, $u$ represents the value function of a generic agent from a continuum population of players, whereas $m$ represents the density of this population. Each agent aims at solving the optimization problem
\begin{equation}\label{eq:generic_optim}
u(x,t)=\inf_{\gamma \in H^1([t,1]),\gamma(t)=x} \int_t^1 L(\gamma(s),\dot{\gamma}(s))+F(\gamma(s),m(\cdot,s))ds+U(\gamma(1)),
\end{equation}
where $L$ is the Legendre transform of $H$; that is,
\begin{equation*}
L(x,v)=\sup_p \left(-v\cdot p - H(x,p)\right),~(x,v)\in \mathbb{T}^d \times \mathbb{R}^d.
\end{equation*}
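With this sign convention, the quadratic Hamiltonian $H(x,p)=\tfrac12|p|^2$ gives $L(x,v)=\tfrac12|v|^2$, the supremum being attained at $p=-v$. A hedged one-dimensional sketch of the transform on a grid:

```python
import numpy as np

# Numerical Legendre transform L(v) = sup_p (-v*p - H(p)) in one
# dimension, for the quadratic Hamiltonian H(p) = p^2 / 2
# (x-dependence suppressed).
p = np.linspace(-10, 10, 20001)
H = 0.5 * p**2

def legendre(v):
    return np.max(-v * p - H)

# The transform of H(p) = p^2/2 is L(v) = v^2/2:
for v in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(legendre(v) - 0.5 * v**2) < 1e-4
```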
Hence, $U$ is a terminal cost function. Since a generic agent is small and her actions on the population distribution can be neglected, we assume that $m$ is fixed, but unknown, in \eqref{eq:generic_optim}. Consequently, $u$ must solve a Hamilton-Jacobi equation; that is, the first PDE in \eqref{eq:maingeneral} with terminal data $U$.
Furthermore, given $u$, optimal trajectories of agents are determined by
\begin{equation*}
\dot{\gamma}(s)=-\nabla_p H(\gamma(s),\nabla u(\gamma(s),s)).
\end{equation*}
Therefore, $m$, being the population density, must satisfy the Fokker-Planck equation; that is, the second PDE in \eqref{eq:maingeneral} with initial data $M$. Hence, $M$ is the population density at time $t=0$.
The existence, uniqueness and stability theories for \eqref{eq:maingeneral} are well understood \cite{LL07,CardaNotes,carda'13}. Here, we are specifically interested in approximation methods for the solutions of \eqref{eq:maingeneral} that can be useful for numerical solution and modeling purposes.
Currently, there are a number of efficient approximation methods for solutions of MFG systems. We refer to \cite{achdou'13,achdcamdolcetta'13,achporretta'16,achddolcetta'10} for finite-difference schemes, \cite{carsilva'14,carsilva'15} for semi-Lagrangian methods, \cite{bencar'15,andreev'17,bencarsan'17,bencarmarnen'18,silva'18} for convex optimization techniques, \cite{alfego'17,GJS2} for monotone flows, and \cite{osher'18b} for infinite-dimensional Hamilton-Jacobi equations. Although general, the aforementioned methods are particularly advantageous when $F$ in \eqref{eq:maingeneral} depends locally on $m$. The reason is that local $F$ yield analytic pointwise formulas for the infinite-dimensional operators involved in the algorithms. Instead, nonlocal $F$ do not yield such formulas. Additionally, fixed-grid methods suffer from dimensionality issues. Also, the number of inter-nodal couplings grows significantly for nonlocal $F$, which leads to an increased complexity of such schemes. Hence, we are interested in developing approximation methods that specifically suit nonlocal $F$ and are grid-free.
Our approach is based on a Fourier approximation of $F$ and is inspired by the methods in \cite{nurbe'18}. Here, we use the classical trigonometric polynomials as an approximation basis. Nevertheless, our method is flexible and allows more general bases. For instance, one may consider \eqref{eq:maingeneral} on different domains and boundary conditions and choose a basis accordingly.
Additionally, our approach yields a mesh-free numerical approximation of $u$ and $m$. More precisely, we directly recover the optimal trajectories of the agents rather than the values of $u$ and $m$ on a given mesh. In particular, our methods may blend well with recently developed ideas for fast and curse-of-dimensionality-resistant solution approaches for first-order Hamilton-Jacobi equations \cite{osher'17,osher'18,Yegorov2018}. Hence, our techniques may lead to numerical schemes for nonlocal MFG that are efficient in high dimensions.
To avoid technicalities, we consider a linear $F$. More precisely, we assume that
\begin{equation*}
F(x,m)=\int_{\mathbb{T}^d} K(x,y) m(y,t)dy,~x\in \mathbb{T}^d,~m\in \mathcal{P}(\mathbb{T}^d),
\end{equation*}
where $K\in C^2(\mathbb{T}^d \times \mathbb{T}^d)$. Thus, here we deal with the system
\begin{equation} \label{eq:main}
\begin{cases}
-\partial_t u + H(x,\nabla u) = \int_{\mathbb{T}^d} K(x,y) m(y,t) dy, \\
\partial_t m - \mathrm{div}(m \nabla_p H(x,\nabla u))=0,~(x,t) \in \mathbb{T}^d \times [0,1],\\
m(x,0)=M(x),~u(x,1)=U(x),~x\in \mathbb{T}^d.
\end{cases}
\end{equation}
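The only nonlocal operation in \eqref{eq:main} is the evaluation of the coupling term, which on a uniform grid of the torus reduces to a matrix-vector product with the discretized kernel. A hedged one-dimensional sketch (the smooth kernel below is illustrative; having nonnegative Fourier coefficients, it is also positive semi-definite):

```python
import numpy as np

# Quadrature of F[x, m] = \int_T K(x, y) m(y, t) dy on the 1-torus
# [0, 1), using the rectangle rule (spectrally accurate for smooth
# periodic data).
N = 256
x = np.arange(N) / N
h = 1.0 / N

K = 1.0 + np.cos(2 * np.pi * (x[:, None] - x[None, :]))  # illustrative kernel
m = np.exp(np.cos(2 * np.pi * x))
m /= m.sum() * h                          # normalize: integral of m is 1

F = h * K @ m                             # F[i] approximates F[x_i, m]

assert np.isclose(h * m.sum(), 1.0)       # m is a probability density
assert np.all(F > 0)                      # coupling stays positive here
assert np.allclose(F[1:], F[1:][::-1])    # F inherits the symmetry of m
```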
Our basic idea is to show that when $K$ is a generalized polynomial in a given basis then \eqref{eq:main} is equivalent to a fixed point problem, in a space of continuous curves, that has nice structural properties. In particular, when $K$ is symmetric and positive semi-definite, \eqref{eq:main} is equivalent to a convex optimization problem in the space of continuous curves.
Furthermore, we discuss how to construct generalized polynomial kernels that approximate a given $K$. Additionally, we observe that for translation invariant $K$ the approximating kernels have a particularly simple structure. Consequently, for such $K$ the aforementioned optimization problem is much simpler to solve.
The paper is organized as follows. In Section \ref{sec:prelim}, we present standing assumptions and some preliminary results. In Section \ref{sec:optim}, we prove the equivalence of \eqref{eq:main} to a fixed point problem over the space of continuous curves when $K$ is a generalized polynomial. Next, in Section \ref{sec:approx}, we discuss approximation methods for a general kernel. Furthermore, in Section \ref{sec:a_numerical_method}, we construct a discretization for the optimization problem from Section \ref{sec:optim} and devise a variant of a primal dual hybrid gradient algorithm for the discrete problem. Finally, in Section \ref{sec:numerical_examples}, we study several numerical examples.
\section{Assumptions and preliminary results}\label{sec:prelim}
We denote by $\mathbb{T}^d$ the $d$-dimensional flat torus. Furthermore, throughout the paper, we assume that $H \in C^2(\mathbb{T}^d \times \mathbb{R}^d)$, and
\begin{equation}\label{eq:H_hyp}
\begin{split}
\frac{1}{C}I_d \leq& \nabla^2_{pp} H(x,p)\leq C I_d,~\forall (x,p)\in \mathbb{T}^d \times \mathbb{R}^d,\\
-C(1+|p|^2) \leq& \nabla_x H(x,p) \cdot p,~\forall (x,p)\in \mathbb{T}^d \times \mathbb{R}^d,
\end{split}
\end{equation}
for some constant $C>0$. Next, we assume that $M\in L^\infty(\mathbb{T}^d)\cap \mathcal{P}(\mathbb{T}^d),~U\in C^2(\mathbb{T}^d),~K\in C^2(\mathbb{T}^d \times \mathbb{T}^d)$, and
\begin{equation}\label{eq:MUbounds}
\|M\|_{L^\infty(\mathbb{T}^d)},~\|U\|_{C^2(\mathbb{T}^d)},~\|K\|_{C^2(\mathbb{T}^d\times \mathbb{T}^d)} \leq C.
\end{equation}
Additionally, we suppose that $K$ is positive semi-definite; that is,
\begin{equation}\label{eq:K_mon}
\int_{\mathbb{T}^d\times \mathbb{T}^d} K(x,y)f(x)f(y)dxdy \geq 0,~\forall f\in L^{\infty}(\mathbb{T}^d).
\end{equation}
We call $K$ symmetric if
\begin{equation}\label{eq:K_sym}
K(x,y)=K(y,x),~\forall x,y \in \mathbb{T}^d.
\end{equation}
Next, we denote by $\mathcal{P}(\mathbb{T}^d)$ the space of Borel probability measures on $\mathbb{T}^d$. We equip $\mathcal{P}(\mathbb{T}^d)$ with the Monge-Kantorovich distance that is given by
\begin{equation}\label{eq:MK}
\|m_2-m_1\|_{MK} =\sup \left\{\int_{\mathbb{T}^d}\phi(x) (m_2(x)-m_1(x))dx~\mbox{s.t.}~\|\phi\|_{\mathrm{Lip}}\leq 1\right\}.
\end{equation}
In the rest of this section, we present some preliminary results and formulas. For the theory of optimal control and the related Hamilton-Jacobi equations, we refer to \cite{fleson'93,bardidolcetta'97}. We begin with the definition of a solution of \eqref{eq:main}.
\begin{definition}
A pair $(u,m)$ is a solution of \eqref{eq:main} if $u\in W^{1,\infty}(\mathbb{T}^d\times [0,1])$ is a viscosity solution of
\begin{equation}\label{eq:HJinMFG}
\begin{cases}
-\partial_t u+H(x,\nabla u)= \int_{\mathbb{T}^d} K(x,y) m(y,t) dy,~(x,t)\in \mathbb{T}^d\times[0,1],\\
u(x,1)=U(x),~x\in \mathbb{T}^d,
\end{cases}
\end{equation}
and $m\in L^\infty(\mathbb{T}^d\times [0,1])\cap C\left([0,1];\mathcal{P}(\mathbb{T}^d)\right)$ is a distributional solution of
\begin{equation}\label{eq:FPinMFG}
\begin{cases}
\partial_t m-\mathrm{div}(m \nabla_p H(x,\nabla u))= 0,~(x,t)\in \mathbb{T}^d\times[0,1],\\
m(x,0)=M(x),~x\in \mathbb{T}^d.
\end{cases}
\end{equation}
\end{definition}
The following theorem \cite{LL07,CardaNotes,carda'13} asserts that \eqref{eq:main} is well-posed.
\begin{theorem}\label{thm:MFGwellposed}
\begin{itemize}
\item[i.] Under assumptions \eqref{eq:H_hyp} and \eqref{eq:MUbounds}, system \eqref{eq:main} admits a solution $(u,m)$. Moreover, there exists a constant $C_1(C)>0$ such that
\begin{equation}\label{eq:um_bounds}
\nabla^2_{xx} u \leq C_1 I_d,~\|u\|_{W^{1,\infty}},~\|m\|_{L^\infty} \leq C_1,
\end{equation}
for any solution $(u,m)$. Additionally, if \eqref{eq:K_mon} holds then $(u,m)$ is unique.
\item[ii.] Solutions of \eqref{eq:main} are stable with respect to variations of $U,M$ and $K$ in respective norms. Particularly, suppose that $\{K_r\}_{r=1}^\infty \subset C^2(\mathbb{T}^d\times \mathbb{T}^d)$ is such that
\begin{equation}\label{eq:K-K_rC2}
\lim\limits_{r\to \infty}\|K-K_r\|_{C^2(\mathbb{T}^d\times \mathbb{T}^d)}=0,
\end{equation}
and $\{(u_r,m_r)\}_{r=1}^\infty$ are solutions of \eqref{eq:main} corresponding to kernel $K_r$. Then, the sequence $\{(u_r,m_r)\}_{r=1}^\infty$ is precompact in $C(\mathbb{T}^d\times [0,1])\times C\left([0,1];\mathcal{P}(\mathbb{T}^d)\right)$ with all accumulation points being solutions of \eqref{eq:main}. Consequently, if \eqref{eq:K_mon} holds then
\begin{equation}
\begin{split}
\lim\limits_{r\to \infty}u_r(x,t)=u(x,t),~\mbox{uniformly in}~(x,t)\in \mathbb{T}^d \times [0,1],\\
\lim\limits_{r\to \infty} \|m_{r}(\cdot,t)-m(\cdot,t)\|_{MK}=0,~\mbox{uniformly in}~t\in [0,1],
\end{split}
\end{equation}
where $(u,m)$ is the unique solution of \eqref{eq:main}.
\end{itemize}
\end{theorem}
Next, consider an arbitrary basis of smooth functions
\begin{equation}\label{eq:basis}
\Phi=\{\phi_1,\phi_2,\cdots,\phi_r\} \subset C^2(\mathbb{T}^d).
\end{equation}
For $a=(a_1,a_2,\cdots,a_r)\in C\left([0,1];\mathbb{R}^r\right)$ we denote by $u_a$ the viscosity solution of
\begin{equation}\label{eq:u_a}
\begin{cases}
-\partial_t u(x,t)+H(x,\nabla u(x,t))=\sum\limits_{i=1}^r a_i(t)\phi_i(x),~(x,t)\in \mathbb{T}^d\times[0,1]\\
u(x,1)=U(x),~x\in \mathbb{T}^d.
\end{cases}
\end{equation}
From optimal control theory, we have that
\begin{equation}\label{eq:ua_rep}
u_a(x,t)=\inf_{\gamma \in H^1([t,1]),\gamma(t)=x} \int_t^1 \left(L\left(\gamma(s),\dot{\gamma}(s) \right)+\sum_{i=1}^r a_i(s) \phi_i(\gamma(s))\right) ds+U(\gamma(1)),
\end{equation}
for all $(x,t)\in \mathbb{T}^d\times [0,1]$, where
\begin{equation}\label{eq:L}
L(x,v)=\sup_{p\in \mathbb{R}^d} \left(-v \cdot p -H(x,p)\right).
\end{equation}
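As a quick numerical sanity check of \eqref{eq:L} (a sketch with illustrative data, not part of the analysis), one can approximate the supremum over a grid of momenta; for the $x$-independent quadratic Hamiltonian $H(p)=|p|^2/2$, the closed form is $L(v)=|v|^2/2$.

```python
import numpy as np

def legendre(H, v, p_grid):
    """Approximate L(v) = sup_p ( -v*p - H(p) ) by a maximum over a grid of p."""
    return np.max(-v * p_grid - H(p_grid))

H = lambda p: 0.5 * p**2                # illustrative x-independent Hamiltonian
p_grid = np.linspace(-10.0, 10.0, 200001)

for v in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    # the sup is attained at p = -v, giving L(v) = v^2/2
    assert abs(legendre(H, v, p_grid) - 0.5 * v**2) < 1e-6
```

The same grid-based check applies to any Hamiltonian satisfying \eqref{eq:H_hyp}, provided the grid covers the maximizer.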
Moreover, for all $(x,t)\in \mathbb{T}^d\times [0,1]$ there exists $\gamma_{x,t,a}\in C^2([t,1];\mathbb{T}^d)$ such that
\begin{equation}\label{eq:ua_rep_min}
u_{a}(x,t)=\int_t^1 \left(L\left(\gamma_{x,t,a}(s),\dot{\gamma}_{x,t,a}(s) \right)+\sum_{i=1}^r a_i(s) \phi_i(\gamma_{x,t,a}(s))\right) ds+U(\gamma_{x,t,a}(1)),
\end{equation}
and
\begin{equation}\label{eq:EL}
\begin{split}
&\frac{d}{ds}\nabla_v L\left(\gamma_{x,t,a}(s),\dot{\gamma}_{x,t,a}(s) \right) \\
=& \nabla_x L\left(\gamma_{x,t,a}(s),\dot{\gamma}_{x,t,a}(s) \right)+\sum_{i=1}^r a_i(s)\nabla \phi_i(\gamma_{x,t,a}(s)),~s\in [t,1].
\end{split}
\end{equation}
Additionally,
\begin{equation}\label{eq:u_agrad}
\begin{split}
-\nabla_v L\left(x,\dot{\gamma}_{x,t,a}(t) \right) \in& \nabla^{+}_x u_a(x,t),\\
-\nabla_v L\left(\gamma_{x,t,a}(s),\dot{\gamma}_{x,t,a}(s) \right) = &\nabla_x u_a(\gamma_{x,t,a}(s),s),~s\in (t,1],\\
-\dot{\gamma}_{x,t,a}(s) = &\nabla_p H(\gamma_{x,t,a}(s),\nabla_x u_a(\gamma_{x,t,a}(s),s)),~s\in (t,1].
\end{split}
\end{equation}
In fact, \eqref{eq:u_agrad} is also sufficient for \eqref{eq:ua_rep_min} to hold. For lighter notation, we denote $\gamma_{x,0,a}$ by $\gamma_{x,a}$.
In general, $u_a$ is not everywhere differentiable. Nevertheless, $u_a$ is semiconcave, and hence $\nabla^+ u_a(x,t) \neq \emptyset$ for all $(x,t)$, and $\nabla^+ u_a(x,t)=\{\nabla u_a(x,t)\}$ for a.e. $(x,t)$. In fact, the points $(x,t)$ where $u_a$ is not differentiable are precisely those for which \eqref{eq:ua_rep} admits multiple minimizers. Thus, at points $x\in \mathbb{T}^d$ where $u_a(\cdot,0)$ is not differentiable we choose $\gamma_{x,a}$ in such a way that the map $(x,t)\mapsto \gamma_{x,a}(t)$ is Borel measurable.
Furthermore, we denote by $m_a$ the distributional solution of
\begin{equation}\label{eq:m_a}
\begin{cases}
\partial_t m- \mathrm{div}\left(m \nabla_p H(x,\nabla u_a)\right)=0,~(x,t)\in \mathbb{T}^d\times[0,1],\\
m(x,0)=M(x),~x\in \mathbb{T}^d.
\end{cases}
\end{equation}
One can show that $m_a$ is given by the push-forward of the measure $M$ by the map $\gamma_{\cdot,a}(t)$; that is,
\begin{equation}\label{eq:m_a_rep}
m_a(\cdot,t)=\gamma_{\cdot,a}(t)\sharp M.
\end{equation}
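The representation \eqref{eq:m_a_rep} is what makes moments of $m_a$ computable from trajectories alone. The sketch below illustrates this with a hypothetical transport map standing in for $\gamma_{\cdot,a}(t)$ (the true characteristics would require solving \eqref{eq:EL}): a moment of the push-forward measure computed with a dense quadrature agrees with the one computed from a coarse atomic discretization of $M$.

```python
import numpy as np

# hypothetical transport map x -> gamma_{x,a}(t) on the 1-torus
T = lambda x: (x + 0.25 * np.sin(2 * np.pi * x)**2) % 1.0
phi = lambda x: np.cos(2 * np.pi * x)       # a test function

# M = uniform density on T^1; by (eq:m_a_rep), int phi d(T#M) = int phi(T(x)) dM(x)
xs = (np.arange(200000) + 0.5) / 200000
moment_dense = phi(T(xs)).mean()

# the same moment from an atomic discretization of M with 500 equal weights
y = (np.arange(500) + 0.5) / 500
c = np.full(500, 1.0 / 500)
moment_atomic = np.sum(c * phi(T(y)))

assert abs(moment_dense - moment_atomic) < 1e-3
```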
We equip $C([0,1];\mathbb{R}^r)$ with the $L^\infty$ norm
\begin{equation*}
\|a\|_{\infty}=\max_{i}\sup_{t\in [0,1]} |a_i(t)|.
\end{equation*}
Then, one has that
\begin{equation}\label{eq:m_a_stability}
\lim\limits_{n\to \infty} \|m_{a_n}(\cdot,t)-m_a(\cdot,t)\|_{MK}=0,~\mbox{uniformly in}~t\in [0,1],
\end{equation}
if $\lim\limits_{n\to \infty}\|a_n-a\|_{\infty}=0$. For a detailed discussion on $m_a$ see Chapter 4 in \cite{CardaNotes}.
Finally, we denote by
\begin{equation}\label{eq:G}
G(a)=\int_{\mathbb{T}^d} u_a(x,0)M(x)dx,~a\in C\left([0,1];\mathbb{R}^r\right).
\end{equation}
Our first theorem addresses the properties of $G$.
\begin{theorem}\label{thm:G}
The functional $a\mapsto G(a)$ is concave and everywhere Fr\'{e}chet differentiable. Moreover,
\begin{equation}\label{eq:dGa}
\partial_{a_i} G=\int_{\mathbb{T}^d} \phi_i(x) m_a(x,\cdot) dx,~1\leq i \leq r.
\end{equation}
\end{theorem}
\begin{proof}
We denote by
\begin{equation*}
p(a)=\left(\int_{\mathbb{T}^d} \phi_i(x) m_a(x,\cdot) dx\right)_{i=1}^r,~a\in C([0,1];\mathbb{R}^r).
\end{equation*}
We prove that for every $a,b\in C([0,1];\mathbb{R}^r)$
\begin{equation*}
0\geq G(b)-G(a)-(b-a)\cdot p(a) \geq -o\left(\|b-a\|_\infty\right),
\end{equation*}
where $(b-a)\cdot p(a)=\int_0^1 \left(b(t)-a(t)\right)\cdot p(a)(t)dt$.
We have that
\begin{equation*}
\begin{split}
&G(b)-G(a)-(b-a)\cdot p(a)\\
=&\int_{\mathbb{T}^d}M(x)\left[\int_0^1 \left(L\left(\gamma_{x,b}(t),\dot{\gamma}_{x,b}(t) \right)+\sum_{i=1}^r b_i(t) \phi_i(\gamma_{x,b}(t))\right) dt+U(\gamma_{x,b}(1))\right]dx\\
&-\int_{\mathbb{T}^d}M(x)\left[\int_0^1 \left(L\left(\gamma_{x,a}(t),\dot{\gamma}_{x,a}(t) \right)+\sum_{i=1}^r a_i(t) \phi_i(\gamma_{x,a}(t))\right) dt+U(\gamma_{x,a}(1))\right]dx\\
&-\sum_{i=1}^r \int_0^1 (b_i(t)-a_i(t))dt\int_{\mathbb{T}^d} \phi_i(x)m_a(x,t)dx.
\end{split}
\end{equation*}
From \eqref{eq:m_a_rep} we have that
\begin{equation*}
\int_{\mathbb{T}^d} \phi_i(x)m_a(x,t)dx=\int_{\mathbb{T}^d} \phi_i(\gamma_{x,a}(t)) M(x)dx,~t\in [0,1],~1\leq i \leq r.
\end{equation*}
Hence,
\begin{equation*}
\begin{split}
&G(b)-G(a)-(b-a)\cdot p(a)\\
=&\int_{\mathbb{T}^d}M(x)dx\int_0^1 L\left(\gamma_{x,b}(t),\dot{\gamma}_{x,b}(t) \right)-L\left(\gamma_{x,a}(t),\dot{\gamma}_{x,a}(t) \right)dt\\
&+\int_{\mathbb{T}^d}M(x)dx\int_0^1\sum_{i=1}^r b_i(t)(\phi_i(\gamma_{x,b}(t))-\phi_i(\gamma_{x,a}(t))) dt \\
&+\int_{\mathbb{T}^d}M(x)\left(U(\gamma_{x,b}(1))-U(\gamma_{x,a}(1))\right)dx.
\end{split}
\end{equation*}
By definition, we have that
\begin{equation*}
\begin{split}
& \int_0^1 \left(L\left(\gamma_{x,b}(t),\dot{\gamma}_{x,b}(t) \right)+\sum_{i=1}^r b_i(t)\phi_i(\gamma_{x,b}(t))\right) dt+U(\gamma_{x,b}(1))\\
\leq & \int_0^1 \left(L\left(\gamma_{x,a}(t),\dot{\gamma}_{x,a}(t) \right)+\sum_{i=1}^r b_i(t)\phi_i(\gamma_{x,a}(t))\right) dt+U(\gamma_{x,a}(1)),~\forall x\in \mathbb{T}^d.
\end{split}
\end{equation*}
Hence,
\begin{equation*}
G(b)-G(a)-(b-a)\cdot p(a) \leq 0,~\forall a,b \in C([0,1];\mathbb{R}^r).
\end{equation*}
This previous inequality yields the concavity of $G$. On the other hand, we have that
\begin{equation*}
\begin{split}
&G(b)-G(a)-(b-a)\cdot p(a)\\
=&\int_{\mathbb{T}^d}M(x)dx\int_0^1 L\left(\gamma_{x,b}(t),\dot{\gamma}_{x,b}(t) \right)-L\left(\gamma_{x,a}(t),\dot{\gamma}_{x,a}(t) \right)dt\\
&+\int_{\mathbb{T}^d}M(x)dx\int_0^1\sum_{i=1}^r a_i(t)(\phi_i(\gamma_{x,b}(t))-\phi_i(\gamma_{x,a}(t))) dt \\
&+\int_{\mathbb{T}^d}M(x)\left(U(\gamma_{x,b}(1))-U(\gamma_{x,a}(1))\right)dx\\
&+\int_{\mathbb{T}^d}M(x)dx\int_0^1\sum_{i=1}^r (b_i(t)-a_i(t))(\phi_i(\gamma_{x,b}(t))-\phi_i(\gamma_{x,a}(t))) dt.
\end{split}
\end{equation*}
Therefore, again by the definition of $\gamma_{x,a}$ and $\gamma_{x,b}$, we have that
\begin{equation*}
\begin{split}
&G(b)-G(a)-(b-a)\cdot p(a)\\
\geq&\int_{\mathbb{T}^d}M(x)dx\int_0^1\sum_{i=1}^r (b_i(t)-a_i(t))(\phi_i(\gamma_{x,b}(t))-\phi_i(\gamma_{x,a}(t))) dt\\
\geq& -\|b-a\|_{\infty}\sum_{i=1}^r \int_0^1 \left|\int_{\mathbb{T}^d}\phi_i(\gamma_{x,b}(t))M(x)dx-\int_{\mathbb{T}^d}\phi_i(\gamma_{x,a}(t))M(x)dx\right|dt\\
=& -\|b-a\|_{\infty}\sum_{i=1}^r \int_0^1 \left|\int_{\mathbb{T}^d}\phi_i(x)m_b(x,t)dx-\int_{\mathbb{T}^d}\phi_i(x)m_a(x,t)dx\right| dt\\
\geq & -\|b-a\|_{\infty}\sum_{i=1}^r \mathrm{Lip}(\phi_i)\int_0^1 \|m_b(\cdot,t)-m_a(\cdot,t)\|_{MK}dt.
\end{split}
\end{equation*}
Hence, by \eqref{eq:m_a_stability} the proof is complete.
\end{proof}
\section{The optimization problem}\label{sec:optim}
In this section, we assume that $K$ is a generalized polynomial in the basis $\Phi$; that is,
\begin{equation}\label{eq:K_poly}
K(x,y)=\sum_{i,j=1}^{r} k_{ij} \phi_i(x)\phi_j(y),~x,y \in \mathbb{T}^d,
\end{equation}
where $\mathbf{K}=(k_{ij})_{i,j=1}^r\in M_{r,r}(\mathbb{R})$ is a matrix of coefficients. For such $K$, \eqref{eq:main} takes form
\begin{equation} \label{eq:main_r}
\begin{cases}
-\partial_t u + H(x,\nabla u) = \sum\limits_{i=1}^r \phi_i(x) \sum\limits_{j=1}^r k_{ij} \int_{\mathbb{T}^d} \phi_j(y) m(y,t) dy, \\
\partial_t m - \mathrm{div}(m \nabla_p H(x,\nabla u))=0,~(x,t) \in \mathbb{T}^d \times [0,1],\\
m(x,0)=M(x),~u(x,1)=U(x),~x\in \mathbb{T}^d.
\end{cases}
\end{equation}
Our main observation is the following theorem.
\begin{theorem}\label{thm:equivalence}
\begin{itemize}
\item[i.] A pair $(u,m)$ is a solution of \eqref{eq:main_r} if and only if $(u,m)=(u_{a^*},m_{a^*})$ for some $a^*\in C\left([0,1];\mathbb{R}^r\right)$ such that
\begin{equation}\label{eq:a_fixedpoint}
a^*=\mathbf{K} \partial_a G(a^*).
\end{equation}
\item[ii.] If $\mathbf{K}$ is positive definite then \eqref{eq:a_fixedpoint} is equivalent to finding a zero of the monotone operator $a\mapsto \mathbf{K}^{-1} a - \partial_a G(a),~a\in C\left([0,1];\mathbb{R}^r\right)$.
\item[iii.] Additionally, if $\mathbf{K}$ is symmetric, \eqref{eq:a_fixedpoint} is equivalent to the convex optimization problem
\begin{equation}\label{eq:a_equation_optimization}
\begin{split}
&\inf_{a\in C\left([0,1];\mathbb{R}^r\right)} \frac{1}{2}\langle \mathbf{K}^{-1} a, a \rangle - G(a)\\
=&\inf_{a\in C\left([0,1];\mathbb{R}^r\right)} \frac{1}{2}\langle \mathbf{K}^{-1} a, a \rangle - \int_{\mathbb{T}^d}u_a(x,0)M(x)dx.
\end{split}
\end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
Items \textit{ii} and \textit{iii} follow immediately from \textit{i} by the concavity of $G$. Thus, we only prove \textit{i}.
By Theorem \ref{thm:MFGwellposed}, system \eqref{eq:main_r} admits a solution $(u,m)$. Furthermore, define $a^*$ as
\begin{equation}\label{eq:a*}
a_i^*(t)= \sum_{j=1}^r k_{ij} \int_{\mathbb{T}^d} \phi_j(y) m(y,t) dy,~t \in [0,1].
\end{equation}
Then $a^*\in C\left([0,1];\mathbb{R}^r\right)$, and by the definition of $u_a$ and $m_a$ we have that $(u,m)=(u_{a^*},m_{a^*})$. Hence, by Theorem \ref{thm:G}, we have that
\begin{equation*}
\partial_{a_i} G(a^*)=\int_{\mathbb{T}^d} \phi_i(x) m(x,\cdot) dx,~1\leq i \leq r.
\end{equation*}
Consequently, from \eqref{eq:a*} we obtain
\begin{equation*}
a_i^*= \sum_{j=1}^r k_{ij} \partial_{a_j} G(a^*),~1\leq i \leq r,
\end{equation*}
that is, \eqref{eq:a_fixedpoint} holds. Conversely, if $a^*\in C\left([0,1];\mathbb{R}^r\right)$ satisfies \eqref{eq:a_fixedpoint} then, by \eqref{eq:dGa}, the pair $(u_{a^*},m_{a^*})$ satisfies \eqref{eq:main_r}.
\end{proof}
\begin{remark}
The optimization problem \eqref{eq:a_equation_optimization} is equivalent to the optimal control of Hamilton-Jacobi PDE pointed out in \cite{LL07} (equations (58)-(59) in Section 2.6). One can think of \eqref{eq:a_equation_optimization} as (58)-(59) of \cite{LL07} written in Fourier coordinates.
\end{remark}
\section{Approximating the kernel}\label{sec:approx}
In this section, we show that one can construct suitable approximations for an arbitrary $K$. We begin with a simple lemma.
\begin{lemma}\label{lma:mon_posdef_equivalence}
Suppose that $K$ is given by \eqref{eq:K_poly}. Then $K$ is positive semi-definite if and only if $\mathbf{K}=(k_{ij})_{i,j=1}^r$ is positive semi-definite.
\end{lemma}
\begin{proof}
Fix an arbitrary $(\xi_i)_{i=1}^r \in \mathbb{R}^r$. Then there exists a unique $(\lambda_i)_{i=1}^r \in \mathbb{R}^r$ such that
\begin{equation*}
\xi_i=\sum_{j=1}^r \lambda_j \int_{\mathbb{T}^d} \phi_i(x)\phi_j(x)dx,~1\leq i \leq r,
\end{equation*}
because $\{\phi_i\}$ are linearly independent. Therefore, for
\begin{equation*}
f=\sum_{j=1}^r \lambda_j \phi_j
\end{equation*}
we have that
\begin{equation*}
\xi_i = \int_{\mathbb{T}^d} f(x)\phi_i(x)dx,~1\leq i \leq r.
\end{equation*}
Hence,
\begin{equation*}
\int_{\mathbb{T}^d\times \mathbb{T}^d} K(x,y) f(x) f(y)dxdy=\sum_{i,j=1}^r k_{ij} \xi_i \xi_j,
\end{equation*}
which yields the claim.
\end{proof}
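Lemma \ref{lma:mon_posdef_equivalence} is easy to verify numerically. The sketch below (with an illustrative cosine basis in dimension one and a random positive semi-definite coefficient matrix) checks that the quadratic form of the kernel \eqref{eq:K_poly} is nonnegative up to round-off.

```python
import numpy as np

rng = np.random.default_rng(1)
r, n = 4, 400

# random symmetric positive semi-definite coefficient matrix K = A^T A
A = rng.standard_normal((r, r))
Kmat = A.T @ A

# an illustrative basis evaluated on a grid of the 1-torus
x = np.arange(n) / n
Phi = np.stack([np.cos(2 * np.pi * k * x) for k in range(1, r + 1)])   # (r, n)

# kernel K(x, y) = sum_{ij} k_ij phi_i(x) phi_j(y) evaluated on the grid
Kxy = Phi.T @ Kmat @ Phi                                               # (n, n)

# quadrature approximation of int K(x,y) f(x) f(y) dx dy for random f
for _ in range(20):
    f = rng.standard_normal(n)
    q = (f @ Kxy @ f) / n**2
    assert q >= -1e-9          # nonnegative up to floating-point error
```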
Now, we fix our basis to be the trigonometric one:
\begin{equation}\label{eq:phi_k_trig}
\phi_{\alpha}(x)=e^{2i \pi \alpha \cdot x},~x\in \mathbb{T}^d,~\alpha\in \mathbb{Z}^d.
\end{equation}
\begin{remark}
Unlike in \eqref{eq:basis}, here it is more practical to use multi-dimensional indices to enumerate the trigonometric functions in higher dimensions. Additionally, it is more economical in terms of notation to use the complex-valued trigonometric functions. Nevertheless, our discussion is always about real-valued $K$, and the reader can think of the end results as expansions in terms of $\{\cos(2\pi \alpha \cdot x),\sin(2\pi \alpha\cdot x)\}_{\alpha\in \mathbb{Z}^d}$.
\end{remark}
For $\alpha=(\alpha_1,\alpha_2,\cdots,\alpha_d)\in \mathbb{Z}^d$, we denote by
\begin{equation*}
|\alpha|=(|\alpha_1|,|\alpha_2|,\cdots,|\alpha_d|),
\end{equation*}
and for $\alpha,r\in \mathbb{Z}^d$
\begin{equation*}
\alpha \leq r \iff \alpha_j \leq r_j,~1\leq j \leq d.
\end{equation*}
For $r_1,r_2 \in \mathbb{N}_0^d$ we denote by
\begin{equation*}
K_{r_1 r_2}(x,y)=\sum_{|\alpha|\leq r_1,|\beta| \leq r_2} \hat{K}_{\alpha \beta} e^{2i \pi (\alpha \cdot x+\beta \cdot y)},~x,y\in \mathbb{T}^d,
\end{equation*}
where
\begin{equation*}
\hat{K}_{\alpha \beta}=\int_{\mathbb{T}^d\times \mathbb{T}^d} K(x,y) e^{-2i \pi (\alpha \cdot x+\beta \cdot y)}dxdy,~\alpha,\beta \in \mathbb{Z}^d.
\end{equation*}
Furthermore, for $r_1,r_2\in \mathbb{N}_0^d$ we denote by
\begin{equation*}
\Sigma_{r_1 r_2}(x,y)=\frac{1}{\prod_{j=1}^d(1+r_{1j})(1+r_{2j})}\sum_{0\leq \rho_1\leq r_1,\,0\leq \rho_2 \leq r_2} K_{\rho_1 \rho_2}(x,y),~x,y\in \mathbb{T}^d.
\end{equation*}
\begin{remark}
The function $K_{r_1 r_2}$ is the rectangular partial Fourier sum of $K$. Correspondingly, $\Sigma_{r_1 r_2}$ is the rectangular Fej\'{e}r average of $K$. Additionally, if $K$ is real valued then $K_{r_1 r_2}$ and $\Sigma_{r_1 r_2}$ are real valued for any $r_1,r_2 \in \mathbb{N}_0^d$.
\end{remark}
\begin{proposition}\label{prp:K_approx}
If $K$ is positive semi-definite (symmetric) then $K_{r r}$ and $\Sigma_{r r}$ are also positive semi-definite (symmetric) for all $r \in \mathbb{N}_0^d$. Moreover,
\begin{equation}\label{eq:Fejerconverges}
\lim\limits_{\min_j r_j \to \infty} \|\Sigma_{r r}-K\|_{C^2(\mathbb{T}^d \times \mathbb{T}^d)}=0.
\end{equation}
Additionally, if $K\in C^3(\mathbb{T}^d \times \mathbb{T}^d)$ then
\begin{equation}\label{eq:partialconverges}
\lim\limits_{\min_j r_j \to \infty} \|K_{r r}-K\|_{C^2(\mathbb{T}^d \times \mathbb{T}^d)}=0.
\end{equation}
\end{proposition}
\begin{proof}
The convergence properties \eqref{eq:Fejerconverges}, \eqref{eq:partialconverges} are classical results in Fourier analysis. Thus, we will just prove that $K_{r r}$ and $\Sigma_{r r}$ are positive semi-definite (symmetric). For that, we use the representation formulas
\begin{equation*}
\begin{split}
K_{rr}(x,y)=&\int_{\mathbb{T}^d\times \mathbb{T}^d} K(z,w) D_{r r}(x-z,y-w)dzdw,\\
\Sigma_{rr}(x,y)=&\int_{\mathbb{T}^d\times \mathbb{T}^d} K(z,w) F_{r r}(x-z,y-w)dzdw,~x,y\in \mathbb{T}^d,\\
\end{split}
\end{equation*}
where $D_{rr}$ and $F_{rr}$ are, respectively, the $2d$-dimensional rectangular Dirichlet and Fej\'{e}r kernels. A crucial feature of $D_{rr}$ and $F_{rr}$ is that they are symmetric and decompose into lower dimensional kernels:
\begin{equation*}
D_{rr}(z,w)=D_{r}(z)D_{r}(w),~ F_{rr}(z,w)=F_{r}(z)F_{r}(w),~z,w\in \mathbb{T}^d,
\end{equation*}
where $D_r$ and $F_r$ are the corresponding $d$-dimensional kernels. In particular, $K_{rr},\Sigma_{rr}$ are symmetric if $K$ is such. Furthermore, for an arbitrary $f \in L^{\infty}(\mathbb{T}^d)$ we have that
\begin{equation*}
\begin{split}
&\int_{\mathbb{T}^d\times \mathbb{T}^d} K_{rr}(x,y)f(x)f(y)dxdy\\
=&\int_{\mathbb{T}^d\times \mathbb{T}^d} K(z,w)dzdw \int_{\mathbb{T}^d\times \mathbb{T}^d} f(x)f(y)D_{r r}(x-z,y-w)dxdy\\
=&\int_{\mathbb{T}^d\times \mathbb{T}^d} K(z,w)dzdw \int_{\mathbb{T}^d\times \mathbb{T}^d} f(x)f(y)D_r(x-z) D_r(y-w)dxdy\\
=&\int_{\mathbb{T}^d\times \mathbb{T}^d} K(z,w) f_r(z)f_r(w)dzdw \geq 0.
\end{split}
\end{equation*}
where $f_r(z)=\int_{\mathbb{T}^d} f(x)D_r(x-z)dx$ is the partial Fourier sum of $f$. Thus, $K_{rr}$ is positive semi-definite if $K$ is such. The proof for $\Sigma_{r r}$ is identical.
\end{proof}
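In dimension $d=1$, Proposition \ref{prp:K_approx} can be illustrated with the FFT. The sketch below (with an illustrative rank-two kernel $K(x,y)=g(x)g(y)+h(x)h(y)$) computes Fej\'{e}r averages by applying triangular weights to the Fourier coefficients and checks, on a grid, that symmetry and positive semi-definiteness are preserved and that the approximation error decreases in $r$.

```python
import numpy as np

n = 64
x = np.arange(n) / n

# a smooth, symmetric, positive semi-definite kernel on the 1-torus
g, h = np.exp(np.cos(2 * np.pi * x)), np.sin(2 * np.pi * x)
K = np.outer(g, g) + np.outer(h, h)

Khat = np.fft.fft2(K) / n**2                      # Fourier coefficients of K
freqs = np.fft.fftfreq(n, d=1.0 / n).astype(int)  # integer frequencies

def fejer(r):
    """Fejer average Sigma_rr: triangular weights 1 - |alpha|/(r+1) per variable."""
    w = np.clip(1.0 - np.abs(freqs) / (r + 1), 0.0, None)
    return np.real(np.fft.ifft2(Khat * np.outer(w, w))) * n**2

S = fejer(10)
assert np.allclose(S, S.T, atol=1e-10)            # symmetry is preserved
eigs = np.linalg.eigvalsh((S + S.T) / 2)
assert eigs.min() >= -1e-8                        # positive semi-definiteness too
# and the Fejer averages converge to K as r grows
assert np.abs(fejer(30) - K).max() < np.abs(fejer(5) - K).max()
```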
\begin{remark}
By Proposition \ref{prp:K_approx}, the kernels $K_{rr},\Sigma_{rr}$ are positive semi-definite. Therefore, their coefficient matrices with respect to the basis $\{\cos(2\pi \alpha\cdot x),\sin(2\pi \alpha\cdot x)\}$ are also positive semi-definite by Lemma \ref{lma:mon_posdef_equivalence}. Nevertheless, to take full advantage of Theorem \ref{thm:equivalence} one would need these matrices to be positive definite (invertible). To solve this problem one can add an $\eps I$ regularization term, where $I$ is the identity matrix of suitable dimension and $\eps>0$ is a small constant. However, as discussed below, this regularization is not necessary for translation invariant kernels.
\end{remark}
Suppose that
\begin{equation*}
K(x,y)=\eta(x-y),~x,y \in \mathbb{T}^d,
\end{equation*}
where $\eta$ is a periodic function. Then, we have
\begin{equation}\label{eq:K_cos}
\begin{split}
&\int_{\mathbb{T}^d} K(x,y) \cos (2\pi \alpha \cdot y) dy\\
=& \int_{\mathbb{T}^d} \eta(x-y) \cos (2\pi \alpha\cdot y) dy= \int_{\mathbb{T}^d} \eta(y) \cos (2\pi \alpha\cdot (x-y)) dy\\
=&\cos(2\pi \alpha\cdot x) \int_{\mathbb{T}^d} \eta(y) \cos (2\pi \alpha\cdot y) dy+ \sin(2\pi \alpha\cdot x) \int_{\mathbb{T}^d} \eta(y) \sin (2\pi \alpha\cdot y) dy.
\end{split}
\end{equation}
Similarly, we obtain that
\begin{equation}\label{eq:K_sin}
\begin{split}
&\int_{\mathbb{T}^d} K(x,y) \sin (2\pi \alpha\cdot y) dy\\
=& \int_{\mathbb{T}^d} \eta(x-y) \sin (2\pi \alpha\cdot y) dy= \int_{\mathbb{T}^d} \eta(y) \sin (2\pi \alpha\cdot (x-y)) dy\\
=&\sin(2\pi \alpha\cdot x) \int_{\mathbb{T}^d} \eta(y) \cos (2\pi \alpha\cdot y) dy- \cos(2\pi \alpha\cdot x) \int_{\mathbb{T}^d} \eta(y) \sin (2\pi \alpha\cdot y) dy.
\end{split}
\end{equation}
Therefore, for $\alpha \neq 0$, we have that
\begin{equation*}
\begin{split}
\int_{\mathbb{T}^d\times \mathbb{T}^d}K(x,y) \cos(2\pi \alpha \cdot x) \cos(2\pi \alpha \cdot y) dx dy=& \frac{1}{2}\int_{\mathbb{T}^d} \eta(y) \cos (2\pi \alpha\cdot y) dy,\\
\int_{\mathbb{T}^d\times \mathbb{T}^d}K(x,y) \sin(2\pi \alpha \cdot x) \cos(2\pi \alpha \cdot y) dx dy=& \frac{1}{2}\int_{\mathbb{T}^d} \eta(y) \sin (2\pi \alpha\cdot y) dy,\\
\int_{\mathbb{T}^d\times \mathbb{T}^d}K(x,y) \cos(2\pi \alpha \cdot x) \sin(2\pi \alpha \cdot y) dx dy=& -\frac{1}{2}\int_{\mathbb{T}^d} \eta(y) \sin (2\pi \alpha\cdot y) dy,\\
\int_{\mathbb{T}^d\times \mathbb{T}^d}K(x,y) \sin(2\pi \alpha \cdot x) \sin(2\pi \alpha \cdot y) dx dy=& \frac{1}{2}\int_{\mathbb{T}^d} \eta(y) \cos (2\pi \alpha\cdot y) dy.
\end{split}
\end{equation*}
Hence, the coefficient matrices of the partial Fourier sums of $K$ (and of their linear combinations) consist of $2\times 2$ blocks, where the block corresponding to the expansion terms with a frequency $\alpha \in \mathbb{Z}^d$ is proportional to
\begin{equation}\label{eq:Delta_alpha}
\Delta_\alpha=\begin{pmatrix}
\int_{\mathbb{T}^d} \eta(y) \cos (2\pi \alpha\cdot y) dy & \int_{\mathbb{T}^d} \eta(y) \sin (2\pi \alpha\cdot y) dy\\
-\int_{\mathbb{T}^d} \eta(y) \sin (2\pi \alpha\cdot y) dy & \int_{\mathbb{T}^d} \eta(y) \cos (2\pi \alpha\cdot y) dy
\end{pmatrix}.
\end{equation}
Thus, the coefficient matrix will be degenerate if $\det(\Delta_\alpha)=0$ for some $\alpha$. But we have that
\begin{equation*}
\det(\Delta_\alpha)= \left(\int_{\mathbb{T}^d} \eta(y) \cos (2\pi \alpha\cdot y) dy\right)^2 + \left(\int_{\mathbb{T}^d} \eta(y) \sin (2\pi \alpha\cdot y) dy\right)^2.
\end{equation*}
Hence, $\det(\Delta_\alpha)=0$ if and only if $\Delta_\alpha=0$ or, equivalently, there are no expansion terms with frequency $\alpha$. But then, we can simply ignore these terms in our basis and obtain a non-degenerate matrix.
Moreover, to invert the coefficients matrix one just has to invert the $2\times 2$ blocks. Additionally, if $K$ is symmetric; that is, $\eta(y)=\eta(-y)$, we get that
\begin{equation*}
\int_{\mathbb{T}^d} \eta(y) \sin (2\pi \alpha\cdot y) dy=0,~\forall \alpha \in \mathbb{Z}^d.
\end{equation*}
Hence, the coefficient matrices are simply diagonal. Therefore, we have proved the following proposition.
\begin{proposition}
If $K$ is translation invariant then all partial Fourier sums of $K$ and their linear combinations, such as $K_{rr}$ and $\Sigma_{rr}$, contain only $\cos (2\pi \alpha \cdot x) \cos (2\pi \alpha \cdot y), \cos (2\pi \alpha \cdot x) \sin (2\pi \alpha \cdot y), \sin (2\pi \alpha \cdot x) \cos (2\pi \alpha \cdot y), \sin (2\pi \alpha \cdot x) \sin (2\pi \alpha \cdot y)$ expansion terms. Therefore, coefficient matrices of such approximations with respect to trigonometric basis consist of $2\times 2$ blocks that are multiples of $\Delta_\alpha$ in \eqref{eq:Delta_alpha}. If, additionally, $K$ is symmetric these coefficient matrices are diagonal.
\end{proposition}
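The block structure can be checked directly. The sketch below (dimension one, an illustrative non-symmetric $\eta$, frequency $\alpha=1$) computes the four double integrals by quadrature and compares them with $\Delta_\alpha$; note the factor $\tfrac12=\int_{\mathbb{T}}\cos^2(2\pi\alpha y)dy$ coming from the normalization of the trigonometric basis. It also verifies the explicit inverse of the block and that a symmetric $\eta$ yields a diagonal block.

```python
import numpy as np

n = 2000
x = (np.arange(n) + 0.5) / n          # midpoint grid on the 1-torus
dx = 1.0 / n

eta = lambda z: np.exp(np.cos(2 * np.pi * z)) + 0.5 * np.sin(2 * np.pi * z)
alpha = 1
ca, sa = np.cos(2 * np.pi * alpha * x), np.sin(2 * np.pi * alpha * x)

# the 2x2 block of double integrals of K(x, y) = eta(x - y) against cos/sin
Kxy = eta(x[:, None] - x[None, :])
block = np.array([[ca @ Kxy @ ca, sa @ Kxy @ ca],
                  [ca @ Kxy @ sa, sa @ Kxy @ sa]]) * dx**2

# Delta_alpha built from the Fourier coefficients of eta, as in (eq:Delta_alpha)
ec = np.sum(eta(x) * ca) * dx         # int eta cos
es = np.sum(eta(x) * sa) * dx         # int eta sin
Delta = np.array([[ec, es], [-es, ec]])
assert np.allclose(block, 0.5 * Delta, atol=1e-8)

# det(Delta) = ec^2 + es^2 > 0, and the inverse of the block is explicit
Dinv = np.array([[ec, -es], [es, ec]]) / (ec**2 + es**2)
assert np.allclose(Delta @ Dinv, np.eye(2), atol=1e-12)

# for a symmetric eta the sine coefficient vanishes, so the block is diagonal
eta_s = lambda z: np.exp(np.cos(2 * np.pi * z))
assert abs(np.sum(eta_s(x) * sa) * dx) < 1e-10
```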
\begin{remark}
In general, suppose that $\{\phi_1,\phi_2,\cdots,\phi_r,\cdots\}$ is an orthonormal basis consisting of eigenfunctions of the Hilbert-Schmidt integral operator $f(\cdot) \mapsto \int_{\mathbb{T}^d} K(\cdot,y) f(y)dy$; that is,
\begin{equation*}
\int_{\mathbb{T}^d} K(x,y) \phi_\alpha(y)dy=\lambda_\alpha \phi_\alpha(x),~x\in \mathbb{T}^d,~\alpha \in \mathbb{N},
\end{equation*}
for some $\{\lambda_\alpha\} \subset \mathbb{R}$. Then, one has that
\begin{equation*}
k_{\alpha \beta}= \int_{\mathbb{T}^d\times \mathbb{T}^d} K(x,y)\phi_\alpha(x)\phi_\beta(y)dxdy= \lambda_\beta \delta_{\alpha \beta}.
\end{equation*}
Consequently, for arbitrary $I \subset \mathbb{N}\times \mathbb{N}$ we have that
\begin{equation*}
K_{I}(x,y)=\sum_{(\alpha,\beta) \in I} k_{\alpha \beta} \phi_{\alpha}(x) \phi_{\beta}(y)=\sum_{(\alpha,\alpha) \in I} \lambda_{\alpha} \phi_{\alpha}(x) \phi_{\alpha}(y).
\end{equation*}
Therefore, all partial Fourier sums of $K$ in basis $\{\phi_\alpha(x)\phi_\beta(y)\}$ contain only terms $\phi_\alpha(x) \phi_\alpha(y)$ and yield diagonal coefficient matrices consisting of corresponding eigenvalues of the Hilbert-Schmidt integral operator.
In general, it is not easy to calculate the eigenfunctions of a given Hilbert-Schmidt integral operator. Nevertheless, as we saw above, for translation invariant symmetric periodic $K$ these eigenfunctions are precisely the trigonometric functions.
\end{remark}
\section{A numerical method} \label{sec:a_numerical_method}
In this section we propose a numerical method to solve \eqref{eq:main} for a symmetric and positive semi-definite $K$. We assume that an approximation $K_r$ of the form \eqref{eq:K_poly} is already constructed with a symmetric and positive definite $\mathbf{K}$. Thus, we devise an algorithm for the solution of \eqref{eq:main_r}.
By Theorem \ref{thm:equivalence}, we have that \eqref{eq:main_r} is equivalent to \eqref{eq:a_equation_optimization}. Therefore, in what follows, we present a suitable discretization of \eqref{eq:a_equation_optimization}. We rewrite the latter as
\begin{equation}\label{eq:optim_S}
\inf_{a\in C\left([0,1];\mathbb{R}^r\right)} S(a),
\end{equation}
where
\begin{equation*}
S(a) = \frac{1}{2} \langle \mathbf{J} a, a \rangle - G(a),
\end{equation*}
and $\mathbf{J} = \mathbf{K}^{-1}$.
\subsection{Discretization of $u_a$}\label{sub:discretization_of_the_lax_formula}
We start with the discretization of $u_a$. For that, we discretize the representation formula \eqref{eq:ua_rep}. We can rewrite the latter as
\begin{equation}\label{eq:u_form}
u_a(x,0) = \inf_{{\bf u}} \int_0^1 L_a({\bf x}(s), {\bf u}(s),s) ds + U({\bf x}(1)),
\end{equation}
where ${\bf x}$ satisfies the following controlled ODE
\begin{equation} \label{eq:ODE}
\dot{{\bf x}}(s) = {\bf u}(s), \quad {\bf x}(0) = x,~s\in [0,1].
\end{equation}
Recall that
\begin{equation*}
L_a(x,u,s)=L(x,u)+\sum_{k=1}^r a_k(s)\phi_k(x),~(x,u,s)\in \mathbb{T}^d\times \mathbb{R}^d \times[0,1].
\end{equation*}
We choose a uniform discretization of the time interval:
\[
0=s_0<s_1<s_2<\ldots<s_N=1,
\]
with a step size $h_t = \frac{1}{N}$, hence $s_i=i h_t=\frac{i}{N}$. We denote the values of ${\bf x}$ and ${\bf u}$ at time $s_i$ by ${\bf x}(s_i)=x_i$, ${\bf u}(s_i)=u_i$. Using a backward Euler discretization of \eqref{eq:ODE} we have
\begin{equation*}
u_i = \frac{x_{i} - x_{i-1}}{h_t},~i \in \{1,\ldots, N\}.
\end{equation*}
Discretizing the integral \eqref{eq:u_form} with a right point quadrature rule and using the above discretization we get
\begin{equation} \label{eq:udisc}
\begin{cases}
\quad [u_a](x,0)&= \inf_{\{x_i\}_0^N} h_t \sum_{i=1}^{N}L_a\left(x_i,\frac{x_{i}-x_{i-1}}{h_t},s_i\right)+U(x_N),\\
\mbox{subject to:}& x_0=x.
\end{cases}
\end{equation}
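For quadratic data, the trajectory optimization \eqref{eq:udisc} reduces to a linear (tridiagonal) system, which gives a convenient sanity check. The sketch below is posed on $\mathbb{R}$ rather than $\mathbb{T}^d$ for simplicity, with the hypothetical data $L(v)=|v|^2/2$, $a\equiv 0$, $U(z)=z^2$, for which $u(x,0)=\min_z\left((z-x)^2/2+z^2\right)=x^2/3$ in closed form.

```python
import numpy as np

N = 20
ht = 1.0 / N
x0 = 1.5

# first-order conditions of (eq:udisc) for L(v) = v^2/2, a = 0, U(z) = z^2:
# interior: (2 x_i - x_{i-1} - x_{i+1})/ht = 0;
# terminal: (x_N - x_{N-1})/ht + 2 x_N = 0
A = np.zeros((N, N)); b = np.zeros(N)
for i in range(N - 1):
    A[i, i] = 2.0 / ht
    if i > 0:
        A[i, i - 1] = -1.0 / ht
    A[i, i + 1] = -1.0 / ht
A[N - 1, N - 1] = 1.0 / ht + 2.0
A[N - 1, N - 2] = -1.0 / ht
b[0] = x0 / ht
xs = np.linalg.solve(A, b)             # optimal discrete trajectory x_1, ..., x_N

path = np.concatenate(([x0], xs))
value = np.sum(np.diff(path)**2) / (2 * ht) + path[-1]**2

assert abs(value - x0**2 / 3) < 1e-10  # matches the closed-form value u(x0, 0)
```

The optimal discrete increments are equal, so for this data the discrete value coincides with the continuous one exactly.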
\subsection{Discretization of $G$}\label{sub:discretization_of_the_functional_g}
We start by discretizing the initial measure $M$ using a convex combination of Dirac $\delta$ distributions. Denoting the discretized measure by $[M]$, we have
\begin{equation*}
[M]=\sum\limits_{\alpha=1}^{Q} c_{\alpha} \delta_{y_{\alpha}}
\end{equation*}
or, in the distributional sense,
\begin{equation}\label{eq:Mdiscrete}
\int\limits_{{\mathbb{T}}^d} \psi(y) d[M](y)=\sum\limits_{\alpha=1}^{Q} c_{\alpha} \psi(y_{\alpha}),\quad \psi \in C({\mathbb{T}}^d),
\end{equation}
for some $\{y_{\alpha}\}_{\alpha=1}^Q \subset {\mathbb{T}}^d$ and $\{c_{\alpha}\geq 0\}_{\alpha=1}^Q$ such that $\sum_{\alpha=1}^Q c_{\alpha}=1$. Then, $G$ is discretized as follows
\begin{equation}\label{eq:[G]}
[G](a)=\sum\limits_{\alpha=1}^{Q} c_{\alpha}~[u_a](y_{\alpha},0).
\end{equation}
\subsection{Discretization of $S$}\label{sub:discretization_of_s}
Now, we discretize \eqref{eq:optim_S}. We first discretize the $a_k$'s by taking their values at the times $s_i$, which we denote by:
\begin{equation*}
[a]_k=(a_k(s_0),\cdots, a_k(s_N))=(a_{k0},\cdots, a_{kN}),~ k=1,2,\cdots,r.
\end{equation*}
Recall that
\begin{equation*}
\langle {\bf J} a, a\rangle = \sum\limits_{k,l=1}^{r} {\bf J}_{kl} \int\limits_{0}^{1} a_k(s) a_l(s) ds.
\end{equation*}
We discretize this previous quadratic form by a simple right point quadrature rule:
\begin{equation*}
[\langle {\bf J} a, a\rangle] = h_t \sum\limits_{k,l=1}^{r} {\bf J}_{kl} \sum\limits_{i=1}^{N} a_{ki}a_{li}.
\end{equation*}
So the discretization of $S$ is
\begin{equation} \label{eq:S}
\begin{split}
[S](a)&= \frac{h_t}{2} \sum\limits_{k,l=1}^{r} {\bf J}_{kl} \sum\limits_{i=1}^{N} a_{ki}a_{li}-[G](a)\\
&= \frac{h_t}{2} \sum\limits_{k,l=1}^{r} {\bf J}_{kl} \sum\limits_{i=1}^{N} a_{ki}a_{li}- \sum\limits_{\alpha=1}^{Q} c_{\alpha}~[u_a](y_{\alpha},0),
\end{split}
\end{equation}
where we used \eqref{eq:[G]}. Therefore, the discretization of \eqref{eq:optim_S} is
\begin{equation}\label{eq:infsupDirectB_gen}
\begin{split}
&\inf_{\{a_{ki}\}}[S](a)\\
=&\inf_{\{a_{ki}\}}\sup_{\{x_{\alpha i}:~x_{\alpha 0}=y_{\alpha}\}} \frac{h_t}{2}\sum_{k,l=1}^r\mathbf{J}_{kl}\sum_{i=1}^{N}a_{ki}a_{li}-h_t\sum_{\alpha=1}^Q\sum_{i=1}^{N}c_{\alpha}L\left(x_{\alpha i},\frac{x_{\alpha i}-x_{\alpha (i-1)}}{h_t}\right)\\
&-h_t \sum_{\alpha=1}^Q \sum_{i=1}^{N}\sum_{k=1}^r c_{\alpha}a_{ki} \phi_k(x_{\alpha i})-\sum_{\alpha=1}^Q c_{\alpha} U(x_{\alpha N}).
\end{split}
\end{equation}
\subsection{Primal-dual hybrid-gradient method}\label{sub:primal_dual_hybrid_gradient_method}
Now, we specify the Lagrangian to be quadratic and devise a primal-dual hybrid-gradient algorithm \cite{chapock'11} to solve \eqref{eq:optim_S}. More precisely, we assume that
\begin{equation*}
L(x,u)=\frac{|u|^2}{2},~(x,u)\in \mathbb{T}^d\times \mathbb{R}^d,
\end{equation*}
and therefore
\eqref{eq:infsupDirectB_gen} becomes
\begin{equation}\label{eq:infsupDirectB}
\begin{split}
&\inf_{\{a_{ki}\}}[S](a)\\
=&\inf_{\{a_{ki}\}}\sup_{\{x_{\alpha i}~:~x_{\alpha 0}=y_{\alpha}\}} \frac{h_t}{2}\sum_{k,l=1}^r\mathbf{J}_{kl}\sum_{i=1}^{N}a_{ki}a_{li}-\sum_{\alpha=1}^Q\sum_{i=1}^{N}c_{\alpha}\frac{|x_{\alpha i}-x_{\alpha (i-1)}|^2}{2h_t}\\
&-h_t \sum_{\alpha=1}^Q \sum_{i=1}^{N}\sum_{k=1}^r c_{\alpha}a_{ki} \phi_k(x_{\alpha i})-\sum_{\alpha=1}^Q c_{\alpha} U(x_{\alpha N}).
\end{split}
\end{equation}
Now, we describe the algorithm. At each iteration $\nu\geq 0$ we have three groups of variables:
$a^\nu=\{a^\nu_{ki}\}_{k,i=1,1}^{r,N},~x^\nu=\{x^\nu_{\alpha i}\}_{\alpha,i=1,0}^{Q,N}$, and $z^\nu=\{z^\nu_{\alpha i}\}_{\alpha,i=1,0}^{Q,N}$. Furthermore, we fix $\lambda, \omega>0$, the proximal step parameters for the variables $a$ and $x$, respectively. Additionally, we take $0\leq \theta \leq 1$.
\textbf{Step 1.} Given $a^\nu,x^\nu,z^\nu$ the first step of the algorithm is to solve the proximal problem
\begin{equation*}
\begin{split}
&\inf_{\{a_{ki}\}} \frac{h_t}{2}\sum_{k,l=1}^r\mathbf{J}_{kl}\sum_{i=1}^{N}a_{ki}a_{li}-\sum_{\alpha=1}^Q\sum_{i=1}^{N}c_{\alpha}\frac{|z_{\alpha i}^\nu-z_{\alpha (i-1)}^\nu|^2}{2h_t}\\
&-h_t \sum_{\alpha=1}^Q \sum_{i=1}^{N}\sum_{k=1}^r c_{\alpha}a_{ki} \phi_k(z_{\alpha i}^\nu)-\sum_{\alpha=1}^Q c_{\alpha} U(z_{\alpha N}^\nu)
+\frac{1}{2\lambda} \sum_{k=1}^r\sum_{i=1}^N (a_{ki}-a^{\nu}_{ki})^2,
\end{split}
\end{equation*}
that is equivalent to
\begin{equation*}
\begin{split}
&\inf_{\{a_{ki}\}} \frac{h_t}{2}\sum_{k,l=1}^r\mathbf{J}_{kl}\sum_{i=1}^{N}a_{ki}a_{li}- h_t \sum_{\alpha=1}^Q \sum_{i=1}^{N}\sum_{k=1}^r c_{\alpha}a_{ki} \phi_k(z_{\alpha i}^\nu)+\frac{1}{2\lambda} \sum_{k=1}^r\sum_{i=1}^N (a_{ki}-a^{\nu}_{ki})^2.
\end{split}
\end{equation*}
Thus, we obtain the following update of the $a$-variable.
\begin{equation}\label{eq:a_iterationDirect}
\begin{pmatrix}
a^{\nu+1}_{1i}\\
a^{\nu+1}_{2i}\\
\vdots\\
a^{\nu+1}_{ri}
\end{pmatrix}= (\lambda h_t {\bf J}+\mathrm{Id}_r)^{-1}
\begin{pmatrix}
a^\nu_{1i}+\lambda h_t \sum_{\alpha=1}^Q c_{\alpha} \phi_1(z^\nu_{\alpha i})\\
a^\nu_{2i}+\lambda h_t\sum_{\alpha=1}^Q c_{\alpha} \phi_2(z^\nu_{\alpha i})\\
\vdots\\
a^\nu_{ri}+\lambda h_t\sum_{\alpha=1}^Q c_{\alpha} \phi_r(z^\nu_{\alpha i})
\end{pmatrix},\quad 1\leq i \leq N.
\end{equation}
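In code, the update \eqref{eq:a_iterationDirect} is a single linear solve with a matrix that is the same for every time index $i$ and can be factorized once. A hypothetical NumPy sketch (names are ours):

```python
import numpy as np

def update_a(a, z, c, J, phi, h_t, lam):
    """Step 1: exact proximal update of the coefficients.

    Solves (lam*h_t*J + Id) a^{nu+1}_{.,i} = a^{nu}_{.,i}
           + lam*h_t*sum_alpha c_alpha phi_k(z^{nu}_{alpha i})
    simultaneously for all time indices i.
    """
    r, _ = a.shape
    A = lam * h_t * J + np.eye(r)                       # factorize once in practice
    Phi = np.stack([phi(k + 1, z) for k in range(r)])   # (r, Q, N)
    rhs = a + lam * h_t * np.einsum('q,kqi->ki', c, Phi)
    return np.linalg.solve(A, rhs)                      # one solve per column i
```

For diagonal ${\bf J}$ (as in the translation-invariant case below) the solve reduces to an elementwise division.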
\begin{remark}\label{rem:time_par}
Note that although the number of variables $\{a_{ki}\}_{k,i=1,1}^{r,N}$ is $r\times N$, the calculations of $\{a_{ki}\}$ for different $i$-s are mutually independent. Therefore, the only complexity is in the inversion of the $r\times r$ matrix $\lambda h_t {\bf J}+\mathrm{Id}_r$, which can be computed beforehand and used throughout the scheme. Moreover, as seen in Section \ref{sec:approx}, translation-invariant symmetric kernels yield diagonal matrices that greatly simplify the calculations.
\end{remark}
\textbf{Step 2.} Given $a^{\nu+1},x^\nu,z^\nu$ we update $x$-variable by solving the proximal problem
\begin{equation*}
\begin{split}
&\inf_{\{x_{\alpha i}:~x_{\alpha0}=y_{\alpha}\}} \sum_{\alpha=1}^Q\sum_{i=1}^{N}c_{\alpha}\frac{|x_{\alpha i}-x_{\alpha (i-1)}|^2}{2h_t}+ h_t \sum_{\alpha=1}^Q \sum_{i=1}^{N}\sum_{k=1}^r c_{\alpha}a^{\nu+1}_{ki} \phi_k(x_{\alpha i})\\
&+\sum_{\alpha=1}^Q c_{\alpha} U(x_{\alpha N}) +\frac{1}{2\omega} \sum_{\alpha=1}^Q\sum_{i=1}^N |x_{\alpha i}-x^{\nu}_{\alpha i}|^2.
\end{split}
\end{equation*}
Solving this problem exactly may be a costly operation. Hence, we perform a single gradient-descent step and obtain
\begin{equation}\label{eq:x_iterationDirect}
\begin{split}
x^{\nu+1}_{\alpha 1} &= x^{\nu}_{\alpha 1}-\frac{\omega c_\alpha}{h_t}(x^\nu_{\alpha 1}-y_{\alpha})-\frac{\omega c_\alpha}{h_t}(x^\nu_{\alpha 1}-x^\nu_{\alpha 2})-\omega c_\alpha h_t \sum_{k=1}^r a^{\nu+1}_{k1}\nabla \phi_k(x^\nu_{\alpha 1}),\\
x^{\nu+1}_{\alpha i} &= x^{\nu}_{\alpha i}-\frac{\omega c_\alpha}{h_t}(x^\nu_{\alpha i}-x^\nu_{\alpha (i-1)})-\frac{\omega c_\alpha}{h_t}(x^\nu_{\alpha i}-x^\nu_{\alpha (i+1)})\\
&\quad-\omega c_\alpha h_t \sum_{k=1}^r a^{\nu+1}_{k i}\nabla \phi_k(x^\nu_{\alpha i}), \quad 2\leq i \leq N-1,\\
x^{\nu+1}_{\alpha N} &= x^{\nu}_{\alpha N}-\frac{\omega c_\alpha}{h_t}(x^\nu_{\alpha N}-x^\nu_{\alpha (N-1)})-\omega c_\alpha \nabla U (x^\nu_{\alpha N})-\omega c_\alpha h_t \sum_{k=1}^r a^{\nu+1}_{k N}\nabla \phi_k(x^\nu_{\alpha N}).
\end{split}
\end{equation}
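The gradient step \eqref{eq:x_iterationDirect} vectorizes over both agents and time. A hypothetical NumPy sketch (our names; `grad_phi` and `grad_U` are user-supplied derivatives):

```python
import numpy as np

def update_x(a_new, x, y, c, grad_phi, grad_U, h_t, omega):
    """Step 2: one explicit gradient step on the x-proximal problem.

    a_new : (r, N); x : (Q, N) current trajectory points; y : (Q,) initial
    positions; grad_phi(k, x) and grad_U(x) are vectorized derivatives.
    """
    r, _ = a_new.shape
    x_prev = np.concatenate([y[:, None], x[:, :-1]], axis=1)
    g = c[:, None] * (x - x_prev) / h_t                       # backward difference
    g[:, :-1] += c[:, None] * (x[:, :-1] - x[:, 1:]) / h_t    # forward difference
    G = np.stack([grad_phi(k + 1, x) for k in range(r)])      # (r, Q, N)
    g += h_t * c[:, None] * np.einsum('ki,kqi->qi', a_new, G)
    g[:, -1] += c * grad_U(x[:, -1])                          # terminal-cost term
    return x - omega * g
```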
\textbf{Step 3.} In the final step we update the $z$-variable by
\begin{equation}\label{eq:z_updateDirect}
z^{\nu+1}_{\alpha i}=x^{\nu+1}_{\alpha i}+\theta (x_{\alpha i}^{\nu+1}-x^\nu_{\alpha i}), \quad 1\leq \alpha \leq Q, ~1\leq i \leq N.
\end{equation}
\begin{remark}\label{rem:space_par}
Note that the updates for $\{x_{\alpha i}\},\{z_{\alpha i}\}$ variables are mutually independent for different $\alpha$-s. Therefore, our $a$-updates are parallel in time, and $x,z$-updates are parallel in space.
\end{remark}
\begin{remark}\label{rem:PDHGnonst}
Strictly speaking, one cannot simply apply the primal-dual hybrid gradient method to \eqref{eq:infsupDirectB} because the coupling between $a$ and $x$ is not bilinear, and there is no concavity in $x$. Nevertheless, our computations consistently yield solid results. Therefore, a natural problem is to rigorously understand the convergence properties of the aforementioned algorithm. We plan to address this problem in future work.
\end{remark}
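For reference, the three steps can be tied together in one loop. The sketch below is a hypothetical minimal implementation for the quadratic Lagrangian $L(x,u)=|u|^2/2$ in one dimension; the initialization, defaults, and names are our illustrative choices, not taken from the experiments.

```python
import numpy as np

def pdhg_mfg(y, c, J, phi, grad_phi, grad_U, h_t, n_iter=200,
             lam=0.5, omega=0.05, theta=1.0):
    """Steps 1-3 combined for the quadratic Lagrangian L(x,u)=|u|^2/2."""
    r = J.shape[0]
    N = int(round(1.0 / h_t))
    a = np.zeros((r, N))
    x = np.repeat(y[:, None], N, axis=1)         # start from constant paths
    z = x.copy()
    A = lam * h_t * J + np.eye(r)                # fixed matrix, factorize once
    for _ in range(n_iter):
        # Step 1: exact proximal update of a at the relaxed points z
        Phi = np.stack([phi(k + 1, z) for k in range(r)])
        a = np.linalg.solve(A, a + lam * h_t * np.einsum('q,kqi->ki', c, Phi))
        # Step 2: one gradient step in x
        xp = np.concatenate([y[:, None], x[:, :-1]], axis=1)
        g = c[:, None] * (x - xp) / h_t
        g[:, :-1] += c[:, None] * (x[:, :-1] - x[:, 1:]) / h_t
        G = np.stack([grad_phi(k + 1, x) for k in range(r)])
        g += h_t * c[:, None] * np.einsum('ki,kqi->qi', a, G)
        g[:, -1] += c * grad_U(x[:, -1])
        x_new = x - omega * g
        # Step 3: extrapolation of the relaxed variable
        z = x_new + theta * (x_new - x)
        x = x_new
    return a, x
```

Note that, consistent with Remarks \ref{rem:time_par} and \ref{rem:space_par}, the $a$-solve is columnwise in time and the $x$-update is rowwise in agents, so both steps parallelize.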
\section{Numerical Examples}\label{sec:numerical_examples}
In this section, we present several numerical experiments. We first look into the one-dimensional case in Section \ref{sub:1_dimensional_examples}, and then consider the two-dimensional case in Section \ref{sub:2_dimensional_examples}.
For our calculations, we choose the periodic Gaussian kernel that is given by
\begin{equation}\label{eq:K_smud}
K^d_{\sigma,\mu}(x,y)=\prod_{i=1}^{d} K^1_{\sigma,\mu}(x_i,y_i),~x,y\in \mathbb{T}^d,
\end{equation}
where
\begin{equation}\label{eq:kernel1}
K^1_{\sigma,\mu}(x,y)=\frac{\mu}{\sqrt{2\pi (\frac{\sigma}{2})^2}}\sum_{k=-\infty}^{\infty}e^{-\frac{(x-y-k)^2}{2 (\frac{\sigma}{2})^2}},~x,y\in \mathbb{T},
\end{equation}
and $\sigma, \mu >0$ are given parameters. Here, $\sigma$ models how spread out the kernel is. The smaller $\sigma$ is, the more weight agents assign to their immediate neighbors -- this translates into crowd-aversion in the close neighborhood only. Furthermore, $\mu$ is the total weight of the agents. Therefore, $\mu$ measures how sensitive a generic agent is to the total population: the larger $\mu$ is, the more averse the agent is to others. As we observe in the numerical experiments, the smaller $\sigma$ and the larger $\mu$ are, the more separated the agents become. This phenomenon was also observed in \cite{aurelldjehiche'18}.
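The periodized kernel \eqref{eq:kernel1} can be evaluated directly by truncating the image sum; the truncation level below is an arbitrary choice and the function name is ours.

```python
import numpy as np

def K1(x, y, sigma, mu, n_images=25):
    """Periodized Gaussian kernel of eq. (kernel1), truncating the image sum
    at |k| <= n_images (the tail is negligible for sigma well below 1)."""
    k = np.arange(-n_images, n_images + 1)
    s = sigma / 2.0
    return mu / np.sqrt(2 * np.pi * s**2) * np.sum(np.exp(-(x - y - k)**2 / (2 * s**2)))

# Sanity check: each periodized Gaussian integrates to mu over one period.
xs = np.arange(400) / 400
mass = np.mean([K1(t, 0.0, 0.2, 0.5) for t in xs])
assert abs(mass - 0.5) < 1e-8
```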
Throughout the section we denote by
\begin{equation}\label{eq:phisystem}
\phi_k(x)=
\begin{cases}
1,~\mbox{if}~k=1,\\
\sqrt{2}\cos \pi (k-1) x,~\mbox{if}~k~\mbox{is odd,}~\mbox{and}~k>1,\\
\sqrt{2}\sin \pi k x,~\mbox{if}~k~\mbox{is even},~x\in \mathbb{T}.
\end{cases}
\end{equation}
Therefore, we have
\begin{equation*}
\{\phi_1,\phi_2,\phi_3,\cdots\}=\{1,\sqrt{2}\sin 2\pi x, \sqrt{2} \cos 2\pi x,\cdots \}.
\end{equation*}
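The basis \eqref{eq:phisystem} is easy to implement and its $L^2(\mathbb{T})$-orthonormality can be verified on a uniform grid; the sketch below is illustrative (grid size is an arbitrary choice).

```python
import numpy as np

def phi(k, x):
    """Basis (eq:phisystem) on the torus: phi_1 = 1, phi_2 = sqrt(2) sin(2 pi x),
    phi_3 = sqrt(2) cos(2 pi x), phi_4 = sqrt(2) sin(4 pi x), ..."""
    x = np.asarray(x, dtype=float)
    if k == 1:
        return np.ones_like(x)
    if k % 2 == 0:
        return np.sqrt(2.0) * np.sin(np.pi * k * x)        # even k
    return np.sqrt(2.0) * np.cos(np.pi * (k - 1) * x)      # odd k > 1

# Orthonormality check on a uniform grid (the rule is exact for trigonometric
# polynomials below the grid's Nyquist frequency):
xs = np.arange(256) / 256
gram = np.array([[np.mean(phi(k, xs) * phi(l, xs)) for l in range(1, 6)]
                 for k in range(1, 6)])
assert np.allclose(gram, np.eye(5), atol=1e-12)
```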
\subsection{One-dimensional examples}\label{sub:1_dimensional_examples}
For all simulations we use the same initial-terminal conditions
\begin{equation*}
M(x) = \frac{1}{6} + \frac{5}{3} \sin^2 \pi x,\quad U(x) = 1+\sin \left(4\pi x+ \frac{\pi}{2}\right),~x\in \mathbb{T},
\end{equation*}
that are depicted in Figure \ref{fig:itc_1}.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussianm_0.pdf}
\caption{Initial distribution of agents, $M(x)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussianu_T.pdf}
\caption{Terminal cost function, $U(x)$.}
\end{subfigure}
\caption{Initial-terminal conditions.}
\label{fig:itc_1}
\end{figure}
We also use the same time and space discretization for all one-dimensional experiments, and the same parameters for the numerical scheme. We discretize time using the step size $h_t = \frac{1}{N}$. For the discretization of $M$ we use
\begin{equation*}
y_\alpha=\frac{\alpha}{Q+1},\quad c_{\alpha}=\frac{M(y_\alpha)}{\sum_{\beta=1}^Q M(y_\beta)},\quad 1\leq \alpha \leq Q.
\end{equation*}
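For concreteness, this discretization of $M$ is a few lines of code; a minimal sketch, assuming the one-dimensional $M$ above and NumPy:

```python
import numpy as np

# Quadrature points and normalized weights for the initial density M.
M = lambda x: 1.0 / 6.0 + 5.0 / 3.0 * np.sin(np.pi * x)**2
Q = 50
y = np.arange(1, Q + 1) / (Q + 1)        # y_alpha = alpha/(Q+1)
c = M(y) / np.sum(M(y))                  # weights sum to one by construction
```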
We choose $N=20,~Q=50$ and use eight basis functions, $r=8$. Additionally, we set the numerical scheme parameters to $\lambda = 3,~\omega = \frac{1}{12}$ and $\theta=1$.
\begin{remark}
For the standard primal-dual hybrid gradient method, one must have $\omega \lambda <\frac{1}{A^2}$, where $A$ is the norm of the bilinear-form matrix. As we mentioned in Remark \ref{rem:PDHGnonst}, here we do not have a bilinear coupling between $a$ and $x$. Thus, we estimate $A$ by an upper bound on the $(l_2,l_2)$ Lipschitz norm of the mapping
\begin{equation*}
F_{ki}(x)=h_t \sum_{\alpha=1}^Q c_\alpha \phi_k(x_{\alpha i}),~1\leq k \leq r,~1\leq i \leq N.
\end{equation*}
More precisely, we have that
\begin{equation*}
\begin{split}
\mathrm{Lip}(F)^2=&\sup_{\{x_{\beta j}\}} \sup_{\|w_{\beta j}\|_2\leq 1} \sum_{k,i} \left(\sum_{\beta,j} \frac{\partial F_{ki}}{\partial x_{\beta j}}w_{\beta j}\right)^2\\
=&\sup_{\{x_{\beta j}\}} \sup_{\|w_{\beta j}\|_2\leq 1} \sum_{k,i} \left(\sum_{\beta} h_t c_\beta \nabla \phi_k(x_{\beta i}) w_{\beta i}\right)^2\\
\leq &h_t^2 \sup_{\{x_{\beta j}\}} \sup_{\|w_{\beta j}\|_2\leq 1} \sum_{k,i} \left( \sum_{\beta} c_\beta^2 \|\nabla \phi_k(x_{\beta i})\|_2^2 \cdot \sum_\beta w_{\beta i}^2\right)\\
\leq & h_t^2 \sup_{\|w_{\beta j}\|_2\leq 1} \sum_{k,i} \mathrm{Lip}(\phi_k)^2 \left( \sum_{\beta} c_\beta^2 \cdot \sum_\beta w_{\beta i}^2\right)\\
= & h_t^2 \sup_{\|w_{\beta j}\|_2\leq 1} \sum_{k} \mathrm{Lip}(\phi_k)^2 \sum_{\beta} c_\beta^2 \sum_{\beta,i} w_{\beta i}^2\\
= & h_t^2 \sum_{k} \mathrm{Lip}(\phi_k)^2 \sum_{\beta} c_\beta^2.
\end{split}
\end{equation*}
Thus, we take
\begin{equation*}
A^2= h_t^2 \sum_{k=1}^r \mathrm{Lip}(\phi_k)^2 \sum_{\beta=1}^Q c_\beta^2.
\end{equation*}
\end{remark}
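The bound above is easy to evaluate for the trigonometric basis \eqref{eq:phisystem}, where $\mathrm{Lip}(\phi_k)=2\sqrt{2}\,\pi\left[\frac{k}{2}\right]$ with $[\cdot]$ the floor. A hypothetical sketch:

```python
import numpy as np

def step_size_bound(r, c, h_t):
    """A^2 = h_t^2 * sum_k Lip(phi_k)^2 * sum_beta c_beta^2 for the basis
    (eq:phisystem); Lip(phi_k) = 2*sqrt(2)*pi*floor(k/2)."""
    lip2 = 8.0 * np.pi**2 * np.array([(k // 2)**2 for k in range(1, r + 1)])
    return h_t**2 * lip2.sum() * np.sum(np.asarray(c)**2)

# The proximal steps should then satisfy lam * omega < 1 / A^2.
```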
The trigonometric expansion of $K^1_{\sigma,\mu}$ is given by
\begin{equation}\label{eq:kernel1expansioncos}
K^1_{\sigma,\mu}(x,y)=\mu \left(1+2\sum_{n=1}^\infty e^{-\frac{(\pi n \sigma)^2}{2}} \cos 2\pi n (x-y)\right),~x,y \in \mathbb{T},
\end{equation}
or
\begin{equation}\label{eq:kernel1expansionphi}
K^1_{\sigma,\mu}(x,y)=\sum_{k=1}^\infty \mu e^{-\frac{1}{2}\left(\pi \sigma \left[\frac{k}{2}\right]\right)^2} \phi_k(x)\phi_k(y),~x,y \in \mathbb{T},
\end{equation}
in our notation. Therefore, for a given $r$, the matrices ${\bf K, J}$ are given by
\begin{equation}\label{eq:KJmats1}
\begin{split}
{\bf K}=& \mbox{diag} \left(\mu e^{-\frac{1}{2}\left(\pi \sigma \left[\frac{k}{2}\right]\right)^2}\right)_{k=1}^r,\\
{\bf J}=& \mbox{diag} \left(\mu^{-1} e^{\frac{1}{2}\left(\pi \sigma \left[\frac{k}{2}\right]\right)^2}\right)_{k=1}^r.
\end{split}
\end{equation}
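Building these truncations is a one-liner each; the sketch below uses our own names and checks that ${\bf J}$ inverts ${\bf K}$ on the truncated basis.

```python
import numpy as np

def kernel_matrices_1d(r, sigma, mu):
    """Diagonal truncations K, J of eq. (KJmats1); [k/2] is the floor."""
    freq = np.array([k // 2 for k in range(1, r + 1)])
    d = mu * np.exp(-0.5 * (np.pi * sigma * freq)**2)
    return np.diag(d), np.diag(1.0 / d)

K, J = kernel_matrices_1d(8, 0.2, 0.5)
assert np.allclose(K @ J, np.eye(8))     # J inverts K on the truncated basis
```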
In Figure \ref{1d_kernels} we plot the Gaussian kernels we used, for $r=8$ and different values of $\mu$ and $\sigma$.
We see the influence of these values in Figure \ref{1D_comparison}.
Each column of Figure \ref{1D_comparison} corresponds to a different choice of $\mu$ and $\sigma$.
Comparing the first and the second columns of Figure \ref{1D_comparison}, we see that the trajectories of the agents in the first column are closer than in the second one.
This is due to the fact that $\mu = 0.5$ in the first kernel and $\mu=1.5$ in the second one; hence the second kernel (higher value of $\mu$) penalizes high agent density more heavily.
Therefore, the agents spread out more before the final time, when they converge to low-cost points near the minima of the terminal cost function, $U$; see Figure \ref{fig:itc_1} (b).
In the last column the value $\sigma = 0.8$ is higher, which means that agents are indifferent to the distances between them -- they just feel the total mass. Hence, they minimize the travel distances from initial positions to low-cost locations of $U$, ignoring the population density. In fact, in this case $K^1_{\sigma,\mu}\approx \mu$, and therefore $\int_{\mathbb{T}} K^1_{\sigma,\mu}(x,y)m(y,t)dy \approx \mu$. Thus, in this case \eqref{eq:main} approximates a decoupled system of Hamilton-Jacobi and Fokker-Planck equations. But the optimal trajectories of the decoupled system are straight lines by the Hopf--Lax formula. As we can see in Figure \ref{1D_comparison} (d), this fact is consistent with the straight-line trajectories that we obtain.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian1kernel3D.pdf}
\caption{Gaussian kernel, $K^1_{0.2, 0.5}(x,y)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian3kernel3D.pdf}
\caption{Gaussian kernel, $K^1_{0.2, 1.5}(x,y)$.}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2kernel3D.pdf}
\caption{Gaussian kernel, $K^1_{0.8, 0.5}(x,y)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{GaussianKernels.pdf}
\caption{Comparison of kernels on $K^1_{\sigma, \mu}(x,0)$.}
\end{subfigure}
\caption{Plots of the three Gaussian kernels in (a)-(c), and a comparison of their sections in (d).}
\label{1d_kernels}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian1Kernel_section.pdf}
\caption{$K^1_{0.2, 0.5}(x,0)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian3Kernel_section.pdf}
\caption{$K^1_{0.2, 1.5}(x,0)$.}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2Kernel_section.pdf}
\caption{$K^1_{0.8, 0.5}(x,0)$.}
\end{subfigure}%
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian1Trajectories.pdf}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian3Trajectories.pdf}
\caption{Trajectories, ${\bf x}(t,y_\alpha)$.}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2Trajectories.pdf}
\end{subfigure}%
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian1Density3D.pdf}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian3Density3D.pdf}
\caption{Density, $m(x,t)$.}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2Density3D.pdf}
\end{subfigure}%
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian1Densities_cost.pdf}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian3Densities_cost.pdf}
\caption{$M(x)$ -- blue, $m(x,1)$ -- green, $U(x)$ -- orange.}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2Densities_cost.pdf}
\end{subfigure}%
\caption{Simulations using Gaussian kernels with different parameters, $(\sigma, \mu) \in \left\{ (0.2,0.5), (0.2,1.5), (0.8,0.5) \right\}$, for each column.
In the first row, we show a section of each kernel.
In the second row, we plot the trajectories of the agents, $\{{\bf x}(t,y_\alpha)\}_{\alpha=1}^Q$, for $t\in[0,1]$ and initial positions $\{y_\alpha\}_{\alpha=1}^Q\subset {\mathbb{T}}$.
In the third row, we plot the time evolution of the distribution of players, $m(t,x)$.
Each plot of the last row displays the initial-terminal conditions, $M(x)$ and $U(x)$, and the final distribution, $m(x,1)$.
}
\label{1D_comparison}
\end{figure}
\subsection{Two-dimensional examples} \label{sub:2_dimensional_examples}
Here, we consider the case of two-dimensional state space. The initial distribution of players and the terminal cost function are given by
\begin{equation*}
\begin{split}
M(x_1,x_2) =& 1 + \frac 1 2 \cos \left( \pi + 2 \pi \left( x_1 - x_2 \right) \right) + \frac 1 2 \sin \left( \frac \pi 2 + 2 \pi \left( x_1 + x_2 \right) \right),\\
U(x_1,x_2) =& \frac 3 2 + \frac 1 2 \left( \cos \left( 6 \pi x_1 \right) + \cos \left( 2 \pi x_2 \right) \right),~(x_1,x_2)\in {\mathbb{T}}^2,
\end{split}
\end{equation*}
that are depicted in Figure \ref{fig:itc_2D}.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_3m_0.pdf}
\caption{Initial distribution of agents, $M(x_1,x_2)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_3u_T.pdf}
\caption{Terminal cost function, $U(x_1,x_2)$.}
\end{subfigure}
\caption{Initial-terminal conditions.}
\label{fig:itc_2D}
\end{figure}
The corresponding expansion of the kernel is given by
\begin{equation}\label{eq:kernel12dexpansionphi}
\begin{split}
&K^2_{\sigma,\mu}(x_1,x_2;y_1,y_2)\\
=&\sum_{k,k'=1}^{\infty} \mu^2 e^{-\frac{\pi^2\sigma^2}{2}\left(\left[\frac{k}{2}\right]^2+\left[\frac{k'}{2}\right]^2\right)}\phi_k(x_1) \phi_k(y_1) \phi_{k'}(x_2) \phi_{k'}(y_2)\\
=&\sum_{k,k'=1}^{\infty} \mu^2 e^{-\frac{\pi^2\sigma^2}{2}\left(\left[\frac{k}{2}\right]^2+\left[\frac{k'}{2}\right]^2\right)}\phi_{k,k'}(x_1,x_2) \phi_{k,k'}(y_1,y_2),
\end{split}
\end{equation}
where
\begin{equation}\label{eq:tensorphi}
\phi_{k,k'}(x_1,x_2)=\phi_{k}(x_1)\phi_{k'}(x_2),~x_1,x_2 \in \mathbb{T},~k,k' \in \mathbb{N}.
\end{equation}
Thus, for a fixed $r$, we take as basis functions the set:
\begin{equation*}
\begin{split}
\{\phi_{1,1}, \phi_{1,2},\cdots,\phi_{1,r-1},\phi_{2,1},\cdots,\phi_{2,r-2},\cdots,\phi_{r-1,1} \}
= \{\psi_1, \psi_2,\cdots,\psi_{\frac{r(r-1)}{2}} \}.
\end{split}
\end{equation*}
Therefore, we take all functions $\phi_{k,k'}$ such that $k+k'\leq r$, ordered lexicographically.
The corresponding matrices are of size $\frac{r(r-1)}{2}\times \frac{r(r-1)}{2}$:
\begin{equation}\label{eq:KJmats12d}
\begin{split}
{\bf K}=& \mbox{diag} \left(\mu^2 e^{-\frac{\pi^2\sigma^2}{2}\left(\left[\frac{k}{2}\right]^2+\left[\frac{k'}{2}\right]^2\right)}\right)_{k+k'\leq r},\\
{\bf J}=& \mbox{diag} \left(\mu^{-2} e^{\frac{\pi^2\sigma^2}{2}\left(\left[\frac{k}{2}\right]^2+\left[\frac{k'}{2}\right]^2\right)}\right)_{k+k'\leq r},
\end{split}
\end{equation}
where the order is again lexicographic.
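The bookkeeping for the tensor basis is short. The sketch below (hypothetical names, NumPy assumed, $[\cdot]$ the floor) enumerates the pairs $(k,k')$ with $k+k'\leq r$ in lexicographic order and builds the diagonal matrices of \eqref{eq:KJmats12d}.

```python
import numpy as np

def kernel_matrices_2d(r, sigma, mu):
    """Lexicographic pairs (k, k') with k + k' <= r and the diagonal matrices
    of eq. (KJmats12d)."""
    pairs = [(k, kp) for k in range(1, r) for kp in range(1, r - k + 1)]
    d = np.array([mu**2 * np.exp(-0.5 * np.pi**2 * sigma**2
                                 * ((k // 2)**2 + (kp // 2)**2))
                  for k, kp in pairs])
    return pairs, np.diag(d), np.diag(1.0 / d)

pairs, K, J = kernel_matrices_2d(8, 0.1, 0.5)
assert len(pairs) == 8 * 7 // 2          # r(r-1)/2 = 28 tensor basis functions
assert pairs[:3] == [(1, 1), (1, 2), (1, 3)] and pairs[7] == (2, 1)
```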
To compare the results, we use the same time and space discretization throughout all our two-dimensional experiments, as well as the same parameters for the numerical scheme. We discretize time using the step size $h_t = \frac{1}{N}$. For the discretization of $M$ we use
\begin{equation*}
\begin{split}
y_{\alpha \alpha'}=\left( \frac{\alpha}{Q+1},\frac{\alpha'}{Q+1} \right), \quad c_{\alpha \alpha'}=\frac{M(y_{\alpha \alpha'})}{\sum_{\beta,\beta'=1}^Q M(y_{\beta \beta'})},~1\leq \alpha,\alpha' \leq Q.
\end{split}
\end{equation*}
We choose $N=20,~Q=20$, and set $r=8$, which yields $\frac{r(r-1)}{2}=28$ basis functions $\psi_j$. Furthermore, we set the numerical scheme parameters to $\lambda = 1,~\omega = \frac{1}{12}$ and $\theta=1$.
In Figure \ref{2d_kernels}, we plot the Gaussian kernels used in the simulations, with different values of $\mu$ and $\sigma$.
We see that the bigger $\mu$ is, the higher the peak of the kernel; see (a) and (b) in Figure \ref{2d_kernels}.
This means that each agent in (a) is more averse to being in crowded areas than the agents in (b), with $\mu=0.75$ and $\mu=0.5$, respectively.
For higher values of $\sigma$ we see that the kernel becomes flat; compare (b) with (c) in Figure \ref{2d_kernels}, for $\sigma=0.1$ and $\sigma=1$, respectively. As before, this means that the agents penalize others independently of mutual distances.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_32Kernel_section.pdf}
\caption{$K^2_{0.1, 0.75}(x_1,x_2;0,0)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_31Kernel_section.pdf}
\caption{$K^2_{0.1, 0.5}(x_1,x_2;0,0)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_33Kernel_section.pdf}
\caption{$K^2_{1, 0.5}(x_1,x_2;0,0)$.}
\end{subfigure}%
\caption{Plots of the Gaussian kernels for $(\sigma,\mu) \in \{(0.1,0.75),(0.1,0.5),(1,0.5)\}$.}
\label{2d_kernels}
\end{figure}
In Figure \ref{fig:2d_comp}, we compare the simulation results using the same initial-terminal conditions, see Figure \ref{fig:itc_2D}, but different kernel functions (plotted in the first row of Figure \ref{fig:2d_comp}). In the last row of Figure \ref{fig:2d_comp} we have the final distribution of agents.
We see that for larger values of $\mu$ (left column compared with the middle one), the agents' concentration near low-cost regions of the terminal cost, $U$, is less dense. We also see that when $\sigma$ is bigger, the agents become more indifferent to the density of the crowd and concentrate more densely near low-cost values of $U$; see the right column of Figure \ref{fig:2d_comp}.
As in the one-dimensional case, looking at the projected trajectories in the two-dimensional plane, we observe that for the flat kernel the agents follow straight lines from their initial positions to the closest low-cost regions of the terminal cost function.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_32Kernel_section.pdf}
\caption{$K^2_{0.1, 0.75}(x_1,x_2;0,0)$.}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_31Kernel_section.pdf}
\caption{$K^2_{0.1, 0.5}(x_1,x_2;0,0)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_33Kernel_section.pdf}
\caption{$K^2_{1, 0.5}(x_1,x_2;0,0)$.}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_32trajectories_sampled3.pdf}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_31trajectories_sampled3.pdf}
\caption{Trajectories, ${\bf x}(t,y_{\alpha})$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_33trajectories_sampled3.pdf}
\end{subfigure}%
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_32trajectories2D.pdf}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_31trajectories2D.pdf}
\caption{Projected trajectories.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_33trajectories2D.pdf}
\end{subfigure}%
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_32Final_dist.pdf}
\end{subfigure}
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_31Final_dist.pdf}
\caption{Final density, $m({\bf x},1)$.}
\end{subfigure}%
~
\begin{subfigure}[t]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Gaussian2D_33Final_dist.pdf}
\end{subfigure}%
%
\caption{Simulations using Gaussian kernels with different parameters, $(\sigma, \mu) \in \left\{ (0.1,0.75), (0.1,0.5), (1,0.5) \right\}$, for each column.
In the first row we show a section of each kernel.
In the second row we show the trajectories of the agents, $\{{\bf x}(t,y_{\alpha \alpha'})\}_{\alpha,\alpha'=1}^Q,~t\in[0,1],$ with initial positions $\{y_{\alpha \alpha'}\}_{\alpha,\alpha'=1}^Q\subset {\mathbb{T}}^2$.
In the third row, we plot the 2D projection of the trajectories. And in the last row, we plot the final distribution of the agents, $m({\bf x},1)$.
}
\label{fig:2d_comp}
\end{figure}
\frenchspacing
\bibliographystyle{plain}
\section{Introduction}
Loewner's systolic inequality for the torus and Pu's inequality
\cite{Pu52} for the real projective plane were historically the first
results in systolic geometry. Great stimulus was provided in 1983 by
Gromov's paper \cite{Gr83}, and later by his book \cite{Gr99}.
Our goal is to prove a strengthened version with a remainder term of
Pu's systolic inequality~$\sys^2(g)\leq\frac\pi2\area(g)$ (for an
arbitrary metric~$g$ on~$\RP^2$), analogous to Bonnesen's
inequality~$L^2 - 4\pi A \geq \pi^2(R-r)^2$, where~$L$ is the length
of a Jordan curve in the plane,~$A$ is the area of the region bounded
by the curve,~$R$ is the circumradius and~$r$ is the inradius.
Note that both the original proof in Pu (\cite{Pu52}, 1952) and the
one given by Berger (\cite{Be65}, 1965, pp.\;299--305) proceed by
averaging the metric and showing that the averaging process decreases
the area and increases the systole. Such an approach involves a
5-dimensional integration (instead of a 3-dimensional one given here)
and makes it harder to obtain an explicit expression for a remainder
term. Analogous results for the torus were obtained in \cite{Ho09}
with generalisations in \cite{Ba19}--\cite{Sa11}.
\section{The results}
We define a closed~$3$-dimensional manifold~$M \subseteq \R^3 \times
\R^3$ by setting
\[
M=\{ (v,w) \in \R^3 \times \R^3 \colon \; v\cdot v =1, \ w\cdot w=1, \
v\cdot w=0 \}
\]
where~$v \, \cdot \, w$ is the scalar product on~$\R^3$. We have a
diffeomorphism~$M\to SO(3,\R)$,~$(v,w) \mapsto (v,w,v\times w)$,
where~$v\times w$ is the vector product on~$\R^3$. Given a
point~$(v,w) \in M$, the tangent space~$T_{(v,w)}M$ can be identified
by differentiating the three defining equations of~$M$ along a path
through~$(v,w)$. Thus
\[
T_{(v,w)}M=\{ (X,Y) \in \R^3 \times \R^3\colon X\cdot v =0, \ Y\cdot
w=0, \ X\cdot w+Y\cdot v=0 \}.
\]
We define a Riemannian metric~$g_M$ on~$M$ as follows. Given a point
$(v,w)\in M$, let~$n=v\times w$ and declare the basis~$(0,n) , (n,0) ,
(w,-v)$ of~$T_{(v,w)}M$ to be orthonormal. This metric is a
modification of the metric restricted to~$M$
from~$\R^3\times\R^3=\R^6$. Namely, with respect to the Euclidean
metric on~${\R}^6$ the above three vectors are orthogonal and the
first two have length 1. However, the third vector has Euclidean
length~$\sqrt{2}$, whereas we have defined its length to be 1. Thus
if~$A\subseteq T_{(v,w)}M$ denotes the span of~$(0,n)$ and~$(n,0)$,
and~$B\subseteq T_{(v,w)}M$ is spanned by~$(w,-v)$, then the metric
$g_M$ on~$M$ is obtained from the Euclidean metric~$g$ on~$\R^6$
(viewed as a quadratic form) as follows:
\begin{equation}
g_M = g\!\downharpoonright_A^{\phantom{I}}+\,\frac12\,
g\!\downharpoonright_B^{\phantom{II}}.
\end{equation}
Each of the natural projections~$p,q \colon M \to S^2$ given
by~$p(v,w)=v$ and~$q(v,w)=w$, exhibits~$M$ as a circle bundle
over~$S^2$.
\begin{lemma}
\label{l101}
The maps~$p$ and~$q$ on~$(M,g_M)$ are Riemannian submersions, over the
unit sphere~$S^2\subseteq\R^3$.
\end{lemma}
\begin{proof}
For the projection~$p$, given~$(v,w) \in M$, the vector~$(0,n)$ as
defined above is tangent to the fiber~$p^{-1}(v)$. The other two
vectors,~$(n,0)$ and~$(w,-v)$, are thus an orthonormal basis for the
subspace of~$T_{(v,w)}M$ normal to the fiber, and are mapped by~$dp$
to the orthonormal basis~$n,w$ of~$T_v S^2$.
\end{proof}
The projection~$p$ maps the fiber~$q^{-1}(w)$ onto a great circle
of~$S^2$. This map preserves length since the unit vector~$(n,0)$,
tangent to the fiber~$q^{-1}(w)$ at~$(v,w)$, is mapped by~$dp$ to the
unit vector~$n \in T_v S^2$. The same comments apply when the roles
of~$p$ and~$q$ are reversed.
In the following proposition, integration takes place respectively
over great circles~$C\subseteq S^2$, over the fibers in~$M$,
over~$S^2$, and over~$M$. The integration is always with respect to
the volume element of the given Riemannian metric. Since~$p$ and~$q$
are Riemannian submersions by Lemma~\ref{l101}, we can use Fubini's
Theorem to integrate over~$M$ by integrating first over the fibers of
either~$p$ or~$q$, and then over~$S^2$; cf.\;\cite[Lemma\;4]{Cs18}.
By the remarks above, if~$C=p(q^{-1}(w))$ and~$f\colon S^2 \to \R$
then~$\int_{q^{-1}(w)} f\circ p = \int_C f$.
\begin{proposition}
\label{inq}
Given a continuous function~$f\colon S^2 \to \R^+$, we define~$m\in\R$
by setting~$m=\min \{\int_C f\colon C \subseteq S^2 \text{ a great
circle} \}$. Then
\[
\frac{m^2}{\pi}\leq\frac{1}{4\pi}\bigg(\int_{S^2}f\bigg)^2\leq\int_{S^2}f^2,
\]
where equality in the second inequality occurs if and only if~$f$ is
constant.
\end{proposition}
\begin{proof}
Using the fact that~$M$ is the total space of a pair of Riemannian
submersions, we obtain
\[
\begin{aligned}
\int_{S^2} f & = \int_{S^2}\bigg(\frac{1}{2\pi}\int_{p^{-1}(v)} f\circ
p\bigg)
\\&= \frac{1}{2\pi}\int_M f \circ p
\\&=
\frac{1}{2\pi}\int_{S^2}\bigg(\int_{q^{-1}(w)} f\circ p\bigg) \\&\geq
\frac{1}{2\pi}\int_{S^2}m =2m,
\end{aligned}
\]
proving the first inequality. By the Cauchy--Schwarz inequality, we
have
\[
\Big(\int_{S^2} 1 \cdot f \Big)^2 \leq 4 \pi \int_{S^2} f^2,
\]
proving the second inequality. Here equality occurs if and only if
$f$ and~$1$ are linearly dependent, i.e., if and only if~$f$ is
constant.
\end{proof}
We define the quantity~$V_f$ by setting~$V_f =\int_{S^2} f^2 -
\frac{1}{4\pi}\big(\int_{S^2} f \big)^2$. Then Proposition \ref{inq}
can be restated as follows.
\begin{corollary}
\label{c1}
Let~$f\colon S^2 \to \R^+$ be continuous. Then
\[
\int_{S^2} f^2 - \frac{m^2}{\pi} \geq V_f \geq 0,
\]
and~$V_f=0$ if and only if~$f$ is constant.
\end{corollary}
\begin{proof}
The proof is obtained from Proposition~\ref{inq} by noting that~$a\leq
b \leq c$ if and only if~$c-a \geq c-b \geq 0$.
\end{proof}
We can assign a probabilistic meaning to~$V_f$ as follows. Divide the
area measure on~$S^2$ by~$4\pi$, thus turning it into a probability
measure~$\mu$. A function~$f\colon S^2 \to \R^+$ is then thought of
as a random variable with
expectation~$E_\mu(f)=\frac{1}{4\pi}\int_{S^2}f$. Its variance is
thus given by
\[
\Var_\mu(f)=E_\mu(f^2)-\big(E_\mu(f)\big)^2 = \frac{1}{4\pi}\int_{S^2}
f^2 - \bigg(\frac{1}{4\pi}\int_{S^2} f\bigg)^2 = \frac{1}{4\pi} V_f.
\]
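The identity $V_f = 4\pi \Var_\mu(f)$ and the nonnegativity of the variance are easy to check numerically; the sketch below is ours and uses a simple midpoint quadrature on the round sphere (grid sizes and the test function are arbitrary choices).

```python
import numpy as np

# Midpoint quadrature grid on the unit sphere in spherical coordinates.
nt, nphi = 200, 400
theta = (np.arange(nt) + 0.5) * np.pi / nt
azim = np.arange(nphi) * 2 * np.pi / nphi
T, P = np.meshgrid(theta, azim, indexing='ij')
dA = np.sin(T) * (np.pi / nt) * (2 * np.pi / nphi)       # area element

def V(f_vals):
    """V_f = int f^2 - (int f)^2 / (4 pi), evaluated on the grid."""
    return np.sum(f_vals**2 * dA) - np.sum(f_vals * dA)**2 / (4 * np.pi)

f = 1.0 + 0.5 * np.cos(T)                # a non-constant test function
assert abs(V(f) - np.pi / 3) < 1e-2      # here 4*pi*Var_mu(f) = pi/3 exactly
assert abs(V(np.ones_like(T))) < 1e-3    # constant f: V_f vanishes
```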
The variance of a random variable~$f$ is non-negative, and it vanishes
if and only if~$f$ is constant. This reproves the corresponding
properties of~$V_f$ established above via the Cauchy--Schwarz
inequality.
Now let~$g_0$ be the metric of constant Gaussian curvature~$K=1$
on~$\RP^2$. The double covering~$\rho \colon S^2 \to (\RP^2, g_0)$ is
a local isometry. Each projective line~$C\subseteq\RP^2$ is the image
under~$\rho$ of a great circle of~$S^2$.
\begin{proposition}
\label{inqP}
Given a function~$f\colon \RP^2 \to \R^+$, we define~$\bar m\in\R$ by
setting~$\bar{m}=\min \{ \int_C f \colon C \subseteq \RP^2 \ \text{a
projective line} \}$. Then
\[
\frac{2\bar{m}^2}{\pi} \ \leq \ \frac{1}{2\pi}\bigg(\int_{\RP^2} f
\bigg)^2 \ \leq \ \int_{\RP^2} f^2,
\]
where equality in the second inequality occurs if and only if~$f$ is
constant.
\end{proposition}
\begin{proof}
We apply Proposition \ref{inq} to the composition~$f\circ\rho$. Note
that we have $\int_{\rho^{-1}(C)} f\circ \rho = 2 \int_C f$
and~$\int_{S^2} f \circ \rho = 2 \int_{\RP^2} f$. The condition
for~$f$ to be constant holds since~$f$ is constant if and only
if~$f\circ \rho$ is constant.
\end{proof}
For~$\RP^2$ we define~$\bar{V}_f = \int_{\RP^2} f^2 -
\frac{1}{2\pi}\big(\int_{\RP^2} f \big)^2 =\frac{1}{2}V_{f\circ\rho}$.
We obtain the following restatement of Proposition \ref{inqP}.
\begin{corollary}
\label{c1P}
Let~$f\colon\RP^2 \to \R^+$ be a continuous function. Then
\[
\int_{\RP^2} f^2 - \frac{2\bar{m}^2}{\pi} \geq \bar{V}_f \geq 0,
\]
where~$\bar{V}_f=0$ if and only if~$f$ is constant.
\end{corollary}
Relative to the probability measure induced by~$\frac{1}{2\pi}g_0$
on~$\RP^2$, we have~$E(f) = \frac{1}{2\pi}\int_{\RP^2}f$, and
therefore~$\Var(f)=\frac{1}{2\pi}\bar{V}_f$, providing a probabilistic
meaning for the quantity~$\bar{V}_f$, as before.
By the uniformization theorem, every metric~$g$ on~$\RP^2$ is of the
form~$g=f^2 g_0$ where~$g_0$ is of constant Gaussian curvature~$+1$,
and the function~$f\colon\RP^2\to\R^+$ is continuous. The area of~$g$
is~$\int_{\RP^2} f^2$, and the~$g$-length of a projective line~$C$ is
$\int_C f$. Let~$L$ be the shortest length of a noncontractible
loop. Then~$L\leq \bar{m}$ where~$\bar{m}$ is defined in
Proposition~\ref{inqP}, since a projective line in~$\RP^2$ is a
noncontractible loop. Then Corollary~\ref{c1P} implies~$\area(\RP^2,
g) - \frac{2L^2}{\pi} \ \geq \ \bar{V}_f \ \geq \ 0$.
If~$\area(\RP^2, g) = \frac{2L^2}{\pi}$ then~$ \bar{V}_f = 0$, which
implies that~$f$ is constant, by Corollary \ref{c1P}. Conversely, if
$f$ is a constant~$c$, then the only geodesics are the projective
lines, and therefore~$L=c\pi$. Hence~$\frac{2L^2}{\pi} = 2\pi
c^2=\area(\RP^2)$. We have thus completed the proof of the following
result strengthening Pu's inequality.
\begin{theorem}
\label{pu}
Let~$g$ be a Riemannian metric on~$\RP^2$. Let~$L$ be the shortest
length of a noncontractible loop in~$(\RP^2,g)$.
Let~$f\colon\RP^2\to\R^+$ be such that~$g=f^2g_0$ where~$g_0$ is of
constant Gaussian curvature~$+1$. Then
\[
\area(g) - \frac{2L^2}{\pi}\geq 2\pi \Var(f),
\]
where the variance is with respect to the probability measure induced
by~$\frac{1}{2\pi} g_0$. Furthermore,
equality\,~$\area(g)=\frac{2L^2}{\pi}$ holds if and only if~$f$ is
constant.
\end{theorem}
\bibliographystyle{amsalpha}
\section*{Introduction}
\label{intro}
Measuring the neutron lifetime is a whole epoch in the history of nuclear physics.
This era began in the late 1940s and consists of two periods.
In the first period, the results for the neutron lifetime exceeded 900 seconds.
The so-called beam method measured a counting rate of protons or
electrons from the decay of neutrons $n \to p+e^-+\tilde \nu $
in a neutron beam of a nuclear reactor ~\cite{SS59}, ~\cite{Chr72}, ~\cite{Brn80}.
For the second period, typical neutron lifetimes
were less than 900 seconds. During this period, another method
of measuring the neutron lifetime becomes the leading one, namely
the method of storing ultracold neutrons in a trap
until beta-decay. In 2018, this method yielded
a neutron lifetime of $881.5 \pm 0.9$ s \cite{Srb2018}.
The most accurate result of the beam method is
$\tau_n=887.7 \pm2.2$ s \cite{Yue2013}.
In several cases the results of the experiments were recalculated
and shifted from the upper range of values to the lower one.
In these cases, the shift in the lifetime estimates significantly
exceeded the quoted experimental errors. In the case of \cite{SS59},
the 1959 result of $1013 \pm 26$ seconds was reduced in 1978 to the
value of $877 \pm 8$ seconds \cite{BS78}, while the basic scheme of
the experiment was repeated.
In the experiment \cite{Brn80} the 1980 result was $937 \pm 18$
seconds, but 16 years later the authors published their result
\cite{Brn96} as $889.2 \pm 4.8$ s. Averaging all the
results over the whole measurement period mentioned above, without any
restriction, leads to an average neutron lifetime of about 900 s.
The beam experiments use the differential decay equation:
$$
\frac{dN_d}{dt}=\frac{1}{\tau_n} \cdot \varepsilon \times N, \eqno (1)
$$
where $\frac{dN_d}{dt}$ is the counting rate of the decay
electron (or proton) detector, $\tau_n$ is the neutron
lifetime, $N$ is the number of neutrons at time $t$ in some
region of the neutron beam, $\varepsilon$ is the total
efficiency. The total efficiency includes the efficiency of
neutron detection by a neutron detector, the efficiency of
collecting electrons (or protons) from the decay region to
the electron (proton) detector, and the efficiency of their registration
by the detector. The most difficult problem is the
exact experimental determination of the number $N_d=(\varepsilon \times N)$
on the right side of equation (1), i.e. the number of neutrons
whose decays are in the field of view of the electron (or proton) detector.
The value $(\varepsilon \times N)$ includes all sources of
systematic errors of the beam method.
In recent years a new method for the beam experiments has been developed.
This method eliminates the need to measure precisely the number of neutrons
in the beam and to determine the absolute values
of the efficiency components.
\section{ Variation method of neutron decay scale tuning}
\label{sect1:VARM}
The method proposed by the author \cite{VV1} uses a step-wise variation
of the initial number of neutrons passing through a region
controlled by a detector of electrons. The method is based on a system
of differential equations of type (1) for $k$ steps of
neutron numbers:
$$
R_d(i)=\frac{1}{\tau_n} \times N_d(i), \eqno (2)
$$
where $R_d(i)$ is the electron count rate from the detector,
$N_d(i)$ is the number of neutrons seen by the electron detector at
the $i$-th variation step, $i=1, 2, 3, \ldots, k$. The problem of measuring
the number of neutrons is not posed here at all. At each
$i$-th stage of the neutron number variation the count rate
$R_d(i)$ of the electron detector is measured and, after
multiple iterations, the count rate error $\sigma_d(i)$ at each
stage is determined. As a result, an array $(R_d(i),\sigma_d(i))$
with $k$ lines is formed. To simplify the notation, we omit
the indices $d$ and $n$ of (2) in what follows.
The next step is to represent an unknown set of neutron
numbers $N_i$ by members of an arithmetic progression
$N_i\approx \frac{1}{\mu}\times m_i$ with $\frac{1}{\mu}$
as a decimal common difference. The common
difference of the required arithmetic progression is the step of
the neutron number scale that describes the distribution of
counting rates with their errors. The integer $m_i$ is the
number of the scale division corresponding to the neutron
number $N_i$. The parameter $\mu$, the inverse of the scale
step, is called a scale factor or $\mu$-factor.
Then the system of differential equations of decay has the following form:
$$
\tau \times (R_i \pm \sigma_i)\approx \frac{1}{\mu} \times m_i, \eqno (3)
$$
where $i=1, 2, 3, \ldots, k$, and $k$ is the full number of
variation steps.
The goal is set as follows. It is necessary to choose an
optimal scale step to describe in the best way the measured data
array of count rates by a certain sample of members of the
obtained arithmetic progression of neutron numbers.
The scale step $\frac{1}{\mu}$ uniquely identifies the set
of integers $m_i$ -- the scale division numbers corresponding
to the array of pairs $(R_i,\sigma_i)$ -- for a given value of the trial
lifetime $\tau$.
The estimate $\aleph_i$ of neutron number $N_i$ is
$$
\aleph_i =
\operatorname{round} \left[\frac{\operatorname{round} \left[\mu \cdot \tau \cdot R_i,0 \right]}{\mu}, p \right] \mbox{.} \eqno (4)
$$
The operator $\operatorname{round}[C, p]$ rounds to the nearest number
with $p$ significant digits. The operator $\operatorname{round}[C, 0]$ means
rounding $C$ to the nearest integer.
The following error functional is constructed for the
required range of $\tau$ for different $p$:
$$
F_{\mu,p}(\tau)=\sum_{i=1}^k \frac{(R_i-\frac{1}{\tau} \cdot
\aleph_i(p,\mu,\tau))^2}{\sigma^2_i} \mbox{.} \eqno (5)
$$
The scale factor $\mu_0$ is to be selected for the best
approximation by the estimate
$\aleph_i(p,\mu_0,\tau)$ for any $p$.
The neutron lifetime $\tau_0$ is determined from the equation
$$
\frac{dF_{\mu_0,p}(\tau)}{d\tau}=0 \mbox{.} \eqno (6)
$$
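The estimate (4) and the minimization (5)--(6) can be sketched in a few lines of code (all numbers below -- the scale factor, accuracy, neutron numbers and error level -- are invented for illustration; real data enter only through the measured pairs $(R_i,\sigma_i)$):

```python
import math

def round_sig(x, p):
    """The operator round[x, p]: round x to p significant digits."""
    if x == 0 or p == 0:
        return float(round(x))
    return round(x, p - 1 - math.floor(math.log10(abs(x))))

def aleph(R, tau, mu, p):
    """Estimate (4) of the neutron number from a count rate R."""
    return round_sig(round(mu * tau * R) / mu, p)

def F(tau, data, mu, p):
    """Error functional (5)."""
    return sum((R - aleph(R, tau, mu, p) / tau)**2 / s**2 for R, s in data)

# Synthetic data: neutron numbers on a scale with step 1/mu, mu = 10,
# true lifetime 883.31 s, relative rate errors ~1e-6 (illustrative values).
mu, p, tau_true = 10, 3, 883.31
N = [m / mu for m in (7, 13, 29, 41)]
data = [(n / tau_true, 1e-6 * n / tau_true) for n in N]

# Grid scan of the trial lifetime, in place of solving Eq. (6) exactly.
taus = [860.0 + 0.01 * k for k in range(8001)]   # 860 .. 940 s
tau_best = min(taus, key=lambda t: F(t, data, mu, p))
print(tau_best)   # recovers the true lifetime within the grid step
```

The minimum of $F$ is sharp because any trial lifetime off the true value either misidentifies the integers $m_i$ or leaves residuals far exceeding the small count-rate errors.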
\begin{figure}[h!]
\includegraphics{fig1_lines}
\caption{Description of count rates with integer neutron numbers.}
\label{fig1}
\end{figure}
Fig. ~\ref{fig1} illustrates the method in the case of integer neutron
numbers. The figure shows the best correspondence of count
rates to the set of neutron numbers 1, 2, 3, 5 by the straight
line L1. The path AB-BC-CD-DE-EF describes the action of the
operator (4) in the case of the count rate $R2$ for the line L2.
At the counting rate $R3$ the operator implements the
trajectory GH-HK-KD-DE-EF. The description of the line L2 in
integers leads to an increase in the error functional by the deviations
FA and GF. Hence a deviation from the optimal line (from L1 to
L2) leads to an increase in the approximation error while
processing in the same neutron number scale. The main
requirement for obtaining an accurate result is a high accuracy
of counting rate measurements.
\section{Experimental data}
\label{sect2:ExpData}
The experimental data on the background measurement in the last ITEP
experiment on magnetic storage of ultracold neutrons (UCN) were used.
The background consisted of electrons generated in the vacuum
chamber of the magnetic trap by the decay of neutrons.
The electrons were transported from the trap to the UCN detector \cite{VV2}.
The proportional gas chamber of the UCN detector operated as an
electron detector of high efficiency.
The vertical and horizontal channels of the reactor with open shutters
were sources of thermal, intermediate and fast neutrons of the neutron
background. The set of those neutrons penetrating through
the walls of the trap was the generator of decay electrons.
To measure the electron background, a separate long experiment
was performed. The low pressure gas detector was specially
optimized for counting electrons emanating from the magnetic trap.
A special absorber was installed into the trap chamber for
eliminating the ultracold neutrons. A virtually complete storage cycle for electrons
collected in the trap from background neutrons was carried out.
The electron counts in the intervals with the magnetic shutter
on and in the drain intervals with the magnetic shutter off
were measured. The counting of the electron background
from the neutron flux through the magnetic trap
was an analogue of the beam experiment. The count of electrons
flowing to the detector from the magnetic trap changed cyclically with
the changes in the set of simultaneously operating neutron
channels of the reactor. The data on background measurements in
the readout intervals of the outgoing electrons were processed
and are shown in increasing order
in Fig.~\ref{fig2}--Fig.~\ref{fig4}.
\begin{figure}
\includegraphics{fig2_rates_a}
\caption{Electron count rate vs number. Number 1-71.}
\label{fig2}
\end{figure}
\begin{figure}
\includegraphics{fig3_rates_b}
\caption{Electron count rate vs number. Number 72-145.}
\label{fig3}
\end{figure}
\begin{figure}
\includegraphics{fig4_rates_c}
\caption{Electron count rate vs number. Number 147-151.}
\label{fig4}
\end{figure}
The full series of 152 count rates is divided into two series.
There are seventy-one values (``series 71'', S-71) in the first
series (Fig. ~\ref{fig2}). The most accurate among these values is
$6.6667\cdot10^{-3} \pm 6\cdot10^{-7}$ $s^{-1}$. In addition,
eighty-one values formed the second series (``series 81'', S-81),
shown in Fig.~\ref{fig3}--Fig.~\ref{fig4}. Out of these 81 measurements,
for reasons of scale, two points are not shown:
$$
\mbox{point number} \quad 146: R = 1.355 \cdot10^{-2} \pm 6 \cdot 10^{-5} \quad s^{-1}
$$
and
$$
\mbox{point number} \quad 152: R = 3.738 \cdot10^{-2} \pm 4 \cdot10^{-4}\quad s^{-1} \mbox{.}
$$
Several steps of the electron background are visible in
Fig.~\ref{fig2}--Fig.~\ref{fig4}.
Fig.~\ref{fig4} shows the count of electrons flowing out of the
magnetic trap after opening the magnetic shutter.
All results for the different reading-out intervals were obtained
over a period of more than 100 days. The paper \cite{VV2}
gives the scheme and a description of the
experimental set-up in more detail.
\section{Decay asymmetry and neutron lifetime}
\label{sect3:DAs}
The well-known phenomenon of electron-spin asymmetry of the neutron
beta decay $n \to p+e^{-}+\tilde \nu$ reveals itself in the fact that neutron
decays with an electron emitted in the direction of the neutron spin occur
less frequently than neutron decays with an electron emitted against
the neutron spin direction. The neutron decay probability
\cite{Jacks}, modified for the emission of decay electrons, is
$$
dW(E_e,\Theta_e)=W_0dE_ed\Theta_e\left(1+A\cdot\frac{v_e}{c}
\cdot\cos\theta_e\right) \mbox{,} \eqno (7)
$$
where $A$ is the correlation coefficient of the electron emission with the direction
of the neutron spin, $\theta_e$ is an electron emission angle
relative to the direction of the neutron spin,
$\Theta_e$ is a solid angle of electron emission, $\frac{v_e}{c}$ is an electron helicity,
and $W_0$ is a constant. From numerous experiments \cite{PDG} it is known
that the coefficient $A = - 0.1173 \pm 0.0013$. Thus, neutron decays with electron
emission in the direction of the neutron spin and against the spin differ qualitatively and
quantitatively. The decay asymmetry in the case of a transversely polarized
neutron beam is the relative difference between the numbers of electrons
emitted in the direction of the neutron spin and against it. However, it
usually remains unnoticed that even for a completely depolarized ensemble
of neutrons the decay asymmetry leads to two different frequencies of
electron generation in the decay region. In other words, the asymmetry of
neutron decay is the phenomenon of the existence of two $\beta$-decay
constants, i.e. two reduced decay frequencies
defined at different subsets of neutrons. The total set $T$ of neutrons is a sum
of two subsets. Those are the subset of $L$-neutrons with decays by an electron
ejection against the direction of the neutron spin ($L$-channel) and the subset
of $R$-neutrons, decaying with the emission of an electron in the direction of
the neutron spin ($R$-channel). Hence $T=L\cup R$ and the number of neutrons
$N_T$ in the total set $T$ is the sum of $L$-neutrons ($L$-subset) and $R$-neutrons
($R$-subset): $N_T=N_L+N_R$. The decay constants
for these neutron sets are defined as
$$
\lambda_S=-\frac{1}{N_{S}}\cdot\frac{dN_{S}}{dt}\mbox{,} \eqno (8)
$$
where $S=T, L, R$.
Differentiating the sum $N_T$ in expression (8) for $S=T$, with
subsequent separation of the partial constants for $S=L$ and $S=R$,
leads to the expression:
$$
\lambda_T=\lambda_L \cdot W_L+ \lambda_R \cdot W_R \mbox{.} \eqno (9)
$$
Here the total decay constant $\lambda_T$ has the form of a weighted average
of the partial constants $\lambda_L$ and $\lambda_R$.
The weights $W_L$ and $W_R$, upon using the parallel decay rule
\cite[p.~344]{Blatt} in the form $\frac{N_R}{N_L}=\frac{\lambda_R}{\lambda_L}$,
are given by the following relations
$$
W_L=\frac{N_L}{N_L + N_R}=\frac{\lambda_L}{\lambda_L + \lambda_R} \mbox{,} \eqno (10.1)
$$
$$
W_R=\frac{N_R}{N_L + N_R}=\frac{\lambda_R}{\lambda_L + \lambda_R} \mbox{.} \eqno (10.2)
$$
The lifetime of neutrons in every $S$-set is
$\tau_{NS} = \frac {1} {\lambda_S}$
and the total lifetime on the full set $T$ is equal to
$\tau_{NT} = \frac {1} {\lambda_T}$.
Therefore, an expression for the total lifetime follows from (9) in the form of
a weighted average value, namely:
$$
\tau_{NT}=\tau_{NL} \cdot W_{\tau L}+\tau_{NR} \cdot W_{\tau R}\mbox{,} \eqno (11)
$$
where the weights for the partial lifetimes receive the following forms:
$$
W_{\tau L} = \frac{\tau^2_{NR}}{\tau^2_{NL}+\tau^2_{NR}}\mbox{,} \eqno (12.1)
$$
$$
W_{\tau R} = \frac{\tau^2_{NL}}{\tau^2_{NL}+\tau^2_{NR}}\mbox{.} \eqno (12.2)
$$
The introduced partial decay constants receive the following dependence on
the asymmetry parameter $\Delta$
$$
\lambda_L=\lambda_0\cdot (1+\Delta) \mbox{,} \eqno (13.1)
$$
$$
\lambda_R=\lambda_0\cdot (1-\Delta) \mbox{,} \eqno (13.2)
$$
where $\lambda_0$ is the arithmetic mean of the introduced constants and
$\Delta=|A|\cdot\frac{\bar v_e}{c}$, where $\frac{\bar v_e}{c}$ is the
average helicity of electrons emitted during neutron decay.
The average helicity of electrons can be calculated from the electronic
neutron decay spectrum, for example, from the results of \cite{Robsn}.
The partial lifetimes are
$$
\tau_{NL}=\tau_{NCenter} \cdot (1-\Delta) \mbox{,} \eqno (14.1)
$$
$$
\tau_{NR}= \tau_{NCenter} \cdot (1+\Delta) \mbox{,} \eqno (14.2)
$$
where $\tau_{NCenter}$ is the central neutron lifetime, i.e. the midpoint
between the values of $\tau_{NL}$ and $\tau_{NR}$.
The weighted average decay constant $\lambda_T$ is related to the average
decay constant $\lambda_0$ as follows:
$$
\lambda_T=\lambda_0 \cdot (1+\Delta^2) \mbox{.} \eqno (15)
$$
The total neutron lifetime $\tau_{NT}$ vs $\Delta$ is
$$
\tau_{NT}=
\tau_{NCenter} \cdot \left(1-\frac{2 \cdot \Delta^2}{1+\Delta^2}\right) \mbox{.} \eqno (16)
$$
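The numbers behind Eqs. (13)--(16) can be checked directly (a sketch; the helicity value 0.824 anticipates Section 6, and the central lifetime of 900 s is the value determined below):

```python
A = -0.1173          # electron-spin correlation coefficient
v_over_c = 0.824     # average electron helicity
delta = abs(A) * v_over_c

tau_center = 900.0                                     # central lifetime, s
tau_L = tau_center * (1 - delta)                       # Eq. (14.1)
tau_R = tau_center * (1 + delta)                       # Eq. (14.2)
tau_T = tau_center * (1 - 2*delta**2 / (1 + delta**2)) # Eq. (16)

print(delta**2, 1/107)      # ~0.009342 vs ~0.009346
print(tau_L, tau_R, tau_T)  # roughly 813 s, 987 s, 883.34 s
```

So $\Delta^2$ computed from the measured correlation coefficient and the average helicity indeed lands very close to $1/107$, and the weighted average lifetime falls about 17 s below the central one.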
Therefore, the dependence of the error functional (5) on the trial lifetime is symmetric with respect to the point $\tau_{NCenter}$, and according to (16) the weighted average lifetime lies to the left of this center of symmetry.
The goal of this study, therefore, turned out to be two-fold.
Firstly, it is a direct determination of an experimental value for the observed neutron lifetime, i.e. the weighted average neutron lifetime.
On the other hand, it became necessary to determine the value of the ``central'' neutron lifetime. In the case of a convincing correspondence between a determined displacement value and a calculated value (16), it is advisable to introduce new physical quantities into the physical dictionary -- the lifetimes of $L$-neutrons and $R$-neutrons and to estimate their numerical values using formulas (14.1) and (14.2).
\section{Direct determination of the neutron lifetime from the experimental data}
\label{sect4:DDED}
The interval of the trial lifetime from 860 seconds to 940 seconds is quite
informative for solving the given problem. This interval includes
the results of measurements of the lifetime of the last three decades.
To determine the neutron lifetime, it is necessary to find the minimum value
of the scale-factor $\mu$ providing the ``reduction-to-one'' condition.
This condition means that the minimum value of the error
functional is the closest to unity for any data series in the considered
range of the lifetime.
The ``reduction-to-one'' is applied to the data shown in Fig. ~\ref{fig2}--Fig.~\ref{fig4}
for various values of the scale step in the indicated range of the trial
neutron lifetime $\tau$. The results of the data processing
are illustrated by Fig. ~\ref{fig5}--Fig.~\ref{fig7}.
\begin{figure}
\includegraphics{fig5_sigma_152}
\caption{ Error functional dependence on trial lifetime for S-152.}
\label{fig5}
\end{figure}
Fig.~\ref{fig5} shows the result of the reduction-to-one
for the S-152 series with $p=3$. The value of the functional minimum
closest to unity is 1.074, corresponding to
the lifetime of 883.305 s for the scale-factor $\mu =72$. At the same
scale-factor $\mu$, the minima of the functional of the second, third and
fourth orders of accuracy are in good agreement. The value for
the neutron lifetime is
$\tau_{NT}=(529983\pm10)\cdot\frac{1}{6}\cdot10^{-2}$ s,
obtained using the half-width of the parabola of the third-order functional
at the height $\chi^2=1.2$.
\begin{figure}
\includegraphics{fig6_sigma_71_81}
\caption{ Error functional dependence on trial lifetime for S-71 and S-81.}
\label{fig6}
\end{figure}
Fig. ~\ref{fig6} shows the results of the reduction-to-one for
the S-71 (1) and S-81 (2) series separately. The dependence of the average
weighted functional over two series on the lifetime gives the result for
the neutron lifetime $\tau_{NT}=(264992\pm7)\cdot\frac{1}{3}\cdot10^{-2}$ s.
Thus, the complete result for the neutron lifetime is
$\tau_{NT}=883.31\pm0.02$ s, 95\% CL.
All these facts prove the following:
a) both independent data series S-71 and S-81 are qualitatively homogeneous
and describe electrons from the decay of neutrons without admixture
of an additional background;
b) the results of independent data series S-71 and S-81 are compatible
within statistical errors, reasonably processed together and do not require
an additional introduction of a systematic error to describe the consolidated result.
Therefore, the applied method has reached its goal of eliminating known
systematic errors for higher accuracy in the experimental determination of
the neutron lifetime.
Additional studies were made to prove the existence of a common center of
symmetry of the error functional on the trial lifetime interval
from 895 to 905 seconds for all orders of accuracy $p = 0, 1, 2, \cdots $.
\begin{figure}
\includegraphics{fig7_symmetry}
\caption{Center of symmetry of the error functional for S-152 at $\mu=72$}
\label{fig7}
\end{figure}
Fig.~\ref{fig7} shows, for the scale-factor $\mu=72$, the dependencies
(1)-(3) of the error functional $F_{\mu,p}(\tau)$: (1) integers ($p = 0$); (2) $p=1$; (3) $p=2$.
The coincidence of the minima of all three orders of the functional on the
$\tau$-scale at the point of 899.975 s, together with the symmetry of the left
and right wings of the functional, shows that this point is the center of
symmetry and confirms the result of \cite{VV2}. This value can serve as an
estimate of the so-called ``central'' neutron lifetime.
The symmetry of the coordinates of the jumps of the functional (3) with respect
to this $\tau_{NCenter}=899.975$ s point also confirms its central position.
The dependence of the functional on the trial lifetime allows determining
the value of the central neutron lifetime with a quite moderate error,
$\tau_{NCenter}=(179995\pm 4)\cdot 5 \cdot 10^{-3}$ s. It is important
to emphasize that the error functional in the trial-lifetime range of
750-1050 s has only one such center of symmetry for all accuracy levels $p$.
\section{Upper limit of a systematic error}
\label{sect5:SYS}
A hypothetical difference between the electrons flowing through the closed
magnetic shutter and the electrons flowing onto the detector through the open
magnetic shutter may appear as a source of systematic error. If only electrons
of the first type are present in the first series S-71, then in the second
series S-81 there is a fraction of electrons of the second type providing
the maximum values of the counting rate.
The result indicated in Fig.~\ref{fig6} proves that the independently
obtained lifetime values in these series are mutually compatible within the
statistical error. In the case of functional (1), the result is $883.29 \pm 0.03$ s,
and in the case of functional (2), the result is $883.33 \pm 0.03$ s.
The errors indicated here are equal to the half-width of the parabolas at the level $\chi^2=1.3$.
It means that the displacement is within the limits of the statistical errors
at 95\% CL.
Calculation of the half-sum of these lifetimes and its error gives
$\tau_{NT}=883.31\pm 0.02$ s, and comparison with the result
shown in Fig.~\ref{fig5} confirms complete coincidence. Moreover,
the minima of the functional for S-152, $\mu=72$ at $p=2$,
$p=3$ and $p=4$ all coincide within those limits.
Nevertheless, the fact that the minima at the accuracy $p=4$
shift from 883.29 s for S-71 to 883.33 s for S-81 gives
a reason to attribute the shift to some unaccounted factors and to introduce
on this basis a systematic error equal to half of the bias value.
Thus, the estimate of the systematic error is 0.02 s, and the result for
the neutron lifetime as a weighted average is
$\tau_{NT}=883.31 \pm 0.02(stat.)\pm 0.02(sys.)$ s, 95\% CL.
In the same format, the central neutron lifetime has got the following
value $\tau_{NCenter}=899.98 \pm 0.02(stat.) \pm 0.02(sys.)$ s.
\section{Discussion of the result}
\label{sect6:DISR}
In detail, the two previous results quoted in the introduction are
$$
887.7 \pm 1.2 (stat.) \pm 1.9 (syst.) \quad \mbox{s}
$$ for the beam method,
and
$$
881.5 \pm 0.7 (stat.) \pm 0.6 (syst.) \quad \mbox{s}
$$
for the storage method.
Taking into account the limits within double total errors for these two results,
it is easy to note the coincidence of the result of this work with the lower limit
for the first result (883.3 s) and with the upper limit for the second result (883.3 s).
Therefore, a good agreement between these three results, including the presented
one, is evident.
It also means that there are no grounds for any conclusions about the
so-called ``neutron anomaly'' as a possible interpretation of the difference
between the two mentioned results of measuring the neutron lifetime.
A side and unexpected result of the present research is the above evidence of
the difference between the two obtained neutron lifetimes: the first of them
is the weighted average value and the second is the ``central'' neutron lifetime.
Within the error limits (0.02 s), the obtained value
of the weighted average neutron lifetime in a more convenient form is
$$
\tau_{NT}=900 \cdot \frac{53}{54} \quad \mbox{s,}
$$
while the central lifetime is $\tau_{NCenter}\approx 900$ s within the same error.
The value of the relative shift
$$
\delta=\frac{\tau_{NCenter} - \tau_{NT}}{\tau_{NCenter}+\tau_{NT}}
$$
is a characteristic of the displacement of the weighted average lifetime relative to
the central lifetime. Then the estimate of the relative shift is
$$
\delta=\frac{1}{107} \mbox{.}
$$
The weighted average lifetime is connected with the central lifetime as
$$
\tau_{NT}=\tau_{NCenter} \cdot \left(1-\frac{2 \cdot \delta}{1+\delta}\right) \mbox{.}
\eqno (17)
$$
From (16) $\delta$ is interpreted as $\delta=\Delta^2$.
The obtained value $\Delta=\frac{1}{\sqrt{107}}$ is in good agreement with $A = - 0.1173 \pm 0.0013$
and $\frac{\bar v_e}{c}=0.824$. This value for the average helicity is confirmed
by the spectrum of electrons from the neutron decay \cite{Robsn}.
Thus, the displacement of the weighted average neutron lifetime relative to the
so-called central lifetime is a consequence of the electron-spin asymmetry of neutron decay.
The main purpose of this research is the direct determination of the observed neutron lifetime.
Nevertheless, it is easy to predict from the displacement the numerical values for
the lifetime of $L$-neutrons $\tau_{NL}$ (14.1) and
the lifetime of $R$-neutrons $\tau_{NR}$ (14.2). They are the following:
$$
\tau_{NL}=(2 \cdot 3 \cdot 5)^2 \cdot \left (1-\frac{1}{\sqrt{107}} \right) \quad \mbox{s,}
$$
$$
\tau_{NR}=(2 \cdot 3 \cdot 5)^2 \cdot \left (1+\frac{1}{\sqrt{107}} \right) \quad \mbox{s.}
$$
The errors of the indicated values are estimated not to exceed 0.08 s.
A direct determination of these parameters of the neutron ~$\beta$-decay will make
it possible in the future to indicate their values more accurately.
Now it is worth noticing that the weighted average of the mentioned above
values of $\tau_{NL}$ and $\tau_{NR}$ is in a good agreement with
the neutron lifetime obtained in this research from the experimental data.
This can be easily verified using numerical expressions for the lifetime weights of
(12.1) and (12.2), namely:
$$
W_{\tau L}=\frac{1}{2}+\frac{\sqrt{107}}{108} \mbox{,}
$$
$$
W_{\tau R}=\frac{1}{2}-\frac{\sqrt{107}}{108} \mbox{.}
$$
The calculation result by formula (11) gives 883.33 s,
which coincides with the experimental result of this work within $\pm 0.01$ s.
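This arithmetic can be reproduced directly (a sketch of the check, with $\Delta = 1/\sqrt{107}$ and $\tau_{NCenter}=900$ s taken from the text):

```python
from math import sqrt

delta = 1 / sqrt(107)
tau_center = 900.0
tau_L = tau_center * (1 - delta)   # L-neutron lifetime, Eq. (14.1)
tau_R = tau_center * (1 + delta)   # R-neutron lifetime, Eq. (14.2)

# Lifetime weights, Eqs. (12.1)-(12.2).
W_L = tau_R**2 / (tau_L**2 + tau_R**2)
W_R = tau_L**2 / (tau_L**2 + tau_R**2)

# Closed forms quoted in the text.
assert abs(W_L - (0.5 + sqrt(107)/108)) < 1e-12
assert abs(W_R - (0.5 - sqrt(107)/108)) < 1e-12

# Weighted average, Eq. (11): equals 900*53/54 = 2650/3 s.
tau_T = tau_L * W_L + tau_R * W_R
print(tau_T)
assert abs(tau_T - 2650/3) < 1e-9
```

Algebraically, $\tau_{NT} = \tau_{NCenter}\,(1 - 2\Delta\cdot\frac{\sqrt{107}}{108}) = 900\cdot\frac{106}{108}$, which is exactly the value $2\cdot 5^2\cdot\frac{53}{3}$ s quoted in the conclusion.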
\section{Conclusion}
\label{sect7:CONCL}
Using the proposed method, the neutron lifetime is determined with
the accuracy of 0.02 s. The system of differential equations of decay
with a step-wise variation of the number of neutrons turned out
to be a sufficient tool for determining the neutron lifetime by
the modified least-squares method.
Traditional sources of systematic errors inherent in the beam method of measuring
the neutron lifetime are successfully excluded. As a result, the accuracy of the neutron lifetime is significantly improved. The observed (weighted average) neutron lifetime obtained in this work, expressed in prime factors, is
$$
\tau_{NT}=2 \cdot 5^2 \cdot \frac{53}{3} \quad \mbox{s}
$$
within no more than 0.03 s (taking into account the introduced systematic error).
In addition to the main result, estimates of the effect of splitting the neutron lifetime
are obtained. It is indicated that neutron decay can be described by two lifetimes:
$L$-neutron lifetime and $R$-neutron lifetime, differing in the sign of the scalar
product of the electron momentum and the neutron spin. The estimation results
correspond to the known parameters of the electron-spin asymmetry of the neutron decay.
This fact provides the grounds for introducing new quantities - $L$-neutron lifetime
and the $R$-neutron lifetime into the neutron ~$\beta$ -decay physics.
The obtained interval of neutron lifetimes from $\tau_{NL} = 813$ s to
$\tau_{NR} = 987$ s gives an explanation of the range of experimental values
obtained over the 70-year history of neutron lifetime measurements.
This method opens up the prospect of reducing the neutron lifetime errors
to thousandths of a second.
\begin{acknowledgements}
The author is deeply grateful to V.V. Vladimirsky for his personal support.
The author is grateful to the staff of the UCN-ITEP group -- V. F. Belkin, A. A. Belonozhenko, N. I. Kozlov, E. N. Mospan, A. Yu. Karpov, I. B. Rozhnin, A. M. Salomatin, V. G. Frankovskaya -- and especially to G. M. Kukavadze and S. P. Boroblev for the creation and operation of the KENTAVR set-up at the ITEP HWR reactor.
\end{acknowledgements}
\section{Introduction}
The key idea in this paper is the study of defining properties of
operators (or tuples of operators) such that an operator that
\textquotedblleft almost\textquotedblright\ satisfies this property is
\textquotedblleft close\textquotedblright\ to an operator that actually does satisfy
this property. In the setting of C*-algebras we would insist that
the \textquotedblleft closeness\textquotedblright\ be with respect to the norm, and in
the finite von Neumann algebra sense \textquotedblleft closeness\textquotedblright
would be with respect to the $2$-norm $\left\Vert \cdot\right\Vert
_{2}$ defined in terms of the
tracial state $\tau$ on the algebra, i.e., $\left\Vert x\right\Vert _{2}%
=\tau\left( x^{\ast}x\right) ^{1/2}$. A classic example of this
phenomenon is the fact that if $A$ is an operator such that
$\left\Vert A-A^{\ast }\right\Vert $ is small and $\left\Vert
A-A^{2}\right\Vert $ is small, then $A$ is very close to a
projection $P.$ In fact, $P$ can be chosen in the nonunital
C*-algebra generated by $A$. It is also true that if $\mathcal{M}$
is a finite von Neumann algebra with a faithful normal trace $\tau$
and if $A\in\mathcal{M}$ such that $\left\Vert A-A^{\ast}\right\Vert
_{2}$ is small and $\left\Vert A-A^{2}\right\Vert _{2}$ is small,
then there is a projection $P\in W^{\ast}\left( A\right) $ (the
von Neumann subalgebra generated by $A$ in $\cal M$) such that
$\left\Vert A-P\right\Vert _{2}$ is small.
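The first phenomenon admits a simple numerical illustration (a sketch only; producing the nearby projection by a spectral cutoff of the Hermitian part at $1/2$ is one standard recipe for small perturbations, not the functional-calculus argument alluded to above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from an honest projection P and perturb it slightly,
# so that ||A - A*|| and ||A - A^2|| are both small.
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
P = Q[:, :3] @ Q[:, :3].conj().T          # rank-3 orthogonal projection
A = P + 1e-3 * rng.standard_normal((6, 6))

op = lambda X: np.linalg.norm(X, 2)       # operator norm
print(op(A - A.conj().T), op(A - A @ A))  # both of order 1e-3

# Spectral cutoff of the Hermitian part at 1/2: keep the eigenspaces
# with eigenvalue > 1/2.  The result is a projection close to A.
w, V = np.linalg.eigh((A + A.conj().T) / 2)
P2 = V[:, w > 0.5] @ V[:, w > 0.5].conj().T

assert op(P2 @ P2 - P2) < 1e-12 and op(P2 - P2.conj().T) < 1e-12
print(op(A - P2))                         # small: A is close to a projection
```

The cutoff at $1/2$ is safe here because the spectrum of the Hermitian part clusters near $\{0,1\}$ when the two defect norms are small.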
In the C*-algebra setting these ideas are essentially the notion of weak
semiprojectivity introduced by S. Eilers and T. Loring \cite{Ei} in 1999, and
semiprojectivity introduced by B. Blackadar \cite{B} in 1985. These notions
were studied by T. Loring \cite{Lo1} in terms of stable relations and D.
Hadwin, L. Kaonga and B. Mathes \cite{Don} in terms of their noncommutative
continuous functions.
The von Neumann algebra results appear in an ad hoc manner in various papers
in the literature.
Semiprojectivity and weak semiprojectivity can also be expressed in
terms of liftings of representations into algebras of the form
$\prod_{1}^{\infty}{\mathcal{B}}_{n}/\oplus
_{1}^{\infty}{\mathcal{B}}_{n}$ or in terms of ultraproducts $\prod
_{1}^{\infty}{\mathcal{B}}_{n}/\mathcal{J}$. It is in the theory of
(tracial) ultraproducts of finite von Neumann algebras where many of
these ``approximate'' results appear in the von Neumann algebra
setting.
After the preliminary definitions and results (Section 2), we begin Section 3
with our results in the C*-algebra setting. Our main result is that two
(weakly) semiprojective unital C*-algebras, each generated by $n$ projections,
can be glued together with partial isometries to define a larger (weakly)
semiprojective algebra (Theorem \ref{theorem,glue together, projections}).
In the von Neumann algebra setting (Section 4) we prove lifting
theorems for trace-preserving *-homomorphisms from abelian von
Neumann algebras (Corollary \ref{corollary,tracepreserving
homomorphism}) or hyperfinite von Neumann algebras (Theorem
\ref{theorem,hyperfinite}) into ultraproducts. We also extend a
classical result of S. Sakai \cite{sakai} by showing (Theorem
\ref{theorem, saikai}) that a tracial ultraproduct of C*-algebras is
a von Neumann algebra. This result allows us to prove a hybrid
result (Corollary \ref{corollary,shanghai}), namely, an approximate
result with respect to $\left\Vert \cdot\right\Vert _{2}$ on
C*-algebras. For example, if $\varepsilon>0,$ then there is a
$\delta>0$ such that for any unital C*-algebra $\mathcal{A}$ with
trace $\tau,$ we have that if $u,v$ are unitaries in $\mathcal{A}$
with $\left\Vert uv-vu\right\Vert _{2}<\delta$, then there are
commuting unitaries $u^{\prime},v^{\prime}$ in C$^{\ast}\left(
u,v\right) $ (the unital C$^*$-subalgebra generated by $u, v$ in
$\cal A$) such that $\left\Vert u-u^{\prime}\right\Vert
_{2}+\left\Vert v-v^{\prime }\right\Vert _{2}<\varepsilon$. With
respect to the operator norm this fails even in the class of
finite-dimensional C*-algebras \cite{Vo}.
\section{Preliminaries}
A C*-algebra $\mathcal{A}$ is \emph{projective} if, for any
*-homomorphism $\varphi:\mathcal{A}\rightarrow\mathcal{C}$ and every surjective
*-homomorphism $\rho:\mathcal{B}\rightarrow\mathcal{C}$, where $\mathcal{B}$ and
$\mathcal{C}$ are C$^{*}$-algebras, there is a
*-homomorphism $\overline{\varphi}:\mathcal{A}\rightarrow\mathcal{B}$
such that $\rho \circ\overline{\varphi}=\varphi$. A
C$^{\ast}$-algebra $\mathcal{A}$ is \emph{semiprojective} \cite{B}
if, for every *-homomorphism $\pi:\
{\mathcal{A}}\rightarrow{\mathcal{B}}/\overline{\cup_{1}^{\infty
}{\mathcal{J}}_{n}}$, where ${\mathcal{J}}_{n}$ are increasing
ideals of a
C$^{\ast}$-algebra $\mathcal{B}$, and with $\varphi_{N}:\ {\mathcal{B}%
}/{\mathcal{J}}_{N}\rightarrow{\mathcal{B}}/\overline{\cup_{1}^{\infty
}{\mathcal{J}}_{n}} $ the natural quotient map, there exist a positive integer
$N$ and a $\ast$-homomorphism $\pi_{N}:\ {\mathcal{A}}\rightarrow{\mathcal{B}%
}/{\mathcal{J}}_{N}$ such that $\pi=\varphi_{N}\circ\pi_{N}$. A C$^{\ast}$-algebra
$\mathcal{A}$ is \emph{weakly semiprojective} \cite{Lo1} if, for any given
sequence $\left\{ \mathcal{B}_{n}\right\} _{n\in\mathbb{N}}$ of C$^{\ast}%
$-algebras and a *-homomorphism $\pi:\ {\mathcal{A}}\rightarrow\prod
_{1}^{\infty}{\mathcal{B}}_{n}/\oplus_{1}^{\infty}{\mathcal{B}}_{n}$,
there exist functions
$\pi_{n}:\mathcal{A}\rightarrow\mathcal{B}_{n}$ for all $n\geq1$ and
a positive integer $N$ such that
(1) \ $\pi_{n}$ is a *-homomorphism for all $n\geq N,$ and
(2)\ $\pi\left( a\right) =[\left\{ \pi_{n}\left( a\right)
\right\}]$ for every $a\in\mathcal{A}$.
\newline Equivalently, since
$\prod_{1}^{\infty}{\mathcal{B}}_{n}/\oplus_{1}^{\infty
}{\mathcal{B}}_{n}$ is isomorphic to $\prod_{N}^{\infty}{\mathcal{B}}%
_{n}/\oplus_{N}^{\infty}{\mathcal{B}}_{n},$ the conditions above say
that there is a *-homomorphism
$\rho:\mathcal{A}\rightarrow\prod_{N}^{\infty }{\mathcal{B}}_{n}$
such that $\pi\left( a\right) =\rho\left( a\right)
+\oplus_{N}^{\infty}{\mathcal{B}}_{n}$ for every $a\in\mathcal{A}$.
These notions of projectivity make sense in two categories:
(1)\ the \emph{nonunital category}, i.e., the category of C*-algebras with
*-homomorphisms as morphisms, and
(2)\ the \emph{unital category}, i.e., the category of unital
C*-algebras with unital *-homomorphisms as morphisms.
These notions are drastically different in the different categories. For
example, the $1$-dimensional C*-algebra $\mathbb{C}$ is projective in the
unital category, but not in the nonunital category, e.g., in the definition of
projective C$^{*}$-algebra, let $\mathcal{B}=C_{0}\left( (0,1]\right) ,$
$\mathcal{C}=\mathbb{C}$ and $\rho\left( f\right) =f\left( 1\right) $.
However, if $\mathcal{A}$ is not unital and projective (semiprojective, weakly
semiprojective) in the nonunital category, and if $\mathcal{A}^{+}$ is the
algebra obtained by adding a unit to $\mathcal{A},$ then $\mathcal{A}^{+}$ is
projective (semiprojective, weakly semiprojective) in the unital category. In
Loring's book \cite{Lo1} he only considers the nonunital category. In this
paper we restrict ourselves to the unital category.
Suppose $\mathcal{S}$ is a subset of a unital C*-algebra $\cal A$.
Let C$^{\ast}\left( \mathcal{S}\right) $ denote the unital
C*-subalgebra generated by $\mathcal{S}$ in $\cal A$.
In \cite{Don} the notions of semiprojectivity and weak
semiprojectivity for finitely generated algebras were cast in terms
of \emph{noncommutative continuous functions}. The *-algebra of
noncommutative continuous functions is basically the metric
completion of the algebra of $\ast $-polynomials with respect to a
family of seminorms. There is a functional calculus for these
functions on any $n$-tuple of operators on any Hilbert space. Here
is a list of a few of the basic properties of noncommutative
continuous functions \cite{Don}:
(1)\ For each noncommutative continuous function $\varphi$ there is
a sequence $\left\{ p_{k}\right\} $ of noncommutative
*-polynomials such that for every tuple $\left(
T_{1},\ldots,T_{n}\right) $ we have
\[
\left\Vert p_{k}\left( T_{1},\ldots,T_{n}\right) -\varphi\left(
T_{1},\ldots,T_{n}\right) \right\Vert \rightarrow0,
\]
and the convergence is uniform on bounded $n$-tuples of operators.
(2)\ For any tuple $\left( T_{1},\ldots,T_{n}\right) ,$ C*$\left(
T_{1},\ldots,T_{n}\right) $ is the set of all $\varphi\left( T_{1}%
,\ldots,T_{n}\right) $ with $\varphi$ a noncommutative continuous function.
(3)\ For any $n$-tuple $\left( A_{1},\ldots,A_{n}\right) $ and any $S\in
C^{\ast}\left( A_{1},\ldots,A_{n}\right) ,$ there is a noncommutative
continuous function $\varphi$ such that $S=\varphi\left( A_{1},\ldots
,A_{n}\right) $ and $\left\Vert \varphi\left( T_{1},\ldots,T_{n}\right)
\right\Vert \leq\left\Vert S\right\Vert $ for all $n$-tuples $\left(
T_{1},\ldots,T_{n}\right) .$
(4)\ If $T_{1},\ldots,T_{n}$ are elements of a unital C*-algebra
$\mathcal{A} $ and $\pi:\mathcal{A}\rightarrow\mathcal{B}$ is a
unital *-homomorphism, then
\[
\pi\left( \varphi\left( T_{1},\ldots,T_{n}\right) \right) =\varphi\left(
\pi\left( T_{1}\right) ,\ldots,\pi\left( T_{n}\right) \right)
\]
for every noncommutative continuous function.
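For instance, the absolute value gives a typical non-polynomial example of
property (1): choosing polynomials $p_{k}$ with $p_{k}(s)\rightarrow\sqrt{s}$
uniformly on $[0,r^{2}]$, we get, for every operator $T$ with $\left\Vert
T\right\Vert \leq r$,
\[
\left\Vert p_{k}\left( T^{\ast}T\right) -\left( T^{\ast}T\right)
^{1/2}\right\Vert \leq\sup_{0\leq s\leq r^{2}}\left\vert p_{k}\left( s\right)
-\sqrt{s}\right\vert \rightarrow0,
\]
and the convergence is uniform on $\left\{ T:\left\Vert T\right\Vert \leq
r\right\} $; thus $T\mapsto\left( T^{\ast}T\right) ^{1/2}$ defines a
noncommutative continuous function in one variable.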
In \cite{Don} it was shown that the natural notion of relations used to define
a C*-algebra generated by $a_{1},\ldots,a_{n}$ are all of the form%
\[
\varphi\left( a_{1},\ldots,a_{n}\right) =0
\]
for a noncommutative continuous function $\varphi.$ In fact, it was also shown
in \cite{Don} that given a unital C*-algebra $\mathcal{A}$ generated by
$a_{1},\ldots,a_{n},$ there is a single noncommutative continuous function
$\varphi$ such that $\mathcal{A}$ is isomorphic to the universal C*-algebra
C*$\left( x_{1},\ldots,x_{n}|\varphi\right) $ with generators $x_{1}%
,\ldots,x_{n}$ and with the single relation $\varphi\left( x_{1},\ldots
,x_{n}\right) =0,$ where the map $x_{j}\mapsto a_{j}$ extends to a $\ast
$-isomorphism. Such a noncommutative continuous function $\varphi$ must be
\emph{null-bounded}, i.e., there is a number $r>0$ such that $\left\Vert
A_{j}\right\Vert \leq r$ for $1\leq j\leq n$ whenever $\varphi\left(
A_{1},\ldots,A_{n}\right) =0$. In this sense, every finitely generated
C*-algebra is finitely presented. In particular, Theorem 14.1.4 in T. Loring's
book \cite{Lo1} is true for all finitely generated C*-algebras.
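A device used repeatedly below is that finitely many relations can always be
combined into one: if $\varphi_{1},\ldots,\varphi_{k}$ are noncommutative
continuous functions, then the relations $\varphi_{1}=\cdots=\varphi_{k}=0$
are equivalent to the single relation
\[
\varphi=\varphi_{1}^{\ast}\varphi_{1}+\cdots+\varphi_{k}^{\ast}\varphi_{k}=0,
\]
since a sum of positive elements is $0$ exactly when each summand is $0$.
Moreover, $\left\Vert \varphi_{j}\left( T_{1},\ldots,T_{n}\right) \right\Vert
^{2}\leq\left\Vert \varphi\left( T_{1},\ldots,T_{n}\right) \right\Vert $ for
each $j$, which is how approximate versions of the individual relations are
controlled in the proofs below.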
For a finitely generated nonunital C*-algebra $\mathcal{A}$ there is a
null-bounded noncommutative continuous function $\varphi$ such that
$\mathcal{A}$ is isomorphic to the universal nonunital C*-algebra C$_{0}%
^{\ast}\left( x_{1},\ldots,x_{n}|\varphi\right)$ with generators
$x_{1},\ldots,x_{n}$ and with the single relation $\varphi\left( x_{1}%
,\ldots,x_{n}\right) =0$.
Here is a reformulation of the notions of semiprojectivity and weak
semiprojectivity for finitely generated C*-algebras in terms of noncommutative
continuous functions. We only state the result in the unital category.
\begin{proposition}
\cite{Don} Suppose $\varphi$ is a null-bounded noncommutative continuous
function. Then
(1)\ C*$\left( x_{1},\ldots,x_{n}|\varphi\right) $ is weakly semiprojective
if and only if there exist noncommutative continuous functions $\varphi
_{1},\ldots,\varphi_{n}$ such that for any $\varepsilon>0$, there exists
$\delta>0$, such that for any operators $T_{1},\cdots,T_{n}$ with
$\Vert\varphi(T_{1},\ldots,T_{n})\Vert<\delta$, we have
\hspace{2em}(a) $\varphi(\varphi_{1}(T_{1},\ldots,T_{n}),\ldots,\varphi
_{n}(T_{1},\ldots,T_{n}))=0$, and
\hspace{2em}(b) $\Vert T_{j}-\varphi_{j}(T_{1},\ldots,T_{n})\Vert<\varepsilon$.
(2)\ C*$\left( x_{1},\ldots,x_{n}|\varphi\right) $ is semiprojective if, in
addition, we can choose $\varphi_{1},\ldots,\varphi_{n}$ as in part 1 so that
$\varphi_{j}\left( A_{1},\ldots,A_{n}\right) =A_{j}$ for $1\leq j\leq n$,
whenever $\varphi\left( A_{1},\ldots,A_{n}\right) =0$.
\end{proposition}
We call the functions $\varphi_{1},\ldots,\varphi_{n}$ the \emph{(weakly)
semiprojective approximating functions} for $\varphi$.
For example, it is a classical result that has often been rediscovered that a
selfadjoint operator $A$ with $\left\Vert A-A^{2}\right\Vert $ sufficiently
small is very close to a projection. More precisely, if $\left\Vert
A-A^{2}\right\Vert <\varepsilon^{2}<1/9,$ then $\sigma\left( A\right)
\subset\left( -\varepsilon,\varepsilon\right) \cup\left( 1-\varepsilon
,1+\varepsilon\right) $. So if $h:\mathbb{R}\rightarrow\mathbb{R}$ is defined
by
\[
h\left( t\right) =\left\{
\begin{array}
[c]{ll}%
0, & \mbox{if }t\leq\frac{1}{3}\\
3t-1, & \mbox{if }\frac{1}{3}<t<\frac{2}{3}\\
1, & \mbox{if }\frac{2}{3}\leq t
\end{array}
\right. ,
\]
then $h\left( A\right) $ is a projection and $\left\Vert A-h\left(
A\right) \right\Vert =\sup\limits_{t\in\sigma\left( A\right)
}\left\vert t-h\left( t\right) \right\vert <\varepsilon$. If $A$
already is a projection, then $h\left( A\right) =A.$ Thus the
universal C*-algebra generated by a single projection is C*$\left(
x|\varphi\right) $ where $\varphi\left( x\right) =\left(
x-x^{\ast}\right) ^{2}+\left( x-x^{2}\right) ^{\ast}\left(
x-x^{2}\right) ,$ and defining $\varphi _{1}\left( x\right)
=h\left( \frac{x+x^*}{2} \right) $ shows that C*$\left(
x|\varphi\right) $ is semiprojective.
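The spectral inclusion above is a one-line computation: since $A$ is
selfadjoint, each $t\in\sigma\left( A\right) $ satisfies
\[
\left\vert t\right\vert \left\vert 1-t\right\vert =\left\vert t-t^{2}%
\right\vert \leq\left\Vert A-A^{2}\right\Vert <\varepsilon^{2},
\]
so $\min\left\{ \left\vert t\right\vert ,\left\vert 1-t\right\vert \right\}
<\varepsilon$; as $\varepsilon<1/3$, the intervals $\left( -\varepsilon
,\varepsilon\right) $ and $\left( 1-\varepsilon,1+\varepsilon\right) $ are
disjoint and $t$ must lie in one of them.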
Throughout this paper, all the C$^*$-algebras considered are unital
and all *-homomorphisms are unital.
\section{C*-algebra Results}
For simplicity we only consider finitely generated C$^{*}$-algebras throughout
this section.
The main results in this section concern the (weak) semiprojectivity of
C*-algebras defined in terms of partial isometries. We begin with some results
that are elementary in the unital category.
\begin{lemma}
\label{lemma, basic properties} Suppose $\mathcal{A}$ and $\mathcal{B}$ are
separable unital C*-algebras. The following are true:
(1)\ if $\mathcal{A}$ is (weakly) semiprojective, then $\mathcal{A}%
\otimes\mathcal{M}_{n}\left( \mathbb{C}\right) $ is (weakly) semiprojective;
(2)\ if $\mathcal{A}\otimes\mathcal{M}_{n}\left( \mathbb{C}\right) $ is
weakly semiprojective, then $\mathcal{A}$ is weakly semiprojective;
(3)\ if $\mathcal{A}$ and $\mathcal{B}$ are projective (semiprojective, weakly
semiprojective), then so is $\mathcal{A}\ast\mathcal{B}$;
(4)\ if $\mathcal{A}\ast\mathcal{B}$ is projective (semiprojective, weakly
semiprojective), and there is a linear multiplicative functional
$\alpha$ on $\mathcal{B}$, then $\mathcal{A}$ is projective (semiprojective,
weakly semiprojective);
(5)\ $\mathcal{A}\oplus\mathcal{B}$ is (weakly) semiprojective if and only if
both $\mathcal{A}$ and $\mathcal{B}$ are (weakly) semiprojective.
\end{lemma}
Proof. \ (1)\ Suppose $\mathcal{A}$ is weakly semiprojective, and
$\mathcal{A}=C^{\ast}(x_{1},\ldots,x_{m}|\varphi)$ with weakly semiprojective
approximating functions $\varphi_{1},\ldots,\varphi_{m}$. Since $\mathcal{M}%
_{n}(\mathbb{C})$ is semiprojective, we can assume that ${\cal M}_{n}%
(\mathbb{C})=C^{\ast}(y|\rho)$ with a semiprojective approximating function
$\rho_{1}$. Hence
\[
\mathcal{A}\otimes\mathcal{M}_{n}\left( \mathbb{C}\right) =C^{\ast}%
(x_{1},\ldots,x_{m},y|\Phi)
\]
where
\begin{align*}
\Phi(x_{1},\ldots,x_{m},y) & =\sum_{i=1}^{m}(x_{i}y-yx_{i})^{\ast}%
(x_{i}y-yx_{i})+\sum_{i=1}^{m}(x_{i}y^{\ast}-y^{\ast}x_{i})^{\ast}%
(x_{i}y^{\ast}-y^{\ast}x_{i})\\
& +\varphi(x_{1},\ldots,x_{m})^{\ast}\varphi(x_{1},\ldots,x_{m})+\rho
(y)^{\ast}\rho(y).
\end{align*}
Since matrix units $E_{i,j}(1\leq i,j\leq n)$ are in ${\mathcal{M}}%
_{n}(\mathbb{C})$, there exists a family of noncommutative
continuous functions $\{\rho_{i,j}: 1\leq i,j\leq n\}$ such that
$E_{i,j}=\rho_{i,j}(y)$.
For any operators $T_{1}, \ldots, T_{m}, S$, and for $1\leq k\leq m$, let
$\widehat{T_{k}}=\sum_{j=1}^{n}\rho_{j,1}(\rho_{1}(S))\cdot T_{k}\cdot
\rho_{1,j}(\rho_{1}(S))$. Define functions $\{\Phi_{k}: 1\leq k\leq m+1\}$ by
\[
\Phi_{k}\left( T_{1}, \ldots, T_{m}, S\right) =\left\{
\begin{array}
[c]{ll}%
\varphi_{k}\left( \widehat{T_{1}}, \ldots, \widehat{T_{m}} \right) , & 1\leq
k\leq m\\
\rho_{1}(S), & k=m+1
\end{array}
\right.
\]
Given any $\varepsilon>0$, we will find some $\delta>0$ in the definition of
weak semiprojectivity.
Since $\mathcal{M}_{n}(\mathbb{C})$ is semiprojective, there exists
$\delta_{1}>0$, such that if $\|\rho(S)\|<\delta_{1}$, then
(1)\ $\rho(\Phi_{m+1}(T_{1}, \ldots, T_{m}, S))=\rho(\rho_{1}(S))=0$,
(2)\ $\|\rho_{1}(S)-S\|<\varepsilon$,
(3)\ $\rho_{1}(S)$ and $\rho_{1}(S)^{*}$ commute with all $\widehat{T_{k}}$.
Since $\mathcal{A}$ is weakly semiprojective, there exists $\delta_{2}>0 $,
such that if $\|\varphi(\widehat{T_{1}},\cdots, \widehat{T_{m}})\|<\delta_{2}
$, then
\[
\varphi(\varphi_{1}(\widehat{T_{1}},\cdots, \widehat{T_{m}}), \cdots,
\varphi_{m}(\widehat{T_{1}},\cdots, \widehat{T_{m}}))=0
\]
and
\[
\|\widehat{T_{k}}-\varphi_{k}(\widehat{T_{1}},\cdots, \widehat{T_{m}%
})\|<\varepsilon.
\]
Since
\[
\|\varphi(\widehat{T_{1}},\cdots, \widehat{T_{m}})\|\leq\|\varphi
(\widehat{T_{1}},\cdots, \widehat{T_{m}})-\varphi(T_{1}, \ldots,
T_{m})\|+\|\varphi(T_{1}, \ldots, T_{m})\|,
\]
there exists $\delta_{3}>0$, such that if $\|T_{k}-\widehat{T_{k}}%
\|<\delta_{3}$, then
\[
\|\varphi(\widehat{T_{1}},\cdots, \widehat{T_{m}})\|\leq\|\varphi
(\widehat{T_{1}},\cdots, \widehat{T_{m}})-\varphi(T_{1}, \ldots,
T_{m})\|<\frac{\delta_{2}}{2}.
\]
In addition, if $\|\varphi(T_{1}, \ldots, T_{m})\|<\frac{\delta_{2}}{2} $,
then $\|\varphi(\widehat{T_{1}},\cdots, \widehat{T_{m}})\|<\delta_{2}$.
Furthermore,
\begin{align*}
\|T_{k}-\widehat{T_{k}}\| & =\|T_{k}-\sum_{j=1}^{n}\rho_{j,1}(\rho
_{1}(S))\cdot T_{k}\cdot\rho_{1,j}(\rho_{1}(S))\|\\
& =\|\sum_{j=1}^{n}\rho_{j,1}(\rho_{1}(S))\cdot\left( \rho_{1,j}(\rho
_{1}(S))\cdot T_{k}-T_{k}\cdot\rho_{1,j}(\rho_{1}(S))\right) \|\\
& \leq\sum_{j=1}^{n}\|\rho_{1,j}(\rho_{1}(S))\cdot T_{k}-T_{k}\cdot\rho
_{1,j}(\rho_{1}(S))\|.
\end{align*}
Therefore there exists $\delta_{4}>0$, such that if $\|\rho_{1,j}(\rho
_{1}(S))\cdot T_{k}-T_{k}\cdot\rho_{1,j}(\rho_{1}(S))\|<\delta_{4}$, then
$\|T_{k} -\widehat{T_{k}}\|<\delta_{3}$.
Note that $\rho_{1}(S)=\sum_{i,j=1}^{n}c_{ij}\cdot\rho_{ij}(\rho_{1}(S))$,
where the $c_{ij}$ are complex numbers. If $\|T_{k}\rho_{ij}(\rho_{1}%
(S))-\rho_{ij}(\rho_{1}(S))T_{k}\|<\delta_{4}$ for all $i,j$, then
\begin{align*}
\|T_{k}\rho_{1}(S)-\rho_{1}(S)T_{k}\| & =\|T_{k}\sum_{i,j=1}^{n}
c_{ij}\cdot\rho_{ij}(\rho_{1}(S))-\sum_{i,j=1}^{n}c_{ij}\cdot\rho_{ij}%
(\rho_{1}(S))T_{k}\|\\
& \leq\sum_{i,j=1}^{n}|c_{ij}|\cdot\|T_{k} \rho_{ij}(\rho_{1}(S))-\rho
_{ij}(\rho_{1}(S))T_{k}\|\\
& <\sum_{i,j=1}^{n}|c_{ij}|\cdot\delta_{4}.
\end{align*}
Let $\delta_{5}=\sum_{i,j=1}^{n}|c_{ij}|\cdot\delta_{4}$. Since
\begin{align*}
\|T_{k}\rho_{1}(S)-\rho_{1}(S)T_{k}\| & =\|T_{k}\left( \rho_{1}%
(S)-S+S\right) -\left( \rho_{1}(S)-S+S\right) T_{k}\|\\
& \leq2\|T_{k}\|\cdot\|\rho_{1}(S)-S\|+\|T_{k}S-ST_{k}\|,
\end{align*}
there exists $\delta_{6}>0$, such that if $\|\rho_{1}(S)-S\|<\delta_{6},$
$\|T_{k}S-ST_{k}\|<\delta_{6}$, then $\|T_{k}\rho_{1}(S)-\rho_{1}%
(S)T_{k}\|<\delta_{5}$.
By the fact that ${\cal M}_n(\Bbb C)=C^*(y|\rho)$ is semiprojective,
there exists $\delta_{7}>0$ such that if
$\|\rho(S)\|<\delta_{7}$, then $\|\rho_{1}(S)-S\|<\delta_{6}$.
Note that $\Vert\varphi\left( T_{1},\ldots,T_{m}\right) \Vert,\Vert
\rho(S)\Vert,\Vert T_{i}S-ST_{i}\Vert,\Vert T_{i}S^{\ast}-S^{\ast}T_{i}\Vert$
are all less than or equal to $\sqrt{\Vert\Phi\left( T_{1},\ldots
,T_{m},S\right) \Vert}$. Put $\delta=\min\{\delta_{1}^{2},(\delta
_{2}/2)^{2},\delta_{6}^{2},\delta_{7}^{2}\}$; then $\Phi_{1},\ldots,\Phi
_{m+1}$ are weakly semiprojective approximating functions for $\Phi$.
If $\mathcal{A}$ is semiprojective and $\varphi_{1},\ldots,\varphi_{m}$ are
semiprojective approximating functions for $\varphi$, it is clear that
$\Phi_{1},\ldots,\Phi_{m+1}$ are semiprojective approximating functions for
$\Phi$.
(2)\ Suppose ${\mathcal{A}}\otimes{\mathcal{M}}_{n}(\mathbb{C)}$ is
weakly
semiprojective. Let $\pi:\mathcal{A}\rightarrow\prod_{1}^{\infty}{\mathcal{B}%
}_{k}/\oplus_{1}^{\infty}{\mathcal{B}}_{k}$ be a unital
*-homomorphism. Then $\rho=\pi\otimes id$ is a unital *-homomorphism from $\mathcal{A}\otimes{\mathcal{M}}_{n}(\mathbb{C)}$
to $\left( \prod_{1}^{\infty}{\mathcal{B}}_{k}/\oplus_{1}^{\infty
}{\mathcal{B}}_{k}\right) \otimes{\mathcal{M}}_{n}(\mathbb{C)}\
=\prod _{1}^{\infty}\left(
{\mathcal{B}}_{k}\otimes{\mathcal{M}}_{n}(\mathbb{C)} \right)
/\oplus_{1}^{\infty}\left( {\mathcal{B}}_{k}\otimes{\mathcal{M}}
_{n}(\mathbb{C)}\right)$. Since $\mathcal{A}\otimes{\mathcal{M}}
_{n}(\mathbb{C)}$ is weakly semiprojective, there is a positive
integer $N$ and maps
$\rho_{k}:\mathcal{A}\otimes{\mathcal{M}}_{n}(\mathbb{C)}\rightarrow
{\mathcal{B}}_{k} $ such that, for $k\geq N,$ $\rho_{k}$ is a unital
*-homomorphism and, for every $x\in\mathcal{A}\otimes{\mathcal{M}%
}_{n}(\mathbb{C)},$
\[
\rho\left( x\right) =\left[ \left\{ \rho_{k}\left( x\right) \right\}
\right] .
\]
It follows that there is a sequence $\left\{ U_{k}\right\} $ of
unitary elements ($U_{k}\in\mathcal{B}_{k}$) such that $\left\Vert
U_{k}-1\right\Vert \rightarrow0$ and $\left\Vert 1\otimes
T-U_{k}^{\ast}\rho_{k}\left( 1\otimes T\right) U_{k}\right\Vert
\rightarrow0$ for every $T\in\mathcal{M}_{n}\left( \mathbb{C}\right)
$. Therefore, for $k\geq N$ and $A\in\mathcal{A}$,
$U_{k}^{\ast}\rho_{k}\left( A\otimes1\right) U_{k}$ is in the
commutant
of $1\otimes{\mathcal{M}}_{n}(\mathbb{C)},$ which is $\mathcal{B}_{k}%
\otimes1.$ Hence there are representations $\pi_{k}:\mathcal{A}\rightarrow
\mathcal{B}_{k}$ such that $\pi_{k}\left( A\right) \otimes1=U_{k}^{\ast}%
\rho_{k}\left( A\otimes1\right) U_{k}$ for every $A\in\mathcal{A}$. Clearly,
$\pi\left( A\right) =\left[ \left\{ \pi_{k}\left( A\right) \right\}
\right] $ for every $A\in\mathcal{A}$.
(3)\ This is obvious from the defining properties of the free
product in the unital category.
(4) We give a proof for the projective case; the other cases are handled
similarly. Suppose $\mathcal{C}$ is a unital C*-algebra with an ideal
$\mathcal{J}$ and $\pi:\mathcal{A}\rightarrow\mathcal{C}/\mathcal{J}$ is a
*-homomorphism. Define a unital *-homomorphism $\sigma:\mathcal{B}%
\rightarrow\mathcal{C}/\mathcal{J}$ by $\sigma\left( x\right) =\alpha\left(
x\right)\cdot 1 $. Thus there is a unital *-homomorphism $\rho:\mathcal{A}%
\ast\mathcal{B}\rightarrow\mathcal{C}/\mathcal{J}$ such that $\rho
|\mathcal{A}=\pi$ and $\rho|\mathcal{B}=\sigma$. Since $\mathcal{A}%
\ast\mathcal{B}$ is projective, $\rho$ lifts to a *-homomorphism
$\tau:\mathcal{A}\ast\mathcal{B}\rightarrow\mathcal{C}$. Thus $\tau
|\mathcal{A}$ is the required lifting of $\pi$.
(5) Suppose $\mathcal{A}=C^{*}(x_{1}, \ldots, x_{m}|\varphi)$ is
weakly semiprojective with weakly semiprojective approximating
functions $\varphi_{1}, \ldots, \varphi_{m}$, and
$\mathcal{B}=C^{*}(y_{1}, \ldots, y_{n}|\psi)$ is weakly
semiprojective with weakly semiprojective approximating functions
$\psi_{1}, \ldots, \psi_{n}$. Then
\[
{\mathcal{A}}\oplus{\ \mathcal{B}}=C^{\ast}(x_{1},\ldots,x_{m},y_{1}%
,\ldots,y_{n},p|\Phi),
\]
where
\begin{align*}
& \Phi(x_{1},\ldots,x_{m},y_{1},\ldots,y_{n},p)\\
=&\varphi(x_{1},\ldots,x_{m})^{\ast}\varphi(x_{1},\ldots,x_{m})+\psi
(y_{1},\ldots,y_{n})^{\ast}\psi(y_{1},\ldots,y_{n})+\\
& +(p-p^{\ast})^{\ast}(p-p^{\ast})+(p-p^{2})^{\ast}(p-p^{2})+\sum_{j=1}%
^{m}(px_{j}-x_{j}p)^{\ast}(px_{j}-x_{j}p)+\\
& +\sum_{j=1}^{m}(px_{j}-x_{j})^{\ast}(px_{j}-x_{j})+\sum_{j=1}^{n}%
(py_{j}-y_{j}p)^{\ast}(py_{j}-y_{j}p)+\sum_{j=1}^{n}(py_{j})^{\ast}(py_{j}).
\end{align*}
Let $f$ be a continuous function on $\mathbb{R}$ such that $f(t)=0$ when
$t\leq\frac{1}{4}$ and $f(t)=1$ when $t\geq\frac{3}{4}$. Define $\Phi_{1},
\ldots, \Phi_{m+n+1}$ on ${\mathcal{A}}\oplus{\mathcal{B}}$ by
\[
\Phi_{i}(x_{1}, \ldots, x_{m}, y_{1}, \ldots, y_{n},p)=\left\{
\begin{array}
[c]{ll}%
f(p)\varphi_{i}(x_{1}, \ldots, x_{m})f(p), & 1\leq i\leq m\\
(1-f(p))\psi_{i-m}(y_{1}, \ldots, y_{n})(1-f(p)), & m+1\leq i\leq m+n\\
f(p), & i=m+n+1
\end{array}
\right.
\]
For any $\varepsilon>0$, and any operators $S_{1}, \ldots,
S_{m},T_{1}, \ldots, T_{n}, Q $, there exists $\delta_{1}>0$, such
that if $\|Q-Q^{*}\|<\delta_1$ and $\|Q-Q^{2}\|<\delta_{1}$, then
$f(Q) $ is a projection and $\|Q-f(Q)\|<\varepsilon$. In addition,
there exists $\delta_{2}>0$, such that if $\|\varphi(S_{1}, \ldots,
S_{m})\|<\delta_{2}$, then $$\|S_{j}-\varphi _{j}(S_{1}, \ldots,
S_{m} )\|<\varepsilon\ \ \ \mbox{ for all}\ \ \ 1\leq j\leq m$$ and
$$\varphi\left( \varphi_{1}(S_{1}, \ldots, S_{m}), \ldots,
\varphi_{m}(S_{1}, \ldots, S_{m})\right) =0;$$ there exists
$\delta_{3}>0$, such that if $\|\psi(T_{1}, \ldots,
T_{n})\|<\delta_{3}$, then $$\|T_{k}-\psi_{k}(T_{1}, \ldots,
T_{n})\|<\varepsilon\ \ \ \mbox{ for all} \ \ \ 1\leq k\leq n$$ and
$$\psi\left( \psi_{1}(T_{1}, \ldots, T_{n}), \ldots, \psi_{n}(T_{1},
\ldots, T_{n}) \right) =0.$$ Let
$$\widehat{S_{j}}=f(Q)\varphi_{j}(S_{1}, \ldots, S_{m})f(Q)\ \ \mbox{ for
all}\ \ 1\leq j\leq m$$ and
$$\widehat{T_{k}}=(1-f(Q))\psi_{k}(T_{1}, \ldots,
T_{n})(1-f(Q))\ \ \mbox{ for all} \ \ 1\leq k\leq n,$$ then $\widehat{S_{j}}%
f(Q)=f(Q)\widehat{S_{j}}=\widehat{S_{j}}$ and $\widehat{T_{k}}%
f(Q)=f(Q)\widehat{T_{k}}=0$.
Choose
$\delta=\mbox{min}\{\delta_{1}^{2},\delta_{2}^{2},\delta_{3}^{2}\}$,
then $\Phi_{1}, \ldots, \Phi_{m+n+1}$ are weakly semiprojective
approximating functions for $\Phi$.
Conversely, suppose $\mathcal{A}\oplus\mathcal{B}=C^{\ast}(x_{1}\oplus
y_{1},\ldots,x_{n}\oplus y_{n}|\varphi)$ is weakly semiprojective with weakly
semiprojective approximating functions $\varphi_{1}, \ldots, \varphi_{n}$.
Since $\varphi(A\oplus B)=\varphi(A)\oplus\varphi(B)$, it is clear that
${\mathcal{A}}=C^{\ast}(x_{1}, \ldots, x_{n}|\varphi)$ and ${\mathcal{B}%
}=C^{\ast}(y_{1}, \ldots, y_{n}|\varphi)$, and $\mathcal{A}$ and
$\mathcal{B}$ are both weakly semiprojective with weakly semiprojective
approximating functions $\varphi_{1}, \ldots, \varphi_{n}$.
It is not hard to prove the semiprojective case using a similar
idea.\hfill$\Box$
\begin{remark}
(1)\ Statement $\left( 2\right) $ and statement $\left( 5\right)
$ of Lemma \ref{lemma, basic properties} are not true in the
nonunital case, and statement $\left( 1\right) $ is not true for
projectivity in the unital case, e.g., in the Calkin algebra the
C$^{*}$-algebra generated by $\left(
\begin{array}
[c]{ll}%
0 & S\\
0 & 0
\end{array}
\right) $, where $S$ is the unilateral shift operator, is isomorphic to
$\mathcal{M}_{2}(\mathbb{C})$, but it cannot be lifted to a representation of
$\mathcal{M}_{2}(\mathbb{C})$ in $\mathcal{B(H)}$.
(2)\ In general the different types of projectivity are not
preserved under tensor products even when the algebras are very
nice. For example, if $X$ is the unit circle, then $C\left(
X\right)$, which is isomorphic to the universal C$^*$-algebra
generated by a unitary operator, is semiprojective, but $C\left(
X\right) \otimes C\left( X\right) =C^{\ast }\left( x,y|x,y\text{
unitary, }xy-yx=0\right) $ is not weakly semiprojective \cite{Vo}.
(3)\ In the nonunital category, statement $\left( 4\right) $ of Lemma
\ref{lemma, basic properties} always holds, because the $0$ functional is allowed.
\end{remark}
The following lemma is a key ingredient to our main results in this section.
\begin{lemma}
\label{lemma,for theorem in section3} There exists a noncommutative continuous
function $\psi(x,y,z)$ such that, for any $\varepsilon>0$, there exists
$\delta>0$ such that, whenever $\mathcal{A}$ is a C*-algebra and $P,Q,A\in
\mathcal{A}$ with $P$ and $Q$ projections and $\Vert A^{\ast}A-P\Vert<\delta$,
$\Vert AA^{\ast}-Q\Vert<\delta$, we have
(1)\ $\Vert\psi(P,Q,A)-A\Vert<\varepsilon$,
(2)\ $\psi(P,Q,A)^{\ast}\psi(P,Q,A)=P$ and $\psi(P,Q,A)\psi(P,Q,A)^{\ast}=Q$,
(3)\ $\psi(P,Q,A)=A$ whenever $A^{\ast}A=P$ and $AA^{\ast}=Q$.
\end{lemma}
Proof. Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a continuous function such
that $f(t)=\left\{
\begin{array}
[c]{ll}%
0, & 0\leq t\leq\frac{1}{4}\\
\frac{1}{\sqrt{t}}, & \frac{3}{4}\leq t%
\end{array}
\right. ,$ and define $\psi(P,Q,A)=f(QAPA^{\ast}Q)QAP$. \hfill$\Box$
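Properties (1) and (2) follow from routine norm estimates once $\delta$ is
small enough; we verify only the exact statement (3). If $A^{\ast}A=P$ and
$AA^{\ast}=Q$, then $A$ is a partial isometry, so $AP=AA^{\ast}A=A$ and
$QA=A$; hence $QAP=A$ and
\[
QAPA^{\ast}Q=AA^{\ast}=Q.
\]
Since $\sigma\left( Q\right) \subseteq\left\{ 0,1\right\} $ and $f\left(
0\right) =0$, $f\left( 1\right) =1$, we get $f\left( QAPA^{\ast}Q\right)
=f\left( Q\right) =Q$, and therefore $\psi(P,Q,A)=QA=A$.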
\vspace{0.2cm}
The following is our main theorem in this section. Suppose $\mathcal{A}$ is a
unital C*-algebra generated by partial isometries $V_{1},\ldots,V_{n}$ and
C*$\left( V_{1}^{\ast}V_{1},\ldots,V_{n}^{\ast}V_{n}\right) $ and C*$\left(
V_{1}V_{1}^{\ast},\ldots,V_{n}V_{n}^{\ast}\right) $ are both (weakly)
semiprojective or C*$\left( V_{1}^{\ast}V_{1},\ldots,V_{n}^{\ast}V_{n}%
,V_{1}V_{1}^{\ast},\ldots,V_{n}V_{n}^{\ast}\right) $ is (weakly)
semiprojective. Does it follow that $\mathcal{A}$ is (weakly) semiprojective? We
prove this is true when the only relations on $V_{1},\ldots,V_{n}$ are those
on $V_{1}^{\ast}V_{1},\ldots,V_{n}^{\ast}V_{n},V_{1}V_{1}^{\ast},\ldots
,V_{n}V_{n}^{\ast}$.
\begin{theorem}
\label{theorem,glue together, projections} The following are true:
(1)\ Suppose $C^{\ast}\left( P_{1},\ldots,P_{n}|\ \varphi\right) $ and
$C^{\ast}\left( Q_{1},\ldots,Q_{n}|\psi\right) $ are (weakly)
semiprojective, where $P_{1},Q_{1},\ldots,P_{n},Q_{n}$ are projections. Then
the universal C*-algebra $\mathcal{A}=C^{\ast}(V_{1},\ldots,V_{n}|\Phi)$ with
the relation $\Phi$ defined by
\begin{align*}
\Phi(V_{1},\ldots,V_{n}) =& \varphi\left( V_{1}^{\ast}V_{1},\ldots
,V_{n}^{\ast}V_{n}\right) ^{\ast}\varphi\left( V_{1}^{\ast}V_{1}%
,\ldots,V_{n}^{\ast}V_{n}\right) +\\
& +\psi\left( V_{1}V_{1}^{\ast},\ldots,V_{n}V_{n}^{\ast}\right) ^{\ast}%
\psi\left( V_{1}V_{1}^{\ast},\ldots,V_{n}V_{n}^{\ast}\right)
\end{align*}
is (weakly) semiprojective.
(2)\ If $C^{\ast}(P_{1},\ldots,P_{n},Q_{1},\ldots,Q_{n}|\phi)$ is
(weakly) semiprojective, where
$P_{1},\ldots,P_{n},Q_{1},\ldots,Q_{n}$ are projections, then
$C^{\ast}(V_{1},\ldots,V_{n}|\Psi)$ is (weakly) semiprojective,
where the relation $\Psi$ is defined by
\begin{align*}
& \Psi(V_{1},\ldots,V_{n})\\
=& \phi\left( V_{1}^{\ast}V_{1},\ldots,V_{n}^{\ast}V_{n},V_{1}V_{1}^{\ast
},\ldots,V_{n}V_{n}^{\ast}\right) ^{\ast}\phi\left( V_{1}^{\ast}V_{1}%
,\ldots,V_{n}^{\ast}V_{n},V_{1}V_{1}^{\ast},\ldots,V_{n}V_{n}^{\ast}\right)
\end{align*}
\end{theorem}
Proof. \ (1)\ Let $\varphi_{1}, \ldots, \varphi_{n}$ be weakly semiprojective
approximating functions for $\varphi$, and $\psi_{1}, \ldots, \psi_{n}$ be
weakly semiprojective approximating functions for $\psi$.
Define the functions $\Phi_{1}, \ldots, \Phi_{n}$ by
$$\Phi_{i}(V_{1}, \ldots, V_{n})=\phi(\varphi_{i}(V_{1}^{\ast}V_{1}, \ldots,
V_{n}^{\ast}V_{n}), \psi_{i}(V_{1}V_{1}^{\ast}, \ldots, V_{n}V_{n}^{\ast}),
V_{i}),$$ where $\phi$ denotes the noncommutative continuous function
$\psi(x,y,z)$ of Lemma \ref{lemma,for theorem in section3}.
Given any $\varepsilon>0$, let $\delta_{0}$ be the $\delta$ provided by Lemma
\ref{lemma,for theorem in section3} for this $\varepsilon$.
Since $C^{\ast}\left( P_{1},\ldots,P_{n}|\ \varphi\right) $ is weakly
semiprojective, there exists $\delta_{1}>0$, such that for any operators
$A_{1}, \ldots, A_{n}$ with $\|\varphi(A_{1}^{*}A_{1}, \ldots, A_{n}^{*}%
A_{n})\|<\delta_{1}$, we have, for $1\leq i\leq n$,
(a)\ $\varphi_{i}(A_{1}^{*}A_{1}, \ldots, A_{n}^{*}A_{n})$ is a projection and
(b)\ $\|A_{i}^{*}A_{i}-\varphi_{i}(A_{1}^{*}A_{1}, \ldots, A_{n}^{*}%
A_{n})\|<\delta_{0}$.
\newline
Since $C^*(Q_1, \ldots, Q_n|\psi)$ is weakly semiprojective, there
exists $\delta_{2}>0$, such that if $\|\psi(A_{1}A_{1}^{*}, \ldots,
A_{n}A_{n}^{*})\|<\delta_{2}$, then, for $1\leq i\leq n$,
(a$'$)\ $\psi_{i}(A_{1}A_{1}^{*}, \ldots, A_{n}A_{n}^{*})$ is a projection and
(b$'$)\ $\|A_{i}A_{i}^{*}-\psi_{i}(A_{1}A_{1}^{*}, \ldots, A_{n}A_{n}%
^{*})\|<\delta_{0}$.
Substituting $\varphi_{i}(A_{1}^{*}A_{1}, \ldots, A_{n}^{*}A_{n})$, $\psi_{i}(A_{1}%
A_{1}^{*}, \ldots, A_{n}A_{n}^{*})$, $A_{i}$ for $P, Q, A$ in Lemma
\ref{lemma,for theorem in section3}, we have that
\[
\phi(\varphi_{i}(A_{1}^{*}A_{1}, \ldots, A_{n}^{*}A_{n}),\psi_{i}(A_{1}%
A_{1}^{*}, \ldots, A_{n}A_{n}^{*}), A_{i}) \ \left(=\Phi_{i}(A_{1},
\ldots, A_{n})\right)
\]
is a partial isometry from $\varphi_{i}(A_{1}^{*}A_{1}, \ldots, A_{n}^{*}%
A_{n})$ to $\psi_{i}(A_{1}A_{1}^{*}, \ldots, A_{n}A_{n}^{*}).$
Let $\delta=\min\{\delta_{1}^{2}, \delta_{2}^{2}\}$. We prove that $\Phi_{1},
\ldots, \Phi_{n}$ are weakly semiprojective approximating functions for
$\Phi.$
Using a similar idea and Lemma \ref{lemma,for theorem in section3}, we can
prove the semiprojective case.
(2)\ Similar to the proof of Part (1).\hfill$\Box$
\begin{example}
Suppose
\begin{align*}
{\cal M}_{2}(\mathbb{C}) & =C^{\ast}\left( P_{1}=\left(
\begin{array}
[c]{ll}%
1 & 0\\
0 & 0
\end{array}
\right) ,P_{2}=\left(
\begin{array}
[c]{ll}%
\frac{1}{2} & \frac{1}{2}\\
\frac{1}{2} & \frac{1}{2}%
\end{array}
\right) ,P_{3}=\left(
\begin{array}
[c]{ll}%
1 & 0\\
0 & 1
\end{array}
\right) \right) \\
& = C^{\ast}\left( P_{1},P_{2},P_{3}|\varphi\right)
\end{align*}
and
\begin{align*}
{\cal M}_{3}(\mathbb{C}) & =C^{\ast}\left( Q_{1}=\left(
\begin{array}
[c]{lll}%
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0
\end{array}
\right) ,Q_{2}=\left(
\begin{array}
[c]{lll}%
0 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 0
\end{array}
\right) ,Q_{3}=\left(
\begin{array}
[c]{lll}%
\frac{1}{3} & \frac{1}{3} & \frac{1}{3}\\
\frac{1}{3} & \frac{1}{3} & \frac{1}{3}\\
\frac{1}{3} & \frac{1}{3} & \frac{1}{3}%
\end{array}
\right) \right) \\
& = C^{\ast}\left( Q_{1},Q_{2},Q_{3}|\psi\right) .
\end{align*}
Then the universal C*-algebra generated by partial isometries $V_{1}%
,V_{2},V_{3}$ such that $$\varphi(
V_{1}^{\ast}V_{1},V_{2}^{\ast}V_{2} ,V_{3}^{\ast}V_{3}) =0$$ and
$$\psi\left( V_{1}V_{1}^{\ast},V_{2}V_{2}^{\ast
},V_{3}V_{3}^{\ast}\right) =0$$ is semiprojective.
\end{example}
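In the example, the first two projections already generate ${\cal M}%
_{2}(\mathbb{C})$: a direct computation gives
\[
2\left( P_{1}P_{2}-P_{1}P_{2}P_{1}\right) =\left(
\begin{array}
[c]{ll}%
0 & 1\\
0 & 0
\end{array}
\right) ,
\]
so C$^{\ast}\left( P_{1},P_{2}\right) $ contains the matrix unit $E_{12}$,
hence also $E_{11}=E_{12}E_{12}^{\ast}$ and $E_{22}=E_{12}^{\ast}E_{12}$, and
therefore all of ${\cal M}_{2}(\mathbb{C})$. The point of the example is that
Theorem \ref{theorem,glue together, projections} applies because ${\cal M}%
_{2}(\mathbb{C})$ and ${\cal M}_{3}(\mathbb{C})$ are semiprojective.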
We also can apply our results to the generalized version of the
noncommutative unitary construction of K. McClanahan \cite{M}.
\begin{proposition}
If $\mathcal{A}=C^{\ast}\left( x_{1},\ldots x_{n}|\varphi\right) $
is (weakly) semiprojective, then the universal C*-algebra
$\mathcal{E}$ generated by $\left\{ a_{ijk}:1\leq i,j\leq m,1\leq
k\leq n\right\}$, subject to $\varphi\left( \left( a_{ij1}\right)
,\ldots,\left( a_{ijn}\right) \right) =0$ is (weakly)
semiprojective.
\end{proposition}
Proof. Suppose $\varphi_{1}, \ldots, \varphi_{n}$ are (weakly) semiprojective
approximating functions for $\varphi$. Define functions $\{\Phi_{i,j,k}: 1\leq
i,j\leq m,1\leq k\leq n\}$ by
\[
\Phi_{i,j,k}\left( \{a_{s,t,l}\}_{s,t,l}\right) =f_{i,j}\left( \varphi
_{k}\left( \left( a_{s,t,1}\right) ,\ldots,\left( a_{s,t,n}\right)
\right) \right) ,
\]
where $f_{i,j}: {\mathcal{M}}_{m}(\mathbb{C})\rightarrow\mathbb{C}$ is such
that for any $m\times m$ matrix $A$, $A=\left( f_{i,j}(A)\right) $. It is clear that
$\{\Phi_{i,j,k}: 1\leq i,j\leq m,1\leq k\leq n\}$ are (weakly) semiprojective
approximating functions for $\Phi.$\hfill$\Box$
\begin{corollary}
Suppose $C^{\ast}\left( P_{1},\ldots,P_{n}|\varphi\right) $ and $C^{\ast
}\left( Q_{1},\ldots,Q_{n}|\psi\right) $ are (weakly) semiprojective, where
$P_{1},Q_{1},\ldots,P_{n},Q_{n}$ are projections. Suppose $m$ is a positive
integer and $\mathcal{A}$ is the universal C*-algebra generated by
$\{a_{ijk}:$$1\leq i,j\leq m,1\leq k\leq n\}$ subject to
\[
\varphi\left( \left( a_{ij1}\right) ^{\ast}\left( a_{ij1}\right)
,\ldots,\left( a_{ijn}\right) ^{\ast}\left( a_{ijn}\right) \right) =0,
\]
\[
\psi((a_{ij1})(a_{ij1})^{\ast},\ldots,(a_{ijn})(a_{ijn})^{\ast})=0.
\]
Then $\mathcal{A}$ is (weakly) semiprojective.
\end{corollary}
We can also define projectivity in terms of noncommutative continuous
functions. A unital C$^{\ast}$-algebra C*$\left( b_{1},\ldots,b_{n}%
|\varphi\right) $ is projective in the unital category if, for any unital
C*-algebra $\mathcal{A}$ and any ideal $\mathcal{J}$ in $\mathcal{A}$, and any
$x_{1},\ldots,x_{n}\in{\mathcal{A}}/{\mathcal{J}}$ with $\varphi(x_{1}%
,\ldots,x_{n})=0$, there exist elements $a_{1},\ldots,a_{n}$ in
$\mathcal{A}$, such that $x_{i}=a_{i}+{\mathcal{J}}$ and
$\varphi(a_{1},\ldots,a_{n})=0$.
\begin{remark}
(1)\ The universal C*-algebra generated by $A$ such that
$\left\Vert A\right\Vert \leq r$ is projective. The universal
C*-algebra generated by $\left\{ A_{n}\right\}_{n=1}^{\infty}$ such
that $\left\Vert A_{n}\right\Vert \leq r_{n}$ for a sequence of positive
numbers $\{r_n\}$, is projective. Thus every separable unital C*-algebra is
isomorphic to $\mathcal{A}/\mathcal{J}$, where $\mathcal{A}$ is a
projective C$^*$-algebra and $\cal J$ is an ideal of $\cal A$.
(2)\ If $\{{\mathcal{A}}_{n}\}_{n=1}^{\infty}$ is a sequence of
projective C$^*$-algebras, then the free product
$\ast_{n}{\mathcal{A}}_{n}$ is projective.
(3)\ If $\left\{ \mathcal{A}_{n}\right\}_{n=1}^{\infty}$ is a
sequence of separable unital semiprojective algebras, then
$\ast_{n}\mathcal{A}_{n}$ may not be weakly semiprojective. For
example,
${\mathcal{M}}_{2}(\mathbb{C})\ast{\mathcal{M}}_{3}(\mathbb{C})\ast
{\mathcal{M}}_{4}(\mathbb{C})\ast\cdots$ is not weakly
semiprojective, but each ${\mathcal{M}}_{n}(\mathbb{C})$ is
semiprojective.
\end{remark}
Although weakly semiprojective C*-algebras need not be finitely generated,
the identity representation of such an algebra must be a pointwise limit of
representations into finitely generated subalgebras.
\begin{proposition}
\label{DL}Suppose $\mathcal{A}$ is separable and weakly
semiprojective and $\left\{\mathcal{A}_{n}\right\}_{n=1}^{\infty}$
is an increasing sequence of finitely generated C*-subalgebras whose
union is dense in $\mathcal{A}$. Then there is
a positive integer $N$ and unital *-homomorphisms $\pi_{n}%
:\mathcal{A}\rightarrow\mathcal{A}_{n}$ for all $n\geq N$ such that%
\[
\left\Vert x-\pi_{n}\left( x\right) \right\Vert \rightarrow0
\]
for every $x\in\mathcal{A}$.
\end{proposition}
Proof. It follows from the hypothesis that $\operatorname{dist}\left( x,\mathcal{A}%
_{n}\right) \rightarrow0$ for every $x\in\mathcal{A}$. Thus, for
each $x\in\mathcal{A}$, there is an $x_n\in\mathcal{A}_{n}$ such that
$\left\Vert x-x_n\right\Vert \leq\operatorname{dist}\left(
x,\mathcal{A}_{n}\right) +\frac{1}{n}$. Define a unital
*-homomorphism $\pi:\mathcal{A}\rightarrow\prod_{1}^{\infty}
\mathcal{A}_{n}/\oplus _{1}^{\infty} \mathcal{A}_{n}$ by
\[
\pi\left( x\right) =\left[ \left\{x_n\right\} \right] .
\]
The desired result follows easily from the weak semiprojectivity of
$\mathcal{A}$.\hfill$\Box$
\begin{definition}
A unital $C^{\ast}$-algebra $\mathcal{A}$ is called GCR if for any
irreducible representation $\pi$ from $\mathcal{A}$ to
$\mathcal{B}(\mathcal{H})$,
$\mathcal{K}(\mathcal{H})\subseteq\pi(\mathcal{A})$.
\end{definition}
\begin{lemma}
If $\mathcal{A}$ is a unital GCR C$^{\ast}$-algebra, then there exists a
positive integer $n$ and a representation $\pi:{\mathcal{A}}\rightarrow
{\mathcal{M}}_{n}(\mathbb{C)}$ that is onto.
\end{lemma}
Proof. Suppose $\mathcal{J}$ is a maximal ideal in $\mathcal{A}$. Then
${\mathcal{A}}/{\mathcal{J}}$ is a simple C$^{*}$-algebra. Let $\pi:
{\mathcal{A}}/{\mathcal{J}}\rightarrow{\mathcal{B}(\mathcal{H})}$ be an
irreducible representation. Then $\pi\left( {\mathcal{A}}/{\mathcal{J}%
}\right) ^{\prime}=\mathbb{C}1$. Since $\mathcal{A}$ is GCR (composing $\pi$
with the quotient map ${\mathcal{A}}\rightarrow{\mathcal{A}}/{\mathcal{J}}$
gives an irreducible representation of $\mathcal{A}$), ${\mathcal{K}(\mathcal{H}%
)}\subseteq\pi\left( {\mathcal{A}}/{\mathcal{J}}\right) $ is a closed ideal. As
${\mathcal{A}}/{\mathcal{J}}$ is simple, ${\mathcal{K}(\mathcal{H})}=\pi\left(
{\mathcal{A}}/{\mathcal{J}}\right) $ is unital, and therefore $\mathcal{H}$ is
finite-dimensional. \hfill$\Box$
\vspace{0.1cm}
From the above lemma, it is not hard to see that if $\mathcal{A}$ is a simple
infinite-dimensional C$^{\ast}$-algebra, then $\mathcal{A}$ cannot be a
subalgebra of a GCR algebra.
\begin{corollary}
If $\mathcal{A}$ is a unital simple infinite-dimensional C*-algebra that is a
subalgebra of a direct limit of subalgebras of GCR C*-algebras, then
$\mathcal{A}$ is not weakly\ semiprojective.
\end{corollary}
Proof. It follows from the proof of Proposition \ref{DL} that there is a
*-homomorphism $\pi:\mathcal{A}\rightarrow\prod_{1}^{\infty}
\mathcal{A}_{n}/\oplus_{1}^{\infty} \mathcal{A}_{n}$, where $\{{\cal
A}_n\}_{n=1}^{\infty}$ is as in Proposition \ref{DL}. Assume,
toward a contradiction, that $\mathcal{A}$ is weakly semiprojective. Then
there is a representation $\pi_{n}:\mathcal{A}\rightarrow\mathcal{A}_{n}$ for
some positive integer $n$. Since $\mathcal{A}_{n}$ is a subalgebra
of a GCR algebra, it follows that $\mathcal{A}_{n},$ and hence
$\mathcal{A}$, has a finite-dimensional representation. Since
$\mathcal{A}$ is simple, every unital representation of $\mathcal{A}$ is
one-to-one, which implies that $\mathcal{A}$ is finite-dimensional,
a contradiction.\hfill$\Box$
\begin{remark}
The Cuntz algebra is weakly semiprojective, hence, the Cuntz algebra cannot be
embedded into a direct limit of subalgebras of GCR C*-algebras. The irrational
rotation algebra ${\mathcal{A}}_{\theta}$ is not weakly semiprojective, since
it can be embedded into the direct limit of subalgebras of GCR C*-algebras.
\end{remark}
We conclude this section with an observation concerning the reduced
free group C$^{*}$-algebra, $C_{r}^{\ast}\left(
\mathbb{F}_{n}\right)$.
\begin{proposition}
$C_{r}^{\ast}\left( \mathbb{F}_{n}\right) $ is not weakly semiprojective.
\end{proposition}
Proof. U. Haagerup and S. Thorbj\o rnsen \cite{Haag} proved that
there is a unital embedding $$\pi:C_{r}^{\ast}\left( \mathbb{F}_{n}\right)
\rightarrow\prod_{n\geq
1}\mathcal{M}_{n}\left( \mathbb{C}\right) /\oplus_{n\geq1}\mathcal{M}%
_{n}\left( \mathbb{C}\right) .$$ Since $C_{r}^{\ast}\left( \mathbb{F}%
_{n}\right) $ is infinite-dimensional and simple,
$C_{r}^{\ast}\left( \mathbb{F}_{n}\right)$ has no finite-dimensional
representation. Hence $C_{r}^{\ast}\left( \mathbb{F}_{n}\right) $
cannot be weakly semiprojective.\hfill$\Box$
\section{Finite Von Neumann Algebras and trace norms}
When we talked about weak semiprojectivity in C*-algebras we described it in
terms of mappings into algebras $\prod_{1}^{\infty}{\mathcal{B}}_{n}%
/\oplus_{1}^{\infty}{\mathcal{B}}_{n}$ being ``liftable''. There is another way
to describe this by replacing the $\prod_{1}^{\infty}{\mathcal{B}}_{n}%
/\oplus_{1}^{\infty}{\mathcal{B}}_{n}$ construction with ultraproducts.
Suppose $\mathbb{I}$ is an infinite set and $\omega$ is an ultrafilter on
$\mathbb{I}$, i.e., $\omega$ is a family of subsets of $\mathbb{I}$ such that
\newline(1) $\emptyset\notin\omega$ \newline(2) If $A,B\in\omega$, then $A\cap
B\in\omega$ \newline(3) For every subset $A$ in $\mathbb{I}$, either
$A\in\omega$ or $\mathbb{I}\setminus A\in\omega$.
One example of an ultrafilter is obtained by choosing an element
$\iota$ in $\mathbb{I}$ and letting $\omega$ be the collection of
all subsets of $\mathbb{I}$ that contain $\iota$. Such an
ultrafilter is called \emph{principle} ultrafilter and ultrafilter
not of this form are called \emph{free}. We call an ultrafilter
$\omega$ \emph{nontrivial} if it is free and there is a sequence
$\left\{ E_{n}\right\}_{n=1}^{\infty}$ in $\omega$ whose
intersection is empty. We can always choose $E_{1}=\mathbb{I}$ and,
by replacing $E_{n}$ with $\cap_{k=1}^{n}E_{k}$, we can assume that
$\left\{ E_{n}\right\}_{n=1}^{\infty}$ is decreasing. Throughout
this paper we will only use nontrivial ultrafilters.
Suppose $\{{\mathcal{A}}_{i}:i\in\mathbb{I\}}$ is a family of C*-algebras and
$\omega$ is a nontrivial ultrafilter on $\mathbb{I}$. Then
\[
{\mathcal{J}}=\left\{ \{A_{i}\}\in\prod_{i\in\mathbb{I}}{\mathcal{A}}%
_{i}:\ \lim_{i\rightarrow\omega}\left\Vert A_{i}\right\Vert =0\right\}
\]
is a norm-closed two-sided ideal in
$\prod_{i\in\mathbb{I}}{\mathcal{A}}_{i},$ and we call the quotient
the \emph{C*-algebraic ultraproduct} of the $\mathcal{A}_{i}$'s and
denote it by $\prod^{\omega}{\mathcal{A}}_{i}$. For an introduction
to ultraproducts see \cite{Don1}. It is easily verified that a
C*-algebra $\mathcal{A}$ is weakly semiprojective if and only if,
given a unital *-homomorphism $\pi:\mathcal{A}\rightarrow$
$\prod^{\omega }{\mathcal{A}}_{i}$ there are functions
$\pi_{i}:\mathcal{A}\rightarrow \mathcal{A}_{i}$ for each
$i\in\mathbb{I}$ such that, eventually along $\omega,$ $\pi_{i}$ is
a unital *-homomorphism and such that, for every $a\in\mathcal{A}$,
\[
\pi\left( a\right) =[\left\{ \pi_{i}\left( a\right) \right\} ] _{\omega}.
\]
We now want to look at an analogue of weak semiprojectivity for finite
von Neumann algebras with faithful tracial states. Suppose
$\mathcal{A}$ is a C*-algebra with a tracial state $\tau$. As in the
GNS construction there is a seminorm $\left\Vert \cdot\right\Vert
_{2, \tau}$ on $\mathcal{A}$ defined by $\left\Vert a\right\Vert
_{2,\tau}=\tau\left( a^{\ast}a\right) ^{1/2}$. More generally, if
$1\leq p<\infty$, we define $\Vert a\Vert_{p,\tau}=(\tau((a^{\ast
}a)^{p/2}))^{1/p}$. Since $C^{\ast}(a^{\ast}a)$ is isomorphic to
$C(X)$, where $X$ is the spectrum of $a^{\ast}a$, there is a
probability measure $\mu$ such that
$\tau(f(a^{\ast}a))=\int_{X}fd\mu$ for every $f\in C(X)$. Thus, for
$1\leq p<\infty$, \begin{equation}\label{equation,1}\Vert
a\Vert_{p,\tau}=0\ \ \mbox{if and only if}\ \ \Vert a\Vert_{2,
\tau}=0.\end{equation} If there is no confusion, we can simply use
$\|\cdot\|_2$ and $\|\cdot\|_p$ to denote $\|\cdot\|_{2,\tau}$ and
$\|\cdot\|_{p,\tau}$ respectively.
Suppose $\left\{ \left( \mathcal{A}_{i},\tau_{i}\right) :i\in
\mathbb{I}\right\} $ is a family of C*-algebras $\mathcal{A}_{i}$ with
tracial states $\tau_{i}$. We can define a trace $\rho$ on $\prod
_{i\in\mathbb{I}}{\mathcal{A}}_{i}$ by
\[
\rho\left( \left\{ a_{i}\right\} \right) =\lim_{i\rightarrow\omega}%
\tau_{i}\left( a_{i}\right) .
\]
The set $\mathcal{J}_{2}=\left\{ \{A_{i}\}\in\prod_{i\in\mathbb{I}%
}{\mathcal{A}}_{i}:\ \lim_{i\rightarrow\omega}\left\Vert A_{i}\right\Vert
_{2}=0\right\} $ is a closed two-sided ideal in $\prod_{i\in\mathbb{I}%
}{\mathcal{A}}_{i}$, and we call the quotient $\left( \prod_{i\in\mathbb{I}%
}{\mathcal{A}}_{i}\right) /\mathcal{J}_{2}$ the \emph{tracial ultraproduct}
of the $\mathcal{A}_{i}$'s, and we denote it by $\prod^{\omega}\left(
\mathcal{A}_{i},\tau_{i}\right) $. There is a natural faithful trace $\tau$
on $\prod^{\omega}\left( \mathcal{A}_{i},\tau_{i}\right) $ defined by
\[
\tau\left( \lbrack\left\{ a_{i}\right\} ]_{\omega}\right) =\lim
_{i\rightarrow\omega}\tau_{i}\left( a_{i}\right) .
\]
$\prod^{\omega}\left( \mathcal{A}_{i},\tau_{i}\right) $ is the image of
$\prod_{i\in\mathbb{I}}{\mathcal{A}}_{i}$ under the GNS representation
associated with $\rho$. By Equation (\ref{equation,1}) we
see that $\prod^{\omega}\left( \mathcal{A}_{i},\tau_{i}\right)
=\left( \prod_{i\in\mathbb{I}}{\mathcal{A}}_{i}\right)
/\mathcal{J}_{p}$, where
$\mathcal{J}_{p}=\left\{ \{A_{i}\}\in\prod_{i\in\mathbb{I}}{\mathcal{A}}%
_{i}:\ \lim_{i\rightarrow\omega}\left\Vert A_{i}\right\Vert _{p}=0\right\} .$
One immediate consequence of $\mathcal{J}_{p}=\mathcal{J}_{2}$ is the fact
that, on a bounded subset of any $\left( \mathcal{A},\tau\right) $ the norms
$\left\Vert \cdot\right\Vert _{2}$ and $\left\Vert \cdot\right\Vert _{p}$
generate the same topology.
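Indeed, if $\left\Vert a\right\Vert \leq M$ and $1\leq p\leq q<\infty$, then,
since $\mu$ is a probability measure, H\"{o}lder's inequality and the estimate
$\tau\left( \left( a^{\ast}a\right) ^{q/2}\right) \leq M^{q-p}\tau\left(
\left( a^{\ast}a\right) ^{p/2}\right) $ give
\[
\left\Vert a\right\Vert _{p}\leq\left\Vert a\right\Vert _{q}\leq
M^{1-p/q}\left\Vert a\right\Vert _{p}^{p/q}.
\]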
It was shown by S. Sakai \cite{sakai} that a tracial ultraproduct of
finite factors is a von Neumann algebra and is, in fact, a factor.
More generally, any tracial ultraproduct of von Neumann
algebras is a von Neumann algebra. Here we prove that any tracial
ultraproduct of C*-algebras is a von Neumann algebra.
\begin{theorem}
\label{theorem, saikai} Suppose
$\{\mathcal{A}_{i}\}_{i\in\mathbb{I}}$ is a family of
C$^{*}$-algebras with a tracial state $\tau_{i}$ on each
$\mathcal{A}_{i} $ and $\omega$ is a nontrivial ultrafilter on
$\mathbb{I}$. Then the tracial ultraproduct $\prod^{\omega}\left(
{\mathcal{A}}_{i},\tau _{i}\right) $ of
$\{\mathcal{A}_{i}\}_{i\in\mathbb{I}}$ is a von Neumann algebra.
\end{theorem}
Proof. Let $\mathcal{A}=\prod^{\omega}\left( {\mathcal{A}}_{i},\tau
_{i}\right) $. Note that $Ball(\overline{\mathcal{A}}^{\ast-SOT}%
)=\overline{Ball(\mathcal{A})}^{\Vert\cdot\Vert_{2}},$ i.e., the unit ball of
$\overline{\mathcal{A}}^{\ast-SOT}$ is equal to the $\Vert\cdot\Vert_{2}$
closure of the unit ball of $\mathcal{A}$.
Suppose $T\in\overline{Ball(\mathcal{A})}^{\|\cdot\|_{2}}$. Then for
any positive integer $n$, there exists $A_{n}\in Ball(\mathcal{A})$
such that $\|T-A_{n}\|_{2}\leq\frac{1}{4^{n}}.$ Write
$A_{n}=[\{A_{ni}\}]_{\omega}$ with each $A_{ni}\in
Ball(\mathcal{A}_{i})$.
Since $\omega$ is nontrivial, there is a family $\{E_{n}\}$ of elements of
$\omega$ such that
\[
I=E_{1}\supseteq E_{2}\supseteq\cdots\ \ \mbox{and}\ \ \cap_{n}E_{n}%
=\emptyset.
\]
Let
\[
F_{n}=\{i\in E_{n}: \forall1\leq k\leq n, \|A_{ki}-A_{ni}\|_{2}<\frac{1}%
{4^{n}}+\frac{1}{4^{k}}\}.
\]
Let $X_{i}=A_{ki}$ for $i\in F_{k}\setminus F_{k+1}$. For any $i\in F_{n}$, there
exists some $k\geq n$ such that $i\in F_{k}\setminus F_{k+1}$ and
\[
\|A_{ni}-X_{i}\|_{2}=\|A_{ni}-A_{ki}\|_{2}\leq\frac{1}{4^{n}}+\frac{1}{4^{k}%
}\leq\frac{2}{4^{n}}\leq\frac{1}{2^{n}}.
\]
Let $X=[\{X_{i}\}]_{\omega}$. Then $X\in Ball (\mathcal{A})$ and
\[
\|A_{n}-X\|_{2}\leq\frac{1}{2^{n}}.
\]
Hence $T=X\in Ball (\mathcal{A}).$ This implies that ${\cal
A}=\prod^{\omega}\left( {\mathcal{A}}_{i},\tau _{i}\right) $ is a
von Neumann algebra.\hfill$\Box$
\vspace{0.2cm}
The next theorem gives a generalization of Lin's theorem for
$\left\Vert \cdot\right\Vert _{p}$ on C*-algebras with trace. When
$p=2$, it was proved for finite factors in \cite{Don2}.
\begin{theorem}
\label{corollary,shanghai}For every $\varepsilon>0$ and every $1\leq
p<\infty,$ there exists $\delta>0$ such that, for any
C$^{\ast}$-algebra $\mathcal{A}$ with trace $\tau$, and
$A_{1},\ldots,A_{n}\in$ball$\left( \mathcal{A}\right) $ with $\Vert
A_{j}A_{j}^{\ast}-A_{j}^{\ast}A_{j}\Vert _{p}<\delta$ and $\Vert
A_{j}A_{k}-A_{k}A_{j}\Vert_{p}<\delta$, there exist
$B_{1},\ldots,B_{n}\in$ball$\left( \mathcal{A}\right) $ so that
$B_{j}B_{j}^{\ast}=B_{j}^{\ast}B_{j}$, $B_{j}B_{k}=B_{k}B_{j}$ and
$\sum _{j=1}^{n}\Vert A_{j}-B_{j}\Vert_{p}<\varepsilon$.
\end{theorem}
Proof. Assume the statement is false. Then there is an $\varepsilon>0$ such
that, for every positive integer $k$, there is a unital C*-algebra
$\mathcal{A}_{k}$ with trace $\tau_{k}$ and elements $A_{k,1},\ldots,A_{k,n}$
with $\Vert A_{k,j}A_{k,j}^{\ast}-A_{k,j}^{\ast}A_{k,j}\Vert_{p}<\frac{1}{k}$
and $\Vert A_{k,j}A_{k,i}-A_{k,i}A_{k,j}\Vert_{p}<\frac{1}{k}$, so that for
all $B_{1},\ldots,B_{n}\in$ball$\left( \mathcal{A}_{k}\right) $ with $B_{j}%
B_{j}^{\ast}=B_{j}^{\ast}B_{j}$ and $B_{j}B_{i}=B_{i}B_{j}$ we have
$\sum_{j=1}^{n}\Vert A_{k,j}-B_{j}\Vert_{p}\geq\varepsilon.$ The
tracial ultraproduct
$\mathcal{A}=\prod^{\omega}\left( \mathcal{A}_{k},\tau_{k}\right) =\left( \prod
_{k}{\mathcal{A}}_{k}\right) /\mathcal{J}_{p}$ is a
von Neumann algebra and $\{A_{j}=\left[ \left\{ A_{k,j}\right\}
\right] _{\omega}: 1\leq j\leq n\}$ is a family of commuting normal
operators. Hence, by the proof of Theorem 5.5 in \cite{BDF}, there
is a selfadjoint operator $C\in\mathcal{A}$ and bounded continuous
functions $f_{1},\ldots ,f_{n}:\mathbb{R}\rightarrow\mathbb{C}$ such
that $A_{j}=f_{j}\left( C\right) $ for $1\leq j\leq n.$ Write
$C=\left[ \left\{ C_{k}\right\}
\right] _{\omega}$ with each $C_{k}=C_{k}^{\ast}$. Define $B_{k,j}%
=f_{j}\left( C_{k}\right) $ for $1\leq j\leq n$ and
$k\in\mathbb{N}$. Then $A_{j}=\left[ \left\{ B_{k,j}\right\}
\right] _{\omega}$ for $1\leq j\leq n$ and $\left\{ B_{k,j}:1\leq
j\leq n\right\} $ is a family of commuting normal operators. So
\[
\varepsilon\leq\lim_{k\rightarrow\omega}\sum_{j=1}^{n}\Vert A_{k,j}%
-B_{k,j}\Vert_{p}=\sum_{j=1}^{n}\Vert A_{j}-f_{j}\left( C\right) \Vert
_{p}=0,
\]
which is a contradiction.\hfill$\Box$
\begin{remark}
Suppose $K$ is a compact nonempty subset of $\mathbb{C}$ that is a
continuous image of $[0,1]$. It follows from Proposition 39 in
\cite{Don} that there is a noncommutative continuous function
$\alpha$ such that, for every operator $T$ with $\Vert T\Vert\leq1$
we have $\alpha(T)$=0 if and only if $T$ is normal and the spectrum
of $T$ is contained in $K$. If, in Corollary
\ref{corollary,shanghai}, we add the condition that $\Vert
\alpha(A_{1})\Vert_{2}<\delta$, then we can choose $B_{1}$ so that
its spectrum is contained in $K$. In particular, if we add $\Vert
1-A_{1}^{\ast}A_{1}\Vert_{2}<\delta$, we can choose $B_{1}$ to be
unitary.
\end{remark}
The next theorem shows that, unlike in the C*-algebra case, commutative
C*-algebras are ``weakly semiprojective'' in the ``diffuse von Neumann algebra'' sense.
\begin{theorem}
\label{theorem,diffuse}Suppose, for $i\in\mathbb{I}$, $\mathcal{M}_{i}$
is a diffuse von Neumann algebra with faithful trace $\tau_i$ and
$\mathcal{A}$ is a commutative countably generated von Neumann
subalgebra of the ultraproduct $\prod^{\omega}\left(
\mathcal{M}_{i},\tau_{i}\right) .$ Then, for every $i,$ there is a
trace-preserving *-homomorphism $\pi_{i}:\mathcal{A}\rightarrow
\mathcal{M}_{i}$ such that, for every $a\in\mathcal{A},$
\[
a=[\left\{ \pi_{i}\left( a\right) \right\} ] _{\omega}.
\]
\end{theorem}
Proof. Suppose $P$ is a projection in $\prod^{\omega}\left( \mathcal{M}%
_{i},\tau_{i}\right) $. It is well-known that $P$ can be written as
$P=[\{A_{i}\}]_{\omega}$ with each $A_{i}$ a projection. Since $\tau_{i}%
(A_{i})\rightarrow\tau(P)$ as $i\rightarrow\omega$ and since each
$\mathcal{M}_{i}$ is diffuse, we can, for each $i$, find a
projection $P_{i}\in\mathcal{M}_{i}$ so that $\tau_{i}\left(
P_{i}\right) =\tau\left( P\right)$ and either $P_{i}\leq A_{i}$ or
$A_{i}\leq P_{i}$. Since $\left\Vert A_{i}-P_{i}\right\Vert
_{2}=\sqrt{\left\vert \tau_{i}\left( P_{i}\right) -\tau_{i}\left(
A_{i}\right) \right\vert }\rightarrow0$, we have $P=[\left\{
P_{i}\right\} ]_{\omega}$. Hence, every projection in
$\prod^{\omega}\left( \mathcal{M}_{i},\tau_{i}\right) $ can be
lifted to projections with the same trace.
Next suppose $P=[\left\{ P_{i}\right\} ]_{\omega},Q=[\left\{ Q_{i}\right\}
]_{\omega}$ are projections in $\prod^{\omega}\left( \mathcal{M}_{i},\tau
_{i}\right) $ such that $P\leq Q,$ and, for every $i$, $P_{i}\leq Q_{i}$ and
$\tau_{i}\left( P_{i}\right) =\tau\left( P\right) $ and $\tau_{i}\left(
Q_{i}\right) =\tau\left( Q\right) $. Suppose $E$ is a projection in
$\prod^{\omega}\left( \mathcal{M}_{i},\tau_{i}\right) $ and $P<E<Q$.
Applying what we just proved to the projection $E-P$ in the ultraproduct
\begin{center}
$\left( Q-P\right) \left( \prod^{\omega}\left( \mathcal{M}_{i},\tau
_{i}\right) \right) \left( Q-P\right) =\prod^{\omega}\left( Q_{i}%
-P_{i}\right) \mathcal{M}_{i}\left( Q_{i}-P_{i}\right) ,$
\end{center}
we can find projections $E_{i}\in\mathcal{M}_{i}$ so that $P_{i}\leq
E_{i}\leq Q_{i}$, $\tau_{i}\left( E_{i}\right) =\tau\left(
E\right) $ and $E=[\left\{ E_{i}\right\} ]_{\omega}$. Since
$\mathcal{A}$ is countably generated and commutative, we know from
von Neumann's Theorem that $\mathcal{A}$ is generated by a single
selfadjoint element $T$ with $0\leq T\leq1.$ Since $\prod^{\omega}\left(
\mathcal{M}_{i},\tau_{i}\right) $ is diffuse, the chain $\left\{
\chi_{\lbrack0,s)}\left( T\right) :0\leq s\leq1\right\} $ can be
extended to a chain $\left\{ P\left( t\right) :t\in\left[
0,1\right] \right\} $ such that $\tau\left( P\left( t\right)
\right) =t$ for $0\leq t\leq1$. Repeatedly using the result above
we can find projections $P_i\left( t\right)$ for each $i$ and each
rational $t\in\left[ 0,1\right] $ such that $\tau_{i}\left(
P_i\left( t\right)\right) =t$
and $P\left( t\right) =[\left\{ P_i\left( t\right)\right\} ]_{\omega}%
$, and such that $P_i\left( s\right)\leq P_i\left( t\right)$ for
all $i$ and $s\leq t$. Hence, for each $t\in\left[ 0,1\right] $ and
each $i\in\mathbb{I},$ we can define%
\[
P_i\left( t\right)=\sup\left\{ P_i\left( s\right):s\leq
t,s\in\mathbb{Q}\right\} =\inf\left\{ P_i\left( s\right) :s\geq
t,s\in\mathbb{Q}\right\} .
\]
Then we must have $P\left( t\right) =[\left\{ P_i\left( t\right)
\right\} ] _{\omega}$ for every $t\in\left[ 0,1\right] $. For each
$i$, the map $P\left( t\right) \mapsto P_i\left( t\right)$
extends to a trace-preserving *-homomorphism $\rho_{i}:\left\{
P\left( t\right) :t\in\left[ 0,1\right] \right\}
^{\prime\prime}\rightarrow\mathcal{M}_{i} $, and we can let
$\pi_{i}=\rho_{i}|_{\mathcal{A}}$.\hfill$\Box$
\begin{corollary}
\label{corollary,tracepreserving homomorphism} Suppose $\mathcal{M}_{i}$ is a
diffuse von Neumann algebra for every $i\in\mathbb{I}$, $\mathcal{A}$ is a
commutative countably generated unital C*-algebra, and $\pi:\mathcal{A}%
\rightarrow$ $\prod^{\omega}\left( \mathcal{M}_{i},\tau_{i}\right)
$ is a unital *-homomorphism. Then, for every $i,$ there is a
*-homomorphism $\pi_{i}:\mathcal{A}\rightarrow\mathcal{M}_{i}$ such
that
\begin{enumerate}
\item $\pi\left( a\right) =\left[ \left\{ \pi_{i}\left( a\right)
\right\} \right] _{\omega}$ for every $a\in\mathcal{A}$, and
\item $\tau_{i}\circ\pi_{i}=\tau\circ\pi$ for every $i\in\mathbb{I}$.
\end{enumerate}
\end{corollary}
\begin{corollary}
For every $\varepsilon>0$ there is a positive integer $N$ and
$\delta>0$ such that, for every diffuse finite von Neumann algebra
$\mathcal{M}$ with faithful trace $\tau$ and every $U\in$ball$\left(
\mathcal{M}\right) ,$ if $\left\vert \tau\left( U^{k}\right) \right\vert
<\delta$ for $1\leq k\leq N$ and $\left\Vert 1-U^{\ast}U\right\Vert
_{2}<\delta$, then there is a Haar unitary $V\in\mathcal{M}$ such
that $\left\Vert U-V\right\Vert _{2}<\varepsilon$.
\end{corollary}
\begin{remark}
It follows from Theorem \ref{theorem,diffuse} that the hypothesis in Theorem 4
in \cite{Don4} that $w_{1},w_{2},\ldots$ are Haar unitaries can be replaced by
the assumption that they are unitaries. In particular, if $\mathcal{M}$ is a
von Neumann algebra with a faithful trace $\tau$, and $\mathcal{N}$ is a
diffuse subalgebra of $\mathcal{M}$, and $\{v_{n}\}$ is a sequence of Haar
unitaries in $\mathcal{M}$ and $\{w_{n}\}$ is a sequence of unitaries in
$\mathcal{N}$, and $\Vert w_{n}-v_{n}\Vert_{2}\rightarrow0$, then there exists
a sequence $\{u_{n}\}$ of Haar unitaries in $\mathcal{N}$ such that $\Vert
u_{n}-v_{n}\Vert_{2}\rightarrow0$.
\end{remark}
We next give an analogue of Theorem \ref{theorem,diffuse} with $\mathcal{A}$
hyperfinite instead of commutative, but with each of the $\mathcal{M}_{i}$'s a
II$_{1}$ factor.
\begin{lemma}
\label{lemma,con}\cite{Con} Let $\mathcal{M}$ be a separable factor and
$\omega$ a nontrivial ultrafilter. Let $E=[\{E_{i}\}]_{\omega}$ and
$F=[\{F_{i}\}]_{\omega}$ be equivalent projections in $\prod^{\omega
}\mathcal{M}$ with $E_{i}$'s and $F_{i}$'s projections. Suppose $V$ is a
partial isometry from $E$ to $F$. Then $V=[\{V_{i}\}]_{\omega}$, where $V_{i}$
is a partial isometry from $E_{i}$ to $F_{i}$.
\end{lemma}
\begin{lemma}
\label{lemma,lift trace-preserving homo} Suppose each
$\mathcal{M}_{i}$ is a II$_{1}$ factor with the trace $\tau_i$ and
$\mathcal{A}\subseteq\mathcal{B}$ are finite-dimensional
C*-subalgebras of the ultraproduct $\prod^{\omega}\left( \mathcal{M}_{i}%
,\tau_{i}\right) , $ and suppose for every $i,$ there is a trace-preserving
homomorphism $\pi_{i}:\mathcal{A}\rightarrow\mathcal{M}_{i}$ such that, for
every $a\in\mathcal{A},$ $a=[\left\{ \pi_{i}\left( a\right) \right\}
]_{\omega}.$ Then for every $i,$ there is a trace-preserving homomorphism
$\rho_{i}:\mathcal{B}\rightarrow\mathcal{M}_{i}$ such that,
(1) for every $b\in\mathcal{B},$
\[
b=[\left\{ \rho_{i}\left( b\right) \right\} ]_{\omega},
\]
and
(2) for every $i,$ $\rho_{i}|_{\mathcal{A}}=\pi_{i}$.
\end{lemma}
Proof. To avoid a notational nightmare, we will describe the proof for a
specific example. It will be easy to see how this technique applies
universally. Suppose $\mathcal{A}$ is isomorphic to $\mathcal{M}_{2}%
\oplus\mathcal{M}_{3}$ and $\mathcal{B}$ is isomorphic to $\mathcal{M}%
_{4}\oplus\mathcal{M}_{5}$ where the inclusion $\mathcal{A}\subset\mathcal{B}$
identifies $A\oplus B$ with $\left( A\oplus A\right) \oplus\left( A\oplus
B\right) $. Let $\left\{ e_{st}:1\leq s,t\leq4\right\} $ denote matrix
units for $\mathcal{M}_{4}\oplus0$ and $\left\{ f_{st}:1\leq s,t\leq
5\right\} $ denote matrix units for $0\oplus\mathcal{M}_{5}$. Then
$$\mathcal{S}_{1}=\left\{ e_{11}+e_{33}+f_{11},e_{12}+e_{34}+f_{12}%
,e_{21}+e_{43}+f_{21},e_{22}+e_{44}+f_{22}\right\} $$ is a set of
matrix units for
$\mathcal{M}_{2}\oplus0$ and $$\mathcal{S}_{2}=\left\{ f_{33},f_{34}%
,f_{35},f_{43},f_{44},f_{45},f_{53},f_{54},f_{55}\right\} $$ is a
set of matrix units
for $0\oplus\mathcal{M}_{3}$. Now $\pi_{i}$ is defined on $\mathcal{S}%
_{1}\cup\mathcal{S}_{2}.$ We want to extend $\pi_{i}$ to all of the matrix
units for $\mathcal{B}$. Note that $\left\{ e_{11}+e_{33}+f_{11},e_{22}%
+e_{44}+f_{22},f_{33},f_{55}\right\} $ is a commuting family that
is contained in $span\left( \left\{ e_{ss}:1\leq s\leq4\right\}
\cup\left\{ f_{ss}:1\leq s\leq5\right\} \right) ,$ and, using the
techniques in the proof of Theorem \ref{theorem,diffuse}, we can
extend $\pi_{i}$ to a trace-preserving *-homomorphism $\rho_{i}$ on
$span\left( \left\{ e_{ss}:1\leq s\leq4\right\} \cup\left\{
f_{ss}:1\leq s\leq5\right\} \right) $. Using the fact that
\[
e_{11}\left( e_{12}+e_{34}+f_{12}\right) =e_{12},
\]
we can naturally define
\[
\rho_{i}\left( e_{12}\right) =\rho_{i}\left( e_{11}\right) \pi_{i}\left(
e_{12}+e_{34}+f_{12}\right) .
\]
The definition of $\rho_{i}$ for the remaining matrix units in $\mathcal{B}$
is immediately obtained using Lemma \ref{lemma,con}.\hfill$\Box$
\begin{theorem}
\label{theorem,hyperfinite} If each $\mathcal{M}_{i}$ is a II$_{1}$
factor with the trace $\tau_i$ and $\mathcal{A}$ is a countably
generated hyperfinite von Neumann subalgebra of the ultraproduct
$\prod^{\omega}\left( \mathcal{M}_{i},\tau_{i}\right) ,$ then, for
every $i,$ there is a trace-preserving homomorphism $\pi
_{i}:\mathcal{A}\rightarrow\mathcal{M}_{i}$ such that, for every
$a\in\mathcal{A},$
\[
a=[\left\{ \pi_{i}\left( a\right) \right\} ]_{\omega}.
\]
\end{theorem}
Proof. \ There is an increasing sequence $\left\{ \mathcal{A}_{n}\right\} $
of finite-dimensional C*-subalgebras of $\mathcal{A}$ whose union
$\mathcal{D}$ is $\left\Vert \cdot\right\Vert _{2}$-dense in $\mathcal{A}$
such that $\mathcal{A}_{1}=\mathbb{C}\cdot1$. Using Lemma
\ref{lemma,lift trace-preserving homo}, for every $i,$ there is a
trace-preserving homomorphism $\pi_{i}:\mathcal{D}\rightarrow\mathcal{M}_{i}$
such that, for every $a\in\mathcal{D},$
\[
a=[\left\{ \pi_{i}\left( a\right) \right\} ]_{\omega}.
\]
However, since each $\pi_{i}$ is an isometry in $\left\Vert
\cdot\right\Vert _{2},$ we can extend $\pi_{i}$ uniquely to an
isometry (i.e., trace-preserving) linear map (still called
$\pi_{i}$) from $\mathcal{A}$ to $\mathcal{M}_{i}$. Since
multiplication and the map $x\rightarrow x^{\ast}$ are $\left\Vert
\cdot\right\Vert _{2}$-continuous on bounded sets, it follows that
$\pi_{i}:\mathcal{A}\rightarrow\mathcal{M}_{i}$ is a *-homomorphism,
and that
\[
a=[\left\{ \pi_{i}\left( a\right) \right\} ]_{\omega}%
\]
holds for every $a\in\mathcal{A}$.
Graphical models such as Bayesian networks and Markov random fields provide a powerful framework for reasoning about
conditional dependency structures over many variables, and have found wide application in many areas including error correcting codes, computer vision, and computational biology \citep{Wainwright08,Koller_book}.
Given a graphical model, which may be estimated from empirical data or constructed by domain expertise,
the term \emph{inference} refers generically to answering
probabilistic queries about the model,
such as computing marginal probabilities or maximum {\it a posteriori} estimates.
Although these inference tasks are NP-hard in the worst case,
recent algorithmic advances, including the development of variational methods and the family of algorithms collectively called belief propagation, provide approximate
or exact solutions for these problems in many practical circumstances.
In this work we will focus on three common types of inference tasks.
The first involves \emph{maximization} or \emph{max-inference}
tasks, sometimes called maximum {\it a posteriori} (MAP) or most probable explanation
(MPE) tasks, which look for a mode of the joint probability.
The second are \emph{sum-inference} tasks, which include calculating the marginal probabilities
or the normalization constant of the distribution
(corresponding to the probability of evidence in a Bayesian network). Finally, the main focus of
this work is on \emph{marginal MAP}, a type of \emph{mixed-inference} problem that seeks a partial configuration
of variables that maximizes those variables' marginal probability, with the remaining variables summed out.%
\footnote{In some literature \citep[e.g.,][]{Park04}, marginal MAP is simply referred to as MAP, and the joint MAP problem is called MPE.}
A \emph{marginal MAP} problem can arise, for example, as a MAP problem on models with hidden variables whose predictions are not of interest,
or as a robust optimization variant of MAP with some unknown or noisily observed parameters marginalized w.r.t. a prior distribution.
It can be also treated as a special case of the more complicated frameworks of stochastic programming \citep{birge1997introduction} or decision networks
\citep{howard2005influence, liu12b}.
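As a concrete toy illustration (the numerical values below are ours, chosen only to make the point), a brute-force computation on a two-variable model shows that the marginal MAP assignment can differ from the joint MAP assignment:

```python
import numpy as np

# Toy model p(a, b) proportional to exp(theta[a, b]), with a the sum
# variable and b the max variable (both binary). The entries are
# illustrative values chosen so that the two tasks disagree.
theta = np.array([[3.0, 2.5],
                  [0.0, 2.5]])
p = np.exp(theta)
p /= p.sum()

# Joint MAP: maximize over both variables at once; the single largest
# entry exp(3.0) selects b = 0.
a_star, b_star = np.unravel_index(np.argmax(p), p.shape)

# Marginal MAP: sum out a, then maximize over b; the aggregate mass of
# the b = 1 column, 2 * exp(2.5), exceeds exp(3.0) + exp(0.0).
p_b = p.sum(axis=0)
b_mmap = int(np.argmax(p_b))
```

Here the marginal MAP choice is $b=1$ while the joint MAP choice is $b=0$, so the two tasks genuinely disagree even on this two-node model.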
These three types of inference tasks are listed in order of increasing difficulty: max-inference is NP-complete, while sum-inference is \#P-complete,
and mixed-inference is $\mathrm{NP}^{\mathrm{PP}}$-complete \citep{Park04, de2011new}.
Practically speaking, max-inference tasks have a host of efficient algorithms such as
loopy max-product BP, tree-reweighted BP, and dual decomposition~\cite[see e.g., ][]{Koller_book, Sontag_optbook}.
Sum-inference is more difficult than max-inference:
for example there are models, such as those with binary attractive pairwise potentials, on which sum-inference is \#P-complete but max-inference is tractable \citep{greig1989exact, jerrum1993polynomial}.
Mixed-inference is even much harder than either max- or sum- inference
problems alone: marginal MAP can be NP-hard even on tree structured graphs, as illustrated in the example in \figref{fig:hiddenchain} \citep{Koller_book}.
The difficulty arises in part because the
max and sum operators do not commute, causing the feasible elimination orders to have much higher induced width than for sum- or max-inference.
Viewed another way, the marginalization step may destroy the dependency structure of the original graphical model, making the subsequent maximization step far more challenging.
Probably for these reasons, there is much less work on marginal MAP than that on joint MAP or marginalization, despite its importance to many practical problems.
\textbf{Contributions.}
We reformulate the mixed-inference problem as a joint maximization problem, via a free energy objective that extends the well-known log-partition function duality, making it possible to easily extend essentially arbitrary variational algorithms to marginal MAP.
In particular, we propose a novel ``mixed-product'' BP algorithm that is a hybrid of max-product, sum-product, and a special ``argmax-product'' message updates, as well as a convergent proximal point algorithm that works by iteratively solving pure (or annealed) marginalization tasks. We also present junction graph BP variants of our algorithms that work on models with higher order cliques, and we discuss mean field methods, highlighting their connection to the expectation-maximization (EM) algorithm.
We give theoretical guarantees on the global and local optimality of our algorithms for cases when the sum variables form tree structured subgraphs. Our numerical experiments show that our methods can provide significantly better solutions than existing algorithms, including a similar hybrid message passing algorithm by \citet{Jiang10} and a state-of-the-art algorithm based on local search methods.
\textbf{Related Work.}
Expectation-maximization (EM) or variational EM provide one straightforward approach for marginal MAP, by viewing the sum nodes as hidden variables and the max nodes as parameters to be estimated;
however, EM is prone to getting stuck at sub-optimal configurations.
The classical state-of-the-art approaches include local search methods \citep[e.g.,][]{Park04}, Markov chain Monte Carlo methods \citep[e.g.,][]{Doucet02, yuan2004annealed}, and variational elimination based methods \citep[e.g.,][]{dechter2003mini, maua2012anytime}.
\citet{Jiang10} recently proposed a hybrid message passing algorithm that has a similar form to our mixed-product BP algorithm, but without theoretical guarantees; we show in Section~\ref{sec:compare_jiang} that the algorithm of \citet{Jiang10} can be viewed as an approximation of the marginal MAP problem that exchanges the order of sum and max operators.
Another message-passing-style algorithm was proposed very recently by \citet{altarelli2011stochastic} for general multi-stage stochastic optimization problems based on survey propagation, which again does not have optimality guarantees and has a more complicated form.
Finally, \citet{ibrahimi2011robust} introduces a robust max-product belief propagation for solving a relevant worst-case robust optimization problem, where the hidden variables are minimized instead of marginalized.
To the best of our knowledge, our work is the first general variational framework for marginal MAP, and provides the first strong optimality guarantees.
We begin in Section~\ref{sec:background} by introducing background on graphical models and variational inference.
We then introduce a novel variational dual representation for marginal MAP in Section~\ref{sec:mixduality}, and propose analogues of the Bethe and tree-reweighted approximations in Section~\ref{sec:variational}.
A class of ``mixed-product" message passing algorithms is proposed and analyzed in Section~\ref{sec:message} and convergent alternatives are proposed in Section~\ref{sec:proximal} based on proximal point methods.
We then discuss the EM algorithm and its connection to our framework in Section~\ref{sec:EM},
and extend our algorithms to junction graphs in Section~\ref{sec:junctiongraph}.
Finally, we present numerical results in Section~\ref{sec:experiments} and conclude the paper in Section~\ref{sec:conclusion}.
\section{Background}
\label{sec:background}
\subsection{Graphical Models}
\newcommand{\mathcal{I}}{\mathcal{I}}
Let $\boldsymbol{x} = \{x_1, x_2, \cdots, x_n\}$ be a random vector in a discrete space $\mathcal{X}= \mathcal{X}_1\times \cdots \times \mathcal{X}_n$. For an index set $\alpha \subseteq \{1, \cdots, n \}$, denote by $\boldsymbol{x}_\alpha$ the sub-vector $\{x_{i} \colon i \in \alpha \}$, and similarly by $\mathcal{X}_{\alpha}$ the cross product of $\{\mathcal{X}_i \colon i \in \alpha \}$.
A graphical model defines a factorized probability on $\boldsymbol{x}$,
\begin{align}
p(\boldsymbol{x}) = \frac{1}{Z} \prod_{\alpha \in \mathcal{I}} \psi_{\alpha}( \boldsymbol{x}_\alpha) &&\text{~~~~or~~~~} && p(\boldsymbol{x} ; \boldsymbol{\theta}) = \exp[\sum_{\alpha \in \mathcal{I}} \theta_{\alpha}(\boldsymbol{x}_\alpha) - \Phi(\boldsymbol{\theta})],
\end{align}
where $\mathcal{I}$ is a set of subsets of variable indices, $\psi_{\alpha} \colon \mathcal{X}_{\alpha} \to \mathbb{R}^+ $ is called a factor function,
and $\theta_{\alpha}(\boldsymbol{x}_\alpha) = \log \psi_{\alpha}(\boldsymbol{x}_\alpha)$.
Since the $x_i$ are discrete, the functions $\psi$ and $\theta$ are tables; by alternatively viewing $\theta$ as
a vector, it is interpreted as the natural parameter in an overcomplete, exponential family representation.
Let $\boldsymbol{\psi}$ and $\boldsymbol{\theta}$ be the joint vectors of all $\psi_{\alpha}$ and $\theta_{\alpha}$ respectively, e.g., $\boldsymbol{\theta} = \{ \theta_{\alpha}(\boldsymbol{x}_\alpha) \colon \alpha \in \mathcal{I}, \boldsymbol{x}_\alpha \in \mathcal{X}_{\alpha} \} $.
The normalization constant $Z$, called the \emph{partition function}, normalizes the probability to sum to one, and $\Phi(\boldsymbol{\theta}) \mathrel{\mathop:}= \log Z$ is called the log-partition function,
\begin{align*}
\Phi(\boldsymbol{\theta}) = \log \sum_{\boldsymbol{x} \in \mathcal{X}}\exp[\theta(\boldsymbol{x})],
\end{align*}
where we define $\theta(\boldsymbol{x}) = \sum_{\alpha \in \mathcal{I}} \theta_{\alpha}(\boldsymbol{x}_\alpha)$ to be the joint potential function that maps from $\mathcal{X}$ to $\mathbb{R}$.
The factorization structure of $p(\boldsymbol{x})$ can be represented by an undirected graph
$G=(V,E)$, where each node $i\in V$ maps to a variable $x_i$, and each edge $(ij)\in E$ corresponds to two variables $x_i$ and $x_j$ that coappear in some factor function $\psi_{\alpha}$, that is, $\{ i, j \} \subseteq \alpha$.
The set $\mathcal{I}$ is then a set of cliques (fully connected subgraphs) of $G$.
For the purpose of illustration, we mainly restrict our attention to pairwise models, for which $\mathcal{I}$ is the set of nodes and edges, i.e., $\mathcal{I} = V \cup E$. However, we show how to extend our algorithms to models with higher order cliques in Section~\ref{sec:junctiongraph}.
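To make the representation concrete, the following Python sketch (not from the paper; the potential values are arbitrary toy numbers chosen for illustration) builds a small pairwise model and computes the log-partition function and a singleton marginal by brute-force enumeration, which is feasible only at toy scale:

```python
import itertools
import math

# A tiny pairwise model on a 3-node chain x0 - x1 - x2, binary states.
# theta_node[i] are singleton log-potentials, theta_edge[(i,j)] pairwise ones.
# (Toy numbers for illustration only.)
theta_node = {0: [0.5, -0.5], 1: [0.0, 0.0], 2: [-0.2, 0.2]}
theta_edge = {(0, 1): [[0.3, -0.3], [-0.3, 0.3]],
              (1, 2): [[0.3, -0.3], [-0.3, 0.3]]}

def joint_potential(x):
    """theta(x) = sum_i theta_i(x_i) + sum_ij theta_ij(x_i, x_j)."""
    val = sum(theta_node[i][x[i]] for i in theta_node)
    val += sum(theta_edge[e][x[e[0]]][x[e[1]]] for e in theta_edge)
    return val

# Log-partition function by enumeration (exponential in n in general).
states = list(itertools.product([0, 1], repeat=3))
log_Z = math.log(sum(math.exp(joint_potential(x)) for x in states))

# Single-node marginal p(x_0) from the same enumeration.
p0 = [sum(math.exp(joint_potential(x) - log_Z) for x in states if x[0] == v)
      for v in (0, 1)]
```

The brute-force sum is exactly the primal definition of $\Phi(\boldsymbol{\theta})$; the variational methods below exist precisely because this enumeration is infeasible beyond toy models.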
\subsection{Sum-Inference Problems and Variational Approximation}
Sum-inference is the task of marginalizing (summing out) variables in the model, e.g., calculating the marginal probabilities of single variables, or the normalization constant $Z$,
\begin{align}
p(x_{i}) = \frac{1}{Z}\sum_{\boldsymbol{x}_{V \setminus \{i\}}} \exp[\theta(\boldsymbol{x})], && Z = \sum_{\boldsymbol{x}} \exp[\theta(\boldsymbol{x})].
\end{align}
Unfortunately, the problem is generally \#P-complete, and the straightforward calculation requires summing over an exponential number of terms. Variational methods are a class of approximation algorithms that transform the marginalization problem into a continuous optimization problem, which is then typically solved approximately.
\textbf{Marginal Polytope.}
The marginal polytope is a key concept in variational inference. We define the \emph{marginal polytope} $\mathbb{M}$ to be the set of local marginal probabilities ${\boldsymbol{\tau}} = \{\tau_{\alpha}(\boldsymbol{x}_\alpha) \colon \alpha \in \mathcal{I} \}$ that are extensible to a valid joint distribution, i.e.,
\begin{equation}
\mathbb{M} = \{{\boldsymbol{\tau}} \ :\ \text{$\exists$ joint distribution $q(\boldsymbol{x})$, s.t. $\tau_{\alpha}(\boldsymbol{x}_\alpha) = \sum_{\boldsymbol{x}_{V \setminus \alpha}} q(\boldsymbol{x})$ for all $\alpha \in \mathcal{I}$}\}.
\label{equ:marginalpolytope}
\end{equation}
Denote by $\mathcal{Q}[{\boldsymbol{\tau}}]$ the set of joint distributions whose marginals are consistent with ${\boldsymbol{\tau}} \in \mathbb{M}$; by the principle of maximum entropy \citep{maxent}, there exists a unique distribution in $\mathcal{Q}[{\boldsymbol{\tau}}]$ that has maximum entropy and follows the exponential family form for some $\boldsymbol{\theta}$.%
\footnote{In the case that $p(\boldsymbol{x})$ has zero elements, the maximum entropy distribution is still unique and satisfies the exponential family form, but the corresponding $\boldsymbol{\theta}$ has negative infinite values \citep{maxent}.}
With an abuse of notation, we denote these unique global distributions by $\tau(\boldsymbol{x})$, and we do not distinguish $\tau(\boldsymbol{x})$ and ${\boldsymbol{\tau}}$ when it is clear from the context.
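As a small numerical illustration of the maximum-entropy principle invoked above (with assumed toy marginals, not values from the text), one can scan all pairwise joints consistent with two fixed binary singleton marginals and check that the entropy is maximized by the product distribution:

```python
import math

# Fixed singleton (pseudo-)marginals for two binary variables (toy values).
tau_x = [0.4, 0.6]
tau_y = [0.7, 0.3]

def entropy(q):
    return -sum(p * math.log(p) for p in q if p > 0)

# All joints q(x,y) with these marginals form a 1-parameter family:
# q = [t, 0.4 - t, 0.7 - t, t - 0.1] for t in [0.1, 0.4].
best_t, best_H = None, -1.0
for k in range(1, 3000):
    t = 0.1 + (0.4 - 0.1) * k / 3000.0
    q = [t, 0.4 - t, 0.7 - t, t - 0.1]
    if min(q) < 0:
        continue
    H = entropy(q)
    if H > best_H:
        best_t, best_H = t, H
# The maximum-entropy member is the product distribution,
# i.e. t = tau_x[0] * tau_y[0] = 0.28, up to the grid resolution.
```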
\textbf{Log-partition Function Duality.}
A key result underlying many variational methods is that the log-partition function $\Phi(\boldsymbol{\theta})$ is a convex function of $\boldsymbol{\theta}$ and can be rewritten in a convex dual form,
\begin{align}
\Phi(\boldsymbol{\theta}) = \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \big\{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H(\vtau) \big\},
\label{equ:sumduality}
\end{align}
where $\langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle = \sum_{\alpha} \sum_{\boldsymbol{x}_\alpha} \theta_{\alpha}(\boldsymbol{x}_\alpha) \tau_\alpha(\boldsymbol{x}_\alpha)$
is the vectorized inner product,
and $H(\vtau)$ is the entropy of the corresponding global distribution $\tau(\boldsymbol{x})$, i.e., $H(\vtau) = - \sum_{\boldsymbol{x}} \tau(\boldsymbol{x}) \log \tau(\boldsymbol{x})$. The unique maximum ${\boldsymbol{\tau}}^*$ of \eqref{equ:sumduality} exactly equals the marginals of the original distribution $p(\boldsymbol{x}; \boldsymbol{\theta})$, that is, $\tau^*(\boldsymbol{x}) = p(\boldsymbol{x}; \boldsymbol{\theta})$.
We call $F_{sum}({\boldsymbol{\tau}}, \boldsymbol{\theta}) = \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H(\vtau)$ the sum-inference free
energy (although technically the {\it negative} free energy).
The dual form \eqref{equ:sumduality} transforms the marginalization problem into a continuous optimization, but does not make it any easier:
the marginal polytope $\mathbb{M}$ is defined by an exponential number of linear constraints, and the entropy term in the objective function is as difficult to calculate as the log-partition function.
However, \eqref{equ:sumduality}
provides a framework for deriving efficient approximate inference algorithms by approximating
both the marginal polytope and the entropy \citep{Wainwright08}.
\textbf{BP-like Methods.} Many approximation methods replace $\mathbb{M}$ with the \emph{locally consistent polytope} ${\mathbb{L}(G)}$; in pairwise models, it is the set of singleton and pairwise ``pseudo-marginals" $\{\tau_i (x_i) \colon i \in
V\}$ and $\{\tau_{ij}(x_i, x_j) \colon (ij) \in E\}$ that are consistent on their intersections, that is,
\begin{equation*}
{\mathbb{L}(G)} = \{ \tau_i, \tau_{ij} ~ \colon ~ \sum_{x_i} \tau_{ij}(x_i,x_j) = \tau_j(x_j),
\sum_{x_i}\tau_{i}(x_i) = 1, \tau_{ij}(x_i, x_j) \geq 0 \}.
\end{equation*}
Since not all such pseudo-marginals have valid global distributions,
it is easy to see that ${\mathbb{L}(G)}$ is an outer bound of $\mathbb{M}$, that is, $ \mathbb{M} \subseteq {\mathbb{L}(G)}$.
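The gap between ${\mathbb{L}(G)}$ and $\mathbb{M}$ can be seen concretely on a binary 3-cycle; the sketch below checks the classic counterexample of pseudo-marginals that are locally consistent but globally unrealizable:

```python
import itertools

# Classic example showing L(G) strictly contains M on a 3-cycle of binary
# variables: uniform singletons and perfectly anti-correlated pairwise
# pseudo-marginals on every edge.
tau_i = [0.5, 0.5]
tau_ij = {(a, b): (0.5 if a != b else 0.0) for a in (0, 1) for b in (0, 1)}

# Local consistency holds: each row of tau_ij sums to the singleton marginal.
row_sums = [sum(tau_ij[(a, b)] for b in (0, 1)) for a in (0, 1)]

# But no joint q(x0,x1,x2) can have these pairwise marginals: q must vanish
# on every state where some edge agrees (tau_ij is zero there), and on an
# odd cycle every binary configuration has at least one agreeing edge.
support = [x for x in itertools.product((0, 1), repeat=3)
           if x[0] != x[1] and x[1] != x[2] and x[2] != x[0]]
# support is empty, so any consistent joint has total mass zero -- a
# contradiction with normalization. Hence this point of L(G) is outside M.
```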
The free energy remains intractable (and is not even well-defined) in ${\mathbb{L}(G)}$. We
typically approximate the free energy by a combination of singleton
and pairwise entropies, which only requires knowing $\tau_i$ and $\tau_{ij}$. For
example, the Bethe free energy approximation \citep{yedidia2003understanding} is
\begin{align}
H(\vtau) \approx \sum_{i\in V} H_i({\boldsymbol{\tau}}) - \sum_{(ij) \in E} I_{ij} ({\boldsymbol{\tau}}), &&
\Phi(\boldsymbol{\theta}) \approx \max_{{\boldsymbol{\tau}} \in {\mathbb{L}(G)}} \big \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + \sum_{i\in V} H_i - \sum_{(ij) \in E} I_{ij} \big \} ,
\label{equ:bethe}
\end{align}
where $H_i ({\boldsymbol{\tau}})$ is the entropy of $\tau_i(x_i)$ and $I_{ij}({\boldsymbol{\tau}})$ the mutual information of $x_i$ and $x_j$, i.e.,
\begin{align*}
H_i({\boldsymbol{\tau}}) = - \sum_{x_i} \tau_i(x_i) \log \tau_i(x_i), && I_{ij}({\boldsymbol{\tau}}) = \sum_{x_i,x_j} \tau_{ij}(x_i, x_j) \log \frac{\tau_{ij}(x_i, x_j)}{\tau_i(x_i) \tau_j(x_j)}.
\end{align*}
We sometimes abbreviate $H_i({\boldsymbol{\tau}})$ and $I_{ij}({\boldsymbol{\tau}})$ into $H_i$ and $I_{ij}$ for convenience.
The well-known loopy belief propagation (BP) algorithm of \citet{pearl1988probabilistic} can be interpreted as a fixed point algorithm to optimize the Bethe free energy in \eqref{equ:bethe} on the locally consistent polytope ${\mathbb{L}(G)}$ \citep{yedidia2003understanding}.
Unfortunately, the Bethe free energy is a non-concave function of ${\boldsymbol{\tau}}$, causing \eqref{equ:bethe} to be a non-convex optimization. The tree reweighted (TRW) free energy is a convex surrogate of the Bethe free energy \citep{Wainwright_TRBP},
\begin{align}
H(\vtau) \approx \sum_{i\in V} H_i - \sum_{(ij) \in E} \rho_{ij} I_{ij}, &&
\Phi(\boldsymbol{\theta}) \approx \max_{{\boldsymbol{\tau}} \in {\mathbb{L}(G)}} \big \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + \sum_{i\in V} H_i({\boldsymbol{\tau}}) - \sum_{(ij) \in E} \rho_{ij} I_{ij} ({\boldsymbol{\tau}}) \big \},
\label{equ:TRW}
\end{align}
where $\{\rho_{ij} \colon (ij) \in E \}$ is a set of positive edge appearance probabilities obtained from
a weighted collection of spanning trees of $G$ (see \citet{Wainwright_TRBP} and Section~\ref{sec:trw_bp_marginalMAP} for the detailed definition).
The TRW approximation in \eqref{equ:TRW} is a convex optimization problem, and is guaranteed to give an upper bound of the true log-partition function.
A message passing algorithm similar to loopy BP, called tree reweighted BP, can be derived as a fixed point algorithm for solving the convex optimization in \eqref{equ:TRW}.
\textbf{Mean-field-based Methods.} Mean-field-based methods are another class of approximate inference algorithms, which work by restricting $\mathbb{M}$ to a set of tractable distributions, for which both the marginals and the joint entropy can be computed exactly.
Precisely, let $\mathbb{M}_{mf}$ be a subset of $\mathbb{M}$ that corresponds to a set of tractable distributions, e.g., the set of fully factored distributions, $\mathbb{M}_{mf} = \{{\boldsymbol{\tau}} \in \mathbb{M} \colon \tau(\boldsymbol{x}) = \prod_{i\in V} \tau_i(x_i) \}$. The mean field methods approximate the log-partition function \eqref{equ:sumduality} by
\begin{align}
\max_{{\boldsymbol{\tau}} \in \mathbb{M}_{mf}} \big \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H(\vtau) \big \} , \label{equ:meanfield}
\end{align}
which is guaranteed to give a lower bound on the log-partition function. Unfortunately, mean field methods usually lead to non-convex optimization problems, because $\mathbb{M}_{mf}$ is often a non-convex set. In practice, block coordinate descent methods can be adopted to find local optima of \eqref{equ:meanfield}.
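A minimal sketch of the naive mean field coordinate update on a toy triangle model (potential values are assumptions for illustration); for any fully factored ${\boldsymbol{\tau}}$, the objective value never exceeds the true log-partition function, matching the lower-bound property above:

```python
import itertools, math

# Naive mean field for a pairwise model: restrict to fully factored tau and
# run coordinate ascent. Toy triangle model with assumed potentials.
n = 3
edges = [(0, 1), (1, 2), (0, 2)]
th_n = [[0.2, -0.2], [0.0, 0.0], [-0.1, 0.1]]
th_e = {e: [[0.3, -0.3], [-0.3, 0.3]] for e in edges}

tau = [[0.5, 0.5] for _ in range(n)]
for _ in range(100):
    for i in range(n):
        # Fixed point: tau_i(x_i) propto exp(theta_i + sum_j E_{tau_j}[theta_ij])
        logits = []
        for xi in (0, 1):
            v = th_n[i][xi]
            for (a, b), t in th_e.items():
                if a == i:
                    v += sum(tau[b][xb] * t[xi][xb] for xb in (0, 1))
                elif b == i:
                    v += sum(tau[a][xa] * t[xa][xi] for xa in (0, 1))
            logits.append(v)
        m = max(logits)
        e = [math.exp(l - m) for l in logits]
        s = sum(e)
        tau[i] = [v / s for v in e]

def mf_objective():
    # <theta, tau> + H(tau) for the factored tau (entropy decomposes).
    val = sum(tau[i][x] * th_n[i][x] for i in range(n) for x in (0, 1))
    val += sum(tau[a][xa] * tau[b][xb] * t[xa][xb]
               for (a, b), t in th_e.items() for xa in (0, 1) for xb in (0, 1))
    val -= sum(p * math.log(p) for q in tau for p in q if p > 0)
    return val

# Exact log Z by enumeration, for comparison with the lower bound.
log_Z = math.log(sum(
    math.exp(sum(th_n[i][x[i]] for i in range(n))
             + sum(th_e[(a, b)][x[a]][x[b]] for (a, b) in edges))
    for x in itertools.product((0, 1), repeat=n)))
```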
\subsection{Max-Inference Problems}
Combinatorial maximization (max-inference), or maximum \emph{a posteriori} (MAP), problems are the tasks of finding a mode of the joint probability. That is,
\begin{align}
\Phi_{\infty}(\boldsymbol{\theta}) = \max_{\boldsymbol{x}} \theta(\boldsymbol{x}) , ~~~~~~~~~~~ \boldsymbol{x}^* = \argmax_{\boldsymbol{x}} \theta(\boldsymbol{x}),
\label{equ:max_inference}
\end{align}
where $\boldsymbol{x}^*$ is a MAP configuration and $\Phi_{\infty}(\boldsymbol{\theta})$ the optimal energy value.
This problem can be reformulated as a linear program,
\begin{equation}
\Phi_{\infty}(\boldsymbol{\theta}) = \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle,
\label{equ:maxduality}
\end{equation}
which attains its maximum when $\tau^*(\boldsymbol{x}) = \boldsymbol{1}(\boldsymbol{x} = \boldsymbol{x}^*)$,
where $\boldsymbol{1}(\cdot)$ is the indicator function, defined as $\boldsymbol{1}(t) = 1$ if
condition $t$ is true, and zero otherwise.
If there are multiple MAP solutions, say $\{\boldsymbol{x}^{*k} \colon k=1,\ldots,K \}$, then any convex combination $\sum_k c_k \boldsymbol{1}(\boldsymbol{x} = \boldsymbol{x}^{*k})$ with $\sum_k c_k =1, c_k\geq0$ leads to a maximum of \eqref{equ:maxduality}.
The problem in \eqref{equ:maxduality} remains NP-hard, because of the intractability of the marginal polytope $\mathbb{M}$. Most variational methods for MAP \citep[e.g.,][]{wainwright2005map, werner2007linear}
can be interpreted as relaxing $\mathbb{M}$ to the locally consistent polytope ${\mathbb{L}(G)}$, yielding a linear
relaxation of the original integer programming problem. Note that
\eqref{equ:maxduality} differs from \eqref{equ:sumduality} only by its lack of an
entropy term; in the next section, we generalize this similarity to marginal
MAP.
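The linear-program view in \eqref{equ:maxduality} can be sanity-checked numerically: for a toy random potential (an assumption for illustration), no distribution achieves a larger inner product $\langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle$ than the delta function at the MAP configuration:

```python
import itertools, random

# Brute-force MAP for a toy model, plus a check of the LP view: over any
# distribution tau, <theta, tau> <= theta(x*), with equality at the delta
# function on the MAP configuration.
random.seed(0)
n = 3
states = list(itertools.product((0, 1), repeat=n))
th = {x: random.gauss(0, 1) for x in states}   # joint potential, toy values

x_star = max(states, key=lambda x: th[x])
phi_inf = th[x_star]

def inner(tau):
    return sum(tau[x] * th[x] for x in states)

# Random distributions never beat the delta at x*.
ok = True
for _ in range(200):
    w = [random.random() for _ in states]
    s = sum(w)
    tau = {x: wi / s for x, wi in zip(states, w)}
    ok = ok and inner(tau) <= phi_inf + 1e-12

delta = {x: 1.0 if x == x_star else 0.0 for x in states}
```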
\subsection{Marginal MAP Problems}
Marginal MAP is simply a hybrid of the max- and sum-inference tasks. Let $A$ be a subset of nodes $V$, and $B = V\backslash A$ be the complement of $A$. The
marginal MAP problem seeks a partial configuration $\boldsymbol{x}_B^*$ that has the maximum marginal probability $p(\boldsymbol{x}_B) = \sum_{\boldsymbol{x}_{A}} p(\boldsymbol{x})$, where $A$ is the set of {sum} nodes to be marginalized out, and $B$ the set of {max} nodes to be optimized.
We call this a type of ``mixed-inference'' problem, since it involves more than one type of variable elimination operator.
In terms of the exponential family representation, marginal MAP can be formulated as
\begin{align}
\Phi_{AB}(\boldsymbol{\theta}) = \max_{\boldsymbol{x}_B} Q(\boldsymbol{x}_B; \boldsymbol{\theta}), && \text{where~~~} Q(\boldsymbol{x}_B; \boldsymbol{\theta}) = \log \sum_{\boldsymbol{x}_A} \exp(\sum_{\alpha \in \mathcal{I}}\theta_{\alpha}(\boldsymbol{x}_\alpha)).
\label{equ:marginalMAP}
\end{align}
\begin{figure}[tb] \centering
\begin{picture}(0,0)
\thicklines \setlength{\unitlength}{1.3cm}
\put(-5, 1.1){\tt max:$\boldsymbol{x}_B$}
\put(-5, .15){\tt sum:$\boldsymbol{x}_A$}
\put(2.5, 1.4){Marginal MAP:}
\put(2.7, .85){$\displaystyle \boldsymbol{x}_B^* = \argmax_{\boldsymbol{x}_B} p(\boldsymbol{x}_B)$}
\put(2.97, .32){$\displaystyle \ \ = \argmax_{\boldsymbol{x}_B} \sum_{\boldsymbol{x}_A} p(\boldsymbol{x})$.}
\end{picture}
\hspace{-5cm}\includegraphics[width=.4\columnwidth]{figures_jmlr/hiddenchain.pdf}
\caption{An example from \citet{Koller_book} in which a marginal MAP query on a tree requires exponential time complexity.
The marginalization over $\boldsymbol{x}_A$ destroys the conditional dependency structure in the marginal distribution $p(\boldsymbol{x}_B)$, causing an intractable maximization problem over $\boldsymbol{x}_B$. The complexity of the exact variable elimination method is $O(\exp(n))$, where $n$ is the length of the chain.
}
\label{fig:hiddenchain}
\end{figure}
Although similar to max- and sum-inference, marginal
MAP is significantly harder than either of them.
A classic example is shown in \figref{fig:hiddenchain}, where marginal MAP is NP-hard even on a tree structured graph \citep{Koller_book}. The main difficulty arises
because the {max} and {sum} operators do not commute, which
restricts feasible elimination orders to those with \emph{all} the sum nodes eliminated
before \emph{any} max nodes.
In the worst case, marginalizing the sum nodes $\boldsymbol{x}_A$ may destroy any conditional independence among the max nodes $\boldsymbol{x}_B$,
making it difficult to represent or optimize $Q(\boldsymbol{x}_B;\boldsymbol{\theta})$,
even when the sum part alone is tractable
(such as when the nodes in $A$ form a tree).
Despite its computational difficulty, marginal MAP plays an essential role in many practical scenarios. The marginal MAP configuration $\boldsymbol{x}_B^*$ in \eqref{equ:marginalMAP} is Bayes optimal in the sense that it minimizes the expected error on $B$, $\mathbb{E}[\boldsymbol{1}(\boldsymbol{x}_B^* \neq \boldsymbol{x}_B)]$, where $\mathbb{E}[\cdot]$ denotes the expectation under distribution $p(\boldsymbol{x} ; \boldsymbol{\theta})$. Here, the variables $\boldsymbol{x}_A$ are not included in the error criterion, for example because they are ``nuisance" hidden variables of no direct interest, or unobserved or inaccurately measured model parameters. In contrast, the joint MAP configuration $\boldsymbol{x}^*$ minimizes the joint error $\mathbb{E}[\boldsymbol{1}(\boldsymbol{x}^* \neq \boldsymbol{x})]$, but this gives no guarantees on the partial error $\mathbb{E}[\boldsymbol{1}(\boldsymbol{x}_B^* \neq \boldsymbol{x}_B)]$. In practice, perhaps because of the wide availability of efficient algorithms for joint MAP, researchers tend to over-use joint MAP even in cases where marginal MAP would be more appropriate.
The following toy example shows that this seemingly reasonable approach can sometimes cause serious problems.
\begin{exa}[Weather Dilemma]
Denote by $x_b \in \{ {\tt rainy}, {\tt sunny} \}$ the weather condition of Irvine, and by $x_a \in \{ {\tt walk}, {\tt drive}\}$ whether Alice walks or drives to school depending on the weather condition. Assume the probabilities of $x_b$ and $x_a$ are
\\
\begin{tabular}{p{5cm} p{7cm}}
\vspace{0pt}
\begin{tabular}{p{.8cm} p{3cm}}
\vspace{0pt} $p(x_b):$ & \vspace{0pt} \begin{tabular}{|c|c|} \hline {\tt rainy} & $0.4$ \\ \hline {\tt sunny} & $0.6$ \\ \hline \end{tabular}
\end{tabular}
&
\vspace{0pt}
\begin{tabular}{p{1.2cm} p{3cm}}
\vspace{0pt} $p(x_a | x_b):$ & \vspace{0pt} \begin{tabular}{|c|c|c|}\hline & {\tt walk} & {\tt drive} \\ \hline {\tt rainy} & $1/8$ & $7/8$ \\ \hline {\tt sunny} & $1/2$ & $1/2$ \\ \hline \end{tabular}
\vspace{.1\baselineskip}
\end{tabular}
\end{tabular} \\
The task is to calculate the most likely weather condition of Irvine, which is obviously {\tt sunny} according to $p(x_b)$. The marginal MAP, $x_b^* = \argmax_{x_b} p(x_b) = {\tt sunny}$, gives the correct answer. However, the full MAP estimator, $[x_a^*, x_b^*] = \argmax p(x_a, x_b) = [ {\tt drive}, {\tt rainy}]$, gives answer $x_b^* = {\tt rainy}$ (by dropping the $x_a^*$ component), which is obviously wrong. Paradoxically, if $p(x_a | x_b)$ is changed (say, corresponding to a different person), the solution returned by full MAP could be different.
\end{exa}
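The numbers in the weather dilemma are easy to check directly; the following snippet reproduces the two estimators from the tables above:

```python
# Reproducing the weather dilemma numerically: marginal MAP on x_b gives
# "sunny", while the joint MAP gives "rainy" after dropping x_a.
p_b = {"rainy": 0.4, "sunny": 0.6}
p_a_given_b = {"rainy": {"walk": 1 / 8, "drive": 7 / 8},
               "sunny": {"walk": 1 / 2, "drive": 1 / 2}}

# Marginal MAP: argmax_{x_b} p(x_b). Summing out x_a leaves p(x_b) unchanged
# since sum_{x_a} p(x_a | x_b) = 1.
marginal_map = max(p_b, key=p_b.get)

# Joint MAP: argmax_{x_a, x_b} p(x_b) p(x_a | x_b), then drop x_a.
joint = {(a, b): p_b[b] * p_a_given_b[b][a]
         for b in p_b for a in ("walk", "drive")}
joint_map = max(joint, key=joint.get)
# joint probabilities: (drive, rainy) = 0.35, (walk, sunny) = (drive, sunny) = 0.30,
# (walk, rainy) = 0.05 -- so the joint MAP wrongly reports rainy.
```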
In the above example, since no evidence on $x_a$ is observed, the conditional probability $p(x_a | x_b)$ does not provide useful information for $x_b$,
but instead provides misleading information when it is incorporated in the full MAP estimator. The marginal MAP, on the other hand, eliminates the influence of the irrelevant $p(x_a | x_b)$ by marginalizing (or averaging) over $x_a$. In general, the marginal MAP and full MAP can differ significantly
when the uncertainty in the hidden variables changes as a function of $\boldsymbol{x}_B$.
\section{A Dual Representation for Marginal MAP}
\label{sec:mixduality}
In this section, we present our main result, a dual representation of the
marginal MAP problem \eqref{equ:marginalMAP}. Our dual
representation generalizes that of sum-inference in \eqref{equ:sumduality} and
max-inference in \eqref{equ:maxduality}, and provides a unified framework for
solving marginal MAP problems.
\begin{thm}
\label{thm:duality}
The marginal MAP energy $\Phi_{AB}(\boldsymbol{\theta})$ in \eqref{equ:marginalMAP} has a dual representation,
\begin{equation}
\Phi_{AB}(\boldsymbol{\theta}) = \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H_{A|B}(\vtau) \},
\label{equ:mixduality}
\end{equation}
where $H_{A|B}(\vtau)$ is a conditional entropy,
$H_{A|B}(\vtau) = -\sum_{\boldsymbol{x}} \tau(\boldsymbol{x}) \log \tau(\boldsymbol{x}_A | \boldsymbol{x}_B)$.
If $Q(\boldsymbol{x}_B; \boldsymbol{\theta})$ has a unique maximum $\boldsymbol{x}_B^*$, the maximum point ${\boldsymbol{\tau}}^*$ of \eqref{equ:mixduality} is also unique, satisfying $\tau^*(\boldsymbol{x}) = \tau^*(\boldsymbol{x}_B) \tau^*(\boldsymbol{x}_A | \boldsymbol{x}_B)$, where
$\tau^*(\boldsymbol{x}_B) = \boldsymbol{1}(\boldsymbol{x}_B = \boldsymbol{x}_B^*)$ and $\tau^*(\boldsymbol{x}_A | \boldsymbol{x}_B) = p(\boldsymbol{x}_A | \boldsymbol{x}_B; \boldsymbol{\theta})$.%
\footnote{Since $\tau^*(\boldsymbol{x}_B)=0$ if $\boldsymbol{x}_B \neq \boldsymbol{x}_B^*$, we do not need to define $\tau^*(\boldsymbol{x}_A | \boldsymbol{x}_B)$ for $\boldsymbol{x}_B \neq \boldsymbol{x}_B^*$.}
\end{thm}
\begin{proof}
\newcommand{\qtau}{\tau}
For any ${\boldsymbol{\tau}} \in \mathbb{M}$ and its corresponding global distribution $\tau(\boldsymbol{x})$, consider the conditional KL divergence between $\tau(\boldsymbol{x}_A | \boldsymbol{x}_B)$ and $p(\boldsymbol{x}_A |\boldsymbol{x}_B ; \boldsymbol{\theta})$,
\begin{align}
&D_\mathrm{KL}[ \qtau(\boldsymbol{x}_A| \boldsymbol{x}_B) || p(\boldsymbol{x}_A| \boldsymbol{x}_B; \boldsymbol{\theta}) ]
= \sum_{\boldsymbol{x}} \qtau(\boldsymbol{x}) \log \frac{\qtau(\boldsymbol{x}_A | \boldsymbol{x}_B)}{p(\boldsymbol{x}_A |\boldsymbol{x}_B ; \boldsymbol{\theta})} \notag \\
&\qquad\qquad = -H_{A|B}( \boldsymbol{\tau}) - \mathbb{E}_\qtau [\log p(\boldsymbol{x}_A | \boldsymbol{x}_B ; \boldsymbol{\theta})] \notag \\
&\qquad\qquad = -H_{A|B}( \boldsymbol{\tau}) - \mathbb{E}_\qtau [\theta(\boldsymbol{x})] + \mathbb{E}_\qtau[ Q(\boldsymbol{x}_B; \boldsymbol{\theta})]
\quad \geq \quad 0, \notag
\end{align}
where $H_{A|B}( \boldsymbol{\tau})$ is the conditional entropy on $\qtau(\boldsymbol{x})$;
the equality on the last line holds because $p(\boldsymbol{x}_A | \boldsymbol{x}_B ; \boldsymbol{\theta}) = \exp(\theta(\boldsymbol{x}) - Q(\boldsymbol{x}_B; \boldsymbol{\theta}))$;
the last inequality follows from the nonnegativity of KL divergence,
and is tight if and only if $\qtau(\boldsymbol{x}_A| \boldsymbol{x}_B) = p(\boldsymbol{x}_A| \boldsymbol{x}_B; \boldsymbol{\theta})$ for all $\boldsymbol{x}_A$ and $\boldsymbol{x}_B$ such that $\qtau(\boldsymbol{x}_B) \neq 0$.
Therefore, we have for any $\qtau(\boldsymbol{x})$,
\begin{equation*}
\Phi_{AB}(\boldsymbol{\theta}) = \max_{\boldsymbol{x}_B} Q(\boldsymbol{x}_B ; \boldsymbol{\theta}) \geq \mathbb{E}_\qtau[Q(\boldsymbol{x}_B; \boldsymbol{\theta})] \geq \mathbb{E}_\qtau[\theta(\boldsymbol{x})] +H_{A|B}( \boldsymbol{\tau}).
\end{equation*}
It is easy to show that the two inequality signs are tight if and only if $\qtau(\boldsymbol{x})$ equals $\tau^*(\boldsymbol{x})$ as defined above.
Substituting $\mathbb{E}_\qtau[\theta(\boldsymbol{x})]=\langle \boldsymbol{\theta},{\boldsymbol{\tau}}\rangle$ completes the proof.
\end{proof}
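Theorem~\ref{thm:duality} can also be checked numerically on a toy model (random potentials, an assumption for illustration): sampled distributions never exceed $\Phi_{AB}(\boldsymbol{\theta})$ under the objective $\langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H_{A|B}({\boldsymbol{\tau}})$, and the claimed maximizer attains it exactly:

```python
import itertools, math, random

# Numerical check of the dual representation on a 4-node toy model:
# F_mix(tau) = E_tau[theta] + H_{A|B}(tau) never exceeds Phi_AB, and attains
# it at tau*(x) = 1(x_B = x_B*) p(x_A | x_B). Potentials are random toy values.
random.seed(1)
states = list(itertools.product((0, 1), repeat=4))   # x = (x_A, x_B), |A|=|B|=2
th = {x: random.gauss(0, 1) for x in states}

# Phi_AB = max_{x_B} log sum_{x_A} exp(theta(x))
def Q(xb):
    return math.log(sum(math.exp(th[xa + xb])
                        for xa in itertools.product((0, 1), repeat=2)))
xb_star = max(itertools.product((0, 1), repeat=2), key=Q)
phi_AB = Q(xb_star)

def F_mix(tau):
    # tau: dict over full states; H_{A|B} = H(tau) - H_B(tau)
    pB = {}
    for x, p in tau.items():
        pB[x[2:]] = pB.get(x[2:], 0.0) + p
    H = -sum(p * math.log(p) for p in tau.values() if p > 0)
    HB = -sum(p * math.log(p) for p in pB.values() if p > 0)
    E = sum(p * th[x] for x, p in tau.items())
    return E + H - HB

ok = True
for _ in range(200):
    w = [random.random() for _ in states]
    s = sum(w)
    ok = ok and F_mix(dict(zip(states, (v / s for v in w)))) <= phi_AB + 1e-9

# The claimed maximizer: clamp x_B at xb_star, use p(x_A | x_B = xb_star).
z = sum(math.exp(th[xa + xb_star]) for xa in itertools.product((0, 1), repeat=2))
tau_star = {xa + xb_star: math.exp(th[xa + xb_star]) / z
            for xa in itertools.product((0, 1), repeat=2)}
```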
\begin{table}[tb] \centering
\setlength{\extrarowheight}{5pt}
\begin{tabular}{ | l | l | l |}
\hline
Problem Type & Primal Form & Dual Form \\ \hline
Max-Inference & $\displaystyle \log \max_{\boldsymbol{x}} \exp(\theta(\boldsymbol{x}))$ & $\displaystyle \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle \}$ \\ \hline
Sum-Inference & $\displaystyle \log \sum_{\boldsymbol{x}} \exp(\theta(\boldsymbol{x}))$ & $ \displaystyle \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H(\vtau) \}$ \\ \hline
\textcolor{blue}{Marginal MAP} & \textcolor{blue}{$\displaystyle \log \max_{\boldsymbol{x}_B}\sum_{\boldsymbol{x}_A} \exp(\theta(\boldsymbol{x}))$} & \textcolor{blue}{$\displaystyle \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle+ H_{A|B}(\vtau) \} $} \\\hline
\end{tabular}
\caption{The primal and dual forms of the three inference types. The dual forms of sum-inference and max-inference are well known; the form for marginal MAP is a contribution of this work. Intuitively, the max vs.\ sum operators in the primal form determine the conditioning set of the conditional entropy term in the dual form. }
\label{tab:threetasks}
\end{table}
\textbf{Remark 1.} If $Q(\boldsymbol{x}_B ; \boldsymbol{\theta})$ has multiple maxima $\{ \boldsymbol{x}^{*k}_B \}$, each corresponding to a distribution $\tau^{*k}(\boldsymbol{x}) = \boldsymbol{1}(\boldsymbol{x}_B = \boldsymbol{x}_B^{*k}) p(\boldsymbol{x}_A | \boldsymbol{x}_B; \boldsymbol{\theta})$, then
the set of maximum points of \eqref{equ:mixduality} is the convex hull of $\{ {\boldsymbol{\tau}}^{*k} \}$.
\textbf{Remark 2.}
Theorem \ref{thm:duality} naturally integrates the marginalization and maximization sub-problems into one joint optimization problem,
providing a novel and efficient treatment of marginal MAP beyond the traditional approaches that treat the marginalization sub-problem as a sub-routine of the maximization problem. As we show in Section~\ref{sec:message}, this enables us to derive efficient ``mixed-product" message passing algorithms that simultaneously take marginalization and maximization steps, avoiding expensive and possibly wasteful inner loop steps in the marginalization sub-routine.
\textbf{Remark 3.}
Since we have $H_{A|B}(\vtau) = H(\vtau) - H_{B}(\vtau)$ by the entropic chain rule \citep{InformationTheory},
the objective function in \eqref{equ:mixduality} can be viewed as a ``truncated" free energy,
\begin{align*}
F_{mix}({\boldsymbol{\tau}}, \boldsymbol{\theta}) \mathrel{\mathop:}= \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H_{A|B}(\vtau)
= F_{sum} ({\boldsymbol{\tau}}, \boldsymbol{\theta}) - H_{B}(\vtau),
\end{align*}
where the entropy $H_{B}(\vtau)$ of the {max} nodes $\boldsymbol{x}_B$ is removed from the regular
sum-inference free energy $F_{sum}({\boldsymbol{\tau}} , \boldsymbol{\theta}) = \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H(\vtau)$. Theorem \ref{thm:duality} generalizes the dual form of
both sum-inference \eqref{equ:sumduality} and max-inference \eqref{equ:maxduality}, since it reduces to those forms when
the {max} set $B$ is empty or all nodes, respectively.
Table~\ref{tab:threetasks} shows all three forms together for comparison. Intuitively, since the entropy $H_{B}(\vtau)$ is removed
from the objective, the optimal marginal $\tau^*(\boldsymbol{x}_B)$ tends to have lower
entropy and its probability mass concentrates on the optimal configurations $\{\boldsymbol{x}_B^*\}$.
Alternatively, $\tau^*(\boldsymbol{x})$ can be interpreted as the marginals
obtained by clamping the value of $\boldsymbol{x}_B$ at
$\boldsymbol{x}_B^*$ on the distribution $p(\boldsymbol{x}; \boldsymbol{\theta})$, i.e., $\tau^*(\boldsymbol{x}) =
p(\boldsymbol{x} | \boldsymbol{x}_B = \boldsymbol{x}_B^*; \boldsymbol{\theta})$.
\textbf{Remark 4.}
Unfortunately, subtracting the $H_{B}(\vtau)$ term causes some subtle difficulties. First, $H_{B}(\vtau)$ (and hence $F_{mix}({\boldsymbol{\tau}}, \boldsymbol{\theta})$) may be intractable to calculate even when the joint entropy $H(\vtau)$ is tractable, because the marginal distribution $p(\boldsymbol{x}_B) = \sum_{\boldsymbol{x}_A} p(\boldsymbol{x})$ does not necessarily inherit the conditional dependency structure of the joint distribution. Therefore, the dual optimization in \eqref{equ:mixduality} may be intractable even on a tree, reflecting the intrinsic difficulty of marginal MAP compared to full MAP or marginalization.
Interestingly, we show in the sequel that a certificate of
optimality can still be obtained on general tree graphs in some cases.
Secondly, the conditional entropy $H_{A|B}(\vtau)$ (and hence $F_{mix}({\boldsymbol{\tau}}, \boldsymbol{\theta})$) is concave, but not strongly concave, with respect to ${\boldsymbol{\tau}}$.
This creates additional difficulty when optimizing \eqref{equ:mixduality}, since
many iterative optimization algorithms, such as coordinate descent, can lose their typical convergence or optimality guarantees when the objective function is not strongly concave.
\textbf{Smoothed Approximation.}
To sidestep the issue of non-strong convexity, we introduce a smoothed approximation of $F_{mix}({\boldsymbol{\tau}}, \boldsymbol{\theta}}%{\boldsymbol{\theta})$ that ``adds back" part of the missing $H_{B}(\vtau)$ term,
\begin{equation*}
F_{mix}^{\epsilon}({\boldsymbol{\tau}}, \boldsymbol{\theta}) =
\langle \boldsymbol{\theta} , {\boldsymbol{\tau}} \rangle + H_{A|B}(\vtau) + \epsilon H_{B}(\vtau),
\end{equation*}
where $\epsilon$ is a small positive constant.
This smoothed dual approximation is closely connected to a direct approximation in the primal domain, as shown in the following theorem.
\begin{thm}
\label{thm:smoothVersion}
Let $\epsilon$ be a positive constant, and let $Q(\boldsymbol{x}_B ; \boldsymbol{\theta})$ be as defined in \eqref{equ:marginalMAP}. Define
\begin{align*}
\Phi^{\epsilon}_{AB} (\boldsymbol{\theta}) = \log \big \{ [\sum_{\boldsymbol{x}_B} \exp(Q(\boldsymbol{x}_B ; \boldsymbol{\theta}))^{1/\epsilon}]^{\epsilon} \big \},
\end{align*}
then we have
\begin{align}
\Phi^{\epsilon}_{AB} (\boldsymbol{\theta}) = \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \big \{\langle \boldsymbol{\theta} , {\boldsymbol{\tau}} \rangle + H_{A|B}(\vtau) + \epsilon H_{B}(\vtau) \big \}.
\label{equ:smoothdual}
\end{align}
In addition, we have $\lim_{\epsilon \to 0^+}\Phi^{\epsilon}_{AB}(\boldsymbol{\theta}) = \Phi_{AB}(\boldsymbol{\theta})$, where $\epsilon\to 0^+$ denotes approaching zero from the positive side.
\end{thm}
\begin{proof}
The proof is similar to that of Theorem~\ref{thm:duality}, but exploits the non-negativity of a weighted sum of two KL divergence terms,
$$\mathrm{D}_{KL}[\tau(\boldsymbol{x}_A | \boldsymbol{x}_B) \,||\, p(\boldsymbol{x}_{A} | \boldsymbol{x}_B ; \boldsymbol{\theta})] + \epsilon \mathrm{D}_{KL}[\tau(\boldsymbol{x}_B) \,||\, p(\boldsymbol{x}_B)].$$
The remaining part follows directly from the standard zero temperature limit formula,
\begin{align}
\label{equ:zerolimit}
\lim_{\epsilon \to 0^+} [\sum_{x} f(x)^{1/\epsilon}]^{\epsilon} = \max_{x} f(x),
\end{align}
where $f(x)$ is any function with positive values.
\end{proof}
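As an illustrative aside, the zero temperature limit \eqref{equ:zerolimit} is easy to verify numerically. The following Python sketch (the function name and toy values of $f$ are our own, not part of the text) evaluates the power sum in log space for stability:

```python
import math

# Numerical check of the zero-temperature limit (equ:zerolimit):
#     lim_{eps -> 0+} [ sum_x f(x)^(1/eps) ]^eps = max_x f(x),
# for any positive-valued f. The toy values of f are illustrative only.
def power_sum(f, eps):
    # Compute [ sum_x f(x)^(1/eps) ]^eps in log space to avoid overflow.
    logs = [math.log(v) / eps for v in f]
    m = max(logs)
    lse = m + math.log(sum(math.exp(l - m) for l in logs))
    return math.exp(eps * lse)

f = [0.5, 2.0, 1.3, 0.7]
for eps in (1.0, 0.1, 0.01, 0.0001):
    print(eps, power_sum(f, eps))  # approaches max(f) = 2.0 as eps -> 0+
```

At $\epsilon = 1$ the power sum is the plain sum; as $\epsilon$ shrinks, the largest element dominates.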
\section{Variational Approximations for Marginal MAP}
\label{sec:variational}
Theorem~\ref{thm:duality} transforms the marginal MAP problem into a
variational form, but obviously does not decrease its computational hardness.
Fortunately, many well-established variational techniques
for sum- and max-inference can be extended to
apply to \eqref{equ:mixduality}, opening a new door for deriving novel approximate
algorithms for marginal MAP. In the spirit of \citet{Wainwright08}, one can either relax $\mathbb{M}$ to
a simpler outer bound like ${\mathbb{L}(G)}$ and replace $F_{mix}({\boldsymbol{\tau}}, \boldsymbol{\theta})$ by
some tractable form to give algorithms similar to loopy BP or TRW BP, or
restrict $\mathbb{M}$ to a tractable subset like $\mathbb{M}_{mf}$
to give mean-field-like algorithms. In the sequel,
we demonstrate several such approximation schemes, mainly focusing on the
BP-like methods with pairwise free energies. We briefly discuss mean-field-like methods
when we connect to EM in Section~\ref{sec:EM}, and derive an extension to junction graphs that exploits higher order approximations in Section~\ref{sec:junctiongraph}.
Our framework can be easily adapted to take advantage of other, more advanced variational techniques,
like those using higher order cliques \citep[e.g.,][]{Yedidia_Bethe, globerson2007approximate, liu11d, hazantightening}
or more advanced optimization methods like dual decomposition \citep{Sontag_optbook} or alternating direction method of multipliers \citep{Boyd10}.
We start by characterizing the graph structure on which marginal MAP is tractable.
\begin{mydef}
\label{def:partialorder}
We call $G$ an \emph{$A$-$B$ tree} if there exists a partial order on the node set $V = A\cup B$, satisfying
\begin{description}
\item
{\bf 1) Tree-order}. For any $i \in V$, there is at most one other node $j \in V$ (called its parent), such that $j \prec i$ and $(ij) \in E$;
\item
{\bf 2) A-B Consistency}. For any $a \in A$ and $b \in B$, we have $b \prec a$.
\end{description}
We call such a partial order an $A$-$B$ tree-order of $G$.
\end{mydef}
For further notation, let $G_A = (A, E_A)$ be the subgraph induced by nodes in $A$,
i.e., $E_A = \{(ij) \in E | i \in A, j\in A\}$,
and similarly for $G_B = (B, E_B)$. Let $\partial_{AB} = \{ (ij)\in E | i \in A, j \in B \}$ be the
edges that join sets $A$ and $B$.
Obviously, marginal MAP on an $A$-$B$ tree can be tractably solved by sequentially eliminating
the variables along the $A$-$B$ tree-order \citep[see e.g.,][]{Koller_book}.
We show that its dual optimization is also tractable in this case.
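To make the elimination argument concrete, the following Python sketch (the chain $b_1 - b_2 - a_1 - a_2$, the state space, and the random potentials are all toy assumptions) compares brute-force marginal MAP with sequential elimination along an $A$-$B$ tree-order, summing out the sum nodes first and then maximizing over the max nodes:

```python
import itertools, random

# Toy marginal MAP on an A-B tree: the chain b1 - b2 - a1 - a2, with max
# nodes B = {b1, b2} preceding sum nodes A = {a1, a2} in the tree-order.
random.seed(0)
K = 3  # number of states per variable

def table():
    return [[random.random() + 0.1 for _ in range(K)] for _ in range(K)]

psi_b1b2, psi_b2a1, psi_a1a2 = table(), table(), table()

# Brute force: Phi_AB = max_{b1,b2} sum_{a1,a2} prod of pairwise potentials.
brute = max(
    sum(psi_b1b2[b1][b2] * psi_b2a1[b2][a1] * psi_a1a2[a1][a2]
        for a1, a2 in itertools.product(range(K), repeat=2))
    for b1, b2 in itertools.product(range(K), repeat=2))

# Sequential elimination along the A-B tree-order: sum out a2, then a1,
# then maximize over b2 and b1.
m_a2 = [sum(psi_a1a2[a1][a2] for a2 in range(K)) for a1 in range(K)]
m_a1 = [sum(psi_b2a1[b2][a1] * m_a2[a1] for a1 in range(K)) for b2 in range(K)]
m_b2 = [max(psi_b1b2[b1][b2] * m_a1[b2] for b2 in range(K)) for b1 in range(K)]
elim = max(m_b2)

assert abs(brute - elim) < 1e-9
```

Because the max nodes precede the sum nodes, the sum and max operators never need to be interchanged, which is exactly what makes $A$-$B$ trees tractable.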
\begin{lem}
\label{lem:ABtree}
If $G$ is an $A$-$B$ tree, then
\begin{description}
\item[1)] The locally consistent polytope equals the marginal polytope, that is, $\mathbb{M} = {\mathbb{L}(G)}$.
\item[2)] The conditional entropy has a pairwise decomposition,
\begin{align}
\label{equ:ABtree_entropy}
H_{A|B}(\vtau) = \sum_{i\in A} H_i \ \ - \!\!\!\!\!\! \sum_{(ij)\in E_A\cup \partial_{AB}} \!\!\!\!\!\! I_{ij} .
\end{align}
\end{description}
\end{lem}
\begin{proof}
1)\ The fact that $\mathbb{M} = {\mathbb{L}(G)}$ on trees is a standard result; see \citet{Wainwright08} for details. \\
2)\ Because $G$ is an $A$-$B$ tree, both $p(\boldsymbol{x})$ and $p(\boldsymbol{x}_B)$ have tree structured conditional dependency. We then have \citep[see e.g.,][]{Wainwright08} that
\begin{align*}
H(\vtau) = \sum_{i\in V} H_i - \sum_{(ij)\in E} I_{ij}, &&\text{and}&& H_{B}(\vtau) = \sum_{i \in B} H_i - \sum_{(ij) \in E_B} I_{ij}.
\end{align*}
Equation \eqref{equ:ABtree_entropy} follows by using the entropic chain rule $H_{A|B}(\vtau) = H(\vtau) - H_{B}(\vtau)$.
\end{proof}
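The decomposition in Lemma~\ref{lem:ABtree} can also be checked numerically. The sketch below (binary states and random potentials are illustrative assumptions) builds a joint distribution on the $A$-$B$ tree $b - a_1 - a_2$ and compares the chain-rule conditional entropy with the pairwise form \eqref{equ:ABtree_entropy}:

```python
import itertools, math, random

# Check of equ:ABtree_entropy on the A-B tree b - a1 - a2, with B = {b}
# and A = {a1, a2}:  H_{A|B} = H_{a1} + H_{a2} - I_{b,a1} - I_{a1,a2}.
random.seed(4)
K = 2
pot = {e: [[random.random() + 0.1 for _ in range(K)] for _ in range(K)]
       for e in (("b", "a1"), ("a1", "a2"))}
p = {}
for xb, x1, x2 in itertools.product(range(K), repeat=3):
    p[(xb, x1, x2)] = pot[("b", "a1")][xb][x1] * pot[("a1", "a2")][x1][x2]
Z = sum(p.values())
for k in p:
    p[k] /= Z

def H(dist):
    return -sum(v * math.log(v) for v in dist if v > 0)

def marg(idx):
    # Marginalize the joint onto the variables at positions idx.
    out = {}
    for k, v in p.items():
        key = tuple(k[i] for i in idx)
        out[key] = out.get(key, 0.0) + v
    return out

def I(i, j):
    # Mutual information between variables at positions i and j.
    pij, pi, pj = marg([i, j]), marg([i]), marg([j])
    return sum(v * math.log(v / (pi[(k[0],)] * pj[(k[1],)]))
               for k, v in pij.items() if v > 0)

# Chain rule vs. the pairwise decomposition of Lemma lem:ABtree.
H_cond = H(p.values()) - H(marg([0]).values())
H_dec = H(marg([1]).values()) + H(marg([2]).values()) - I(0, 1) - I(1, 2)
assert abs(H_cond - H_dec) < 1e-10
```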
\subsection{Bethe-like Free Energy}
Lemma~\ref{lem:ABtree} suggests that the free energy of $A$-$B$ trees can be decomposed into singleton and pairwise terms that are easy to deal with. This is not true for general graphs, but motivates a ``Bethe''-like approximation,
\begin{align}
& \Phi_{bethe}(\boldsymbol{\theta})= \max_{{\boldsymbol{\tau}} \in {\mathbb{L}(G)}} F_{bethe}({\boldsymbol{\tau}}, \boldsymbol{\theta}),
& F_{bethe}({\boldsymbol{\tau}}, \boldsymbol{\theta}) = \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle \ + \ \sum_{i \in A} H_i \ \ - \!\!\!\!\!\! \sum_{(ij)\in E_A\cup \partial_{AB}} \!\!\!\!\!\! I_{ij}, \label{equ:betheenergy}
\end{align}
where $F_{bethe}({\boldsymbol{\tau}} , \boldsymbol{\theta})$ is a ``truncated'' Bethe free energy, in which the entropy and mutual
information terms that involve only max nodes have been truncated (removed).
If $G$ is an $A$-$B$ tree, $\Phi_{bethe}$ equals
the true $\Phi_{AB}$, giving an intuitive justification. In the sequel we
give more general theoretical conditions under which this approximation gives the exact solution, and we find empirically that it usually gives surprisingly good solutions in practice.
Similar to the regular Bethe approximation, \eqref{equ:betheenergy} leads to a nonconvex
optimization, and we will derive both message passing algorithms and provably convergent algorithms to solve it.
\subsection{Tree-reweighted Free Energy}
\label{sec:trw_bp_marginalMAP}
Following the idea of TRW belief propagation \citep{Wainwright_TRBP}, we construct an approximation of marginal MAP
using a convex combination of $A$-$B$ subtrees (subgraphs of $G$ that are $A$-$B$ trees).
Let $\mathcal{T}_{AB}$ be a collection of $A$-$B$ subtrees of $G$. We assign with each $T \in \mathcal{T}_{AB}$ a weight $w_T$ satisfying $w_T\geq 0$ and $\sum_{T\in \mathcal{T}_{AB}}{w_T} = 1$.
For each $A$-$B$ sub-tree $T = (V, E_T)$, define
\begin{equation*}
H_{A|B}(\vtau ~;~ T) = \sum_{i\in A} H_i \, - \!\!\! \sum_{(ij) \in E_T\backslash E_B} \!\!\! I_{ij} .
\end{equation*}
As shown in \citet{Wainwright08}, $H_{A|B}(\vtau ~;~ T)$ is always a concave function of ${\boldsymbol{\tau}}$ on ${\mathbb{L}(G)}$, and $H_{A|B}(\vtau) \leq H_{A|B}(\vtau ~;~ T)$ for all ${\boldsymbol{\tau}} \in \mathbb{M}$ and $T\in \mathcal{T}_{AB}$. More generally, we have
$H_{A|B}(\vtau) \leq \sum_{T\in \mathcal{T}_{AB}} w_T H_{A|B}(\vtau ~;~ T)$, which can be transformed to
\begin{equation}
\label{equ:trwBound}
H_{A|B}(\vtau) \leq \sum_{i \in A} H_i \ \ - \!\!\!\!\!\! \sum_{(ij)\in E_A\cup \partial_{AB}} \!\!\!\!\!\! \rho_{ij}I_{ij},
\end{equation}
where $\rho_{ij} = \sum_{T: (ij)\in E_T} w_T$
are the edge appearance probabilities as defined in \citet{Wainwright08}.
Replacing $\mathbb{M}$ with ${\mathbb{L}(G)}$ and $H_{A|B}(\vtau)$ with the bound in \eqref{equ:trwBound} leads to a TRW-like approximation of marginal MAP,
\begin{align}
&\Phi_{trw}(\boldsymbol{\theta}) = \max_{{\boldsymbol{\tau}} \in {\mathbb{L}(G)}} F_{trw}({\boldsymbol{\tau}}, \boldsymbol{\theta}),
& F_{trw}({\boldsymbol{\tau}}, \boldsymbol{\theta}) = \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle \ + \ \sum_{i \in A} H_i \ \ - \!\!\!\!\!\! \sum_{(ij)\in E_A\cup \partial_{AB}} \!\!\!\!\!\! \rho_{ij}I_{ij}.
\label{equ:trwmaxF}
\end{align}
Since ${\mathbb{L}(G)}$ is an outer bound of $\mathbb{M}$, and $F_{trw}$ is a concave upper bound of the true free
energy, we can guarantee that $\Phi_{trw}(\boldsymbol{\theta})$ is always an upper bound of $\Phi_{AB}(\boldsymbol{\theta})$. To our knowledge, this provides the first known convex relaxation for upper bounding marginal MAP.
One can also optimize the weights $\{w_T \colon T\in \mathcal{T}_{AB}\}$ to get the tightest upper bound using methods similar to those used for regular TRW BP \citep[see][]{Wainwright_TRBP}.
\begin{comment}
\begin{figure}[tb]
\centering
\begin{tabular}{ccccc}
\includegraphics[height=0.1\textwidth]{figures_jmlr/hiddenchain_type1_subtree.pdf} && &&
\includegraphics[height=0.1\textwidth]{figures_jmlr/hiddenchain_type2_subtree.pdf} \\
(a) && && (b)
\end{tabular}
\caption{(a) A type-I $A$-$B$ subtree and (b) a type-II $A$-$B$ subtree of the hidden Markov chain in \figref{fig:hiddenchain}.}
\label{fig:hiddenchainSub}
\end{figure}
{\bf Selecting $A$-$B$ subtrees.}
Selecting $A$-$B$ subtrees for approximation \eqref{equ:trwmaxF} is not as
straightforward as selecting subtrees in regular sum-inference. An
important property of an $A$-$B$ tree $T$ is that no two edges of $T$ in $\partial_{AB}$ can be
connected by edges or nodes of $T$ in $G_A$. Therefore, one can
construct an $A$-$B$ subtree by first selecting a subtree in $G_A$, and then
join each connected component of $G_A$ to at most one edge in $\partial_{AB}$.
Two simple, extreme cases stand out:
\begin{enumerate} \addtolength{\itemsep}{-0.6\baselineskip}\vspace{-.6\baselineskip}
\item[(i)] \emph{type-I $A$-$B$ subtrees}, which include a spanning tree of $G_A$ and only one crossing edge in $\partial_{AB}$;
\item[(ii)] \emph{type-II $A$-$B$ subtrees}, which include no edges in $G_A$, but several edges in $\partial_{AB}$ that are not
incident on the same nodes in $G_A$.
\end{enumerate} \vspace{-.5\baselineskip}
See \figref{fig:hiddenchainSub} for an example.
Intuitively, type-I subtrees capture more information about the summation
structures of $G_A$, while type-II subtrees capture more information about
$\partial_{AB}$, relating the sum and max parts.
If one restricts to the set of
type-I subtrees,
it is possible to guarantee that, if $G_A$ is a tree, the summation component
will be exact (all $\rho_{ij}=1$ for $(ij)\in E_A$), in which case it will be
possible to make some theoretical guarantees about the solution.
However in experiments we find it is often practically beneficial
to balance type-I and type-II when choosing the weights.
\end{comment}
\subsection{Global Optimality Guarantees}
\label{sec:globalopt}
We now give global optimality guarantees for the above approximations under certain circumstances.
In this section, we always assume $G_A$ is a tree, and hence the objective function is tractable to calculate for a given
$\boldsymbol{x}_B$.
However, the optimization component remains intractable in this case, because the marginalization step destroys the decomposition structure of the objective function (see \figref{fig:hiddenchain}).
It is thus nontrivial to see how the Bethe and TRW approximations behave in this case.
In general, suppose we approximate $\Phi_{AB}(\boldsymbol{\theta})$ using the following pairwise approximation,
\begin{equation}
\Phi_{tree}(\boldsymbol{\theta}) = \max_{{\boldsymbol{\tau}} \in {\mathbb{L}(G)}} \big \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle \ + \ \sum_{i\in A} H_i - \!\!\!\sum_{(ij)\in E_A} \!\!\! I_{ij} - \!\!\!\sum_{(ij)\in \partial_{AB}} \!\!\!\! \rho_{ij} I_{ij} \big \} ,
\label{equ:Phitree}
\end{equation}
where the weights on the sum part, $\{ \rho_{ij} \colon (ij) \in E_A\}$, have been fixed to one. This choice ensures that the sum part is ``intact'' in the
approximation, while the weights on the crossing edges, ${\boldsymbol{\rho}}_{AB} = \{\rho_{ij} \colon (ij)\in \partial_{AB} \}$, can take arbitrary values, corresponding to different free energy approximation methods.
If ${\rho}_{ij} = 1$ for all $(ij)\in \partial_{AB}$, \eqref{equ:Phitree} is the Bethe free energy; it corresponds to the TRW free energy if $\{\rho_{ij}\}$ are taken to be a set of edge appearance probabilities (which in general have values less than one). The edge appearance probabilities of $A$-$B$ trees are more restrictive than those of the standard trees used in TRW BP.
For example, if the sum part of an $A$-$B$ subtree is a connected tree, then it can include at most one crossing edge, so in this case ${\boldsymbol{\rho}}_{AB}$ should satisfy $\sum_{(ij) \in \partial_{AB}}\rho_{ij} = 1$, $\rho_{ij} \geq 0$.
Interestingly, we will show in Section~\ref{sec:EM} that if $\rho_{ij} \rightarrow +\infty$ for all $(ij) \in \partial_{AB}$,
then Equation~\eqref{equ:Phitree} is closely related to an EM algorithm.
\begin{thm}
\label{thm:betheglobalopt}
Suppose the sum part $G_A$ is a tree, and we approximate $\Phi_{AB}(\boldsymbol{\theta})$
using $\Phi_{tree} (\boldsymbol{\theta})$ defined in \eqref{equ:Phitree}. Assume that \eqref{equ:Phitree} is \emph{globally} optimized.
\begin{enumerate}
\item[{\rm (i)}] We have $\Phi_{tree} (\boldsymbol{\theta}) \geq \Phi_{AB}(\boldsymbol{\theta})$. If there exists
$\boldsymbol{x}_B^*$ such that $Q(\boldsymbol{x}_B^*; \boldsymbol{\theta}) = \Phi_{tree}(\boldsymbol{\theta})$, then we have
$\Phi_{tree} (\boldsymbol{\theta})= \Phi_{AB}(\boldsymbol{\theta})$, and $\boldsymbol{x}_B^*$ is a globally optimal marginal MAP solution.
\item[{\rm (ii)}] Suppose ${\boldsymbol{\tau}}^*$ is a \emph{global} maximum of
\eqref{equ:Phitree}, and $\{\tau^*_i(x_i) | i \in B \}$ have integral values, i.e.,
$\tau^*_i(x_i) =0~\text{or}~1$, then $\{ x_i^* = \arg\max_{x_i} \tau_i^*(x_i) \colon i \in B \}$
is a globally optimal solution of the marginal MAP problem \eqref{equ:marginalMAP}.
\end{enumerate}\vspace{-.5\baselineskip}
\end{thm}
\begin{proof}[Proof (sketch)] (See appendix for the complete proof.)
The fact that the sum part $G_A$ is a tree guarantees the marginalization is exact. Showing \eqref{equ:Phitree} is a relaxation of the maximization problem and applying standard relaxation arguments completes the proof.
\end{proof}
\textbf{Remark.} Theorem~\ref{thm:betheglobalopt} works for arbitrary values of ${\boldsymbol{\rho}}_{AB}$, and suggests a fundamental tradeoff in hardness as ${\boldsymbol{\rho}}_{AB}$ takes on different values. On the one hand, the value of ${\boldsymbol{\rho}}_{AB}$ controls the concavity of the objective function in \eqref{equ:Phitree} and hence the difficulty of finding a global optimum; small enough ${\boldsymbol{\rho}}_{AB}$ (as in TRW) can ensure that \eqref{equ:Phitree} is a convex optimization, while larger ${\boldsymbol{\rho}}_{AB}$ (as in Bethe or EM) causes \eqref{equ:Phitree} to become non-convex, making it difficult to apply Theorem~\ref{thm:betheglobalopt}.
On the other hand, the value of ${\boldsymbol{\rho}}_{AB}$ also controls how likely the solution is to be integral -- larger $\rho_{ij}$ emphasizes the mutual
information terms, forcing the solution towards integral points. Thus the solution of the TRW free energy is less likely to be integral
than the Bethe free energy, causing a difficulty in applying
Theorem~\ref{thm:betheglobalopt} to TRW solutions as well.
The TRW approximation ($\sum_{ij} \rho_{ij} = 1$) and EM ($\rho_{ij} \rightarrow +\infty$; see Section~\ref{sec:EM}) reflect two extrema of this tradeoff between concavity and integrality, respectively, while the Bethe approximation ($\rho_{ij}=1$) appears to represent a reasonable compromise that often gives excellent performance in practice. In Section~\ref{sec:localoptimality}, we give a different set of local optimality guarantees that are derived from a reparameterization perspective.
\section{Message Passing Algorithms for Marginal MAP}
\label{sec:message}
We now derive message-passing-style algorithms to optimize the ``truncated'' Bethe or TRW free energies in \eqref{equ:betheenergy} and \eqref{equ:trwmaxF}. Instead of optimizing the truncated free energies directly, we leverage the results of Theorem~\ref{thm:smoothVersion} and consider their ``annealed'' versions,
\begin{equation*}
\max_{{\boldsymbol{\tau}} \in {\mathbb{L}(G)}} \big \{ \langle \boldsymbol{\theta} , {\boldsymbol{\tau}} \rangle + \hat{H}_{A|B}(\vtau) + \epsilon \hat{H}_B(\vtau) \big \},
\end{equation*}
where $\epsilon$ is a positive annealing coefficient (or temperature), and $\hat{H}_{A|B}(\vtau)$ and $\hat{H}_B(\vtau)$ are generic pairwise approximations of $H_{A|B}(\vtau)$ and $H_{B}(\vtau)$, respectively. That is,
\begin{align}
\hat{H}_{A|B}(\vtau) = \sum_{i \in A} H_i \ \ - \!\!\!\!\!\! \sum_{(ij)\in E_A\cup \partial_{AB}} \!\!\!\!\!\! \rho_{ij}I_{ij}, &&\text{and}&& \hat{H}_B(\vtau) = \sum_{i \in B} H_i \ \ - \!\!\!\!\!\! \sum_{(ij)\in E_B} \!\!\!\!\!\! \rho_{ij}I_{ij},
\label{equ:HABHB}
\end{align}
where different values of pairwise weights $\{ \rho_{ij} \}$ correspond to either the Bethe approximation or the TRW approximation. This yields a generic pairwise free energy optimization problem,
\begin{align}
\max_{{\boldsymbol{\tau}} \in {\mathbb{L}(G)}} \big \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + \sum_{i \in V} w_i H_i - \sum_{(ij)\in E} {w_{ij}} I_{ij} \big \},
\label{equ:generalF}
\end{align}
where the weights $\{w_i, w_{ij}\}$ are determined by the temperature $\epsilon$ and $\{\rho_{ij}\}$ via
\begin{align}
w_i = \left\{ \begin{array}{l l}
1 & \quad \text{$\forall i \in A$}\\
\epsilon& \quad \text{$\forall i \in B$} , \\
\end{array} \right. &&
w_{ij} = \left\{ \begin{array}{l l}
\rho_{ij} & \quad \text{$\forall (ij) \in E_A \cup \partial_{AB}$}\\
\epsilon \rho_{ij} & \quad \text{$\forall (ij) \in E_B$} . \\
\end{array} \right.
\label{equ:defineWeights}
\end{align}
The general framework in \eqref{equ:generalF} provides a unified treatment for approximating sum-inference, max-inference and mixed, marginal MAP
problems simply by taking different weights. Specifically,
\begin{enumerate}
\item If $w_i = 1$ for all $ i \in V$, \eqref{equ:generalF} corresponds to the sum-inference problem and the sum-product BP objectives and algorithms.
\item If $w_i = 0$ for all $ i \in V$, \eqref{equ:generalF} corresponds to the max-inference problem and the max-product linear programming objective and algorithms.
\item If $w_i = 1$ for all $i \in A$ and $w_i = 0$ for all $i \in B$, \eqref{equ:generalF} corresponds to the marginal MAP problem; in the sequel, we derive ``mixed-product'' BP algorithms.
\end{enumerate}
Note the different roles of the singleton and pairwise weights: the singleton weights $\{w_i \colon i \in V\}$ define the type of inference problem, while the pairwise weights $\{w_{ij} \colon (ij) \in E\}$ determine the approximation method (e.g., Bethe vs.\ TRW).
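The weight assignment \eqref{equ:defineWeights} is mechanical; the sketch below spells it out in Python (the helper function, node names, and the example chain are our own illustrative assumptions):

```python
# Sketch of the weight assignment (equ:defineWeights): w_i = 1 on sum
# nodes A and eps on max nodes B; w_ij = rho_ij on E_A and the crossing
# edges d_AB, and eps * rho_ij inside the max part E_B.
def define_weights(A, B, E_A, E_B, d_AB, rho, eps):
    w_node = {i: 1.0 for i in A}
    w_node.update({i: eps for i in B})
    w_edge = {e: rho[e] for e in list(E_A) + list(d_AB)}
    w_edge.update({e: eps * rho[e] for e in E_B})
    return w_node, w_edge

# Example: a 4-node chain a1 - a2 - b1 - b2 with Bethe weights rho = 1.
A, B = ["a1", "a2"], ["b1", "b2"]
E_A, d_AB, E_B = [("a1", "a2")], [("a2", "b1")], [("b1", "b2")]
rho = {e: 1.0 for e in E_A + d_AB + E_B}
w_node, w_edge = define_weights(A, B, E_A, E_B, d_AB, rho, eps=0.01)
```

As the text notes, changing only the singleton weights moves between sum-, max-, and mixed-inference, while the pairwise weights select the approximation.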
We now derive a message passing-style algorithm for solving the generic problem \eqref{equ:generalF}. Assuming $w_i$ and $w_{ij}$ are strictly positive, using a Lagrange multiplier method similar to \citet{Yedidia_Bethe} or \citet{Wainwright_TRBP}, we can show that the stationary points of \eqref{equ:generalF} satisfy the fixed point condition of the following message passing update,
\begin{align}
\label{equ:weightedmsg}
& \text{Message Update:} &~~~& m_{i\to j}(x_j) \gets \big[ \sum_{x_i} (\psi_i m_{\sim i})^{1/w_i} ({ \psi_{ij}}/{m_{j\to i}})^{1/w_{ij}} \big]^{w_{ij}} , \\
& \text{Marginal Decoding:} &~~~& \tau_{i}(x_i) \propto (\psi_i m_{\sim i})^{1/w_i} , ~~~~ \tau_{ij}(x_{ij}) \propto \tau_i \tau_j (\frac{ \psi_{ij}}{m_{i\to j} m_{j\to i}}) ^{1/w_{ij} },
\label{equ:weighted_marginals}
\end{align}
where $m_{\sim i}(x_i)$ is the product of messages sending into node $i$, that is,
\begin{equation*}
m_{\sim i} (x_i) = \prod_{k \in \neib{i}} m_{k\to i}(x_i).
\end{equation*}
The above message update is similar to that of TRW-BP \citep{Wainwright_TRBP}, except that it incorporates general singleton weights $w_i$.
The marginal MAP problem can be solved by running \eqref{equ:weightedmsg} with $\{w_i, w_{ij}\}$ defined by \eqref{equ:defineWeights} and a scheme for choosing the temperature $\epsilon$, which is either set directly to a small constant, or gradually decreased (annealed) to zero across iterations, e.g., by $\epsilon = 1/t$ where $t$ is the iteration number.
Algorithm~\ref{alg:annealedmsg} describes the details for the annealing method.
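As a minimal sketch, the update \eqref{equ:weightedmsg} can be exercised on a single edge between one sum node $a$ and one max node $b$ (all names, the fixed small temperature, and the random potentials below are toy assumptions, not part of the algorithm specification):

```python
import random

# One-edge sketch of the weighted message update (equ:weightedmsg) under
# the marginal MAP weights (equ:defineWeights): sum node a has w_a = 1,
# max node b has w_b = eps, and the crossing edge has rho_ab = 1, so
# w_ab = 1 and those exponents drop out of the update.
random.seed(2)
K, eps = 4, 0.05
psi_a = [random.random() + 0.1 for _ in range(K)]
psi_b = [random.random() + 0.1 for _ in range(K)]
psi_ab = [[random.random() + 0.1 for _ in range(K)] for _ in range(K)]

def normalize(m):
    s = sum(m)
    return [v / s for v in m]

m_ab, m_ba = [1.0] * K, [1.0] * K  # messages a -> b and b -> a
for _ in range(20):
    # m_{i->j}(x_j) = [sum_{x_i} (psi_i m_{~i})^{1/w_i} (psi_ij/m_{j->i})^{1/w_ij}]^{w_ij}
    m_ab = normalize([sum((psi_a[xa] * m_ba[xa])
                          * (psi_ab[xa][xb] / m_ba[xa])
                          for xa in range(K)) for xb in range(K)])
    m_ba = normalize([sum((psi_b[xb] * m_ab[xb]) ** (1.0 / eps)
                          * (psi_ab[xa][xb] / m_ab[xb])
                          for xb in range(K)) for xa in range(K)])

# Decode the max node from its belief b_b(x_b) ∝ psi_b(x_b) m_{a->b}(x_b),
# and compare against brute-force marginal MAP on this tiny model.
xb_star = max(range(K), key=lambda xb: psi_b[xb] * m_ab[xb])
exact = max(range(K), key=lambda xb: psi_b[xb]
            * sum(psi_a[xa] * psi_ab[xa][xb] for xa in range(K)))
assert xb_star == exact
```

On this one-edge model the sum-product message into $b$ is exact, so the decoded state matches the true marginal MAP solution; larger graphs require the full annealing schedule of Algorithm~\ref{alg:annealedmsg}.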
\begin{algorithm}[tb]
\caption{Annealed BP for Marginal MAP}
\label{alg:annealedmsg}
\begin{algorithmic}
\STATE Define the pairwise weights $\{\rho_{ij} \colon (ij) \in E\}$, e.g., $\rho_{ij}=1$ for Bethe or valid appearance probabilities for TRW.
Initialize the messages $ \{ m_{i \to j} \colon (ij) \in E \}$.
\FOR{iteration $t$}
\STATE 1. Update the temperature $\epsilon^t$ by $\epsilon^t = 1/t$, and correspondingly the weights $\{w_i, w_{ij}\}$ by \eqref{equ:defineWeights}.
\STATE 2. Perform the message passing update in \eqref{equ:weightedmsg} for all edges $(ij) \in E$.
\ENDFOR
\STATE Calculate the singleton beliefs $b_{i}(x_i)$ and decode the solution $\boldsymbol{x}_B^*$,
\begin{align}
x_i^* = \argmax_{x_i} b_{i}(x_i), ~~ \forall i \in B, & \text{ where $b_{i}(x_i) \propto \psi_i(x_i) m_{\sim i}(x_i)$} .
\end{align}
\end{algorithmic}
\end{algorithm}
\subsection{Mixed-Product Belief Propagation}
Directly taking $\epsilon \to 0^+$ in the message update \eqref{equ:weightedmsg}, we obtain an interesting ``mixed-product'' BP algorithm that is a hybrid of the max-product and sum-product message updates, with a novel ``argmax-product'' message update that is specific to marginal MAP problems.
This algorithm is listed in Algorithm~\ref{alg:mix_product_msg}, and described by the following proposition:
\begin{pro}
As $\epsilon$ approaches zero from the positive side, that is, $\epsilon \to 0^+$, the message update \eqref{equ:weightedmsg} reduces to the update in \eqref{equ:mix_product1}-\eqref{equ:mix_product3} in Algorithm~\ref{alg:mix_product_msg}.
\end{pro}
\begin{proof}
For messages from $i\in A$ to $j\in A\cup B$, we have $w_i = 1$, $w_{ij} = \rho_{ij}$; the result is obvious. \\
For messages from $i\in B$ to $j\in B$, we have $w_i = \epsilon$, $w_{ij}= \epsilon \rho_{ij}$. The result follows from the zero temperature limit formula in \eqref{equ:zerolimit}, by letting $f(x_i) = (\psi_i m_{\sim i})^{\rho_{ij}} (\frac{\psi_{ij}}{m_{j\to i}})$.
\\
For messages from $i\in B$ to $j\in A$, we have $w_i = \epsilon$, $w_{ij} = \rho_{ij}$.
Let $\mathcal{X}_i^* = \argmax_{x_i} \psi_i m_{\sim i}$
be the set of maximizing arguments of the belief $b_i$,
and $c_i = \max_{x_i} \psi_i m_{\sim i}$ its maximum value; one can show that
$$\lim_{\epsilon \to 0^+} \Big[ \big(\frac{\psi_i m_{\sim i} }{c_i}\big)^{1/w_i} \Big] = \boldsymbol{1}(x_i \in \mathcal{X}_i^*).$$
Plugging this into \eqref{equ:weightedmsg} and dropping the constant $c_i$, we get the message update in \eqref{equ:mix_product3}.
\end{proof}
\begin{algorithm}[tb]
\caption{Mixed-product Belief Propagation for Marginal MAP}
\label{alg:mix_product_msg}
\begin{algorithmic}
\STATE Define the pairwise weights $\{\rho_{ij} \colon (ij) \in E\}$ and initialize messages $ \{ m_{i \to j} \colon (ij) \in E \}$ as in Algorithm~\ref{alg:annealedmsg}.
\FOR{iteration $t$}
\FOR{edge $(ij) \in E$}
\STATE{\emph{Perform different message updates depending on the node type of the source and destination},}
\begin{align}
&{A \to A\cup B}:& &m_{i\to j} \gets \big[ \sum_{x_i} (\psi_i m_{\sim i}) (\frac{ \psi_{ij}}{m_{j\to i}})^{1/\rho_{ij}} \big]^{\rho_{ij}} , ~~~~~~~~~~~~~\text{(sum-product)} \label{equ:mix_product1}\\
&B \to B:&&m_{i\to j} \gets \max_{x_i} (\psi_i m_{\sim i})^{\rho_{ij}} (\frac{ \psi_{ij}}{m_{j\to i}}) , ~~~~~~~~~~~~~~~~~~~~ \text{(max-product)} \label{equ:mix_product2}\\
&{B \to A}:& &m_{i\to j} \gets \big[ \sum_{x_i \in \mathcal{X}_i^*} (\psi_i m_{\sim i}) (\frac{ \psi_{ij}}{m_{j\to i}})^{1/\rho_{ij}} \big]^{\rho_{ij}} , ~~~~~~ \text{(argmax-product)} \label{equ:mix_product3}\\
&&&\text{where the set $\mathcal{X}_i^* = \argmax_{x_i} \psi_i(x_i) m_{\sim i}(x_i)$
and $m_{\sim i}(x_i)= \prod_{k\in \neib{i}} m_{ki}(x_i)$.} \notag
\end{align}
\vspace{-1.5\baselineskip}
\ENDFOR
\ENDFOR
\STATE Calculate the singleton beliefs $b_{i}(x_i)$ and decode the solution $\boldsymbol{x}_B^*$,
\begin{align}
x_i^* = \argmax_{x_i} b_{i}(x_i), ~~ \forall i \in B, & \text{ where $b_{i}(x_i) \propto \psi_i(x_i) m_{\sim i}(x_i)$} .
\end{align}
\end{algorithmic}
\end{algorithm}
Algorithm~\ref{alg:mix_product_msg} has an intuitive interpretation: the sum-product and max-product messages in \eqref{equ:mix_product1} and \eqref{equ:mix_product2} correspond to the marginalization and maximization steps, respectively. The special ``argmax-product'' messages in \eqref{equ:mix_product3} serve to synchronize the sum-product and max-product messages -- they restrict the max nodes to the currently decoded local marginal MAP solutions $\mathcal{X}^*_i = \argmax \psi_i(x_i) m_{\sim i}(x_i)$, and pass the posterior beliefs back to the sum part. Note that the summation notation in \eqref{equ:mix_product3} can be ignored if $\mathcal{X}^*_i$ contains only a single optimal state.
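The synchronization role of the argmax-product message can be seen on a tiny example. The sketch below (the chain $a_1 - b - a_2$, state space, and random potentials are illustrative assumptions, and we assume a unique maximizer) forms one argmax-product message and checks that the resulting belief at a sum node is proportional to the clamped conditional $p(x_{a_1} | x_b = x_b^*)$:

```python
import random

# Sketch of the argmax-product message (equ:mix_product3) on the chain
# a1 - b - a2, with sum nodes A = {a1, a2}, max node B = {b}, rho = 1.
random.seed(3)
K = 3
psi = {v: [random.random() + 0.1 for _ in range(K)] for v in ("a1", "b", "a2")}
edge = {e: [[random.random() + 0.1 for _ in range(K)] for _ in range(K)]
        for e in (("a1", "b"), ("a2", "b"))}

# Sum-product messages into the max node (equ:mix_product1); with rho = 1
# the incoming message m_{b->a_i} cancels against the division.
m_a1_b = [sum(psi["a1"][x] * edge[("a1", "b")][x][xb] for x in range(K))
          for xb in range(K)]
m_a2_b = [sum(psi["a2"][x] * edge[("a2", "b")][x][xb] for x in range(K))
          for xb in range(K)]

# Decode the max node and form the argmax-product message b -> a1, which
# sums only over the decoded set X_b^* (here a single state).
belief_b = [psi["b"][xb] * m_a1_b[xb] * m_a2_b[xb] for xb in range(K)]
xb_star = max(range(K), key=lambda xb: belief_b[xb])
m_b_a1 = [psi["b"][xb_star] * m_a2_b[xb_star] * edge[("a1", "b")][x][xb_star]
          for x in range(K)]

# The resulting belief at a1 is proportional to psi_a1(x) psi_{a1,b}(x, xb*),
# i.e. to the conditional p(x_{a1} | x_b = x_b^*): the ratio is constant.
tau_a1 = [psi["a1"][x] * m_b_a1[x] for x in range(K)]
ref = [psi["a1"][x] * edge[("a1", "b")][x][xb_star] for x in range(K)]
ratios = [t / r for t, r in zip(tau_a1, ref)]
assert max(ratios) - min(ratios) < 1e-9 * max(ratios)
```

This is the "passes the posterior beliefs back to the sum part" behavior described above, in executable form.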
One critical feature of our mixed-product BP is that it updates the marginalization and maximization components simultaneously, in a parallel fashion,
and is thus computationally much more efficient than traditional methods that require fully solving a marginalization sub-problem before taking each maximization step. This advantage is inherited from our general variational framework, which naturally integrates the marginalization and maximization sub-problems into a joint optimization problem.
Interestingly, Algorithm~\ref{alg:mix_product_msg} also bears similarity to a recent hybrid message passing method of \citet{Jiang10}, which differs from Algorithm~\ref{alg:mix_product_msg} only in replacing the special argmax-product messages \eqref{equ:mix_product3} with regular max-product messages.
We make a detailed comparison of these two algorithms in Section~\ref{sec:compare_jiang}, and show that it is in fact the argmax-product messages \eqref{equ:mix_product3} that lend our algorithm several appealing optimality guarantees.
\subsection{Reparameterization Interpretation and Local Optimality Guarantees}
\label{sec:localoptimality}
An important interpretation of the sum-product and max-product BP is the
reparameterization viewpoint \citep{Wainwright03, Weiss07}: Message passing updates can be
viewed as moving probability mass between local pseudo-marginals (or beliefs), in a way that leaves their
product a reparameterization of the original distribution, while ensuring some consistency conditions at the fixed points. Such viewpoints are theoretically important, because they are useful for proving optimality guarantees for the BP algorithms.
In this section, we show that the mixed-product BP in Algorithm~\ref{alg:mix_product_msg} has a similar reparameterization interpretation, based on which we establish a local optimality guarantee for mixed-product BP.
To start, we define a set of ``mixed-beliefs'' as
\begin{align}
b_{i}(x_i) \propto \psi_i m_{\sim i}, &&
b_{ij}(x_{ij}) \propto b_i b_j (\frac{ \psi_{ij}}{m_{i\to j} m_{j\to i}}) ^{1/\rho_{ij} }.
\label{equ:mixedmargin}
\end{align}
The marginal MAP solution should be decoded from $x_i^* \in \arg\max_{x_i} b_i(x_i),
\forall i \in B$, as is typical in max-product BP.
Note that the above mixed-beliefs $\{b_i, b_{ij}\}$ are different from the local marginals $\{\tau_i, \tau_{ij}\}$ defined in \eqref{equ:weighted_marginals}; rather, they are softened versions of $\{\tau_i, \tau_{ij}\}$. Their relationship is clarified explicitly in the following proposition.
\begin{pro}
The $\{\tau_i, \tau_{ij}\}$ in \eqref{equ:weighted_marginals} and the $\{b_i, b_{ij}\}$ in \eqref{equ:mixedmargin} are associated via,
\begin{align*}
\begin{cases}
b_{i} \propto \tau_{i} &\forall i\in A , \\
b_{i}\propto (\tau_{i})^{\epsilon} &\forall i\in B
\end{cases} & &
\begin{cases}
b_{ij} \propto b_{i} b_{j} (\frac{\tau_{ij} }{\tau_{i} \tau_{j}}) & \forall (ij)\in E_A \cup \partial_{AB} \\
b_{ij} \propto b_{i} b_{j} (\frac{\tau_{ij} }{\tau_{i} \tau_{j}})^{\epsilon} & \forall (ij)\in E_B .
\end{cases}
\end{align*}
\end{pro}
\begin{proof}
The result follows from the simple algebraic transformation between \eqref{equ:weighted_marginals} and \eqref{equ:mixedmargin}.
\end{proof}
Therefore, as $\epsilon \to 0^+$, the $\tau_i$ ($= b_i^{1/\epsilon}$) for $i\in B$ should concentrate their mass on a deterministic configuration, but $b_i$ may continue to have soft values.
We now show that the mixed-beliefs $\{b_i, b_{ij} \}$ have a reparameterization interpretation.
\begin{thm}
\label{thm:reparameter}
At a fixed point of mixed-product BP in Algorithm~\ref{alg:mix_product_msg}, the mixed-beliefs defined in \eqref{equ:mixedmargin} satisfy\\
\textbf{Reparameterization:}
\begin{equation}
\label{equ:repara}
p(\boldsymbol{x}) \propto \prod_{i\in V} b_{i} (x_i) \prod_{(ij)\in E}\big[ \frac{b_{ij}(x_i, x_j)}{b_{i}(x_i) b_{j}(x_j)} \big]^{\rho_{ij}} .
\end{equation}
\textbf{Mixed-consistency:}
\begin{align}
{\rm (a)}\hspace{-1em}&& \sum_{x_i} b_{ij}(x_i, x_j) & = b_j(x_j), & \forall i \in A, j \in A\cup B , \label{equ:sum_consistency}\\
{\rm (b)}\hspace{-1em}&&\max_{x_i} b_{ij}(x_i, x_j) &= b_j(x_j), &\forall i \in B, j \in B , \label{equ:max_consistency} \\
{\rm (c)}\hspace{-1em}&&\sum_{x_i \in \arg\max b_i} \!\!\!\!\!\!\!\! b_{ij}(x_i, x_j) & = b_j(x_j), & \forall i \in B, j \in A . \label{equ:mix_consistency}
\end{align}
\end{thm}
\begin{proof}
Directly substitute the definition \eqref{equ:mixedmargin} into the message update \eqref{equ:mix_product1}-\eqref{equ:mix_product3}.
\end{proof}
The three mixed-consistency constraints exactly map to the three
types of message updates in Algorithm~\ref{alg:mix_product_msg}.
Constraints (a) and (b) enforce the regular sum- and max-consistency of the sum- and max-product messages in \eqref{equ:mix_product1} and \eqref{equ:mix_product2}, respectively.
Constraint (c) corresponds to the argmax-product message update in \eqref{equ:mix_product3}: it enforces the marginals to be consistent after $x_i$ is assigned to the currently decoded solution,
$x_i =\arg \max_{x_i} b_i (x_i) = \arg \max_{x_i} \sum_{x_j} b_{ij}(x_i, x_j)$, corresponding to solving a local marginal MAP problem on $b_{ij}(x_i, x_j)$.
It turns out that this special constraint is a crucial ingredient of mixed-product BP, enabling us to prove guarantees on the strong local optimality of the solution.
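For concreteness, the three mixed-consistency conditions can be checked numerically from a belief table; the sketch below distinguishes the three cases, using hypothetical belief values in the test.

```python
def check_consistency(b_i, b_ij, b_j, kind, tol=1e-9):
    """Check one of the three mixed-consistency conditions when
    eliminating node i from b_ij[x_i][x_j]:
      kind='sum'    (a): sum_{x_i} b_ij(x_i, x_j) = b_j(x_j)
      kind='max'    (b): max_{x_i} b_ij(x_i, x_j) = b_j(x_j)
      kind='argmax' (c): sum of b_ij over x_i in argmax b_i = b_j(x_j)."""
    n_i, n_j = len(b_i), len(b_j)
    if kind == 'sum':
        marg = [sum(b_ij[xi][xj] for xi in range(n_i)) for xj in range(n_j)]
    elif kind == 'max':
        marg = [max(b_ij[xi][xj] for xi in range(n_i)) for xj in range(n_j)]
    else:
        m = max(b_i)
        best = [xi for xi in range(n_i) if abs(b_i[xi] - m) < tol]
        marg = [sum(b_ij[xi][xj] for xi in best) for xj in range(n_j)]
    return all(abs(a - b) < tol for a, b in zip(marg, b_j))
```

Case (c) marginalizes $b_{ij}$ only over the maximizing states of $b_i$, which is exactly the local marginal MAP consistency described above.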
Some notation is required. Suppose $C$ is a subset of {max} nodes in $B$. Let $G_{C\cup A} = (C\cup A, E_{C\cup A})$ be
the subgraph of $G$ induced by nodes $C\cup A$, where $E_{C\cup A} = \{(ij) \in E \colon i,
j \in C\cup A\}$. We call $G_{C\cup A}$ a semi-$A$-$B$ subtree of $G$ if the edges in $E_{C\cup A}
\backslash E_B$ form an $A$-$B$ tree. In other words, $G_{C\cup A}$ is a semi-$A$-$B$
tree if it is an $A$-$B$ tree when ignoring any edges entirely within the {max} set $B$.
See \figref{fig:semiABtree} for examples of semi $A$-$B$ trees.
Following \citet{Weiss07}, we say that a set of weights $\{\rho_{ij} \}$ is \emph{provably convex} if there exist positive constants $\kappa_i $ and $\kappa_{i\to j}$, such that $\kappa_i + \sum_{i' \in \neib{i}}\kappa_{i' \to i} =
1$ and $\kappa_{i\to j} + \kappa_{j\to i} = \rho_{ij}$.
\citet{Weiss07} shows that if $\{\rho_{ij} \}$ is provably convex, then $H({\boldsymbol{\tau}}) = \sum_i H_i - \sum_{ij} \rho_{ij} I_{ij}$ is a concave function of ${\boldsymbol{\tau}}$ in the locally consistent polytope ${\mathbb{L}(G)}$.
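The provably-convex certificate can be verified mechanically; the sketch below checks a given set of constants $\{\kappa_i, \kappa_{i\to j}\}$ against the two conditions (the example values in the test are hypothetical).

```python
def is_provably_convex(edges, rho, kappa_node, kappa_dir, tol=1e-9):
    """Check the provably-convex certificate of Weiss et al.:
    positive kappa_i and kappa_{i->j} with
      kappa_i + sum_{i' in N(i)} kappa_{i'->i} = 1   for every node i,
      kappa_{i->j} + kappa_{j->i} = rho_ij           for every edge (ij)."""
    nodes = {v for e in edges for v in e}
    if any(k <= 0 for k in list(kappa_node.values()) + list(kappa_dir.values())):
        return False
    for i in nodes:
        incoming = sum(kappa_dir[(j, i)] for (a, b) in edges
                       for j, k in ((a, b), (b, a)) if k == i)
        if abs(kappa_node[i] + incoming - 1.0) > tol:
            return False
    return all(abs(kappa_dir[(i, j)] + kappa_dir[(j, i)] - rho[(i, j)]) <= tol
               for (i, j) in edges)
```

Finding such constants in general is a linear feasibility problem; the function above only verifies a candidate certificate.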
\begin{thm}
\label{thm:localopt}
Suppose $C$ is a subset of $B$ such that $G_{C\cup A}$ is a semi-$A$-$B$ tree, and the weights $\{
\rho_{ij} \}$ satisfy
\begin{enumerate}
\item $\rho_{ij} =1$ for $(ij)\in E_A$;
\item $0\leq \rho_{ij} \leq 1$ for $(ij)\in E_{C\cup A} \cap \partial_{AB}$;
\item $\{\rho_{ij} | (ij)\in E_{C\cup A} \cap E_B\}$ is provably convex.
\end{enumerate}\vspace{-.5\baselineskip}
At the fixed point of mixed-product BP in Algorithm~\ref{alg:mix_product_msg}, if the mixed-beliefs $b_i$, $b_{ij}$ in \eqref{equ:mixedmargin} all have unique maxima, then there exists a
$B$-configuration $\boldsymbol{x}_B^*$ satisfying $x^*_i = \arg \max b_i$ for $\forall i
\in B$ and $(x^*_i, x^*_j) = \arg\max b_{ij}$ for $\forall (ij)\in E_B$, and $\boldsymbol{x}_B^*$ is locally optimal in the sense that $Q(\boldsymbol{x}_B^*; \boldsymbol{\theta})$ is no
smaller than that of any $B$-configuration that differs from $\boldsymbol{x}_B^*$ only on $C$; that is, $Q(\boldsymbol{x}_B^* ; \boldsymbol{\theta}) = \max_{\boldsymbol{x}_C}Q([\boldsymbol{x}_C, \boldsymbol{x}_{B\setminus C}^*] ; \boldsymbol{\theta})$.
\end{thm}
\begin{proof}[Proof (sketch)]
(See appendix for the complete proof.)
The mixed-consistency constraint (c) in \eqref{equ:mix_consistency} and the fact that $G_{C\cup A}$ is a semi-$A$-$B$ tree enables the summation part
to be eliminated away. The remaining part only involves the {max} nodes, and
the method in \citet{Weiss07} for analyzing standard MAP can be applied.
\end{proof}
For $G_{C\cup A}$ to be a semi $A$-$B$ tree, the sum part $G_A$ must be a tree, which
Theorem~\ref{thm:localopt} assumes implicitly. For the hidden Markov chain in \figref{fig:hiddenchain},
Theorem~\ref{thm:localopt} implies only the local
optimality up to Hamming distance one (or coordinate-wise optimality), because any semi $A$-$B$ subtree of $G$ in \figref{fig:hiddenchain} can contain at most one max node.
However, Theorem~\ref{thm:localopt} is in general much stronger, especially when the {sum} part is not fully connected, or when the {max} part has interior regions disconnected from the {sum} part. As examples, see \figref{fig:semiABtree}(b)-(c).
\begin{figure}[tbp]
\centering
\begin{tabular}{ccccc}
\includegraphics[height=0.1\textwidth]{figures_jmlr/semiABtreeHMM.pdf} & &
\includegraphics[height=0.1\textwidth]{figures_jmlr/semiABtree3lines.pdf} &&
\includegraphics[height=0.1\textwidth]{figures_jmlr/semiABtreeRBM_V3.pdf} \\
(a) && (b) && (c)
\end{tabular}
\caption{Examples of semi $A$-$B$ trees. The shaded nodes represent sum nodes, while the unshaded are max nodes.
In each graph, a semi $A$-$B$ tree is labeled by red bold lines.
Theorem~\ref{thm:localopt} shows that the fixed point of mixed-product BP is locally optimal up to jointly perturbing all the max nodes in any semi-$A$-$B$ subtree of $G$. }
\label{fig:semiABtree}
\end{figure}
\subsection{The importance of the Argmax-product Message Updates}
\label{sec:compare_jiang}
\citet{Jiang10} proposed a similar hybrid message passing algorithm, repeated here as Algorithm~\ref{alg:jiang_product}, which differs from our mixed-product BP only in replacing our argmax-product message update \eqref{equ:mix_product3} with the usual max-product message update \eqref{equ:mix_product2}.
\begin{algorithm}[h]
\caption{Hybrid Message Passing by \citet{Jiang10}}
\label{alg:jiang_product}
\begin{algorithmic}
\STATE 1. Message Update:
\begin{align*}
&{A \to A \cup B}:&& m_{i\to j} \gets \big[ \sum_{x_i} (\psi_i m_{\sim i}) (\frac{ \psi_{ij}}{m_{j\to i}})^{1/\rho_{ij}} \big]^{\rho_{ij}} , ~~~~~~~~~~~~~~~~~~~~ \text{(sum-product)} \\% \label{equ:jiang_product1}\\
&{B \to A \cup B}:&& m_{i\to j} \gets \max_{x_i} (\psi_i m_{\sim i})^{\rho_{ij}} (\frac{ \psi_{ij}}{m_{j\to i}}) . ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \text{(max-product)}
\end{align*}
\STATE 2. Decoding:\quad $x_i^{*} = \argmax_{x_i} b_i(x_i)$ for $\forall i\in B$, where $b_i(x_i) \propto \psi_i(x_i) m_{\sim i}(x_i)$.
\end{algorithmic}
\end{algorithm}
Similar to our mixed-product BP, Algorithm~\ref{alg:jiang_product} also satisfies the reparameterization property in \eqref{equ:repara} (with beliefs $\{b_i, b_{ij}\}$ defined by \eqref{equ:mixedmargin});
it also satisfies a set of similar, but crucially different, consistency conditions at its fixed points,
\begin{align*}
\sum_{x_i} b_{ij}(x_i, x_j) = b_j(x_j), ~~~~~~~~~~~ \forall i\in A, j \in A\cup B, \\
\max_{x_i} b_{ij}(x_i, x_j) = b_j (x_j), ~~~~~~~~~~~ \forall i\in B, j \in A\cup B,
\end{align*}
which exactly map to the sum- and max-product message updates in Algorithm~\ref{alg:jiang_product}.
Despite its striking similarity, Algorithm~\ref{alg:jiang_product} has very different properties, and does not share the appealing variational interpretation and optimality guarantees that we have demonstrated for mixed-product BP.
First, it is unclear whether Algorithm~\ref{alg:jiang_product} can be interpreted as a fixed point algorithm for maximizing our, or a similar, variational objective function.
Second, it does not inherit the same optimality guarantees in Theorem~\ref{thm:localopt}, despite its similar reparameterization and consistency conditions.
These disadvantages are caused by the loss of the special argmax-product message update and its associated mixed-consistency condition in \eqref{equ:mix_consistency}, which was a critical ingredient of the proof of Theorem~\ref{thm:localopt}.
More detailed insights into Algorithm~\ref{alg:jiang_product} and mixed-product BP can be obtained by considering the special case when the full graph $G$ is an undirected tree. We show that in this case, Algorithm~\ref{alg:jiang_product} can be viewed as optimizing a set of \emph{approximate} objective functions, obtained by rearranging the max and sum operators into orders that require less computational cost, while
mixed-product BP attempts to maximize the \emph{exact} objective function by message updates that effectively perform some ``asynchronous" coordinate descent steps. In the sequel, we use an illustrative toy example to explain the main ideas.
\vspace{1\baselineskip}
\begin{wrapfigure}{r}{0.18\textwidth}
\begin{center}
\vspace{-25pt}
\includegraphics[width=0.18\textwidth]{figures_jmlr/hiddenchain_4node_figure.pdf}
\vspace{-25pt}
\end{center}
\end{wrapfigure}
\textbf{Example 2.} \textit{Consider the marginal MAP problem shown on the left, where the graph $G$ is an undirected tree; the sum and max sets are $A=\{1,2\}$ and $B=\{3,4\}$, respectively. We analyze how Algorithm~\ref{alg:jiang_product} and mixed-product BP in Algorithm~\ref{alg:mix_product_msg} perform on this toy example, when both taking Bethe weights ($\rho_{ij} = 1$ for $(ij)\in E$). }
\textit{
\emph{\textbf{Algorithm~\ref{alg:jiang_product} (\citet{Jiang10})}}. Since $G$ is a tree, one can show that Algorithm~\ref{alg:jiang_product} (with Bethe weights) terminates after a full forward and backward iteration (e.g., messages passed along $x_3\to x_1 \to x_2 \to x_4$ and then $x_4 \to x_2 \to x_1 \to x_3$). By tracking the messages, one can write its final decoded solution in a closed form,
\begin{align*}
x_3^* = \argmax_{x_3}\sum_{x_1}\sum_{x_2} \max_{x_4}[ \exp(\theta(\boldsymbol{x}))], && x_4^* = \argmax_{x_4}\sum_{x_2}\sum_{x_1} \max_{x_3} [\exp(\theta(\boldsymbol{x}))].
\end{align*}
On the other hand, the true marginal MAP solution is given by,
\begin{align*}
x_3^* = \argmax_{x_3} \max_{x_4} \sum_{x_1}\sum_{x_2} [ \exp(\theta(\boldsymbol{x}))], && x_4^* = \argmax_{x_4} \max_{x_3}\sum_{x_2}\sum_{x_1} [\exp(\theta(\boldsymbol{x}))].
\end{align*}
Here, Algorithm~\ref{alg:jiang_product} approximates the exact marginal MAP problem by rearranging the max and sum operators into an elimination order that makes the calculation easier. A similar property holds in the general case when $G$ is an undirected tree: Algorithm~\ref{alg:jiang_product} (with Bethe weights) terminates in a finite number of steps, and its output solution $x_i^*$ effectively maximizes an approximate objective obtained by reordering the max and sum operators along a tree-order (see Definition~\ref{def:partialorder}) that is rooted at node $i$.
The performance of the algorithm should be related to the error caused by exchanging the order of max and sum operators. However, exact optimality guarantees are likely difficult to show because it maximizes an inexact objective function.
In addition, since each component $x_i^*$ uses a different order of arrangement, and hence maximizes a different surrogate objective function, it is unclear whether the joint $B$-configuration $\boldsymbol{x}_B^* = \{x_i^* \colon i \in B\}$ given by Algorithm~\ref{alg:jiang_product} maximizes a single consistent objective function.
}
\textit{
\emph{\textbf{Algorithm~\ref{alg:mix_product_msg} (mixed-product)}}. On the other hand, the mixed-product belief propagation in Algorithm~\ref{alg:mix_product_msg} may not terminate in a finite number of steps, nor does it necessarily yield a closed form solution when $G$ is an undirected tree.
However, Algorithm~\ref{alg:mix_product_msg} proceeds in an attempt to optimize the exact objective function.
In this toy example, we can show that the true solution is guaranteed to be a fixed point of Algorithm~\ref{alg:mix_product_msg}.
Let $b_3(x_3)$ be the mixed-belief on $x_3$ at the current iteration, and $x_3^* = \argmax_{x_3} b_3(x_3)$ its unique maximum.
After a message sequence passed from $x_3$ to $x_4$, one can show that $b_4(x_4)$ and $x_4^*$ update to
\begin{align*}
x_4^* &= \argmax_{x_4} b_4(x_4), \\
b_4(x_4) &= \sum_{x_2} \sum_{x_1} \exp(\theta([x_3^*, x_{\neg 3} ])) = \exp( Q([x_3^*, x_{4}] ; \boldsymbol{\theta})),
\end{align*}
where we maximize the exact objective function $Q([x_3, x_4] ; \boldsymbol{\theta})$ with fixed $x_3 = x_3^*$.
Therefore, on this toy example, one sweep ($x_3\to x_4$ or $x_4 \to x_3$) of Algorithm~\ref{alg:mix_product_msg} is effectively performing a coordinate descent step, which monotonically improves the true objective function towards a local maximum.
In more general models, Algorithm~\ref{alg:mix_product_msg} differs from sequential coordinate descent, and does not guarantee monotonic convergence. But, it can be viewed as a ``parallel" version of coordinate descent, which ensures the stronger local optimality guarantees shown in Theorem~\ref{thm:localopt}.
}
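The operator reordering in Example 2 can be reproduced by brute force. The sketch below uses hypothetical binary pairwise potentials on the chain $x_3 - x_1 - x_2 - x_4$ and compares the exact objective for $x_3$ with the reordered surrogate used by the hybrid algorithm.

```python
import math

# Hypothetical pairwise log-potentials on the chain x3 - x1 - x2 - x4;
# theta(x) = th31[x3][x1] + th12[x1][x2] + th24[x2][x4], all variables binary.
th31 = [[0.0, 1.2], [0.8, 0.0]]
th12 = [[0.5, 0.0], [0.0, 0.5]]
th24 = [[0.0, 2.0], [0.1, 0.0]]

def w(x3, x1, x2, x4):
    return math.exp(th31[x3][x1] + th12[x1][x2] + th24[x2][x4])

def exact_x3():
    """Exact marginal MAP: argmax_{x3} max_{x4} sum_{x1,x2} exp(theta)."""
    return max((0, 1), key=lambda x3: max(
        sum(w(x3, x1, x2, x4) for x1 in (0, 1) for x2 in (0, 1))
        for x4 in (0, 1)))

def reordered_x3():
    """Reordered surrogate: argmax_{x3} sum_{x1,x2} max_{x4} exp(theta);
    the max is pushed inside the sums, which is cheaper but inexact."""
    return max((0, 1), key=lambda x3: sum(
        max(w(x3, x1, x2, x4) for x4 in (0, 1))
        for x1 in (0, 1) for x2 in (0, 1)))
```

For these particular numbers the two objectives happen to agree on the decoded value; in general they need not, and the gap grows with the error introduced by exchanging max and sum.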
\section{Convergent Algorithms by Proximal Point Methods }
\label{sec:proximal}
An obvious disadvantage of mixed-product BP is its lack of convergence guarantees, even when $G$ is an undirected tree. In this section, we apply a proximal point approach \citep[e.g.,][]{martinet1970breve, Rockafellar76} to derive convergent algorithms that directly optimize our free energy objectives. Similar methods have been applied to standard sum-inference \citep{Yuille_CCCP} and max-inference \citep{Ravikumar10}.
For the purpose of illustration, we first consider the problem of maximizing the \emph{exact} marginal MAP free energy, $F_{mix}({\boldsymbol{\tau}}, \boldsymbol{\theta}) = \langle {\boldsymbol{\tau}}, \boldsymbol{\theta} \rangle + H_{A|B}({\boldsymbol{\tau}})$. The proximal point algorithm works by iteratively optimizing a smoothed problem,
$${\boldsymbol{\tau}}^{t+1} = \argmin_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ - F_{mix}({\boldsymbol{\tau}}, \boldsymbol{\theta}) + \lambda^t D( {\boldsymbol{\tau}} || {\boldsymbol{\tau}}^{t}) \}, $$
where ${\boldsymbol{\tau}}^{t}$ is the solution at iteration $t$, and $\lambda^t$ is a positive
coefficient. Here, $D(\cdot || \cdot)$ is a distance, called the proximal function, which forces ${\boldsymbol{\tau}}^{t+1}$ to be close to ${\boldsymbol{\tau}}^{t}$;
typical choices of $D(\cdot || \cdot)$ are Euclidean or Bregman distances or $\psi$-divergences
\citep[e.g.,][]{Teboulle92, Iusem93}.
Proximal algorithms have nice convergence guarantees:
the sequence of objective values is guaranteed to
be non-increasing at each iteration, and $\{{\boldsymbol{\tau}}^{t}\}$ converges to an
optimal solution,
under some regularity conditions. See, e.g.,
\citet{Rockafellar76, Tseng93, Iusem93}. The proximal algorithm is closely related to the majorize-minimize (MM) algorithm \citep{Hunter04} and the convex-concave procedure \citep{Yuille_CCCP}.
For our purpose, we take $D(\cdot || \cdot)$ to be a KL divergence between distributions on the max nodes,
$$D({\boldsymbol{\tau}} || {\boldsymbol{\tau}}^t) = \mathrm{KL}(\tau_B(\boldsymbol{x}_B) || \tau^t_B(\boldsymbol{x}_B) ) = \sum_{\boldsymbol{x}_B} \tau_B(\boldsymbol{x}_B ) \log \frac{\tau_B(\boldsymbol{x}_B)}{\tau^t_B(\boldsymbol{x}_B)}.$$
In this case, the proximal point algorithm reduces to Algorithm~\ref{alg:proximal_point}, which iteratively solves a smoothed free energy objective, with natural parameter $\boldsymbol{\theta}^t$ updated at each iteration.
\begin{algorithm}[h]
\caption{Proximal Point Algorithm for Marginal MAP (Exact)}
\label{alg:proximal_point}
\begin{algorithmic}
\STATE Initialize local marginals ${\boldsymbol{\tau}}^0$.
\FOR{iteration $t$}
\STATE
\vspace{-1.2\baselineskip}
\begin{align}
& \boldsymbol{\theta}^{t+1} = \boldsymbol{\theta} + \lambda^t \log {\boldsymbol{\tau}}^{t}_B, \\
&{\boldsymbol{\tau}}^{t+1} = \arg\max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle {\boldsymbol{\tau}}, \boldsymbol{\theta}^{t+1} \rangle + H_{A|B}({\boldsymbol{\tau}}) + \lambda^t H_B({\boldsymbol{\tau}}) \}, \label{equ:proximal_inner}
\end{align}
\vspace{-1.2\baselineskip}
\ENDFOR
\STATE Decoding:
$\displaystyle x_i^{*} = \argmax_{x_i} \tau_i(x_i)$ for $\forall i\in B$.
\end{algorithmic}
\end{algorithm}
Intuitively, the proximal inner loop \eqref{equ:proximal_inner} essentially ``adds back'' the truncated entropy
term $H_B({\boldsymbol{\tau}})$, while canceling its effect by adjusting $\boldsymbol{\theta}$ in the opposite
direction.
Typical choices of $\lambda^t$ include $\lambda^t = 1$ (constant) and $\lambda^t = 1/t$ (harmonic).
Note that the proximal approach is distinct from an annealing method, which would require that the annealing coefficient vanish to zero.
Interestingly, if we take $\lambda^t = 1$, then the inner maximization problem \eqref{equ:proximal_inner} reduces to the standard log-partition function duality \eqref{equ:sumduality}, corresponding to a pure marginalization task. This has the interpretation of transforming the marginal MAP problem into a sequence of standard sum-inference problems.
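On a model small enough for brute force, the proximal iteration with $\lambda^t = 1$ can be simulated directly; each inner step is then exact sum-inference on the tilted parameters. The two-variable model below is hypothetical.

```python
import math

# Hypothetical model: one sum variable x_A and one max variable x_B, binary.
theta = {(0, 0): 0.2, (0, 1): 1.0, (1, 0): 1.5, (1, 1): 0.1}

def m(xb):
    """m(x_B) = sum_{x_A} exp(theta(x_A, x_B)); its argmax is the exact answer."""
    return sum(math.exp(theta[(xa, xb)]) for xa in (0, 1))

# Proximal iterations with lambda^t = 1: the inner problem is pure
# sum-inference on theta + log tau_B^t, so tau_B^{t+1}(x_B) ∝ tau_B^t(x_B) m(x_B).
tau_b = [0.5, 0.5]
for _ in range(50):
    raw = [tau_b[xb] * m(xb) for xb in (0, 1)]
    z = sum(raw)
    tau_b = [r / z for r in raw]

x_b_star = max((0, 1), key=m)   # exact marginal MAP answer, for comparison
```

After 50 iterations $\tau_B$ has effectively concentrated on the exact solution, illustrating how the scheme turns marginal MAP into a sequence of sum-inference problems.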
In practice, we approximate $H_{A|B}(\vtau)$ and $H_{B}(\vtau)$ by the pairwise entropy decompositions $\hat{H}_{A|B}(\vtau)$ and $\hat{H}_B(\vtau)$ in \eqref{equ:HABHB}, respectively.
If $\hat{H}_B(\vtau)$ is provably convex in the sense of \citet{Weiss07},
then the resulting approximate algorithm can be interpreted as a proximal algorithm that
maximizes $\hat{F}_{mix}({\boldsymbol{\tau}}, \boldsymbol{\theta})$ with proximal function defined as
\begin{equation*}
D_{pair}({\boldsymbol{\tau}} || {\boldsymbol{\tau}}^t) = \sum_{i \in B} \kappa_i \mathrm{KL}[\tau_i (x_i)|| \tau_i^t(x_i)] ~ + \! \sum_{(ij) \in E_B} \kappa_{i\to j} \mathrm{KL}[\tau_{ij}(x_i | x_j) || \tau_{ij}^t(x_i | x_j) ],
\end{equation*}
where $\{\kappa_i, \kappa_{i\to j}\}$ are positive, and satisfy $\rho_i = \kappa_i + \sum_{k\in \neib{i}} \kappa_{k\to i}$ and $\rho_{ij} = \kappa_{i\to j} + \kappa_{j \to i}$. In this case, Algorithm~\ref{alg:proximal_point} still inherits proximal methods' nice convergence guarantees.
An interesting special case is when both $H_{A|B}(\vtau)$ and $H_{B}(\vtau)$ are approximated by a Bethe approximation, which is provably convex only in some special cases, such as when $G$ is tree structured.
However, we find in practice that this approximation gives very accurate solutions, even on general loopy graphs where its convergence is no longer theoretically guaranteed.
\section{ Connections to EM}
\label{sec:EM}
A natural algorithm for solving the marginal MAP problem is to use the expectation-maximization (EM) algorithm,
by treating $\boldsymbol{x}_A$ as the hidden variables and $\boldsymbol{x}_B$ as the ``parameters'' to be maximized. In this
section, we show that the EM algorithm can be seen as a coordinate ascent algorithm on a mean field variant
of our framework.
We start by introducing a ``non-convex" generalization of Theorem~\ref{thm:duality}.
\begin{cor}
\label{cor:nonconvex_mixduality}
Let $\margpoly^o$ be the subset of the marginal polytope $\mathbb{M}$ corresponding to the distributions in which $\boldsymbol{x}_B$ are clamped to some deterministic values, that is,
$$\margpoly^o = \{ {\boldsymbol{\tau}} \in \mathbb{M}~ \colon ~ \text{$\exists \boldsymbol{x}_B^* \in \mathcal{X}_{B}$, such that $\tau(\boldsymbol{x}_B) = \boldsymbol{1}(\boldsymbol{x}_B = \boldsymbol{x}_B^*) $} \}.$$
Then the dual optimization \eqref{equ:mixduality} remains exact if the marginal polytope $\mathbb{M}$ is replaced by any $\mathbb{N}$ satisfying $\margpoly^o \subseteq \mathbb{N} \subseteq \mathbb{M}$, that is,
\begin{align}
\label{equ:nonconvex_mixdaulity}
\Phi_{AB} = \max_{{\boldsymbol{\tau}} \in \mathbb{N}} \{ \langle \boldsymbol{\theta}}%{\boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H_{A|B}(\vtau) \}.
\end{align}
\end{cor}
\begin{proof}
For an arbitrary marginal MAP solution $\boldsymbol{x}_B^{*}$, the ${\boldsymbol{\tau}}^{*}$ with ${\tau^{*}}(\boldsymbol{x}) = p(\boldsymbol{x} | \boldsymbol{x}_B = \boldsymbol{x}_B^{*}; \boldsymbol{\theta})$
is an optimum of \eqref{equ:mixduality} and satisfies ${\boldsymbol{\tau}}^{*} \in \margpoly^o $. Therefore, restricting the optimization on $\margpoly^o$ (or any $\mathbb{N}$) does not change the maximum value of the objective function.
\end{proof}
\textbf{Remark.} Among all $\mathbb{N}$ satisfying $\margpoly^o \subseteq \mathbb{N} \subseteq \mathbb{M}$, the marginal polytope $\mathbb{M}$ is the smallest (and the unique) convex
set that includes $\margpoly^o$; that is, $\mathbb{M}$ is the convex hull of $\margpoly^o$.
To connect to EM, we define $\mathbb{M}^{\times}$, the set of
distributions in which $\boldsymbol{x}_A$ and $\boldsymbol{x}_B$ are independent, that is,
$\mathbb{M}^{\times} = \{{\boldsymbol{\tau}} \in \mathbb{M} |
\tau(\boldsymbol{x}) =\tau(\boldsymbol{x}_A) \tau(\boldsymbol{x}_B) \}$.
Since $\margpoly^o \subset \mathbb{M}^{\times} \subset \mathbb{M}$, the dual optimization \eqref{equ:mixduality} remains exact when restricted to $\mathbb{M}^\times$, that is,
\begin{align}
\Phi_{AB}(\boldsymbol{\theta}) = \max_{{\boldsymbol{\tau}} \in \mathbb{M}^{\times}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H_{A|B}(\vtau) \} = \max_{{\boldsymbol{\tau}} \in \mathbb{M}^{\times}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H_{A}(\vtau) \},
\end{align}
where the second equality holds because $H_{A|B}(\vtau) = H_{A}(\vtau)$ for ${\boldsymbol{\tau}} \in \mathbb{M}^{\times}$.
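The identity $H_{A|B}({\boldsymbol{\tau}}) = H_{A}({\boldsymbol{\tau}})$ on $\mathbb{M}^{\times}$ is the chain rule $H = H_B + H_{A|B}$ applied to a product distribution; a quick numerical check with arbitrary (hypothetical) marginals:

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0)

# Independent tau(x_A, x_B) = tau_A(x_A) tau_B(x_B), as in M^x
tau_a = [0.2, 0.8]
tau_b = [0.6, 0.3, 0.1]
joint = [pa * pb for pa in tau_a for pb in tau_b]

H_joint = entropy(joint)
H_cond = H_joint - entropy(tau_b)   # H_{A|B} = H - H_B
```

For any such product distribution, the conditional entropy of the sum part equals the entropy of $\tau_A$ alone.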
Although $\mathbb{M}^{\times}$ is no longer a convex set, it is natural to consider a coordinate update that alternately optimizes $\tau(\boldsymbol{x}_A)$ and $\tau(\boldsymbol{x}_B)$,
\begin{equation}\begin{split}
\text{Updating sum part}:~~~~~~& \tau_A^{t+1} \gets \arg \!\!\!\! \max_{\tau_A \in \mathbb{M}_{A}} \langle\mathbb{E}_{{\tau_B^{t}}}(\boldsymbol{\theta}), \tau_A \rangle +\HAtauA , \\
\text{Updating max part}: ~~~~~~& \tau_B^{t+1} \gets \arg \!\!\!\! \max_{ \tau_B \in \mathbb{M}_{B}} \langle\mathbb{E}_{\tau_A^{t+1}}(\boldsymbol{\theta} ), \tau_B \rangle ,
\label{equ:dualEM}
\end{split}\end{equation}
where $\mathbb{M}_A$ and $\mathbb{M}_B$ are the marginal polytopes over $\boldsymbol{x}_A$ and $\boldsymbol{x}_B$, respectively.
Note that the sum and max steps are each the dual of a sum-inference and a max-inference problem,
respectively. If we go back to the primal and update the primal configuration
$\boldsymbol{x}_B$ instead of $\tau_B$, then \eqref{equ:dualEM} can be rewritten as
\begin{equation*}
\begin{split}
\text{E step}:~~~~~~&\tau_{A}^{t+1}(\boldsymbol{x}_A) \gets p(\boldsymbol{x}_A | \boldsymbol{x}_B^{t}; \boldsymbol{\theta}) , \\
\text{M step}:~~~~~~&\boldsymbol{x}^{t+1}_B \gets \arg \max_{\boldsymbol{x}_B} \mathbb{E}_{\tau_A^{t+1}}(\boldsymbol{\theta}),
\end{split}
\end{equation*}
which is exactly the EM update, viewing $\boldsymbol{x}_B$ as parameters and $\boldsymbol{x}_A$ as hidden variables. Similar connections between EM and coordinate ascent methods on variational objectives have been discussed in \citet{Neal98} and \citet{Wainwright08}.
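A direct sketch of this EM update on a tiny model (hypothetical numbers, with a single binary sum variable $x_A$ and max variable $x_B$) also illustrates how EM can get stuck in local optima.

```python
import math

# Hypothetical model: sum variable x_A (hidden) and max variable x_B, binary.
theta = {(0, 0): 0.2, (0, 1): 1.0, (1, 0): 1.5, (1, 1): 0.1}

def em_marginal_map(xb, iters=20):
    """EM for marginal MAP: x_B plays the role of the parameters."""
    for _ in range(iters):
        # E step: tau_A(x_A) <- p(x_A | x_B; theta)
        raw = [math.exp(theta[(xa, xb)]) for xa in (0, 1)]
        z = sum(raw)
        tau_a = [r / z for r in raw]
        # M step: x_B <- argmax_{x_B} E_{tau_A}[theta(x_A, x_B)]
        xb = max((0, 1), key=lambda b: sum(tau_a[xa] * theta[(xa, b)]
                                           for xa in (0, 1)))
    return xb

def exact_xb():
    """Brute-force marginal MAP answer, for comparison."""
    return max((0, 1), key=lambda b: sum(math.exp(theta[(xa, b)])
                                         for xa in (0, 1)))
```

For these numbers both $x_B = 0$ and $x_B = 1$ are EM fixed points, while the exact answer is $x_B = 0$: the decoded value depends entirely on the initialization.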
When the E-step or M-step are intractable, one can insert various
approximations. In particular, approximating $\mathbb{M}_A$ by a
mean-field inner bound $\mathbb{M}_A^{mf}$ leads to variational EM. An interesting
observation is obtained by using a Bethe approximation \eqref{equ:bethe} to solve the E-step and a
linear relaxation to solve the M-step; in this case, the EM-like update is equivalent to
solving
\begin{equation}
\max_{{\boldsymbol{\tau}} \in {\mathbb{L}(G)}^{\times}} \big \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + \sum_{i\in A}{H_i} \ - \sum_{(ij)\in E_A} I_{ij} \big \},
\label{equ:EMenergy}
\end{equation}
where ${\mathbb{L}(G)}^{\times}$ is the subset of ${\mathbb{L}(G)}$ in which $\tau_{ij} (x_i,
x_j) = \tau_i(x_i ) \tau_j (x_j)$ for $(ij)\in \partial_{AB}$. Equivalently,
${\mathbb{L}(G)}^{\times}$ is the subset of ${\mathbb{L}(G)}$ in which $I_{ij} = 0$ for $(ij)\in
\partial_{AB}$. Therefore, \eqref{equ:EMenergy} can be treated as a special case of
\eqref{equ:Phitree} by taking $\rho_{ij} \to +\infty$, forcing the solution
${\boldsymbol{\tau}}^*$ to fall into ${\mathbb{L}(G)}^{\times}$. As we discussed in Section~\ref{sec:globalopt}, EM represents an extreme of the tradeoff between convexity and integrality implied by Theorem~\ref{thm:betheglobalopt}: it strongly encourages vertex solutions by sacrificing convexity, and hence is likely to become stuck in local optima.
\section{Junction Graph Belief Propagation for Marginal MAP}
\label{sec:junctiongraph}
In the above, we have restricted the discussion to pairwise models and pairwise entropy approximations, mainly for the purpose of clarity. In this section, we extend our algorithms to leverage higher order cliques, based on the junction graph representation \citep{mateescu2010join, Koller_book}.
Other higher order methods, like generalized BP \citep{Yedidia_Bethe} or their convex variants \citep{Wainwright_TRBP, Wiegerinck05}, can be derived similarly.
\renewcommand{\L}{\mathbb{L}(\mathcal{G})}
\newcommand{\pa}[1]{{\mathrm{pa}({#1})}}
\newcommand{\JDec}[0]{\mathcal{D}}
\newcommand{\JChan}[0]{\mathcal{R}}
\renewcommand{\v}[1]{\boldsymbol{#1}}
\newcommand{\bpa}[1]{{\pi(#1)}}
\newcommand{\bdelta}[0]{\mathcal{\boldsymbol{\delta}}}
\newcommand{\EU}[0]{\mathrm{EU}}
%
A cluster graph is a graph of subsets of variables (called clusters). Formally, it is a triple $(\mathcal{G}, \mathcal{C}, \mathcal{S})$, where $\mathcal{G} =
(\mathcal{V}, \mathcal{E})$ is an undirected graph, with each node $k \in \mathcal{V}$ associated with
a cluster $c_k \in \mathcal{C}$, and each edge $(kl) \in \mathcal{E}$ with a
subset $s_{kl} \in \mathcal{S}$ (called separators) satisfying $s_{kl} \subseteq c_k \cap c_l$.
We assume that $\mathcal{C}$ subsumes the index set $\mathcal{I}$; that is, for any
$\alpha \in \mathcal{I}$, we can assign it a cluster $c_k \in \mathcal{C}$, denoted $c[\alpha]$,
such that $\alpha \subseteq c_k$.
In this case, we can reparameterize $\boldsymbol{\theta}
= \{ \theta_{\alpha} \colon \alpha \in \mathcal{I}\} $ into $\boldsymbol{\theta} = \{
\theta_{c_k} \colon k \in \mathcal{V} \} $ by taking
$\displaystyle \theta_{c_k} = \!\!\!\!\!\sum_{\alpha \colon c[\alpha] = c_k} \!\!\!\!\theta_{\alpha}$,
without changing the distribution.
Therefore, we simply assume $\mathcal{C} = \mathcal{I}$ in this paper without loss of generality.
A cluster graph is called a \emph{junction graph} if it satisfies the
\emph{running intersection property} -- for each $i \in V$, the induced
sub-graph consisting of the clusters and separators that include $i$ is a
connected tree. A junction graph is a junction tree if $\mathcal{G}$ is a tree structured graph.
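The running intersection property is straightforward to check programmatically; the following sketch (with made-up clusters in the test) verifies it for a candidate cluster graph.

```python
def is_junction_graph(clusters, separators):
    """Check the running intersection property: for every variable i, the
    clusters containing i, linked by the separators containing i, must
    form a connected tree.  clusters: {k: set}; separators: {(k, l): set}."""
    if any(not s <= clusters[k] & clusters[l]
           for (k, l), s in separators.items()):
        return False
    for i in set().union(*clusters.values()):
        nodes = [k for k, c in clusters.items() if i in c]
        edges = [(k, l) for (k, l), s in separators.items() if i in s]
        if len(edges) != len(nodes) - 1:       # a tree has |V| - 1 edges
            return False
        reach, stack = {nodes[0]}, [nodes[0]]
        while stack:                           # check connectivity
            k = stack.pop()
            for a, b in edges:
                nxt = b if a == k else a if b == k else None
                if nxt is not None and nxt not in reach:
                    reach.add(nxt)
                    stack.append(nxt)
        if reach != set(nodes):
            return False
    return True
```

A variable shared by several clusters whose separators form a cycle, or a variable whose clusters are disconnected, both violate the property.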
To approximate the variational dual form, we first replace $\mathbb{M}$ with a higher order
locally consistent polytope $\L$, which is the set of local marginals that are consistent on the intersections of the clusters and separators, that is,
$$\L = \{ {\boldsymbol{\tau}} \colon \sum_{x_{c_k \setminus s_{kl}}}\tau_{c_k}(x_{c_k}) = \tau_{s_{kl}}(x_{s_{kl}}), ~ \tau_{c_k}(x_{c_k}) \geq 0, ~ \text{for $\forall ~ k\in \mathcal{V}, (kl)\in \mathcal{E}$} \}. $$
Clearly, we have $\mathbb{M} \subseteq \L$ and that $\L$ is tighter than the pairwise polytope ${\mathbb{L}(G)}$ we used previously.
We then approximate the joint entropy term by a linear combination of the entropies over the clusters and separators,
\begin{align*}
H({\boldsymbol{\tau}}) \approx \sum_{k \in \mathcal{V}}H_{c_k}(\vtau) - \sum_{(kl) \in \mathcal{E}} H_{s_{kl}}(\vtau),
\end{align*}
where $H_{c_k}(\vtau)$ and $H_{s_{kl}}(\vtau)$ are the entropy of the local marginals $\tau_{c_k}$ and $\tau_{s_{kl}}$, respectively.
Further, we approximate $H_{B}(\vtau)$ by a slightly more restrictive entropy decomposition,
$$
H_{B}(\vtau) \approx \sum_{k \in \mathcal{V}} H_{\pi_k}({\boldsymbol{\tau}}),
$$
where $\{\pi_k \colon k \in \mathcal{V} \}$ is a non-overlapping partition of the max nodes $B$ satisfying $\pi_k \subseteq c_k$ for $\forall k \in \mathcal{V}$.
In other words, $\pi$ represents an assignment of each max node $x_b \in B$ into a cluster $k$ with $x_b \in \pi_k$.
Let $\mathcal{B}$ be the set of clusters $k \in \mathcal{V}$ for which $\pi_k \neq \emptyset$,
and call $\mathcal{B}$ the \emph{max-clusters}; correspondingly, call $\mathcal{A} = \mathcal{V} \setminus \mathcal{B}$ the \emph{sum-clusters}. See \figref{fig:junction_example} for an example.
\begin{figure*}[t]
\begin{tabular}{ccc}
\qquad \includegraphics[height= .25\textwidth]{figures_jmlr/junction_mmap_example_graph.pdf} & &
\raisebox{1em}{\includegraphics[height= .2\textwidth]{figures_jmlr/junction_mmap_example_junction.pdf}} \\
{\small (a) } & & {\small (b) }
\end{tabular}
\caption{(a) An example of marginal MAP problem, where $d,c,e$ are sum nodes (shaded) and $a, b, f$ are max nodes.
(b) A junction graph of (a). Selecting a partitioning of max nodes, $\pi_{bde}=\pi_{bce}=\emptyset$, $\pi_{abc} = \{a,b\}$, and $\pi_{bef}=\{f\}$,
results in $\{bde\}, \{bce\}$ being sum clusters (shaded) and $\{abc\}, \{bef\}$ being max clusters.
}
\label{fig:junction_example}
\end{figure*}
Overall, the marginal MAP dual form in \eqref{equ:mixduality} is approximated by
\begin{align}
\max_{{\boldsymbol{\tau}} \in \L} \big\{ \langle \boldsymbol{\theta} , {\boldsymbol{\tau}} \rangle + \sum_{k \in \mathcal{A}} H_{c_k}(\vtau) + \sum_{k \in \mathcal{B}} H_{c_k | \Jmaxset_{k}}(\vtau) - \sum_{(kl)\in \mathcal{E}} H_{s_{kl}}(\vtau) \big\}
\label{equ:jgraphdualobj}
\end{align}
where $H_{c_k | \Jmaxset_{k}}(\vtau) = H_{c_k}(\vtau) - H_{\Jmaxset_{k}}(\vtau)$.
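To make the entropy terms concrete, here is a small numerical sketch (ours, not part of the original implementation; names are illustrative) of the cluster entropy $H_{c_k}$ and the conditional entropy $H_{c_k|\pi_k} = H_{c_k} - H_{\pi_k}$, computed from a cluster marginal stored as a numpy array:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a normalized marginal table."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def conditional_cluster_entropy(tau_c, max_axes):
    """H_{c|pi} = H_c - H_pi, where max_axes indexes the max nodes
    pi_k contained in the cluster c_k."""
    sum_axes = tuple(a for a in range(tau_c.ndim) if a not in max_axes)
    tau_pi = tau_c.sum(axis=sum_axes)   # marginalize out the sum nodes
    return entropy(tau_c) - entropy(tau_pi)

# toy cluster c_k = {a, b, e} with max nodes pi_k = {a, b} (axes 0, 1)
rng = np.random.default_rng(0)
tau = rng.random((2, 2, 3))
tau /= tau.sum()
H_c = entropy(tau)
H_cond = conditional_cluster_entropy(tau, max_axes=(0, 1))
```

Since conditioning cannot increase entropy, $0 \le H_{c_k|\pi_k} \le H_{c_k}$ always holds for the computed values.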
Optimizing \eqref{equ:jgraphdualobj} using a method similar to the derivation of mixed-product BP in Algorithm~\ref{alg:mix_product_msg}, we obtain a ``mixed-product" junction graph belief propagation, given in Algorithm~\ref{alg:mix_jgraph_BP}.
\begin{algorithm}[tb]
\caption{Mixed-product Junction Graph BP}
\label{alg:mix_jgraph_BP}
\begin{algorithmic}
\STATE 1. Passing messages between clusters on the junction graph:
\begin{align*}
&{\mathcal{A} \to \mathcal{A} \cup \mathcal{B}}:&& m_{k \to l} \propto \sum_{x_{c_k \setminus s_{kl}} } \psi_{c_k} m_{\sim k \setminus l} , \hspace{10em} \text{(Sum-product message)} \\% \label{equ:jiang_product1}\\
&{\mathcal{B} \to \mathcal{A} \cup \mathcal{B}}:&& m_{k \to l} \propto \sum_{x_{c_k \setminus s_{kl}} }( \psi_{c_k} m_{\sim k \setminus l} ) \cdot \boldsymbol{1}[x_{\pi_k} \in \mathcal{X}^*_{\pi_k}],
~~~~~ \text{(Argmax-product message)} \\
&&&\text{where $\mathcal{X}^*_{\pi_k} = \argmax_{x_{\pi_k}} \sum_{x_{c_k \setminus \pi_k}} b_k(x_{c_k})$, } \\
&&&\text{\ \ \ \ \ \ \ \ $b_k(x_{c_k}) = \psi_{c_k} \!\!\! \prod_{k' \in \mathcal{N}(k)} \!\!\! m_{k' \to k}$~~ and ~~$m_{\sim k \setminus l} ~ = \!\!\!\!\!\!\! \prod_{k' \in \mathcal{N}(k)\setminus \{l\}} \!\!\! \!\!\! m_{k'\to k}$.}
\end{align*}
\STATE 2. Decoding:
$\displaystyle \boldsymbol{x}^*_{\pi_k} = \argmax_{x_{\pi_k}} \sum_{x_{c_k \setminus \pi_k}} b_k(x_{c_k})$
for all $k\in \mathcal{B}$.
\end{algorithmic}
\end{algorithm}
Similarly to our mixed-product BP in Algorithm~\ref{alg:mix_product_msg}, Algorithm~\ref{alg:mix_jgraph_BP} also admits an intuitive reparameterization interpretation and a strong local optimality guarantee.
Algorithm~\ref{alg:mix_jgraph_BP} can be seen as a special case of a more general junction graph BP algorithm derived in \citet{liu12b} for solving maximum expected
utility tasks in decision networks. For more details, we refer the reader to that work.
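To illustrate the argmax-product update in Algorithm~\ref{alg:mix_jgraph_BP}, the following sketch (ours, not the authors' code; variable names are illustrative) computes $\mathcal{X}^*_{\pi_k}$ from the belief table of one max-cluster and forms the indicator-masked outgoing message, with all tables stored as numpy arrays:

```python
import numpy as np

def argmax_product_message(belief, pre_msg, max_axes, sep_axes):
    """One argmax-product update for a max-cluster k (a sketch).
    belief  = b_k = psi_{c_k} * product of all incoming messages;
    pre_msg = psi_{c_k} * product of incoming messages except from l.
    Both are arrays indexed by the cluster variables;
    max_axes = axes of pi_k, sep_axes = axes of the separator s_kl."""
    other = tuple(a for a in range(belief.ndim) if a not in max_axes)
    marg = belief.sum(axis=other)            # sum over x_{c_k \ pi_k}
    mask_pi = (marg == marg.max())           # indicator of X*_{pi_k}
    shape = [belief.shape[a] if a in max_axes else 1
             for a in range(belief.ndim)]
    masked = pre_msg * mask_pi.reshape(shape)
    drop = tuple(a for a in range(belief.ndim) if a not in sep_axes)
    msg = masked.sum(axis=drop)              # sum over x_{c_k \ s_kl}
    return msg / msg.sum()

rng = np.random.default_rng(0)
b = rng.random((2, 2, 3))                    # cluster {a, b, e}, pi_k = {a, b}
m = argmax_product_message(b, b, max_axes=(0, 1), sep_axes=(1, 2))
```

The sum-product message of the $\mathcal{A}$-clusters is recovered by skipping the masking step.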
\begin{comment}
\section{Extension to Multistage Stochastic Optimization}
\label{sec:multistage}
\newcommand{{\mathrm{pa}_k}}{{\mathrm{pa}_k}}
Our method can be easily generalized to more complicated multi-stage stochastic optimization problems,
in which maximization and summation (or expectation) operators interleave, e.g.,
\begin{align}
\Phi_{mstg}(\theta) &= \log \max_{x_{B_q}} \sum_{x_{A_{q}}} \cdots \max_{x_{B_1}} \sum_{x_{A_{1}}} \exp(\theta(\boldsymbol{x})), \label{equ:multistg_primalPhi}
\end{align}
where $A_k, B_k , k=1,\ldots, q$, are disjoint subsets of $V$ that form a partition of $V$, that is, $\cup_{i=1}^{q} (A_i \cup B_i) = V$. This problem also corresponds to the structured decision making problems under uncertainty in influence diagrams, where summation and maximization correspond to averaging random nodes and optimizing decision nodes, respectively.
The duality result in Theorem~\ref{thm:duality} can be naturally extended to \eqref{equ:multistg_primalPhi}.
\begin{thm}
\label{thm:multidual}
The $\Phi_{mstg}(\boldsymbol{\theta})$ in \eqref{equ:multistg_primalPhi} has a dual representation,
\begin{equation}
\Phi_{mstg}(\boldsymbol{\theta}) = \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + \sum_{k=1}^q H_{A_k|B_k}(\vtau) \}.
\label{equ:multistg_mixduality}
\end{equation}
\end{thm}
\begin{proof}
The proof is similar to that of Theorem~\ref{thm:duality}.
\end{proof}
Intuitively, the max vs.\ sum operators in the primal forms determine the appearance of the conditional entropy terms in the dual forms. See Table~\ref{tab:threetasks} for a summary of all the variational duality results in this paper.
Similar variational algorithms, in particular BP-type algorithms, can be derived based on the above result. See \citet{Liu12b} for a more thorough discussion.
\begin{table}[tb] \centering
\setlength{\extrarowheight}{5pt}
\begin{tabular}{ | l | l | l |}
\hline
Problem Type & Primal Form & Dual Form \\ \hline
Max-Inference & $\displaystyle \log \max_{\boldsymbol{x}} \exp(\theta(\boldsymbol{x}))$ & $\displaystyle \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle \}$ \\ \hline
Sum-Inference & $\displaystyle \log \sum_{\boldsymbol{x}} \exp(\theta(\boldsymbol{x}))$ & $ \displaystyle \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H(\vtau) \}$ \\ \hline
\textcolor{blue}{Mixed-Inference} & \textcolor{blue}{$\displaystyle \log \max_{\boldsymbol{x}_B}\sum_{\boldsymbol{x}_A} \exp(\theta(\boldsymbol{x}))$} & \textcolor{blue}{$\displaystyle \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + H_{A|B}(\vtau) \} $} \\\hline
\begin{tabular}[x]{@{}l@{}}\textcolor{blue}{Multi-stage}\\\textcolor{blue}{Mixed-Inference}\end{tabular}
& \textcolor{blue}{$\displaystyle \log \max_{x_{B_q}} \sum_{x_{A_{q}}} \cdots \max_{x_{B_1}} \sum_{x_{A_{1}}} \exp(\theta(\boldsymbol{x})) $}
& \textcolor{blue}{$\displaystyle \max_{{\boldsymbol{\tau}} \in \mathbb{M}} \{ \langle \boldsymbol{\theta}, {\boldsymbol{\tau}} \rangle + \sum_{k=1}^q H_{A_k|B_k}(\vtau) \} $} \\\hline
\end{tabular}
\caption{The primal and dual forms of the three inference types. The dual forms of sum-inference and max-inference are well known, but that of (multi-stage) mixed-inference is the contribution of this work. Intuitively, the max vs.\ sum operators in the primal forms determine the appearance of the conditional entropy terms in the dual forms. }
\label{tab:threetasks}
\end{table}
\end{comment}
\section{Experiments}
\label{sec:experiments}
We illustrate our algorithms on both simulated models and more realistic diagnostic Bayesian networks taken from the UAI08 inference challenge.
We show that our Bethe approximation algorithms perform best among all the tested algorithms, including \citet{Jiang10}'s hybrid message passing and a state-of-the-art local search algorithm \citep{Park04}.
We implement our mixed-product BP in Algorithm~\ref{alg:mix_product_msg} with Bethe weights ({\tt mix-product (Bethe)}), the regular sum-product BP ({\tt sum-product}), max-product BP ({\tt max-product}) and \citet{Jiang10}'s hybrid message passing (with Bethe weights) in Algorithm~\ref{alg:jiang_product} ({\tt Jiang's method}), where the solutions are all extracted by maximizing the singleton marginals of the max nodes. For all these algorithms, we run a maximum of 50 iterations; in case they fail to converge, we run 100 additional iterations with a damping coefficient of $0.1$. We initialize all these algorithms with 5 random initializations and pick the best solution; for {\tt mix-product (Bethe)} and {\tt Jiang's method}, we run an additional trial initialized using the sum-product messages, which was reported to perform well in \citet{Park04} and \citet{Jiang10}. We also run the proximal point version of mixed-product BP with Bethe weights ({\tt Proximal (Bethe) }), which is Algorithm~\ref{alg:proximal_point} with both $H_{A|B}(\vtau)$ and $H_{B}(\vtau)$ approximated by Bethe approximations.
We also implement the TRW approximation, but only using the convergent proximal point algorithm, because the TRW upper bounds are valid only when the algorithms converge.
The TRW weights of $\hat{H}_{A|B}$ are constructed by first (randomly) selecting spanning trees of $G_A$, and then augmenting each spanning tree with one uniformly selected edge in $\partial_{AB}$;
the TRW weights of $\hat{H}_B({\boldsymbol{\tau}})$ are constructed to be provably convex, using the method of TRW-S in \citet{trws}. We run all the proximal point algorithms for a maximum of 100 iterations, with a maximum of 5 iterations of weighted message passing updates \eqref{equ:weightedmsg}-\eqref{equ:weighted_marginals} for the inner loops (plus 5 additional iterations with a damping coefficient of $0.1$).
In addition, we compare our algorithms with SamIam, which is a state-of-the-art implementation of the local search algorithm for marginal MAP \citep{Park04}; we use its default Taboo search method with a maximum of 500 search steps, and report the best results among 5 trials with random initializations, and one additional trial initialized by its default method (which sequentially initializes $x_i$ by maximizing $p(x_{i} | x_{\mathrm{pa}_i})$ along some predefined order).
We also implement an EM algorithm, whose expectation and maximization steps are approximated by sum-product and max-product BP, respectively. We run EM with 5 random initializations and one initialization by sum-product marginals, and pick the best solution.
\textbf{Simulated Models.}
We consider pairwise models over discrete random variables taking values in $\{-1,0, +1\}^n$,
\begin{equation*}
p(\boldsymbol{x}) \propto \exp\big[\sum_i \theta_{i}(x_i) + \sum_{(ij)\in E} \theta_{ij}(x_{i}, x_{j})\big].
\end{equation*}
The value tables of $\theta_i$ and $\theta_{ij}$ are randomly generated from a normal distribution, $\theta_{i}(k) \sim \mathrm{Normal}(0, 0.01)$, $\theta_{ij}(k,l) \sim \mathrm{Normal}(0, \sigma^2)$, where $\sigma$ controls the strength of coupling. Our results are averaged over 1000 randomly generated sets of parameters.
We consider different choices of graph structures and max / sum node patterns:
\begin{enumerate}
\item \emph{Hidden Markov chain} with 20 nodes, as shown in \figref{fig:hiddenchain}.
\item \emph{Latent tree models}. We generate random trees of size 50, by finding the minimum spanning trees of random symmetric matrices with elements drawn from $\mathrm{Uniform}([0,1])$. We take the leaf nodes to be max nodes, and the non-leaf nodes to be sum nodes. See \figref{fig:rand_tree_result}(a) for a typical example.
\item \emph{$10\times10$ Grid} with max and sum nodes distributed in two opposite chess board patterns shown in \figref{fig:chessboard_result}(a) and \figref{fig:chessboard_rev_result}(a), respectively. In \figref{fig:chessboard_result}(a), the sum part is a loopy graph, and the max part is a (fully disconnected) tree; in \figref{fig:chessboard_rev_result}(a), the max and sum parts are flipped.
\end{enumerate}
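For reference, a minimal sketch (ours) of how such a random pairwise model can be sampled; note the paper specifies variances ($0.01$ and $\sigma^2$), so the standard deviations passed to the sampler are $0.1$ and $\sigma$:

```python
import numpy as np

def random_pairwise_model(n_nodes, edges, sigma, n_states=3, seed=0):
    """Sample theta_i ~ Normal(0, 0.01) and theta_ij ~ Normal(0, sigma^2);
    numpy's normal() takes a standard deviation, hence 0.1 and sigma."""
    rng = np.random.default_rng(seed)
    theta_i = {i: rng.normal(0.0, 0.1, size=n_states)
               for i in range(n_nodes)}
    theta_ij = {e: rng.normal(0.0, sigma, size=(n_states, n_states))
                for e in edges}
    return theta_i, theta_ij

# hidden Markov chain with 20 nodes: edges along the chain
chain_edges = [(i, i + 1) for i in range(19)]
theta_i, theta_ij = random_pairwise_model(20, chain_edges, sigma=0.5)
```

The latent tree and grid models differ only in the edge set passed in.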
The results on the hidden Markov chain are shown in \figref{fig:hiddenchain_result}, where we plot in panel (a) different algorithms' percentages of obtaining the globally optimal solutions among 1000 random trials,
and in panel (b) their relative energy errors defined by $Q(\hat{\boldsymbol{x}}_B; \boldsymbol{\theta}) - Q(\boldsymbol{x}_B^*; \boldsymbol{\theta})$, where $\hat{\boldsymbol{x}}_B$ is the solution returned by the algorithms, and $\boldsymbol{x}_B^*$ is the true optimum.
The results of the latent tree models and the two types of 2D grids are shown in \figref{fig:rand_tree_result}, \figref{fig:chessboard_result} and \figref{fig:chessboard_rev_result}, respectively. Since the globally optimal solution $\boldsymbol{x}_B^*$ is not tractable to calculate in these cases, we report the approximate relative error defined by $Q(\hat{\boldsymbol{x}}_B; \boldsymbol{\theta}) - Q(\tilde{\boldsymbol{x}}_B; \boldsymbol{\theta})$, where $\tilde{\boldsymbol{x}}_B$ is the best solution we found across all algorithms.
\textbf{Diagnostic Bayesian Networks.}
We also test our algorithms on two diagnostic Bayesian networks taken from the UAI08 Inference Challenge, where we construct marginal MAP problems by randomly selecting varying percentages of nodes to be max nodes. Since these models are not pairwise, we implement the junction graph versions of {\tt mix-product (Bethe)} and {\tt proximal (Bethe)} shown in Section~\ref{sec:junctiongraph}. \figref{fig:uai_result} shows the approximate relative errors of our algorithms and {\tt local search (SamIam)} as the percentage of the max nodes varies.
\textbf{Insights.}
Across all the experiments, we find that {\tt mix-product (Bethe)}, {\tt proximal (Bethe)} and {\tt local search (SamIam)} significantly outperform all the other algorithms, while {\tt proximal (Bethe)} outperforms the two others in some circumstances. In the hidden Markov chain example in \figref{fig:hiddenchain_result}, these three algorithms almost always (with probability $\geq 99 \%$) find the globally optimal solutions. However, the performance of SamIam tends to degenerate when the max part has loopy dependency structures (see \figref{fig:chessboard_rev_result}), or when the number of max nodes is large (see \figref{fig:uai_result}), both of which make it difficult to explore the solution space by local search. On the other hand, {\tt mix-product (Bethe)} tends to degenerate as the coupling strength $\sigma$ increases (see \figref{fig:chessboard_rev_result}), probably because its convergence gets worse as $\sigma$ increases.
We note that our TRW approximation gives much less accurate solutions than the other algorithms, but is able to provide an upper bound on the optimal energy. Similar phenomena have been observed for TRW-BP in standard max- and sum- inference.
The hybrid message passing of \citet{Jiang10} is significantly worse than {\tt mix-product (Bethe)}, {\tt proximal (Bethe)} and {\tt local search (SamIam)}, but is otherwise the best among the remaining algorithms. EM performs similarly to (or sometimes worse than) Jiang's method.
The regular max-product BP and sum-product BP are among the worst of the tested algorithms, indicating the danger of approximating mixed-inference by pure max- or sum- inference.
Interestingly, the performances of max-product BP and sum-product BP have opposite trends: In \figref{fig:hiddenchain_result}, \figref{fig:rand_tree_result} and \figref{fig:chessboard_result}, where the max parts are fully disconnected and the sum parts are connected and loopy, max-product BP usually performs worse than sum-product BP, but gets better as the coupling strength $\sigma$ increases; sum-product BP, on the other hand, tends to degenerate as $\sigma$ increases. In \figref{fig:chessboard_rev_result}, where the max / sum pattern is reversed
(resulting in a larger, loopier max subgraph), max-product BP performs better than sum-product BP.
\begin{figure*}[t]
\begin{tabular}{cc}
\!\!\!\!\!\!
\scalebox{0.95}{\includegraphics[width= .32\textwidth]{figures_jmlr/hmm_state3_assym_percentage.pdf}}
\hspace{2.8em}\scalebox{0.95}{\includegraphics[width= .32\textwidth]{figures_jmlr/hmm_state3_assym_withbound.pdf} \qquad
\hspace{-2.6em}\raisebox{.2em}{\begin{tikzpicture}
\shade[left color=gray!50!white,right color=gray!50!white] (0,0) -- (0,.8) -- (.6,2.8) -- (.6,-.4) -- cycle;
\draw[gray!50!white] (.6,-.4) -- (4.6,-.4) -- (4.6,2.8) -- (.6,2.8) -- cycle;
\end{tikzpicture}}
\hspace{-10.5em}\raisebox{.5em}{\includegraphics[width= .25\textwidth]{figures_jmlr/hmm_state3_assym_nobound.pdf}}}
\\
{\small (a) } & {\small (b)}
\begin{picture}(0,0)
\put(-160,70){\includegraphics[width= .16\textwidth]{figures_jmlr/hmm_state3_assym_LEGEND_withbound.pdf}}
\end{picture}
\end{tabular}
\caption{Results on the hidden Markov chain in \figref{fig:hiddenchain} (best viewed in color). (a)
different algorithms' probabilities of obtaining the globally optimal solution among 1000 random trials. {\tt Mix-product (Bethe)}, {\tt Proximal (Bethe)} and {\tt Local Search (SamIam)} almost always (with probability $\geq 99\%$) find the optimal solution.
(b) The relative energy errors of the different algorithms, and the upper bounds obtained by {\tt Proximal (TRW)}.
}
\label{fig:hiddenchain_result}
\end{figure*}
\begin{figure*}[t]
\begin{tabular}{cc}
\!\!\!\!\!\!
\raisebox{0em}{\scalebox{0.9}{\includegraphics[width= .29\textwidth]{figures_jmlr/rand_tree_50_dot_plot}}}
\hspace{4em}\scalebox{0.95}{\includegraphics[width= .32\textwidth]{figures_jmlr/rand_tree_state3_assym_withbound.pdf} \qquad
\hspace{-2.3em}\raisebox{.2em}{\begin{tikzpicture}
\shade[left color=gray!50!white,right color=gray!50!white] (0,0) -- (0,.8) -- (.6,2.8) -- (.6,-.4) -- cycle;
\draw[gray!50!white] (.6,-.4) -- (4.6,-.4) -- (4.6,2.8) -- (.6,2.8) -- cycle;
\end{tikzpicture}}
\hspace{-10.5em}\raisebox{.5em}{\includegraphics[width= .25\textwidth]{figures_jmlr/rand_tree_state3_assym_withoutbound.pdf}}} \\%mixhiddenchain_err_2.pdf} &
{\small (a) } & {\small (b)}
\begin{picture}(0,0)
\put(-182,20){\includegraphics[width= .16\textwidth]{figures_jmlr/hmm_state3_assym_LEGEND_withbound.pdf}}
\end{picture}
\end{tabular}
\caption{(a) A typical latent tree model, whose leaf nodes are taken to be max nodes (white) and non-leaf nodes to be sum nodes (shaded).
(b) The approximate relative energy errors of different algorithms, and the upper bound
obtained by {\tt Proximal (TRW)}.
}
\label{fig:rand_tree_result}
\end{figure*}
\begin{figure*}[tbh]
\begin{tabular}{cc}
\raisebox{5.5em}{\begin{tabular}{c}
\raisebox{0em}{\scalebox{1}{\includegraphics[width= .15\textwidth]{figures_jmlr/chessboard_pattern.pdf}}} \\
{\includegraphics[width= .15\textwidth]{figures_jmlr/hmm_state3_assym_LEGEND_withbound.pdf}}
\end{tabular}
}
\hspace{0em}\scalebox{1.2}{\includegraphics[width= .32\textwidth]{figures_jmlr/chessboard_state3_assym_withbound_V2} \qquad
\hspace{-2.3em}\raisebox{.2em}{\begin{tikzpicture}
\shade[left color=gray!50!white,right color=gray!50!white] (0, 0) -- (0,.5) -- (.6,2.8) -- (.6,-.4) -- cycle;
\draw[gray!50!white] (.6,-.4) -- (4.6,-.4) -- (4.6,2.8) -- (.6,2.8) -- cycle;
\end{tikzpicture}}
\hspace{-10.5em}\raisebox{.8em}{\includegraphics[width= .25\textwidth]{figures_jmlr/chessboard_state3_assym_withoutbound_V2}}} \\%mixhiddenchain_err_2.pdf} &
{\small (a) } & {\small (b)}
\end{tabular}
\caption{(a) A marginal MAP problem defined on a $10\times10$ Ising grid, with shaded sum nodes
and unshaded max nodes; note that the sum part is a loopy graph, while max part is fully disconnected. (b) The approximate relative errors of different algorithms as a function of coupling strength $\sigma$.}
\label{fig:chessboard_result}
\end{figure*}
\begin{figure*}[tbh]
\begin{tabular}{cc}
\raisebox{5.5em}{\begin{tabular}{c}
\raisebox{0em}{\scalebox{1}{\includegraphics[width= .15\textwidth]{figures_jmlr/chessboard_rev_pattern}}} \\
{\includegraphics[width= .15\textwidth]{figures_jmlr/hmm_state3_assym_LEGEND_withbound.pdf}}
\end{tabular}
}
\hspace{0em}\scalebox{1.2}{\includegraphics[width= .32\textwidth]{figures_jmlr/chessboard_reV_state3_assym_withbound_V2} \qquad
\hspace{-2.3em}\raisebox{.2em}{\begin{tikzpicture}
\shade[left color=gray!50!white,right color=gray!50!white] (0, 1.3) -- (0,1.65) -- (.6,2.8) -- (.6,-.4) -- cycle;
\draw[gray!50!white] (.6,-.4) -- (4.6,-.4) -- (4.6,2.8) -- (.6,2.8) -- cycle;
\end{tikzpicture}}
\hspace{-10.5em}\raisebox{.8em}{\includegraphics[width= .25\textwidth]{figures_jmlr/chessboard_reV_state3_assym_withoutbound_V2_tmptmp}}} \\%mixhiddenchain_err_2.pdf} &
{\small (a) } & {\small (b)}
\end{tabular}
\caption{(a) A marginal MAP problem defined on a $10\times10$ Ising grid, but with max / sum part exactly opposite to that in \figref{fig:chessboard_result}; note that the max part is loopy, while the sum part is fully disconnected in this case. (b) The approximate relative errors
of different algorithms as a function of coupling strength.}
\label{fig:chessboard_rev_result}
\end{figure*}
\begin{figure*}[t]
\begin{tabular}{c}
\hspace{-1em} \raisebox{0em}{\scalebox{1}{\includegraphics[width= 1\textwidth]{figures_jmlr/uaidiagnoseBN_figure_V1.pdf}}} \\
{\small (a) The structure of Diagnostic BN-2, with 50\% randomly selected sum nodes shaded. }
\end{tabular}
\begin{tabular}{cc}
\raisebox{0em}{\scalebox{1}{\includegraphics[width= .35\textwidth]{figures_jmlr/uai_model1_marginalMAP.pdf}}} &
\raisebox{0em}{\scalebox{1}{\includegraphics[width= .35\textwidth]{figures_jmlr/uai_model2_marginalMAP.pdf}}} \\
{\small (b) Diagnostic BN-1 } & {\small (c) Diagnostic BN-2}
\end{tabular}
\begin{picture}(0,0)
\put(0,20){\includegraphics[width= .2\textwidth]{figures_jmlr/uai_model1_marginalMAP_LEGEND.pdf}}
\end{picture}
\caption{The results on two diagnostic Bayesian networks (BNs) in the UAI08 inference challenge. (a) The Diagnostic BN-2 network. (b)-(c) The performances of algorithms on the two BNs as a function of the percentage of max nodes. Results are averaged over 100 random trials.}
\label{fig:uai_result}
\end{figure*}
\section{Conclusion and Further Directions}
\label{sec:conclusion}
We have presented a general variational framework for solving marginal MAP
problems approximately, opening new doors for developing efficient algorithms.
In particular, we show that our proposed ``mixed-product" BP admits appealing theoretical properties and performs well in practice.
Potential future directions include improving the performance of the truncated TRW approximation
by optimizing weights, deriving optimality conditions that may be applicable
even when the sum component does not form a tree, studying the convergent properties of mixed-product BP, and
leveraging our results to learn hidden variable models for data.
\subsection*{Acknowledgments}
We thank Arthur Choi for providing help on SamIam.
This work was supported in part by
NSF grant IIS-1065618 and a Microsoft Research Ph.D Fellowship.
\section{Introduction}
In 1969 \cite{strassen1969gaussian} Strassen presented his celebrated algorithm for matrix multiplication breaking for the first time the naive
complexity bound of $n^3$ for $n\times n$
matrices. Since then, the complexity of the optimal matrix multiplication algorithm is one of the central problems in computer science.
In terms of algebra we know that this question is equivalent to estimating the rank or border rank of a specific tensor $M_{n,n,n}\in\mathbb{C}^{n^2}\otimes\mathbb{C}^{n^2}\otimes\mathbb{C}^{n^2}$
\cite{JM1, landsberg_2017, BurgisserBook}.
The current best lower and upper bounds are presented in \cite{JaJMSIAGA, JaJMIMRN, JMOttaviani, Virgi, LeGall}.
We recall
that the constant $\omega$ is defined as the smallest number such that for any $\epsilon >0$ the multiplication of $n\times n$ matrices can be performed in time
$O(n^{\omega+\epsilon})$.
Further, recall that the Waring rank of a homogeneous polynomial $P$ of degree $d$ is the smallest number $r$ of linear forms $l_1,\dots,l_r$ such that
$P=\sum_{i=1}^r l_i^d$.
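For instance, the monomial $xyz$ admits an explicit decomposition into four cubes; this can be verified symbolically (a quick sanity check of ours, not part of the text):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# a classical rank-4 Waring decomposition of the monomial xyz
P = ((x + y + z)**3 - (x + y - z)**3
     - (x - y + z)**3 + (x - y - z)**3) / 24
assert sp.expand(P - x*y*z) == 0
```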
Recently, Chiantini et al. \cite{chiantini2017polynomials} provided another equivalent interpretation of $\omega$ in terms of Waring (border) rank. Namely, let $SM_n$ be a cubic in
$S^3(\mathfrak{sl}_n^*)$ given by
$SM_n(A)=\tr(A^3)$. Then $\omega$ is the smallest number such that for any $\epsilon>0$ the Waring rank (or Waring border rank) of
$SM_n$ is $O(n^{\omega+\epsilon})$.
This observation was the initial motivation for our study of the plethysm $S^3(\mathfrak{sl}_n)$.
The computation of plethysms is in general very hard and
explicit formulas are known only in specific cases \cite{macdonald1998symmetric}. For example, for the symmetric power $S^3(S^k)$ the decomposition was classically computed already in
\cite{Thrall,Plunkett}, but $S^4(S^k)$ and $S^5(S^k)$ were only recently explicitly obtained in \cite{JaThomas}. As symmetric powers (together with exterior powers) are the simplest Schur functors,
one could expect that respective
formulas for $S^d(\mathfrak{sl}_n)$ are harder. In principle, one could use the methods of \cite{Howe, JaThomas, JaManivel} to decompose this plethysm, but this requires
a lot of nontrivial character manipulations. Instead, we present a very easy proof of explicit decomposition based on Cauchy formula and Littlewood-Richardson rule in
Theorem \ref{thm:plet}. In fact, using our method one can inductively obtain the formula for $S^k(\mathfrak{sl}_n)$ for any $k$.
While matrix multiplication is represented by the (unique) invariant in $S^3(\mathfrak{sl}_n)$, the aim of this article is to understand the other highest-weight
vectors. A precise description of them is presented in Section \ref{sec:hwv}. We plan to undertake a detailed study of ranks and border ranks of other highest-weight vectors in
future work. Here we present just the first two nontrivial instances. It turns out that two of the highest-weight vectors are (isomorphic to) the (four- and five-dimensional) variants of the
Coppersmith-Winograd tensor \cite{CW}. We recall that the best upper bounds for rank and border rank are based on a beautiful technique by Coppersmith and Winograd applied to
a specific tensor $T$ \cite{Virgi}. While $T$ is extremely efficient for this technique, it is not at all clear which properties of $T$ make it so useful and
how to identify potentially better tensors. In fact, there are whole programs, see e.g.~\cite{cohn2003group}, aimed at finding tensors similar to, but better than Coppersmith-Winograd. We hope that
other highest-weight vectors will also reveal their importance.
\subsection*{Acknowledgement} The author would like to thank his advisor, Mateusz Micha\l{}ek, for the many helpful comments and discussions.
\section{The plethysm}
In this section we describe a general procedure to decompose $S^k(\mathfrak{gl}_n)$ and $S^k(\mathfrak{sl}_n)$ into irreducibles.
Recall that the irreducible representations of $SL_n$ are precisely the representations $\mathbb{S}_{\lambda}(\mathbb{C}^n)$, where $\lambda=[\lambda_1,\ldots,\lambda_{n-1}]$ is a partition of length at most $n-1$, and $\mathbb{S}_{\lambda}$ is the Schur functor associated to the partition $\lambda$ (consult for example \cite{FH13}).
\begin{theorem}\label{thm:plet}
For $n \in \mathbb{N}$, it holds that
\[
S^k(\mathfrak{gl}_n) \cong \bigoplus_{\lambda \vdash k}\bigoplus_{\nu}N_{\lambda \overline{\lambda}}^{\nu}\mathbb{S}_{\nu}(\mathbb{C}^n) \numberthis \label{plethysm}
\]
as $SL_n$-representations.
Here the second summation is over all partitions $\nu$ of length at most $n-1$, $N_{\lambda \mu}^{\nu}$ are the Littlewood-Richardson coefficients, and $\overline{\lambda}=[\lambda_1,\lambda_1-\lambda_{n-1},\ldots,\lambda_1-\lambda_2]$.
\end{theorem}
\begin{proof}
Note that $\mathfrak{gl}_n \cong (\mathbb{C}^n) \otimes (\mathbb{C}^n)^*$ as $SL_n$-representations. So
\begin{align*}
S^k(\mathfrak{gl}_n) \cong& S^k\big((\mathbb{C}^n) \otimes (\mathbb{C}^n)^*\big)
\cong \bigoplus_{\lambda \vdash k}\mathbb{S}_{\lambda}(\mathbb{C}^n) \otimes \mathbb{S}_{\lambda}(\mathbb{C}^n)^*\\
\cong& \bigoplus_{\lambda \vdash k}\mathbb{S}_{\lambda}(\mathbb{C}^n) \otimes \mathbb{S}_{\overline{\lambda}}(\mathbb{C}^n)
\cong \bigoplus_{\lambda \vdash k}\bigoplus_{\nu}N_{\lambda \overline{\lambda}}^{\nu}\mathbb{S}_{\nu}(\mathbb{C}^n) \text{.}
\end{align*}
The second isomorphism holds by Cauchy's formula; for the third one see for example \cite[15.50]{FH13}; the fourth isomorphism is the Littlewood-Richardson rule.
\end{proof}
To compute the decomposition of $S^k(\mathfrak{sl}_n)$, we simply note that
\begin{align*}
S^k(\mathfrak{gl}_n) \cong& S^k(\mathfrak{sl}_n\oplus \mathbb{C}) \cong \mathbb{C} \oplus \bigoplus_{i=1}^k{S^i(\mathfrak{sl}_n)} \text{.}
\end{align*}
This allows us to compute the decomposition of $S^k(\mathfrak{sl}_n)$ inductively.\\
As a corollary we present an explicit decomposition in the case $k=3$. Computing the Littlewood-Richardson coefficients in \eqref{plethysm} gives us the decomposition of $S^3(\mathfrak{gl}_n)$ (resp.\ $S^3(\mathfrak{sl}_n)$) into irreducibles. We present these in Table \ref{tablePlethysm}: the first column lists the highest weights $\lambda$ of the occurring irreducible representations $\mathbb{S}_{\lambda}(\mathbb{C}^n)$. To be more precise: the first column actually shows the highest weights when we view $S^3(\mathfrak{gl}_n)$ (resp.\ $S^3(\mathfrak{sl}_n)$) as a $GL_n$-representation. (Recall that weights of $GL_n$ are $n$-tuples $[\lambda_1,\ldots,\lambda_n]\in \mathbb{Z}^n$ with $\lambda_1 \geq \ldots \geq \lambda_n$. The corresponding $SL_n$-weight is then $[\lambda_1-\lambda_n,\ldots,\lambda_{n-1}-\lambda_n]$.)
The second and third column list the multiplicities of the irreducibles in $S^3(\mathfrak{gl}_n)$ resp.\ $S^3(\mathfrak{sl}_n)$. We also list the dimensions of the occurring irreducible representations $\mathbb{S}_{\lambda}(\mathbb{C}^n)$, as well as the dimensions of the projective homogeneous varieties contained in $\mathbb{P}(\mathbb{S}_{\lambda}(\mathbb{C}^n))$ (see Subsection \ref{subsec:homog}).
\begin{table}[h]
\centering
\caption{Irreducible components of $S^3(\mathfrak{gl}_n)$ and $S^3(\mathfrak{sl}_n)$}
\label{tablePlethysm}
\begin{tabular}{|l|l|l|l|l|}
\hline
Highest weight & $S^3(\mathfrak{gl}_n)$ & $S^3(\mathfrak{sl}_n)$ & Dimension & Variety \\ \hline
$[0,\ldots,0]$ & $3$ & $1$ & $1$ & $0$ \\ \hline
$[1,0,\ldots,0,-1]$ & $4$ & $2$ & $n^2-1$ & $2n-3$ \\ \hline
$[2,0,\ldots,0,-2]$ & $2$ & $1$ & $\frac{(n-1)n^2(n+3)}{4}$ & $2n-3$ \\ \hline
$[3,0,\ldots,0,-3]$ & $1$ & $1$ & $\frac{(n-1)n^2(n+1)^2(n+5)}{36}$ & $2n-3$ \\ \hline
$[1,1,0,\ldots,0,-1,-1]$ & $2$ & $1$ & $\frac{(n-3)n^2(n+1)}{4}$ & $4n-12$ \\ \hline
$[2,0,\ldots,0,-1,-1]$ & $1$ & $1$ & $\frac{(n-2)(n-1)(n+1)(n+2)}{4}$ & $3n-7$ \\ \hline
$[1,1,0,\ldots,0,-2]$ & $1$ & $1$ & $\frac{(n-2)(n-1)(n+1)(n+2)}{4}$ & $3n-7$ \\ \hline
$[2,1,0,\ldots,0,-1,-2]$ & $1$ & $1$ & $\frac{(n-3)(n-1)^2(n+1)^2(n+3)}{9}$ & $4n-10$ \\ \hline
$[1,1,1,0,\ldots,0,-1,-1,-1]$ & $1$ & $1$ & $\frac{(n-5)(n-1)^2n^2(n+1)}{36}$ & $6n-27$ \\ \hline
\end{tabular}
\end{table}
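As a sanity check on Table~\ref{tablePlethysm}, the multiplicities and dimensions must add up to $\dim S^3(\mathfrak{gl}_n)=\binom{n^2+2}{3}$ and $\dim S^3(\mathfrak{sl}_n)=\binom{n^2+1}{3}$; this can be verified symbolically (our check, not from the text):

```python
from sympy import symbols, simplify

n = symbols('n')
# (mult. in S^3(gl_n), mult. in S^3(sl_n), dim of irreducible), per table row
rows = [
    (3, 1, 1),
    (4, 2, n**2 - 1),
    (2, 1, (n - 1)*n**2*(n + 3)/4),
    (1, 1, (n - 1)*n**2*(n + 1)**2*(n + 5)/36),
    (2, 1, (n - 3)*n**2*(n + 1)/4),
    (1, 1, (n - 2)*(n - 1)*(n + 1)*(n + 2)/4),
    (1, 1, (n - 2)*(n - 1)*(n + 1)*(n + 2)/4),
    (1, 1, (n - 3)*(n - 1)**2*(n + 1)**2*(n + 3)/9),
    (1, 1, (n - 5)*(n - 1)**2*n**2*(n + 1)/36),
]
dim_gl = n**2*(n**2 + 1)*(n**2 + 2)/6      # binom(n^2+2, 3)
dim_sl = (n**2 - 1)*n**2*(n**2 + 1)/6      # binom(n^2+1, 3)
assert simplify(sum(m*d for m, _, d in rows) - dim_gl) == 0
assert simplify(sum(m*d for _, m, d in rows) - dim_sl) == 0
```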
\subsection{Homogeneous varieties}\label{subsec:homog}
Let $V$ be an irreducible representation of a semisimple Lie group G. Then $\mathbb{P}V$ has a unique closed $G$-orbit $X$, which is the orbit of the highest-weight vector in $\mathbb{P}V$ under the action of $G$. The projective variety $X$ is isomorphic to $G/P$, where $P$ is a parabolic subgroup. We call these varieties homogeneous varieties or partial flag varieties. \\
In our case $G=SL_n$, we can compute the dimension of $X$ in the following way:
Consider the Dynkin diagram of $\mathfrak{sl}_n$, which consists of $n-1$ dots marked $1$ to $n-1$, and the Young diagram $\lambda$ associated to the representation $V$. For every $j \in \{1,\ldots,n-1 \}$, if the Young diagram has at least one column of length $j$, we remove the dot $j$ from the Dynkin diagram. After removing these dots the Dynkin diagram splits in connected components of size $k_i$. The dimension of our variety $X$ is then given by
\[
\frac{1}{2}\left(n^2-n-\sum_{i}{(k_i^2+k_i)}\right) \text{.}
\]
This gives us the last column of Table \ref{tablePlethysm}.
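The rule above is easy to mechanize; the following sketch (our code, illustrative names) computes the orbit dimension from a $GL_n$-weight via the marked Dynkin diagram and reproduces two rows of the last column of Table~\ref{tablePlethysm}:

```python
def flag_variety_dim(weight):
    """Dimension of the closed SL_n-orbit for a GL_n-weight
    [lam_1 >= ... >= lam_n], via the marked Dynkin diagram rule."""
    n = len(weight)
    lam = [w - weight[-1] for w in weight]   # SL_n partition (lam[n-1] = 0)
    # dot j is removed iff the Young diagram has a column of length j,
    # i.e. iff lam_j > lam_{j+1}
    removed = {j for j in range(1, n) if lam[j - 1] > lam[j]}
    comps, size = [], 0                      # remaining component sizes
    for j in range(1, n):
        if j in removed:
            if size:
                comps.append(size)
            size = 0
        else:
            size += 1
    if size:
        comps.append(size)
    return (n * n - n - sum(k * k + k for k in comps)) // 2

n = 10
assert flag_variety_dim([1] + [0] * (n - 2) + [-1]) == 2 * n - 3
assert flag_variety_dim([1, 1] + [0] * (n - 4) + [-1, -1]) == 4 * n - 12
```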
\section{Highest weight vectors}\label{sec:hwv}
We now describe highest-weight vectors for all irreducible components of $S^3(\mathfrak{gl}_n)$. We write $E_{i,j} \in \mathfrak{gl}_n$ for the $n \times n$ matrix with as only nonzero entry a $1$ on position $(i,j)$. Note that the vector $E_{i,j}E_{i',j'}E_{i'',j''} \in S^3(\mathfrak{gl}_n)$ has weight $e_i+e_{i'}+e_{i''}-e_j-e_{j'}-e_{j''}$, where $e_i$ is the weight $[0,\ldots,1,\ldots,0]$ with a $1$ on the $i$-th position. Furthermore, to check that a weight vector $v$ in some representation $V$ of $SL_n$ is a highest-weight vector, it suffices to view $V$ as a representation of the Lie algebra $\mathfrak{sl}_n$ and check that every matrix $E_{i,i+1}$ acts by zero. Using this, it is straightforward to check that the vectors listed in Table \ref{tableHW} are indeed highest-weight vectors.
\begin{table}[h]
\centering
\caption{Highest weight vectors of $S^3(\mathfrak{gl}_n)$}
\label{tableHW}
\begin{tabular}{|l|p{60mm}|}
\hline
Weight & Highest Weight Vector \\ \hline
$[0,\ldots,0]$ & $III$ \\ \hline
$[0,\ldots,0]$ & $\sum_{i,j}{IE_{i,j}E_{j,i}}$ \\ \hline
$[0,\ldots,0]$ & $\sum_{i,j,k}{E_{i,j}E_{j,k}E_{k,i}}$ \\ \hline
$[1,0,\ldots,0,-1]$ & $IIE_{1,n}$ \\ \hline
$[1,0,\ldots,0,-1]$ & $\sum_i{IE_{1,i}E_{i,n}}$ \\ \hline
$[1,0,\ldots,0,-1]$ & $\sum_{i,j}{E_{1,n}E_{i,j}E_{j,i}}$ \\ \hline
$[1,0,\ldots,0,-1]$ & $\sum_{i,j}{E_{1,i}E_{i,j}E_{j,n}}$ \\ \hline
$[2,0,\ldots,0,-2]$ & $IE_{1,n}E_{1,n}$ \\ \hline
$[2,0,\ldots,0,-2]$& $\sum_i{E_{1,n}E_{1,i}E_{i,n}}$ \\ \hline
$[1,1,0,\ldots,0,-2]$ & $\sum_i{E_{1,n}E_{2,i}E_{i,n}-E_{2,n}E_{1,i}E_{i,n}}$ \\ \hline
$[2,0,\ldots,0,-1,-1]$ & $\sum_i{E_{1,n}E_{1,i}E_{i,n-1}-E_{1,n-1}E_{1,i}E_{i,n}}$ \\ \hline
$[1,1,0,\ldots,0,-1,-1]$ & $IE_{1,n}E_{2,n-1}-IE_{1,n-1}E_{2,n}$ \\ \hline
$[1,1,0,\ldots,0,-1,-1]$ & $\sum_i{E_{1,n}E_{2,i}E_{i,n-1} - E_{2,n}E_{1,i}E_{i,n-1}}$ ${ - E_{1,n-1}E_{2,i}E_{i,n} + E_{2,n-1}E_{1,i}E_{i,n}}$ \\ \hline
$[3,0,\ldots,0,-3]$ & $E_{1,n}E_{1,n}E_{1,n}$ \\ \hline
$[2,1,0,\ldots,0,-1,-2]$ & $E_{1,n}E_{1,n-1}E_{2,n} - E_{1,n}E_{1,n}E_{2,n-1}$ \\ \hline
$[1,1,1,0,\ldots,0,-1,-1,-1]$ & $\sum_{\sigma \in S_3}{\sgn{\sigma}E_{\sigma(1),n}E_{\sigma(2),n-1}E_{\sigma(3),n-2}}$ \\ \hline
\end{tabular}
\end{table}
\subsection{Waring rank and border Waring rank}
As explained in the introduction (see also \cite{chiantini2017polynomials}), estimating the (border) Waring rank of the highest-weight vector $\sum_{i,j,k}{E_{i,j}E_{j,k}E_{k,i}}$ is equivalent to determining the exponent $\omega$ of matrix multiplication. We will analyze the (border) Waring ranks of other highest-weight vectors. We start with the following surprising observation:
\begin{obs}
Every highest-weight vector with weight different from $[0,\ldots,0]$ has Waring rank $O(n^2)$. Furthermore the weight space of $[0,\ldots,0]$ is 3-dimensional: it has a basis consisting of two vectors of Waring rank $O(n^2)$, and the vector $\sum_{i,j,k}{E_{i,j}E_{j,k}E_{k,i}}$.
\end{obs}
\begin{proof}
Every of the highest-weight vectors in Table \ref{tableHW}, except for $\sum_{i,j,k}{E_{i,j}E_{j,k}E_{k,i}}$, is a sum of at most $n^2$ monomials, and every degree 3 monomial has Waring rank at most 4.
\end{proof}
We now study the highest-weight vectors $IE_{1,n}E_{2,n-1}-IE_{1,n-1}E_{2,n}$ and\\ $E_{1,n}E_{1,n-1}E_{2,n} - E_{1,n}E_{1,n}E_{2,n-1}$, which we will rewrite as $xyz-xwt$ and $xzt-x^2y$.
\begin{prop}
The cubics $f_1=xyz-xwt$ and $f_2=xzt-x^2y$ are two variants of the Coppersmith-Winograd tensor. Their ranks and border ranks (equal to Waring rank resp.\ Waring border rank) are given by
$\rk(f_1)=9,\brk(f_1)=6,\rk(f_2)=7,\brk(f_2)=4$.
\end{prop}
\begin{proof}
After the change of basis $x=x_0$, $y=x_1+ix_2$, $z=x_1-ix_2$, $w=x_3+ix_4$, $t=-x_3+ix_4$, our cubic $f_1$ becomes $x_0x_1^2+x_0x_2^2+x_0x_3^2+x_0x_4^2$, which is precisely the Coppersmith-Winograd tensor $T_{4,CW}$ (here we use the notation from \cite[Section 7]{LaMM}).
For $f_2$ we can do a similar change of basis, or alternatively we can use the geometric characterization of Coppersmith-Winograd tensors form \cite[Theorem 7.4]{LaMM}. We find that $f_2$ is isomorphic to $\tilde{T}_{2,CW}$.\\
The ranks and border ranks of Coppersmith-Winograd tensors are known: consult for example \cite{CW} for the border ranks and \cite[Proposition 7.1]{LaMM} for the ranks.
\end{proof}
\begin{remark}
The highest-weight vectors that are monomials are easily understood: $III$ and $E_{1,n}E_{1,n}E_{1,n}$ trivially have Waring rank equal to 1; $IIE_{1,n}$ and $IE_{1,n}E_{1,n}$ agree with the Coppersmith-Winograd tensor $T_{1,CW}$, hence have Waring rank 3 and border Waring rank 2.
\end{remark}
|
1,108,101,564,954 | arxiv | \section{Sporadic and alternating groups}\label{sporadicandalt}
Results of the section are easily developed from well-known ones, and we include them here just
for completeness. Let $G$ be a finite simple sporadic or alternating group. Denote by $\theta(G)$
the maximal subset of $\pi(G)$ such that the intersection of all cocliques of maximal size of $GK(G)$, and by $\Theta(G)$ the set
$\{\theta(G)\}$. The set
$\Theta'(G)$ is defined as follows. A subset $\theta'(G)$ of $\pi(G)\setminus\theta(G)$ is an
element of $\Theta'(G)$ if and only if $\rho(G)=\theta(G)\cup\theta'(G)$ is coclique of $GK(G)$ of
maximal size. Obviously, the sets $\Theta(G)$ and $\Theta'(G)$ are uniquely determined, and
$\Theta'(G)$ either is empty or contains at least two elements.
We start with alternating groups. Let $G=Alt_n$ be an alternating group of degree $n$, and
$n\geqslant5$. Following \cite{VasVd} we denote by $\tau(n)$ the set of all primes $r$ with $n/2\leqslant r\leqslant n$, and
by $s_n$ the minimal element of~$\tau(n)$. Define the set $\tau'(n)$ as follows. An odd prime $r$
lies in $\tau'(n)$ if and only if $r<n/2$ and $r+s_n>n$, and the prime $2$ lies in $\tau'(n)$ if
and only if $4+s_n>n$.
\begin{prop}\label{indalt}
Let $G=Alt_n$ be an alternating group of degree $n$, and $n\geqslant5$. If $|\tau'(n)|\leqslant1$,
then $\theta(G)=\tau(n)\cup\tau'(n)$ is the unique coclique of maximal size in $GK(G)$, and
$\Theta'(G)=\varnothing$. If $|\tau'(n)|\geqslant2$, then $\theta(G)=\tau(n)$,
$\Theta'(G)=\{\{r\}\mid r\in \tau'(n)\}$, and every coclique of maximal size in $GK(G)$ is of the form
$\tau(n)\cup\{r\}$, where $r\in\tau'(n)$. In all cases the set $\Theta(G)=\{\theta(G)\}$ is
one-element, and all elements $\theta'(G)$ of $\Theta'(G)$ are one-element subsets of $\pi(G)$.
\end{prop}
\begin{proof}
The result follows from an adjacency criterion for vertices of $GK(G)$
\cite[Proposition~1.1]{VasVd}.
\end{proof}
\begin{prop}\label{indsporadic}
Let $G$ be a simple sporadic group. If $\Theta'(G)=\varnothing$, then $\theta(G)$ is the unique
coclique of maximal size in $GK(G)$. If $\Theta'(G)\neq\varnothing$, then every coclique of maximal
size is of the form $\theta(G)\cup\theta'(G)$, where $\theta'(G)\in\Theta'(G)$. If $G\neq M_{23}$
then every $\theta'(G)$ of $\Theta'(G)$ contains precisely one element. The sets $\Theta(G)$ and
$\Theta'(G)$, as well as the value $t(G)$, are listed in Table~{\em\ref{sporadic}}.
\end{prop}
\begin{proof}
The proposition is easy to verify using
\cite{Atlas} or~\cite{GAP}.
\end{proof}
\textsl{Remark.} Note that in Columns 3 and 4 of Table 1 we list the elements of $\Theta(G)$ and
$\Theta'(G)$, that is sets $\theta(G)\in\Theta(G)$ and $\theta'(G)\in\Theta'(G)$, and omit the
braces for one-element sets. In particular, for group $G=M_{11}$ we have
$\Theta(G)=\{\theta(G)\}=\{\{5,11\}\}$ and $\Theta'(G)=\{\{2\},\{3\}\}$, while for $G=M_{23}$ we
have $\Theta(G)=\{\theta(G)\}=\{\{11,23\}\}$ and $\Theta'(G)=\{\{2,5\},\{3,7\}\}$.
\begin{tab}\label{sporadic}{\bfseries Cocliques of sporadic groups}\vspace{1\baselineskip}
{\small
\begin{tabular}{|r|c|l|l|}
\hline $G$ & $t(G)$ & $\Theta(G)$ & $\Theta'(G)$ \\
\hline $M_{11}$ & $3$ & $\{5,11\}$ & $2,3$ \\
$M_{12}$ & $3$ & $\{3,5,11\}$ & $\varnothing$ \\
$M_{22}$ & $4$ & $\{5,7,11\}$ & $2,3$ \\
$M_{23}$ & $4$ & $\{11,23\}$ & $\{2,5\}$, $\{3,7\}$ \\
$M_{24}$ & $4$ & $\{5,7,11,23\}$ & $\varnothing$ \\
$J_{1}$ & $4$ & $\{7,11,19\}$ & $2,3,5$ \\
$J_{2}$ & $2$ & $7$ & $2,3,5$ \\
$J_{3}$ & $3$ & $\{17,19\}$ & $2,3,5$ \\
$J_{4}$ & $7$ & $\{11,23,29,31,37,43\}$ & $5,7$ \\
$\operatorname{Ru}$ & $4$ & $\{7,13,29\}$ & $3,5$ \\
$\operatorname{He}$ & $3$ & $\{5,7,17\}$ & $\varnothing$ \\
$\operatorname{McL}$ & $3$ & $\{7,11\}$ & $3,5$ \\
$\operatorname{HN}$ & $3$ & $\{11,19\}$ & $3,5,7$ \\
$\operatorname{HiS}$ & $3$ & $\{7,11\}$ & $2,3,5$ \\
$\operatorname{Suz}$ & $4$ & $\{5,7,11,13\}$ & $\varnothing$ \\
$\operatorname{Co}_{1}$ & $4$ & $\{11,13,23\}$ & $5,7$ \\
$\operatorname{Co}_{2}$ & $4$ & $\{7,11,23\}$ & $3,5$ \\
$\operatorname{Co}_{3}$ & $4$ & $\{5,7,11,23\}$ & $\varnothing$ \\
$\operatorname{Fi}_{22}$ & $4$ & $\{5,7,11,13\}$ & $\varnothing$ \\
$\operatorname{Fi}_{23}$ & $5$ & $\{11,13,17,23\}$ & $5,7$ \\
$\operatorname{Fi}'_{24}$ & $6$ & $\{11,13,17,23,29\}$ & $5,7$ \\
$\operatorname{O'N}$ & $5$ & $\{7,11,19,31\}$ & $3,5$ \\
$\operatorname{LyS}$ & $6$ & $\{5,7,11,31,37,67\}$ & $\varnothing$ \\
$F_{1}$ & $11$ & $\{11,13,19,23,29,31,41,47,59,71\}$ & $7,17$ \\
$F_{2}$ & $8$ & $\{7,11,13,17,19,23,31,47\}$ & $\varnothing$ \\
$F_{3}$ & $5$ & $\{5,7,13,19,31\}$ & $\varnothing$ \\
\hline
\end{tabular}}
\end{tab}
In addition, we notice another substantial property of prime graphs of groups under consideration.
\begin{prop}\label{cliquesporalt} Suppose that $G$ is either an alternating group of degree $n$, $n\geqslant5$,
or a sporadic group distinct from $M_{23}$. Then the set $\pi(G)\setminus\theta(G)$ is a clique of
$GK(G)$.
\end{prop}
\begin{proof}
This follows from \cite{Atlas} and \cite[Proposition~1.1]{VasVd}.
\end{proof}
\section{Preliminary results for groups of Lie type}\label{preliminary}
We write $[x]$ for the integer part of a rational number $x$. The set of prime divisors of a natural number
$m$ is denoted by $\pi(m)$. By $(m_1,m_2,\dots,m_s)$ we denote the greatest common divisor of numbers $m_1,m_2,\dots,m_s$.
For a
natural number $r$, the $r$-share of a natural number $m$ is the greatest divisor $t$ of $m$ with
$\pi(t)\subseteq\pi(r)$. We write $m_r$ for the $r$-share of $m$ and $m_{r'}$ for the quotient
$m/m_r$.
If $q$ is a natural number, $r$ is an odd prime and $(q,r)=1$, then $e(r,q)$ denotes
a~multi\-plicative order of $q$ modulo $r$, that is a minimal natural number $m$ with
$q^m\equiv1\pmod{r}$. For an odd $q$, we put $e(2,q)=1$ if $q\equiv1\pmod{4}$, and $e(2,q)=2$
otherwise.
\begin{lem}\label{Zsigmondy Theorem}
{\em (Corollary to Zsigmondy's theorem \cite{zs})} Let $q$ be a natural number greater than $1$.
For every natural number $m$ there exists a prime $r$ with $e(r,q)=m$ but for the cases $q=2$ and
$m=1$, $q=3$ and $m=1$, and $q=2$ and $m=6$.
\end{lem}
\textsl{Remark.} In conclusion of the same corollary \cite[Lemma~1.4]{VasVd} in our previous
article we miss two exceptions: $m=1$ and $q=2$, and $m=1$ and $q=3$. However, these exceptions
don't arise in all proofs and arguments from \cite{VasVd}, that use the corollary to Zsigmondy's
theorem.
\smallskip
A prime $r$ with $e(r,q)=m$ is called a {\em primitive prime divisor} of $q^m-1$. By Lemma
\ref{Zsigmondy Theorem} such a number exists except for the cases mentioned in the lemma. Given $q$ we denote by $R_m(q)$ the set of all
primitive prime divisors of $q^m-1$ and by $r_m(q)$ any
element of $R_m(q)$. A divisor $k_m(q)$ of $q^m-1$ is said to be the {\em greatest primitive divisor} if
$\pi(k_m(q))=\pi(R_m(q))$ and $k_m(q)$ is the greatest divisor with this property. Usually the number $q$ is fixed (for example, by the
choice of a group of Lie type $G$), and we write $R_m$, $r_m$, and $k_m$ instead of $R_m(q)$, $r_m(q)$, and $k_m(q)$. Following our
definition
of $e(2,q)$, we derive that $k_1(q)=(q-1)/2$ if $q\equiv-1\pmod{4}$, and $k_1(q)=q-1$ otherwise;
$k_2(q)=(q+1)/2$ if $q\equiv1\pmod{4}$, and $k_2(q)=q+1$ otherwise. The following lemma provides a
formula for expressing greatest primitive divisors $k_m$, $m\geqslant3$, in terms of cyclotomic
polynomials $\phi_m(x)$.
\begin{lem}\label{gpd} {\em \cite{VasGrSmall}}
Let $q$ and $m$ be natural numbers, $q>1$, $m\geqslant3$, and let $k_m(q)$ be the greatest primitive
divisor of $q^m-1$. Then $$k_m(q)=\frac{\phi_m(q)}{\prod_{r\in\pi(m)}( \phi_{m_{r'}}(q),r)}.$$
\end{lem}
According to our definitions, if $i\neq j$, then $\pi(R_i)\cap\pi(R_j)=\varnothing$, and so
$(k_i,k_j)=1$.
\begin{lem}\label{Divisibility} {\em \cite[Lemma~6(iii)]{ZavL3}} Let $q,k,l$ be natural numbers. Then
\emph{(a)} $(q^k-1,q^l-1)=q^{(k,l)}-1;$
\emph{(b)} $(q^k+1,q^l+1)=\left \{
\begin{array}{ll}
q^{(k,l)}+1, &\mbox{if both $\frac{k}{(k,l)}$ and $\frac{l}{(k,l)}$ are
odd},\\
(2,q+1), & \mbox{otherwise};
\end{array} \right.$
\emph{(c)} $(q^k-1,q^l+1)=\left \{
\begin{array}{ll}
q^{(k,l)}+1,&\mbox{if $\frac{k}{(k,l)}$ is even and $\frac{l}{(k,l)}$ is
odd},\\
(2,q+1), & \mbox{otherwise.}
\end{array} \right.$
In particular, for every $q\ge2$, $k\ge 1$ the inequality $(q^k-1,q^k+1)\le 2$ holds.
\end{lem}
For $q=p^\alpha$, where $p$ is a prime, we recall also two statements from \cite{VasVd}.
\begin{multline}\label{firstold}
\text{An odd prime }c\not=p\text{ divides }q^x-1\text{ if and only if }\\ e(c,q)\text{ divides }x\text{ (see~\cite[statement~(1)]{VasVd}).}
\end{multline}
\begin{multline}\label{fourthold}
\text{If an odd prime }c\not=p\text{ divides }q^x-\epsilon \text{, where }\epsilon\in\{+1,-1\},\\ \text{ then } \eta(e(c,q))\text{
divides }x,
\text{ where }\eta(n)\\ \text{ is defined in Proposition~\ref{adjbn} (see~\cite[statement~(4)]{VasVd}).}
\end{multline}
In the proofs of Propositions \ref{adjbn}, \ref{adjdn}, and \ref{adjexcept} by $\epsilon, \epsilon_i$ we denote an element from the set
$\{+1,\-1\}$. For groups of Lie type our notation agrees with that of \cite{VasVd}. We write $A_n^\varepsilon(q)$, $D_n^\varepsilon(q)$, and
$E_6^\varepsilon(q)$, where $\varepsilon\in\{+,-\}$, and $A_n^+(q)=A_n(q)$, $A_n^-(q)={}^2A_n(q)$, $D_n^+(q)=D_n(q)$, $D_n^-(q)={}^2D_n(q)$,
$E_6^+(q)=E_6(q)$, $E_6^-(q)={}^2E_6(q)$. In \cite[Proposition~2.2]{VasVd}, considering unitary groups, we define
\begin{equation}\label{nu(m)}
\nu(m)=\left\{
\begin{array}{rl}
m &\text{ if }m\equiv 0(\mod 4),\\
\frac{m}{2}& \text{ if }m\equiv 2(\mod 4),\\
2m&\text{ if }m\equiv1(\mod 2).\\
\end{array}\right.
\end{equation}
Clearly $\nu(m)$ is a bijection from $\mathbb{N}$ onto $\mathbb{N}$ and $\nu^{-1}(m)=\nu(m)$. In most cases it is natural to consider linear
and unitary groups together. So we define
\begin{equation}\label{nuepsilon(n)}
\nu_{\varepsilon}(m)=\left\{
\begin{array}{rl}
m &\text{ if } \varepsilon=+,\\
\nu(m) & \text{ if }\varepsilon=-.\\
\end{array}\right.
\end{equation}
\begin{prop}\label{adjbn}
Let $G$ be one of simple groups of Lie type, $B_n(q)$ or $C_n(q)$, over a field of characteristic~$p$. Define
$$\eta(m)=\left\{
\begin{array}{cc}
m &\text{ if }m\text{ is odd},\\
\frac{m}{2}& \text{ otherwise}.\\
\end{array}\right.$$ Let $r,s$ be odd primes with $r,s\in\pi(G)\setminus\{p\}$. Put $k=e(r,q)$ and $l=e(s,q)$, and suppose that $1\le
\eta(k)\le \eta(l)$. Then $r$ and $s$ are non-adjacent if and only if $\eta(k)+\eta(l)> n$, and $k$, $l$ satisfy
to~\eqref{strange}{\em:}
\end{prop}
\begin{equation}\label{strange}
\dfrac{l}{k} \text{ {\em is not an odd natural number}}
\end{equation}
\begin{proof}
We prove the ``if'' part first. Assume that $\eta(k)+\eta(l)\le n$, then
there exists a maximal torus $T$ of order $\frac{1}{(2,q-1)}(q^{\eta(k)}+(-1)^k)(q^{\eta(l)}+(-1)^l)(q-1)^{n-\eta(k)-\eta(l)}$ of $G$ (see
\cite[Lemma~1.2(2)]{VasVd}, for example).
Both $r,s$ divide $\vert T\vert$, hence $r,s$ are adjacent in~$G$. If
$\frac{l}{k}$ is an odd integer, then either both $k,l$ are odd and Lemma~\ref{Divisibility}(a) implies that
$q^{\eta(k)}+(-1)^k=q^k-1$ divides $q^{\eta(l)}+(-1)^l=q^l-1$, or both $k,l$ are even and Lemma~\ref{Divisibility}(b) implies that
$q^{\eta(k)}+(-1)^k=q^{k/2}+1$ divides $q^{\eta(l)}+(-1)^l=q^{l/2}+1$. Again both $r,s$ divide $\vert T\vert$, where $T$ is a maximal
torus of order $\frac{1}{(2,q-1)}(q^{\eta(l)}+(-1)^l)(q-1)^{n-\eta(l)}$ of $G$ (the existence of such torus
follows from \cite[Lemma~1.2(2)]{VasVd}), so $r,s$ are adjacent.
Now we prove the ``only if'' part. Assume by contradiction that $\eta(k)+\eta(l)>n$ and $l/k$ is not odd natural
number, but $r,s$
are adjacent. Then $G$ contains an element $g$ of order $rs$. The element $g$ is semisimple, since $(rs,p)=1$, hence $g$ is contained in a
maximal torus $T$. By \cite[Lemma~1.2(2)]{VasVd} it follows that $\vert
T\vert=\frac{1}{(2,q-1)}(q^{n_1}-\epsilon_1)(q^{n_2}-\epsilon_2)\ldots(q^{n_k}-\epsilon_k)$, where $n_1+n_2+\ldots+n_k=n$.
Up to renumberring, we may assume
that $r$ divides $(q^{n_1}-\epsilon_1)$, while $s$ divides either $(q^{n_1}-\epsilon_1)$ or~$(q^{n_2}-\epsilon_2)$. Assume first that $s$
divides~$(q^{n_2}-\epsilon_2)$. Then \eqref{fourthold} implies that $\eta(k)$ divides $n_1$ and $\eta(l)$ divides $n_2$, so $n_1+n_2\ge
\eta(k)+\eta(l)>n$, a contradiction.
Now assume that both $r,s$ divide $(q^{n_1}-\epsilon_1)$. Again \eqref{fourthold} implies that both $\eta(k),\eta(l)$ divide $n_1$. Now
$\eta(k)+\eta(l)>n$ and $\eta(k)\le \eta(l)$, so $\eta(l)=n_1$. Assume first that $l$ is odd. Then $l=\eta(l)=n_1$ and $s$ divides $q^l-1$.
Since $s$ is odd, Lemma \ref{Divisibility} imples that $s$ does not divide $q^l+1$, hence $q^{n_1}-\epsilon_1=q^{n_1}-1$. Since $r$ divides
$q^{n_1}-1$, by using \eqref{firstold} we obtain that $k$ divides $n_1=l$, hence $k$ is odd. Therefore
$\frac{l}{k}$ is an odd integer, a contradiction with~\eqref{strange}. Now assume that $l$ is even. Then $l/2=\eta(l)=n_1$
and $s$ divides $q^l-1$. In view of \eqref{firstold}, $s$ does not divide $q^{l/2}-1$, hence $s$ divides $q^{l/2}+1$ and
$q^{n_1}-\epsilon_1=q^{n_1}+1$. Now \eqref{fourthold} implies that $\eta(k)$ divides $n_1$, hence $k$ divides $2n_1=l$. By Lemma
\ref{Divisibility}(c) we obtain that $r$ does not divide $q^{l/2}-1$, hence $k$ does not divide $l/2$ and $\frac{l}{k}$ is an
odd integer, a contradiction with~\eqref{strange}.
\end{proof}
\begin{prop}\label{adjdn}
Let $G=D_n^{\varepsilon}(q)$ be a finite simple group of Lie type over a field of characteristic $p$, and let the function
$\eta(m)$ be defined as in Proposition~{\em\ref{adjbn}}. Suppose $r,s$ are odd primes and $r,s\in\pi(D_n^\varepsilon(q))\setminus\{p\}$.
Put $k=e(r,q)$, $l=e(s,q)$, and $1\le\eta(k)\le\eta(l)$. Then $r$ and $s$ are non-adjacent if and only if $2\cdot\eta(k)+2\cdot\eta(l)>
2n-(1-\varepsilon(-1)^{k+l})$,
$k$ and $l$ satisfy~\eqref{strange}, and, if $\varepsilon=+$, then the chain of equalities{\em:}
\begin{equation}\label{strange2}
n=l=2\eta(l)=2\eta(k)=2k
\end{equation}
is not true.
\end{prop}
\begin{proof}
The following inclusions are known $\widetilde{B}_{n-1}(q)\leq \widetilde{D}_n^\varepsilon(q)\leq
\widetilde{B}_n(q)$ (see \cite[Table~2]{KondSbgrps}), where $\widetilde{B}_{n-1}(q)$,
$\widetilde{D}_n^\varepsilon(q)$, $\widetilde{B}_n(q)$ are central extensions of corresponding
simple groups and $n\ge 4$. Since the Schur multiplier for each of simple groups $B_{n-1}(q)$,
$D_n^\varepsilon(q)$, $B_n(q)$ has order equal to $1$, $2$, or $4$, it is clear that two odd
prime divisors of the order of a simple group isomorphic to $B_n(q)$ or $D_n^\varepsilon(q)$ are
adjacent if and only if they are adjacent in every central extension of the group. Hence if two
odd prime divisors of $\vert D_n^\varepsilon(q)\vert$ are adjacent in $GK(B_{n-1}(q))$, then they
are adjacent in $GK(D_n^\varepsilon(q))$ and if two odd prime divisors of $\vert
D_n^\varepsilon(q)\vert$ are non-adjacent in $GK(B_{n}(q))$, then they are non-adjacent in
$GK(D_n^\varepsilon(q))$. There can be the following cases:
\begin{itemize}
\item[(i)] $\eta(k)+\eta(l)\le n-1$;
\item[(ii)]$\eta(k)+\eta(l)\ge n$, $l/k$ is an odd number and $\eta(l)\le n-1$;
\item[(iii)] $\eta(k)+\eta(l)=n$ and $\frac{l}{k}$ is not an odd natural number;
\item[(iv)] $\eta(l)=n$ and $\frac{l}{k}$ is an odd natural number;
\item[(v)] $\eta(k)+\eta(l)>n$ and $\frac{l}{k}$ is not an odd natural number.
\end{itemize}
By Lemma \ref{adjbn} in cases (i), (ii) primes $r,s$ are adjacent in $GK(B_{n-1}(q))$, while in case (v) primes $r,s$ are non-adjacent in
$GK(B_n(q))$. In view of above notes it follows that we
need to consider (iii) and~(iv).
Assume first that $\eta(k)+\eta(l)=n$ and $\frac{l}{k}$ is not an odd natural number, i.~e., case (iii) holds. Primes $r,s$ are adjacent in
$GK(D_n^\varepsilon(q))$ if and only if there exists an element $g\in D_n^\varepsilon(q)$ such that $\vert g\vert=rs$. This element $g$
is contained in a maximal torus $T$,
since $(rs,p)=1$. In view of \cite[Lemma~1.2(3)]{VasVd} the order $\vert T\vert$ is equal to
$\frac{1}{(4,q^n-\varepsilon1)}(q^{n_1}-\epsilon_1)\cdot\ldots\cdot(q^{n_m}-\epsilon_m)$, where $n_1+\ldots+n_m=n$ and
$\epsilon_1\cdot\ldots\cdot\epsilon_m=\varepsilon1$.
Up to renumberring, we may assume that $r$ divides
$q^{n_1}-\epsilon_1$, while $s$ divides either $q^{n_1}-\epsilon_1$, or $q^{n_2}-\epsilon_2$.
If $s$ divides $q^{n_1}-\epsilon_1$, then
\eqref{fourthold}
implies that both $\eta(k)$, $\eta(l)$ divide $n_1$. As in the proof of Proposition \ref{adjbn} we derive that $r,s$ are adjacent if and
only if $\frac{l}{k}$ is an
odd integer.
Assume now that $s$ divides $q^{n_2}-\epsilon_2$. Then \eqref{fourthold} implies that $\eta(k)$ divides $n_1$ and $\eta(l)$ divides $n_2$.
Hence we obtain
the following inequalities $n\ge n_1+n_2\ge \eta(k)+\eta(l)=n$, so $\eta(k)=n_1$, $\eta(l)=n_2$, and $q^{n_1}-\epsilon_1=q^{\eta(k)}+(-1)^k$,
$q^{n_2}-\epsilon_2=q^{\eta(l)}+(-1)^l$. If $\varepsilon=-$, then a maximal torus $T$ of order
$\frac{1}{(4,q^n+1)}(q^{\eta(k)}+(-1)^k)(q^{\eta(l)}+(-1)^l)$
of $G$ exists if and only if $k,l$ has the distinct parity, i.~e., if and only if $2n-(1-\varepsilon(-1)^{k+l})=2n-(1+(-1)^{k+l})=2n$. Hence
in
this case
$r,s$ are non-adjacent if and only if the inequality $2\cdot\eta(k)+2\cdot\eta(l)> 2n-(1-\varepsilon(-1)^{k+l})$ holds. If $\varepsilon=+$ and
$n_1\not=n_2$, then a maximal torus $T$ of order $\frac{1}{(4,q^n-1)}(q^{\eta(k)}+(-1)^k)(q^{\eta(l)}+(-1)^l)$ of $G$ exists if and only if $k,l$ has
the same parity, i.~e., if and only if $2n-(1-\varepsilon(-1)^{k+l})=2n-(1-(-1)^{k+l})=2n$. Hence in this case $r,s$ are non-adjacent if and only if
the inequality $2\cdot\eta(k)+2\cdot\eta(l)> 2n-(1-\varepsilon(-1)^{k+l})$ holds. If $n_1=n_2=n/2$ and $\frac{l}{k}$ is an odd integer,
then, $r,s$ are adjacent. Assume that $n_1=n_2=n/2$ and $\frac{l}{k}$ is not an odd integer. The condition $\frac{l}{k}$ is not an odd
integer implies that $l\not=k$, so the chain of equalities \eqref{strange2} holds.
In this case there exists a maximal torus $T$ of order
$\frac{1}{(4,q^n-1)}(q^n-1)=\frac{1}{(4,q^n-1)}(q^{n/2}-1)(q^{n/2}+1)$ of $G$, so condition \eqref{strange2} is not satisfied and $r,s$ are adjacent.
Now assume that $\eta(l)=n$ and $\frac{l}{k}$ is an odd natural number, i.~e., case (iv) holds. In this case there exists a maximal torus
$T$ of order
$\frac{1}{(4,q^n-\varepsilon1)}(q^n+(-1)^l)$ of $G$ (if such a torus does not exist then $s$ does not divide $\vert G\vert$). The fact that
$\frac{l}{k}$ is an odd prime implies that $r$ divides $\vert T\vert$, so $r,s$ are adjacent.
\end{proof}
Now we consider simple exceptional groups of Lie type. Note that the orders of maximal tori of
simple exceptional groups were listed in \cite[Lemma~1.3]{VasVd}. However, for groups $E_7(q)$,
$E_8(q)$, and Ree groups ${}^2F_4(2^{2n+1})$ (items (4), (5), and (9) of the lemma respectively),
the list of orders of tori was incorrect. We correct the list by the following lemma.
\begin{lem}\label{toriofexcptgrps} {\em (see \cite{Ca3
)} Let $\overline{G}$ be a connected simple exceptional algebraic
group
of adjoint type and let $G=O^{p'}(\overline{G}_\sigma)$ be the finite simple exceptional group of
Lie type.
\item[{\em 1.}] For every maximal torus $T$ of $G=E_7(q)$, the number
$m=(2,q-1)\vert T\vert$ is equal to one of the following:
$(q+1)^{n_1}(q-1)^{n_2},$ $n_1+n_2=7;$ $(q^2+1)^{n_1}(q+1)^{n_2}(q-1)^{n_3},$ $1\leqslant
n_1\leqslant2,$ $2n_1+n_2+n_3=7,$ and $m\neq(q^2+1)(q\pm1)^5;$
$(q^3+1)^{n_1}(q^3-1)^{n_2}(q^2+1)^{n_3}(q+1)^{n_4}(q-1)^{n_5},$ $1\leqslant n_1+n_2\leqslant2,$
$3n_1+3n_2+2n_3+n_4+n_5=7,$ and $m\neq(q^3+\epsilon1)(q-\epsilon1)^4,$ $m\neq(q^3\pm1)(q^2+1)^2,$
$m\neq(q^3+\epsilon1)(q^2+1)(q+\epsilon1)^2;$
$(q^4+1)(q^2\pm1)(q\pm1);$ $(q^5\pm1)(q^2-1);$ $(q^5+\epsilon1)(q+\epsilon1)^2;$ $q^7\pm1;$
$(q-\epsilon1)\cdot (q^2+\epsilon q+1)^3; (q^5-\epsilon1)\cdot
(q^2+\epsilon q +1); (q^3\pm1)\cdot (q^4-q^2+1); (q-\epsilon1)\cdot
(q^6+\epsilon q^3+1);$ $(q^3-\epsilon1)\cdot(q^2-\epsilon
q+1)^2,$ where $\epsilon=\pm$. Moreover, for every number $m$ given above there exists a torus $T$ with $(2,q-1)\vert T\vert=m$.
\item[{\em 2.}] Every maximal torus $T$ of $G=E_8(q)$ has one of the
following orders:
$(q+1)^{n_1}(q-1)^{n_2},$ $n_1+n_2=8;$ $(q^2+1)^{n_1}(q+1)^{n_2}(q-1)^{n_3},$ $1\leqslant
n_1\leqslant4,$ $2n_1+n_2+n_3=8,$ and $|T|\neq(q^2+1)^3(q\pm1)^2,$ $|T|\neq(q^2+1)(q\pm1)^6;$
$(q^3+1)^{n_1}(q^3-1)^{n_2}(q^2+1)^{n_3}(q+1)^{n_4}(q-1)^{n_5},$ $1\leqslant n_1+n_2\leqslant2,$
$3n_1+3n_2+2n_3+n_4+n_5=8,$ and $|T|\neq(q^3\pm1)^2(q^2+1),$
$|T|\neq(q^3+\epsilon1)(q-\epsilon1)^5,$ $|T|\neq(q^3+\epsilon1)(q^2+1)(q+\epsilon1)^3,$
$|T|\neq(q^3+\epsilon1)(q^2+1)^2(q-\epsilon1);$ $q^8-1;$ $(q^4+1)^2;$ $(q^4+1)(q^2\pm1)(q\pm1)^2;$
$(q^4+1)(q^2-1)^2;$ $(q^4+1)(q^3+\epsilon1)(q-\epsilon1);$
$(q^5+\epsilon1)(q+\epsilon1)^3;$ $(q^5\pm1)(q+\epsilon1)^2(q-\epsilon1);$
$(q^5+\epsilon1)(q^2+1)(q-\epsilon1);$ $(q^5+\epsilon1)(q^3+\epsilon1);$ $(q^6+1)(q^2\pm1);$
$(q^7\pm1)(q\pm1);$ $(q-\epsilon1)\cdot (q^2+\epsilon q+1)^3\cdot(q\pm1);$ $(q^5-\epsilon1)\cdot
(q^2+\epsilon q +1)\cdot(q+\epsilon1);$ $(q^3\pm1)\cdot (q^4-q^2+1)\cdot(q\pm1);$
$(q-\epsilon1)\cdot
(q^6+\epsilon q^3+1)\cdot(q\pm1);$ $(q^3-\epsilon1)\cdot(q^2-\epsilon
q+1)^2\cdot(q\pm1);$ $q^8-q^4+1;$
$q^8+q^7-q^5-q^4-q^3+q+1;$
$q^8-q^6+q^4-q^2+1;$ $(q^4-q^2+1)^2;$ $(q^6+\epsilon q^3+1)(q^2+\epsilon q+1);$
$q^8-q^7+q^5-q^4+q^3-q+1;$ $(q^4+\epsilon q^3+q^2+\epsilon q+1)^2;$
$(q^4-q^2+1)(q^2\pm q+1)^2;$
$(q^2-q+1)^2\cdot(q^2+q+1)^2;$
$(q^2\pm q+1)^4,$ where $\epsilon=\pm$. Moreover, for every number
given above there exists a torus of corresponding order.
\item[{\em 3.}] Every maximal torus $T$ of $G={^2F_4(2^{2n+1})}$ with $n\ge1$
has one of the following orders: $q^2+\epsilon q\sqrt{2q}+q+\epsilon \sqrt{2q}+1;$ $q^2-\epsilon
q\sqrt{2q}+\epsilon \sqrt{2q}-1;$ $q^2-q+1;$ $(q\pm \sqrt{2q}+1)^2;$ $(q-1)(q\pm \sqrt{2q}+1);$
$(q\pm1)^2;q^2\pm1;$ where $q=2^{2n+1}$ and~$\epsilon=\pm$. Moreover, for every number given above
there exists a torus of corresponding order.
\end{lem}
\begin{prop}\label{adjexcept}
Let $G$ be a finite simple exceptional group of Lie type over a field of characteristic~$p$, suppose that
$r,s$ are odd primes, and assume that $r,s\in\pi(G)\setminus\{p\}$, $k=e(r,q)$, $l=e(s,q)$, and
$1\le k\le l$. Then $r$ and $s$ are non-adjacent if and only if $k\not=l$ and one of the following holds{\em:}
\begin{itemize}
\item[{\em 1.}] $G=G_2(q)$ and either $r\not=3$ and $l\in\{3,6\}$ or $r=3$
and~${l=9-3k}$.
\item[{\em 2.}] $G=F_4(q)$ and either $l\in\{8,12\}$, or $l=6$
and~$k\in\{3,4\}$, or $l=4$ and~${k=3}$.
\item[{\em 3.}] $G=E_6(q)$ and either $l=4$ and $k=3$, or $l=5$ and $k\ge3$, or $l=6$
and $k=5$, or $l=8$, $k\ge3$, or $l=8$, $r=3$, and $(q-1)_3=3$, or
$l=9$, or $l=12$ and $k\not=3$.
\item[{\em 4.}] $G={^2E_6(q)}$ and either $l=6$ and $k=4$, or $l=8$, $k\ge3$, or $l=8$, $r=3$,and
$(q+1)_3=3$, or $l=10$ and $k\ge 3$, or $l=12$ and $k\not=6$, or~$l=18$.
\item[{\em 5.}] $G=E_7(q)$ and either $l=5$ and $k=4$, or $l=6$ and $k=5$, or $l\in\{14,18\}$ and
$k\not=2$, or $l\in\{7,9\}$ and $k\ge2$, or $l=8$ and $k\ge3,k\not=4$, or $l=10$ and $k\ge3,
k\not=6$, or $l=12$ and~$k\ge 4,k\not=6$.
\item[{\em 6.}] $G=E_8(q)$ and either $l=6$ and $k=5$, or $l\in\{7,14\}$ and $k\ge3$, or
$l=9$ and $k\ge 4$, or $l\in\{8,12\}$ and $k\ge 5, k\not=6$, or $l=10$ and $k\ge3, k\not=4,6$, or $l=18$ and $k\not=1,2,6$, or
$l=20$ and $r\cdot k\not=20$, or~$l\in\{15,24,30\}$.
\item[{\em 7.}] $G={^3D_4(q)}$ and either $l=6$ and $k=3$,
or~$l=12$.
\end{itemize}
\end{prop}
\begin{proof}
Recal that $k_m$ is the greatest primitive divisor of $q^m-1$, while $R_m$ is the set of all prime primitive divisors of $q^m-1$.
The orders of maximal tori in exceptional groups are given
in~\cite[Lemma~1.3]{VasVd} and Lemma \ref{toriofexcptgrps}, for example.
1. Since $\vert G_2(q)\vert=q^6(q^2-1)(q^6-1)$, the numbers $k,l$ are in the set $\{1,2,3,6\}$. If $\{k,l\}\subseteq \{1,2\}$, then the
existence
of a maximal torus of order $q^2-1=(2,q-1)\cdot k_1\cdot k_2$ implies the existence of an element of order $rs$, i.~e., $r$ and $s$ are
adjacent in
$GK(G)$.
If $l=3$ (resp. $l=6$), then an element $g$ of order $s$ is contained in a unique, up to conjugation, maximal torus of order $q^2+q+1=(3,q-1)k_3$
(resp. $q^2-q+1=(3,q+1)k_6$).
In this case $r,s$ are non-adjacent if and only if $r$ does not divide $\vert T\vert$, whence statement 1 of the lemma follows.
2. Since $\vert F_4(q)\vert=q^{24}(q^2-1)(q^6-1)(q^8-1)(q^{12}-1)$, the numbers $k,l$ are in the set $\{1,2,3,4,6,8,12\}$. If $l\le 3$, then the
existence of maximal torus of order $(q^3-1)(q+1)=(2,q-1)\cdot(3,q-1)k_1\cdot k_2\cdot k_3$ implies that for every $k\le 3$ the primes $r,s$
are adjacent. If $l=4$,
then an element $g$ of order
$s$ is in a maximal torus of order equals to either $(q-\epsilon)^2(q^2+1)$, or $(q^2-\epsilon)(q^2+1)$. In particular,
for every maximal torus $T$ containing
$g$ the inclusion $\pi(T)\subseteq R_1\cup R_2 \cup R_4$ holds. Moreover there exists a maximal torus of order
$q^4-1=(2,q-1)^2\cdot k_1\cdot k_2\cdot k_4$. So in
this case $r,s$ are non-adjacent if and only if $r$ does not divide $k_1\cdot k_2\cdot k_4$, i.~e., if and only if
$k=3$. If $l=6$, then each element of order
$s$ is in a maximal torus of order equals to either $(q^3+1)(q-\epsilon)=(3,q+1)\cdot k_6\cdot (q+1)\cdot (q-\epsilon)$, or
$(q^2-q+1)^2=(3,q+1)^2\cdot k_6^2$.
In particular, for every maximal
torus $T$ containing $g$ the inclusion $\pi(T)\subseteq R_1\cup R_2\cup R_6$ holds, and there exists a maximal torus of order
$(q^3+1)(q-1)=(2,q-1)(3,q+1)k_1\cdot k_2\cdot k_6$. Thus $r,s$ are non-adjacent if and only if $k\in\{3,4\}$. If, finally, $l=8$ (resp. $l=12$),
then every element of order $s$ is in a unique up to conjugation maximal torus of order $(2,q-1)k_8$ (resp. $k_{12}$). Thus $r,s$ are
non-adjacent if and only if $k\not=8$ (resp.~${k\not=12}$).
3. Since $\vert E_6(q)\vert=\frac{1}{(3,q-1)}q^{36}(q^2-1)(q^5-1)(q^6-1)(q^8-1)(q^9-1)(q^{12}-1)$, the numbers $k,l$ are in the set
$\{1,2,3,4,5,6,8,9,12\}$. If $l\le 3$, then the existence of a maximal torus $T$ of order
$\frac{1}{(3,q-1)}(q^3-1)(q^2-1)(q-1)=(2,q-1)\cdot k_3\cdot k_2\cdot k_1\cdot(q-1)^2$
implies that $r,s$
are adjacent. If $l=4$, then each element of order $s$ is in a maximal torus of order equals either
$\frac{1}{(3,q-1)}(q^4-1)(q-\epsilon_1)(q-\epsilon_2)=\frac{1}{(3,q-1)}\cdot (2,q-1)^2\cdot k_1\cdot k_2\cdot k_4\cdot (q-\epsilon_1)\cdot(q-\epsilon_2)$, or
$\frac{1}{(3,q-1)}(q^3+1)(q^2+1)(q-1)=\frac{1}{(3,q-1)}\cdot (2,q-1)^2\cdot (3,q+1)\cdot k_6\cdot k_4\cdot k_2\cdot k_1$, or
$\frac{1}{(3,q-1)}(q^2+1)^2(q-1)^2=\frac{1}{(3,q-1)}\cdot (2,q-1)^2\cdot k_4^2\cdot (q-1)^2$.
Thus $r,s$ are non-adjacent if and only if $k=3$. If $l=5$, then each element
of order $s$ is in a maximal torus of order
$\frac{1}{(3,q-1)}(q^5-1)(q-\epsilon)=\frac{1}{(3,q-1)}(5,q-1)k_5(q-1)(q-\epsilon)$. Thus $r,s$ are non-adjacent if and only if $k\in\{3,4\}$.
If $l=6$, then every element of order $s$ is in a maximal torus of order equals either
$\frac{1}{(3,q-1)}(q^3+1)(q^2+q+1)(q-\epsilon)=(3,q+1)\cdot k_6\cdot k_3\cdot (q+1)\cdot (q-\epsilon)$, or
$\frac{1}{(3,q-1)}(q^3+1)(q^2+1)(q-1)=\frac{1}{(3,q-1)}\cdot(3,q+1)\cdot k_6\cdot (2,q-1)\cdot k_1\cdot k_2\cdot k_4$, or
$\frac{1}{(3,q-1)}(q^3+1)(q^2-1)(q-1)=\frac{1}{(3,q-1)}\cdot(3,q+1)\cdot k_6\cdot (2,q-1)^2\cdot k_1^2\cdot k_2^2$, or
$\frac{1}{(3,q-1)}(q^2+q+1)(q^2-q+1)^2=(3,q+1)^2\cdot k_6^2\cdot k_3$.
Thus $r,s$ are non-adjacent if and only if $k=5$. If $l=8$, then each element of order $s$ is in a unique up to conjugation
maximal torus of order
$\frac{1}{(3,q-1)}(q^4+1)(q^2-1)=\frac{1}{(3,q-1)}\cdot (2,q-1)^2\cdot k_8\cdot k_2\cdot k_1$. Hence $r,s$ are non-adjacent if and only if either
$k\ge3$ and $k\not=8$, or
$r=3$ and $(q-1)_3=3$. If $l=9$, then each element of order $s$ is in a unique up to conjugation maximal torus
of order $\frac{1}{(3,q-1)}(q^6+q^3+1)=k_9$. Hence $r,s$ are non-adjacent if and only if $k\not=9$. If, finally, $l=12$, then every element of order
$s$ is in a unique up to
conjugation
maximal torus of order $\frac{1}{(3,q-1)}(q^4-q^2+1)(q^2+q+1)=k_{12}\cdot k_3$. So $r,s$ are non-adjacent if and only if~${k\not=3,12}$.
4. Since $\vert {}^2E_6(q)\vert=\frac{1}{(3,q+1)}q^{36}(q^2-1)(q^5+1)(q^6-1)(q^8-1)(q^9+1)(q^{12}-1)$, the numbers $k,l$ are in the set
$\{1,2,3,4,6,8,10,12,18\}$. If $l\le 4$, then the existence of maximal tori of orders
$\frac{1}{(3,q+1)}(q^3-1)(q^2+1)(q+1)=\frac{1}{(3,q+1)}\cdot (2,q-1)\cdot k_1\cdot k_2\cdot k_3\cdot k_4$,
$\frac{1}{(3,q+1)}(q^2+1)^2(q+1)^2=\frac{1}{(3,q+1)}\cdot(2,q-1)^2\cdot k_4^2\cdot (q+1)^2$, and
$\frac{1}{(3,q+1)}(q^3-1)(q^2-1)(q+1)=\frac{1}{(3,q+1)}\cdot (3,q-1)\cdot k_3\cdot (2,q-1)^2\cdot k_1^2\cdot k_2^2$ implies that $r,s$ are adjacent.
If $l=6$, then each element of order $s$ is contained in a maximal torus of order equal to either
$\frac{1}{(3,q+1)}(q^3+1)^2=(3,q+1)\cdot (q+1)^2\cdot k_6^2$, or $\frac{1}{(3,q+1)}(q^3+1)(q+1)(q-\epsilon_1)(q-\epsilon_2)=k_6(q+1)^2
(q-\epsilon_1)(q-\epsilon_2)$, or
$\frac{1}{(3,q+1)}(q^2-q+1)(q^3-\epsilon)(q-1)=(3,q+1)\cdot k_6\cdot(q^3-\epsilon)\cdot (q-1)$, or
$\frac{1}{(3,q+1)}(q^2-q+1)(q^2+q+1)^2=k_6\cdot (3,q-1)^2\cdot k_3^2$,
or $\frac{1}{(3,q+1)}(q^4-q^2+1)(q^2-q+1)=k_{12}\cdot k_6$.
Thus $r,s$ are non-adjacent if and only if $k=4$. If $l=8$, then every element of order $s$ is in a unique up to conjugation
maximal torus of order $\frac{1}{(3,q+1)}(q^4+1)(q^2-1)=\frac{1}{(3,q+1)}\cdot (2,q-1)^2\cdot k_8\cdot k_2\cdot k_1$. So $r,s$ are
non-adjacent if and only if either
$k\ge3$ and $k\not=8$, or $r=3$ and $(q+1)_3=3$. If $l=10$, then each element of order $s$ is in a maximal torus of order
$\frac{1}{(3,q+1)}(q^5+1)(q-\epsilon)=\frac{1}{(3,q+1)}\cdot k_{10}\cdot (q+1)\cdot(q-\epsilon)$. Hence $r,s$ are non-adjacent if and only if
$k\ge 3$, $k\not=10$.
If $l=12$, then every element of order $s$ is contained in a unique up to conjugation maximal torus of order
$\frac{1}{(3,q+1)}(q^4-q^2+1)(q^2-q+1)=k_{12}\cdot k_6$. Therefore $r,s$ are non-adjacent if and only if $k\not=6,12$. If, finally $l=18$, then each
element of order $s$ is
contained in a
unique up to conjugation maximal torus of order $\frac{1}{(3,q+1)}(q^6-q^3+1)=k_{18}$. Hence $r,s$ are non-adjacent if and only if~${k\not=18}$.
5. Since $\vert E_7(q)\vert=\frac{1}{(2,q-1)}q^{63}(q^2-1)(q^6-1)(q^8-1)(q^{10}-1)(q^{12}-1)(q^{14}-1)(q^{18}-1)$, the numbers $k,l$ are in
$\{1,2,3,4,5,6,7,8,9,10,12,14,18\}$. There exist maximal tori of orders
$\frac{1}{(2,q-1)}(q^5-1)(q^2+q+1)=\frac{1}{(2,q-1)}\cdot(3,q-1)\cdot(5,q-1)\cdot(q-1)\cdot k_5\cdot k_3$,
$\frac{1}{(2,q-1)}(q^4-1)(q^3-1)=(2,q-1)\cdot k_4\cdot k_1\cdot
k_2\cdot k_3\cdot (3,q-1)\cdot (q-1)$ and $\frac{1}{(2,q-1)}(q^5-1)(q^2-1)=k_5\cdot k_2\cdot k_1\cdot (5,q-1)\cdot (q-1)$, so for $l\le 5$
and
$(k,l)\not=(4,5)$
the numbers $r,s$ are adjacent. Since for $l=5$ every element $g$ of order $s$ is contained in a maximal torus of order either
$\frac{1}{(2,q-1)}(q^5-1)(q-1)(q-\epsilon)$, or $\frac{1}{(2,q-1)}(q^5-1)(q^2+q+1)=\frac{1}{(2,q-1)}\cdot(3,q-1)\cdot(5,q-1)\cdot(q-1)\cdot
k_5\cdot k_3$, we obtain that $r,s$ are non-adjacent if and only if $(k,l)=(4,5)$. If $l=6$, then the existence of maximal tori of orders
$\frac{1}{(2,q-1)}(q^3+1)(q^4-1)=(2,q-1)\cdot(3,q+1)\cdot k_1\cdot k_2\cdot k_4 \cdot k_6\cdot (q+1)$ and
$\frac{1}{(2,q-1)}(q^6-1)(q-1)=(3,q^2-1) \cdot k_6\cdot k_3\cdot k_2\cdot k_1\cdot (q-1)$
implies that for $k\le 4$ and for $k=6$ the numbers $r,s$ are adjacent.
Every element of order $s$ is in a maximal torus of order equal to either
$\frac{1}{(2,q-1)}(q^3+1)(q^2+1)(q-\epsilon_1)(q-\epsilon_2)=(3,q+1)\cdot (q+1)\cdot k_6\cdot k_4\cdot (q-\epsilon_1)\cdot(q-\epsilon_2)$
with $(\epsilon_1,\epsilon_2)\not=(-1,-1)$, or
$\frac{1}{(2,q-1)}(q^3+1)(q-\epsilon_1)(q-\epsilon_2)(q-\epsilon_3)(q-\epsilon_4)=
\frac{1}{(2,q-1)}\cdot (3,q+1)\cdot k_6\cdot(q+1)(q-\epsilon_1)(q-\epsilon_2)(q-\epsilon_3)(q-\epsilon_4)$,
or $\frac{1}{(2,q-1)}(q^3+1)(q^3-\epsilon_1)(q-\epsilon_2)$, or
$\frac{1}{(2,q-1)}(q^2-q+1)^3(q+1)=\frac{1}{(2,q-1)}\cdot(3,q+1)^3\cdot k_6^3\cdot(q+1)$, or
$\frac{1}{(2,q-1)}(q^5+1)(q^2-q+1)=\frac{1}{(2,q-1)}\cdot(3,q+1)\cdot(5,q+1)\cdot k_6\cdot k_{10}\cdot(q+1)$,
or $\frac{1}{(2,q-1)}(q^3-1)(q^2-q+1)^2=\frac{1}{(2,q-1)}\cdot(q-1)\cdot (3,q-1)\cdot k_3\cdot (3,q+1)^2\cdot k_6^2$. Since for
$k=5$ the prime $r$ divides none of these orders, we obtain that $r,s$ are non-adjacent if and only if $k=5$. If $l=7$, then each element
of order $s$ is in a unique up to conjugation maximal torus of order $\frac{1}{(2,q-1)}(q^7-1)=\frac{1}{(2,q-1)}\cdot (7,q-1)\cdot k_7\cdot(q-1)$.
Hence
$r,s$ are non-adjacent if and only if $k\not=1,7$. If $l=8$, then every element of order $s$ is in a maximal torus of
order
$\frac{1}{(2,q-1)}(q^4+1)(q^2-\epsilon_1)(q-\epsilon_2)=k_8\cdot(q^2-\epsilon_1)(q-\epsilon_2)$. Hence $r,s$ are non-adjacent if and only if
$k\ge 3$ and $k\not=4$. If $l=9$, then an element of order $s$ is contained
in a unique up to conjugation maximal torus of order $\frac{1}{(2,q-1)}(q-1)(q^6+q^3+1)=\frac{1}{(2,q-1)}\cdot(q-1)\cdot(3,q-1)\cdot k_9$.
Therefore $r,s$ are non-adjacent if and
only if $k\not=1,9$. If $l=10$, then an element of order $s$ is contained in a maximal torus of order equal to either
$\frac{1}{(2,q-1)}(q^5+1)(q-1)(q-\epsilon)=(2,q-1)\cdot (5,q+1) \cdot k_{10}\cdot k_2\cdot k_1\cdot (q-\epsilon)$, or
$\frac{1}{(2,q-1)}(q^5+1)(q^2-q+1)=\frac{1}{(2,q-1)}\cdot (5,q+1)\cdot (q+1)\cdot k_{10}\cdot(3,q+1)\cdot k_6$.
So $r,s$ are non-adjacent if and only if
$k\ge3$ and $k\not=6$. If $l=12$, then each element of order $s$ is contained in a maximal torus
of order $\frac{1}{(2,q-1)}(q^3-\epsilon)(q^4-q^2+1)=\frac{1}{(2,q-1)}\cdot (q^3-\epsilon)\cdot k_{12}$.
Hence $r,s$ are non-adjacent if and only if $k\ge 4$ and $k\not=6,12$. If
$l=14$, then an element of order $s$ is contained in a unique up to conjugation maximal torus of order
$\frac{1}{(2,q-1)}(q^7+1)=\frac{1}{(2,q-1)}\cdot(7,q+1)\cdot k_{14}\cdot(q+1)$.
Therefore $r,s$ are non-adjacent if and only if $k\not=2,14$. If, finally, $l=18$,
then an element of order $s$ is contained in a unique up to conjugation maximal torus of order
$\frac{1}{(2,q-1)}(q+1)(q^6-q^3+1)=\frac{1}{(2,q-1)}\cdot(3,q+1)\cdot (q+1)\cdot k_{18}$. Therefore $r,s$ are non-adjacent if and only if~${k\not=2,18}$.
6. Since $\vert
E_8(q)\vert=q^{120}(q^2-1)(q^8-1)(q^{12}-1)(q^{14}-1)(q^{18}-1)(q^{20}-1)(q^{24}-1)(q^{30}-1)$, the
numbers $k,l$ are in the set $\{1,2,3,4,5,6,7,8,9,10,12,14,15,18,20,24,30\}$. Since $G$ contains
maximal tori of orders $(q^3-\epsilon_1)(q^4-1)(q-\epsilon_2)$,
$(q^5-1)(q^2+1)(q+1)=(5,q-1)\cdot k_5\cdot (2,q-1)^2\cdot k_4\cdot k_2\cdot k_1$, and
$(q^5-1)(q^3-1)=(3,q-1)\cdot (5,q-1)\cdot k_5\cdot k_3\cdot (q-1)^2$, for $l\le 6$ the primes
$r,s$ are adjacent if $(k,l)\not=(5,6)$. If $k=5$, then every element $g$ of order $r$ is contained in a maximal torus of order equal to
either $(q^5-1)(q^3-1)=(3,q-1)\cdot (5,q-1)\cdot k_5\cdot k_3\cdot (q-1)^2$, or $(q^5-1)(q^2+1)(q+1)=(5,q-1)\cdot k_5\cdot (2,q-1)^2\cdot
k_4\cdot k_2\cdot k_1$, or $(q^5-1)(q^2-1)(q-\epsilon)=(5,q-1)\cdot k_5\cdot (2,q-1)\cdot k_2\cdot k_1\cdot (q-1)\cdot (q-\epsilon)$, or
$(q^5-1)(q-1)^3$, or $(q^4+q^3+q^2+q+1)^2$, and for $l=6$ none of these orders is divisible by $s$. It follows that if $(k,l)=(5,6)$, then
$r,s$ are
non-adjacent. If $l=7$, then every element of order $s$ is contained in a maximal torus of
order $(q^7-1)(q-\epsilon)=(7,q-1)\cdot k_7\cdot (q-1)(q-\epsilon)$. So $r,s$ are non-adjacent if
and only if $k\ge3$ and $k\not=7$. If $l=8$, then an element $g$ of order $s$ is contained in a
maximal torus of order equal to either $(q^4+1)(q^4-\epsilon)=(2,q-1)\cdot k_8\cdot(q^4-\epsilon)$,
or $(q^4+1)(q^3-\epsilon_1)(q-\epsilon_2)=(2,q-1)\cdot k_8\cdot(q^3-\epsilon_1)(q-\epsilon_2)$ with $(\epsilon_1,\epsilon_2)\not=(-1,-1)$,
or
$(q^4+1)(q^2-1)^2=(2,q-1)\cdot k_8\cdot(q^2-1)^2$, or $(q^4+1)(q^2-\epsilon_1)
(q-\epsilon_2)^2=(2,q-1)\cdot k_8\cdot(q^2-\epsilon_1)\cdot (q-\epsilon_2)^2$. Hence $r,s$ are non-adjacent if and
only if $k=5,7$. If $l=9$, then an element of order $s$ is contained in a maximal torus of order
equal to either $(q^6+q^3+1)(q-1)(q-\epsilon)=(3,q-1)\cdot k_9\cdot(q-1)\cdot(q-\epsilon)$, or
$(q^6+q^3+1)(q^2+q+1)=(3,q-1)^2\cdot k_9\cdot k_3$. Hence $r,s$ are non-adjacent if and only if
$k\ge4$ and $k\not=9$. If $l=10$, then every element of order $s$ is contained in a maximal torus of
order either $(q^5+1)(q^2-\epsilon_1)(q-\epsilon_2)=(5,q+1)\cdot k_{10}\cdot
(q+1)(q^2-\epsilon_1)(q-\epsilon_2)$ with $(\epsilon_1,\epsilon_2)\not=(-1,-1)$, or $(q^5+1)(q^3+1)=(5,q+1)\cdot k_{10}\cdot
(q+1)^2\cdot (3,q+1)\cdot k_6$, or $(q^5+1)(q^2-q+1)(q-1)=(5,q+1)\cdot k_{10}\cdot
(3,q+1)\cdot k_6\cdot(2,q-1)\cdot k_1\cdot k_2 $, or $(q^5+1)(q+1)^3=(5,q+1)\cdot k_{10}\cdot
(q+1)(q+1)^3$, or $(q^4-q^3+q^2-q+1)^2=((5,q+1)\cdot k_{10})^2$. Hence $r,s$ are
non-adjacent if and only if $k\ge3$ and $k\not=4,6,10$. If $l=12$, then each element of order $s$ is
contained in a maximal torus of order equal to either $(q^4-q^2+1)(q^2+1)(q^2-\epsilon)=(2,q-1)\cdot
k_{12}\cdot k_4\cdot(q^2-\epsilon)$, or $(q^4-q^2+1)(q^2+1)(q-\epsilon)^2=(2,q-1)\cdot k_{12}\cdot
k_4\cdot(q-\epsilon)^2$, or $(q^4-q^2+1)(q^3-\epsilon_1) (q-\epsilon_2)=k_{12}\cdot
(q^3-\epsilon_1)\cdot (q-\epsilon_2)$, or $(q^4-q^2+1)(q^2+q+1)^2=(3,q-1)^2\cdot k_{12}\cdot
k_3^2$, or $(q^4-q^2+1)(q^2-q+1)^2=(3,q+1)^2\cdot k_{12}\cdot k_6^2$. Hence $r,s$ are non-adjacent
if and only if $k\ge5$ and $k\not=6,12$. If $l=14$, then an element of order $s$ is contained in a
maximal torus of order $(q^7+1)(q-\epsilon)=(7,q+1)\cdot k_{14}\cdot(q+1)\cdot(q-\epsilon)$.
Therefore $r,s$ are non-adjacent if and only if $k\ge3$ and $k\not=14$. If $l=15,24,30$, then each
element of order $s$ is contained in a unique up to conjugation maximal torus of order $k_{l}$. So
$r,s$ are non-adjacent if and only if $k\not=l$. If $l=18$, then an element of order $s$ is
contained in a maximal torus of order equal to either $(q^6-q^3+1)(q+1)(q-\epsilon)=(3,q+1)\cdot
k_{18}\cdot(q+1)\cdot(q-\epsilon)$, or $(q^6-q^3+1)(q^2-q+1)=(3,q+1)^2\cdot k_{18}\cdot k_6$. Hence
$r,s$ are non-adjacent if and only if $k\ge 3$ and $k\not=6,18$. If $l=20$, then every element of
order $s$ is contained in a unique up to conjugation maximal torus of order
$q^8-q^6+q^4-q^2+1=(5,q^2+1)\cdot k_{20}$. So $r,s$ are non-adjacent if and only if $r\cdot
k\not=20$ (i.~e., $r\not=5$ or $k\not=4$) and $k\not=20$.
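Many of the torus orders appearing above are values of cyclotomic polynomials $\Phi_i$ at $q$ (for instance, $q^4-q^2+1=\Phi_{12}(q)$ and $q^8-q^6+q^4-q^2+1=\Phi_{20}(q)$). The following Python sketch, which evaluates $\Phi_n(q)$ by M\"obius inversion, verifies these identities numerically; the helper names are ad hoc, and the check is an illustration only, not part of the proof.

```python
from fractions import Fraction

def mobius(n):
    # Moebius function via trial-division factorization
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def cyclotomic_value(n, q):
    # Phi_n(q) = prod over d | n of (q^d - 1)^mobius(n/d)
    value = Fraction(1)
    for d in range(1, n + 1):
        if n % d == 0:
            value *= Fraction(q**d - 1) ** mobius(n // d)
    assert value.denominator == 1
    return int(value)

# The "unique" torus orders quoted in the proof are cyclotomic values:
for q in range(2, 30):
    assert cyclotomic_value(9, q) == q**6 + q**3 + 1
    assert cyclotomic_value(10, q) == q**4 - q**3 + q**2 - q + 1
    assert cyclotomic_value(12, q) == q**4 - q**2 + 1
    assert cyclotomic_value(18, q) == q**6 - q**3 + 1
    assert cyclotomic_value(20, q) == q**8 - q**6 + q**4 - q**2 + 1
```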
7. Since $\vert {}^3D_4(q)\vert=q^{12}(q^2-1)(q^6-1)(q^8+q^4+1)$, the numbers $k,l$ are in the set $\{1,2,3,6,12\}$. Since
$G$ contains maximal tori of orders $(q^3-\epsilon_1)(q-\epsilon_2)$, for $l\le 3$ the primes $r,s$ are adjacent. If $l=6$, then each element of
order $s$ is in a maximal torus of order $(q^3+1)(q-\epsilon)=(3,q+1)\cdot k_6\cdot (q+1)\cdot (q-\epsilon)$.
Hence $r,s$ are non-adjacent if and only if $k=3$.
If $l=12$, then an
element of order $s$ is contained in a unique up to conjugation maximal torus of order $q^4-q^2+1=k_{12}$ and $r,s$ are non-adjacent if and only
if~${k\not=12}$.
\end{proof}
Now we consider simple Suzuki and Ree groups.
\begin{lem}\label{SuzReeDivisors} Let $n$ be a natural number.
\noindent {\em 1.} Let
$m_1(B,n)=2^{2n+1}-1$,
$m_2(B,n)=2^{2n+1}-2^{n+1}+1$,
$m_3(B,n)=2^{2n+1}+2^{n+1}+1$.
Then $(m_i(B,n),m_j(B,n))=1$ if~$i\not=j$.
\noindent {\em 2.} Let $m_1(G,n)=3^{2n+1}-1$,
$m_2(G,n)=3^{2n+1}+1$,
$m_3(G,n)=3^{2n+1}-3^{n+1}+1$,
$m_4(G,n)=3^{2n+1}+3^{n+1}+1$.
Then $(m_1(G,n), m_2(G,n))=2$ and $(m_i(G,n),m_j(G,n))=1$ otherwise.
\noindent {\em 3.} Let $m_1(F,n)=2^{2n+1}-1$,
$m_2(F,n)=2^{2n+1}+1$,
$m_3(F,n)=2^{4n+2}+1$,
$m_4(F,n)=2^{4n+2}-2^{2n+1}+1$,
$m_5(F,n)=2^{4n+2}-2^{3n+2}+2^{2n+1}-2^{n+1}+1$,
$m_6(F,n)=2^{4n+2}+2^{3n+2}+2^{2n+1}+2^{n+1}+1$.
Then $(m_2(F,n),m_4(F,n))=3$ and $(m_i(F,n),m_j(F,n))=1$ otherwise.
\end{lem}
\begin{proof} Items (1) and (2) coincide with items (1) and (2) of \cite[Lemma~1.5]{VasVd}. Item (3) is
corrected in accordance with Lemma~\ref{toriofexcptgrps}.
\end{proof}
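The pairwise greatest common divisors stated in Lemma~\ref{SuzReeDivisors} are easy to confirm numerically for small $n$. The Python sketch below (an illustration, not part of the proof; the function names $m\_B$, $m\_G$, $m\_F$ are ad hoc) encodes the numbers $m_i(B,n)$, $m_i(G,n)$, $m_i(F,n)$ exactly as displayed in the lemma.

```python
from math import gcd

def m_B(i, n):
    # the numbers m_i(B, n) from item 1 (Suzuki groups ^2B_2(2^{2n+1}))
    q, s = 2**(2*n + 1), 2**(n + 1)
    return [q - 1, q - s + 1, q + s + 1][i - 1]

def m_G(i, n):
    # the numbers m_i(G, n) from item 2 (Ree groups ^2G_2(3^{2n+1}))
    q, s = 3**(2*n + 1), 3**(n + 1)
    return [q - 1, q + 1, q - s + 1, q + s + 1][i - 1]

def m_F(i, n):
    # the numbers m_i(F, n) from item 3 (Ree groups ^2F_4(2^{2n+1})),
    # written with q = 2^{2n+1} and s = 2^{n+1}
    q, s = 2**(2*n + 1), 2**(n + 1)
    return [q - 1, q + 1, q*q + 1, q*q - q + 1,
            q*q - s*q + q - s + 1, q*q + s*q + q + s + 1][i - 1]

for n in range(1, 7):
    for i in range(1, 4):
        for j in range(i + 1, 4):
            assert gcd(m_B(i, n), m_B(j, n)) == 1
    for i in range(1, 5):
        for j in range(i + 1, 5):
            expected = 2 if (i, j) == (1, 2) else 1
            assert gcd(m_G(i, n), m_G(j, n)) == expected
    for i in range(1, 7):
        for j in range(i + 1, 7):
            expected = 3 if (i, j) == (2, 4) else 1
            assert gcd(m_F(i, n), m_F(j, n)) == expected
```

In particular, item (3) of the lemma is visible already for $n=1$: here $m_2(F,1)=9$ and $m_4(F,1)=57$, with greatest common divisor $3$.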
If $G$ is a Suzuki or a Ree group over a field of order $q$, then denote by $S_i(G)$ the set $\pi(m_i(B,n))$ for
$G={}^2B_2(2^{2n+1})$, the set $\pi(m_i(G,n))\setminus\{2\}$ for $G={}^2G_2(3^{2n+1})$, and the set
$\pi(m_i(F,n))\setminus\{3\}$ for $G={}^2F_4(2^{2n+1})$. If $G$ is fixed, then we put $S_i=S_i(G)$,
and denote by $s_i$ any prime from~$S_i$.
\begin{prop}\label{adjsuzree}
Let $G$ be a finite simple Suzuki or Ree group over a field of characte\-ristic~$p$, let $r,s$ be
odd primes with $r,s\in\pi(G)\setminus\{p\}$. Then $r,s$ are non-adjacent if and only if one of the
following holds:
\begin{itemize}
\item[{\em 1.}] $G={^2B_2(2^{2n+1})}$, $r\in S_k(G)$, $s\in S_l(G)$ and~$k\not=l$.
\item[{\em 2.}] $G={^2G_2(3^{2n+1})}$, $r\in S_k(G)$, $s\in S_l(G)$ and~$k\not=l$.
\item[{\em 3.}] $G={^2F_4(2^{2n+1})}$, either $r\in S_k(G)$, $s\in S_l(G)$, $k\not=l$, and
$\{k,l\}\neq\{1,2\},\{1,3\}$; or $r=3$ and $s\in S_l(G)$, where $l\in\{3,5,6\}$.
\end{itemize}
\end{prop}
\begin{proof}
Follows from \cite[Lemma~1.3]{VasVd}, Lemma \ref{toriofexcptgrps}, and Lemma \ref{SuzReeDivisors}.
\end{proof}
\section{Cocliques for groups of Lie type}
Let $G$ be a finite simple group of Lie type with the base field of order $q$ and characteristic~$p$.
Every $r\in\pi(G)\setminus\{p\}$ is known to be a primitive prime divisor of $q^i-1$ for some $i$,
where $i$ is bounded by some function depending on the Lie rank of~$G$. Given a finite simple group
$G$ of Lie type, define a set $I(G)$ as follows. If $G$ is neither a Suzuki, nor a Ree group, then $i\in I(G)$
if and only if $\pi(G)\cap R_i(q)\not=\varnothing$. If $G$ is either a Suzuki or a Ree group, then $i\in
I(G)$ if and only if $\pi(G)\cap S_i(G)\not=\varnothing$. Notice that if $\pi(G)\cap R_i(q)\not=\varnothing$ (resp. $\pi(G)\cap
S_i(G)\not=\varnothing$), then $R_i(q)\subseteq\pi(G)$ (resp. $S_i(G)\subseteq\pi(G)$). Thus, the following
partition of $\pi(G)$ arises:
$$\pi(G)=\{p\}\cup\bigcup_{i\in I(G)}R_i,$$
or $$\pi(G)=\{2\}\cup\bigcup_{i\in I(G)}S_i$$ in case of Suzuki groups, or
$$\pi(G)=\{2\}\cup\{3\}\cup\bigcup_{i\in I(G)}S_i$$ in case of Ree groups.
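For a concrete illustration, the classes $R_i(q)$ can be computed directly from their defining property: a prime lies in $R_i(q)$ if it divides $q^i-1$ but divides no $q^j-1$ with $j<i$. The naive Python sketch below ignores the refined convention for the placement of the prime $2$ used in the paper, but it already exhibits the Zsigmondy exceptions $R_1(2)=R_6(2)=\varnothing$ that are invoked repeatedly in the proofs; it is an illustration only.

```python
def prime_factors(n):
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def R(i, q):
    # primes dividing q^i - 1 but no q^j - 1 with j < i
    # (naive version; the paper refines the placement of the prime 2)
    primes = prime_factors(q**i - 1)
    for j in range(1, i):
        primes -= prime_factors(q**j - 1)
    return primes

# Zsigmondy exceptions used throughout the proofs:
assert R(1, 2) == set()   # q - 1 = 1
assert R(6, 2) == set()   # 2^6 - 1 = 63 = 7 * 9, with 7 in R(3,2) and 3 in R(2,2)
assert R(3, 2) == {7} and R(12, 2) == {13}
```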
As follows from the adjacency criterion, two distinct primes from the same class of the partition
are always adjacent. Moreover, in most cases the answer to the question of whether two primes from
distinct classes $R_i$ and $R_j$ (or $S_i$ and $S_j$) of the partition are adjacent depends only
on the choice of the indices $i$ and $j$. We formalize this observation in the following
definitions.
\begin{df}\label{MG} Suppose $G$ is a finite simple group of Lie type with the base field of order $q$ and characteristic $p$, and $G$
is not
isomorphic to ${}^2B_2(2^{2m+1})$, ${}^2G_2(3^{2m+1})$, ${}^2F_4(2^{2m+1})$, and
$A^\varepsilon_2(q)$. Then define the set $M(G)$ to be a subset of $I(G)$ such that $i\in M(G)$ if and only if the
intersection of $R_i$ and every coclique of maximal size of $GK(G)$ is nonempty.
\end{df}
\begin{df}\label{MGSR} If $G={}^2B_2(2^{2m+1})$ or ${}^2G_2(3^{2m+1})$, $m\geqslant1$, then put $M(G)=I(G)$. If
$G={}^2F_4(2^{2m+1})$, $m\geqslant2$, then put $M(G)=\{2,3,4,5,6\}$. If $G={}^2F_4(8)$, then put
$M(G)=\{5,6\}$.
\end{df}
\begin{df}\label{theta}
Suppose $G$ is a finite simple group of Lie type with the base field of order $q$ and characteristic $p$, and $G$ is not
isomorphic to ${}^2B_2(2^{2m+1})$, ${}^2G_2(3^{2m+1})$, ${}^2F_4(2^{2m+1})$, and
$A^\varepsilon_2(q)$. A set $\Theta(G)$ consists of all subsets $\theta(G)$ of $\pi(G)$ satisfying
the following conditions:
(a) $p$ lies in $\theta(G)$ if and only if $p$ lies in every coclique of maximal size of $GK(G)$;
(b) for every $i\in M(G)$ exactly one prime from $R_i$ lies in $\theta(G)$.
\end{df}
\begin{df}\label{thetaS}
Let $G={}^2B_2(2^{2m+1})$. A set $\Theta(G)$ consists of all subsets $\theta(G)$ of $\pi(G)$
satisfying the following conditions:
(a) $p=2$ lies in $\theta(G);$
(b) for every $i\in M(G)$ exactly one prime from $S_i$ lies in $\theta(G)$.
\end{df}
\begin{df}\label{thetaR3}
Let $G={}^2G_2(3^{2m+1})$. A set $\Theta(G)$ consists of all subsets $\theta(G)$ of $\pi(G)$
satisfying the following conditions:
(a) $p=3$ lies in $\theta(G);$
(b) for every $i\in M(G)$ exactly one prime from $S_i$ lies in $\theta(G).$
\end{df}
\begin{df}\label{thetaR2}
Let $G={}^2F_4(2^{2m+1})$, $m\geqslant1$. A set $\Theta(G)$ consists of all subsets $\theta(G)$ of
$\pi(G)$ satisfying the following condition:
(a) for every $i\in M(G)$ exactly one prime from $S_i$ lies in $\theta(G).$
\end{df}
\begin{df}\label{theta2A} Let $G=A^{\varepsilon}_2(q)$, and $(q,\varepsilon)\neq(2,-)$. If $q+\varepsilon1\neq2^k$, then put
$M(G)=\{\nu_{\varepsilon}(2),\nu_{\varepsilon}(3)\}$, and if $q+\varepsilon1=2^k$, then
$M(G)=\{\nu_{\varepsilon}(3)\}$. A set $\Theta(G)$ consists of all subsets $\theta(G)$ of $\pi(G)$
satisfying the following conditions.
(1) $p$ lies in $\theta(G)$ if and only if $q+\varepsilon1\neq2^k;$
(2) if $(q-\varepsilon1)_3=3$, then $3\in\theta(G)$.
(3) for every $i\in M(G)$ exactly one prime from $R_{\nu_{\varepsilon}(i)}$ lies in $\theta(G)$,
excepting one case: if $2\in R_{\nu_{\varepsilon}(2)}$, then $2$ does not lie in $\theta(G)$.
\end{df}
\textsl{Remark.} The function $\nu_{\varepsilon}$ is defined in~\eqref{nuepsilon(n)}.
\begin{df}\label{Thetaprime} Let $G$ be a finite simple group of Lie type. The subset $\theta'(G)$
of $\pi(G)$ is an element of $\Theta'(G)$, if for every $\theta(G)\in\Theta(G)$ the union
$\rho(G)=\theta(G)\cup\theta'(G)$ is a coclique of maximal size in $GK(G)$.
\end{df}
Now we describe cocliques of maximal size for groups of Lie type. First we consider classical
groups postponing groups $A_1(q)$, $A_2^{\varepsilon}(q)$ to the end of the section.
\begin{prop}\label{cocliqueAE4} If $G$ is one of finite simple groups $A_{n-1}(q)$, ${}^2A_{n-1}(q)$
with the base field of characteristic~$p$ and order $q$, and $n\geqslant4$, then $t(G)$, and the sets $\Theta(G)$,
$\Theta'(G)$ are listed in Table~{\em\ref{LinearUnitaryTable}}.
\end{prop}
\begin{proof}
It is obvious that the function $\nu_{\varepsilon}$ defined in~\eqref{nuepsilon(n)} is a bijection on $\mathbb{N}$, so
$\nu_{\varepsilon}^{-1}$
is well defined. Moreover, since $\nu_{\varepsilon}^2$ is the identity map, we have
$\nu_{\varepsilon}^{-1}=\nu_{\varepsilon}$.
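The involution property is also immediate to check numerically. The sketch below is an illustration under an assumption: since \eqref{nuepsilon(n)} lies outside the present fragment, we assume the standard definition $\nu_{-}(i)=2i$ for odd $i$, $\nu_{-}(i)=i/2$ for $i\equiv2\pmod4$, and $\nu_{-}(i)=i$ for $i\equiv0\pmod4$ (while $\nu_{+}$ is the identity).

```python
def nu_minus(i):
    # assumed definition of nu_{-} (the case epsilon = -):
    # 2i for odd i, i/2 for i = 2 mod 4, i for i = 0 mod 4
    if i % 2 == 1:
        return 2 * i
    if i % 4 == 2:
        return i // 2
    return i

# nu_{-} is an involution on the natural numbers,
# hence a bijection equal to its own inverse
for i in range(1, 1000):
    assert nu_minus(nu_minus(i)) == i
```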
Using Zsigmondy's theorem and information on the orders of the groups $A^{\varepsilon}_{n-1}(q)$, we
obtain that a number $i$ lies in $I(G)$ if and only if the following conditions hold:
(a) $\nu_{\varepsilon}(i)\leqslant n$;
(b) $i\neq1$ for $q=2,3$, and $i\neq6$ for $q=2$.
By \cite[Propositions~2.1, 2.2, 4.1, and~4.2]{VasVd} two distinct primes from $R_i$ are adjacent for every~${i\in I(G)}$.
Denote by $N(G)$ the set $\{i\in I(G)\mid n/2<\nu_{\varepsilon}(i)\leqslant n\}$ and by $\chi$ any
set of type $\{r_i\mid i\in N(G)\}$ such that $|\chi\cap R_i|=1$ for all $i\in N(G)$. Note that
$1,2$ cannot lie in $N(G)$, because $n\geqslant4$. In particular, $2$ does not lie in any $\chi$.
Let $i\neq j$ and $n/2<\nu_{\varepsilon}(i),\nu_{\varepsilon}(j)\leqslant n$. Then
$\nu_{\varepsilon}(i)+\nu_{\varepsilon}(j)>n$ and $\nu_{\varepsilon}(i)$ does not divide
$\nu_{\varepsilon}(j)$. By \cite[Propositions~2.1,~2.2]{VasVd}, primes $r_i$ and $r_j$ are not
adjacent. Thus, every $\chi$ forms a coclique of $GK(G)$.
Denote by $\xi$ the set
$$\{p\}\cup\bigcup_{i\in I(G)\setminus N(G)}R_i.$$ By
\cite[Propositions 2.1, 2.2, 3.1, 4.1, 4.2]{VasVd} every two distinct primes from $\xi$ are
adjacent in $GK(G)$. Thus, every coclique of $GK(G)$ contains at most one prime from~$\xi$.
\textbf{Case 1.} Let $n\geqslant7$.
If $q=2$ and $G=A_{n-1}(q)$, we first assume that $n\geqslant13$ in order to avoid the exceptions
arising because $R_6=\varnothing$ for $q=2$.
The conditions on $n$ imply that $|N(G)|\geqslant4$. By \cite[Proposition~3.1]{VasVd}, we have
that $t(p,G)\leqslant3$, so $p$ cannot lie in any coclique of maximal size. By
\cite[Propositions~4.1, 4.2]{VasVd}, the same assertion is true for any primitive prime divisor
$r_i$ with $\nu_{\varepsilon}(i)=1$. Thus, in deciding whether a prime $r$ lies in a
coclique of maximal size of $GK(G)$, we may assume that $r$ is neither the characteristic nor a
divisor of $q-\varepsilon1$. Hence \cite[Propositions~2.1 and~2.2]{VasVd} will be the main
technical tools.
Suppose that $n=2t+1$ is odd. If $\nu_{\varepsilon}(i)\leqslant n/2$, then there exist at least two
distinct numbers $j,k$ from $N(G)$ such that $r_i$ is adjacent to $r_j$ and $r_k$. Indeed, if
$\nu_{\varepsilon}(i)<t$, then we take $j$ and $k$ such that $\nu_{\varepsilon}(j)=t+1$ and
$\nu_{\varepsilon}(k)=t+2$, while if $\nu_{\varepsilon}(i)=t$, then we take $j$ and $k$ such that
$\nu_{\varepsilon}(j)=t+1$ and $\nu_{\varepsilon}(k)=2t$. Thus, $M(G)=N(G)$, every
$\theta(G)\in\Theta(G)$ is of type $\{r_i\mid n/2<\nu_{\varepsilon}(i)\leqslant n\}$,
$\Theta'(G)=\varnothing$, and $t(G)=t+1=[(n+1)/2]$.
Suppose that $n=2t$ is even. If $\nu_{\varepsilon}(i)< n/2$, then there exist at least two distinct
numbers $j,k$ from $N(G)$ such that $r_i$ is adjacent to $r_j$ and $r_k$. It suffices to take
$j$ and $k$ such that $\nu_{\varepsilon}(j)=t+1$, and $\nu_{\varepsilon}(k)=t+2$ if
$\nu_{\varepsilon}(i)<t-1$, or $\nu_{\varepsilon}(k)=2t-2$ if $\nu_{\varepsilon}(i)=t-1$. On the
other hand, if $\nu_{\varepsilon}(i)=t=n/2$, then $r_i$ is adjacent to $r_j$, where
$\nu_{\varepsilon}(j)=2t=n$, and is non-adjacent to every $r_k$, where $k\in N(G)$ and $k\neq j$.
Thus, $M(G)=N(G)\setminus\{\nu_{\varepsilon}(n)\}$, every $\theta(G)\in\Theta(G)$ is of type
$\{r_i\mid n/2<\nu_{\varepsilon}(i)<n\}$, and $\Theta'(G)$ consists of one-element sets of type
$\{r_{\nu_{\varepsilon}(n/2)}\}$ or $\{r_{\nu_{\varepsilon}(n)}\}$. Hence, $t(G)=t=[(n+1)/2]$.
It remains to consider the cases $q=2$, $G=A_{n-1}(q)$, and $7\leqslant n\leqslant12$. All
results (see Table~\ref{LinearUnitaryTable}) are obtained by arguments similar to those in the
general case, taking into account that $R_6=\varnothing$, and can easily be verified using
\cite[Propositions 2.1, 2.2, 3.1, 4.1, 4.2]{VasVd}. The most interesting case arises when $n=8$.
In that case $\Theta(G)$ consists of one-element sets $\theta(G)$ of type $\{r_7\}$, while
$\Theta'(G)$ consists of two-element sets $\theta'(G)$ of types $\{p,r_8\}$, $\{r_4,r_5\}$, $\{r_3,r_8\}$, or
$\{r_5,r_8\}$.
\textbf{Case 2.} Let $n=6$.
First, we assume that $q\neq2$. Then
$N(G)=\{\nu_{\varepsilon}(4),\nu_{\varepsilon}(5),\nu_{\varepsilon}(6)\}$, and $|N(G)|=3$.
Therefore, a set of type
$\{r_{\nu_{\varepsilon}(4)},r_{\nu_{\varepsilon}(5)},r_{\nu_{\varepsilon}(6)}\}$ forms a coclique
in $GK(G)$, and $t(G)\geqslant3$. Arguing as in the previous case, we obtain that any prime
$r_{\nu_{\varepsilon}(3)}$ is adjacent to $r_{\nu_{\varepsilon}(6)}$, and a set of type
$\{r_{\nu_{\varepsilon}(3)},r_{\nu_{\varepsilon}(4)},r_{\nu_{\varepsilon}(5)}\}$ is a coclique. By
\cite[Proposition~3.1]{VasVd}, we have that a set of type
$\{p,r_{\nu_{\varepsilon}(5)},r_{\nu_{\varepsilon}(6)}\}$ is a coclique, and $p$ is adjacent to any
prime $r_{\nu_{\varepsilon}(4)}$. If $\nu_{\varepsilon}(i)=2$, then $r_i$ is adjacent to $p$, and
is non-adjacent to $r_j$ if and only if $\nu_{\varepsilon}(j)=5$, so $t(r_i,G)=2$. Let $r$ be a
divisor of $q-\varepsilon1$. If $r\neq3$ or $(q-\varepsilon1)_3\neq3$, then \cite[Propositions~4.1,
4.2]{VasVd} imply that $t(r,G)=2$, while if $r=3$ and $(q-\varepsilon1)_3=3$, we have that
$t(3,G)=3$ and a set of type $\{3,r_{\nu_{\varepsilon}(5)},r_{\nu_{\varepsilon}(6)}\}$ is a
coclique in $GK(G)$. Thus, if $q\neq2$, then $M(G)=\{\nu_{\varepsilon}(5)\}$, and every
$\theta(G)\in\Theta(G)$ is of type $\{r_{\nu_{\varepsilon}(5)}\}$. Every $\theta'(G)\in\Theta'(G)$
is a two-element set of type $\{p,r_{\nu_{\varepsilon}(6)}\}$,
$\{r_{\nu_{\varepsilon}(3)},r_{\nu_{\varepsilon}(4)}\}$,
$\{r_{\nu_{\varepsilon}(4)},r_{\nu_{\varepsilon}(6)}\}$, and, if $(q-\varepsilon1)_3=3$, also of
type $\{3,r_{\nu_{\varepsilon}(6)}\}$.
Let $G=A_5(2)$. Since $R_6=R_1=\varnothing$, we have that every $\theta(G)\in\Theta(G)$ is of type
$\{r_{3},r_{4},r_{5}\}$, and $\Theta'(G)=\varnothing$.
Let $G={}^2A_5(2)$. Since $R_6=R_{\nu(3)}=\varnothing$ and $(q+1)_3=3$, we have that every
$\theta(G)\in\Theta(G)$ is of type $\{r_{3},r_{10}\}$, and every $\theta'(G)\in\Theta'(G)$ is a
one-element set of type $\{p\}$, $\{r_4\}$, or $\{3\}$.
In all cases $t(G)=3$.
\textbf{Case 3.} Let $n=5$.
We have $N(G)=\{\nu_{\varepsilon}(4),\nu_{\varepsilon}(5)\}$, and $|N(G)|=2$, so $t(G)\leqslant3$.
Assume now that $G\neq{}^2A_{4}(2)$. Then $R_{\nu_{\varepsilon}(3)}$ is always nonempty, and a set
of type $\{r_{\nu_{\varepsilon}(3)},r_{\nu_{\varepsilon}(4)},r_{\nu_{\varepsilon}(5)}\}$ is a
coclique in $GK(G)$. By \cite[Proposition~3.1]{VasVd}, we have that a set of type
$\{p,r_{\nu_{\varepsilon}(4)},r_{\nu_{\varepsilon}(5)}\}$ is also a coclique. A prime
$r_{\nu_{\varepsilon}(2)}$ is adjacent to $p$, and is non-adjacent to $r_j$ if and only if
$\nu_{\varepsilon}(j)=5$. Let $r$ be a divisor of $q-\varepsilon1$. If $r\neq5$ or
$(q-\varepsilon1)_5\neq5$, then \cite[Propositions~4.1, 4.2]{VasVd} imply that $t(r,G)=2$, while
if $r=5$ and $(q-\varepsilon1)_5=5$, we have that $t(5,G)=3$ and a set of type
$\{5,r_{\nu_{\varepsilon}(4)},r_{\nu_{\varepsilon}(5)}\}$ is a coclique in $GK(G)$. Thus, if
$G\neq{}^2A_{4}(2)$, then $M(G)=N(G)$, every $\theta(G)\in\Theta(G)$ is of type
$\{r_{\nu_{\varepsilon}(4)},r_{\nu_{\varepsilon}(5)}\}$. Every $\theta'(G)\in\Theta'(G)$ is a
one-element set of type $\{p\}$ or $\{r_{\nu_{\varepsilon}(3)}\}$, and, if $(q-\varepsilon1)_5=5$,
also of type $\{5\}$.
Let $G={}^2A_{4}(2)$. Since $R_6=R_{\nu(3)}=\varnothing$ and $(q+1)_5=1$, we have that every
$\theta(G)\in\Theta(G)$ is of type $\{p,r_{4},r_{10}\}$, and $\Theta'(G)=\varnothing$.
In all cases $t(G)=3$.
\textbf{Case 4.} Let $n=4$.
First, we assume that $G\neq{}^2A_{3}(2)$. Then
$N(G)=\{\nu_{\varepsilon}(3),\nu_{\varepsilon}(4)\}$, and $|N(G)|=2$, so $t(G)\leqslant3$. By
\cite[Proposition~3.1]{VasVd}, we have that a set of type
$\{p,r_{\nu_{\varepsilon}(3)},r_{\nu_{\varepsilon}(4)}\}$ is a coclique in $GK(G)$. A prime
$r_{\nu_{\varepsilon}(2)}$ is adjacent to $p$ and any prime $r_{\nu_{\varepsilon}(4)}$. If $r$ is
an odd prime divisor of $q-\varepsilon1$, then \cite[Propositions~4.1, 4.2]{VasVd} imply that
$t(r,G)=2$. The same assertion is true for $r=2$ if and only if $(q-\varepsilon1)_2\neq4$, while if
$(q-\varepsilon1)_2=4$ we have that $\{2,r_{\nu_{\varepsilon}(3)},r_{\nu_{\varepsilon}(4)}\}$ is a
coclique. Therefore, if $(q-\varepsilon1)_2\neq4$, then every $\theta(G)\in\Theta(G)$ is of type
$\{p,r_{\nu_{\varepsilon}(3)},r_{\nu_{\varepsilon}(4)}\}$, and $\Theta'(G)=\varnothing$. But if
$(q-\varepsilon1)_2=4$, then $M(G)=N(G)$, every $\theta(G)\in\Theta(G)$ is of type
$\{r_{\nu_{\varepsilon}(3)},r_{\nu_{\varepsilon}(4)}\}$, and $\Theta'(G)=\{\{2\},\{p\}\}$. Anyway,
$t(G)=3$.
Let $G={}^2A_{3}(2)$. Since $R_6=R_{\nu(3)}=\varnothing$ and $(q+1)_4=1$, we obtain that
$\Theta(G)=\{\{r_4\}\}$, and $\Theta'(G)=\{\{p\},\{r_2\}\}$. In this case, $t(G)=2$.
\end{proof}
\begin{prop}\label{cocliqueBC} If $G$ is one of finite simple groups $B_n(q)$, $C_n(q)$, $D_n(q)$ or ${}^2D_n(q)$
with the base field of characteristic~$p$ and order $q$, then $t(G)$, and the sets $\Theta(G)$, $\Theta'(G)$ are listed
in Table~{\em\ref{ClassicTable}}.
\end{prop}
\begin{proof} Using Zsigmondy's theorem and information on the orders of the groups under consideration, we obtain that
a number $i$ lies in $I(G)$ if and only if the following conditions hold:
(a) $\eta(i)\leqslant n$;
(b) $i\neq1$ for $q=2,3$, and $i\neq6$ for $q=2$;
(c) $i\neq2n$ for $G=D_n(q)$;
(d) $i\neq n$ for $G={}^2D_n(q)$ and $n$ odd.
By \cite[Propositions~4.3 and~4.4]{VasVd} and Propositions \ref{adjbn} and \ref{adjdn}, it follows that for every $i\in I(G)$ two distinct
primes from $R_i$ are adjacent.
Denote by $N(G)$ the set $\{i\in I(G)\mid n/2<\eta(i)\leqslant n\}$ and by $\chi$ any set of type
$\{r_i\mid i\in N(G)\}$ such that $|\chi\cap R_i|=1$ for all $i\in N(G)$. Let $i\neq j$ and
$n/2<\eta(i),\eta(j)\leqslant n$. We have $\eta(i)+\eta(j)>n$. Suppose that $i/j$ is an odd
natural number. Then $i$ and $j$ are of the same parity, so $\eta(i)/\eta(j)$ is also an odd
natural number. Since $i\neq j$, we have $\eta(i)\geqslant3\eta(j)>n$, contrary to the choice of~$i$. Thus
$i/j$ is not an odd number. By Propositions~\ref{adjbn} and~\ref{adjdn}, primes $r_i$ and $r_j$
are not adjacent. Thus, every $\chi$ forms a coclique of $GK(G)$.
Denote by $\xi$ the set
$$\{p\}\cup\bigcup_{i\in I(G)\setminus N(G)}R_i.$$ By Propositions~\ref{adjbn},~\ref{adjdn} and
\cite[Propositions 3.1, 4.3, 4.4]{VasVd} every two distinct primes from $\xi$ are adjacent in
$GK(G)$. Thus, every coclique of $GK(G)$ contains at most one prime from~$\xi$.
Now we determine cocliques of maximal size considering the groups of different types separately.
However, by \cite[Theorem~7.5]{VasVd}, we have $GK(B_n(q))=GK(C_n(q))$, and so analysis for groups
of types $B_n$ and $C_n$ is mutual.
\textbf{Case 1.} Let $G$ be one of the simple groups $B_n(q)$ or $C_n(q)$.
Suppose that $n=2$. If $q=2$, then the group $G$ is not simple, so we can assume that
$q\geqslant3$. If $q=3$, then $I(G)=\{2,4\}$, and if $q>3$, then $I(G)=\{1,2,4\}$. In both cases,
$N(G)=\{4\}$. Since $r_4$ is non-adjacent to every $r\in\xi$, we have $M(G)=N(G)=\{4\}$, every
$\theta(G)\in\Theta(G)$ is a one-element set containing exactly one element $r_4$ from $R_4$.
Every $\theta'(G)\in\Theta'(G)$ is a one-element set containing exactly one element from~$\xi$.
Thus, $t(G)=2$.
Suppose that $n=3$. If $q\neq2$ then $N(G)=\{3,6\}$, and if $q=2$ then $N(G)=\{3\}$. The set
$\{1,2,3,4,6\}$ includes $I(G)$, and so $\xi=\{p\}\cup R_1\cup R_2\cup R_4$, where $\{p\}$, $R_2$ and
$R_4$ are always nonempty. The prime $p$ and any prime $r_4$ are adjacent to each other, and are
non-adjacent to every $r_i$ with $i\in N(G)$. On the other hand, for $i\in\{1,2\}$ and
$j\in\{3,6\}$, primes $r_i$ and $r_j$ are adjacent. Therefore, $M(G)=N(G)$, $\theta(G)$ is
of type $\{r_3\}$ for $q=2$, and is of type $\{r_3,r_6\}$ otherwise. The set $\Theta'(G)$ consists
of one-element sets of type $\{p\}$, $\{r_2\}$, and $\{r_4\}$, if $q=2$, and sets of type $\{p\}$,
and $\{r_4\}$ otherwise. Thus, $t(G)=2$ for $q=2$, and $t(G)=3$ otherwise.
Let $n\geqslant 4$. Now we consider four cases according to the residue of $n$ modulo $4$. We
write $n=4t+k$, where $k=0,1,2,3$ and $t\geqslant1$. If $q=2$, we assume that $t>1$ to avoid the
exceptional cases that arise because $R_6=\varnothing$ for $q=2$.
Suppose that $n=4t$. Then
$$N(G)=\{2t+1,2t+3,\ldots,4t-1,4t+2,4t+4,\ldots,8t\},$$
and so $|N(G)|=3t$. By the adjacency criterion, $r_{4t}$ is non-adjacent to every $r_i$, where $i\in
N(G)$. Therefore, $t(G)\geqslant3t+1\ge4$. By \cite[Propositions 3.1, 4.3]{VasVd}, we have
$t(2,G)\leqslant t(p,G)<4$, so $p$ and $2$ cannot lie in any coclique of maximal size. Furthermore,
if $\eta(i)<n/2=2t$, then any odd prime $r_i$ is adjacent to $r_{4t}$, $r_{2t+1}$, $r_{4t+2}$. Therefore,
$M(G)=N(G)\cup\{n\}$, every $\theta(G)\in\Theta(G)$ is of type $\{r_i\mid
n/2\leqslant\eta(i)\leqslant n\}$, $\Theta'(G)=\varnothing$, so $t(G)=3t+1=[(3n+5)/4]$.
Suppose that $n=4t+1$. Then
$$N(G)=\{2t+1,2t+3,\ldots,4t+1,4t+2,4t+4,\ldots,8t+2\},$$
so $|N(G)|=3t+2$ and $t(G)\geqslant5$. By \cite[Propositions 3.1, 4.3]{VasVd}, we have
$t(2,G)\leqslant t(p,G)<4$. Therefore, $p$ and $2$ cannot lie in any coclique of maximal size. If
$\eta(i)<n/2$, then any odd prime $r_i$ is adjacent to $r_{2t+1}$, $r_{4t+2}$, so cannot lie in any
coclique of maximal size. Thus, $M(G)=N(G)$, every $\theta(G)\in\Theta(G)$ is of type $\{r_i\mid
n/2<\eta(i)\leqslant n\}=\{r_i\mid n/2\leqslant\eta(i)\leqslant n\}$, $\Theta'(G)=\varnothing$, and
$t(G)=3t+2=[(3n+5)/4]$.
Suppose that $n=4t+2$. Then
$$N(G)=\{2t+3,2t+5,\ldots,4t+1,4t+4,4t+6,\ldots,8t+4\},$$
so $|N(G)|=3t+1$ and $t(G)\geqslant4$. Since $t(2,G)\leqslant t(p,G)<4$, primes $p$ and $2$ cannot
lie in any coclique of maximal size. Any primes $r_{2t+1}$ and $r_{4t+2}$ are adjacent to one
another and are non-adjacent to every $r_i$ with $i\in N(G)$. If $\eta(i)<n/2$, then $r_i$ is
adjacent to $r_{4t+4}$, $r_{4t+2}$, and $r_{2t+1}$. Therefore, $N(G)=M(G)$, every
$\theta(G)\in\Theta(G)$ is of type $\{r_i\mid n/2<\eta(i)\leqslant n\}$, and $\Theta'(G)$ consists
of one-element sets of type $\{r_{2t+1}\}$ or $\{r_{4t+2}\}$. Thus, $t(G)=3t+2=[(3n+5)/4]$.
Suppose that $n=4t+3$. Then $$N(G)=\{2t+3,2t+5,\ldots,4t+3,4t+4,4t+6,\ldots,8t+6\},$$ so
$|N(G)|=3t+3$ and $t(G)\geqslant6$. Since $t(2,G)\leqslant t(p,G)<4$, primes $p$ and $2$ cannot
lie in a coclique of maximal size. If $\eta(i)<2t+1$, then $r_i$ is adjacent to $r_{4t+4}$,
$r_{4t+6}$, and $r_{2t+3}$. Assume that $\eta(i)=2t+1$. If $r_i$ is adjacent to $r_j$ with $j\in
N(G)$, then $j=4t+4$. Since there are two distinct numbers, $2t+1$ and $4t+2$, whose value under
the function $\eta$ equals $2t+1$, the set $\Theta'(G)$ consists of one-element
sets of one of three types: $\{r_{4t+4}\}$, $\{r_{2t+1}\}$, or $\{r_{4t+2}\}$. Thus,
$M(G)=N(G)\setminus\{4t+4\}$, every $\theta(G)\in\Theta(G)$ is of type $\{r_i\mid
(n+1)/2<\eta(i)\leqslant n\}$, and $t(G)=3t+3=[(3n+5)/4]$.
It remains to consider the cases $q=2$ and $n=4+k$, where $k=0,1,2,3$. All results (see
Table~\ref{ClassicTable}) are obtained by arguments similar to those in the general case, taking
into account that $R_{4t+2}=R_6=\varnothing$, and can easily be verified using
Proposition~\ref{adjbn} and \cite[Propositions 3.1, 4.3]{VasVd}.
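The four subcases above involve a fair amount of set bookkeeping. As a purely illustrative sanity check (not part of the proof; the function and variable names below are ours, not the paper's), the following Python sketch transcribes the displayed sets $N(G)$ for $n=4t+k$ and verifies that their cardinalities and the stated values of $t(G)$ agree with the formula $[(3n+5)/4]$:

```python
# Illustrative check of the arithmetic in Case 1 (n = 4t + k, t >= 1).
# The sets below are transcribed from the displayed formulas for N(G).

def N_of_BC(t, k):
    """N(G) for n = 4t + k, copied from the four displayed formulas."""
    odd, even = {
        0: (range(2*t + 1, 4*t,     2), range(4*t + 2, 8*t + 1, 2)),
        1: (range(2*t + 1, 4*t + 2, 2), range(4*t + 2, 8*t + 3, 2)),
        2: (range(2*t + 3, 4*t + 2, 2), range(4*t + 4, 8*t + 5, 2)),
        3: (range(2*t + 3, 4*t + 4, 2), range(4*t + 4, 8*t + 7, 2)),
    }[k]
    return sorted(set(odd) | set(even))

# cardinalities |N(G)| and coclique sizes t(G) as stated in the text,
# indexed by k = n mod 4
stated_card_BC = {0: lambda t: 3*t,     1: lambda t: 3*t + 2,
                  2: lambda t: 3*t + 1, 3: lambda t: 3*t + 3}
stated_tG_BC   = {0: lambda t: 3*t + 1, 1: lambda t: 3*t + 2,
                  2: lambda t: 3*t + 2, 3: lambda t: 3*t + 3}

for t in range(1, 100):
    for k in range(4):
        n = 4*t + k
        assert len(N_of_BC(t, k)) == stated_card_BC[k](t)
        assert stated_tG_BC[k](t) == (3*n + 5) // 4  # t(G) = [(3n+5)/4]
```

For instance, at $t=1$, $k=0$ (so $n=4$) the transcription gives $N(G)=\{3,6,8\}$ with $|N(G)|=3t=3$ and $t(G)=3t+1=4=[17/4]$.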
\textbf{Case 2.} Let $G=D_n(q)$.
Suppose that $n=4$. If $q\neq2$, then $N(G)=\{3,6\}$, and if $q=2$, then $N(G)=\{3\}$. The set
$\{1,2,3,4,6\}$ contains $I(G)$, so $\xi=\{p\}\cup R_1\cup R_2\cup R_4$, where $\{p\}$, $R_2$ and $R_4$
are always nonempty. The prime $p$ and any prime $r_4$ are adjacent to one another, and are
non-adjacent to every $r_i$ with $i\in N(G)$. On the other hand, for $i\in\{1,2\}$ and
$j\in\{3,6\}$, primes $r_i$ and $r_j$ are adjacent. Therefore, $M(G)=N(G)$, $\theta(G)$ is of
type $\{r_3\}$ for $q=2$, and is of type $\{r_3,r_6\}$ otherwise. The set $\Theta'(G)$ consists of
one-element sets of type $\{p\}$, $\{r_2\}$, and $\{r_4\}$, if $q=2$, and sets of type $\{p\}$, and
$\{r_4\}$ otherwise. Thus, $t(G)=2$ for $q=2$, and $t(G)=3$ otherwise.
Let $n>4$. We now consider four cases according to the residue of $n$ modulo $4$. We write
$n=4t+k$, where $k=0,1,2,3$ and $t\geqslant1$. If $q=2$, we assume that $t>1$ to avoid the
exceptional cases that arise because $R_6=\varnothing$ for $q=2$.
Suppose that $n=4t>4$. Then
$$N(G)=\{2t+1,2t+3,\ldots,4t-1,4t+2,4t+4,\ldots,8t-2\},$$
so $|N(G)|=3t-1>4$. By \cite[Propositions 3.1, 4.4]{VasVd}, we have $t(2,G)\leqslant t(p,G)<4$, so
$p$ and $2$ cannot lie in any coclique of maximal size. By adjacency criterion, $r_{4t}$ is
non-adjacent to every $r_i$, where $i\in N(G)$. On the other hand, any prime $r_{4t-2}$ is adjacent
to $r_{4t}$, $r_{4t+2}$, any prime $r_{2t-1}$ is adjacent to $r_{4t}$, $r_{2t+1}$, and if
$\eta(i)<2t-1$, then $r_i$ is adjacent to at least three primes from every $\chi$. Therefore,
$M(G)=N(G)\cup\{n\}$, every $\theta(G)\in\Theta(G)$ is of type $\{r_i\mid
n/2\leqslant\eta(i)\leqslant n,i\neq2n\}$, $\Theta'(G)=\varnothing$, and $t(G)=3t=[(3n+1)/4]$.
Suppose that $n=4t+1$. Then
$$N(G)=\{2t+1,2t+3,\ldots,4t+1,4t+2,4t+4,\ldots,8t\},$$
so $|N(G)|=3t+1\geqslant4$. By \cite[Propositions 3.1, 4.4]{VasVd}, we have $t(2,G)\leqslant
t(p,G)<4$. Therefore, $p$ and $2$ cannot lie in any coclique of maximal size. If $\eta(i)<2t$, then
any prime $r_i$ is adjacent to $r_{4t+2}$, $r_{2t+1}$. Assume that $i=4t$, then $r_i$ is adjacent
to $r_{j}$, where $j\in N(G)$, if and only if $j=4t+2$. Thus, $M(G)=N(G)\setminus\{n+1\}$, every
$\theta(G)\in\Theta(G)$ is of type $\{r_i\mid n/2<\eta(i)\leqslant n,i\neq n+1,2n\}$, and
$\Theta'(G)$ consists of one-element sets of type $\{r_{4t}\}$ or $\{r_{4t+2}\}$. Therefore,
$t(G)=3t+1=[(3n+1)/4]$.
Suppose that $n=4t+2$. Then
$$N(G)=\{2t+3,2t+5,\ldots,4t+1,4t+4,4t+6,\ldots,8t+2\},$$
so $|N(G)|=3t\geqslant3$. Any primes $r_{2t+1}$ and $r_{4t+2}$ are adjacent to one another and are
non-adjacent to every $r_i$ with $i\in N(G)$. Hence $t(G)\geqslant4$. Since $t(2,G)\leqslant
t(p,G)<4$, primes $p$ and $2$ cannot lie in any coclique of maximal size. If $\eta(i)<n/2$, then
$r_i$ is adjacent to $r_{2t+1}$, $r_{4t+2}$, $r_{4t+4}$. Therefore, $N(G)=M(G)$, every
$\theta(G)\in\Theta(G)$ is of type $\{r_i\mid n/2<\eta(i)\leqslant n, i\neq2n\}$, and $\Theta'(G)$
consists of one-element sets of type $\{r_{2t+1}\}$ or $\{r_{4t+2}\}$. Thus,
$t(G)=3t+1=[(3n+1)/4]$.
Suppose that $n=4t+3$. Then $$N(G)=\{2t+3,2t+5,\ldots,4t+3,4t+4,4t+6,\ldots,8t+4\},$$ so
$|N(G)|=3t+2\geqslant5$. Since $t(2,G)\leqslant t(p,G)<4$, primes $p$ and $2$ cannot lie in a
coclique of maximal size. By the adjacency criterion, $r_{2t+1}$ is non-adjacent to every $r_i$, where
$i\in N(G)$. On the other hand, if $\eta(i)<2t+1$ or $i=4t+2$, then $r_i$ is adjacent to
$r_{4t+4}$, $r_{2t+1}$. Therefore, $M(G)=N(G)\cup\{(n-1)/2\}$, every $\theta(G)\in\Theta(G)$ is of
type $\{r_i\mid (n-1)/2\leqslant\eta(i)\leqslant n,i\neq2n,n-1\}$, $\Theta'(G)=\varnothing$, and
$t(G)=3t+3=(3n+3)/4$. In the case $n=4t+3$, no coclique of maximal size contains primes of
type $r_{4t+2}$, and so the group $D_7(2)$ is covered as well.
It remains to consider the cases $q=2$ and $n=4+k$, where $k=1,2$. Both results (see
Table~\ref{ClassicTable}) are obtained by arguments similar to those in the general case, taking
into account that $R_6=\varnothing$, and can easily be verified using Proposition~\ref{adjdn} and
\cite[Propositions 3.1, 4.4]{VasVd}.
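As in the previous case, the cardinalities can be double-checked mechanically. The sketch below (illustrative only; the names are ours) transcribes the four displayed sets $N(G)$ for $G=D_n(q)$ and compares $|N(G)|$ and $t(G)$ with the stated values, including the separate formula $(3n+3)/4$ used when $n\equiv3\pmod4$:

```python
# Illustrative check of the arithmetic in Case 2, G = D_n(q), n = 4t + k.

def N_of_D(t, k):
    """N(G) for n = 4t + k, copied from the displayed formulas."""
    odd, even = {
        0: (range(2*t + 1, 4*t,     2), range(4*t + 2, 8*t - 1, 2)),
        1: (range(2*t + 1, 4*t + 2, 2), range(4*t + 2, 8*t + 1, 2)),
        2: (range(2*t + 3, 4*t + 2, 2), range(4*t + 4, 8*t + 3, 2)),
        3: (range(2*t + 3, 4*t + 4, 2), range(4*t + 4, 8*t + 5, 2)),
    }[k]
    return sorted(set(odd) | set(even))

# stated cardinalities |N(G)| and coclique sizes t(G), indexed by k
stated_card_D = {0: lambda t: 3*t - 1, 1: lambda t: 3*t + 1,
                 2: lambda t: 3*t,     3: lambda t: 3*t + 2}
stated_tG_D   = {0: lambda t: 3*t,     1: lambda t: 3*t + 1,
                 2: lambda t: 3*t + 1, 3: lambda t: 3*t + 3}

for t in range(1, 100):
    for k in range(4):
        n = 4*t + k
        assert len(N_of_D(t, k)) == stated_card_D[k](t)
        if k < 3:
            assert stated_tG_D[k](t) == (3*n + 1) // 4  # t(G) = [(3n+1)/4]
        else:
            assert (3*n + 3) % 4 == 0                   # exact for n = 4t+3
            assert stated_tG_D[k](t) == (3*n + 3) // 4  # t(G) = (3n+3)/4
```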
\textbf{Case 3.} Let $G={}^2D_n(q)$.
Suppose that $n=4$. If $q\neq2$, then $N(G)=\{3,6,8\}$, and if $q=2$, then $N(G)=\{3,8\}$. The set
$\{1,2,3,4,6,8\}$ contains $I(G)$, and so $\xi=\{p\}\cup R_1\cup R_2\cup R_4$, where $\{p\}$, $R_2$, and
$R_4$ are always nonempty. The prime $p$ and any prime $r_4$ are adjacent to one another, and are
non-adjacent to every $r_i$ with $i\in N(G)$. On the other hand, for $i\in\{1,2\}$ and
$j\in\{3,6\}$, primes $r_i$ and $r_j$ are adjacent. Therefore, $M(G)=N(G)$, $\theta(G)$ is of type
$\{r_3,r_8\}$ for $q=2$, and is of type $\{r_3,r_6,r_8\}$ otherwise. The set $\Theta'(G)$ consists
of one-element sets of type $\{p\}$, and $\{r_4\}$. Thus, $t(G)=3$ for $q=2$, and $t(G)=4$
otherwise.
Let $n>4$. We now consider four cases according to the residue of $n$ modulo $4$. We write
$n=4t+k$, where $k=0,1,2,3$ and $t\geqslant1$. If $q=2$, we assume that $t>1$ to avoid the
exceptional cases that arise because $R_6=\varnothing$ for $q=2$.
Suppose that $n=4t>4$. Then
$$N(G)=\{2t+1,2t+3,\ldots,4t-1,4t+2,4t+4,\ldots,8t\},$$
so $|N(G)|=3t>4$. By \cite[Propositions 3.1, 4.4]{VasVd}, we have $t(2,G)\leqslant
t(p,G)\leqslant4$, so $p$ and $2$ cannot lie in any coclique of maximal size. By the adjacency
criterion, $r_{4t}$ is non-adjacent to every $r_i$, where $i\in N(G)$. On the other hand, any prime
$r_{2t-1}$ is adjacent to $r_{4t}$, $r_{4t+2}$, any prime $r_{4t-2}$ is adjacent to $r_{4t}$,
$r_{2t+1}$, and if $\eta(i)<2t-1$, then $r_i$ is adjacent to at least three primes from every
$\chi$. Therefore, $M(G)=N(G)\cup\{n\}$, every $\theta(G)\in\Theta(G)$ is of type $\{r_i\mid
n/2\leqslant\eta(i)\leqslant n\}$, $\Theta'(G)=\varnothing$, and $t(G)=3t+1=[(3n+4)/4]$.
Suppose that $n=4t+1$. Then
$$N(G)=\{2t+1,2t+3,\ldots,4t-1,4t+2,4t+4,\ldots,8t+2\},$$
so $|N(G)|=3t+1\geqslant4$. By \cite[Propositions 3.1, 4.4]{VasVd}, we have $t(2,G)\leqslant
t(p,G)<4$. Therefore, $p$ and $2$ cannot lie in any coclique of maximal size. If $\eta(i)<2t$, then
any prime $r_i$ is adjacent to $r_{4t+2}$, $r_{2t+1}$. Assume that $i=4t$, then $r_i$ is adjacent
to $r_{j}$, where $j\in N(G)$, if and only if $j=2t+1$. Thus, $M(G)=N(G)\setminus\{(n+1)/2\}$,
every $\theta(G)\in\Theta(G)$ is of type $\{r_i\mid n/2<\eta(i)\leqslant n,i\neq (n+1)/2,n\}$, and
$\Theta'(G)$ consists of one-element sets of type $\{r_{4t}\}$ or $\{r_{2t+1}\}$. Therefore,
$t(G)=3t+1=[(3n+4)/4]$.
Suppose that $n=4t+2$. Then
$$N(G)=\{2t+3,2t+5,\ldots,4t+1,4t+4,4t+6,\ldots,8t+4\},$$
so $|N(G)|=3t+1\geqslant4$. Any primes $r_{2t+1}$, $r_{4t}$ and $r_{4t+2}$ are adjacent to one
another and are non-adjacent to every $r_i$ with $i\in N(G)$. Hence $t(G)>4$. Since
$t(2,G)\leqslant t(p,G)\leqslant4$, primes $p$ and $2$ cannot lie in any coclique of maximal size.
If $\eta(i)<2t$, then $r_i$ is adjacent to $r_{2t+1}$, $r_{4t+2}$, $r_{4t}$, $r_{4t+4}$. Therefore,
$N(G)=M(G)$, every $\theta(G)\in\Theta(G)$ is of type $\{r_i\mid n/2<\eta(i)\leqslant n\}$, and
$\Theta'(G)$ consists of one-element sets of type $\{r_{2t+1}\}$, $\{r_{4t}\}$ or $\{r_{4t+2}\}$.
Thus, $t(G)=3t+2=[(3n+4)/4]$.
Suppose that $n=4t+3$. Then $$N(G)=\{2t+3,2t+5,\ldots,4t+1,4t+4,4t+6,\ldots,8t+6\},$$ so
$|N(G)|=3t+2\geqslant5$. Since $t(2,G)\leqslant t(p,G)<4$, primes $p$ and $2$ cannot lie in a
coclique of maximal size. By the adjacency criterion, $r_{4t+2}$ is non-adjacent to every $r_i$, where
$i\in N(G)$. On the other hand, if $\eta(i)<2t+1$ or $i=2t+1$, then $r_i$ is adjacent to
$r_{4t+4}$, $r_{4t+2}$. Therefore, $M(G)=N(G)\cup\{n-1\}$, every $\theta(G)\in\Theta(G)$ is of type
$\{r_i\mid (n-1)/2\leqslant\eta(i)\leqslant n,i\neq n,(n-1)/2\}$, $\Theta'(G)=\varnothing$, and
$t(G)=3t+3=[(3n+4)/4]$.
It remains to consider the cases $q=2$ and $n=4+k$, where $k=1,2,3$. All results (see
Table~\ref{ClassicTable}) are obtained by arguments similar to those in the general case, taking
into account that $R_{4t+2}=R_6=\varnothing$, and can easily be verified using
Proposition~\ref{adjdn} and \cite[Propositions 3.1, 4.4]{VasVd}.
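The same mechanical check applies to Case 3. The sketch below (illustrative only; names are ours) transcribes the displayed sets $N(G)$ for $G={}^2D_n(q)$ and verifies the stated cardinalities together with the single formula $t(G)=[(3n+4)/4]$ claimed for all four residues:

```python
# Illustrative check of the arithmetic in Case 3, G = 2D_n(q), n = 4t + k.

def N_of_2D(t, k):
    """N(G) for n = 4t + k, copied from the displayed formulas."""
    odd, even = {
        0: (range(2*t + 1, 4*t,     2), range(4*t + 2, 8*t + 1, 2)),
        1: (range(2*t + 1, 4*t,     2), range(4*t + 2, 8*t + 3, 2)),
        2: (range(2*t + 3, 4*t + 2, 2), range(4*t + 4, 8*t + 5, 2)),
        3: (range(2*t + 3, 4*t + 2, 2), range(4*t + 4, 8*t + 7, 2)),
    }[k]
    return sorted(set(odd) | set(even))

# stated cardinalities |N(G)| and coclique sizes t(G), indexed by k
stated_card_2D = {0: lambda t: 3*t,     1: lambda t: 3*t + 1,
                  2: lambda t: 3*t + 1, 3: lambda t: 3*t + 2}
stated_tG_2D   = {0: lambda t: 3*t + 1, 1: lambda t: 3*t + 1,
                  2: lambda t: 3*t + 2, 3: lambda t: 3*t + 3}

for t in range(1, 100):
    for k in range(4):
        n = 4*t + k
        assert len(N_of_2D(t, k)) == stated_card_2D[k](t)
        assert stated_tG_2D[k](t) == (3*n + 4) // 4  # t(G) = [(3n+4)/4]
```

At $t=1$, $k=0$ ($n=4$) the transcription gives $N(G)=\{3,6,8\}$, matching the set listed for ${}^2D_4(q)$, $q\neq2$, at the start of this case.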
\end{proof}
\begin{prop}\label{cocliqueexcept} If $G$ is an exceptional finite simple group of Lie type
over a field of characteristic~$p$, then $t(G)$ and the sets $\Theta(G)$, $\Theta'(G)$ are listed
in Table~{\em\ref{ExceptTable}}.
\end{prop}
\begin{proof}
We consider all types of exceptional groups of Lie type separately. Following \cite{ZavGraph} we
use the compact form of the prime graph $GK(G)$. By the compact form we mean a graph whose vertices
are labeled with marks $R_i$. A vertex labeled $R_i$ represents the clique of $GK(G)$ in which
every vertex is labeled by a prime from~$R_i$. An edge connecting $R_i$ and $R_j$
represents the set of edges of $GK(G)$ that connect each vertex in $R_i$ with each vertex in $R_j$.
If an edge occurs only under some condition, we draw it as a dotted line and indicate the
corresponding condition. The technical tools for determining the compact form of the prime graph $GK(G)$ for an exceptional group of Lie type
$G$ are Propositions \ref{adjexcept} and \ref{adjsuzree}, and also \cite[Propositions~3.2, 3.3, and~4.5]{VasVd}. Notice that the compact
form of $GK(G)$ can be regarded as a graphical form
of the adjacency criterion in~$GK(G)$.
\begin{center}
The compact form for $GK(G_2(q))$
\end{center}
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFontNFSS\undefined%
\gdef\SetFigFontNFSS#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(2724,1374)(2014,-2548)
\thinlines
{\color[rgb]{0,0,0}\put(2701,-2536){\line( 1, 0){1350}}
\put(4051,-2536){\line(-1, 1){675}}
\put(3376,-1861){\line(-1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(2026,-1861){\line( 1,-1){675}}
\color[rgb]{0,0,0}\put(2701,-2536){\line(-2,1){180}}
\color[rgb]{0,0,0}\put(2701,-2536){\line(-1,2){90}}
}%
{\color[rgb]{0,0,0}\put(4726,-1861){\line(-1,-1){675}}
\color[rgb]{0,0,0}\put(4051,-2536){\line(2,1){180}}
\color[rgb]{0,0,0}\put(4051,-2536){\line(1,2){90}}
}%
{\color[rgb]{0,0,0}\multiput(2026,-1861)(0.00000,122.72727){6}{\line( 0, 1){ 61.364}}
}%
{\color[rgb]{0,0,0}\multiput(4726,-1861)(0.00000,122.72727){6}{\line( 0, 1){ 61.364}}
}%
\put(2026,-1861){\circle*{60}}\put(2126,-1801){\makebox(0,0){$3$}}
\put(4726,-1861){\circle*{60}}\put(4636,-1801){\makebox(0,0){$3$}}
\put(4726,-1181){\circle*{60}}\put(4566,-1181){\makebox(0,0){$R_6$}}
\put(4051,-2536){\circle*{60}} \put(4011,-2306){\makebox(0,0){$R_2$}}
\put(2701,-2536){\circle*{60}} \put(2751,-2306){\makebox(0,0){$R_1$}}
\put(3376,-1861){\circle*{60}} \put(3376,-1731){\makebox(0,0){$p$}}
\put(2026,-1181){\circle*{60}} \put(2186,-1181){\makebox(0,0){$R_3$}}
\end{picture}%
\vspace{1\baselineskip}
Let $G=G_2(q)$. In the compact form for $GK(G_2(q))$ the arrow from $3$ to $R_1$ (resp. $R_2$) and
the dotted edge $(3,R_{3})$ (resp. $(3,R_{6})$) mean that $R_1$ (resp. $R_2$) and $R_{3}$ (resp.
$R_6$) are not connected, but if $3\in R_1$, i.~e., $q\equiv 1\pmod3$ (resp. $3\in R_2$, i.~e.,
$q\equiv-1\pmod 3$), then there exists an edge between $3$ and $R_{3}$ (resp. $R_6$). If
$R_1=\varnothing$, then one needs to remove the vertex $R_1$ together with all corresponding edges.
From the compact form of $GK(G)$ it is evident that $\Theta(G)=\{\{r_3,r_6\}\mid r_i\in R_i\}$, while
$\Theta'(G)=\{\{p\}, \{r_1\},\{r_2\}\mid r_i\in R_i\setminus\{3\}\}$.
\begin{center}
The compact form for $GK(F_4(q))$
\end{center}
\begin{tabular}{ccc}
\setlength{\unitlength}{3108sp}%
\begingroup\makeatletter\ifx\SetFigFontNFSS\undefined%
\gdef\SetFigFontNFSS#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(2724,2724)(3139,-3898)
\thinlines
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1, 1){675}}
}%
\put(5851,-1861){\circle*{90}} \put(6281,-1861){\makebox(0,0){$2(\not=p)$}}
\put(3826,-1186){\circle*{90}} \put(3576,-1186){\makebox(0,0){$R_4$}}
\put(4535,-1186){\circle*{90}} \put(4735,-1186){\makebox(0,0){$R_{12}$}}
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-3, 1){2025}}
}%
\put(5176,-1186){\circle*{90}} \put(5376,-1186){\makebox(0,0){$R_8$}}
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1, 0){2700}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1,-3){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line( 0,-1){1350}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 1, 1){675}}
}%
\put(3151,-1861){\circle*{90}} \put(3001,-1861){\makebox(0,0){$p$}}
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 0,-1){1350}}
}%
\put(3151,-3211){\circle*{90}} \put(2951,-3211){\makebox(0,0){$R_1$}}
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 1,-3){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 1,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 3,-1){2025}}
}%
\put(5851,-3211){\circle*{90}} \put(6051,-3211){\makebox(0,0){$R_2$}}
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-3,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(3826,-1186){\line(-1,-3){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 1, 0){2700}}
\put(5851,-3211){\line(-1, 1){2025}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 2, 1){2700}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-2, 1){2700}}
}%
\put(3826,-3886){\circle*{90}} \put(3626,-3886){\makebox(0,0){$R_3$}}
\put(5176,-3886){\circle*{90}} \put(5376,-3886){\makebox(0,0){$R_6$}}
\end{picture}&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ &
\setlength{\unitlength}{3108sp}%
\begingroup\makeatletter\ifx\SetFigFontNFSS\undefined%
\gdef\SetFigFontNFSS#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(2724,2724)(3139,-3898)
\thinlines
\put(3826,-1186){\circle*{90}} \put(3576,-1186){\makebox(0,0){$R_4$}}
\put(4535,-1186){\circle*{90}} \put(4735,-1186){\makebox(0,0){$R_{12}$}}
\put(5176,-1186){\circle*{90}} \put(5376,-1186){\makebox(0,0){$R_8$}}
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 1, 1){675}}
}%
\put(3151,-1861){\circle*{90}} \put(2721,-1861){\makebox(0,0){$(2=)p$}}
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 0,-1){1350}}
}%
\put(3151,-3211){\circle*{90}} \put(2951,-3211){\makebox(0,0){$R_1$}}
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 1,-3){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 1,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 3,-1){2025}}
}%
\put(5851,-3211){\circle*{90}} \put(6051,-3211){\makebox(0,0){$R_2$}}
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-3,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(3826,-1186){\line(-1,-3){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 1, 0){2700}}
\put(5851,-3211){\line(-1, 1){2025}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-2, 1){2700}}
}%
\put(3826,-3886){\circle*{90}} \put(3626,-3886){\makebox(0,0){$R_3$}}
\put(5176,-3886){\circle*{90}} \put(5376,-3886){\makebox(0,0){$R_6$}}
\end{picture}\\
\end{tabular}
\vspace{1\baselineskip}
Let $G=F_4(q)$. The compact form for $GK(F_4(q))$ implies that $\{2, p,R_1,R_2,R_3\}$ is a clique, while the remaining
vertices in the compact form are pairwise non-adjacent. Since $R_3$ is non-adjacent to $R_4,R_6,R_8,R_{12}$ and the remaining vertices from
the set
$\{2,p, R_1,R_2\}$ are adjacent to at least two vertices from the set $\{R_4,R_6,R_8,R_{12}\}$, we obtain that
$\Theta(G)=\{\{r_3,r_4,r_6,r_8,r_{12}\}\mid r_i\in R_i\}$ if $R_6\not=\varnothing$ and
$\Theta(G)=\{\{r_3,r_4,r_8,r_{12}\}\mid r_i\in R_i\}$ if $R_6=\varnothing$ (i.~e., if $q=2$). In both cases~${\Theta'(G)=\varnothing}$.
\begin{center}
The compact form for $GK(E_6^\varepsilon(q))$
\end{center}
\vspace{1\baselineskip}
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFontNFSS\undefined%
\gdef\SetFigFontNFSS#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(3399,2724)(2464,-3898)
\thinlines
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-1, 1){2025}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-3,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line( 1, 2){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-2, 1){2700}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line( 0, 1){1350}}
}%
{\color[rgb]{0,0,0}\put(6526,-1861){\line(-4,-3){2700}}
}%
{\color[rgb]{0,0,0}\put(6526,-1861){\line(-5,-2){3375}}
}%
{\color[rgb]{0,0,0}\put(6526,-1861){\line(-2,-3){1350}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 0, 1){1350}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 3,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1,-3){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1, 0){2700}}
}%
{\color[rgb]{0,0,0}\put(3826,-3886){\line(-1, 1){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 2, 1){2700}}
\put(5851,-1861){\line(-1,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 1, 0){2700}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 1,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(5176,-3886){\line(-1, 0){1350}}
\put(3826,-3886){\line( 0, 1){2700}}
\put(3826,-1186){\line(-1,-3){675}}
}%
{\color[rgb]{0,0,0}\put(3826,-1186){\line( 1,-2){1350}}
}%
{\color[rgb]{0,0,0}\put(3826,-1186){\line(-1,-1){675}}
}%
{\color[rgb]{0,0,0}\multiput(5851,-3211)(0.00000,-122.72727){6}{\line( 0,-1){ 61.364}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 4,-1){2700}}
\put(5851,-3886){\line(-1, 0){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1, 1){675}}
}%
{\color[rgb]{0,0,0}\put(3826,-3886){\line(-1, 3){675}}
}%
\put(3826,-1186){\circle*{60}} \put(3626,-1186){\makebox(0,0){$R_4$}}
\put(3826,-3886){\circle*{60}} \put(3676,-3886){\makebox(0,0){$p$}}
\put(5176,-1186){\circle*{60}} \put(5376,-1186){\makebox(0,0){$R_{12}$}}
\put(3151,-1861){\circle*{60}} \put(2901,-1861){\makebox(0,0){$R_{\nu_\varepsilon(6)}$}}
\put(6901,-3576){\makebox(0,0){$(q-\varepsilon1)_3\not=3$ and $p\not=3$}}
\put(5851,-3211){\circle*{60}} \put(6001,-3211){\makebox(0,0){$3$}}
\put(3151,-3211){\circle*{60}} \put(2951,-3211){\makebox(0,0){$R_2$}}
\put(5851,-1861){\circle*{60}} \put(6101,-1861){\makebox(0,0){$R_{\nu_\varepsilon(3)}$}}
\put(6526,-1861){\circle*{60}} \put(6796,-1861){\makebox(0,0){$R_{\nu_\varepsilon(5)}$}}
\put(5176,-3886){\circle*{60}} \put(5176,-4006){\makebox(0,0){$R_1$}}
\put(5851,-3886){\circle*{60}} \put(6001,-3886){\makebox(0,0){$R_{8}$}}
\put(3151,-3886){\circle*{60}} \put(2901,-3886){\makebox(0,0){$R_{\nu_\varepsilon(9)}$}}
\end{picture}%
\vspace{1\baselineskip}
Let $G=E_6^\varepsilon(q)$. In the compact form for $GK(E_6^\varepsilon(q))$ the set $\{3, p, R_1,R_2,R_{\nu_\varepsilon(3)},R_{\nu_\varepsilon(6)}\}$
forms a clique, while the remaining vertices are pairwise non-adjacent. Moreover, $R_{\nu_\varepsilon(3)}$ and $R_{\nu_\varepsilon(6)}$ are the
only vertices from $\{3, p, R_1,R_2,R_{\nu_\varepsilon(3)},R_{\nu_\varepsilon(6)}\}$ that are adjacent to precisely
one of the remaining vertices (namely,
$R_{\nu_\varepsilon(3)}$ is
adjacent to $R_{12}$, and $R_{\nu_\varepsilon(6)}$ is adjacent to $R_{4}$). Thus
$\Theta(G)=\{\{r_{\nu_\varepsilon(5)},r_8,r_{\nu_\varepsilon(9)}\}\mid r_i\in R_i\}$ and
$\Theta'(G)=\{\{r_4,r_{\nu_\varepsilon(3)}\},\{r_{\nu_\varepsilon(6)},r_{12}\},\{r_4,r_{12}\}\mid r_i\in R_i\}$.
Since $R_6=\varnothing$ for $q=2$, we obtain the exceptions mentioned in Table~\ref{ExceptTable}.
\begin{center}
The compact form for $GK(E_7(q))$
\end{center}
\setlength{\unitlength}{3108sp}%
\begingroup\makeatletter\ifx\SetFigFontNFSS\undefined%
\gdef\SetFigFontNFSS#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5424,3624)(889,-4123)
\thinlines
{\color[rgb]{0,0,0}\put(5401,-1411){\line( 0,-1){1800}}
}%
{\color[rgb]{0,0,0}\put(5401,-1411){\line(-1,-3){900}}
}%
{\color[rgb]{0,0,0}\put(5401,-1411){\line(-1, 0){3600}}
}%
{\color[rgb]{0,0,0}\put(1801,-3211){\line( 2, 1){3600}}
}%
{\color[rgb]{0,0,0}\put(5401,-1411){\line(-1,-1){2700}}
}%
{\color[rgb]{0,0,0}\put(1801,-3211){\line( 1, 0){3600}}
}%
{\color[rgb]{0,0,0}\put(5401,-3211){\line(-2, 1){3600}}
}%
{\color[rgb]{0,0,0}\put(5401,-3211){\line( 1, 2){900}}
}%
{\color[rgb]{0,0,0}\put(5401,-3211){\line(-3,-1){2700}}
}%
{\color[rgb]{0,0,0}\put(5401,-3211){\line(-1,-1){900}}
}%
{\color[rgb]{0,0,0}\put(4501,-511){\line( 0,-1){3600}}
}%
{\color[rgb]{0,0,0}\put(4501,-511){\line(-1,-2){1800}}
}%
{\color[rgb]{0,0,0}\put(1801,-3211){\line( 1, 1){2700}}
}%
{\color[rgb]{0,0,0}\put(1801,-3211){\line( 3,-1){2700}}
}%
{\color[rgb]{0,0,0}\put(1801,-3211){\line( 5, 2){4500}}
}%
{\color[rgb]{0,0,0}\put(1801,-3211){\line( 1,-1){900}}
}%
{\color[rgb]{0,0,0}\put(1801,-1411){\line( 0,-1){1800}}
}%
{\color[rgb]{0,0,0}\put(4501,-4111){\line(-1, 1){2700}}
}%
{\color[rgb]{0,0,0}\put(2701,-4111){\line(-1, 3){900}}
}%
{\color[rgb]{0,0,0}\put(4501,-4111){\line(-1, 0){1800}}
}%
{\color[rgb]{0,0,0}\put(4501,-4111){\line( 2, 3){1800}}
}%
{\color[rgb]{0,0,0}\put(6301,-1411){\line(-4,-3){3600}}
}%
{\color[rgb]{0,0,0}\put(901,-1411){\line( 1,-2){900}}
}%
{\color[rgb]{0,0,0}\put(901,-1411){\line( 2,-3){1800}}
}%
{\color[rgb]{0,0,0}\put(901,-1411){\line( 4,-3){3600}}
}%
{\color[rgb]{0,0,0}\put(2701,-511){\line(-1,-3){900}}
}%
{\color[rgb]{0,0,0}\put(2701,-511){\line(-1,-1){900}}
}%
{\color[rgb]{0,0,0}\put(2701,-511){\line( 1,-1){2700}}
}%
{\color[rgb]{0,0,0}\put(2701,-511){\line( 0,-1){3600}}
}%
{\color[rgb]{0,0,0}\put(2701,-511){\line( 1,-2){1800}}
}%
{\color[rgb]{0,0,0}\put(4501,-4111){\line( 1, 0){1800}}
}%
{\color[rgb]{0,0,0}\put(4501,-4111){\line( 2, 1){1800}}
}%
{\color[rgb]{0,0,0}\put(2701,-4111){\line(-1, 0){1800}}
}%
{\color[rgb]{0,0,0}\put(2701,-4111){\line(-2, 1){1800}}
}%
{\color[rgb]{0,0,0}\put(4501,-511){\line( 1,-1){900}}
}%
{\color[rgb]{0,0,0}\put(901,-1411){\line( 1, 0){900}}
}%
\put(1801,-1411){\circle*{60}} \put(1621,-1301){\makebox(0,0){$R_6$}}
\put(4501,-511){\circle*{60}} \put(4701,-511){\makebox(0,0){$R_8$}}
\put(5401,-3211){\circle*{60}} \put(5561,-3211){\makebox(0,0){$R_3$}}
\put(2701,-4111){\circle*{60}} \put(2701,-4261){\makebox(0,0){$R_1$}}
\put(5401,-1411){\circle*{60}} \put(5451,-1131){\makebox(0,0){$R_4$}}
\put(1801,-3211){\circle*{60}} \put(1801,-3411){\makebox(0,0){$p$}}
\put(4501,-4111){\circle*{60}} \put(4501,-4261){\makebox(0,0){$R_2$}}
\put(901,-4111){\circle*{60}} \put(721,-4111){\makebox(0,0){$R_9$}}
\put(6301,-4111){\circle*{60}} \put(6501,-4111){\makebox(0,0){$R_{14}$}}
\put(6301,-3211){\circle*{60}} \put(6501,-3211){\makebox(0,0){$R_{18}$}}
\put(2701,-511){\circle*{60}} \put(2401,-511){\makebox(0,0){$R_{12}$}}
\put(901,-1411){\circle*{60}} \put(681,-1411){\makebox(0,0){$R_{10}$}}
\put(6301,-1411){\circle*{60}} \put(6501,-1411){\makebox(0,0){$R_{5}$}}
\put(901,-3211){\circle*{60}} \put(721,-3211){\makebox(0,0){$R_{7}$}}
\end{picture}%
\vspace{1\baselineskip}
Let $G=E_7(q)$. In the compact form for $GK(E_7(q))$ the set $\{p, R_1,R_2,R_3,R_4,R_6\}$ forms
a clique, while the remaining vertices are pairwise non-adjacent. Moreover, $R_4$ is the only
vertex from $\{p, R_1,R_2,R_3,R_4,R_6\}$ that is adjacent to precisely one of the remaining vertices (namely,
$R_4$ is
adjacent to $R_8$). Thus
$\Theta(G)=\{\{r_5,r_7,r_9,r_{10},r_{12},r_{14},r_{18}\}\mid r_i\in R_i\}$ and
$\Theta'(G)=\{\{r_4\},\{r_8\}\mid r_i\in R_i\}$.
\begin{center}
The compact form for $GK(E_8(q))$
\end{center}
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFontNFSS\undefined%
\gdef\SetFigFontNFSS#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(4749,4074)(2464,-4573)
\thinlines
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1, 2){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-3, 1){2025}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1, 0){2700}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line(-1,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line( 0,-1){1350}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 1, 1){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 0,-1){1350}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-3,-1){2025}}
}%
{\color[rgb]{0,0,0}\put(3826,-1186){\line(-1,-3){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 1, 0){2700}}
\put(5851,-3211){\line(-1, 1){2025}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 2, 1){2700}}
}%
{\color[rgb]{0,0,0}\put(5176,-511){\line(-2,-5){1350}}
}%
{\color[rgb]{0,0,0}\put(3826,-1186){\line( 0,-1){2700}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 1,-3){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 2,-1){2700}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line( 3, 2){2025}}
}%
{\color[rgb]{0,0,0}\put(3151,-3211){\line( 3, 4){2025}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line(-1, 4){675}}
}%
{\color[rgb]{0,0,0}\put(6526,-4561){\line(-1, 2){675}}
}%
{\color[rgb]{0,0,0}\put(6526,-4561){\line(-4, 5){2700}}
}%
{\color[rgb]{0,0,0}\put(6526,-4561){\line(-5, 4){3375}}
}%
{\color[rgb]{0,0,0}\put(6526,-4561){\line(-5, 2){3375}}
}%
{\color[rgb]{0,0,0}\put(6526,-4561){\line(-4, 1){2700}}
}%
{\color[rgb]{0,0,0}\put(2476,-511){\line( 1,-2){675}}
}%
{\color[rgb]{0,0,0}\put(2476,-511){\line( 1,-4){675}}
}%
{\color[rgb]{0,0,0}\put(2476,-511){\line( 5,-4){3375}}
}%
{\color[rgb]{0,0,0}\put(2476,-511){\line( 5,-2){3375}}
}%
{\color[rgb]{0,0,0}\put(6526,-511){\line(-4,-1){2700}}
}%
{\color[rgb]{0,0,0}\put(6526,-511){\line(-5,-2){3375}}
}%
{\color[rgb]{0,0,0}\put(6526,-511){\line(-5,-4){3375}}
}%
{\color[rgb]{0,0,0}\put(6526,-511){\line(-4,-5){2700}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line( 1, 2){675}}
}%
{\color[rgb]{0,0,0}\put(2476,-4561){\line( 2, 1){1350}}
}%
{\color[rgb]{0,0,0}\put(2476,-1861){\line( 1, 0){675}}
}%
{\color[rgb]{0,0,0}\put(2476,-1861){\line( 5,-2){3375}}
}%
{\color[rgb]{0,0,0}\put(2476,-1861){\line( 1,-2){675}}
}%
{\color[rgb]{0,0,0}\put(3151,-1861){\line(-1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(2476,-2536){\line( 5,-1){3375}}
}%
{\color[rgb]{0,0,0}\put(2476,-2536){\line( 1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(2476,-4561){\line( 5, 2){3375}}
}%
{\color[rgb]{0,0,0}\put(2476,-4561){\line( 1, 2){675}}
}%
{\color[rgb]{0,0,0}\put(2476,-4561){\line( 1, 4){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-1861){\line( 1,-4){675}}
}%
{\color[rgb]{0,0,0}\put(5851,-3211){\line( 1, 4){675}}
}%
{\color[rgb]{0,0,0}\put(3826,-1186){\line(-2, 1){1350}}
}%
{\color[rgb]{0,0,0}\put(3826,-511){\line( 0,-1){675}}
}%
{\color[rgb]{0,0,0}\put(3826,-511){\line(-1,-4){675}}
}%
{\color[rgb]{0,0,0}\put(3826,-511){\line(-1,-2){675}}
}%
{\color[rgb]{0,0,0}\put(3826,-511){\line( 3,-4){2025}}
}%
{\color[rgb]{0,0,0}\put(6526,-1861){\line(-1, 0){675}}
}%
{\color[rgb]{0,0,0}\multiput(6526,-1861)(54.00000,0.00000){13}{\line( 1, 0){ 27.000}}
}%
{\color[rgb]{0,0,0}\multiput(6526,-1861)(54.00000,0.00000){13}{\line( 1, 0){ 27.000}}
\color[rgb]{0,0,0}\put(5851,-1861){\line(3, 1){180}}
\color[rgb]{0,0,0}\put(5851,-1861){\line(3, -1){180}}
}%
\put(3826,-1186){\circle*{60}} \put(3986,-1026){\makebox(0,0){$R_6$}}
\put(5176,-511){\circle*{60}} \put(5326,-511){\makebox(0,0){$R_5$}}
\put(5851,-1861){\circle*{60}}\put(5841,-1591){\makebox(0,0){$R_4$}}
\put(6526,-1861){\circle*{60}}\put(6526,-1971){\makebox(0,0){$5$}}
\put(7201,-1861){\circle*{60}} \put(7201,-1971){\makebox(0,0){$R_{20}$}}
\put(3151,-1861){\circle*{60}} \put(3151,-1611){\makebox(0,0){$R_1$}}
\put(3151,-3211){\circle*{60}} \put(3021,-3181){\makebox(0,0){$R_2$}}
\put(5851,-3211){\circle*{60}} \put(5941,-3181){\makebox(0,0){$p$}}
\put(3826,-3886){\circle*{60}} \put(3656,-3836){\makebox(0,0){$R_3$}}
\put(7201,-3886){\circle*{60}} \put(7201,-3996){\makebox(0,0){$R_{24}$}}
\put(7201,-3211){\circle*{60}} \put(7201,-3321){\makebox(0,0){$R_{15}$}}
\put(7201,-2536){\circle*{60}} \put(7201,-2646){\makebox(0,0){$R_{30}$}}
\put(2476,-2536){\circle*{60}} \put(2321,-2536){\makebox(0,0){$R_{14}$}}
\put(2476,-1861){\circle*{60}} \put(2321,-1861){\makebox(0,0){$R_{7}$}}
\put(6526,-4561){\circle*{60}} \put(6676,-4561){\makebox(0,0){$R_{12}$}}
\put(6526,-511){\circle*{60}} \put(6626,-511){\makebox(0,0){$R_{8}$}}
\put(2476,-511){\circle*{60}} \put(2331,-511){\makebox(0,0){$R_{10}$}}
\put(3826,-511){\circle*{60}} \put(3986,-511){\makebox(0,0){$R_{18}$}}
\put(2476,-4561){\circle*{60}} \put(2326,-4561){\makebox(0,0){$R_{9}$}}
\end{picture}%
\vspace{1\baselineskip}
Let $G=E_8(q)$. In the compact form for $GK(E_8(q))$, the arrow from $5$ to $R_4$ and the dotted
edge $(5,R_{20})$ mean that $R_4$ and $R_{20}$ are not connected, but if $5\in R_4$ (i.~e.,
$q^2\equiv -1\pmod5$), then there exists an edge between $5$ and $R_{20}$. Now
$\{p,R_1,R_2,R_3,R_4,R_6\}$ forms a clique, while the remaining vertices are pairwise non-adjacent. Notice that each vertex from the clique
$\{p,R_1,R_2,R_3,R_4,R_6\}$ is adjacent to at least two vertices from the set of remaining vertices. So
$$\Theta(G)=\{\{r_5,r_7,r_8,r_9,r_{10}, r_{12}, r_{14},r_{15},r_{18},r_{20}, r_{24}, r_{30}\}\mid r_i\in R_i\}$$ and
$\Theta'(G)=\varnothing$.
\begin{center}
The compact form for $GK({}^3D_4(q))$
\end{center}
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFontNFSS\undefined%
\gdef\SetFigFontNFSS#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(1374,1374)(2014,-1423)
\thinlines
{\color[rgb]{0,0,0}\put(2701,-61){\line(-1,-1){675}}
\put(2026,-736){\line( 0,-1){675}}
\put(2026,-1411){\line( 2, 1){1350}}
\put(3376,-736){\line(-1, 1){675}}
\put(2701,-61){\line(-1,-2){675}}
}%
\put(2701,-61){\circle*{60}} \put(2851,-61){\makebox(0,0){$p$}}
\put(2026,-736){\circle*{60}} \put(1876,-736){\makebox(0,0){$R_2$}}
\put(2026,-1411){\circle*{60}} \put(1876,-1411){\makebox(0,0){$R_6$}}
\put(3376,-736){\circle*{60}} \put(3526,-736){\makebox(0,0){$R_1$}}
\put(3376,-1411){\circle*{60}} \put(3526,-1411){\makebox(0,0){$R_3$}}
\put(2701,-1411){\circle*{60}} \put(2851,-1411){\makebox(0,0){$R_{12}$}}
{\color[rgb]{0,0,0}\put(2701,-61){\line( 1,-2){675}}
\put(3376,-1411){\line( 0, 1){675}}
\put(3376,-736){\line(-1, 0){1350}}
\put(2026,-736){\line( 2,-1){1350}}
}%
\end{picture}%
\vspace{1\baselineskip}
Let $G={}^3D_4(q)$. From the compact form for $GK({}^3D_4(q))$ we immediately obtain that
$\Theta(G)=\{\{r_3,r_6,r_{12}\}\mid r_i\in R_i\}$ and $\Theta'(G)=\varnothing$ if $q\not= 2$. For
$q=2$ the result follows from the compact form for the prime graph $GK({}^3D_4(q))$, and the fact
that $R_6=\varnothing$.
Let $G={}^2B_2(q)$. In this case primes $s_i\in S_i$ and $s_j\in S_j$ are adjacent if and only if $i=j$, while $p=2$ is non-adjacent to all
vertices, and the proposition follows.
Let $G={}^2G_2(q)$. In this case odd primes $s_i\in S_i$ and $s_j\in S_j$ are adjacent if and only
if $i=j$, while $p=3$ is non-adjacent to all odd primes. Since $2$ is adjacent to $s_1$, $s_2$, and
$p$, we obtain the statement of the proposition in this case.
Let $G={}^2F_4(q)$. If $q>8$, then any set of type $\{s_2,s_3,s_4,s_5,s_6\}$ forms a coclique in
$GK(G)$ by Proposition \ref{adjsuzree}. The same proposition together with
\cite[Proposition~3.3]{VasVd} implies that the set $\{2\}\cup S_1\cup S_2$ forms a clique in
$GK(G)$, any prime $s_3$ is adjacent to $s_1$ and $2$, and $3$ is adjacent to $s_2$ and $s_4$. By
using this information we obtain the proposition in this case. If $G={}^2F_4(8)$, we have
$S_2=\pi(9)\setminus\{3\}=\varnothing$. In view of this, every
$\theta(G)\in\Theta(G)$ is of type $\{s_5,s_6\}$, and every $\theta'(G)\in\Theta'(G)$ is a
two-element set of type either $\{s_1,s_4\}$, or $\{3,s_3\}$, or $\{2,s_4\}$, or $\{s_3,s_4\}$. The group $G={}^2F_4(2)$ is not
simple, and its derived subgroup $T={}^2F_4(2)'$ is the simple Tits group. Using \cite{Atlas}, we
obtain that the prime graph of the Tits group $T$ contains a unique coclique $\rho(T)=\{3,5,13\}$
of maximal size.
\end{proof}
\begin{prop}\label{cocliquea12} If $G\simeq A_{n-1}^\varepsilon(q)$ is a finite simple group of Lie type
over a field of characteristic~$p$ and $n\in \{2,3\}$, then $t(G)$ and the sets $\Theta(G)$, $\Theta'(G)$ are listed
in Table~{\em\ref{LinearUnitaryTable}}.
\end{prop}
\begin{proof}
Let $G=A_1(q)$. Then the compact form for $GK(A_1(q))$ is a coclique with the set of vertices $\{R_1,R_2,p\}$.
Thus $\Theta(G)=\{\{r_1,r_2,p\}\mid r_i\in R_i\}$
and $\Theta'(G)=\varnothing$.
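To illustrate how such entries are read for a concrete group (the value $q=5$ is chosen only as an example): for $G=A_1(5)$, which is isomorphic to the alternating group of degree~$5$, we have $p=5$ and
$$R_1=\pi(q-1)=\{2\},\qquad R_2=\pi(q+1)\setminus\pi(q-1)=\{3\},$$
so that $t(G)=3$ and $\Theta(G)$ consists of the single coclique $\{2,3,5\}$; indeed, $A_1(5)$ contains no elements of order $6$, $10$, or $15$.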
\begin{center}
The compact form for $GK(A_2^\varepsilon(q))$
\end{center}
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFontNFSS\undefined%
\gdef\SetFigFontNFSS#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(1374,1374)(1114,-2323)
\thinlines
{\color[rgb]{0,0,0}\multiput(1126,-961)(117.39130,0.00000){12}{\line( 1, 0){ 58.696}}
}%
{\color[rgb]{0,0,0}\put(1126,-961){\line( 1,-2){675}}
}%
{\color[rgb]{0,0,0}\put(1801,-2311){\line( 1, 2){675}}
}%
{\color[rgb]{0,0,0}\put(2476,-1636){\line(-1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(1126,-1636){\line( 1,-1){675}}
}%
{\color[rgb]{0,0,0}\put(1126,-1636){\line( 1, 0){1350}}
}%
{\color[rgb]{0,0,0}\put(1126,-961){\line( 0,-1){675}}
}%
{\color[rgb]{0,0,0}\multiput(2476,-961)(0.00000,-122.72727){6}{\line( 0,-1){ 61.364}}
}%
{\color[rgb]{0,0,0}\put(2476,-961){\line(-2,-1){1350}}
}%
\put(1126,-961){\circle*{60}} \put(866,-961){\makebox(0,0){$R_{\nu_\varepsilon(2)}'$}}
\put(1786,-861){\makebox(0,0){$(q-\varepsilon1)_3\not=3\not=p$}}
\put(3076,-1300){\makebox(0,0){$(q-\varepsilon1)_3>3$}}
\put(1801,-2311){\circle*{60}} \put(1951,-2311){\makebox(0,0){$2$}}
\put(2476,-961){\circle*{60}} \put(2576,-961){\makebox(0,0){$3$}}
\put(2476,-1636){\circle*{60}} \put(2576,-1636){\makebox(0,0){$p$}}
\put(1126,-1636){\circle*{60}} \put(866,-1636){\makebox(0,0){$R_{\nu_\varepsilon(1)}'$}}
\put(2476,-2311){\circle*{60}} \put(2746,-2311){\makebox(0,0){$R_{\nu_\varepsilon(3)}$}}
\end{picture}%
\vspace{1\baselineskip}
Let $G=A_2^\varepsilon(q)$. Set $R_{\nu_\varepsilon(1)}'= R_{\nu_\varepsilon(1)}\setminus\{2,3\}$,
and $R_{\nu_\varepsilon(2)}'= R_{\nu_\varepsilon(2)}\setminus\{2,3\}$. Assume first that
$(q-\varepsilon1)_3>3$. Then the set $\{2,3,p,R_{\nu_\varepsilon(1)}'\}$ is a clique in the compact
form for $GK(A_2^\varepsilon(q))$, while $R_{\nu_\varepsilon(2)}'$ and $R_{\nu_\varepsilon(3)}$ are
non-adjacent. If $R_{\nu_\varepsilon(2)}\not=\{2\}$ (i.~e., $q+\varepsilon1\not=2^k$ and
$R_{\nu_\varepsilon(2)}'\not=\varnothing$), then $p$ is the only vertex from the clique
$\{2,3,p,R_{\nu_\varepsilon(1)}'\}$ that is non-adjacent to both $R_{\nu_\varepsilon(2)}'$ and
$R_{\nu_\varepsilon(3)}$. Hence
$\Theta(G)=\{\{p,r_{\nu_\varepsilon(2)}\not=2,r_{\nu_\varepsilon(3)}\}\mid r_i\in R_i\}$ and
$\Theta'(G)=\varnothing$. If $R_{\nu_\varepsilon(2)}=\{2\}$ (i.~e., $q+\varepsilon1=2^k$ and
$R_{\nu_\varepsilon(2)}'=\varnothing$), then $\Theta(G)=\{\{r_{\nu_\varepsilon(3)}\}\mid
r_{\nu_\varepsilon(3)}\in R_{\nu_\varepsilon(3)}\}$ and
$\Theta'(G)=\{\{2\},\{p\},\{r_{\nu_\varepsilon(1)}\}\mid r_{\nu_\varepsilon(1)}\in
R_{\nu_\varepsilon(1)} \}$.
Now assume that $(q-\varepsilon1)_3=3$. Then the set $\{2,p,R_{\nu_\varepsilon(1)}'\}$ is a clique
in the compact form for $GK(A_2^\varepsilon(q))$, while $3$, $R_{\nu_\varepsilon(2)}'$, and
$R_{\nu_\varepsilon(3)}$ are pairwise non-adjacent. Since $p$ is the only vertex from the clique
$\{2,p,R_{\nu_\varepsilon(1)}'\}$ that is non-adjacent to $3$, $R_{\nu_\varepsilon(2)}'$, and
$R_{\nu_\varepsilon(3)}$, we obtain that
$\Theta(G)=\{\{3,p,r_{\nu_\varepsilon(2)}\not=2,r_{\nu_\varepsilon(3)}\}\mid r_i\in R_i\}$ if
$R_{\nu_\varepsilon(2)}\not=\{2\}$, and $\Theta(G)=\{\{3,p,r_{\nu_\varepsilon(3)}\}\mid
r_{\nu_\varepsilon(3)}\in R_{\nu_\varepsilon(3)}\}$ if $R_{\nu_\varepsilon(2)}=\{2\}$. In both
cases $\Theta'(G)=\varnothing$.
Finally, assume that $(q-\varepsilon1)_3=1$, i.~e., either $(q+\varepsilon1)_3>1$ and $3\in
R_{\nu_\varepsilon(2)}\not=\{2\}$, or $p=3$. As above we have that the set
$\{2,p,R_{\nu_\varepsilon(1)}'\}$ is a clique in the compact form for $GK(A_2^\varepsilon(q))$,
while $R_{\nu_\varepsilon(2)}'$ and $R_{\nu_\varepsilon(3)}$ are pairwise non-adjacent. Since $p$
is the only vertex from the clique $\{2,p,R_{\nu_\varepsilon(1)}'\}$ that is non-adjacent to
$R_{\nu_\varepsilon(2)}'$ and $R_{\nu_\varepsilon(3)}$, and since either $3\in
R_{\nu_\varepsilon(2)}$ or $p=3$, we obtain that
$\Theta(G)=\{\{p,r_{\nu_\varepsilon(2)}\not=2,r_{\nu_\varepsilon(3)}\}\mid r_i\in
R_i\setminus\{2\}\}$ and $\Theta'(G)=\varnothing$ if $R_{\nu_\varepsilon(2)}\not=\{2\}$, and
$\Theta(G)=\{\{r_{\nu_\varepsilon(3)}\}\mid r_{\nu_\varepsilon(3)}\in R_{\nu_\varepsilon(3)}\}$ and
$\Theta'(G)=\{\{p\},\{r_{\nu_\varepsilon(1)}\},\{2=r_{\nu_\varepsilon(2)}\}\mid
r_{\nu_\varepsilon(1)}\in R_{\nu_\varepsilon(1)}\}$ if $R_{\nu_\varepsilon(2)}=\{2\}$.
\end{proof}
Below we give Tables~\ref{LinearUnitaryTable}, \ref{ClassicTable}, \ref{ExceptTable}. These tables
are organized in the following way. Column~1 represents a group of Lie type $G$ with the base field
of order $q$ and characteristic $p$, Column~2 contains conditions on $G$, and Column~3 contains the
value of $t(G)$. In Columns~4 and~5 we list the elements of $\Theta(G)$ and $\Theta'(G)$, that is,
the sets $\theta(G)\in\Theta(G)$ and $\theta'(G)\in\Theta'(G)$, and omit the braces for one-element
sets. In particular, the item $\{p,3,r_2\not=2,r_3\}$ in Column~4 means
$\Theta(G)=\{\{p,3,r_2,r_3\}\mid r_2\in R_2\setminus\{2\},r_3\in R_3\}$ and the item $p,r_4$ in
Column~5 means $\Theta'(G)=\{\{p\},\{r_4\}\mid r_4\in R_4\}$.
\newpage
\begin{tab}\label{LinearUnitaryTable}{\bfseries Cocliques for finite simple linear and unitary groups}
\smallskip
{\small \noindent\begin{tabular}{|c|l|c|c|c|}
\hline
$G$ & Conditions & $t(G)$ & $\Theta(G)$ & $\Theta'(G)$\\
\hline
$A_1(q)$&$q>3$&$3$&$\{p,r_1,r_2\}$&$\varnothing$\\ \hline
$A_2(q)$&$(q-1)_3=3$, $q+1\not=2^k$& $4$&$\{p,3,r_2\not=2,r_3\}$&$\varnothing$ \\
& $(q-1)_3=3$, $q+1=2^k$& $3$&$\{3,p,r_3\}$&$\varnothing$ \\
&$(q-1)_3\not=3$, $q+1\not=2^k$& $3$&$\{p,r_2\not=2,r_3\}$&$\varnothing$ \\
&$(q-1)_3\not=3$, $q+1=2^k$& $2$&$r_3$&$p,r_1,2=r_2$ \\ \hline
$A_3(q)$&$(q-1)_2\not=4$&$3$&$\{p,r_3,r_4\}$&$\varnothing$\\
&$(q-1)_2=4$&$3$&$\{r_3,r_4\}$&$p,2$\\ \hline
$A_4(q)$& $(q-1)_5\neq5$ &$3$&$\{r_4,r_5\}$&$p,r_3$\\
& $(q-1)_5=5$ &$3$&$\{r_4,r_5\}$&$5,p,r_3$\\
\hline
$A_5(q)$&$q=2$&$3$&$\{r_3,r_4,r_5\}$&$\varnothing$\\
&$q>2$ and $(q-1)_3\neq3$ &$3$&$r_5$&$\{p,r_6\}$,$\{r_3,r_4\}$, \\
& & & & $\{r_4,r_6\}$\\
&$(q-1)_3=3$ &$3$&$r_5$&$\{p,r_6\}$,$\{r_3,r_4\}$, \\
& & & & $\{r_4,r_6\}$, $\{3,r_6\}$\\ \hline
$A_{n-1}(q),$ &$n$ is odd and $q\not=2$&$[\frac{n+1}{2}]$&$\{r_i\mid \frac{n}{2}< i\le n\}$&$\varnothing$\\
$n\ge7$& for $7\le n\le 11$&&&\\
& $n$ is even and
$q\not=2$&$[\frac{n+1}{2}]$&$\{ r_{i}\mid \frac{n}{2}< i<
n\}$&$r_{\frac{n}{2}}$,$r_n$\\
& for $8\le n\le12$&&&\\
&$n=7$, $q=2$&$3$&$\{r_5,r_7\}$&$r_3,r_4$\\
&$n=8$, $q=2$&$3$&$r_7$& $\{p,r_8\}$,$\{r_5,r_8\}$, \\
&&&&$\{r_3,r_8\}$,$\{r_4,r_5\}$\\
&$n=9$, $q=2$&$4$&$\{r_5,r_7,r_8,r_9\}$&$\varnothing$\\
&$n=10$, $q=2$&$4$&$\{r_7,r_9\}$&$\{r_4,r_{10}\}$,$\{r_8,r_{10}\}$\\
&&&&$\{r_5,r_{8}\}$\\
&$n=11$, $q=2$&$5$&$\{r_7,r_8,r_9,r_{11}\}$&$r_5,r_{10}$ \\
&$n=12$, $q=2$&$6$&$\{r_7,r_8,r_9,r_{10},r_{11},r_{12}\}$&$\varnothing$\\ \hline
${}^2A_2(q)$,&$(q+1)_3=3$, $q-1\not=2^k$&$4$&$\{p,3,r_{1}\not=2,r_{6}\}$&$\varnothing$\\
$q>2$&$(q+1)_3=3$, $q-1=2^k$&$3$&$\{3,p,r_{6}\}$&$\varnothing$\\
&$(q+1)_3\not=3$, $q-1\not=2^k$&$3$&$\{p,r_{1}\not=2,r_{6}\}$&$\varnothing$\\
&$(q+1)_3\not=3$, $q-1=2^k>2$&$2$&$r_{6}$&$p, r_{2},2=r_{1}$\\
&$q=3$&$2$&$r_{6}$&$p, r_{2}=2$\\ \hline
${}^2A_3(q)$&$(q+1)_2\not=4$ and $q\neq2$ &$3$&$\{p,r_{6},r_{4}\}$&$\varnothing$\\
&$(q+1)_2=4$&$3$&$\{r_{6},r_{4}\}$&$p,2$\\
& $q=2$ & $2$ & $r_4$ & $p,r_2$\\ \hline
${}^2A_4(q)$& $q=2$ &$3$&$\{p,r_{4},r_{10}\}$&$\varnothing$\\
& $q>2$ and $(q+1)_5\neq5$ &$3$&$\{r_{4},r_{10}\}$&$p,r_{6}$\\
& $(q+1)_5=5$ &$3$&$\{r_{4},r_{10}\}$&$5,p,r_{6}$\\
\hline
${}^2A_5(q)$&$q=2$&$3$&$\{r_{10},r_{3}\}$&$3,p,r_{4}$\\
& $(q+1)_3\neq3$&$3$&$r_{10}$&$\{p,r_{3}\}$,$\{r_{6},r_{4}\}$, \\
& & & & $\{r_{4},r_{3}\}$\\
& $q>2$ and $(q+1)_3=3$&$3$&$r_{10}$&$\{p,r_{3}\}$,$\{r_{6},r_{4}\}$, \\
& & & & $\{r_{4},r_{3}\}$, $\{3,r_{3}\}$\\
\hline
${}^2A_{n-1}(q)$,&$n$ is odd&$[\frac{n+1}{2}]$&$\{r_i\mid \frac{n}{2}< \nu(i)\le n\}$&$\varnothing$\\
$n\ge7$&$n$ is even &$[\frac{n+1}{2}]$&$\{r_i\mid \frac{n}{2}< \nu(i)< n\}$&$r_{\nu(\frac{n}{2})},r_{\nu(n)}$\\
\hline
\end{tabular}}
\end{tab}
\newpage
\begin{tab}\label{ClassicTable}{\bfseries Cocliques for finite simple symplectic and orthogonal groups}
\smallskip
{\small \noindent\begin{tabular}{|c|l|c|c|c|}
\hline
$G$ & Conditions & $t(G)$ & $\Theta(G)$ & $\Theta'(G)$\\
\hline
$B_n(q)$ or & $n=2$, $q=3$ & 2 & $r_4$ & $p,r_2$\\
$C_n(q)$ & $n=2$, $q>3$ & 2 & $r_4$ & $p,r_1,r_2$\\
& $n=3$ and $q=2$ & $2$ & $r_3$ & $p,r_2,r_4$\\
& $n=3$ and $q>2$ & $3$ & $\{r_3,r_6\}$ & $p,r_4$\\
& $n=4$ and $q=2$ & $3$ & $\{r_3,r_4,r_8\}$ & $\varnothing$\\
& $n=5$ and $q=2$ & $4$ & $\{r_5,r_8,r_{10}\}$ & $r_3,r_4$\\
& $n=6$ and $q=2$ & $5$ & $\{r_3,r_5,r_8,r_{10},r_{12}\}$ & $\varnothing$\\
& $n=7$ and $q=2$ & $6$ & $\{r_5,r_7,r_{10},r_{12},r_{14}\}$ & $r_3,r_8$\\
& $n>3$, $n\equiv{0,1}(\mod 4)$ and & $\left[\frac{3n+5}{4}\right]$ & $\{r_i\mid
\frac{n}{2}\leqslant\eta(i)\leqslant n\}$ &
$\varnothing$\\
& $(n,q)\neq(4,2),(5,2)$ & & &\\
& $n>3$, $n\equiv{2}(\mod 4)$ and & $\left[\frac{3n+5}{4}\right]$ & $\{r_i\mid \frac{n}{2}<\eta(i)\leqslant n\}$ & $r_{n/2},r_n$\\
& $(n,q)\neq(6,2)$ & & &\\
& $n>3$, $n\equiv{3}(\mod 4)$ and & $\left[\frac{3n+5}{4}\right]$ & $\{r_i\mid \frac{n+1}{2}<\eta(i)\leqslant n\}$ &
$r_{(n-1)/2},r_{n-1},$\\
& $(n,q)\neq(7,2)$ & & & $r_{n+1}$\\
\hline
$D_n(q)$ & $n=4$, $q=2$ & 2 & $r_3$ & $p,r_2,r_4$\\
& $n=4$ and $q>2$ & $3$ & $\{r_3,r_6\}$ & $p,r_4$\\
& $n=5$ and $q=2$ & $4$ & $\{r_3,r_4,r_5,r_8\}$ & $\varnothing$\\
& $n=6$ and $q=2$ & $4$ & $\{r_3,r_5,r_8,r_{10}\}$ & $\varnothing$\\
& $n>4$, $n\equiv{0}(\mod 4)$ & $\left[\frac{3n+1}{4}\right]$ & $\{r_i\mid \frac{n}{2}\leqslant\eta(i)\leqslant n,$ & $\varnothing$\\
& & & $i\neq2n\}$ & \\
& $n>4$, $n\equiv{1}(\mod 4)$ and & $\left[\frac{3n+1}{4}\right]$ & $\{r_i\mid \frac{n}{2}<\eta(i)\leqslant n,$ & $r_{n-1},r_{n+1}$\\
& $(n,q)\neq(5,2)$ & & $i\neq2n,n+1\}$ &\\
& $n>4$, $n\equiv{2}(\mod 4)$ and & $\left[\frac{3n+1}{4}\right]$ & $\{r_i\mid \frac{n}{2}<\eta(i)\leqslant n,$ & $r_{n/2},r_{n}$\\
& $(n,q)\neq(6,2)$ & & $i\neq2n\}$ &\\
& $n>4$, $n\equiv{3}(\mod 4)$ & $\frac{3n+3}{4}$ & $\{r_i\mid \frac{n-1}{2}\leqslant\eta(i)\leqslant n,$ & $\varnothing$\\
& & & $i\neq2n,n-1\}$ &\\
\hline
${}^2D_n(q)$ & $n=4$, $q=2$ & 3 & $\{r_3,r_8\}$ & $p,r_4$\\
& $n=4$ and $q>2$ & $4$ & $\{r_3,r_6,r_8\}$ & $p,r_4$\\
& $n=5$ and $q=2$ & $3$ & $\{r_8,r_{10}\}$ & $p,r_3,r_4$\\
& $n=6$ and $q=2$ & $5$ & $\{r_5,r_8,r_{10},r_{12}\}$ & $r_3,r_4$\\
& $n=7$ and $q=2$ & $5$ & $\{r_5,r_{10},r_{12},r_{14}\}$ & $r_3,r_8$\\
& $n>4$, $n\equiv{0}(\mod 4)$ and & $\left[\frac{3n+4}{4}\right]$ & $\{r_i\mid \frac{n}{2}\leqslant\eta(i)\leqslant n\}$ & $\varnothing$\\
& $n>4$, $n\equiv{1}(\mod 4)$ and & $\left[\frac{3n+4}{4}\right]$ & $\{r_i\mid \frac{n}{2}<\eta(i)\leqslant n,$
& $r_{(n+1)/2},r_{n-1}$\\
& $(n,q)\neq(5,2)$ & & $i\neq n,\frac{n+1}{2}\}$ & \\
& $n>4$, $n\equiv{2}(\mod 4)$ and & $\left[\frac{3n+4}{4}\right]$ & $\{r_i\mid \frac{n}{2}<\eta(i)\leqslant n\}$ &
$r_{n/2},r_{n-2},r_n$\\
& $(n,q)\neq(6,2)$ & & & \\
& $n>4$, $n\equiv{3}(\mod 4)$ and & $\left[\frac{3n+4}{4}\right]$ & $\{r_i\mid \frac{n-1}{2}\leqslant\eta(i)\leqslant n,$ & $\varnothing$\\
& $(n,q)\neq(7,2)$ & &$i\neq n,\frac{n-1}{2}\}$ &\\
\hline
\end{tabular}}
\end{tab}
\newpage
\begin{tab}\label{ExceptTable}{\bfseries Cocliques for finite simple exceptional groups}
\smallskip
{\small \noindent\begin{tabular}{|c|l|c|c|c|}
\hline
$G$ & Conditions & $t(G)$ & $\Theta(G)$ & $\Theta'(G)$\\
\hline
$G_2(q)$ & $q=3,4$ &3& $\{r_3,r_6\}$ & $p,r_2$ \\
& $q=8$ &3& $\{r_3,r_6\}$ & $p,r_1$ \\
& $q=3^m>3$ &3& $\{r_3,r_6\}$ & $p,r_1,r_2$ \\
& $q\equiv{1}(\mod 3)$ and $q\neq4$ & 3 & $\{r_3,r_6\}$ & $p,r_2,r_1\neq3$\\
& $q\equiv{2}(\mod 3)$ and $q\neq8$ & 3 & $\{r_3,r_6\}$ & $p,r_1,r_2\neq3$\\ \hline
$F_4(q)$&$q=2$&4&$\{r_3,r_4,r_8,r_{12}\}$ & $\varnothing$ \\
& $q>2$&5&$\{r_3,r_4,r_6,r_8,r_{12}\}$ & $\varnothing$ \\ \hline
$E_6(q)$&$q=2$&5&$\{r_4,r_5,r_8,r_9\}$ & $r_3,r_{12}$\\
&$q>2$&5&$\{r_5,r_8,r_9\}$ & $\{r_3,r_4\}$, $\{r_4,r_{12}\}$, \\
& & & & $\{r_6,r_{12}\}$ \\ \hline
${}^2E_6(q)$ & $q=2$ & 5 & $\{r_8,r_{10},r_{12},r_{18}\}$ & $r_3,r_4$\\
& $q>2$ & 5 & $\{r_8,r_{10},r_{18}\}$ & $\{r_3,r_{12}\}$, $\{r_4,r_6\}$, \\
& & & & $\{r_4,r_{12}\}$ \\
\hline
$E_7(q)$&&8&$\{r_5,r_7,r_9,r_{10},$ & $r_4,r_8$\\
&&&$r_{12},r_{14},r_{18}\}$&\\ \hline
$E_8(q)$&&12&$\{r_5,r_7,r_8,r_9,r_{10},r_{12},$ & $\varnothing$\\
&&&$r_{14},r_{15},r_{18},r_{20},r_{24},r_{30}\}$&\\
\hline
${}^3D_4(q)$& $q=2$ & 2 & $r_{12}$ & $p,r_2,r_3$\\
& $q>2$ & 3 & $\{r_3,r_6,r_{12}\}$ & $\varnothing$\\
\hline
${^2B_2(2^{2n+1})}$&$n\ge1$&4&$\{2,s_1,s_2,s_3\}$ & $\varnothing$\\
\hline
${^2G_2(3^{2n+1})}$&$n\ge1$&5&$\{3,s_1,s_2,s_3,s_4\}$ & $\varnothing$\\
\hline
${}^2F_4(2^{2n+1})$&$n\ge2$,&$5$&$\{s_2,s_3,s_4,s_5,s_6\}$ & $\varnothing$\\
\hline
${}^2F_4(8)$ & & $4$&$\{s_5,s_6\}$ & $\{3,s_3\}$, $\{s_1,s_4\}$,\\
&&&&$\{2,s_4\}$, $\{s_3,s_4\}$\\\hline
${}^2F_4(2)'$ & & $3$&$\{3,5,13\}$ & $\varnothing$\\ \hline
\end{tabular}}
\end{tab}
\newpage
\section{Appendix}\label{appendix}
In this section we give a list of corrections for~\cite{VasVd} which we obtain in the present paper.
Items (4), (5), (9) of Lemma 1.3 should be substituted by items (1), (2), (3) of Lemma
\ref{toriofexcptgrps} of the present paper respectively.
Lemma 1.4 should be substituted by Lemma \ref{Zsigmondy Theorem}.
Lemma 1.5 should be substituted by Lemma \ref{SuzReeDivisors}.
Proposition 2.3 should be substituted by Proposition \ref{adjbn}.
Proposition 2.4 should be substituted by Proposition \ref{adjdn}.
Proposition 2.5 should be substituted by Proposition \ref{adjexcept}.
In Tables 4 and 8 the following corrections are necessary.
The lines
\begin{longtable}{|c|c|c|c|}
$A_{n-1}(q)$&$n=3$, $(q-1)_3=3$, and $q+1\not=2^k$&$4$&$\{p,3,r_2,r_3\}$\\
&$n=3$, $(q-1)_3\not=3$, and $q+1\not=2^k$&$3$&$\{p,r_2,r_3\}$\\
\end{longtable}
should be substituted by the lines
\begin{longtable}{|c|c|c|c|}
$A_{n-1}(q)$&$n=3$, $(q-1)_3=3$, and $q+1\not=2^k$&$4$&$\{p,3,r_2\not=2,r_3\}$\\
&$n=3$, $(q-1)_3\not=3$, and $q+1\not=2^k$&$3$&$\{p,r_2\not=2,r_3\}$\\
\end{longtable}
The lines
\begin{longtable}{|c|c|c|c|}
${}^2A_{n-1}(q)$&$n=3$, $(q+1)_3=3$, and $q-1\not=2^k$&$4$&$\{p,3,r_1,r_6\}$\\
&$n=3$, $(q+1)_3\not=3$, and $q-1\not=2^k$&$3$&$\{p,r_1,r_6\}$\\
\end{longtable}
should be substituted by the lines
\begin{longtable}{|c|c|c|c|}
${}^2A_{n-1}(q)$&$n=3$, $(q+1)_3=3$, and $q-1\not=2^k$&$4$&$\{p,3,r_1\not=2,r_6\}$\\
&$n=3$, $(q+1)_3\not=3$, and $q-1\not=2^k$&$3$&$\{p,r_1\not=2,r_6\}$\\
\end{longtable}
In Table 4 in the penultimate line corresponding to $D_n(q)$ instead of
$n\equiv 1\pmod 1$, $n>4$ there should be $n\equiv 1\pmod 2$, $n>4$.
In Table 8 the following corrections are necessary.
The line
\begin{longtable}{|c|c|c|c|}
$D_n(q)$&$n\ge 4$, $(n,q)\not=(4,2),(5,2),(6,2)$&$\left[\frac{3n+1}{4}\right]$&$\{r_{2i}\mid \left[\frac{n+1}{2}\right]\le
i< n\}\cup$\\
&&&$\cup\{r_i\mid \left[\frac{n}{2}\right]<i\le
n,$\\
&&&$i\equiv 1\pmod2\}$
\end{longtable}
should be substituted by
\begin{longtable}{|c|c|c|c|}
$D_n(q)$&$n\ge 4$, $n\not\equiv3\pmod4$,&$\left[\frac{3n+1}{4}\right]$&$\{r_{2i}\mid
\left[\frac{n+1}{2}\right]\le
i< n\}\cup$\\
&$(n,q)\not=(4,2),(5,2),(6,2)$&&$\cup\{r_i\mid \left[\frac{n}{2}\right]<i\le
n,$\\
&&&$i\equiv 1\pmod2\}$\\
&$n\equiv3\pmod4$&$\frac{3n+3}{4}$&$\{r_{2i}\mid \left[\frac{n+1}{2}\right]\le
i< n\}\cup$\\
&&&$\cup\{r_i\mid \left[\frac{n}{2}\right]\le i\le
n,$\\
&&&$i\equiv 1\pmod2\}$
\end{longtable}
In Table 8 the line
\begin{longtable}{|c|c|c|c|}
${}^2D_n(q)$&$n\ge 4$, $n\not\equiv1\pmod
4,$&$\left[\frac{3n+4}{4}\right]$&$\{r_{2i}\mid \left[\frac{n}{2}\right]\le
i\le n\}\cup$\\
&$(n,q)\not=(4,2),(6,2),(7,2)$&&$\cup\{r_i\mid \left[\frac{n}{2}\right]<i\le
n,$\\
&&&$i\equiv 1\pmod2\}$
\end{longtable}
should be substituted by the line
\begin{longtable}{|c|c|c|c|}
${}^2D_n(q)$&$n\ge 4$, $n\not\equiv1\pmod
4,$&$\left[\frac{3n+4}{4}\right]$&$\{r_{2i}\mid \left[\frac{n}{2}\right]\le
i\le n\}\cup$\\
&$(n,q)\not=(4,2),(6,2),(7,2)$&&$\cup\{r_i\mid \left[\frac{n}{2}\right]<i<
n,$\\
&&&$i\equiv 1\pmod2\}$
\end{longtable}
In Table 9 the following corrections are necessary.
The line
\begin{longtable}{|c|c|c|c|}\hline
$E_6(q)$&$q=2$&5&$\{5,12,17,19,31\}$\\
&$q>2$&$6$&$\{r_4,r_5,r_6,r_8,r_9,r_{12}\}$\\ \hline
\end{longtable}
should be substituted by the line
\begin{longtable}{|c|c|c|c|}\hline
$E_6(q)$&none&5&$\{r_4,r_5,r_8,r_9,r_{12}\}$\\ \hline
\end{longtable}
The line
\begin{longtable}{|c|c|c|c|}\hline
$E_7(q)$&none&7&$\{r_7,r_8,r_9,r_{10},r_{12},r_{14},r_{18}\}$\\ \hline
\end{longtable}
should be substituted by the line
\begin{longtable}{|c|c|c|c|}\hline
$E_7(q)$&none&8&$\{r_5,r_7,r_8,r_9,r_{10},r_{12},r_{14},r_{18}\}$\\ \hline
\end{longtable}
The line
\begin{longtable}{|c|c|c|c|}\hline
$E_8(q)$&none&11&$\{r_7,r_8,r_9,r_{10},r_{12},r_{14},r_{15},r_{18},r_{20},r_{24},
r_{30}\} $\\ \hline
\end{longtable}
should be substituted by the line
\begin{longtable}{|c|c|c|c|}\hline
$E_8(q)$&none&12&$\{r_5,r_7,r_8,r_9,r_{10},r_{12},r_{14},r_{15},r_{18},r_{20},r_{24},r_{30}\}$\\ \hline
\end{longtable}
\newpage
We consider the lattice graph $(\Z^d,\B^d)$, $d>2$, where $\B^d$ denotes the set of nearest-neighbor
edges. Given a stationary and ergodic probability measure
$\expec{\cdot}$ on $\Omega$ -- the space of conductance fields
$\aa:\B^d\to[0,1]$ -- we study the \textit{corrector equation} from stochastic
homogenization, i.e.~the elliptic difference equation
\begin{equation}\label{eq:cor-equation-phys}
\nabla^*(\aa(\nabla\phi+e))=0,\qquad x\in\Z^d.
\end{equation}
Here, $\nabla$ and $\nabla^*$ denote discrete versions of the continuum gradient
and (negative) divergence, cf. Section~\ref{S2}, and $e\in\R^d$ denotes
a vector of unit length, which is fixed throughout the paper. The corrector equation \eqref{eq:cor-equation-phys} emerges in the homogenization of discrete elliptic equations with random
coefficients: For random conductances that are stationary and
ergodic (with respect to the shifts $\aa(\cdot)\mapsto\aa(\cdot+z)$,
$z\in\Z^d$, cf. Section~\ref{S2}), and under the assumption of uniform
ellipticity (i.e.~there exists $\lambda_0>0$ such
that $\aa\geq \lambda_0$ on $\B^d$ almost surely), a classical result from stochastic homogenization (e.~g. see \cite{Kozlov-87, Kunnemann-83}) shows that the effective behavior of $\nabla^*\aa\nabla$ on large length scales is captured by
the homogenized elliptic operator $\nabla^*\aa_{\hom}\nabla$ where
$\aa_{\hom}$ is a
deterministic, symmetric and positive definite $d\times d$ matrix. It is characterized by
the minimization problem
\begin{equation}\label{eq:min}
e \cdot\aa_{\hom}e =\inf\limits_{\varphi}\expec{(e +\nabla\varphi)\cdot\aa(e +\nabla\varphi)},
\end{equation}
where the infimum is taken over random fields
$\varphi$ that are $\expec{\cdot}$-stationary in the sense of $\varphi(\aa,x+z)=\varphi(\aa(\cdot+z),x)$ for all
$x,z\in\Z^d$ and $\expec{\cdot}$-almost every $\aa\in\Omega$.
Minimizers to \eqref{eq:min} are called \textit{stationary
correctors} and are characterized as the stationary solutions to the corrector
equation \eqref{eq:cor-equation-phys}. Due to the lack of a Poincar\'e inequality for $\nabla$ on the infinite dimensional space of
stationary random fields, the elliptic operator $\nabla^*\aa\nabla$ is highly degenerate
and the minimum in \eqref{eq:min} may not be attained in general. In
fact, attainment is known to fail for $d=2$. The only existence result of a stationary corrector (in dimensions
$d>2$) has been obtained
recently in \cite{GO1} by Gloria and the third author under the
assumption that the $\aa$'s are uniformly elliptic, and that
$\expec{\cdot}$ satisfies a Spectral Gap Estimate, which is in
particular the case for independent and identically distributed coefficients. They also show that $\expec{|\phi|^p}\lesssim 1$ for all $p<\infty$.
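For the reader's convenience we recall the standard computation behind the characterization of minimizers to \eqref{eq:min} as stationary solutions of \eqref{eq:cor-equation-phys}: since \eqref{eq:min} is a convex, quadratic minimization problem, $\varphi$ is a minimizer if and only if the first variation vanishes, i.e.
\begin{equation*}
\expec{\nabla\psi\cdot\aa(e+\nabla\varphi)}=0\qquad\text{for all stationary random fields }\psi,
\end{equation*}
which, thanks to the stationarity of $\expec{\cdot}$, is precisely the weak formulation of \eqref{eq:cor-equation-phys}.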
\medskip
The goal of the present paper is to extend this result to the case of
conductances with degenerate ellipticity. To be definite, consider the probability
measure $\expec{\cdot}_{\lambda}$ constructed by the following procedure:
\begin{equation}\label{eq:modbernoulli}
\begin{aligned}
&\text{Take the classical $\{0,1\}$-Bernoulli-bond percolation on
$\B^d$
with parameter $\lambda\in(0,1]$}\\
&\text{and declare all bonds parallel to the coordinate direction
$e_1$ to be open.}
\end{aligned}
\end{equation}
(We adopt the convention
to call a bond ``open'' if the associated coefficient is ``$1$'',
while a bond is ``closed'' if the associated coefficient is
``$0$''. The parameter $\lambda$ denotes the probability that a bond
is ``open''). As for $d$-dimensional
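In formulas, the construction \eqref{eq:modbernoulli} amounts to the following product measure (the decomposition of $\B^d$ below is introduced only for this remark): writing $\B^d=\B^d_{1}\cup\B^d_{\perp}$, where $\B^d_{1}$ denotes the set of bonds parallel to $e_1$, under $\expec{\cdot}_\lambda$ the coordinates $(\aa(\bb))_{\bb\in\B^d}$ are independent with
\begin{equation*}
  \expec{\aa(\bb)}_\lambda=
  \begin{cases}
    1&\text{for }\bb\in\B^d_{1},\\
    \lambda&\text{for }\bb\in\B^d_{\perp}.
  \end{cases}
\end{equation*}
In particular, every $x\in\Z^d$ lies on the line $x+\Z e_1$ of open bonds, and any two adjacent such lines are almost surely joined by at least one open transversal bond, since infinitely many independent bonds, each open with probability $\lambda>0$, are available between them.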
Bernoulli percolation, $\expec{\cdot}_\lambda$ describes a random
graph of open bonds, which is locally
disconnected with positive probability. However, as a
merit of the modification, any two vertices in the random graph are
almost surely connected by some open path. As a main result we show that
\eqref{eq:cor-equation-phys} admits a stationary solution, all finite
moments of which are bounded:
\paragraph{Theorem (main result).} Let $d>2$ and $\lambda\in(0,1]$. There exists $\phi:\Omega\times\Z^d\to\R$ such
that for $\expec{\cdot}_\lambda$ almost every $\aa\in\Omega$ we have
\begin{itemize}
\item $\phi(\aa,\cdot)$ solves \eqref{eq:cor-equation-phys},
\item $\phi(\aa,\cdot+z)=\phi(\aa(\cdot+z),\cdot)$ for all $z\in\Z^d$,
\end{itemize}
and
\begin{equation*}
\forall p<\infty\,:\qquad
\expec{|\phi|^p}_\lambda^{\frac{1}{p}}\leq C.
\end{equation*}
Here $C$ denotes a constant that only depends on $p$, $\lambda$ and $d$.
\medskip
The modified Bernoulli percolation model $\expec{\cdot}_\lambda$ fits into a
slightly more general framework that we introduce in Section~\ref{S2}
below, cf. Lemma~\ref{L:regul-perc-model}. The result above will then
follow as a special case of Theorem~\ref{T1} stated below.
\medskip
\medskip
{\bf Relation to stochastic
homogenization.} Consider the decaying solution $u_\e:\Z^d\to\R$ to the
equation
\begin{equation*}
\nabla^*(\aa\nabla u_\e)(\cdot)=\e^2f(\e \cdot)\qquad\text{in }\Z^d,
\end{equation*}
where $f:\R^d\to\R$ is smooth, compactly supported, and $\aa$ is
distributed according to $\expec{\cdot}_\lambda$. Classical results of
stochastic
homogenization (see \cite{Papanicolaou-Varadhan-79}, \cite{Kozlov-79},
\cite{Kunnemann-83}) show that for almost every $\aa\in\Omega$ the
(piecewise constant interpolation of the) rescaled
function $u_\e(\tfrac{\cdot}{\e})$ converges as
$\e\downarrow 0$ to the
unique decaying solution $u_{\hom}:\R^d\to\R$ of the deterministic elliptic equation
\begin{equation*}
-\nabla\cdot(\aa_{\hom}\nabla u_{\hom})=f\qquad\text{in }\R^d.
\end{equation*}
Moreover, a formal two-scale expansion suggests that
\begin{equation}\label{eq:22}
u_\e(x)\approx u_{\hom}(\e x)+\e\sum_{j=1}^d\phi_{j}(x)\partial_ju_{\hom}(\e x),
\end{equation}
where $\phi_j$ denotes the (stationary) solution of
\eqref{eq:cor-equation-phys} for $e=e_j$ -- the $j$th coordinate
direction. The question how to \textit{quantify} the errors emerging
in this limiting process is rather subtle. Note that in the case of deterministic periodic homogenization, the
good compactness properties of the $d$-dimensional ``reference cell of periodicity''
yield a natural starting point for estimates. In contrast, in the
stochastic case the reference cell has to be replaced by the probability space
$(\Omega,\expec{\cdot})$, which has infinite dimensions and thus most
``periodic technologies'' break down. Nevertheless,
estimates for the homogenization error
$\|u_\e-u_{\hom}\|$ and related quantities have been obtained by
\cite{Yurinskii-76, Caputo-Ioffe-03, Bourgeat-04, Conlon-Spencer-13},
see also \cite{Cafarelli-Souganidis-10,Armstrong-Smart-13} for recent
results on fully nonlinear elliptic equations or equations in
non-divergence form.
While the asymptotic result of stochastic homogenization holds for general stationary and ergodic coefficients (at least in the uniformly elliptic
case), the derivation of error estimates requires a quantification of
ergodicity. In a series of papers (see \cite{GNO1,GNO3, GO1,GO2}) two of the authors and Gloria developed a
quantitative theory for the corrector equation
\eqref{eq:cor-equation-phys} (and regularized versions) based on the
assumption that the underlying statistics satisfies a Spectral Gap
Estimate (SG) for a Glauber dynamics on the coefficient fields. This
assumption is satisfied e.g. in the case of independent and identically distributed
(i.~i.~d.) coefficients. In \cite{GO1, GNO1} moment bounds for the corrector, similar to the
one in the present paper, have been obtained. These bounds are
at the basis of various optimal estimates; e.g. \cite{GNO1} contains a complete and optimal
analysis of the approximation of $\aa_{\hom}$ via periodic
representative volume elements, and \cite{GNO3} establishes optimal
estimates for the homogenization error and the expansion in \eqref{eq:22}.
While in the works mentioned above it is always assumed that the coefficients are
\textit{uniformly elliptic}, i.e. $\aa\in[\lambda_0,1]^{\B^d}$ for
some fixed $\lambda_0>0$, in the present paper we derive moment bounds for a
model with \textit{degenerate} elliptic coefficients. As in \cite{GO1,GNO1}, a
crucial element of our approach is an estimate on the \textit{gradient} of the \textit{elliptic Green's function} associated with
$\nabla^*\aa\nabla$. The required estimate is pointwise in $\aa$, but (dyadically) averaged
in space, and obtained by a self-contained and short argument, see Proposition~\ref{P1} below. It
extends the argument in \cite{GO1} to the degenerate elliptic case. Since in the degenerate case the elementary inequality $\lambda_0|\nabla
u|^2\leq \nabla u\cdot\aa\nabla u$ breaks down, we replace it by a weighted, integrated
version (see Lemma~\ref{lem:coercivity-phys} below). Compared to
more sophisticated methods that e.g. rely on isoperimetric properties
of the graph, an advantage of our approach is that it only invokes
simple geometrical properties, namely spatial averages (on balls) of the inverse of the chemical distance between nearest
neighbor vertices. We believe that our
approach extends (although not in a straightforward manner) to the case of standard supercritical Bernoulli
percolation. This is a question that we study in a work in progress.
\medskip
{\bf Connection to random walks in random environments (RWRE).}
Although, the main motivation of our work is quantitative
homogenization, we would like to comment on the connection to invariance principles for (RWRE). In fact, there
is a strong link between stochastic homogenization and (RWRE): The operator $\nabla^*(\aa\nabla)$ generates
a stochastic process, namely the variable-speed random walk
$X=(X_{\aa}(t))_{t\geq 0}$, which is a continuous-time random walk in the random
environment $\aa$. In the early work \cite{Kipnis-Varadhan-86} (see
also \cite{Kunnemann-83}) the authors considered general stationary
and ergodic environments. For uniformly elliptic coefficients they prove an \textit{annealed
invariance principle} for $X$, saying that the law of the rescaled process $\sqrt\e
X_{\aa}(\e^{-1}t)$ weakly converges to that of a Brownian motion with
covariance matrix $2\aa_{\hom}$. In \cite{Sidoravicius-Sznitman-04}
Sidoravicius and Sznitman prove a stronger \textit{quenched invariance
principle} for $X$, which says that the convergence even holds for
almost every environment $\aa$.
More recently, invariance principles have been obtained for
more general environments, see \cite{Biskup-11} and \cite{Kumagai-14} for recent surveys in
this direction. Most prominently, supercritical bond percolation on
$\Z^d$ has been considered: Here, the annealed result is due to
\cite{DeMasi-etal-1989}, while quenched results have been obtained in
\cite{Sidoravicius-Sznitman-04} for $d\geq 4$ and in
\cite{Berger-Biskup-07, Mathieu-Piatnitski-07} for $d\geq 2$. See
also \cite{Andres-etal-ta, Andres-Deuschel-Slowik-ta} for recent
related results on degenerate elliptic, possibly unbounded
conductances.
The main difficulty in proving a quenched invariance
principle compared to the annealed version is to establish a
\textit{quenched sublinear growth} property (see \eqref{eq:18} below)
for a corrector field $\chi$. The latter is closely related to the function
$\phi$ considered in Theorem~\ref{T1}, see the discussion below
Corollary~\ref{C1} for more details. In the uniformly elliptic case,
sublinearity of $\chi$ is obtained by soft arguments from
ergodic theory combined with a Sobolev embedding, see
\cite{Sidoravicius-Sznitman-04}. For supercritical Bernoulli
percolation the argument is more subtle: For $d\geq 3$ the proofs in
\cite{Sidoravicius-Sznitman-04,Berger-Biskup-07,
Mathieu-Piatnitski-07} use heat-kernel upper bounds (as deduced by
Barlow \cite{Barlow-04}) or other ``heat-kernel technologies'' (e.g. see
\cite{Biskup-Prescott-07, Andres-etal-ta, Andres-Deuschel-Slowik-ta}) that require a detailed
understanding of the geometry of the percolation cluster, and thus
require the use of sophisticated arguments from percolation theory
(e.g. isoperimetry, regular volume growth and comparison of chemical
and Euclidean distances). Conceptually, the use of
such fine arguments seems not to be necessary in the derivation of
quenched invariance principles. Motivated by this, in
\cite{Biskup-Prescott-07} and \cite{Andres-Deuschel-Slowik-ta}
different methods are employed with a reduced usage of heat-kernel technology.
Our approach yields, as a side-result, an alternative way to achieve
this goal: The quenched sublinear growth property can easily be
obtained from the moment bound derived in Theorem~\ref{T1}. In fact,
the estimate of Theorem~\ref{T1} is stronger: As we explain in the
discussion following Corollary~\ref{C1}, our moment bounds imply that the
growth of $\chi$ is not only sublinear, but \textit{slower than any rate}, see
\eqref{eq:21}. Of course, the environment considered in the
present paper, namely the modified percolation model
$\expec{\cdot}_\lambda$, is much simpler than
supercritical Bernoulli percolation. Nevertheless, it shares some of
the ``degeneracies'' featured by percolation; e.g. for every ball
$B\subset\Z^d$ with finite radius, Poincar\'e's inequality $\sum_{x\in
B}u^2(x)\leq C(\aa,B)\sum_{\bb\subset B}\aa(\bb)|\nabla u(\bb)|^2$ fails
with positive probability. Furthermore, in contrast to the above-mentioned results, our
argument requires only mild estimates on the Green's function. More
precisely, as already mentioned, we require an estimate on the \textit{gradient} of the
\textit{elliptic} Green's function, which -- in contrast to quenched heat kernel
estimates -- can be obtained by fairly simple arguments, see
Proposition~\ref{P1}. Of course, as is well known, estimates on the gradient of
the elliptic Green's function can also be obtained from estimates on the associated heat kernel by an
integration in time, and a subsequent application of Caccioppoli's
inequality. In particular, heat-kernel estimates in the spirit of the one obtained by Barlow in the case of
supercritical Bernoulli percolation, see \cite[Theorem~1]{Barlow-04},
would be sufficient to make this program work. Yet, since the elliptic
estimates that we require are less sensitive to the geometry of the
graph, and thus can be obtained by simpler arguments, we opt for a self-contained proof that only relies on elliptic
regularity theory. Another interesting, and -- as we believe -- advantageous property of our approach is that (thanks to the Spectral Gap
Estimate) probabilistic and deterministic considerations are well
separated, e.g. Proposition~\ref{P1} is pointwise in $\aa$ and does not
involve the ensemble.
\medskip
{\bf Structure of the paper.} In Section~\ref{S2} we gather basic
definitions and introduce the slightly more general framework studied
in this paper. We then present the main result in the general framework. Section~\ref{S3} is devoted to the proof of the main result: we first
discuss the general strategy of the proof and present several
auxiliary lemmas needed for the proof of the main theorem -- in
particular, the coercivity estimates, see Lemmas~\ref{lem:coercivity-phys}
and \ref{lem:coercivity-prob}, and an estimate
for the gradient of the elliptic Green's function, see
Proposition~\ref{P1}, which play a key
role in our argument. The proof of the main result is given at the end
of Section~\ref{S3}, while the auxiliary results are proven in
Section~\ref{S:proofs}.
\medskip
Throughout this article, we use the following notation, see
Section~\ref{S2} for more details:
\begin{itemize}
\item $d$ is the dimension;
\item $\Z^d$ is the integer lattice;
\item $(\ee_1,\dots,\ee_d)$ is the canonical basis of $\Z^d$;
\item $e\in\R^d$, which appears in \eqref{eq:cor-equation-phys},
denotes a vector of unit length and is fixed throughout the paper;
\item $\B^d:=\{\,\bb=\{x,x+e_i\}\,:\,x\in\Z^d,\,i=1,\ldots,d\,\}$ is the set of nearest neighbor bonds of $\Z^d$;
\item $B_R(x_0)$ is the cube of vertices $x\in x_0+([-R,R]\cap\Z)^d$;
\item $Q_R(x_0)$ is the cube of bonds $\bb=\{x, x+e_i\}\in\B^d$ with
$x\in B_{R}(x_0)$ and $i\in\{1,\ldots,d\}$;
\item $|A|$ denotes the number of elements in $A\subset\Z^d$ (resp. $A\subset\B^d$).
\end{itemize}
\section{General framework}\label{S2}
In the first part of this section, we introduce the general framework
following the presentation of \cite{GNO1}: We introduce a discrete differential
calculus, the random conductance model, and finally recall the standard definitions of the
corrector and the modified corrector.
\subsection{Lattice and discrete differential calculus}
We consider the lattice graph $(\Z^d,\B^d)$, where $\B^d:=\{\,\bb=\{x,x+e_i\}\,:\,x\in\Z^d,\,i=1,\ldots,d\,\}$ denotes the
set of nearest-neighbor bonds. We write $\ell^p(\Z^d)$ and $\ell^p(\B^d)$, $1\leq p\leq\infty$, for the
usual spaces of $p$-summable (resp. bounded for $p=\infty$) functions on $\Z^d$ and $\B^d$. For $u:\Z^d\to\R$ the
\textit{discrete derivative} $\nabla u(\bb)$, $\bb\in\B^d$, is
defined by the expression
\begin{gather*}
\nabla u(\bb):= u(y_{\bb})-u(x_{\bb}).
\end{gather*}
Here $x_{\bb}$ and $y_{\bb}$ denote the unique vertices with
$\bb=\{x_{\bb},y_{\bb}\}\in\B^d$ satisfying
$y_{\bb}-x_{\bb}\in\{e_1,\ldots,e_d\}$.
We denote by $\nabla^*$ the adjoint of $\nabla$, so that we have for $F:\B^d\to\R$
\begin{equation*}
\nabla^*F(x)=\sum_{i=1}^dF(\{x-e_i,x\})-F(\{x,x+e_i\}).
\end{equation*}
Furthermore, the discrete integration by parts formula reads
\begin{equation}\label{int-by-parts}
\sum_{\bb\in\B^d}\nabla u(\bb) F(\bb)= \sum_{x\in\Z^d}u(x)\nabla^*F(x),
\end{equation}
and holds whenever the sums converge.
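The discrete calculus above can be checked mechanically. The following Python sketch (our illustration, not part of the mathematical development; the box size and random data are arbitrary) implements $\nabla$ and $\nabla^*$ on a finite portion of $\Z^2$ for compactly supported functions and verifies \eqref{int-by-parts} on random data:

```python
import itertools, random

d, N = 2, 6  # dimension 2, u supported in [-N, N]^2
sites = list(itertools.product(range(-N, N + 1), repeat=d))
# bonds b = (x, i) represent {x, x + e_i}; include one extra layer so that
# every bond touching the support of u is present
bonds = [(x, i) for x in itertools.product(range(-N - 1, N + 1), repeat=d)
         for i in range(d)]

def shift(x, i, s=1):
    return tuple(x[j] + (s if j == i else 0) for j in range(d))

random.seed(0)
u = {x: random.uniform(-1, 1) for x in sites}   # compactly supported u
F = {b: random.uniform(-1, 1) for b in bonds}

def grad_u(b):    # discrete derivative: grad u(b) = u(y_b) - u(x_b)
    x, i = b
    return u.get(shift(x, i), 0.0) - u.get(x, 0.0)

def div_star_F(x):  # adjoint: div* F(x) = sum_i F({x-e_i,x}) - F({x,x+e_i})
    return sum(F.get((shift(x, i, -1), i), 0.0) - F.get((x, i), 0.0)
               for i in range(d))

lhs = sum(grad_u(b) * F[b] for b in bonds)
rhs = sum(u[x] * div_star_F(x) for x in sites)
assert abs(lhs - rhs) < 1e-10   # discrete integration by parts
```

Since $u$ is compactly supported and the bond set contains every bond touching its support, both sums are finite and the identity holds exactly, up to floating-point roundoff.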
\medskip
\subsection{Random conductance field} To each bond $\bb\in\B^d$ a \textit{conductance} $\aa(\bb)\in[0,1]$ is
attached. Hence, a \textit{configuration} of the lattice is described by a \textit{conductance field}
$\aa\in\Omega$, where $\Omega:=[0,1]^{\B^d}$ denotes the
\textit{configuration space}. Given $\aa\in\Omega$ we define the
chemical distance between vertices
$x,y\in\Z^d$ by
\begin{equation*}
\dist_{\aa}(x,y):=\inf\left\{\,\sum_{\bb\in\pi}\aa(\bb)^{-1}\ :\
\pi\text{ is a path from $x$ to $y$}\ \right\}\qquad(\text{where }\tfrac{1}{0}:=+\infty).
\end{equation*}
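Computationally, $\dist_{\aa}$ is a shortest-path distance with edge weights $\aa(\bb)^{-1}$ and can be evaluated by Dijkstra's algorithm. The following Python sketch (our illustration; the box size and the law of the conductances are arbitrary choices) computes it on a finite box, treating bonds with $\aa(\bb)=0$ as absent:

```python
import heapq, itertools, random

d, N = 2, 10
random.seed(1)

def weight():
    # conductance in [0,1]; with probability 1/4 the bond is closed (a = 0)
    return 0.0 if random.random() < 0.25 else random.uniform(0.1, 1.0)

sites = set(itertools.product(range(-N, N + 1), repeat=d))
a = {}
for x in sites:
    for i in range(d):
        y = tuple(x[j] + (1 if j == i else 0) for j in range(d))
        if y in sites:
            a[frozenset((x, y))] = weight()

def chemical_dist(x0, y0):
    """dist_a(x0, y0): inf over paths of sum of 1/a(b); no path -> +inf."""
    dist, heap = {x0: 0.0}, [(0.0, x0)]
    while heap:
        dx, x = heapq.heappop(heap)
        if x == y0:
            return dx
        if dx > dist.get(x, float("inf")):
            continue
        for i in range(d):
            for s in (1, -1):
                y = tuple(x[j] + (s if j == i else 0) for j in range(d))
                b = frozenset((x, y))
                if b in a and a[b] > 0:
                    dy = dx + 1.0 / a[b]
                    if dy < dist.get(y, float("inf")):
                        dist[y] = dy
                        heapq.heappush(heap, (dy, y))
    return float("inf")   # consistent with the convention 1/0 = +infinity

d01 = chemical_dist((0, 0), (1, 0))
assert d01 >= 1.0   # every open bond has a(b) <= 1, so each edge costs >= 1
```

Since the search is restricted to the box, the computed value only overestimates $\dist_\aa$; this is irrelevant for the illustration.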
We equip $\Omega$ with the product topology (induced
by $[0,1]\subset\R$) and the usual product $\sigma$-algebra, and
describe \textit{random configurations} by means of a probability
measure on $\Omega$, called the \textit{ensemble}. The associated expectation is denoted by
$\expec{\cdot}$.
\medskip
Our assumptions on $\expec{\cdot}$ are the following:
\begin{assumption}\label{A}
\begin{itemize}
\item[]
\item[(A1)] (\textit{Stationarity}). The \textit{shift operators} $\Omega\ni\aa\mapsto\aa(\cdot+z)\in\Omega$, $z\in\Z^d$ preserve the measure $\expec{\cdot}$. (For a bond $\bb=\{x,y\}\in\B^d$ and $z\in\Z^d$ we write
$\bb+z:=\{x+z,y+z\}$ for the \textit{shift} of $\bb$ by $z$.)
\item[(A2)] (\textit{Moment condition}). There exists a modulus of
integrability $\Lambda:[1,\infty)\to[0,\infty)$ such that the chemical distance between neighboring vertices has moments of all orders, in the
sense that
\begin{equation*}
\forall p<\infty\ :\
\max_{i=1,\ldots,d}\expec{(\dist_{\aa}(0,e_i))^p}^{\frac{1}{p}}\leq\Lambda(p).
\end{equation*}
\item[(A3)] (\textit{Spectral Gap Estimate}). There exists a
constant $\rho>0$ such that for all $\zeta\in
L^2(\Omega)$ we have
\begin{equation*}
\expec{(\zeta-\expec{\zeta})^2}\leq \frac{1}{\rho}\sum_{\bb\in\B^d}\expec{\left(\frac{\partial\zeta}{\partial\bb}\right)^2},
\end{equation*}
where $\frac{\partial\zeta}{\partial\bb}$ denotes the
\textit{vertical derivative} as defined in
Definition~\ref{D:vertical} below.
\end{itemize}
For technical reasons we need to strengthen (A2):
\begin{itemize}
\item[(A2+)] We assume that
\begin{equation*}
\forall p<\infty\ :\
\max_{i=1,\ldots,d}\expec{(\dist_{\aa^{e_i,0}}(0,e_i))^p}^{\frac{1}{p}}\leq\Lambda(p),
\end{equation*}
where $\aa^{e_i,0}$ denotes the conductance field obtained by
``deleting'' the bond $\{0,e_i\}$ (i.~e. $\aa^{e_i,0}(\bb)=\aa(\bb)$ for all $\bb\neq\{0,e_i\}$ and
$\aa^{e_i,0}(\{0,e_i\})=0$).
\end{itemize}
\end{assumption}
Let us comment on these properties. A minimal requirement needed for
qualitative stochastic homogenization in the uniformly elliptic case is stationarity and
ergodicity of the ensemble. The basic example of such an ensemble is given by i.~i.~d.~coefficients, which means that $\expec{\cdot}$ is a $\B^d$-fold product of a ``single edge'' probability measure on $[0,1]$. Assumption (A3) is weaker than assuming i.~i.~d.~coefficients, but stronger
than ergodicity. Indeed, in \cite{GNO1} it is shown that any i.~i.~d.
ensemble satisfies (A3) with constant $\rho=1$, and that (A3) can be
seen as a quantification of ergodicity. From the functional analytic point of
view, the spectral gap estimate is a Poincar\'e inequality where the
derivative is taken in the vertical direction, see below. (The terminology ``vertical'' versus ``horizontal'' is motivated by viewing $\aa\in\Omega$ as a
``height''-function defined on the ``horizontal'' plane $\B^d$.) We
recall from \cite{GNO1} the definition of the vertical derivative:
\begin{definition}
\label{D:vertical}
For $\zeta\in L^1(\Omega)$ the \textit{vertical derivative} w.~r.~t.
$\bb\in\B^d$ is given by
\begin{equation*}
\frac{\partial \zeta}{\partial \bb}:=\zeta-\expec{\zeta}_\bb,
\end{equation*}
where $\expec{\zeta}_{\bb}$ denotes the conditional expectation
where we condition on $\{\aa(\bb')\}_{\bb'\neq\bb}$. For $\zeta:\Omega\to\R$ sufficiently smooth we denote by
$\frac{\partial \zeta}{\partial\aa(\bb)}$ the classical partial
derivative of $\zeta$ w.~r.~t. the coordinate $\aa(\bb)$.
\end{definition}
Property (A2) is a crucial assumption on the connectedness of the
graph. In particular it implies that almost surely every pair of vertices can be connected
by a path with finite intrinsic length. However, (A2) and (A2+) do not exclude
configurations with coefficients that vanish with non-zero
probability, as it is the case for $\expec{\cdot}_\lambda$ -- the
model considered in the introduction:
\begin{lemma}
\label{L:regul-perc-model}
The modified Bernoulli percolation model $\expec{\cdot}_\lambda$
defined via \eqref{eq:modbernoulli} satisfies Assumption~\ref{A} with $\rho=1$.
\end{lemma}
\begin{proof}
Evidently, $\expec{\cdot}_{\lambda}$ can be written as the (infinite) product of probability measures attached to
the bonds in $\B^d$. These ``single-bond'' probability measures only
depend on the direction of the bond. Hence, $\expec{\cdot}_{\lambda}$ is stationary.
Another consequence of the product structure is that $\expec{\cdot}_{\lambda}$ satisfies
(A3) with constant $\rho=1$ (see \cite[Lemma~7]{GNO1} for the argument). It
remains to check (A2+).
By stationarity and symmetry we may assume that $e_i=e_d$. Consider the (random) set
\begin{equation*}
{\mathcal L}(\aa):=\{\,j\in\Z\,:\,\aa^{e_d,0}(\{je_1,je_1+e_d\})=1\,\}.
\end{equation*}
Clearly, each $j\in{\mathcal L(\aa)}$ yields an open path connecting
$0$ and $e_d$, for instance the ``U-shaped'' path through the sites $0$,
$je_1$, $je_1+e_d$ and $e_d$. Hence, $\dist_{\aa^{e_d,0}}(0,e_d)\leq
2\dist(0,{\mathcal L(\aa)})+1$ almost surely, where $\dist(0,\mathcal
L(\aa)):=\min_{j\in\mathcal L(\aa)}|j|$. Consequently, it suffices to prove that
\begin{equation*}
\expec{(2\dist(0,{\mathcal L(\aa)})+1)^{p}}^{\frac{1}{p}}_\lambda<\infty
\end{equation*}
for any $p\geq 1$. Note that, by definition, $\aa^{e_d,0}(\{0,e_d\})=0$, so
that $0\notin{\mathcal L}(\aa)$ and thus $\dist(0,{\mathcal L(\aa)})\in\N$. Hence,
\begin{equation*}
\expec{(2\dist(0,{\mathcal L(\aa)})+1)^{p}}_\lambda=\sum_{k=1}^\infty
(2k+1)^{p}\expec{{\boldsymbol 1}(A_k)}_{\lambda},
\end{equation*}
where ${\boldsymbol 1}(A_k)$ denotes the set indicator function of $A_k:=\{\,\aa\,:\,\dist(0,{\mathcal L}(\aa))= k\,\}$. Evidently, we have
\begin{equation*}
A_k\subset A_k':=\Big\{\,\aa\,:\,\aa(\{je_1,je_1+e_d\})=0\text{ for
all }|j|=1,\ldots,k-1\,\Big\}.
\end{equation*}
From $\expec{{\boldsymbol 1}(A_k')}_{\lambda}=(1-\lambda)^{2(k-1)}$ (the
event $A_k'$ prescribes the value $0$ on $2(k-1)$ independent bonds), we
deduce that
\begin{equation*}
\expec{(2\dist(0,{\mathcal L(\aa)})+1)^{p}}_\lambda\leq \sum_{k=1}^\infty
(2k+1)^{p}(1-\lambda)^{2(k-1)}.
\end{equation*}
The sum on the right-hand side converges, since $0<\lambda\leq 1$ by
assumption.
This completes the proof.
\end{proof}
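The series bound at the end of the proof can be compared with a direct simulation. The sketch below is an illustration only: we do not reproduce \eqref{eq:modbernoulli}, and we only sample the bonds $\{je_1,je_1+e_d\}$ used in the proof, assuming each is open independently with probability $\lambda$. It estimates $\expec{(2\dist(0,{\mathcal L}(\aa))+1)^{p}}_\lambda$ by Monte Carlo and checks it against the bound $\sum_{k\geq1}(2k+1)^{p}(1-\lambda)^{2(k-1)}$:

```python
import random

random.seed(2)
lam, p, trials, J = 0.3, 3, 20000, 2000

def dist_to_L():
    # sample the bonds {j e_1, j e_1 + e_d}, j != 0, open w.p. lam, and
    # return min |j| over open bonds (a^{e_d,0} deletes the j = 0 bond)
    for k in range(1, J + 1):
        if random.random() < lam or random.random() < lam:  # j = +k or -k
            return k
    return J + 1  # truncation, reached with probability (1-lam)^(2J)

mc = sum((2 * dist_to_L() + 1) ** p for _ in range(trials)) / trials
# the true expectation carries an extra factor 1-(1-lam)^2 per term,
# which is exactly the slack in the bound used in the proof
bound = sum((2 * k + 1) ** p * (1 - lam) ** (2 * (k - 1))
            for k in range(1, 400))
assert mc <= bound
```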
\section{Main result}
We are interested in stationary solutions to the corrector equation \eqref{eq:cor-equation-phys}. Note that we tacitly identify the vector $e\in\R^d$ with the translation invariant vector field $e(\bb):=e \cdot(y_{\bb}-x_{\bb})$. For conciseness we write
\begin{align*}
\mathcal S:=\,\Big\{\,\varphi\,:\,\Omega\times\Z^d\to\R\,\big|\,&\varphi\text{ is measurable and stationary, i.~e. }\varphi(\aa(\cdot+z),x)=\varphi(\aa,x+z)\\
&\text{for all $x,z\in\Z^d$ and $\expec{\cdot}$-almost every $\aa\in\Omega$}\,\Big\}
\end{align*}
for the space of \textit{stationary random fields}. Thanks to (A1) the
expectation $\expec{\varphi}=\expec{\varphi(\cdot,x)}$ of a stationary
random variable does not depend on $x$. Therefore,
$\|\varphi\|_{L^2(\Omega)}:=\expec{|\varphi|^2}^{\frac{1}{2}}$ defines
a norm on $\mathcal S$, and we write $(\mathcal S,\|\cdot\|_{L^2(\Omega)})$ for the resulting normed space.
\medskip
We are interested in solutions to \eqref{eq:cor-equation-phys} in
$(\mathcal S,\|\cdot\|_{L^2(\Omega)})$. Thanks to discreteness, the
operator $\nabla^*(\aa\nabla)$ is bounded and linear on $(\mathcal
S,\|\cdot\|_{L^2(\Omega)})$. However, it is degenerate-elliptic for two reasons:
\begin{itemize}
\item In general the Poincar\'e inequality does not hold in $(\mathcal S,\|\cdot\|_{L^2(\Omega)})$.
\item The conductances $\aa$ may vanish with positive probability.
\end{itemize}
Therefore, following \cite{Papanicolaou-Varadhan-79}, we regularize the equation by adding a $0$th order term and consider for $T>0$ the modified corrector equation
\begin{equation}\label{eq:cor-modified}
\frac{1}{T}\phi_T(x)+\nabla^*\aa(x)(\nabla\phi_T(x)+e )=0\qquad\mbox{
for all $x\in\Z^d$ and $\aa\in\Omega$}.
\end{equation}
Thanks to the regularization, \eqref{eq:cor-modified} admits (for all $T>0$) a unique solution in $(\mathcal S,\|\cdot\|_{L^2(\Omega)})$ as follows from Riesz' representation theorem.
\begin{definition}[modified corrector]
\label{def:1}
The unique solution $\phi_T\in(\mathcal
S,\|\cdot\|_{L^2(\Omega)})$ to \eqref{eq:cor-modified} is called the
modified corrector.
\end{definition}
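To build intuition for Definition~\ref{def:1}, one can solve a finite-volume analogue of \eqref{eq:cor-modified} directly. The Python sketch below is an illustration only: it works on a small periodic box rather than in the stationary setting of the paper, and the choices $e=e_1$, $T=100$ and the Bernoulli law for $\aa$ are arbitrary. It assembles $\frac1T+\nabla^*\aa\nabla$ as a matrix and solves for $\phi_T$:

```python
import numpy as np

rng = np.random.default_rng(3)
L, T = 8, 100.0                  # periodic box (Z/LZ)^2, regularization T
e = np.array([1.0, 0.0])         # unit vector e = e_1

idx = lambda x, y: (x % L) * L + (y % L)
# conductances a[i, x, y] for the bond {(x,y), (x,y)+e_i}, i = 0, 1;
# open (a = 1) with probability 0.7, closed (a = 0) otherwise
a = (rng.random((2, L, L)) < 0.7).astype(float)

n = L * L
A = np.zeros((n, n))
b = np.zeros(n)
for x in range(L):
    for y in range(L):
        p = idx(x, y)
        A[p, p] += 1.0 / T
        for i, (dx, dy) in enumerate([(1, 0), (0, 1)]):
            q = idx(x + dx, y + dy)
            ab = a[i, x, y]
            # bond {p, q}: graph-Laplacian part of 1/T + div*(a grad)
            A[p, p] += ab; A[p, q] -= ab
            A[q, q] += ab; A[q, p] -= ab
            # right-hand side -div*(a e): the bond contributes
            # +a(b) e.e_i at p and -a(b) e.e_i at q
            b[p] += ab * e[i]
            b[q] -= ab * e[i]

phi = np.linalg.solve(A, b)
# summing all equations kills the divergence-form terms, so
# (1/T) sum(phi) = 0: the discrete solution has mean zero
assert abs(phi.mean()) < 1e-7
```

Summing \eqref{eq:cor-modified} over the box annihilates the divergence-form terms, so the finite-volume solution necessarily has mean zero; the assertion checks this.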
We think about the modified corrector as an approximation for the stationary corrector and hope to recover a solution to \eqref{eq:cor-equation-phys} in the limit $T\uparrow\infty$. This is possible as soon as we have estimates on (some) moments of $\phi_T$ that are uniform in $T$ --- this is the main result of the paper:
\begin{theo}[Moment bounds for the modified corrector]
\label{T1}
Let $d>2$ and $\expec{\cdot}$ satisfy Assumption~\ref{A} for some $\rho$ and
$\Lambda$. Let
$\phi_T$ denote the modified corrector as defined in Definition~\ref{def:1}. Then
for all $T>0$ and $1\leq p<\infty$ we have
\begin{equation}\label{eq:42}
\expec{|\phi_T|^{p}}^{\frac{1}{p}}\lesssim 1.
\end{equation}
Here $\lesssim$ means $\leq$ up to a constant that
only depends on $p$, $\Lambda$, $\rho$, and $d$.
\end{theo}
Since the estimate in Theorem \ref{T1} is uniform in $T$ we get as a corollary:
\begin{corollary}
\label{C1}
Let $d>2$ and $\expec{\cdot}$ satisfy Assumption~\ref{A} for some $\rho$ and
$\Lambda$. Then the corrector equation
\eqref{eq:cor-equation-phys} has a unique stationary solution $\phi\in(\mathcal S,\|\cdot\|_{L^2(\Omega)})$ with $\expec{\phi}=0$. Moreover, we have
\begin{equation*}
\expec{|\phi|^p}^{\frac{1}{p}}\lesssim 1
\end{equation*}
for all $1\leq p<\infty$. Here $\lesssim$ means $\leq$ up to a constant that
only depends on $p$, $\Lambda$, $\rho$ and $d$.
\end{corollary}
As mentioned in the introduction the corrector can be used to
establish invariance principles for random walks in random
environments. Suppose
that $\expec{\cdot}$ satisfies Assumption~\ref{A} for some $\rho$ and
$\Lambda$. Then, thanks to Corollary~\ref{C1}, for each coordinate direction
$e_k$ there exist stationary
correctors $\phi^k\in(\mathcal
S,\|\cdot\|_{L^2(\Omega)})$ with $\expec{\phi^k}=0$ that solve
(\ref{eq:cor-equation-phys}) with $e=e_k$. Hence, we can consider the random vector field $\chi=(\chi^1,\ldots,\chi^d):\Omega\times\Z^d\to\R^d$ defined by
\begin{equation*}
\chi^k(\aa,x):=\phi^k(\aa,x)-\phi^k(\aa,x=0).
\end{equation*}
By construction the map $\Z^d\ni x\mapsto
x+\chi(\aa,x)$ is $\aa$-harmonic, has finite second moments, and is shift covariant (i.e.
$\chi(\aa,x+y)-\chi(\aa,y)=\chi(\aa(\cdot+y),x)$, as follows from the stationarity of $\phi^k$). The field $\chi$ is
precisely the ``corrector'' used e.g. in
\cite{Kipnis-Varadhan-86, Sidoravicius-Sznitman-04} to introduce
harmonic coordinates for which the
random walk in the random environment is a martingale. In particular, in \cite{Sidoravicius-Sznitman-04} Sidoravicius and Sznitman
use $\chi$ to prove a quenched invariance principle
for the random walk in a random environment.
A key step in
their argument is to show that $\chi$ has sublinear growth, i.e.
\begin{equation}\label{eq:18}
\lim\limits_{R\to\infty}\max_{x\in B_R(0)}\frac{|\chi(\aa,x)|}{R}=0\qquad\text{for
$\expec{\cdot}$-almost every }\aa\in\Omega.
\end{equation}
This property has been established for supercritical bond percolation on $\Z^d$ in
dimension $d\geq 4$ in \cite{Sidoravicius-Sznitman-04} and for $d\geq
2$ in \cite{Berger-Biskup-07,Mathieu-Piatnitski-07}. The moment
bounds established in our work (under the more restrictive Assumption
\ref{A}) are stronger.
Indeed, from Theorem~\ref{T1} we get \eqref{eq:18} in a stronger
form by the following simple argument: For every $\theta\in(0,1)$, $p>\frac{d}{1-\theta}$ and
$k=1,\ldots,d$ we have
\begin{eqnarray*}
R^{\theta-1}\max_{x\in B_R(0)}|\chi^k(\aa,x)|&\leq&
|\phi^k(\aa,0)|+R^{\theta-1}\max_{x\in B_R(0)}|\phi^k(\aa,x)|\\
&\leq&
|\phi^k(\aa,0)|+\left(R^{-d}\sum_{x\in B_R(0)}|\phi^k(\aa,x)|^{\frac{d}{1-\theta}}\right)^{\frac{1-\theta}{d}}.
\end{eqnarray*}
Hence,
since $\phi^k$ is stationary and $p>\frac{d}{1-\theta}$, the maximal function estimate yields
\begin{equation*}
\expec{\sup_{R\geq 1}\left(\max_{x\in
B_R(0)}\frac{|\phi^k(x)|}{R^{1-\theta}}\right)^p}\leq C\expec{|\phi^k|^p}.
\end{equation*}
With the moment bounds of Corollary~\ref{C1} we
get for $\expec{\cdot}$ satisfying Assumption~\ref{A}:
\begin{equation}\label{eq:21}
\forall\theta\in(0,1)\,:\qquad \lim\limits_{R\to\infty}\max_{x\in
B_R(0)}\frac{|\chi(\aa,x)|}{R^{1-\theta}}=0\qquad\text{$\expec{\cdot}$-almost surely.}
\end{equation}
\subsection{Outline and Proof of Theorem~\ref{T1}}\label{S3}
The proof of Theorem~\ref{T1} is inspired by the approach in
\cite{GO1} where uniformly elliptic conductances are treated. The starting point of our argument is the following $p$-version of the
{\em Spectral Gap Estimate} (A3), which we recall from \cite[Lemma~2]{GNO1}:
\begin{lemma}[p-version of (SG)]
\label{lem:SGp}
Let $\expec{\cdot}$ satisfy (A3) with constant $\rho>0$. Then for $p\in\mathbb{N}$ and all $\zeta\in
L^{2p}(\Omega)$ with $\expec{\zeta}=0$ we have
\begin{equation*}
\expec{\zeta^{2p}}\lesssim \expec{\left(\sum_{\bb\in\B^d}
\left(\frac{\partial\zeta}{\partial\bb}\right)^2\right)^p},
\end{equation*}
where $\lesssim$ means $\leq$ up to a constant that
only depends on $p$, $\rho$ and $d$.
\end{lemma}
Applied to $\zeta=\phi_T(x=0)$ (note that $\expec{\phi_T}=0$, as can be seen by taking the expectation of \eqref{eq:cor-modified} and using stationarity), this estimate yields a bound
on stochastic moments of $\phi_T$ in terms of the vertical derivatives
$\frac{\partial\phi_T(x=0)}{\partial\bb}$, $\bb\in\B^d$ (see
Definition~\ref{D:vertical}). Heuristically, we expect the vertical derivative
$\frac{\partial\phi_T(x=0)}{\partial\bb}$ to behave as the classical partial
derivative $\frac{\partial\phi_T(x=0)}{\partial\aa(\bb)}$. As we shall see, the latter
admits the Green's function representation
\begin{equation}\label{eq:43}
\frac{\partial\phi_T(x=0)}{\partial\aa(\bb)}=-\nabla G_T(\aa,\bb,0)(\nabla\phi_T(\bb)+e(\bb)).
\end{equation}
Here $G_T$ denotes the Green's function associated with
$(\frac{1}{T}+\nabla^*\aa\nabla)$ and is defined as follows:
\begin{definition}
\label{D:G}
For $T>0$ the Green's function
$G_T:\Omega\times\Z^d\times\Z^d\to\R$ is defined as follows: For
each $\aa\in\Omega$ and $y\in\Z^d$ the function $x\mapsto
G_T(\aa,x,y)$ is the unique solution in $\ell^2(\Z^d)$ to
\begin{equation}\label{eq:D:G}
\frac{1}{T} G_T(\aa,\cdot,y)+\nabla^*\aa\nabla G_T(\aa,\cdot,y)=\delta(\cdot-y).
\end{equation}
\end{definition}
For uniformly elliptic conductances we have
$\frac{\partial\phi_T(x=0)}{\partial\bb}\sim\frac{\partial\phi_T(x=0)}{\partial\aa(\bb)}$
up to a constant that only depends on the contrast of ellipticity. In the
case of degenerate ellipticity this is no longer true. However,
the discrepancy between the vertical and classical partial derivative
of $\phi_T$ can be quantified in terms of weights defined as follows:
We introduce the weight function $\omega:\Omega\times\B^d\to[0,\infty]$ as
\begin{equation}\label{D:omega}
\omega(\aa,\bb):=(\dist_{\aa}(x_{\bb},y_{\bb}))^{d+2}\qquad\qquad (\aa\in\Omega,\ \bb=\{x_{\bb},y_{\bb}\}\in\B^d).
\end{equation}
For $\bb\in\B^d$ and $\aa\in\Omega$ we denote by
$\aa^{\bb,0}$ the conductance field obtained by ``deleting'' the bond
$\bb$ (i.~e. $\aa^{\bb,0}(\bb')=\aa(\bb')$ for all $\bb'\neq\bb$ and
$\aa^{\bb,0}(\bb)=0$), and introduce the modified weight $\omega_0$ as
\begin{equation}\label{D:tildeomega}
\omega_0(\aa,\bb):=\omega(\aa^{\bb,0},\bb).
\end{equation}
\begin{lemma}
\label{lem:2}
Assume that $\expec{\cdot}$ satisfies (A1) and (A2+). For $T>0$ let $\phi_T$ denote the modified corrector. Then for all $\bb\in\B^d$ we have
\begin{equation*}
\left|\frac{\partial\phi_T(x=0)}{\partial\bb}\right|\lesssim \omega_0^{2}(\bb)\left|\nabla G_T(\bb,0)\right|\,\left|\nabla\phi_T(\bb)+e(\bb)\right|.
\end{equation*}
Here $\lesssim$ means $\leq$ up to a constant that only depends on $d$.
\end{lemma}
To benefit from \eqref{eq:43} (in the form of Lemma~\ref{lem:2}) we require an {\em estimate on the gradient of the
Green's function}. As is well known, the constant-coefficient Green's function
$G_T^0(x):=G_T(\aa={\boldsymbol 1},x,0)$ (which is associated with the modified
Laplacian $\frac{1}{T}+\nabla^*\nabla$) satisfies the pointwise estimate
\begin{equation}\label{eq:5}
\forall \bb:=\{x,x+e_i\}\,:\qquad |\nabla G_T^0(\bb)|\lesssim (1+|x|)^{1-d}\qquad\text{uniformly in $T>0$}.
\end{equation}
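Estimate \eqref{eq:5} is classical; it can also be probed numerically. The sketch below (our illustration; the crude constant $1$ and the parameters $L$, $T$ are arbitrary) computes $G_T^0$ in $d=2$ on a large discrete torus via the fast Fourier transform -- for $L\gg\sqrt{T}$ the torus Green's function is exponentially close to the one on $\Z^2$ -- and checks the decay $(1+|x|)^{1-d}$ along a coordinate axis:

```python
import numpy as np

L, T = 256, 100.0                        # torus (Z/LZ)^2, L >> sqrt(T)
k = np.arange(L)
lam = 4 * np.sin(np.pi * k / L) ** 2     # Fourier symbol of the 1d discrete Laplacian
symbol = 1.0 / T + lam[:, None] + lam[None, :]
G = np.real(np.fft.ifft2(1.0 / symbol))  # solves (1/T + div* grad) G = delta_0

# discrete gradient in the e_1 direction; claimed decay is (1+|x|)^{1-d}, d = 2
dG = np.roll(G, -1, axis=0) - G
worst = 0.0
for x1 in range(1, L // 4):
    worst = max(worst, abs(dG[x1, 0]) * (1 + x1))
assert worst < 1.0   # |grad G_T^0(b)| <= C (1+|x|)^{-1} with a modest constant
```

Summing \eqref{eq:D:G} over the torus kills the Laplacian part, so $\frac1T\sum_x G_T^0(x)=1$; this is a useful sanity check on the Fourier normalization (NumPy's \texttt{ifft2} carries the $1/L^2$ factor).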
We require an estimate that captures the same decay in $x$. It is
known from the continuum, uniformly elliptic case, that such an
estimate cannot hold pointwise in $x$ and at the same time pointwise in $\aa$. In \cite[Lemma~2.9]{GO1}, for uniformly elliptic conductances, a spatially averaged version of \eqref{eq:5} is established, where the averages are taken over dyadic annuli. The constant in this estimate depends on the conductances only through their contrast of ellipticity. In the degenerate elliptic case, the ellipticity contrast is infinite. In order to keep the optimal decay in $x$, we need to allow the constant in the estimate to depend on $\aa$. For $x_0\in\Z^d$, $R>1$ and $1\leq q<\infty$ consider the spatial average of the weight $\omega$ (cf. \eqref{D:omega})
\begin{equation}
\label{eq:C}
C(\aa,Q_R(x_0),q):=\left(\frac{1}{|Q_R(x_0)|}\sum_{\bb\in Q_R(x_0)}\omega^q(\aa,\bb)\right)^{\frac{1}{q}}.
\end{equation}
We shall prove the following estimate:
\begin{prop}\label{P1}
For $R_0>1$ and $k\in\N_0$ consider
\begin{equation*}
A_k:=\left\{
\begin{aligned}
&Q_{R_0}(0)&&k=0,\\
&Q_{2^{k}R_0}(0)\setminus Q_{2^{k-1}R_0}(0)&&k\geq 1.
\end{aligned}
\right.
\end{equation*}
Then for all $\frac{2d}{d+2}<p<2$ we have
\begin{equation*}
\left(\frac{1}{|A_k|}\sum_{\bb\in A_k}|\nabla
G_T(\aa,\bb,0)|^p\right)^{\frac{1}{p}}\lesssim C(\aa)\,2^{k(1-d)},
\end{equation*}
where $\lesssim$ means $\leq$ up to a constant that
only depends on $R_0$, $d$ and $p$, and
\begin{equation} \label{eq:CP1}
C(\aa):=C^{\frac{\beta}{2}}(\aa,Q_{2^{k+1}R_0}(0),\tfrac{p}{2-p})
\end{equation}
with $\beta:=2\frac{p^*-1}{p^*-2}+p^*$ and $p^*:=\frac{dp}{d-p}$.
\end{prop}
The precise form of the constant $C$ in \eqref{eq:CP1} is not crucial. In fact, in the random setting, when $\Omega$ is equipped with a probability
measure satisfying (A1) and (A2), we may view $C$ as a random variable with controlled finite moments:
\begin{rem}
\label{lem:3}
Let $\expec{\cdot}$ satisfy Assumption (A1). Then the spatial
average introduced in \eqref{eq:C} satisfies
\begin{eqnarray*}
\expec{ C^{q}(\aa, Q_{R}(x_0),q')}=\expec{\left(\frac{1}{|Q_{R}(x_0)|}
\sum_{\bb\in Q_{R}(x_0)}\omega^{q'}(\aa,\bb)\right)^{\frac{q}{q'}}} \leq
\begin{cases}
\expec{\omega^{q'}}^{\frac{q}{q'}}&\text{if } q'\geq q,\\
\expec{\omega^q}&\text{if }q'<q,
\end{cases}
\end{eqnarray*}
as can be seen by appealing to Jensen's inequality and stationarity. Moreover, if $\expec{\cdot}$ additionally fulfills (A2), then $C$ defined in \eqref{eq:CP1} satisfies
\begin{equation*}
\forall m\in\N\,:\,\expec{C^m}^{\frac{1}{m}}\lesssim 1,
\end{equation*}
where $\lesssim$ means $\leq$ up to a constant that
only depends on $m$, $p$, $\Lambda$ and $d$.
\end{rem}
\medskip
The proof of Proposition~\ref{P1} relies on arguments from elliptic regularity theory, which in the uniformly elliptic case are standard.
They typically involve the pointwise inequality
\begin{equation}\label{eq:40}
\lambda_0|\nabla u(\bb)|^2\leq\nabla u(\bb)\,\aa(\bb)\nabla u(\bb),\qquad(\bb\in\B^d),
\end{equation}
where $\lambda_0>0$ denotes the constant of ellipticity. In the
degenerate case, the conductances $\aa$ may vanish on a non-negligible
set of bonds and \eqref{eq:40} breaks down. As a replacement we establish estimates
which provide a weighted, integrated version of \eqref{eq:40}:
\begin{lemma}
\label{lem:coercivity-phys}
Let $p>d+1$. For any function $u:\Z^d\to\R$ and all $\aa\in\Omega$ we have (with the convention $\frac{1}{\infty}=0$)
\begin{equation}\label{eq:coercivity}
\sum_{\bb\in\B^d}|\nabla
u(\bb)|^2\dist_{\aa}^{-p}(x_{\bb},y_{\bb})\leq C(p,d)\sum_{\bb\in\B^d}\aa(\bb)|\nabla
u(\bb)|^2,
\end{equation}
where $C(p,d):=\sum_{x\in\Z^d}(|x|+1)^{1-p}$ and the inequality holds whenever the sums converge.
\end{lemma}
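Lemma~\ref{lem:coercivity-phys} is deterministic, so it can be tested on a single sample. The Python sketch below (our illustration; the law of $\aa$, the box size and $p=4$ are arbitrary choices) evaluates both sides of \eqref{eq:coercivity} for a compactly supported $u$, computing $\dist_\aa$ by Dijkstra restricted to the box, which only overestimates the distance and hence only weakens the left-hand side:

```python
import heapq, itertools, random

random.seed(4)
d, N, p = 2, 4, 4                        # p = 4 > d + 1 = 3
sites = set(itertools.product(range(-N - 1, N + 2), repeat=d))

def nbrs(x):
    for i in range(d):
        for s in (1, -1):
            y = tuple(x[j] + (s if j == i else 0) for j in range(d))
            if y in sites:
                yield y

a = {}
for x in sites:
    for y in nbrs(x):
        b = frozenset((x, y))
        if b not in a:                   # a = 0 w.p. 1/3, else in [0.1, 1]
            a[b] = 0.0 if random.random() < 1 / 3 else random.uniform(0.1, 1)

def dist_a(x0, y0):                      # Dijkstra restricted to the box;
    dist, heap = {x0: 0.0}, [(0.0, x0)]  # this only overestimates dist_a
    while heap:
        dx, x = heapq.heappop(heap)
        if x == y0:
            return dx
        if dx > dist[x]:
            continue
        for y in nbrs(x):
            ab = a[frozenset((x, y))]
            if ab > 0 and dx + 1 / ab < dist.get(y, float("inf")):
                dist[y] = dx + 1 / ab
                heapq.heappush(heap, (dx + 1 / ab, y))
    return float("inf")                  # convention 1/infinity = 0 below

u = {x: (random.uniform(-1, 1) if max(map(abs, x)) <= N else 0.0)
     for x in sites}
# truncated lattice sum for C(p,d) = sum_x (|x|+1)^{1-p}; tail negligible
C = sum((1 + (x[0] ** 2 + x[1] ** 2) ** 0.5) ** (1 - p)
        for x in itertools.product(range(-300, 301), repeat=2))
lhs = rhs = 0.0
for b in a:
    x, y = tuple(b)
    g2 = (u[y] - u[x]) ** 2
    D = dist_a(x, y)
    lhs += g2 * (0.0 if D == float("inf") else D ** (-p))
    rhs += a[b] * g2
assert lhs <= C * rhs                    # the coercivity estimate
```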
While Lemma~\ref{lem:coercivity-phys} is purely deterministic, we also need the following statistically averaged version:
\begin{lemma}
\label{lem:coercivity-prob}
Let $\expec{\cdot}$ be stationary, cf. (A1), and $p>d+1$. Then for any
stationary random field $u$ and any bond $\bb\in\B^d$ we have (with the convention $\frac{1}{\infty}=0$)
\begin{equation*}
\expec{|\nabla u(\bb)|^2\dist_{\aa}^{-p}(x_{\bb},y_{\bb})}\leq
C(p,d) \sum_{\bb'=\{0,e_i\}\atop i=1,\ldots,d}\expec{\aa(\bb')|\nabla u(\bb')|^2},
\end{equation*}
where $C(p,d):=\sum_{k=0}^\infty2^{k(1-p)}|B_{2^{k+1}}(0)|<\infty$.
\end{lemma}
A last ingredient required for the proof of Theorem~\ref{T1} is a {\em
Caccioppoli inequality in probability} that yields a gain of
stochastic integrability and helps to treat the $\nabla\phi_T$-term on
the right-hand side in \eqref{eq:43}. In the uniformly elliptic case, i.~e.~when $0<\lambda_0\leq\aa\leq1$, the Caccioppoli inequality
\begin{equation}\label{eq:30}
\expec{|\nabla\phi_T|^{2p+2}}^{\frac{1}{2p+2}}\lesssim\expec{\phi_T^{2p}}^{\frac{1}{2p}\frac{p}{p+1}}
\end{equation}
holds for any integer exponent $p$ (see \cite[Lemma~2.7]{GO1}). The inequality follows from combining
the elementary discrete
inequality
\begin{equation}\label{eq:discrete}
|\nabla u(\bb)|=|u(y_{\bb})-u(x_{\bb})|\leq|u(y_{\bb})|+|u(x_{\bb})|,
\end{equation}
with the estimate
\begin{equation}\label{eq:29}
\expec{\phi_T^{2p}|\nabla\phi_T|^2}\lesssim\frac{1}{\lambda_0}\expec{\phi^{2p}_T|\nabla\phi_T|}.
\end{equation}
The latter is obtained by testing the modified corrector equation
\eqref{eq:cor-modified} with
$\phi_T^{2p+1}$ and uses the uniform ellipticity of $\aa$. In the
degenerate elliptic case, \eqref{eq:29} is no longer true. However, by appealing to Lemma~\ref{lem:coercivity-prob}, the following weaker version of \eqref{eq:30} survives:
\begin{equation}\label{eq:31}
\expec{|\nabla\phi_T|^{(2p+2)\theta}}^{\frac{1}{(2p+2)\theta}}\lesssim\expec{\phi^{2p}_T}^{\frac{1}{2p}\frac{p}{p+1}}
\end{equation}
for any factor $0<\theta<1$. Hence, we only gain an increase of
integrability by exponents strictly smaller than two. As a matter of fact, in the proof of our main result we only need the estimate in
the following form:
\begin{lemma}[Caccioppoli estimate in probability]
\label{L:CEP}
Let $\expec{\cdot}$ satisfy (A1) and (A2).
Let $\phi_T$ denote the corrector associated with $e \in\R^d$,
$|e |=1$, $T>0$.
For every even integer $p$ we have
\begin{equation}
\label{eq:cacciopprob}
\expec{|\nabla\phi_T|^{2p+1}}^{\frac{1}{2p+1}}\lesssim\expec{\phi^{2p}_T}^{\frac{1}{2p}\frac{p}{p+1}},
\end{equation}
where $\lesssim$ means $\leq$ up to a constant that
only depends on $p$, $\Lambda$ and $d$.
\end{lemma}
Now we are ready to prove our main result:
\begin{proof}[Proof of Theorem~\ref{T1}]
It suffices to consider exponents $p\in 2\N$ that are larger than a
threshold only depending on $d$ -- the threshold is determined by
\eqref{eq:23} below.
Further, we only need to prove
\begin{equation}\label{eq:6}
\expec{\phi_T^{2p}}^{\frac{1}{2p}}\lesssim
\max_{\bb'=\{0,e_i\}\atop i=1,\ldots,d}\expec{|\nabla\phi_T(\bb')|^{2p+1}}^{\frac{1}{2p+1}} + 1.
\end{equation}
Indeed, in combination with the Caccioppoli estimate in
probability, cf. Lemma~\ref{L:CEP}, estimate \eqref{eq:6} yields $\expec{\phi_T^{2p}}^{\frac{1}{2p}}\lesssim\expec{\phi_T^{2p}}^{\frac{1}{2p}\frac{p}{p+1}}
+ 1$. Since $\frac{p}{p+1}<1$ the first term can be absorbed and the
desired estimate follows.
We prove \eqref{eq:6}. For reasons that will become clear at the end of
the argument we fix an exponent $\frac{2d}{d+2}<q<2$ such that
\begin{equation}\label{eq:23}
d(\frac{1}{q}+\frac{1}{2p}-1)+1<0.
\end{equation}
This is always possible for $p\gg 1$ and $0<2-q\ll 1$, since
\begin{equation*}
\lim_{q\uparrow 2,p\uparrow\infty}
d(\frac{1}{q}+\frac{1}{2p}-1)=-\frac{d}{2}<-1\qquad\text{for }d>2.
\end{equation*}
Our argument for \eqref{eq:6} starts with the $p$-version of the spectral gap
estimate, see Lemma~\ref{lem:SGp}, that we combine with
Lemma~\ref{lem:2}:
\begin{eqnarray*}
\expec{\phi_T^{2p}}^{\frac{1}{p}}&=&\expec{\phi_T^{2p}(x=0)}^{\frac{1}{p}}\lesssim\expec{\left(\sum_{\bb\in\B^d}\left(\frac{\partial\phi_T(x=0)}{\partial\bb}\right)^2\right)^p}^{\frac{1}{p}}\\
&\lesssim&\expec{\left(\sum_{\bb\in\B^d}(\nabla G_T(\bb,0))^2(\nabla\phi_T(\bb)+e (\bb))^2\omega_0^{4}(\bb)\right)^p}^{\frac{1}{p}}.
\end{eqnarray*}
Now we wish to benefit from the decay estimate for $\nabla G_T$ in Proposition~\ref{P1},
and therefore decompose $\B^d$ into dyadic annuli: Let the dyadic annuli $A_k$, $k\in\N_0$ be defined as in
Proposition~\ref{P1} with initial radius $R_0=2$. Note that $\B^d$ can be written as the
disjoint union of $A_0,A_1,A_2,\ldots$ .
With the triangle inequality w.~r.~t. $\expec{(\cdot)^p}^{\frac{1}{p}}$ and
H\"older's inequality in $\bb$-space with exponents
$(\frac{p}{p-1},p)$ we get
\begin{align}\label{eq:25}
\begin{split}
\expec{\phi_T^{2p}}^{\frac{1}{p}}
&\lesssim\ \sum_{k\in\N_0}\expec{\left(\sum_{\bb\in A_k}(\nabla
G_T(\bb,0))^2(\nabla\phi_T(\bb)+e (\bb))^2\omega_0^{4}(\bb)\right)^p}^{\frac{1}{p}}\\
&\lesssim\ \sum_{k\in\N_0}\expec{ \left(\sum_{\bb\in A_k}|\nabla
G_T(\bb,0)|^{\frac{2p}{p-1}}\right)^{p-1}\left(\sum_{\bb\in A_k}
(\nabla\phi_T(\bb)+e (\bb))^{2p}\omega_0^{4p}(\bb)\right)}^{\frac{1}{p}}.
\end{split}
\end{align}
Because $\frac{2d}{d+2}<q<2<\frac{2p}{p-1}$, the discrete
$\ell^q$-$\ell^{\frac{2p}{p-1}}$-estimate combined with the decay
estimate of Proposition~\ref{P1} yields
\begin{eqnarray}\label{eq:24}
\left(\sum_{\bb\in A_k}|\nabla
G_T(\bb,0)|^{\frac{2p}{p-1}}\right)^{p-1}
&\leq&
\left(\sum_{\bb\in A_k}|\nabla
G_T(\bb,0)|^{q}\right)^{\frac{2p}{q}}\leq C 2^{k(2p(1-(1-\frac{1}{q})d))}.
\end{eqnarray}
Here and below, $C$ denotes a generic, non-negative
random variable with the property that $\expec{C^m}\lesssim 1$ for
all $m<\infty$, where $\lesssim$ means $\leq$ up to a constant that
only depends on $m$, $p$, $q$, $\Lambda$ and $d$. Combining \eqref{eq:25} and
\eqref{eq:24} yields
\begin{equation}\label{eq:26}
\expec{\phi_T^{2p}}^{\frac{1}{p}}
\lesssim
\sum_{k\in\N_0}2^{2k(1-(1-\frac{1}{q})d)}\left(
\sum_{\bb\in A_k}
\expec{C\ (\nabla\phi_T(\bb)+e (\bb))^{2p}\ \omega_0^{4p}(\bb)} \right)^{\frac{1}{p}}.
\end{equation}
Next we apply a triple H\"older inequality in probability with
exponents $(\theta,\theta',\theta')$, $\frac{1}{\theta}+\frac{2}{\theta'}=1$, where we choose
$\theta=\frac{2p+1}{2p}$ (so that $2p\theta=2p+1$ and $\theta'=2(2p+1)$). We have
\begin{eqnarray*}
\expec{C\ (\nabla\phi_T(\bb)+e (\bb))^{2p}\ \omega_0^{4p}(\bb)}
\leq
\expec{(\nabla\phi_T(\bb)+e (\bb))^{2p+1}}^{\frac{2p}{2p+1}}
\expec{C^{\theta'}}^{\frac{1}{\theta'}}
\expec{\omega_0^{4p\theta'}(\bb)}^{\frac{1}{\theta'}}.
\end{eqnarray*}
The first term is estimated by stationarity of $\nabla\phi_T$ and the assumption $|e |=1$ as
\begin{equation*}
\expec{(\nabla\phi_T(\bb)+e (\bb))^{2p+1}}^{\frac{2p}{2p+1}}\lesssim
\max_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{|\nabla\phi_T(\bb')|^{2p+1}}^{\frac{2p}{2p+1}}
+ 1.
\end{equation*}
For the second term we have $\expec{C^{\theta'}}^{\frac{1}{\theta'}}\expec{\omega_0^{4p\theta'}(\bb)}^{\frac{1}{\theta'}}\lesssim
1$ due to (A2+), so that we obtain
\begin{equation}\label{eq:27}
\expec{C\ (\nabla\phi_T(\bb)+e (\bb))^{2p}\ \omega_0^{4p}(\bb)}
\lesssim \max_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{|\nabla\phi_T(\bb')|^{2p+1}}^{\frac{2p}{2p+1}}
+ 1.
\end{equation}
Combined with \eqref{eq:26} we get
\begin{eqnarray*}
\expec{\phi_T^{2p}}^{\frac{1}{p}}
&\lesssim& \left(\max_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{|\nabla\phi_T(\bb')|^{2p+1}}^{\frac{2}{2p+1}}+1\right)\
\times\
\sum_{k\in\N_0}2^{2k(1-(1-\frac{1}{q})d)}|A_k|^{\frac{1}{p}}\\
&\lesssim& \left(\max_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{|\nabla\phi_T(\bb')|^{2p+1}}^{\frac{2}{2p+1}}+1\right).
\end{eqnarray*}
In the last line we used that
\begin{equation*}
\sum_{k\in\N_0}2^{2k(1-(1-\frac{1}{q})d)}|A_k|^{\frac{1}{p}}\lesssim\sum_{k\in\N_0}2^{2k(1-(1-\frac{1}{2p}-\frac{1}{q})d)}\lesssim 1,
\end{equation*}
which holds since the exponent is negative, cf. \eqref{eq:23}.
This proves \eqref{eq:6}.
\end{proof}
\section{Proofs of the auxiliary lemmas}\label{S:proofs}
\subsection{Proof of Lemma~\ref{lem:2}}
The argument for Lemma~\ref{lem:2} is split into three lemmas.
\begin{lemma}\label{L:ODE}
Let $\bb\in\B^d$ be fixed. For $T>0$ let $\phi_T$ and $G_T$ denote the modified corrector
and the Green's function, respectively. Then
\begin{eqnarray}
\label{eq:ODE:0}
\frac{\partial\phi_T(x=0)}{\partial \aa(\bb)}&=&-\nabla G_T(\bb,0)(\nabla\phi_T(\bb)+e (\bb)),\\
\label{eq:ODE:1}
\frac{\partial}{\partial \aa(\bb)}\frac{\partial\phi_T(x=0)}{\partial \aa(\bb)}
&=&-2\nabla\nabla G_T(\bb,\bb)\frac{\partial\phi_T(x=0)}{\partial \aa(\bb)},\\
\label{eq:ODE:2}
\frac{\partial}{\partial\aa(\bb)}\nabla\nabla G_T(\bb,\bb)
&=&-\left(\nabla\nabla G_T(\bb,\bb)\right)^2.
\end{eqnarray}
Moreover, $\nabla\nabla G_T(\bb,\bb)$ and $1-\aa(\bb)\nabla\nabla
G_T(\bb,\bb)$ are strictly positive.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{L:ODE}]
For simplicity we write $\phi$ and $G$ instead of $\phi_T$ and
$G_T$.
\medskip
\step 1 Argument for \eqref{eq:ODE:2}.
We first claim that
\begin{subequations}
\begin{align}
\label{eq:rel3}
&\frac{\partial}{\partial \aa(\bb)}G(x,y)=-\nabla G(\bb,y)\nabla G(\bb,x),\\
\label{eq:rel4}
&\frac{\partial}{\partial \aa(\bb)}\nabla G(x,\bb)=-\nabla\nabla G(\bb,\bb)\nabla G(\bb,x).
\end{align}
\end{subequations}
Indeed, since $\nabla$ and $\frac{\partial}{\partial\aa(\bb)}$
commute, an application of $\frac{\partial}{\partial \aa(\bb)}$ to
\eqref{eq:D:G} yields
\begin{equation}\label{eq:ODE:step-1:1}
\left(\frac{1}{T}+\nabla^*\aa\nabla\right)\frac{\partial G(\cdot,y)}{\partial\aa(\bb)}=-\nabla^*\frac{\partial \aa(\cdot)}{\partial \aa(\bb)}\nabla G(\cdot,y).
\end{equation}
We test this identity with $G(\cdot,x)$:
\begin{eqnarray}
\label{eq:ODE:step-1:2}
\frac{\partial G(x,y)}{\partial
\aa(\bb)}&=&\sum_{y'\in\Z^d}\frac{\partial G(y',y)}{\partial
\aa(\bb)} \delta(x-y')\\\nonumber
&\stackrel{\eqref{eq:D:G}}{=}&\sum_{y'\in\Z^d}\frac{\partial G(y',y)}{\partial
\aa(\bb)} \,\left(\frac{1}{T}+\nabla^*\aa\nabla\right)G(y',x)\\\nonumber
&\stackrel{\eqref{int-by-parts}}{=}&\sum_{y'\in\Z^d}G(y',x)\,\left(\frac{1}{T}+\nabla^*\aa\nabla\right)\frac{\partial
G(y',y)}{\partial \aa(\bb)}\\\nonumber
&\stackrel{\eqref{eq:ODE:step-1:1},\eqref{int-by-parts}}{=}&-\sum_{\bb'\in\B^d}\frac{\partial \aa(\bb')}{\partial
\aa(\bb)}\nabla G(\bb',y)\,\nabla G(\bb',x).
\end{eqnarray}
Since $\frac{\partial \aa(\bb')}{\partial
\aa(\bb)}$ is equal to $1$ if $\bb'=\bb$ and $0$ else, the sum on
the right-hand side reduces to $\nabla G(\bb,y)\nabla G(\bb,x)$ and we get
\eqref{eq:rel3}. An application of $\nabla$ to \eqref{eq:rel3} yields
\eqref{eq:rel4}, and an application of $\nabla$ to \eqref{eq:rel4}
finally yields \eqref{eq:ODE:2}.
\medskip
\step 2 Argument for \eqref{eq:ODE:0} and \eqref{eq:ODE:1}.
We apply $\frac{\partial}{\partial\aa(\bb)}$ to the
modified corrector equation \eqref{eq:cor-modified}:
\begin{equation}\label{eq:ODE:step1:1}
\frac{1}{T}\frac{\partial\phi}{\partial\aa(\bb)}+\nabla^*\aa\nabla\frac{\partial\phi}{\partial\aa(\bb)}=-\nabla^*\frac{\partial\aa(\cdot)}{\partial\aa(\bb)}(\nabla\phi+e).
\end{equation}
As in \eqref{eq:ODE:step-1:2} testing with $G(\cdot,x)$ yields
\begin{equation}\label{eq:41}
\frac{\partial\phi(x)}{\partial\aa(\bb)}=-(\nabla\phi(\bb)+e (\bb))\nabla G(\bb,x),
\end{equation}
and \eqref{eq:ODE:0} follows. By
applying $\frac{\partial}{\partial\aa(\bb)}$ and $\nabla$ to
\eqref{eq:41} we obtain the two identities
\begin{eqnarray*}
\frac{\partial}{\partial\aa(\bb)}\frac{\partial\phi(x)}{\partial\aa(\bb)}&=&-
\frac{\partial(\nabla\phi(\bb)+e (\bb))}{\partial\aa(\bb)}\nabla
G(\bb,x)-(\nabla\phi(\bb)+e (\bb))\frac{\partial\nabla
G(\bb,x)}{\partial\aa(\bb)},\\
\nabla\frac{\partial\phi(\bb)}{\partial\aa(\bb)}&=&-(\nabla\phi(\bb)+e (\bb))\nabla\nabla G(\bb,\bb).
\end{eqnarray*}
By combining the first with the second identity, \eqref{eq:rel4} and \eqref{eq:41}
we get
\begin{eqnarray*}
\frac{\partial}{\partial\aa(\bb)}\frac{\partial\phi(x)}{\partial\aa(\bb)}&=&
2(\nabla\phi(\bb)+e (\bb))\nabla\nabla G(\bb,\bb)\nabla
G(\bb,x)\\
&=&-2\frac{\partial\phi(x)}{\partial\aa(\bb)}\nabla\nabla G(\bb,\bb),
\end{eqnarray*}
and thus \eqref{eq:ODE:1}.
\medskip
\step 3 Positivity of $\nabla\nabla G(\bb,\bb)$ and
$1-\aa(\bb)\nabla\nabla G(\bb,\bb)$.
Let $\bb=(x_{\bb},y_{\bb})\in\B^d$ be fixed. An application of $\nabla$
(w.~r.~t. the $y$-component) to
\eqref{eq:D:G} yields
\begin{equation*}
(\frac{1}{T}+\nabla^*\aa\nabla)\nabla G(\cdot,\bb)=\delta(\cdot-y_{\bb})-\delta(\cdot-x_{\bb}).
\end{equation*}
We test this equation with $\nabla G(\cdot,\bb)$ and get
\begin{equation}\label{eq:46}
\frac{1}{T}\sum_{x\in\Z^d}\left(\nabla
G(x,\bb)\right)^2+\sum_{\bb'\in\B^d}\aa(\bb')\left(\nabla\nabla
G(\bb',\bb)\right)^2=\nabla\nabla G(\bb,\bb).
\end{equation}
This identity implies that $\nabla\nabla G(\bb,\bb)$ and
$1-\aa(\bb)\nabla\nabla G(\bb,\bb)$ are strictly positive. Indeed,
$\nabla\nabla G(\bb,\bb)$ must be strictly positive, since otherwise $\sum_{x\in\Z^d}|\nabla G(x,\bb)|^2=0$ and thus $G(\cdot,\bb)=0$ in
contradiction to \eqref{eq:D:G}. The strict positivity of
$1-\aa(\bb)\nabla\nabla G(\bb,\bb)$ follows from the strict positivity of
$\nabla\nabla G(\bb,\bb)-\aa(\bb)\left(\nabla\nabla
G(\bb,\bb)\right)^2$. The latter can be seen by the following argument:
\begin{eqnarray*}
\lefteqn{\nabla\nabla G(\bb,\bb)-\aa(\bb)\left(\nabla\nabla
G(\bb,\bb)\right)^2}&&\\
&=&\left(\nabla\nabla G(\bb,\bb)-\frac{1}{T}\sum_{x\in\Z^d}\left(\nabla
G(x,\bb)\right)^2-\sum_{\bb'\in\B^d}\aa(\bb')\left(\nabla\nabla
G(\bb',\bb)\right)^2\right)\\
&&+\frac{1}{T}\sum_{x\in\Z^d}\left(\nabla
G(x,\bb)\right)^2+\sum_{\bb'\neq\bb}\aa(\bb')\left(\nabla\nabla
G(\bb',\bb)\right)^2\\
&\stackrel{\eqref{eq:46}}{\geq}&\frac{1}{T}\sum_{x\in\Z^d}\left(\nabla
G(x,\bb)\right)^2>0.
\end{eqnarray*}
\end{proof}
The next lemma establishes a (quantitative) link between the vertical and classical
partial derivative of $\phi_T$.
\newcommand{\osc}{\mathop{\operatorname{osc}}}
\begin{lemma}\label{L:osc}
Let $\bb\in\B^d$ be fixed. For $T>0$ let $\phi_T$ and $G_T$ denote the modified corrector
and the Green's function. Then
\begin{equation}
\label{eq:osc:1}
\left|\frac{\partial\phi_T(x=0)}{\partial\bb}\right|\leq\left(1+\frac{\aa(\bb)}{1-\aa(\bb)\nabla\nabla
G_T(\bb,\bb)}\right)\left|\frac{\partial\phi_T(x=0)}{\partial\aa(\bb)}\right|.
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{L:osc}]
Fix $\aa\in\Omega$ and $\bb\in\B^d$. Set $a_0:=\aa(\bb)$. We shall
use the following shorthand notation
\begin{equation}\label{eq:44}
\varphi(a):=\frac{\partial\phi_T(\aa^{\bb,a},x=0)}{\partial\aa(\bb)},\qquad
g(a):=\nabla\nabla G_T(\aa^{\bb,a},\bb,\bb),\qquad (a\in[0,1]),
\end{equation}
where $\aa^{\bb,a}$ denotes the coefficient field obtained from
$\aa$ by setting $\aa^{\bb,a}(\bb')=a$ if $\bb'=\bb$ and
$\aa^{\bb,a}(\bb'):=\aa(\bb')$ else. With that notation \eqref{eq:ODE:1} and \eqref{eq:ODE:2} turn into
\begin{eqnarray}
\label{eq:L:osc:1a}
\varphi'=-2g\varphi,\\
\label{eq:L:osc:1b}
g'=-g^2.
\end{eqnarray}
Since we have $\left|\frac{\partial\phi_T(x=0)}{\partial\bb}\right|\leq
\int_0^1|\varphi(a)|\,da$, it suffices to show
\begin{equation}\label{eq:L:osc:2}
\int_0^1|\varphi(a)|\,da\leq\left(1+\frac{a_0}{1-a_0g(a_0)}\right)|\varphi(a_0)|.
\end{equation}
The positivity of $g$ and \eqref{eq:L:osc:1a} imply that $\varphi$
is either strictly positive, strictly
negative or that it vanishes identically. In the latter case, the claim is
trivial. In the other cases we have
\begin{equation*}
\varphi(a)=\exp(h(a))\varphi(a_0),\qquad\mbox{where }h(a):=\ln\frac{\varphi(a)}{\varphi(a_0)},
\end{equation*}
and \eqref{eq:L:osc:2} reduces to the inequality
\begin{equation}
\label{eq:19}
\int_0^1\exp(h(a))\,da\leq 1+ \frac{a_0}{1-a_0g(a_0)}.
\end{equation}
From \eqref{eq:L:osc:1a} we learn that $h'=-2g$. Since $g>0$, $h$
is decreasing. Combined with the identity
$h(a_0)=0$ we get
\begin{equation}\label{eq:20}
h(a)\leq
\left\{\begin{aligned}
&2\int_a^{a_0}g(a')\,da'&\qquad&\mbox{for }a\in[0,a_0),\\
&0&&\mbox{for }a\in[a_0,1].
\end{aligned}\right.
\end{equation}
On the other hand, we learn from integrating \eqref{eq:L:osc:1b}
that $g(a')=\frac{g(a_0)}{1+(a'-a_0)g(a_0)}$. Hence, for $a<a_0$ the
right-hand side in \eqref{eq:20} turns into
\begin{equation*}
2\int_a^{a_0}g(a')\,da'=-2\ln(1+(a-a_0)g(a_0)),
\end{equation*}
which in combination with \eqref{eq:20} yields \eqref{eq:19}.
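For the reader's convenience we spell out the elementary computation behind this last step: inserting the bound $\exp(h(a))\leq(1+(a-a_0)g(a_0))^{-2}$ for $a\in[0,a_0)$ and $\exp(h(a))\leq 1$ for $a\in[a_0,1]$ yields
\begin{equation*}
\int_0^1\exp(h(a))\,da\leq\int_0^{a_0}\frac{da}{(1+(a-a_0)g(a_0))^{2}}+(1-a_0)
=\frac{a_0}{1-a_0g(a_0)}+(1-a_0)\leq 1+\frac{a_0}{1-a_0g(a_0)}.
\end{equation*}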
\end{proof}
Lemma~\ref{lem:2} is a direct consequence of \eqref{eq:osc:1},
\eqref{eq:ODE:0} and the following estimate:
\begin{lemma}\label{L:bound}
Let $G_T$ denote the Green's function. Assume that (A1) is
satisfied. Then for all $T>0$, $\aa\in\Omega$ and $\bb\in\B^d$ we have
\begin{eqnarray}
\label{eq:bound}
1+\frac{\aa(\bb)}{1-\aa(\bb)\nabla\nabla G_T(\bb,\bb)}\lesssim \omega_0^2(\aa,\bb),
\end{eqnarray}
where $\lesssim$ means up to a constant that only depends on $d$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{L:bound}]
\step 1 Reduction to an estimate for $\aa^{\bb,0}$.
We claim that
\begin{equation*}
\frac{\aa(\bb)}{1-\aa(\bb)\nabla\nabla G_T(\aa,\bb,\bb)}\leq
(1+\nabla\nabla G_T(\aa^{\bb,0},\bb,\bb))^2.
\end{equation*}
For the argument let $\aa\in\Omega$ and $\bb\in\B^d$ be fixed. With
the shorthand
notation introduced in \eqref{eq:44}, the claim reads
\begin{equation}\label{eq:bound:step1}
\frac{a_0}{1-a_0g(a_0)}\leq (1+g(0))^2.
\end{equation}
For $a_0=0$ the
statement is trivial. For $a_0>0$ consider the function
\begin{equation*}
f(a):=\frac{1}{a}g(a)-g^2(a),
\end{equation*}
with the help of which the left-hand side in \eqref{eq:bound:step1} can be written as $\frac{g(a_0)}{f(a_0)}$. The function $f$ is non-negative and decreasing, as can be seen by
combining the inequality $0<g(a)<\frac{1}{a}$ from Lemma~\ref{L:ODE} with the identity
$f'(a)=g(a)(g^2(a)-\frac{1}{a^2}+g^2(a)-\frac{1}{a}g(a))$ which
follows from \eqref{eq:L:osc:1b}. The latter also implies that $g(1)=\frac{g(0)}{1+g(0)}$ and
thus $f(1)=g(1)(1-g(1))=\frac{g(0)}{(1+g(0))^2}$.
Hence,
\begin{equation*}
\frac{a_0}{1-a_0g(a_0)}=\frac{g(a_0)}{f(a_0)}\leq \frac{g(a_0)}{f(1)}=
(1+g(0))^2\frac{g(a_0)}{g(0)}\leq (1+g(0))^2;
\end{equation*}
in the last step we used in addition that $g(a_0)\leq g(0)$ which is a
consequence of \eqref{eq:L:osc:1b}.
\medskip
\step 2 Conclusion.
To complete the argument we only need to show that
\begin{equation}\label{eq:48}
\nabla\nabla G_T(\aa^{\bb,0},\bb,\bb)\lesssim \omega_0(\aa,\bb).
\end{equation}
For simplicity set $\aa_0:=\aa^{\bb,0}$. Note that
$\omega_0(\aa,\bb)=\omega(\aa_0,\bb)$. From \eqref{eq:46} we obtain
\begin{eqnarray*}
\nabla\nabla G_T(\aa_0,\bb,\bb)&\stackrel{\eqref{eq:46}}{\geq}& \sum_{\bb'\in\B^d}\aa_0(\bb')\left(\nabla\nabla
G_T(\aa_0,\bb',\bb)\right)^2\stackrel{\eqref{eq:coercivity}}{\gtrsim} \sum_{\bb'\in\B^d}\omega^{-1}(\aa_0,\bb')\left(\nabla\nabla
G_T(\aa_0,\bb',\bb)\right)^2\\
&\geq& \omega^{-1}(\aa_0,\bb)\left(\nabla\nabla
G_T(\aa_0,\bb,\bb)\right)^2.
\end{eqnarray*}
Dividing both sides by $\omega^{-1}(\aa_0,\bb)\nabla\nabla
G_T(\aa_0,\bb,\bb)$ yields \eqref{eq:48}.
\end{proof}
\subsection{Proof of Lemma~\ref{lem:coercivity-phys} and Lemma~\ref{lem:coercivity-prob}}
\begin{proof}[Proof of Lemma~\ref{lem:coercivity-phys}]
Fix for a moment $\aa\in\Omega$. For $\bb\in\B^d$ with
$\dist_{\aa}(x_{\bb},y_{\bb})<\infty$, let $\pi_{\aa}(\bb)$ denote a shortest
open path that connects $x_{\bb}$ and $y_{\bb}$, i.e.
\begin{equation*}
\dist_{\aa}(x_{\bb},y_{\bb})=\sum_{\bb'\in\pi_{\aa}(\bb)}\frac{1}{\aa(\bb')}.
\end{equation*}
Thanks to the
triangle inequality and the Cauchy-Schwarz inequality we
have
\begin{eqnarray*}
|\nabla u(\bb)|&\leq&\sum_{\bb'\in\pi_{\aa}(\bb)}|\nabla
u(\bb')|\leq
\left(\sum_{\bb'\in\pi_{\aa}(\bb)}\frac{1}{\aa(\bb')}\right)^{\frac{1}{2}}\,\left(\sum_{\bb'\in\pi_{\aa}(\bb)}|\nabla
u(\bb')|^2\aa(\bb')\right)^{\frac{1}{2}}\\
&=&\dist_{\aa}^\frac{1}{2}(x_{\bb},y_{\bb})\left(\sum_{\bb'\in\pi_{\aa}(\bb)}|\nabla
u(\bb')|^2\aa(\bb')\right)^{\frac{1}{2}}.
\end{eqnarray*}
Hence, using the convention $\frac{1}{\infty}=0$, we conclude that for all
$\bb\in\B^d$ and $\aa\in\Omega$:
\begin{equation}\label{eq:11}
\dist^{-p}_{\aa}(x_{\bb},y_{\bb})|\nabla u(\bb)|^2\,\leq\,\dist^{1-p}_{\aa}(x_{\bb},y_{\bb})\sum_{\bb'\in\pi_{\aa}(\bb)}|\nabla
u(\bb')|^2\aa(\bb').
\end{equation}
We drop the ``$\aa$'' in the notation from now on. Summation of
\eqref{eq:11} in
$\bb\in\B^d$ yields
\begin{eqnarray*}
\sum_{\bb\in\B^d}\dist^{-p}(x_{\bb},y_{\bb})|\nabla u(\bb)|^2
&\leq&\sum_{\bb\in\B^d}\sum_{\bb'\in\pi(\bb)}\dist^{1-p}(x_{\bb},y_{\bb})|\nabla
u(\bb')|^2\aa(\bb')\\
&=&\sum_{\bb'\in\B^d}\sum_{\bb\in\B^d\text{ with }\atop\pi(\bb)\ni\bb'}\dist^{1-p}(x_{\bb},y_{\bb})|\nabla
u(\bb')|^2\aa(\bb').
\end{eqnarray*}
Since $\pi(\bb)$ is a shortest path, and because $\aa\leq 1$, we
have $\dist(x_{\bb},y_{\bb})\geq |x_{\bb}-x_{\bb'}|+1$ for all
$\bb,\bb'\in\B^d$ with $\bb'\in\pi(\bb)$. Combined with the
previous estimate we get
\begin{eqnarray*}
\sum_{\bb\in\B^d}\dist^{-p}(x_{\bb},y_{\bb})|\nabla u(\bb)|^2
&\leq&\sum_{\bb'\in\B^d}\sum_{\bb\in\B^d\text{ with }\atop\pi(\bb)\ni\bb'}(|x_{\bb}-x_{\bb'}|+1)^{1-p}|\nabla
u(\bb')|^2\aa(\bb')\\
&\leq&C(d,p)\,\sum_{\bb'\in\B^d}|\nabla
u(\bb')|^2\aa(\bb').
\end{eqnarray*}
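In the last step we used that, for each fixed $\bb'$, the inner sum is dominated by a convergent series:
\begin{equation*}
\sum_{\bb\in\B^d\text{ with }\atop\pi(\bb)\ni\bb'}(|x_{\bb}-x_{\bb'}|+1)^{1-p}
\leq d\sum_{x\in\Z^d}(|x|+1)^{1-p}
\lesssim\sum_{r=1}^\infty r^{d-1}\,r^{1-p}<\infty,
\end{equation*}
where the series converges since $1+d-p<0$; this is the same condition on $p$ as in the proof of Lemma~\ref{lem:coercivity-prob} below.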
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:coercivity-prob}]
Fix $\bb\in\B^d$. For $L\in\N$ consider the indicator function
\begin{equation}\label{eq:13}
\chi_L(\aa):=
\left\{\begin{aligned}
&1&&\text{if }L\leq\dist_{\aa}(x_{\bb},y_{\bb})<2L,\\
&0&&\text{else}.
\end{aligned}\right.
\end{equation}
With the convention $\frac{1}{\infty}=0$, we have
\begin{equation}\label{eq:15}
\sum_{k=0}^\infty\chi_{2^k}(\aa)\dist^{-p}_{\aa}(x_{\bb},y_{\bb})=\dist^{-p}_{\aa}(x_{\bb},y_{\bb})
\end{equation}
for all $\aa\in\Omega$. In the following we drop ``$\aa$''
in the notation. We recall \eqref{eq:11} in the form of
\begin{equation}\label{eq:14}
\chi_L\dist^{-p}(x_{\bb},y_{\bb})|\nabla u(\bb)|^2
\,\leq\,\chi_L\dist^{1-p}(x_{\bb},y_{\bb})\sum_{\bb'\in\pi(\bb)}|\nabla
u(\bb')|^2\aa(\bb').
\end{equation}
From $\aa\leq 1$ and $\dist(x_{\bb},y_{\bb})<2L$ for $\chi_L\neq 0$, cf. \eqref{eq:13},
we learn that $\pi(\bb)$ is contained in the box $Q_{2L}(x_{\bb})$.
Hence, \eqref{eq:14} turns into
\begin{equation*}
\chi_L\dist^{-p}(x_{\bb},y_{\bb})|\nabla u(\bb)|^2
\,\stackrel{\eqref{eq:13}}{\leq}\,\chi_L L^{1-p}\sum_{\bb'\in Q_{2L}(x_{\bb})}|\nabla
u(\bb')|^2\aa(\bb').
\end{equation*}
We take the expectation on both sides and appeal to stationarity:
\begin{eqnarray*}
\expec{\chi_L\dist^{-p}(x_{\bb},y_{\bb})|\nabla u(\bb)|^2}
&\leq&L^{1-p}\sum_{\bb'\in Q_{2L}(x_{\bb})}\expec{\chi_L|\nabla
u(\bb')|^2\aa(\bb')}\\
&\stackrel{\chi_L\leq 1}{\leq}&L^{1-p}\sum_{x\in
B_{2L}(x_{\bb})}\sum_{\bb'=\{x,x+e_i\}\atop i=1,\ldots,d}\expec{|\nabla
u(\bb')|^2\aa(\bb')}\\
&\stackrel{\text{stationarity}}{\leq}& L^{1-p}|B_{2L}(0)|\sum_{\bb'=\{0,e_i\}\atop i=1,\ldots,d}\expec{|\nabla
u(\bb')|^2\aa(\bb')}.
\end{eqnarray*}
Using $1+d-p<0$ we get
\begin{eqnarray*}
\expec{\dist^{-p}(x_{\bb},y_{\bb})|\nabla u(\bb)|^2}
&\stackrel{\eqref{eq:15}}{=}&\sum_{k=0}^\infty\expec{\chi_{2^k}\dist^{-p}(x_{\bb},y_{\bb})|\nabla u(\bb)|^2}\\
&\leq& C(p,d)\,\sum_{\bb'=\{0,e_i\}\atop i=1,\ldots,d}\expec{|\nabla
u(\bb')|^2\aa(\bb')}.
\end{eqnarray*}
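In the last step the constant $C(p,d)$ absorbs the geometric series generated by the dyadic decomposition: with $L=2^k$ and $|B_{2L}(0)|\lesssim L^d$ we have
\begin{equation*}
\sum_{k=0}^\infty (2^k)^{1-p}\,|B_{2^{k+1}}(0)|\lesssim\sum_{k=0}^\infty 2^{k(1+d-p)}=\frac{1}{1-2^{1+d-p}}<\infty,
\end{equation*}
which is finite precisely because $1+d-p<0$.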
\end{proof}
\subsection{Proof of Proposition~\ref{P1} -- Green's function estimates}\label{S:green}
We first establish an estimate for
the Green's function itself:
\begin{lemma}\label{L:P1:2}
Let $d\geq2$ and consider $u,f\in\ell^1(\Z^d)$ with
\begin{equation}\label{L:P1:2-1}
\nabla^*\aa\nabla u=f\qquad\text{in }\Z^d.
\end{equation}
Then for all $\frac{2d}{d+2}<p<2$, $R\geq 1$ and $x_0\in\Z^d$ we have
\begin{equation}\label{L:4:a}
\sum_{x\in B_R(x_0)}|u(x)-\bar u|\lesssim
C\,R^{2}\sum_{x\in\Z^d}|f(x)|.
\end{equation}
Here, $\bar u:=\frac{1}{|B_R(x_0)|}\sum_{x\in B_R(x_0)}u(x)$
denotes the average of $u$ on $B_R(x_0)$,
$C:=C(\aa,Q_R(x_0),\tfrac{p}{2-p})$, and $\lesssim$
means $\leq$ up to a constant that only depends on
$d$ and $p$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{L:P1:2}]
W.~l.~o.~g. we assume $\sum_{\Z^d}|f|=1$ and $R\in\N$. To shorten
the notation we write $B_R$ and $Q_R$ for $B_R(x_0)$ and
$Q_R(x_0)$, respectively.
Let $M(u)$ denote a median of $u$ on $B_R$, i.~e.
\begin{equation*}
|\{u\geq M(u)\}\cap B_R|, |\{u\leq M(u)\}\cap B_R|\geq\frac{1}{2}|B_R|.
\end{equation*}
By Jensen's inequality we have $|\bar u-M(u)|\leq\frac{1}{|B_R|}\sum_{B_R}|u-M(u)|$,
so that it suffices to prove for $v:=u-M(u)$ the estimate
\begin{equation*}
\sum_{B_R}|v|\lesssim C\,R^2\sum_{\Z^d} |f|
=C\,R^2.
\end{equation*}
For $0\leq M<\infty$ consider the cut-off version of $v$
\begin{equation*}
v_M:=\max\{\min\{v,M\},0\}.
\end{equation*}
Since $v_M$ is a monotone, $1$-Lipschitz function of $v$, and since $u$ and $v$ differ only by a constant, we have
$\nabla v_M(\bb)\,\nabla u(\bb)=\nabla v_M(\bb)\,\nabla v(\bb)\geq(\nabla v_M(\bb))^2$ for every $\bb\in\B^d$, and therefore
\begin{eqnarray*}
\sum_{\B^d}\nabla v_M\,\aa\nabla v_M\leq\sum_{\B^d}\nabla u\,\aa\nabla v_M.
\end{eqnarray*}
Since $u\in\ell^1(\Z^d)$ (by assumption) and $v_M\in\ell^\infty(\Z^d)$ (by construction), we may integrate by parts:
\begin{equation*}
\sum_{\B^d}\nabla u\,\aa\nabla v_M=
\sum_{\Z^d} v_M\,\nabla^*\aa\nabla u=\sum_{\Z^d} fv_M\leq
M\sum_{\Z^d}|f|=M.
\end{equation*}
Hence,
\begin{equation}\label{eq:L1:1}
\sum_{\B^d}\nabla v_M\,\aa\nabla v_M\leq M.
\end{equation}
Set $p^*=\frac{pd}{d-p}$ and $q^*:=\frac{p^*}{p^*-1}$.
By construction we have $|\{v_M=0\}\cap B_R|=|\{v_M\leq 0\}\cap B_R|\geq \frac{1}{2}|B_R|$.
Hence, the Sobolev-Poincar\'e inequality yields
\begin{equation*}
\left(R^{-d}\sum_{B_R}|v_M|^{p^*}\right)^{\frac{1}{p^*}}
\lesssim R\left(R^{-d}\sum_{Q_R}|\nabla v_M|^p\right)^{\frac{1}{p}}.
\end{equation*}
Lemma \ref{lem:coercivity-phys} combined with H\"older's inequality
with exponents $(\frac{2}{2-p},\frac{2}{p})$ yields
\begin{eqnarray}\nonumber
\left(R^{-d}\sum_{Q_R}|\nabla v_M|^{p}\right)^{\frac{1}{p}}
&=&
\left(R^{-d}\sum_{Q_R}\omega^{\frac{p}{2}}|\nabla
v_M|^{p}\omega^{-\frac{p}{2}}\right)^{\frac{1}{p}}\\\nonumber
&\leq&
\left(R^{-d}\sum_{Q_R}\omega^{\frac{p}{2-p}}\right)^{\frac{2-p}{2p}}
\left(R^{-d}\sum_{Q_R}|\nabla
v_M|^2\omega^{-1}\right)^{\frac{1}{2}}\\\label{eq:coercivity2}
&\stackrel{\text{Lemma~\ref{lem:coercivity-phys}}}{\lesssim}&
C^{\frac{1}{2}}\
\left(R^{-d}\sum_{\B^d}\nabla v_M\,\aa\nabla
v_M\right)^{\frac{1}{2}},
\end{eqnarray}
so that
\begin{align}
\label{eq:L1:2}
\left(R^{-d}\sum_{B_R}|v_M|^{p^*}\right)^{\frac{1}{p^*}}\,
\lesssim\, C^{\frac{1}{2}}R\left(R^{-d}
\sum_{\B^d}\nabla v_M\,\aa\nabla v_M\right)^{\frac{1}{2}}
\stackrel{\eqref{eq:L1:1}}{\lesssim}\,(C
R^{2-d}M)^{\frac{1}{2}}.
\end{align}
Next we use Chebyshev's inequality in the form of
\begin{equation*}
M\left(R^{-d}|\{\,v>M\,\}\cap B_R|\right)^{\frac{1}{p^*}}
\lesssim \left(R^{-d}\sum_{B_R}|v_M|^{p^*}\right)^{\frac{1}{p^*}}.
\end{equation*}
With \eqref{eq:L1:2} we get
\begin{equation*}
R^{-d}|\{\,v>M\,\}\cap B_R|\lesssim C^{\frac{p^*}{2}}\,R^{(2-d)\frac{p^*}{2}}M^{-\frac{p^*}{2}},
\end{equation*}
which upgrades by symmetry to
\begin{equation*}
R^{-d}|\{\,|v|>M\,\}\cap B_R|\lesssim C^{\frac{p^*}{2}}\,R^{(2-d)\frac{p^*}{2}}M^{-\frac{p^*}{2}}.
\end{equation*}
Since $p>\frac{2d}{d+2}$ (by assumption), we have $\frac{p^*}{2}>1$ and the layer cake
formula for $M:=CR^{2-d}$ yields
\begin{eqnarray*}
R^{-d}\sum_{B_R}|v|&=&\int_0^\infty R^{-d}|\{\,|v|>M'\,\}\cap
B_R|\,dM'\,\lesssim\, M+\int_M^\infty R^{-d}|\{\,|v|>M'\,\}\cap B_R|\,dM'\\
&\lesssim& M + C^{\frac{p^*}{2}}R^{(2-d)\frac{p^*}{2}}M^{1-\frac{p^*}{2}}\,\lesssim\,CR^{2-d}.
\end{eqnarray*}
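Note that the choice $M=CR^{2-d}$ exactly balances the two contributions: since $\frac{p^*}{2}>1$, the tail integral evaluates to
\begin{equation*}
\int_M^\infty C^{\frac{p^*}{2}}R^{(2-d)\frac{p^*}{2}}(M')^{-\frac{p^*}{2}}\,dM'
=\frac{C^{\frac{p^*}{2}}R^{(2-d)\frac{p^*}{2}}}{\frac{p^*}{2}-1}\,M^{1-\frac{p^*}{2}}
\stackrel{M=CR^{2-d}}{=}\frac{1}{\frac{p^*}{2}-1}\,CR^{2-d}.
\end{equation*}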
\end{proof}
A careful Caccioppoli estimate combined with the previous lemma yields:
\begin{lemma}\label{L:P1:2b}
Let $d\geq2$, $x_0\in\Z^d$ and $R\geq 1$. Consider $f\geq 0$ and $u$
related as
\begin{equation}\label{eq:L2:1}
\nabla^*\aa\nabla u=-f\qquad\text{in }B_{2R}(x_0).
\end{equation}
Then for $\frac{2d}{d+2}<p<2$ we have
\begin{equation}\label{L2:1}
\left(R^{-d}\sum_{Q_R(x_0)}|R\nabla u|^p\right)^{\frac{1}{p}}
\lesssim C^{\frac{\alpha}{2}}\,\left(R^{-d}\sum_{B_{2R}(x_0)}|u|+\left(R^{2-d}\sum_{B_{2R}(x_0)} fu_-\right)^{\frac{1}{2}}\right),
\end{equation}
where $u_-:=\max\{-u,0\}$ denotes the negative part of $u$,
$C:=C(\aa, Q_{2R}(x_0),\tfrac{p}{2-p})$,
$\alpha:=2\frac{p^*-1}{p^*-2}$ and $p^*:=\frac{dp}{d-p}$. Here
$\lesssim$ stands for $\leq$ up to a constant that only depends on
$p$ and $d$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{L:P1:2b}]
\step 1 Caccioppoli estimate.
We claim that for every cut-off function $\eta$ that is supported in $B_{2R-1}(x_0)$ (so that in particular $\nabla\eta=0$
outside of $Q_{2R}(x_0)$) we have
\begin{equation}\label{eq:cacc2}
\left(R^{-d}\sum_{\B^d}|R\nabla(u\eta)|^p\right)^{\frac{1}{p}}\lesssim
C^{\frac{1}{2}}\left(R^{2-d}\sum_{\Z^d}fu_-\eta^2+R^{-d}\sum_{\bb\in\B^d}u(x_{\bb})u(y_{\bb})|R\nabla\eta(\bb)|^2\aa(\bb) \right)^{\frac{1}{2}}.
\end{equation}
Indeed, we get with Lemma~\ref{lem:coercivity-phys} (using an argument
similar to \eqref{eq:coercivity2}):
\begin{equation*}
\left(R^{-d}\sum_{\B^d}|R\nabla(u\eta)|^p\right)^{\frac{1}{p}}=\left(R^{-d}\sum_{Q_{2R}(x_0)}|R\nabla(u\eta)|^p\right)^{\frac{1}{p}}\lesssim
\,C^{\frac{1}{2}}\left(R^{-d}\sum_{\B^d}|R\nabla(u\eta)|^2\aa\right)^{\frac{1}{2}}.
\end{equation*}
Combined with the elementary identity
\begin{equation*}
|\nabla(u\eta)(\bb)|^2=\nabla u(\bb)\nabla(u\eta^2)(\bb)+u(x_{\bb})u(y_{\bb})|\nabla\eta(\bb)|^2,
\end{equation*}
the equation for $u$, and the fact that $-fu\eta^2\leq
fu_-\eta^2$ (here we use $f\geq 0$), the claimed estimate
\eqref{eq:cacc2} follows.
\medskip
\step 2 Conclusion.
Set $\theta:=\frac{\alpha-1}{\alpha}$ and note that $\alpha$ is
defined in such a way that for the considered range of $p$ we have
\begin{equation}\label{eq:L2:03}
\frac{1}{2}=\theta
\frac{1}{p^{*}}+(1-\theta)\qquad\text{and}\qquad 2(1-\theta)<1.
\end{equation}
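Indeed, from $\alpha=2\frac{p^*-1}{p^*-2}$ we get $1-\theta=\frac{1}{\alpha}=\frac{p^*-2}{2(p^*-1)}$ and $\theta=\frac{p^*}{2(p^*-1)}$, so that
\begin{equation*}
\theta\frac{1}{p^*}+(1-\theta)=\frac{1}{2(p^*-1)}+\frac{p^*-2}{2(p^*-1)}=\frac{1}{2},
\qquad
2(1-\theta)=\frac{p^*-2}{p^*-1}<1.
\end{equation*}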
As we shall see below in Step~3, there exists a cut-off function
$\eta$ with $\eta=1$ in $B_{R+1}(x_0)$ and $\eta=0$ outside of $B_{2R-1}(x_0)$, such that
\begin{multline}
\label{eq:L2:02}
\left(R^{-d}\sum_{\bb\in\B^d}|u(x_{\bb})||u(y_{\bb})||R\nabla\eta(\bb)|^2\right)^{\frac{1}{2}}\,\lesssim\,\left(R^{-d}\sum_{\Z^d} |u\eta|^{p^*}\right)^{\frac{\theta}{p^*}}
\left(R^{-d}\sum_{B_{2R}(x_0)}|u|\right)^{1-\theta}\\
+\left(R^{-d}\sum_{\Z^d} |u\eta|^{p^*}\right)^{\frac{1}{2p^*}}
\left(R^{-d}\sum_{B_{2R}(x_0)}|u|\right)^{\frac{1}{2}}.
\end{multline}
Let us explain the right-hand side of this estimate. While the first
term on the right-hand side would also appear in the continuum case (i.e.~when $\Z^d$ is
replaced by $\R^d$), the second term is an error term coming from discreteness. In
fact, it is of lower order: A sharp look at \eqref{eq:16} below shows
that \eqref{eq:L2:02} holds with the vanishing factor $R^{-\epsilon}$ (for
some $\epsilon>0$ only depending on $p$ and $d$) in front of the
second term on the right-hand side.
By combining this estimate with the Gagliardo-Nirenberg-Sobolev
inequality on $\Z^d$,\linebreak i.e.~$\left(R^{-d}\sum_{\Z^d}|u\eta|^{p^*}\right)^{\frac{1}{p^*}}
\,\lesssim\,\left(R^{-d}\sum_{\B^d}|R\nabla(u\eta)|^p\right)^{\frac{1}{p}}$,
and two applications of Young's inequality, we find that for all $\delta>0$ there exists a constant $C(\delta)>0$ only depending on $\delta$, $p$ and $d$, such that
\begin{equation*}
\begin{split}
&\left(CR^{-d}\sum_{\bb\in\B^d}|u(x_{\bb})||u(y_{\bb})|(\nabla\eta(\bb))^2\aa(\bb)\right)^{\frac{1}{2}}\\
&\qquad\qquad\leq\,\delta\left(R^{-d}\sum_{\B^d}
|R\nabla(u\eta)|^{p}\right)^{\frac{1}{p}} +
C(\delta)\left(C^{\frac{1}{2(1-\theta)}}R^{-d}\sum_{B_{2R}(x_0)}|u|
+ CR^{-d}\sum_{B_{2R}(x_0)}|u|\right)\\
&\qquad\qquad\stackrel{2(1-\theta)<1}{\leq}\,\delta\left(R^{-d}\sum_{\B^d}
|R\nabla(u\eta)|^{p}\right)^{\frac{1}{p}} +
2C(\delta)C^{\frac{1}{2(1-\theta)}}R^{-d}\sum_{B_{2R}(x_0)}|u|.
\end{split}
\end{equation*}
We combine this estimate with \eqref{eq:cacc2} and absorb the first
term on the right-hand side of the previous estimate into the
left-hand side of \eqref{eq:cacc2}. Since $\nabla(\eta u)=\nabla u$ in $Q_{R}(x_0)$ this yields \eqref{L2:1}.
\smallskip
\step 3 Proof of \eqref{eq:L2:02}.
We first construct a suitable cut-off function $\eta$ for $B_{R+1}(x_0)$ in
$B_{2R-1}(x_0)$. W.~l.~o.~g. we assume that $x_0=0$. Recall that $\alpha=2\frac{p^*-1}{p^*-2}$.
For $t\geq 0$ set
\begin{equation*}
\tilde\eta(t):=\max\{1-2\max\{\tfrac{t}{R+1}-1,0\},0\}^\alpha,
\end{equation*}
and define
\begin{equation}
\label{eq:cutoff}
\eta(x):=\prod_{i=1}^d\tilde\eta(|x_i|).
\end{equation}
Using the relation $\alpha-1=\theta\alpha$, cf. \eqref{eq:L2:03}, it is straightforward to check that $\eta$ satisfies for all edges $\bb$ with
$|\nabla\eta(\bb)|>0$:
\begin{align}\label{eq:L2:3a}
&R|\nabla\eta(\bb)|\lesssim
\begin{cases}
\min\{\eta^{\theta}(x_{\bb}),\eta^\theta(y_{\bb})\} & \text{if }\min\{\eta(x_{\bb}),\eta(y_{\bb})\}>0,\\
R^{1-\alpha} & \text{if }\min\{\eta(x_{\bb}),\eta(y_{\bb})\}=0.
\end{cases}
\end{align}
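To see this in one variable, write $\tilde\eta=s^\alpha$ with $s(t):=\max\{1-2\max\{\tfrac{t}{R+1}-1,0\},0\}$, so that $|s'|\leq\frac{2}{R+1}$. Using $\alpha-1=\theta\alpha$ we get
\begin{equation*}
|\tilde\eta'(t)|=\alpha\, s^{\alpha-1}(t)\,|s'(t)|\lesssim\frac{1}{R}\,\big(s^{\alpha}(t)\big)^{\theta}=\frac{1}{R}\,\tilde\eta^{\theta}(t),
\end{equation*}
which after discretization yields the first case in \eqref{eq:L2:3a}. For the second case note that $s$ changes by at most $\frac{2}{R+1}$ over a unit step, so that $\tilde\eta\lesssim R^{-\alpha}$ on the last unit interval on which $\tilde\eta$ is positive.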
Now we turn to \eqref{eq:L2:02}. We split the sum into an ``interior''
and a ``boundary'' contribution:
\begin{multline*}
\sum_{\bb\in\B^d} |u(x_{\bb})||u(y_{\bb})|(\nabla\eta(\bb))^2\\
= \sum_{\bb\in A_{\text{int}}} |u(x_{\bb})||u(y_{\bb})|(\nabla\eta(\bb))^2
+\sum_{\bb\in A_{\text{bound}}} |u(x_{\bb})||u(y_{\bb})|(\nabla\eta(\bb))^2,
\end{multline*}
where
\begin{eqnarray*}
A_{\text{int}}&:=&\{\,\bb\,:\,|\nabla\eta(\bb)|>0\,\text{ and
}\,\min\{\eta(x_{\bb}),\eta(y_{\bb})\}>0\,\},\\
A_{\text{bound}}&:=&\{\,\bb\,:\,|\nabla\eta(\bb)|>0\,\text{ and }\,\min\{\eta(x_{\bb}),\eta(y_{\bb})\}=0\,\}.
\end{eqnarray*}
For $A_{\text{int}}$ we get with \eqref{eq:L2:3a}, Young's
inequality, and H\"older's inequality with exponents
$(p^*\frac{1}{2\theta},\frac{1}{2(1-\theta)})$:
\begin{equation}\label{eq:L2:021}
\begin{aligned}
&R^{-d}\sum_{\bb\in A_{\text{int}}}
|u(x_{\bb})||u(y_{\bb})||R\nabla\eta(\bb)|^2\,\lesssim\,
R^{-d}\sum_{\Z^d}u^2\eta^{2\theta}\\
&\qquad =\,R^{-d}\sum_{\Z^d}|u\eta|^{2\theta}|u|^{2(1-\theta)}
\, \leq\,
\left(R^{-d}\sum_{\Z^d}
(u\eta)^{p^*}\right)^{\frac{2\theta}{p^*}}\left(R^{-d}\sum_{B_{2R}}
|u|\right)^{2(1-\theta)}.
\end{aligned}
\end{equation}
Next we treat $A_{\text{bound}}$, which is an error term coming from discreteness. By the definition of $A_{\text{bound}}$ the cut-off function
$\eta$ vanishes at one and only one of the two sites adjacent to $\bb\in
A_{\text{bound}}$. Given $\bb\in A_{\text{bound}}$ we denote by $\tilde x_{\bb}$ (resp. $\tilde
y_{\bb}$) the site adjacent to $\bb$ with $\eta(\tilde x_{\bb})=0$ (resp.
$\eta(\tilde y_{\bb})\neq 0$), so that
\begin{equation*}
R^{-d}\sum_{\bb\in A_{\text{bound}}}|u(x_{\bb})||u(y_{\bb})||R\nabla\eta(\bb)|^2=R^{1-d}\sum_{\bb\in A_{\text{bound}}}|u(\tilde
x_{\bb})||u(\tilde y_{\bb})|\eta(\tilde y_{\bb})|R\nabla\eta(\bb)|.
\end{equation*}
We combine this with \eqref{eq:L2:3a}, H\"older's inequality with
exponents $(p^*,q^*:=\frac{p^*}{p^*-1})$, and the discrete $\ell^{1}$-$\ell^{q^*}$-estimate:
\begin{align}\nonumber
&R^{-d}\sum_{\bb\in
A_{\text{bound}}}|u(x_{\bb})||u(y_{\bb})||R\nabla\eta(\bb)|^2\,\lesssim
\,R^{2-d-\alpha}\sum_{\bb\in A_{\text{bound}}}|u(\tilde x_{\bb})||u(\tilde y_{\bb})|\eta(\tilde y_{\bb})\\\nonumber
&\qquad\leq\,
R^{2-d-\alpha}\left(\sum_{B_{2R}}|u\eta|^{p^*}\right)^{\frac{1}{p^*}}\left(\sum_{B_{2R}}|u|^{q^*}\right)^{\frac{1}{q^*}}\,\leq\,
R^{2-d-\alpha}\left(\sum_{B_{2R}}|u\eta|^{p^*}\right)^{\frac{1}{p^*}}\sum_{B_{2R}}|u|\\\label{eq:16}
&\qquad=\,
R^{2+\frac{d}{p^*}-\alpha}\left(R^{-d}\sum_{B_{2R}}|u\eta|^{p^*}\right)^{\frac{1}{p^*}}\left(R^{-d}\sum_{B_{2R}}|u|\right).
\end{align}
From the definition of $\alpha$ and $p^*$ we deduce that the exponent
$2+\frac{d}{p^*}-\alpha$ is negative: indeed, $\alpha-2=\frac{2}{p^*-2}>\frac{d}{p^*}$, since $p<2$ implies $p^*=\frac{dp}{d-p}<\frac{2d}{d-2}$ (with the convention $\frac{2d}{d-2}:=\infty$ for $d=2$). Together with \eqref{eq:L2:021} the desired estimate \eqref{eq:L2:02} follows.
\end{proof}
Now we are ready to prove Proposition~\ref{P1}. We distinguish the
cases $k\geq 1$ and $k=0$.
\begin{proof}[Proof of Proposition~\ref{P1}]
\step 1 Argument for $k\geq 1$.
For brevity set $R:=2^{k-1}R_0$ and recall that $A_k=Q_{2
R}(0)\setminus Q_{R}(0)$. We cover the annulus $A_k$ by boxes $Q_{\frac{R}{2}}(x_0)$, $x_0\in X_R\subset\Z^d$, such that
\begin{equation}\label{eq:4}
A_k\subset\bigcup\limits_{x_0\in X_R}Q_{\frac{R}{2}}(x_0)\subset \bigcup\limits_{x_0\in X_R}Q_{R}(x_0)\subset
Q_{3R}(0)\setminus \{0\}.
\end{equation}
Since the diameter of the annulus and the side length of the boxes
are comparable, we may choose $X_R$ such that its
cardinality is bounded by a constant only depending on $d$.
Since in addition we have for $x_0\in X_R$ the inequality
$C(\aa,Q_R(x_0),\tfrac{p}{2-p})\lesssim C(\aa, Q_{3R}(0),\tfrac{p}{2-p})$ (thanks to the
third inclusion in \eqref{eq:4}), it suffices to prove
\begin{equation*}
\left(R^{-d}\sum_{\bb\in Q_{\frac{R}{2}}(x_0)}|\nabla
G_T(\aa,\bb,0)|^p\right)^{\frac{1}{p}}\lesssim
C^{\frac{\beta}{2}}\,R^{1-d},\qquad\text{where } C:=C(\aa,
Q_{R}(x_0),\tfrac{p}{2-p}),
\end{equation*}
for each $x_0\in X_R$ separately.
We use the shorthand $G_T(x):=
G_T(\aa,x,0)$ and set $\bar G_T:=\frac{1}{|B_R(x_0)|}\sum_{x\in
B_R(x_0)}G_T(x)$. In view of \eqref{eq:D:G}, $u(x):=G_T(x)-\bar
G_T$ satisfies \eqref{L:P1:2-1} with $f=\delta-\frac{1}{T}G_T$.
Since
\begin{equation}\label{eq:17}
\sum_{\Z^d}|\delta-\frac{1}{T}G_T|\leq 1+\frac{1}{T}\sum_{\Z^d}G_T(x)=2,
\end{equation}
Lemma~\ref{L:P1:2} yields
\begin{equation}\label{eq:P1:1}
R^{-d}\sum_{B_{R}(x_0)}|u|\lesssim C^{\frac{1}{2}p^*}\,R^{2-d}.
\end{equation}
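Here we also used the normalization $\frac{1}{T}\sum_{x\in\Z^d}G_T(x)=1$, which enters \eqref{eq:17}; it follows by summing the defining equation \eqref{eq:D:G} over $x\in\Z^d$:
\begin{equation*}
\frac{1}{T}\sum_{x\in\Z^d}G_T(x)
\stackrel{\eqref{eq:D:G}}{=}\sum_{x\in\Z^d}\delta(x)-\sum_{x\in\Z^d}\big(\nabla^*\aa\nabla G_T(\cdot,0)\big)(x)=1-0=1,
\end{equation*}
where the second sum vanishes since $G_T(\cdot,0)$, and hence $\aa\nabla G_T(\cdot,0)$, is summable.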
Thanks to the third inclusion in \eqref{eq:4} we have $0\notin
B_R(x_0)$, and thus $u$ satisfies \eqref{eq:L2:1} with
$f=\frac{1}{T}G_T$ (with $B_{2R}(x_0)$ replaced by $B_{R}(x_0)$).
Hence, Lemma~\ref{L:P1:2b} yields
\begin{align}\label{eq:lemma2app}
\begin{split}
\left(R^{p-d}\sum_{ Q_{\frac{R}{2}}(x_0)}|\nabla G_T|^p\right)^{\frac{1}{p}}&=\left(R^{p-d}\sum_{ Q_{\frac{R}{2}}(x_0)}|\nabla u|^p\right)^{\frac{1}{p}}\\
&\lesssim C^{\frac{1}{2}\alpha}\,
R^{-d}\sum_{B_{R}(x_0)}
|u|\,+\,C^{\frac{1}{2}\alpha}\,\left(R^{2-d}\sum_{B_R(x_0)}
\frac{1}{T}G_Tu_-\right)^{\frac{1}{2}}\\
&\stackrel{\eqref{eq:P1:1}}{\lesssim} C^{\frac{1}{2}(\alpha+p^*)}\,R^{2-d}\,+\,C^{\frac{1}{2}\alpha}\,\left(R^{2-d}\sum_{B_R(x_0)}
\frac{1}{T}G_Tu_-\right)^{\frac{1}{2}}.
\end{split}
\end{align}
Regarding the second term on the right-hand side we only need to
show \begin{equation}\label{eq:P1:2}
\frac{1}{T}\sum_{B_R(x_0)} G_Tu_-\lesssim C^{p^*} R^{2-d}.
\end{equation}
We note that
$(G_T-\bar G_T)(G_T-\bar G_T)_-\leq 0$, so that
\begin{eqnarray*}
\frac{1}{T}\sum_{B_R(x_0)} G_T u_-
&=&\frac{1}{T}\sum_{B_R(x_0)}(G_T-\bar G_T+\bar G_T)(G_T-\bar G_T)_-
\leq \frac{1}{T}\bar G_T\sum_{B_{R}(x_0)}|G_T-\bar G_T|.
\end{eqnarray*}
Combined with \eqref{eq:P1:1} and the inequality $\frac{1}{T}\bar G_T\lesssim R^{-d}\frac{1}{T}\sum_{B_{R}(x_0)}G_T\leq R^{-d}$,
\eqref{eq:P1:2} follows.
\medskip
\step 2 Argument for $k=0$.
Fix $\aa\in\Omega$. For brevity set $G_T(x):=G_T(\aa,x,0)$ and $\bar G_T:=\frac{1}{|B_{2R_0}(0)|}\sum_{x\in
B_{2R_0}(0)}G_T(x)$. By the discrete
$\ell^1$-$\ell^{p}$-estimate and the elementary inequality
$|\nabla G_T(\bb)|\leq |G_T(x_{\bb})-\bar G_T|+|G_T(y_{\bb})-\bar
G_T|$ we have
\begin{equation*}
\left(\frac{1}{|Q_{R_0}(0)|}\sum_{\bb\in Q_{R_0}(0)}|\nabla
G_T(\bb)|^p\right)^{\frac{1}{p}}\lesssim \sum_{ B_{2R_0}(0)}|G_T-\bar G_T|.
\end{equation*}
As in Step~1, an application of Lemma~\ref{L:P1:2} yields
\begin{equation*}
\sum_{B_{2R_0}(0)}|G_T-\bar
G_T|\lesssim C^{\frac{p^*}{2}}(\aa,Q_{2R_0}(0),\tfrac{p}{2-p})
\,R_0^{2}.
\end{equation*}
Since $R_0$ is a fixed constant we have $R_0^{2}\sim R_0^{1-d}$, and because the exponent of the
constant satisfies $\frac{p^*}{2}\leq\frac{\beta}{2}$, the desired
estimate follows.
\end{proof}
\subsection{Proof of Lemma~\ref{L:CEP}}
In order to deal with the failure of the Leibniz rule we will appeal to a number of discrete
estimates, which are stated in Lemma~\ref{lem:1} below. As already mentioned, we
replace the missing uniform ellipticity of $\aa$ by the coercivity
estimate of Lemma~\ref{lem:coercivity-prob} which makes use of the
weight $\omega$ defined in \eqref{D:omega}. Morally speaking, it plays the
role of $\frac{1}{\lambda_0}$ in \eqref{eq:29}. In view of Assumption (A2), all
moments of $\omega$ are bounded, i.e.\ $\expec{\omega^{k}}\lesssim 1$, where $\lesssim$ means $\leq$ up to a constant that
only depends on $k$, $p$, $\Lambda$ and $d$.
We split the proof of Lemma~\ref{L:CEP} into the following two inequalities:
\begin{align}
\label{eq:36}
\expec{|\nabla\phi(\bb)|^{2p+1}}^{\frac{2p+2}{2p+1}}\ &\lesssim
\sum_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{|\nabla(\phi^{p+1})(\bb')|^2\aa(\bb')},\\
\label{eq:37}
\sum_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{|\nabla(\phi^{p+1})(\bb')|^2\aa(\bb')}\
&\lesssim \expec{\phi^{2p}(x)}.
\end{align}
Here and below we write $\phi$ instead of $\phi_T$ for simplicity.
Note that due to stationarity the left-hand side of \eqref{eq:36} and
the right-hand side of \eqref{eq:37} do not depend on $\bb\in\B^d$
(resp. $x\in\Z^d$). Therefore, we suppress these arguments in the
following. We start with \eqref{eq:36}. We smuggle in $\omega$
by appealing to H\"older's inequality with exponent
$\frac{2p+2}{2p+1}$ and exploit that all moments of $\omega$ are
bounded by Assumption (A2):
\begin{align*}
\expec{|\nabla\phi|^{2p+1}}^{\frac{2p+2}{2p+1}}
\lesssim\expec{|\nabla\phi|^{2p+2}\omega^{-1}}.
\end{align*}
We combine \eqref{eq:discrete} in the form of
$|\nabla\phi(\bb)|^{2p+2}\lesssim(\frac{\phi^p(x_{\bb})+\phi^p(y_{\bb})}{2})^2|\nabla\phi(\bb)|^2$ (where
we use that $p$ is even) with the discrete version of the Leibniz rule
$F^p\nabla F=\frac{1}{p+1}\nabla(F^{p+1})$, see \eqref{eq:CorLeibniz2} in
Corollary~\ref{C:leibniz} below:
\begin{equation}\label{eq:35}
\expec{|\nabla\phi|^{2p+2}\omega^{-1}}
\lesssim \expec{|\nabla(\phi^{p+1})|^2\omega^{-1}}.
\end{equation}
Now \eqref{eq:36} follows from the coercivity estimate of
Lemma~\ref{lem:coercivity-prob}.
Next we prove \eqref{eq:37}. The discrete version of the Leibniz rule $|\nabla(F^{p+1})|^2=\frac{(p+1)^2}{(2p+1)}\nabla
F\nabla(F^{2p+1})$ (see Lemma~\ref{lem:1} (ii)) yields
\begin{equation*}
\sum_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{|\nabla(\phi^{p+1})(\bb')|^2\aa(\bb')}
\lesssim \sum_{\bb'=\{0,e_i\}\atop i=1,\ldots,d}\expec{\nabla\phi(\bb')\aa(\bb')\nabla(\phi^{2p+1})(\bb')}.
\end{equation*}
By stationarity and the modified corrector equation
\eqref{eq:cor-modified} we have
\begin{eqnarray*}
\lefteqn{\sum_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{\nabla\phi(\bb')\aa(\bb')\nabla(\phi^{2p+1})(\bb')}=\expec{(\nabla^*\aa\nabla\phi)\
\phi^{2p+1}}}&& \\
&=& -\frac{1}{T}\expec{\phi^{2(p+1)}}-\sum_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{\nabla(\phi^{2p+1})(\bb')\aa(\bb')e(\bb')}\\
&\leq & \sum_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{|\nabla(\phi^{2p+1})(\bb')|\aa(\bb')},
\end{eqnarray*}
where for the last inequality we use that $\phi^{2(p+1)}\geq 0$ and
$|e |=1$. By Corollary~\ref{C:leibniz} and Young's inequality we get for any $\epsilon>0$
\begin{eqnarray*}
\sum_{\bb'=\{0,e_i\}\atop i=1,\ldots,d}\expec{|\nabla(\phi^{2p+1})(\bb')|\aa(\bb')}
&\stackrel{\eqref{eq:CorLeibniz1}}{\lesssim}&
\epsilon\sum_{\bb'=\{0,e_i\}\atop i=1,\ldots,d}\expec{|\nabla\phi(\bb')|^2\left(\tfrac{\phi^p(x_{\bb'})+\phi^p(y_{\bb'})}{2}\right)^2\aa(\bb')}
\\
&&+\frac{1}{\epsilon}
\sum_{\bb'=\{0,e_i\}\atop i=1,\ldots,d}\expec{\left(\tfrac{\phi^p(x_{\bb'})+\phi^p(y_{\bb'})}{2}\right)^2}\\
&\stackrel{\eqref{eq:CorLeibniz2}}{\lesssim}&
\epsilon\sum_{\bb'=\{0,e_i\}\atop
i=1,\ldots,d}\expec{|\nabla(\phi^{p+1})(\bb')|^2\aa(\bb')}
+\frac{1}{\epsilon}\expec{\phi^{2p}}.
\end{eqnarray*}
Since we may choose $\epsilon>0$ as small as we wish, the first term
on the right-hand side can be absorbed into the left-hand side of
\eqref{eq:37} and the claim follows.
\section*{Acknowledgments}
We thank Artem Sapozhnikov for stimulating discussions on percolation
models. Stefan Neukamm was
partially supported by ERC-2010-AdG no.267802 AnaMultiScale. Most of
this work was done while all three authors were employed at the
Max-Planck-Institute for Mathematics in the Sciences, Leipzig.
\section{Introduction} \label{sec_Introduction}
Learning to improve AUC performance is an important topic in machine learning, especially for imbalanced datasets \cite{huang2005using,ling2003auc,cortes2004auc}. Specifically, on a severely imbalanced binary classification dataset, a classifier may achieve high prediction accuracy simply by predicting all samples to be the dominant class. However, such a classifier actually has poor generalization performance because it cannot properly classify samples from the non-dominant class. AUC (area under the ROC curve) \cite{hanley1982meaning}, which measures the probability that a randomly drawn positive sample has a higher decision value than a randomly drawn negative sample \cite{mcknight2010mann}, is a better evaluation criterion for imbalanced datasets.
Real-world data tend to be massive in quantity, but often contain unreliable noisy samples that can degrade generalization performance. Many studies have tried to address this, with some degree of success \cite{wu2007robust,safe,zhang2020self}. However, most of these studies only consider the impact of noisy data on accuracy, rather than on AUC. In many cases, AUC maximization algorithms may suffer decreased generalization performance due to noisy data. Thus, how to deal with noisy data in AUC maximization problems is still an open topic.
Since its introduction \cite{kumar2010self}, self-paced learning (SPL) has attracted increasing attention \cite{wan2020self,klink2020self,ghasedi2019balanced} because it mimics the learning principle of humans, \emph{i.e.}, starting with easy samples and gradually introducing more complex samples into training. Complex samples are assumed to have larger losses than easy samples, and noisy samples normally incur relatively large losses. Thus, SPL reduces the importance weights of noisy samples because they are treated as complex samples. Under the SPL paradigm, the model is continually corrected and its robustness is improved, which makes SPL an effective method for handling noisy data. Many experimental and theoretical analyses have confirmed its robustness \cite{meng2017theoretical,liu2018understanding,zhang2020self}. However, existing SPL methods are limited to pointwise learning, whereas AUC maximization is a pairwise learning problem.
To solve this challenging problem, we propose a balanced self-paced AUC maximization algorithm (BSPAUC).
Specifically, we first bound the expected AUC risk by the empirical AUC risk on training samples drawn from the pace distribution, plus two additional terms related to SPL. Inspired by this, we propose our balanced self-paced AUC maximization formulation. In particular, the sub-problem with respect to all weight variables may be non-convex in our formulation, whereas it is normally convex in existing self-paced problems. To handle this difficulty, we propose a doubly cyclic block coordinate descent method to optimize our formulation.
The main contributions of this paper are summarized as follows.
\begin{enumerate}[leftmargin=0.2in]
\setlength{\parsep}{0ex}
\setlength{\topsep}{0ex}
\setlength{\itemsep}{0ex}
\item Inspired by our statistical explanation of self-paced AUC, we propose a balanced self-paced AUC maximization formulation with a novel balanced self-paced regularization term. To the best of our knowledge, this is the first objective formulation introducing SPL into the AUC maximization problem.
\item We propose a doubly cyclic block coordinate descent method to optimize our formulation. Importantly, we give closed-form solutions of the two weight variable blocks and provide two instantiations of optimizing the model parameter block on kernel learning and deep learning, respectively.
\item We prove that the sub-problem with respect to all weight variables converges to a stationary point on the basis of closed-form solutions, and our BSPAUC converges to a stationary point of our fixed optimization objective under a mild assumption.
\end{enumerate}
\section{Self-Paced AUC}\label{sec_stat}
In this section, we first provide a statistical objective for self-paced AUC. Inspired by this, we provide our objective.
\subsection{Statistical Objective}
\textbf{Empirical and Expected AUC Objective for IID Data:}
Let $X$ be a compact subset of $\mathbb{R}^d$, $Y = \{-1, +1\}$ be the label set and $Z = X \times Y$. Given a distribution $P(z)$, let $S=\{z_i=(x_i,y_i)\}_{i=1}^n$ be an independent and identically distributed (IID) training set drawn from $P(z)$, where $x_i \in X$, $y_i \in Y$ and $z_i \in Z$. Then the empirical AUC risk on $S$ can be formulated as:
\begin{align} \label{emp}
R_{emp}(S;f)=\frac{1}{n(n-1)}\sum_{z_i,z_j \in S, z_i \neq z_j} L_f(z_i,z_j).
\end{align}
Here, $f \in \mathcal{F}: \mathbb{R}^d \to \mathbb{R}$ is one real-valued function and the pairwise loss function $L_f(z_i,z_j)$ for AUC is defined as:
\begin{equation*}
L_f(z_i,z_j)=
\left \{\begin{array} {l@{\ \ \textrm{if} \ \ }l} 0 &y_i=y_j
\\ \mathbb{I}(f(x_i) \le f(x_j)) &y_i =+1 \& \ y_j=-1\\
\mathbb{I}(f(x_j) \le f(x_i)) &y_j =+1 \& \ y_i=-1
\end{array} \right.
\end{equation*}
where $\mathbb{I}(\cdot)$ is the indicator function such that
$\mathbb{I}(\pi)$ equals 1 if $\pi$ is true and 0 otherwise. Further, the expected AUC risk for the distribution $P(z)$ can be defined as:
\begin{align}\label{exp}
R_{exp}(P(z);f):=&\mathbb{E}_{z_1,z_2 \sim P(z)^2 } L_f(z_1,z_2) \nonumber \\
=&\mathbb{E}_{S}[R_{emp}(S;f)].
\end{align}
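As a concrete, purely illustrative check of Eq. (\ref{emp}), the following sketch computes the per-pair misordering rate; the function name is ours, and we keep only positive--negative pairs, which carry all nonzero terms of Eq. (\ref{emp}) up to the $n(n-1)$ normalization:

```python
import numpy as np

def empirical_auc_risk(scores, labels):
    """Fraction of (positive, negative) pairs that the scorer orders
    incorrectly, i.e. 1 - AUC; ties count as errors, matching the
    indicator I(f(x_i) <= f(x_j)) in the pairwise loss L_f."""
    pos = scores[labels == 1]
    neg = scores[labels == -1]
    # every nonzero term of the empirical risk comes from a pos/neg pair
    errors = (pos[:, None] <= neg[None, :]).sum()
    return errors / (len(pos) * len(neg))
```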
\noindent \textbf{Compound Data:} \ In practice, it is expensive to collect a completely pure dataset, because that would require domain experts to evaluate the quality of the collected data. Thus, it is reasonable to assume that a real-world training set is composed not only of clean target data but also of a proportion of noisy samples \cite{natarajan2013learning,kang2019robust}. If we denote the distribution of clean target data by $P_{target}(z)$ and that of noisy data by $P_{noise}(z)$, the distribution of the real training data can be formulated as $P_{train}(z)=\alpha P_{target}(z)+(1-\alpha)P_{noise}(z)$, where $\alpha \in [0,1]$ is a weight balancing $P_{target}(z)$ and $P_{noise}(z)$. We illustrate this compound training data in Figure \ref{data}. Note that we assume noisy samples normally incur relatively large losses, and thus they are treated as complex samples in SPL, as discussed previously.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{AUCimage/data.png}
\caption{Data distribution on the degree of complexity.} \label{data}
\end{figure}
\noindent \textbf{Upper Bound of Expected AUC for Compound Data:} \
Gong et al. \cite{why} connect the distribution $P_{train}(z)$ of the training set with the distribution $P_{target}(z)$ of the target set using a weight function $W_{\lambda}(z)$:
\begin{align} \label{OriginalF}
P_{target}(z)=\frac{1}{\alpha_*}W_{\lambda}(z)P_{train}(z),
\end{align}
where $0 \le W_{\lambda}(z) \le 1 $ and $\alpha_*=\int_{Z}W_{\lambda}(z)P_{train}(z)dz$ denotes the normalization factor. Intuitively, $W_{\lambda}(z)$ gives larger weights to easy samples than to complex samples and with the increase of pace parameter $\lambda$, all samples tend to be assigned larger weights.
Then, Eq. (\ref{OriginalF}) can be reformulated as:
\begin{align} \label{NewF}
& P_{train}(z)=\alpha_*P_{target}(z)+(1-\alpha_*)E(z), \\
& E(z)=\frac{1}{1-\alpha_*}(1-W_{\lambda_*}(z))P_{train}(z). \nonumber
\end{align}
Here, $E(z)$ is related to $P_{noise}(z)$. Based on (\ref{NewF}), we define the pace distribution $Q_{\lambda}(z)$ as:
\begin{align} \label{Q}
Q_{\lambda}(z)=\alpha_{\lambda}P_{target}(z)+(1-\alpha_{\lambda})E(z),
\end{align}
where $\alpha_{\lambda}$ varies from $1$ to $\alpha_{*}$ with increasing pace parameter $\lambda$. Correspondingly, $Q_{\lambda}(z)$ simulates the transition from $P_{target}(z)$ to $P_{train}(z)$. Note that $Q_{\lambda}(z)$ can also be written, up to normalization, as:
\begin{align*}
Q_{\lambda}(z) \propto W_{\lambda}(z)P_{train}(z),
\end{align*}
where $W_{\lambda}(z)$ is normalized so that its maximal value equals $1$.
We derive the following result on the upper bound of the expected AUC risk. Please refer to Appendix for the proof.
\begin{theorem} \label{theoremUp}
For any $\delta>0$ and any $ f \in \mathcal{F}$, with confidence at least $1-\delta$ over a training set $S$, we have:
\begin{align} \label{analysis}
&R_{exp}(P_{target};f) \nonumber \\
\leq & \frac{1}{n_{\lambda}(n_{\lambda}-1)} \sum_{z_i,z_j \in S \atop z_i\neq z_j} W_{\lambda}(z_i) W_{\lambda}(z_j) L_f(z_i,z_j) \nonumber\\
& +\sqrt{\frac{\ln (1/\delta)}{n_{\lambda}/2}} + e_{\lambda}
\end{align}
where $n_{\lambda}$ denotes the number of selected samples from the training set and $e_{\lambda}:= R_{exp}(P_{target};f) - R_{exp}(Q_{\lambda};f)$ decreases monotonically from $0$ as $\lambda$ increases.
\end{theorem}
We will give a detailed explanation on the three terms of the upper bound (\ref{analysis}) for Theorem \ref{theoremUp} as follows.
\begin{enumerate}[leftmargin=0.2in]
\item The first term corresponds to the empirical AUC risk on training samples obeying pace distribution $Q_{\lambda}$. With increasing $\lambda$, the weights $W_{\lambda}(z)$ of complex samples gradually increase and these complex samples are gradually involved in training.
\item The second term reflects the expressive capability of the training samples with respect to the pace distribution $Q_{\lambda}$. With increasing $\lambda$, more samples are considered, so the pace distribution $Q_{\lambda}$ is expressed better.
\item The last term measures the generalization capability of the learned model. As shown in Eq. (\ref{Q}), with increasing $\lambda$, $\alpha_{\lambda}$ gets smaller and the generalization of the learned model becomes worse. This is due to the increasingly evident deviation of $Q_{\lambda}$ from $P_{target}$ caused by $E(z)$.
\end{enumerate}
Inspired by the upper bound (\ref{analysis}) and the above explanations, we will propose our self-paced AUC maximization formulation in the next subsection.
\subsection{Optimization Objective}
First, we define some necessary notation. Let $\theta$ represent the model parameters, let $n$ and $m$ denote the numbers of positive and negative samples respectively, let $\mathbf{v} \in [0,1]^n$ and $\mathbf{u} \in [0,1]^m$ be the weights of positive and negative samples respectively, let $\lambda$ be the pace parameter controlling the learning pace, and let $\mu$ balance the proportions of selected positive and negative samples. The zero-one loss is replaced by the pairwise hinge loss, a common surrogate loss in AUC maximization problems \cite{brefeld2005auc,zhao2011online,gao2015consistency}. Then, inspired by the upper bound (\ref{analysis}), we have the following optimization objective:
\begin{align} \label{BSPAUCOF}
& \mathcal{L}(\theta,\mathbf{v},\mathbf{u};\lambda) \nonumber \\
=& \underbrace{\frac{1}{nm} \sum_{i=1}^{n}\sum_{j=1}^{m}v_i u_j \xi_{ij}}_{\mathbf{1}} \underbrace{- \lambda \left(\frac{1}{n}\sum_{i=1}^{n} v_i+\frac{1}{m}\sum_{j=1}^{m} u_j \right)}_{\mathbf{2}} \nonumber \\
& \underbrace{ + \tau \Omega(\theta)}_{\mathbf{3}} \underbrace{+ \mu \left(\frac{1}{n}\sum_{i=1}^{n} v_i-\frac{1}{m}\sum_{j=1}^{m} u_j \right)^2}_{\mathbf{4}} \\
& \ s.t. \ \mathbf{v}\in [0,1]^n,\mathbf{u}\in [0,1]^m \nonumber
\end{align}where $\xi_{ij}=\max \{1-f(x^+_i)+f(x^-_j), 0 \}$ is the pairwise hinge loss and $\Omega(\theta)$ is a regularization term to avoid overfitting. Specifically, in the deep learning setting, $\theta$ is a matrix composed of the weights and biases of each layer and $\Omega(\theta)=\frac{1}{2}||\theta||_F^2$; in the kernel-based setting, $\Omega(\theta)=\frac{1}{2}||\theta||_\mathcal{H}^2$, where $||\cdot||_{\mathcal{H}}$ denotes the norm in a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$.
As the explanation of the upper bound (\ref{analysis}) shows, the bound consists of three aspects: empirical risk, sample expression ability and model generalization ability. Our optimization objective (\ref{BSPAUCOF}) also considers these three aspects. Specifically, term \textbf{1} in Eq. (\ref{BSPAUCOF}) corresponds to the (weighted) empirical risk. Term \textbf{2} corresponds to the sample expression ability; as explained before, sample expression ability is related to the number of selected samples, and the pace parameter $\lambda$ in term \textbf{2} controls this number. Term \textbf{3} corresponds to the model generalization ability and is a common regularization term used to avoid overfitting.
In addition, term \textbf{4} in Eq. (\ref{BSPAUCOF}) is our newly proposed balanced self-paced regularization term, which balances the proportions of selected positive and negative samples, as Figure \ref{BorNot} shows. Specifically, without this term, only the degree of sample complexity is considered, which in practice leads to a severe imbalance between the proportions of selected positive and negative samples. With this term enforced, the proportions of selected positive and negative samples remain properly balanced.
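To make the roles of the four terms concrete, the following sketch (our own illustration, not part of the formal development) evaluates $\mathcal{L}$ of Eq. (\ref{BSPAUCOF}) for a fixed matrix of pairwise hinge losses; the scalar \texttt{reg} stands in for $\tau \Omega(\theta)$:

```python
import numpy as np

def bspauc_objective(xi, v, u, lam, mu, reg=0.0):
    """Value of the objective in Eq. (7) for fixed pairwise losses xi[i, j]."""
    n, m = xi.shape
    empirical = (v @ xi @ u) / (n * m)         # term 1: weighted empirical risk
    pace = -lam * (v.mean() + u.mean())        # term 2: self-paced regularizer
    balance = mu * (v.mean() - u.mean()) ** 2  # term 4: balance regularizer
    return empirical + pace + reg + balance    # reg plays the role of term 3
```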
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{AUCimage/balanced.png}
\caption{Contrast between whether or not using the balanced self-paced regularization term. (The yellow ball represents the positive sample, and the blue ball represents the negative sample. The green dotted ellipsoid represents the selected samples without using the term, and the red solid ellipsoid denotes the selected samples by using the term.)}
\label{BorNot}
\end{figure}
\section{BSPAUC Algorithm}\label{sec_alg}
In this section,
we propose our BSPAUC algorithm (\emph{i.e.}, Algorithm \ref{alg1}) to solve problem (\ref{BSPAUCOF}). Different from traditional SPL algorithms \cite{wan2020self,klink2020self,ghasedi2019balanced}, which have two blocks of variables, our problem (\ref{BSPAUCOF}) has three blocks of variables, which makes the optimization process more challenging. To address this issue, we propose a doubly cyclic block coordinate descent algorithm as shown in Algorithm \ref{alg1}, which consists of two layers of cyclic block coordinate descent. The outer layer (\emph{i.e.}, lines 2-9 of Algorithm \ref{alg1}) follows the general optimization procedure of SPL and alternately optimizes all weight variables and the model parameters. The inner layer (\emph{i.e.}, lines 3-6 of Algorithm \ref{alg1}) alternately optimizes the two blocks of weight variables (\emph{i.e.}, $\mathbf{v}$ and $\mathbf{u}$).
In the following, we structure the discussion around the outer-layer cyclic block coordinate descent procedure and explain how to optimize all weight variables (\emph{i.e.}, $\mathbf{v}$ and $\mathbf{u}$) and the model parameters (\emph{i.e.}, $\theta$), respectively.
\begin{algorithm} [htb]
\caption{Balanced self-paced learning for AUC maximization}
\begin{algorithmic}[1] \label{BSPAUC}
\REQUIRE The training set, $\theta^0$, $T$, $\lambda^0$, $c$, $\lambda_{\infty}$ and $\mu$.\\
\STATE Initialize $\mathbf{v}^0= \mathbf{1}_n$ and $\mathbf{u}^0= \mathbf{1}_m$.
\FOR { $t=1, \cdots ,T$}
\REPEAT
\STATE Update $\mathbf{v}^{t}$ through Eq. (\ref{solutionofv}).
\STATE Update $\mathbf{u}^{t}$ through Eq. (\ref{solutionofu}).
\UNTIL{Converge to a stationary point.}
\STATE Update $\theta^{t}$ through solving (\ref{f}).
\STATE $\lambda^{t}=\min \{ c\lambda^{t-1},\lambda_{\infty} \}$.
\ENDFOR
\ENSURE The model solution $\theta$.\\
\end{algorithmic}
\label{alg1}
\end{algorithm}
\begin{algorithm} [htb]
\caption{Deep learning implementation of solving Eq. (\ref{f})}
\begin{algorithmic}[1] \label{Deeplearning}
\REQUIRE $T,\mathbf{\eta},\tau, \hat{X}^+, \hat{X}^-, \pi,\theta^0$.
\FOR { $i=1, \cdots ,T$}
\STATE Sample $\hat{x}^+_1,...,\hat{x}^+_\pi$ from $\hat{X}^+$.
\STATE Sample $\hat{x}^-_1,...,\hat{x}^-_\pi$ from $\hat{X}^-$.
\STATE Calculate $f({x})$.
\STATE Update $\theta$ by the following formula:
\begin{align*}
\theta = &(1-\eta_i\tau)\theta
-\frac{\eta_i}{\pi}\sum_{j=1}^\pi v_j u_j \frac{ \partial \xi_{jj}} { \partial \theta } . \qquad \qquad
\end{align*}
\ENDFOR
\ENSURE $\theta$.
\end{algorithmic}
\end{algorithm}
\subsection{Optimizing $\mathbf{v}$ and $\mathbf{u}$}
Firstly, we consider the sub-problem with respect to all weight variables, which is normally convex in existing self-paced problems. However, if we fix $\theta$ in Eq. (\ref{BSPAUCOF}), the sub-problem with respect to $\mathbf{v}$ and $\mathbf{u}$ can be non-convex, as shown in Theorem \ref{Non-con}. Please refer to the Appendix for the proof of Theorem \ref{Non-con}.
\begin{theorem} \label{Non-con}
If we fix $\theta$ in Eq. (\ref{BSPAUCOF}), the sub-problem with respect to $\mathbf{v}$ and $\mathbf{u}$ may be non-convex.
\end{theorem}
In order to address the non-convexity of the sub-problem, we further divide all weight variables into two disjoint blocks, \emph{i.e.}, the weights $\mathbf{v}$ of positive samples and the weights $\mathbf{u}$ of negative samples. Note that the sub-problems \emph{w.r.t.} $\mathbf{v}$ and $\mathbf{u}$, respectively, are convex. Thus, we can solve the following two convex sub-problems to update $\mathbf{v}$ and $\mathbf{u}$ alternately:
\begin{align}
\mathbf{v}^{t}= \argmin \limits_{\mathbf{v} \in [0,1]^n} \ \mathcal{L}(\mathbf{v};\theta,\mathbf{u},\lambda) , \label{v} \\
\mathbf{u}^{t}= \argmin \limits_{\mathbf{u} \in [0,1]^m} \ \mathcal{L}(\mathbf{u};\theta,\mathbf{v},\lambda). \label{u}
\end{align}
We derive the closed-form solutions of the optimization problems (\ref{v}) and (\ref{u}), respectively, in the following theorem. Note that the sorted index refers to the index into the loss sequence $\{l_1, l_2, \dots \}$ sorted so that $l_i \leq l_{i+1}$. Please refer to the Appendix for the detailed proof.
\begin{theorem} \label{solutionofvu}
The following formula gives one global optimal solution for problem \eqref{v}:
\begin{equation} \label{solutionofv}
\left \{\begin{array} {ll}
&v_p=1 { \ \ \textrm{if} \ \ } l^+_p < \lambda - 2\mu \left(\frac{p}{n}-\frac{\sum_{j=1}^m u_j}{m} \right)
\\ &v_p=n \left( \frac{\sum_{j=1}^m u_j}{m}-\frac{l^+_p - \lambda}{2 \mu}-\frac{p-1}{n} \right) {\textrm{otherwise} }
\\
&v_p=0 {\ \ \textrm{if} \ \ } l^+_p > \lambda - 2\mu \left(\frac{p-1}{n}-\frac{\sum_{j=1}^m u_j}{m} \right)
\end{array} \right.
\end{equation}
where $p \in \{1,...,n\}$ is the sorted index based on the loss values $l^+_p=\frac{1}{m}\sum_{j=1}^m u_j\xi_{pj}$. \\
The following formula gives one global optimal solution for problem \eqref{u}:
\begin{equation} \label{solutionofu}
\left \{\begin{array} {ll}
&u_q=1 {\ \ \textrm{if} \ \ } l^-_q < \lambda - 2\mu \left(\frac{q}{m}-\frac{\sum_{i=1}^n v_i}{n}\right)
\\
&u_q=m \left(\frac{\sum_{i=1}^n v_i}{n}-\frac{l^-_q - \lambda}{2 \mu}-\frac{q-1}{m}\right) { \textrm{otherwise}}
\\
&u_q=0 {\ \ \textrm{if} \ \ } l^-_q > \lambda - 2\mu \left(\frac{q-1}{m}-\frac{\sum_{i=1}^n v_i}{n}\right)
\end{array} \right.
\end{equation}
where $q \in \{1,...,m\}$ is the sorted index based on the loss values $l^-_q=\frac{1}{n}\sum_{i=1}^n v_i\xi_{iq}$.
\end{theorem}
The solution (\ref{solutionofv}) of problem (\ref{v}) reveals the advantages of our method. Obviously, a sample with a loss greater/less than the threshold $ \lambda - 2\mu (\frac{p-1}{n}-\frac{\sum_{j=1}^m u_j}{m})$ is ignored/involved in the current training. In particular, the threshold is a function of the sorted index and consequently decreases as the sorted index increases; easy samples with smaller losses are thus given more preference. Besides, the proportion $\frac{\sum_{j=1}^m u_j}{m}$ of selected negative samples also affects the threshold: the higher/lower the proportion of selected negative samples is, the more/fewer positive samples will be assigned high weights. Because of this, the proportions of selected positive and negative samples are kept balanced. Moreover, the solution (\ref{solutionofu}) of problem (\ref{u}) yields similar conclusions. In summary, our algorithm not only gives preference to easy samples, but also ensures that the selected positive and negative samples have proper proportions.
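For concreteness, the closed-form update \eqref{solutionofv} can be implemented as the following sketch (our own code; \texttt{l\_pos} holds the weighted losses $l^+_p$ and \texttt{u\_mean} is $\frac{1}{m}\sum_{j=1}^m u_j$):

```python
import numpy as np

def update_v(l_pos, u_mean, lam, mu):
    """Closed-form solution (10) for the positive-sample weights v."""
    n = len(l_pos)
    order = np.argsort(l_pos)              # rank p runs over the sorted losses
    v = np.zeros(n)
    for rank, idx in enumerate(order, start=1):
        lo = lam - 2 * mu * (rank / n - u_mean)        # full-weight threshold
        hi = lam - 2 * mu * ((rank - 1) / n - u_mean)  # zero-weight threshold
        if l_pos[idx] < lo:
            v[idx] = 1.0
        elif l_pos[idx] > hi:
            v[idx] = 0.0
        else:                              # intermediate, fractional weight
            v[idx] = n * (u_mean - (l_pos[idx] - lam) / (2 * mu) - (rank - 1) / n)
    return v
```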
\subsection{Optimizing $\theta$}
In this step, we fix $\mathbf{v},\mathbf{u}$ to update $\theta$:
\begin{equation}
\label{f}
\theta^{t}= \argmin \limits_{ \theta} \frac{1}{nm} \sum_{i=1}^{n}\sum_{j=1}^{m}v_i u_j \xi_{ij}
+\tau \Omega(\theta) + \text{const},
\end{equation}
where $\xi_{ij}=\max \{ 1-f(x^+_i)+f(x^-_j),0 \} $. Obviously, this is a weighted AUC maximization problem. We provide two instantiations of solving this problem in the kernel learning and deep learning settings, respectively.
For the deep learning implementation, we compute the gradient on random pairs of weighted samples, which are selected from the two subsets $\hat{X}^+$ and $\hat{X}^-$, respectively. Here $\hat{X}^+$ is the set of selected positive samples with weights, $\hat{x}^+=(v_i,x_i^+), \forall v_i >0$, and $\hat{X}^-$ is the set of selected negative samples with weights, $\hat{x}^-=(u_j, x_j^-), \forall u_j >0$. In this case, we introduce the weighted batch AUC loss:
\begin{align*}
\sum_{j=1}^\pi v_j u_j \xi_{jj} =\sum_{j=1}^\pi v_j u_j \max \{1-f(x^+_j)+f(x^-_j), 0 \} ,
\end{align*}and obtain Algorithm \ref{Deeplearning} by applying the doubly stochastic gradient descent method (DSGD) \cite{gu2019scalable} \emph{w.r.t.} random pairs of weighted samples, where $\eta$ denotes the learning rate.
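To illustrate the update in Algorithm \ref{Deeplearning}, the following sketch performs one step for a linear scorer $f(x)=\theta^\top x$ on a single weighted pair (a deliberate simplification of the minibatch of size $\pi$; all names are ours):

```python
import numpy as np

def weighted_pair_sgd_step(theta, x_pos, x_neg, v_w, u_w, eta, tau):
    """One weighted-pair step on the hinge loss xi = max(1 - f(x+) + f(x-), 0),
    with the (1 - eta*tau) factor coming from the regularizer tau*Omega."""
    margin = 1.0 - theta @ x_pos + theta @ x_neg
    # subgradient of v*u*xi w.r.t. theta (zero once the margin is met)
    grad = v_w * u_w * (x_neg - x_pos) if margin > 0 else np.zeros_like(theta)
    return (1.0 - eta * tau) * theta - eta * grad
```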
For the kernel-based implementation, we apply the random Fourier feature method to approximate the kernel function \cite{rahimi2008random,dai2014scalable} for large-scale problems. The mapping function of $D$ random features is defined as
\begin{align*}
\phi_{\omega}(x)=\sqrt{1/D}[ &\cos(\omega_1x),\ldots,\cos(\omega_Dx), \\ &\sin(\omega_1x),\ldots,\sin(\omega_Dx)]^T,
\end{align*}where $\omega_i$ is randomly sampled according to the density function $p(\omega)$ associated with $k(x,x')$ \cite{odland2017fourier}. Then, based on the weighted batch AUC loss and the random feature mapping function $\phi_{\omega}(x)$, we obtain Algorithm 3 by applying the triply stochastic gradient descent method (TSGD) \cite{TSAM} \emph{w.r.t.} random pairs of weighted samples and random features which can be found in Appendix.
\section{Theoretical Analysis}
In this section, we prove the convergence of our algorithms; all proof details are available in the Appendix.
For the sake of clarity, we define $\mathcal{K}(\mathbf{v},\mathbf{u})=\mathcal{L}(\mathbf{v},\mathbf{u};\theta,\lambda)$ as the sub-problem of (\ref{BSPAUCOF}) where $\theta$ and $\lambda$ are fixed, and then prove that $\mathcal{K}$ converges to a stationary point based on the closed-form solutions (\emph{i.e.}, Theorem \ref{solutionofvu}).
\begin{theorem} \label{theormKstation}
With the inner layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 3-6 of Algorithm \ref{alg1}), the sub-problem $\mathcal{K}$ with respect to all weight variables converges to a stationary point.
\end{theorem}
Next, we prove that our BSPAUC converges along with the increase of hyper-parameter $\lambda$ under a mild assumption.
\begin{theorem} \label{Converge}
If Algorithm \ref{Deeplearning} or Algorithm 3 (in Appendix) optimizes $\theta$ such that $\mathcal{L}(\theta^{t+1};\mathbf{v}^{t+1},\mathbf{u}^{t+1},\lambda^{t}) \le \mathcal{L}(\theta^{t};\mathbf{v}^{t+1},\mathbf{u}^{t+1},\lambda^{t})$, BSPAUC converges along with the increase of hyper-parameter $\lambda$.
\end{theorem}
\begin{remark}
Whether the sub-problem (\ref{f}) is convex or not, it is a basic requirement for a solver (e.g., Algorithm \ref{Deeplearning} and Algorithm 3 in the Appendix) that the solution $\theta^{t+1}$ satisfies:
\begin{align*}
\mathcal{L}(\theta^{t+1};\mathbf{v}^{t+1},\mathbf{u}^{t+1},\lambda^{t}) &\le \mathcal{L}(\theta^{t};\mathbf{v}^{t+1},\mathbf{u}^{t+1},\lambda^{t}).
\end{align*}
Thus, our BSPAUC converges along with the increase of the hyper-parameter $\lambda$.
\end{remark}
When the hyper-parameter $\lambda$ reaches its maximum $\lambda_{\infty}$, our BSPAUC converges to a stationary point of $\mathcal{L}(\theta,\mathbf{v},\mathbf{u};\lambda_{\infty})$, provided the iteration number $T$ is large enough.
\begin{theorem} \label{ConvergeToStan}
If Algorithm \ref{Deeplearning} or Algorithm 3 (in the Appendix) optimizes $\theta$ such that $\mathcal{L}(\theta^{t+1};\mathbf{v}^{t+1},\mathbf{u}^{t+1},\lambda^{t}) \le \mathcal{L}(\theta^{t};\mathbf{v}^{t+1},\mathbf{u}^{t+1},\lambda^{t})$, and $\lambda$ reaches its maximum $\lambda_{\infty}$, then our BSPAUC converges to a stationary point of $\mathcal{L}(\theta,\mathbf{v},\mathbf{u};\lambda_{\infty})$, provided the iteration number $T$ is large enough.
\end{theorem}
\begin{table}[H]
\centering
\begin{tabular}{c|c|c|c}
\hline
$\mathbf{Dataset}$ & $\mathbf{Size}$ & $\mathbf{Dimensions}$ & $\mathbf{N_- \backslash N_+}$ \\ \hline
sector & 9,619 & 55,197 & 95.18 \\
rcv1 & 20,242 & 47,236 & 1.07 \\
a9a & 32,561 & 123 & 3.15 \\
shuttle & 43,500 & 9 & 328.54 \\
aloi & 108,000 & 128 & 999.00 \\
skin$\_$nonskin & 245,057 & 3 & 3.81 \\
cod-rna & 331,152 & 8 & 2.00 \\
poker & 1,000,000 & 10 & 701.24 \\ \hline
\end{tabular}
\caption{Datasets. ($N_{-}$ means the number of negative samples and $N_{+}$ means the number of positive samples.)} \label{Datasets}
\end{table}
\section{Experiments}
In this section, we first describe the experimental setup, and then provide our experimental results and discussion.
\begin{table*}[htb]
\centering
\setlength{\tabcolsep}{4.5pt}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Datasets} & \multicolumn{4}{c|}{Non Deep Learning Methods} & \multicolumn{3}{c}{Deep Learning Methods} \\ \cline{2-8}
& \textbf{BSPAUC} & \textbf{TSAM} & \textbf{KOIL}$_{FIFO++}$ & \textbf{OPAUC} & \textbf{BSPAUC} & \textbf{DSAM} & \textbf{PPD}$_{SG}$ \\ \hline
sector & \textbf{0.991$\pm$0.005} & 0.986$\pm$0.005 & 0.953$\pm$0.014 & 0.971$\pm$0.018 & \textbf{0.991$\pm$0.002} & 0.978$\pm$0.007 & 0.935$\pm$0.008 \\
rcv1 & \textbf{0.979$\pm$0.001} & 0.970$\pm$0.001 & 0.913$\pm$0.018 & 0.966$\pm$0.012 & \textbf{0.993$\pm$0.001} & 0.990$\pm$0.001 & 0.988$\pm$0.001 \\
a9a & \textbf{0.926$\pm$0.008} & 0.904$\pm$0.015 & 0.858$\pm$0.020 & 0.869$\pm$0.013 & \textbf{0.927$\pm$0.005} & 0.908$\pm$0.003 & 0.906$\pm$0.001 \\
shuttle & \textbf{0.978$\pm$0.002} & 0.970$\pm$0.004 & 0.948$\pm$0.010 & 0.684$\pm$0.036 & \textbf{0.994$\pm$0.003} & 0.989$\pm$0.004 & --- \\
aloi & \textbf{0.999$\pm$0.001} & \textbf{0.999$\pm$0.001} & \textbf{0.999$\pm$0.001} & 0.998$\pm$0.001 & \textbf{0.999$\pm$0.001} & \textbf{0.999$\pm$0.001} & --- \\
skin$\_$nonskin & \textbf{0.958$\pm$0.004} & 0.946$\pm$0.004 & --- & 0.943$\pm$0.007 & \textbf{0.999$\pm$0.001} & \textbf{0.999$\pm$0.001} & 0.949$\pm$0.001 \\
cod-rna & \textbf{0.973$\pm$0.006} & 0.966$\pm$0.010 & --- & 0.924$\pm$0.024 & \textbf{0.994$\pm$0.001} & 0.992$\pm$0.001 & 0.988$\pm$0.001 \\
poker & \textbf{0.934$\pm$0.013} & 0.901$\pm$0.021 & --- & 0.662$\pm$0.025 & \textbf{0.990$\pm$0.004} & 0.976$\pm$0.015 & --- \\ \hline
\end{tabular}
\caption{Mean AUC results with the corresponding standard deviation on original benchmark datasets. ('–' means out of memory or unable to handle severely imbalanced datasets.)} \label{AUCR}
\end{table*}
\begin{table*}[htb]
\centering
\begin{tabular}{c|ccc|ccc|ccc|ccc}
\hline
Datasets & \multicolumn{3}{c|}{rcv1} & \multicolumn{3}{c|}{a9a} & \multicolumn{3}{c|}{skin\_nonskin} & \multicolumn{3}{c}{cod-rna} \\ \hline
FP & $10\%$ & $20\%$ & $30\%$ & $10\%$ & $20\%$ & $30\%$ & $10\%$ & $20\%$ & $30\%$ & $10\%$ & $20\%$ & $30\%$ \\ \hline
\textbf{OPAUC} & 0.958 & 0.933 & 0.845 & 0.824 & 0.804 & 0.778 & 0.925 & 0.884 & 0.815 & 0.902 & 0.864 & 0.783 \\
\textbf{KOIL}$_{FIFO++}$ & 0.901 & 0.889 & 0.804 & 0.836 & 0.806 & 0.726 & --- & --- & --- & --- & --- & --- \\
\textbf{TSAM} & 0.961 & 0.946 & 0.838 & 0.877 & 0.846 & 0.752 & 0.937 & 0.913 & 0.842 & 0.933 & 0.880 & 0.808 \\
\textbf{PDD}$_{SG}$ & 0.964 & 0.936 & 0.855 & 0.881 & 0.849 & 0.739 & 0.940 & 0.912 & 0.852 & 0.937 & 0.873 & 0.788 \\
\textbf{DSAM} & 0.983 & 0.962 & 0.862 & 0.886 & 0.837 & 0.811 & 0.961 & 0.917 & 0.819 & 0.975 & 0.922 & 0.781 \\
\textbf{BSPAUC} & $\textbf{0.991}$ & $\textbf{0.985}$ & $\textbf{0.945}$ & $\textbf{0.914}$ & $\textbf{0.894}$ & $\textbf{0.883}$ & $\textbf{0.979}$ & $\textbf{0.944}$ & $\textbf{0.912}$ & $\textbf{0.990}$ & $\textbf{0.956}$ & $\textbf{0.874}$ \\ \hline
PP & $10\%$ & $20\%$ & $30\%$ & $10\%$ & $20\%$ & $30\%$ & $10\%$ & $20\%$ & $30\%$ & $10\%$ & $20\%$ & $30\%$ \\ \hline
\textbf{OPAUC} & 0.923 & 0.863 & 0.803 & 0.833 & 0.816 & 0.797 & 0.927 & 0.880 & 0.856 & 0.914 & 0.893 & 0.858 \\
\textbf{KOIL}$_{FIFO++}$ & 0.891 & 0.838 & 0.793 & 0.831 & 0.823 & 0.806 & --- & --- & --- & --- & --- & --- \\
\textbf{TSAM} & 0.930 & 0.859 & 0.809 & 0.872 & 0.849 & 0.838 & 0.933 & 0.911 & 0.877 & 0.953 & 0.903 & 0.886 \\
\textbf{PDD}$_{SG}$ & 0.934 & 0.918 & 0.828 & 0.885 & 0.873 & 0.839 & 0.935 & 0.927 & 0.898 & 0.972 & 0.941 & 0.881 \\
\textbf{DSAM} & 0.902 & 0.845 & 0.757 & 0.881 & 0.852 & 0.843 & 0.980 & 0.954 & 0.902 & 0.973 & 0.938 & 0.913 \\
\textbf{BSPAUC} & $\textbf{0.955}$ & $\textbf{0.937}$ & $\textbf{0.876}$ & $\textbf{0.911}$ & $\textbf{0.907}$ & $\textbf{0.896}$ & $\textbf{0.995}$ & $\textbf{0.982}$ & $\textbf{0.965}$ & $\textbf{0.991}$ & $\textbf{0.973}$ & $\textbf{0.953}$ \\ \hline
\end{tabular}
\caption{Mean AUC results on noisy datasets. The corresponding standard deviations can be found in Appendix. (FP means the proportion of noise samples constructed by flipping labels, PP denotes the proportion of injected poison samples and '–' means out of memory.)} \label{AUCnoise}
\end{table*}
\begin{figure*}[htb]
\centering
\captionsetup[subfigure]{aboveskip=1pt,belowskip=-2.3pt}
\subfigure[rcv1]{
\centering
\includegraphics[width=1.6in]{AUCimage/rcv1lambda.eps}
}
\subfigure[a9a]{
\centering
\includegraphics[width=1.6in]{AUCimage/a9alambda.eps}
}
\subfigure[skin\_nonskin]{
\centering
\includegraphics[width=1.6in]{AUCimage/skin_nonskinlambda.eps}
}
\subfigure[cod-rna]{
\centering
\includegraphics[width=1.6in]{AUCimage/cod-rnalambda.eps}
}
\centering
\caption{AUC results with different values of $\nu$ and $\lambda_{\infty}$ on datasets with 20\% injected poison samples. (Missing results are due to only positive or only negative samples being selected.)} \label{AUCParameter}
\end{figure*}
\begin{figure*}[htb]
\centering
\captionsetup[subfigure]{aboveskip=1pt,belowskip=-2.3pt}
\subfigure[rcv1]{
\centering
\includegraphics[width=1.6in]{AUCimage/rcv1balance.eps}
}
\subfigure[a9a]{
\centering
\includegraphics[width=1.6in]{AUCimage/a9abalance.eps}
}
\subfigure[skin\_nonskin]{
\centering
\includegraphics[width=1.6in]{AUCimage/skin_nonskinbalance.eps}
}
\subfigure[cod-rna]{
\centering
\includegraphics[width=1.6in]{AUCimage/cod-rnabalance.eps}
}
\centering
\caption{The results of absolute proportion difference (APD) with different values of $\nu$ and $\lambda_{\infty}$ on datasets with 20\% injected poison samples.} \label{APDParameter}
\end{figure*}
\subsection{Experimental Setup} \label{subsec_setup}
\noindent \textbf{Design of Experiments:} \
To demonstrate the advantage of our BSPAUC for handling noisy data, we compare our BSPAUC with some state-of-the-art AUC maximization methods on a variety of benchmark datasets with/without artificial noisy data.
Specifically, the compared algorithms are summarized as follows.
\\
$\mathbf{TSAM }$: A kernel-based algorithm which updates the solution based on triply stochastic gradient descents \emph{w.r.t.} random pairwise loss and random features \cite{TSAM}. \\
$\mathbf{DSAM }$: A modified deep learning algorithm which updates the solution based on the doubly stochastic gradient descents \emph{w.r.t.} random pairwise loss \cite{gu2019scalable}. \\
\textbf{KOIL$_{FIFO++}$} \footnote{KOIL$_{FIFO++}$ is available at \url{https://github.com/JunjieHu}.}: A kernelized online imbalanced learning algorithm which directly
maximizes the AUC objective with fixed budgets for the positive and negative classes \cite{KOIL}.\\
$\mathbf{PPD}_{SG}$ \footnote{PPD$_{SG}$ is available at \url{https://github.com/yzhuoning}.}: A deep learning algorithm that builds on the saddle point reformulation and explores Polyak-\L ojasiewicz condition \cite{PPD}.\\
$\mathbf{OPAUC}$ \footnote{OPAUC is available at \url{http://lamda.nju.edu.cn/files}.} : A linear method based on a regression algorithm, which only needs to maintain the first and second order statistics of data in memory \cite{OPAUC}.
In addition, we design experiments to analyze the roles of the hyper-parameters $\mu$ and $\lambda_{\infty}$. Note that we introduce a new variable $\nu$ to express the value of $\mu$, \emph{i.e.}, $\mu=\nu \lambda_{\infty}$, and a new indicator called the absolute proportion difference (APD):
$\text{APD}= \left |\frac{1}{n}\sum_{i=1}^n v_i - \frac{1}{m}\sum_{j=1}^m u_j \right |.$
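As a concrete illustration, APD can be computed directly from the weight vectors $\mathbf{v}$ and $\mathbf{u}$ (a minimal sketch; the weight values below are made up):

```python
import numpy as np

def apd(v, u):
    """Absolute proportion difference between the mean positive-sample
    weight and the mean negative-sample weight."""
    return abs(np.mean(v) - np.mean(u))

# Hypothetical weights for 4 positive and 5 negative samples.
v = np.array([1.0, 1.0, 0.0, 1.0])       # weights v_i of positive samples
u = np.array([1.0, 0.0, 0.0, 1.0, 0.0])  # weights u_j of negative samples
print(apd(v, u))  # |3/4 - 2/5| = 0.35
```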
\iffalse
\begin{figure*}[htb]
\centering
\captionsetup[subfigure]{aboveskip=1pt,belowskip=-2.3pt}
\subfigure[sector]{
\centering
\includegraphics[width=1.43in]{AUCimage/deepsector.eps}
}
\subfigure[rcv1]{
\centering
\includegraphics[width=1.5in]{AUCimage/deeprcv1.eps}
}
\subfigure[a9a]{
\centering
\includegraphics[width=1.5in]{AUCimage/deepa9a.eps}
}
\subfigure[shuttle]{
\centering
\includegraphics[width=1.5in]{AUCimage/deepshuttle.eps}
}
\\
\subfigure[aloi]{
\centering
\includegraphics[width=1.5in]{AUCimage/deepaloi.eps}
}
\subfigure[skin$\_$nonskin]{
\centering
\includegraphics[width=1.5in]{AUCimage/deepskin_nonskin.eps}
}
\subfigure[cod-rna]{
\centering
\includegraphics[width=1.5in]{AUCimage/deepcod-rna.eps}
}
\subfigure[poker]{
\centering
\includegraphics[width=1.5in]{AUCimage/deeppoker.eps}
}
\centering
\caption{AUC v.s. number of iterations. (The missing curves are due to the inability to handle severely imbalanced datasets.)} \label{AUCIter}
\end{figure*}
\fi
\noindent \textbf{Datasets:} \
The benchmark datasets are obtained from the LIBSVM repository\footnote{Datasets are available at \url{https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets}.}, covering different dimensions and imbalance ratios, as summarized in Table \ref{Datasets}. The features have been scaled to $[-1,1]$ for all datasets, and the multiclass classification datasets have been transformed into class-imbalanced binary classification datasets. Specifically, we denote one class as the positive class and the remaining classes as the negative class. We randomly partition each dataset into $75\%$ for training and $25\%$ for testing.
To test the robustness of all methods, we construct two types of artificial noisy datasets. The first method turns normal samples into noise samples by flipping their labels \cite{frenay2013classification,ghosh2017robust}. Specifically, we first use the training set to obtain a discriminant hyperplane, and then stochastically select samples far away from this hyperplane and flip their labels.
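A minimal sketch of this label-flipping construction (the least-squares hyperplane and the "far half" selection rule are our own illustrative stand-ins for the trained discriminant described above):

```python
import numpy as np

def flip_far_samples(X, y, flip_ratio=0.2, rng=None):
    """Fit a rough discriminant hyperplane, then flip the labels of a
    random subset of the samples lying far from it."""
    rng = np.random.default_rng(rng)
    # Least-squares hyperplane w as a stand-in for a trained classifier.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    dist = np.abs(Xb @ w)                 # unnormalized distance to hyperplane
    far = np.argsort(dist)[len(X) // 2:]  # the half farthest from the plane
    n_flip = int(flip_ratio * len(X))
    picked = rng.choice(far, size=n_flip, replace=False)
    y_noisy = y.copy()
    y_noisy[picked] = -y_noisy[picked]    # flip +1 <-> -1
    return y_noisy, picked
```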
Another method is to inject poison samples.
Specifically, we generate poison samples for each dataset according to
the poisoning attack method\footnote{The poisoning attack method is available at \url{https://github.com/Trusted-AI/adversarial-robustness-toolbox}.} \cite{poison}, and inject these poison samples into the training set to form a noisy dataset.
We conduct experiments with different noise proportions (from 10\% to 30\%).
\noindent \textbf{Implementation:} \
All the experiments are conducted on a PC with 48 2.2GHz cores, 80GB RAM and 4 Nvidia 1080ti GPUs, and all the results are averaged over 10 trials. We implement both the deep learning and the kernel-based versions of our BSPAUC in Python, and we also implement TSAM and the modified DSAM in Python. We use the open-source codes of KOIL$_{FIFO++}$, PPD$_{SG}$ and OPAUC provided by their authors.
For the OPAUC algorithm on high-dimensional datasets (feature size larger than $1000$), we use the low-rank version and set the rank parameter to $50$. For all kernel-based methods, we use the Gaussian kernel $k(x,x')=\exp{( - \frac{ ||x-x'||^2}{2\sigma^2}) }$ and tune its hyper-parameter $\sigma \in 2^{[-5,5]}$ by 5-fold cross-validation. For the TSAM method and our Algorithm 3 (in Appendix), the number of random Fourier features is selected from $[500 : 500 : 4000]$. For the KOIL$_{FIFO++}$ method, the buffer size is set to $100$ for each class. For all deep learning methods, we utilize the same network structure, which consists of eight fully connected layers and uses the ReLU activation function.
For the PPD$_{SG}$ method, the initial stage is tuned from $200$ to $2000$. For our BSPAUC, the hyper-parameters are chosen according to the proportion of selected samples. Specifically, we start training with about $50\%$ of the samples, and then linearly increase $\lambda$ to include more samples. Note that all algorithms share the same pre-training: we select a small number of samples for training and take the trained model as the initial state of the experiments.
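The warm-up schedule above can be sketched as follows (a simplified hard-weighting view: a sample is selected when its loss falls below $\lambda$; initializing $\lambda$ at the median loss so that about $50\%$ of the samples are selected is our assumption):

```python
import numpy as np

def lambda_schedule(losses, start_frac=0.5, epochs=10):
    """Start from the lambda admitting about `start_frac` of the samples
    (hard SPL weight: v_i = 1 iff loss_i < lambda), then grow lambda
    linearly until every sample is included."""
    lam0 = np.quantile(losses, start_frac)
    lam_inf = losses.max() + 1e-8
    return np.linspace(lam0, lam_inf, epochs)

losses = np.array([0.1, 0.2, 0.3, 0.4, 1.0, 1.5])
for lam in lambda_schedule(losses, epochs=3):
    print(lam, int((losses < lam).sum()))  # number of selected samples grows
```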
\subsection{Results and Discussion}
First of all, we explain the missing entries in Tables \ref{AUCR} and \ref{AUCnoise}. KOIL$_{FIFO++}$ is a kernel-based method that needs to compute and store the kernel matrix; for large datasets, this can run out of memory. PPD$_{SG}$ does not account for severe class imbalance and produces only negligible updates when the stochastic batches are severely imbalanced. When these issues arise, the two algorithms cannot run as usual, so we cannot provide the corresponding results.
Table \ref{AUCR} presents the mean AUC results with the corresponding standard deviations of all algorithms on the original benchmark datasets. The results show that, due to the SPL used in our BSPAUC, the deep learning and kernel-based implementations of our BSPAUC outperform DSAM and TSAM, which are the non-self-paced versions of our implementations. At the same time, our BSPAUC also obtains better AUC results than the other existing state-of-the-art AUC maximization methods (\emph{i.e.}, OPAUC and PPD$_{SG}$).
Table \ref{AUCnoise} shows the performance of the deep learning implementation of our BSPAUC and the compared methods on the two types of noisy datasets. The results clearly show that our BSPAUC achieves the best performance on all noisy datasets. Specifically, the larger the proportion of noisy data, the more obvious the advantage. Our BSPAUC excludes the noise samples from training by giving them zero weights and thus has better robustness.
Figures \ref{AUCParameter} and \ref{APDParameter} show the results of the deep learning implementation of BSPAUC with different hyper-parameter values. Figure \ref{AUCParameter} clearly reveals that as $\lambda_{\infty}$ increases, the AUC first increases and then gradually decreases. This phenomenon is expected. When $\lambda_{\infty}$ is small, increasing $\lambda_{\infty}$
causes more easy samples to join the training and thus improves the generalization of the model. However, when $\lambda_{\infty}$ is large enough, complex (noise) samples start being included and the AUC then decreases. Moreover, Figure \ref{APDParameter} directly reflects that with the increase of $\mu$, which can be calculated by $\mu=\nu \lambda_{\infty}$, the proportions of selected positive and negative samples gradually approach each other.
Furthermore, combining the above two figures, we observe that an excessively large APD often leads to a low AUC. A large APD usually implies that some easy samples in the class with the lower proportion of selected samples do not join the training, while some complex samples in the other class, with the higher proportion, are selected. This reduces the generalization ability and thus causes a low AUC. Importantly, our balanced self-paced regularization term is proposed precisely for this issue.
\section{Conclusion}
In this paper, we first provide a statistical explanation to self-paced AUC. Inspired by this, we propose our self-paced AUC maximization formulation with a novel balanced self-paced regularization term. Then we propose a doubly cyclic block coordinate descent algorithm (\emph{i.e.}, BSPAUC) to optimize our objective function.
Importantly, we prove that the sub-problem with respect to all weight variables converges to a stationary point on the basis of closed-form solutions, and our BSPAUC converges to a stationary point of our fixed optimization objective under a mild assumption. The experimental results demonstrate that our BSPAUC outperforms existing state-of-the-art AUC maximization methods and has better robustness.
\section{Acknowledgments}
Bin Gu was partially supported by the National Natural Science Foundation of China (No:61573191).
\section{Appendix A. Table \ref{AUCR} with standard deviation}
Mean AUC results with the corresponding standard deviations on the noisy datasets are shown in Table \ref{AUCR}. The results clearly show that our BSPAUC achieves the best performance on all noisy datasets. Specifically, the larger the proportion of noisy data, the more obvious the advantage. Our BSPAUC excludes the noise samples from training by giving them zero weights and thus has better robustness.
\setcounter{table}{2}
\begin{table*} [htb] \small
\centering
\caption{\small{Mean AUC results with the corresponding standard deviation on noisy datasets. (FP means the proportion of noise samples constructed by flipping labels, PP denotes the proportion of injected poison samples and '–' means out of memory.)}} \label{AUCR}
\begin{tabular}{c|ccc|ccc|}
\hline
Datasets & \multicolumn{3}{c|}{rcv1} & \multicolumn{3}{c|}{a9a} \\ \hline
FP & \multicolumn{1}{c|}{$10\%$} & \multicolumn{1}{c|}{$20\%$} & $30\%$ & \multicolumn{1}{c|}{$10\%$} & \multicolumn{1}{c|}{$20\%$} & $30\%$ \\ \hline
\textbf{OPAUC} & 0.958$\pm$0.015 & 0.933$\pm$0.025 & 0.845$\pm$0.023 & 0.824$\pm$0.009 & 0.804$\pm$0.013 & 0.778$\pm$0.016 \\
\textbf{KOIL}$_{FIFO++}$ & 0.901$\pm$0.026 & 0.889$\pm$0.031 & 0.804$\pm$0.042 & 0.836$\pm$0.015 & 0.806$\pm$0.024 & 0.726$\pm$0.033 \\
\textbf{TSAM} & 0.961$\pm$0.002 & 0.946$\pm$0.015 & 0.838$\pm$0.024 & 0.877$\pm$0.012 & 0.846$\pm$0.015 & 0.752$\pm$0.018 \\
\textbf{PDD}$_{SG}$ & 0.964$\pm$0.002 & 0.936$\pm$0.007 & 0.855$\pm$0.017 & 0.881$\pm$0.002 & 0.849$\pm$0.002 & 0.739$\pm$0.006 \\
\textbf{DSAM} & 0.983$\pm$0.005 & 0.962$\pm$0.011 & 0.862$\pm$0.034 & 0.886$\pm$0.003 & 0.837$\pm$0.007 & 0.811$\pm$0.006 \\
\textbf{BSPAUC} & \textbf{0.991$\pm$0.002} & \textbf{0.985$\pm$0.009} & \textbf{0.945$\pm$0.009} & \textbf{0.914$\pm$0.006} & \textbf{0.894$\pm$0.012} & \textbf{0.883$\pm$0.010} \\ \hline
Datasets & \multicolumn{3}{c|}{skin\_nonskin} & \multicolumn{3}{c|}{cod-rna} \\ \hline
FP & \multicolumn{1}{c|}{$10\%$} & \multicolumn{1}{c|}{$20\%$} & $30\%$ & \multicolumn{1}{c|}{$10\%$} & \multicolumn{1}{c|}{$20\%$} & $30\%$ \\ \hline
\textbf{OPAUC} & 0.925$\pm$0.009 & 0.884$\pm$0.009 & 0.815$\pm$0.008 & 0.902$\pm$0.021 & 0.864$\pm$0.036 & 0.783$\pm$0.034 \\
\textbf{KOIL}$_{FIFO++}$ & --- & --- & --- & --- & --- & --- \\
\textbf{TSAM} & 0.937$\pm$0.001 & 0.913$\pm$0.004 & 0.842$\pm$0.007 & 0.933$\pm$0.006 & 0.880$\pm$0.004 & 0.808$\pm$0.015 \\
\textbf{PDD}$_{SG}$ & 0.940$\pm$0.001 & 0.912$\pm$0.004 & 0.852$\pm$0.012 & 0.937$\pm$0.002 & 0.873$\pm$0.006 & 0.788$\pm$0.021 \\
\textbf{DSAM} & 0.961$\pm$0.003 & 0.917$\pm$0.002 & 0.819$\pm$0.014 & 0.975$\pm$0.002 & 0.922$\pm$0.002 & 0.781$\pm$0.016 \\
\textbf{BSPAUC} & \textbf{0.979$\pm$0.001} & \textbf{0.944$\pm$0.002} & \textbf{0.912$\pm$0.010} & \textbf{0.990$\pm$0.001} & \textbf{0.956$\pm$0.001} & \textbf{0.874$\pm$0.007} \\ \hline
Datasets & \multicolumn{3}{c|}{rcv1} & \multicolumn{3}{c|}{a9a} \\ \hline
PP & \multicolumn{1}{c|}{$10\%$} & \multicolumn{1}{c|}{$20\%$} & $30\%$ & \multicolumn{1}{c|}{$10\%$} & \multicolumn{1}{c|}{$20\%$} & $30\%$ \\ \hline
\textbf{OPAUC} & 0.923$\pm$0.016 & 0.863$\pm$0.020 & 0.803$\pm$0.026 & 0.833$\pm$0.015 & 0.816$\pm$0.018 & 0.797$\pm$0.016 \\
\textbf{KOIL}$_{FIFO++}$ & 0.891$\pm$0.023 & 0.838$\pm$0.027 & 0.793$\pm$0.044 & 0.831$\pm$0.022 & 0.823$\pm$0.034 & 0.806$\pm$0.035 \\
\textbf{TSAM} & 0.930$\pm$0.002 & 0.859$\pm$0.012 & 0.809$\pm$0.021 & 0.872$\pm$0.002 & 0.849$\pm$0.002 & 0.838$\pm$0.005 \\
\textbf{PDD}$_{SG}$ & 0.934$\pm$0.003 & 0.918$\pm$0.005 & 0.828$\pm$0.017 & 0.885$\pm$0.002 & 0.873$\pm$0.001 & 0.839$\pm$0.004 \\
\textbf{DSAM} & 0.902$\pm$0.007 & 0.845$\pm$0.017 & 0.757$\pm$0.051 & 0.881$\pm$0.004 & 0.852$\pm$0.004 & 0.843$\pm$0.009 \\
\textbf{BSPAUC} & \textbf{0.955$\pm$0.002} & \textbf{0.937$\pm$0.003} & \textbf{0.876$\pm$0.010} & \textbf{0.911$\pm$0.002} & \textbf{0.907$\pm$0.002} & \textbf{0.896$\pm$0.008} \\ \hline
Datasets & \multicolumn{3}{c|}{skin\_nonskin} & \multicolumn{3}{c|}{cod-rna} \\ \hline
PP & \multicolumn{1}{c|}{$10\%$} & \multicolumn{1}{c|}{$20\%$} & $30\%$ & \multicolumn{1}{c|}{$10\%$} & \multicolumn{1}{c|}{$20\%$} & $30\%$ \\ \hline
\textbf{OPAUC} & 0.927$\pm$0.010 & 0.880$\pm$0.020 & 0.856$\pm$0.021 & 0.914$\pm$0.015 & 0.893$\pm$0.023 & 0.858$\pm$0.034 \\
\textbf{KOIL}$_{FIFO++}$ & --- & --- & --- & --- & --- & --- \\
\textbf{TSAM} & 0.933$\pm$0.002 & 0.911$\pm$0.004 & 0.877$\pm$0.005 & 0.953$\pm$0.002 & 0.903$\pm$0.006 & 0.886$\pm$0.005 \\
\textbf{PDD}$_{SG}$ & 0.935$\pm$0.001 & 0.927$\pm$0.002 & 0.898$\pm$0.009 & 0.972$\pm$0.001 & 0.941$\pm$0.001 & 0.881$\pm$0.014 \\
\textbf{DSAM} & 0.980$\pm$0.001 & 0.954$\pm$0.001 & 0.902$\pm$0.005 & 0.973$\pm$0.004 & 0.938$\pm$0.005 & 0.913$\pm$0.004 \\
\textbf{BSPAUC} & \textbf{0.995$\pm$0.001} &\textbf{0.982$\pm$0.001} & \textbf{0.965$\pm$0.006} & \textbf{0.991$\pm$0.003} & \textbf{0.973$\pm$0.003} & \textbf{0.953$\pm$0.008} \\ \hline
\end{tabular}
\end{table*}
\section{Appendix B. Implementation of Algorithm \ref{Kernel}}
When we fix $\mathbf{v},\mathbf{u}$ to update $\theta$:
\begin{equation}
\label{f}
\theta^{t}= \argmin \limits_{ \theta} \frac{1}{nm} \sum_{i=1}^{n}\sum_{j=1}^{m}v_i u_j \xi_{ij}
+\tau \Omega(\theta) + \text{const},
\end{equation}
where $\xi_{ij}=\max \{ 1-f(x^+_i)+f(x^-_j),0 \} $.
For the kernel-based implementation of solving Eq. (\ref{f}), we also compute the gradient on random pairs of weighted samples which are selected from two subsets $\hat{X}^+$ and $\hat{X}^-$ respectively. In this case, the weighted batch AUC loss is defined as:
\begin{small}
\begin{align} \label{DF}
\sum_{j=1}^\pi v_j u_j \xi_{jj} =\sum_{j=1}^\pi v_j u_j \max \{1-f(x^+_j)+f(x^-_j), 0 \}.
\end{align}\end{small}
Then, we apply the random Fourier feature method to approximate the kernel function \cite{rahimi2008random,dai2014scalable} for large-scale problems. The mapping function of $D$ random features is defined as
\begin{small}
\begin{align*}
\phi_{\omega}(x)=\sqrt{1/D}[ &\cos(\omega_1x),\ldots,\cos(\omega_Dx), \\ &\sin(\omega_1x),\ldots,\sin(\omega_Dx)]^T,
\end{align*}\end{small}where $\omega_i$ is randomly sampled according to the density function $p(\omega)$ associated with $k(x,x')$ \cite{odland2017fourier}. Then, based on the weighted batch AUC loss (\ref{DF}) and the random feature mapping function $\phi_{\omega}(x)$, we obtain Algorithm \ref{Kernel} by applying the triply stochastic gradient descent method (TSGD) \cite{TSAM} \emph{w.r.t.} random pairs of weighted samples and random features.
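As an illustration, the random Fourier feature map for the Gaussian kernel can be written in a few lines; re-sampling $\omega$ from a fixed seed mirrors the seed trick in Algorithms 3 and 4, so the features never need to be stored (a sketch, not the authors' implementation):

```python
import numpy as np

def rff_map(x, D, sigma, seed):
    """2D-dimensional random Fourier features approximating the Gaussian
    kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)).  For this kernel,
    p(omega) is N(0, I / sigma^2); re-seeding regenerates the same omegas."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(scale=1.0 / sigma, size=(D, len(x)))
    proj = omega @ x
    return np.sqrt(1.0 / D) * np.concatenate([np.cos(proj), np.sin(proj)])

# phi(x) . phi(x') approximates k(x, x') when both use the same seed.
x, xp = np.array([0.3, -0.1]), np.array([0.2, 0.4])
phi = lambda z: rff_map(z, D=2000, sigma=1.0, seed=0)
approx = phi(x) @ phi(xp)
exact = np.exp(-np.sum((x - xp) ** 2) / 2.0)
```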
\setcounter{algorithm}{2}
\begin{algorithm} [H]
\caption{Kernel-based implementation of solving Eq. (\ref{f})} \label{Kernel}
\begin{algorithmic}[1]
\REQUIRE $p(\omega),\phi_{\omega}(x),\tau, T, \mathbf{\eta}, \hat{X}^+, \hat{X}^-, \pi,\theta^0$.
\FOR{$i=1, \cdots ,T$}
\STATE Sample $\hat{x}^+_1,...,\hat{x}^+_\pi$ from $\hat{X}^+$.
\STATE Sample $\hat{x}^-_1,...,\hat{x}^-_\pi$ from $\hat{X}^-$.
\STATE Sample $\omega_i$ $\sim$ $p(\omega)$ with seed $i$.
\STATE Calculate $f({x})$ by \textbf{Predict}$(x,\{\alpha_j\}_{j=1}^{i-1})$.
\STATE Get $\alpha_i$ according to the following formula:
\begin{align*} \small
\alpha_i=&-\frac{\eta_i}{\pi}\sum_{j=1}^\pi v_j u_j \frac{\partial \xi_{jj} }{ \partial \theta } .\qquad \qquad \qquad \qquad
\end{align*}
\STATE Update $\alpha_j=(1-\eta_j\tau)\alpha_j,j\in [1,i-1]$.
\ENDFOR
\ENSURE $\theta=\{\alpha_i\}_{i=1}^T$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm} [H]
\caption{$f(x)=$ \textbf{Predict}$(x,\{\alpha_i\}_{i=1}^{t})$}
\begin{algorithmic}[1]
\REQUIRE $p(\omega),\phi_{\omega}(x),x,\{\alpha_i\}_{i=1}^{t}$.
\STATE Set $f(x)=0$.
\FOR { $i=1, \cdots ,t$}
\STATE Sample $\omega_i$ $\sim$ $p(\omega)$ with seed $i$.
\STATE $f(x)=f(x)+\alpha_i \phi_{\omega_i}(x)$.
\ENDFOR
\ENSURE $f(x)$.
\end{algorithmic}
\end{algorithm}
\section{Appendix C. Proof of Theorem \ref{theoremUp}}
First, we restate some notation.
Let $X$ be a compact subset of $\mathbb{R}^d$, $Y = \{-1, +1\}$ be the label set and $Z = X \times Y$. Given a distribution $P(z)$, let $S=\{z_i=(x_i,y_i)\}_{i=1}^n$ be an independent and identically distributed (IID) training set drawn from $P(z)$, where $x_i \in X$, $y_i \in Y$ and $z_i \in Z$. Then, the empirical AUC risk on $S$ can be formulated as:
\begin{align} \label{emp}
R_{emp}(S;f)=\frac{1}{n(n-1)}\sum_{z_i,z_j \in S, z_i \neq z_j} L_f(z_i,z_j).
\end{align}
Here, $f \in \mathcal{F}$, where $\mathcal{F}$ is a class of real-valued functions $\mathbb{R}^d \to \mathbb{R}$, and the pairwise loss function $L_f(z_i,z_j)$ for AUC is defined as:
\begin{equation*} \small
L_f(z_i,z_j)=
\left \{\begin{array} {l@{\ \ \textrm{if} \ \ }l} 0 & \ y_i=y_j
\\ \mathbb{I}(f(x_i) \le f(x_j)) &y_i =+1 \& \ y_j=-1\\
\mathbb{I}(f(x_j) \le f(x_i)) &y_j =+1 \& \ y_i=-1
\end{array} \right.
\end{equation*}
where $\mathbb{I}(\cdot)$ is the indicator function such that
$\mathbb{I}(\pi)$ equals 1 if $\pi$ is true and 0 otherwise.
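Concretely, the empirical AUC risk above can be evaluated by enumerating all ordered mixed-label pairs (a naive $O(n^2)$ sketch; the scores below are made up):

```python
import numpy as np

def empirical_auc_risk(scores, labels):
    """Empirical AUC risk: average of L_f over all n(n-1) ordered pairs.
    Same-label pairs contribute 0; a mixed pair contributes 1 whenever
    the positive sample is not scored strictly above the negative one."""
    n = len(scores)
    risk = 0.0
    for i in range(n):
        for j in range(n):
            if i == j or labels[i] == labels[j]:
                continue
            pos, neg = (i, j) if labels[i] == 1 else (j, i)
            risk += float(scores[pos] <= scores[neg])
    return risk / (n * (n - 1))

scores = np.array([0.9, 0.8, 0.3, 0.1])
labels = np.array([1, -1, 1, -1])
risk = empirical_auc_risk(scores, labels)  # one mis-ranked pair: 0.3 vs 0.8
```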
Further, the expected risk of AUC for the distribution $P(z)$ can be defined as:
\begin{align}\label{exp}
&R_{exp}(P(z);f):=\mathbb{E}_{z_1,z_2 \sim P(z)^2 } L_f(z_1,z_2) \nonumber \\
=&\mathbb{E}_{S \sim P(z)^n } \left[ \frac{1}{n(n-1)} \sum_{z_i,z_j \in S, z_i \neq z_j} L_f(z_i,z_j) \right] \nonumber\\
=&\mathbb{E}_{S}[R_{emp}(S;f)].
\end{align}
Under the assumption that our training set in reality is composed of not only the target clean data but also a proportion of noise samples, Gong et al. \cite{why} connect the data distribution $P_{train}(z)$ of the training set with the distribution $P_{target}(z)$ of the target set using a weight function $W_{\lambda}(z)$:
\begin{align} \small \label{OriginalF}
& P_{target}(z)=\frac{1}{\alpha_*}W_{\lambda}(z)P_{train}(z), \\
&\alpha_*=\int_{Z}W_{\lambda}(z)P_{train}(z)dz , \nonumber
\end{align}
where $0 \le W_{\lambda}(z) \le 1 $ and $\alpha_*$ denotes the normalization factor. Intuitively, $W_{\lambda}(z)$ gives larger weights to easy samples than to complex samples, and as the pace parameter $\lambda$ increases, all samples tend to be assigned larger weights.
Then, Eq. (\ref{OriginalF}) can be reformulated as:
\begin{align} \label{NewF}
& P_{train}(z)=\alpha_*P_{target}(z)+(1-\alpha_*)E(z), \\
& E(z)=\frac{1}{1-\alpha_*}(1-W_{\lambda_*}(z))P_{train}(z). \nonumber
\end{align}
Based on (\ref{NewF}), we define the pace distribution $Q_{\lambda}(z)$ as:
\begin{align} \label{Q}
Q_{\lambda}(z)=\alpha_{\lambda}P_{target}(z)+(1-\alpha_{\lambda})E(z),
\end{align}
where $\alpha_{\lambda}$ varies from $1$ to $\alpha_{*}$ as the pace parameter $\lambda$ increases. Correspondingly, $Q_{\lambda}(z)$ simulates the changing process from $P_{target}(z)$ to $P_{train}(z)$. Note that $Q_{\lambda}(z)$ can also be normalized into the following formulation:
\begin{align} \label{Q2}
Q_{\lambda}(z) \propto W_{\lambda}(z)P_{train}(z).
\end{align}
Here,
\begin{align*}
W_{\lambda}(z) \propto \frac{\alpha_{\lambda}P_{target}(z)+(1-\alpha_{\lambda})E(z)}{\alpha_*P_{target}(z)+(1-\alpha_*)E(z)},
\end{align*}
and
\begin{align*}
&\frac{\alpha_{\lambda}P_{target}(z)+(1-\alpha_{\lambda})E(z)}{\alpha_*P_{target}(z)+(1-\alpha_*)E(z)}\\
=&\frac{\alpha_{\lambda}P_{target}(z)+(1-\alpha_{\lambda})E(z)}{P_{train}(z)},
\end{align*}
where $0 \le W_{\lambda}(z) \le 1 $ through normalizing its maximal value to 1.
Then, after introducing the definition of McDiarmid inequality, we obtain the relationship between the empirical AUC risk $R_{emp}$ (\ref{emp}) and the expected AUC risk $R_{exp}$ (\ref{exp}) in Lemma \ref{TwoR}.
\begin{definition} \label{Ineq}
(McDiarmid Inequality) Let $z_1,\ldots,z_n$ be independent random variables taking values in a set $Z$, and assume that there exist $c_1,\ldots,c_n >0$ such that $f:Z^n \rightarrow \mathbb{R}$ satisfies the following inequalities:
\begin{align*}
|f(z_1,\ldots,z_i,\ldots,z_n)-f(z_1,\ldots,z_i',\ldots,z_n)| \le c_i ,
\end{align*}
for all $i \in \{1,\ldots,n\}$ and all points $z_1,\ldots,z_n, z_i' \in Z$. Then, $\forall \epsilon >0$ we have
\begin{small}
\begin{align*}
&P(f(z_1,...,z_n)-\mathbb{E}(f(z_1,...,z_n)) \ge \epsilon) \le \exp \left( \frac{-2 \epsilon^2}{\sum_{i=1}^n c_i^2} \right ) \\
&P(f(z_1,...,z_n)-\mathbb{E}(f(z_1,...,z_n)) \le -\epsilon) \le \exp \left (\frac{-2 \epsilon^2}{\sum_{i=1}^n c_i^2} \right)
\end{align*}
\end{small}
\end{definition}
\begin{lemma} \label{TwoR}
An independent and identically distributed sample set $S=\{z_1,...,z_n\}, z_i \in Z$ is obtained according to the distribution $P(z)$. Then for any $\delta > 0$ and any $ f \in \mathcal{F}$, the following holds for $R_{emp}$ (\ref{emp}) and $R_{exp}$ (\ref{exp}) with confidence at least $1-\delta$:
\begin{align}
R_{exp}(P(z);f) \le R_{emp}(S;f) +\sqrt{\frac{\ln (1/\delta)}{n/2}}.
\end{align}
\end{lemma}
\begin{proof}
Let $S'$ be a sample set identical to $S$ except that one sample $z \in S$ is replaced by $z'$. In this case, $S'$ differs from $S$ in only one sample, and $z \in S, z'\in S'$ are the two differing samples. Then, we have
\begin{align*}
&R_{emp}(S;f)-R_{emp}(S';f)\\
\overset{a}{=}&\frac{2}{n(n-1)}\left(\sum_{z_j \in S, z_j \neq z} L_f(z,z_j)-\sum_{z_j \in S', z_j \neq z' } L_f(z',z_j)\right) \\
\overset{b}{\le} &\frac{2(n-1)}{n(n-1)} = \frac{2}{n}
\end{align*}
The equality (a) is due to the property that $\forall z_i,z_j \in Z, L_f(z_i,z_j)=L_f(z_j,z_i)$, and the inequality (b) is obtained by the fact that the function $L_f$ is bounded by $[0,1]$. Similarly, we can get
\begin{align*}
|R_{emp}(S;f)-R_{emp}(S';f)| \le \frac{2}{n}.
\end{align*}
According to Definition \ref{Ineq}, $\forall \epsilon >0$ we have
\begin{equation*} \small
\begin{aligned}
&P( R_{emp}(S;f) - E_{S}[R_{emp}(S;f)] \le - \epsilon ) \le \exp (\frac{-\epsilon^2}{2/n}) \\
\iff & P( R_{emp}(S;f) - R_{exp}(P(z);f) \ge - \epsilon ) \ge 1- \exp (\frac{-\epsilon^2}{2/n})
\end{aligned}
\end{equation*}
Then, we define $\delta$ as $\exp (\frac{-\epsilon^2}{2/n})$ and calculate $\epsilon$ as $\sqrt{\frac{\ln (1/\delta)}{n/2}}$.\\
In this case, we prove that with confidence at least $1 - \delta$, the following holds
\begin{align}
R_{exp}(P(z);f) \le R_{emp}(S;f) +\sqrt{\frac{\ln (1/\delta)}{n/2}}.
\end{align}
The proof is then completed.
\end{proof}
Combining with the pace distribution $Q_{\lambda}$ (\ref{Q}), we get the following Lemma.
\begin{lemma} \label{old}
An independent and identically distributed sample set $S=\{z_1,...,z_n\}$ is obtained according to the pace distribution $Q_{\lambda}$ (\ref{Q}). Then for any $\delta>0$ and any $ f \in \mathcal{F}$, with confidence at least $1-\delta$ we have:
\begin{align}
R_{exp}(P_{target};f) & \le R_{emp}(S;f)+e_{\lambda} +\sqrt{\frac{\ln (1/\delta)}{n/2}}
\end{align}
where $e_{\lambda}$ is defined as $R_{exp}(P_{target};f) - R_{exp}(Q_{\lambda};f)$, and as $\lambda$ increases, $e_{\lambda}$ decreases monotonically from $0$.
\end{lemma}
\begin{proof}
The empirical risk on the training set tends not to approximate the expected risk due to the inconsistency of $P_{train}$ and $P_{target}$. However, by introducing the pace empirical risk with the pace distribution $Q_{\lambda}$ into the error analysis, we can formulate the following error decomposition:
\begin{small}
\begin{align} \label{S12}
&R_{exp}(P_{target};f) - R_{emp}(S;f) \\
=&R_{exp}(P_{target};f)-R_{exp}(Q_{\lambda};f)+R_{exp}(Q_{\lambda};f)-R_{emp}(S;f), \nonumber
\end{align}
\end{small}
and we define
\begin{align} \label{S1}
e_{\lambda}=R_{exp}(P_{target};f) - R_{exp}(Q_{\lambda};f).
\end{align}
Considering the definition of $Q_{\lambda}$ (\ref{Q}), which simulates the changing process from $P_{target}$ to $P_{train}$, and the relationship (\ref{OriginalF}) between $P_{train}$ and $P_{target}$, we conclude that as $\lambda$ increases, $e_{\lambda}$ decreases monotonically from $0$.\\
Because $S$ is subject to the pace distribution $Q_{\lambda}$ (\ref{Q}), according to Lemma \ref{TwoR}, the following holds with confidence $ 1 - \delta$
\begin{align} \label{S2}
R_{exp}(Q_{\lambda};f)-R_{emp}(S;f) \le \sqrt{\frac{\ln (1/\delta)}{n/2}} .
\end{align}
Combining (\ref{S12}),(\ref{S1}) with (\ref{S2}), we have
\begin{align*}
R_{exp}(P_{target};f) \le R_{emp}(S;f)+ e_{\lambda} + \sqrt{\frac{\ln (1/\delta)}{n/2}} .
\end{align*}
The proof is then completed.
\end{proof}
Finally, the proof of Theorem \ref{theoremUp} is as follows.
\begin{proof}
When we use the training set $S=\{z_1,\ldots,z_m\}$ to approximate $P_{train}$, we have
\begin{align*}
P_{train}(z)=\sum_{i=1}^m p_i D_{z_i}(z),
\end{align*}
where $p_i=\frac{1}{m}$ and $D_{z_i}(z)$ denotes the Dirac delta function centered at $z_i$:
\begin{align*}
\forall z \neq z_i, \ D_{z_i}(z)=0 \ \text{and} \ \int_{Z} D_{z_i}(z) dz =1.
\end{align*}
It is easy to see that $P_{train}$ places a uniform density on each sample $z_i$. Next, according to the special formulation (\ref{Q2}) of $Q_{\lambda}$, we have:
\begin{align*}
Q_{\lambda}(z) \propto \sum_{i=1}^m W_{\lambda}(z_{i})p_i D_{z_i}(z).
\end{align*}
In this case, the weighted sample set $S_{Q}=\{W_{\lambda}(z_1)z_1,\ldots,W_{\lambda}(z_m)z_m \}$ is subject to the pace distribution $Q_{\lambda}$, and the empirical risk on $Q_{\lambda}$ can be rewritten as
\begin{align*}
&R_{emp}(S_{Q};f)\\
=&\frac{1}{n_{\lambda}(n_{\lambda}-1)}\sum_{z_i,z_j \in S, z_i\neq z_j} W_{\lambda}(z_i) W_{\lambda}(z_j) L_f(z_i,z_j)
\end{align*}
where $n_{\lambda}=\sum_{i=1}^m \mathbb{I}(W_{\lambda}(z_i) \neq 0)$ denotes the number of selected samples in the SPL setting.
Combining this with Lemma \ref{old}, we conclude that for any $\delta>0$ and any $f \in \mathcal{F}$, with confidence at least $1-\delta$ over a sample set $S$, we have:
\begin{small}
\begin{align}
R_{exp}(P_{target};f) \le& \frac{1}{n_{\lambda}(n_{\lambda}-1)} \sum_{z_i,z_j \in S \atop z_i\neq z_j} W_{\lambda}(z_i) W_{\lambda}(z_j) L_f(z_i,z_j) \nonumber \\
+ & e_{\lambda}+\sqrt{\frac{\ln (1/\delta)}{n_{\lambda}/2}},
\end{align}
\end{small}
where $n_{\lambda}$ denotes the number of selected samples and $e_{\lambda}$ is defined as $R_{exp}(P_{target};f) - R_{exp}(Q_{\lambda};f)$; as $\lambda$ increases, $e_{\lambda}$ decreases monotonically to $0$.
\end{proof}
\section{Appendix D. Proof of Theorem \ref{Non-con}}
The objective function of our BSPAUC is defined as follows:
\begin{align} \label{BSPAUCOF}
&\min_{\theta,\mathbf{v},\mathbf{u}} \ \mathcal{L}(\theta,\mathbf{v},\mathbf{u};\lambda) \\ =&\min_{\theta,\mathbf{v},\mathbf{u}} \ \frac{1}{nm} \sum_{i=1}^{n}\sum_{j=1}^{m}v_i u_j \xi_{ij} +\tau \Omega(\theta) \\
-& \lambda \left(\frac{1}{n}\sum_{i=1}^{n} v_i+\frac{1}{m}\sum_{j=1}^{m} u_j \right)+ \mu \left(\frac{1}{n}\sum_{i=1}^{n} v_i-\frac{1}{m}\sum_{j=1}^{m} u_j \right)^2 \nonumber \\
& \ s.t. \ \mathbf{v}\in [0,1]^n, \mathbf{u}\in [0,1]^m \nonumber
\end{align}
where $\xi_{ij}=\max \{1-f(x^+_i)+f(x^-_j), 0 \}$ is the pairwise hinge loss.
\begin{proof}
For the sake of clarity, we define $\mathcal{K}(\mathbf{v},\mathbf{u})=\mathcal{L}(\mathbf{v},\mathbf{u};\theta,\lambda)$ as the sub-problem of (\ref{BSPAUCOF}) where $\theta$ and $\lambda$ are fixed:
\begin{align} \label{K}
&\min_{\mathbf{v},\mathbf{u}} \ \mathcal{K}(\mathbf{v},\mathbf{u}) =\min_{\mathbf{v},\mathbf{u}} \ \mathcal{L}(\mathbf{v},\mathbf{u};\theta,\lambda) \\
=&\min_{\mathbf{v},\mathbf{u}} \ \frac{1}{nm} \sum_{i=1}^{n}\sum_{j=1}^{m}v_i u_j \xi_{ij}
- \lambda \left(\frac{1}{n}\sum_{i=1}^{n} v_i+\frac{1}{m}\sum_{j=1}^{m} u_j \right) \nonumber\\
+& \mu \left(\frac{1}{n}\sum_{i=1}^{n} v_i-\frac{1}{m}\sum_{j=1}^{m} u_j \right)^2 + \text{const} \nonumber \\
& \ s.t. \ \mathbf{v}\in [0,1]^n, \mathbf{u}\in [0,1]^m \nonumber
\end{align}
First, note that a necessary condition for $\mathcal{K}(\mathbf{v},\mathbf{u})$ to be a convex function is
\begin{equation} \label{necessary} \small
\begin{aligned}
&\mathcal{K}(\frac{1}{2}(\mathbf{v}^1+\mathbf{v}^2),\frac{1}{2}(\mathbf{u}^1+\mathbf{u}^2))-\frac{1}{2}(\mathcal{K}(\mathbf{v}^1,\mathbf{u}^1)+\mathcal{K}(\mathbf{v}^2,\mathbf{u}^2)) \le 0,\\
&\mathbf{v}^1,\mathbf{v}^2 \in [0,1]^n, \mathbf{u}^1, \mathbf{u}^2 \in [0,1]^m.
\end{aligned}
\end{equation}
In particular, let
\begin{align*}
\mathbf{v}^1=(1,1,...,1),\mathbf{u}^1=(0,0,...,0),\\
\mathbf{v}^2=(0,0,...,0),\mathbf{u}^2=(1,1,...,1).
\end{align*}
Then $\mathcal{K}(\mathbf{v},\mathbf{u})$ satisfies
\begin{align*}
&\mathcal{K}(\frac{1}{2}(\mathbf{v}^1+\mathbf{v}^2),\frac{1}{2}(\mathbf{u}^1+\mathbf{u}^2))-\frac{1}{2}(\mathcal{K}(\mathbf{v}^1,\mathbf{u}^1)+\mathcal{K}(\mathbf{v}^2,\mathbf{u}^2)) \\
=&\frac{1}{4nm} \sum_{i=1}^{n}\sum_{j=1}^{m} \xi_{ij}-\mu.
\end{align*}
Since the hyperparameter $\mu$ is greater than $0$ and $\xi_{ij}$ is nonnegative, the quantity $\frac{1}{4nm} \sum_{i=1}^{n}\sum_{j=1}^{m} \xi_{ij}-\mu$ is not guaranteed to be less than or equal to $0$. Then, according to the necessary condition (\ref{necessary}), we conclude that if we fix $\theta$ in Eq. (\ref{BSPAUCOF}), the sub-problem with respect to $\mathbf{v}$ and $\mathbf{u}$ may be non-convex.
\end{proof}
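The gap computed in the proof can be checked numerically; the sizes, $\lambda$, $\mu$ and the losses $\xi_{ij}$ below are arbitrary choices of ours, and the constant term is dropped since it cancels in the difference:

```python
# Verify: K(midpoint) - (K(v1,u1) + K(v2,u2))/2 == sum(xi)/(4nm) - mu,
# which is positive for small mu, violating the convexity condition.
import random

def K(v, u, xi, lam, mu):
    n, m = len(v), len(u)
    pair = sum(v[i] * u[j] * xi[i][j] for i in range(n) for j in range(m)) / (n * m)
    sv, su = sum(v) / n, sum(u) / m
    return pair - lam * (sv + su) + mu * (sv - su) ** 2

random.seed(0)
n, m, lam, mu = 3, 4, 0.5, 1e-3
xi = [[random.random() for _ in range(m)] for _ in range(n)]
v1, u1 = [1.0] * n, [0.0] * m
v2, u2 = [0.0] * n, [1.0] * m
gap = K([0.5] * n, [0.5] * m, xi, lam, mu) \
    - 0.5 * (K(v1, u1, xi, lam, mu) + K(v2, u2, xi, lam, mu))
expected = sum(map(sum, xi)) / (4 * n * m) - mu
assert abs(gap - expected) < 1e-12
assert gap > 0  # the necessary condition for convexity fails here
```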
\section{Appendix E. Proof of Theorem \ref{solutionofvu}}
When we fix $\mathbf{u}$ and $\theta$, the sub-problem with respect to $\mathbf{v}$ is as follows:
\begin{small}
\begin{align} \label{v}
\mathbf{v}^{t}= & \argmin \limits_{\mathbf{v} \in [0,1]^n} \ \mathcal{L}(\mathbf{v};\theta,\mathbf{u},\lambda) \\
=&\argmin \limits_{\mathbf{v} \in [0,1]^n} \ \frac{1}{n} \sum_{i=1}^{n}v_i l^+_i - \lambda \frac{1}{n}\sum_{i=1}^{n} v_i + \mu\left(\frac{1}{n}\sum_{i=1}^{n} v_i-Q\right)^2 + \text{const}, \nonumber
\end{align}
\end{small}
where $l^+_i=\frac{1}{m}\sum_{j=1}^{m} u_j \xi_{ij}$ and $Q=\frac{1}{m}\sum_{j=1}^{m} u_j$.
When we fix $\mathbf{v}$ and $\theta$, the sub-problem with respect to $\mathbf{u}$ is as follows:
\begin{small}
\begin{align} \label{u}
\mathbf{u}^{t}= &\argmin \limits_{\mathbf{u} \in [0,1]^m} \ \mathcal{L}(\mathbf{u};\theta,\mathbf{v},\lambda) \\
=& \argmin \limits_{\mathbf{u} \in [0,1]^m} \ \frac{1}{m} \sum_{j=1}^{m}u_j l^-_j - \lambda \frac{1}{m}\sum_{j=1}^{m} u_j + \mu\left(\frac{1}{m}\sum_{j=1}^{m} u_j-P\right)^2 + \text{const}, \nonumber
\end{align}
\end{small}
where $l^-_j=\frac{1}{n}\sum_{i=1}^{n} v_i \xi_{ij}$ and $P=\frac{1}{n}\sum_{i=1}^{n} v_i$.
\begin{proof}
Eq. (\ref{v}) can be expressed as
\begin{align} \label{simple}
&\argmin \limits_{\mathbf{v} \in [0,1]^n} \quad \frac{1}{n} \sum_{i=1}^{n}v_i l^+_i - \lambda \frac{1}{n}\sum_{i=1}^{n} v_i + \mu(\frac{1}{n}\sum_{i=1}^{n} v_i-Q)^2 \nonumber\\
\iff &\argmin \limits_{\mathbf{v} \in [0,1]^n} \quad \sum_{i=1}^n v_ib_i+\frac{1}{n}(\sum_{i=1}^nv_i-nQ)^2 \\
:= &\argmin \limits_{\mathbf{v} \in [0,1]^n} \quad F(\mathbf{v})
\end{align}
where $l^+_i=\frac{1}{m}\sum_{j=1}^{m} u_j \xi_{ij}$, $Q=\frac{1}{m}\sum_{j=1}^{m} u_j$ and $b_i=\frac{l_i^+ -\lambda}{\mu}$.\\
Without loss of generality, we suppose that $b_1 \le b_2 \le \cdots \le b_n$. In this case, we first prove that a vector of the form $\mathbf{v} = (1,1,\ldots,1, v_p, 0,0,\ldots,0)$, in which the $p$-th element $v_p$ is the only entry that may differ from $0$ and $1$, minimizes $F(\mathbf{v})$.\\
For each $\mathbf{v}\in[0,1]^n$, if $v_i<v_j$ for some $i<j$, then we can exchange $v_i$ and $v_j$, thus $\mathbf{v}$ becomes $\mathbf{v'}$. We have
\begin{align*}
&F(\mathbf{v'}) - F(\mathbf{v}) \\
=& (b_iv_j +b_jv_i) - (b_iv_i +b_jv_j) = (b_i-b_j) (v_j-v_i) \leq 0.
\end{align*}
Therefore, we can always assume that $1\geq v_1\geq v_2\geq \cdots \geq v_n \geq 0$ when $\mathbf{v}$ minimizes $F(\mathbf{v})$. \\
Assume that there exists an index $i$ with $1>v_i>v_{i+1}>0$; we consider the following two cases.\\
Case 1: $v_i+v_{i+1}\geq 1$. In this case, we replace $(v_i,v_{i+1})$ by $(1,v_i+v_{i+1}-1)$, then $\mathbf{v}$ becomes $\mathbf{v'}$. And we have
\begin{align*}
&F(\mathbf{v'}) - F(\mathbf{v}) \\
= &(b_i +b_{i+1}(v_i+v_{i+1}-1)) - (b_iv_i +b_{i+1} v_{i+1}) \\
= &(b_i-b_{i+1}) (1-v_i) \leq 0.
\end{align*}
Case 2: $v_i+v_{i+1}< 1$. In this case, we replace $(v_i,v_{i+1})$ by $(v_i+v_{i+1},0)$, then $\mathbf{v}$ becomes $\mathbf{v'}$. And we have
\begin{align*}
F(\mathbf{v'}) - F(\mathbf{v})& = b_i (v_i+v_{i+1}) - (b_iv_i +b_{i+1} v_{i+1})\\
& = (b_i-b_{i+1}) v_{i+1} \leq 0.
\end{align*}
Therefore, we can always assume that at most one element in $\mathbf{v}$ is not equal to 0 or 1 when $\mathbf{v}$ minimizes $F(\mathbf{v})$. \\
Since $1\geq v_1\geq v_2\geq \cdots \geq v_n \geq 0$ and at most one element of $\mathbf{v}$ differs from $0$ and $1$ at a minimizer, we conclude that a vector of the form $\mathbf{v} = (1,1,\ldots,1, v_p, 0,0,\ldots,0)$, in which only the $p$-th element $v_p$ may differ from $0$ and $1$, minimizes $F(\mathbf{v})$.\\
Then, we rewrite $F(\mathbf{v})$ as
\begin{align} \label{F}
F(\mathbf{v}) = \frac{1}{n} ( v_p +p-1+\frac{nb_p}{2} -nQ )^2 + \text{const}
\end{align}
and consider the following three cases when $\mathbf{v} = (1,1,\ldots,1, v_p, 0,0,\ldots,0)$ minimizes $F(\mathbf{v})$.\\
Case 1: $v_p=1$. The quadratic (\ref{F}) attains its minimum over $v_p \in [0,1]$ at $v_p=1$ only if its unconstrained minimizer $-(p-1+\frac{nb_p}{2} -nQ)$ is at least $1$, so
$$
p-1+\frac{nb_p}{2} -nQ \leq -1
$$
and thus
$$
-\frac{b_p}{2} \geq \frac{p}{n} - Q.
$$
Furthermore, for each $i<p$ we have $v_i =1$ and
$$
-\frac{b_i}{2} \geq -\frac{b_p}{2} \geq \frac{p}{n} - Q \geq \frac{i}{n} - Q .
$$
For each $i>p$ we have $v_i =0$; rewriting $F(\mathbf{v})$ as a function of $v_i$, we obtain
$$
F(\mathbf{v}) = \frac{1}{n} ( v_i +p+\frac{nb_i}{2} -nQ )^2 + \text{const}.
$$
Since $\mathbf{v}$ minimizes $F(\mathbf{v})$, we have
$$
p+\frac{nb_i}{2} -nQ \geq 0
$$
and thus
$$
-\frac{b_i}{2} \leq \frac{p}{n} - Q \leq \frac{i-1}{n} - Q .
$$
Therefore, in this case we can get $\forall i \in\{1,2,...,n\}$,
\begin{align}
v_i=1 &\iff -\frac{b_i}{2} \geq \frac{i}{n} - Q ,\\
v_i=0 &\iff -\frac{b_i}{2} \leq \frac{i-1}{n} - Q .
\end{align}
Case 2: $0<v_p<1$. Here the minimizer of the quadratic (\ref{F}) lies in the interior of $[0,1]$, so by Eq. (\ref{F}) we have
$$
0< v_p= -(p-1+\frac{nb_p}{2} -nQ )<1
$$
and thus
$$
\frac{p-1}{n} - Q <-\frac{b_p}{2} < \frac{p}{n} - Q.
$$
Furthermore, for each $i<p$ we have $v_i =1$; rewriting $F(\mathbf{v})$ as a function of $v_i$, we obtain
$$
F(\mathbf{v}) = \frac{1}{n} ( v_i +p-2+v_p+\frac{nb_i}{2} -nQ )^2 + \text{const}.
$$
Since $\mathbf{v}$ minimizes $F(\mathbf{v})$, we have
$$
p-2+v_p+\frac{nb_i}{2} -nQ \leq -1
$$
and thus
$$
-\frac{b_i}{2} \geq \frac{p-1+v_p}{n} - Q \geq \frac{i}{n} - Q .
$$
For each $i>p$ we have $v_i =0$; rewriting $F(\mathbf{v})$ as a function of $v_i$, we obtain
$$
F(\mathbf{v}) = \frac{1}{n} ( v_i +p-1+v_p+\frac{nb_i}{2} -nQ )^2 + \text{const}.
$$
Since $\mathbf{v}$ minimizes $F(\mathbf{v})$, we have
$$
p-1+v_p+\frac{nb_i}{2} -nQ \geq 0
$$
and thus
$$
-\frac{b_i}{2} \leq \frac{p-1+v_p}{n} - Q \leq \frac{i-1}{n} - Q .
$$
Therefore, in this case we can get $\forall i \in\{1,2,...,n\} $,
\begin{align}
&v_i=1 \iff -\frac{b_i}{2} \geq \frac{i}{n} - Q , \\
&v_i=0 \iff -\frac{b_i}{2} \leq \frac{i-1}{n} - Q,
\end{align}
and
\begin{align}
& 0< v_i = -(i-1+\frac{nb_i}{2} -nQ ) < 1 \nonumber\\
\iff& \frac{i-1}{n} - Q <-\frac{b_i}{2} < \frac{i}{n} - Q.
\end{align}
Case 3: $v_p=0$. The quadratic (\ref{F}) attains its minimum over $v_p \in [0,1]$ at $v_p=0$ only if its unconstrained minimizer $-(p-1+\frac{nb_p}{2} -nQ)$ is at most $0$, so
$$
p-1+\frac{nb_p}{2} -nQ \geq 0
$$
and thus
$$
-\frac{b_p}{2} \leq \frac{p-1}{n} - Q.
$$
Similarly to Case 1, in this case we can get $\forall i \in\{1,2,...,n\}$,
\begin{align}
v_i=1 &\iff -\frac{b_i}{2} \geq \frac{i}{n} - Q , \\
v_i=0 &\iff -\frac{b_i}{2} \leq \frac{i-1}{n} - Q .
\end{align}
By Cases 1, 2 and 3, it is easy to see that if $\mathbf{v}$ minimizes $F(\mathbf{v})$, then for each $1\leq i \leq n$, we always have
\begin{align*}
&v_i =0 \iff -\frac{b_i}{2} \leq \frac{i-1}{n} - Q , \\
& v_i =1 \iff -\frac{b_i}{2} \geq \frac{i}{n} - Q ,
\end{align*}
and
\begin{align*}
&0< v_i = -(i-1+\frac{nb_i}{2} -nQ ) < 1 \\
\iff &\frac{i-1}{n} - Q <-\frac{b_i}{2} < \frac{i}{n} - Q ,
\end{align*}
which can be rewritten as:
\begin{equation} \small
\left \{\begin{array} {ll}
&v_p=1 \qquad {\ \ \textrm{if} \ \ } l^+_p < \lambda - 2\mu \left(\frac{p}{n}-\frac{\sum_{j=1}^m u_j}{m} \right)
\\ &v_p=n \left( \frac{\sum_{j=1}^m u_j}{m}-\frac{l^+_p - \lambda}{2 \mu}-\frac{p-1}{n} \right) {\ \ \textrm{otherwise} \ \ }
\\
&v_p=0 \qquad {\ \ \textrm{if} \ \ } l^+_p > \lambda - 2\mu \left(\frac{p-1}{n}-\frac{\sum_{j=1}^m u_j}{m} \right)
\end{array} \right.
\end{equation}
where $p \in \{1,...,n\}$ is the sorted index based on the loss values $l^+_p=\frac{1}{m}\sum_{j=1}^m u_j\xi_{pj}$.\\
Similarly, we can get one global optimal solution of $\mathbf{u}$
\begin{equation} \small
\left \{\begin{array} {ll}
&u_q=1 \qquad {\ \ \textrm{if} \ \ } l^-_q < \lambda - 2\mu \left(\frac{q}{m}-\frac{\sum_{i=1}^n v_i}{n}\right)
\\
&u_q=m \left(\frac{\sum_{i=1}^n v_i}{n}-\frac{l^-_q - \lambda}{2 \mu}-\frac{q-1}{m}\right) {\ \ \textrm{otherwise} \ \ }
\\
&u_q=0 \qquad {\ \ \textrm{if} \ \ } l^-_q > \lambda - 2\mu \left(\frac{q-1}{m}-\frac{\sum_{i=1}^n v_i}{n}\right)
\end{array} \right.
\end{equation}
where $q \in \{1,...,m\}$ is the sorted index based on the loss values $l^-_q=\frac{1}{n}\sum_{i=1}^n v_i\xi_{iq}$.
\end{proof}
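To make the update concrete, here is a sketch (our own code, with hypothetical names) of the closed-form solution for $\mathbf{v}$; the rank $p$ runs over the losses $l^+_p$ sorted in ascending order, and $Q=\frac{1}{m}\sum_{j=1}^m u_j$ is assumed given. The solution for $\mathbf{u}$ is obtained symmetrically.

```python
# Closed-form update for v per the displayed case analysis (1-indexed rank p).

def solve_v(l_plus, lam, mu, Q):
    n = len(l_plus)
    order = sorted(range(n), key=lambda i: l_plus[i])  # ascending losses
    v = [0.0] * n
    for rank, idx in enumerate(order, start=1):  # rank plays the role of p
        lp = l_plus[idx]
        if lp < lam - 2.0 * mu * (rank / n - Q):
            v[idx] = 1.0                    # confidently selected sample
        elif lp > lam - 2.0 * mu * ((rank - 1) / n - Q):
            v[idx] = 0.0                    # confidently rejected sample
        else:                               # fractional weight in between
            v[idx] = n * (Q - (lp - lam) / (2.0 * mu) - (rank - 1) / n)
    return v
```

On a toy instance this matches a brute-force grid search over $[0,1]^n$ for the sub-problem objective.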
\section{Appendix F. Proof of Theorem \ref{theormKstation}}
First, we prove that $\mathcal{K}$ converges under the inner layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 3-6 of Algorithm 1).
\begin{lemma} \label{theormKconverge}
With the inner layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 3-6 of Algorithm 1), the sub-problem $\mathcal{K}$ with respect to all weight variables converges.
\end{lemma}
\begin{proof}
First, we prove that $\mathcal{K}$ has a lower bound:
\begin{equation} \label{Kbound} \small
\begin{aligned}
\mathcal{K}&\overset{(a)}{\ge} \frac{1}{nm} \sum_{i=1}^{n}\sum_{j=1}^{m}v_i u_j \xi_{ij} - \lambda\left(\frac{1}{n}\sum_{i=1}^{n} v_i+\frac{1}{m}\sum_{j=1}^{m} u_j\right) \\
&\overset{(b)}{\ge} - \lambda\left(\frac{1}{n}\sum_{i=1}^{n} v_i+\frac{1}{m}\sum_{j=1}^{m} u_j\right) \overset{(c)}{\ge} -2\lambda_{\infty} > -\infty.
\end{aligned}
\end{equation}
Inequality (a) follows from $ \text{const}=\tau \Omega(\theta) \ge 0$ and $ \mu(\frac{1}{n}\sum_{i=1}^{n} v_i-\frac{1}{m}\sum_{j=1}^{m} u_j)^2 \ge 0 $, and inequality (b) follows from $\xi_{ij} \ge 0$, $\mathbf{v}\in [0,1]^{n}$ and $\mathbf{u}\in [0,1]^{m}$. Since $\lambda_{\infty}$ is the maximum threshold of the hyperparameter $\lambda$, the term $ - \lambda(\frac{1}{n}\sum_{i=1}^{n} v_i+\frac{1}{m}\sum_{j=1}^{m} u_j) $ reaches its minimum value $-2\lambda_{\infty}$ when all $v_i$ and $u_j$ are equal to $1$, which gives inequality (c). Thus $\mathcal{K}$ is bounded below.
Then, according to the update rule of the inner layer cyclic block coordinate descent procedure of Algorithm 1, we solve the following two convex sub-problems iteratively:
\begin{align*}
&\mathbf{v}^{k+1}= \argmin \limits_{\mathbf{v} \in [0,1]^n} \mathcal{K}(\mathbf{v};\mathbf{u}^k), \\
&\mathbf{u}^{k+1}= \argmin \limits_{\mathbf{u} \in [0,1]^m} \mathcal{K}(\mathbf{u};\mathbf{v}^{k+1}).
\end{align*}
Obviously, we have:
\begin{align} \label{Keachstep}
\mathcal{K}(\mathbf{v}^k,\mathbf{u}^k) \geq \mathcal{K}(\mathbf{v}^{k+1},\mathbf{u}^k) \geq \mathcal{K}(\mathbf{v}^{k+1},\mathbf{u}^{k+1}),
\end{align}
which shows that $\mathcal{K}$ does not increase at each update.
Since $\mathcal{K}$ does not increase at each update and has a lower bound, the sub-problem $\mathcal{K}$ with respect to all weight variables converges under the inner layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 3-6 of Algorithm 1):
\begin{align} \label{Kdecreasing}
&\lim\limits_{k \rightarrow \infty}\mathcal{K}(\mathbf{v}^{k},\mathbf{u}^{k})-\mathcal{K}(\mathbf{v}^{k+1},\mathbf{u}^{k})=0, \\
&\lim\limits_{k \rightarrow \infty}\mathcal{K}(\mathbf{v}^{k+1},\mathbf{u}^{k})-\mathcal{K}(\mathbf{v}^{k+1},\mathbf{u}^{k+1})=0. \nonumber\\
\Longrightarrow &\lim\limits_{k \rightarrow \infty}\mathcal{K}(\mathbf{v}^{k},\mathbf{u}^{k})-\mathcal{K}(\mathbf{v}^{k+1},\mathbf{u}^{k+1})=0. \nonumber
\end{align}
The proof is then completed.
\end{proof}
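As an illustration of the lemma (our own sketch), alternating exact minimization over the two blocks, here by exhaustive grid search in place of the closed-form updates, produces a non-increasing sequence of objective values on a tiny random instance:

```python
# Inner-layer cyclic block coordinate descent on K, checking
# K(v^k,u^k) >= K(v^{k+1},u^k) >= K(v^{k+1},u^{k+1}) at every step.
import random
from itertools import product

GRID = [k / 10 for k in range(11)]

def K(v, u, xi, lam, mu):
    n, m = len(v), len(u)
    pair = sum(v[i] * u[j] * xi[i][j] for i in range(n) for j in range(m)) / (n * m)
    sv, su = sum(v) / n, sum(u) / m
    return pair - lam * (sv + su) + mu * (sv - su) ** 2

def argmin_block(dim, obj):
    """Exhaustive grid search over one block (the other block held fixed)."""
    best_val, best_x = float("inf"), None
    for x in product(GRID, repeat=dim):
        val = obj(list(x))
        if val < best_val:
            best_val, best_x = val, list(x)
    return best_x

random.seed(1)
n, m, lam, mu = 2, 2, 0.3, 0.5
xi = [[random.random() for _ in range(m)] for _ in range(n)]
v, u = [0.5] * n, [0.5] * m
values = [K(v, u, xi, lam, mu)]
for _ in range(3):
    v = argmin_block(n, lambda vv: K(vv, u, xi, lam, mu))
    values.append(K(v, u, xi, lam, mu))
    u = argmin_block(m, lambda uu: K(v, uu, xi, lam, mu))
    values.append(K(v, u, xi, lam, mu))
assert all(a >= b - 1e-12 for a, b in zip(values, values[1:]))
```

The monotonicity holds exactly here because the starting point lies on the grid and each block is minimized exactly over that grid.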
Next, we introduce the necessary definition and lemmas.
\begin{definition} \label{definitionConstrainedF}
\cite{bertsekas1997nonlinear,nouiehed2018convergence} Consider the constrained optimization problem $ \min_{\mathbf{x} \in \mathcal{F}} f(\mathbf{x}) $, where $\mathcal{F} \subseteq \mathbb{R}^n$ is a closed convex set. A point $\mathbf{x}^* \in \mathcal{F}$ is a first-order stationary point when
\begin{align*}
\triangledown f(\mathbf{x}^*)' (\mathbf{x}-\mathbf{x}^*) \geq 0, \forall \mathbf{x}\in \mathcal{F}.
\end{align*}
\end{definition}
\begin{lemma} \label{theoremLocalMinimum}
\cite{bertsekas1997nonlinear}
If $\mathbf{x}^*$ is a local minimum of $f$ over $\mathcal{F}$, then
\begin{align*}
\triangledown f(\mathbf{x}^*)' (\mathbf{x}-\mathbf{x}^*) \geq 0, \forall \mathbf{x}\in \mathcal{F}.
\end{align*}
\end{lemma}
\begin{lemma} \label{TheoremCauchy}
(Cauchy's convergence criterion) \cite{waner2001introduction} A sequence $\{X_n\}$ converges if and only if, for every $\varepsilon>0$, there is a number $N$ such that for all $n, m > N$,
$$|X_n -X_m| \leq \varepsilon.$$
\end{lemma}
Finally, we prove Theorem \ref{theormKstation}: combined with the closed-form solutions provided in Theorem \ref{solutionofvu}, $\mathcal{K}$ converges to a stationary point.
\begin{proof}
Considering $\mathbf{v}$ first, we suppose, for the sake of contradiction, that
\begin{align} \label{vnotconverge}
\lim\limits_{k \rightarrow \infty}||\mathbf{v}^{k+1}-\mathbf{v}^{k}||^2 \geq \varepsilon>0,
\end{align}
where $k$ is the iteration number of the inner layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 3-6 of Algorithm 1). As in Eq. (\ref{Kdecreasing}), we have:
\begin{align} \label{Kconverge}
\lim\limits_{k \rightarrow \infty}\mathcal{K}(\mathbf{v}^{k};\mathbf{u}^{k})-\mathcal{K}(\mathbf{v}^{k+1};\mathbf{u}^{k})=0.
\end{align}
Since $\mathbf{v}^{k+1}$ is a global optimal solution of the sub-problem $\mathcal{K}(\mathbf{v};\mathbf{u}^{k})$, Eq. (\ref{Kconverge}) implies that $\mathbf{v}^{k}$ is also a global optimal solution of $\mathcal{K}(\mathbf{v};\mathbf{u}^{k})$. By the supposition (\ref{vnotconverge}), $\mathbf{v}^k$ and $\mathbf{v}^{k+1}$ are two different points. However, when we update $\mathbf{v}$ with Eq. (\ref{solutionofv}) in Theorem \ref{solutionofvu}, we obtain only one global optimal solution of the sub-problem $\mathcal{K}(\mathbf{v};\mathbf{u}^{k})$, so the supposition does not hold. This proves that $\lim\limits_{k \rightarrow \infty}||\mathbf{v}^{k+1}-\mathbf{v}^{k}||^2=0$. The proof of $\lim\limits_{k \rightarrow \infty}||\mathbf{u}^{k+1}-\mathbf{u}^{k}||^2=0$ is similar, and then we have
\begin{align} \label{VUToPoint}
\lim\limits_{k \rightarrow \infty}||(\mathbf{v}^{k+1},\mathbf{u}^{k+1})-(\mathbf{v}^{k},\mathbf{u}^{k})||^2=0.
\end{align}
Then, according to Lemma \ref{TheoremCauchy}, we have that there exists a limit point $(\mathbf{v}^*,\mathbf{u}^*)$ of the sequence $\{(\mathbf{v}^k,\mathbf{u}^k)\}$, which satisfies:
\begin{align*}
\lim \limits_{k \rightarrow \infty}(\mathbf{v}^k,\mathbf{u}^k)=(\mathbf{v}^*,\mathbf{u}^*).
\end{align*}
Then, according to the update rule of the inner layer cyclic block coordinate descent procedure of Algorithm 1:
\begin{align*}
&\mathbf{v}^{k+1}= \argmin \limits_{\mathbf{v} \in [0,1]^n} \mathcal{K}(\mathbf{v};\mathbf{u}^k) \\
&\mathbf{u}^{k+1}= \argmin \limits_{\mathbf{u} \in [0,1]^m} \mathcal{K}(\mathbf{u};\mathbf{v}^{k+1})
\end{align*}
we have
\begin{align}
&\mathcal{K}(\mathbf{v}^*;\mathbf{u}^*) \le \mathcal{K}(\mathbf{v};\mathbf{u}^*) \ \forall \mathbf{v} \in [0,1]^n, \label{globalV}\\
&\mathcal{K}(\mathbf{u}^*;\mathbf{v}^*) \le \mathcal{K}(\mathbf{u};\mathbf{v}^*) \ \forall \mathbf{u} \in [0,1]^m. \label{globalu}
\end{align}
According to Eq. (\ref{globalV}), $\mathbf{v}^*$ is a global minimizer of $\mathcal{K}(\mathbf{v};\mathbf{u}^*)$. Then, according to Lemma \ref{theoremLocalMinimum}, we have that:
\begin{align*}
\triangledown_{\mathbf{v}}\mathcal{K}(\mathbf{v}^*)'(\mathbf{v}-\mathbf{v}^*) \geq 0, \forall \mathbf{v} \in [0,1]^n
\end{align*}
where $\triangledown_{\mathbf{v}}\mathcal{K}$ denotes the gradient of $\mathcal{K}$ with respect to the block $\mathbf{v}$. Similarly, we have that
\begin{align*}
\triangledown_{\mathbf{u}}\mathcal{K}(\mathbf{u}^*)'(\mathbf{u}-\mathbf{u}^*) \geq 0, \forall \mathbf{u} \in [0,1]^m.
\end{align*}
Then, combining with the above two inequalities, we have that
\begin{align*}
&\triangledown\mathcal{K}(\mathbf{v}^*,\mathbf{u}^*)'\left((\mathbf{v},\mathbf{u})-(\mathbf{v}^*,\mathbf{u}^*)\right) \\
=&(\triangledown_{\mathbf{v}}\mathcal{K}(\mathbf{v}^*),\triangledown_{\mathbf{u}}\mathcal{K}(\mathbf{u}^*))'(\mathbf{v}-\mathbf{v}^*,\mathbf{u}-\mathbf{u}^*) \\
=&\triangledown_{\mathbf{v}}\mathcal{K}(\mathbf{v}^*)'(\mathbf{v}-\mathbf{v}^*) + \triangledown_{\mathbf{u}}\mathcal{K}(\mathbf{u}^*)'(\mathbf{u}-\mathbf{u}^*) \\
\geq & 0, \ \forall(\mathbf{v},\mathbf{u}) \in [0,1]^{n+m}.
\end{align*}
Finally, according to Definition \ref{definitionConstrainedF}, the limit point $(\mathbf{v}^*,\mathbf{u}^*)$ is a stationary point, and thus, with the inner layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 3-6 of Algorithm 1), the sub-problem $\mathcal{K}$ with respect to all weight variables converges to a stationary point.
\end{proof}
\section{Appendix G. Proof of Theorem \ref{Converge}}
\begin{proof}
Before we prove the convergence of our BSPAUC (Algorithm 1), we first show that the value of the objective function $\mathcal{L}(\theta,\mathbf{v},\mathbf{u};\lambda)$ does not increase in each iteration of our BSPAUC. Let $\mathbf{v}^{t},\mathbf{u}^{t},\theta^{t}$ and $\lambda^t$ denote the values of $\mathbf{v},\mathbf{u},\theta$ and $\lambda$ in the $t$-th iteration of the outer layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 2-9 of Algorithm 1).
As in Eq. (\ref{Keachstep}), we have:
\begin{align*}
\mathcal{K}(\mathbf{v}^k,\mathbf{u}^k) \geq \mathcal{K}(\mathbf{v}^{k+1},\mathbf{u}^k) \geq \mathcal{K}(\mathbf{v}^{k+1},\mathbf{u}^{k+1}),
\end{align*}
where $k$ is the iteration number of inner layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 3-6 of Algorithm 1). As such, we can obtain the following inequality:
\begin{align} \label{KDecrease}
\mathcal{L}(\theta^{t},\mathbf{v}^{t+1},\mathbf{u}^{t+1 };\lambda^{t}) &\le \mathcal{L}(\theta^{t},\mathbf{v}^{t},\mathbf{u}^{t};\lambda^{t}).
\end{align}
Then, by the assumption on Algorithm 2 and Algorithm 3 for the update of $\theta$:
\begin{align} \label{ThetaDecrease}
\mathcal{L}(\theta^{t+1},\mathbf{v}^{t+1},\mathbf{u}^{t+1 };\lambda^{t}) &\le \mathcal{L}(\theta^{t},\mathbf{v}^{t+1},\mathbf{u}^{t+1};\lambda^{t}) .
\end{align}
Since $\lambda^{t+1} \ge \lambda^{t} >0$ and $\mathbf{v}\in [0,1]^n,\mathbf{u} \in [0,1]^m $, we obtain
\begin{align*}
\mathcal{L}(\theta^{t+1},\mathbf{v}^{t+1},\mathbf{u}^{t+1 };\lambda^{t+1}) \le \mathcal{L}(\theta^{t+1},\mathbf{v}^{t+1},\mathbf{u}^{t+1};\lambda^{t}) .
\end{align*}
Combining the above inequalities, we have
\begin{align*}
\mathcal{L}(\theta^{t+1},\mathbf{v}^{t+1},\mathbf{u}^{t+1 };\lambda^{t+1}) \le \mathcal{L}(\theta^{t},\mathbf{v}^{t},\mathbf{u}^{t};\lambda^{t}) .
\end{align*}
This shows that $\mathcal{L}$ does not increase in each iteration of our BSPAUC. Similarly to Eq. (\ref{Kbound}), we can prove that $\mathcal{L}$ has a lower bound:
\begin{equation} \small
\begin{aligned}
\mathcal{L}&\ge \frac{1}{nm} \sum_{i=1}^{n}\sum_{j=1}^{m}v_i u_j \xi_{ij} - \lambda\left(\frac{1}{n}\sum_{i=1}^{n} v_i+\frac{1}{m}\sum_{j=1}^{m} u_j\right) \\
&\ge - \lambda\left(\frac{1}{n}\sum_{i=1}^{n} v_i+\frac{1}{m}\sum_{j=1}^{m} u_j\right) \ge -2\lambda_{\infty} > -\infty.
\end{aligned}
\end{equation}
Since $\mathcal{L}$ does not increase in each iteration and has a lower bound, BSPAUC converges along with the increase of the hyper-parameter $\lambda$.
\end{proof}
\section{Appendix H. Proof of Theorem \ref{ConvergeToStan}}
\begin{proof}
When $\lambda$ reaches its maximum $\lambda_{\infty}$, we obtain the fixed objective function: $\mathcal{L}(\theta,\mathbf{v},\mathbf{u};\lambda_{\infty}).$ Let $\mathbf{v}^{t},\mathbf{u}^{t}$ and $\theta^{t}$ indicate the values of $\mathbf{v},\mathbf{u}$ and $\theta$ in the $t$-th iteration of outer layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 2-9 of Algorithm 1).
We first consider the weight parameters $(\mathbf{v},\mathbf{u})$. According to Theorem \ref{Converge} and the optimizing procedure (\ref{KDecrease}) of Algorithm 1, we have
\begin{small}
\begin{align} \label{VUconvergeT}
\lim\limits_{t \rightarrow \infty}\mathcal{L}(\mathbf{v}^{t},\mathbf{u}^{t};\theta^t,\lambda_{\infty})-\mathcal{L}(\mathbf{v}^{t+1},\mathbf{u}^{t+1};\theta^t,\lambda_{\infty})=0,
\end{align}
\end{small}
which implies that for $t \to \infty$ and $ \forall k$:
\begin{align} \label{Kconverge2}
\mathcal{K}(\mathbf{v}^{k};\mathbf{u}^{k})-\mathcal{K}(\mathbf{v}^{k+1};\mathbf{u}^{k})=0,
\end{align}
where $k$ is the iteration number of inner layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 3-6 of Algorithm 1).
We suppose, for the sake of contradiction, that
\begin{align} \label{vnotconvergeT}
\lim\limits_{t \rightarrow \infty}||\mathbf{v}^{t+1}-\mathbf{v}^{t}||^2 \geq \varepsilon>0,
\end{align}
which implies that for $t \to \infty$, $\exists k,$
\begin{align} \label{vnotconvergeK}
||\mathbf{v}^{k+1}-\mathbf{v}^{k}||^2 >0.
\end{align}
Since $\mathbf{v}^{k+1}$ is a global optimal solution of the sub-problem $\mathcal{K}(\mathbf{v};\mathbf{u}^{k})$, Eq. (\ref{Kconverge2}) implies that $\mathbf{v}^{k}$ is also a global optimal solution of $\mathcal{K}(\mathbf{v};\mathbf{u}^{k})$. By the supposition (\ref{vnotconvergeK}), $\mathbf{v}^k$ and $\mathbf{v}^{k+1}$ are two different points. However, when we update $\mathbf{v}$ with Eq. (\ref{solutionofv}) in Theorem \ref{solutionofvu}, we obtain only one global optimal solution of the sub-problem $\mathcal{K}(\mathbf{v};\mathbf{u}^{k})$, so the supposition (\ref{vnotconvergeT}) does not hold. This proves that $\lim\limits_{t \rightarrow \infty}||\mathbf{v}^{t+1}-\mathbf{v}^{t}||^2=0$. The proof of $\lim\limits_{t \rightarrow \infty}||\mathbf{u}^{t+1}-\mathbf{u}^{t}||^2=0$ is similar, and then we have
\begin{align} \label{VUPointConvergeT}
\lim\limits_{t \rightarrow \infty}||(\mathbf{v}^{t+1},\mathbf{u}^{t+1})-(\mathbf{v}^{t},\mathbf{u}^{t})||^2=0.
\end{align}
Next, we consider the model parameter $\theta$. According to Theorem \ref{Converge} and the optimizing procedure (\ref{ThetaDecrease}) of Algorithm 1, we have:
\begin{align*}
\lim\limits_{t \rightarrow \infty}\mathcal{L}(\theta^t;\mathbf{v}^{t+1},\mathbf{u}^{t+1},\lambda_{\infty})-\mathcal{L}(\theta^{t+1};\mathbf{v}^{t+1},\mathbf{u}^{t+1},\lambda_{\infty})=0,
\end{align*}
where $t$ is the iteration number of outer layer cyclic block coordinate descent procedure (\emph{i.e.}, lines 2-9 of Algorithm 1).
Since $\theta^{t+1}$ is initialized with $\theta^{t}$ and is a stationary point of $\mathcal{L}(\theta;\mathbf{v}^{t+1},\mathbf{u}^{t+1},\lambda_{\infty})$ obtained by a gradient-based method, \emph{i.e.}, Algorithm 2 \cite{gu2019scalable} or Algorithm 3 \cite{TSAM}, we get
\begin{align} \label{thetaConvergeT}
\lim\limits_{t \rightarrow \infty}||\theta^{t+1}-\theta^{t}||^2=0.
\end{align}
Combining (\ref{thetaConvergeT}) and (\ref{VUPointConvergeT}), we have $$\lim\limits_{t \rightarrow \infty}||(\theta^{t+1},\mathbf{v}^{t+1},\mathbf{u}^{t+1})-(\theta^{t},\mathbf{v}^{t},\mathbf{u}^{t})||^2=0.$$
Then, according to Lemma \ref{TheoremCauchy}, we have that there exists a limit point $(\theta^*,\mathbf{v}^*,\mathbf{u}^*)$ of the sequence $\{(\theta^t,\mathbf{v}^t,\mathbf{u}^t)\}$ satisfying:
$$\lim \limits_{t \rightarrow \infty}(\theta^t,\mathbf{v}^t,\mathbf{u}^t)=(\theta^*,\mathbf{v}^*,\mathbf{u}^*).$$
According to Theorem \ref{theormKstation}, the sub-problem $\mathcal{L}(\mathbf{v},\mathbf{u};\theta^*,\lambda_{\infty})$ with respect to all weight parameters converges to one stationary point, thus we have
$$\triangledown_{(\mathbf{v},\mathbf{u})}\mathcal{L}(\mathbf{v}^*,\mathbf{u}^*;\theta^*,\lambda_{\infty})'((\mathbf{v},\mathbf{u})-(\mathbf{v}^*,\mathbf{u}^*)) \geq 0 $$
for all $(\mathbf{v},\mathbf{u}) \in [0,1]^{n+m}$. At the same time, since $\theta$ is optimized by a gradient-based method, \emph{i.e.}, Algorithm 2 \cite{gu2019scalable} or Algorithm 3 \cite{TSAM}, the sub-problem $\mathcal{L}(\theta;\mathbf{v}^*,\mathbf{u}^*,\lambda_{\infty})$ with respect to the model parameter converges to a stationary point, and thus we have
$$\triangledown_{\theta}\mathcal{L}(\theta^*;\mathbf{v}^*,\mathbf{u}^*,\lambda_{\infty})'(\theta - \theta^*) \geq 0 $$
for any $\theta$. Then, combining with the above two inequalities, we have that
\begin{align*}
&\triangledown\mathcal{L}(\theta^*,\mathbf{v}^*,\mathbf{u}^*;\lambda_{\infty})'\left((\theta,\mathbf{v},\mathbf{u})-(\theta^*,\mathbf{v}^*,\mathbf{u}^*)\right) \\
=&(\triangledown_{\theta}\mathcal{L}(\theta^*),\triangledown_{(\mathbf{v},\mathbf{u})}\mathcal{L}(\mathbf{v}^*,\mathbf{u}^*))'(\theta -\theta^*,(\mathbf{v},\mathbf{u})-(\mathbf{v}^*,\mathbf{u}^*))\\
=&\triangledown_{\theta}\mathcal{L} (\theta^*;\mathbf{v}^*,\mathbf{u}^*,\lambda_{\infty})' (\theta - \theta^*) \\ &+\triangledown_{(\mathbf{v},\mathbf{u})}\mathcal{L} (\mathbf{v}^*,\mathbf{u}^*;\theta^*,\lambda_{\infty})' ((\mathbf{v},\mathbf{u})-(\mathbf{v}^*,\mathbf{u}^*))\\
\geq& 0.
\end{align*}
Finally, according to Definition \ref{definitionConstrainedF}, our BSPAUC converges to a stationary point of $\mathcal{L}(\theta,\mathbf{v},\mathbf{u};\lambda_{\infty})$ if the iteration number $T$ is large enough.
\end{proof}
\def\nsection#1{\section{#1}\setcounter{equation}{0}}
\def\nappendix#1{\vskip 1cm\no{\bf Appendix #1}
\def#1{#1}
\setcounter{equation}{0}}
\renewcommand{\theequation}{#1.\arabic{equation}}
\thispagestyle{empty}
\begin{flushright}
{\bf hep-th/9405193}
\end{flushright}
\vskip 2truecm
\begin{center}
{ \large \bf LINEAR DIFFERENTIAL EQUATIONS FOR A FRACTIONAL SPIN
FIELD}\footnote{ Revised version of the preprint DFTUZ/92/24,
to be published in J. Math. Phys.}\\
\vskip0.8cm
{ \bf
Jos\'e L. Cort\'es${}^{a,}$\footnote{e-mail: [email protected]}
and Mikhail S. Plyushchay${}^{a,b,}$\footnote{e-mail:
[email protected]}\\[0.3cm]
{\it ${}^{a}$Departamento de F\'{\i}sica Te\'orica,
Facultad de Ciencias}\\
{\it Universidad de Zaragoza, 50009 Zaragoza, Spain}\\
[0.5ex]{\it ${}^{b}$Institute for High Energy Physics,
Protvino, Russia}}\\[0.5cm]
\vskip2.0cm
{\bf Abstract}
\end{center}
The vector system of linear differential equations for a field with arbitrary
fractional spin is proposed using infinite-dimensional
half-bounded unitary representations of the $\overline{SL(2,R)}$ group.
In the case of $(2j+1)$-dimensional nonunitary representations
of that group, $0<2j\in Z$, they are transformed into equations for
spin-$j$ fields. A local gauge symmetry associated to the vector system
of equations is identified and the simplest gauge invariant
field action, leading to these equations, is constructed.
\newpage
\nsection{Introduction}
A (2+1)-dimensional space-time offers new possibilities which are not
present in any higher dimensional case: due to the Abelian nature of the
spatial rotation group, $SO(2)$, and the topology of many-particle
configuration space, the spin of a relativistic particle can be an
arbitrary real number, and a generalized statistics, intermediate between
Bose and Fermi statistics, is also possible \cite{1}
(see also reviews \cite{2} and
references therein).
The considerable interest in the field models of such particles, called
anyons, is due to their application to different planar physical
phenomena: the fractional quantum Hall effect, high-$T_{c}$
superconductivity and the description of physical processes in the
presence of cosmic strings \cite{3}.
There are several field models realizing anyonic states. They appear as
topological solitons in the O(3) nonlinear $\sigma$ model or the $CP^{1}$
model with the topological Hopf term \cite{4}, which turns in the low-energy
limit into the $CP^{1}$ model with the Chern-Simons term \cite{5}.
In the Higgs models with the topological Chern-Simons
term \cite{6,7}, anyons are the electrically charged vortices,
whereas in the models with the topologically massive vector gauge field,
the anyons are the particles directly associated to the matter field \cite{8}.
But, since all these models contain other states, they do not give a minimal
theory of anyons.
In the best known approach, point particles, described by scalar or spinor
fields, are coupled to a U(1) gauge field, the so-called statistical gauge
field, whose dynamics is governed by the Chern-Simons action \cite{9,2}. This
statistical gauge field changes the spin and statistics of particles, but
here it is not clear whether the only effect of the gauge field is to
endow the particle with arbitrary spin or whether residual interactions
are also present.
Therefore, we arrive at the natural question: is it possible to describe
the anyons in a minimal way, without using the statistical Chern-Simons
gauge field? For this purpose, one can turn to the group-theoretical
approach, generalizing the ordinary approach to the description of bosonic and
fermionic fields. Within such an approach, one can work with the
multi-valued representations of the (2+1)-dimensional Lorentz group
$SO(2,1)$ \cite{10}--\cite{12}, or with the definite infinite-dimensional
representations of its universal covering group $\overline{SO(2,1)}$ (or
$\overline{SL(2,R)}$, isomorphic to it) \cite{12}--\cite{17*}.
Though there is a
close connection between these two possibilities \cite{12}, the problem of
constructing the field actions in the case of using the multi-valued
representations of the Lorentz group is open. At the same time, different
variants of the free field equations and corresponding actions were
proposed for fractional spin fields within the framework of the approach
dealing with the infinite-dimensional representations of
$\overline{SL(2,R)}$ \cite{12}, \cite{14}--\cite{16}.
But here the mutually connected
problems of second quantization and the spin-statistics relation are still
unsolved. Therefore, strictly speaking, we cannot use the term `anyons'
for such fractional spin fields before establishing the spin-statistics
relation, and it seems important to
continue the search for new equations and corresponding actions for a
fractional spin field in the framework of the group-theoretical approach.
In the present paper we propose new equations for
fractional spin fields, which, in our opinion, have definite advantages with
respect to those from refs. \cite{12},\cite{14}--\cite{16}. They are linear
differential equations,
and the corresponding fields here, unlike those from equations
proposed in refs. \cite{14,16}, carry irreducible representations of
$\overline{SL(2,R)}$. In this sense the proposed equations are similar
to equations for `semions' (i.e. fields with spin $\pm(1/4+n)$,
$n=0,\pm1,...$),
which have been proposed in \cite{13,17},
and generalized to the case of arbitrary spin fields
with the help of the deformed Heisenberg algebra in a recent paper \cite{17*}.
The equations which we shall construct,
have the following property of `universality':
if we choose in them $(2j+1)$-dimensional nonunitary representation of
$\overline{SL(2,R)}$, we will get the equations for a massive field with
integer or half-integer spin $j$. In particular, at $j=1/2$ and $j=1$ these
equations are reduced to the Dirac equation and to the equation for a
topologically massive vector gauge field, respectively. On the other hand, the
choice of infinite-dimensional unitary representation of the discrete type
series, restricted from below or from above, which is the only additional
possibility allowing nontrivial solutions, gives the equations for
a field with fractional (arbitrary) spin. Therefore, the proposed equations
give some link between the ordinary description of bosonic integer and
fermionic half-integer spin fields, and the fields with arbitrary spin.
Moreover, as we shall see, they provide the first example
of equations which fix the choice of infinite-dimensional
unitary representations of the $\overline{SL(2,R)}$ group
for the description of fractional spin fields.
Also, we shall see that the vector system of linear equations satisfies an
identity which,
being a consequence of the choice of irreducible representation
of $\overline{SL(2,R)}$, can be used as a dynamical principle
for the construction of the corresponding gauge invariant field action.
The paper is organized as follows.
In sect. 2 we investigate the equation, which, in general case,
establishes a mass-spin relation for $(2j+1)$-- or infinite--component fields,
depending on the choice of the corresponding representation of
$\overline{SL(2,R)}$. Except for the case $j=1/2$ and $j=1$,
it does not describe irreducible representations of (2+1)-dimensional
quantum mechanical Poincar\'{e} group $\overline{ISO(2,1)}$.
Then, in sect. 3, proceeding from this equation,
we find in a simple way the system of equations
which describe a relativistic field with arbitrary (fixed) spin and fixed mass,
i.e., an irreducible representation of $\overline{ISO(2,1)}$. A first attempt
in the direction of identifying the field action is pointed out.
Sect. 4 is devoted to the discussion of the results and to
the concluding remarks. Here, in particular, we demonstrate that the
proposed equations unambiguously fix the choice of the representations
of the discrete series
$D^{\pm}_{\alpha}$ of $\overline{SL(2,R)}$
for the description of fractional spin fields, and that
they are the only possible linear vector equations for such fields.
\nsection{Mass-spin equation}
Let us consider the (2+1)-dimensional field equation \cite{14,15}
\begin{equation}
(PJ-\varepsilon \alpha m)\Psi=0,
\label{maj}
\end{equation}
where $\varepsilon=+1$ or $-1$, and $J^{\mu}$ are the generators of the
$\overline{SL(2,R)}$ group, which satisfy the algebra:
\begin{equation}
[J^{\mu},J^{\nu}]=-i\epsilon^{\mu\nu\lambda}J_{\lambda}.
\label{alg}
\end{equation}
Here, a real parameter $\alpha\neq 0$ defines the value of the
$\overline{SL(2,R)}$ Casimir operator:
\begin{equation}
J^{2}=-\alpha(\alpha-1),
\label{cas}
\end{equation}
i.e. in the case $\alpha=-j$, $j>0$ being integer or half-integer,
we suppose the choice of a $(2j+1)$-dimensional irreducible nonunitary
representation $\tilde{D}_{j}$ of $\overline{SL(2,R)}$, whereas in the case
$\alpha>0$ we mean the choice of an irreducible unitary infinite-dimensional
representation of the discrete type series $D^{\pm}_{\alpha}$ of that group
\cite{18}.
These two types of representations are characterized by the following
property: they have a lowest or a highest state annihilated by
the corresponding operator $J_{+}$ or $J_{-}$ (see below) in the case
of representations $D^{\pm}_{\alpha}$, or both such states in the case of
finite-dimensional representations $\tilde{D}_{j}$.
Let us turn to the case of finite-dimensional representations and
first consider the simplest nontrivial
case of the spinor representation
\begin{equation}
J^{\mu}=-\frac{1}{2}\gamma^{\mu},
\label{spinor}
\end{equation}
\begin{equation}
\gamma^{\mu}\gamma^{\nu}=-g^{\mu\nu}+i\epsilon^{\mu\nu\lambda}\gamma_{\lambda},
\ \gamma^{0}=\sigma^{3},\ \gamma^{i}=i\sigma^{i},\ i=1,2,
\label{gamma}
\end{equation}
where $\sigma^{a},$ $a=1,2,3$, are the Pauli matrices. It is this simplest
case that will help us to find the equations we are looking for. Generators
(\ref{spinor}) correspond to
the 2-dimensional irreducible nonunitary representation $\tilde{D}_{1/2}$
with $-\alpha=j=1/2$, and reduce equation
(\ref{maj}) to the (2+1)-dimensional Dirac equation:
\begin{equation}
(P\gamma-\varepsilon m)\Psi=0.
\label{dir}
\end{equation}
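Before proceeding, we note that the representation content above is easy to verify numerically. The following sketch (ours, using NumPy; it assumes the conventions $g_{\mu\nu}={\rm diag}(-1,1,1)$ and $\epsilon^{012}=1$ implicit in (\ref{gamma})) checks the $\gamma$-matrix algebra (\ref{gamma}), the commutation relations (\ref{alg}) and the Casimir value (\ref{cas}) for the spinor representation (\ref{spinor}):

```python
import numpy as np

# Pauli matrices and the (2+1)-d gamma matrices of eq. (2.5):
# gamma^0 = sigma^3, gamma^i = i*sigma^i, metric g = diag(-1,1,1), eps^{012} = +1
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g = np.diag([-1.0, 1.0, 1.0])
gam = [s3, 1j * s1, 1j * s2]

eps = np.zeros((3, 3, 3))                  # totally antisymmetric, eps[0,1,2] = +1
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# gamma^mu gamma^nu = -g^{mu nu} + i eps^{mu nu lam} gamma_lam, eq. (2.5)
for mu in range(3):
    for nu in range(3):
        rhs = -g[mu, nu] * np.eye(2) + sum(
            1j * eps[mu, nu, l] * g[l, l] * gam[l] for l in range(3))
        assert np.allclose(gam[mu] @ gam[nu], rhs)

# J^mu = -gamma^mu/2 satisfies the algebra (2.2) and the Casimir condition (2.3)
J = [-0.5 * gam[m] for m in range(3)]
Jlow = [g[m, m] * J[m] for m in range(3)]          # J_mu (diagonal metric)
for mu in range(3):
    for nu in range(3):
        comm = J[mu] @ J[nu] - J[nu] @ J[mu]
        assert np.allclose(comm, sum(-1j * eps[mu, nu, l] * Jlow[l]
                                     for l in range(3)))
alpha = -0.5                                        # spinor case, alpha = -j = -1/2
J2 = sum(J[m] @ Jlow[m] for m in range(3))
assert np.allclose(J2, -alpha * (alpha - 1) * np.eye(2))   # J^2 = -3/4
print("spinor representation: algebra and Casimir verified")
```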
The angular momentum operator
\begin{equation}
M_{\mu\nu}=x_{\mu}P_{\nu}-x_{\nu}P_{\mu}+\epsilon_{\mu\nu\lambda}J^{\lambda}
\label{ang}
\end{equation}
is not hermitian and therefore it is necessary to use the indefinite
`internal' Dirac scalar product
\[
(\Psi_{1},\Psi_{2})=\overline{\Psi}{}^{a}_{1}\Psi_{2}^{a},\quad
\overline{\Psi}=\Psi^{\dagger}\gamma^{0},
\]
to restore hermiticity.
From the Klein-Gordon equation
\begin{equation}
(P^{2}+m^{2})\Psi=0,
\label{kle}
\end{equation}
following from (\ref{dir}), we conclude that
in the case $-\alpha=j=1/2$ initial equation (\ref{maj})
describes a particle
with mass $M=m$ and spin $s=-\varepsilon/2$, where $s$ is the eigenvalue
of the relativistic spin operator
\begin{equation}
S=-\frac{1}{2\sqrt{-P^{2}}}\epsilon_{\mu\nu\lambda}P^{\mu}M^{\nu\lambda}.
\label{4}
\end{equation}
In the case of the vector representation ($\alpha=-1$),
\begin{equation}
(J_{\mu})^{\alpha}{}_{\beta}=-i\epsilon^{\alpha}{}_{\mu\beta},
\label{j1}
\end{equation}
we have $J^{2}=-2$, and eq.
(\ref{maj}) becomes the equation for the topologically massive
vector field \cite{19}:
\begin{equation}
(-i\epsilon^{\alpha\mu}{}_{\beta}P_{\mu}+\varepsilon mg^{\alpha}{}_{\beta})
\Psi^{\beta}=0.
\label{7}
\end{equation}
From (\ref{7}) it follows that $P_{\mu}\Psi^{\mu}=0$,
and that the field $\Psi^{\mu}$ satisfies the Klein-Gordon
equation (\ref{kle}).
Then, using definition (\ref{4}),
we conclude that the field $\Psi^{\mu}$ has spin $s=-\varepsilon$.
$J^{\mu}$ and the angular momentum operator (\ref{ang})
are hermitian with respect to
the obvious indefinite `internal' scalar product
\[
(\Psi_{1},\Psi_{2})=\Psi_{1}^{*\alpha}g_{\alpha\beta}\Psi_{2}^{\beta}.
\]
Putting $\Psi_{\alpha}=\frac{1}{2}
\epsilon_{\alpha\beta\gamma}F^{\beta\gamma},$
$F^{\alpha\beta}=\partial^{\alpha} A^{\beta}-\partial^{\beta}A^{\alpha}$,
we can rewrite eq. (\ref{7}) in the form of equations of motion for the field
strength tensor \cite{19}:
\begin{equation}
(g_{\mu\lambda}\partial_{\nu}+\frac{1}{2}m\epsilon_{\mu\nu\lambda})
F^{\nu\lambda}=0.
\label{8}
\end{equation}
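The same check applies to the vector representation (\ref{j1}); the sketch below (ours, with the same assumed conventions) confirms the algebra (\ref{alg}) and the Casimir value $J^{2}=-2$:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0])                        # metric, as in eq. (2.5)
eps = np.zeros((3, 3, 3))                            # eps^{mu nu lam}, eps^{012} = +1
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# (J_mu)^a_b = -i eps^a_{mu b}: lower the 2nd and 3rd indices of eps^{ast}
epsmix = np.einsum('ms,bt,ast->mab', g, g, eps)      # eps^a_{mu b}
Jlow = [-1j * epsmix[mu] for mu in range(3)]         # J_mu of eq. (2.10)
J = [g[mu, mu] * Jlow[mu] for mu in range(3)]        # J^mu (diagonal metric)

# sl(2,R) algebra, eq. (2.2)
for mu in range(3):
    for nu in range(3):
        comm = J[mu] @ J[nu] - J[nu] @ J[mu]
        rhs = sum(-1j * eps[mu, nu, l] * Jlow[l] for l in range(3))
        assert np.allclose(comm, rhs)

# Casimir (2.3) with alpha = -1: J^2 = -alpha*(alpha-1) = -2
J2 = sum(J[m] @ Jlow[m] for m in range(3))
assert np.allclose(J2, -2.0 * np.eye(3))
print("vector representation: algebra and J^2 = -2 verified")
```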
An arbitrary $(2j+1)$-dimensional nonunitary representation
$\tilde{D}_{j}$ of
$\overline{SL(2,R)}$ can be obtained from the corresponding
$(2j+1)$-dimensional representation $D_{j}$, $j=1/2,1,3/2,...$,
of the $SU(2)$ group.
Indeed, let the hermitian operators ${\cal J}^{a}$,
$a=1,2,3$, be the generators of $SU(2)$ group in the representation
$D_{j}$, i.e.
\begin{equation}
[{\cal J}^{a},{\cal J}^{b}]=i\epsilon^{abc}{\cal J}^{c},
\label{su}
\end{equation}
and
\begin{equation}
{\cal J}^{a}{\cal J}^{a}=j(j+1).
\label{casu}
\end{equation}
Then the substitution
\begin{equation}
J_{0}={\cal J}^{3},\quad
J^{i}=-i{\cal J}^{i},\quad i=1,2,
\label{sub}
\end{equation}
gives us the operators $J^{\mu}$ satisfying commutation relations
(\ref{alg}), and condition (\ref{cas}) with $\alpha=-j$.
Then to have an angular momentum operator (\ref{ang}) as a hermitian
one, it is
necessary to use the corresponding indefinite scalar product,
which we do not write here for the general case.
Note only that representation (\ref{spinor})
is exactly representation (\ref{sub}) for $j=1/2$
if we put ${\cal J}^{a}=\sigma^{a}/2$,
whereas representation (\ref{j1}) is connected with the
corresponding representation (\ref{sub})
with $({\cal J}^{b})^{ac}=i\epsilon^{abc}$ at $j=1$
via the unitary transformation:
\[
U\tilde{J}^{\mu}U^{-1}=J^{\mu},
\]
where we denoted the operators (\ref{sub}) as
$\tilde{J}^{\mu}$, and $U$ is the
unitary diagonal $3\times 3$-matrix with nonzero elements:
$U^{0}{}_{0}=-i$, $U^{1}{}_{1}=U^{2}{}_{2}=1$.
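The construction (\ref{sub}) can likewise be verified for several $j$ at once. The following sketch (ours) builds the standard $SU(2)$ generators of $D_{j}$ via the ladder operators, applies the substitution (\ref{sub}), and checks (\ref{alg}) and (\ref{cas}) with $\alpha=-j$; note that with $g_{\mu\nu}={\rm diag}(-1,1,1)$ the substitution gives $J^{0}=-J_{0}=-{\cal J}^{3}$:

```python
import numpy as np

def su2_generators(j):
    """Hermitian SU(2) generators J^1, J^2, J^3 of the (2j+1)-dim rep D_j."""
    dim = int(round(2 * j)) + 1
    m = np.array([j - n for n in range(dim)])            # J^3 eigenvalues j, ..., -j
    J3 = np.diag(m).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)
    for n in range(1, dim):                               # <m+1| J^+ |m>
        Jp[n - 1, n] = np.sqrt(j * (j + 1) - m[n] * (m[n] + 1))
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), J3

g = np.diag([-1.0, 1.0, 1.0])
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

for j in (0.5, 1.0, 1.5, 2.0):
    C1, C2, C3 = su2_generators(j)
    assert np.allclose(C1 @ C2 - C2 @ C1, 1j * C3)        # su(2) algebra (2.13)
    # substitution (2.15): J_0 = C3, J^i = -i C^i, hence J^0 = -C3
    J = [-C3, -1j * C1, -1j * C2]                          # J^mu
    Jlow = [g[m, m] * J[m] for m in range(3)]              # J_mu
    for mu in range(3):
        for nu in range(3):
            comm = J[mu] @ J[nu] - J[nu] @ J[mu]
            rhs = sum(-1j * eps[mu, nu, l] * Jlow[l] for l in range(3))
            assert np.allclose(comm, rhs)                  # algebra (2.2)
    # Casimir (2.3) with alpha = -j: J^2 = -j(j+1)
    J2 = sum(J[m] @ Jlow[m] for m in range(3))
    assert np.allclose(J2, -j * (j + 1) * np.eye(J2.shape[0]))
print("substitution (2.15) reproduces sl(2,R) for j = 1/2, 1, 3/2, 2")
```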
In the case of $j>1$, the corresponding $(2j+1)$-component field $\Psi$
satisfying the equation (\ref{maj}) with $\alpha=-j$, does not describe an
irreducible representation of the (2+1)-dimensional quantum mechanical
Poincar\'{e} group $\overline{ISO(2,1)}$.
Indeed, first of all we note that the equation has no nontrivial solutions
in the cases $p^{2}>0$ and $p^{2}=0$ since according to (\ref{sub}),
operators $J^{i}$ and $J^{0}\pm J^{i}$ have no real nonzero eigenvalues.
Therefore, the nontrivial solutions may exist only for $p^{2}<0$.
Then, passing over to the rest frame ${\bf p}={\bf 0}$ via the corresponding
Lorentz transformation, and using the representation
where the operator $J_{0}$ is diagonal, we find the solutions of the equation
(\ref{maj}):
\begin{equation}
\Psi_{r}\propto \delta(p^{0}-\epsilon^{0}M_{\vert r\vert})\delta({\bf p}).
\label{solf}
\end{equation}
Here
$r=-j,-j+1,...,j-1,j$, except for the value $r=0$ for integer
$j$, $\epsilon^{0}=\varepsilon\cdot{\rm sign}\,r$,
\begin{equation}
M_{\vert r\vert}=m\frac{j}{\vert r\vert}
\label{euc}
\end{equation}
is the mass of the corresponding state, whereas, according to
(\ref{4}), its spin is $s=-\varepsilon\vert r\vert$.
Therefore, we conclude that eq. (\ref{maj})
describes two states with fixed mass $M=m$
and spin $s=-\varepsilon j$ only in the cases when $-\alpha=j=1/2$ and $1$.
These two states differ in their energy signs.
In all other cases eq. (\ref{maj})
describes a set of $2N$ states, where $N=j$ and $N=(2j+1)/2$ for
the cases of integer and half-integer $j$'s, respectively.
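The spectrum (\ref{solf})--(\ref{euc}) is easy to confirm numerically: in the rest frame, eq. (\ref{maj}) reduces to $(p^{0}J_{0}-\varepsilon\alpha m)\Psi=0$, which has nontrivial solutions only when this matrix is singular. The sketch below (ours) checks this for $j=3/2$, where (\ref{euc}) gives the mass spectrum $\{m,3m\}$, each value occurring for both energy signs:

```python
import numpy as np

j, m, epsv = 1.5, 1.0, +1                  # j = 3/2 example, epsilon = +1
alpha = -j
r = np.array([j - n for n in range(int(2 * j) + 1)])   # eigenvalues of J_0
J0 = np.diag(r)

masses = []
for rv in r:
    if rv == 0:                            # r = 0 gives no massive solution
        continue
    p0 = epsv * alpha * m / rv             # root of det(p0*J0 - eps*alpha*m) = 0
    A = p0 * J0 - epsv * alpha * m * np.eye(len(r))
    assert np.linalg.matrix_rank(A) < len(r)   # a nontrivial solution exists
    masses.append(abs(p0))

# mass formula M_{|r|} = m j/|r|, eq. (2.17): for j = 3/2 the spectrum is {m, 3m}
assert np.allclose(sorted(masses), sorted(j * m / np.abs(r[r != 0])))
print("rest-frame masses for j = 3/2:", sorted(set(masses)))
```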
Now let us turn to the case of irreducible unitary infinite-dimensional
representations of the discrete series $D_{\alpha}^{+}$ or $D^{-}_{\alpha}$ of
$\overline{SL(2,R)}$. These representations are characterized by the value
of the Casimir operator (\ref{cas}) with $\alpha>0$,
and by the eigenvalues of the operator $J_{0}$:
$j_{0}^{n}=\alpha+n$ and $j^{n}_{0}=-(\alpha+n)$ in these two series,
respectively, where $n=0,1,2,...$ \cite{18}.
In the representation $D^{+}_{\alpha}$ with diagonal operator $J^{0}$,
the matrix elements of $J^{\mu}$ are \cite{15}:
\begin{equation}
J^{0}_{kn}=-(\alpha+n)\delta_{k,n},
\label{9a}
\end{equation}
\begin{equation}
J^{+}_{kn}=-\sqrt{(2\alpha+n-1)n}\cdot\delta_{k+1,n},\ \
J^{-}_{kn}=-\sqrt{(2\alpha+n)(n+1)}\cdot\delta_{k-1,n},
\label{9b}
\end{equation}
where $J^{\pm}=J^{1}\mp iJ^{2}$ and $k,n=0,1,2,....$
The representation $D^{-}_{\alpha}$
can be obtained from (\ref{9a}), (\ref{9b}) through the substitution
\cite{12}:
\begin{equation}
J_{0}\rightarrow -J_{0},\quad
J_{1}\rightarrow -J_{1},\quad
J_{2}\rightarrow J_{2}.
\label{5}
\end{equation}
Here generators $J^{\mu}$ are hermitian with respect to the positive
definite scalar product $(\Psi_{1},\Psi_{2})=\Psi^{*n}_{1}\Psi^{n}_{2}$.
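The matrix elements (\ref{9a}), (\ref{9b}) can be checked against the algebra directly. The sketch below (ours) verifies hermiticity, the ladder relations $[J^{0},J^{\pm}]=\pm J^{\pm}$ and $[J^{+},J^{-}]=-2J^{0}$, and the Casimir condition (\ref{cas}) on a finite truncation of $D^{+}_{\alpha}$; the relations involving $J^{+}J^{-}$ necessarily fail in the last row and column of any truncation, so they are tested away from the boundary:

```python
import numpy as np

alpha, N = 0.25, 40          # any alpha > 0; N-dimensional truncation of D^+_alpha
n = np.arange(N)
J0 = np.diag(-(alpha + n))                        # eq. (2.18)
Jp = np.zeros((N, N))
Jm = np.zeros((N, N))
for k in range(1, N):
    val = np.sqrt((2 * alpha + k - 1) * k)        # eq. (2.19)
    Jp[k - 1, k] = -val                            # J^+_{k-1,k}
    Jm[k, k - 1] = -val                            # J^-_{k,k-1}
J1 = (Jp + Jm) / 2                                 # from J^{+-} = J^1 -+ i J^2
J2op = 1j * (Jp - Jm) / 2

# hermiticity with respect to the positive definite scalar product
for M in (J0, J1, J2op):
    assert np.allclose(M, np.conj(M).T)

# ladder relations [J^0, J^{+-}] = +-J^{+-} hold exactly, even when truncated
assert np.allclose(J0 @ Jp - Jp @ J0, Jp)
assert np.allclose(J0 @ Jm - Jm @ J0, -Jm)

# [J^+, J^-] = -2 J^0 and the Casimir J^2 = -alpha(alpha-1), eq. (2.3),
# hold away from the truncation boundary (last row/column)
comm = Jp @ Jm - Jm @ Jp
assert np.allclose(comm[:N-1, :N-1], -2 * J0[:N-1, :N-1])
Cas = -J0 @ J0 + (Jp @ Jm + Jm @ Jp) / 2
assert np.allclose(Cas[:N-1, :N-1], -alpha * (alpha - 1) * np.eye(N - 1))
print("D^+_alpha matrix elements: algebra and Casimir verified on a truncation")
```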
In the cases of the infinite-dimensional representations $D^{\pm}_{\alpha}$,
eq. (\ref{maj}) is a $(2+1)$-dimensional analog of the Majorana equation
\cite{20}, which appears as the equation for the physical subspace in
the model of the relativistic particle with torsion \cite{15}.
Moreover, the mass spectrum (\ref{euc}) appears in that model too:
it is the spectrum of the model in the euclidean space-time.
Note also that the action for the model of the relativistic particle with
torsion, in turn, appeared as the effective
action for a charged particle interacting with a $U(1)$
statistical gauge field \cite{21}.
Passing over to the rest frame ${\bf p}={\bf 0}$ in the case when $p^{2}<0$,
we find the solutions of this equation:
\begin{equation}
\Psi_{n}\propto \delta(p^{0}-\varepsilon \varepsilon'M_{n})\delta({\bf p}),
\label{solm}
\end{equation}
where $\varepsilon'=+1$ and $-1$ for representations $D^{+}_{\alpha}$ and
$D^{-}_{\alpha}$, respectively. Their masses and spins are
\begin{equation}
M_{n}=m\frac{\alpha}{\alpha+n},\quad
s_{n}=\varepsilon(\alpha+n).
\label{mn}
\end{equation}
If we take the direct sum of representations,
$D^{+}_{\alpha}\oplus D^{-}_{\alpha}$,
we will have the states with both energy signs in the massive sector
\cite{12}. The Majorana equation (\ref{maj}), as well as its
$(3+1)$-dimensional analog \cite{20}, besides massive solutions
also has massless and tachyonic solutions (see ref. \cite{15}).
To single out the state with highest mass, $M_{0}=m$, and lowest spin,
$s_{0}=\varepsilon\alpha$, and to get rid of massless and tachyonic
solutions, one can supplement equation (\ref{maj}), linear in $P^{\mu}$,
with the Klein-Gordon equation (\ref{kle}) \cite{12,15,16}. Obviously,
in the case of finite-dimensional representations $\tilde{D}_{j}$
and for the corresponding choice $\alpha=-j$, these
two equations single out two states with mass $M=m$ and spin
$s=-\varepsilon j$, differing in their energy sign.
\nsection{Linear differential equations for fractional spin}
Since eqs. (\ref{maj}) and (\ref{kle}) are completely independent in the case
of representations $D^{\pm}_{\alpha}$ (as well as in the case of
representations $\tilde{D}_{j}$, $j\neq 1/2,1$), they are not very suitable
as a basis for constructing the action and quantum theory of the
fractional spin field.
In this section we shall construct
the set of linear differential equations
for the field with arbitrary fractional spin in such a way that
both equations (\ref{maj}) and (\ref{kle}) will appear as a consequence
of them.
To find such equations, let us multiply eq. (\ref{dir}) by an invertible
operator $\frac{1}{2}\gamma^{\mu}$. Then we obtain the vector system of
three equations:
\begin{equation}
L_{\mu}\Psi=0,
\label{3eq}
\end{equation}
with
\begin{equation}
L_{\mu}\equiv (\alpha P_{\mu}-i\epsilon_{\mu\nu\lambda}P^{\nu}J^{\lambda}+
\varepsilon mJ_{\mu}),
\label{leq}
\end{equation}
where $J_{\mu}=-\frac{1}{2}\gamma_{\mu}$ and $\alpha=-\frac{1}{2}$.
These equations are equivalent to eq. (\ref{dir}). Let us show now that in
the general case, i.e. for the choice of any representation $\tilde{D}_{j}$
or $D^{\pm}_{\alpha},$
these equations are equivalent to eqs. (\ref{maj}) and
(\ref{kle}). Indeed, multiplying eq. (\ref{3eq}) by $J_{\mu}$, $P_{\mu}$
and $i\epsilon_{\mu\nu\lambda}P^{\nu}J^{\lambda}$, we correspondingly get:
\begin{equation}
(\alpha-1)(PJ-\varepsilon \alpha m)\Psi=0,
\label{c1}
\end{equation}
\begin{equation}
\left(\alpha(P^{2}+m^{2})+\varepsilon m(PJ-\varepsilon \alpha m)\right)
\Psi=0,
\label{c2}
\end{equation}
\begin{equation}
\left(\alpha(\alpha-1)(P^{2}+m^{2})+(PJ+\varepsilon(\alpha-1)m)(PJ-
\varepsilon \alpha m)\right)\Psi=0.
\label{c3}
\end{equation}
Whence we immediately arrive at the desired conclusion for
$\alpha>0,$ $\alpha\neq1$, and $\alpha=-j$.
As for the case $\alpha=1$, in which eq. (\ref{c1}) disappears, we note
that eqs. (\ref{c2}) and (\ref{c3}) have no nontrivial solutions in the
massless case $P^{2}=0$, and as a result, these two equations also are
equivalent to equations (\ref{maj}) and (\ref{kle}).
Therefore, the vector set of three equations (\ref{3eq})
describes a relativistic field with spin $s=\epsilon\alpha$ and mass $M=m$
for any corresponding choice of irreducible representations
$\tilde{D}_{j}$ or $D^{\pm}_{\alpha}$.
Moreover, one can verify directly that
in the general case any two
equations from eqs. (\ref{3eq}) are equivalent to the complete set of three
equations. For example, in the case of representation $D^{+}_{\alpha}$
it can be easily done with the help of the explicit form of the generators
(\ref{9a}), (\ref{9b}). Therefore, the presence of three equations
(\ref{3eq}) gives us a covariant set of linear differential equations
for the description of an arbitrary spin field.
As a consequence, we have the following relation
\begin{equation}
R^{\mu}L_{\mu}\equiv 0
\label{gau}
\end{equation}
with
\begin{equation}
R_{\mu}=\left((\alpha-1)^{2}g_{\mu\nu}
-i(\alpha-1)\epsilon_{\mu\nu\lambda}J^{\lambda}+J_{\nu}J_{\mu}\right)P^{\nu},
\label{R}
\end{equation}
which expresses the dependence
among eqs. (\ref{3eq}) in a covariant way.
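The identity (\ref{gau}) and the consistency relations (\ref{c1})--(\ref{c3}) hold for any irreducible representation. As an independent check (ours, with the same conventions as above), the sketch below verifies (\ref{gau}) and (\ref{c1}) numerically in the 2-dimensional spinor representation for an arbitrary real momentum:

```python
import numpy as np

# spinor representation: J^mu = -gamma^mu/2, alpha = -1/2 (any irreducible
# representation would do; this 2-dim case keeps the check small)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g = np.diag([-1.0, 1.0, 1.0])
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0
epslow = -eps                              # eps_{mu nu lam} = -eps^{mu nu lam}

J = [-0.5 * M for M in (s3, 1j * s1, 1j * s2)]           # J^mu
Jlow = [g[m, m] * J[m] for m in range(3)]                 # J_mu
alpha, m, ev = -0.5, 1.3, 1                               # ev = epsilon = +-1
rng = np.random.default_rng(0)
p = rng.normal(size=3)                                    # arbitrary real P^mu
plow = g @ p
I2 = np.eye(2)

# L_mu of eq. (3.2) and R_mu of eq. (3.7)
L = [alpha * plow[mu] * I2
     - 1j * sum(epslow[mu, nu, l] * p[nu] * J[l]
                for nu in range(3) for l in range(3))
     + ev * m * Jlow[mu]
     for mu in range(3)]
R = [sum(p[nu] * ((alpha - 1) ** 2 * g[mu, nu] * I2
                  - 1j * (alpha - 1) * sum(epslow[mu, nu, l] * J[l]
                                           for l in range(3))
                  + Jlow[nu] @ Jlow[mu])
         for nu in range(3))
     for mu in range(3)]

# identity (3.6): R^mu L_mu = 0 for any p and m
RL = sum(g[mu, mu] * R[mu] @ L[mu] for mu in range(3))
assert np.allclose(RL, 0)

# consistency (3.3): J^mu L_mu = (alpha - 1)(PJ - eps*alpha*m)
PJ = sum(p[mu] * Jlow[mu] for mu in range(3))
JL = sum(J[mu] @ L[mu] for mu in range(3))
assert np.allclose(JL, (alpha - 1) * (PJ - ev * alpha * m * I2))
print("R^mu L_mu = 0 and eq. (3.3) verified in the spinor representation")
```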
For completeness, let us write here the commutation relations of the
operators $L_{\mu}$:
\begin{equation}
[L_{\mu},L_{\nu}]=-im\epsilon_{\mu\nu\lambda}\left(L^{\lambda}+
\frac{P^{\lambda}}{m}(PJ-\varepsilon \alpha m)\right),
\label{LL}
\end{equation}
and note, that in the case $\alpha\neq 1$ they can be rewritten in the
following simple form:
$$
[L_{\mu},L_{\nu}]=-im\epsilon_{\mu\nu\lambda}\left(g^{\lambda\rho}+
\frac{P^{\lambda}J^{\rho}}{m(\alpha-1)}\right)L_{\rho}.
$$
As the simplest action leading to the proposed equations (\ref{3eq}),
we can take
\begin{equation}
A=\int {\cal L}d^{3}x, \quad
{\cal L}=\bar{\chi}^{\mu}L_{\mu}\Psi+ \bar{\Psi}L_{\mu}^{\dagger}\chi^{\mu}+
c\cdot\bar{\Psi} (PJ-\varepsilon \alpha m) \Psi,
\label{act}
\end{equation}
where $c$ is an arbitrary real parameter and
$\chi_{\mu}=\chi_{\mu}^{a}$ and
$\bar{\chi}_{\mu}=\bar{\chi}{}^{a}_{\mu}$
are mutually conjugate fields
with index $a$ taking values in the chosen
representation of $\overline{SL(2,R)}$ group.
The variation of the action (\ref{act}) with respect to
$\bar{\chi}{}^{\mu}$ gives equations (\ref{3eq}), whereas the
$\bar{\Psi}$-variation gives
\begin{equation}
L_{\mu}^{\dagger}\chi^{\mu}+ c\cdot (PJ-\varepsilon \alpha m) \Psi = 0,
\label{l1}
\end{equation}
and, besides, we have corresponding equations for the conjugate fields $
\bar{\Psi}$ and $\bar{\chi}^{\mu}.$ Hence, the basic field satisfies the
equations which we want to have, and from (\ref{l1}) we conclude that
$$
L_{\mu}^{\dagger}\chi^{\mu} = 0
$$
for any choice of $c$ \cite{21*}.
Now it is necessary to verify that the
fields $\chi^{\mu}$ and $\bar{\chi}^{\mu}$ are purely auxiliary fields.
In the simplest way this can be done within the Hamiltonian formalism
which we hope to present in a future work.
\nsection{Discussion and conclusions}
We have proposed the system of linear differential equations (\ref{3eq})
for a fractional spin field using infinite-dimensional representations
$D^{\pm}_{\alpha}$. They have the form of a covariant (vector) set of
infinite-dimensional matrix equations, of which only two
are independent; the presence of the third one allows one to have a covariant
set of equations.
One can show that eq. (\ref{3eq}) is in fact the only possible linear
vector set of equations for a fractional spin field.
In other words, if we take an arbitrary linear combination of the operators
$mJ_{\mu}$, $P_{\mu}$ and $\epsilon_{\mu\nu\lambda}P^{\nu}J^{\lambda}$
as the operator $L_{\mu}$ and then demand that equations
of the form (\ref{3eq})
would be equivalent to eqs. (\ref{maj}) and (\ref{kle}), we shall obtain for
the operators $L_{\mu}$ the form (\ref{leq}).
Moreover, the following more general remarkable property of eqs. (\ref{3eq})
is valid. Let us take a set of linear differential
equations of the form (\ref{3eq}) as the equations for a
(2+1)-dimensional field, assuming that
$L_{\mu}=\alpha P_{\mu}
-i\beta\epsilon_{\mu\nu\lambda}P^{\nu}J^{\lambda}+
\varepsilon mJ_{\mu}$,
and that the generators
$J_{\mu}$ are the most general
translation-invariant Lorentz group generators satisfying
the commutation relations (\ref{alg}) (i.e. not fixing the choice of
a representation of $\overline{SO(2,1)}$ from the very beginning).
In this case the parameters $\alpha$ and $\beta$ are arbitrary dimensionless
constants.
Then, multiplying these linear equations by the operators
$mJ_{\mu}$, $P_{\mu}$ and $-i\epsilon_{\mu\nu\lambda}P^{\nu}J^{\lambda}$,
we find that there are only two possible cases in which eqs. (\ref{3eq})
are consistent. The first case is trivial and corresponds to the choice
of a trivial representation for generators: $J_{\mu}=0$, and, therefore,
to a trivial system with $p_{\mu}=0$.
In the nontrivial case there is an arbitrariness in the normalization of the
operator $L_{\mu}$, which can be fixed by putting $\beta=1$. Then
the system of vector equations (\ref{3eq}) will be
equivalent to the equations (\ref{maj}), (\ref{kle}) and
\begin{equation}
(J^{2}+\alpha(\alpha-1))\Psi=0.
\label{ir}
\end{equation}
Eq. (\ref{ir}) is simply the condition of irreducibility,
and one can check that the system of eqs. (\ref{maj}), (\ref{kle}) and
(\ref{ir}) is consistent only in the case of the choice of either
finite-dimensional nonunitary representations $\tilde{D}_{j}$, or the
infinite-dimensional unitary representations $D^{\pm}_{\alpha}$.
Therefore, eq. (\ref{3eq}) is the most general vector set of linear
differential equations for a fractional (arbitrary)
spin field in $2+1$ dimensions, whose consistency
fixes the choice of unitary representations of the universal covering
group of (2+1)-dimensional Lorentz group.
Let us notice here that refs. \cite{12},\cite{14}--\cite{16} have used
representations $D^{\pm}_{\alpha}$ for the description of fractional
spin fields proceeding, in fact, simply from the first quantized theory
of the relativistic point particle with torsion,
and have not excluded the choice of other unitary
infinite-dimensional representations of the principal and supplementary
continuous series of the $\overline{SL(2,R)}$ group
(see refs. \cite{18} and a discussion in ref. \cite{22}).
After fixing the representation we have
property (\ref{gau}) as a simple consequence
of the irreducibility condition $J^{2}=-\alpha(\alpha-1)$,
and, as a result, the number of independent equations for the description
of arbitrary spin fields here
is the same as in the spinor-like system of equations
from the recent paper \cite{17*}.
To conclude, let us list some related problems to be solved.
1. We have constructed the simplest action (\ref{act}) leading
to the proposed equations (\ref{3eq}).
The action is invariant with respect to the local transformations:
$\delta \chi_{\mu}=R_{\mu}^{\dagger} \lambda$, $\delta\Psi=0$,
$\lambda$ being an arbitrary field, due to the identity
(\ref{gau}). Therefore, the following question
arises: is it possible to derive the fractional spin field action
from a gauge symmetry principle based on the local transformations
$\delta \chi_{\mu}=R_{\mu}^{\dagger} \lambda$ ? It is not clear
whether more complicated choices of the field action (including the
possibility of having additional auxiliary fields) will be equivalent
to the action (\ref{act}) or whether some new basic ingredient
should be identified in order to understand the formulation of a
free fractional spin system.
It would be interesting to investigate the possible relationship
between the proposed approach to
the description of a fractional spin field and the approach
based on the use of a $U(1)$ statistical gauge field \cite{9}.
Revealing such a possible relationship seems very important because
there are some reasons to expect that anyons can occur only in
gauge theories,
or in theories with a hidden local gauge invariance \cite{7}.
2. One can verify that the prescription of a simple substitution
$P_{\mu}\rightarrow P_{\mu}-eA_{\mu}$ in equations (\ref{3eq}) to describe
the interaction with the simplest $U(1)$ gauge (electromagnetic) field
is consistent only in the case of the spinor representation (\ref{spinor}).
Therefore, the introduction of the interaction of the fractional spin
field with gauge fields remains an open problem in the present approach.
3. The next interesting problem consists in the construction of a
corresponding singular classical model
of a relativistic particle, whose quantization would lead to equations
(\ref{3eq}) as the equations for the physical states of the system.
Note, that corresponding classical models leading in an analogous way to
equations (\ref{maj}) and (\ref{kle}), and to `semionic' equations
were constructed in refs. \cite{12,16}, and \cite{17}, respectively.
4. The most interesting and intriguing problem within the approach
considered in this paper is the problem of second quantization of the
fractional spin field. The solution of this problem would answer the
question of spin-statistics relation for such fields. In connection with
this problem, we would like to make two remarks. First, we note, that the
infinite-component nature of the fractional spin field within the present
approach can be considered as some indication of a hidden nonlocal
nature of the theory, and, therefore, can be treated in favour of the
existence of a spin-statistics relation \cite{7,23}. Second, let us point out
that when performing the second quantization of a fractional spin field
within Hamiltonian approach, an infinite number of Hamiltonian constraints
must appear, which are to single out only one physical component (like
$\Psi_{0}$ from eq. (2.21)) from the infinite component basic field
$\Psi_{n}(x)$. This infinite set of constraints should appropriately be
taken into account.
Finally, let us point out that it seems interesting to investigate
the system of (four) linear vector differential equations for
(3+1)-dimensional field in an analogous way, starting from the
generalization of eqs. (3.1) to the (3+1)-dimensional case.
\nsection{Acknowledgements}
This work has been supported by CICYT (Proyecto AEN 90-0030).
M.P. thanks
P.A.~Marchetti, S.~Randjbar-Daemi, D.P.~Sorokin and D.V.~Volkov for
useful discussions.
\newpage
\section{Introduction}
\label{intro}
The transition to turbulence in wall-bounded shear flows is characterized by the presence of localized turbulent regions containing coherent structures in the form of streamwise streaks \citep{schmid_stability_2001,landahl_a_1980}. These have been observed in different classical confined shear flows such as boundary layers \citep{gaster_experimental_1975,cantwell_structure_1978}, water tables \citep{emmons_laminar-turbulent_1951}, and pipe \citep{hof_turbulence_2005,mullin_experimental_2011}, channel \citep{lemoult_turbulent_2013} and plane Couette flows \citep{tillmark_experiments_1992,bottin_discontinuous_1998}. These turbulent structures are advected downstream with a speed that is approximately proportional to the bulk velocity. When this bulk velocity is non-zero, a very long test section is required to retain the turbulent spots for an appreciable time interval. Another difficulty is that these turbulent structures must be tracked as they move downstream.
However, it is possible to cancel the mean flow velocity, as has been realized by pioneering experiments in plane Couette flow \citep{tillmark_experimental_1991,daviaud_subcritical_1992}. In these experimental set-ups, the base flow is induced by imposing opposite velocities at each wall of the test section, which generates a linear profile with zero mean velocity. If a turbulent spot is generated under such conditions, it remains stationary in the laboratory framework and there is no time limit on the observation of its evolution. The great advantage of a zero mean velocity has motivated us to construct a facility which is a generalization of the plane Couette experimental set-up. We combine the effect of one moving wall (which introduces a Couette component) and a streamwise pressure gradient due to the backflow generated by imposing zero mean flux rate (responsible for a Poiseuille component). The resulting base flow is a plane Couette-Poiseuille flow with zero mean advection velocity, shown in Fig.~\ref{fig:Scheme1}. To our knowledge, this is the first experimental investigation of subcritical transition to turbulence in plane Couette-Poiseuille flow.%
There are a number of theoretical results concerning Couette-Poiseuille flow. The linear stability analysis of this flow (necessarily two dimensional in the streamwise-cross-channel directions, due to Squire's theorem) was carried out \citep{potter_stability_1966, reynolds_finite-amplitude_1967, cowley_stability_1985, hains_stability_1967, drazin_hydrodynamic_1981, balakumar_finite-amplitude_1997, ozgen_heat_2006, savenkov_features_2010}, showing that when the Couette component is increased, the linear instability threshold shifts to higher values and the critical wave number decreases with respect to that of pure plane Poiseuille flow (see Fig.~4 in Ref. \onlinecite{balakumar_finite-amplitude_1997}). Even a relatively small component of Couette flow is sufficient to completely stabilize plane Poiseuille flow \citep{drazin_hydrodynamic_1981}. In this case, the linear instability threshold is infinite as it is for pure plane Couette flow. Specifically, it has been proved in Ref. \onlinecite{potter_stability_1966} that when the velocity of the Couette component exceeds $70\%$ of the center velocity of the Poiseuille component, the flow becomes stable to infinitesimal perturbations for all finite values of Reynolds number. This is in agreement with other results \citep{reynolds_finite-amplitude_1967, cowley_stability_1985} (note however, a slight difference of the coefficient values characterising the contribution of Couette and Poiseuille components reported in Ref. \onlinecite{drazin_hydrodynamic_1981}). Weakly nonlinear stability analysis was used to prove that, while it is stable to infinitesimal disturbances, Couette-Poiseuille flow is unstable to finite amplitude perturbations \citep{reynolds_finite-amplitude_1967,cowley_stability_1985,balakumar_finite-amplitude_1997,zhuk_asymptotic_2006}. 
The only fully nonlinear (but still two-dimensional) numerical study of transition to turbulence \citep{ehrenstein_two-dimensional_2008} used Poiseuille-Couette homotopy to continue a streamwise-localized finite-amplitude solution from Poiseuille to Couette flow. Another two-dimensional study used a weakly nonlinear approach to investigate the time evolution of localized solutions in marginally stable Couette-Poiseuille flow in the framework of the Ginzburg-Landau equation \citep{jennings_when_1999}.
However, it is now believed that two dimensional evolution is not dynamically relevant for subcritical transition to three dimensional turbulence in shear flows. One of the features of subcritical transition to turbulence and three dimensional flow with streamwise or quasi-streamwise elongated streaks is transient linear growth. Its origin is the nonorthogonality of the linearized Navier-Stokes operator and the fact that streaks are the structures most amplified by this process \citep{butler_threedimensional_1992}. Investigation of transient growth in Couette-Poiseuille flow has shown that adding even a small Couette component (introduced by a moving wall) to a Poiseuille flow (driven by a pressure gradient) significantly increases the nonmodal growth of the energy \citep{bergstrom_nonmodal_2005}. The flow thus becomes more sensitive to perturbations. As transient growth usually governs the dynamics of flow at early stages of transition to turbulence, one would expect Couette-Poiseuille flow to be less stable than pure Poiseuille flow.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{Fig1}\\
\caption{Schematic representation of plane Couette-Poiseuille flow, which is a combination of Couette (in red) and Poiseuille (in blue) flows. The Couette flow is forced by the upper moving wall, whereas the Poiseuille flow is induced by streamwise pressure gradient. Dash-dotted gray line in the right subfigure separates the two inner regions of the flow: the upper one with $U>0$ dominated by the Couette component (high-shear region) and the lower one with $U<0$ dominated by the Poiseuille component (low-shear region).}
\label{fig:Scheme1}
\end{center}
\end{figure}
Recall that linear transient growth cannot explain why turbulence does not decay for sufficiently high Reynolds number. A nonlinear cyclic process has been proposed that makes the turbulence self-sustained \citep{waleffe_self-sustaining_1997}. Instability of the streaks, manifested by their sinusoidal streamwise waviness \citep{duriez_self-sustaining_2009}, has been found to be necessary to maintain the turbulence. It has been shown quantitatively that the self-sustaining process is relevant to the evolution of turbulent spots in channel flow \citep{lemoult_turbulent_2014}.
We note that fully developed turbulence in Couette-Poiseuille flow has been studied numerically \citep{pirozzoli_large-scale_2011,bernardini_statistics_2011,gretler_calculation_1997,kuroda_direct_1995}, experimentally \citep{huey_plane_1974,stanislas_experimental_1992,telbany_velocity_1980,telbany_turbulence_1981,thurlow_experimental_2000,nakabayashi_similarity_2004} and theoretically \citep{lund_asymptotic_1980,wei_scaling_2007}. Specifically, the flow with zero net flux was investigated in Ref. \onlinecite{huey_plane_1974}. We complete our survey of Couette-Poiseuille flow by stating that a similar kind of flow profile appears in a long lid-driven cavity \citep{FLM1} or between two horizontal coaxial cylinders where the gap is partially filled with water (as in the case of the Taylor-Dean instability in circular Couette-Poiseuille flow investigated in Refs. \onlinecite{mutabazi_oscillatory_1989,mutabazi_spatiotemporal_1990}).
Despite the studies cited above, plane Couette-Poiseuille flow has received relatively little attention until now, especially in the transitional regime. Having a relatively large test section and a base flow with zero mean advection velocity, we have been able to gain more insight into the dynamics of intermittent turbulent structures in shear flows. Until now, the only experimental attempts to generate stationary turbulent structures have been in plane Couette flow, which by definition has no streamwise pressure gradient. We present for the first time nearly stationary turbulent structures in a flow with non-zero streamwise pressure gradient, which represents a wide class of flows with practical relevance. Specifically, we report the first observations of turbulent spots localized in both the streamwise and spanwise directions. Another result of our research is the observation of the macro-organization of turbulence to form oblique turbulent bands in plane Couette-Poiseuille flow, as has been observed for Taylor-Couette \citep{coles_transition_1965}, Taylor-Dean \citep{mutabazi_spatiotemporal_1990}, plane Couette \citep{prigent_large-scale_2002,prigent_long-wavelength_2003,barkley_mean_2007,duguet_formation_2010,philip_temporal_2011} and plane Poiseuille flow \citep{tuckerman_turbulent-laminar_2014,tsukahara_experimental_2014}.
The article is organized as follows: in section \ref{sec:2} we describe our new experimental set-up. Next, in section \ref{sec:3}, we present a general characterization of our installation, including the natural transition to turbulence due to intrinsic noise of the facility. In section \ref{sec:4} we characterize the forced transition to turbulence which we triggered by applying a steady, continuous disturbance into the test section. Finally, in section \ref{sec:5} we discuss our results.
\section{Description of the experimental set-up}
\label{sec:2}
\begin{figure}
\begin{center}
\begin{minipage}[t]{0.495\linewidth}
\includegraphics[scale=1]{Fig2a}\\
\end{minipage}
\begin{minipage}[t]{0.495\linewidth}
\includegraphics[scale=1]{Fig2b}\\
\end{minipage}
\caption{Sketch of: a) the plane Couette experimental set-up \cite{tillmark_experiments_1992, daviaud_subcritical_1992}; b) our new facility to investigate plane Couette-Poiseuille flow. The upper moving wall induces the Couette flow to the right, which in turn generates the streamwise pressure gradient inducing the Poiseuille flow to the left (compare with Fig.~\ref{fig:Scheme1}). The Roman numerals correspond to those of Fig.~\ref{fig:ExSetUp}. The red arrow in the inset marks the lower layer of the plastic belt.}
\label{fig:1}
\end{center}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=1]{Fig3}\\
\caption{Perspective view of the new experimental set-up with cross-section at the $y_*=0$ plane.}
\label{fig:ExSetUp}
\end{figure}
\begin{figure}[ht]
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=1]{Fig4a}
\end{minipage}
\begin{minipage}[t]{0.48\linewidth}
\includegraphics[scale=1]{Fig4b}
\end{minipage}
\caption{Configuration we use to: a) perform flow visualizations. The source of conventional, incoherent light is placed at the top and the camera at the side of the test section; b) perform 2D PIV measurements.}
\label{fig:5}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.4]{Fig5}\\
\caption{a) Instantaneous snapshot of laminar flow with parabolic streamwise velocity profiles superposed on it. The velocity vectors are measured with the PIV technique by cross-correlating the particles seen in the photo. The stationary wall and moving belt are located at the bottom ($y_*=-1$) and top ($y_*=1$) respectively. Near the moving belt, we are not able to measure the velocity with PIV; b) representation of streamwise velocity profile $U_*(y_*)$ (blue crosses) as a function of wall-normal direction normalized with the belt speed. This example corresponds to the central velocity profile in a). We also show a quadratic interpolation of the measured velocity profile (solid red line), which can be observed to fit the data. The blue solid line at the top here and in a) represents the instantaneous $y_*$ position of the moving belt; the green dashed line marks the last point which can be measured with PIV. The interpolating function representing the velocity profile and the blue line cross very close to the $(U_*=U/U_{\rm belt}=1,y_*=1)$ point, which is precisely the position of the moving belt obtained from image processing.}
\label{fig:ExPiv}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.4]{Fig6}\\
\caption{Time series of deviations of the position of the moving belt. The ordinate represents the deviation from its time-averaged position ($y_*=1$). Black and red lines represent the belt position obtained with image treatment ($y_{\rm belt,img}$) and with interpolation ($y_{\rm belt,int}$) respectively. The two sharp peaks on the black line at $t_*=400$ and $t_*=900$ are artifacts corresponding to the passage of the adhesive tape joining the two ends of the belt and do not represent a real belt displacement. The vertical lines with numbers mark the instants at which instantaneous velocity profiles will be shown (see Fig.~\ref{fig:InterpolationProfiles}).}
\label{fig:TimeSeries}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.4]{Fig7}\\
\caption{a) Instantaneous streamwise velocity profiles obtained from interpolation of the measured streamwise velocity. The numbers correspond to the instants marked by vertical lines in Fig.~\ref{fig:TimeSeries}; b) time-averaged velocity profiles $U_*(y_*)$ over one full period of the belt motion for different Reynolds numbers.}
\label{fig:InterpolationProfiles}
\end{figure}
First, we denote by $x,y,z$ the streamwise, wall-normal and spanwise directions respectively. The origin of the coordinate system is placed at the center of the test section. Our new installation is a generalization of the classical plane Couette experimental set-up (see Fig.~\ref{fig:1}a and Refs. \onlinecite{tillmark_experimental_1991,tillmark_experiments_1992,daviaud_subcritical_1992}). As shown in Fig.~\ref{fig:1}b, we use a looped plastic belt to impose the speed on one wall of the test section, while the other wall remains stationary. The moving wall drives the Couette flow toward the right side (red velocity profile on the left of Fig.~\ref{fig:Scheme1}), which in turn increases the pressure in tank 2. This positive streamwise pressure gradient induces the reverse Poiseuille flow (blue velocity profile in the center of Fig.~\ref{fig:Scheme1}). The resulting plane Couette-Poiseuille flow (black velocity profile on the right of Fig.~\ref{fig:Scheme1} and inset of Fig.~\ref{fig:1}b) is a superposition of these two contributions. It has zero mean advection velocity, $\int_{-1}^{1} U(y)\,dy = 0$. We will also sometimes refer hereafter to the high-shear/low-shear regions close to the moving/stationary wall as the Couette/Poiseuille regions respectively.
In Fig.~\ref{fig:ExSetUp} we present a perspective view with a cross-section at the midgap plane to show in detail the side of the test section where the moving belt is placed. There are two lines of guiding cylinders (marked as I and II, see also Fig.~\ref{fig:1}b) which guide both layers of the plastic belt into the test section. We can regulate the $y$ position of both ends of these cylinders. There is one additional cylinder (III in Fig.~\ref{fig:1}, \ref{fig:ExSetUp}) to keep the plastic belt tight. Its position (just after the motorized cylinder) was chosen carefully to best stabilize the $z$ position of the belt when it moves. All the cylinders we use are provided by Interoll\textregistered \, with the exception of the motorized cylinder, which requires a large diameter (to diminish the slip between the cylinder and the moving belt) and a slightly tapered shape at both ends (to better control the $z$ position of the belt when it moves). For these reasons we manufactured it with a three dimensional printer. Four additional external steel beams reinforce the test section and diminish the deflection of its side walls due to the hydrostatic pressure of the water.
The experimental set-up is mounted on a heavy granite table which provides mechanical stability and thermal inertia. The motorized cylinder is driven by a servo-motor produced by Yaskawa Electrics\textregistered \, ($100$ W) with a gear reduction of 1:26. We use glass plates (of $8$ mm thickness) as side walls, a Plexiglas beam as the upper wall and a transparent plastic belt made of Mylar\textregistered \,(of $175\,\mu$m thickness), granting optical access to the test section. The gap between the glass walls of the test section is $2h_1 = 14$ mm. However, one can observe in the schematic Fig.~\ref{fig:1}b that there are two layers of the plastic belt in the vicinity of the upper wall, which bound the effective gap of the test section. We measure it with an optical method as $2h = 11.5$ mm. Hereafter we use the phrase moving belt to refer only to the lower (inner) layer of plastic film (indicated by the red arrow in the inset of Fig.~\ref{fig:1}b). The streamwise and spanwise dimensions of the test section are $2000$ mm and $540$ mm respectively. However, the width of the plastic belt is $520$ mm, which sets our effective spanwise dimension. The belt is positioned slightly asymmetrically in the spanwise direction. Taking all of this into consideration, the aspect ratios of the test section in the streamwise/spanwise directions are $L_x/h = 347.8$ and $L_z/h = 90.4$ respectively.
Spatial coordinates and time are nondimensionalized with the effective half gap $h$ and with $h/U_{\rm belt}$ respectively, and are denoted by the subscript $*$. The velocity profile normalized by the belt speed is $$U_*=U/U_{\rm belt}=\frac{3}{4} ({y_*}^2-1)+\frac{1}{2}(y_* +1), ~~{\rm where}~~~ y_* = y/h \in (-1,1).$$ The Reynolds number is based on the effective half gap $h$ (following the convention of plane Couette and plane Poiseuille flows) and on the speed of the moving wall, $U_{\rm belt}$, namely $\Rey=U_{\rm belt} h/\nu$. We explore the range between $\Rey=160$ and $\Rey=780$.
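The numerical coefficients of this profile can be checked directly. Writing $U_*=\sigma_1({y_*}^2-1)+\sigma_2(y_*+1)$, no slip at the stationary wall, $U_*(-1)=0$, holds identically; matching the belt speed at the moving wall, $U_*(1)=1$, gives $\sigma_2=1/2$; and the zero-net-flux condition
$$\int_{-1}^{1} U_*(y_*)\,dy_* = -\frac{4}{3}\sigma_1 + 2\sigma_2 = 0$$
then gives $\sigma_1=3/4$, which are precisely the coefficients appearing above.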
We photograph the flow visualizations with a Nikon D200\textregistered \,camera (3800$\times$2800 pixel matrix) and a Nikkor\textregistered \,$f=35$ mm lens. Its optical axis is collinear with the $y$ axis and the source of white light is at the top of the test section (Fig.~\ref{fig:5}a). In addition, we acquire supplementary videos of flow visualization with a Canon\textregistered \,3CCD XM2 Pal video camera (720$\times$520 pixels), which will be described in detail at the end of section \ref{sec:4}. To perform the PIV measurements we use a Phantom MIRO M120\textregistered \,camera (1920$\times$1600 pixels) with a Nikkor\textregistered \,$f=85$ mm lens and a Darvin Duo\textregistered\, laser (double-headed, maximum output $80$ W, wavelength $527$ nm) in the configuration presented in Fig.~\ref{fig:5}b. We acquire a sequence of either single or double frame snapshots, which are then post-processed with the Dantec Dynamic Studio\textregistered \,4.2 software.
We use a high concentration of small seeding particles made of Polyamid with a diameter of $5\,\mu$m. We use a rectangular 256 $\times$ 16 pixel interrogation window in the $x$ and $y$ directions with a 50\% overlap. This unconventional choice is justified by the dominant streamwise velocity component, which implies that the streamwise pixel displacement is an order of magnitude larger than that in the wall-normal direction. The rectangular window enables us to increase the signal-to-noise ratio while keeping a high spatial resolution in the wall-normal direction $y$. With this procedure we measure instantaneously three velocity profiles with $0.3h$ spacing in $x$ and with 100 points across the gap.
When the camera is placed on top of the test section (with its optical axis aligned along the $z$ axis), we can measure the streamwise velocity with PIV only in the vicinity of the upper wall of the test section due to the high concentration of seeding particles. In order to measure the streamwise velocity profiles at different spanwise locations, we put the camera on the side of the test section (with its optical axis inclined at $45^{\circ}$ with respect to the $z$ axis, see Fig.~\ref{fig:5}b). We also use a Scheimpflug mount to record a well-focused image despite the inclination of the camera, as well as a water prism to reduce the optical distortions due to the difference in refractive indices of water and air.
In order to study the laminar base flow, we measure the instantaneous streamwise velocity for different Reynolds numbers ($\Rey \in \{250, 340, 430, 510\}$) in the central part of the test section. We acquire a single image sequence and correlate two consecutive images. We set a high enough frequency (from $19$ Hz to $42$ Hz, depending on the Reynolds number) to retain the time correlation between two snapshots. We need to record about 2800 images on the 3 GB internal memory of the Phantom\textregistered \, camera to cover one period of the belt motion. For this reason we use only part of the camera matrix (512 pixels in $x$ and 896 pixels in $y$). This procedure provides us with the best possible temporal resolution for a given spatial resolution (directly related to the size of the camera matrix) and for a given measurement time. In Fig.~\ref{fig:ExPiv}a we present one example of an instantaneous PIV vector field acquired at $\Rey=510$, which shows three similar velocity profiles within the measurement area. We plot the central profile in $x$ in Fig.~\ref{fig:ExPiv}b.
Fig.~\ref{fig:ExPiv}a shows that the width of the optical image of the belt is about 1 mm, which is more than its actual thickness. This is a consequence of the inclination of the optical axis of the camera with respect to the laser sheet of finite width. Indeed, the light coming from the laser sheet of finite width is reflected by the belt and then registered on the camera matrix as a thick line. There are additional contributions from the defocusing and scattering of the laser light in the vicinity of the moving belt. In the region above the dashed green line we are not able to measure the velocity with our PIV technique, as it produces many spurious vectors. We define the center of this thick line as the instantaneous position of the moving belt ($y_{* \rm belt,img}$) and we determine it for each image using edge detection techniques. In Fig.~\ref{fig:ExPiv} we mark this instantaneous belt position by a solid blue line and we assign the value $y_*=1$ to this location. In Fig.~\ref{fig:TimeSeries} we present as a solid black line a time series of the deviations of the moving belt position from its time-averaged location. The actual belt position changes smoothly in time.
Since the measured data in Fig.~\ref{fig:ExPiv}b exhibit a maximum, and since we expect the laminar Couette-Poiseuille flow to be a quadratic function of $y_*$, we interpolate the measured velocity points using a quadratic polynomial of the form $U_*(y_*)=\sigma_1(y_*^2-1) + \sigma_2(y_*+1)$ (red line in Fig.~\ref{fig:ExPiv}b). The interpolation fits the data very well. Then we estimate the position $y_{*\rm belt,int}$ at which the interpolation function reaches the known belt speed ($U(y_{*\rm belt,int})=U_{\rm belt}$). In Fig.~\ref{fig:ExPiv}b the interpolated streamwise velocity profile (red line) and the measured belt position (thick blue line) cross very close to the point $(1,1)$, as expected, which confirms the validity of our interpolation curve. We also compare the time evolution of the two wall positions obtained by these two methods ($y_{*\rm belt,int}$ as the red and $y_{*\rm belt,img}$ as the black line in Fig.~\ref{fig:TimeSeries}). The belt position predicted by interpolation matches the real position of the moving belt very well, with a deviation of less than 0.1 mm. We note that this is the first time that such a detailed study of the gap variation has been performed for this type of experiment with moving walls (including plane Couette facilities).
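Since the profile is linear in the coefficients $(\sigma_1,\sigma_2)$, the fit reduces to a $2\times 2$ linear least-squares problem. The following sketch (our own illustration; the function names are not part of the actual processing chain) reproduces the two steps, fitting and belt-position recovery:

```python
# Minimal sketch of the interpolation step: least-squares fit of
# U(y) = s1*(y**2 - 1) + s2*(y + 1), then the root of U(y) = 1
# nearest y = 1 taken as the inferred belt position.
import math

def fit_profile(ys, us):
    """Solve the 2x2 normal equations for (s1, s2)."""
    a = [y * y - 1.0 for y in ys]   # basis function multiplying s1
    b = [y + 1.0 for y in ys]       # basis function multiplying s2
    saa = sum(x * x for x in a)
    sbb = sum(x * x for x in b)
    sab = sum(x * z for x, z in zip(a, b))
    sau = sum(x * u for x, u in zip(a, us))
    sbu = sum(x * u for x, u in zip(b, us))
    det = saa * sbb - sab * sab
    return (sau * sbb - sbu * sab) / det, (saa * sbu - sab * sau) / det

def belt_position(s1, s2):
    """Root of s1*y**2 + s2*y + (s2 - s1 - 1) = 0 closest to y = 1."""
    disc = math.sqrt(s2 * s2 - 4.0 * s1 * (s2 - s1 - 1.0))
    roots = [(-s2 + disc) / (2.0 * s1), (-s2 - disc) / (2.0 * s1)]
    return min(roots, key=lambda r: abs(r - 1.0))
```

For the exact laminar profile ($\sigma_1=3/4$, $\sigma_2=1/2$) the quadratic $U_*(y_*)=1$ has roots $y_*=1$ and $y_*=-5/3$, so the root nearest the wall is exactly the belt position, consistent with the crossing point in Fig.~\ref{fig:ExPiv}b.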
In order to check whether the base flow is affected by temporal fluctuations of the moving belt position, we plot in Fig.~\ref{fig:InterpolationProfiles}a instantaneous interpolations of the streamwise velocity profiles at six different instants, marked in Fig.~\ref{fig:TimeSeries} by dashed vertical lines and numbers from 1 to 6. These velocity profiles are virtually identical, which shows that the base flow does not depend significantly on the phase of the belt motion. We also calculate the time-averaged velocity profiles $\langle U(y_*)\rangle_t$ for different Reynolds numbers (Fig.~\ref{fig:InterpolationProfiles}b). They collapse onto a single curve after being normalized with the belt speed. Finally, we calculate the mean advection velocity of the time-averaged velocity profile ($U_{\rm avg}=\frac{1}{2}\int_{-1}^1 \langle U(y_*)\rangle_t\,dy_*$), which does not exceed $0.03 U_{\rm belt}$ in the central part of the test section.
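The quadrature behind the mean advection velocity can be sketched with a simple trapezoidal rule (illustrative code, not the actual post-processing):

```python
# Trapezoidal estimate of U_avg = (1/2) * integral of U(y) over y in [-1, 1].
# Applied to the analytic laminar profile this should return (nearly) zero.
def mean_advection(profile, n=200):
    dy = 2.0 / n
    ys = [-1.0 + i * dy for i in range(n + 1)]
    us = [profile(y) for y in ys]
    return 0.5 * dy * (sum(us) - 0.5 * (us[0] + us[-1]))
```

Applied to the analytic laminar profile this returns a value of order $10^{-5}$, i.e. zero net flux up to discretization error, while a pure Couette profile gives $U_{\rm avg}=1/2$.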
Having determined the fluctuations of the belt position (Fig.~\ref{fig:TimeSeries}), as well as the variation of the velocity profiles in time (Fig.~\ref{fig:InterpolationProfiles}a) and with Reynolds number (Fig.~\ref{fig:InterpolationProfiles}b), we can estimate the total error of the local Reynolds number (for a given $z$ position) at less than 5\%. The variation of the effective gap across different $z_*$ locations is below $\pm 0.5$ mm. The spatial variation of the temperature in the test section does not exceed $0.2^{\circ}$C. The resulting error in the fluid viscosity is less than $0.6\%$. We estimate the global variation of the Reynolds number across different $z_*$ locations at less than $7.6\%$. The cross-flow component of the base flow (in the spanwise direction) is below $2.0\%$.
In all subsequent figures the direction of motion of the plastic belt is toward the right.
\section{Characterization of the natural transition to turbulence triggered by the intrinsic noise of the installation}
\label{sec:3}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{Fig8}
\caption{Flow visualizations for different Reynolds numbers: a) uniform laminar flow ($\Rey=330$); b) featureless turbulent region in the entire test section ($\Rey=780$).}
\label{fig:2}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{Fig9}\\%[width=8cm]
\caption{Flow visualizations of a localized turbulent spot surrounded by laminar flow in plane Couette-Poiseuille flow ($\Rey=530$). The sequence of pictures shows the slow advection of the turbulent structure toward the right (with a time interval of $96$ advection units between frames).}
\label{fig:4}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=\linewidth]{Fig10}\\
\caption{Example of macro-organization of turbulent spots in the form of oblique turbulent bands ($\Rey=670$).}
\label{fig:3}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.28]{Fig11}\\
\caption{Spatio-temporal diagram of: a) measured streamwise velocity, $U_*(t,y_*)$, for the laminar flow ($\Rey=480$). The black solid line represents the time-averaged profile; b) measured streamwise velocity, $U(t,y_*)$, for the intermittent flow ($\Rey=520$); c) streamwise velocity fluctuations $u_*'(t,y_*)=(U(t,y_*)-U_{\rm base}(y_*))/U_{\rm belt}$. We consider the time-averaged streamwise velocity profile in the range $t_* \in (20-120)$ as the base flow without perturbations. The pattern at $t_* \in (800, 1400)$ is a signature of the unsteady, wavy structure of the turbulent spot. The black profiles have been calculated by time-averaging the instantaneous velocity profiles within the ranges marked by white dashed lines.}
\label{fig:STturb}
\end{figure}
We perform flow visualizations to characterize qualitatively the flow in the test section. For this purpose we use reflective aluminium flakes (STAPA IL HYDROLAN 2154 55900/G produced by ECKART) of typical diameter $d_{al}\in(30\,\mu$m$, 80\,\mu$m$)$, which are dispersed in water. These tracers enable the detection of three dimensional vortical structures in the flow through spatial fluctuations of the reflected light intensity. In contrast, the light distribution in laminar regions is nearly uniform and featureless. In this way, turbulent regions can be distinguished. The pictures presented here are taken with a Nikon camera with a 3800$\times$2800 pixel matrix, with a pixel pitch equal to $0.167$ mm.
Note that the fluid in the main tanks is continually disturbed by the rotating cylinders, which makes it always turbulent for the range of Reynolds numbers considered here. For this reason the inlets of the test section are the sources of the natural perturbations. Even though these operating conditions are similar to those presented in Ref. \onlinecite{tsukahara_experimental_2014} for Poiseuille flow, we recall here that the mean advection velocity is nearly zero, so the turbulent flow is not advected from the inlets to the test section.
The flow is laminar in the entire test section up to $\Rey \simeq 420$ (as in Fig.~\ref{fig:2}a). For higher Reynolds numbers some turbulent structures appear at the left entry ($x_*<0$) generated by the turbulence in the main tank. Up to $\Rey \simeq 480$, the amplitude of these perturbations is not strong enough to trigger the transition and the turbulent structures present in the main tank do not propagate further into the test section.
For $\Rey \gtrsim 480$ the Couette-Poiseuille flow is no longer stable and the test section is occasionally invaded by transient patches of turbulence. However, up to $\Rey \simeq 510$ these events occur rarely and the undisturbed laminar base flow can persist for most of the time. In Fig.~\ref{fig:4}a,b we present a sequence of images illustrating such a localized turbulent spot surrounded by laminar flow, which is slowly advected to the right with a very small advection speed of $U_{\rm advection}\simeq 0.095 U_{\rm belt}$.
As we increase the Reynolds number even further (to $\Rey \simeq 670$), the spots expand obliquely to form a turbulent structure reminiscent of laminar-turbulent bands, one example of which is presented in Fig.~\ref{fig:3}. Finally, at high enough Reynolds number ($\Rey \simeq 780$), the flow is uniformly turbulent (Fig.~\ref{fig:2}b).
In order to demonstrate the transition to turbulence in more detail, we measure the instantaneous streamwise velocity as a function of $y_*$ with the PIV configuration shown in Fig.~\ref{fig:5}. We acquire double-frame images at a sampling frequency of 2 Hz, which are cross-correlated to determine the instantaneous velocity fields. In Fig.~\ref{fig:STturb}a we present the spatio-temporal diagram of a single streamwise velocity profile $U_*(t_*,y_*)$ for low Reynolds number ($\Rey=480$), measured at $(x_*=0,z_*=0)$. The isocontours on this diagram are nearly horizontal, which shows that the flow is laminar and does not depend on time. This corresponds to the visualization of laminar flow shown in Fig.~\ref{fig:2}a. We determine the base flow profile $U_{\rm base}(y_*)$ by time-averaging the results over the entire sequence of measurements. The resulting profile (black solid line in the center of Fig.~\ref{fig:STturb}a, see also Fig.~\ref{fig:InterpolationProfiles}b) is a quadratic polynomial.
As we increase the Reynolds number to $\Rey = 520$, we observe a transition to turbulence triggered by intrinsic noise of the installation. In Fig.~\ref{fig:STturb}b the flow becomes locally time-dependent/intermittent for $t_* \in (170, 1550)$, due to the passage of the localized turbulent spot through the PIV measurement section (compare also with the visualizations of the turbulent spot in Fig.~\ref{fig:4}). However, for $t_* \in (0, 170)$ the flow is stationary and laminar. We calculate the time-averaged profile in this range (parabolic profile on left side of Fig.~\ref{fig:STturb}b), which we consider as the base flow without perturbation $U_{\rm base}(y_*)$. Then we subtract it from the measured instantaneous streamwise velocity component $U(t,y_*)$ to calculate the streamwise velocity fluctuations $u'(t,y_*)=U(t,y_*)-U_{\rm base}(y_*)$ (Fig.~\ref{fig:STturb}c).
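Schematically, this decomposition subtracts the time average over a laminar interval from every snapshot; a minimal sketch (our own illustration, assuming a hypothetical array layout `field[t][j]` of profiles in time) reads:

```python
# Sketch of the fluctuation decomposition: the time average over a laminar
# interval serves as the base flow and is subtracted from each profile.
def base_profile(field, laminar_ts):
    """Average the profiles field[t][j] over the laminar instants laminar_ts."""
    ny = len(field[0])
    return [sum(field[t][j] for t in laminar_ts) / len(laminar_ts)
            for j in range(ny)]

def fluctuations(field, laminar_ts):
    base = base_profile(field, laminar_ts)
    return [[u - b for u, b in zip(profile, base)] for profile in field]
```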
We can clearly observe the unsteady structure of these fluctuations, a signature of a turbulent spot in plane Couette-Poiseuille flow. The transition starts in the vicinity of the moving wall, in the high shear region ($t_* \in (250, 500)$), and then, as the turbulent structure grows, it gradually spreads across the whole gap. Finally the flow relaxes back to the laminar state ($t_*>1550$).
We also show two examples of turbulent streamwise velocity and fluctuation profiles ($t_*=415$ and $t_*=1100$ in Fig.~\ref{fig:STturb}b,c), which are calculated by time averaging the data within the range delimited by white dashed lines. The profile at $t_*=415$ shows the wall-normal transfer of fluid with negative velocity from the low-shear region toward the high-shear region, whereas the profile at $t_*=1100$ shows the opposite. Note in Fig.~\ref{fig:STturb}c that near the moving wall, the streamwise velocity fluctuations are negative, whereas away from the wall (low shear region) the streamwise velocity fluctuations are positive.
These PIV measurements demonstrate that plane Couette-Poiseuille flow can also be regarded as an asymmetric Poiseuille flow with one active, high-shear region near the moving belt. This is in contrast to the classical symmetric Poiseuille profile with two active regions, one next to each wall.
\section{Transition to turbulence triggered by a permanent perturbation}
\label{sec:4}
\begin{figure}[ht]
\includegraphics[scale=0.68]{Fig12}\\
\caption{Instantaneous flow visualizations for different Reynolds numbers. The transition to turbulence is triggered by a sphere placed in the test section near the moving wall. The $(x_*,z_*)$ origin is now located at the center of the sphere. The stationary/moving wall is closer to/further from the reader and the direction of the moving wall is toward the right. We also superpose on these images the time-averaged envelope contours that represent the total area of the spot (black contours) and its active turbulent core (white contours).}
\label{fig:FT_Vis8a}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.5]{Fig13}\\
\caption{Time-averaged envelope representing the total area of the turbulent spot: a) spatial extent of the envelope along the streamwise direction for $z=0$; b) spanwise extent of the envelope, where the $x$-coordinate has been selected for each Reynolds number to maximize the spanwise extent.}
\label{fig:ENV_RES}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.5]{Fig14}\\
\caption{Dependence on $\Rey$ of the envelopes of the total area and the active core of the turbulent spot: a) the area; b) the $x_*$ position of the centroids; c), d) the extent in the $z$ and $x$ directions. Crosses correspond to the active area (containing wavy streaks), while circles represent the region which in addition includes the weak undisturbed streaks and oblique waves at the laminar-turbulent interface.}
\label{fig:ENVplots}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.65]{Fig15}\\
\caption{Decomposition of the flow visualization of the turbulent spot
into the structures that move upstream and downstream: a) full image
of flow visualization; b) pattern which moves upstream (right); c) pattern which moves downstream (left). All three pictures correspond to the same instant of time.}
\label{fig:LeftRight}
\end{figure}
In this section we present flow visualizations for the transition forced by an external, permanent perturbation. For this we insert a ferromagnetic sphere of diameter $6.2$ mm, which is held at a fixed position within the test section by a strong magnet. The sphere touches the moving wall and is thus located within the high shear region. In addition, the friction with the moving belt causes rotation of the obstacle. However, the imposed rotation frequency is higher than the typical frequencies observed in the flow, so its effect can be neglected. This obstacle locally modifies the flow \citep{bottin_discontinuous_1998}, creating a steady, localized disturbance.
For each Reynolds number, we take a sequence of 90 images with a sampling frequency $f=1$ Hz. We recall that we acquire the images with very high spatial resolution (3800 $\times$ 2800 pixel matrix). We shift the origin of the coordinate system with respect to Fig.~\ref{fig:2}-\ref{fig:4} by placing it at the center of the sphere. We call the left/right side of the sphere the downstream/upstream direction, taking the direction of the back flow (Poiseuille component) as the reference. In Fig.~\ref{fig:FT_Vis8a} we present flow visualizations representing the flow structure as the Reynolds number is increased. For $\Rey = 165$ we observe a few stationary streamwise vortices which expand towards the left (Fig.~\ref{fig:FT_Vis8a}a). The vortical structure observed on the left side of the sphere is probably due to a pair of streamwise counter-rotating vortices generated in the wake of the sphere, which, in a uniform background flow, appears at $\Rey_{\rm sphere}\simeq 210$ \citep{johnson_flow_1999,gumowski_transition_2008}, where $\Rey_{\rm sphere}=(U_{\rm freestream}\,d_{\rm sphere})/\nu$. In our case $d_{\rm sphere}=6.2$ mm $\simeq h$, which implies that $\Rey_{\rm sphere} \simeq \Rey$. For $\Rey = 255$ the turbulence starts to invade the right side of the sphere (note the appearance of small vortices for $x_*>0$ in Fig.~\ref{fig:FT_Vis8a}c). As the Reynolds number is further increased, this streamwise expansion to the right becomes increasingly pronounced.
We have also observed that for $\Rey \lesssim 480$ the spot stays in a fixed location pinned to the sphere, but for higher Reynolds numbers the size of the spot fluctuates and it moves toward the right. This can be compared with the front speeds in pipes for puffs, where the upstream front (with respect to the direction of the Poiseuille component) travels more slowly downstream than the average velocity of the base flow \citep{barkley_rise_2015}. The analogue of this situation in our case is the motion to the right (upstream).
The spots have a preferred inner structure (a spanwise-periodic pattern of streamwise streaks) with wavelength $\lambda_z$ of about $2.5h$ (corresponding to the wave vector $(k_{x*}=0,\,k_{z*}=2.52)$ in Fourier space). However, the spot structure also includes oblique waves (i.e. straight streaks which are oriented slightly obliquely with respect to the streamwise direction) at the laminar-turbulent interface and undulated (or wavy) rolls in the center of the spot, which broaden the spatial Fourier spectrum.
In order to describe the dependence of the area of the turbulent spot on Reynolds number, we use the two-dimensional Hilbert transform to compute the envelope of the modulated function of gray levels representing the spot. First, we normalize the pixel intensity of each image by dividing it by the background reference corresponding to the laminar flow without the sphere. Then we compute its two dimensional FFT spectrum and we filter it, retaining the range $|k_{x*}| < 1.19$ and $k_{z*} \in (0.76, 4.28)$. Next, we use the two-dimensional inverse FFT transform to compute the filtered spot and we get its envelope/amplitude $\rho(x,z)$. Finally, we compute for each Reynolds number the time-averaged spatial envelope for all images in the sequence. This is justified because the global dynamics of the turbulent spot is nearly stationary, as the forcing is constant in time and the turbulent region is pinned to the sphere until $\Rey \simeq 480$.
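The envelope computation described above can be sketched with NumPy. This is a minimal illustration of the method, not the authors' actual processing code; the array names (`img` for a raw frame, `ref` for the laminar reference) and the grid spacings `dx`, `dz` are assumptions.

```python
import numpy as np

def spot_envelope(img, ref, dx, dz, kx_max=1.19, kz_band=(0.76, 4.28)):
    """Envelope of the modulated gray-level pattern: normalize by the
    laminar reference, band-pass in (kx, kz), and take the modulus of
    the filtered field -- a 2D analogue of the Hilbert transform."""
    norm = img / ref                      # remove the laminar background
    spec = np.fft.fft2(norm - norm.mean())
    kx = 2 * np.pi * np.fft.fftfreq(img.shape[0], d=dx)
    kz = 2 * np.pi * np.fft.fftfreq(img.shape[1], d=dz)
    KX, KZ = np.meshgrid(kx, kz, indexing="ij")
    # keep |kx| < kx_max and only the positive kz band (one-sided, as for
    # an analytic signal), so the inverse transform is complex and its
    # modulus gives the local amplitude of the streak pattern
    mask = (np.abs(KX) < kx_max) & (KZ > kz_band[0]) & (KZ < kz_band[1])
    return np.abs(np.fft.ifft2(spec * mask))
```

Time-averaging this envelope over the 90-image sequence then gives the quantity plotted for each Reynolds number. Because the filter is one-sided in $k_{z*}$, a streak modulation of amplitude $A$ yields an envelope of $A/2$.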
We also estimate the size of the more dynamically active, turbulent region at the core of the turbulent spot. To do this we use the observation that this region is directly related to streamwise waviness of the streaks, which resembles travelling waves. The streamwise dependence of wavy streaks is manifested by the appearance of modes with $k_{x*} \neq 0$ in the spatial spectrum. However, such modes generate higher harmonics. This effect is further increased by the fact that we analyse the pixel intensity of flow visualizations, which adds spurious nonlinear content. As a result we are not able to identify a single mode which corresponds to the streak waviness. Instead, we consider the spectral range $|k_{x*}| \in (1.13,1.85)$ and $k_{z*} \in (1.60,3.44)$, which is related to the harmonics of this structure. In this way, we can ensure that the envelope computed from this spectral region corresponds to the short wavelength streamwise undulation rather than to the long oblique straight waves. In order to describe the spatial distribution of both regions (namely the active core related to the waviness of the streaks and the total area, which in addition includes the surrounding region with oblique waves at the laminar-turbulent interface) for different Reynolds numbers, we superpose iso-contours of both envelopes on the flow visualization pictures (Fig.~\ref{fig:FT_Vis8a}). Note in Fig.~\ref{fig:FT_Vis8a}a that there is no active region for $\Rey = 165$, since the structures there are wake vortices generated by the sphere and unrelated to turbulence.
In order to better illustrate how the size of the turbulent spot changes with increasing Reynolds number, we show spatial profiles of the total spot envelope along the $x_*$ (Fig.~\ref{fig:ENV_RES}a) and $z_* $ (Fig.~\ref{fig:ENV_RES}b) directions. The former is plotted for $z_*=0$, and the latter for the value of $x_*$ which maximizes the size along the $z_*$ direction. For low Reynolds numbers ($\Rey=165$ in Fig.~\ref{fig:FT_Vis8a}a and \ref{fig:ENV_RES}a) all of the activity takes place on the left side of the sphere. The upstream front is steep, whereas in the downstream direction the envelope slowly decays to zero with a large tail extending toward $x_*<0$. As we increase the Reynolds number the turbulent region extends further and further upstream.
In Fig.~\ref{fig:ENVplots} we present several quantities to further characterize this dependence. Fig.~\ref{fig:ENVplots}a shows that the areas of both the total and the active regions increase monotonically with Reynolds number. In Fig.~\ref{fig:ENVplots}b we show the dependence on Reynolds number of the $x_*$ centroid position for both total and active regions. First, we observe that both centroids follow the same evolution. For low Reynolds numbers they are located on the left side of the sphere, at $\Rey \simeq 380$ they cross zero, and for higher $\Rey$ they continue to shift upstream. This indicates that the high-shear (Couette) region near the moving wall becomes increasingly important as the Reynolds number is increased. Finally, at $\Rey=510$ almost all activity takes place within the high shear (Couette) part. This agrees with the numerical observations in plane Poiseuille flow with zero net flux that the turbulent structures move with/against the direction of the Poiseuille component for low/high Reynolds numbers (see Ref. \onlinecite{tuckerman_turbulent-laminar_2014}; note that in that paper the direction of the Poiseuille component is in the positive $x_*$ direction, opposite to our case). However, recall that instead of measuring the propagation speed of the turbulent structure, we are measuring the direction in which the turbulent spot extends. One should think of this as continuous advection of the turbulence, which decays as it moves downstream and is simultaneously continuously regenerated by a permanent perturbation.
As mentioned in the discussion of Fig.~\ref{fig:FT_Vis8a}a, for sufficiently low $\Rey$, there is no active region; the perturbations seen are vortices in the wake of the perturbing sphere, which are located downstream/left of the sphere. Since the right side ($x_*>0$) is less affected by the sphere, we plot in Fig.~\ref{fig:ENVplots}a the part of the active region located only on the right side of the sphere (green crosses). We note that the area of this portion of the active region remains nearly equal to zero up to $\Rey=330$ and then starts to grow.
The Reynolds number dependence of the streamwise and spanwise size of turbulent spots is presented in Fig.~\ref{fig:ENVplots}c,d. Both of them grow monotonically with Reynolds number up to $\Rey=470$. At $\Rey=510$ the spanwise extent seems to saturate as a result of the finite size of our test section. The streamwise extent is less affected, as the streamwise dimension of our installation is bigger ($L_x/h = 2000/5.75 = 347.8$) than the spanwise one ($L_z/h = 520/5.75 = 90.4$). The spanwise extent grows with Reynolds number by adding new streaks in the $z$ direction, which can be observed in Fig.~\ref{fig:ENV_RES}b. Similar behaviour has been observed numerically in plane Couette flow \citep{duguet_stochastic_2011}.
Finally, in order to separate the turbulent structures that move downstream and upstream, we record a video of a turbulent spot generated by the sphere for $\Rey=470$. To do this, we use the video camera with acquisition frequency $f=25$ Hz, which enables us to calculate the two dimensional spatio-temporal ($x,t$) FFT transform for each $z$ location. Motivated by the decomposition of travelling waves in thermal convection \citep{croquette_nonlinear_1989,kolodner_complex_1990}, we introduce the following procedure: we calculate the inverse FFT of the two dimensional spectrum ($k_x,\omega$) for each of the quadrants I ($k_x>0,\omega>0$) and II ($k_x<0,\omega>0$) separately. These two quadrants represent the travelling waves that go to the right/upstream (with phase $k_xx-\omega t$) and left/downstream (with phase $k_xx+\omega t$) respectively. In Fig.~\ref{fig:LeftRight} we present the resulting fields for a given instant of time. Video frames are normalized by dividing their intensity by that of the image of the laminar flow (Fig.~\ref{fig:LeftRight}a), whereas in Fig.~\ref{fig:LeftRight}b,c we plot the fluctuations of the normalized pixel intensity. Fig.~\ref{fig:LeftRight}a shows a turbulent spot with a characteristic V-shape pointing to the left. The dominant pattern of the right-going structures (Fig.~\ref{fig:LeftRight}b) consists of oblique waves at the tips of the turbulent spot (similar to those found in plane Poiseuille flow \citep{carlson_flow-visualization_1982,henningson_wave_1987}). In contrast, the downstream pattern (Fig.~\ref{fig:LeftRight}c) contains the wavy streaks, which define our active region. Thus it is associated with the turbulent core of the spot (see also Supplemental Material at [URL will be inserted by publisher] for the full video showing this propagation). This difference between the two patterns indicates the role of the streamwise pressure gradient (absent in plane Couette flow), which breaks the left/right symmetry.
One can observe that the tape joining the two ends of the plastic belt ($x_*=35$ in Fig.~\ref{fig:LeftRight}a), which at this instant moves to the left, is visible only in the pattern moving downstream/left (Fig.~\ref{fig:LeftRight}c). This confirms that our decomposition of the flow visualization separates the upstream and downstream patterns.
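The quadrant decomposition of the $(k_x,\omega)$ spectrum can be sketched as follows. This is our illustration, not the original processing code: a real signal puts each travelling direction into a pair of conjugate quadrants, so masking such a pair and inverting is equivalent (up to the FFT sign convention) to the quadrant-I/II procedure described above.

```python
import numpy as np

def split_travelling_waves(field, dt, dx):
    """field[t, x]: space-time record of intensity fluctuations at one z.
    Waves cos(kx*x - w*t) and cos(kx*x + w*t) occupy opposite pairs of
    conjugate quadrants of the (w, kx) spectrum, so masking one pair and
    inverting separates the two travelling directions."""
    spec = np.fft.fft2(field)
    w = np.fft.fftfreq(field.shape[0], d=dt)
    kx = np.fft.fftfreq(field.shape[1], d=dx)
    W, KX = np.meshgrid(w, kx, indexing="ij")
    towards_pos_x = np.fft.ifft2(spec * (W * KX < 0)).real  # phase kx*x - w*t
    towards_neg_x = np.fft.ifft2(spec * (W * KX > 0)).real  # phase kx*x + w*t
    return towards_pos_x, towards_neg_x
```

Applying this at every $z$ location and reassembling the frames produces the two fields shown in Fig.~\ref{fig:LeftRight}b,c.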
\section{Conclusions}
\label{sec:5}
We have presented a new installation to investigate plane Couette-Poiseuille flow. We have achieved this by combining a moving belt with a streamwise pressure gradient forcing the back-flow in the opposite direction. The mean velocity of the resulting base flow is nearly zero, which enables us to generate turbulent structures which are nearly stationary in the laboratory frame. This is the first time that stationary structures have been generated experimentally in a shear flow with a non-zero streamwise pressure gradient.
We describe the observation of the following sequence as the Reynolds number is increased: the laminar state, localized spots which grow with Reynolds number (both in the streamwise and spanwise directions) and finally, oblique expansion of the spot, which forms a turbulent band.
We note that we have characterized the linear stability of our particular velocity profile, given by $U(y_*)=\frac{3}{4} ({y_*}^2-1)+\frac{1}{2}(y_* +1)$ by solving the Orr-Sommerfeld/Squire equations for two dimensional, wall-bounded, parallel shear flow, using a Matlab\textregistered \, code~\citep{computer_Hoepffner} and have not found any linear instability up to $\Rey=10^8$. Our velocity profile is equivalent (under the combined operations of reflection in $x$ and $y$ and a Galilean transformation) to the profile $U(y_*)=\frac{3}{4} (1-{y_*}^2)+\frac{1}{2}(y_* +1)$, which was shown to be linearly stable \citep{balakumar_finite-amplitude_1997}, similarly to plane Couette or pipe flow.
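The claimed equivalence of the two profiles can be verified directly. The sketch below is ours: reflecting in $x$ and $y$ sends $U(y_*)$ to $-U(-y_*)$, and the Galilean shift that restores the wall speeds turns out to be the unit belt velocity.

```python
import sympy as sp

y = sp.symbols('y')
# our base profile
U = sp.Rational(3, 4) * (y**2 - 1) + sp.Rational(1, 2) * (y + 1)

# reflect in x and y (u -> -u, y -> -y), then shift by the unit wall speed
transformed = sp.expand(-U.subs(y, -y) + 1)

# the profile shown to be linearly stable by Balakumar (1997)
balakumar = sp.Rational(3, 4) * (1 - y**2) + sp.Rational(1, 2) * (y + 1)
assert sp.simplify(transformed - balakumar) == 0  # the two profiles coincide
```

Both profiles give wall velocities $0$ and $1$ at $y_*=\mp1$, as required.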
We present the first demonstration that the transition to turbulence in plane Couette-Poiseuille flow is subcritical in nature and occurs through localized turbulent structures (spots), similarly to other shear flows such as boundary layer, pipe, pure plane Poiseuille and Couette flows. We have measured with Particle Image Velocimetry (PIV) the flow structure inside the gap, showing that the domain is divided into high-shear (Couette) and low-shear (Poiseuille) regions of activity (see Fig.~\ref{fig:STturb}a). The plane Couette-Poiseuille flow has thus only one high-shear layer near the moving wall, which differentiates it from the classical symmetric plane Poiseuille flow with two high-shear regions. We have also measured the perturbation of the flow due to the passage of a spot, which is located initially in the high-shear region near the wall. As time proceeds the spot fills the whole gap. Finally, the flow in the high-shear region becomes less turbulent but the low-shear (Poiseuille) region remains active. In addition, we can observe (see Fig.~\ref{fig:4}a,b and Fig.~\ref{fig:STturb}c) that the spot moves to the right, which is the upstream direction with respect to the backflow induced by the pressure gradient (Poiseuille component).
We have also investigated the $\Rey$ dependence of the size of the turbulent spot triggered by a constant, localized perturbation. Two regions of the turbulent spot can be distinguished: the active turbulent core characterized by waviness of the streaks (reminiscent of travelling waves) and the total area, which also includes the weak undisturbed streaks and oblique waves at the laminar-turbulent interface. We have shown that the area of both regions and their streamwise/spanwise extents grow monotonically with Reynolds number. Analyzing the evolution of the centroid positions, we have shown that for low Reynolds numbers the turbulent spot extends downstream. As the Reynolds number is increased the spot shifts further upstream, which means that the high-shear (Couette) region becomes increasingly dominant. Similar behaviour has been observed numerically in Poiseuille flow with zero mean flux, where the turbulent structures move with/against the direction of the Poiseuille component for low/high $\Rey$ \citep{tuckerman_turbulent-laminar_2014}.
We isolate the right- and left-going waves, showing that the former are dominated by oblique waves located mainly at the tips of the turbulent spot, while the latter are related to the active core of the turbulent spot. This left-right symmetry breaking is due to the streamwise pressure gradient.
This new experiment to study subcritical transition to turbulence in wall-bounded flows is capable of producing high-quality detailed information on the dynamics of turbulent spots. The only existing investigation of Couette-Poiseuille flow with zero mean velocity was reported in Ref. \onlinecite{huey_plane_1974}; however, they operated only in the fully turbulent regime, with sparse spatial resolution inside the gap and without visualizations. In contrast, in our installation we have measured the streamwise fluctuations (the basic flow modifications) produced during the passage of the spot by very precise PIV measurements with high spatial resolution inside the narrow gap of the facility. In addition, the wide geometry of this experimental set up gives us a large field for clear and very high contrast flow visualizations. This has allowed us to obtain the first quantitative and systematic results on the spatial evolution of turbulent spots, marking an important advance with respect to previous experiments.
\begin{acknowledgements}
We thank Matthiew Chantry and Tomasz Bobinski for fruitful discussions, as well as Arnaud Prigent for help with image processing. We also acknowledge Konrad Gumowski and Tahar Amorri for technical assistance. This work was supported by a grant, TRANSFLOW, provided by the Agence Nationale de la Recherche (ANR).
\end{acknowledgements}
\input{KlotzPrfArXiv2.bbl}
\end{document}
\section{Introduction}
\noindent The Alexander polynomial and the Jones polynomial, both
characterized by simple crossing change formulae, are probably the two most
celebrated invariants in knot theory.
While the Alexander polynomial appears again and again in different contexts,
making us feel quite comfortable with it,
the nature of the Jones polynomial remains mysterious.
In this paper, we will provide a new perspective for the study of the Jones
polynomial (and its generalizations -- the so-called colored Jones
polynomial), the Alexander polynomial and their relationship. An immediate
outcome of this new perspective is a straightforward proof of the
Melvin-Morton conjecture.
In \cite{LTW}, a model of random walk on knot diagrams was introduced. When
we were seeking formulations of the Alexander and Jones polynomials in this
model of random walk, a paper of Foata and Zeilberger \cite{FZ} caught
our attention. In that paper, Foata and Zeilberger established a general
combinatorial framework for counting with weights Lyndon words in
a free monoid generated by a totally ordered set, one of its consequences
is a proof of Bass' evaluations of the Ihara-Selberg zeta function for
graphs. We noticed that one of the main theorems of \cite{FZ} implies the
following fact: Take a 1-string link and
consider all families of cycles on
this 1-string link in our model of random walk. Every cycle is assigned
with a weight (probability). Then the Ihara-Selberg type zeta function
constructed using these weights is equal to the inverse of the
Alexander polynomial of the knot obtained as the closure of the
1-string link, up to a factor in the form of a power of the weight parameter.
There is a remarkable relation between the colored Jones polynomial
and the Alexander polynomial, which was first noticed and conjectured by
Melvin and Morton \cite{MM}. Rozansky \cite{RO} gave an argument for
this conjecture using the Chern-Simons path integral formalism of the
colored Jones polynomial and the relation between Ray-Singer analytic
torsion and the Alexander polynomial. The rigorous proof of the
Melvin-Morton conjecture was given by Bar-Natan and Garoufalidis \cite{BG},
using the full power of the theory of finite type knot invariants.
In our setting of random walk on knot diagrams, the Jones polynomial
counts only simple families of cycles on the 1-string link,
i.e. families of cycles which do not share any edge. To take all cycles
into account, we have to use the colored Jones polynomial. A state sum
formula for the (renormalized) colored Jones polynomial with the coloring
parameter $d+1$
implies that it counts simple
families of cycles on the $d$-cabling of the 1-string link in question.
To relate the colored Jones polynomial with the Alexander polynomial, we lift
families of cycles on the string link to its $d$-cabling with the weight
parameter adjusted appropriately. A family of cycles on the 1-string link can
have many liftings to its cabling. Weights of all liftings
add up to the weight of the original family of cycles,
whereas the weights of non-simple liftings vanish in the limit when
$d\rightarrow\infty$. So in the limit, only weights
of simple families of cycles survive and this calculation leads to a
proof of the Melvin-Morton conjecture.
We remark that our formulation of the limit of the colored Jones
polynomial is analogous to the limit of partition functions on a finite
lattice with a fixed boundary condition in statistical mechanics. Our
proof of the Melvin-Morton conjecture is in spirit close to Rozansky's proof
using the semi-classical limit of Chern-Simons path integral.
The model of random walk on knot diagrams has a much richer content than we
have touched upon here. A more detailed exploration of this model
will be the subject of our future publications.
\section{Random walk on knot diagrams}
\subsection{Wirtinger presentation and free derivatives}
Fix an oriented knot diagram $K$, we will label the arcs in the knot diagram
separated by crossings at the under-crossed strands using
the letters $x_1,x_2,\dots, x_n$. The knot group
$G(K)=\pi_1({\mathbb R}^3\setminus K)$ admits a
Wirtinger presentation
as follows: It has $x_1,x_2,
\dots,x_n$ as generators, and one relation for each crossing. If a crossing
has incident
arcs $x_i,x_j,x_k$, where $x_i$ separates $x_j$ and $x_k$ in a small
neighborhood of the crossing
and the knot orientation points $x_j$ toward $x_k$,
the relation is
$$x_j=x_i^{\epsilon}x_kx_i^{-\epsilon}.$$
Here $\epsilon=\pm1$ is the sign of the crossing.
With respect to the abelianization $\phi:{\mathbb Z}G(K)\rightarrow{\mathbb Z}
[t^{\pm1}]$, sending each
$x_i$ to $t$, a free derivative $\partial:{\mathbb Z}G(K)\rightarrow{\mathbb
Z}[t^{\pm1}]$ is a linear map such that
$$\partial(g_1g_2)=\partial(g_1)+\phi(g_1)\partial(g_2)\qquad\text{for
all $g_1,g_2\in G(K)$}.$$
The ${\mathbb Z}[t^{\pm1}]$-module of free derivatives on the free group
$F$ generated by
$x_1,x_2,\dots,x_n$ is spanned by
$\partial_i,\,i=1,2,\dots,n$ with $\partial_i(x_j)=\delta_{ij}$. Let $\partial$
be a free
derivative on $G(K)$. Then $\partial=\sum_{i=1}^nA_i\partial_i$ as
a free derivative on $F$, where $A_i\in {\mathbb Z}[t^{\pm1}]$, and it has to
satisfy the relation
$$\partial(x_j)=t^{\epsilon}\partial(x_k)+(1-t^{\epsilon})\partial(x_i)$$
for each Wirtinger relation $x_j=x_i^{\epsilon}x_kx_i^{-\epsilon}$. Thus the
${\mathbb Z}[t^{\pm1}]$-module of free derivatives on $G(K)$ can be
thought of as generated by
the symbols $A_i,\,i=1,2\dots,n$ and subject to the relation
$$A_j=t^{\epsilon}A_k+(1-t^{\epsilon})A_i$$
for each Wirtinger relation $x_j=x_i^{\epsilon}x_kx_i^{-\epsilon}$.
We define an $n\times n$ matrix $\tilde\mathcal B$ as follows. The $j$-th row of
$\tilde\mathcal B$ has at most
two non-zero entries: for each relation $A_j=t^{\epsilon}A_k+
(1-t^{\epsilon})A_i$, when $k\neq i$, the $(j,k)$-entry is $t^{\epsilon}$
and the $(j,i)$-entry is $1-t^{\epsilon}$;
when $k=i$, the only non-zero entry is the $(j,k)$-entry, which is equal to 1.
Let $\mathcal B$ be the $(n-1)\times(n-1)$ matrix obtained from $\tilde\mathcal B$ by deleting
the first
row and the first column. Then
$\text{det}(I-\mathcal B)$ is the Alexander
polynomial of the knot $K$ (recall that the Alexander polynomial of
a knot is only defined up to powers of $t$).
In fact, this is always true no matter which
$j$-th row and column are deleted.
\subsection{A model of random walk on knot diagrams}
In our model of random walk on the knot diagram $K$, we take $\{A_1,A_2,\dots,A_n\}$ to be the
space of states. The transition matrix is simply $\tilde\mathcal B$.
This is obviously a
stochastic matrix
in the sense that the entries in each row add up to 1. In the case when all
crossings of $K$ are
positive ($K$ is a positive knot diagram), we get a genuine Markov chain
for each $t\in [0,1]$. Otherwise,
we may have
negative probabilities for negative crossings.
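For a positive diagram the stochastic property is easy to check numerically. The matrix below is our example for a standard positive trefoil diagram, whose rows implement $A_j=tA_k+(1-t)A_i$.

```python
import numpy as np

def trefoil_Btilde(t):
    """Jump-up transition matrix on a standard positive trefoil diagram
    (rows implement A_j = t A_k + (1 - t) A_i; all crossings positive)."""
    return np.array([[0.0, 1.0 - t, t],
                     [t, 0.0, 1.0 - t],
                     [1.0 - t, t, 0.0]])

for t in np.linspace(0.0, 1.0, 11):
    B = trefoil_Btilde(t)
    assert np.allclose(B.sum(axis=1), 1.0)  # stochastic: each row sums to 1
    assert (B >= 0).all()                   # genuine probabilities for t in [0, 1]
```

For a diagram with negative crossings, entries $1-t^{-1}$ are negative for $t\in(0,1)$, which is the source of the negative probabilities mentioned above.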
In this model of random walks on $K$, a path from $A_i$ to $A_j$ is
a sequence of transitions of states from $A_i$ to $A_j$.
Each such path is associated with a weight (\lq\lq probability"), which is the
product of \lq\lq transition
probabilities" along this path.
Pick a state, say $A_1$, consider paths
from $A_1$ to itself
which will not contain $A_1$ at any intermediate stage, i.e. we consider paths
of first return from $A_1$.
Equivalently, we may regard $A_1$ as being
broken into two states $A'_1$
and $A''_1$, one initial and one terminal.
This can be done by breaking
the arc $x_1$ into two arcs $x'_1$ and $x''_1$ and changing
the knot $K$ into a
1-string link $T$.
Then we consider all paths on $T$
from $A'_1$ (the bottom of $T$) to $A''_1$ (the top of $T$).
\begin{pro} The summation of weights over
all paths on $T$ from $A'_1$ to $A''_1$ is equal to 1.
\end{pro}
\begin{proof} Calculating
the sum of weights of all paths from $A'_1$ to $A''_1$ amounts to solving the
system of
linear equations
$$A_j=t^{\epsilon}A_k+(1-t^{\epsilon})A_i$$
for $A''_1$ with $A'_1=1$ given. We have the unique solution $A''_1=1$. For
more details of the proof,
see \cite{LTW}.
\end{proof}
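The proposition can also be checked symbolically by the standard absorbing-chain computation; the sketch below (our example, for a standard positive trefoil diagram with the state $A_0$ broken into a source and a sink) sums the weights of all first-return paths.

```python
import sympy as sp

t = sp.symbols('t')
# jump-up transition matrix for a standard positive trefoil diagram
Btilde = sp.Matrix([[0, 1 - t, t],
                    [t, 0, 1 - t],
                    [1 - t, t, 0]])

# break A_0 into a source and a sink: the total first-return weight is
# B[0,0] plus the weight of all excursions through the interior {A_1, A_2}
Q = Btilde[1:, 1:]
r = Btilde[0, 1:]                  # source -> interior states
c = Btilde[1:, 0]                  # interior states -> sink
total = Btilde[0, 0] + (r * (sp.eye(2) - Q).inv() * c)[0]
assert sp.simplify(total) == 1
```

Note that $I-Q$ is exactly $I-\mathcal B$, so the Alexander polynomial ($t^2-t+1$ for the trefoil) appears as the denominator of the excursion weights and cancels in the total.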
We have the following theorem.
\begin{thm}
1. Let $K$ be a positive knot diagram with $n$ arcs.
Then for every pair $(i,j)$, there is an integer $m\leq n$, such that the
$(i,j)$-entry of the matrix $\tilde\mathcal B^m$ is positive.
Hence, the Markov chain is irreducible.
2. Let $p^{(k)}_{i,j}$ be the $(i,j)$-th entry of
$\tilde\mathcal B^k$. For each $t\in [0,1]$ and $i,j$,
$\sum_{k=1}^{\infty} p^{(k)}_{i,j}=\infty$.
Hence each state is persistent.
\end{thm}
\begin{proof} 1. This is true because we can travel along the knot
from any state $A_i$ to $A_j$ in $\leq n$ steps.
2. If $i=j$, by Proposition 2.1,
if we sum the weights of all the $k$-th return paths for $1\leq k\leq n$, the
sum is $n$. For $i\neq j$, the sum $\sum_{k=1}^{n} p^{(k)}_{i,j}>n$.
\end{proof}
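Both parts of the theorem are easy to observe numerically on a small example; the sketch below (ours) uses the trefoil transition matrix at $t=1/2$.

```python
import numpy as np

# trefoil transition matrix at t = 1/2 (n = 3 arcs, all crossings positive)
B = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# irreducibility: some power m <= n already has all entries positive
P, m = B.copy(), 1
while not (P > 0).all():
    P, m = P @ B, m + 1
print(m)                                   # 2 <= n = 3

# persistence: partial sums of the p^(k)_{i,j} grow without bound
S = sum(np.linalg.matrix_power(B, k) for k in range(1, 200))
assert S.min() > 10
```

Here $B^k$ converges to the uniform matrix with entries $1/3$, so each partial sum grows linearly in the number of terms, in line with the divergence claimed in the theorem.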
Imagine that a ball travels on the knot diagram in the
direction specified by the orientation of the knot.
It will make a choice when it comes to an $\epsilon$-crossing from
the under-crossed segment: it may either jump up with probability
$1-t^{\epsilon}$ and keep traveling on the over-crossed segment or
keep traveling with probability $t^{\epsilon}$ on the under-crossed segment.
This is an intuitive picture of our model of random walk on knot diagrams.
We will call this model the ``jump-up'' model. There is also a ``dual'' model
of jump-down random walk on knot diagrams. In this model, one needs to
make a choice at the over-crossed segment of a crossing: jump-down or keep
traveling. There are some delicate connections and differences between these
two models which we will not discuss here. We only notice that the two
random walk models correspond to different
choices of base points in the Wirtinger presentation.
\subsection{State sum for the Jones polynomial}
State sum models on knot diagrams is one of the main tools attained in the
development of topological quantum field theories. The
state model we will use for the Jones polynomial
is given by Turaev in \cite{T} based on earlier constructions of Jones.
For this model, we need an $R$-matrix.
The $R$-matrix of $\mathfrak {sl}(2)$ with
respect to the fundamental
representation is given as follows (with $\bar{q}=q^{-1}$ and $\bar{R}=
R^{-1}$):
$$\begin{aligned}
&R_{0,0}^{0,0}=R_{1,1}^{1,1}=-q,\,R_{0,1}^{1,0}=R_{1,0}^{0,1}=1,
\,R_{0,1}^{0,1}=\bar{q}-q,\\
&\bar{R}_{0,0}^{0,0}=\bar{R}_{1,1}^{1,1}=-\bar{q},
\,\bar{R}_{0,1}^{1,0}=\bar{R}_{1,0}^{0,1}=1,\,\bar{R}_{1,0}^{1,0}=q-\bar{q},
\end{aligned}$$
and all other entries of the $R$-matrix are zero.
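As a sanity check (ours, not from the text), one can assemble the two $4\times4$ matrices on the ordered basis $(00,01,10,11)$ and verify that they are indeed inverse to each other.

```python
import sympy as sp

q = sp.symbols('q')
qb = 1 / q

def mat(entries):
    """4x4 matrix on the ordered basis (00, 01, 10, 11);
    entries maps (input pair, output pair) -> coefficient R_{ab}^{cd}."""
    basis = [(0, 0), (0, 1), (1, 0), (1, 1)]
    M = sp.zeros(4, 4)
    for (ab, cd), v in entries.items():
        M[basis.index(cd), basis.index(ab)] = v
    return M

R = mat({((0, 0), (0, 0)): -q, ((1, 1), (1, 1)): -q,
         ((0, 1), (1, 0)): 1, ((1, 0), (0, 1)): 1,
         ((0, 1), (0, 1)): qb - q})
Rbar = mat({((0, 0), (0, 0)): -qb, ((1, 1), (1, 1)): -qb,
            ((0, 1), (1, 0)): 1, ((1, 0), (0, 1)): 1,
            ((1, 0), (1, 0)): q - qb})
assert sp.simplify(R * Rbar) == sp.eye(4)   # R and Rbar are mutually inverse
```

The only non-trivial cancellation occurs in the middle $2\times2$ block, where the off-diagonal terms $(\bar q - q)+(q-\bar q)$ vanish.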
In this model, we consider the
1-string link $T$ as a planar graph by
looking at its projection. A state $s$ is an assignment of 0 or 1 to each
edge of the graph. For each
vertex (crossing) $v$, if $a,b,c,d$ are edges incident to $v$, define
$$\pi_v(s)=(R^{\epsilon})_{s(a),s(b)}^{s(c),s(d)},$$
where $\epsilon$ is the sign of the crossing $v$, $a,b$ are incoming edges and
$c,d$ are outgoing edges.
A state $s$ is admissible
if $\pi_v(s)\neq 0$ for all vertices $v$, and the initial and terminal edges
have the same assignment.
The set of all admissible states will be denoted by
$\text{adm}(T)$. We have
$$\text{adm}(T)=\text{adm}_0(T)\amalg\text{adm}_1(T)$$
where $\text{adm}_i(T)$, $i=0,1$, is the set of admissible states with $s=i$ on
the initial and terminal edges of $T$. For each admissible state $s$, define
$$\Pi(s)=\prod_{v:\,\text{vertices}}\pi_v(s).$$
Given a 1-string link diagram $T$, let
$K$ be a closure of $T$ to a knot diagram without
introducing any additional crossings, and let $s\in\text{adm}_i(T)$,
$i=0,1$, be a state. The state $s$ on $T$ extends naturally to a state on the
knot diagram $K$. There are
quite a few quantities associated with $T$ or the pair $(T,s)$.
We will define them here, and these notations will be in force throughout
this paper. Also, we will use dashed lines for edges having the assignment
0 in the state $s$ and solid lines for edges having assignment 1.
First we define a modification of diagrams according to a state.
{\em A smoothing of $(T,s)$ or $(K,s)$} is the modification
of the diagram by smoothing
the crossings marked as
$${\epsfysize=0.12 truein \epsfbox{0000.eps}},\,\,
{\epsfysize=0.12 truein \epsfbox{0101.eps}},\,\,
{\epsfysize=0.12 truein \epsfbox{1010.eps}},\,\,
{\epsfysize=0.12 truein \epsfbox{1111.eps}},$$
we get a collection of circles and an arc in the case
of $(T,s)$, and only circles in the case of $(K,s)$.
Each circle or arc is marked by $0$ or $1$.
\begin{enumerate}
\item {\em The writhe of $T$:} Denote
by $\omega(T)$ the writhe, i.e. the summation of
signs over all crossings of $T$.
\item {$\beta_i(s), i=0,1$:} Denote by $\beta_i(s)$ the sum of signs
of crossings whose incident edges all marked by $i$ in $s$.
\item {\em Rotation numbers}, $\text{rot}(T), \text{rot}_i(K,s),
\text{rot}_i(T,s)$: Smoothing all
crossings of $T$, we get a collection of oriented circles in the plane
(together with an oriented arc), and
$\text{rot}(T)$ is defined to be the sum of rotation numbers
(Whitney's indices) of these
circles; For the smoothing of $(T,s)$,
the circles are divided into two collections marked by
$0$ or $1$ respectively, and $\text{rot}_i(T,s)$ is defined to be the
sum of rotation numbers of the circles marked by $i$;
The definition of $\text{rot}_i(K,s)$ is similar to that of
$\text{rot}_i(T,s)$, only that the smoothing of $(K,s)$ has one more circle
than that of $(T,s)$.
\end{enumerate}
For the Jones polynomial $J(K)$, Turaev's state model gives the following
formula:
$$J(K)=(-q^2)^{-\omega(T)}\sum_{s\in\text{adm}(T)}q^{\text{rot}_0(K,s)-
\text{rot}_1(K,s)}\,\Pi(s) .$$
This formula for the Jones polynomial has the value $q+\bar{q}$ on the unknot,
and the standard variable of the
Jones polynomial is $t={\bar q}^2$. It is determined by the following
crossing change formula:
$$\bar t\,J(K_+)- t\,J(K_-)=({\bar t}^{\frac12}-t^{\frac12})\,J(K_0).$$
\noindent{\bf Remark:} This formula is derived from Theorem 5.4 in
\cite{T}. The only nontrivial fact is our computation of
$\int_{D}f$ in the formula which is
$q^{\text{rot}_0(K,s)-\text{rot}_1(K,s)}$ in our
notations. To be more specific, our colors 0,1 correspond to
the colors 1,2 in \cite{T}, respectively. Also our conventions for rotation
numbers are different. Our convention is that the clockwise oriented
circle has $\text{rot}=-1$, while the counterclockwise one has $\text{rot}=1$.
Now let us interpret the state sum from the point of view of random walks
on knot diagrams.
First we take a look at the following table:
\medskip
\centerline{\begin{tabular}{r|ccccccc}\hline
$\mathfrak{sl}(2)\,\,\,$ & $\,\,\,
{\epsfysize=0.12 truein \epsfbox{0000.eps}}\,\,\,$ & $\,\,\,
{\epsfysize=0.12 truein \epsfbox{0101.eps}}\,\,\,$ & $\,\,\,
{\epsfysize=0.12 truein \epsfbox{0110.eps}}\,\,\,$ & $\,\,\,
{\epsfysize=0.12 truein \epsfbox{1001.eps}}\,\,\,$ & $\,\,\,
{\epsfysize=0.12 truein \epsfbox{1010.eps}}\,\,\,$ & $\,\,\,
{\epsfysize=0.12 truein \epsfbox{1111.eps}}\,\,\,$
&\,\,\,model\,\,\ \\ \hline
${\epsfysize=0.12 truein \epsfbox{plus.eps}}
\,\,\,$ & $-q$ & $\bar{q}-q$ & 1 & 1 & 0 & $-q$ &\\ \hline
$-\bar q\,{\epsfysize=0.12 truein \epsfbox{plus.eps}}
\,\,\,$ & $1$ & $1-{\bar q}^2$ & ${\bar q}^2\cdot(-q)$ & $(-\bar q)$ & 0 &
${\bar q}^2q^2$
&up\\ \hline
$-\bar q\,{\epsfysize=0.12 truein \epsfbox{plus.eps}}
\,\,\,$ & ${\bar q}^2q^2$ & $1-{\bar q}^2$ & $(-\bar q)$ & ${\bar q}^2\cdot(-q)$ & 0 & $1$
&down\\ \hline
${\epsfysize=0.12 truein \epsfbox{minus.eps}}
\,\,\,$ & $-\bar{q}$ & 0 & 1 & 1 & $q-\bar{q}$ & $-\bar{q}$ &\\ \hline
$-q\,{\epsfysize=0.12 truein \epsfbox{minus.eps}}
\,\,\,$ & $1$ & 0 & $(-q)$ & $q^2\cdot(-\bar q)$ &
$1-q^2$ & $q^2{\bar q}^2$
&up\\ \hline
$-q\,{\epsfysize=0.12 truein \epsfbox{minus.eps}}
\,\,\,$ & $q^2{\bar q}^2$ & 0 & $q^2\cdot(-\bar q)$ & $(-q)$ &
$1-q^2$ & $1$
&down\\ \hline
\end{tabular}}
\medskip
Here, as before, a dashed edge has the assignment 0 and a solid edge has
the assignment 1.
The entry at the row ${\epsfysize=0.12 truein \epsfbox{plus.eps}}$
(or $x\,{\epsfysize=0.12 truein \epsfbox{plus.eps}}$) and the column
${\epsfysize=0.12 truein \epsfbox{0101.eps}}$ is $R_{0,1}^{0,1}$
(or $xR_{0,1}^{0,1}$), etc. The last
column indicates two random walk models
for this state sum. The two rows marked by ``up'' in the last column
compare entries of the $xR$ with the weights of
the jump-up model, and the two rows marked by ``down'' compare entries of
$xR$ with weights of the jump-down model.
Given a state $s\in\text{adm}_0(T)$, think of the edges with assignments $1$ as
a collection of cycles traveled by several balls in the jump-up model.
Note that their paths may cross transversely but will not pass through the
same edge twice. Conversely, if we simultaneously have a few balls traveling
on $T$ avoiding the two open arcs, they do not travel over the same edge but
may cross transversely, we get a state $s\in\text{adm}_0(T)$
by assigning 1 to all the traveled edges, and $0$ otherwise.
With such a one-one correspondence, for a state $s\in\text{adm}_0(T)$,
we denote by $W_1^{\circ}(s)$ the
product of weights of the collection of cycles
formed by edges marked by 1 as cycles in
the jump-up model with
$t={\bar q}^2$.
The case of jump-down model is similar, and it corresponds to
states in $\text{adm}_1(T)$. Given such a state $s$,
the collection of cycles formed by edges marked by 0 are
thought of as cycles in the jump-down model of random walks, and $W_0^{\circ}(s)$
denotes the product of weights.
\begin{lm}
In the $\mathfrak{sl}(2)$ state model, we have
$$
\begin{aligned}
&\Pi(s)=(-q)^{\omega(T)}q^{2\beta_1(s)}W_1^{\circ}(s)\qquad
\text{for $s\in\text{\rm adm}_0(T)$},\\
&\Pi(s)=(-q)^{\omega(T)}q^{2\beta_0(s)}W_0^{\circ}(s)\qquad
\text{for $s\in\text{\rm adm}_1(T)$}.
\end{aligned}
$$
\end{lm}
\begin{proof} We will show the case $i=0$. The other case
is completely similar.
The factor $(-q)^{\omega(T)}$ comes in since we multiply
each $R$-matrix entry at an $\epsilon$-crossing by $(-q^{-\epsilon})$.
The term $q^{2\beta_1(s)}$ comes in since we get an
extra factor $q^{2\epsilon}$ at a solid $\epsilon$-crossing in the
jump-up model.
Now using the rows marked by ``up''
in the table above, we need to show that the extra multiplicative
factors of $-q^{\pm1}$ inside the parentheses in the columns
${\epsfysize=0.12 truein \epsfbox{0110.eps}}$ and
${\epsfysize=0.12 truein \epsfbox{1001.eps}}$ cancel
out in the product $\Pi(s)$.
Notice that after the modification
of $T$ as we did before, the edges marked by 0 are decomposed into a
collection of cycles and an arc, having
transverse intersections with the cycles formed by edges marked
by 1. The intersections between a
cycle marked by 1 and a cycle or the path marked by 0 can be paired
up. Consider
two cases according to whether
such a pair makes a contribution to the linking number. In both cases, we see
that the extra multiplicative
factors of $- q^{\pm1}$ cancel out.
\end{proof}
Denote
$$
\begin{aligned}
\,&\int_0^0(T)=(-q^2)^{-\omega(T)}\sum_{s\in\text{adm}_0(T)}
q^{\text{rot}_0(T,s)-\text{rot}_1(T,s)}\,\Pi(s),\\
&\int^1_1(T)=(-q^2)^{-\omega(T)}\sum_{s\in\text{adm}_1(T)}
q^{\text{rot}_0(T,s)-\text{rot}_1(T,s)}\,\Pi(s).
\end{aligned}
$$
\begin{lm} We have $\int^0_0(T)=\int^1_1(T)$ and $J(K)=(q+\bar q)\int^0_0(T)=
(q+\bar q)\int^1_1(T).$
\end{lm}
\begin{proof} There are two ways to close up $T$, both giving the same knot $K$.
Thus, we have
$$J(K)=q\int^0_0(T)+\bar q \int^1_1(T)=\bar q\int^0_0(T)+q\int^1_1(T)$$
and this implies the conclusions of the lemma.
\end{proof}
\subsection{Toward a relationship between Jones polynomial and zeta functions}
Various kinds of zeta functions are basically all about counting cycles.
We may also express the Jones polynomial in terms of counting
``simple families of cycles'' with weights in our model of random walk
on a $1$-string link $T$.
Combining previous lemmas, we get the following formula for the
Jones polynomial.
\begin{lm} Let $K$ be the closure of a 1-string link $T$,
$$\begin{aligned}
J(K)
&=(q+\bar q)\,q^{-\omega(T)+\text{\rm rot}(T)}\sum_{s\in\text{\rm adm}_0(T)}
q^{2(\beta_1(s)-\text{\em rot}_1(T,s))}W_1^{\circ}(s)\\
&=(q+\bar q)\,q^{-\omega(T)-\text{\rm rot}(T)}\sum_{s\in\text{\rm adm}_1(T)}
q^{2(\beta_0(s)+\text{\rm rot}_0(T,s))}W_0^{\circ}(s).
\end{aligned}
$$
\end{lm}
\begin{proof} It is not hard to see that $\text{\rm rot}_0(T,s)+
\text{\rm rot}_1(T,s)$ is independent of the state $s$. It is equal to the
sum of rotation
numbers of circles obtained by smoothing all crossings of $T$, i.e. the
rotation number $\text{rot}(T)$ of $T$ by definition.
\end{proof}
To see how the Jones polynomial is related to the Alexander polynomial,
let us describe an expansion of the inverse of the Alexander polynomial.
Consider all cycles in
our model of random walk which avoid the first arc $A_1$ on the knot diagram.
Let $\mathcal{Q}$ be the set
of all such cycles which are primitive, i.e. they are
not powers of any other cycles.
Recall that $\text{det}\,(I-\mathcal{B})$ is, up to
a factor of a power of $t$,
the Alexander polynomial of the
knot in question.
Given a cycle $c$, we will use $W(c)$ to denote its weight. Then
$$(\text{det}\,(I-\mathcal{B}))^{-1}=
\prod_{c\in\mathcal{Q}}(1-W(c))^{-1}=1+\sum_{k=1}^\infty\,\sum_{(c_1,\dots,
c_k)\in\mathcal{Q}^k}\,
W(c_1)\cdots W(c_k).$$
This is the Foata-Zeilberger formula we mentioned in the introduction. For
the convenience of readers, an exposition of this formula will be given in
Section 4.
A $k$-tuple of cycles in ${\mathcal{Q}}^k$ is called {\it simple} if
no edges are shared by cycles in this $k$-tuple. Let $\mathcal{Q}^t$
be the set of all simple $k$-tuples of cycles, for $k=1,2,\dots$.
Given $c\in {\mathcal{Q}}^t$, let $\beta_1(c)$ be the number of crossings
in $c$, and $\text{rot}(c)$ be the rotation number of $c$. Note they are
the same as the $\beta_1$ and $\text{rot}_1$ of the corresponding
state in $\text{adm}_0(T)$. Finally, in order to have a one-one correspondence
between $\text{adm}_0(T)$ and $\mathcal{Q}^t$, we have to modify $T$ slightly.
The simplest way is to add a negative kink with rotation number $-1$ to the
bottom of $T$ and a positive kink with rotation number 1 to the top of $T$.
We will denote by ${\mathcal Q}_*^t$ the set of all simple $k$-tuples of
cycles in the jump-down model.
\begin{thm}\label{jexp} With the 1-string link $T$ appropriately chosen as
described above, we have
$$\begin{aligned}
\frac{J(K)}{q+\bar q}
&=t^{\frac{\omega(T)-\text{\rm rot}(T)}{2}}(1+\sum_{c\in{\mathcal{Q}}^t}
t^{\,\text{\em rot}(c)-\beta_1(c)}W(c))\\
&=t^{\frac{\omega(T)+\text{\rm rot}(T)}{2}}(1+\sum_{c\in{\mathcal Q}_*^t}
{\bar t}^{(\,\text{\em rot}(c)+\beta_0(c))}W(c)).
\end{aligned}
$$
\end{thm}
Comparing with the expansion of the Alexander polynomial, we see that the
Jones polynomial $J(K)$ uses the
summands $W(c_1)\cdots W(c_k)$ where no edges are repeated in the collection
of cycles $c_1,\dots,c_k$.
A simple idea is that
collections of cycles with repeated edges
in the expansion of the Alexander polynomial might be lifted to
collections of simple cycles on the cabling of $T$.
This idea is realized in Theorem~\ref{mm}.
In the next section, we will first generalize our discussion
about the Jones polynomial to the colored Jones polynomial.
\section{Limit of the colored Jones polynomial}
\subsection{State sum for the colored Jones polynomial}
The set of finite dimensional irreducible representations of
$\mathfrak{sl}(2)$ (or rather, the quantum group $U_q\mathfrak{sl}(2)$)
can be listed as $V_1,V_2,V_3,\dots ,$
where $V_d$ is $d$-dimensional. The fundamental representation is $V_2$, which
is the one used to construct the
Jones polynomial $J(K)$. Other representations can also be used to construct
knot polynomials. The knot polynomial
obtained by \lq\lq coloring" the (zero framed) knot $K$ with the
irreducible representation $V_d$
is called the colored Jones polynomial $J(K,V_d)$
\cite{RT}. We have $J(K,V_1)=1$ and $J(K,V_2)=J(K)$. And if $K$ is the unknot,
$$J(K,V_d)=[d]=\frac{q^d-\bar{q}^d}{q-\bar{q}}.$$
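As a quick numerical sanity check of this definition (an illustrative Python snippet, not part of the paper's formalism), the balanced quantum integer $[d]=(q^d-\bar q^d)/(q-\bar q)$ with $\bar q=q^{-1}$ agrees with the Laurent sum $q^{d-1}+q^{d-3}+\cdots+q^{1-d}$:

```python
def qint(d, q):
    """Balanced quantum integer [d] = (q^d - q^{-d}) / (q - q^{-1})."""
    return (q**d - q**(-d)) / (q - 1.0 / q)

def qint_sum(d, q):
    """Equivalent Laurent sum q^{d-1} + q^{d-3} + ... + q^{1-d}."""
    return sum(q**(d - 1 - 2 * k) for k in range(d))

q = 2.0  # a generic (non-root-of-unity) numeric value
for d in range(1, 9):
    assert abs(qint(d, q) - qint_sum(d, q)) < 1e-9
```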
We may also color $K$ by non-irreducible representations, for example, by
$V_2^{\otimes d}$. Such a colored Jones polynomial can be interpreted in
two ways:
\begin{enumerate}
\item Assume that $K$ has zero framing, let $K^d$ be the link obtained by
replacing $K$ with $d$ parallel copies
(this is the zero framing cabling operation), then $J(K,V_2^{\otimes d})=
J(K^d).$
\item We have the following relation in the representation ring of $\mathfrak
{sl}(2)$: $V_2\otimes V_d=V_{d+1}\oplus
V_{d-1}$. Thus, $V_2^{\otimes d}$ is a linear combination
of the irreducible modules $V_{d+1}$,
$V_{d-1}$, $V_{d-3}$, ... and
$J(K,V_2^{\otimes d})$ is the same linear combination of $J(K,V_{d+1})$,
$J(K,V_{d-1})$, $J(K,V_{d-3})$, ....
\end{enumerate}
These two interpretations can be used to establish a precise relation between
the colored Jones polynomials and
the cablings of the Jones polynomial. We quote from \cite{KM} such a relation
in the case considered here:
$$J(K,V_{d+1})=\sum_{j=0}^{\lfloor d/2\rfloor}\,(-1)^j\,\begin{pmatrix} d-j\\ j\end{pmatrix}
\, J(K^{d-2j})\,.$$
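One can sanity-check this cabling relation numerically in the simplest case of the unknot, where $J(K^{n})=[2]^{n}$ and $J(K,V_{d+1})=[d+1]$, so the relation reduces to a Chebyshev-type identity in $[2]$ (an illustrative snippet, with the upper summation limit read as $\lfloor d/2\rfloor$):

```python
from math import comb

def qint(n, q):
    # balanced quantum integer [n]
    return (q**n - q**(-n)) / (q - 1.0 / q)

q = 2.0  # generic numeric value of the parameter
for d in range(0, 10):
    lhs = qint(d + 1, q)                      # J(unknot, V_{d+1}) = [d+1]
    rhs = sum((-1)**j * comb(d - j, j) * qint(2, q)**(d - 2 * j)
              for j in range(d // 2 + 1))     # J(unknot^n) = [2]^n
    assert abs(lhs - rhs) < 1e-6
```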
The decomposition $V_2\otimes V_d=V_{d+1}\oplus V_{d-1}$ can be given
explicitly in terms of the standard bases of these
irreducible representations \cite{KR}.
Suppose the
standard basis of $V_2$ is $\{e_0,e_1\}$, and the standard basis of $V_{d+1}$
is $\{f_0,f_1,\dots,f_d\}$, then we have
$$\begin{aligned}
\,&f_0=a\cdot e_0\otimes e_0\otimes\cdots\otimes e_0\in V_2^{\otimes d},\\
&f_d=b\cdot e_1\otimes e_1\otimes\cdots\otimes e_1\in V_2^{\otimes d},
\end{aligned}$$
where $a,b$ are products of $q$-analogues of Clebsch-Gordan coefficients
\cite{KR}.
For a 1-string link $T$, if it is colored by $V_{d+1}$, we get an invariant
$F(T)$ which is a $U_q\mathfrak{sl}(2)$-morphism of $V_{d+1}$. Since $V_{d+1}$
is an irreducible $U_q\mathfrak{sl}(2)$-module, we have
$$F(T)(f_i)=\lambda\,f_i,\qquad i=0,1,\dots,d.$$
Furthermore, let $K$ be the closure of $T$, then
$$J(K,V_{d+1})=[d+1]\cdot\lambda.$$
On the other hand, if we color $T$ by $V_2^{\otimes d}$, we may write the
induced
$U_q\mathfrak{sl}(2)$-morphism $F(T)$ on $V_2^{\otimes d}$ as follows:
$$F(T)(e_{i_1}\otimes\cdots\otimes e_{i_d})=
\sum_{j_1,\dots,j_d}\,\int_{i_1\cdots i_d}
^{j_1\cdots j_d}(T)\,e_{j_1}\otimes\cdots\otimes e_{j_d}.$$
Thus, the following lemma holds, which generalizes Lemma 2.4.
\begin{lm} We have $\int_{0\cdots0}^{0\cdots0}(T)=
\int_{1\cdots1}^{1\cdots1}(T)=\lambda$
and
$J(K,V_{d+1})=[d+1]
\,\int_{0\cdots0}^{0\cdots0}(T)=[d+1]\,\int_{1\cdots1}^{1\cdots1}(T)$.
\end{lm}
We can now extend Theorem~\ref{jexp}
to $J(K,V_{d+1})$. Notice that we assume the
writhe $\omega(T)=0$ and that $T^d$ is the zero-framing $d$-cabling of $T$. We denote
by $\text{adm}_0(T^d)$ the set of admissible states on $T^d$ which assign 0 to
all the top and bottom edges. The notation $\text{adm}_1(T^d)$ has the
obvious
meaning.
\begin{lm}\label{color} With the notations as above, we have
$$\begin{aligned}
J(K,V_{d+1})
&=[d+1]\,q^{\text{\rm rot}(T^d)}\sum_{s\in\text{\rm adm}_0(T^d)}
q^{2(\beta_1(s)-\text{\em rot}_1(T^d,s))}W_1^{\circ}(s)\\
&=[d+1]\,{\bar q}^{\text{\rm rot}(T^d)}\sum_{s\in\text{\rm adm}_1(T^d)}
q^{2(\beta_0(s)+\text{\rm rot}_0(T^d,s))}W_0^{\circ}(s).
\end{aligned}
$$
\end{lm}
\begin{proof} Applying Turaev's state model to the tangle $T^d$, we get
$$\int_{i_1\cdots i_d}^{j_1\cdots j_d}(T)=(-q^2)^{-\omega(T^d)}\sum_{s\in\text{adm}_*(T^d)}
\,q^{\text{rot}_0(T^d,s)-\text{rot}_1(T^d,s)}\Pi(s)$$
where $\text{adm}_*(T^d)$ is the set of admissible states on $T^d$ such that
the bottom edges are assigned with $i_1,\dots,i_d$ and top edges with
$j_1,\dots,j_d$, respectively. Then we can translate this expression for
$\int_{i_1\cdots i_d}^{j_1\cdots j_d}(T)$ into the form that appears in
Lemma~\ref{color} as we did in Section 2.4.
\end{proof}
Now the corresponding formula
for $J(K,V_{d+1})$ of Theorem~\ref{jexp} is
obtained from Theorem~\ref{jexp} by replacing
$q+\bar q=[2]$ by $[d+1]$.
\subsection{Computation of the limit}
In this section, we prove our main theorem which calculates
the limit of the renormalized colored Jones polynomials
when the color
parameter tends to infinity and the weight parameter tends to 1.
\begin{thm}\label{mm}
Let $T$ be a 0-framed 1-string link, modified appropriately as in
Theorem~\ref{jexp}, and $K$ be the closure of $T$. Denote by $\mathcal Q$
(${\mathcal Q}_*$)
the set of primitive cycles in the jump-up (jump-down) model of random walk
on $T$ with $t=e^{-2h}$. Then
$$\begin{aligned}
\lim_{d\rightarrow\infty}\frac{J(K,V_{d+1})(e^{\frac{h}{d}})}{[d+1]}
&={\bar t}^{\,\frac{\text{\rm rot}(T)}2}\,\left(1+\sum_{k=1}^\infty\,\sum_{(c_1,c_2,\dots,c_k)\in
{\mathcal Q}^k}\,W(c_1)W(c_2)\cdots W(c_k)\right)\\
&={t}^{\,\frac{\text{\rm rot}(T)}2}\,\left(1+\sum_{k=1}^\infty\,\sum_{(c_1,c_2,\dots,c_k)\in
{\mathcal Q}_*^k}\,W(c_1)W(c_2)\cdots W(c_k)\right).
\end{aligned}
$$
\end{thm}
\begin{proof}
Using the expansion of the colored Jones polynomials, it
suffices to show that the weight of a collection of cycles
$(c_1, c_2, \dots , c_k)$ on $T$ on the right-hand side
is the limit of the total weight of its liftings to $T^d$ for large $d$.
Let us compare the two jump-up models of random walks on $T$
and $T^d$ with $t=e^{-2h}$,
and with $t=e^{-\frac{2h}{d}}$, respectively.
Consider first a simple cycle $c$
on $T$. Recall that this is a cycle on $T$ with no edges repeated.
There are many ways to lift $c$ to become a simple cycle $\tilde c$
on $T^d$. The reason for this multiplicity is that for each jump-up on $c$,
we can choose one of the $d$ over-crossed segments to jump up on $T^d$. In fact,
if there are $m$ jump-ups on $c$, there will be $d^m$ lifts $\tilde c$ on
$T^d$. We need to calculate $\sum\,W(\tilde c)$, a sum over all liftings of
$c$. For a jump-up at a positive crossing on $c$, we get a (multiplicative)
contribution
$1-e^{-2h}$ to $W(c)$. The corresponding contribution to $\sum\,W(\tilde c)$
is a multiplicative factor
$$(1-e^{-\frac{2h}{d}})(1+e^{-\frac{2h}{d}}+e^{-\frac{4h}{d}}+\cdots+
e^{-\frac{2(d-1)h}{d}})=1-e^{-2h}.$$
Also, passing through an under-crossing on $c$ contributes $e^{-2h}$ to $W(c)$
and the corresponding contribution of $\tilde c$ is
$$(e^{-\frac{2h}{d}})^d=e^{-2h}.$$
Thus we have
$$\sum\,W(\tilde c)=W(c).$$
Obviously,
$\beta_1(\tilde c)$ and $\text{rot}_1(T^d,\tilde c)$ depend only on $c$.
We also notice that
$\text{rot}(T^d)=d\,\text{rot}(T)$. Thus,
$$\begin{aligned}
\lim_{d\rightarrow\infty}&\,(e^{\frac{2h}{d}})^{\,\frac{\text{rot}(T^d)}{2}}
\sum_{\tilde c}
(e^{-\frac{2h}{d}})^{\text{rot}_1(T^d,\tilde c)-\beta_1(\tilde c)}
W(\tilde c)\\
&=(e^{2h})^{\,\frac{\text{\rm rot}(T)}2}W(c).
\end{aligned}
$$
Notice that the same argument holds true for a simple collection of cycles
on $T$.
In general, given a non-simple collection of cycles $c$ on $T$, we decorate
each edge by an integer which is the number of times $c$ travels over
that edge. There are only finitely many collections of cycles on $T$ with a
fixed decoration. For $d$ sufficiently large, we can lift $c$ to a simple
collection of cycles on $T^d$. To get such a lifting, we will not have the
freedom of jumping up onto any of the $d$ over-crossed segments at a crossing.
A particular jump-up at a crossing $X$ on $T$ has at most $d$ liftings.
For some other
jump-up onto the segment going over $X$, we have to avoid
the over-crossed segments jumped onto previously. There are at most $d$ possible collisions
for the liftings of these two jump-ups. Since
$$\lim_{d\rightarrow\infty}\,(1-e^{\pm\frac{2h}{d}})(1-e^{\pm\frac{2h}{d}})d=0,$$
we conclude that in the limit when $d\rightarrow\infty$,
the sum of the weights of all
non-simple liftings of $c$ is zero. We may just do our calculation
as if there are only simple liftings. Thus,
the same calculation as we did before leads to
$$\lim_{d\rightarrow\infty}\sum_{\text{$\tilde c$ simple}}\,W(\tilde c)
=W(c).$$
Finally,
$\beta_1(\tilde c)$ and
$\text{rot}_1(T^d,\tilde c)$ are bounded by quantities depending only on $c$.
Thus, we get
$$\lim_{d\rightarrow\infty}\frac{J(K,V_{d+1})(e^{\frac{h}{d}})}{[d+1]}
={\bar t}^{\frac{\text{\rm rot}(T)}2}\,\left(1+\sum_{k=1}^\infty\,\sum_{(c_1,c_2,\dots,c_k)\in
{\mathcal Q}^k}\,W(c_1)W(c_2)\cdots W(c_k)\right).
$$
This finishes the proof of Theorem~\ref{mm}.
\end{proof}
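The two elementary identities used in the proof (the telescoping sum over the $d$ possible over-crossed segments, and the $d$-th power of the per-strand factor) can be checked numerically with an illustrative snippet:

```python
import math

h, d = 0.7, 64
x = math.exp(-2 * h / d)   # per-strand factor, i.e. t^{1/d} with t = e^{-2h}

# Summing over the d choices of over-crossed segment telescopes:
lhs = (1 - x) * sum(x**k for k in range(d))
assert abs(lhs - (1 - math.exp(-2 * h))) < 1e-12

# Passing under all d parallel strands restores the original factor:
assert abs(x**d - math.exp(-2 * h)) < 1e-12
```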
\section{Ihara-Selberg zeta function and Melvin-Morton conjecture}
\subsection{Lyndon words and the Foata-Zeilberger formula}
Let us recall the notion of Lyndon words and some results in \cite{FZ}. For
references to quoted results in this section, see \cite{FZ}.
Given a finite nonempty set $X$ whose elements are totally ordered,
we consider the monoid $X^*$ generated by $X$. Let $<$ be the lexicographic
order on $X^*$ derived from the
total order on $X$. A {\it Lyndon word} is defined to be a nonempty word in
$X^*$ which is prime, i.e. not the
power of any other word, and is minimal in the class of its cyclic
rearrangements. Let $L$ denote the set of all
Lyndon words. The following result is due to Lyndon.
\begin{lm} Each nonempty word $w\in X^*$ can be uniquely written as a
non-increasing juxtaposition of Lyndon words:
$$w=l_1l_2\cdots l_m,\qquad l_i\in L,\,\,\, l_1\geq l_2\geq\cdots\geq l_m.$$
\end{lm}
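Lyndon's factorization can be computed in linear time by Duval's standard algorithm; the following illustrative snippet (not part of \cite{FZ}) returns the non-increasing factorization of a word over a totally ordered alphabet:

```python
def lyndon_factorization(w):
    """Chen-Fox-Lyndon factorization of w as a non-increasing product of
    Lyndon words, computed in linear time by Duval's algorithm."""
    factors, i, n = [], 0, len(w)
    while i < n:
        j, k = i + 1, i
        while j < n and w[k] <= w[j]:
            k = i if w[k] < w[j] else k + 1
            j += 1
        while i <= k:
            factors.append(w[i:i + j - k])
            i += j - k
    return factors

assert lyndon_factorization("banana") == ["b", "an", "an", "a"]
assert "".join(lyndon_factorization("banana")) == "banana"
```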
Let $X$ be a finite set. Let $\mathcal B$ be a square matrix whose entries
$b(x,x')$ ($x,x'\in X$) form a set of commuting
variables. For each Lyndon word $l\in L$, we associate with it a variable
denoted by $[l]$. These variables
$[l]$, $l\in L$, are assumed to be all distinct and commute with each other.
Given a word $w=x_1x_2\cdots x_k$ in $X^*$, define
$$\beta_{\text{circ}}(w)=b(x_1,x_2)b(x_2,x_3)\cdots b(x_{k-1},x_k)b(x_k,x_1)$$
and $\beta_{\text{circ}}(w)=1$ if $w$ is the empty word. Notice that all the words in the
same cyclic rearrangement class have the
same $\beta_{\text{circ}}$-image. Also define
$$\beta([l])=\beta_{\text{circ}}(l)$$
for $l\in L$.
Now form the $\mathbb Z$-algebras of formal power series in the variables
$[l]$ and $b(x,x')$ respectively.
Extend $\beta$ to a continuous homomorphism between these two
$\mathbb Z$-algebras. It makes sense
to consider the product
$$\Lambda=\prod_{l\in L}(1-[l])$$
as well as its inverse $\Lambda^{-1}$. We have
$$\beta(\Lambda)=\prod_{l\in L}(1-\beta[l])$$
and
$$\beta(\Lambda^{-1})=(\beta(\Lambda))^{-1}.$$
For a nonempty word $w\in X^*$, let it be written as in Lemma 4.1. Then define
$$\beta_{\text{dec}}(w)=\beta_{\text{circ}}(l_1)\beta_{\text{circ}}(l_2)
\cdots\beta_{\text{circ}}(l_m).$$
If $w$ is empty, $\beta_{\text{dec}}(w)=1$. Finally, define
$$\beta_{\text{dec}}(X^*)=\sum_{w\in X^*}\,\beta_{\text{dec}}(w).$$
The following theorem of Foata and Zeilberger is what we need.
\begin{thm}\label{FZfor} {\em (Foata-Zeilberger formula)}
$\beta(\Lambda^{-1})=\beta_{\text{\rm dec}}(X^*)=(\text{\rm det}\,
(I-\mathcal{B}))^{-1}.$
\end{thm}
This is a generalization of the Bowen-Lanford formula \cite{BL},
which comes directly
from the identity $\text{det}(e^{A})=e^{\text{tr}A}$ for a matrix $A$.
\subsection{The Ihara-Selberg zeta function of a graph} The
Foata-Zeilberger formula in
Theorem~\ref{FZfor} is used in \cite{FZ} to derive one of
Bass' evaluations of the Ihara-Selberg zeta function for a graph \cite{B}. For
the reader's convenience, let
us first recall Ihara's formulation of the zeta function in the original
setting of Selberg (see \cite{B}).
Let $\Gamma<PSL_2(\mathbb{R})$ be a uniform lattice (= discrete cocompact
subgroup). An element $g\in\Gamma$ is hyperbolic if
$$l(g)={\text{min}}\{d\,(gx,x)\,;\,x\in\mathbb{R}^2_+\}>0\qquad \text{($d=$
Poincar\'e metric)}.$$
Let $\mathcal{P}$ be the set of $\Gamma$-conjugacy classes of primitive
hyperbolic elements
in $\Gamma$, then the Ihara-Selberg zeta function is
$$Z(s)=\prod_{g\in\mathcal{P}}(1-u^{l(g)})^{-1},\qquad u=e^{-s}.$$
Let $G$ be a directed graph with the set of edges $E(G)=\{e_1,e_2,\dots,
e_n\}$.
Let $S$ be an $n\times n$ matrix
whose $(i,j)$-entry is equal to 1 if the terminal point of $e_i$ is the same
as the initial point of $e_j$, and
0 otherwise. On $G$, we may consider primitive cycles, which are oriented
cycles formed by directed edges in the
usual sense and which are not powers of some other cycles. Let $\mathcal{C}$
be the set of
primitive cycles on $G$. The Ihara-Selberg zeta function of $G$ is
$$Z_G(u)=\prod_{c\in\mathcal{C}}(1-u^{|c|})^{-1},$$
where $|c|$ is the length of the cycle $c$ (= the number of edges in $c$).
The Foata-Zeilberger formula implies
$$Z_G(u)=(\text{det}\,(I-uS))^{-1}.$$
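This evaluation can be checked numerically on a small directed graph. The snippet below (illustrative only) uses the trace expansion $\log Z_G(u)=\sum_m \operatorname{tr}(S^m)\,u^m/m$, which organizes all closed paths by length, and compares the resulting series with $\det(I-uS)^{-1}$:

```python
import math

# Edge-adjacency matrix S of a small directed graph (the golden-mean shift);
# for this S one has det(I - uS) = 1 - u - u^2.
S = [[1, 1],
     [1, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

u = 0.2
P, log_z = [[1, 0], [0, 1]], 0.0
for m in range(1, 200):
    P = matmul(P, S)                          # P = S^m
    log_z += (P[0][0] + P[1][1]) * u**m / m   # tr(S^m) counts closed paths

assert abs(math.exp(log_z) - 1.0 / (1 - u - u * u)) < 1e-9
```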
If $G$ is an undirected graph, in \cite{B}, Bass transformed $G$ into a
directed graph $G'$ by
giving each edge of $G$ two different orientations and thinking of them as
different directed edges. To study
primitive, reduced cycles on $G$, where \lq\lq reduced" means that an edge
will not be traveled twice successively, Bass
looked at the matrix $T=S-J$, where $S$ is the matrix we defined in the
previous paragraph for $G'$ and $J$
is the matrix whose $(i,j)$-entry is 1 if the $i$-th and $j$-th edges of $G'$
come from the same edge of $G$,
and 0 otherwise. Now let $\mathcal{R}$ be the set of primitive, reduced
cycles on $G$, define
$$Z_G(u)=\prod_{c\in\mathcal{R}}\,(1-u^{|c|})^{-1}.$$
One of Bass' evaluations of $Z_G(u)$, which is now a consequence of
the Foata-Zeilberger formula, is
$$Z_G(u)= (\text{det}\,(I-uT))^{-1}.$$
The Foata-Zeilberger formula is general enough so that we may apply it to
Markov processes with a finite set of
states. A cycle now will be a sequence of transitions of states from
and back to a given one. In particular, in our model of random walk on a knot
diagram discussed in Sections 2.1
and 2.2, we have the set of states $\{A_1,A_2,\dots,A_n\}$ and the transition
matrix $\tilde{\mathcal{B}}$.
This case is degenerate since $\text{det}\,(I-\tilde{\mathcal{B}})=0$.
Nevertheless, we may consider all cycles in
our model of random walk which avoid the first arc $A_1$ on the knot diagram.
Let $\mathcal{Q}$ be the set
of all such cycles which are primitive, then the
Foata-Zeilberger formula implies
$$\prod_{c\in\mathcal{Q}}\,(1-W(c))^{-1}=(\text{det}\,(I-\mathcal{B}))^{-1},$$
where $W(c)$ is the weight of the cycle $c$ and $\mathcal{B}$ is obtained from
$\tilde{\mathcal{B}}$ by deleting
the first row and column. Notice that $\text{det}\,(I-\mathcal{B})$ is, up to
a factor of a power of $t$,
the Alexander polynomial of the
knot in question. So we see that the inverse of the Alexander polynomial is an
Ihara-Selberg type zeta function.
We have
$$\prod_{c\in\mathcal{Q}}(1-W(c))^{-1}=1+\sum_{k=1}^\infty\,\sum_{(c_1,\dots,
c_k)\in\mathcal{Q}^k}\,
W(c_1)\cdots W(c_k).$$
Hence we obtain the following expansion of the inverse of the
Alexander polynomial:
\begin{thm}\label{aexp}
$$(\text{det}\,(I-\mathcal{B}))^{-1}=
\prod_{c\in\mathcal{Q}}(1-W(c))^{-1}=1+\sum_{k=1}^\infty\,\sum_{(c_1,\dots,
c_k)\in\mathcal{Q}^k}\,
W(c_1)\cdots W(c_k).$$
\end{thm}
\subsection{Melvin-Morton function and Melvin-Morton Conjecture}
In \cite{MM}, Melvin and Morton
studied the dependence of the colored Jones polynomial on the
``color'' (that is the dimension $d$). They observed that
$$\frac{J(K,V_{d+1})(e^h)}{[d+1]}=\sum_{m\geq0,\,j\leq m}\,a_{jm}(K)d^jh^m.$$
Furthermore, Melvin and Morton conjectured that the function (which will be
called the {\it Melvin-Morton function})
$$M(K)(h)=\sum_{m\geq0}\,a_{mm}(K)h^m$$
is the inverse of the Alexander polynomial.
Rozansky was then able to give a proof of this conjecture, at the level of
rigor of physics, based essentially on calculating the limit
$$\lim_{d\rightarrow\infty}\frac{J(K,V_{d+1})(e^{\frac{h}{d}})}{[d+1]}$$
and the known relationship between the semi-classical limit of Witten's
Chern-Simons path integral and the Ray-Singer torsion. Rozansky's work
went beyond the particular simple Lie algebra $\mathfrak{sl}(2)$ and extended
the Melvin-Morton conjecture to its full generality.
The first rigorous proof of the Melvin-Morton conjecture was given by
Bar-Natan and Garoufalidis \cite{BG}. Their proof used the full power of
the theory of finite type knot invariants, together with some quite complicated
combinatorial arguments. Later, Vaintrob and others simplified the
combinatorial arguments of Bar-Natan and Garoufalidis (see, for example, \cite{V}).
The Melvin-Morton conjecture can now be deduced as follows.
By Theorem~\ref{mm} and Theorem~\ref{aexp},
$$\lim_{d\rightarrow\infty}\frac{J(K,V_{d+1})(e^{\frac{h}{d}})}{[d+1]}
=\frac{{\bar t}^{\frac{\text{\rm rot}(T)}2}}{\text{\rm det}(I-{\mathcal B})}.
$$
On the other hand, it is easy to see that
$$\lim_{d\rightarrow\infty}\frac{J(K,V_{d+1})(e^{\frac{h}{d}})}{[d+1]}
=M(K)(h).$$
Hence the Melvin-Morton conjecture follows:
\begin{thm}
For any knot $K$ which is the closure of a 0-framed 1-string link $T$,
$$M(K)(h)=
\frac{{\bar t}^{\,\frac{\text{\rm rot}(T)}2}}{\text{\rm det}(I-{\mathcal B})},
$$ where $t=e^{-2h}$.
\end{thm}
Note that the right-hand side is the inverse of the symmetric Alexander
polynomial of $K$ when the 1-string link $T$ is chosen appropriately as in
Theorem~\ref{jexp}.
\noindent{\bf Remark:} In Theorem~\ref{mm}, we are actually calculating the
limit of the
partition function $\int_{0\cdots0}^{0\cdots0}(T)$
with a fixed boundary condition.
This is rather like the calculation in statistical mechanics (e.g.\
the limit of the Ising model). In statistical mechanics,
the discontinuities of the limiting function are
related to phase transitions. Thus, it might make sense to ask
whether the zeros of the Alexander polynomial are of any significance and
could be ``observed''.
It is already known that the shape of a planet (its oblateness or the presence of rings, etc.) slightly modifies the shape of the transit light curve (Barnes \& Fortney 2003, 2004). In a recent paper (Arnold 2005), we have analyzed the transit light-curve signatures of artificial (i.e. non-planetary) planet-size objects, possibly built by an advanced extraterrestrial civilization. We have shown that an artificial object leaves a specific signature in the light-curve depending on its shape. The object shape can be chosen (rotating triangle, screen with holes, etc.) to produce a signature clearly distinguishable from that of a natural transit.
An artificial transit, considered as an elementary bit of information, is visible (transmitted) in a relatively large solid angle at a given time, but requires a complete orbit to be observed a second time. If we consider stellar communication with lasers (Kingsley 2001), bits of information (laser pulses) can be much more numerous but are only sent in a narrow solid angle. Calculation shows that the number of bits transmitted through a given solid angle per unit of time is similar for both methods. Moreover both have roughly the same range with our assumptions.
For a Jupiter-size object, the signature is in the $10^{-4}$ magnitude range, thus within the photometric resolution of the Corot or Kepler missions. For Earth-size objects, it is not possible to detect details in the extremely shallow ($\approx 10^{-4}$) transit light-curve. To produce a distinguishable transit signal with objects as small as the Earth, a solution could be to arrange transits of multiple objects in a particular sequence, such as prime numbers.
\section{Feasibility}
Let us consider the building of a $1\mu m$ thick iron mask, $12000km$ in diameter. The required volume of iron represents a sphere of $632m$ in diameter.
It is amazing to see that its mass, $1.04 \times 10^{12} kg$, is almost exactly the mass of iron (steel) produced in the world in 2004, just above one billion tonnes\footnote{International Iron \& Steel Institute 2005 report at $http://www.worldsteel.org/wsif.php$}.
The energy required to heat this amount of iron from $\approx 0$ to $1808 K$ is $7.1\times 10^{14} Wh$, and its fusion at $1808 K$ needs
$2.3\times 10^{14}$ more $Wh$. This represents three days of the world's mean daily energy consumption in 2002\footnote{Energy Information Administration report DOE/EIA-0484(2005) July 2005 $http://www.eia.doe.gov/oiaf/ieo/$}, obviously consistent with the annual production of steel mentioned above. These numbers suggest that the building of such a large mask, possibly from material collected on an iron-rich M-type asteroid, might be within human technological capabilities in the future.
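As a sanity check of the quoted figures (an illustrative snippet; the density of solid iron, $7874\,kg/m^3$, is our assumption), the $632m$ sphere indeed weighs about $1.04\times 10^{12}\,kg$, and a uniform $1\mu m$ thick disk of $12000km$ diameter requires iron of the same order of magnitude:

```python
import math

rho_iron = 7874.0                    # kg/m^3, assumed density of solid iron

# The quoted raw-material budget: an iron sphere 632 m in diameter.
r = 632.0 / 2
m_sphere = rho_iron * (4.0 / 3) * math.pi * r**3
assert abs(m_sphere - 1.04e12) / 1.04e12 < 0.01   # ~1.04e12 kg, as quoted

# A uniform 1 micron thick disk, 12000 km in diameter, needs iron of the
# same order of magnitude.
m_disk = rho_iron * math.pi * (6.0e6)**2 * 1e-6
assert 0.8e12 < m_disk < 1.1e12
```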
Assuming the mask is built in the Main Asteroid Belt, it would be necessary to transfer it toward the inner solar system, in order to increase the solid angle from which
the transit is visible and thus increase the communication efficiency. This migration is possible in principle through solar sailing: the mask can be oriented
to be decelerated by the pressure of solar radiations, inducing its controlled fall toward an orbit closer to the Sun.
To protect the mask against asteroids or meteoroids encounter in the Main Belt, it can be produced into small \textit{sails} and then migrated into a safer inner orbit where all sails are assembled. These much smaller parts might be connected together by mechanical fuses adjusted to break
above a given acceleration. A bolide colliding with a strength above the fuses threshold would take with it the impacted part from the rest of the mask, producing a small hole in the structure. This at least seems applicable for non- or slowly-rotating masks, because the acceleration at the outer edge of the mask requires stronger thus less sensitive fuses.
\label{sec:Introduction}
Word Sense Disambiguation (WSD) aims to determine which sense ({\em i.e.} meaning) a word may denote in a given context. This is a challenging task due to the semantic ambiguity of words. For example, the word ``book'' as a noun has ten different senses in Princeton WordNet such as ``a written work or composition that has been published'' and ``a number of pages bound together''. WSD has been a challenging task for many years but has gained recent attention due to the advances in contextualized word embedding models such as BERT \citep{devlin-etal-2019-bert}, ELMo \citep{peters-etal-2018-deep} and GPT-2 \citep{radford2018improving}. Such language models require less labeled training data since they are initially pre-trained on large corpora using self-supervised learning. The pre-trained language models can then be fine-tuned on various downstream NLP tasks such as sentiment analysis, social media mining, Named-Entity Recognition, word sense disambiguation, topic classification and summarization, among others.
A gloss is a short dictionary definition describing one sense of a lemma or lexical entry \cite{J06,J05}. A context is an example sentence in which the lemma or one of its inflections (\emph{i.e.} the target word) appears. In this paper, we aim to fine-tune Arabic models for Arabic WSD. Given a target word in a context and a set of glosses, we will fine-tune BERT models to decide which gloss is the correct sense of the target word. To do that, we converted the WSD task into a BERT sentence-pair binary classification task similar to \citep{huang2019,yap2020, blevins2020}. Thus, BERT is fine-tuned on a set of context-gloss pairs, where each pair is labeled as $True$ or $False$ to specify whether or not the gloss is the sense of the target word. In this way, the WSD task is converted into a sentence-pair classification task.
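A minimal sketch of this construction (hypothetical helper code, not the actual implementation used in this work): each sense-annotated example is expanded into labeled context-gloss pairs that a sentence-pair encoder reads as [CLS] context [SEP] gloss [SEP]:

```python
def make_context_gloss_pairs(context, candidate_glosses, correct_gloss):
    """Expand one sense-annotated example into sentence-pair instances:
    the pair carrying the correct gloss is labeled True, all others False.
    A BERT-style encoder then reads each pair as
    [CLS] context [SEP] gloss [SEP]."""
    return [(context, gloss, gloss == correct_gloss)
            for gloss in candidate_glosses]

pairs = make_context_gloss_pairs(
    "She reads a book every week.",
    ["a written work or composition that has been published",
     "a number of pages bound together"],
    "a written work or composition that has been published")
assert [label for _, _, label in pairs] == [True, False]
```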
One of the main challenges for fine-tuning BERT for Arabic WSD is that Arabic is a low-resourced language and that there are no proper labeled context-gloss datasets available.
To overcome this challenge, we collected a relatively large set of definitions from the Arabic Ontology \cite{J21} and multiple Arabic dictionaries available at Birzeit University's lexicographic database \cite{JA19, JAM19}, and then extracted glosses and contexts from the lexicon definitions.
Another challenge was to identify, locate and tag target words in context. Tagging target words with special markers is important in the fine-tuning phase because they act as supervised signals to highlight these words in their contexts, as will be explained in section \ref{sec:Methodology}. Identifying target words is not straightforward as they are typically inflections of lemmas, \emph{i.e.} with different spellings. Moreover, locating them is another challenge as the same word may appear multiple times in the same context with different senses. For example, the word ({\scriptsize \<ذَهَب>}) appears twice in this context ({\scriptsize \<ذَهَب ليشتري ذَهَب>}) with two different meanings: \emph{went} and \emph{gold}. We used several heuristics and techniques (as described in subsection \ref{sec:annotating}) to identify and locate target words in context in order to tag them with special markers.
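The tagging step can be sketched as follows (hypothetical helper code, not our actual implementation); since the same surface form may occur several times with different senses, the marker must be placed by token position rather than by string matching:

```python
def tag_target(tokens, target_position, marker="[TGT]"):
    """Surround the token at a given position with marker tokens, so one
    specific occurrence is highlighted even when the same surface form
    occurs several times in the context."""
    return " ".join(tokens[:target_position]
                    + [marker, tokens[target_position], marker]
                    + tokens[target_position + 1:])

# The same surface form with two senses ("went" vs. "gold"); mark the
# second occurrence, which must be selected by position, not by string:
tokens = ["ذهب", "ليشتري", "ذهب"]
assert tag_target(tokens, 2) == "ذهب ليشتري [TGT] ذهب [TGT]"
```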
As a result, the dataset we constructed consists of about 167K context-gloss pair instances, 60K labeled as $True$ and 107K labeled as $False$. The dataset covers about 26k unique lemmas (undiacritized), 32K glosses and 60k contexts.
We used this dataset to fine-tune three pre-trained Arabic BERT models: AraBERT \citep{arabert}, QARiB \citep{qarib} and CAMeLBERT \citep{inoue-etal-2021-interplay}\footnote{We were not able to use the ARBERT and MARBERT models \cite{abdul2020arbert} as they appeared very recently.}. Each of the three models was fine-tuned for context-gloss binary classification. Furthermore, we investigated the use of different supervised signals to highlight target words in context-gloss pairs.
The contributions of this paper can be summarized as follows:
\begin{enumerate}
\item Constructing a dataset of labeled Arabic context-gloss pairs;
\item Identifying, locating and tagging target words;
\item Fine-tuning three BERT models for Arabic context-gloss pairs binary classification;
\item Investigating the use of different markers to highlight target words in context.
\end{enumerate}
The remainder of this paper is organized as follows. Section~\ref{sec:relatedwork} presents related work. Section~\ref{sec:dataset} describes the constructed dataset, our methodology for extracting and labeling context-gloss pairs, and the splitting of the dataset into training and test sets. Section~\ref{sec:taskoverview} outlines the task addressed in this paper and Section~\ref{sec:Methodology} presents the fine-tuning methodology. The experiments and the obtained results are presented in Sections~\ref{sec:experiment} and~\ref{sec:results}, respectively. Finally, Section~\ref{sec:conclusion} presents conclusions and future work.
\section{Related Work}
\label{sec:relatedwork}
Recent experiments in fine-tuning pre-trained language models for WSD and related tasks have shown promising results, especially those that use context-gloss pairs in fine-tuning such as \citep{huang2019, yap2020, blevins2020}.
\citet{huang2019} proposed to fine-tune BERT on context-gloss pairs (\(label \in \{yes,no\}\)) for WSD, such that the gloss of the context-gloss pair candidate with the highest output score for $yes$ is selected. \citet{yap2020} proposed to group context-gloss pairs with the same context but different candidate glosses into one training instance (groups of four and six instances). They then fine-tuned a BERT model on these grouped instances with one neuron in the output layer, and formulated WSD as a ranking/selection problem in which the most probable sense is ranked first.
Others also suggested emphasizing target words in context-gloss training instances. \citet{huang2019,botha-etal-2020-entity,lei2017swim,yap2020} proposed to use special signals in the training instance to make the target word stand out. For example, \citet{huang2019} proposed to use quotation marks around target words in context. In addition, they proposed to add the target word followed by a colon at the beginning of each gloss, which further emphasizes the target word in the training instance. \citet{yap2020} proposed to surround the target word in context with two special [TGT] tokens. In contrast, \citet{botha-etal-2020-entity,lei2017swim} proposed to surround the target word in context with two different special tokens marking the opening and the closing. In this paper, we investigate the use of different types of signals to emphasize target words in context for Arabic WSD.
\citet{el2021arabic} fine-tuned two BERT models on a small dataset of context-gloss pairs, consisting of about 5k lemmas, about 15k positive and 15k negative context-gloss pairs. They claimed an F1-score of 89\%. However, this result is not reliable. After repeating the same experiment, we found that the majority of the context sentences used in the tests were already used for training. In this paper, we carefully selected the test set such that no contexts are used in both the training and the test sets. Additionally, we used a much larger sense repository (26k lemmas, 33k concepts and 167k context-gloss positive and negative pairs), which makes the task more challenging.
Other works related to Arabic WSD include the use of static embeddings such as context and sense vectors \cite{laatar2017word}, Stem2Vec and Sense2Vec \cite{alkhatlan2018word}, and Lemma2Vec \cite{al-hajj-jarrar-2021-lu}; Word Sense Induction \cite{alian2020sense}; and fastText \cite{logacheva2020word}. \citet{elayeb2019arabic} reviewed Arabic WSD approaches up to 2018.
\section{Dataset Construction}
\label{sec:dataset}
This section describes how we constructed a dataset of labeled Arabic context-gloss pairs (See examples of pairs in Figure \ref{fig:pairs_example}). We extracted the context-gloss pairs from the Arabic Ontology and multiple lexicons in the Birzeit University's lexicographic database. The extracted pairs are labeled as $True$, and based on these $True$ pairs, we generated the $False$ pairs. Additionally, we identified the target word in each context and tagged it with different types of markers.
\subsection{Context-Gloss Pairs Extraction}
\label{sec:extraction}
Arabic is a low-resource language \cite{DH21} and there are no proper sense repositories available for Arabic \cite{KAJ21, JKKS21} that can be used to generate a dataset of context-gloss pairs, e.g. similar to the Princeton WordNet for English \cite{PWN}. The largest available lexical-semantic resource for Arabic is the Birzeit University's lexicographic database\footnote{Lexicographic Search Engine: \url{https://ontology.birzeit.edu/about}}, which contains the Arabic Ontology \citep{J21,J11} and about 400K glosses extracted from about 150 lexicons \citep{JA19, JAM19, ADJ19}. The problem is that each of the 150 lexicons covers a partial set of glosses and lemmas. Thus, for a given lemma, collecting the glosses from all lexicons may result in a set of redundant senses. Another problem is that some lexicons provide multiple senses within the same definition with no clear structure or separation markers, which makes it difficult to extract senses. Furthermore, some lexicons do not provide contexts (\emph{i.e.} example sentences) or they mix them with the definitions.\\
To overcome the above challenges and build a context-gloss pairs dataset, we performed the following steps:
\textbf{First, selection of candidate definitions:} We queried the 400K lexicon definitions to select a set of good candidate definitions. A good definition represents one sense, or multiple senses that are easy to parse and split (\emph{i.e.} separated by markers), and has context examples. Definitions that are not easy to parse or that do not provide contexts were excluded.
\textbf{Second, extraction of glosses and contexts:} Each of the candidate definitions collected in the first phase was parsed and split into gloss(es) and context(s). Some definitions did not need to be split, while definitions containing multiple glosses (\emph{i.e.} senses) were split into separate glosses, one for each sense. Contexts were also extracted from the candidate definitions, taking into account that a definition may include multiple contexts for one sense. A parser was developed for each lexicon, as each lexicon has its own structure and text markers\footnote{We used the same parsing framework developed by \cite{ADJ19_report} for lexicon digitization.}. Some lexicons (e.g. the Arabic Ontology) were clean and well-structured and did not need any parsing.
\textbf{Third, selection of glosses and contexts:} Given that the glosses and contexts were extracted in the second phase, we applied the following criteria to select the glosses and contexts that we need to build a dataset of context-gloss pairs:
\begin{itemize}
\item Short glosses and contexts (\emph{i.e.} one-word long) were excluded as they do not add useful information in the fine-tuning phase.
\item For each lemma, if one of its glosses did not have a context example, then none of the glosses for this lemma were selected. That is, for a lemma and its glosses to be selected, each gloss must have at least one context example.
\item In case the same lemma appears in multiple lexicons, the one with more glosses was selected. For example, let \emph{m} be a lemma with two glosses in lexicon A and three glosses in lexicon B; then lexicon B's set of glosses for \emph{m} is favored. If the same lemma has an equal number of glosses in multiple lexicons, we manually favored the more renowned lexicon. We favor lemmas with more glosses because this indicates a richer set of distinct senses; in this way, we avoid redundant senses for the same lemma in the dataset.
\item Only glosses for single-word lemmas are selected. Although multi-word expression lemmas are important, in this phase, we only focus on single-word lemmas as BERT can process single-word tokens. We plan to consider multi-word lemmas in the future.
\end{itemize}
\begin{table}
\begin{tabular}{ p{6cm}@{}>{\arraybackslash}m{3.5em}@{} } \hline
& \textbf{count} \\ \hline
Unique Lemmas (undiacritized) & 26169 \\
Avg glosses per lemma & 1.25 \\
Unique Glosses & 32839 \\
Unique Contexts & 60272 \\
Avg contexts per gloss & 1.83 \\
True context-gloss pairs & 60323 \\
False context-gloss pairs & 106884 \\
Total True and False pairs & 167207 \\
\end{tabular}
\caption{\label{table:dataset_stat} Statistics about our context-gloss pairs dataset}
\end{table}
As a result, we selected about 32k glosses and 60k contexts for about 26K single lemmas (undiacritized), resulting in about 60k context-gloss pairs that we labeled as $True$ pairs (see Table \ref{table:dataset_stat} for more statistics). It is important to note that our dataset cannot be considered an Arabic sense repository because a sense repository should contain all senses for a given lemma, but our dataset does not necessarily include all senses for every lemma.
\subsection{Labeling Context-Gloss Pairs}
\label{sec:labeling_pairs}
The 60k context-gloss pairs extracted in the previous phase were labeled as $True$. The $False$ context-gloss pairs were then generated based on the $True$ pairs, as follows: For each lemma with more than one gloss, we cross-related its glosses with its contexts. For example, let ($context1-gloss1$) and ($context2-gloss2$) be the two $True$ pairs for the same lemma, then ($context1-gloss2$) and ($context2-gloss1$) are generated and labeled as $False$ pairs. As a result, about 107K context-gloss $False$ pairs were generated in this way.
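The cross-relation step above can be sketched as follows (function and variable names are illustrative, not taken from our implementation):

```python
from itertools import product

def generate_false_pairs(true_pairs):
    """Cross-relate glosses with contexts of the same lemma.

    true_pairs: list of (lemma, context, gloss) tuples labeled True.
    Returns (context, gloss) False pairs: a context of one sense
    paired with the gloss of a different sense of the same lemma.
    """
    by_lemma = {}
    for lemma, context, gloss in true_pairs:
        by_lemma.setdefault(lemma, []).append((context, gloss))
    false_pairs = []
    for pairs in by_lemma.values():
        for (c1, g1), (c2, g2) in product(pairs, repeat=2):
            if g1 != g2:  # only cross different senses of the lemma
                false_pairs.append((c1, g2))
    return false_pairs
```

Note that lemmas with a single gloss generate no $False$ pairs, matching the construction described above.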
\begin{figure*}[h]
\centering
\includegraphics[width=0.8\textwidth]{images/pairs_examples.pdf}\par
\caption{Examples of labeled context-gloss pairs}
\label{fig:pairs_example}
\end{figure*}
\subsection{Annotating Target Words}
\label{sec:annotating}
This section presents our methodology for identifying the target word inside a given context and tagging it with a special supervised signal, which we need in the fine-tuning phase (see section \ref{sec:Methodology}). Figure \ref{fig:Signales} illustrates different tags of target words.
Given a lemma and a context, our goal is to identify which word is the target word in this context. As explained in section \ref{sec:Introduction}, a context is an example sentence in which a word (called target word) is mentioned with its sense defined in the gloss. Identifying a target word inside its context is not straightforward because: (i) it does not necessarily share the same spelling with its lemma, e.g. the word ({\scriptsize \<عيون>}) and its lemma ({\scriptsize \<عين>}) and, more importantly, (ii) it might occur multiple times and each time with a different sense such as ({\scriptsize \<كتب>}) which appears two times in this context ({\scriptsize \<كتب عدة كتب>}), with two different meanings: \emph{wrote} and \emph{books}.\\
The following four methods were performed at the same time to maximize the certainty in identifying target words. The resulting target words were verified manually:
\begin{itemize}
\item \emph{Sub-string}: We compared every word in the context with the given lemma (string-matching, after undiacritization). If the lemma is a sub-string of one or more words, then these words are candidate target words.
\item \emph{Character-level cosine similarity}: We developed a function\footnote{ The function converts two Arabic words (after removing diacritics) into two vectors (each cell represents the occurrence of a character), then computes their cosine similarity.} that takes a lemma and a context and returns the word with the max cosine similarity with the lemma. The minimum cosine value should be more than 0.75 $-$ an empirical threshold that we learned while reviewing the results. If a word is returned, then we considered it a candidate target word.
\item \emph{Levenshtein distance}: This function takes a lemma and a context and returns the context word with the smallest Levenshtein distance to the lemma (after removing diacritics), by comparing each word in the context with the lemma. The returned word is considered a candidate target word.
\item \emph{Lemmatization}: We used our in-house lemmatizer and lexicographic database to lemmatize every word in the given context and return those words that have their lemmas the same as the given lemma. The returned words are considered candidate target words.
\end{itemize}
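As an illustration, the character-level cosine similarity heuristic might look like the following sketch (the diacritic set and the handling of the 0.75 threshold are our assumptions based on the description above, not the actual implementation):

```python
import math
from collections import Counter

# the eight Arabic harakat (fathatan .. sukun), removed before matching
ARABIC_DIACRITICS = set("\u064b\u064c\u064d\u064e\u064f\u0650\u0651\u0652")

def undiacritize(word):
    return "".join(ch for ch in word if ch not in ARABIC_DIACRITICS)

def char_cosine(w1, w2):
    """Cosine similarity of character-occurrence vectors."""
    v1, v2 = Counter(undiacritize(w1)), Counter(undiacritize(w2))
    dot = sum(v1[c] * v2[c] for c in v1)
    norm = (math.sqrt(sum(n * n for n in v1.values()))
            * math.sqrt(sum(n * n for n in v2.values())))
    return dot / norm if norm else 0.0

def candidate_target(lemma, context, threshold=0.75):
    """Return the context word most similar to the lemma, or None."""
    best = max(context.split(), key=lambda w: char_cosine(lemma, w))
    return best if char_cosine(lemma, best) > threshold else None
```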
The results (candidate words, their scores and positions) of the four methods were then combined, sorted from most to least certain, and given to linguists to review. Each identified target word\footnote{In some cases, multiple words having the same sense can be considered target words inside the same context. For example ({\scriptsize \<كتابه>}) and ({\scriptsize \<الكتب>}) in the context ({\scriptsize \<كتابه كان من آفضل الكتب>}). In our dataset, we only considered one target word, most likely the first one.} was manually verified and, if needed, corrected by a linguist.
\subsection{Training and Test Datasets}
\label{sec:trainigtestset}
This section describes how we divided our dataset into training and test sets and the criteria we used to prevent contexts from being repeated across the two sets. Recall that our dataset contains one or more glosses for each lemma and one or more contexts for each gloss, which we used to generate the context-gloss pairs dataset. The dataset cannot be divided arbitrarily, as contexts used for training should not be used for testing. We selected the test set taking into account these two criteria: (\emph{i}) no context selected for the test set should appear in the training set and (\emph{ii}) every gloss should appear in both the training and the test sets.
Given these criteria, we selected the test set as follows:
(\emph{First}) we selected the pairs with repeated glosses from the set of context-gloss pairs (\emph{i.e.} glosses with more than one context). (\emph{Second}) we grouped pairs according to their glosses then selected one pair from each group larger than one and included it in the test set. All of these pairs were labeled as $True$. (\emph{Third}) we cross-related contexts with glosses of the same lemma to generate $False$ pairs in the test set from the $True$ pairs $-$ as described in subsection \ref{sec:labeling_pairs}. That is, again, the $False$ pairs were generated after selecting the $True$ pairs, and every pair selected for testing should not be part of the training set.
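The selection of $True$ test pairs can be sketched as follows (names are illustrative; the $False$ test pairs are then generated from these $True$ pairs as in subsection \ref{sec:labeling_pairs}):

```python
def split_test_contexts(true_pairs):
    """Select True test pairs: one context per gloss that has more
    than one context; all remaining pairs stay in the training set.

    true_pairs: list of (context, gloss) tuples labeled True.
    """
    by_gloss = {}
    for context, gloss in true_pairs:
        by_gloss.setdefault(gloss, []).append(context)
    test, train = [], []
    for gloss, contexts in by_gloss.items():
        if len(contexts) > 1:  # gloss appears in both sets, contexts do not
            test.append((contexts[0], gloss))
            train.extend((c, gloss) for c in contexts[1:])
        else:
            train.extend((c, gloss) for c in contexts)
    return train, test
```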
\begin{table}
\begin{tabular}{ |@{}>{\centering\arraybackslash}m{4em}@{}|lm{5.0em}@{}r|r|} \hline
\textbf{Datasets} & \textbf{Pairs} & \textbf{Count} & \textbf{Total} \\ \hline \hline
Training & True pairs& 55,585 & \\
& False pairs & 96,450 & 152,035 \\ \hline
Test & True pairs & 4,738 & \\
& False pairs & 10,434 & 15,172\\ \hline \hline
& & \textbf{Total} & 167,207\\ \hline
\end{tabular}
\caption{\label{table:train_test_data} Counts of the training and testing pairs}
\end{table}
The resulting training and test datasets\footnote{The datasets and the fine-tuned BERT models are available at: \url{ https://ontology.birzeit.edu/downloads}} consist of 152,035 and 15,172 pairs, respectively. Table~\ref{table:train_test_data} provides statistics about the training and test sets.
\section{Task Overview}
\label{sec:taskoverview}
Given a context, a target word in the context and a gloss, our task is to decide whether or not the gloss corresponds to a specific sense of the target word. We approached the problem as a binary sequence-pair classification task. We concatenated the context and the gloss and separated them by the special [SEP] token (See Figure \ref{fig:pairs_example}). Afterward, we fine-tuned Arabic BERT models on our labeled dataset of context-gloss pairs (\(label \in \{True,False\}\)). \\
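Written out, one training instance is a single sequence of the following form (in practice the BERT tokenizer builds this from a (context, gloss) text pair; the explicit string is shown only for clarity):

```python
def build_sequence(context, gloss):
    """One context-gloss pair as a single BERT input sequence,
    following the standard [CLS] ... [SEP] ... [SEP] convention."""
    return f"[CLS] {context} [SEP] {gloss} [SEP]"
```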
It is worth noting that although this binary context-gloss pair classification task is related to the WSD task, they are not exactly the same task. The WSD task aims at determining which sense (or gloss) a word in context denotes from a given set of senses. It is also worth noting that these two tasks are not the same as the Word-In-Context (WIC) task \cite{al-hajj-jarrar-2021-lu, martelli-etal-2021-semeval}, which aims at determining whether a target word has the same sense in two given contexts.
\section{Methodology}
\label{sec:Methodology}
\begin{figure*}[h]
\centering
\includegraphics[width=1\textwidth,frame]{images/signals.pdf}\par
\caption{Illustration of the four context-gloss pairs variations.}
\label{fig:Signales}
\end{figure*}
To address the binary context-gloss classification task, we experimented with four variations of the context-gloss pairs. The idea is to investigate using different supervised signals around target words to give them special attention during the fine-tuning. Figure~\ref{fig:Signales} illustrates these four variations. In variation 1, context-gloss pairs were left intact, without any signal. In the other three variations, we followed the techniques used by \citet{huang2019}, \citet{yap2020} and \citet{blevins2020} to signal target words. We surrounded target words with (\emph{\romannum{1}}) single quotes in variation 2, (\emph{\romannum{2}}) the special token [UNUSED0] in variation 3, and (\emph{\romannum{3}}) [UNUSED0] before and [UNUSED1] after in variation 4. Moreover, in the last three variations, we added the target word followed by a colon at the beginning of each gloss. In these four variations, the context and the gloss were concatenated into a sequence separated with the [SEP] token. \\
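The four variations can be sketched as follows (a minimal illustration, assuming the target word appears verbatim in the context after the identification step; only its first occurrence is marked):

```python
def mark_target(context, target, gloss, variation):
    """Build one context-gloss training instance per marking variation."""
    if variation == 1:                      # intact, no signal
        return context, gloss
    if variation == 2:                      # single quotes
        marked = context.replace(target, f"'{target}'", 1)
    else:                                   # special [UNUSED*] tokens
        left, right = (("[UNUSED0]", "[UNUSED0]") if variation == 3
                       else ("[UNUSED0]", "[UNUSED1]"))
        marked = context.replace(target, f"{left} {target} {right}", 1)
    # variations 2-4 also prefix the gloss with the target word and a colon
    return marked, f"{target}: {gloss}"
```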
We fine-tuned three Arabic pre-trained models: AraBERT \cite{arabert}, QARiB \cite{qarib} and CAMeLBERT \cite{inoue-etal-2021-interplay} using our training dataset described in Section \ref{sec:dataset}. Before fine-tuning AraBERT, we applied the pre-processing method used in \citep{arabert} to pre-train version 2 of their model. Before fine-tuning the CAMeLBERT and QARiB models, we applied the pre-processing method used in \citep{inoue-etal-2021-interplay} to pre-train CAMeLBERT, which consists of normalizing alif maksura ({\scriptsize \<ى>}), teh marbuta ({\scriptsize \<ة>}) and alif ({\scriptsize \<ا>}), and removing diacritics.
Since BERT has a maximum input length of 512 tokens, we limited each training instance (\emph{i.e.} context-gloss pair) to 512 tokens. With the tokenizer used in AraBERTv02, for example, only 216 of the 167,207 pairs in our dataset exceed 512 tokens. Instances shorter than 512 tokens were padded to the maximum length.
We used the BertForSequenceClassification architecture to fine-tune the three Arabic BERT models. The last hidden state of the [CLS] token is used for the classification task. The output linear layer consists of two neurons, one for each of the $True$ and $False$ classes.
\section{Experiment Setup}
\label{sec:experiment}
We selected the base configuration of the AraBERTv02, QARiB, and CAMeLBERT models due to computational constraints and because larger models do not necessarily yield better performance \citep{qarib,inoue-etal-2021-interplay}. We used the huggingface ``Trainer'' class for fine-tuning. We performed a limited grid search to find a good hyperparameter combination, then fine-tuned each of the three models using the best configuration: an initial learning rate of 2e-5, 1412 warmup steps, a batch size of 16, and 4 training epochs. All other hyperparameters were kept at their default values. We used a single Tesla P100-PCIE-16GB GPU for fine-tuning.
\section{Results and Discussion}
\label{sec:results}
This section presents the results of two experiments. Table \ref{table:results_modeles} presents the results of the first experiment, in which we fine-tuned the three BERT models on variation 2 (\emph{i.e.} the single-quotes signal) of the context-gloss pairs.
As AraBERTv02 outperformed the other models in the first experiment, it was chosen for a second experiment in which we fine-tuned it on variation 1 (intact context-gloss pairs), variation 3 (two [UNUSED0] tokens around the target word) and variation 4 ([UNUSED0] and [UNUSED1] tokens around the target word).
The results in Table \ref{table:results_signals} reveal that the use of different supervised signals around the target word did not significantly improve the overall results: the supervised signals yield only a $1\%$ improvement over variation 1 (no signals). This improvement is comparable to the 1-2$\%$ improvement achieved by \citet{huang2019} using special signals on English datasets.
\begin{table}[ht]
\centering
\begin{tabular}{|@{}>{\centering\arraybackslash}m{6.0em}@{}|l@{}>{\centering\arraybackslash}m{2.3em}@{}@{}>{\centering\arraybackslash}m{2.5em}@{}|@{}>{\centering\arraybackslash}m{4.2em}@{}|}
\hline \textbf{Model} & & \textbf{True} & \textbf{False} & \textbf{Accuracy} \\ \hline \hline
\multirow{3}{*}{AraBERTv02} & Precision & 81 & 85 & \multirow{3}{*}{84} \\
& Recall & 66 & 93 & \\
& F1-score & 72 & 89 & \\ \hline
\multirow{3}{*}{CAMeLBERT} & Precision & 77 & 83 & \multirow{3}{*}{82} \\
& Recall & 60 & 92 & \\
& F1-score & 67 & 87 & \\ \hline
\multirow{3}{*}{QARiB} & Precision & 73 & 82 & \multirow{3}{*}{80} \\
& Recall & 58 & 90 & \\
& F1-score & 65 & 86 & \\ \hline
\end{tabular}
\caption{ Achieved results (\%) after fine-tuning three Arabic BERT models with the \emph{single quotes} supervised signal around the target word.}
\label{table:results_modeles}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{|@{}>{\centering\arraybackslash}m{6.3em}@{}|l@{}>{\centering\arraybackslash}m{2.3em}@{}@{}>{\centering\arraybackslash}m{2.5em}@{}|@{}>{\centering\arraybackslash}m{4.2em}@{}|}
\hline \textbf{Variation} & & \textbf{True} & \textbf{False} & \textbf{Accuracy} \\ \hline \hline
\multirow{3}{*}{\shortstack{\textbf{Variation 1} \vspace{0.3em} \\ No signal}} & Precision & 80 & 85 & \multirow{3}{*}{83} \\
& Recall & 64 & 92 & \\
& F1-score & 71 & 88 & \\ \hline
\multirow{3}{*}{\shortstack{\textbf{Variation 3} \vspace{0.3em}\\ UNUSED0}} & Precision & 81 & 85 & \multirow{3}{*}{84} \\
& Recall & 64 & 93 & \\
& F1-score & 71 & 89 & \\ \hline
\multirow{3}{*}{\shortstack{\textbf{Variation 4} \vspace{0.3em}\\UNUSED0,1}} & Precision & 81 & 85 & \multirow{3}{*}{84} \\
& Recall & 64 & 93 & \\
& F1-score & 71 & 89 & \\ \hline
\end{tabular}
\caption{ Achieved results (\%) with AraBERTv02 using the other three supervised signals around the target word.}
\label{table:results_signals}
\end{table}
\section{Conclusion and Future Work}
\label{sec:conclusion}
We presented a large dataset of context-gloss pairs (167,207 pairs) that we carefully extracted from the Arabic Ontology and diverse lexicon definitions. Each pair was labeled as $True$ or $False$, and the target word in each context was identified and tagged. We used this dataset to fine-tune three Arabic BERT models on binary context-gloss pair classification, achieving a promising accuracy of 84\%, which is notable given the large set of senses we used. Our experiments show that the use of different supervised signals around target words did not bring significant improvements (about $1\%$).\\
We plan to further extend this work by building a larger-scale context-gloss dataset.
We also plan to include contexts written in Arabic dialects \cite{JHRAZ17} so that dialectal text can be sense-disambiguated. Additionally, we plan to consider Arabic text that is partially or fully diacritized, which requires lemmas across lexicons to be linked with each other \cite{JZAA18}. Lastly but more importantly, we plan to extend our work to address the WSD task and build a semantic analyzer for Arabic.
\section*{Acknowledgments}
We would like to thank the reviewers for their valuable comments and efforts to improve our manuscript. We would also like to thank Taymaa Hammouda for her technical support in preparing the dataset and annotating the contexts. We extend our thanks to Dr Abeer Naser Eddine for proofreading this paper.
\bibliographystyle{acl_natbib}
In many biomedical applications of survival analysis it
is both useful and necessary to work with multiple time-scales. A
medical study will often have a follow-up time
(for example time since diagnosis) for patients of different ages, and
here both time-scales will contain important but different information
about how the risk of, for example, dying is changing.
We therefore consider the situation with two time-scales that
are equivalent up to a constant for each
individual, such as follow-up time and age.
One may see this as arising from
the illness-death model, or the disability model,
where the additional time-scale may be duration in the illness
state of the model; see \cite{Keiding1991} for a general discussion
of these models.
There is rather limited work on how to deal with
multiple time-scales in a biomedical context, see for example
\cite{Oakes1995,Iacobelli2013} and \cite{Duchesne2000} and references therein.
We present a non-parametric regression approach with two time-scales where
each time-scale contributes additively to the mortality.
The regression setting models the
effect of covariates by additive Aalen models
on each time-scale \citep{aale:1989,huff:mcke:1991,abgk-book,ms06}.
This allows covariates to have
effects that vary on two different time-scales.
In a motivating example we consider patients that experience
myocardial infarction, and aim at predicting the
intensity as a function of the two time-scales: age and time since myocardial infarction.
As a consequence, we can make survival predictions for patients given their age
at diagnosis. This model was considered previously by \cite{Scheike2001} where estimation
was based on smoothing for one of the time-scales. A study closely related to ours is \cite{Kauermann2006}, who studied the two most common time-scales: age and duration. The underlying technical setting of \cite{Kauermann2006} was a multiplicative hazard model without covariates, estimated via splines. In contrast, our approach is an additive hazard model that includes covariates and is estimated without smoothing. Alternative smoothing methodologies for multiplicative hazard estimation include \cite{Linton:etal:03,huang:00,hastietib86, Lin:etal:16}.
None of the known multiplicative hazard approaches, including the ones mentioned above, is able to estimate without smoothing, include time-varying covariate effects, or provide simultaneous confidence bands, as the additive approach of this paper does. We do know that smoothing improves the efficiency of cumulatively estimated quantities; see \cite{Guillen:etal:07} for the simplest possible case. However, smoothing also adds complexity, and experts applying survival analysis have developed a practical way of smoothing by eye the underlying rough non-parametric estimators of \cite{Kaplan:Meier:58, Nelson:72}. The advantage of providing estimators without smoothing is that there can be no confusion arising from the complicated process of first picking a smoothing procedure and then the amount of smoothing. Even if a smoothing approach is eventually used, the smoothing-free procedure always serves as a benchmark to check whether something went wrong during the smoothing.

Our backfitting approach is different from standard backfitting in regression, see for example the smooth additive backfitting approach of \cite{Mammen:etal:99}, where data is projected via a smoothing kernel onto an additive subspace. In the backfitting approach of this paper, the non-parametric dynamics take place only in the two time directions, and the end result is therefore closer to the classical approach of \cite{Nelson:72}, with a non-smooth estimator of the dynamics on each one-dimensional time axis. What is obtained through Aalen's additive hazard regression model on two time axes is that the dynamics of the two time effects are adjusted for covariates in a way that keeps the one-dimensional structure of the non-parametric dynamics.
The expert user of survival methodology can therefore use the well developed intuition from looking at Nelson-Aalen estimators and Kaplan-Meier estimators when interpreting the empirical results based on the new methodology of this paper.
Another advantage of estimating the cumulative hazards directly is that we are able to obtain a simple
uniform asymptotic description of our estimators. We are thus
able to construct confidence bands and intervals that are
based on bootstrapping the underlying martingales.
The paper is organised as follows.
Section 2 presents the model via counting processes.
Section 3 gives some least squares based local estimating equations that are
solved to give simple explicit estimators of the
non-parametric
effects of the model. Based on these explicit estimators we are
able to derive asymptotic results and provide the estimators
with asymptotic standard errors.
Sections 4-6 discuss how to solve the equations, compute the estimator
in practice, and deal with identifiability issues.
Section 7 shows how the large sample properties may be derived and in Section 8
we construct confidence bands.
Section 9 demonstrates the finite sample properties, supporting
Section 10 where we apply the proposed methods in a worked example.
Finally, Section 11 discusses some possible extensions.
\section{Aalen's Additive Hazard Model for Two Time-Scales}
Let $N_i(t)$, $i=1,\dots,n$, be $n$ independent counting
processes
that do not have common jumps and are adapted to a filtration that
satisfy the usual conditions \citep{abgk-book}.
We assume that the counting processes have intensities given by
\begin{align}\label{model}
\lambda_i(t) &= \sum_{j=1}^p X_{ij}(t)\alpha_j (t) + \sum_{k=1}^qZ_{ik}(t)\beta_k( t+ a_i) \notag
\\&=X_i(t)\alpha (t) + Z_i(t) \beta( t+ a_i), \quad (0\leq t \leq t_{max}),
\end{align}
where $\alpha=(\alpha_1, \dots, \alpha_p)$ and $\beta=(\beta_1,\dots,\beta_q)$ are tuples of one-dimensional deterministic functions,
$X_i^T(t) \in \Re^p$ and $Z_i^T(t) \in \Re^q$ are predictable cadlag covariate vectors with $X(t)$ and $Z(t)$ having almost surely full rank, and $a_i$ is a
real-valued random variable observed at time $t=0$.
If $Z_i(t)=0$ for all $t$, $a_i$ does not need to be observed.
The model is the sum of two additive Aalen models running on two different time-scales; see also \cite{Scheike2001}.
The two time-scales are $t$ and $a=t+a_i \in [a_0,a_{max}]$ where the latter
time-scale is specific to each individual and $a_0$ is some lower-limit that depends on the
observed range of the second time-scale.
Note that no indicator variables are introduced explicitly; they are absorbed
into the covariates.
In the
illness-death model, say, $t$ might be time since diagnosis
(duration)
among subjects that have entered the illness stage of the model
and
$a_i$ could be the age when the transition
to the illness stage occurred, such that $t+a_i$ is the
age of the subject.
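To make model \eqref{model} concrete, the following minimal sketch evaluates the intensity for one subject (purely illustrative; all names are ours, not part of the estimation procedure developed below):

```python
def intensity(t, X_i, Z_i, alpha, beta, a_i):
    """Model (1): lambda_i(t) = X_i(t) alpha(t) + Z_i(t) beta(t + a_i).

    X_i, Z_i: functions t -> list of covariate values; alpha, beta:
    lists of coefficient functions on the duration and age time-scales;
    a_i: the subject's age (second time-scale) at entry time t = 0.
    """
    duration_part = sum(x * f(t) for x, f in zip(X_i(t), alpha))
    age_part = sum(z * g(t + a_i) for z, g in zip(Z_i(t), beta))
    return duration_part + age_part
```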
After introducing some notation we present an
estimation procedure that leads to explicit estimators
of $A(t)=\int_0^t \alpha(s) ds=(\int_0^t \alpha_1(s) ds, \dots, \int_0^t \alpha_p(s) ds)^T$
and
$B(a)= \int_{a_0}^a \beta(u) du=(\int_{a_0}^a \beta_1(u) du, \dots, \int_{a_0}^a \beta_q(u) du)^T$.
The cumulative effects have the advantage compared to
$\alpha(s)$ and $\beta(a)$ that they may be used
for inferential purposes since a more satisfactory simultaneous
convergence can be established for these processes.
We derive the asymptotic distribution
for these estimators and a bootstrapping procedure quantifying the estimation uncertainty. Based on the
cumulative intensity $A(t)$ one may estimate the intensity
$\alpha(t)$ by smoothing techniques.
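As an illustration of this smoothing step, the increments of an estimated cumulative $\widehat A$ can be kernel-smoothed. The sketch below (our own, in Python with NumPy; the Epanechnikov kernel and all names are illustrative choices, not taken from the text) is one generic possibility.

```python
import numpy as np

def smooth_alpha(t_grid, A, bandwidth, t_eval):
    """Kernel-smooth the increments of the cumulative A(t) to estimate
    alpha(t), using an Epanechnikov kernel (a generic sketch)."""
    dA = np.diff(A)                       # increments of the cumulative
    mid = 0.5 * (t_grid[1:] + t_grid[:-1])
    out = np.empty(len(t_eval))
    for j, t in enumerate(t_eval):
        u = (t - mid) / bandwidth
        w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0) / bandwidth
        out[j] = np.sum(w * dA)           # kernel-weighted sum of increments
    return out
```

For a linear cumulative, e.g. $A(t)=2t$, the smoothed estimate recovers the constant slope away from the boundaries.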
\subsection{Notation}
Let $\Lambda_i(t) = \int_0^t \lambda_i(s) ds$ such that
$M_i(t) = N_i(t) - \Lambda_i(t)$ are martingales. Let further
$N(t)=(N_1(t),...,N_n(t))^T$ be the $n$-dimensional counting process and
$\Lambda (t)=(\Lambda_1(t),...,\Lambda_n(t))^T$ its
compensator, such that
$M(t)=(M_1(t),...,M_n(t))^T$ is an $n$-dimensional martingale,
and
define matrices
$X(t)=(X_{1}(t),\ldots , X_{n}(t))^{T}$ and
$Z(t)= ( Z_1(t),\ldots, Z_n(t))^T,$ with dimensions $n\times p$ and $n\times q$, respectively.
The individual entry times are summarised in one vector
$a_\bullet=(a_1,\dots, a_n)$.
A superscript $a>0$ denotes a shift in the argument, i.e.,
for a generic function $f$, $f^{a}(y)=f(y+a)$.
For a generic matrix $C(t)$, with $n$ rows $C_i(t)$, and an $n$-dimensional vector $v$, $C^v(t)$ is defined through shifting the rows: $C_i^{v}(t)=C_i(t+v_i)$.
For a generic matrix $C$, a minus superscript, $C^-$, denotes the Moore-Penrose inverse.
An integral, $\int$, with no limits denotes integration over the whole range.
\section{Identification of the entering nonparametric parameters}\label{sec:identification}
In many cases some covariates will enter both the $X $ and the $Z$ design. If this is the case, then the functions $\alpha$ and $\beta$
are not identified in model \eqref{model} -- constants can be shifted for the components that share the same covariate without altering the intensity.
Without loss of generality we assume that $X$ and $Z$ share the first $d$ $(0\leq d \leq \min(p,q))$ columns, i.e., for all $i=1,\dots, n$,
\[
X_{il}=Z_{il}, \quad l\leq d.
\]
We formulate the problem using group-theoretic arguments; see also \cite{Carstensen:07, Kuang:etal:08}. Fix constants
$c_1,\dots, c_d$ and define
$f_l$ as an $\Re^{p+q}$-valued function having all entries but the $l$th and the $(d+l)$th equal to zero:
\[
f_l(s,u)=\left(0, \cdots ,0, c_ls, 0, \cdots, 0,-c_l(u-a_0), 0,\cdots,0\right)^T, \ (l=1,\dots,d).
\]
We define the group $G$ by
\begin{align}\notag
G=\left\{g: \begin{pmatrix} A\\ B\end{pmatrix} \mapsto
\begin{pmatrix}
A\\
B
\end{pmatrix}
+ h\ | \quad h \in Lin(f_1, \dots f_d)\right\}.
\end{align}
The identification problem can be rephrased as follows:
the intensity defined in \eqref{model}, viewed as a function of $(A,B)^T$, is invariant under transformations $g \in G$.
In the sequel we circumvent the identification issue by adding the following constraint
\begin{align}\label{eq:identification}
A_l(t_{max})=\int_{0}^{t_{max}} \alpha_l(s) \mathrm ds =0, \quad (l=1,\dots,d),\end{align}
noting that for any solution $(A_0,B_0)$ of model \eqref{model}, there exists a unique solution $(A,B)=g(A_0,B_0)$ that fulfills
\eqref{eq:identification}.
Clearly other choices are also possible.
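To make the normalisation concrete, the sketch below (our own illustration in Python with NumPy; all names are ours) applies the unique $g \in G$ mapping a solution $(A_0,B_0)$, tabulated on grids, to the representative satisfying the constraint.

```python
import numpy as np

def identify(A, B, t_grid, a_grid, d, t_max, a0):
    """Shift (A, B) by an element of the group G so that the first d
    components of A vanish at t_max; the intensity is unchanged."""
    A, B = A.copy(), B.copy()
    c = A[-1, :d] / t_max                  # slopes c_l removed from A_l
    A[:, :d] -= np.outer(t_grid, c)        # A_l(t) - c_l * t
    B[:, :d] += np.outer(a_grid - a0, c)   # B_l(a) + c_l * (a - a0)
    return A, B
```

On shared columns the sum of the two cumulatives, evaluated on matched arguments $a = t + a_0$, is unchanged by the shift.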
\section{Least squares minimisation ignoring the identification of the nonparametric parameters}
We split the identification challenge in two. First we estimate ignoring identification of the parameters, and then we show in the next section how to identify the estimated parameters.
In this section we therefore ignore the identification problem keeping in mind that the solutions below are hence not unique.
We motivate our estimator $(\widehat A, \widehat B)$
via the following least squares criteria.
\begin{align*}
\arg \min_{\overline A, \overline B}\sum_i \int \left\{ \int_0^t \mathrm dN_i(s) - \sum_j \int_0^t X_{ij}(s) d \overline A_j(s) - \sum_k \int_0^t Z_{ik}(s) d \overline B_k^{a_i}(s)
\right\}^2 \mathrm dt,
\end{align*}
where the integrals can be understood as Stieltjes integrals, noting that $X_i$ and $Z_i$ are left continuous.
Minimisation runs over all possible integrators.
One can already see that the minimiser, if it exists, will be a step-function, since $\int_0^t \mathrm dN_i(s) $ is a step function.
To simplify notation we will generally work in matrix notation so that above minimisation criteria can also be written as
\begin{align*}
\arg \min_{\overline A, \overline B}\sum_i \int \left\{ \int_0^t \mathrm dN_i(s) - \int_0^t X_i(s) d \overline A(s) - \int_0^t Z_i(s) d \overline B^{a_i}(s)
\right\}^2 \mathrm dt.
\end{align*}
Straightforward computations utilizing calculus of variations lead to $(\widehat A, \widehat B)$ solving the following first order conditions for all $t\in [0,t_{max}]$, $a \in [a_0,a_{max}]$:
\begin{align*}
& \sum_i X_i(t)^T \left \{ dN_i(t) - X_i(t) d\widehat A(t) - Z_i(t) \mathrm d \widehat B^{a_i}(t)
\mathrm dt \right \}=0,\\
& \sum_i Z_i^{-a_i}(a)^T \left\{ dN_i^{-a_i}(a) - Z_i^{-a_i}(a) d \widehat B(a) - X_i^{-a_i}(a) \mathrm d \widehat A^{-a_i}(a)
\ \right\} =0.
\end{align*}
Rearranging yields
\begin{align*}
& \sum_i X_i(t)^T dN_i(t) - \sum_i X_i(t)^T Z_i(t) \mathrm d\widehat B^{a_i}(t)
= X(t)^T X(t) \mathrm d\widehat A(t),\\
&\sum_i Z_i^{-a_i}(a)^T dN_i^{-a_i}(a) - \sum_i Z_i^{-a_i}(a)^T X_i^{-a_i}(a) \mathrm d\widehat A^{-a_i}(a)
= Z^{-a_\bullet}(a)^T Z^{-a_\bullet}(a) d \widehat B(a).
\end{align*}
The last set of equations can be further rewritten as the backfitting equations
\begin{align} \label{bf1}
\widehat A(t)
&= \int_{0}^t X(s)^{-} dN(s) - \int E_1(t|u) d\widehat B(u)
\\
\widehat B(a)
& = \int_{a_0}^a Z^{-a_\bullet}(u)^{-} dN^{-a_\bullet}(u) - \int E_2(a|s) d \widehat A(s),
\label{bf2}
\end{align}
where
\begin{align*}
E_1(s|u) & = \sum_i \{X^{T}(u-a_i)X(u-a_i)\}^{-1}X_i^{-a_i, T}(u) Z^{-a_i}_i(u) I(a_i \leq u \leq a_i+s), \\
E_2(u|s) & = \sum_i \{Z^{-a_\bullet,T}(s+a_i)Z^{-a_\bullet}(s+a_i)\}^{-1}Z_i^T(s)X_i(s) I(a_0-a_i \leq s \leq u-a_i).
\end{align*}
\begin{remark}\label{remark:simpleE}
In the case with no covariates, i.e.,
\[
\lambda_i(t) = Y_i(t)\{ \alpha (t) + \beta(a_i+t)\},
\]
with $X_i(s)=Z_i(s)=Y_i(s) \in \Re$ being the at-risk indicator,
the kernels become
\begin{align*}
E_1(s|u) & = \sum_i \frac{1}{\sum_{i'} Y_{i'}(u-a_i)} Y_i^{-a_i}(u) I(a_i \leq u \leq a_i+s), \\
E_2(u|s) & = \sum_i \frac{1}{\sum_{i'} Y^{-a_{i'}}_{i'}(s+a_i)} Y_i(s) I(a_0-a_i \leq s \leq u-a_i).
\end{align*}
\end{remark}
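The kernel $E_1$ of the remark can be evaluated pointwise directly from its definition; below is a naive sketch (our own, in Python; `Y(i, t)` denotes the at-risk indicator of subject $i$ at duration $t$, and the interface is illustrative).

```python
def E1_no_cov(Y, a, s, u):
    """Evaluate E_1(s|u) of the remark for the no-covariate case:
    sum over subjects of Y_i(u - a_i) / sum_k Y_k(u - a_i) on the
    window a_i <= u <= a_i + s."""
    n = len(a)
    val = 0.0
    for i in range(n):
        if a[i] <= u <= a[i] + s:
            denom = sum(Y(k, u - a[i]) for k in range(n))
            if denom > 0:
                val += Y(i, u - a[i]) / denom
    return val
```

With everyone at risk and a common entry age, $E_1(s|u)$ reduces to the indicator of the window $[a_i, a_i+s]$.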
\section{Establishing existence, identification and uniqueness of the estimator}
In Section \ref{sec:identification} we outlined the identification problem but ignored it when deriving the estimator in the previous section. In this section we provide a fully identified estimator.
When aiming to solve equations \eqref{bf1} and \eqref{bf2} the identification problem can no longer be ignored.
In order to get a better grip of the situation we will now rewrite the backfitting equations as a linear operator equation.
We can compress equations \eqref{bf1} and \eqref{bf2} into one matrix equation:
\[
\begin{pmatrix}
\widehat A \\
\widehat B\\
\end{pmatrix}=
\begin{pmatrix}
\int_{0}^t X(s)^{-} dN(s) \\
\int_{a_0}^a Z^{-a_\bullet}(u)^{-} dN^{-a_\bullet}(u)
\end{pmatrix} +
\begin{pmatrix}
0 &- E_1 \\
- E_2& 0
\end{pmatrix}\times \begin{pmatrix}
\widehat A \\
\widehat B\\
\end{pmatrix},
\]
where with some misuse of notation $E_l f(\cdot)=\int E_l (\cdot|y)f(y) \mathrm dy, \ (l=1,2)$. Or, even simpler,
\begin{align}\label{eq:operator1}
\widehat \theta= \widehat m + E\widehat \theta,
\end{align}
with obvious notation, and linear operator $E$:
\[
\widehat \theta= \begin{pmatrix}
\widehat A \\
\widehat B\\
\end{pmatrix}, \quad \widehat m= \begin{pmatrix}
\int_{0}^t X(s)^{-} dN(s) \\
\int_{a_0}^a Z^{-a_\bullet}(u)^{-} dN^{-a_\bullet}(u)
\end{pmatrix}, \quad E= \begin{pmatrix}
0 &- E_1 \\
- E_2& 0
\end{pmatrix}.
\]
Note that $\widehat m$ is composed of the marginal Aalen estimators of the two time scales, $t$ and $a$.
Additionally, the operator $E$ is compact because it is the composition of an integral operator, which is compact, and a derivative operator, which is bounded.
The operator $E$ being compact means that it can be approximated arbitrarily closely by a finite dimensional matrix, which simplifies both the numerical and theoretical considerations.
If the eigenvalues of $E$ are bounded away from one, then, $(I-E)$ is invertible and we have
\[
\widehat \theta= (I-E)^{-1} \widehat m.
\]
Hence existence and uniqueness of our proposed estimator can be translated to properties of the eigenvalues of $E$.
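On a grid, the display above is an ordinary linear solve; a minimal sketch (our own, in Python with NumPy; `E_op` denotes the discretised operator):

```python
import numpy as np

def solve_direct(E_op, m_vec):
    """Solve theta = m + E theta, i.e. theta = (I - E)^{-1} m,
    assuming no eigenvalue of E_op equals one."""
    return np.linalg.solve(np.eye(E_op.shape[0]) - E_op, m_vec)
```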
One can for instance easily verify that if some covariates are in both the $X$ and the $Z$ design, then $E$ will have an eigenvalue equal to one, as discussed in the following remark.
\begin{remark}
Consider the simplest case $1=d=p=q$, i.e., $\lambda_i(t) = Y_i(t)\{ \alpha (t) + \beta(a_i+t)\}$.
Given a constant $c\in \Re $,
consider the pair of linear functions $f_1=(f_{11},f_{12})^T$ with $f_{11}(s)=cs, \ f_{12}(u)=-c(u-a_0)$, as defined in Section \ref{sec:identification}.
Assuming that $\sum Y_i(s)$ and $\sum Y_i(u-a_i)$ are bounded away from zero on the whole range
$s\in [0, t_{max}], u\in [a_0, a_{max}]$,
one can easily verify that
\begin{align*}
E_2f_{11}(u)&=c \int E_2(u|s) \mathrm ds= c(u-a_0),\\
E_1f_{12}(s)&=-c \int E_1(s|u) \mathrm du= -cs.
\end{align*}
To see this, e.g., for the second equation, note
\[
\int E_1(s|u) \mathrm du= \sum_i \int_{a_i}^{a_i+s}\frac{1}{\sum_{i'}Y_{i'}(u-a_i)} Y_i^{-a_i}(u) \mathrm du
=\int_0^s \frac{\sum_i Y_i(t) }{\sum_{i'}Y_{i'}(t)} \mathrm dt=s.
\]
Hence, we have
\[
E \begin{pmatrix}f_{11}\\ f_{12}\end{pmatrix}=\begin{pmatrix}-E_1f_{12}\\ -E_2f_{11}\end{pmatrix}=
\begin{pmatrix}f_{11}\\ f_{12}\end{pmatrix}.
\]
Hence one is an eigenvalue of $E$ with corresponding eigenfunction $f_1=(f_{11},f_{12})^T$.
In other words, the identification issue of the model carries over to the estimator.
With analogous arguments one can show that in the more general case the eigenspace corresponding
to the eigenvalue one includes the functions in $Lin(f_1, \dots f_d)$, where $f_1,\dots, f_d$ are defined in Section \ref{sec:identification}.
\end{remark}
We now utilize constraint \eqref{eq:identification} and incorporate it into new backfitting equations:
\begin{align} \label{bf11}
\widehat A(t)
&= \int_{0}^t X(s)^{-} dN(s) - \int E_1(t|u) d\widehat B(u),
\\
\widehat B(a)
& = \int_{a_0}^a Z^{-a_\bullet}(u)^{-} dN^{-a_\bullet}(u) - \int E_2(a|s) d \widehat A(s) + \frac{\widehat A^{d_q}(t_{max})}{t_{max}} (a-a_0), \label{bf22}
\end{align}
where $\widehat A^{d_q}$ is the $q$-dimensional vector $\widehat A^{d_q}= (\widehat A_1, \dots, \widehat A_d, 0, \dots, 0)^T$.
This translates to the new operator equation
\begin{align}\label{eq:operator2}
\widehat \theta= \widehat m + \overline E\widehat \theta, \quad \overline E= \begin{pmatrix}
0 &- E_1 \\
- \overline E_2& 0
\end{pmatrix},
\end{align}
where $\overline E_2 h (a)= \int E_2 (a|s)dh(s)- (a-a_0) h^{d_q}(t_{max}) t_{max}^{-1}$.
The next proposition states that the solutions of $\eqref{eq:operator2}$
capture all relevant solutions of \eqref{eq:operator1}
and that every solution of $\eqref{eq:operator2}$ is a solution of $\eqref{eq:operator1}$.
\begin{prop}\label{prop:operator1}
For every solution $\widehat \theta$ of \eqref{eq:operator1}, define
\[
\widehat {\theta}_0=(I-\widetilde \Pi )\widehat \theta,
\]
where
\[
\widetilde \Pi \begin{pmatrix}
h_1 (t)\\ h_2(a)
\end{pmatrix}
=
\begin{pmatrix}
t h_1^{d_p}(t_{max}) t_{max}^{-1} \\ -(a-a_0) h_1^{d_q}(t_{max}) t_{max}^{-1}
\end{pmatrix}.
\]
Then
$\widehat {\theta}_0$ is a solution of $\eqref{eq:operator2}$ and
\begin{align}\label{directsum}
\widehat \theta_0 + Lin(f_1, \dots f_d),
\end{align}
are further solutions of \eqref{eq:operator1}. Conversely, for every solution ${\widehat \theta}_0$
of \eqref{eq:operator2},
all functions of the form \eqref{directsum} are solutions of \eqref{eq:operator1}.
\end{prop}
The proof can be found in the appendix.
With Proposition \ref{prop:operator1} at hand it is justified to define our estimator as the solution of
\eqref{eq:operator2}.
We will now discuss existence and uniqueness of the solution of \eqref{eq:operator2}.
Note that $E$ is known and hence one can calculate a numerical approximation of its eigenvalues by working on a grid.
Consider the sub-space
\begin{align*}
K=\{h=(h_1,\dots, h_d,0,\dots,0) | \ h_l: \Re \to \Re , \ x \mapsto c_lx , \ c_l\in \Re , \ l=1,\dots, d\}.
\end{align*}
It holds that $\overline E_2=E_2(I-\Pi)$, where $\Pi$ is a projection onto $K$.
We have $K \subseteq \ker(I-E_2)$,
and we can check whether $K$ equals $\ker(I-E_2)$.
This can be done by calculating the dimension of the eigenspace of $E_2$ corresponding to the eigenvalue one.
The dimension will be at least $d$; if it is exactly $d$, then $K=\ker(I-E_2)$.
The next proposition states that if $\ker(I-E_2)=K$
and $\ker(I-E)=Lin( f_1,\dots,f_d)$,
then both $I-\overline E_2$
and $I-\overline E$ are bijective.
\begin{prop}\label{prop:operator2}
Assume that $E_2$ has eigenvalue $1$ with multiplicity $d$.
Then $(I-\overline E_2)$ is bijective.
If furthermore $E$ has eigenvalue $1$ with multiplicity $d$,
then $(I-\overline E)$ is bijective and hence invertible. In particular, a solution of equations \eqref{eq:operator2} exists and is unique.
\end{prop}
The proof can be found in the Appendix.
\section{Calculating the estimator}
There are two major ways of calculating the proposed estimator.
Either one directly calculates $(I-\overline E)^{-1}$ and applies it to $\widehat m$, or one uses an iterative procedure.
For the latter,
by iterative application of \eqref{eq:operator2} we derive that
\begin{align}\label{infinitesum}
\widehat \theta= \sum_{r=0}^\infty \overline E^r (\widehat m ) +\overline E^\infty(\widehat \theta).
\end{align}
If the absolute values of the eigenvalues of $\overline E$
are bounded from above by a constant strictly smaller than 1, then \eqref{infinitesum} is well defined with $\overline E^\infty=0$, and we obtain the convergent series
\[
\widehat \theta= \sum_{r=0}^\infty \overline E^r (\widehat m ),
\]
so that the iterative algorithm
\begin{align}\label{bf}
\widehat \theta^{(r)}= \widehat m + \overline E\widehat \theta^{(r-1)}
\end{align}
converges from any starting point.
Note that \eqref{bf} is the usual way the backfitting equations \eqref{bf11} and \eqref{bf22}, or equivalently \eqref{eq:operator2}, are solved.
Another way is to calculate the finite sum
\[
\widetilde \theta= \sum_{r=0}^{\overline r} \overline E^r (\widehat m ),
\]
with some stopping criterion $\overline r$.
We conclude
that the proposed estimator can be calculated in a straightforward manner from
the compound Aalen estimator $\widehat m$ and the operator $\overline E$.
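The fixed-point iteration \eqref{bf} can be sketched as follows (our own illustration; the sup-norm stopping rule is an assumption, not taken from the text):

```python
import numpy as np

def backfit(E_op, m_vec, tol=1e-12, max_iter=10000):
    """Iterate theta^{(r)} = m + E theta^{(r-1)}; converges from any
    starting point when the eigenvalues of E_op lie strictly inside
    the unit circle."""
    theta = np.zeros_like(m_vec)
    for _ in range(max_iter):
        new = m_vec + E_op @ theta
        if np.max(np.abs(new - theta)) < tol:
            return new
        theta = new
    return theta
```

The iterate agrees with the direct solve $(I-\overline E)^{-1}\widehat m$.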
We now briefly discuss how $\overline E$ can be calculated in the simple case $1=d=p=q$.
Here, $\overline E$ can be approximated by a $j\times k$ matrix where $j, k$ are the number of grid points
in $[0,t_{max}]$ and $[a_0, a_{max}]$, respectively.
This is done by first calculating the values $E_1(s|u)$ and $E_2(u|s)$ at every pair of grid points; see Remark \ref{remark:simpleE} for the definitions of the functions.
We call the resulting matrices $E_1^{mx}$ and $E_2^{mx}$.
Afterwards, $\overline {E}_2^{mx}$ is derived from $E_2^{mx}$, via
\[
\overline E_2^{mx}= E_2^{mx}+ \begin{pmatrix}
0 & \cdots & 0& s_1/s_j \\
0&\dots&0& s_2/s_j\\
\vdots &&\vdots&\vdots \\
0&\dots&0& 1
\end{pmatrix}.
\]
The matrices are then transformed to the desired operator via
\begin{align*}
\Delta=\begin{pmatrix}
1 & -1 & 0 & \cdots & 0 \\
0 & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots &-1 \\
0 & \cdots & \cdots & 0&1 \\
\end{pmatrix},\quad
E_1^{op}=E_1^{mx} \times \Delta, \quad
\overline E_2^{op}= \overline E_2^{mx}\times \Delta.
\end{align*}
Finally,
\begin{align*}
\overline E^{op}= \begin{pmatrix}
0 &- E_1^{op} \\
- \overline E_2^{op}& 0
\end{pmatrix}.
\end{align*}
Given a function $h: [0,t_{max}]\times [a_0, a_{max}] \rightarrow \Re$,
one calculates its values on the grid and summarises them in a vector
$h^{grid}$. The function
$\overline Eh$ is then approximated via $\overline E^{op}h^{grid}$, where the latter is a simple matrix multiplication.
\section{Asymptotics}
Note that we have
\begin{align}\label{true:eq}
\theta= m + \overline E\theta,
\end{align}
where $m$ arises from $\widehat m$ by replacing $N$ by $\Lambda$. It is quite remarkable that $\overline E$
is the observable operator from the previous sections and not some asymptotic limit.
We further conclude that the least squares solution \eqref{bf11} and \eqref{bf22} is a plug-in estimator of \eqref{true:eq}.
The estimation error is then given as
\begin{align}\label{error:eq}
\widehat \theta - \theta= \widehat m -m + \overline E(\widehat \theta- \theta).
\end{align}
As in the previous section,
if $\overline E$ has eigenvalues all bounded away from one, then
\[
\widehat \theta - \theta= (I-\overline E)^{-1} (\widehat m -m).
\]
So the asymptotic behaviour of $\widehat \theta - \theta$ can be deduced from the asymptotic behaviour
of $(I-\overline E)^{-1}$ and $(\widehat m -m)$, with the latter being the compound estimation error of two additive Aalen models on different time-scales.
\begin{theorem}\label{thm:asymptotics}
Under assumptions (A)--(G), the estimator $\widehat \theta$ exists.
Furthermore, the estimator $\widehat \theta$ is $n^{1/2}$-consistent:
\[
n^{1/2} (\widehat \theta - \theta )\rightarrow (I-\widetilde E)^{-1}U,
\]
in Skorohod space $D^{p+q}[0,a_{max}]$.
Here, $(\widehat \theta - \theta )$ is treated as one stochastic process defined on $[0,a_{max}]$
by setting for $j=1,\dots, p$ and $\nu \in [t_{max}, a_{max}]$, $(\widehat \theta - \theta )_j (\nu) =(\widehat \theta - \theta )_j(t_{max})$. And similarly, for $j=p+1, \dots, p+q$ and $\nu \in [0, a_{0}]$, $(\widehat \theta - \theta )_j (\nu) =0$.
The process $U$ is a $p+q$ dimensional mean-zero Gaussian process with covariation matrix $\Sigma(\nu_1,\nu_2)$ described in the Appendix,
and $\widetilde E$ is the limit of $\overline E$.
\end{theorem}
The proof can be found in the Appendix.
\section{Confidence Bands}
While we could use the central limit theorem of the previous section to construct confidence bands,
it has been suggested that better small sample performance can be achieved by directly
bootstrapping the estimation error.
We propose a wild bootstrap approach based on the relationship
\begin{align*}
\widehat \theta - \theta=(I-\overline E)^{-1} (\widehat m -m)&=(I-\overline E)^{-1} \begin{pmatrix} \int_{0}^t X(s)^{-} dM(s) \\
\int_{a_0}^a Z^{-a_\bullet}(u)^{-} dM^{-a_\bullet}(u)
\end{pmatrix}
\\ &= (I-\overline E)^{-1} \begin{pmatrix} \mathcal M_1 \\
\mathcal M_2
\end{pmatrix}
\end{align*}
Since $(I-\overline E)^{-1}$ is known,
it is enough to approximate $\mathcal M$.
We do this via the wild bootstrap version
\[
\widehat {\mathcal M}^{(1)}= \begin{pmatrix} \int_{0}^t X(s)^{-} d\widetilde {N}(s) \\
\int_{a_0}^a Z^{-a_\bullet}(u)^{-} d\widetilde {N}^{-a_\bullet}(u)
\end{pmatrix}, \quad \widetilde N_i(s)= G_i N_i(s),
\]
or
\begin{align*}
\widehat {\mathcal M}^{(2)}&= \begin{pmatrix} \int_{0}^t X(s)^{-} d\widetilde {M}(s) \\
\int_{a_0}^a Z^{-a_\bullet}(u)^{-} d\widetilde {M}^{-a_\bullet}(u)
\end{pmatrix}, \\ \quad \int_0^t \widetilde M_i(s)\mathrm d s&= G_i \left( \int_0^t N_i(s) \mathrm ds -\big(\int_0^t X_i(s)\mathrm d\widehat A(s)+ \int_0^t Z_i(s) \mathrm d \widehat B(s+a_i)\big)\right),
\end{align*}
where $G_i$ is a mean zero random variable with unit variance.
The random variable $G_i$ is generated such that, for fixed $i$, it is independent of all other variables.
It is straightforward to confirm that $\widehat {\mathcal M}^{(r)}, \ r=1,2$, is a mean zero process that has the same covariance as
$\mathcal M$ (the covariance of $\mathcal M$ is given in the appendix).
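On a discretised time grid, one draw of the first bootstrap version reduces to reweighting per-subject contributions; a sketch (our own; row $i$ of `contribs` holds subject $i$'s cumulated contribution $\int_0^t X(s)^{-}\mathrm dN_i(s)$ on the grid):

```python
import numpy as np

def wild_bootstrap_draw(contribs, rng):
    """One wild-bootstrap draw: multiply each subject's contribution by
    an independent G_i ~ N(0, 1) and sum over subjects."""
    G = rng.standard_normal(contribs.shape[0])
    return G @ contribs
```

By construction the draws have mean zero and, conditionally on the data, the same covariance as the summed contributions.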
Hence, we directly derive the following proposition.
\begin{prop}\label{prop:bootstrap}
Under assumptions (A)--(G), the bootstrapped estimation error is uniformly consistent, i.e., for $r=1,2$
\[
n^{1/2} ((I-\overline E)^{-1}\widehat {\mathcal M}^{(r)} )\rightarrow (I-\widetilde E)^{-1}U,
\]
in Skorohod space $D^{p+q}[0,a_{max}]$,
where $U$ is described in Theorem \ref{thm:asymptotics}.
\end{prop}
The proof can be found in the Appendix.
One useful consequence of this is that we can estimate standard errors of our estimator $\hat \theta$ based on
the approximation from the bootstrap. We denote these estimators as $\hat \sigma_r(t)$ for the two components $r=1,2$.
\begin{corollary}\label{cor:bootstrap}
Under assumptions (A)--(G), the bootstrapped errors lead
to confidence bands $CB^{(r)}$ for ${\theta}(\nu)$ over $\nu\in[\nu_1,\nu_2]$ providing an asymptotic coverage probability of $1 - \alpha$, where
\[
CB^{(r)}(\nu)= \hat \theta(\nu) \pm c_{1-\alpha} \hat \sigma_r(\nu),
\]
and
\[
c_{1-\alpha} = (1-\alpha)\quad \textrm{quantile of } \quad \mathcal L\left\{ \sup_{[\nu_1,\nu_2]}n^{-1/2}
\frac{ \left| (I-\overline E)^{-1}\widehat {\mathcal M}^{(r)} \right| }{\hat \sigma_r} \ \Big| \ X,Z, N\right\}.
\]
\end{corollary}
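A sketch of computing $c_{1-\alpha}$ and the band from bootstrap replicates (our own illustration; `boot` is an $R \times T$ array of bootstrap draws of the estimation error on a grid, and `se` the pointwise standard errors):

```python
import numpy as np

def uniform_band(theta_hat, boot, se, alpha=0.05):
    """Uniform confidence band: c is the (1 - alpha) quantile of the
    bootstrap distribution of sup_t |error(t)| / se(t)."""
    sup_stats = np.max(np.abs(boot) / se, axis=1)
    c = np.quantile(sup_stats, 1.0 - alpha)
    return theta_hat - c * se, theta_hat + c * se
```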
We explore the performance of the estimator of the standard error and the uniform bands in the next section.
\section{Simulations}
We generated data from the simple two-time-scale model with age and duration
that resembles the data we consider in the worked example in the next section.
Thus assuming that the hazard for those under risk is given as
$\beta(t+a_i)+\alpha(t)$, where $\beta(a) \equiv 0.067$ and the entry ages were
drawn uniformly from $[0,25]$, but making sure that 10 \% of the data started in $0$
(to avoid difficulties with left truncation in the estimation).
The $\alpha(t)$ component was piecewise constant
with rate $0.32$ in the time-interval $[0,0.25]$, then $0.48$ in $(0.25,0.5]$, and finally, to
satisfy our constraint, $-0.044$ in $(0.5,5]$, so that $\int_0^5 \alpha(s) ds =0$.
All subjects were censored after $5$ years of follow up.
In all simulations we
used a discrete approximation based on a time-grid of
100 points in both the age direction $[0,30]$ and on the duration
time-scale $[0,5]$.
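The data-generating mechanism just described can be sketched by inversion sampling of the (piecewise constant) total hazard $\alpha(t) + 0.067$ (our own illustration in Python with NumPy; names and the exact sampling scheme are ours):

```python
import numpy as np

def simulate(n, rng):
    """Simulate (entry age, duration, event indicator) from the hazard
    alpha(t) + 0.067, with alpha piecewise constant as in the text;
    administrative censoring at t = 5."""
    breaks = np.array([0.0, 0.25, 0.5, 5.0])
    rates = np.array([0.32, 0.48, -0.044]) + 0.067   # total hazard per piece
    cumH = np.concatenate([[0.0], np.cumsum(rates * np.diff(breaks))])
    a = np.where(rng.uniform(size=n) < 0.1, 0.0, rng.uniform(0.0, 25.0, n))
    E = rng.exponential(size=n)                      # unit exponentials
    T = np.full(n, 5.0)
    status = np.zeros(n, dtype=int)
    for i in range(n):
        k = np.searchsorted(cumH, E[i], side="right") - 1  # piece containing E_i
        if k < len(rates):
            T[i] = breaks[k] + (E[i] - cumH[k]) / rates[k]  # invert cum. hazard
            status[i] = 1
    return a, T, status
```

Since the entry ages do not enter the (constant) $\beta$, the censoring fraction is $\exp(-0.337)$ by construction, where $0.337$ is the cumulative hazard at $t=5$.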
\subsection{Bias of backfitting}
We considered sample sizes 100, 200 and 400 and show the bias for
the two components in
Table \ref{tab:tab1}, based on 1000 realizations.
\begin{table}[ht]
\begin{tabular}{l| c c c}
age & n=100&n=200&n=400 \\ \hline
\hline
\hline
$ 6.717 $&$ -0.001 $&$ 0.006 $&$ -0.004 $ \\
$ 13.788 $&$ 0.009 $&$ 0.003 $&$ -0.006 $ \\
$ 20.859 $&$ 0.018 $&$ 0.001 $&$ 0.002 $ \\
$ 27.929 $&$ 0.027 $&$ 0.004 $&$ 0.010 $ \\
$ 35 $&$ 0.078 $&$ 0.006 $&$ 0.013 $ \\
\hline
\hline
time & n=100&n=200&n=400 \\ \hline
\hline
$ 0.96 $&$ 0.018 $&$ 0.009 $&$ 0.006 $ \\
$ 1.97 $&$ 0.015 $&$ 0.007 $&$ 0.005 $ \\
$ 2.98 $&$ 0.009 $&$ 0.005 $&$ 0.003 $ \\
$ 3.99 $&$ 0.005 $&$ 0.002 $&$ 0.002 $ \\
$ 5 $&$ 0 $&$ 0 $&$ 0 $ \\
\end{tabular}
\caption{Bias of backfitting algorithm for sample sizes $n = 100, 200, 400$ for the
age and time component for selected ages and time points.
Based on 1000 realisations.
}
\label{tab:tab1}
\end{table}
We note that the backfitting algorithm is almost unbiased across all sample
sizes and improves as the sample size increases. This is despite the fact that
the simulated component in the time-direction is quite irregular.
\subsection{Bootstrap uncertainty}
Secondly, we demonstrate that our bootstrap seems to work well to describe the uncertainty of the
estimates. We simulated data as before and, based on 1000 realisations with 100 bootstrap replications based
on $G_i dN_i$, we estimated: a) the point-wise standard error for the two components; b) the
pointwise coverage based on these; and c) constructed
uniform confidence bands, as described in Corollary \ref{cor:bootstrap}, for the two components and their coverage.
\begin{table}
\begin{tabular}{ l| c c c c|| c c c c }
n &age&mean se&sd&cov&time&mean se&sd&cov \\\hline
\hline
\hline
$ 100 $&$ 6.717 $&$ 0.224 $&$ 0.231 $&$ 0.912 $&$ 0.96 $&$ 0.044 $&$ 0.045 $&$ 0.954 $ \\
$ 100 $&$ 13.788 $&$ 0.297 $&$ 0.298 $&$ 0.935 $&$ 1.97 $&$ 0.039 $&$ 0.04 $&$ 0.946 $ \\
$ 100 $&$ 20.859 $&$ 0.351 $&$ 0.357 $&$ 0.943 $&$ 2.98 $&$ 0.032 $&$ 0.034 $&$ 0.951 $ \\
$ 100 $&$ 27.929 $&$ 0.391 $&$ 0.402 $&$ 0.938 $&$ 3.99 $&$ 0.024 $&$ 0.024 $&$ 0.966 $ \\
$ 100 $&$ 35 $&$ 0.460 $&$ 0.464 $&$ 0.932 $&$ 5 $&$ 0.016 $&$ 0.017 $&$ 0.874 $ \\
\hline
\hline
$ 200 $&$ 6.717 $&$ 0.158 $&$ 0.155 $&$ 0.94 $&$ 0.96 $&$ 0.031 $&$ 0.031 $&$ 0.951 $ \\
$ 200 $&$ 13.788 $&$ 0.207 $&$ 0.206 $&$ 0.942 $&$ 1.97 $&$ 0.027 $&$ 0.027 $&$ 0.960 $ \\
$ 200 $&$ 20.859 $&$ 0.243 $&$ 0.237 $&$ 0.948 $&$ 2.98 $&$ 0.022 $&$ 0.022 $&$ 0.966 $ \\
$ 200 $&$ 27.929 $&$ 0.271 $&$ 0.262 $&$ 0.945 $&$ 3.99 $&$ 0.017 $&$ 0.017 $&$ 0.972 $ \\
$ 200 $&$ 35 $&$ 0.328 $&$ 0.329 $&$ 0.933 $&$ 5 $&$ 0.011 $&$ 0.012 $&$ 0.933 $ \\
\hline
\hline
$ 400 $&$ 6.717 $&$ 0.114 $&$ 0.118 $&$ 0.948 $&$ 0.96 $&$ 0.022 $&$ 0.022 $&$ 0.951 $ \\
$ 400 $&$ 13.788 $&$ 0.148 $&$ 0.153 $&$ 0.946 $&$ 1.97 $&$ 0.019 $&$ 0.019 $&$ 0.957 $ \\
$ 400 $&$ 20.859 $&$ 0.173 $&$ 0.18 $&$ 0.937 $&$ 2.98 $&$ 0.015 $&$ 0.015 $&$ 0.960 $ \\
$ 400 $&$ 27.929 $&$ 0.192 $&$ 0.196 $&$ 0.943 $&$ 3.99 $&$ 0.012 $&$ 0.012 $&$ 0.970 $ \\
$ 400 $&$ 35 $&$ 0.235 $&$ 0.245 $&$ 0.934 $&$ 5 $&$ 0.008 $&$ 0.008 $&$ 0.950 $ \\
\end{tabular}
\caption{
Uncertainty estimated from bootstrap for sample sizes $n = 100, 200, 400$ for the age and time component for selected ages and time points.
Based on 1000 realisations and a bootstrap with 100
repetitions. mean of estimated standard errors (mean se), standard deviation of estimates (sd) and 95 \% pointwise coverage (cov).
}
\label{tab:tab2}
\end{table}
We note that the sampling standard deviation is well estimated by the bootstrap standard errors across all
sample sizes and for both components. In addition, the pointwise coverage is
close to the nominal 95 \% level for the larger sample sizes. But even for $n=100$ the coverage is
reasonable for most time-points for the two components.
Finally, we also considered the performance of the confidence bands based on our bootstrap approach.
\begin{table}
\begin{tabular}{ l| c c }
n &coverage (age) & coverage (time) \\\hline
\hline
$ 100 $&$ 0.797 $&$ 0.792 $ \\
$ 200 $&$ 0.912 $&$ 0.915 $ \\
$ 400 $&$ 0.952 $&$ 0.939 $ \\
\hline
\end{tabular}
\caption{Coverage of confidence bands estimated from bootstrap for sample sizes $n = 100, 200, 400$ for the age and time component.
Based on 1000 realisations and a bootstrap with 100 repetitions.
}
\label{tab:tab3}
\end{table}
When $n$ gets larger these bands are quite close to the nominal 95 \% level,
but for $n=100$ the asymptotics have not quite set in for the
entire band to work well.
\section{Application to the TRACE study}
The TRACE study group (see e.g. \cite{trace}) has
collected information on more than 4000 consecutive patients with
acute myocardial infarction (AMI) with the aim of
studying the prognostic importance of various risk
factors on mortality. We here consider a subset of 1878 of these
patients that are available in the timereg R package.
At the age of entry (age of diagnosis) the
patients had various risk factors recorded, but we here just show the
simple model with the effects of the two-time-scales age and duration.
It is expected that the duration time-scale has a strong initial effect on the risk of dying that then
disappears when patients survive the first period right after their AMI.
We then estimated the two-time-scale model $\alpha(t)+\beta(t+a_i)$ under the
identifiability condition
that $\int_0^5 \alpha(s) ds=0$, restricting attention to patients more than
40 years of age and to the first 5 duration years after the diagnosis.
First we estimated the mortality on the two time-scales separately, the two
marginal estimates; see Figure \ref{figt:tracefig}. Panel (a) shows the cumulative hazard on the
age time-scale with the marginal estimate (full line) and the one with
adjustment for duration effects (broken line), and panel (b) the mortality on
the duration time-scale with the marginal estimate (full line) and with
adjustment for age effects (broken line). We note that on the duration
time-scale the cumulative hazard is quite steep. In addition we show 95 \%
confidence bands based on our bootstrap (regions), and the pointwise confidence
intervals (dotted line).
\begin{figure}[ht]
\centering
\includegraphics{fig1-marg-backfit.pdf}
\caption{Cumulative baseline on the two time-scales estimated marginally (full line) and in the two-time-scale model (broken line).
Confidence bands (regions) and pointwise confidence intervals (dotted lines).}
\label{figt:tracefig}
\end{figure}
Taking out the duration effect slightly alters the estimate of the age-effect. In contrast, the
marginal duration effect is strongly confounded by the age effect, and here the two-time-scale model more
clearly demonstrates what is going on on the duration time-scale. The duration effect is strong initially, and
after surviving the first 220 days we see a protective effect (dotted vertical line).
We stress that the interpretation of the hazards on the two time-scales is difficult, due to, for example, the
constraint that needs to be imposed to identify a specific solution. Nevertheless, it is very useful to see the components
from the two time-scales that jointly make up the hazard for an individual, and they can be used for prediction purposes,
as we demonstrate further below. Note also that due to the additive structure the duration effect can be
interpreted as giving relative survival due to the duration time-scale.
\begin{figure}[ht]
\centering
\includegraphics{fig2-survival-pred.pdf}
\caption{Predicted survival with 95 \% confidence bands (regions) for a subject that is 60,70, and 80, respectively (full lines).
Predicted survival using only age for the three ages (broken lines), and survival using only duration (dotted line). }
\label{figt:survtracefig}
\end{figure}
In Figure \ref{figt:survtracefig} we show the survival predictions for subjects that are 60, 70, or
80, respectively, using the two-time-scale model. That is, we compute
$\exp\left(-\left\{(\hat B(a_0+t) - \hat B(a_0)) + \hat A(t)\right\}\right)$ and construct the
confidence bands using the bootstrap approach for
$(\hat B(a_0+t) - \hat B(a_0)) + \hat A(t)$ for $t \in [0,5]$.
These curves are a direct consequence of having the two components and
are directly interpretable.
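The prediction formula can be sketched directly from the two cumulative estimates (our own illustration in Python with NumPy; linear interpolation of the cumulatives is an assumption of ours):

```python
import numpy as np

def predict_survival(A, B, t_grid, a_grid, age):
    """Predicted survival exp(-[(B(age + t) - B(age)) + A(t)]) over
    t_grid for a subject entering at the given age; A, B are the
    cumulative estimates tabulated on t_grid and a_grid."""
    B_shift = np.interp(age + t_grid, a_grid, B)
    B0 = np.interp(age, a_grid, B)
    return np.exp(-((B_shift - B0) + A))
```

For constant hazards, e.g. $A(t)=0.1t$ and $B(a)=0.05a$, this reduces to $\exp(-0.15t)$.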
\section{Discussion}
By utilising the additive structure we have demonstrated that one can estimate
the effect of two time-scales directly by a backfitting algorithm that does not
involve smoothing. Working on the cumulatives also leads to a uniform
asymptotic description and a simple bootstrap procedure for quantifying the
uncertainty and for constructing, for example, confidence intervals.
These cumulatives may form the basis for smoothing-based estimates when the hazards are
of interest, but often the cumulatives are the quantities of key interest, for
example when interest is in survival predictions.
Clearly, the model could also be fitted by a more standard backfitting approach
working on the hazard scale as in ... for multiplicative hazard models.
Our backfitting approach can be extended to, for example,
the age-period-cohort model, but here the identifiability conditions are more complex
to build into the estimation.
\newpage
\section{Introduction}
Existing artificial neural networks, including the well-celebrated deep learning architectures such as Convolutional Neural Networks (CNNs) \cite{krizhevsky2012imagenet,he2016deep} and Recurrent Neural Networks (RNNs) \cite{graves2013speech}, are uniformly plastic. In the presence of large amounts of training data and guided by a sensible loss function, the plasticity of artificial neural networks enables them to learn from the data in an end-to-end manner and often provide state-of-the-art performance in various applications. These include object detection \cite{redmon2017yolo9000}, action recognition from videos \cite{carreira2017quo}, speech recognition \cite{saon2017english}, and language translation \cite{edunov2018understanding}, among many others. The same uniform plasticity, on the other hand, is the culprit for a phenomenon known as ``catastrophic forgetting/interference'' \cite{mccloskey1989catastrophic,mcclelland1995there}, that is, a tendency to rapidly forget previously learned tasks when presented with new training data.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Catastrophic_Forgetting.png}
\caption{Depiction of catastrophic forgetting in binary classification tasks when there is a distribution shift from an initial task to a secondary task. When exposed to the distribution of the new task, the uniformly plastic parametric model, $f(\cdot,\theta)$, conforms to the new distribution with no constraints on maintaining its performance on the previous task.}
\label{fig:intro}
\end{figure}
A uniformly plastic neural network requires independent and identically distributed samples from a stationary distribution of training samples, i.e., the i.i.d. assumption. The `identically distributed' part of this assumption, however, is easily violated in real-world applications, especially in continual, sequential, and lifelong learning settings. The training data could violate the identically distributed assumption, i.e., have a non-stationary distribution, when: 1) there is a shift in the distribution of the training data over time (e.g., the visual input data to a lifelong learning agent during `day' versus `night'), and
2) the training data is not fully observable at once and different modes of variation of the data are explored or revealed through time. This leads to a fundamental challenge in lifelong learning known as `catastrophic forgetting/interference': a learning agent forgets its previously acquired information when learning a new task. A cartoon depiction of catastrophic forgetting is given in Figure \ref{fig:intro}. An ideal system should balance its plasticity and stability in order to acquire new information while preserving the old (e.g., the decision boundary in the rightmost panel of Figure \ref{fig:intro}).
The general idea behind our approach for overcoming catastrophic forgetting is similar in essence to the work of \cite{kirkpatrick2017overcoming,zenke2017continual,aljundi2018memory}. In short, we propose to selectively and dynamically modulate the plasticity of the synapses that are `important' for solving old tasks. Inspired by the human visual cortex, we define an attention-based synaptic importance that leverages Hebbian learning \cite{hebb1961organization}. Our method is biologically inspired, in that it borrows ideas from the neuromodulatory systems in the human brain. Neuromodulators are important contributors to attention and goal-driven perception. In particular, the cholinergic system drives bottom-up, stimulus-driven attention, as well as top-down, goal-directed attention \cite{avery2014}. Furthermore, it increases attention to task-relevant stimuli, while decreasing attention to distractions \cite{oros2014learning}. This is similar in spirit to contrastive Excitation Backpropagation (c-EB), where a top-down excitation mask increments attention to the target features and an inhibitory mask decrements attention to distractors \cite{zhang2018top}. We leverage the c-EB method and introduce a new framework for learning task-specific synaptic importance in neural networks, which enables the network to preserve its previously acquired knowledge while learning new tasks.
Our specific contributions in this work are:
\begin{enumerate}
\item Leveraging brain-inspired attention mechanisms for overcoming catastrophic forgetting for the first time
\item Hebbian learning of synaptic importance in parallel to updating synaptic weights via back-propagation and leveraging the rich literature on Hebbian learning
\item Showing the effectiveness of the proposed method on benchmark datasets
\end{enumerate}
\section{Relevant work}
In order to overcome catastrophic forgetting, three general strategies are reported in the literature:
\begin{enumerate}
\item selective synaptic plasticity to protect consolidated knowledge,
\item additional neural resource allocation to learn new information, and
\item complementary learning for memory consolidation and experience replay.
\end{enumerate}
Interestingly, all three strategies have roots in biology. The first strategy is inspired by synaptic consolidation in the mammalian neocortex \cite{benna2016computational} where
knowledge from a previously acquired task is encoded in a subset of synapses that are rendered less plastic and therefore preserved for longer periods of time. The general idea for this strategy is to solidify and preserve synaptic parameters that are crucial for the previously learned tasks. This is often done via selective and task-specific updates of synaptic weights in a neural network. The second strategy is based on ideas similar to neurogenesis in the brain \cite{aimone2011resolving}: for a new task, allocate new neurons that utilize the shared representation learned from previous tasks but do not interfere with the old synapses. Strategy 3 is based on the theory of complementary learning systems (CLS) \cite{mcclelland1995there} in the brain and comes in various flavors, from simply recording training samples (e.g., episodic memory) to utilizing generative models (e.g., generative adversarial networks, GANs) to learn/memorize the distribution of the data. The idea behind these methods is to make the training samples as identically distributed as possible, by adding random samples from the old distribution to the newly observed training data, providing identically distributed data that gets close to the ideal case shown in Figure \ref{fig:intro}.
\begin{table}[t]
\centering
\begin{tabular}{ll}
\toprule
Notation & Representing\tabularnewline
\midrule
\(f(\cdot;\theta)\) & Parametric mapping defined by a NN \tabularnewline
\(f^l_i(\cdot;\theta)\) & Output of the i'th neuron in l'th layer \tabularnewline
\(\lambda\) & Regularization coefficient\tabularnewline
\(X\) & Input data\tabularnewline
\(x\) & Input sample\tabularnewline
\(y\) & Label\tabularnewline
\(P(\cdot)\) & Probability\tabularnewline
\(\mathcal{L}\) & Loss function\tabularnewline
\(\sigma(\cdot)\) & Nonlinearity in a neural network\tabularnewline
\(\gamma^l_{ji} \text{~~or~~} \gamma_k\) & Synaptic importance parameter \tabularnewline
\(\theta^l_{ji} \text{~~or~~} \theta_k\) & Synaptic weights \tabularnewline
\bottomrule
\end{tabular}
\caption{Notations used throughout the paper. }
\label{tab:notations}
\end{table}
In this paper we are interested in the first strategy, where the plasticity of synapses in a neural network is selectively and dynamically changed, allocating more plasticity to synapses that do not contribute to solving the previously learned tasks. To that end, several notable works have recently been proposed for overcoming catastrophic forgetting using selective plasticity, including \cite{kirkpatrick2017overcoming}, \cite{zenke2017continual}, \cite{lee2017overcoming}, and more recently \cite{aljundi2018memory}. The common theme behind all these methods is the definition of synaptic importance parameters, $\gamma_k$, in addition to the synaptic weights $\theta_k$. In all these methods, during or following learning of task $A$, the synaptic importance parameters are updated along with the synaptic weights. Then, for learning task $B$, the loss function is modified to change the plasticity of different synapses with respect to their importance:
\begin{equation}
\mathcal{L}(\theta)=\mathcal{L}_B(\theta)+\underbrace{\lambda\sum_{k} \gamma_k(\theta_k -\theta^\star_{A,k})^2}_{\text{Regularizer}}
\label{eq:updated_loss}
\end{equation}
where $\theta^\star_{A,k}$ are the optimized synaptic weights for task $A$, and $\mathcal{L}_B(\theta)$ is the original loss function for learning task $B$, e.g., the cross-entropy loss. Intuitively, the regularizer penalizes large changes to synapses that are important for solving task $A$; therefore, the network is forced to solve the new task using synapses that are less important for previously learned tasks. These methods differ in the way they calculate the importance parameters, $\gamma_k$.
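As a concrete illustration, the regularized objective in Equation \eqref{eq:updated_loss} and its contribution to the gradient can be sketched as follows (a minimal numpy sketch with our own function names, not the authors' implementation):

```python
import numpy as np

def regularized_loss(loss_b, theta, theta_star_a, gamma, lam):
    """Total loss of Eq. (1): task-B loss plus the quadratic consolidation
    penalty, weighted per synapse by the importance parameters gamma_k."""
    penalty = lam * np.sum(gamma * (theta - theta_star_a) ** 2)
    return loss_b + penalty

def regularizer_grad(theta, theta_star_a, gamma, lam):
    """Gradient of the penalty w.r.t. theta, added to the task-B gradient
    during back-propagation: 2 * lam * gamma_k * (theta_k - theta*_{A,k})."""
    return 2.0 * lam * gamma * (theta - theta_star_a)
```

During training on task $B$, the output of the second function is simply added to the task-$B$ gradient of each parameter, so important synapses are pulled back toward their task-$A$ values.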
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{framework.png}
\caption{Illustration of our proposed framework for continual learning. Connections in the neural network are committed to a given task based on contrastive excitation backpropagation (c-EB). For each training example (e.g., an image with a ``7"), c-EB is applied with the ground truth label (``7" here) to generate attentional maps at each upstream layer in the hierarchy. A connection is considered important for a given task (e.g., classifying digits for a particular MNIST task) if its pre- and post-synaptic neurons are highlighted by the c-EB process. We use Oja's rule to incrementally update the importance of such connections during task learning. This procedure consolidates various important connections in the network for experienced tasks, preventing their forgetting as new tasks are learned.}
\label{fig:framework}
\end{figure*}
In the Elastic Weight Consolidation (EWC) work, Kirkpatrick et al. provide a Bayesian argument that the information about task $A$ is fully absorbed in the posterior distribution $p(\theta\vert X_A)$. They then approximate the posterior as a Gaussian distribution with mean given by $\theta^\star_{A}$ and a diagonal precision matrix given by the Fisher information matrix, $F$, and set the importance parameters to be the diagonal values of this matrix, $\gamma_k=F_{kk}$. The method proposed by Kirkpatrick et al., however, is not online, in the sense that the importance parameters are calculated at the end of learning each task in an offline manner. Zenke et al. and Aljundi et al. provided online variations of EWC. Specifically, Zenke et al. set the synaptic importance, $\gamma_k$, to be a function of the cumulative change a synapse experiences during training on a specific task; they denote their algorithm Synaptic Intelligence. Larger cumulative changes correspond to higher importance. Similarly, Aljundi et al. consider the importance to be the cumulative effect of a synapse on the norm of the last layer of the neural network before the softmax classifier, hence decoupling the importance parameters from labels and enabling them to continue to update even in the absence of labels. Aljundi et al. further show that their proposed importance is equivalent to calculating the Hebbian trace of a synapse.
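For reference, the diagonal Fisher importance used by EWC can be estimated empirically as the average squared per-sample gradient of the log-likelihood at $\theta^\star_A$. The sketch below is only illustrative and assumes a user-supplied gradient function:

```python
import numpy as np

def diagonal_fisher(grad_log_lik, samples, theta):
    """Empirical diagonal Fisher information at theta: the average of the
    squared per-sample gradients of the log-likelihood. EWC then sets the
    importance of parameter k to gamma_k = F_kk."""
    F = np.zeros_like(np.asarray(theta, dtype=float))
    for x in samples:
        g = np.asarray(grad_log_lik(theta, x), dtype=float)
        F += g ** 2
    return F / len(samples)
```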
In this paper, we follow the existing work in the literature, but bring in a biologically plausible solution based on neuromodulatory attentional mechanisms in the human brain.
\section{Method}
Our proposed method leverages the bio-inspired top-down attention mechanism of contrastive excitation backpropagation (c-EB) to update the synaptic importance parameters of a network in an online fashion. Figure \ref{fig:framework} depicts the core idea of our proposed framework. The notation used throughout this paper is summarized in Table \ref{tab:notations}.
\subsection{Excitation back-propagation}
Excitation Back-Propagation (EB) and its contrastive variation are biologically inspired top-down attention mechanisms \cite{zhang2018top}, used in computer vision applications as visualization tools for the top-down attention of CNNs. With an abuse of notation we let $f^l_i$ denote the $i$'th neuron in layer $l$ of a neural network, with activation $f^l_{i}=\sigma(\sum_{j}\theta^{l}_{ji}f^{(l-1)}_{j})$, where $\theta^{l}$ are the synaptic weights between layers $(l-1)$ and $l$. Define the \emph{relative importance} of neuron $f^{(l-1)}_{j}$ for the activation of neuron $f^l_{i}$ as a probability distribution $P(f^{(l-1)}_{j})$ over the neurons in layer $(l-1)$. This probability distribution can be factored as,
\begin{equation}
P(f^{(l-1)}_{j}) = \sum_{i}P(f^{(l-1)}_{j} \vert f^l_{i})P(f^l_{i}).
\label{eq:ebp0}
\end{equation}
$P(f^{l}_{i})$ is the Marginal Winning Probability (MWP) of neuron $f^{l}_{i}$. Zhang et al. then define the conditional probability $P(f^{(l-1)}_{j} \vert f^l_{i})$ as
\begin{equation}
P(f^{(l-1)}_{j} \vert f^l_{i}) =
\begin{cases}
Z^{(l-1)}_{i}f^{(l-1)}_{j}\theta^{l}_{ji} & \text{if } \theta^{l}_{ji} \geq 0, \\
0 & \text{otherwise},
\end{cases}
\label{eq:ebp}
\end{equation}
where $$Z^{(l-1)}_{i}=\left(\sum_j f^{(l-1)}_{j}\theta^{l}_{ji}\right)^{-1}$$ is a normalization factor such that $\sum_{j}P(f^{(l-1)}_{j} \vert f^l_{i}) = 1$. For a given input, $x$, (e.g., an image), EB generates a heat-map in the pixel-space w.r.t. class $y$ by starting with $P(f^{L}_{i}=y)=1$ at the output layer and applying Equation (\ref{eq:ebp}) recursively.
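For a single fully connected layer, one step of this recursion can be sketched in a few lines (our own variable names; actual EB implementations operate layer-by-layer on CNN feature maps):

```python
import numpy as np

def eb_step(p_upper, activ_lower, W):
    """One Excitation Backprop step: redistribute the winning probabilities
    p_upper of layer l to layer l-1, keeping only excitatory (non-negative)
    weights and normalizing per upper neuron, as in Eq. (2)/(3).
    W[j, i] is the weight theta^l_{ji} from lower neuron j to upper neuron i."""
    Wp = np.clip(W, 0.0, None)                # excitatory connections only
    contrib = activ_lower[:, None] * Wp       # f^{l-1}_j * theta^l_{ji}
    Z = contrib.sum(axis=0)                   # normalizer per upper neuron i
    cond = np.divide(contrib, Z, out=np.zeros_like(contrib), where=Z > 0)
    return cond @ p_upper                     # marginalize over i, Eq. (2)
```

Starting from a one-hot $P(f^L_i = y) = 1$ at the output layer and applying `eb_step` repeatedly down the hierarchy yields the heat-map in pixel space.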
Furthermore, contrastive EB (c-EB) introduces a hypothetical negative output node $\bar{f}^L_i$ with negated weights $\bar{\theta}^L_{ji}=-{\theta}^L_{ji}$. c-EB then recursively calculates $\bar{P}(f^{(l-1)}_{j} \vert f^l_{i})$ starting from this negative node $\bar{f}^L_i$. The final \emph{relative importance} of the neurons is then calculated as a normalized difference of $P(f^{(l-1)}_{j} \vert f^l_{i})$ and $\bar{P}(f^{(l-1)}_{j} \vert f^l_{i})$,
$$ P_{c}(f_j^{(l-1)}\vert f_i^{l})=\frac{ReLU(P(f^{(l-1)}_{j} \vert f^l_{i})-\bar{P}(f^{(l-1)}_{j} \vert f^l_{i}))}{\sum_j ReLU(P(f^{(l-1)}_{j} \vert f^l_{i})-\bar{P}(f^{(l-1)}_{j} \vert f^l_{i}))}$$
where $ReLU$ is the rectified linear function. Finally, the contrastive MWP, $P_{c}(f_i^{l})$, indicates the relative importance of neuron $f_i^{l}$ for the specific prediction $y$. Alternatively, $P_{c}(f_i^{l})$ can be thought of as the implicit amount of attention that the network pays to neuron $f_i^{l}$ in order to predict $y$. Next, we use the contrastive MWPs to update the synaptic importance parameters.
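Assuming the positive and dual (negated-output) EB passes have produced the two MWP vectors for a layer, the contrastive combination above reduces to a normalized ReLU difference, e.g.:

```python
import numpy as np

def contrastive_mwp(p_pos, p_neg):
    """Contrastive relative importance P_c: the ReLU difference of the
    positive and dual EB maps, renormalized to a probability distribution."""
    diff = np.maximum(p_pos - p_neg, 0.0)
    s = diff.sum()
    return diff / s if s > 0 else diff
```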
\subsection{Attention-Based Synaptic Importance}
Let $\gamma_{ji}^{l}$ denote the importance of the synapse between neurons $f_j^{(l-1)}$ and $f_i^l$ for a particular task. Here we hypothesize that the importance of a synapse should be increased if its pre- and post-synaptic neurons are important (relative to the task being learned), where the importance of the neurons is identified via Equation \eqref{eq:ebp0}. This is the basic idea behind Hebbian learning \cite{hebb1961organization}. Hebbian learning of importance parameters, however, suffers from the severe problem of unbounded growth of these parameters. To avoid unbounded growth, and following the large body of work on Hebbian learning, we use Oja's learning rule \cite{oja1982simplified}, which provides an alternative and more stable learning algorithm. We then update the importance parameters as follows:
\begin{equation}
\gamma_{ji}^{l}=\gamma_{ji}^{l}+\epsilon\left(P_c(f^{(l-1)}_j)P_c(f^{(l)}_i)- P_c\left(f^{(l)}_i\right)^2\gamma^l_{ji}\right)
\end{equation}
where $\epsilon$ is the rate of Oja's learning rule.
While the network is being updated via back-propagation, we also update the importance parameters via Oja's learning rule in an online manner, starting from $\gamma_{ji}^l=0$.
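For a fully connected layer with importance matrix $\gamma^l$ of shape (fan-in, fan-out), one online Oja update from the contrastive MWPs can be sketched as follows (illustrative only; variable names are our own):

```python
import numpy as np

def update_importance(gamma, pc_pre, pc_post, eps=0.1):
    """One Oja's-rule update of the importance matrix gamma^l_{ji} from the
    contrastive MWPs of the pre- (f^{l-1}_j) and post- (f^l_i) neurons.
    The decay term -Pc(post)^2 * gamma keeps gamma bounded, unlike plain
    Hebbian learning."""
    return gamma + eps * (np.outer(pc_pre, pc_post) - (pc_post ** 2) * gamma)
```

Starting from $\gamma^l_{ji}=0$, this update is applied on every training example, in parallel with the back-propagation step on the weights.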
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{cEBP.png}
\caption{The visualization of c-EB at the input layer for different top-down signals. The first column shows the input image, the second column shows the attentional map generated by c-EB for the predicted label (i.e., with highest activity after the softmax layer), and the third column is for the runner-up predicted label.}
\label{fig:cEBP}
\end{figure}
\subsection{Updated loss}
Following the existing work for overcoming catastrophic forgetting \cite{kirkpatrick2017overcoming,zenke2017continual} we regularize the loss function with the computed synaptic importance parameters as in Equation \eqref{eq:updated_loss}, i.e.,
$$\mathcal{L}(\theta)=\mathcal{L}_B(\theta)+\lambda\sum_{k} \gamma_k(\theta_k -\theta^\star_{A,k})^2 $$
We further note that, as opposed to the work of \cite{kirkpatrick2017overcoming} and similar to the work of \cite{zenke2017continual,aljundi2018memory}, the importance parameters in our work can be calculated in an online fashion. Therefore, there is no need for an explicit definition of tasks, and our method can adaptively track changes in the training data. However, in order to be able to compare our results with those of Elastic Weight Consolidation (EWC), we use the exact loss function of that work \cite{kirkpatrick2017overcoming}. As can be seen in Figure \ref{fig:cEBP}, c-EB is capable of identifying the parts of the input image (i.e., neurons in layer 0) that correspond to the top-down signal.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{image001.png}
\caption{Performance of our algorithm on the Permuted MNIST tasks in comparison with and without c-EB.}
\label{fig:permMNSITAcc}
\end{figure}
\section{Experiments}
\subsection{Permuted MNIST}
We test our algorithm on the benchmark permuted MNIST task, with five sequential tasks. The first task is the original MNIST problem, while the subsequent tasks are obtained by fixed but random permutations of the digit images (see Figure \ref{fig:permMNSITAcc}, top row). We start by learning the first task, i.e., the original MNIST problem, with our attention-based selectively plastic multilayer perceptron. After training on the original MNIST and achieving saturated accuracy ($\sim 98\%$), we test our c-EB top-down attention. We first add Gaussian noise to the MNIST test images and calculate the attention maps at the input layer, setting the top-down signal to be: 1) the predicted label (i.e., the neuron with the highest activation after the softmax layer), and 2) the runner-up predicted label (i.e., the neuron with the second highest activation). The inputs and their corresponding attention maps for three sample digits are shown in Figure \ref{fig:cEBP}.
The results on learning the consecutive permuted MNIST problems are shown in Figure \ref{fig:permMNSITAcc}. We followed the work of \cite{kirkpatrick2017overcoming} and used a Multi-Layer Perceptron (MLP) with two hidden layers of size $400$ each. We used Rectified Linear Units (ReLUs) as nonlinear activation functions and the ADAM optimizer with learning rate $lr=1e-3$ for optimizing the networks. We report the average training loss as well as the average testing accuracy over 10 runs for all five tasks, for a vanilla network, i.e., a uniformly plastic neural network without selective plasticity, and for our proposed method. It can be seen that the vanilla network suffers from catastrophic forgetting, while our attention-based selective plasticity enables the network to preserve its important synapses.
Furthermore, we compared our performance to that of EWC \cite{kirkpatrick2017overcoming} and Synaptic Intelligence \cite{zenke2017continual}. The network architecture, optimizer, learning rate, and batch size (100) were kept the same for all methods, and we used the optimal hyper-parameters reported in these papers. We emphasize that we performed little to no hyper-parameter tuning for our algorithm. The comparison between the methods is shown in Figure \ref{fig:permMNIST}.
Each plot in Figure \ref{fig:permMNIST} shows the classification accuracy for task $t$ after learning tasks $t,t+1,...,T=5$. An ideal system should provide high accuracy for task $t$ and maintain it when learning the subsequent tasks. As can be seen, our method performs on par with the state-of-the-art algorithms, and we suspect that better hyper-parameter tuning would further boost its results.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Permuted_MNIST.png}
\caption{Comparison between our method, EWC \cite{kirkpatrick2017overcoming}, and Synaptic Intelligence \cite{zenke2017continual} (where $c$ is a hyper-parameter for Synaptic Intelligence). As can be seen our method performs on par with these algorithms. We emphasize that we spent little to no efforts on hyperparameter tuning for our algorithm. }
\label{fig:permMNIST}
\end{figure}
\subsection{Split MNIST}
For the Split MNIST tasks we learn five consecutive pairs of digits (e.g., $[0,5],[1,6],[2,7],[3,8],[4,9]$), where the pairs are randomly chosen. The Split MNIST task is a more realistic lifelong learning scenario compared to the Permuted MNIST task: in Split MNIST, knowledge from the previously learned tasks can be transferred to learning future tasks. Figure \ref{fig:splitMNSITAcc} shows the performance of our algorithm on the Split MNIST tasks and compares it to a vanilla neural network with the same architecture.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Split_MNIST_CatastrophicForgetting.png}
\caption{Performance of our algorithm on the Split MNIST tasks in comparison with and without c-EB.}
\label{fig:splitMNSITAcc}
\end{figure}
Finally, we compare our work with the Synaptic Intelligence \cite{zenke2017continual} on the Split MNIST tasks. The results of this comparison are shown in Figure \ref{fig:splitMNIST}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Split_MNIST.png}
\caption{Comparison between our method and Synaptic Intelligence \cite{zenke2017continual} on the Split MNIST tasks. As can be seen our method performs on par with synaptic intelligence. We emphasize that we spent little to no efforts on hyper-parameter tuning for our algorithm. }
\label{fig:splitMNIST}
\end{figure}
\section{Conclusion}
In this paper we introduced a biologically inspired mechanism for overcoming catastrophic forgetting. We propose a top-down neuromodulatory mechanism for identifying the neurons relevant to the task. We then attach an importance parameter to every synapse in the neural network and update this importance via Oja's learning rule applied to the importance of the pre- and post-synaptic neurons. This is a novel online method for synaptic consolidation in neural networks that preserves previously acquired knowledge. While our results were demonstrated for sequential acquisition of classification tasks, we believe the biological principle of top-down attention driven by the cholinergic neuromodulatory system would also be applicable to deep reinforcement learning networks. Future work will also look at other ways of implementing top-down attention, such as Grad-CAM \cite{selvaraju2017grad}, and establish the generality of the principle.
\bibliographystyle{named}
\section{INTRODUCTION}
Objects are ubiquitous in our everyday lives. Every common activity, such as cooking or cleaning, implies the capability of understanding and operating a set of objects to successfully complete a task. In order for a Service Robot (SR) to operate in human environments as well, the ability to recognize objects is a basic requirement. Object recognition is rarely a self-contained task, but it is rather a proxy for a large variety of high-level tasks, such as navigation, manipulation and user interaction, that heavily rely on an accurate description of the visual scene.
The advent of deep learning has had a huge impact on the object recognition task after decades of plateaued results. The progressive design of deeper and more sophisticated networks, starting from AlexNet~\cite{alexnet}, VGG~\cite{vgg}, Inception~\cite{inception}~\cite{inception-v2-3}~\cite{inception-v4} to ResNet~\cite{resnet}~\cite{wide_resnet}, has led to outstanding results in competitions such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)~\cite{ilsvrc}. Arguably the primary driving force of the deep learning revolution is the availability of large scale datasets. The majority of these datasets, such as the popular ImageNet~\cite{imagenet}, Pascal VOC~\cite{pascal_voc}, and Caltech-256~\cite{caltech-256}, are composed of images collected through Web search engines. However, the representation of the visual world provided by these datasets implies a bias from the observer (a human photographer) and the Web search engines \cite{unbiased} that are incoherent with the representation perceived by, for example, a SR. It is then legitimate to ask whether the features learned from Web-based datasets can generalize well to robotic data, despite the aforementioned bias.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{v4core.png}
\caption{Glimpse of the data collection process with the robotic platform (left) acquiring data of a cluttered scene populated with everyday objects.}
\label{fig: v4core}
\end{figure}
In recent years, computer vision has progressed enormously due to the establishment of standard references and benchmarks, e.g. ImageNet, which have enabled consistent comparison and development of new methods. Unfortunately, the robot vision community has not experienced the same progress due to the lack of accurate testbeds for validating novel algorithms. In the past years, the RGB-D Object Dataset~(ROD)~\cite{wrgb-d} has become the ``de facto'' standard in the robotic community for the object classification task \cite{obj_rec} \cite{conv_rec} \cite{depthnet}. Despite its well-deserved fame, this dataset has been acquired in a very constrained setting and does not present all the challenges that a robot faces in a real-life deployment. In order to fill the existing gap in the robot vision community between research benchmark and real-life application, we introduce a large-scale, multi-view object dataset collected with an RGB-D camera mounted on a mobile robot (see Figure~\ref{fig: v4core}), called Autonomous Robot Indoor Dataset~(ARID). The data are autonomously acquired by a robot patrolling in a defined human environment. The dataset presents 6,000+ RGB-D scene images and 120,000+ 2D bounding boxes for 153 common everyday objects appearing in the scenes. Analogously to ROD, the object instances are organized into 51 categories, each containing three different object instances. In contrast, our dataset is designed to include real-world characteristics such as variation in lighting conditions, object scale and background as well as occlusion and clutter. To our knowledge, no other robotic dataset embedding all the challenges of real-life data can be found in the literature. All the collected data, together with the information needed to replicate the experiments, is publicly available at \url{https://www.acin.tuwien.ac.at/en/vision-for-robotics/software-tools/autonomous-robot-indoor-dataset/}.
In addition to introducing a new dataset, we inspect the effectiveness of features learned from the Web domain on robotic data and compare them with the features learned from the RGB-D domain. This comparison is made possible by collecting a second dataset containing the images downloaded from the Web representing the same categories as ARID. The acquisition of this Web-based dataset is performed by using query expansion strategies from \cite{web_download} on different search engines followed by a manual cleaning to remove noisy images. Exhaustive experiments with different deep convolutional networks demonstrate that, despite the greater similarity between the RGB-D and the robotic domain, models learned from Web images are more effective. Finally, the best performing network, ResNet-50, is used to study the classification results on subsets of ARID representing three problematic characteristics of robotic data: small images, occlusion and clutter. The experiments point out small images as the main challenge of robotic data, indicating a path to follow for the resolution of the object classification problem for robotics.
In summary, our contributions are the following:
\begin{itemize}
\item a new RGB-D object dataset, collected in-the-wild with a mobile robot, that provides a ``litmus test'' for the validation of object recognition algorithms developed for robotic applications,
\item a detailed analysis of publicly available RGB-D datasets from a robotic perspective,
\item comprehensive experiments with several well-established deep convolutional networks, comparing the effectiveness of data coming from the Web and RGB-D domain in generating features for object classification in robotics, and
\item a study of the main factors responsible for the difficulties faced by classifiers on robotic data.
\end{itemize}
The rest of the paper is organized as follows: the next section positions our approach compared to related work, section~\ref{sec: arid} introduces the proposed robotic dataset, section~\ref{sec: experiments} presents the experimental results and section~\ref{sec: conclusions} draws the conclusions.
\begin{table*}[t]
\captionof{table}{Summary of the characteristics of different RGB-D datasets with focus on variation in lighting condition, variation in scale, multiple views, occlusion, clutter, variation in background and whether or not the data are collected directly from a robot. \textit{Not Available} (NA) indicates that the dataset focuses on object instances rather than categories and the number of categories is unknown.}
\label{tab: datasets}
\centering
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{@{}*9l@{}}
\toprule[1.5pt]
\multicolumn{2}{c}{\head{Dataset}}
& \multicolumn{7}{c}{\head{Characteristic}}\\
\head{Name} & \head{\# classes} &
\head{light var.} & \head{scale var.} & \head{multiview} & \head{occlusion} & \head{clutter} & \head{bkg var.} & \head{robot}\\
\cmidrule(l){1-2}\cmidrule(l){3-9}
RGB-D Object Dataset~\cite{wrgb-d} & 51 & %
& & \checkmark & & & & \\
RGB-D Scene Dataset~\cite{wrgb-d_scene} & 5 & %
& \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \\
BigBIRD~\cite{bigbird} & NA & %
& & \checkmark & & & & \\
Active Vision Dataset~\cite{activevision} & NA &
\checkmark & \checkmark & \checkmark & \checkmark & & \checkmark & \checkmark \\
JHUIT-50~\cite{jhuit-50} & NA & %
& & \checkmark & & & & \\
JHUScene-50~\cite{jhuscene-50} & NA & %
& & \checkmark & \checkmark & \checkmark & & \\
iCubWorld Transf.~\cite{icubwt} & 15 & %
\checkmark & \checkmark & \checkmark & & & \checkmark & \checkmark \\
\textbf{Autonomous Robot Indoor Dataset (ARID)} & 51 &
\checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\
\bottomrule[1.5pt]
\end{tabular}
\egroup
\end{table*}
\section{RELATED WORK}
\label{sec: related_work}
In the following, we first analyze the characteristics of existing RGB-D datasets from a robotic perspective. Then, we review related works on the transferability of learned features across different domains by focusing on the Web and RGB-D domains.
\subsection{Datasets}
\label{subsec: rw_datasets}
During the last decade, a variety of datasets have been made publicly available for research. With the popularization of deep neural networks, which require a considerable amount of data for training, the race for large-scale datasets has become more intense. While Web images exist in abundance, robotic images are difficult to obtain because platforms are expensive and data acquisition is time-consuming. Nevertheless, the robotic community has produced some interesting datasets. In particular, for indoor objects, the most relevant datasets are JHUIT-50, BigBIRD, iCubWorld Transformation, ROD, and the Active Vision Dataset.
ROD~\cite{wrgb-d} contains 300 objects from 51 categories spanning from fruit and vegetables to tools and containers. Despite the availability of multiple views, each object is presented in isolation and variation in lighting condition, object scale and background are missing. The corresponding scene dataset, the RGB-D Scene Dataset~\cite{wrgb-d_scene}, presents multiple objects in the same scene, but considers only five object categories.
BigBIRD~\cite{bigbird} contains 125 common human-made objects, with particular focus on boxes and bottles. This dataset is specifically designed for instance recognition, and the selected objects belong to very few categories. In addition, occlusion, clutter, scale and light variation are not captured. A more recent dataset, the Active Vision Dataset~\cite{activevision}, uses a subset of 33 objects from BigBIRD in densely acquired scenes. The data are directly acquired by a robot and embed most of the nuisances typical of real-life data. Nevertheless, the limited number of considered objects makes this dataset unsuitable for classification.
JHUIT-50~\cite{jhuit-50} contains 50 industrial objects and hand tools used in mechanical operations. The objects are captured in isolation and from multiple viewpoints. Due to its limited scope, this dataset is more suitable for instance recognition than for classification. In addition, nuisances such as occlusion, clutter, scale and light variation are not captured. The corresponding scene dataset, JHUScene-50~\cite{jhuscene-50}, includes occlusion and clutter, but limits the number of considered objects to 10.
iCubWorld Transformation~\cite{icubwt} contains 150 common indoor objects from 15 different categories. The data are collected directly with the iCub humanoid robot~\cite{icub}. This dataset addresses specifically variance in the background as well as the variance in scale and rotation of the object. Nevertheless, each object is presented in isolation, avoiding problems caused by cluttered scenes.
Despite the high quality that characterizes each of these datasets, their constrained setting makes them incoherent with real-life data. In addition, only the Active Vision Dataset and the iCubWorld Transformation present data collected directly from a robot. Table~\ref{tab: datasets} presents a summary of the characteristics of the datasets discussed above and highlights that, unlike the other datasets, ARID embeds all these characteristics.
\subsection{Transfer Learning}
Deep convolutional networks are currently dominating several computer vision tasks. One of the key factors contributing to their success is the transferability of the produced deep representation for a variety of visual recognition tasks. The deep representations, also called features, of these networks have been empirically proven to be superior to traditional hand-crafted features, e.g.~\cite{tl1}~\cite{tl2}~\cite{tl3}. In order to take advantage of the generalization power of deep models, the networks need a large amount of training data. For this reason, large-scale datasets, such as ImageNet, with millions of samples, have been extensively used across different domains. It is common practice to further adapt the deep representation learned from a large dataset to the specific domain of interest through fine-tuning \cite{ft1} \cite{ft2}, i.e.,~by refining the representation using annotated data from the novel task in a subsequent training stage.
The effectiveness of using features learned from the Web domain in the robotic domain has been previously studied~\cite{web_download}~\cite{teaching_icub}. In Massouh et al.~\cite{web_download}, features learned from the Web domain are tested on the RGB-D Object Dataset, while in Pasquale et al.~\cite{teaching_icub}, features learned from ImageNet are used to train a classifier on the iCubWorld28~\cite{teaching_icub}, a former version of the iCubWorld Transformation. Although both works exhibit interesting results, we claim that, due to the intrinsic constraints discussed in section \ref{subsec: rw_datasets}, the utilized datasets cannot be considered as reliable representatives of real-life robotic data. In addition, only AlexNet and Inception are used to produce the analyzed features. Our work exhaustively benchmarks deep models obtained with five different networks against a robotic dataset collected in-the-wild.
\section{AUTONOMOUS ROBOT INDOOR DATASET}
\label{sec: arid}
In the following, we describe the characteristics of the proposed robotic dataset by highlighting its most significant peculiarities. In addition, we unveil the protocol used for the autonomous data collection and the details of the provided annotation.
\subsection{Scope and Motivation}
The Autonomous Robot Indoor Dataset contains RGB and depth images of daily life objects belonging to 51 categories. Each object category contains three instances, for a total of 153 physical objects, and it coincides with one of the 51 WordNet leaf nodes used to determine the categories of a very well-known dataset, the RGB-D Object Dataset. In other words, there is a complete overlap between the categories represented in the two datasets, ARID and ROD. Figure~\ref{fig: objects} gives a concrete idea of the dataset's content by showing one sample object per category.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{objects.png}
\caption{Sample of objects used in Autonomous Robot Indoor Dataset. Each object shown belongs to a different category.}
\label{fig: objects}
\end{figure*}
Since we are mostly interested in autonomous assistive robots operating in indoor environments, the object classes considered in ROD are a valid representative. These objects consist of a large variety of food items, such as fruit, vegetables and packed goods, and human-made objects common to homes and offices. Nevertheless, our goal is not to extend and contribute to ROD, but rather fill the gap between research-oriented datasets and real-life data by introducing a robotic dataset collected in-the-wild. While ROD contains images collected in a constrained setting (fixed camera-object distance, static background, invariant light conditions), our dataset includes all the nuisances of robotic data by acquiring it directly with a mobile robot navigating autonomously in an indoor environment. More precisely, the following challenges are taken into account:
\begin{itemize}
\item variation of lighting conditions,
\item object scale variation,
\item significant changes in the viewpoint,
\item partial view and occlusion,
\item clutter, and
\item background variation.
\end{itemize}
We hope that this work provides the robot vision community with a tool to advance the visual capabilities of robots in order to accelerate their integration in our lives.
\subsection{Data Acquisition Protocol}
In order to avoid a human bias in data acquisition and to observe the objects from the robot's perspective, a mobile robot with an RGB-D camera is used. In particular, the mobile robotic platform is powered by a Pioneer P3-DX with a customized structure that supports an Asus Xtion Pro camera mounted on a pan/tilt unit (see figure~\ref{fig: v4core}).
The data collection is performed in 10 different sessions conducted during different days and at different times of the day: this allows a natural variation of the lighting conditions among the data. At each run, 30-31 objects are spread in the environment where the mobile robot patrols predefined waypoints. When a waypoint is reached, the camera scans the scene with a horizontal movement of the pan/tilt unit and acquires RGB and depth data, both with a resolution of 640x480 pixels and a frame rate of 30 Hz. The RGB and the depth frames are later synchronized based on their acquisition time and unmatched frames are discarded. Each session lasts for approximately one hour in which the robot continuously loops over four distinct waypoints. In order to guarantee the appropriate variability in terms of camera-object distance and object view, the objects are randomly moved in between two patrolling loops.
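As a concrete illustration of the time-based synchronization step, the following sketch pairs each RGB timestamp with the nearest depth timestamp and discards unmatched frames. The function name, tolerance, and timestamps are illustrative assumptions, not details of the actual acquisition software.

```python
# Hypothetical sketch of time-based RGB/depth frame matching: each RGB
# timestamp is paired with the closest unused depth timestamp, and pairs
# farther apart than a tolerance are discarded. All values are illustrative.

def synchronize(rgb_ts, depth_ts, tol=1.0 / 60):
    """Greedily pair each RGB timestamp with the nearest unused depth
    timestamp (both lists sorted); pairs farther apart than `tol` seconds
    are discarded."""
    pairs = []
    j = 0
    for t in rgb_ts:
        # advance j while the next depth frame is at least as close to t
        while j + 1 < len(depth_ts) and abs(depth_ts[j + 1] - t) <= abs(depth_ts[j] - t):
            j += 1
        if j < len(depth_ts) and abs(depth_ts[j] - t) <= tol:
            pairs.append((t, depth_ts[j]))
            j += 1  # each depth frame is used at most once
    return pairs
```

With a 30 Hz stream, a tolerance of about half a frame period keeps only frames that genuinely co-occur and drops the rest, mirroring the "unmatched frames are discarded" step above.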
\subsection{Annotation}
In order to discard similar frames, every fifth frame is chosen for annotation for a total of over 6,000 frames. For each frame, a bounding box annotation indicates the location and the label (at instance level) of every visible object for a total of over 120,000 2D bounding boxes for the whole dataset. A modified version of Sloth annotation tool \cite{sloth} is used for this purpose. In case of occlusion or partial view, if the object is still distinguishable, a bounding box is drawn around the visible part of the object. Figure~\ref{fig: frame} shows a sample frame, together with its bounding box annotation. Since the objects are captured in a realistic scenario rather than in isolation, the dataset is also suitable for object detection. In addition, the availability of object labels at instance level allows the dataset to be used for object classification as well as object identification (also referred to as instance recognition).
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{frame.png}
\caption{Example of an RGB-D frame from the Autonomous Robot Indoor Dataset with 2D bounding box annotation.}
\label{fig: frame}
\end{figure}
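The frame selection and per-object annotation described above could be organized along these lines; the record fields and labels are hypothetical and do not reflect the actual annotation format produced by the Sloth tool.

```python
# Illustrative record structure for the annotation protocol: every fifth
# synchronized frame is kept, and each visible object in a kept frame gets
# an instance-level label and a 2D bounding box. Field names are made up.

def select_frames(frame_ids, step=5):
    """Keep every `step`-th frame to discard near-duplicate views."""
    return frame_ids[::step]

def make_annotation(frame_id, boxes):
    """boxes: list of (instance_label, x, y, w, h) for visible objects;
    occluded objects get a box around their visible part only."""
    return {"frame": frame_id,
            "objects": [{"instance": lab, "bbox": (x, y, w, h)}
                        for (lab, x, y, w, h) in boxes]}
```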
\section{Experiments}
\label{sec: experiments}
We take advantage of the availability of ARID to disclose the characteristics of robotic data. In particular, we want to (i) analyze the transferability of features from the Web domain to the robotic domain (Section \ref{subsec: exp_features}) and (ii) study the characteristics of robotic data to identify the main source(s) of complication for classifying objects (Section \ref{subsec: exp_small}). In order to accomplish these goals, another dataset, called Web Object Dataset (WOD), is collected. WOD is composed of images downloaded from the Web representing objects from the same categories as ARID. The images are downloaded from multiple search engines (Google, Yahoo, Bing and Flickr) using the method proposed by Massouh et al.~\cite{web_download}. This method uses a concept expansion strategy by leveraging visual and natural language processing information to minimize the noise while maximizing the visual variability. The remaining noise is then manually removed, leaving a total of 50,547 samples.
\subsection{Baseline and Features Transferability}
\label{subsec: exp_features}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{same_ds.png}
\caption{Accuracy of different deep convolutional networks on three datasets: Autonomous Robot Indoor Dataset (ARID), RGB-D Object Dataset (ROD)~\cite{wrgb-d} and Web Object Dataset (WOD). The results are obtained by training and testing on different splits of the same dataset.}
\label{fig: same_ds}
\end{figure}
The limited availability of robotic data raises the question of whether data coming from a more accessible domain, the Web domain, can be effectively used instead of data from the RGB-D domain to learn features that are transferable to the robotic data. In particular, we want to compare the performance of well-known deep convolutional networks on robotic data (ARID), when trained on Web data (WOD) and on RGB-D data (ROD). In order to allow a fair evaluation, a subset of 40,000 samples from the ARID dataset is selected, such that all the involved datasets are approximately the same size. It is worth noticing that, since WOD does not contain depth information, only RGB data are considered for all datasets. For this benchmark, we employ some of the most utilized network architectures in the literature, CaffeNet\footnote{A slightly modified version of AlexNet in which the normalization is performed after the pooling.}, VGG-16, Inception v2, ResNet-18 and ResNet-50. All networks are pre-trained on ImageNet and then fine-tuned on the desired dataset, according to the guidelines provided in~\cite{ft1}~\cite{ft2}.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{test_arid.png}
\caption{Accuracy of different deep convolutional networks on Autonomous Robot Indoor Dataset (ARID). The results are obtained by training independently on ARID, RGB-D Object Dataset (ROD)~\cite{wrgb-d} and Web Object Dataset (WOD) and testing on ARID.}
\label{fig: test_arid}
\end{figure}
In order to provide a reference for the upcoming evaluations, we assess the performance of all considered networks for each of the three datasets (ARID, ROD, WOD) when training and test set come from the same dataset. For each dataset, multiple training/test splits are considered and the results are averaged to obtain the final classification accuracy. In particular, for ARID, each split uses a different object instance per class in the test set; for ROD, the first three splits indicated by the authors are used; and, for WOD, each split uses $25\%$ of the data in the test set. From the results displayed in figure~\ref{fig: same_ds}, it can be noticed that the different networks consistently obtain a higher accuracy on WOD. Unsurprisingly, ARID appears to be the most challenging dataset and all the networks achieve an accuracy much lower (on average, $\sim0.4$ lower) on ARID than on the other two datasets.
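A minimal sketch of how the ARID leave-one-instance-out splits could be generated, assuming three instances per category as in the dataset; class and instance names below are made up for illustration.

```python
# Sketch of leave-one-instance-out split generation: with three instances
# per class, split k places instance k of every class in the test set and
# the remaining instances in the training set. Labels are illustrative.

def instance_splits(instances_per_class, n_splits=3):
    """instances_per_class: dict mapping class name -> list of instance ids.
    Returns a list of (train, test) dicts, one pair per split."""
    splits = []
    for k in range(n_splits):
        test = {c: [ids[k % len(ids)]] for c, ids in instances_per_class.items()}
        train = {c: [i for i in ids if i not in test[c]]
                 for c, ids in instances_per_class.items()}
        splits.append((train, test))
    return splits
```

Averaging accuracy over the three resulting splits then yields the per-dataset numbers reported in figure~\ref{fig: same_ds}.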
\begin{table*}[t!]
\captionof{table}{Accuracy of multiple deep convolutional networks on different training/test combination of three datasets: Autonomous Robot Indoor Dataset (ARID), RGB-D Object Dataset (ROD)~\cite{wrgb-d} and Web Object Dataset (WOD). For each training/test set combination, the mean and maximum accuracy among the considered networks is shown.}
\label{tab: summary}
\centering
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{@{}*9l@{}}
\toprule[1.5pt]
\multicolumn{2}{c}{\head{Dataset}}
& \multicolumn{5}{c}{\head{Network}}
& \multicolumn{2}{c}{\head{Statistics}}\\
\head{Train on} & \head{Test on}
& \head{CaffeNet} & \head{VGG-16} & \head{Inception-v2} & \head{ResNet-18} & \head{ResNet-50}
& \head{Mean} & \head{Max} \\
\cmidrule(r){1-2}\cmidrule(l){3-7}\cmidrule(l){8-9}
ROD & ROD &
0.832 & 0.889 & 0.897 & 0.864 & 0.876 &
\rmfamily 0.872 & 0.897\\
ROD & ARID &
0.291 & 0.270 & 0.266 & 0.243 & 0.337 &
0.281 & 0.337\\
WOD & WOD &
0.924 & 0.942 & 0.914 & 0.953 & 0.956 &
0.938 & 0.956\\
WOD & ARID &
0.268 & 0.297 & 0.282 & 0.282 & 0.388 &
0.303 & 0.388\\
ARID & ARID &
0.441 & 0.458 & 0.481 & 0.458 & 0.540 &
\rmfamily 0.476 & 0.540\\
\bottomrule[1.5pt]
\end{tabular}
\egroup
\end{table*}
The networks fine-tuned on ROD and WOD are then tested on ARID to evaluate the transferability of the learned features to the robotic data. From the results displayed in figure~\ref{fig: test_arid}, it can be noticed that, as expected, all the networks undergo a performance drop when the training and test set belong to different datasets, compared to the case in which both sets belong to the same dataset (see figure~\ref{fig: same_ds}). The domain shift responsible for this negative inflection of the classification results occurs because the data composing training and test set are drawn from different distributions \cite{domain_adaptation}. However, features learned from Web data (WOD) consistently allow a higher classification accuracy (with improvements up to $0.05$) on robotic data (ARID) than features learned from RGB-D data (ROD) on all networks, with the exception of CaffeNet. The key factor to interpret this phenomenon is the greater variability of Web images: while ROD contains a limited number of instances per class, with some classes containing only 3 instances, in WOD each sample potentially represents a different object instance. Very deep networks, like ResNet-50, with high capacity and generalization power, take advantage of this richness in information to generate better models. This is further highlighted by the difference between the accuracy of ResNet-50 and the mean accuracy of all tested networks when training with WOD (see table~\ref{tab: summary}). The results of this experiment have a twofold implication: (i) despite the greater visual affinity between the RGB-D and the robotic domain, data from the Web domain generate more effective models for object classification in robotic applications, and (ii) the currently well-established deep convolutional networks, when used in their plain stand-alone form and without any prior, do not perform satisfactorily for object classification in robotics.
\subsection{Robotic Challenges}
\label{subsec: exp_small}
In order to better understand which characteristics of robotic data negatively influence the results of the object classification task, we independently analyze three key variables: image dimension, occlusion and clutter\footnote{Since ARID is collected in-the-wild, by definition, the data acquisition is performed in an unconstrained manner. For this reason, rigorously isolating other characteristics of the data, such as light variation, background variation and different object view is prohibitive.}. Image dimension is a variable related to the camera-object distance: when the camera is not near enough to clearly capture the object, the object occupies only a few pixels in the whole frame, making the classification task more challenging. For obvious reasons, this problem is emphasized when dealing with small and/or elongated objects, such as dry batteries or glue sticks. Occlusion occurs when a portion of an object is hidden by another object or when only part of the object enters the field of view. Since distinctive characteristics of the object might be hidden, occlusion makes the classification task considerably more challenging. Clutter refers to the presence of other objects in the vicinity of the considered object. The simultaneous presence of multiple objects may interfere with the classification task.
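To illustrate how the image-dimension variable can be isolated, the following sketch partitions annotated object crops at the median bounding-box area, so that the half with the smallest areas forms a "small image" subset; the sample records are invented for demonstration.

```python
# Illustrative partition of annotated object crops by bounding-box area.
# The sample records below are made up; real entries would come from the
# dataset's bounding-box annotation.

def split_by_area(samples):
    """samples: list of (label, width, height) bounding boxes.
    Returns (small_half, large_half) after ranking by pixel area."""
    ranked = sorted(samples, key=lambda s: s[1] * s[2])
    mid = len(ranked) // 2
    return ranked[:mid], ranked[mid:]
```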
\begin{table}[t]
\captionof{table}{Accuracy of ResNet-50, trained on Web Object Dataset and on its augmented version (++), for three subsets of Autonomous Robot Indoor Dataset containing small images, occluded objects and clutters. The model is also tested on the whole dataset to show the overall impact of data augmentation.}
\label{tab: challenges}
\centering
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{m{2.5cm} m{1.5cm} m{1.5cm}}
\toprule[1.5pt]
\multicolumn{1}{c}{\head{Challenge}}
& \multicolumn{2}{c}{\head{Accuracy}}\\
\head{}
& \head{Top-1} & \head{Top-5}\\
\cmidrule(r){1-1}\cmidrule(l){2-3}
Small image &
0.230 & 0.511\\
Occlusion &
0.273 & 0.508\\
Clutter &
0.558 & 0.777\\
Small image ++ &
0.240 & 0.513\\
Occlusion ++ &
0.318 & 0.577\\
Clutter ++ &
0.543 & 0.802\\\hline
All ++ &
0.441 & 0.702\\
\bottomrule[1.5pt]
\end{tabular}
\egroup
\end{table}
\begin{figure*}[th]
\centering
\includegraphics[width=0.9\textwidth]{per_class_accuracy.png}
\caption{Accuracy of each of the 51 classes of the Autonomous Robot Indoor Dataset obtained with a ResNet-50 trained on the augmented Web Object Dataset.}
\label{fig: per_class}
\end{figure*}
Table~\ref{tab: challenges} shows the classification results of the best-performing model of Section \ref{subsec: exp_features} (ResNet-50 trained on WOD) on three subsets of ARID, each containing samples with the characteristics discussed above. The set of small images is obtained by taking the half of ARID containing the images with the smallest area, while the occlusion and clutter sets have been manually selected. It is worth noticing that the three sets are mutually exclusive in order to avoid interference between the analyzed variables. The occlusion and, especially, the small image sets exhibit low accuracy, thus negatively affecting the classification score of the whole dataset. It is possible to improve the classification by performing problem-specific data augmentation during the training phase. In particular, we augmented WOD by resizing the original samples to different scales and by randomly adding rectangular noise patches to simulate occlusion. These two strategies are commonly used to encourage the network to learn scale-/occlusion-invariant models \cite{inception} \cite{depthnet} \cite{dilated}. Table~\ref{tab: challenges} also shows the performances of ResNet-50 trained with this augmented WOD on the three subsets and on the whole ARID dataset. Even though the occlusion set benefits from this strategy (and so does the whole dataset), the classification of small images does not exhibit significant improvement. The difficulty of classifying small images is further confirmed by the results in figure~\ref{fig: per_class}, where classes representing small or elongated objects have the lowest accuracy.
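The occlusion-oriented augmentation mentioned above (random rectangular noise patches) can be sketched as follows on a plain grayscale image; the patch-size bounds and noise distribution are assumptions for illustration, not the exact settings used in the experiments.

```python
import random

# Sketch of occlusion-style augmentation: a rectangular patch of random
# noise is pasted at a random location. The image is a plain list-of-lists
# of grayscale values; patch-size bounds are illustrative assumptions.

def add_noise_patch(image, min_frac=0.1, max_frac=0.3, rng=random):
    """Return a copy of `image` with one random noise rectangle pasted in."""
    h, w = len(image), len(image[0])
    ph = max(1, int(h * rng.uniform(min_frac, max_frac)))
    pw = max(1, int(w * rng.uniform(min_frac, max_frac)))
    top = rng.randrange(h - ph + 1)
    left = rng.randrange(w - pw + 1)
    out = [row[:] for row in image]  # do not modify the original sample
    for r in range(top, top + ph):
        for c in range(left, left + pw):
            out[r][c] = rng.randrange(1, 256)  # uniform noise pixel
    return out
```

Applying this (together with multi-scale resizing) to each training sample encourages occlusion-invariant features, as in the random-erasing strategies cited above.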
\section{Discussion and Conclusion}
\label{sec: conclusions}
In this paper, we have presented ARID: a large-scale, multi-view, RGB-D object dataset collected with a mobile robot in-the-wild. This dataset is designed to capture the challenges a robot faces when deployed in an indoor environment and fills the current gap in the robot vision community between research-oriented datasets and real-life data. Furthermore, with an extensive comparative study, we have shown that it is possible to overcome the complication of collecting a large amount of robotic data for training data-hungry deep convolutional networks by using images downloaded from the Web. We have found that, despite being relatively easy to obtain, Web-based data allow the generation of more effective deep models than the RGB-D counterpart for the classification of robotic images. Nevertheless, object classification remains a challenging task in robotics and current algorithms present results that are insufficient for a successful integration of robotic systems in our homes. In order to shed light on the difficulties of this task, we have analyzed the effects of specific factors, such as object dimension, occlusion and clutter, on the performance. Results indicate that clutter is rather a secondary problem: occlusions and, especially, small objects more seriously degrade the classification accuracy. These observations suggest a research path in which visual tasks for robotic applications are tackled through methods designed to cope with domain-specific challenges. ARID is a valuable resource to pursue this goal and provides an important testbed for the robot vision community. In addition, the dataset may also be used to explore other aspects of robotic data, such as the integration of RGB and depth information.
Our dataset is available for download at \url{https://www.acin.tuwien.ac.at/en/vision-for-robotics/software-tools/autonomous-robot-indoor-dataset/}.
\balance
\section*{ACKNOWLEDGMENT}
This work has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 676157, project ACROSSING, by the ERC grant 637076 - RoboExNovo (B.C.), and the CHIST-ERA project ALOOF (B.C.). The authors would like to thank Victor Le\'{o}n, Mirco Planamente and Silvia Bucci for their help during the data collection and annotation process. The authors are also grateful to Tim Patten for the support in finalizing the work and Georg Halmetschlager for the help in the camera calibration process.
\bibliographystyle{IEEEtran}
Biological systems generally contain a large number of degrees of freedom.
Cells contain a huge variety of components and grow as a result of their complex reaction dynamics.
Even proteins, which are relevant to biological function, consist of a large number of units.
Currently, several methods have been developed to extract such high-dimensional data from cells, including transcriptome, proteome, and metabolome analyses to measure the abundances of chemicals within a cell \cite{Matsumoto2013GrowthBacteria,Schmidt2016TheProteome,Marguerat2012QuantitativeCells}.
However, in spite of advances in these omics measurements, extracting biologically important information from such high-dimensional data remains difficult.
Biologically important information, such as cell growth rate, capacity to adapt to novel environmental conditions, and survivability under stressful conditions, is often expressed in terms of a few variables or degrees of freedom.
If a few relevant variables could be extracted from many intracellular variables, it may be possible to bridge high-dimensional omics data with biologically essential information.
Of course, it could be too optimistic to assume that such a reduction to a few variables is possible for all cellular states, as these include dynamically changing states during differentiation and dormant states upon nutrient depletion.
Instead, by restricting our interest to cells growing while preserving their intracellular compositions under sufficient nutrient supplies, we may be able to achieve the desired dimension reduction.
Indeed, in cellular states with such steady exponential growth, several laws represented by a few variables or parameters have been uncovered.
These include the classic laws by Monod \cite{Monod1949} and Pirt \cite{Pirt1965,Pirt1982}, whereas Scott and Hwa recently unveiled proportionality between changes in growth rate and ribosome abundance \cite{Scott2010,Scott2011,Scott2014} following the observations of Schaechter et al. \cite{Schaechter1958}.
Furthermore, by noting that all components are equally diluted by steady cell growth, common proportionality in the changes of chemical concentrations across all components can be assumed, as has been experimentally verified by transcriptome and proteome analysis over thousands of mRNA and protein species.
Such analyses have shown that the common proportionality coefficient in gene expression changes agrees with that in cell growth rate \cite{Kaneko2015,Furusawa2015b,Furusawa2018}.
Similar relationships have also been recently uncovered in laboratory evolution of bacteria \cite{Furusawa2015b}.
Changes in gene expression induced by evolution and adaptation to environmental changes are highly correlated across thousands of genes.
The common proportionality of expression changes upon adaptation and evolution suggests that even though cells involve a great number of components, adaptive changes in cellular states are restricted to a lower-dimensional subspace.
Cell model simulations with stochastic catalytic reaction dynamics have shown that adaptive changes in chemical concentrations after evolution are restricted to the one-dimensional subspace of chemical composition \cite{Furusawa2015b, Furusawa2018}.
Furthermore, repeated bacterial evolution experiments have suggested that changes in chemical concentrations follow the same low-dimensional paths despite differences in genetic changes~\cite{Horinouchi2017PredictionMutations, Horinouchi2015b}.
The hypothesis that evolution to increase fitness leads to this dimension reduction seen in phenotypic change explains the experimental results of common proportionality.
However, how this dimension reduction to a few degrees of freedom occurs through evolution is unclear.
Can the reduction be formulated explicitly in terms of dynamical systems?
Note that the dimension reduction suggests that phenotypic variation in a one- or few-dimensional subspace is much larger than that in other dimensions, or, in other words, that the relaxation process along the subspace is much slower than in the other dimensions.
If this is the case, can one characterize the dimension reduction of phenotypes as a separation of slow modes?
Last, but not least, can the convergence observed in phenotypic evolution mentioned above be explained in terms of the dimension reduction of phenotypic changes?
In the present paper, we address these questions by formulating the dimension reduction as an emerging property of the relaxation spectrum to a steady state as a result of evolution.
By computing the Jacobian matrix for the relaxation dynamics to the steady state, we demonstrate that one singular value is separated from others and trends closer to zero through evolution.
We then discuss how environmental switches may influence this dimension reduction.
The organization of the present paper is as follows.
In Sec.\ref{sec:theory_of_dimension_reduction}, we describe the formulation of the dimension reduction of phenotypic changes in terms of dynamical systems theory; the dimension reduction is formulated as the separation of the larger singular values of the inverse Jacobian matrix in the relaxation dynamics of the cellular state.
In Sec.\ref{sec:model}, we describe the adopted cell model.
It consists of a thousand components whose concentrations change through catalytic reaction dynamics.
Evolution of the network is introduced so that cellular fitness as measured growth rate increases.
In Sec.\ref{sec:evolution_from_randomly_generated_genotype}, through the numerical evolution of the cellular reaction network, we demonstrate how the dimension reduction emerges through evolution in a thousand-dimensional space.
By computing the singular values of the inverse Jacobian matrix for the relaxation dynamics of a cellular state, we demonstrate that the largest singular value is separated from all others.
In the following sections, adaptive evolution to novel or fluctuating environments is discussed in relation to the dimension reduction.
In Sec.\ref{sec:evolution_from_evolved_genotype}, by taking cells that have evolved to adapt to one environment, we study evolution to a novel environment and how the dimension reduction already shaped by previous evolution provides a constraint on evolution in the novel environment.
Interestingly, this constraint can accelerate evolution under novel conditions.
Sec.\ref{sec:convergence_of_phenotypic_evolutional_pathways} is devoted to explaining phenotypic convergence during evolution in a novel environment.
In Sec.\ref{sec:evolution_under_fluctuating environment}, evolution in fluctuating environments is studied to show that the dimension reduction is valid under such conditions.
Sec.\ref{sec:discussion} is devoted to the summary and discussion, where the relevance of the dimension reduction to biology is discussed.
\section{MATHEMATICAL DESCRIPTION OF DIMENSION REDUCTION}
\label{sec:theory_of_dimension_reduction}
In this section, we present a general formulation of the response of the phenotypic state $\boldsymbol{x^*}$ upon perturbation.
It is reformulated from Ref.\cite{Furusawa2018} and is valid not only for the cell model adopted in the present paper, but also generally applicable to phenotypes given by the fixed point of any dynamical systems.
Consider the following $N$-dimensional dynamical system;
\begin{equation}
\dot{\boldsymbol{x}}=\boldsymbol{f}(\boldsymbol{g_0}, \boldsymbol{x}),
\label{eq:dynamics_theory}
\end{equation}
where $\boldsymbol{x}=(x_1, x_2,\dots, x_N)$ is an $N$-dimensional vector of the state variable for the dynamical system, and $\boldsymbol{g_0}$ is a set of parameters characterizing the dynamical system and corresponding to the genotype in our model.
The phenotype, on the other hand, is given by the fixed point of $\boldsymbol{x}$, which is denoted by $\boldsymbol{x}^*=(x^*_1, x^*_2,\dots, x^*_N)$, which is given by
\begin{equation}
\boldsymbol{f}(\boldsymbol{g_0},\boldsymbol{x}^*(\boldsymbol{g_0}))=0.
\label{eq:fix_theory}
\end{equation}
Now consider a slight parameter change, $\boldsymbol{g_0}\rightarrow \boldsymbol{g_0} + \boldsymbol{\delta g}$, due to mutation.
The fixed point for the modified parameter values is given by
\begin{equation}
\boldsymbol{f}(\boldsymbol{g_0}+\boldsymbol{\delta g},\boldsymbol{x}^*(\boldsymbol{g}+\boldsymbol{\delta g}))=0,
\end{equation}
from which one gets
\begin{align}
\boldsymbol{f}(\boldsymbol{g_0}+\boldsymbol{\delta g},&\boldsymbol{x}^*(\boldsymbol{g_0}+\boldsymbol{\delta g})) \nonumber\\
&\simeq\boldsymbol{f}(\boldsymbol{g_0},\boldsymbol{x}^*(\boldsymbol{g_0}))+\boldsymbol{R_g}\boldsymbol{\delta g}+\boldsymbol{J}\boldsymbol{\delta x}^*=0,
\label{eq:fix_delta_theory}
\end{align}
where $\boldsymbol{J}=(\partial\boldsymbol{f}/\partial\boldsymbol{x})_{\boldsymbol{x}=\boldsymbol{x}^*(\boldsymbol{g_0})}^{\boldsymbol{g}=\boldsymbol{g_0}}$ is the Jacobian matrix at the fixed point, and $\boldsymbol{R_g}=(\partial\boldsymbol{f}/\partial\boldsymbol{g})_{\boldsymbol{x}=\boldsymbol{x}^*(\boldsymbol{g_0})}^{\boldsymbol{g}=\boldsymbol{g_0}}$ is the ``susceptibility tensor'' against parameter change.
From Eq.(\ref{eq:fix_theory}) and Eq.(\ref{eq:fix_delta_theory}), the change of the fixed point by the parametric change $\boldsymbol{\delta g}$ is given by
\begin{equation}
\boldsymbol{\delta x}^* \simeq -\boldsymbol{L}\boldsymbol{R_g}\boldsymbol{\delta g},
\label{eq:relation1}
\end{equation}
where $\boldsymbol{L}=\boldsymbol{J}^{-1}$ is the inverse Jacobian matrix.
Moreover, by applying SVD (Singular Value Decomposition) to $\boldsymbol{L}$, it can be decomposed as $\boldsymbol{L}= \boldsymbol{V\Sigma U^{T}}$, where $\boldsymbol{V}=[\boldsymbol{v^{(1)}}, \boldsymbol{v^{(2)}},\dots,\boldsymbol{v^{(N)}}]$ and $\boldsymbol{U}=[\boldsymbol{u^{(1)}}, \boldsymbol{u^{(2)}},\dots,\boldsymbol{u^{(N)}}]$ are orthogonal matrices whose columns are the left- and right-singular vectors of $\boldsymbol{L}$, and $\boldsymbol{\Sigma}$ is a diagonal matrix whose elements are the singular values $\sigma_i\ (i=1,2,\dots,N)$.
Then, Eq.(\ref{eq:relation1}) can be written in the following form:
\begin{equation}
\boldsymbol{\delta x}^*\simeq-\left(\sum_{i=1}^N\sigma_i\boldsymbol{v^{(i)}}\boldsymbol{u^{(i)T}}\right)\boldsymbol{R_g}\boldsymbol{\delta g}.
\label{eq:relation2}
\end{equation}
Note that the stability of the fixed point implies $\sigma_i > 0$.
The timescale of relaxation to the fixed point along each mode $\boldsymbol{v^{(i)}}$ upon a perturbation is given by $\sigma_i$.
If the singular values are separated, so that $\sigma_1>\sigma_2>\dots>\sigma_l\gg\sigma_{l+1}>\dots$, then the state change upon perturbation is restricted to the $l$-dimensional subspace spanned by $\boldsymbol{v^{(1)}}, \boldsymbol{v^{(2)}},\dots,$ and $\boldsymbol{v^{(l)}}$:
\begin{equation}
\boldsymbol{\delta x}^* \simeq-\left(\sum_{i=1}^{l} \sigma_i \boldsymbol{v^{(i)}}\boldsymbol{u^{(i)T}}\right)\boldsymbol{R}_g\boldsymbol{\delta g}.
\label{eq:relation_comp}
\end{equation}
Note that, in some cases, there are some constraints on the dynamics of the model, for example, $\sum_{i=1}^Nx_i=1$ in the model to be described in the next section and adopted in the present paper.
There also exist trivial null exponents that correspond to the conserved quantities.
To remove such trivial directions, we compute $\boldsymbol{J}$ in the subspace except in the directions corresponding to the constraints.
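The sensitivity analysis above can be sketched numerically. The following is a minimal NumPy illustration, not the analysis code used for the simulations; the susceptibility matrix `R_g` and the parameter change `dg` are assumed inputs:

```python
import numpy as np

def response_to_mutation(J, R_g, dg, l=None):
    """Fixed-point shift dx* ~ -L R_g dg of Eq. (relation1), optionally
    truncated to the l dominant singular modes as in Eq. (relation_comp).
    J   : (N, N) Jacobian at the fixed point
    R_g : (N, M) susceptibility tensor df/dg at the fixed point
    dg  : (M,)   parameter (genotype) change
    """
    L = np.linalg.inv(J)                 # inverse Jacobian
    V, sigma, Ut = np.linalg.svd(L)      # L = V diag(sigma) U^T
    if l is not None:                    # keep only the l slowest modes
        V, sigma, Ut = V[:, :l], sigma[:l], Ut[:l, :]
    return -(V * sigma) @ Ut @ R_g @ dg
```

With well-separated singular values, `l=1` reproduces the one-dimensional constraint discussed in the rest of the paper.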
\section{MODEL}
\label{sec:model}
\subsection{Cell model}
\label{ssub:cell_model}
To examine the hypothesis that high-dimensional phenotypic changes are restricted to a low-dimensional space as a result of adaptive evolution, we adopted a cell model with a large degree of freedom.
In this model, changes in cellular state follow genetically determined catalytic reactions.
We focused on the cellular state of steady growth as a phenotype.
The model, albeit simplistic, includes the basic, minimum properties of cells, i.e., absorption of nutrients from the environment and their conversion to the components essential for the cell growth, including catalysts for cellular reactions \cite{Furusawa2003}.
In the model, a cell consists of $N$ species of components; thus, the cellular state is represented by the $N$-dimensional vector $\boldsymbol{x}=(x_1,x_2,\dots,x_N)$, where $x_i$ is the concentration of each component.
We assume that the summation of the concentrations of all components in the cell is constant, which can be set to 1 without loss of generality ($\sum_{i=1}^Nx_i=1$).
In other words, each concentration $x_i$ is given by the abundance of the $i$th chemical divided by the total abundance of all chemicals.
This assumption is equivalent to cellular volume increasing in proportion to the total abundance of cellular components.
The cell model consists of 3 different types of species components: nutrients, transporters, and catalysts.
Nutrient components exist in the environment and are absorbed into the cell with the aid of cellular transporter chemicals \cite{Furusawa2012AdaptationCriticality}.
For simplicity, we assumed one transporter per nutrient species.
The concentrations of nutrients and transporters are given by $\boldsymbol{x_{nut}}=(x_1, x_2, \dots, x_n)$ and $\boldsymbol{x_{tr}}=(x_{n+1}, x_{n+2},\dots,x_{2n})$, respectively ($n$ is the number of species of nutrient components).
The flow rate of each nutrient is given by $Ds_ix_{i+n}$, where $D$ is the parameter of flow rates of nutrients, and $s_i$ is the concentration of the $i$th component in the environment.
The other components $(k=2n+1, \cdots, N)$ work as catalysts that catalyze $2$-body chemical reactions ($j+k\rightarrow i+k$) in a cell.
Using the rate equation for catalytic chemical reactions, together with the absorption of nutrients and the dilution due to cellular volume growth, the dynamics of the chemical concentrations are given by
\begin{align}
\dot{x_i} &= R_i(\boldsymbol{G}, \boldsymbol{x}) + Ds_ix_{i+n} - \mu(\boldsymbol{x}) x_i
\label{eq:dynamics}\\
R_i(\boldsymbol{G}, \boldsymbol{x}) &= \sum_{j,k=1}^NG_{ijk}x_jx_k - \sum_{j,k=1}^NG_{jik}x_ix_k
\label{eq:chemical_reaction}
\end{align}
Each term in Eq.(\ref{eq:dynamics}) represents the conversion of components in a cell by catalytic chemical reactions $(j+k\rightarrow i+k)$, absorption of nutrients from the environment, and the dilution effect that accompanies cellular volume growth.
The cell volume grows according to the absorption of nutrients.
Hence, the growth rate $\mu(\boldsymbol{x})$ is given by $\mu(\boldsymbol{x})=\sum_{j=1}^nDs_jx_{j+n}$, which follows from the constraint $\sum_{i=1}^N x_i = 1$.
$\boldsymbol{G}=\{G_{ijk}\}$ is a 3rd-order tensor corresponding to the genotype of the cell, which takes the value of 1 when a catalytic chemical reaction ($ j+k \rightarrow i+k $) can occur in a cell and takes the value zero otherwise.
For simplicity, we assumed all reaction coefficients take the same value, 1.
Unless otherwise noted, we used $N=1000, n=10, D= 0.001\ (=1/N)$ in the present paper.
In the parametric region we adopted in the present paper, cellular states reached a unique, nontrivial fixed point ($\forall i ; x_i^*>0$).
The concentrations at the fixed point $\{x_i^*\}$ give the phenotype of the cell, which is uniquely determined for a given genotype and environment.
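For illustration, the reaction dynamics of Eqs.(\ref{eq:dynamics})-(\ref{eq:chemical_reaction}) can be written compactly with NumPy. This is a sketch for a small network, not the simulation code used in the paper:

```python
import numpy as np

def cell_dynamics(x, G, s, D):
    """Time derivative of concentrations, Eqs. (dynamics)/(chemical_reaction).
    x : (N,) concentrations, G : (N, N, N) genotype tensor G[i, j, k],
    s : (n,) environmental nutrient concentrations, D : inflow-rate parameter."""
    n = len(s)
    gain = np.einsum('ijk,j,k->i', G, x, x)   # sum_{jk} G_ijk x_j x_k
    loss = x * np.einsum('jik,k->i', G, x)    # sum_{jk} G_jik x_i x_k
    inflow = np.zeros_like(x)
    inflow[:n] = D * s * x[n:2 * n]           # transporter-mediated uptake
    mu = inflow.sum()                         # growth (dilution) rate
    return gain - loss + inflow - mu * x
```

Since catalytic conversions conserve total abundance and the dilution term cancels the inflow, the total concentration $\sum_i x_i = 1$ is preserved by these dynamics.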
\subsection{Evolution}
\label{sub:evolution}
As the initial genotype before evolution, we generated a catalytic chemical reaction network by randomly placing $\rho N$ reaction paths among the $N$ components and allocating a catalytic component randomly to each reaction path, avoiding autocatalytic reactions.
Evolution simulation was carried out in the following procedure.
At the first generation, we prepared an initial population of $L$ ancestral cells.
In each generation, from each of the $L$ mother cells, we produced $c$ different mutants and calculated their growth rates $\mu$ with the concentrations of components in the steady state $\boldsymbol{x^*}$.
This gives the fitness of the cell.
Then, the top $L$ cells with the highest fitness values were selected to produce offspring for the next generation.
In the present paper, we used $L=10, c=10$, so that the total cell population was set at $cL=100$.
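The selection step above can be sketched as follows, where `mutate` and `fitness` stand for the model-specific operations (hypothetical callables, not names from the paper):

```python
def next_generation(population, mutate, fitness, L=10, c=10):
    """One generation: from each of the L mother cells produce c mutants,
    then keep the L fittest mutants as mothers of the next generation."""
    mutants = [mutate(m) for m in population for _ in range(c)]
    mutants.sort(key=fitness, reverse=True)   # highest fitness first
    return mutants[:L]
```

With $L=c=10$ this evaluates $cL=100$ cells per generation, as in the simulations.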
Unless otherwise noted, mutations were carried out in the following procedure.
At first, we randomly picked a pair of catalytic chemical reactions $(i,j,k), (i',j',k')$, where $(i,j,k)$ represents the catalytic chemical reaction $(j+k\rightarrow i+k)$.
In the case that $i{\neq}j', i'{\neq}j, j'{\neq}k, j{\neq}k'$, we changed the pair of the reaction paths $(i,j,k), (i',j',k')$ to $(i,j',k), (i',j,k')$ or to $(i',j,k), (i,j',k')$.
For all the evolution simulations, mutation rate was fixed at 0.04 per component so that 40 paths were changed per generation.
With this mutation procedure, the total number of incoming and outgoing paths and catalysts, $I_i = \sum_{jk}G_{ijk}$,\ $O_j = \sum_{ik}G_{ijk}$\ and $C_k = \sum_{ij}G_{ijk}$ are conserved.
\section{RESULTS}
\subsection{Evolution from random network}
\label{sec:evolution_from_randomly_generated_genotype}
First, we analyzed cellular evolution with random catalytic chemical reaction networks, following the scheme discussed in Sec.\ref{sub:evolution}.
In this section, we used the fixed environmental condition, $\boldsymbol{s}^{old}$, through the evolution as
\begin{equation}
s_i^{old} = \begin{cases}
\ 2/n &(\ i=1,2,\cdots,n/2\ )\\
\ \ 0 &(\ i=n/2+1,n/2+2,\cdots,n\ ).
\end{cases}
\label{eq:env_random}
\end{equation}
Through evolution, maximum fitness in the population monotonically increased (Fig.\ref{FIG:evo_from_random_1}).
In Fig.\ref{FIG:evo_from_random_1}, the top fitness values in the population are plotted across different strains (i.e., for different runs of the evolution simulation) in different colors.
For all runs with different random mutations, fitness increased to a sufficiently high level, and the time course of the increases, after rescaling the generations, was approximately the same.
\begin{figure}
\centering
\includegraphics[clip,width=0.9\linewidth]{Figure/Figure1.png}
\caption{Evolutionary courses of maximum fitness in the population starting from cells with randomly chosen reaction networks.
Different colors correspond to different runs with different random mutations, starting from the same ancestral cell in the same environment.
The speed of the fitness increase differed in each run.
Inset: we used the ``scaled generation'' for the horizontal axis to normalize a given fitness maximum (here, $5\times 10^{-5}$) to 1.
In the figures below, this rescaling is often adopted to compare the results of different runs.}
\label{FIG:evo_from_random_1}
\end{figure}
Next, we computed the Jacobian matrix $\boldsymbol{J}$ at the fixed point $\boldsymbol{x^*}$ for the reaction dynamics at each generation.
We then obtained the singular values $\{\sigma_i\}\ (\sigma_1>\sigma_2>\dots>\sigma_{N-1})$ of its inverse matrix $\boldsymbol{L}$ (see also Eq.(\ref{eq:relation_comp})) and the corresponding left-singular vectors $\{\boldsymbol{v^{(i)}}\}$.
As shown in Fig.\ref{FIG:evo_from_random_2}(a), the largest singular value $\sigma_1$ was separated from the other singular values through evolution.
This trend was common across all evolutionary runs.
The ratio of the first to the second largest singular values $\sigma_1/\sigma_2$ increased through evolution (Fig.\ref{FIG:evo_from_random_2}(b)).
This suggests that the relaxation dynamics of phenotypes are highly constrained along $\boldsymbol{v^{(1)}}$, the left-singular vector for the largest singular value $\sigma_1$.
Next, we analyzed the distribution of phenotypic changes caused by genetic mutation as follows:
At each generation, we picked the fittest cell in the population and produced many thousands of mutant cells, which have slightly modified chemical reaction networks from those of the original.
For each of these mutant cells, the phenotypes $\boldsymbol{x}^*$ were computed.
Then, each of the high-dimensional phenotypic changes due to the mutation was obtained.
To visualize these changes in the $N$-dimensional space, we used PCA (principal component analysis).
In Fig.\ref{FIG:evo_from_random_3}(a), we plotted the phenotypic changes using the 1st and 2nd PC modes, $\boldsymbol{p^{(1)}_{old}}$ and $\boldsymbol{p^{(2)}_{old}}$, respectively.
At the start of evolution, the phenotypic changes of the mutant cells were uniformly distributed in the PC plane (Fig.\ref{FIG:evo_from_random_3}(a.1)).
As evolution progressed, the distribution of phenotypic changes was largely biased along $\boldsymbol{p^{(1)}_{old}}$ (Fig.\ref{FIG:evo_from_random_3}(a.2)).
In fact, at the end of the evolution, more than $4\%$ of the phenotypic changes due to mutation were explained by the 1st PC (Fig.\ref{FIG:evo_from_random_3}(b)), though less than $0.3\%$ were explained at the start of the evolution.
This indicates that phenotypic changes due to mutation were constrained to a lower-dimensional space as the evolution progressed.
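The PCA step can be sketched with a plain SVD of the centered mutant-phenotype matrix (illustrative; the paper does not specify the implementation):

```python
import numpy as np

def mutant_pca(X):
    """PCA of mutant phenotypes X (rows: mutants, columns: N components).
    Returns the principal axes (rows of Pt) and explained-variance ratios."""
    Xc = X - X.mean(axis=0)                            # center the data
    U, sing, Pt = np.linalg.svd(Xc, full_matrices=False)
    evr = sing**2 / np.sum(sing**2)                    # explained variance ratio
    return Pt, evr
```

The overlap with the slowest relaxation mode is then `abs(Pt[0] @ v1)` for unit vectors, as used in the analysis below.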
\begin{figure*}
\centering
\includegraphics[clip,width=0.9\linewidth]{Figure/Figure2.png}
\caption{
(a)\ Evolutionary changes of singular values $\{\sigma_i\}$ of the inverse Jacobian matrix $\boldsymbol{L}$ for the phenotype $\boldsymbol{x}^*$.
They are calculated for the fittest cells in the population every generation.
(b)\ Evolutionary changes of the ratio of the first to the second largest singular values $\sigma_1/\sigma_2$ for different strains.
Both (a) and (b) are plotted against the scaled generation.}
\label{FIG:evo_from_random_2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[clip,width=0.9\linewidth]{Figure/Figure3.png}
\caption{(a)\ Distribution of phenotypic changes due to genetic mutations in the PC (principal component) plane at the start (a.1) and end of evolution (a.2).
These PCs were calculated from the phenotypes $\boldsymbol{x^*}$ of $10^5$ mutants.
(b)\ Explained variance ratio of the PC modes of phenotypic changes due to genetic mutation.
The blue line corresponds to the data at the start of evolution and the orange to that at the end of evolution.
(c)\ Inner products between the 1st PC mode of the phenotypic changes due to mutation at each generation and the left-singular vectors $\boldsymbol{v^{(i)}}$ of the inverse Jacobian matrix $\boldsymbol{L}$.
The blue line shows the inner product with $\boldsymbol{v^{(1)}}$, the left-singular vector of the largest singular value, and orange lines depict those with the other singular vectors, plotted against the rescaled generation.}
\label{FIG:evo_from_random_3}
\end{figure*}
Next, we studied how the low-dimensional constraint in relaxation dynamics and the low-dimensional response to genetic mutations are correlated.
As an indicator of the similarity between relaxation dynamics of the phenotype and its response to genetic mutations, we calculated the absolute inner products of $\boldsymbol{p^{(1)}_{old}}$ with $\boldsymbol{v^{(i)}}(i=1,2,\dots,N-1)$ for the fittest cell in each generation (Fig.\ref{FIG:evo_from_random_3}(c)).
In Fig.\ref{FIG:evo_from_random_3}(c), the inner product with $\boldsymbol{v^{(1)}}$ is depicted by the blue line and the others by the thin black line.
\begin{figure*}
\centering
\includegraphics[clip,width=0.9\linewidth]{Figure/Figure4.png}
\caption{(a)\ Dependency of fitness on PC1 values of the phenotype of mutant cells computed across 10000 mutants after the last generation of evolution.
(b)\ Correlation between fitness and the PC1 value of the phenotype of mutant cells computed every 10 generations, as above, and plotted against the scaled generation.}
\label{FIG:evo_from_random_4}
\end{figure*}
The former took on a much higher value as the evolution progressed.
This result shows that phenotypic changes caused by genetic mutation were restricted to the one-dimensional subspace spanned by $\boldsymbol{v^{(1)}}$, i.e., the direction of the slowest relaxation mode, after adaptive evolution had progressed.
Thus, our results demonstrate that a low-dimensional constraint in the phenotypic space emerged through adaptive evolution.
The next question concerns the biological meaning of this low-dimensional structure.
We then plotted the dependency of fitness on PC1 value $p^{(1)}_{old}=(\boldsymbol{p^{(1)}_{old}}\cdot \boldsymbol{x^*})$.
As shown in Fig.\ref{FIG:evo_from_random_4}(a), $p^{(1)}_{old}$ was highly correlated with fitness after evolution.
Indeed, the correlation between $p^{(1)}_{old}$ and fitness increased through evolution (Fig.\ref{FIG:evo_from_random_4}(b)).
This result indicates that the one-dimensional constraint in the phenotypic space both in relaxation dynamics and in response to genetic mutation was acquired through adaptive evolution, which embeds fitness into the most variable direction in the phenotypic subspace.
\subsection{Evolution in novel environment}
\label{sec:evolution_from_evolved_genotype}
In the previous section, we analyzed evolution of the cells with randomly generated chemical reaction networks.
In nature, however, evolution occurs for organisms that have already evolved under earlier environmental conditions.
Hence, in this section, we use those already evolved in one environmental condition and study their evolution in a new environment.
For this, we took cells that had evolved in the fixed environment $\boldsymbol{s}^{old}$ (given by Eq.(\ref{eq:env_random})) from a random network, and then put them into the following new fixed environment $\boldsymbol{s}^{new}$:
\begin{equation}
s_i^{new} = \begin{cases}
\ \ 0 &(\ i=1,2,\cdots,n/2\ )\\
\ 2/n &(\ i=n/2+1,n/2+2,\cdots,n\ )
\end{cases}
\label{eq:env_evo}
\end{equation}
Again, maximum fitness in the population monotonically increased through evolution of all strains following approximately the same course (Fig.\ref{FIG:evo_from_evo}(a)).
We then computed the distribution of $\{\sigma_i\}$, the singular values of $\boldsymbol{L}$.
In this case, the largest singular value $\sigma_1$ remained separated from the other singular values through evolution (Fig.\ref{FIG:evo_from_evo}(b)), whereas $\sigma_1/\sigma_2$ first decreased, and then increased later.
Fig.\ref{FIG:evo_from_evo}(c) shows that this ratio takes its minimum value at approximately the scaled generation $0.15$.
Next, we computed how the most changeable direction of the phenotype, $\boldsymbol{v^{(1)}}$, changes through evolution in the new environment.
In each generation, we calculated the inner products between the vector and that either at the start or end of the evolution (Fig.\ref{FIG:evo_from_evo}(d)).
As shown, the former decreased from $1$ to $0$, whereas the latter increased from $0$ to $1$.
The latter inner product exceeded the former at the same relative generation (0.15) when $\sigma_1/\sigma_2$ took the minimum value.
The results in the present section imply that adaptation to a novel environment first progressed within the one-dimensional phenotypic constraint imposed by the previous evolution, and that this one-dimensional subspace was later redirected to match the novel environment.
\begin{figure*}
\centering
\includegraphics[clip, width=0.9\linewidth]{Figure/Figure5.png}
\caption{(a)\ Evolutionary courses of maximum fitness in a population evolving in a novel environment starting from cells evolved in another environment.
The speed of fitness increase differed in each run.
Hence, in the inset, we used the ``scaled generation'' for the horizontal axis to normalize a given fitness maximum (here, $5\times 10^{-5}$) to 1.
Different colors correspond to different runs with different random mutations starting from the same ancestral cell and environmental conditions.
(b)\ Evolutionary changes of all the singular values $\sigma_i$.
(c)\ Evolutionary changes to the ratio of the first to the second singular values.
(d)\ Evolutionary changes to the inner product between $\boldsymbol{v^{(1)}}$ at each generation and that at the start of evolution (blue line) or end of evolution (orange line).
All of (a)-(d) are plotted against the scaled generation.}
\label{FIG:evo_from_evo}
\end{figure*}
\subsection{Convergence of phenotypic evolutionary pathways}
\label{sec:convergence_of_phenotypic_evolutional_pathways}
In this section, we will investigate evolutionary pathways in the phenotypic space from the phenotype adapted to the old environment.
We observed that evolutionary pathways in phenotypic space converged as a result of adaptive evolution (Fig.\ref{FIG:pathway}(b)).
In the figure, the evolutionary pathways of 20 different strains from the same ancestral cell evolved under the same environmental conditions $\boldsymbol{s}^{old}$ are plotted in the plane with the 1st and the 2nd PCs, where the PCs were computed over a set of phenotypes of the 20 strains over 1000 equally divided generations from the start to end of each evolution.
The gradation in the background of the figure represents the fitness values shown as a function of the PC plane.
Different strains from the same ancestral cell similarly climbed up the fitness landscape due to strong selection during adaptive evolution.
However, in the neutral direction, in which fitness does not change, the phenotypes of different strains could move in different directions due to random mutations.
Indeed, this was the case for the original evolution from the random network (Fig.\ref{FIG:pathway}(a)).
In contrast, the evolutionary pathways from the cells which had evolved in the old environment $\boldsymbol{s}^{old}$ followed the same path, not only in the direction along which fitness changes but also in the direction along which it does not (Fig. \ref{FIG:pathway}(b)).
The difference between the above two evolutionary pathways originates in how the genetic mutations are mapped to the phenotypic changes.
For cells with randomly chosen genotypes, phenotypic changes caused by mutations were uniformly distributed in all directions across the phenotypic space.
Thus, evolutionary pathways diverged in the directions along which fitness does not change.
On the other hand, for cells evolved in a given environment, the phenotypic changes induced by mutations were restricted to a low-dimensional subspace.
Therefore, phenotypic changes in later evolutionary cycles were highly biased within this subspace.
Thus, the evolutionary pathways in the phenotypic space were restricted to this biased direction.
\begin{figure*}
\centering
\includegraphics[clip,width=0.9\linewidth]{Figure/Figure6.png}
\caption{Evolutionary pathways plotted in the plane of PC1 and PC2: (a)\ evolution under the environment condition $\boldsymbol{s^{old}}$ starting from a random network and (b)\ evolution under the condition $\boldsymbol{s^{new}}$ for genotypes already evolved under the condition $\boldsymbol{s^{old}}$.
The red vector in (b) is the projection of $\boldsymbol{v^{(1)}}$, the left-singular vector of the largest singular value of $\boldsymbol{L}$ at the start of the evolution, and the blue one is that at the end of the evolution.
The gradation in both figures represents the fitness value, with darker color corresponding to higher value as plotted in the PC plane.
The PC planes are calculated according to the phenotypes of the fittest cells in the population every 0.001 relative generations for the 20 different strains.}
\label{FIG:pathway}
\end{figure*}
Thus far, we have shown the convergence of evolutionary pathways of the phenotypes that emerged as a result of low-dimensional constraints on phenotypic changes already realized in earlier evolution under different conditions.
One might wonder whether such constraints hinder or foster evolution to a new environment.
To examine these possibilities, we computed evolution speeds with and without such constraints.
In Fig.\ref{FIG:speed_evolution}, evolutionary courses of fitness are plotted: one for cells evolved under a previous environmental condition and the other for cells evolved based on a random network.
The generations needed to reach a given growth rate ($\mu^*=5\times 10^{-6}$) were $20\%$ shorter for the former as compared with the latter, even though initial growth rates were the same.
As shown in Sec.\ref{sec:evolution_from_evolved_genotype}, cells took advantage of the already evolved low-dimensional structure and modified it rather than destroying the structure in order to adapt to a novel environment.
These results suggest that the low-dimensional phenotypic constraint hastens evolution to a novel environment.
\begin{figure}
\centering
\includegraphics[clip,width=0.9\linewidth]{Figure/Figure7.png}
\caption{ Fitness changes through the evolution from cells with randomly generated networks (blue line) and from the cells that have already evolved to an earlier, different environmental condition. }
\label{FIG:speed_evolution}
\end{figure}
\subsection{Evolution in fluctuating environment}
\label{sec:evolution_under_fluctuating environment}
In the last subsection, we studied adaptation to changes from one environment to another.
In nature, environmental conditions can continuously change over generations.
Thus, the next question to be addressed is whether such a one-dimensional phenotypic constraint is also generated through evolution in a fluctuating environment, where environmental conditions repeatedly change after some number of generations.
To be specific, nutrient conditions were randomly changed every 10 generations to one of 5 different conditions $\boldsymbol{s}^{(i)} (i=1,2,\dots,5)$ as given by
\begin{equation}
s_j^{(i)} = \begin{cases}
(1-\epsilon)/2 &(\ j=2i-1,\ 2i\ )\\
\epsilon/(n-2) &(\ \text{the others}\ )
\end{cases}
\label{eq:env_fluc}
\end{equation}
with $\epsilon=0.2$ and $n=10$.
Note that the following results are qualitatively unchanged for different intervals between the environmental changes.
Herein, we term this changing environmental condition $E_s$, whereas each of the 5 nutrient conditions is termed $E_1, E_2, \dots, E_5$, respectively.
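For concreteness, the nutrient vectors of Eq.(\ref{eq:env_fluc}) can be generated as follows (a simple sketch):

```python
import numpy as np

def nutrient_env(i, n=10, eps=0.2):
    """Nutrient vector s^(i) of Eq. (env_fluc): two rich nutrients
    (indices 2i-1 and 2i in the paper's 1-based numbering), the rest
    supplied at a low level eps/(n-2)."""
    s = np.full(n, eps / (n - 2))
    s[2 * i - 2 : 2 * i] = (1 - eps) / 2   # 0-based slice for j = 2i-1, 2i
    return s
```

Each vector sums to 1, so the total nutrient supply is the same in all 5 conditions.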
\begin{figure*}
\centering
\includegraphics[clip,width=0.9\linewidth]{Figure/Figure8.png}
\caption{(a)\ Evolutionary changes to fitness value under 5 environmental conditions for the network with the highest fitness under a fluctuating environment.
(b)\ Evolutionary changes to the singular values $\sigma_i$ of the inverse Jacobian matrix $\boldsymbol{L}$.
(c)\ Evolutionary changes to the explained variance ratio of PCs calculated from the phenotypic changes in many thousands of mutants in each generation.
(d)\ Relation between the 1st PC mode of the phenotypic changes caused by mutations and fitness in different environments.
The vertical axis, fitness difference, represents the fitness at each environmental condition subtracted from the average.
Those in all 5 environments showed similar trends.
}
\label{FIG:evo_multi_1}
\end{figure*}
By carrying out numerical evolution, we confirmed that the growth rate of the fittest genotype increased in all 5 environmental conditions, as shown in Fig.\ref{FIG:evo_multi_1}(a).
We then computed the singular values $\{\sigma_i\}$ of the inverse Jacobian matrix $\boldsymbol{L}$ in the same way as in the previous subsections.
Again, only the largest singular value $\sigma_1$ was separated from the others, in spite of the diversity of environmental conditions (Fig.\ref{FIG:evo_multi_1}(b)).
Phenotypic changes due to genetic mutation were again restricted to the one-dimensional subspace spanned by $\boldsymbol{v^{(1)}}$, the left-singular vector corresponding to $\sigma_1$ (Fig.\ref{FIG:evo_multi_1}(c)).
For convenience, we label this 1st PC mode of the phenotypic changes due to genetic mutation in cells evolved under $E_s$ as $\boldsymbol{p^{(1)}_s}$, whereas each of the 1st PC modes computed from the network evolved separately under each $\boldsymbol{E_i}$ condition is denoted as $\boldsymbol{p^{(1)}_i} (i=1,2,\dots,5)$.
We then examined whether and how $\boldsymbol{p^{(1)}_s}$ represents fitness in all 5 conditions.
As shown in Fig.\ref{FIG:evo_multi_1} (d), the inner product $p^{(1)}_s=(\boldsymbol{p^{(1)}_s}\cdot \boldsymbol{x^*})$ correlated well with fitness in all 5 conditions.
This suggests that $\boldsymbol{p^{(1)}_s}$ involves all modes evolved in each environmental condition $E_i\ (i=1,2,\dots,5)$.
To examine how $\boldsymbol{p^{(1)}_s}$ is related to phenotypic constraint of the networks evolved in each condition $E_i (i=1,2,\dots,5)$, we computed the similarity of $\boldsymbol{p^{(1)}_s}$ with each of $\boldsymbol{p^{(1)}_i} (i=1,2\dots,5)$, as given by the absolute inner products $\boldsymbol{p^{(1)}_s}\cdot\boldsymbol{p^{(1)}_i}$.
Firstly, the inner products $|(\boldsymbol{p^{(1)}_i}\cdot\boldsymbol{p^{(1)}_j})| (i\neq j=1,2,\dots,5)$ were smaller than 0.006; i.e., $\boldsymbol{p^{(1)}_i}$s were almost orthogonal to each other.
In contrast, $a_i\equiv(\boldsymbol{p^{(1)}_s}\cdot\boldsymbol{p^{(1)}_i}) (i=1,2,\dots,5)$ were 0.23, 0.076, 0.11, 0.20, and 0.12, respectively.
These values were much larger than $|(\boldsymbol{p^{(1)}_i}\cdot\boldsymbol{p^{(1)}_j})|$.
Since $\boldsymbol{p^{(1)}_i}$s are orthogonal to each other, $\boldsymbol{p^{(1)}_s}$ is represented by the linear combination $\boldsymbol{p^{(1)}_s}\simeq\sum_{i=1}^5a_i\boldsymbol{p^{(1)}_i}+(others)$.
If $\boldsymbol{p^{(1)}_s}$ were completely represented by the linear combination of $\boldsymbol{p^{(1)}_i}$s with the same weight and no other contributions, $\boldsymbol{p^{(1)}_s}=\frac{1}{\sqrt{5}}\sum_{i=1}^5\boldsymbol{p^{(1)}_i}$ would hold.
The observed inner products $a_i$ were smaller than this value $1/\sqrt{5}\approx0.45$, but were substantially large.
This indicates that the most variable direction in phenotypic changes acquired through changing environmental conditions mainly reflected each of the variable modes acquired for each environmental condition, but also included some others which may be necessary for adaptation to changing environments.
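As a consistency check of the equal-weight limit quoted above, a toy computation with the standard basis standing in for the nearly orthogonal $\boldsymbol{p^{(1)}_i}$:

```python
import numpy as np

# If p_s were exactly the equal-weight unit combination of 5 orthonormal
# modes p_i, every overlap a_i = p_s . p_i would equal 1/sqrt(5) ~ 0.447.
P = np.eye(5)                       # rows: stand-ins for orthonormal p_i
p_s = P.sum(axis=0) / np.sqrt(5)    # equal-weight, unit-norm combination
a = P @ p_s                         # overlaps a_i
```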
\section{Discussion}
\label{sec:discussion}
In the present paper, we studied the evolution of a catalytic reaction network to higher fitness as measured by growth rate to confirm that the phenotypic change in a high-dimensional space of chemical concentrations is restricted mainly within a one-dimensional subspace after evolution.
This was demonstrated by the observation that one singular value of the inverse Jacobian matrix at the steady growth state is significantly larger than others.
Along the direction of the left-singular vector corresponding to this value, the relaxation is slowest.
Thus, phenotypic change is constrained in this direction.
The drastic dimension reduction from high-dimensional phenotypic changes to this one-dimensional subspace emerges after evolution.
Furthermore, this one-dimensional direction agrees with the direction of the dominant phenotypic changes due to mutation from the fitted network.
The most variable direction in the reaction dynamics agrees with the most variable direction due to genetic changes.
In addition, along this one-dimensional subspace, the fitness gradient is highest, implying that this left-singular vector represents fitness.
Next, it was shown that this one-dimensional phenotypic space evolved under a given environmental condition was used for later adaptation to a novel environment.
Evolution is accelerated by this dimension reduction.
Further, this reduction to one-dimensional phenotypic space emerges even under fluctuating environments.
Dimension reduction from high-dimensional phenotypic states has gathered much attention recently in the study of biological systems and of protein dynamics \cite{Kaneko2015,Carroll2013,Pancaldi2010Meta-analysisYeast, Keren2013, Tlusty2017PhysicalProteins, Schreier2017ExploratoryNetworks, Frentz2015StronglyCommunities, Daniels2008SloppinessBiology}.
Changes in the concentrations of most mRNA and protein species in response to environmental stress are suggested to be mainly restricted to a one-dimensional subspace.
Thus, the present results of evolved catalytic reaction networks are consistent with previous experimental observations.
Notably, such a reduction to a one-dimensional subspace from a high-dimensional phenotypic space is explained by the separation of a single singular value of the inverse Jacobian matrix around the steady state from all others.
Changes upon perturbation are mainly constrained along the left-singular vector of this outlier singular value, and accordingly, the one-dimensional constraint is derived as a result of evolution.
Can one then directly confirm this separation of one singular value in experiments? Upon perturbation, cellular states relax to the original steady state.
The slowest relaxation mode is given by the left-singular vector for the separated largest singular value of the inverse Jacobian matrix.
For example, by means of transcriptome analysis, Braun measured the temporal course of the concentrations of all mRNA species upon environmental stress \cite{Stolovicki2011a}.
Interestingly, the changes were highly correlated across all species, suggesting constraint of relaxation dynamics to a low- (one-) dimensional subspace.
An alternative possibility for experimental confirmation of singular value separation would be the use of the concentration fluctuations of chemicals inevitable in a cell due to the small number of molecules within it \cite{Elowitz2002StochasticCell.,Furusawa2005UbiquityDynamics,Bar-Even2006NoiseAbundance,Sato2003}.
If the fluctuations are mainly governed by the slowest mode, i.e., that corresponding to the largest singular value of the inverse Jacobian matrix, then one possible strategy is to measure the temporal fluctuations in chemical concentrations (gene expression) and their correlation matrix.
Even though it may be difficult to measure such fluctuations for many components at a single-cell level, the information could be collected from a population \cite{Brenner2015}.
It is interesting to note that a one-dimensional constraint in concentration changes is observed even during evolution under fluctuating environmental conditions.
Even though the fittest phenotype and the corresponding one-dimensional subspace are different for each environmental condition, the evolved state under the fluctuating environment still has only one distinct singular value, and the cellular state changes dominantly along its left-singular vector.
This left-singular vector (manifold) retains some overlap with those shaped for each of the environments, but is not simply a combination of them.
Thus, evolution shapes a one-dimensional phenotypic space that retains the information of each environmental condition, as well as that needed for adaptation to environmental conditions that switch over generations.
Why was only a one-dimensional structure acquired in the evolution of the present cell model?
In the present study, we used cell growth rate as an indicator of fitness, even though we changed the environmental conditions.
In reality, cells often need to evolve to satisfy other conditions, such as survival in poor nutrient conditions or resistance to antibiotics, in addition to the pressure for higher growth.
To confirm that the reduction to a one-dimensional subspace is valid even for different indicators of fitness, we carried out evolution simulations with fitness functions that were not correlated with growth rate (see Supplemental Materials).
To summarize the result of these simulations, a one-dimensional constraint with separation of a single singular value always evolved.
The specific choice of the fitness function in the simulation is not important to the dimension reduction for phenotypic changes.
In reality, however, phenotypic changes in bacteria are not necessarily restricted to a one-dimensional subspace.
For example, major phenotypic changes in {\sl E. coli} under application of a variety of antibiotics were recently measured by transcriptome analysis.
Even though a dimension reduction was observed, the transcriptome changes were located in a subspace of more than one dimension, possibly an approximately 8-dimensional subspace \cite{Suzuki2014b}.
One possible reason that the present model produces a lower-dimensional subspace than does real-life omics analysis may be the lack of an explicit tradeoff, an important topic in the study of evolution \cite{Kashtan2005, Shoval2012EvolutionarySpace, Tendler2015EvolutionaryShells}.
As shown in Fig.\ref{FIG:evo_multi_1} (d), fitness changes across the different environmental conditions were highly correlated.
This positive correlation indicates that there existed no tradeoff among the fitness pressures under the different conditions.
Thus, whether the dimensionality of the constrained phenotypic space depends on environmental conditions, including explicit tradeoffs, is an important question.
As shown in Sec.\ref{sec:convergence_of_phenotypic_evolutional_pathways}, once a low-dimensional structure is formed as a result of evolution, evolutionary pathways in the phenotypic space follow similar trajectories in different populations.
Indeed, this convergence of evolutionary pathways was previously observed in an evolution experiment with bacteria \cite{Horinouchi2015b}.
According to our results, this convergence is explained as follows.
Evolutionary pathways in the phenotypic space are determined by selection according to fitness and drift due to genetic mutation.
Under a low-dimensional phenotypic constraint, phenotypic changes due to genetic drift are restricted to the low-dimensional subspace.
In this way, evolutionary pathways are strongly biased in this direction as shown in Fig.\ref{FIG:pathway}. Hence, possible evolutionary pathways in the phenotypic space are highly restricted within the low-dimensional subspace. This may shed new light on the long-lasting question of “necessity or chance” in evolution (or “replaying the tape of life” as described by Gould \cite{GouldWonderfulHistory}); even though genetic evolution could be random, phenotypic evolution is rather deterministic and constrained.
Whether or not the low-dimensional constraint imposed by evolution is beneficial remains unclear.
One benefit we observed is the acceleration of adaptation to novel environments.
As shown in Fig.\ref{FIG:speed_evolution}, the number of generations needed to adapt to a novel environment was smaller for a network previously evolved in a different environment than for a random network.
As discussed above, the evolved low-dimensional structure was reused to aid adaptation rather than discarded. Searching for fitter states within the restricted subspace would be generally faster than random searching over a higher-dimensional space.
Hence, evolution shapes the low-dimensional structure in the phenotypic space, which accelerates further evolution in turn.
Of course, this acceleration is only possible if fitter states are accessible in the low-dimensional structure.
In this sense, if explicit trade-offs exist between the fitted states for the old and new environmental conditions, acceleration may not occur.
This issue should be explored in the future.
Our description of the dimension reduction adopts linearization around the steady growth state, which assumes small changes upon perturbation.
Although it has been suggested that such a linear regime is expanded as a result of increased robustness through evolution \cite{Furusawa2018, Kaneko2007}, it will be important to examine the ranges at which linearization is valid and also to explore extension of dimension reduction to a nonlinear regime.
In summary, we have formulated the dimension reduction in biological systems in terms of dynamical systems and confirmed it by simulation of evolution in a cell model with reaction networks of large numbers of chemical species.
The low-dimensional constraint on phenotypic changes leads to further deterministic phenotypic evolution, which also allows for the acceleration of adaptation to novel environments.
These results lay the groundwork for establishing a theory of the constraint and predictability of phenotypic evolution.
\section*{Acknowledgments}
The authors would like to thank Chikara Furusawa, QianYuan Tang, Omri Barak, Tetsuhiro Hatakeyama, and Atsushi Kamimura for stimulating discussions.
This research was partially supported by a Grant-in-Aid for Scientific Research (S) (15H05746) from the Japanese Society for the Promotion of Science (JSPS) and a Grant-in-Aid for Scientific Research on Innovative Areas (17H06386) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.
In any physical theory, it is necessary to describe the mechanism that allows us to gather information about the physical systems we are modelling, that is, it is necessary to describe measurements. In classical theories, the description of measurements is frequently not explicit, often hidden under the assumption that we can neglect the effect of measurements on the state of the system of interest. However, in quantum mechanics, describing measurement processes has been problematic and a subject of discussion since its very inception (see e.g.,~\cite{Wheeler1983}). Nevertheless, from an operational point of view the problem can be bypassed in the context of non-relativistic quantum mechanics by employing \mbox{L\"uders rule}, also known as the projection postulate~\cite{Luders1951}. This rule prescribes how to update the state of the system after the measurement in a way consistent with the measurement outcome, through projection-valued measures (PVMs). This model of measurement is called \textit{projective measurement} or \textit{idealized measurement}.
However, projective measurements are not suitable to describe measurements in quantum field theory, since they are not compatible with relativistic causality, and therefore they are not consistent with the very foundational framework of quantum field theory (QFT). Specifically, there are no local projectors of finite rank in QFT~\cite{ReehSchlieder,Schlieder1968,Hellwig1970formal,Redhead1995}. Any finite rank projector in QFT, such as a projector onto some single particle wave-packet state, is inherently non-local, and so any attempt to generalize the projection postulate with such a projector leads to spurious faster-than-light signalling~\cite{Sorkin1993,Lin2014,Benincasa2014,Borsten2021,Jubb2021}. It should be clear from the beginning that when we talk about the causality issues of the projection postulate, we are indeed referring to superluminal causation, and not the non-locality that arises from entanglement, which can be present even between non-relativistic systems~\cite{EPR}. The latter is perfectly compatible with causality as long as it does not enable signalling---it just tells us that quantum theories are non-local in nature, and correlations can be present between quantum systems that are spacelike separated even in QFT~\cite{Summers1985,Summers1987}.
The impossibility of naively generalizing the projection postulate to QFT has been addressed mainly in three different ways.
First, one could consider localized ideal measurements\footnote{For a more thorough analysis where general CPTP maps (not necessarily ideal measurements) on quantum fields are characterized according to its causal behaviour, see~\cite{Jubb2021}.} (in the form of infinite rank projectors) and try to modify the projection postulate in a covariant way, as in Hellwig and Kraus's proposal~\cite{Hellwig1970formal,Hellwig1970operations,Hellwig1969}. This prescription however suffers from the same faster-than-light signalling that Sorkin pointed out in~\cite{Sorkin1993}, as discussed in~\cite{Borsten2021}.
Second, another way consists of formulating a measurement theory completely within quantum field theory, such as Fewster and Verch's framework~\cite{FewsterVerch,Fewster2019covariant,Bostelmann2021}.
By giving a covariant update rule, they obtain a measurement scheme consistent with QFT and therefore completely safe from any causality issues. In this framework, however, being entirely within QFT, the localized measurement probes are also quantum fields, and we are still left with the problem of how to access the information of that second ancillary field~\cite{DanBruno2021}. This is because low energy measurement apparatuses (like atoms, photodetectors, photomultipliers, human retinas, etc.) are not well described by a free field theory, and the treatment of bound states in QFT is still very much an open problem~\cite{WeinbergQFT}.
Finally, the third option relies on coupling so-called \textit{particle detectors}---localized non-relativistic quantum systems---to quantum fields, such as the Unruh-DeWitt (UDW) model~\cite{Unruh,DeWitt}. Although pointlike detector models are fully compatible with relativistic causality~\cite{Edu2015,TalesBruno2020,Pipo2021}, their singular nature leads, in certain contexts, to divergences~\cite{Schlicht2004}. However, those divergences are not present for detectors that are adiabatically switched on~\cite{Satz2007}, or that are spatially smeared~\cite{Louko2006,Langlois}. In the latter case, even though the unitary evolution is perfectly compatible with causality and does not allow faster-than-light signalling with a second detector~\cite{Edu2015,Pipo2021}, the use of a (non-pointlike) spatial smearing along with the non-relativistic approximation can indeed enable some degree of faster-than-light signalling between two detectors with the help of a third ancillary one in between~\cite{Pipo2021}. However, unlike in the case of projective measurements, in the smeared setups that show any degree of superluminal signalling, its impact is bounded by the smearing lengthscale of the ancillary detector (which by approximating it to be non-relativistic we already neglected in our frame of reference, to start with) and moreover does not show up at leading orders in perturbation theory~\cite{Pipo2021}. These results, together with the fact that detector models can realistically represent the way fields are measured experimentally~\cite{Edu2013,Rodriguez2018,Lopp2020}, make this option especially appealing for modelling measurements in QFT.
In the particle detector approach, however, not every step of the process is already well understood. We still need to describe the mechanism through which we go from a field state and a detector that are originally decoupled and uncorrelated, to a measurement outcome that an experimentalist can put in a plot or write in a notepad. After the experiment is performed and some classical information has been obtained about the field, how does one take into account this information for the description of future experiments involving the field?
This question is particularly relevant for the field of relativistic quantum information. Indeed, there are several landmark protocols and experimental proposals in the context of quantum information (e.g., the quantum Zeno effect~\cite{Misra1977,Patil2015}, the delayed choice quantum eraser experiment~\cite{Scully1982,Scully1991,Kwiat1992,Kim2000,Ma2013}, or the Wigner's friend experiment~\cite{Wignerchapter1961,Frauchiger2018}, among others) in which the ability to perform measurements and using the information of the outcomes to update the state is essential for their implementation. To be able to formulate these scenarios in relativistic contexts, it is necessary to have a well-understood measurement theory that works in the context of quantum field theory and that connects to experimentally measurable quantities.
In this paper we aim to formulate consistently a measurement theory for QFT using detectors as measuring tools. First, in Section~\ref{section: setup} we present our working model. In Section~\ref{section: the updated state of the field} we describe the measurement process (including field-detector interaction and idealized measurement of the detector) and obtain the field state update according to the measurement outcome. In Section~\ref{section: causal behaviour} we analyze this update in order to determine whether this kind of measurement abides to relativistic causality. In Section~\ref{section: the update rule} we present a context-dependent update rule consistent with QFT. In Section~\ref{section: update of n-point functions} we explicitly formulate it in terms of $n$-point functions and in Section~\ref{section: generalization to the presence of entangled third parties} we analyze the most general initial scenario. Section~\ref{section: discussion} is devoted to discussing how the framework presented in this manuscript constitutes a valid measurement theory for QFT. Finally we present our conclusions in Section~\ref{section: conclusions}.
\section{Setup}\label{section: setup}
In this work we consider a spatially smeared Unruh-DeWitt model~\cite{Unruh,DeWitt} for a detector coupled to a real scalar field in a (1+$d$)-dimensional flat spacetime. This is a simplified model that is both covariant~\cite{TalesBruno2020} and yet captures the phenomenology of light-matter interaction neglecting angular momentum exchange but without any further common quantum optics approximations---such as the rotating-wave or single (or few) mode approximation~\cite{Edu2013,Pozas2016,Rodriguez2018,Lopp2020}. For our purposes, let us consider that the detector is inertial and at rest in the frame of coordinates $(t,\bm{x})$ so that its proper time coincides with the coordinate time $t$. Then, in the interaction picture, the interaction Hamiltonian is~\cite{Langlois}
\begin{equation}\label{UDW Hamiltonian}
\hat{H}_{I}(t)=\lambda \chi(t)\hat{\mu}(t) \int\mathrm{d}^{d}\spatial{x}\,F(\spatial{x})\hat \phi(t,\spatial{x}) \;.
\end{equation}
In this equation, the scalar field $\hat\phi$ can be expanded in terms of plane-wave solutions in the quantization frame $(t,\bm x)$ as
\begin{equation}
\hat \phi(t,\bm{x})\!=\!\!\int\!\frac{\mathrm{d}^{d}\bm{k}}{\sqrt{(2\pi)^{d}2\omega_{\bm{k}}}}\left( \hat{a}_{\bm{k}}e^{-\mathrm{i}(\omega_{\bm{k}}t-\bm{k}\cdot\bm{x})}+\hat{a}^{\dagger}_{\bm{k}}e^{\mathrm{i}(\omega_{\bm{k}}t-\bm{k}\cdot\bm{x})} \right),
\end{equation}
where $\hat a^\dagger_{\bm k}$ and $\hat a_{\bm k}$ are canonical creation and annihilation operators satisfying the commutation relations $[\hat{a}^{\phantom{\dagger}}_{\bm{k}},\hat{a}_{\bm{k'}}^{\dagger}]=\delta(\bm{k}-\bm{k}')$. The internal degree of freedom (monopole moment) of the detector, which we choose to have two levels (ground $\ket{g}$ and excited $\ket{e}$) with an energy gap $\Omega$ between them is given by
\begin{equation}
\hat{\mu}(t)=\proj{g}{e} e^{-\mathrm{i}\Omega t}+\proj{e}{g} e^{\mathrm{i}\Omega t} \;.
\end{equation}
$\lambda$ is the coupling strength and $\chi(t)$ is the switching function controlling the time dependence of the coupling. The interaction is on only for times in the support of $\chi(t)$. For simplicity we assume this to be a finite interval $[t_\textrm{on},t_\textrm{off}]$ (i.e. $\chi(t)$ is compactly supported). $F(\bm{x})$ is the spatial smearing function that models the localization of the detector, and therefore the support of the product of $\chi$ and $F$,
\begin{equation}\label{interaction region}
\mathcal{D}\coloneqq \operatorname{supp}\{\chi(t)\cdot F(\bm{x})\} \;,
\end{equation}
is what we will call \textit{interaction region} or, slightly abusing nomenclature, \textit{detector}. Its \textit{causal future/past} $\mathcal{J}^\pm(\mathcal{D})$ is the union of the future/past light cones of all its points and their interiors. In Minkowski spacetime\footnote{The metric in a coordinate system associated with inertial observers is $\eta_{\mu\nu}=\text{diag}(-1,1,\hdots,1)$, $\{x^0,\dots,x^{d}\}$ are the coordinates of the event $\mathsf{x}$, and $\mathsf{x}\cdot\mathsf{y}\coloneqq \eta_{\mu\nu}x^\mu y^\nu$.} $\mathcal{M}$,
\begin{equation}\label{causal future of the interaction region}
\mathcal{J}^{\pm}(\mathcal{D})\!:=\!\{\,\mathsf{y}\! \in\! \mathcal{M}\!:\!\exists \:\mathsf{x}\in\!\mathcal{D}\:,\: (\mathsf{y}-\mathsf{x})\cdot(\mathsf{y}-\mathsf{x}) \leq 0 \:,\: \pm(y^{0}-x^{0})\geq 0\} \;.
\end{equation}
The \textit{causal support} of $\mathcal{D}$ is the union of both its causal past and its causal future,
\begin{equation}
\mathcal{J}(\mathcal{D}):=\mathcal{J}^{+}(\mathcal{D})\cup\mathcal{J}^{-}(\mathcal{D}) \;.
\end{equation}
\section{The updated state of the field}\label{section: the updated state of the field}
In this section we will compute what is the update on the field state after an observable of the detector is measured by an experimenter through an idealized measurement. In other words, we will compute what is the POVM that is applied to update the field state if the detector is updated by a projective measurement. Although we will consider the most general case, a physical example of this kind of situations could be an experiment in the lab where we check whether the detector clicks (gets excited) or not (stays in the ground state) after interacting with an excited state of the electromagnetic field.
It is reasonable to consider that the detector and the field are initially uncorrelated. That is, in the absence of third parties\footnote{We will generalize to cases where the field is entangled with third parties in Section~\ref{section: generalization to the presence of entangled third parties}.}, the state of the system before the interaction is $\hat{\rho}=\hat{\rho}_{\textrm{d}} \otimes \hat{\rho}_{\phi}$. At time \mbox{$t=T$}, the experimenter performs a rank-one projective measurement \mbox{$P=\ket{s}\!\bra{s}$} of an arbitrary observable of the detector. One can think of the idealized measurement as performing a measurement of an observable of the detector, and of $\proj{s}{s}$ as being the eigenprojector associated to the obtained measurement outcome. For simplicity we assume that the measurement takes place after the interaction between the field and the detector has been switched off, that is, $T\geq t_\textrm{off}$. Then, after the projective measurement, the updated state of the joint system is~\cite{Nielsen2010}
\begin{equation}
\hat{\rho}^{P}=\frac{(P \otimes \mathds{1})\hat{U}\hat{\rho} \hat{U}^{\dagger}(P\otimes \mathds{1})}{\textrm{tr}\big[(P\otimes \mathds{1})\hat{U}\hat{\rho} \hat{U}^{\dagger}\big]} \;,
\end{equation}
where the unitary evolution operator $\hat{U}$ is given by
\begin{equation}\label{unitary}
\hat{U}=\mathcal{T}\exp(-\mathrm{i}\int_{-\infty}^{\infty}\!\!\!\!\mathrm{d} t'\, \hat{H}_I(t')) \;.
\end{equation}
The assumption that $\chi(t)=0$ for all $t\geq t_{\textrm{off}}$ allows us to safely extend the integration range to infinity. From now on, we will use the integral sign without specifying limits whenever the integral is carried out in the whole domain of the integrand. We thus have, for the updated state of the field,
\begin{equation}
\hat{\rho}^{P}_{\phi}= \tr_{\text{d}}(\hat{\rho}^{P}) \propto \textrm{tr}_{\text{d}}\big[(P\otimes\mathds{1})\hat{U}\hat{\rho} \hat{U}^{\dagger}(P\otimes \mathds{1})\big] \;,
\end{equation}
where $\tr_{\text{d}}(\cdot)$ stands for the partial trace over the Hilbert space of the detector. We note that the matrix elements of $\hat{\rho}^{P}_{\phi}$ satisfy that
\begin{equation}
\bra{\varphi_{1}}\hat{\rho}^{P}_{\phi} \ket{\varphi_{2}} \propto \bra{s,\varphi_1}\hat{U}\hat{\rho} \hat{U}^{\dagger} \ket{s,\varphi_2} \;.
\end{equation}
where $\ket{s,\varphi_i}\equiv\ket{s}\otimes\ket{\varphi_i}$ and $\ket{\varphi_i}$ is a vector in the field Hilbert space. From now on, we will use the following notation: if $\hat{\mathcal{O}}$ is an operator acting on the detector-field Hilbert space and $\ket{\psi_1},\ket{\psi_2} \in \mathcal{H}_\textrm{d}$ are detector states, then we will understand $\bra{\psi_1}\hat{\mathcal{O}}\ket{\psi_2}$ to be the field operator that satisfies
\begin{equation}\label{notation for sandwiched operators}
\bra{\varphi_1}\bra{\psi_1}\hat{\mathcal{O}}\ket{\psi_2}\ket{\varphi_2}=\bra{\psi_1,\varphi_1}\hat{\mathcal{O}}\ket{\psi_2,\varphi_2}
\end{equation}
for any field states $\ket{\varphi_1},\ket{\varphi_2}\in\mathcal{H}_\phi$. Finally, let us assume that the initial state of the detector is pure\footnote{This simplification can be easily dropped, and the result straightforwardly generalized, in the same way that happens with POVMs in non-relativistic quantum mechanics~\cite{Nielsen2010}.}, $\hat{\rho}_\textrm{d}=\proj{\psi}{\psi}$. Thus, using the convention in~\eqref{notation for sandwiched operators},
\begin{equation}
\bra{s,\varphi_1}\hat{U}\hat{\rho}\hat{U}^\dagger\ket{s,\varphi_2}=\bra{s}\hat{U}\ket{\psi}\hat{\rho}_\phi \bra{\psi}\hat{U}^\dagger\ket{s} \;.
\end{equation}
We can therefore write the updated state of the field for the projection over the state $\ket{s}$ of the detector as
\begin{equation}\label{updated state field}
\hat{\rho}^{s,\psi}_{\phi}=\frac{\hat{M}_{s,\psi}\hat{\rho}_{\phi}\hat{M}_{s,\psi}^{\dagger}}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}\hat{E}_{s,\psi}\big)} \;,
\end{equation}
where
\begin{equation}\label{M operator}
\hat{M}_{s,\psi}=\bra{s}\hat{U}\ket{\psi}
\end{equation}
is an operator acting on the field Hilbert space, and the POVM elements~\cite{Nielsen2010} are
\begin{equation}
\hat{E}_{s,\psi}=\hat{M}_{s,\psi}^{\dagger}\hat{M}_{s,\psi} \;.
\end{equation}
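As a quick consistency check, summing these elements over an orthonormal basis $\{\ket{s_i}\}$ of the detector Hilbert space and using the convention~\eqref{notation for sandwiched operators} confirms that they resolve the identity, as befits a POVM:

```latex
\begin{equation}
\sum_i \hat{E}_{s_i,\psi}
= \bra{\psi}\hat{U}^{\dagger}\Big(\sum_i \ket{s_i}\!\bra{s_i}\otimes\mathds{1}_\phi\Big)\hat{U}\ket{\psi}
= \bra{\psi}\hat{U}^{\dagger}\hat{U}\ket{\psi}
= \mathds{1}_{\phi} \;.
\end{equation}
```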
For our system, we can get a tractable expression for the $\hat{M}_{s,\psi}$ operators proceeding perturbatively in $\lambda$. First, the unitary $\hat{U}$ in \eqref{unitary} can be written as
\begin{equation}
\hat{U}=\mathds{1}+\sum_{n=1}^{\infty}\lambda^n \hat{U}^{(n)} \;.
\end{equation}
For the first two orders, substituting~\eqref{UDW Hamiltonian}, we have
\begin{equation}
\hat{U}^{(1)}=-\mathrm{i} \int\mathrm{d} t \,\mathrm{d}^{d}\spatial{x} \, \chi(t) F(\spatial{x}) \hat{\mu}(t) \hat \phi(t,\spatial{x})
\end{equation}
and
\begin{align}
\hat{U}^{(2)}=&-\int\mathrm{d} t \,\mathrm{d} t' \, \mathrm{d}^{d}\spatial{x} \, \mathrm{d}^{d}\spatial{x}'\, \theta(t-t')\chi(t)\chi(t') \\
& \times \, F(\spatial{x})F(\spatial{x}') \hat{\mu}(t)\hat{\mu}(t')\hat \phi(t,\spatial{x})\hat \phi(t',\spatial{x}') \;. \nonumber
\end{align}
As a result, we can apply the same expansion to $\hat{M}_{s,\psi}$,
\begin{equation}
\hat{M}_{s,\psi}=\hat{M}_{s,\psi}^{(0)}+\lambda\hat{M}_{s,\psi}^{(1)}+\lambda^2\hat{M}_{s,\psi}^{(2)}+\hdots \;,
\end{equation}
where we are denoting \mbox{$\hat{M}_{s,\psi}^{(n)}=\bra{s}\hat{U}^{(n)}\ket{\psi}$}. In particular,
\begin{align}
&\hat{M}_{s,\psi}^{(0)}=\braket{s}{\psi}\mathds{1}_{\phi}\;,\label{M operator order 0}\\
&\hat{M}_{s,\psi}^{(1)}=-\mathrm{i}\int \mathrm{d} t \,\mathrm{d}^{d}\spatial{x} \,\chi(t) F(\spatial{x}) \bra{s}\hat{\mu}(t)\ket{\psi}\hat \phi(t,\spatial{x})\;,\label{M operator order 1}\\
&\hat{M}_{s,\psi}^{(2)}=-\int \mathrm{d} t\,\mathrm{d} t'\, \mathrm{d}^{d}\spatial{x}\,\mathrm{d}^{d}\spatial{x}'\,\theta(t-t')\chi(t)\chi(t') \nonumber \\
&\qquad \times\, F(\spatial{x})F(\spatial{x}') \bra{s}\hat{\mu}(t)\hat{\mu}(t')\ket{\psi}\hat \phi(t,\spatial{x})\hat \phi(t',\spatial{x}') \;.\label{M operator order 2}
\end{align}
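Combining these expressions, a short computation (which follows directly from Eqs.~\eqref{M operator order 0} and~\eqref{M operator order 1}, and which we spell out for concreteness) gives the POVM element to first order in $\lambda$, linear in the field amplitude:

```latex
\begin{align}
\hat{E}_{s,\psi} = \,&|\!\braket{s}{\psi}\!|^{2}\,\mathds{1}_{\phi}
+ 2\lambda \int \mathrm{d} t \,\mathrm{d}^{d}\spatial{x}\, \chi(t) F(\spatial{x}) \nonumber \\
&\times \operatorname{Im}\!\big[\braket{\psi}{s}\!\bra{s}\hat{\mu}(t)\ket{\psi}\big]\,
\hat \phi(t,\spatial{x}) + \mathcal{O}(\lambda^{2}) \;.
\end{align}
```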
\section{Causal behaviour}\label{section: causal behaviour}
Once the form of the POVM that updates the state of the field~\eqref{updated state field} has been obtained, we want to analyze whether this update respects relativistic causality. In this section we will study whether the measurement defined in the previous section influences the field state outside of the causal future of the measurement.
Concretely, in order to understand the causal behaviour of the update of the field state that arises from performing a projective measurement \textit{on the detector}, we need to compare the updated state of the field $\hat{\rho}_{\phi}^\textrm{u}$ (post-measurement) and the initial state of the field $\hat{\rho}_{\phi}$ (pre-measurement) and see that there is no influence on the field state outside the causal future of the interaction region. Since the state of the field is fully characterized by its \mbox{$n$-point} functions, the analysis can be reduced to studying how the $n$-point functions change after the measurement process (consisting of \mbox{(i) interaction} with the detector, and---after switching off the interaction---\mbox{(ii) idealized} measurement of the detector and corresponding POVM update on the field) in the region that is spacelike separated from the detector.
Regarding the updated field state $\hat{\rho}_\phi^\textrm{u}$, note that the update given by Eqs.~\eqref{updated state field} and~\eqref{M operator} corresponds to a \textit{selective measurement}~\cite{Luders1951,Hellwig1970formal}---the measurement is performed and its outcome is checked, updating the state of the field accordingly. However, if an observer is spacelike separated from the detector, then they might know that the measurement was prearranged to be performed, but they cannot know the outcome of such measurement since information cannot be transmitted to them. Thus, from that observer's perspective, the update of the state has to be the one associated with a \textit{non-selective measurement}~\cite{Luders1951,Hellwig1970formal}---the state of the field is updated taking into account that the measurement has been carried out, but without knowing its outcome. This measurement model respects causality if the spacelike separated observer cannot tell with local operations whether the measurement was carried out or not, i.e. if the non-selective update does not impact the outcome of local operations performed outside the causal support of the measurement.
A non-selective measurement has to be understood as having made the projective measurement on the detector when the outcome is not made concrete. Therefore, to update the state we apply a convex mixture of all the projectors over all the possible proper subspaces associated with every potential outcome of the measurement, weighted by its associated probabilities given by Born's rule (see again \cite{Luders1951,Hellwig1970formal}).
Since we are considering a two-level Unruh-DeWitt detector, the most general non-selective projective measurement can be described by two complementary rank-one projections, $\ket{s}\!\bra{s}$ and $\ket{\bar{s}}\!\bra{\bar{s}}$, such that
\begin{equation}\label{s and r are a basis}
\mathds{1}_{\textrm{d}}-\ket{s}\!\bra{s}=\ket{\bar{s}}\!\bra{\bar{s}},
\end{equation}
where $\ket{s}$ and $\ket{\bar{s}}$ are two orthonormal vectors in the detector's Hilbert space, $\mathcal{H}_{\textrm{d}}$. The state of the field updated by a non-selective measurement can then be written as the mixture of the updates for each projection $\hat{\rho}_{\phi}^{s,\psi}$ and $\hat{\rho}_{\phi}^{\bar{s},\psi}$ given by \eqref{updated state field}, weighted by their respective probabilities, $\langle\hat{E}_{s,\psi}\rangle_{\hat{\rho}_{\phi}}$ and $\langle\hat{E}_{\bar{s},\psi}\rangle_{\hat{\rho}_{\phi}}$,
\begin{align}\label{non-selective}
\hat{\rho}_{\phi}^{\textsc{NS}}&=\langle\hat{E}_{s,\psi}\rangle_{\hat{\rho}_{\phi}}\hat{\rho}_{\phi}^{s,\psi}+\langle\hat{E}_{\bar{s},\psi}\rangle_{\hat{\rho}_{\phi}}\hat{\rho}_{\phi}^{\bar{s},\psi}\\
&=\hat{M}_{s,\psi}\hat{\rho}_{\phi}\hat{M}_{s,\psi}^{\dagger}+\hat{M}_{\bar{s},\psi}\hat{\rho}_{\phi}\hat{M}_{\bar{s},\psi}^{\dagger} \;. \nonumber
\end{align}
By~\eqref{M operator}, recalling the notation described in~\eqref{notation for sandwiched operators}, from~\eqref{non-selective},
\begin{align}\label{non-selective updated state as a trace}
\hat{\rho}_{\phi}^{\textsc{NS}}&=\bra{s}\hat{U}\ket{\psi}\hat{\rho}_\phi \bra{\psi}\hat{U}^\dagger\ket{s}+\bra{\bar{s}}\hat{U}\ket{\psi}\hat{\rho}_\phi \bra{\psi}\hat{U}^\dagger\ket{\bar{s}} \nonumber \\
&=\textrm{tr}_{\textrm{d}}\big[\hat{U}(\ket{\psi}\!\bra{\psi}\otimes \hat{\rho}_{\phi})\hat{U}^{\dagger}\big] \;,
\end{align}
where we have used~\eqref{s and r are a basis} to reduce the sum to a trace over the detector Hilbert space.
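To leading order in the coupling, this trace can be evaluated explicitly (a straightforward perturbative expansion of~\eqref{unitary}, included here for illustration), using that $\bra{\psi}\hat{\mu}(t)\ket{\psi}$ is real:

```latex
\begin{equation}
\hat{\rho}_{\phi}^{\textsc{NS}} = \hat{\rho}_{\phi}
- \mathrm{i}\lambda \int \mathrm{d} t \,\mathrm{d}^{d}\spatial{x}\, \chi(t) F(\spatial{x})
\bra{\psi}\hat{\mu}(t)\ket{\psi}
\big[\hat \phi(t,\spatial{x}),\hat{\rho}_{\phi}\big]
+ \mathcal{O}(\lambda^{2}) \;,
\end{equation}
```

so that, at this order, the change in the field state is generated entirely by field operators supported in $\mathcal{D}$, consistent with the causal analysis below.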
Let $\hat{A}$ be a field observable. Then we get that its expectation value for the non-selective update of the field state is
\begin{align}\label{A rho phi}
&\langle \hat{A} \rangle_{\hat{\rho}_{\phi}^{\textsc{NS}}}=\textrm{tr}_{\phi}\big[\textrm{tr}_{\textrm{d}}[\hat{U}(\ket{\psi}\!\bra{\psi}\otimes \hat{\rho}_{\phi})\hat{U}^{\dagger}]\hat{A}\big] \\
&=\textrm{tr}\big[\hat{U}(\ket{\psi}\!\bra{\psi}\otimes \hat{\rho}_{\phi})\hat{U}^{\dagger}\hat{A}\big]=\textrm{tr}\big[(\ket{\psi}\!\bra{\psi}\otimes \hat{\rho}_{\phi})\hat{U}^{\dagger}\hat{A}\hat{U}\big] \;, \nonumber
\end{align}
where \mbox{$\langle \hat{\mathcal{O}}\rangle_{\hat{\rho}}=\textrm{tr}\big( \hat{\rho} \hat{\mathcal{O}} \big)$} as usual. We have used the cyclic property of the trace, and we have denoted the detector-field operator $\mathds{1}_\textrm{d}\otimes\hat{A}$ simply as $\hat{A}$, omitting the identity on the detector. Now, taking into account the form of the UDW interaction Hamiltonian~\eqref{UDW Hamiltonian} and the unitary evolution operator~\eqref{unitary}, if $\hat{A}$ is a field observable supported outside the causal support of the interaction region, then microcausality ensures that
\begin{equation}
[\hat{A},\hat \phi(t,\bm{x})]=0
\end{equation}
for every $(t,\bm{x}) \in \mathcal{D}$, and therefore
\begin{equation}
[\hat{A},\hat{U}]=0 \;.
\end{equation}
This means that Eq.~\eqref{A rho phi} yields
\begin{align}\label{non-selective and initial are the same when spacelike}
&\langle \hat{A} \rangle_{\hat{\rho}_\phi^\textsc{NS}}=\textrm{tr}_\phi\big(\hat{\rho}_\phi^\textsc{NS}\hat{A}\big)=\textrm{tr}\big[ (\proj{\psi}{\psi}\otimes\hat{\rho}_\phi)\hat{U}^\dagger \hat{A} \hat{U} \big] \\
&=\textrm{tr}\big[ (\proj{\psi}{\psi}\otimes\hat{\rho}_\phi)\hat{U}^\dagger \hat{U} \hat{A} \big]=\textrm{tr}_\phi\big(\hat{\rho}_\phi\hat{A}\big)=\langle \hat{A} \rangle_{\hat{\rho}_\phi} \;. \nonumber
\end{align}
We conclude that the non-selective POVM does not affect the expectation value of any local observable outside the causal influence region of the detector. Of particular importance is the case when we take \mbox{$\hat{A}=\hat \phi(t_1,\bm{x}_1)\cdots\hat \phi(t_n,\bm{x}_n)$}, with all $(t_1,\bm{x}_1),\hdots,(t_n,\bm{x}_n)$ outside the causal support of the interaction region, i.e. spacelike separated from the interaction region. Then Eq.~\eqref{non-selective and initial are the same when spacelike} allows us to conclude that the corresponding $n$-point functions do not change under the non-selective update.
Therefore, if we are in the regimes where causality is respected by the coupling between detector and field (e.g., pointlike detectors in any scenario or spatially smeared detectors in the scales identified in~\cite{Pipo2021}), the projective measurement performed \textit{on the detector} is safe from any causality issues\footnote{For the sake of brevity, we will not restate the conditions under which particle detector models behave causally and will just state that the particle detector models are causal. The facts that should be acknowledged throughout this manuscript are that a pointlike detector is fully causal, that its quantum dynamics is non-singular (when switched on carefully) and that the causality violations (if any) of the model come through (well-controlled) smearing scales.} of the kind exposed in ``Impossible measurements on quantum fields''~\cite{Sorkin1993}. Moreover, Eq.~\eqref{A rho phi} also shows that, for the non-selective update, expectation values of arbitrary observables only depend on the joint state of the field and the detector after the interaction, and not on the measurement performed on the detector. Indeed, the non-selective measurement eliminates the entanglement that the detector and the field acquired through the interaction, but it does not change the partial state of the field, as Eq.~\eqref{non-selective updated state as a trace} explicitly shows. This is of course a physically reasonable feature of the update: we have specified that the measurement is performed after the interaction is switched off, but it could be some arbitrary amount of time after this. The physical change of the field state is due only to the physical coupling between the detector and the field, and not to the fact that we decide to perform a projective measurement on the detector after this interaction.
This is because the projective measurement acts only on the detector once the interaction has been switched off, and it does not provide additional information since being non-selective the outcome is not known. This important interpretational point will be revisited when we consider the update rule for selective measurements, where the state of the field has to be updated consistently with the concrete outcome of the measurement.
\section{The update rule}\label{section: the update rule}
In the previous section we have shown that the process of measuring a quantum field through locally coupling an Unruh-DeWitt detector and then carrying out an idealized measurement on the detector---which corresponds to a field state update with the appropriate (non-selective) POVM---does not introduce causality violations. We are now in a position to build an update rule for the field when we assume that the experimenter knows the concrete outcome of the idealized measurement carried out on the detector, which is akin to considering what is the field state update induced by a selective measurement on the detector after the detector finished interacting with the field.
\subsection{Issues of a non-contextual update}
Prescribing an update rule for selective measurements in a way that is compatible with the relativistic nature of QFT requires more care than in regular quantum mechanics. An update rule for selective measurements based on particle detectors should:
\begin{itemize}
\item[(1)]{Include the knowledge of the measurement outcome in the description of the field and implement the compatibility between measurements that are sequentially applied to the field, in the spirit of L\"uders rule~\cite{Luders1951}.}
\item[(2)]{Be compatible with relativity.}
\end{itemize}
To guarantee that condition (1) is fulfilled, it is necessary to use the update of the state of the field given by~\eqref{updated state field}. However, in Appendix~\ref{appendix: causal behaviour of the selective update} we show that this update cannot be applied outside the causal future of the detector in a way consistent with relativity. Hence, we see that any non-contextual update (i.e. an update where one gives the density operator a global nature and its change affects all observers regardless of whether they are in causal contact with the detector or not) cannot satisfy conditions (1) and (2) simultaneously. To bypass this difficulty, a first attempt would be to prescribe that the selective update given by~\eqref{updated state field} should only be used in the causal future of the measurement. This prescription, however, is ill-defined, since the density operator does not naturally depend on points of the spacetime manifold. In particular, this kind of prescription does not provide a way to calculate arbitrary $n$-point functions: naively looking at the formula $w_n(\mathsf{x},\mathsf{x}',\hdots)=\textrm{tr}_{\phi}\big(\hat\sigma_{\phi} \hat \phi(\mathsf{x})\hat \phi(\mathsf{x'})\cdots\big)$---where $\hat\sigma_\phi$ is an arbitrary field state---it is not clear which density matrix $\hat{\sigma}_\phi$ we should use when considering points $\mathsf{x},\mathsf{x}'$ in regions of spacetime with different updates.
We conclude that a non-contextual update that includes the information obtained from a selective measurement performed on the detector is at odds with relativistic causality. Instead, in order to satisfy conditions (1) and (2) we must partially give up on the physical significance of density operators \mbox{$\hat{\rho}_{\phi}$ and $\hat{\rho}_{\phi}^\textrm{u}$} as representatives of observer-independent field states and simply treat them as states of information about the field (much like it is done in quantum informational approaches to the measurement problem in quantum mechanics~\cite{Wheelerchapter1977,Wheelerchapter1996,Zehchapter2004,Spekkens2007,Leifer2014,Brukner2015}). This is precisely what the next subsection focuses on.
\subsection{A contextual update rule}
As we just concluded, to properly formulate an update rule that is compatible with relativity we need to consider the field density operators to be observer-dependent. In particular, we propose that the update depends on the context of the observer, i.e. the information available to them according to their position in spacetime. Moreover, because it depends on the observer, once they receive information about a measurement the update only takes place inside their causal future. It is perhaps interesting to remark here on the distinction between \textit{the measurement}, which is performed by the experimenter, and \textit{the update}, which is performed by each observer according to the information they have about the field. It is in this sense that we say that the update is observer-dependent. As such, when we write that a certain observer updates their field state, we mean that they are updating their information about the field and changing the field density operator that describes the field state for them, without acting upon the field in any way whatsoever. This operational approach can be summarized as follows:
\begin{enumerate}
\item{After an experimenter provided with a detector performs a projective measurement on the detector, they can be aware or not of the outcome of the measurement. If they are not, they apply the non-selective update \eqref{non-selective}; if they are, they apply the selective update \eqref{updated state field}. Both updates take place in the causal future of the experimenter.}
\item{If an observer is spacelike separated from the interaction region, at most they can be aware of the measurement being performed, but they do not have access to the outcome of the measurement. Their update, if anything, should be non-selective, and we have already seen that it does not have any effect on the outcome of spacelike separated operations. Put plainly, the spacelike separated observer does not have to take into account at all that a measurement has been performed. As is desirable in a relativistic measurement theory, spacelike separated operations do not affect each other.}
\item{If an observer becomes aware of the performance of a measurement, not necessarily carried out by them, they update their field state accordingly with the same update rules \eqref{updated state field} or \eqref{non-selective}, depending on whether they have the information about the outcome or not.}
\item To update the $n$-point functions we need to take into account whether the information about the measurements is going to be accessible to future observers, and as such, the $n$-point functions will only be non-trivially updated for observers in the causal futures of the sets where the measurements have spacetime support, as will be addressed in depth in Section~\ref{section: update of n-point functions}.
\end{enumerate}
This update rule respects causality by fiat, and its consistency for spacelike separated observers is guaranteed by the fact that the non-selective update is causal. However, since it only updates the state in the causal future of the detector, one could legitimately wonder if the measurement prescription takes into account the correlations present in spacelike separated regions of the field that are well-known to exist~\cite{ReehSchlieder,Redhead1995,Summers1985,Summers1987}. Condition 3 tells us how to proceed in order to ensure that this is the case. Consider two experimenters, Alba and Blanca, each provided with their own detector. The initial state of Alba's detector is \mbox{$\hat{\rho}_{\textsc{a}}=\ket{\xi}\!\bra{\xi}$}, while Blanca's is \mbox{$\hat{\rho}_{\textsc{b}}=\ket{\zeta}\!\bra{\zeta}$}. While being spacelike separated, they perform measurements, i.e. 1)~they couple their detectors to the field and 2)~after switching off the interaction they perform a projective measurement on the detectors and update their field states selectively with the information obtained in their local measurements. Alba gets a result associated to state $\ket{a}$ of her detector, while Blanca gets another associated to $\ket{b}$. Their corresponding updates are
\begin{equation}
\hat{\rho}_{\phi}^{\textsc{a}}=\frac{\hat{M}_{a,\xi}\hat{\rho}_{\phi}\hat{M}_{a,\xi}^{\dagger}}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}\hat{E}_{a,\xi}\big)} \quad \textrm{and}\quad \hat{\rho}_{\phi}^{\textsc{b}}=\frac{\hat{M}_{b,\zeta}\hat{\rho}_{\phi}\hat{M}_{b,\zeta}^{\dagger}}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}\hat{E}_{b,\zeta}\big)} \;.
\end{equation}
In the future, they eventually meet and inform each other of their results. Their final updates based on the exchanged information are as follows: for Alba,
\begin{equation}\label{rho AlbaBlanca}
\hat{\rho}_{\phi}^{\textsc{ab}}=\frac{\hat{M}_{b,\zeta}\hat{\rho}_{\phi}^{\textsc{a}}\hat{M}_{b,\zeta}^{\dagger}}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}^{\textsc{a}}\hat{E}_{b,\zeta}\big)}=\frac{\hat{M}_{b,\zeta}\hat{M}_{a,\xi}\hat{\rho}_{\phi}\hat{M}_{a,\xi}^{\dagger}\hat{M}_{b,\zeta}^{\dagger}}{\textrm{tr}_{\phi}\big( \hat{\rho}_{\phi}\hat{E}_{a,\xi} \big)\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}^{\textsc{a}}\hat{E}_{b,\zeta}\big)} \;,
\end{equation}
while for Blanca
\begin{equation}\label{rho BlancaAlba}
\hat{\rho}_{\phi}^{\textsc{ba}}=\frac{\hat{M}_{a,\xi}\hat{\rho}_{\phi}^{\textsc{b}}\hat{M}_{a,\xi}^{\dagger}}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}^{\textsc{b}}\hat{E}_{a,\xi}\big)}=\frac{\hat{M}_{a,\xi}\hat{M}_{b,\zeta}\hat{\rho}_{\phi}\hat{M}_{b,\zeta}^{\dagger}\hat{M}_{a,\xi}^{\dagger}}{\textrm{tr}_{\phi}\big( \hat{\rho}_{\phi}\hat{E}_{b,\zeta} \big)\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}^{\textsc{b}}\hat{E}_{a,\xi}\big)} \;.
\end{equation}
Now, taking into account the form of the $\hat{M}$ operators~\eqref{M operator}, in terms of the unitary~\eqref{unitary} and therefore the Hamiltonian~\eqref{UDW Hamiltonian}, it is straightforward to prove that if Alba's and Blanca's measurements are carried out in spacelike separated regions, then
\begin{equation}\label{Ms commute!!}
[\hat{M}_{a,\xi},\hat{M}_{b,\zeta}]=[\hat{M}_{a,\xi},\hat{M}_{b,\zeta}^{\dagger}]=0 \;.
\end{equation}
This means that the numerators in \eqref{rho AlbaBlanca} and \eqref{rho BlancaAlba} are the same. Since the denominators are normalization factors, we first conclude that the updates are consistent. Once they meet they have the same information, and indeed it holds that
\begin{equation}\label{consistency of updates}
\hat{\rho}_{\phi}^{\textsc{ab}}=\hat{\rho}_{\phi}^{\textsc{ba}} \;.
\end{equation}
Moreover, by \eqref{Ms commute!!}
\begin{equation}
\textrm{tr}_{\phi}\big(\hat{M}_{b,\zeta}\hat{M}_{a,\xi}\hat{\rho}_{\phi}\hat{M}_{a,\xi}^{\dagger}\hat{M}_{b,\zeta}^{\dagger}\big)=\textrm{tr}_{\phi}\big( \hat{\rho}_{\phi}\hat{E}_{a,\xi}\hat{E}_{b,\zeta} \big)
\end{equation}
so that we can write
\begin{equation}\label{correlations are there}
\textrm{tr}_{\phi}\big( \hat{\rho}_{\phi}^{\textsc{a}}\hat{E}_{b,\zeta} \big)=\frac{\textrm{tr}_{\phi}\big( \hat{\rho}_{\phi}\hat{E}_{a,\xi}\hat{E}_{b,\zeta} \big)}{\textrm{tr}_{\phi}\big( \hat{\rho}_{\phi}\hat{E}_{a,\xi} \big)} \neq \textrm{tr}_\phi\big( \hat{\rho}_\phi \hat{E}_{b,\zeta} \big) \;.
\end{equation}
For a POVM, the probability of getting an outcome $r$ from a generic state $\hat{\sigma}_\phi$ is the trace $\textrm{tr}_{\phi}(\hat{\sigma}_\phi \hat{E}_{r})$, where $\hat{E}_{r}$ is the POVM element associated to the outcome $r$~\cite{Nielsen2010}. This means that \eqref{correlations are there} displays the correlations between the measurements due to the initial correlations in the field state. Indeed, Eq.~\eqref{correlations are there} can be read in terms of probabilities as
\begin{align}
\textrm{Prob}(& \textrm{Blanca gets }b \;|\; \textrm{Alba gets }a ) \nonumber \\
&\phantom{==}=\frac{\textrm{Prob}(\textrm{Alba gets $a$ and Blanca gets $b$})}{\textrm{Prob}(\textrm{Alba gets }a)} \\
&\phantom{==}\neq \textrm{Prob}(\textrm{Blanca gets }b) \;, \nonumber
\end{align}
where the first equality is the formula for conditional probability, and in particular shows that Alba's and Blanca's outcomes are not independent.
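The consistency of the updates, Eq.~\eqref{consistency of updates}, and the correlations displayed in Eq.~\eqref{correlations are there} can be checked in a two-qubit caricature of the field. The Kraus operators below are illustrative choices of ours (not derived from the UDW model), supported on disjoint tensor factors as the analogue of spacelike separated interaction regions, with their POVM elements bounded by the identity.

```python
import numpy as np

I2 = np.eye(2)

# Kraus operators for Alba and Blanca on disjoint tensor factors
Ka = np.array([[0.9, 0.0], [0.2, 0.4]])
Kb = np.array([[0.7, 0.1], [0.2, 0.6]])
Ma, Mb = np.kron(Ka, I2), np.kron(I2, Kb)
assert np.allclose(Ma @ Mb, Mb @ Ma)   # analogue of Eq. (Ms commute!!)

# a correlated "field" state: the Bell state (|00> + |11>)/sqrt(2)
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(bell, bell)

def update(rho, M):
    # selective update rho -> M rho M† / tr(M rho M†)
    out = M @ rho @ M.conj().T
    return out / np.trace(out)

# updates applied in either order agree: rho^AB = rho^BA
rho_ab = update(update(rho, Ma), Mb)
rho_ba = update(update(rho, Mb), Ma)
assert np.allclose(rho_ab, rho_ba)

# conditional vs unconditional probability of Blanca's outcome
Ea, Eb = Ma.conj().T @ Ma, Mb.conj().T @ Mb
p_b = np.trace(rho @ Eb).real
p_b_given_a = np.trace(rho @ Ea @ Eb).real / np.trace(rho @ Ea).real
assert not np.isclose(p_b, p_b_given_a)   # the outcomes are correlated
```

The last assertion fails for a product state $\hat{\rho}_\phi$, confirming that the inequality in Eq.~\eqref{correlations are there} is traced back to the initial correlations in the field.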
We conclude that the proposed update rule respects causality, is consistent for spacelike separated measurements (and trivially for timelike separated measurements), and accurately accounts for spacelike correlations. Therefore it is a suitable contextual rule for updating the state of the field after measuring with detectors in QFT.
It is remarkable that the proposed update rule, for a particular observer, is somewhat similar in its structure to that proposed by Hellwig and Kraus~\cite{Hellwig1970formal}. The formalism proposed in our work, however, relies on particle detectors instead of on local projections, establishing a direct connection with experiments~\cite{Edu2013,Rodriguez2018,Lopp2020,Pozas2016}.
It should be noted that by giving up on density operators as global descriptions of the field state we are displacing the focus from the Hilbert space description to another based on what experimenters measure and the correlations between the possible measurements. This is precisely the approach adopted in algebraic quantum field theory (see e.g.~\cite{Haag1996,Hollands2015,AdvancesAQFT,Fewster2019AQFT}), where the \textit{algebraic state} is interpreted to be the complex linear form that associates to each observable\footnote{As an element of the direct limit of the net of local algebras~\cite{Fewster2019AQFT}.} its expectation value. Indeed, the contextual update rule described above should be given just in terms of updated $n$-point functions, as we will show in the next section.
\section{Update of n-point functions}\label{section: update of n-point functions}
In a free quantum field theory, the state of the field can be described in two interchangeable ways: either by a density operator in some particular Hilbert space representation or, equivalently, by the set of the field $n$-point functions. However, in Section~\ref{section: the update rule} we have argued that there are serious difficulties in applying a selective update to a field density operator, because of its incompatibility with a context-independent description. Fortunately, the formalism of $n$-point functions is still adequate for describing the contextual update rule proposed in the previous section. In the present section, we will formulate this update rule explicitly in terms of the $n$-point functions that fully characterize the state of the field. The $n$-point functions directly appear in the most common expressions for the response of particle detectors (see e.g.~\cite{Takagi1986,Louko2006,Satz2007} among others), so an update rule for all the $n$-point functions not only fully characterizes the updated state but is also of practical interest for any calculations involving particle detectors.
Notice that for the update after the measurement to be given \textit{just} as an update of $n$-point functions, we need to initially assume that in our particular experiment the only relevant systems are the field and the detector. If the field is entangled with a third party in the past of the detector, we will assume for now that this third party will not be addressed in this measurement experiment, leaving the more complicated case for future sections\footnote{This initial assumption can indeed be relaxed: treated with some care, the update rule for $n$-point functions which we are about to formulate can also be used in arbitrarily general scenarios. The reason is that the scheme given in Section~\ref{section: the update rule} applies to arbitrary states $\hat{\rho}_{\phi}$ that may be extended to include third-party systems in addition to the field. We will show how this more general scenario can be straightforwardly dealt with in Section~\ref{section: generalization to the presence of entangled third parties}. }.
For this section, this simplifying constraint will allow us to ``forget'' about the causal past of the measurement and define the update only in the region of spacetime outside of it.
In the spirit of the discussion in the previous section, we will distinguish whether the measurement performed on the detector is non-selective or selective.
\subsection{Non-selective update}\label{subsection: non-selective update}
The non-selective update is straightforward to implement from the state update in Section~\ref{section: causal behaviour}. Since, as we showed, non-selective updates do not affect the state in regions spacelike separated from the measurement, there is no need to prescribe different updates depending on whether the arguments are in the causal future of the detector or in the spacelike separated region. Hence, the updated $n$-point function is
\begin{align}\label{non-selective updated n-point function non-perturbative}
w_{n}^{\textsc{NS}}(\mathsf{x}_1,\hdots,\mathsf{x}_n)&=\textrm{tr}_{\phi}\big( \hat{\rho}_{\phi}^\textrm{u}\hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n) \big) \nonumber \\
&=\langle \hat{M}_{s,\psi}^{\dagger}\hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n)\hat{M}_{s,\psi}\rangle_{\hat{\rho}_{\phi}}\\
&\phantom{==}+\langle \hat{M}_{\bar{s},\psi}^{\dagger}\hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n)\hat{M}_{\bar{s},\psi}\rangle_{\hat{\rho}_{\phi}} \nonumber
\end{align}
for every $\mathsf{x}_1,\hdots,\mathsf{x}_n \in \R{4}$ outside the causal past of the interaction region. This update can be given explicitly in terms of the $n$-point functions of the initial state of the field. In particular, for the one-point function and to first order in $\lambda$,
\begin{align}\label{non-selective updated one-point function}
&w_{1}^{\textsc{NS}}(t_1,\bm{x}_1)=w_{1}(t_1,\bm{x}_1) \nonumber\\
&\;+2\lambda\int\mathrm{d} t\,\mathrm{d}^{d}{\bm{x}}\,\chi(t)F(\bm{x})\bra{\psi}\hat{\mu}(t)\ket{\psi}\\
&\phantom{=========}\times\Im(w_{2}(t_1,\bm{x}_1,t,\bm{x})) \nonumber\\
&\;+O(\lambda^2) \nonumber
\end{align}
where $w_{n}$ is the $n$-point function of the initial state of the field $\hat{\rho}_{\phi}$. Analogously, for the two-point function,
\begin{align}\label{non-selective updated two-point function}
&w_{2}^{\textsc{NS}}(t_1,\bm{x}_1,t_2,\bm{x}_2)=w_{2}(t_1,\bm{x}_1,t_2,\bm{x}_2) \nonumber\\
&\;+\mathrm{i}\lambda\int\mathrm{d} t\,\mathrm{d}^{d}{\bm{x}}\,\chi(t)F(\bm{x})\bra{\psi}\hat{\mu}(t)\ket{\psi}\\
&\;\times \big( w_3(t,\bm{x},t_1,\bm{x}_1,t_2,\bm{x}_2)-w_3(t_1,\bm{x}_1,t_2,\bm{x}_2,t,\bm{x}) \big) \nonumber\\[2mm]
&\;+O(\lambda^2) \;. \nonumber
\end{align}
And in general,
\begin{align}\label{non-selective updated n-point function}
&w_{n}^{\textsc{NS}}(t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n)=w_{n}(t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n) \nonumber \\
&\;+\mathrm{i}\lambda\int\mathrm{d} t\,\mathrm{d}^{d}{\bm{x}}\,\chi(t)F(\bm{x})\bra{\psi}\hat{\mu}(t)\ket{\psi} \nonumber\\
&\;\times \big( w_{n+1}(t,\bm{x},t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n)\\[2mm]
&\phantom{=======}-w_{n+1}(t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n,t,\bm{x}) \big) \nonumber \\[2mm]
&\;+O(\lambda^2) \;. \nonumber
\end{align}
The details of these calculations can be seen in Appendix~\ref{appendix: update rules for n-point functions}. The second order terms in $\lambda$ for the previous perturbative expressions are also displayed in Eq.~\eqref{non-selective updated n-function up to second order in lambda} of Appendix~\ref{appendix: update rules for n-point functions}. It is worth remarking that microcausality ensures that Eqs.~\eqref{non-selective updated one-point function}, \eqref{non-selective updated two-point function} and \eqref{non-selective updated n-point function} reduce to the unchanged $n$-point function whenever their arguments are outside the causal future of the detector, since in that case \mbox{$[\hat \phi(t,\bm{x}),\hat \phi(t_j,\bm{x}_j)]=0$} for every $j\in\{1,\hdots,n\}$ and therefore
\begin{align}
&w_{n+1}(t,\bm{x},t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n) \nonumber\\
&\phantom{===}=\textrm{tr}_\phi\big(\hat{\rho}_\phi \hat \phi(t,\bm{x})\hat \phi(t_1,\bm{x}_1)\cdots\hat \phi(t_n,\bm{x}_n)\big) \\
&\phantom{===}=\textrm{tr}_\phi\big(\hat{\rho}_\phi \hat \phi(t_1,\bm{x}_1)\cdots\hat \phi(t_n,\bm{x}_n)\hat \phi(t,\bm{x})\big) \nonumber\\
&\phantom{===}=w_{n+1}(t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n,t,\bm{x}) \;.\nonumber
\end{align}
\subsection{Selective update}\label{subsection: selective update}
For selective measurements, we will first present the update for the one-point function. Second, we will consider the two-point function, which involves a few subtleties that deserve attention. And finally, with the one-point and two-point functions as landmarks, we will generalize the update scheme to $n$-point functions. As before, the details of the calculations (as well as results at higher orders in the coupling strength) can be found in Appendix~\ref{appendix: update rules for n-point functions}.
\subsubsection{One-point function}\label{subsubsection: one-point function}
As shown in Appendix~\ref{appendix: causal behaviour of the selective update}, when dealing with selective measurements the update cannot be applied outside the causal future of the detector. Moreover, it should be noticed that, in full rigour, and unlike for non-selective measurements, the selective update does depend on the region of spacetime in which the projective measurement on the detector is performed, whose causal future we shall denote $\mathcal{P}$. Therefore we have to distinguish three cases depending on the argument \mbox{$\mathsf{x}_1\in\mathcal{M}$} of the one-point function:
\begin{itemize}
\item[(a)] If $\mathsf{x}_1\in\mathcal{P}$, then we should consider the state of the field to be updated by the selective rule \eqref{updated state field}.
\item[(b)] If $\mathsf{x}_1\in\mathcal{J}^{+}(\mathcal{D})\setminus\mathcal{P}$, with $\mathcal{J}^+(\mathcal{D})$ being the causal future of the interaction region\footnote{Note that, since the projective measurement on the detector is performed in the causal future of the interaction region, \mbox{$\mathcal{P}\subset\mathcal{J}^+(\mathcal{D})$}.} as defined in~\eqref{causal future of the interaction region}, then we only have to take into account the interaction, which as shown in~\eqref{non-selective updated state as a trace} yields the same update as the non-selective rule~\eqref{non-selective}.
\item[(c)] If $\mathsf{x}_1$ is spacelike separated from $\mathcal{D}$ (i.e. it is outside the causal support of the interaction region, \mbox{$\mathsf{x}_1\notin\mathcal{J}(\mathcal{D})$}), then we should use the initial state of the field, or equivalently the non-selective update.
\end{itemize}
However, we saw in Subsection~\ref{subsection: non-selective update} that the non-selective update can be safely applied to spacelike separated regions. Therefore, we can consider cases (b) and (c) jointly when prescribing the update rule. All together,
\begin{equation}\label{selective updated one-point function def}
w_{1}^{\textsc{S}}(\mathsf{x}_1)=
\begin{dcases*}
\frac{\langle \hat{M}_{s,\psi}^{\dagger}\hat \phi(\mathsf{x}_1)\hat{M}_{s,\psi} \rangle_{\hat{\rho}_{\phi}}}{\langle \hat{E}_{s,\psi}\rangle_{\hat{\rho}_{\phi}}} & if $\mathsf{x}_1\in\mathcal{P}$, \\[1mm]
w_{1}^{\textsc{NS}}(\mathsf{x}_1) & otherwise.\\
\end{dcases*}
\end{equation}
Note that all expectation values are calculated for the initial state of the field, $\hat{\rho}_{\phi}$. For the case in which $\mathsf{x}_1\in\mathcal{P}$, we have used~\eqref{updated state field} and the cyclic property of trace. Therefore, the update can be given in terms of the $n$-point functions of the initial state of the field. In particular, if $\braket{s}{\psi}\neq 0$, to first order in $\lambda$,
\begin{align}\label{selective update one-point function perturbative}
&w_{1}^{\textsc{S}}(t_{1},\bm{x}_{1})= w_{1}(t_{1},\bm{x}_{1})+\frac{2\lambda}{|\!\braket{s}{\psi}\!|^2}\int\mathrm{d} t\,\mathrm{d}^{d} \bm{x}\,\chi(t) \nonumber\\
&\;\times F(\bm{x})\,\textrm{Im}\big[\braket{\psi}{s}\bra{s}\hat{\mu}(t)\ket{\psi} \big( w_{2}(t_1,\bm{x}_1,t,\bm{x}) \\[1mm]
&\;\;\;\;\;-w_1(t_1,\bm{x}_1)w_{1}(t,\bm{x})\big) \big]+O(\lambda^2) \nonumber
\end{align}
whenever $(t_1,\bm{x}_1)\in\mathcal{P}$. The more cumbersome case in which $\braket{s}{\psi}=0$ is displayed in Eq.~\eqref{selective update n-point function orthogonal} of Appendix~\ref{appendix: update rules for n-point functions}, along with the case $\braket{s}{\psi}\neq 0$ up to order 2 in $\lambda$, that can be seen in Eq.~\eqref{second order of selective update n-point functions non-orthogonal}.
\subsubsection{Two-point function}\label{subsubsection: two-point function}
To prescribe the update of the two-point function we also need to distinguish different cases. In the same spirit as the prescription for the one-point function, when both arguments $\mathsf{x}_1,\mathsf{x}_2 \in \mathcal{P}$ are in the causal future of the projective measurement, we consider the field state to be updated by~\eqref{updated state field}, while if both $\mathsf{x}_1,\mathsf{x}_2$ are outside $\mathcal{P}$, the information of the measurement cannot propagate to those points and therefore we should use the non-selective update of the field state to calculate the expectation value. However, what should we do in a mixed situation (e.g. if $\mathsf{x}_1 \in \mathcal{P}$ and $\mathsf{x}_2\notin\mathcal{P}$)?
First, note that the two-point function is a non-local object that is only relevant in non-local experiments (for example, coordinating several labs around the world, or an interaction that is extended in space). However, it is only pertinent to ask about the result of a non-local experiment if we assume that the information obtained by the different measurements can be combined in a ``processing'' region\footnote{Notice the similarity with the notion of processing region introduced in~\cite{Ruep2021}.} that intersects the causal futures of all the experiments. It is reasonable then that when the two-point function has mixed arguments inside and outside $\mathcal{P}$, the information about the outcome of the selective measurement is accessible to the processing region, as it has to have a non-zero intersection with $\mathcal{P}$. Hence, as long as one of the points of the two-point function is inside $\mathcal{P}$ we must use the selective update of the field state.
This is consistent with treating the field state as a state of information about the field. To update the field in accordance with the outcome of a measurement, we need to look at where in spacetime the information obtained in the measurement can be accessed. Conversely, if an observer never accesses the causal future of a region in spacetime, it does not make sense for them to ask about the correlations between the field in that region and the region they have access to\footnote{For example, if two observers never get to communicate, directly or indirectly, any question involving the correlations between their operations lacks physical meaning.}. All this considered, we shall prescribe the selective update for the two-point function as
\begin{equation}\label{selective updated two-point function def}
w_{2}^{\textsc{S}}(\mathsf{x}_1,\mathsf{x}_2)\!=\!
\begin{dcases*}
\frac{\langle \hat{M}_{s,\psi}^{\dagger}\hat \phi(\mathsf{x}_1)\hat \phi(\mathsf{x}_2)\hat{M}_{s,\psi} \rangle_{\hat{\rho}_{\phi}}}{\langle \hat{E}_{s,\psi}\rangle_{\hat{\rho}_{\phi}}} & if $\mathsf{x}_1$ or $\mathsf{x}_2\in\mathcal{P}$, \\
w_{2}^{\textsc{NS}}(\mathsf{x}_1,\mathsf{x}_2) & otherwise.\\
\end{dcases*}
\end{equation}
Again, the update can be given in terms of the $n$-point functions of the initial state of the field. In particular, if $\braket{s}{\psi}\neq 0$, to first order in $\lambda$,
\begin{align}\label{selective update two-point function perturbative}
&w_{2}^{\textsc{S}}(t_1,\bm{x}_1,t_2,\bm{x}_2)=w_{2}(t_1,\bm{x}_1,t_2,\bm{x}_2) \nonumber\\
& +\frac{\lambda}{|\!\braket{s}{\psi}\!|^2}\int\mathrm{d} t\,\mathrm{d}^{d} \bm{x} \,\chi(t)F(\bm{x}) \nonumber \\
&\times\Big( \mathrm{i}\braket{s}{\psi}\!\bra{\psi}\hat{\mu}(t)\ket{s} w_3(t,\bm{x},t_1,\bm{x}_1,t_2,\bm{x}_2)\\[0.5mm]
&-\mathrm{i}\braket{\psi}{s}\!\bra{s}\hat{\mu}(t)\ket{\psi}w_3(t_1,\bm{x}_1,t_2,\bm{x}_2,t,\bm{x}) \nonumber\\
&-2\Im(\braket{\psi}{s}\!\bra{s}\hat{\mu}(t)\ket{\psi})w_{2}(t_1,\bm{x}_1,t_2,\bm{x}_2)w_1(t,\bm{x}) \Big) \nonumber\\
&+O(\lambda^2) \nonumber
\end{align}
whenever $(t_1,\bm{x}_1) \in \mathcal{P}$ or $(t_2,\bm{x}_2) \in \mathcal{P}$ (or both). As before, the case in which $\braket{s}{\psi}=0$ and the second order term of the previous expression are left to be displayed in Appendix~\ref{appendix: update rules for n-point functions}.
\subsubsection{n-point function}\label{subsubsection: n-point function}
The arguments given to justify the prescription for the two-point function immediately generalize to arbitrary \mbox{$n$-point} functions, for which the selective update is
\begin{equation}\label{selective updated n-point function def 1}
w_{n}^{\textsc{S}}(\mathsf{x}_1,\hdots,\mathsf{x}_n)=w_{n}^{\textsc{NS}}(\mathsf{x}_1,\hdots,\mathsf{x}_n)
\end{equation}
if all $\mathsf{x}_1,\hdots,\mathsf{x}_n$ are outside $\mathcal{P}$, and
\begin{equation}\label{selective updated n-point function def 2}
w_{n}^{\textsc{S}}(\mathsf{x}_1,\hdots,\mathsf{x}_n)= \frac{\langle \hat{M}_{s,\psi}^{\dagger}\hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n)\hat{M}_{s,\psi} \rangle_{\hat{\rho}_{\phi}}}{\langle \hat{E}_{s,\psi}\rangle_{\hat{\rho}_{\phi}}}
\end{equation}
otherwise. Once again, the update can be given in terms of the $n$-point functions of the initial state of the field, and in particular, if $\braket{s}{\psi}\neq 0$, to first order in $\lambda$,
\begin{align}\label{selective update n-point function perturbative}
&w_{n}^{\textsc{S}}(t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n)=w_{n}(t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n)\\
& +\frac{\lambda}{|\!\braket{s}{\psi}\!|^2}\int\mathrm{d} t\,\mathrm{d}^{d} \bm{x} \,\chi(t)F(\bm{x}) \nonumber\\
&\times\Big( \mathrm{i}\braket{s}{\psi}\!\bra{\psi}\hat{\mu}(t)\ket{s} w_{n+1}(t,\bm{x},t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n) \nonumber\\[0.5mm]
&-\mathrm{i}\braket{\psi}{s}\!\bra{s}\hat{\mu}(t)\ket{\psi}w_{n+1}(t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n,t,\bm{x}) \nonumber\\
&-2\Im(\braket{\psi}{s}\!\bra{s}\hat{\mu}(t)\ket{\psi})\,w_{n}(t_1,\bm{x}_1,\hdots,t_n,\bm{x}_n)\,w_1(t,\bm{x}) \Big) \nonumber\\
&+O(\lambda^2) \; \nonumber
\end{align}
whenever $(t_i,\bm{x}_i) \in \mathcal{P}$ for some \mbox{$i\in\{1,\hdots,n\}$}. Just as before, the second order terms and the more tedious case in which $\braket{s}{\psi}=0$ can be found in Appendix~\ref{appendix: update rules for n-point functions}.
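As a sanity check that is ours rather than part of the original derivation, the structure of this first-order update can be verified numerically in a finite-dimensional caricature: the field is replaced by a three-level system with a single Hermitian ``field operator'', the detector is a qubit, and the smearing $\chi(t)F(\bm{x})$ is collapsed to one instantaneous kick. Working out the displayed rule for $n=1$ in that limit (our algebra), the selective one-point function reduces to $w_1^{\textsc{S}}=w_1+(2\lambda/|\!\braket{s}{\psi}\!|^2)\,\Im(\braket{\psi}{s}\!\bra{s}\hat{\mu}\ket{\psi})(w_2-w_1^2)+O(\lambda^2)$, which the snippet compares against the exact selective update:

```python
import numpy as np

# Toy model of ours, not the paper's: 3-level "field" with one Hermitian
# operator phi, qubit detector with monopole moment mu = sigma_y, and the
# smeared interaction collapsed to a single kick H_int = lam * mu (x) phi.
lam = 1e-4

mu  = np.array([[0, -1j], [1j, 0]])      # detector monopole moment
psi = np.array([1.0, 0.0])               # initial detector state |psi>
theta = 0.7
s   = np.array([np.cos(theta), np.sin(theta)])   # measured state |s>

phi = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])
rho = np.diag([0.5, 0.3, 0.2])           # initial field state

# Exact Kraus operator M_{s,psi} = <s| exp(-i lam mu (x) phi) |psi>.
H = np.kron(mu, phi)
w, V = np.linalg.eigh(H)
U = (V * np.exp(-1j * lam * w)) @ V.conj().T
M = np.einsum('a,abcd,c->bd', s.conj(), U.reshape(2, 3, 2, 3), psi)

# Exact selective one-point function w1^S = tr(M rho M^+ phi)/tr(M rho M^+).
MrM = M @ rho @ M.conj().T
w1S_exact = (np.trace(MrM @ phi) / np.trace(MrM)).real

# First-order prediction from the update rule (single-kick reduction).
a = s.conj() @ psi               # <s|psi>
b = s.conj() @ (mu @ psi)        # <s|mu|psi>
w1 = np.trace(rho @ phi).real
w2 = np.trace(rho @ phi @ phi).real
w1S_first = w1 + (2 * lam / abs(a)**2) * np.imag(np.conj(a) * b) * (w2 - w1**2)
residual = abs(w1S_exact - w1S_first)    # should be O(lam^2)
```

In this toy model the residual between the exact and first-order expressions scales as $O(\lambda^2)$, while the correction itself is $O(\lambda)$, confirming the perturbative structure of the update.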
\section{Generalization to the presence of entangled third parties}\label{section: generalization to the presence of entangled third parties}
In the previous sections the analysis was performed considering that the initial entanglement of the field with systems other than the detector is not addressable and hence irrelevant for the scenarios considered. However, the update rule given in sections~\ref{section: the updated state of the field} and~\ref{section: the update rule} is not restricted to these situations. Generalizing beyond these situations is rather straightforward and conceptually identical to the prescription given in previous sections. For completeness, we will show here how the prescribed update rule can be generalized to the case in which there are other physical systems apart from the field and the detector that are relevant for the experiments under analysis.
First, note that in Section~\ref{section: the updated state of the field} we considered the initial state of the system to be $\hat{\rho}=\hat{\rho}_\textrm{d}\otimes\hat{\rho}_{\phi}$ for the sake of simplicity, since these two systems are the only ones involved in the measurement. But it should be immediately realized that we can consider general initial states of the form $\hat{\rho}=\hat{\rho}_{\textrm{d}}\otimes\hat{\rho}_{\Phi}$, where $\hat{\rho}_{\Phi}$ is the joint state of the field and all the other physical systems with which it might share entanglement that may be relevant for our experiment. For simplicity of the treatment, let us first assume that all of them are non-relativistic, in the sense that their individual dynamics can be dealt with using non-relativistic quantum mechanics and in particular they are localized. We will relax this assumption and allow for the presence of other relativistic fields at the end of the section.
Let us denote the relevant physical systems that are not the field or the detector by $\Sigma=\{\Sigma_1,\Sigma_2,\hdots\}$. The derivation of the updated state for $\hat{\rho}_{\Phi}$ proceeds as shown in Section~\ref{section: the updated state of the field}. The only difference that we need to take into account is that now, if $\hat{\mathcal{O}}$ is an operator acting on the Hilbert space of the whole system (detector, field and $\Sigma$) and $\ket{\psi_1},\ket{\psi_2}$ are detector states, then we should understand $\bra{\psi_1}\hat{\mathcal{O}}\ket{\psi_2}$ to be an operator acting on the Hilbert space $\mathcal{H}_\Phi$ of the field and the systems in $\Sigma$, such that
\begin{equation}\label{notation for sandwiched operators extended}
\bra{\Phi_1}\bra{\psi_1}\hat{\mathcal{O}}\ket{\psi_2}\ket{\Phi_2}=\bra{\psi_1,\Phi_1}\hat{\mathcal{O}}\ket{\psi_2,\Phi_2}
\end{equation}
for any $\ket{\Phi_1},\ket{\Phi_2}\in\mathcal{H}_\Phi$. Clearly, Eq.~\eqref{notation for sandwiched operators extended} is the generalization of Eq.~\eqref{notation for sandwiched operators}. It is straightforward to check that all the prescriptions given in Section~\ref{section: the update rule} still apply after this direct generalization has been made, taking into account the extra systems when keeping track of the available information. However, in this more general setup, giving the update solely in terms of $n$-point functions as in Section~\ref{section: update of n-point functions} would no longer be possible. Nevertheless, we can consider \textit{extended} $n$-point functions of the joint system as follows: notice, first of all, that just as any observable of the field can be expressed in terms of the field operators, any observable of the systems in a subset \mbox{$\Gamma\subseteq\Sigma$} can be expressed in terms of the rank-one operators $\proj{\gamma_l}{\gamma_m}$, as
\begin{equation}\label{observable decomposition non-relativistic system}
\hat{\mathcal{O}}_{\Gamma}=\sum_{l,m}\bra{\gamma_{l}}\hat{\mathcal{O}}_{\Gamma}\ket{\gamma_m}\proj{\gamma_l}{\gamma_{m}}
\end{equation}
for an orthonormal basis $\{\ket{\gamma_{l}}\}$ of the Hilbert space of $\Gamma$.
Thus, any operator acting on the field and $\Gamma$ can be expressed in terms of the field operators $\hat \phi(\mathsf{x})$ and the rank-one operators $\proj{\gamma_{l}}{\gamma_m}$. We can therefore define the \textit{extended $n$-point functions} as
\begin{align}\label{extended n-point function definition non-relativistic}
\widetilde{w}_{\Gamma,n}(l,m&;\mathsf{x}_1,\hdots,\mathsf{x}_n)\\
&\coloneqq \textrm{tr}\big( \hat{\rho}_{\Phi}\!\proj{\gamma_l}{\gamma_m}\hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n)\big) \nonumber
\end{align}
for $n\geq 1$, together with
\begin{equation}
\widetilde{w}_{\Gamma,0}(l,m)\coloneqq \textrm{tr}\big(\hat{\rho}_{\Phi}\!\proj{\gamma_l}{\gamma_m}\big) \;.
\end{equation}
The extended $n$-point functions characterize $\hat{\rho}_{\Phi}$. The update rule can now be given in terms of an update of the extended $n$-point functions, which can be shown to be just a modification of the update for $n$-point functions given in Section~\ref{section: update of n-point functions}.
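To make this concrete, here is a small numerical illustration (a toy model of ours, with $\Gamma$ a qubit and the field truncated to three levels) of the simplest instance of this claim: the $n=0$ extended functions are precisely the matrix elements of the reduced state of $\Gamma$, so they already characterize everything local to $\Gamma$.

```python
import numpy as np

# Toy model of ours: Gamma is a qubit, the "field" is truncated to 3 levels,
# and rho_Phi is a fixed correlated pure state of the pair.
dG, dF = 2, 3
vec = np.zeros(dG * dF, dtype=complex)
vec[0] = 1.0      # |0>_Gamma |0>_field
vec[3] = 0.5j     # |1>_Gamma |0>_field
vec[4] = 0.5      # |1>_Gamma |1>_field
vec /= np.linalg.norm(vec)
rho_Phi = np.outer(vec, vec.conj())

# n = 0 extended functions w~_{Gamma,0}(l,m) = tr(rho_Phi |gamma_l><gamma_m|).
w0 = np.zeros((dG, dG), dtype=complex)
for l in range(dG):
    for m in range(dG):
        P = np.zeros((dG, dG)); P[l, m] = 1.0     # |gamma_l><gamma_m|
        w0[l, m] = np.trace(rho_Phi @ np.kron(P, np.eye(dF)))

# Reduced state of Gamma from the partial trace over the field:
# w~_{Gamma,0}(l,m) = <gamma_m| rho_Gamma |gamma_l>, i.e. w0 = rho_Gamma^T.
rho_Gamma = np.einsum('aibi->ab', rho_Phi.reshape(dG, dF, dG, dF))
```

Higher-$n$ extended functions supply, in the same way, the mixed field--$\Gamma$ correlations needed to reconstruct the full joint state.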
\paragraph{Non-selective update.}\label{paragraph: non-selective update}
Since the non-selective update~\eqref{non-selective} is trace-preserving and it acts non-trivially only on the Hilbert space of the field, we can simply prescribe the same update of Eq.~\eqref{non-selective updated n-point function non-perturbative} for each of the extended $n$-point functions:
\begin{align}\label{non-selective updated extended n-point function}
\widetilde{w}&_{\Gamma,n}^{\textsc{NS}}(l,m;\mathsf{x}_1,\hdots,\mathsf{x}_n) \nonumber\\
&= \textrm{tr}\big(\hat{M}_{s,\psi}\hat{\rho}_{\Phi}\hat{M}_{s,\psi}^{\dagger}\!\proj{\gamma_l}{\gamma_m} \hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n) \big)\\
&\phantom{==}+\textrm{tr}\big(\hat{M}_{\bar{s},\psi}\hat{\rho}_{\Phi}\hat{M}_{\bar{s},\psi}^{\dagger}\!\proj{\gamma_l}{\gamma_m} \hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n) \big) \;. \nonumber
\end{align}
As in Subsection~\ref{subsection: non-selective update}, this expression can be written in terms of the non-updated extended $n$-point functions. Note that, in particular, $\widetilde{w}_{\Gamma,0}^{\textsc{NS}}(l,m)=\widetilde{w}_{\Gamma,0}(l,m)$.
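The invariance of $\widetilde{w}_{\Gamma,0}$ is a direct consequence of the Kraus completeness relation, $\hat{M}_{s,\psi}^{\dagger}\hat{M}_{s,\psi}+\hat{M}_{\bar{s},\psi}^{\dagger}\hat{M}_{\bar{s},\psi}$ equal to the identity, which follows from the unitarity of the joint evolution. The following snippet checks this numerically at non-perturbative coupling in a toy model of our own construction (qubit detector, three-level ``field'', single-kick coupling):

```python
import numpy as np

# Toy model of ours: qubit detector (monopole moment mu), 3-level "field"
# with Hermitian operator phi, and a single-kick coupling of strength lam.
lam = 0.3   # the completeness relation is exact, so lam need not be small
mu  = np.array([[0, -1j], [1j, 0]])
phi = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])
psi = np.array([1.0, 0.0])
theta = 0.7
s    = np.array([np.cos(theta), np.sin(theta)])
sbar = np.array([-np.sin(theta), np.cos(theta)])   # <sbar|s> = 0

# Joint unitary U = exp(-i lam mu (x) phi) and the two Kraus operators.
H = np.kron(mu, phi)
w, V = np.linalg.eigh(H)
U = (V * np.exp(-1j * lam * w)) @ V.conj().T
U4 = U.reshape(2, 3, 2, 3)
M    = np.einsum('a,abcd,c->bd', s.conj(),    U4, psi)
Mbar = np.einsum('a,abcd,c->bd', sbar.conj(), U4, psi)

# M^+ M + Mbar^+ Mbar should be the identity on the field Hilbert space,
# which is what makes the non-selective update preserve w~_{Gamma,0}.
completeness = M.conj().T @ M + Mbar.conj().T @ Mbar
```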
\paragraph{Selective update.}\label{paragraph: selective update}
As in Section~\ref{section: update of n-point functions}, the prescription of the update requires keeping track of where the information is accessible. This leads to a piecewise definition as in Eqs.~\eqref{selective updated n-point function def 1} and~\eqref{selective updated n-point function def 2}: let $\mathcal{P}$ be the causal future of the region in which the projective measurement on the detector is performed. Then,
\begin{equation}\label{selective updated extended n-point function def 1}
\widetilde{w}_{\Gamma,n}^{\textsc{S}}(l,m;\mathsf{x}_1,\hdots,\mathsf{x}_n)=\widetilde{w}_{\Gamma,n}^{\textsc{NS}}(l,m;\mathsf{x}_1,\hdots,\mathsf{x}_n)
\end{equation}
if all $\mathsf{x}_1,\hdots,\mathsf{x}_n$ \textit{and} the systems of $\Gamma$ are outside $\mathcal{P}$, and
\begin{align}\label{selective updated extended n-point function def 2}
\widetilde{w}&_{\Gamma,n}^{\textsc{S}}(l,m;\mathsf{x}_1,\hdots,\mathsf{x}_n) \nonumber\\
&= \frac{\textrm{tr}\big(\hat{M}_{s,\psi}\hat{\rho}_{\Phi}\hat{M}_{s,\psi}^{\dagger}\!\proj{\gamma_l}{\gamma_m} \hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n) \big)}{\textrm{tr}_{\Phi}\big( \hat{\rho}_{\Phi}\hat{E}_{s,\psi} \big)}
\end{align}
otherwise. In particular,
\begin{equation}\label{selective update rule extended order 0 def 1}
\widetilde{w}_{\Gamma,0}^{\textsc{S}}(l,m)=\widetilde{w}_{\Gamma,0}^{\textsc{NS}}(l,m)
\end{equation}
if the systems of $\Gamma$ are outside $\mathcal{P}$, and
\begin{equation}\label{selective update rule extended order 0 def 2}
\widetilde{w}_{\Gamma,0}^{\textsc{S}}(l,m)=\frac{\textrm{tr}\big(\hat{M}_{s,\psi}\hat{\rho}_{\Phi}\hat{M}_{s,\psi}^{\dagger}\!\proj{\gamma_l}{\gamma_m}\big)}{\textrm{tr}_{\Phi}\big( \hat{\rho}_{\Phi}\hat{E}_{s,\psi} \big)}
\end{equation}
otherwise.
To end this section, we can address the case where the third parties sharing entanglement with the probed field are themselves relativistic fields. In that scenario, Eq.~\eqref{observable decomposition non-relativistic system} is not useful anymore, since for a basis of the Hilbert space of a field, the rank-one operators $\proj{\gamma_l}{\gamma_m}$ are not local objects and the update to the extended $n$-point function has to be defined over local regions of spacetime.
Fortunately, the field itself is defined in terms of local observables. The local observables of a field $\sigma$ in $\Sigma$ can be expressed in terms of its associated field operators $\hat{\sigma}(\mathsf{x})$. Thus, for the simplest case in which the only system in $\Sigma$ is a field $\sigma$, we define the extended $n$-point function as an \textit{$(n',n)$-point function},
\begin{align}
&\widetilde{w}_{n'\!,n}(\mathsf{y}_{1},\hdots,\mathsf{y}_{n'};\mathsf{x}_1,\hdots,\mathsf{x}_{n}) \\
&\phantom{===}=\textrm{tr}\big( \hat{\rho}_{\Phi}\,\hat{\sigma}(\mathsf{y}_1)\cdots\hat{\sigma}(\mathsf{y}_{n'}\!)\,\hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_{n}) \big) \;. \nonumber
\end{align}
This expression provides the extended $n$-point function that substitutes Eq.~\eqref{extended n-point function definition non-relativistic} for the case in which $\Sigma$ is one relativistic field. If there are more fields present, one can build the extended $n$-point function in an analogous fashion. Regarding the update rule, in the same spirit as the prescriptions given in Eqs.~\eqref{non-selective updated extended n-point function}, \eqref{selective updated extended n-point function def 1} and~\eqref{selective updated extended n-point function def 2},
\begin{align}\label{non-selective updated (n',n)-point function fields}
&\widetilde{w}_{n'\!,n}^{\textsc{NS}}(\mathsf{y}_1,\hdots,\mathsf{y}_{n'};\mathsf{x}_1,\hdots,\mathsf{x}_n) \nonumber\\
&=\textrm{tr}\big(\hat{M}_{s,\psi}\hat{\rho}_{\Phi}\hat{M}_{s,\psi}^{\dagger}\hat{\sigma}(\mathsf{y}_1)\cdots\hat{\sigma}(\mathsf{y}_{n'}\!) \hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n) \big)\\
&\;+\textrm{tr}\big(\hat{M}_{\bar{s},\psi}\hat{\rho}_{\Phi}\hat{M}_{\bar{s},\psi}^{\dagger}\hat{\sigma}(\mathsf{y}_1)\cdots\hat{\sigma}(\mathsf{y}_{n'}\!) \hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n) \big) \nonumber
\end{align}
for the non-selective case, while for the selective case,
\begin{align}\label{selective updated extended n-point function fields def 1}
\widetilde{w}_{n'\!,n}^{\textsc{S}}&(\mathsf{y}_1,\hdots,\mathsf{y}_{n'};\mathsf{x}_1,\hdots,\mathsf{x}_n) \nonumber\\
&=\widetilde{w}_{n'\!,n}^{\textsc{NS}}(\mathsf{y}_1,\hdots,\mathsf{y}_{n'};\mathsf{x}_1,\hdots,\mathsf{x}_n)
\end{align}
if all $\mathsf{x}_1,\hdots,\mathsf{x}_n$ \textit{and} $\mathsf{y}_1,\hdots,\mathsf{y}_{n'}$ are outside $\mathcal{P}$, and
\begin{align}\label{selective updated extended n-point function fields def 2}
&\widetilde{w}_{n'\!,n}^{\textsc{S}}(\mathsf{y}_1,\hdots,\mathsf{y}_{n'};\mathsf{x}_1,\hdots,\mathsf{x}_n) \nonumber\\
&= \frac{\textrm{tr}\big(\hat{M}_{s,\psi}\hat{\rho}_{\Phi}\hat{M}_{s,\psi}^{\dagger}\,\hat{\sigma}(\mathsf{y}_1)\cdots\hat{\sigma}(\mathsf{y}_{n'}\!) \hat \phi(\mathsf{x}_1)\cdots\hat \phi(\mathsf{x}_n) \big)}{\textrm{tr}_{\Phi}\big( \hat{\rho}_{\Phi}\hat{E}_{s,\psi} \big)}
\end{align}
otherwise.
Finally, for the mixed case in which $\Sigma$ contains both localized non-relativistic systems and relativistic fields, we just need to use the natural combination of both formalisms, which includes rank-one operators of the form \mbox{$\proj{\gamma_l}{\gamma_m}$} for the non-relativistic systems and field operators $\hat{\sigma}(\mathsf{y})$ for the relativistic fields.
\section{A practical example with detectors}\label{section: a practical example with detectors}
To further clarify how to use the formalism in a practical implementation, we will consider an example involving three stationary experimenters, Alba, Blanca and Clara, each provided with a two-level Unruh-DeWitt detector. The situation, depicted in Figure~\ref{fig: example with three detectors}, is as follows\footnote{This configuration is an archetypal setup in Relativistic Quantum Information in scenarios of entanglement harvesting, see e.g.~\cite{Reznik2005,Pozas2015} among many others.}:
\begin{itemize}
\item {Clara performs a measurement with her detector, by first letting it interact with the field and then performing a projective measurement on it, immediately after the interaction is switched off.}
\item {Blanca lets her detector interact with the field in the causal future of the projective measurement performed by Clara, $\mathcal{P}$.}
\item {Alba lets her detector interact with the field in a region that is spacelike separated from both Blanca's and Clara's interaction regions.}
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[scale=1]{Practicalexample.pdf}
\caption{Configuration in a slice of spacetime of the interaction regions of detectors A, B and C. In blue, the causal future of the projective measurement performed on C, $\mathcal{P}$; in pale blue, the causal future of the measurement that is not already in the future of the projective measurement, $\mathcal{J}^{+}(\mathcal{D})\setminus\mathcal{P}$.}\label{fig: example with three detectors}
\end{figure}
We consider an initial state
\begin{equation}
\hat{\rho}=\hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\textsc{b}}\otimes\hat{\rho}_\textsc{c}\otimes \hat{\rho}_\phi
\end{equation}
for the array of detectors and the field. For simplicity we have assumed that the detector that is measured starts out in a pure state, $\hat{\rho}_\textsc{c}=\proj{\psi}{\psi}$. In the interaction picture, the interaction of the detectors with the field is given by the Hamiltonian
\begin{equation}
\hat{H}_{I}(t)=\hat{H}_{\textsc{a}}(t)+\hat{H}_\textsc{b}(t)+\hat{H}_\textsc{c}(t) \;,
\end{equation}
where
\begin{equation}
\hat{H}_{\nu}(t)=\lambda_{\nu}\chi_{\nu}(t)\hat{\mu}_{\nu}(t) \int\mathrm{d}^d\bm{x}\,F_{\nu}(\bm{x})\hat \phi(t,\bm{x})
\end{equation}
is the same Unruh-DeWitt Hamiltonian from Eq.~\eqref{UDW Hamiltonian}, for \mbox{$\nu\in\{\textsc{A},\textsc{B},\textsc{C}\}$}.
Now, since Clara's operations causally precede Blanca's, and since Alba is spacelike separated from both of them, the unitary operator that describes the evolution of the three detectors and the field
\begin{equation}
\hat{U}=\mathcal{T}\exp[-\mathrm{i}\int_{-\infty}^{\infty}\!\!\!\!\mathrm{d} t'\, \Big(\hat{H}_\textsc{a}(t')+\hat{H}_\textsc{b}(t')+\hat{H}_\textsc{c}(t')\Big)] \;
\end{equation}
can in fact be written as~\cite{Pipo2021}
\begin{equation}
\hat{U}=\hat{U}_\textsc{a}\hat{U}_\textsc{b}\hat{U}_\textsc{c}=\hat{U}_\textsc{b}\hat{U}_\textsc{c}\hat{U}_\textsc{a} \;,
\end{equation}
where
\begin{equation}
\hat{U}_\nu=\mathcal{T}\exp(-\mathrm{i}\int_{-\infty}^{\infty}\!\!\!\!\mathrm{d} t'\, \hat{H}_\nu(t')) \;
\end{equation}
for $\nu \in \{\textsc{A},\textsc{B},\textsc{C}\}$.
In particular, we have that
\begin{equation}\label{commutation evolutions A, B and C}
[\hat{U}_\textsc{a},\hat{U}_\textsc{b}]=[\hat{U}_\textsc{a},\hat{U}_\textsc{c}]=0 \;,
\end{equation}
and by Eq.~\eqref{M operator},
\begin{equation}\label{commutation evolution A measurement C}
[\hat{U}_\textsc{a},\hat{M}_{c,\psi}]=[\hat{U}_\textsc{a},\hat{M}_{c,\psi}^\dagger]=0
\end{equation}
for any $\ket{c}\in\mathcal{H}_{\textsc{c}}$.
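These commutation relations can be illustrated in a deliberately crude caricature of ours, in which spacelike-separated field degrees of freedom are modeled as independent tensor factors: a qubit $R_1$ for Alba's region and a qubit $R_2$ for Clara's. Alba's unitary acts on her detector and $R_1$, while Clara's Kraus operator acts on $R_2$, so the analogues of Eqs.~\eqref{commutation evolutions A, B and C} and~\eqref{commutation evolution A measurement C} hold by construction, mirroring the microcausality argument:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def expmh(H, t):
    """exp(-i t H) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

# Total space ordering: A (x) R1 (x) R2, each a qubit (our toy choice).
# Alba's unitary couples her detector A to "her" field region R1.
U_A = expmh(np.kron(sx, np.kron(sz, I2)), 0.9)

# Clara's Kraus operator <c|U_C|psi>, already traced against her detector
# states, acts on "her" field region R2 only; extended by identities.
theta = 0.8
M = np.kron(I2, np.kron(I2, np.diag([1.0, np.cos(theta)]).astype(complex)))

comm = U_A @ M - M @ U_A   # vanishes: the operators act on disjoint factors
```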
We are interested in studying how the measurement performed by Clara affects the joint partial state of Alba and Blanca, $\hat{\rho}_{\textsc{ab}}$, as well as their individual partial states $\hat{\rho}_{\textsc{a}}$ and $\hat{\rho}_{\textsc{b}}$. Along the lines of previous sections, we distinguish whether the measurement performed by Clara is non-selective or selective. For the sake of clarity, in the main body of this section we will use the approach based on a context-dependent density operator, as presented in Section~\ref{section: the update rule}, instead of the equivalent but more involved formulation based on $n$-point functions and their extensions, presented in Sections~\ref{section: update of n-point functions} and~\ref{section: generalization to the presence of entangled third parties}. Nevertheless, we have explicitly computed all the updates using the formulation of extended $n$-point functions in Appendix~\ref{appendix: a practical example using n-point functions}, showing explicitly that both methods give the same results.
\subsection{Non-selective measurement}\label{subsection: non-selective measurement}
Since it is less involved from the point of view of the update rule, let us first address the case in which Clara measures non-selectively. We will show that in the non-selective case the updated partial states of Alba and Blanca will coincide with the case where the three detectors interact with the field and we trace out the state of Clara's detector. That is, the only influence that Clara's detector has on $\hat\rho_{\textsc{ab}}$ is through its coupling to the field since no information about the measurement is assumed to be known by Alba and Blanca.
We already argued in Section~\ref{section: causal behaviour} that all observers can in principle be informed that the measurement has been performed without knowing its outcome. Thus, we can consider that both Alba and Blanca have access to the non-selective update of the state\footnote{Note that since Alba is spacelike separated from both Blanca and Clara, it does not matter whether we carry out first the update of Clara's measurement or the evolution due to the interaction of Alba's detector, as we saw in Section~\ref{section: causal behaviour} and becomes apparent in Eq.~\eqref{commutation evolution A measurement C}.}. For our purposes, it is simpler to consider the update of the measurement first. Thus, we obtain a final joint state
\begin{align}\label{Alba-Blanca non-selective 1st derivation}
&\hat{\rho}_{\textsc{ab}}'=\textrm{tr}_{\phi}\big( \hat{U}_{\textsc{a}}\hat{U}_{\textsc{b}} \big[ \hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\textsc{b}}\otimes\hat{M}_{c,\psi}\hat{\rho}_{\phi}\hat{M}_{c,\psi}^{\dagger} \big] \hat{U}_{\textsc{b}}^{\dagger}\hat{U}_{\textsc{a}}^{\dagger} \big) \nonumber\\
&\phantom{==}+\textrm{tr}_{\phi}\big[ \hat{U}_{\textsc{a}}\hat{U}_{\textsc{b}} \big( \hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\textsc{b}}\otimes\hat{M}_{\bar{c},\psi}\hat{\rho}_{\phi}\hat{M}_{\bar{c},\psi}^{\dagger} \big) \hat{U}_{\textsc{b}}^{\dagger}\hat{U}_{\textsc{a}}^{\dagger} \big] \\
&=\textrm{tr}_{\textsc{c},\phi}\big[ \hat{U}_{\textsc{a}}\hat{U}_{\textsc{b}}\hat{U}_{\textsc{c}}(\hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\textsc{b}}\otimes\proj{\psi}{\psi}\otimes\hat{\rho}_{\phi})\hat{U}_{\textsc{c}}^{\dagger}\hat{U}_{\textsc{b}}^{\dagger}\hat{U}_{\textsc{a}}^{\dagger} \big] \;,\nonumber
\end{align}
where in the last step we used Eq.~\eqref{non-selective updated state as a trace}. As anticipated, this is the same result obtained for $\hat{\rho}_\textsc{ab}$ in the case in which Clara does not perform a projective measurement on the detector at all. The partial states are
\begin{align}\label{Blanca non-selective 1st derivation}
\hat{\rho}_{\textsc{b}}'&=\textrm{tr}_{\textsc{a}}(\hat{\rho}_{\textsc{ab}}') \\
&=\textrm{tr}_{\textsc{c},\phi}\big[ \hat{U}_{\textsc{b}}\hat{U}_{\textsc{c}}(\hat{\rho}_{\textsc{b}}\otimes\proj{\psi}{\psi}\otimes\hat{\rho}_{\phi})\hat{U}_{\textsc{c}}^{\dagger}\hat{U}_{\textsc{b}}^{\dagger} \big] \nonumber
\end{align}
and
\begin{equation}\label{Alba non-selective 1st derivation}
\hat{\rho}_{\textsc{a}}'=\textrm{tr}_{\textsc{b}}(\hat{\rho}_{\textsc{ab}}')=\textrm{tr}_{\phi}\big[ \hat{U}_{\textsc{a}}(\hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\phi})\hat{U}_{\textsc{a}}^{\dagger} \big] \;.
\end{equation}
In order to trace out A and B, respectively, we have used Eq.~\eqref{commutation evolutions A, B and C} and the cyclic property of the trace.
The same results of Eqs.~\eqref{Alba-Blanca non-selective 1st derivation},~\eqref{Blanca non-selective 1st derivation} and~\eqref{Alba non-selective 1st derivation} are obtained by using the extended $n$-point function update formalized in Section~\ref{section: generalization to the presence of entangled third parties}, as can be explicitly seen in the calculations leading to Eqs.~\eqref{Alba-Blanca non-selective 2nd derivation},~\eqref{Blanca non-selective 2nd derivation} and~\eqref{Alba non-selective 2nd derivation} in Appendix~\ref{appendix: a practical example using n-point functions}.
Notice in particular that $\hat{\rho}_{\textsc{a}}'$ does not depend on the operations performed by Blanca and Clara. In fact, as expected, this is the same result that we would have obtained had we updated the state with the interaction of Alba's detector first. Note that both partial states satisfy
\begin{equation}
\hat{\rho}_\textsc{a}'=\textrm{tr}_\textsc{b}(\hat{\rho}_\textsc{ab}') \quad \textrm{and} \quad \hat{\rho}_\textsc{b}'=\textrm{tr}_\textsc{a}(\hat{\rho}_\textsc{ab}') \;.
\end{equation}
This is a consequence of the fact that for non-selective measurements, as we saw in Section~\ref{section: causal behaviour}, there is no need to make a distinction in the update for observers inside $\mathcal{P}$ and outside $\mathcal{P}$, since they may in principle have access to the same information: that a measurement whose outcome is unknown has potentially been performed. More concretely, here both Alba and Blanca are ignorant about the outcome of Clara's measurement, and therefore all three partial density operators, $\hat{\rho}_\textsc{ab}$, $\hat{\rho}_\textsc{a}$ and $\hat{\rho}_\textsc{b}$ are calculated with the same amount of information about the field and its interactions.
\subsection{Selective measurement}\label{subsection: selective measurement}
The case in which Clara performs a selective measurement requires slightly more care than the non-selective one, since in this case the updated state after the measurement depends on the observer and the information that is available to them (in the language of $n$-point functions, the update is defined piecewise, unlike in the non-selective case). As in the non-selective case, for the sake of formal simplicity, in this derivation we will perform the update due to Clara's measurement first. We will check nevertheless that, as before and as should be required, the results are the same if we instead apply the evolution due to Alba's interaction first.
To calculate the joint state $\hat{\rho}_{\textsc{ab}}'$, we need to take into account that the information in this state is only fully accessible by an observer that eventually has access to the information from both systems held by Alba and Blanca. In particular, such an observer has access to the outcome of Clara's measurement, since Blanca does\footnote{This line of reasoning is completely analogous to the one carried out in Section~\ref{subsubsection: two-point function} to prescribe the piecewise update of two-point functions.}. Therefore
\begin{align}\label{Alba-Blanca selective 1st derivation}
\hat{\rho}_{\textsc{ab}}'&=\frac{\textrm{tr}_{\phi}\big[ \hat{U}_{\textsc{a}}\hat{U}_{\textsc{b}}(\hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\textsc{b}}\otimes\hat{M}_{c,\psi}\hat{\rho}_{\phi}\hat{M}_{c,\psi}^{\dagger})\hat{U}_{\textsc{b}}^{\dagger}\hat{U}_{\textsc{a}}^{\dagger}\big]}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}\hat{E}_{c,\psi}\big)} \\
&=\frac{\textrm{tr}_{\phi}\big[ \hat{U}_{\textsc{a}}\hat{U}_{\textsc{b}}\hat{M}_{c,\psi}(\hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\textsc{b}}\otimes\hat{\rho}_{\phi})\hat{M}_{c,\psi}^{\dagger}\hat{U}_{\textsc{b}}^{\dagger}\hat{U}_{\textsc{a}}^{\dagger}\big]}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}\hat{E}_{c,\psi}\big)} \;,\nonumber
\end{align}
where the last step is simply an abuse of notation. For calculating $\hat{\rho}_{\textsc{b}}'$, observe that since Blanca is in the causal future of the measurement performed by Clara, she has access to its outcome. Thus,
\begin{align}\label{Blanca selective 1st derivation}
\hat{\rho}_{\textsc{b}}'&=\frac{\textrm{tr}_{\textsc{a},\phi}\big[ \hat{U}_{\textsc{a}}\hat{U}_{\textsc{b}}\hat{M}_{c,\psi}(\hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\textsc{b}}\otimes\hat{\rho}_{\phi})\hat{M}_{c,\psi}^{\dagger}\hat{U}_{\textsc{b}}^{\dagger}\hat{U}_{\textsc{a}}^{\dagger}\big]}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}\hat{E}_{c,\psi}\big)} \\
&=\frac{\textrm{tr}_{\phi}\big[\hat{U}_{\textsc{b}}\hat{M}_{c,\psi}(\hat{\rho}_{\textsc{b}}\otimes\hat{\rho}_{\phi})\hat{M}_{c,\psi}^{\dagger}\hat{U}_{\textsc{b}}^{\dagger}\big]}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}\hat{E}_{c,\psi}\big)}=\textrm{tr}_{\textsc{a}}(\hat{\rho}_{\textsc{ab}}') \;.\nonumber
\end{align}
Finally, if we want to obtain $\hat{\rho}_{\textsc{a}}'$, we just need to take into account that Alba does not have access to the outcome of Clara's measurement, and hence the state of the field that she deals with is the one updated non-selectively, or directly the initial one (since both yield the same result, as we saw in the previous section). The result is therefore the same as in Eq.~\eqref{Alba non-selective 1st derivation} for the non-selective measurement,
\begin{equation}\label{Alba selective 1st derivation}
\hat{\rho}_{\textsc{a}}'=\textrm{tr}_{\phi}\big[ \hat{U}_{\textsc{a}}(\hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\phi})\hat{U}_{\textsc{a}}^{\dagger} \big] \;.
\end{equation}
Notice that
\begin{equation}
\hat{\rho}_{\textsc{a}}'\neq\textrm{tr}_{\textsc{b}}\big(\hat{\rho}_{\textsc{ab}}'\big)\;,
\end{equation}
since $\hat{\rho}_{\textsc{ab}}'$ was calculated for an observer that, unlike Alba, knows the outcome of Clara's measurement. In fact, if Alba eventually reaches the causal future of Clara's measurement and learns of its outcome, then the state should be updated as
\begin{align}\label{Alba selective 1st derivation after cone}
\hat{\rho}_{\textsc{a}}''&=\frac{\textrm{tr}_{\textsc{b},\phi}\big[ \hat{U}_{\textsc{a}}\hat{U}_{\textsc{b}}\hat{M}_{c,\psi}(\hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\textsc{b}}\otimes\hat{\rho}_{\phi})\hat{M}_{c,\psi}^{\dagger}\hat{U}_{\textsc{b}}^{\dagger}\hat{U}_{\textsc{a}}^{\dagger}\big]}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}\hat{E}_{c,\psi}\big)} \\
&=\frac{\textrm{tr}_{\phi}\big[\hat{U}_{\textsc{a}}\hat{M}_{c,\psi}(\hat{\rho}_{\textsc{a}}\otimes\hat{\rho}_{\phi})\hat{M}_{c,\psi}^{\dagger}\hat{U}_{\textsc{a}}^{\dagger}\big]}{\textrm{tr}_{\phi}\big(\hat{\rho}_{\phi}\hat{E}_{c,\psi}\big)}=\textrm{tr}_\textsc{b}(\hat{\rho}_{\textsc{ab}}') \;, \nonumber
\end{align}
same as for $\hat{\rho}_{\textsc{b}}'$.
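The chain of updates above can be reproduced numerically in a toy model of ours in which the field is replaced by two qubits, $R_1$ (coupled to Alba) and $R_2$ (probed by Clara), prepared in a Bell state, and Clara's measurement is summarized by the Kraus pair $\mathrm{diag}(1,\cos t)$ and $\mathrm{diag}(0,\sin t)$ on $R_2$ (Blanca's interaction is omitted for brevity). The snippet checks that both $\hat{\rho}_{\textsc{a}}'$ and $\hat{\rho}_{\textsc{a}}''$ are normalized states and that they genuinely differ, i.e.\ learning Clara's outcome changes Alba's description:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def expmh(H, t):
    """exp(-i t H) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def ptrace_field(R):
    """Trace out the field factor R1 (x) R2 (dimension 4)."""
    return np.einsum('aibi->ab', R.reshape(2, 4, 2, 4))

# Field: |Phi+> on R1 (x) R2 (our toy stand-in for an entangled field state).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_f = np.outer(bell, bell.conj())

# Clara's Kraus pair, acting nontrivially on R2 only (our toy choice).
t = 0.8
M    = np.kron(np.eye(2), np.diag([1.0, np.cos(t)])).astype(complex)
Mbar = np.kron(np.eye(2), np.diag([0.0, np.sin(t)])).astype(complex)

# Alba: detector in |0><0|, coupled to R1 by U_A = exp(-i a sx (x) |1><1|).
P1  = np.diag([0.0, 1.0])
U_A = expmh(np.kron(sx, np.kron(P1, np.eye(2))), 0.9)
rho_A0 = np.diag([1.0, 0.0]).astype(complex)

def evolve_and_reduce(rf):
    R = U_A @ np.kron(rho_A0, rf) @ U_A.conj().T
    return ptrace_field(R)

# rho_A': Alba ignores the outcome (non-selective field update).
rho_ns = M @ rho_f @ M.conj().T + Mbar @ rho_f @ Mbar.conj().T
rho_A_prime = evolve_and_reduce(rho_ns)

# rho_A'': Alba later learns the outcome (selective field update).
num = M @ rho_f @ M.conj().T
rho_A_dprime = evolve_and_reduce(num / np.trace(num))
```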
The same results of Eqs.~\eqref{Alba-Blanca selective 1st derivation},~\eqref{Blanca selective 1st derivation},~\eqref{Alba selective 1st derivation} and~\eqref{Alba selective 1st derivation after cone} are obtained by using the extended $n$-point function update formalized in Section~\ref{section: generalization to the presence of entangled third parties}, as can be explicitly seen in the calculations leading to Eqs.~\eqref{Alba-Blanca selective 2nd derivation},~\eqref{Blanca selective 2nd derivation},~\eqref{Alba selective 2nd derivation} and~\eqref{Alba selective 2nd derivation after cone} in Appendix~\ref{appendix: a practical example using n-point functions}.
\section{A measurement theory}\label{section: discussion}
We have proposed a measurement scheme where localized non-relativistic quantum systems that couple covariantly to the field gather information about its state. We are now in a position to argue that this measurement framework has all the characteristics that one should expect from a proper measurement theory for QFT. Namely,
\begin{enumerate}
\item \textit{It is consistent with relativistic QFT}. The measurement process consists of two steps: the interaction between the detector and the field, and the projective measurement on the detector once the interaction has been switched off in order to access the information about the field stored in it. From recently established results, we know that UDW detectors can be coupled fully covariantly to quantum fields~\cite{TalesBruno2020}, and that the interaction with the field does not \textit{per se} allow faster-than-light signalling~\cite{Edu2015,Pipo2021}. Furthermore, when detectors are smeared, the possible signalling between two of them appears only in a restricted and controlled way, and only if there is a third, non-pointlike detector mediating between them. Such a causality violation does not even become apparent at leading orders in perturbation theory~\cite{Pipo2021}. As for the projective measurement on the detector, in this work we have shown that the effect of performing projective measurements on detectors and updating the field state consistently is as safe from causality violations as the interaction with the field itself (Section~\ref{section: causal behaviour}).
\item \textit{It provides an update rule}. As we have explicitly described and discussed in Sections \ref{section: the update rule},~\ref{section: update of n-point functions} and~\ref{section: generalization to the presence of entangled third parties}, we have given a consistent update rule for the field state after the measurement that respects causality---as explicitly manifested in the update of the (extended) $n$-point functions---and includes the information obtained from the outcome of the measurement in the spirit of L\"uders rule, enforcing the compatibility of sequential measurements.
\item \textit{It produces definite values for the outcome of single-shot measurements}. Since the detectors are measured through projective measurements, the outcome of a measurement is a real number that can be written down in an experimenter's notepad.
\item \textit{It is capable of reproducing experiments}. Indeed, particle detector models have been proven to capture the features of experimental setups in quantum optics and the light-matter interaction~\cite{Edu2013,Pozas2016,Rodriguez2018,Lopp2020}, as well as the phenomenology of the measurement of other quantum fields such as, e.g., neutrinos~\cite{Bruno2020neutrino,Tales2021antiparticle}. Particle detector models are therefore directly connected with experimentally realistic setups where quantum fields are measured.
\end{enumerate}
By satisfying these four characteristics, we conclude that the measurement scheme proposed in this article constitutes a measurement theory for QFT that can still rely on the projection postulate of non-relativistic quantum mechanics to access the information in the field.
\section{Conclusions}\label{section: conclusions}
Since Sorkin's seminal paper in 1993, it has been evident that the measurement theory of non-relativistic quantum mechanics cannot be directly imported to quantum field theory due to relativistic considerations. As Sorkin put it, ``\textit{this problem leaves the Hilbert space formulation of quantum field theory with no definite measurement theory}''~\cite{Sorkin1993}. In this paper we have proposed a way to build a measurement theory for QFT based on particle detectors that 1) has all the advantages of the measurement theory of non-relativistic quantum mechanics, in that it provides the values of single-shot experiments and there is a state update enforcing compatibility with future measurements, 2) is compatible with relativity and is safe from gross causality violations, and 3) can be easily connected to experiments.
In order to establish the consistency of the proposed measurement scheme---consisting of 1) interaction of the detector with the probed field and 2) performing an idealized measurement on the detector and updating accordingly---we have relied on previous results establishing the covariance of the UDW detector-field coupling~\cite{TalesBruno2020} and the compatibility of the interaction with relativity~\cite{Edu2015,Pipo2021}. In addition, in this work we have shown that the performance of the projective measurement on the detector does not introduce any causality violations, and we have provided a contextual update rule for the state of the field after the measurement. This update rule has been given in full detail in terms of (extended) $n$-point functions of the field for both non-selective and selective measurements on particle detectors, and we have shown how it is implemented in a practical example.
These results provide a formal basis for a measurement theory for QFT. Furthermore, they pave the way to fully relativistic formulations of problems in the light-matter interaction where the role of measurements is central, such as the quantum Zeno effect~\cite{Misra1977,Patil2015}, the delayed choice quantum eraser experiment~\cite{Scully1982,Scully1991,Kwiat1992,Kim2000,Ma2013}, and many other similar experiments that can be performed within, e.g., the framework of the light-matter interaction.
\section{Acknowledgements}
The authors would like to thank Christopher J. Fewster, Maximilian H. Ruep and Ian Jubb for enlightening discussions. The authors also thank Maria Papageorgiou for her comments and for interesting discussions. J.P.-G. is supported by a Mike and Ophelia Lazaridis Fellowship. J.P.-G. also received the support of a fellowship from ``La Caixa'' Foundation (ID 100010434, with fellowship code LCF/BQ/AA20/11820043). L.J.G. acknowledges support through Project No. MICINN FIS2017-86497-C2-2-P from Spain (with extension Project No. MICINN PID2020-118159GB-C44 under evaluation). E.M.-M. acknowledges support through the Discovery Grant Program of the Natural Sciences and Engineering Research Council of Canada (NSERC). E.M.-M. also acknowledges the support of his Ontario Early Researcher Award.
\section{Introduction}\label{sec:Intro}
\paragraph{}Detectors constructed of large volumes of cryogenic liquids, particularly argon or xenon, are widely used in rare event searches in nuclear and particle physics \cite{LUX,LZ,EXO,XENON,DARKSIDE,PANDAX}. These experiments measure scintillation and/or ionization produced by impinging radiation. One particular technique is the dual-phase time projection chamber (TPC). The TPC consists of a large volume of liquid that acts as the target with a small layer of gas at the top of the detector to allow for amplification of the ionization signal through electroluminescence. To maintain the two phases in equilibrium, the liquid must be held very near its boiling point. In such an environment, bubbles have been observed to form either spontaneously or in regions where there is some active heat dissipation from components such as readout electronics.
Bubbles can interfere with the normal operation of detectors in several ways. First, the liquid-gas interface at the surface of the bubble can scatter scintillation and electroluminescence light, interfering with readout and position reconstruction of radiation events in the detector. Second, bubbles that make it to the surface of the liquid in dual-phase TPCs can change the uniformity and level of the interface, altering the electroluminescence signal from ionization electrons and interfering with energy and position reconstruction. Third, bubbles can disrupt the measurement of ionization charge if they fall in the path of drifting electrons. Finally, bubbles are a symptom of excess heat being generated in the detector, indicating a problem with electronic components or insulation. Therefore, it is desirable to be able to detect bubbles forming in the liquid and reconstruct their position in order to diagnose problems.
We have developed piezoelectric transducers that are capable of unambiguously detecting the sound created by the formation of bubbles in a volume of cryogenic liquid. In this work, we test these sensors in a cylindrical dewar of liquid nitrogen. We demonstrate their sensitivity to the sound of bubbles traveling through different media and the ability to reconstruct the 3D position of bubble formation using the time difference of arrival (TDOA) information.
\section{Experimental Apparatus}\label{sec:Exp}
\paragraph{}The piezoelectric transducers have been developed to increase response by maximizing flexure of the piezoelectric element. Figure \ref{fig:SensorDiagram} shows a schematic diagram and photographs of the assembly. A ball bearing, secured to the center of the element with cryogenic glue, translates vibration from the stainless steel wall to the sensor. A spring-damper system at the edge of the piezoelectric element provides stability and enables the assembly to absorb high frequency noise and increase signal-to-noise ratio without affecting signal quality. The spring is a modified version of a conventional leaf spring, and the mass damper is a 1/4-inch-thick brass ring that partially absorbs the energy of high frequency oscillations. The leaf spring, brass ring and piezoelectric element are joined with cryogenic glue. This design provides a higher relative displacement of the piezoelectric crystals, which results in higher signal amplitude than a piezoelectric element directly contacting the dewar. An amplification circuit provides a gain of 215 to the transducer signal. High speed, low noise AD8066 JFET preamplifiers are used in the circuit design.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.4\textwidth]{Piezo_Drawing.jpg}
\includegraphics[width=0.28\textwidth]{Piezo_1.jpg}
\includegraphics[width=0.28\textwidth]{Piezo_2.jpg}
\caption{Left: A schematic diagram of a sensor assembly; Middle: A photograph of the front side of the sensor showing the piezo, the steel ball and spring; Right: A photograph of the back side of the assembly showing the amplification electronics.}
\label{fig:SensorDiagram}
\end{figure}
\begin{figure}[tbp]
\centering
\includegraphics[width=0.7\textwidth]{Piezo_dewar_diagram.pdf}
\caption{Diagrams of the test dewar. Sensors are shown in orange, and the inner volume containing liquid nitrogen in gray. Bubbles can be generated throughout the volume by means of a pulsed resistor attached to the end of a flexible, rotatable Garolite arm.}
\label{fig:DewarDiagram}
\end{figure}
To model a cryogenic liquid detector, we use a cylindrical stainless steel dewar filled with liquid nitrogen (LN), sketched in Figure \ref{fig:DewarDiagram}. The inner space has a volume of approximately 30 liters. Four sensors are mounted and held against the surface using steel wire tensioned by springs. The ball bearing in the center of each piezo element is acoustically coupled to the dewar with a small amount of wax. The sensors are mounted at $90\degree$ angles and at two different heights (centered at $z = 12.7$~cm and $z = 27.9$~cm) to reduce degeneracies in the position reconstruction. Sensors that are $180\degree$ from each other are coplanar in $z$.
Bubbles can be generated throughout the dewar using a 100 $\Omega$ resistor connected to the end of a rotatable Garolite arm. Voltage pulses are sent to the resistor using a HP 8013b pulse generator. The minimum energy needed to create a bubble is found by raising the height and width of the supplied pulses until bubbles are observed, then decreasing the width until no bubbles are observed. The final pulses used in this experiment have a height of 6.56~V and a width of 14.12~ms, resulting in 6.07~mJ delivered to the resistor with each pulse. The pulses are supplied at a frequency of 1~Hz, which is observed to be long enough for the sound of the bubble to die away completely before the next pulse. Sound signals are acquired by a Tektronix DPO3054 digital oscilloscope. The piezoelectric sensors are fed into the four channels, digitized, and saved to an external flash drive for later analysis. Events that we use in the analyses below are triggered on the pulses sent to the resistor. Each waveform is sampled at 5~MHz and lasts 2 ms (10000 samples).
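As a sanity check on these settings, the quoted pulse energy follows from Joule heating of the $100~\Omega$ resistor (values taken from the text):

```python
# Energy dissipated per pulse in the bubble-generating resistor: E = V^2/R * t
V, R_load, width = 6.56, 100.0, 14.12e-3   # volts, ohms, seconds (from the text)
E = V ** 2 / R_load * width                # joules
print(f"{E * 1e3:.2f} mJ")                 # ~6.08 mJ, consistent with the quoted 6.07 mJ
```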
The data used in this analysis were taken on two separate days in April 2015 and June 2015. The first dataset consists of bubbles generated in the center of the dewar at different positions along the $z$-axis. These data are used to study sound transmission in the experiment. The second dataset includes bubbles generated in a variety of positions throughout the volume, and is used in the position reconstruction analysis.
\section{Data Analysis}\label{sec:Ana}
\subsection{Sound transmission through different media}
\label{subsec:trans}
\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{20150422_bubble_at_z_bottom_smoothed_2.eps}
\caption{Bubble at center, z = 0 cm}
\label{fig:BottomCenter}
\end{subfigure}
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{20150422_bubble_at_z_26_5_smoothed_2.eps}
\caption{Bubble at center, z = 26.5 cm}
\label{fig:TopCenter}
\end{subfigure}
\caption{Digitized sound waves of bubbles generated by the resistor in the center of the dewar. Black points are sound arrival times reconstructed by setting a polarization-independent threshold of 10\% of the noise level in the raw signal. The difference between arrival times can be used to calculate bubble position. Low amplitude activity is visible in all channels at the beginning of the event at the bottom of the dewar. }
\label{fig:TransmissionWaveforms}
\end{figure}
\paragraph{}The transmission of sound through the steel walls of the dewar is studied by comparing data taken at the bottom center of the dewar and the top center. Bubbles generated at the bottom center of the dewar are very close to a steel surface and maximally distant from the sensors. Due to the much higher speed of sound in steel, the fastest path of travel is through the floor and then the wall of the container. Therefore, we expect transmission of sound through the steel to be visible in these traces at the beginning of the event. In contrast, bubbles at the top center are maximally distant from any steel surfaces, so we expect the dominant contribution to sound arrival times to be due to transmission through the liquid nitrogen. In general, sound generated along the $z$-axis is expected to arrive in coplanar channels at the same time, and pairs of channels within an event can be used to cross-check each other.
\begin{table}
\centering
\begin{tabular}[tbp]{c|c|c|c|c}
Position (cm) & Threshold setting & $t_2-t_1$ ($\mu$s)& $t_4 - t_3$ ($\mu$s) & $\Delta\,t_{exp}$\\
\hline
(0, 0, 1) & Low & $-25\pm 27 $ & $7 \pm 23$ & -30 \\
(0, 0, 1) & High & $-153.8 \pm 0.4$ & $-88.2 \pm 0.5$ & -123\\
(0, 0, 26.5) & Low \& High & $17.7 \pm 0.3$ & $82.3 \pm 0.1 $ &54.7\\
\end{tabular}
\caption{Time differences used in the sound transmission analysis. For bubbles in the center of the dewar, we expect $\Delta\,t_{21} \approx \Delta\,t_{43}$. The low threshold is used to detect sound traveling through the steel for events close to the bottom. The high threshold measures the arrival time of sound traveling through the liquid nitrogen. For events far away from the walls, both thresholds measure the same time. }
\end{table}
Two example events from the April 2015 dataset are shown in Figure \ref{fig:TransmissionWaveforms}. To remove noise in each trace, we apply a box filter with a width of 20 samples. The time of arrival of the sound is defined with two different thresholds. In low-threshold operation, the arrival time is defined as the point where the wave crosses a threshold of $\pm4\,\sigma_{noise}$, where $\sigma_{noise}$ is the standard deviation of the first 1000 samples in the smoothed waveform. In high-threshold mode, the arrival time is the point when the signal surpasses 30\% of the maximum height of the signal. The arrival times found in the traces in low-threshold mode are shown in Figure \ref{fig:TransmissionWaveforms} by the black markers. A low-amplitude signal is observed in the leading part of the trace for data taken at $z = 1$~cm, while no such signal is observed in the data taken at $z = 26.5$~cm. To confirm that this signal is sound traveling through the steel, we average together the differences in arrival times between the higher channels (1 and 3) and the lower channels (2 and 4) for all events at each location. The results are shown in Table 1. Uncertainties are calculated as the standard deviation of the measurements across all events.
For the low-threshold setting, we are consistent with the expected time difference due to sound traveling through the steel. When the threshold is raised, the time differences are of the correct order of magnitude when compared to sound traveling through the liquid nitrogen. There is a systematic difference observed in the two pairs of channels, likely caused by the resistor not being directly on the z-axis. We estimate that we have a \textasciitilde2.3 cm systematic uncertainty in our placement of the resistor. This corresponds to as much as a 27~$\mu$s difference in the $\Delta\,t$ between two sensors.
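The expected liquid-path time difference for the top-center bubble can be reproduced from the geometry alone. The sketch below assumes straight-line propagation through the liquid and uses the sensor heights and dewar radius quoted in the text; the exact sensor azimuths drop out for on-axis sources:

```python
import math

C_LN = 853.0            # speed of sound in liquid nitrogen, m/s
R = 0.178               # dewar wall radius, m (17.8 cm)

# Assumed wall-mounted sensor positions: upper pair centered at z = 27.9 cm,
# lower pair at z = 12.7 cm.
upper = (R, 0.0, 0.279)
lower = (R, 0.0, 0.127)

def dt_ln(source, s_upper, s_lower):
    """Arrival-time difference (lower minus upper sensor, seconds) for sound
    travelling in a straight line through the liquid."""
    return (math.dist(source, s_lower) - math.dist(source, s_upper)) / C_LN

bubble_top = (0.0, 0.0, 0.265)                          # bubble on axis, z = 26.5 cm
print(round(dt_ln(bubble_top, upper, lower) * 1e6, 1))  # -> 54.7 (microseconds)
```

This matches the $\Delta\,t_{exp} = 54.7~\mu$s entry in Table 1 for the $(0, 0, 26.5)$ position.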
\begin{figure}[tbp]
\centering
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{Spectrogram_Ch1_tek0005.eps}
\caption{Bubble at center, z = 0 cm}
\label{fig:BottomCenterSpec}
\end{subfigure}
~
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{Spectrogram_Ch1_tek0510ALL.eps}
\caption{Bubble at center, z = 26.5 cm}
\label{fig:TopCenterSpec}
\end{subfigure}
\caption{Spectrograms of the bubble. Frequency bands are visible at approximately 8~kHz, 18~kHz, and 30~kHz. The bubble directly against the bottom of the dewar shows early arriving sound in the higher frequency bands. These frequencies can be filtered to isolate sound traveling primarily through the liquid nitrogen, as explained in the text. }
\label{fig:TransmissionSpectrograms}
\end{figure}
It is also valuable to understand the frequency components of sound traveling through the two media. We compute the discrete short-time Fourier transform of the signals, downsampled to 0.5~MHz, using a Hamming window with a width of 150 samples. Figure \ref{fig:TransmissionSpectrograms} shows the resulting spectrograms for channel 1 of the events shown in Figure \ref{fig:TransmissionWaveforms}. The fundamental frequency of the bubble is evident near \textasciitilde8~kHz. In the bubble at the bottom, there is evidence of early sound at higher frequencies (\textasciitilde18 kHz and \textasciitilde30 kHz). These observations are consistent in all measurements at these locations. We interpret this as evidence that the steel preferentially transmits these higher frequencies.
From this analysis, we draw two conclusions. First, the sound transmission through the steel walls of the dewar is detectable, but has an uncertainty on the order of the time differences themselves which precludes its use in position reconstruction of bubbles. Second, this sound is preferentially at higher frequencies, so it may be possible to isolate sound transmission through the liquid nitrogen by constructing an appropriate band-pass filter. We explore position reconstruction using frequency isolation in the following section.
\subsection{Position reconstruction using TDOA}
\paragraph{}The time difference of arrivals (TDOA) technique is a method for determining the position of a signal transmitter when the initial transmission time is unknown. It has been employed extensively in navigation and precise location of mobile devices. For a fixed speed of sound, the time difference between two sensors receiving a signal defines a hyperboloid in 3D space on which the source can lie. The problem then becomes finding the intersection of the surfaces defined by all the time differences observable with the receivers in the system. The exact solution for the 3D case is laid out in \cite{BucherMisra}. However, in the presence of noise, it is shown in \cite{Gustafsson} that a best-fit approach performs better in the 2D case than the exact solution, and is much more easily implemented. We therefore generalize the latter to three dimensions and employ it here.
We create a least-squares cost function $J({\bf x}_b)$ that is minimized when the measured time difference between each pair of sensors, $\Delta\,t_{ij}$, is closest to the calculated time difference at a guessed bubble position, $\Delta\,t'_{ij}$. The calculation assumes that the sound travels in a straight line through the liquid nitrogen to the sensor.
\begin{equation}
J({\bf x}_b) = \sum_{i = 1}^{3} \sum_{j=i+1}^{4} (\Delta\,t_{ij} - \Delta\,t'_{ij})^2
\end{equation}
The position vector ${\bf x}_b = (x_b, y_b, z_b)$ represents the guessed location of the bubble at any given iteration of the minimization, and is related to the time difference of arrival between sensors $i$ and $j$ by the equation
\begin{equation}
\Delta\,t'_{ij} = \frac{ ||{\bf x}_b - {\bf x}_i || - ||{\bf x}_b - {\bf x}_j || }{c_{LN}}
\end{equation}
where ${\bf x}_i$ and ${\bf x}_j$ are the positions of the sensors and the speed of sound in liquid nitrogen is taken to be $c_{LN} = 853$~m/s \cite{LNSpeed}. The position ${\bf x}_b$ floats in the minimization, subject to the constraint that it remain inside the cylindrical volume of the dewar.
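A minimal numerical sketch of this best-fit approach (illustrative only, not the analysis code used here) is shown below. It generates exact time differences for a synthetic bubble position, then minimizes $J$ with a simple shrinking grid search constrained to the cylinder; the sensor layout and cylinder height are assumed values:

```python
import itertools
import math

C_LN = 853.0                     # speed of sound in liquid nitrogen, m/s
R, H = 0.178, 0.40               # assumed cylinder radius and height, m
# Assumed layout: four wall sensors 90 degrees apart at alternating heights.
SENSORS = [(R, 0.0, 0.279), (0.0, R, 0.127), (-R, 0.0, 0.279), (0.0, -R, 0.127)]

def pair_dts(x):
    """Pairwise time differences Delta t'_ij for a source at x."""
    return [(math.dist(x, SENSORS[i]) - math.dist(x, SENSORS[j])) / C_LN
            for i in range(4) for j in range(i + 1, 4)]

def cost(x, measured):
    """Least-squares cost J(x_b) from the text."""
    return sum((m - p) ** 2 for m, p in zip(measured, pair_dts(x)))

def reconstruct(measured, rounds=6, n=10):
    """Minimize J over the cylinder by a repeatedly refined grid search."""
    lo, hi = [-R, -R, 0.0], [R, R, H]
    best = (0.0, 0.0, H / 2)
    for _ in range(rounds):
        axes = [[lo[d] + (hi[d] - lo[d]) * k / n for k in range(n + 1)]
                for d in range(3)]
        cands = [p for p in itertools.product(*axes)
                 if math.hypot(p[0], p[1]) <= R]        # stay inside the dewar
        best = min(cands, key=lambda p: cost(p, measured))
        step = [(hi[d] - lo[d]) / n for d in range(3)]
        lo = [best[d] - 2 * step[d] for d in range(3)]  # shrink box around best
        hi = [best[d] + 2 * step[d] for d in range(3)]
    return best

true_pos = (0.05, -0.03, 0.21)            # synthetic bubble position
est = reconstruct(pair_dts(true_pos))     # should recover true_pos to ~mm level
```

With noise-free inputs the search converges to the synthetic source; in practice a gradient-based minimizer on measured $\Delta\,t_{ij}$ plays the same role.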
To remove the high-frequency components, the waveforms are first downsampled to a sampling rate of 0.5~MHz. We then employ two Chebyshev band-pass filtering schemes. In the first, a four-pole high-pass filter is applied with a cutoff frequency $f_c = 12.5$~kHz to remove low-frequency components, followed by a four-pole low-pass filter with the same cutoff to attenuate the high frequency components of the sound traveling through the steel walls. In the second scheme, we utilize three two-pole filters in series: high-pass with $f_c = 5$~kHz, low-pass with $f_c = 2.5$~kHz, and another high-pass with $f_c = 5$~kHz.
\begin{figure}[tbp]
\centering
~
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{XY_BestFilter_21cm_WithCylinder_3.eps}
\caption{z=21cm}
\label{fig:topXY}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{XY_BestFilter_10cm_WithCylinder_3.eps}
\caption{z = 10 cm}
\label{fig:MiddleXY}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{XY_BestFilter_1cm_WithCylinder_3.eps}
\caption{z = 0 cm}
\label{fig:BottomXY}
\end{subfigure}
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{RZ_BestFilter_WithCylinder_3.eps}
\caption{Height vs. radius}
\label{fig:RZ}
\end{subfigure}
\caption{Position reconstruction of bubbles at different heights. Reconstructed clusters are self-consistent to within an average $\pm0.5$~cm, and are an average 2.5~cm from the nominal position of the resistor at each location. These systematic offsets are attributed primarily to a systematic uncertainty in the true position of the resistor. The first filtering scheme described in the text is used in (a) and (b), while the second scheme is used in (c). The reconstruction of height and radius is shown in (d), with the wall of the dewar drawn in grey at $r = 17.8$~cm. The nominal resistor position at each location is shown by the cross, while the reconstructed bubble positions are shown by circular points.}
\label{fig:XYPositions}
\end{figure}
We test our position reconstruction on the data from June 2015. It is found that the first filtering scheme works well for bubbles in the bulk liquid. The reconstructed x-y positions are shown in Figures \ref{fig:topXY} and \ref{fig:MiddleXY}. This scheme fails, however, for the measurements at the bottom of the dewar, and the second scheme is applied in Figure \ref{fig:BottomXY}. Close to the walls, it is likely that reflections from the nearby surface and a higher power of sound being transferred through the steel walls affect the frequency components picked up by the sensors, and necessitate a different filter.
We observe very consistent reconstruction, with clusters having an average standard deviation of $< 0.5$~cm in $x$, $y$, and $z$ after throwing away reconstructions that fit to the radial edges of the cylinder. Of the 707 events in the June dataset, 25 are reconstructed to the edges and 5 more are reconstructed farther than $3\,\sigma$ away from their mean position, indicating a convergence rate of $95.8\%$. Systematic offsets from the expected source positions (shown by the crosses in Figure \ref{fig:XYPositions}) are observed in all cases. The blue points at $z=0$ are the only group that we are confident are systematically reconstructed at the wrong position, and represent $33/707 = 4.7\%$ of the events. The remaining clusters are an average 2.51~cm away from the nominal resistor positions, which can be partially explained by a systematic uncertainty in the true position of the bubble source. We estimate this uncertainty to be \textasciitilde$2.3$~cm by studying the reconstructed distance between similar resistor positions in the June and April datasets.
\section{Conclusion}
\paragraph{}We have presented here a demonstration of the use of piezoelectric sensors to detect and locate bubbles in a volume of cryogenic liquid. Our results indicate that bubbles can be detected and positioned reliably within the volume. Sources of bubbling in a cryogenic liquid detector could potentially be discovered and diagnosed using this technique. We also demonstrate that sound traveling through media other than the liquid is detected by the sensors, implying that an application in a working detector will need to appropriately model propagation and design signal filters to account for the effects of internal materials.
To fully reconstruct a 3D position using TDOA information, at least four sensors are needed. However, it is likely that convergence and accuracy of the position reconstruction would be improved by additional sensors. It may be possible to resolve serious systematic offsets by eliminating certain sensors from the measurement or utilizing all sensors to break degeneracies.
In low-background particle physics experiments, the piezoelectric sensors used in this work will not meet the stringent low-radioactivity requirements. However, dark matter search experiments using superheated bubble chambers have successfully built and operated low-background acoustic sensors \cite{COUPP, PICO}. The techniques described in this work can be adapted to such a sensor for these applications.
\acknowledgments
The authors would like to acknowledge Marshall Styczinski and Gavin Fields for preliminary efforts on the present work. We would also like to thank Ray Gerhard, Britt Holbrook, David Hemer, and Keith DeLong for their engineering expertise and support. The sensor readout circuit was designed by Ilan Levine of Indiana University South Bend. This work at the University of California, Davis was supported by U.S. Department of Energy grant DE-FG02-91ER40674, as well as supported by DOE grant DE-NA0000979, which funds the seven universities involved in the Nuclear Science and Security Consortium. Brian Lenardo is supported by the Lawrence Scholars Program at the Lawrence Livermore National Laboratory (LLNL). LLNL is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344.
\section{Somos sequences}
In this note, we deal with Somos $4$ and Somos $6$ sequences. An $(\alpha, \beta)$ Somos $4$ sequence $a_n$ is a sequence such that
$$a_n = \frac{\alpha a_{n-1}a_{n-3} + \beta a_{n-2}^2}{a_{n-4}}, n>4,$$ for appropriate initial values.
An $(\alpha, \beta, \gamma)$ Somos $6$ sequence $a_n$ is a sequence such that
$$a_n = \frac{\alpha a_{n-1}a_{n-5} + \beta a_{n-2} a_{n-4} + \gamma a_{n-3}^2}{a_{n-6}}, n>6,$$ for appropriate initial values.
An $(\alpha, \beta, \gamma, \delta)$ Somos $8$ sequence $a_n$ is a sequence such that
$$a_n = \frac{\alpha a_{n-1}a_{n-7} + \beta a_{n-2} a_{n-6} +\gamma a_{n-3} a_{n-5}+ \delta a_{n-4}^2}{a_{n-8}}, n>8,$$ for appropriate initial values.
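Despite the division in each step, suitable parameters and initial values yield integer sequences. A short script using exact rational arithmetic illustrates this for the classical $(1,1)$ Somos $4$ sequence with unit initial values:

```python
from fractions import Fraction

def somos4(alpha, beta, init, n):
    """First n terms of the (alpha, beta) Somos-4 recurrence, over Q."""
    a = [Fraction(x) for x in init]
    while len(a) < n:
        a.append((alpha * a[-1] * a[-3] + beta * a[-2] ** 2) / a[-4])
    return a

terms = somos4(1, 1, [1, 1, 1, 1], 10)
print([int(t) for t in terms])   # [1, 1, 1, 1, 2, 3, 7, 23, 59, 314]
```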
\section{Riordan arrays} A Riordan array $(g(x), f(x))$ may be defined by two power series
$$g(x)=g_0 + g_1 x + g_2 x^2+ \cdots, g_0 \ne 0,$$ and
$$f(x)=f_1 x+ f_2 x^2+ f_3x^3 + \cdots, f_0=0, f_1 \ne 0,$$ with the coefficients $g_n$ and $f_n$ drawn from an appropriate field (or from an appropriate ring: $\mathbb{Z}$ is the relevant ring for this note). The term ``array'' comes from the matrix representation of a Riordan array, which is the matrix $(t_{n,k})_{0 \le n,k \le \infty}$ where
$$t_{n,k}=[x^n] g(x)f(x)^k.$$ Here, $[x^n]$ is the functional that extracts the coefficient of $x^n$ in a power series. A Riordan array is a lower triangular invertible matrix.
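As a concrete illustration (not taken from the text), the entries $t_{n,k}=[x^n]g(x)f(x)^k$ can be computed by truncated power series multiplication; Pascal's triangle arises as the Riordan array $(1/(1-x),\, x/(1-x))$:

```python
def poly_mul(p, q, N):
    """Product of two coefficient lists, truncated to N coefficients."""
    r = [0] * N
    for i, pi in enumerate(p[:N]):
        if pi:
            for j, qj in enumerate(q[:N - i]):
                r[i + j] += pi * qj
    return r

def riordan(g, f, N):
    """N x N matrix with entries t[n][k] = [x^n] g(x) f(x)^k (here f[0] == 0)."""
    col = g[:N] + [0] * (N - len(g[:N]))
    cols = []
    for _ in range(N):
        cols.append(col)              # column k holds the coefficients of g f^k
        col = poly_mul(col, f, N)
    return [[cols[k][n] for k in range(N)] for n in range(N)]

# 1/(1-x) -> [1,1,1,...] and x/(1-x) -> [0,1,1,...]
N = 5
P = riordan([1] * N, [0] + [1] * (N - 1), N)
for row in P:
    print(row)                        # rows of Pascal's triangle
```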
A Riordan array $(g(x), f(x))$ acts on a power series $h(x)$ by the action (weighted composition)
$$(g(x), f(x))h(x)= g(x)h(f(x)).$$
In matrix terms, this is equivalent to multiplying the vector $(h_n)$, where $h(x)=\sum_{n=0}^{\infty} h_n x^n$, by the matrix $(t_{n,k})$.
The matrix representation of pairs of the form $(g(x), x^r f(x))$, where $f(x)$ is as above and the integer $r>0$, gives what are called ``stretched'' Riordan arrays. An example has already been met in Example (\ref{ex1}).
\section{Jacobi continued fractions and Hankel transforms}
A continued fraction of the form
$$g(x)=\cfrac{1}{1- a_0 x- \cfrac{b_0 x^2}{1-a_1 x - \cfrac{b_1 x^2}{1- a_2 x-\cdots}}}$$ is called a Jacobi continued fraction. We note that the Hankel transform $h_n=|g_{i+j}|_{0 \le i,j \le n}$ of the expansion of such a continued fraction will be given by $$h_n = \prod_{k=0}^n b_k^{n-k}.$$ This is independent of the coefficients $a_n$.
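This independence can be checked numerically. The sketch below (illustrative only) expands a Jacobi continued fraction into a truncated power series and evaluates Hankel determinants exactly; taking all $a_k=b_k=1$ gives the Motzkin numbers, whose Hankel transform is identically $1$, as predicted by $h_n=\prod_{k=0}^n b_k^{n-k}$:

```python
from fractions import Fraction

def series_inv(p, N):
    """Reciprocal of a power series p (p[0] != 0), to N coefficients."""
    inv = [Fraction(1) / p[0]] + [Fraction(0)] * (N - 1)
    for n in range(1, N):
        inv[n] = -inv[0] * sum(p[k] * inv[n - k] for k in range(1, n + 1))
    return inv

def jacobi(a, b, N):
    """First N coefficients of the J-fraction with coefficients a_k, b_k."""
    g = [Fraction(0)] * N
    for ak, bk in zip(reversed(a[:N]), reversed(b[:N])):
        den = [Fraction(1), Fraction(-ak)] + [Fraction(0)] * (N - 2)
        for n in range(N - 2):
            den[n + 2] -= bk * g[n]          # den = 1 - a_k x - b_k x^2 g(x)
        g = series_inv(den, N)
    return g

def hankel(seq, n):
    """Determinant of the (n+1) x (n+1) matrix (seq[i+j]), by Gaussian elimination."""
    m = [[Fraction(seq[i + j]) for j in range(n + 1)] for i in range(n + 1)]
    det = Fraction(1)
    for c in range(n + 1):
        piv = next((r for r in range(c, n + 1) if m[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det
        det *= m[c][c]
        for r in range(c + 1, n + 1):
            f = m[r][c] / m[c][c]
            for cc in range(c, n + 1):
                m[r][cc] -= f * m[c][cc]
    return det

N = 8
motzkin = jacobi([1] * N, [1] * N, N)
print([int(c) for c in motzkin])             # [1, 1, 2, 4, 9, 21, 51, 127]
print([hankel(motzkin, k) for k in range(4)])  # all 1s
```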
\section{Generalized Jacobi continued fractions and Somos $4$ sequences}
In this section, we present, in increasing order of complexity, conjectures concerning generalized Jacobi continued fractions and Somos $4$ sequences. The goal is to produce integer sequences whose Hankel transforms are $(\alpha, \beta)$ Somos $4$ sequences.
\begin{conjecture} We consider the generalized Jacobi continued fraction
$$g(x)=\frac{1}{1-\frac{1+rx}{1-x} x - sx^2 g(x)}.$$ Then we can express $g(x)$ as
$$g(x)=\frac{1-x}{1-2x-rx^2}c\left(\frac{sx^2(1-x)^2}{(1-2x-rx^2)^2}\right).$$
The Hankel transform $h_n$ of the sequence $g_n$ is given by
$$h_n = s^{\lfloor \frac{n^2}{4} \rfloor}(r+s+1)^{\lfloor \frac{(n+1)^2}{4} \rfloor}.$$
Then $h_n$ is a $(0, s^2(r+s+1)^2)$ Somos $4$ sequence.
\end{conjecture}
The generating function $g(x)$ is obtained by applying the (stretched) Riordan array
$$\left(\frac{1-x}{1-2x-rx^2}, \frac{sx^2(1-x)^2}{(1-2x-rx^2)^2}\right)$$ to the generating function $c(x)$ of the Catalan numbers.
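For this conjecture the Somos $4$ relation actually follows from the closed form for $h_n$, since $\lfloor n^2/4\rfloor+\lfloor (n-4)^2/4\rfloor = 2+2\lfloor (n-2)^2/4\rfloor$ (and likewise for the shifted exponents of $r+s+1$). A quick numerical confirmation over a range of parameters:

```python
def h(n, r, s):
    # closed form for the Hankel transform from Conjecture 4.1
    return s ** (n * n // 4) * (r + s + 1) ** ((n + 1) ** 2 // 4)

# check h_n * h_{n-4} == beta * h_{n-2}^2 with beta = s^2 (r+s+1)^2 (alpha = 0)
for r in range(-2, 4):
    for s in range(1, 5):
        beta = s ** 2 * (r + s + 1) ** 2
        for n in range(4, 16):
            assert h(n, r, s) * h(n - 4, r, s) == beta * h(n - 2, r, s) ** 2
```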
\begin{conjecture} We consider the generalized Jacobi continued fraction
$$g(x)=\frac{1}{1-\frac{1+rx}{1-x} x - \frac{sx^2}{1-x} g(x)}.$$ We can express $g(x)$ as
$$g(x)=\frac{1-x}{1-2x-rx^2}c\left(\frac{sx^2(1-x)}{(1-2x-rx^2)^2}\right).$$
The Hankel transform $h_n$ of the sequence $g_n$ is a $(s^2, s^2(r+(r+s)^2))$ Somos $4$ sequence.
\end{conjecture}
\begin{conjecture} We consider the generalized Jacobi continued fraction
$$g(x)=\frac{1}{1-\frac{1+rx}{1-x} x - \frac{1+sx}{1-x} x^2g(x)}.$$ We can express $g(x)$ as
$$g(x)=\frac{1-x}{1-2x-rx^2}c\left(\frac{x^2(1-x)(1+sx)}{(1-2x-rx^2)^2}\right).$$
The Hankel transform $h_n$ of the sequence $g_n$ is a $((s+1)^2,(1+r^2-6s-3s^2-r(s^2+2s-3)))$ Somos $4$ sequence.
\end{conjecture}
The last two conjectures are encompassed in the following more general conjecture.
\begin{conjecture} We consider the generalized Jacobi continued fraction
$$g(x)=\frac{1}{1-v\frac{1+rx}{1-x} x - w\frac{1+sx}{1-x} x^2g(x)}.$$ Then we can express $g(x)$ as
$$\frac{1-x}{1-(v+1)x-rx^2}c\left(\frac{wx^2(1-x)(1+sx)}{(1-(v+1)x-rx^2)^2}\right).$$
The Hankel transform $h_n$ of $g_n$ is an $(\alpha, \beta)$ Somos $4$ sequence with parameters
$$\alpha = (s+v)^2 w^2,$$ and
$$\beta=w^2(r^2v^2+w(w+v-v^2)+rv(v+2w)-s^2(v(r+1)+2w)-s((r+1)v^2+w+v(r+1+3w))).$$
\end{conjecture}
\section{Generalized Jacobi continued fractions and Somos $6$ sequences}
We now extend the ideas of the last section to formulate some conjectures concerning the Hankel transform of integer sequences and Somos $6$ sequences of type $(\alpha,0,\gamma)$. We start with an illustrative example.
\begin{example} We consider the generalized Jacobi continued fraction
$$g(x)=\frac{1}{1-x \frac{1+3x}{1-x}+ x^2 \frac{1+2x}{1-x}-x^3 g(x)},$$ which has closed form
$$g(x)=\frac{1-2x-2x^2+2x^3-\sqrt{1-4x+8x^3+4x^4-12x^5+4x^6}}{2x^3(1-x)}$$ or in Catalan form, as
$$\frac{1-x}{1-2x-2x^2+2x^3}c\left(\frac{x^3(1-x)^2}{(1-2x-2x^2+2x^3)^2}\right).$$
The generating function $g(x)$ expands to give the sequence $g_n$ that begins
$$1,1,4,9,25,67,183,\ldots$$ whose Hankel transform $h_n$ begins
$$1,3,2,-23,-231,-1987,-41482,\ldots.$$ We then conjecture that $h_n$ is a $(9,0,23)$ Somos $6$ sequence, that is, we have
$$h_n = \frac{9 h_{n-1} h_{n-5}+23 h_{n-3}^2}{h_{n-6}}, \quad n>6.$$
\end{example}
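The stated relation can be checked against the listed Hankel values; working over the rationals, the $(9,0,23)$ recurrence indeed reproduces $-41482$ as the seventh term:

```python
from fractions import Fraction

def extend_somos6(alpha, beta, gamma, init, n):
    """Extend init by the (alpha, beta, gamma) Somos-6 recurrence, over Q."""
    a = [Fraction(x) for x in init]
    while len(a) < n:
        a.append((alpha * a[-1] * a[-5] + beta * a[-2] * a[-4]
                  + gamma * a[-3] ** 2) / a[-6])
    return a

h = extend_somos6(9, 0, 23, [1, 3, 2, -23, -231, -1987], 8)
print(h[6], h[7])   # -41482 160209
```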
\begin{conjecture} We consider the generalized Jacobi continued fraction
$$g(x)=\frac{1}{1-x\frac{1+rx}{1-x}-x^2\frac{1+sx}{1-x}-tx^3 g(x)}.$$ We have that
$$g(x)=\frac{1-2x-(r+1)x^2-sx^3-\sqrt{Q(x,r,s,t)}}{2tx^3(1-x)},$$ where
$$Q(x,r,s,t)=1-4x-2(r-1)x^2+2(2r-s-2(t-1))x^3+(r^2+2r+4s+8t+1)x^4$$
$$\quad\quad\quad +2(rs+s-2t)x^5+s^2x^6.$$
In Catalan form, we have
$$g(x)=\frac{1-x}{1-2x-(r+1)x^2-sx^3}c\left(\frac{tx^3(1-x)^2}{(1-2x-(r+1)x^2-sx^3)^2}\right).$$
We conjecture that the Hankel transform $h_n$ of $g_n$ is an $(\alpha, 0,\gamma)$ Somos $6$ sequence with parameters
\begin{align*}
\alpha &=t^2(r+2)^2\\
\gamma &={\scriptstyle t^3(r^3t+r^2(s+7t)+2r(s^2+2(t+1)s+t(t+8))+s^3+s^2(3t+4)+s(t+2)(3t+2)+t(t^2+4t+12))}.\end{align*}
\end{conjecture}
The Riordan array
$$\left(\frac{1-x}{1-2x-(r+1)x^2-sx^3}, \frac{tx(1-x)^2}{(1-2x-(r+1)x^2-sx^3)^2}\right)$$ has its general term $t_{n,k}=t_{n,k}(r,s,t)$ given by
$$t^k \sum_{j=0}^{2k+1}\binom{2k+1}{j}(-1)^j \sum_{i=0}^{n-k-j}\binom{2k+i}{i}\sum_{m=0}^i 2^{i-m}\binom{m}{n-j-i-m} s^{n-k-j-i-m}(r+1)^{2m-n+k+j+i}.$$
Then we have
$$g_n=\sum_{k=0}^{\lfloor \frac{n}{3} \rfloor} t_{n-2k,k}C_k.$$
Assuming the validity of the conjecture, this then gives us a mechanism for producing a three parameter family of integer sequences whose Hankel transforms are Somos $6$ sequences.
\begin{example} The sequence $g_n(-2,-2,1)=\sum_{k=0}^{\lfloor \frac{n}{3} \rfloor} t_{n-2k,k}(-2,-2,1)C_k$ begins $$1, 1, 0, -3, -7, -9, -5, 8, 32, 71, 129, 187, 153, \ldots.$$
This has a Hankel transform that begins
$$1, -1, -2, 5, 17, -3, -122, 1201, -2980,\ldots.$$ The conjecture is that this is a $(1,0,-5)$ Somos $6$ sequence.
\end{example}
\begin{example} Taking $r=-3, s=0, t=-1$ we obtain a sequence that begins
$$1, 1, 0, -3, -7, -7, 7, 42, 78, 35, -217, -695, -907, 523, \ldots.$$ The Hankel transform of this sequence begins $$1, -1, -2, -3, 11, 23, 4, -355, -1326,\ldots.$$ The conjecture is that this is a $(1,0,3)$ Somos $6$ sequence.
\end{example}
We can add two more parameters by considering
$$g_n(r,s,t,u,v)=\sum_{k=0}^{\lfloor \frac{n}{3} \rfloor} t_{n-2k,k}(r,s,t)u^k v^{n-2k}C_k.$$
Again, we conjecture that the Hankel transform of the sequence $g_n$ is a Somos $6$ sequence.
\begin{example}
We take $r=s=t=1$ and $u=2, v=-1$, to obtain the sequence $g_n$ that begins
$$1, -1, 3, -10, 26, -75, 224, -659, 1979, -6025, 18452, -57028, 177625, \ldots,$$ with a Hankel transform $h_n$ that begins
$$1, 2, -15, -182, -4864, 85976, 26865504, 5387832064, 687205582336,\ldots.$$
We conjecture that this is a $(16,0,728)$ Somos $6$ sequence.
\end{example}
\section{Conjectures on integer Somos $8$ sequences}
The classical Somos $8$ sequence beginning $1,1,1,1,1,1,1,1,\ldots$ with parameters $(1,1,1,1)$ is not an integer sequence \cite{Laurent}. In fact, it begins
$$1,1,1,1,1,1,1,1,4,7,13,25,61,187,775,5827,14815,\frac{420514}{7},\frac{28670773}{91},\frac{6905822101}{2275},\ldots.$$
In this section, using Hankel transforms of integer sequences, we conjecture the form of infinite families of integer Somos $8$ sequences. In general, the $(\alpha, \beta, \gamma, \delta)$ parameters are rational numbers. This is the content of the following conjectures. The integrality of the sequences arises as we consider the Hankel transforms of integer sequences.
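The failure of integrality for the classical Somos $8$ sequence is easy to reproduce with exact rational arithmetic; the first non-integer term is the eighteenth:

```python
from fractions import Fraction

def somos8(n):
    """First n terms of the (1,1,1,1) Somos-8 recurrence, unit initial values."""
    a = [Fraction(1)] * 8
    while len(a) < n:
        a.append((a[-1] * a[-7] + a[-2] * a[-6]
                  + a[-3] * a[-5] + a[-4] ** 2) / a[-8])
    return a

seq = somos8(18)
print(seq[16], seq[17])   # 14815 420514/7
```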
\begin{conjecture} We consider the continued fraction
$$g(x)=\frac{1}{1-\frac{x}{1-rx}-x^2-x^3 g(x)}.$$
We have
$$g(x)=\frac{1-rx}{1-(r+1)x-x^2+rx^3}c\left(\frac{x^3(1-rx)^2}{(1-(r+1)x-x^2+rx^3)^2}\right).$$
We have \begin{scriptsize}
$$g_n=\sum_{k=0}^n (\sum_{j=0}^{2k+1} \binom{2k+1}{j}(-r)^j \sum_{i=0}^{n-3k} \binom{2k+i}{i}\sum_{m=0}^i \binom{i}{m}(r+1)^{i-m} \binom{m}{n-3k-j-i-m}(-r)^{n-3k-j-i-m})C_k.$$
\end{scriptsize}
The Hankel transform $h_n$ of the sequence $g_n$ is an integer $(\alpha, \beta, \gamma, \delta)$ Somos $8$ sequence with
\begin{align*}
\alpha&=-\frac{-r^8+8 r^7-21 r^6+40 r^5-35 r^4+24 r^3-71 r^2-8 r}{r^4-2 r^3+8 r^2+2 r-9}\\
\beta&=\frac{8 \left(r^9-6 r^8+17 r^7-30 r^6+15 r^5-14 r^4-r^3-14 r^2\right)}{r^4-2 r^3+8 r^2+2 r-9}\\
\gamma&=\frac{8 \left(r^{10}-2 r^8+29 r^7-32 r^6+39 r^5+18 r^4+11 r^3-r^2+r\right)}{r^3-r^2+7 r+9}\\
\delta&={\scriptstyle -\frac{-2 r^{13}+13 r^{12}-48 r^{11}+85 r^{10}-83 r^9+11 r^8-124 r^7+454 r^6-364 r^5+263 r^4+84 r^3+189 r^2+25 r+9}{r^4-2 r^3+8 r^2+2 r-9}}.\end{align*}
\end{conjecture}
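As a consistency check, the continued fraction and the closed form in the conjecture above can both be expanded with exact rational arithmetic and compared to any finite order. Using $c(w)=1+w\,c(w)^2$, one can check that the closed form is equivalent to the quadratic relation $(1-(r+1)x-x^2+rx^3)\,g=(1-rx)+x^3(1-rx)\,g^2$. The following Python sketch does this comparison; the sample value $r=2$ and the truncation order are arbitrary choices made for this check.

```python
from fractions import Fraction

M = 15           # truncation order of the power series
r = Fraction(2)  # arbitrary sample value of the parameter r

def poly(*cs):
    return [Fraction(c) for c in cs] + [Fraction(0)] * (M - len(cs))

def add(a, b): return [a[i] + b[i] for i in range(M)]
def sub(a, b): return [a[i] - b[i] for i in range(M)]

def mul(a, b):
    c = [Fraction(0)] * M
    for i in range(M):
        if a[i]:
            for j in range(M - i):
                c[i + j] += a[i] * b[j]
    return c

def inv(a):
    # reciprocal power series; requires a[0] != 0
    b = [Fraction(0)] * M
    b[0] = 1 / a[0]
    for n in range(1, M):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1)) / a[0]
    return b

x, x2, x3 = poly(0, 1), poly(0, 0, 1), poly(0, 0, 0, 1)

# g from the continued fraction g = 1/(1 - x/(1-rx) - x^2 - x^3 g)
g_cf = poly(1)
for _ in range(M):
    denom = sub(sub(sub(poly(1), mul(x, inv(poly(1, -r)))), x2), mul(x3, g_cf))
    g_cf = inv(denom)

# g from the closed form, rewritten as D g = (1-rx) + x^3 (1-rx) g^2
one_rx = poly(1, -r)
D = poly(1, -(r + 1), -1, r)
g_cl = poly(1)
for _ in range(M):
    g_cl = mul(add(one_rx, mul(mul(x3, one_rx), mul(g_cl, g_cl))), inv(D))

print(g_cf[:5])  # first coefficients at r = 2
```

Both expansions agree to the truncation order, beginning $1,1,4,12,38,\ldots$ at $r=2$.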
\begin{conjecture}
We consider the continued fraction
$$g(x)=\frac{1}{1-x-\frac{x^2}{1-rx}-x^3g(x)}.$$
We have
$$g(x)=\frac{1-rx}{1-(r+1)x+(r-1)x^2}c\left(\frac{x^3(1-rx)^2}{(1-(r+1)x+(r-1)x^2)^2}\right).$$
Then \begin{scriptsize}
$$g_n=\sum_{k=0}^n (\sum_{j=0}^{2k+1}\binom{2k+1}{j}(-r)^j \sum_{i=0}^{n-3k} \binom{2k+i}{i}\binom{i}{n-3k-j-i}(1-r)^{n-3k-j-i}(r+1)^{2i-n+3k+j})C_k.$$
\end{scriptsize}
The Hankel transform $h_n$ of the sequence $g_n$ is an integer $(\alpha, \beta, \gamma, \delta)$ Somos $8$ sequence with
\begin{align*}
\alpha &=-\frac{-r^4+11 r^3-26 r^2+16 r+5}{r^2-4 r+3}\\
\beta &=-\frac{-2 r^5+19 r^4-40 r^3+13 r^2+5 r}{r^2-4 r+3}\\
\gamma &=-\frac{-3 r^6+12 r^5-15 r^4-25 r^3+62 r^2+36 r+5}{r-3}\\
\delta &=-\frac{-r^9+8 r^8-26 r^7+43 r^6-40 r^5+17 r^4+23 r^3-27 r^2-19 r-3}{r^2-4 r+3}.\end{align*}
\end{conjecture}
\begin{conjecture} We consider the continued fraction
$$g(x)=\frac{1}{1-\frac{1-(r-1)x}{1-rx}x-x^2-x^3 g(x)}.$$
We have
$$g(x)=\frac{1-rx}{1+(r+1)x+(r-2)x^2+rx^3}c\left(\frac{x^3 (1-rx)^2}{(1+(r+1)x+(r-2)x^2+rx^3)^2}\right).$$
Then \begin{scriptsize}
$$g_n=\sum_{k=0}^n (\sum_{j=0}^{2k+1} \binom{2k+1}{j}(-r)^j \sum_{i=0}^{n-3k} \sum_{m=0}^i \binom{i}{m}(-1)^m (r+1)^{i-m} \binom{m}{n-3k-j-i-m}r^{n-3k-j-i-m}(r-2)^{2m-n+3k+j+i})C_k.$$
\end{scriptsize}
The Hankel transform $h_n$ of the sequence $g_n$ is an integer $(\alpha, \beta, \gamma, \delta)$ Somos $8$ sequence with
\begin{align*}
\alpha &=-\frac{r^7-8 r^6+25 r^5-20 r^4-37 r^3+75 r+28}{2 \left(r^3-3 r^2-5 r+7\right)}\\
\beta &=\frac{(r+1) \left(r^8-11 r^7+47 r^6-83 r^5+17 r^4+71 r^3+45 r^2-169 r+210\right)}{2 \left(r^3-3 r^2-5 r+7\right)}\\
\gamma &=\frac{(r^2-1) \left(3 r^8-29 r^7+115 r^6-225 r^5+181 r^4+105 r^3-255 r^2-235 r+84\right)}{2 \left(r^3-3r^2-5r+7\right)}\\
\delta &=-\frac{r^{10}-17 r^9+96 r^8-212 r^7+54 r^6+594 r^5-796 r^4-36 r^3+721 r^2-329 r-588}{2 \left(r^3-3 r^2-5 r+7\right)}.\end{align*}
\end{conjecture}
\begin{example} The conjectures above are not exhaustive. We consider, for instance, the continued fraction
$$g(x)=\frac{1}{1-\frac{x}{1-\frac{x}{1-3x}}-x^3 g(x)}=\frac{1}{1-\frac{x(1-3x)}{1-4x}-x^3 g(x)}.$$
We find that
$$g(x)=\frac{1-5x+3x^2-\sqrt{1-10x+31x^2-34x^3+41x^4-64x^5}}{2x^3(1-4x)},$$ or equivalently,
$$g(x)=\frac{1-4x}{1-5x+3x^2}c\left(\frac{x^3(1-4x)^2}{(1-5x+3x^2)^2}\right).$$
This expands to give the integer sequence $g_n$ that begins
$$1, 1, 2, 8, 32, 133, 569, 2450, 10569, 45643, 197206, 852239, 3683553,\ldots.$$
This has a Hankel transform $h_n$ that begins
$$1, 1, -8, -161, -1333, 631, 1570896, 194685449, 8871803329, -1552662557863, \ldots.$$
We now conjecture that the integer sequence $h_n$ is a $\left(-\frac{101}{3}, -\frac{484}{3}, 4299, \frac{23359}{3}\right)$ Somos $8$ sequence.
\end{example}
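This example can be tested to the order shown with exact arithmetic. Via $c(w)=1+w\,c(w)^2$, the closed form above is equivalent to $(1-5x+3x^2)g=(1-4x)+x^3(1-4x)g^2$, which yields a recurrence for the coefficients $g_n$. The Hankel determinants and the Somos $8$ relation, with the usual convention $h_n h_{n-8}=\alpha h_{n-1}h_{n-7}+\beta h_{n-2}h_{n-6}+\gamma h_{n-3}h_{n-5}+\delta h_{n-4}^2$, can then be checked directly. A Python sketch:

```python
from fractions import Fraction

M = 19  # g_0, ..., g_18: enough for the Hankel determinants h_0, ..., h_9

# coefficient recurrence from (1-5x+3x^2) g = (1-4x) + x^3 (1-4x) g^2
g = []
for n in range(M):
    def sq(t):
        return sum(g[i] * g[t - i] for i in range(t + 1)) if t >= 0 else 0
    val = (1 if n == 0 else 0) - (4 if n == 1 else 0)
    if n >= 1: val += 5 * g[n - 1]
    if n >= 2: val -= 3 * g[n - 2]
    val += sq(n - 3) - 4 * sq(n - 4)
    g.append(val)

def det(mat):
    # exact determinant via Gaussian elimination over the rationals
    m = [[Fraction(v) for v in row] for row in mat]
    size, sign, d = len(m), 1, Fraction(1)
    for c in range(size):
        piv = next((r for r in range(c, size) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        d *= m[c][c]
        for r in range(c + 1, size):
            f = m[r][c] / m[c][c]
            for k in range(c, size):
                m[r][k] -= f * m[c][k]
    return sign * d

h = [det([[g[i + j] for j in range(n + 1)] for i in range(n + 1)])
     for n in range(10)]
print([int(v) for v in h])

# Somos 8 relation with the conjectured parameters
al, be, ga, de = Fraction(-101, 3), Fraction(-484, 3), Fraction(4299), Fraction(23359, 3)
for n in (8, 9):
    assert h[n] * h[n - 8] == (al * h[n - 1] * h[n - 7] + be * h[n - 2] * h[n - 6]
                               + ga * h[n - 3] * h[n - 5] + de * h[n - 4] ** 2)
```

The computed determinants reproduce the values listed above, and the Somos $8$ relation holds at $n=8,9$, the only instances available at this truncation order.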
\begin{example} We consider the generating function $g(x)$ defined by
$$g(x)=\frac{1}{1-x-\frac{x^2}{1-\frac{x}{1-x}}-x^3 g(x)}=\frac{1}{1-x-\frac{x^2(1-x)}{1-2x}-x^3 g(x)}.$$
We have
$$g(x)=\frac{1-3x+x^2+x^3-\sqrt{1-6x+11x^2-8x^3+11x^4-14x^5+x^6}}{2x^3(1-2x)},$$ or equivalently,
$$g(x)=\frac{1-2x}{1-3x+x^2+x^3}c\left(\frac{x^3(1-2x)^2}{(1-3x+x^2+x^3)^2}\right).$$
The sequence $g_n$ begins
$$1, 1, 2, 5, 12, 30, 77, 199, 518, 1357, 3572, 9443, 25064,\ldots,$$ with a Hankel transform that
begins
$$1, 1, -1, -4, -8, -13, 57, 241, 1093, 792, -30661, -246182,\ldots.$$
We conjecture that this integer sequence is a $\left(\frac{1}{2}, -\frac{5}{2}, \frac{11}{2}, \frac{17}{2}\right)$ Somos $8$ sequence.
\end{example}
\section{Polynomial sequences}
We note that the parameterized sequences in the last section, whose Hankel transforms are conjectured to be Somos $8$ sequences, are in fact sequences of polynomials.
\begin{example} We consider the polynomial sequence (in $r$) with generating function
$$\frac{1-rx}{1-(r+1)x+(r-1)x^2}c\left(\frac{x^3(1-rx)^2}{(1-(r+1)x+(r-1)x^2)^2}\right).$$
This expands to give the polynomial sequence that begins
$$1, 1, 2, 4 + r, 8 + 2 r + r^2, 17 + 5 r + 2 r^2 + r^3, 37 + 13 r + 6 r^2 + 2 r^3 + r^4,$$
$$\quad\quad\quad 82 + 32 r + 16 r^2 + 7 r^3 + 2 r^4 + r^5,\ldots.$$
The coefficient array of this polynomial sequence begins
$$\left(
\begin{array}{ccccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
4 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
8 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
17 & 5 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
37 & 13 & 6 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
82 & 32 & 16 & 7 & 2 & 1 & 0 & 0 & 0 & 0 & 0 \\
185 & 80 & 41 & 19 & 8 & 2 & 1 & 0 & 0 & 0 & 0 \\
423 & 201 & 108 & 51 & 22 & 9 & 2 & 1 & 0 & 0 & 0 \\
978 & 505 & 282 & 140 & 62 & 25 & 10 & 2 & 1 & 0 & 0 \\
\end{array}
\right).$$
We note that the initial column of this array,
$$1, 1, 2, 4, 8, 17, 37, 82, 185, 423, 978,\ldots,$$
is the RNA sequence \seqnum{A004148}. The Hankel transform of the above polynomial sequence then begins
$$1, 1, -2 r, -1 - 4 r - r^2 + 2 r^3 - r^4, -1 - 5 r - 6 r^2 + r^3 - 5 r^4 + 4 r^5 - r^6,$$
$$\quad \quad \quad \quad -1 - 6 r - 7 r^2 + 12 r^3 + 12 r^4 + r^5 - 5 r^6 + r^7,\ldots.$$
\end{example}
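The polynomial Hankel transform above can be computed exactly by working with integer polynomials in $r$. As before, via $c(w)=1+w\,c(w)^2$ the generating function satisfies $(1-(r+1)x+(r-1)x^2)\,g=(1-rx)+x^3(1-rx)\,g^2$, giving a coefficient recurrence; the Hankel determinants are then determinants of small matrices of polynomials. A sketch:

```python
# polynomials in r as integer coefficient lists, index = power of r
def trim(p):
    while p and p[-1] == 0: p.pop()
    return p

def padd(a, b):
    return trim([(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(max(len(a), len(b)))])

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return trim(out)

def pneg(a): return [-v for v in a]

R, Rp1, Rm1 = [0, 1], [1, 1], [-1, 1]   # r, r+1, r-1

# recurrence from (1-(r+1)x+(r-1)x^2) g = (1-rx) + x^3 (1-rx) g^2
g = []
for n in range(11):
    def sq(t):
        out = []
        for i in range(t + 1):
            out = padd(out, pmul(g[i], g[t - i]))
        return out
    val = [1] if n == 0 else ([0, -1] if n == 1 else [])
    if n >= 1: val = padd(val, pmul(Rp1, g[n - 1]))
    if n >= 2: val = padd(val, pneg(pmul(Rm1, g[n - 2])))
    if n >= 3: val = padd(val, sq(n - 3))
    if n >= 4: val = padd(val, pneg(pmul(R, sq(n - 4))))
    g.append(val)

def pdet(m):
    # determinant of a matrix of polynomials, cofactor expansion
    if len(m) == 1:
        return m[0][0]
    out = []
    for j in range(len(m)):
        t = pmul(m[0][j], pdet([row[:j] + row[j + 1:] for row in m[1:]]))
        out = padd(out, t if j % 2 == 0 else pneg(t))
    return out

h = [pdet([[g[i + j] for j in range(n + 1)] for i in range(n + 1)])
     for n in range(4)]
print(g[3], g[5], h[2], h[3])
```

Evaluating the constant terms (that is, setting $r=0$) recovers the RNA sequence, and $h_2$, $h_3$ reproduce the polynomials $-2r$ and $-1-4r-r^2+2r^3-r^4$ quoted above.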
\section{Conclusion} In this paper, we have conjectured that a combination of generating functions defined by generalized Jacobi continued fractions, Riordan arrays, Catalan numbers, and the sequence Hankel transform can be a fruitful context within which to explore Somos $4$, $6$ and $8$ sequences. It is noteworthy that if the conjectures about Somos $8$ sequences are true, then we can produce infinite families of integer Somos $8$ sequences.
\section{Introduction}
The maximum likelihood (ML) decoding of channel codes
can be viewed as a computational task of finding the global minimum of an energy (objective) function.
The belief propagation (BP) algorithm~\cite{Pearl88}
is the most powerful one recognized by the information theory community~\cite{Mackay99}
to accomplish the task.
The new generation channel codes such as Turbo codes and low-density parity-check (LDPC) codes
combined with BP decoding can achieve remarkable performance close to the Shannon limit.
The {\it a posteriori} probability (APP) algorithm~\cite{Fossorier99} is a simplified variation of the BP algorithm.
Given a multivariate energy function $E(x_1, x_2, \ldots, x_n)$ of the following form
\[ E(x_1, x_2, \ldots, x_n) = \sum_i \left (e_i (x_i) + \sum_{j, j < i} e_{ij} (x_i, x_j) \right) \ , \]
where the binary component functions $e_{ij}(x_i, x_j)$ are assumed to be symmetric,
i.e., $e_{ij}(x_i, x_j) = e_{ji}(x_j, x_i)$ for any $i, j$,
the APP algorithm can be applied to find an approximate solution that minimizes the energy function.
It is based on a method of updating and passing $n$ messages, $\psi_i(x_i, t)$ for $i=1,2,\ldots,n$,
in an iterative way as follows,
\begin{eqnarray}
\lefteqn{ \psi_i(x_i, t+1) = \frac{1}{Z_i(t+1)} e^{-e_i(x_i) /\hbar} \cdot } \nonumber \\
& & \prod_{j \not = i} \left( \sum_{x_j} e^{- e_{ij}(x_i, x_j) /\hbar} \psi_j(x_j, t) \right),
\label{app_algorithm}
\end{eqnarray}
where $\hbar$ is a positive constant, related to the channel characteristics in channel decoding.
$Z_i(t+1)$ is a normalization factor at time $t+1$, such that
\[ \sum_{x_i} \psi_i(x_i, t+1) = 1, \quad \mbox{for $i=1,2,\ldots, n$} \ . \]
The message $\psi_i(x_i, t)$ is a soft-decision for assigning variable $x_i$ at time $t$.
It is a real-valued, non-negative function called the soft-assignment function in this paper.
It measures in a quantitative way the preferences over different values of $x_i$
for minimizing the energy function.
The best candidate value for assigning $x_i$ at time $t$ is the one of the highest function value $\psi_i(x_i, t)$.
Oftentimes, when decoding channel codes,
each soft-assignment function $\psi_i(x_i, t)$
becomes progressively peaked at one variable value while the rest
are reduced to zero as the iteration proceeds.
That is to say, the algorithm eventually decides on a unique value for each variable in those instances.
The density evolution~\cite{Richardson01} is a powerful technique
invented by the information theory community
to understand and analyze this kind of process.
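The update rule~(\ref{app_algorithm}) can be exercised on a small example. The toy energy function below (three binary variables, with hypothetical unary and symmetric pairwise terms chosen for this sketch so that the unary terms dominate) has a unique global minimum, and on it the messages peak at the minimizer after a few iterations; no such guarantee exists in general.

```python
import itertools
import math

hbar = 1.0
n, dom = 3, (0, 1)
target = (0, 1, 0)

def e_i(i, x):
    # unary terms: bias each variable toward target[i] (toy choice)
    return 0.0 if x == target[i] else 1.5

def e_ij(i, j, x, y):
    # symmetric pairwise terms, e_ij(x, y) = e_ji(y, x) (toy choice)
    return 0.0 if x == y else 0.5

def energy(cfg):
    u = sum(e_i(i, cfg[i]) for i in range(n))
    p = sum(e_ij(i, j, cfg[i], cfg[j]) for i in range(n) for j in range(i))
    return u + p

# APP iteration: psi_i(x, t+1) prop. to exp(-e_i(x)/hbar) *
#   prod_{j != i} sum_y exp(-e_ij(x, y)/hbar) psi_j(y, t)
psi = [{x: 1.0 / len(dom) for x in dom} for _ in range(n)]
for _ in range(50):
    new = []
    for i in range(n):
        msg = {}
        for x in dom:
            v = math.exp(-e_i(i, x) / hbar)
            for j in range(n):
                if j != i:
                    v *= sum(math.exp(-e_ij(i, j, x, y) / hbar) * psi[j][y]
                             for y in dom)
            msg[x] = v
        z = sum(msg.values())
        new.append({x: msg[x] / z for x in dom})
    psi = new

decoded = tuple(max(dom, key=p.get) for p in psi)
best = min(itertools.product(dom, repeat=n), key=energy)
print(decoded, best)  # both are (0, 1, 0) on this example
```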
\section{A Generalization of the APP Algorithm}
The difference equations~(\ref{app_algorithm}) of the APP algorithm,
can be generalized by raising the soft-assignment function $\psi_i(x_i, t)$
at the right side of the equations to a power $\alpha$,
\begin{eqnarray}
\lefteqn{\psi_i(x_i, t+1) = \frac{1}{Z_i(t+1)} e^{-e_i(x_i) /\hbar} \cdot } \nonumber \\
&& \prod_{j \not = i} \left( \sum_{x_j} e^{- e_{ij}(x_i, x_j) /\hbar} |\psi_j(x_j, t)|^{\alpha} \right).
\label{app_algorithm_a}
\end{eqnarray}
When $\alpha = 1$, the above generalization falls back to the original one.
It has been shown that the BP algorithm can only converge to a fixed point
that is also a stationary point of the Bethe approximation to the free energy~\cite{Yedidia05}.
It is also not hard to prove that each valid codeword can be a fixed point
when the APP algorithm is applied to decode an LDPC code
with the degree of each variable node $d_v \ge 2$ (or the BP algorithm when $d_v \ge 3$).
The algorithm will converge with an exponential rate to a fixed point of this kind
when it evolves into a state close enough to any one of them.
To improve the performance of the APP algorithm further,
we can smooth the soft assignment functions $\psi_i(x_i, t)$ to
prevent the algorithm from being trapped at an undesired fixed point.
One way to smooth the soft assignment function $\psi_i(x_i, t)$ is given as follows,
\[ \psi^{'}_i(x_i, t) = (1-\beta) \psi_i(x_i, t) + \beta / |D_i| \ , \]
where $|D_i|$ is the domain size of variable $x_i$,
and the parameter $\beta$ is the smoothing factor satisfying $0 \le \beta \le 1$.
When $\beta = 1$, the function $\psi_i(x_i, t)$ is completely smoothed out.
The smoothing operation defines an operator acting on the soft assignment functions $\psi_i(x_i, t)$,
denoted as ${\cal S}(\cdot)$.
With that definition, we can generalize the APP algorithm~(\ref{app_algorithm_a}) further as follows,
\begin{eqnarray}
\lefteqn{ \psi_i(x_i, t+1) = \frac{1}{Z_i(t+1)} {\cal S} {\Big (} e^{-e_i(x_i) /\hbar} \cdot } \nonumber \\
&& \prod_{j \not = i} ( \sum_{x_j} e^{- e_{ij}(x_i, x_j) /\hbar} |\psi_j(x_j, t)|^{\alpha}) {\Big )}.
\label{app_algorithm_b}
\end{eqnarray}
It has been found that the generalized APP algorithm~(\ref{app_algorithm_b}) can sometimes significantly improve
the performance of the original one at decoding LDPC codes.
We have observed improvements of over $1$~dB to $2$~dB
in our experiments on decoding commercial irregular LDPC codes
(such as the LDPC codes used for China's HDTV) and regular experimental LDPC codes.
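The generalized update~(\ref{app_algorithm_b}) is easily sketched on the same kind of toy problem as before (the energies below are again hypothetical choices). One implementation detail is left open by the equations: here each message is normalized first and then smoothed, which is one natural reading of the update, and it keeps the messages normalized since ${\cal S}$ preserves the total mass of a normalized function.

```python
import math

hbar, alpha, beta = 1.0, 2.0, 0.1   # power and smoothing parameters
n, dom = 3, (0, 1)
target = (0, 1, 0)
e_i = lambda i, x: 0.0 if x == target[i] else 1.5    # toy unary terms
e_ij = lambda i, j, x, y: 0.0 if x == y else 0.5     # toy symmetric pairwise terms

psi = [{x: 1.0 / len(dom) for x in dom} for _ in range(n)]
for _ in range(50):
    new = []
    for i in range(n):
        msg = {}
        for x in dom:
            v = math.exp(-e_i(i, x) / hbar)
            for j in range(n):
                if j != i:
                    v *= sum(math.exp(-e_ij(i, j, x, y) / hbar) * psi[j][y] ** alpha
                             for y in dom)
            msg[x] = v
        z = sum(msg.values())
        # normalize, then apply the smoothing operator S
        new.append({x: (1 - beta) * msg[x] / z + beta / len(dom) for x in dom})
    psi = new

decoded = tuple(max(dom, key=p.get) for p in psi)
print(decoded)  # stays at the energy minimizer (0, 1, 0) on this example
```

With $\beta>0$ the messages retain a floor of $\beta/|D_i|$, so no assignment is ever ruled out completely, which is the intended protection against premature lock-in.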
\section{Deriving Schr\"{o}dinger Equation}
Let the parameter $\alpha = 2$ in the generalized APP algorithm~(\ref{app_algorithm_b}).
If all variables $x_i$s are in a continuous domain, the generalized APP algorithm~(\ref{app_algorithm_b}) becomes
\begin{eqnarray}
\lefteqn{\psi_i(x_i, t+1)= \frac{1}{Z_i(t+1)} {\cal S} {\Big (} e^{-e_i(x_i) /\hbar} \cdot } \nonumber \\
&& \prod_{j, j \not = i} \int d x_j~e^{-e_{ij}(x_i, x_j) /\hbar} |\psi_j (x_j, t)|^2 {\Big )} .
\label{app_algorithm_c}
\end{eqnarray}
The soft assignment function $\psi_i(x_i, t)$ can be generalized from a real-valued, non-negative function
to a function over the complex domain $\mathbb{C}$.
It has no impact on the optimization power of the generalized APP algorithm~(\ref{app_algorithm_c}).
In this case, it is the magnitude of the function $|\psi_i(x_i, t)|$ instead of itself
that measures the preferences over different values of $x_i$.
For $\psi_i(x_i, t) \in \mathbb{C}$, $|\psi_i(x_i, t)|$ is defined as $\sqrt{\psi^{*}_i(x_i, t) \psi_i(x_i, t)}$.
Let $\Delta t$ be an infinitesimal positive value
and the soft assignment function at $t + \Delta t$ be $\psi_i (x_i, t + \Delta t)$.
The difference equations~(\ref{app_algorithm_c}) of the generalized APP algorithm
in a continuous time version is
\begin{eqnarray}
\lefteqn{ \psi_i(x_i, t + \Delta t) = \frac{1}{Z_i (t + \Delta t)} {\cal S} {\Big (} \psi_i(x_i, t) e^{-(\Delta t/\hbar)e_i(x_i)} \cdot } \nonumber \\
&& \prod_{j, j \not = i} \int d x_j~e^{-(\Delta t/\hbar)e_{ij}(x_i, x_j)} |\psi_j (x_j, t)|^2 {\Big )}.
\label{update_rule5}
\end{eqnarray}
When $\Delta t \rightarrow 0$, the term inside the operator ${\cal S}(\cdot)$
at the right side of (\ref{update_rule5}) approaches $\psi_i(x_i, t)$.
Starting from an initial state,
the generalized APP algorithm described by (\ref{update_rule5})
will evolve toward one of its equilibriums over time.
It will be shown in the following that
the algorithm at its equilibrium is, in fact, the time-independent Schr\"{o}dinger equation.
Since variable $x_i$ is in a continuous domain,
let the smoothing operator ${\cal S}(\cdot)$ on the soft assignment function $\psi_i(x_i, t)$ be defined as
\[ {\cal S}(\psi_i(x_i, t)) = \int K(u - x_i) \psi_i(u, t) ~d u \ , \]
where $K(x)$ is a smoothing kernel.
If $x_i$ is in the one dimensional space $\mathbb{R}$,
we can choose the following Gaussian function as the smoothing kernel $K(x)$,
\begin{equation}
K(x) = \frac{1}{\sqrt{2 \pi \Delta t} \sigma_i} e^{ - x^2 / 2 \sigma^2_i \Delta t} \ .
\label{gaussian_kernel_1d}
\end{equation}
With the Gaussian smoothing kernel, the dynamic equations~(\ref{update_rule5}) become
\begin{eqnarray}
\lefteqn{ \psi_i(x_i, t + \Delta t) = \frac{1}{Z_i (t + \Delta t)} \cdot } \nonumber \\
&& \int d u~\frac{1}{\sqrt{2 \pi \Delta t} \sigma_i} e^{ - (u - x_i)^2 / 2 \sigma^2_i \Delta t} \psi_i(u, t) e^{-(\Delta t /\hbar) e_i(u)} \cdot \nonumber \\
&& \prod_{j, j \not = i} \int d x_j~e^{-(\Delta t /\hbar)e_{ij}(u, x_j)} |\psi_j (x_j, t)|^2 \ .
\label{update_rule10}
\end{eqnarray}
Expanding the right side of the above equation into a Taylor series with respect to $\Delta t$
and letting $\Delta t \rightarrow 0$,
we have
\begin{eqnarray}
\lefteqn{ \frac{\partial \psi_i (x_i, t)}{\partial t}= \frac{\sigma^2_i}{2}\frac{\partial^2 \psi_i (x_i, t)}{\partial x^2_i} - } \nonumber \\
&& \frac{1}{\hbar} V_i (x_i) \psi_i(x_i, t) + \varepsilon_i (t) \psi_i(x_i, t) \ ,
\label{update_rule11a}
\end{eqnarray}
where
\[ V_i(x_i) = e_i(x_i) + \sum_{j, j \not = i} \int d x_j~e_{ij}(x_i, x_j) |\psi_j (x_j, t)|^2 \ , \]
and
\[ \varepsilon_i (t) = -\frac{d~Z_i(t)/d~t}{Z^2_i(t)} \ . \]
Let the operator $\nabla^2_i$ be defined as
\[ \nabla^2_i \psi_i (x_i, t) = \frac{\partial^2 \psi_i (x_i, t)}{\partial x^2_i} \ , \]
and $H_i$ be an operator on $\psi_i (x_i, t)$ defined as
\begin{equation}
H_i = -\frac{\hbar \sigma^2_i}{2} \nabla^2_i + V_i(x_i) \ .
\label{Hamiltonian}
\end{equation}
Then the equations~(\ref{update_rule11a}) can be rewritten as
\begin{equation}
\frac{\partial \psi_i (x_i, t)}{\partial t}
= - \frac{1}{\hbar} H_i \psi_i(x_i, t) + \varepsilon_i (t) \psi_i(x_i, t) \ .
\label{update_rule11}
\end{equation}
When the differential equations~(\ref{update_rule11}) evolve into a stationary state (equilibrium),
they become
\begin{equation}
E_i \psi_i(x_i, t) = H_i \psi_i(x_i, t) , \quad \mbox{ for $i=1, 2, \ldots, n$} \ ,
\label{update_rule12}
\end{equation}
where $E_i$, $E_i = \hbar \varepsilon_i$, is a scalar.
For a physical system consisting of $n$ particles,
let $x_i$ be the position of particle $i$, $1 \le i \le n$, in the one dimensional space $\mathbb{R}$.
Let $\sigma^2_i = \hbar / m_i$,
where $m_i$ is the mass of particle $i$.
Then equations~(\ref{update_rule12}) become
\begin{equation}
E_i \psi_i(x_i, t) = \left( -\frac{\hbar^2}{2 m_i} \nabla^2_i + V_i(x_i) \right) \psi_i(x_i, t) \ .
\label{stationary_state}
\end{equation}
They are the conditions for the physical system to be in a stationary state
when its dynamics is defined by the generalized APP algorithm.
Equation~(\ref{stationary_state}) is also the time-independent Schr\"{o}dinger equation.
(It is straightforward to generalize this derivation to three dimensions, but it does not yield any deeper understanding.)
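The relaxation toward the stationary state can be seen concretely in a discretized single-particle version of the update~(\ref{update_rule5}) (for one particle the interaction product is absent, so $V_1(x)=e_1(x)$). The sketch below, with arbitrary choices of grid, time step, and a harmonic potential, iterates the "multiply by $e^{-\Delta t\, V/\hbar}$, then smooth with the Gaussian kernel~(\ref{gaussian_kernel_1d}), then normalize" step; the soft assignment relaxes to the ground state of the Hamiltonian~(\ref{Hamiltonian}), and the normalization factors give an estimate of the eigenvalue $E_i$ in~(\ref{stationary_state}).

```python
import math

# one particle, V(x) = x^2/2, hbar = m = 1 (so sigma_i^2 = hbar/m = 1);
# the stationary soft assignment should approach the ground state
# exp(-x^2/2) of H = -(1/2) d^2/dx^2 + x^2/2, with eigenvalue E_0 = 1/2
hbar = m = 1.0
sigma2 = hbar / m
dt, dx, L, steps = 0.02, 0.05, 6.0, 400
xs = [i * dx for i in range(-int(L / dx), int(L / dx) + 1)]
V = [0.5 * x * x for x in xs]

# Gaussian smoothing kernel of variance sigma^2 * dt, truncated at 4 std
std = math.sqrt(sigma2 * dt)
rad = int(4 * std / dx) + 1
K = [math.exp(-(k * dx) ** 2 / (2 * sigma2 * dt)) for k in range(-rad, rad + 1)]
norm = sum(K) * dx
K = [v / norm for v in K]

psi = [1.0] * len(xs)
E = 0.0
for _ in range(steps):
    phi = [p * math.exp(-dt * v / hbar) for p, v in zip(psi, V)]
    new = [dx * sum(K[k + rad] * phi[i + k]
                    for k in range(-rad, rad + 1) if 0 <= i + k < len(xs))
           for i in range(len(xs))]
    # the shrinkage per step estimates the eigenvalue: new ~ exp(-dt E/hbar) psi
    E = -hbar * math.log(sum(new) / sum(psi)) / dt
    peak = max(new)
    psi = [v / peak for v in new]

err = max(abs(p - math.exp(-x * x / 2)) for p, x in zip(psi, xs))
print(round(E, 3), round(err, 4))
```

Up to discretization and time-step errors, the iteration settles on $\psi(x)\propto e^{-x^2/2}$ with $E\approx 1/2$, as the stationarity condition~(\ref{stationary_state}) predicts.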
In conclusion, from a pure mathematical observation,
the time-independent Schr\"{o}dinger equation is derivable
from a soft-decision iterative decoding algorithm.
From the derivation we can see that
the soft decisions $\psi_i(x_i, t)$ of the decoding algorithm
are the classic wavefunctions in the Schr\"{o}dinger equation.
\section{Introduction}
Consider the Coulomb branch of $\mathcal{N} =2$ superconformal field theories in four dimensions. At low energies we have $U(1)^n$ gauge fields, which are part of $\mathcal{N} =2$ vector multiplets. Denote the $\mathcal{N} =1$ chiral superfield part of these vector multiplets by $a_i$. In the low energy effective action, the gauge coupling constants matrix is promoted to $\tau^{ij} (a)$ (a function of the $a_i$). The gauge coupling matrix is related to the $\mathcal{N} =2$ prepotential $F$ by $\tau ^{ij} = \frac{\partial ^2 F}{\partial a_i \partial a_j} $, and we have the definition $a_D^i= \pder{F}{a_i} $. $a_D$ plays the role of $a$ in the electric-magnetic dual theory. Additionally, as can be observed from the low energy effective theory, the expectation values of $a_i$ and $a_D^i$ are the coefficients of the electric and magnetic charges under the $i$th $U(1)$ in the BPS bound.
The solution of the theory on the Coulomb branch (in the infrared) is described by the SW curve \cite{Seiberg:1994rs}\cite{Seiberg:1994aj}.
A basis of non-trivial cycles and dual cycles is chosen, and the integrals of the SW differential $\lambda $ along them give $a_i$ and $a_D^i$. The SW curve contains gauge invariant Coulomb branch parameters, and $a_i$, $a_D^i$ are holomorphic functions of these parameters.
The SW curve can actually be realized geometrically \cite{Witten:1997sc}. Linear quivers of $SU(k_i)$ gauge groups with $\mathcal{N} =2$ supersymmetry were constructed through the decoupling limit of configurations of NS5- and D4-branes in type IIA superstring. These configurations have singularities, but when interpreted in M-theory, could be described by a single M5-brane. For generic parameters, the M5-brane is smooth and contains no singularities. The description of the M5-brane includes the SW curve solution of the theory. Linear quivers with generic flavor groups of fundamental hypermultiplets of the $SU(k_i)$ gauge groups, are constructed by including D6-branes as well. Circular quivers, as well as other classes of $\mathcal{N} =2$ theories can also be obtained by brane constructions.
Following \cite{Gaiotto:2009we}, consider a linear quiver of $n$ $SU(N)$ gauge groups, with $N$ fundamentals at each end of the quiver. For $N=2$, when all the gauge groups but one are arbitrarily weakly coupled, we can take the coupling constant of that one gauge group to be very strong and apply the familiar $SL(2,\mathbb{Z} )$ S-duality of $SU(2)$ with four flavors. This leads to a generalized quiver (when expressed in terms of the weakly coupled gauge groups), differing from the linear quiver we began with. Similarly, for $N=3$, we may apply the Argyres-Seiberg duality (mentioned below), and get another kind of generalized quiver. These generalized quivers are composed of elementary building blocks. The $E_6$ SCFT is one of the building blocks for $N=3$. The generalized quivers constructed starting from some fixed quiver gauge theory, are weak coupling cusps of a single theory.
The linear quiver we started with can be constructed, as mentioned above, by taking $N$ M5-branes which are intersected by other $n+1$ transverse M5-branes. This configuration can be viewed as $N$ M5-branes wrapping a Riemann sphere, in the presence of $n+1$ punctures at some points on the Riemann sphere, as well as punctures at $0$ and $\infty $. In the circular quivers construction, the Riemann sphere is replaced by a torus $T^2$ (as a result of the topology of the space that the $N$ M5-branes wrap).
Other kinds of linear quivers will be described by including 2 punctures of a more general type. These punctures are local and thus suggest that we can have additional theories by combining different sets of the punctures. This more general sort of theories \cite{Gaiotto:2009we} is therefore defined by taking the $A_{N-1} $ $(2,0)$ six dimensional theory on a Riemann surface $C$, with co-dimension two defect operators (a twisting is required to get $\mathcal{N} =2$ in the four dimensional macroscopic theory). There are $(2,0)$ six dimensional theories of A,D and E types. This defines a four dimensional theory by a simply laced Lie group and a punctured Riemann surface. These theories are known as class-$\mathcal{S} $. We will restrict ourselves to the $A_{N-1} $ theories \footnote{For a review, see \cite{Tachikawa:2013kta}. Generalizations of our analysis to the D,E type theories is left for the future, see \cite{Chacaltana:2011ze},\cite{Chacaltana:2014jba}. }.
A basic building block of the generalized quivers is the $T_N$ theory which can be identified by an $A_{N-1} $ theory on a sphere with 3 full punctures (a review of the types of punctures will be given below). It plays a similar role to that played by the $E_6$ SCFT in $N=3$ (even though there is another kind of isolated SCFT which is identified more naturally with the generalization of the $E_6$ SCFT to general $N$). The $T_N$ theory is an example of a class-$\mathcal{S} $ theory with no brane construction of the sort described above. There is a description \cite{Benini:2009gi} of it in terms of a web of 5-branes (and 7-branes) in type IIB giving a five dimensional theory, which when compactified on $S^1$ gives the $T_N$ theory (as well as more general isolated SCFTs).
The SW curve for the $A_{N-1} $ class-$\mathcal{S} $ theories can be written as an $N$ sheeted branched covering of $C$. It has the canonical form
\begin{equation} \label{eq:general_curve_form_class_S}
x^N+\sum _{k=2} ^N \phi _k(z) x^{N-k} =0 ,
\end{equation}
where $z$ parametrizes $C$ and the SW differential being $\lambda =x dz$. The $\phi _k$ are more naturally used as $k$-differentials $\phi _k dz^k$, having appropriate poles at the punctures. Usually the SW curve describes the infrared limit of the four dimensional theory on the Coulomb branch. Here, as is manifest in the brane constructions mentioned above, the structure of the SW curve as a branched covering of $C$ identifies the four dimensional theory.
There are discrete holomorphic transformations keeping the SW curve with the covering structure above invariant, and are therefore symmetries of the full four dimensional theory. Then, the S-duality invariant space of exactly marginal deformations is the complex structure moduli space of $C$, with punctures of the same kind being indistinguishable.
At various cusps of the moduli space of the punctured Riemann surface $C$, the surface degenerates, long tubes emerge and some cycles shrink. In such cases, weakly coupled gauge groups emerge. A common situation where this happens, is when some punctures on $C$ are brought close to each other. At different degenerations of the same surface $C$, different gauge groups become weekly coupled. This provides us with S-dualities between a priori different theories. An example is the Argyres-Seiberg duality \cite{Argyres:2007cn}, stating that the strongly coupled $SU(3)$ superconformal theory with $6$ fundamental hypermultiplets is a weakly coupled $SU(2)$ theory, coupled to one fundamental hypermultiplet and the $T_3$ theory (which has global symmetry $E_6$ and no marginal deformations).
Let us write the form of the SW curve more explicitly for a few low genus surfaces $C$.
On a sphere (genus $g$=0) with punctures at $z_1,z_2, \dots $, $\phi _k$ of a massless theory is of the following form
\begin{equation}
\phi _k = \frac{Q_k(z)}{(z-z_1)^{p^1_k} (z-z_2)^{p_k^2} \dots } dz^k ,
\end{equation}
where we label by $p^i_k$ the pole structure corresponding to the puncture located at $z_i$.
Since we do not have a pole at infinity, a change of variable $z=1/w$ shows that the polynomial $Q_k$ is of order at most $\sum _i p_k^i - 2k$. Therefore, the number of Coulomb branch parameters that $\phi _k$ gives rise to is $\sum _i p_k^i - 2k + 1$. (Note we could choose to position one of the poles at infinity, with the same result.) \\
On a torus ($g=1$), the general $\phi _k$ with the required pole structure (and no masses) is of the form
\begin{equation}
\phi _k = A_k \frac{\theta (z-n^1_k) \dots \theta (z-n^{d_k}_k)}{\theta (z-z_1)^{p_k^1} \theta (z-z_2)^{p_k^2} \dots } dz^k, \qquad d_k=\sum _i p_k^i, \qquad \sum _i n^i_k = \sum _i p_k^i z_i ,
\end{equation}
where the $\theta $ is a Jacobi theta function.
Including $A_k$ and the restriction on $\sum_i n^i_k$, we see that $\phi_k$ gives rise to $\sum _i p^i_k$ Coulomb branch parameters. In a general surface of genus $g$, the number of Coulomb branch parameters of dimension $k$ is the dimension of the space of $k$-differentials and is given by
\begin{equation} \label{eq:Coulomb_branch_graded_dimension}
d_k = \sum _i p^i_k + (g-1)(2k-1) .
\end{equation}
The pole structure of regular punctures in a superconformal class-$\mathcal{S} $ $A_{N-1} $ theory is restricted. A regular puncture $P$ is described in terms of a Young diagram with $N$ boxes in total. The pole structure of the puncture is fixed by the diagram as follows. For $k=2,\dots ,N$, $\phi _k$ has a pole of order $p_k$ at the puncture, where $p_k$ is given by $p_k=k-h(k,P)$, in which $h(k,P)$ is the row number of the $k$th box in the Young diagram (we label the rows starting with $1$). The most common punctures are simple and full punctures. A simple puncture has a diagram with rows of width $2,1,1,\dots $ and pole structure $1,1,1,\dots $ ($p_k=1$). A full puncture has a single row (of width $N$) and pole structure $1,2,3,\dots $ ($p_k=k-1$). Any Young diagram corresponds to a regular puncture, except for a single column diagram, referred to as a no-puncture.
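The dictionary between Young diagrams, pole structures and the counting~(\ref{eq:Coulomb_branch_graded_dimension}) is easily mechanized. A small Python sketch (punctures encoded by their tuples of row widths; the function and variable names are our own):

```python
def pole_structure(rows):
    # rows: weakly decreasing row widths of the Young diagram, N = sum(rows)
    N = sum(rows)
    h = [i + 1 for i, w in enumerate(rows) for _ in range(w)]  # row of the k-th box
    return [k - h[k - 1] for k in range(2, N + 1)]             # p_2, ..., p_N

def coulomb_dims(g, punctures):
    # graded Coulomb branch dimensions d_k, k = 2..N, for genus g and a list
    # of punctures given as row-width tuples (all with the same N)
    N = sum(punctures[0])
    return {k: sum(pole_structure(P)[k - 2] for P in punctures)
               + (g - 1) * (2 * k - 1)
            for k in range(2, N + 1)}

full = lambda N: (N,)
simple = lambda N: (2,) + (1,) * (N - 2)

print(pole_structure(full(4)))    # [1, 2, 3]
print(pole_structure(simple(4)))  # [1, 1, 1]

# T_3 (sphere with three full A_2 punctures): one dimension-3 operator
print(coulomb_dims(0, [full(3)] * 3))  # {2: 0, 3: 1}

# linear quiver of n SU(3) groups: sphere with two full and n+1 simple punctures
n = 4
print(coulomb_dims(0, [full(3)] * 2 + [simple(3)] * (n + 1)))  # {2: 4, 3: 4}
```

The last line reproduces the expected counting for the linear quiver: each of the $n$ $SU(3)$ gauge groups contributes one Coulomb branch parameter of each degree $k=2,3$.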
Since we will commonly use punctures, let us introduce a convenient (but slightly subtle) notation.
Punctures will be denoted by upper case Roman letters, such as $P$. For each such puncture $P$ we will use the notations:
\begin{itemize}
\item $P_i$: the width of row number $i$.
\item $p_k$: the pole structure at the value $k$.
\item $p$: the number of boxes outside the first column (explicitly $p=\sum _i (P_i-1)$).
\item $h(k,P)$: the row number of box number $k$.
\end{itemize}
When we have several punctures, we add a superscript, as in $P^i$ (and then we have as before $P^i_j$, $p^i_k$, $p^i$ and $h(k,P^i)$).
Each regular puncture has a global symmetry associated with it. Corresponding to this symmetry, mass deformations can be introduced. The symmetry associated with a regular puncture $P$ will be denoted by $G(P)$ and is given by
\begin{equation} \label{eq:regular_puncture_symmetry}
G(P) = S\left( \prod _i U(P_i - P_{i+1} ) \right)
\end{equation}
(where $S(\dots)$ means removing the diagonal $U(1)$).
The product of the punctures' symmetries does not have to be the full symmetry of the theory. A method to find the full symmetry which can be used in some of the cases is by considering the mirror of the theory compactified to three-dimensions, see \cite{Chacaltana:2010ks} using \cite{Benini:2010uu},\cite{Gaiotto:2008ak}.
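The factors appearing in~(\ref{eq:regular_puncture_symmetry}) are read off directly from the consecutive row-width differences of the diagram. A short sketch (same row-width encoding as above; names are our own):

```python
def flavor_factors(rows):
    # unitary factors of G(P) = S(prod_i U(P_i - P_{i+1})), eq. (7);
    # returns the list of m for which a U(m) factor appears
    # (one overall diagonal U(1) is then removed by the S(...))
    rows = list(rows) + [0]
    return [rows[i] - rows[i + 1] for i in range(len(rows) - 1)
            if rows[i] - rows[i + 1] > 0]

# full puncture of the A_{N-1} theory: S(U(N)) = SU(N)
print(flavor_factors((5,)))       # [5]
# simple puncture: S(U(1) x U(1)) = U(1)
print(flavor_factors((2, 1, 1)))  # [1, 1]
# the (2,2) puncture of the A_3 theory: S(U(2)) = SU(2)
print(flavor_factors((2, 2)))     # [2]
```

The rank of $G(P)$ is then the sum of the listed factors minus one, e.g. $N-1$ for a full puncture and $1$ for a simple one.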
In \autoref{sec:decoupling} we describe the result of decoupling in a general surface. We imagine that several punctures are brought close to each other, resulting in a formation of a long tube and a weakly coupled gauge group (see for instance \autoref{fig:sphere_decoupling}). In the extreme weak coupling limit, the Riemann surface is separated into two surfaces. We explain how, in a unique and simple manner, the result of this decoupling can be obtained. That is, what is the weakly coupled gauge group that arises, and what are the two theories corresponding to the two surfaces.
Then, in \autoref{sec:diagrammatic_decoupling} we first identify the punctures that are created at the ends of a tube as a set that will be useful to describe. The set of regular punctures that can appear in a decoupling process at the end of a tube is precisely the set of punctures $L$ satisfying $L_1 \ge 2L_2$, excluding the simple punctures of $N>2$.
A simple diagrammatic method for finding these punctures in a given decoupling process is described.
There are theories that are formed after a decoupling which can be described in terms of irregular punctures, as reviewed in \autoref{sec:decoupling} following \cite{Chacaltana:2010ks},\cite{Chacaltana:2011thesis}. The set of irregular punctures in $\mathcal{N} =2$ SCFTs of the sort described there is classified in terms of Young diagrams.
In \autoref{sec:gauging} similar questions are asked from a different point of view, in which we consider what gauging of some regular puncture is possible. In other words, gauging a diagonal subgroup of the symmetry associated with a regular puncture and of a global symmetry from an additional theory, what are the possibilities for that additional theory that will result in a SCFT. The possible such gauge groups are listed. Additionally, we discuss the embedding of the gauged group in the symmetry associated with the puncture and a few implications of that.
The $\mathcal{N} =2$ circular quiver theory is studied using holography in \cite{Aharony:2015zea}. It turns out that in a certain limit and when $N$ is large, it contains the $\mathcal{N} =(2,0)$ six dimensional $A_{K-1}$ theory on $AdS_5 \times S^1$. In order to place the $(2,0)$ theory on $AdS_5 \times S^1$, boundary conditions must be specified. Two sorts of boundary conditions in this context are discussed in \cite{Aharony:2015zea} and will be mentioned here. It is natural to check whether there are other class-$\mathcal{S} $ theories in which there is a decoupled field theory on $AdS_5$. Additionally, for the $\mathcal{N} =2$ theories in which this is the case, various additional boundary conditions might be possible, implemented in other $\mathcal{N} =2$ theories analogous to the ones described in \cite{Aharony:2015zea}. These issues are discussed in \autoref{sec:large_N} and the conclusions of the other sections can be applied in this context.
\section{Weak coupling limits of a class-$\mathcal{S} $ theory} \label{sec:decoupling}
As was mentioned in the previous section, a common situation in which the Riemann surface $C$ of an $A_{N-1} $ class-$\mathcal{S} $ theory degenerates, is when several punctures are brought close to each other. When $C$ is a sphere, this is the only possibility for a degeneration. A long tube is then formed, and is associated with an emergent weakly coupled gauge group which we denote by $G_T$ (see \autoref{fig:sphere_decoupling}) . In the extreme weak coupling limit we are left with two surfaces describing two theories. We describe what are these theories for the different possible surfaces $C$ with a generic set of punctures.
\subsection{Maximal gauge group along a tube}\label{section:maximal_gauge_group_along_tube}
In this subsection, we would like to examine the SW curve in the region of the tube when several punctures are brought close together, and to get some preliminary information about the gauge group along the tube, $G_T$. This is a review of a discussion that was done in \cite{Gaiotto:2009we}. Later on we use the characterization of the punctures that are brought together as it appears here. The naive argument that will be given is refined in the consequent subsections.
Suppose then that we bring $m$ regular punctures $P^i$ together. Let them be positioned at $z=\alpha _i$, with $\alpha _i \propto w$, where $w \to 0$ is the limit of bringing them close to each other. As the surface could be of any genus, and there might be additional punctures besides the $P^i$, the explicit expression for the entire curve is not straightforward. However, it will be enough to concentrate on the form of the curve in the region of small $z$. This can be written down, and the other details are essentially immaterial for this purpose. The way they do affect the analysis will be explained.
As described in the previous section, the SW differential is $\lambda =xdz$. The curve in the region of small $z$ is approximated by
\begin{equation}
\begin{split}
x^N + \sum _{k=2} ^N \frac{Q_k(z)}{\prod_i (z-\alpha _i)^{p_k^i} } x^{N-k} =0 .
\end{split}
\end{equation}
The polynomials $Q_k$ are determined by the behavior of the Coulomb branch parameters as $w \to 0$. Now let us look at the behavior of the curve in the tube region, $|w| \ll |z| \ll 1$. Substitute $x=y/z$ to get
\begin{equation} \label{eq:SW_curve_around_tube}
y^N + \sum _{k=2} ^N \frac{Q_k(z)}{z^{\sum_i p^i_k-k} } y^{N-k} =0 .
\end{equation}
For some set of punctures $P^i$, we will use the following notation
\begin{equation}
\Delta _k \equiv \sum _i p^i_k - k .
\end{equation}
We always start with $\Delta _2=m-2$. There are essentially two possible qualitative behaviors of $\Delta _k$ for any set of regular punctures. We first review the reasoning for that and then summarize the two types of behavior.
Recall that $p_k = k - h(k,P)$, with $h(k,P)$ the row number of the $k$th box in the Young diagram. When $k$ increases by 1, $p_k$ either increases by 1, if the box is not at the end of a row, or stays the same, if we are at the last box of a row. If a diagram ends with a series of rows of width 1, we call this region of the diagram the \textbf{``tip''} of the diagram.
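As a concrete illustration of this rule, the following minimal sketch (in Python, used here only as bookkeeping pseudocode; the function name is ours) computes $p_k$ from the row widths of a diagram, with boxes numbered along rows of non-increasing width as described above:

```python
def pole_structure(rows):
    """p_k = k - h(k,P) for k = 1..N, where h(k,P) is the row number of
    the k-th box, boxes numbered along the rows of the Young diagram and
    rows ordered by non-increasing width."""
    assert all(rows[i] >= rows[i + 1] for i in range(len(rows) - 1))
    p, k = [], 0
    for h, width in enumerate(rows, start=1):
        for _ in range(width):
            k += 1
            p.append(k - h)  # p_k stays constant when moving to a new row
    return p

# pole_structure([4])       -> [0, 1, 2, 3]  (full puncture: p_k = k - 1)
# pole_structure([2, 1, 1]) -> [0, 1, 1, 1]  (simple puncture: the "tip")
```

The two examples in the comments show the two extreme regular punctures of $A_3$: a single row of $N$ boxes gives $p_k = k-1$, while rows $2,1,\dots,1$ give the constant $p_k = 1$ behavior at the tip.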
$\Delta _k$ can decrease by 1, stay the same, or increase by $1,2,\dots ,m-1$, as $k$ increases by 1. It decreases by 1 only if we are at the end of a row in each diagram. It stays the same if we are at the end of a row in $m-1$ of the diagrams.
If $\Delta _k$ decreases by 1 twice consecutively, it means that we were twice at the end of a row consecutively (in all diagrams), and therefore we passed through a row of width 1. Since the row width is non-increasing, we are at the tip of each diagram. All $p_k^i$ will stay the same, and $\Delta _k$ will continue decreasing by 1. We can also note that if $\Delta _k$ decreases by 1 and then stays the same, or stays the same and then decreases by 1, it means that in $m-1$ of the Young diagrams we were at the end of a row twice consecutively, and therefore we are at their tip. We will stay there in these diagrams, and in overall $\Delta _k$ will continue either decreasing or staying the same (it will not increase).
What are the options? First consider $m=2$, for which we start at $\Delta _2=0$. Suppose first that when going from $k=2$ to $k=3$ we increase. Then in order to reach negative $\Delta _k$ we must decrease by one from $\Delta =0$, and before that we either stay the same or decrease by 1. In both cases we will not increase anymore, and will stay at negative $\Delta $. Therefore in this case, once we have crossed to negative $\Delta $ we will not return. Next suppose $\Delta $ stays the same going from $k=2$ to $k=3$. If at some later step we increase by 1, we are in the previous situation. Otherwise we stayed the same and then decreased, so again we will stay at negative $\Delta $, and the behavior is essentially the same as before. Lastly, suppose we first decrease by one going from $k=2$ to $k=3$. This means that $k=2$ is the end of a row for both diagrams, and both rows are of width 2. We then have to increase by 1, decrease by 1, and so on, until one of the diagrams reaches its tip. After one of the decreases, we will then not increase anymore and will stay at negative $\Delta $. Qualitatively, we get two options, which are demonstrated in \autoref{fig:Delta_k_behavior}. In the first option we might never reach a $k$ where $\Delta _k$ becomes negative, so it can happen that $\Delta _k \ge 0$ for all $k$ in this option.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{"Delta_k_behavior"}
\caption{The behavior of $\Delta _k$ in the two cases $SU$ and $Sp$.}
\label{fig:Delta_k_behavior}
\end{figure}
For $m>2$, $\Delta _2=m-2>0$, and we do not have the second behavior of the $m=2$ analysis. To get to negative $\Delta_k $ we must either stay the same and then decrease by 1, or decrease by 1 twice. In both cases we will stay at negative $\Delta _k$.
Let us call the case in which $\Delta _k$ behaves as in the left diagram of \autoref{fig:Delta_k_behavior} the $SU$ case, and when it behaves as in the right diagram, the $Sp$ case.
In the $SU$ behavior, $\Delta _k$ is non-negative up to some $k$, and once it has crossed the horizontal axis and become negative, it is non-increasing (as mentioned, it can happen that $\Delta _k$ never reaches the region where it is negative). In the $Sp$ case, $\Delta _k$ zigzags between $0$ and $-1$ until some stage after which it does not increase anymore. It should be kept in mind that the set of punctures $P^i$ is of $Sp$ type exactly when $m=2$ and the first row of each of the two punctures is of width $2$; all other sets exhibit the $SU$ behavior. This is an immediate way to determine whether we are in the $SU$ or $Sp$ case.
\textbf{Define} $\textbf{T}$ to be the last $k$ such that $\Delta _k \ge 0$, for both the $SU$ and $Sp$ cases (and for any $m$). Note that $N$ might not be large enough, in which case the left diagram of \autoref{fig:Delta_k_behavior} may not reach $\Delta _k < 0$. The definition of $T$ will be useful in this situation too.
Now let us apply a naive argument for the gauge group along the tube, to be refined later on. Start with the $SU$ case. The behavior around the tube is governed by \eqref{eq:SW_curve_around_tube}. Choose the Coulomb branch parameters such that $Q_k(z)$ behaves as $z^{\Delta _k} $ for $k \le T$ around the tube, in the limit $w \to 0$. As will be discussed later, we might not have enough Coulomb branch parameters for that. The terms in \eqref{eq:SW_curve_around_tube} with $k>T$ are negligible. We get an algebraic equation for $y$ with constant solutions. Recall that $\lambda =x\,dz = y \frac{dz}{z} $ is the SW differential. It follows that these constant solutions give the values of the integrals over the cycles surrounding the tube. These $T$ values (which sum to 0) are vevs of the scalars in the vector multiplet of an $SU(T)$ gauge group. This gives naively $SU(T)$ along the tube.
In the $Sp$ case (for $m=2$), the odd $k$ terms in \eqref{eq:SW_curve_around_tube} are negligible, and we get $USp(T)$ along the tube (the group of rank $T/2$).
If we do not have enough Coulomb branch parameters, we will get a smaller gauge group. In this sense, the gauge group we found given only $P^i$ is the maximal possibility.
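The diagrammatic rules of this subsection (the values $\Delta_k$, the definition of $T$, the $SU$/$Sp$ criterion and the resulting maximal group) can be collected into a short sketch. This is only bookkeeping under the conventions above; the function names are ours:

```python
def pole_structure(rows):
    # p_k = k - h(k): boxes numbered along rows of non-increasing width.
    p, k = [], 0
    for h, width in enumerate(rows, start=1):
        for _ in range(width):
            k += 1
            p.append(k - h)
    return p

def maximal_tube_group(diagrams):
    """For the Young diagrams (row-width lists) of the m punctures brought
    together, return the list [Delta_2, ..., Delta_N], T (the last k with
    Delta_k >= 0), and the naive maximal gauge group along the tube."""
    ps = [pole_structure(rows) for rows in diagrams]
    N = len(ps[0])  # all punctures belong to the same A_{N-1} theory
    deltas = [sum(p[k - 1] for p in ps) - k for k in range(2, N + 1)]
    T = max(k for k in range(2, N + 1) if deltas[k - 2] >= 0)
    # Sp type exactly for m = 2 with both first rows of width 2.
    sp = len(diagrams) == 2 and all(rows[0] == 2 for rows in diagrams)
    return deltas, T, ("USp(%d)" % T) if sp else ("SU(%d)" % T)
```

For example, two simple punctures of $A_3$ (rows $2,1,1$ each) give $\Delta_k = 0,-1,-2$, $T=2$ and the $Sp$ case, i.e. $USp(2)$ of rank $T/2=1$, while two full punctures of $A_2$ give $T=N=3$ and $SU(3)$.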
\subsection{Decoupling on a sphere using the curve} \label{subsection:decoupling_sphere}
The case where $C$ is a sphere is important and instructive. Whenever several punctures on a sphere are close to each other, this is conformally the same as having the rest of the punctures close to each other. Therefore for a sphere the situation is quite symmetric: we have two sets of punctures, see \autoref{fig:sphere_decoupling}. We can think of the punctures on the right as being close to each other, or the punctures on the left as being close. As was mentioned before, the curve degenerates into two spheres and a long tube connecting them that represents a weakly coupled gauge group $G_T$. In the simplest description, when the gauge coupling is turned off, we remain with two spheres representing two theories, and at the points where the tube ended before, two new punctures arise, denoted $L$ on the left sphere and $R$ on the right one. The question we ask is what the resulting two theories are and what $G_T$ is in general.
In the analysis of the previous subsection, we considered only the side of the surface with the punctures brought close to each other. We did not have enough information to determine the resulting $G_T$. Indeed, it cannot be fixed uniquely by considering the punctures on one side alone, as will be discussed in \autoref{sec:decoupling_punctures_fix_tube}.
This is related to what was done in \cite{Chacaltana:2012ch}. There, at the first stage, each side of the degenerating sphere is considered separately. The analysis of the SW curve in the previous subsection is the same as that in section 3 of \cite{Chacaltana:2012ch}\footnote{It is applied there to theories beyond the $A_{N-1} $ which we consider here, and also in section 2.2 of \cite{Chacaltana:2013oka} and section 2.4.5 of \cite{Chacaltana:2014jba}.}. For each of the two sides the gauge group on the forming tube was found, as well as the puncture that would be created on the other side of each tube. However, these two gauge groups are what we referred to as the maximal gauge groups. In order to get the final result of the decoupling, some sewing procedure must be performed. One approach, used in \cite{Chacaltana:2012ch}, is to consider the sphere containing the two new punctures that were found and to use the description of this sphere in \cite{Gaiotto:2011xs} as a supersymmetric non-linear sigma model. In this description the product of the two maximal gauge groups is Higgsed to the actual gauge group $G_T$ that eventually emerges. Additionally, it is important that the leftover spheres in this description may also change by Higgsing coming from the D-terms and the F-terms, and this should be taken into account. In the following we will use a different procedure and find the result of the sewing directly.
In the case of a sphere, we can simply write the full curve. We will perform an analysis which is more precise than that of the previous subsection, and at the end of this subsection we summarize the result. The main question in such an analysis is how the Coulomb branch parameters should behave as a function of the scale of the punctures that are taken close to each other (as $w \to 0$ above), under the requirement that the curves left on both sides make sense. This question can also be rephrased as how the zeroes of the meromorphic differentials $\phi _k$ should behave in the limit we take.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{"sphere_decoupling"}
\caption{Decoupling on a sphere.}
\label{fig:sphere_decoupling}
\end{figure}
Let us then write the curve of the theory.
Denote the set of punctures on the right by $P^i$, and associate to them $\Delta _k^R = \sum _i p^i_k - k$ as before; similarly for the left side we define $\Delta ^L_k=\sum_i q_k^i-k$ with $Q^i$ the punctures on the left. The curve is of the form
\begin{equation} \label{eq:sphere_curve}
\begin{split}
& \lambda ^N+ \phi _2\lambda ^{N-2} + \dots + \phi _N=0, \qquad \lambda =x\, dz \\
& \phi _k=\frac{Q_k(z)}{\prod (z-\alpha _i)^{p^i_k} \prod (z-\beta _i)^{q^i_k} } dz^k .
\end{split}
\end{equation}
Suppose we bring the punctures on the right side close together and to the point labelled $z=0$. The position of these points is proportional to some $w$ and we take $w \to 0$. The $\phi_k$'s appearing in the curve in this limit $w \to 0$ are
\begin{equation} \label{eq:approx_phi_k}
\phi_k \sim \frac{Q_k(z)}{z^{\Delta ^R_k+k} } dz^k = u_k \frac{(z-z^{(k)} _1) \dots (z-z^{(k)} _{n_k})}{z^{\Delta ^R_k+k} } dz^k .
\end{equation}
When we take $w\to 0$ we have to decide how the parameters behave in order to get a sensible curve which will remain on the LHS. The degrees of the poles in \eqref{eq:sphere_curve} fix the degree of the polynomial in the numerator of $\phi _k$,
\begin{equation}
n_k=\Delta _k^L+\Delta _k^R
\end{equation}
where $n_k$ is defined in \eqref{eq:approx_phi_k} (it is related to $d_k$ from the previous section by $d_k=n_k+1$).
For a set of punctures we defined $T$ as the last $k$ such that $\Delta _k \ge 0$.
Here we have such $T$ associated to the left punctures and the right punctures, $T^L$ and $T^R$.\\
We start by assuming that both $\Delta _k^L$ and $\Delta _k^R$ are of the $SU$ case. We cannot have simultaneously $T^R<N$ and $T^L<N$ because then $n_N<-1$. Then suppose $T^L=N$, $T^R=T$; we are in the situation depicted in \autoref{fig:two_spheres_basic_behavior}.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{"sphere_general_su"}
\caption{Both $\Delta _k^L$ and $\Delta ^R_k$ are of the $SU$ case, which means that up to some $T$, $\Delta _k \ge 0$, and after it $\Delta _k<0$. Here we have $T^L=N$ ($N$ is the last value $k$ attains), while $T^R$ may be smaller than or equal to $N$. The actual values in the plot are chosen arbitrarily and need not correspond to a realistic case; they are shown for illustration purposes only.}
\label{fig:two_spheres_basic_behavior}
\end{figure}
The following discussion is carried out for $\Delta _k^R$, but will be used afterwards with $\Delta ^L_k$ and $\Delta^R _k$ interchanged. Statements such as ``if $\Delta ^L_k \ge 0$ then\dots'', which seem redundant for $\Delta ^L_k$, are not redundant when we interchange $\Delta^L_k$ and $\Delta ^R_k$. The following is therefore a general discussion. \\
Fix some $k$. If $\Delta ^R_k\ge 0$ we ask if we can ``flatten'' $\Delta ^R_k$, by which we mean that there are enough $z^{(k)} _i$'s in $Q_k$ that we can take to 0 together with $w\to 0$, such that the power of $1/z$ in $\phi_k$ is reduced from $\Delta^R _k+k$ to $k$. This can be done only when $n_k\ge \Delta ^R_k$, which is $\Delta ^L_k \ge 0$. So we will say that we can ``flatten'' $\Delta ^R_k$ if $\Delta ^L_k \ge 0$.
\begin{itemize}
\item If we are not able to flatten $\Delta ^R_k$, we know we cannot be left with a pole of order $> k$ (since we do not have those in the superconformal theories we consider), and we are forced to take $u_k=0$ ($u_k$ defined in \eqref{eq:approx_phi_k}).
\item If we can flatten $\Delta ^R_k$, we are brought to $(A+Bz+Cz^2+\dots)/z^k$ in $\phi_k$. Note that $z=0$ becomes the position of the created puncture $L$ on the remaining LHS sphere.
The constants $A$ in the various $A/z^k$ of $\phi_k$ are exactly the Coulomb branch parameters of the gauge group along the tube. This is so, because these terms give in the curve $x^N+ \sum _{k=2} ^T \frac{A_k}{z^k} x^{N-k}=0$, which after a change of variables $x=y/z$ gives an algebraic equation for $y$: $y^N+\sum _{k=2} ^T A_k y^{N-k} =0$. As explained in the previous subsection, the solutions for this equation give the Coulomb branch parameters of the gauge group (denoted usually by $a_i$).
Since we are interested in the LHS sphere, we take the $A$'s to 0, which just amounts to taking the Coulomb branch parameters of the gauge group along the tube to 0. This does not affect $B,C,\dots$ . Therefore we get that when $\Delta ^R_k\ge 0$ and it can be flattened, $l_k=k-1$.
\item If $\Delta ^R_k<0$, we do not have to do anything and therefore $l_k=k+\Delta ^R_k = \sum_i p_k^i$. \footnote{We could think that a pole of lower order can be obtained as well. This is addressed in the discussion about surfaces of general genus (\autoref{sec:appendix_g_ge1_analysis}).}
\end{itemize}
Note that in the second case, it might be that we do not have the $B,C,\dots $ terms. When we flattened $\Delta ^R_k \ge 0$, we had to use $\Delta ^R_k$ of the $z^{(k)} _i$'s in \eqref{eq:approx_phi_k}. There will be no $B,C, \dots $ terms exactly when $n_k=\Delta ^R_k$, or equivalently $\Delta _k^L=0$. In this case, even though we can flatten $\Delta ^R_k$, we will be left with $A/z^k$ and then take $A \to 0$ to get the massless curve. Therefore we will get $\phi ^L_k=0$ in the SW curve of the LHS sphere for these values of $k$, and $l_k$ is not defined by the SW curve. However, the assignment $l_k=k-1$ above can still be used, as will be explained in a moment. The idea, briefly, is that if we have on the LHS sphere the punctures $q^i_k$ and $l_k=k-1$, then the sum $\sum _i q^i_k+l_k=\Delta _k^L +k+k-1=2k-1$, and there is no $k$-differential with poles of total order $2k-1$ on the sphere, forcing indeed $\phi _k^L=0$.
Let us apply the above to the situation of \autoref{fig:two_spheres_basic_behavior}.
For $k\le T$, we can ``flatten'' all the $\Delta ^R_k$ since $\Delta ^L_k \ge 0$, and therefore necessarily $l_k=k-1$ there. For $k>T$, $l_k=\sum_i p_k^i$. \\
The same can be done in the other direction, by asking what is left on the right-hand sphere. For this we just interchange the roles of $\Delta ^R_k$ and $\Delta ^L_k$. $\Delta ^L_k$ is always non-negative, but we cannot flatten all the $\Delta ^L_k$ --- this is dictated by $\Delta ^R_k$. We can flatten only for $k\le T$. Therefore $r_k=k-1$ for $k\le T$, where $r_k$ is the puncture created on the RHS sphere. For $k>T$, we cannot flatten $\Delta ^L_k$, so we will get $\phi_k^R=0$ in the SW curve of the RHS sphere. $r_k$ is not defined by the SW curve for $k$ in this range. \\
We had one Coulomb branch parameter (an A) for each $2 \le k \le T$. From the point of view of $\Delta ^R_k$ this was so because $\Delta ^R_k \ge 0$ for these $k$'s and all of them can be flattened because $\Delta ^L_k \ge 0$. From the point of view of $\Delta ^L_k$ this was so because even though $\Delta ^L_k \ge 0$ for all $k$'s, only for $2 \le k \le T$, $\Delta ^L_k$ can be flattened because only there $\Delta^R _k \ge 0$. We then have along the tube $SU(T)$.
We have seen several cases in which we get $\phi _k=0$ in the LHS or RHS spheres (and therefore we cannot get the pole structures of $L$ or $R$). There are basically two ways to approach this. We will start with the first approach by describing an additional meaning in which we can assign a value to $l_k$ or $r_k$ in such situations.
In general, define $d_k$ associated to some theory to be the number of Coulomb branch parameters of dimension $k$. We had a formula \eqref{eq:Coulomb_branch_graded_dimension} for $d_k$ in terms of the pole structures for a non-zero $\phi _k$. The theory we started from had $d_k=n_k+1$ parameters of dimension $k$. For $k \le T$ one parameter became the single Coulomb branch parameter of dimension $k$ of the gauge group along the tube (the one that was denoted by $A$). As we said before, after flattening $\Delta ^R_k$, we are left with $(A+Bz+\dots + Dz^{n_k-\Delta ^R_k} )/z^k$ and take $A \to 0$. This leaves $n_k-\Delta ^R_k=\Delta^L_k$ parameters on the LHS sphere. Therefore $d_k^L=\Delta ^L_k$ and similarly $d_k^R=\Delta ^R_k$. As expected, the number of parameters is conserved: $d_k=d_k^L+d_k^R+d_k^{\text{tube}}=\Delta ^L_k+\Delta ^R_k+1$. For $k>T$, no parameters go to the tube. One of the remaining curves' $\phi _k$ is 0, as we saw: in \autoref{fig:two_spheres_basic_behavior} it was $\phi _k^R$, so $d_k^R=0$. $\phi _k^L$ was just inherited from $\phi _k$, and therefore $d_k^L=d_k$. So again the number of parameters is conserved, $d_k=d_k^L+d_k^R+d_k^{\text{tube}} $.
Now take some $k$ for which we assume that $\Delta ^R_k=n_k>0$ (and then $\Delta ^L_k=0$), corresponding to the first situation of a vanishing $\phi _k$ that we encountered. According to the discussion above, $\phi _k^R$ is non-zero, but $\phi _k^L= d_k^L=0$. Now suppose, by definition, that we want to maintain the equation
\begin{equation}\label{eq:graded_dimension}
d_k=\sum p_k^i-2k+1 \qquad (p_k^i \text{ here denote general punctures})
\end{equation}
for the LHS sphere, even though for those $k$'s $\phi _k^L=0$ and $l_k$ is not defined by the SW curve. This will imply that $l_k=d_k^L-\sum q_k^i+2k-1=k-1-\Delta ^L_k=k-1$. In this sense, we get again $l_k=k-1$. For $k$'s satisfying $\Delta ^L_k=n_k$ we get by the same reasoning that even though $r_k$ is not defined by the SW curve, to preserve \eqref{eq:graded_dimension} we still can define $r_k=k-1$ as we did before naively.
For $k>T$ we said that $r_k$ is not defined by the SW curve. Using the definition we just had, we can similarly define $r_k$ there. $d_k^R=0$, so $r_k=d_k^R-\sum p_k^i+2k-1=2k-1-\sum p_k^i$. These $r_k$'s in the range $k>T$ satisfy $r_k \ge k$. Punctures having a pole structure greater than $k-1$ are called \textbf{irregular punctures}. Irregular punctures and this approach of using the graded dimension of the Coulomb branch are discussed in \cite{Chacaltana:2010ks}.
Note that we could be in the situation in which we started from a curve on $C$ in which $\phi _k=0$ for some $k$. The pole structures of the punctures on $C$ ($p^i_k$ and $q^i_k$) are not defined by the massless curve, but are given as part of the definition through a Riemann surface with co-dimension 2 defect operators. We can do the same procedure: the resulting curves will have of course $\phi _k^L=\phi _k^R=0$ for the $k$'s with $\phi _k=0$, and then $d_k^L=d_k^R=0$ again fix the $l_k$ and $r_k$ from $q^i_k$ and $p^i_k$.
We would now like to complete the discussion of all the possible decouplings of a sphere into two spheres in the current approach, and afterwards to describe another, equivalent description in which we do not use irregular punctures. If both $\Delta _k^L$ and $\Delta _k^R$ are of the $SU$ case, as was discussed, $T^L<N$ together with $T^R<N$ is not possible because it implies $d_N<0$. Therefore if both $\Delta _k$ are of the $SU$ case, we have one with $T=N$, say $T^L=N$, and the other $T^R=T \le N$. Both $\Delta _k^L$, $\Delta _k^R$ cannot be of the $Sp$ case because this again gives some $d_k<0$ \footnote{$\Delta _k<0$ for all $k>2$ satisfies both the $SU$ and $Sp$ behaviors, and we consider it to be of the $SU$ type.}. If one of the $\Delta _k$'s is of the $Sp$ type, then the other must be of the $SU$ type. If the $Sp$ one has $T<N$, the $SU$ one must have $T=N$ (otherwise $d_N<0$). If on the other hand the $Sp$ one has $T=N$, the $SU$ one can have $T=N$ or $T<N$. If it is $T<N$, it must actually be $T=N-1$, because otherwise $d_{N-1} <0$. The following three cases then cover all the situations
\begin{enumerate}
\item Left : $SU$, $T^L=N$. Right : $SU$, $T^R=T \le N$.
\item Left : $SU$, $T^L=N$. Right : $Sp$, $T^R =T \le N$.
\item Left : $Sp$, $T^L=N$. Right : $SU$, $T^R=T=N-1$. Possible only for even $N$.
\end{enumerate}
(we have chosen what is left and what is right for convenience). \\
We described how $l_k$ and $r_k$ are fixed in general. Let us summarize it, for instance for $l_k$; $r_k$ is obtained in the same way by switching left and right. Consider $\Delta _k^R$ and suppose $\Delta _k^R \ge 0$. If $\Delta_k^L > 0$, $\Delta _k^R$ can be flattened and we are left with a non-zero $\phi _k^L$. In this case $l_k=k-1$. If $\Delta _k^L = 0$ we can flatten $\Delta _k^R$ but are left with $\phi _k^L=0$ when taking $A=0$. If $\Delta _k^L<0$ we cannot flatten $\Delta _k^R$, and must take $u_k=0$, yielding $\phi _k^L=0$. Therefore for $\Delta _k^L \le 0$, $\phi _k^L=0$, or equivalently $d_k^L=0$. We now use \eqref{eq:graded_dimension} to fix $l_k= d_k^L-\sum_i q_k^i +2k-1=k-1 - \Delta _k^L$. In the case where $\Delta _k^R<0$ we do not need to do anything and just have $l_k= \sum_i p_k^i$. \\
These considerations assume $\phi _k \neq 0$, but hold also in case that $\phi _k=0$. To see this, note first that clearly $d_k^L=d_k^R=0$ for such $k$. If $\Delta _k^R \ge 0$ then $\Delta _k^L=n_k-\Delta _k^R=-1-\Delta _k^R<0$. $l_k$ is fixed by $l_k=d_k^L-\sum _i q_k^i+2k-1=k-1-\Delta _k^L$ as obtained in the previous paragraph. If $\Delta _k^R<0$, again $\Delta _k^L = -1-\Delta _k^R$. Still $l_k=-\sum _i q_k^i +2k-1=k-1-\Delta _k^L = k+\Delta_k^R =\sum _i p_k^i$ as before.\\
To summarize,
\begin{equation}
l_k = \begin{cases}
k-1 & \Delta _k^R \ge 0, \Delta _k^L \ge 0 \\
k-1-\Delta _k^L & \Delta _k^R \ge 0, \Delta _k^L<0 \\
\sum _i p_k^i & \Delta _k^R<0
\end{cases} .
\end{equation}
This equation and the one corresponding to switching left and right fix $l_k$ and $r_k$ in the three cases mentioned above.
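The three-case formula above is easily mechanized; a minimal sketch (Python as bookkeeping pseudocode, names ours; lists are indexed by $k-2$ for $k=2,\dots,N$, and we use $\sum_i p_k^i = \Delta_k^R + k$):

```python
def created_pole_structure(deltas_R, deltas_L):
    """l_k of the puncture created on the LHS sphere, following the
    three cases of the summary formula in the text."""
    l = []
    for idx, (dR, dL) in enumerate(zip(deltas_R, deltas_L)):
        k = idx + 2
        if dR >= 0 and dL >= 0:
            l.append(k - 1)
        elif dR >= 0:            # Delta_k^L < 0
            l.append(k - 1 - dL)
        else:                    # Delta_k^R < 0: l_k = sum_i p_k^i
            l.append(dR + k)
    return l
```

$r_k$ is obtained from the same function with the two arguments interchanged; in that direction, values $r_k \ge k$ for $k>T$ signal the irregular punctures discussed above.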
Every given situation falls into one of these three cases, which are shown in figures \ref{fig:two_spheres_SUSU}, \ref{fig:two_spheres_SUSp1} and \ref{fig:two_spheres_SUSp2}. The plots are for illustration, and the essential behavior is indicated above the plots. As we saw, there is one Coulomb branch parameter of dimension $k$ in the gauge group along the tube exactly when $\Delta _k^L,\Delta _k^R \ge 0$. This gives us the gauge group along the tube in the three cases: $SU(T)$, $USp(T)$ and $USp(N-2)$. After the decoupling, the resulting two theories are those defined by a sphere with the punctures $P^i$ and $R$ for the RHS theory, and $Q^i$ and $L$ for the LHS theory. $L$ and $R$ are indicated in the appropriate figures.
The irregular punctures are $R$ in the first case if $T<N$, $R$ in the second case, and both $L$ and $R$ in the third case. We have two irregular punctures at both ends of the tube only for even $N$, in which case we get $USp(N-2)$ along the tube.
The only case in which both punctures at the ends of the tube are regular is \autoref{fig:two_spheres_SUSU} with $T=N$, in which both punctures are full punctures ($l_k=r_k=k-1$ for all $k$) and the gauge group along the tube is $SU(N)$.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{"two_spheres_SUSU"}
\caption{First case. We get in the tube $SU(T)$. If $T=N$ both $l_k$ and $r_k$ are full punctures.}
\label{fig:two_spheres_SUSU}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{"two_spheres_SUSp1"}
\caption{Second case. We get in the tube $USp(T)$.}
\label{fig:two_spheres_SUSp1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{"two_spheres_SUSp2"}
\caption{Third case. Possible only for even $N$. In the tube we get $USp(N-2)$.}
\label{fig:two_spheres_SUSp2}
\end{figure}
We can describe the result of the decoupling using only the familiar regular punctures. In all of the resulting spheres we have found that have an irregular puncture, the $\phi _k$ vanish starting from some $k$ \footnote{This might also happen in theories with regular punctures only.}. There is then a set of branches of the curve that is decoupled from the rest of it. This is the same as
\begin{itemize}
\item A usual $A_{N'-1} $ theory with lower $N'<N$ having only regular punctures, with the curve obtained by cancelling a common factor of $x$ in \eqref{eq:general_curve_form_class_S} from the curve with the vanishing $\phi _k$'s.
If all $\phi _k=0$ for $k \ge 2$ we have only the second ingredient below.
The punctures are obtained by truncating the pole structures to $2 \le k \le N'$.
\item Plus possibly additional free hypermultiplets.
\end{itemize}
The number of additional free hypermultiplets in such a theory can be found as follows. We started from some theory $C$ and after a decoupling limit, had a theory $C_1$, a theory $C_2$ which is a theory of regular punctures, and $n_h$ additional free hypermultiplets. Each of $C$, $C_1$ and $C_2$ is a theory of regular punctures. For each theory, a number of effective hypermultiplets and vector multiplets was defined, by a relation to the $a,c$ anomalies \cite{Gaiotto:2009gz}.
Subtracting the number of effective hypers of $C_1$ and $C_2$ from $C$ we find $n_h$. We will not quote this simple algebraic calculation, but merely give the result.
In the $Sp$ behavior, the resulting curve on the $Sp$ side will be trivial ($x^N=0$), and the theory there is then only a set of free hypermultiplets.
The missing ingredient is an expression for $N'$ in the $SU$ case. We will use results that will be obtained later on in order to express the different quantities using the diagrams of the punctures. According to what we saw, in the resulting theories of interest, the last $k$ for which $\phi _k \neq 0$ (with all $\phi _k=0$ afterwards) is the last $k$ such that $\Delta _k>0$. Using the discussion in \autoref{section:maximal_gauge_group_along_tube} and equation \eqref{eq:T_diagrammatic}, if there exists a last $k$ such that $\Delta _k>0$ then it is given by $\sum _{i=1} ^{\alpha -1} P_i^1$ (we use the conventions and notations from the text around \eqref{eq:T_diagrammatic} and will write these below when we summarize the result \footnote{In these conventions we choose $P^1$ to be of largest $p$. Note that there might be several punctures with the largest $p$. In all the calculations (such as that of $T$ in \eqref{eq:T_diagrammatic}) it does not matter which one we choose, except for calculating $N'$ in some cases where we have $m=2$ punctures. In these cases, if only one of them has $P_{\alpha } >1$ we choose it as $P^1$ (and otherwise it does not matter which one is chosen). }).
If there is no $\Delta _k>0$, the resulting theory is a theory of free hypermultiplets ($N'=0$). By following the lines of the analysis of the behavior in the $SU$ case, we can quite easily construct the diagrams that result in all $\Delta _k \le 0$, by following the behavior of $\Delta _k$ as we append each box. Since $\Delta _2>0$ if $m>2$, this can happen only with two punctures $P^1,P^2$ that are brought together to form the decoupling sphere in which we are interested. By doing this exercise, we find that there is no $\Delta _k>0$ in three basic cases (which, as written, are not mutually exclusive).
In the first option we have a simple puncture and some other puncture $P^1$. In this case $\alpha =1$ and the resulting theory is only a theory of $P^1_1(P^1_1-P^1_2)$ free hypermultiplets. The second option is when we have $P^2_1=2$, $P^1_1=3$, $P^1_2 \le 2$. Finally, we can have any $P^1_1 \le 3$ and $P^2$ of rows $2,2,1,1,\dots $. \\
For these cases we just need the number of free hypers. In general, as was explained above, the number of effective hypers minus the number of effective vector multiplets can be calculated for the decoupled RHS theory in the first general case (\autoref{fig:two_spheres_SUSU}). The diagrammatic description which will be given later on is used in this calculation. The number that is found is (still ordering $p^1 \ge p^i$)
\begin{equation}
\begin{split}
& n_{\Delta } =n_h-n_v=-1+ \frac{1}{2} P^1_{1,\alpha } (P^1_{1,\alpha } -P^1_{\alpha +1} ) +\frac{1}{2} \sum _{i=1}^{\alpha} P^1_{1,i} (P^1_i-P^1_{i+1} ) + \sum _{j \ge 2} n_{\Delta } (P^j) \\
& n_{\Delta } (P) =- \frac{N}{2} +\sum _i \frac{1}{2} P_{1,i} (P_i-P_{i+1} ) \ge 0 , \qquad
P_{1,i} \equiv \sum _{j=1} ^i P_j .
\end{split}
\end{equation}
In these cases in which there is no $\Delta _k>0$ we find that the total number of free hypers is
\begin{equation} \label{eq:free_hypers_SU_RHS_when_delta_k_le_0}
n_h=-1+ \frac{1}{2} P^1_{1,\alpha } (P^1_{1,\alpha } -P^1_{\alpha +1} ) +\frac{1}{2} \sum _{i=1}^{\alpha} P^1_{1,i} (P^1_i-P^1_{i+1} ) + \sum _{j \ge 2} n_{\Delta } (P^j) .
\end{equation}
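The quantity $n_\Delta(P)$ above involves only partial sums of the row widths; the following is a direct transcription (Python as bookkeeping pseudocode, name ours; exact rational arithmetic keeps the factors of $\tfrac{1}{2}$):

```python
from fractions import Fraction

def n_delta(rows):
    """n_Delta(P) = -N/2 + (1/2) sum_i P_{1,i} (P_i - P_{i+1}),
    with P_{1,i} = P_1 + ... + P_i and P_{i+1} = 0 past the last row."""
    N = sum(rows)
    total = Fraction(-N, 2)
    partial = 0  # running P_{1,i}
    for i, width in enumerate(rows):
        partial += width
        nxt = rows[i + 1] if i + 1 < len(rows) else 0
        total += Fraction(partial * (width - nxt), 2)
    return total

# n_delta([2, 1, 1]) -> 1  (simple puncture of A_3)
# n_delta([4])       -> 6  (full puncture of A_3)
```

As stated in the text, $n_\Delta(P) \ge 0$ for the regular punctures considered here.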
Let us summarize how the result of decoupling can be found very easily, in terms of usual theories with regular punctures. We are given a sphere $C$ with a set of punctures $P^i$ on the right and a set of punctures $Q^i$ on the left. We will assume that they are ordered such that $p^1 \ge p^i$ for any $i$ (see the previous section for the definition of $p$), and the same for $Q^1$ relative to $Q^i$ \footnote{For choosing $P^1$ among several punctures with the largest $p$, see the previous footnote.}.
Up to a possible renaming of what we call left and right, any given configuration should fall into one of the following cases, where by ``$P^i$ are of $Sp$ type'' we simply mean that we have 2 punctures whose first rows are of width 2; everything else is referred to as $SU$ type:
\begin{enumerate}
\item $P^i$ and $Q^i$ are of SU type, and $\sum_i q^i \ge N$. \\
In that case, we have along the tube $SU(T)$ with $T=\sum _{i=1} ^{\alpha} P^1_i$ where we denote $\alpha =\sum _{j \ge 2} p^j$. The resulting theory on the LHS is given by an $A_{N-1} $ theory with the punctures $Q^i$ and additionally $l_k=\min(k-1,\sum _i p_k^i)$. We will show that the diagram of $L$ is the diagram of $P^1$ with the $\alpha +1$ first rows merged to a single one.\\
The theory on the RHS is a theory in $A_{N'-1} $ where $N' = \sum _{i=1} ^{\alpha-1} P^1_i$, with the punctures $P^i$ (truncated to $k=2,\dots, N'$) and an additional full puncture instead of the tube, plus $(N'+P^1_{\alpha } )(P^1_{\alpha } -P^1_{\alpha +1} )$ free hypers. \\
If there are only two $P^i$ punctures such that one of them is a simple puncture, or one of them has $P_1=2$ and the other $P_1=3$, $P_2 \le 2$, or one of them has $P_1 \le 3$ and the other is of rows $2,2,1,1,\dots $, then the RHS is a free theory with the number of hypers given by \eqref{eq:free_hypers_SU_RHS_when_delta_k_le_0}.
\item $Q^i$ are of SU type while $P^i$ are of Sp type, and $\sum _i q^i \ge N$. \\
Along the tube we get $USp(T)$ where now $\alpha = P^2$ since there are only 2 punctures on the RHS. The resulting theory on the LHS is described just as in the first case. The theory on the RHS is just $2\alpha (2-P^1_{\alpha +1}) $ free hypers.
\item The third case is quite special. It occurs only for even $N$. On the LHS there are 2 punctures with all rows being of width 2. On the RHS $P^i$ are of the SU type with $\sum _i p^i = N-1$. \\
On the tube we have $USp(N-2)$. The LHS theory is a free theory of $2N$ hypers. The theory on the RHS is as described in the first case.
\end{enumerate}
All that is needed above is the row structure of the Young diagrams of the punctures, so this prescription is very easy to apply.\footnote{Note that we assume the sphere we began with is a legitimate theory, since not every collection of punctures on the Riemann sphere gives an acceptable theory.}
\subsection{$g \ge 1$ surfaces} \label{subsection:decoupling_g_ge1}
Suppose we have a $g \ge 1$ surface with regular punctures, and we bring several of them close together. We would like to get the resulting theories and tube as was done for the sphere. The scenario we consider is depicted in \autoref{fig:g_ge1_decoupling}.
In the case of the sphere, the situation between the two sets of punctures was symmetric: we could learn about each side by bringing the punctures on the other side close together. In a $g \ge 1$ surface the situation is not symmetric. We bring punctures, say on the right, close to each other. A sphere bubbles off and a long tube is formed, connecting the sphere and the remaining surface (as in \autoref{fig:g_ge1_decoupling}). An additional complication is that on the sphere we could use the full curve, which is more involved for a general surface. We should avoid writing its full form, and instead concentrate on the region of the punctures that are brought together, and still be able to obtain both of the resulting theories.
This analysis appears in \autoref{sec:appendix_g_ge1_analysis}.
The answer is that any situation will be either the first or the second scenario described for the sphere (without the third one). The gauge groups along the tube and the resulting two theories are the ones described there. In the convention used, the RHS theory is a sphere, while the LHS is of the same genus as that of $C$.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{"g_ge1_decoupling"}
\caption{Decoupling in a $g \ge 1$ surface. The punctures $L$ and $R$ are shown but do not appear until the complete decoupling of the tube. }
\label{fig:g_ge1_decoupling}
\end{figure}
\subsection{Do the decoupling punctures fix the tube?} \label{sec:decoupling_punctures_fix_tube}
In this section, we saw that given the punctures on any surface, when some of them are brought together, the resulting tube and theories on both sides of it are determined completely. We gave the gauge group along the tube and the pole structures of the created punctures for all the cases (as well as any needed additional information such as the $N' \le N$ in which a resulting theory is defined and the number of additional free hypermultiplets in the corresponding description). \\
We could hope naively that if for instance we bring the punctures on the right side close together, the pole structures of these punctures alone may be sufficient to fix the additional data required to specify the decoupling result (that is, the gauge group along the tube, the punctures $L$ and $R$, and the $N' \le N$ and additional number of hypers when needed). From what we saw, it does not have to be the case. An example is shown in \autoref{fig:several_decoupling_options}. There, two full punctures are brought together, but give different tubes.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{"several_decoupling_options"}
\caption{Several options for decoupling of the same right sphere (see \cite{Chacaltana:2010ks},\cite{Chacaltana:2011thesis} for many examples). }
\label{fig:several_decoupling_options}
\end{figure}
We may ask when one side of the decoupling determines the gauge group along the tube and the additional data mentioned above (all of these together will be called shortly ``the tube'' below). For the sphere, we can get the answer by inspecting the three cases from \autoref{subsection:decoupling_sphere} that cover all the scenarios. Suppose we are given a set of punctures that are brought together, and calculate $\Delta _k$. Begin with the case in which they are of the $SU$ type. If the $T$ of that $\Delta _k$ equals $N$, we might be for instance on the left side of the first case, with different possibilities for the right side. In the first case, the tube is determined by the right side, as can be seen from the results there, and hence we cannot determine the tube uniquely. If $T=N-1$ we might still be either in the first or the third case, and the tube is not determined. If $T<N-1$ we are necessarily on the right side of the first case, and it determines the tube uniquely. Now suppose that the $\Delta _k$ we obtained is of the $Sp$ type. If $T=N$ even, we might be either in the situation of the second case or of the third one, which give different tubes (in particular different gauge groups). If $T<N$, we are necessarily in the second case, the right side of which fixes the tube. \\
To conclude, a set of punctures $P^i$ brought together determines the information needed to specify the result of the decoupling, when they are of the $SU$ type with $T<N-1$ or the $Sp$ type with $T<N$. As will be explained later on, $T$ can be expressed in terms of the Young diagrams of $P^i$ through \eqref{eq:T_diagrammatic} where $p^1 \ge p^i$ is the puncture with the largest $p$ and $\alpha =\sum _{i \ge 2} p^i$.
For a $g \ge 1$ surface, we have only the first two cases and the punctures that are brought together correspond to the right hand sides there. Therefore the punctures that are brought together determine the tube completely.
\section{Diagrammatic decoupling} \label{sec:diagrammatic_decoupling}
\subsection{Punctures appearing at the end of a tube} \label{section:punctures_at_end_of_tube}
It will turn out to be instructive to distinguish the class of punctures that can be created when a tube decouples in a weak coupling limit.
According to what we saw in the discussion of the sphere and $g \ge 1$ surfaces, the three cases mentioned in \autoref{subsection:decoupling_sphere} exhaust all the possibilities. By examining these options, the (regular) punctures that appear at the end of a tube are exactly those that are given by the formula
\begin{equation} \label{eq:general_PRP_equation}
l_k = \min(k-1, \sum _{i=1}^m p_k^i)
\end{equation}
where $P^i$ are the (regular) punctures that are brought together on the other side of the tube.
Any puncture that can be expressed as in \eqref{eq:general_PRP_equation} for some set of $m$ punctures can also be obtained from only two punctures.
A short way to see this is to note the identity
\begin{equation} \label{eq:several_punctures_relation_to_pairs}
\begin{split}
\min(k-1,p_k^1+ \dots +p_k^m) &= \\
&=\min(k-1,p_k^1 + \min(k-1, p_k^2+ \dots +\min(k-1,p_k^{m-1} +p_k^m)\dots ))
\end{split}
\end{equation}
for regular punctures $P^1, \dots ,P^m$. Let us just write a particular case of this formula, so that the form of the formula will be clear: \eqref{eq:several_punctures_relation_to_pairs} for $m=3$ is
\begin{equation}
\min(k-1,p_k^1+p_k^2+p_k^3)=\min(k-1,p_k^1+\min(k-1,p_k^2+p_k^3)) .
\end{equation}
Let us prove \eqref{eq:several_punctures_relation_to_pairs}. Denote the term added to each $p_k^i$ inside its minimum by $\bar p_k^i$. Look at $p_k^1+\bar p_k^1$. When $p_k^1+\bar p_k^1 \le k-1$, the next minimum is necessarily not saturated at $k-1$. So in this region, $p_k^1+\bar p_k^1 = p_k^1+p_k^2+\min(k-1,p_k^3+\bar p_k^3)$. Continuing with the same reasoning, we get that when $p_k^1+\bar p_k^1 \le k-1$, $p_k^1+\bar p_k^1=p_k^1+\dots + p_k^m$. \\
Using repeatedly that the RHS of \eqref{eq:general_PRP_equation} is a regular puncture, $\bar p_k^1, \dots, \bar p_k^{m-2} $ are regular punctures. Apply now the analysis of \autoref{section:maximal_gauge_group_along_tube} to $p_k^1$ and $\bar p_k^1$. For $k>T$, $p_k^1+\bar p_k^1 \le k-1$, and we saw what happens for these values of $k$. For $k \le T$, $p_k^1+\bar p_k^1 \ge k-1$. The behavior of $p_k^1+\dots +p_k^m$ is of the $SU$ type only, implying that $p_k^1+ \dots +p_k^m\ge k-1$. For both ranges of $k$ we get
\begin{equation} \label{eq:several_punctures_relation_to_pairs_proof}
\min(k-1,p_k^1+\bar p_k^1)=\min(k-1,p_k^1+ \dots +p_k^m)
\end{equation}
giving \eqref{eq:several_punctures_relation_to_pairs} indeed.
This shows in particular that the punctures obtained by the RHS of \eqref{eq:several_punctures_relation_to_pairs_proof} can also be achieved using the LHS.
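Since only nonnegative integers enter, the identity can also be confirmed by brute force; the following Python sketch (not part of the text's argument, which only needs regular punctures) checks it exhaustively over small values:

```python
# Brute-force check of
# min(c, p1+...+pm) = min(c, p1 + min(c, p2 + ... min(c, p_{m-1}+p_m)...))
# for nonnegative integers, with c playing the role of k-1.

from itertools import product

def nested(ps, c):
    """Evaluate the right-nested form of the identity."""
    acc = ps[-1]
    for p in reversed(ps[:-1]):
        acc = min(c, p + acc)
    return acc

ok = all(min(c, sum(ps)) == nested(list(ps), c)
         for c in range(1, 6)
         for m in (2, 3, 4)
         for ps in product(range(5), repeat=m))
print(ok)   # expect True
```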
Call the set of all (regular) punctures $l_k$ that can be written as $l_k=\min(k-1,p_k+p'_k)$ where $p_k$ and $p'_k$ are regular punctures, \textbf{primary regular punctures (PRPs)}. The regular punctures that can appear at the end of a tube are exactly the PRPs. We will give a simple classification of the PRPs now.
\subsection{Classification of PRPs}
Let us give a simple characterization of the possible PRPs in the language of the corresponding Young diagram.
We saw that any PRP can be obtained by colliding two punctures $P$ and $P'$. We defined $T$ to be the last $k$ for which $\Delta _k=p_k+p'_k-k \ge 0$. Let us show that for a PRP, $L_1-L_2 \ge T$ and $T \ge L_2$. Assume for the moment that $T<N$.
After $k=T$ we saw that $\Delta _k$ does not increase anymore, and therefore $k=T+1$ is already at the tip of at least one of the diagrams, say $P'$.
At $k=T+1$ we must have $\Delta _k=-1$, that is $p_k+p'_k=k-1$, and we get $l_{T+1} =T$. $k=T$ is the end of a row in both $P$ and $P'$ (because afterwards $\Delta _k$ decreases by 1). Denote the width of the row coming right after $k=T$ in $P$ by $P_i$. After $k=T$, $p'_k$ will not change anymore (stuck at the tip), so $\Delta _k$ will stay $-1$ as long as we increase $p_k$, that is, $P_i$ more times. We get therefore that $L_1=T+P_i$. Afterwards, $l_k$ becomes $k-2$, and stays so as long as $\Delta _k$ stays the same, which happens for $P_{i+1} $ more steps. Then $L_2=P_{i+1} $. We get that $L_1-L_2=T+P_i-P_{i+1} \ge T$, and $L_2=P_{i+1} \le P_1 \le T$ (the last inequality holds because as long as $k \le P_1$, $p_k=k-1$ and $\Delta _k=p_k+p'_k-k \ge p'_k-1 \ge 0$ so $T \ge P_1$). \\
We assumed that $T<N$. If this does not happen, then $T=N$. In both the $SU$ and the $Sp$ cases, this means that $p_k+p'_k \ge k-1$ for all $2 \le k \le N$. By the PRP formula \eqref{eq:general_PRP_equation}, $l_k=k-1$ for all $2 \le k \le N$. Remembering that $l_k=k-h(k,L)$ ($h(k,L)$ being the row number of box number $k$ in the Young diagram), it implies that $L_1=N$ and $L_i=0$, $i \ge 2$. \\
In either case, we obtained that for a PRP:
\begin{equation} \label{eq:PRP_classification_bounds}
L_1-L_2 \ge T \qquad \text{and} \qquad T \ge L_2
\end{equation}
This implies that $L_1-L_2 \ge L_2$, or $L_1 \ge 2L_2$ is a necessary condition for a PRP.
Now we claim that it is almost sufficient: given $L_1 \ge 2 L_2$ (except for a simple puncture when $N>2$), $L$ is a PRP. \\
Define $p_k$ through its Young diagram. Take it to have rows with the number of boxes being $L_1-L_2$, $L_2$, $L_2$, $L_3$, $L_4$ and so on. Choose $p'_k$ to be a simple puncture. The values of $p_k+p'_k$ for $k$'s in the corresponding rows of $p_k$ are $k$ in the first row, $k-1$ in the second, $k-2$ in the third, and so on (recall $p'_k=1$ for a simple puncture for all $k$, and $p_k$ is $k$ minus the row number of the $k$th box in the diagram). $\min(k-1,p_k+p'_k)$ is $k-1$ for the first $L_1-L_2+L_2=L_1$ values of $k$, it is $k-2$ for the next $L_2$ values, $k-3$ for the next $L_3$ values and so on. We therefore constructed $L$ (rows of width $L_1,L_2,L_3, \dots $) using $\min(p_k+p'_k,k-1)$. We required that $L$ is not a simple puncture if $N>2$, because otherwise our construction of $p_k$ gives a no-puncture (all rows have a single box). A simple puncture in $N>2$ is not a PRP, because for instance we saw the requirement $L_1-L_2 \ge T \ge 2$, which is not satisfied by a simple puncture. When $N=2$ a simple puncture is also a full puncture, having $L_2=0$.
To summarize, PRPs are the regular punctures having $L_1 \ge 2L_2$, not including the simple punctures of $N>2$.
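This classification can be verified mechanically. The sketch below (helper names are ours) builds the witness $P=(L_1-L_2,L_2,L_2,L_3,\dots )$ from the construction above, collides it with a simple puncture, and recovers $L$:

```python
def p_of_k(rows, k):
    """p_k = k - h(k, P), with h(k, P) the row of box number k."""
    t = 0
    for h, w in enumerate(rows, 1):
        t += w
        if t >= k:
            return k - h

def collide(punctures, N):
    """Diagram of L from l_k = min(k-1, sum_i p_k^i)."""
    hs = [k - min(k - 1, sum(p_of_k(P, k) for P in punctures))
          for k in range(1, N + 1)]
    return [hs.count(n) for n in range(1, max(hs) + 1)]

def is_prp(L, N):
    """L_1 >= 2 L_2, excluding the simple puncture when N > 2."""
    L2 = L[1] if len(L) > 1 else 0
    return L[0] >= 2 * L2 and not (N > 2 and list(L) == [2] + [1] * (N - 2))

L, N = [5, 2, 1], 8                      # L1 = 5 >= 2*L2 = 4
P = [L[0] - L[1], L[1], L[1]] + L[2:]    # witness [3, 2, 2, 1]
assert is_prp(L, N)
print(collide([P, [2] + [1] * (N - 2)], N))   # expect [5, 2, 1]
```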
\subsection{Diagrammatic construction of the decoupling} \label{subsection:diagrammatic_method}
A (regular) puncture that can appear at the end of a tube, can be obtained by the decoupling of some punctures $P^i$, and will be given by \eqref{eq:general_PRP_equation}. This relation can be described diagrammatically in a simple way. We will show now how the Young diagram of $L$ is found easily from those of the $P^i$.
Recall that a Young diagram of a puncture can end with consecutive rows of width 1, and we called that region of the diagram the tip of the diagram. We also denoted the row number of box number $k$ in the diagram of the puncture $P$ by $h(k,P)$. Note that since $p_k=k-h(k,P)$, $p_k$ equals the number of boxes until box number $k$ that are not in the first column.
For a general puncture $P$ we have the following restrictions. At $k= \min(2p,N)$ we at least reach the last box of the diagram without the tip, and hence after this $k$, $p_k$ does not change anymore (\autoref{fig:puncture_p_demo} might be useful in this discussion):
\begin{equation} \label{eq:diagrammatic_first_range}
p_k = p \qquad \text{for } k \ge \min(2p,N) .
\end{equation}
What about the region $k \le \min(2p,N)$? If $P_1=2$ then $p_k = \floor*{\frac{k}{2} }$ for these $k$'s. If $P_1 >2$, at any box in this range at least half of the boxes lie outside the first column, and so $p_k \ge \frac{k}{2} $. Together:
\begin{equation} \label{eq:diagrammatic_second_range}
\begin{split}
p_k = \floor*{ \frac{k}{2} } \qquad \text{if } P_1=2, \\
p_k \ge \frac{k}{2} \qquad \text{if } P_1>2 ,
\end{split} \qquad
\text{for }2 \le k \le \min(2p,N).
\end{equation}
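Both ranges are easy to see in a small example; the sketch below (helpers are ours) computes $p_k$ for $P=(2,2,1,1)$, which has $P_1=2$, $p=2$, $N=6$:

```python
def p_of_k(rows, k):
    """p_k = k - h(k, P); boxes are numbered row by row."""
    t = 0
    for h, w in enumerate(rows, 1):
        t += w
        if t >= k:
            return k - h

P = [2, 2, 1, 1]
# floor(k/2) for k <= 2p = 4, then constant p = 2:
print([p_of_k(P, k) for k in range(2, 7)])   # expect [1, 1, 2, 2, 2]
```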
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{"puncture_p_demo"}
\caption{Demonstration of the tip of a diagram for a puncture $P$ and the value $p$.}
\label{fig:puncture_p_demo}
\end{figure}
Arrange the punctures $P^i$ such that $p^1 \ge p^2 \ge p^3 \ge \dots $. Note that for $k \le \min(2p^2,N)$ we have $\sum _i p^i_k \ge k-1$ by \eqref{eq:diagrammatic_second_range}.
Define $\alpha =\sum _{j \ge 2} p^j$.
Suppose that $2p^2 \le N$. Since $2p^2 \le 2p^1$, we have $p^1_{k=2p^2} \ge \floor*{\frac{k}{2} } = p^2$ and the row number of the corresponding box is $h(2p^2,P^1) = 2p^2 - p^1_{2p^2} \le p^2 \le \alpha $. For $k \ge \min(2p^2,N)$, using \eqref{eq:diagrammatic_first_range}, $\sum _{i \ge 2} p^i_k= \alpha $ is constant. In this range, $l_k=\min(k-1,\sum _i p_k^i) = \min(k-1, \alpha + k - h(k,P^1))$. We see that until row number $\alpha +1$ in $P^1$ (which includes the range $k \le \min (2p^2,N)$ indeed since $h(2p^2,P^1) \le \alpha $) $l_k=k-1$, that is we are in the first row of $L$. For $k$ in row number $\alpha + n$ in $P^1$ (with $n > 1$), $l_k=k-n$ so we are in the $n$th row of $L$. In other words, $L$ is just $P^1$ with the $\alpha +1$ first rows combined to a single row. \\
If on the other hand $2p^2 > N$ then at $k=N$ we have $p^1_N \ge \floor*{\frac{N}{2} } =N - \ceil*{ \frac{N}{2} } \ge N - p^2$, or $h(N,P^1) \le p^2$. We saw that for $k \le \min(2p^2,N)$ we have $\sum _i p^i_k \ge k-1$, and so for all $k$, $l_k=k-1$, which is a full puncture. This is still described by combining the first $\alpha +1$ rows of $P^1$, in the sense that $\alpha +1 \ge p^2 \ge h(N,P^1)$, that is, there are more rows to combine than we have in $P^1$.
To summarize, arranging the punctures such that $p^1 \ge p^i$ for all $i$ and defining $\alpha =\sum _{i \ge 2} p^i$, the Young diagram of $L$ is obtained by combining the first $\alpha +1$ rows of $P^1$ to a single row. This is demonstrated in \autoref{fig:diagrammatic_decoupling}. If the number of rows in $P^1$ is less than $\alpha +1$, we just combine all of them to a single row, and $L$ is a full puncture.
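The rule is easy to test mechanically. The following Python sketch (helper names are ours, not from any reference) computes $l_k=\min(k-1,\sum _i p_k^i)$ directly and compares with merging the first $\alpha +1$ rows of $P^1$:

```python
def p_of_k(rows, k):
    """p_k = k - h(k, P); boxes are numbered row by row."""
    t = 0
    for h, w in enumerate(rows, 1):
        t += w
        if t >= k:
            return k - h

def collide(punctures, N):
    """Diagram of L from l_k = min(k-1, sum_i p_k^i)."""
    hs = [k - min(k - 1, sum(p_of_k(P, k) for P in punctures))
          for k in range(1, N + 1)]
    return [hs.count(n) for n in range(1, max(hs) + 1)]

def merge_rule(punctures, N):
    """Merge the first alpha+1 rows of the puncture with the largest p."""
    ordered = sorted(punctures, key=lambda P: p_of_k(P, N), reverse=True)
    P1, rest = ordered[0], ordered[1:]
    alpha = sum(p_of_k(P, N) for P in rest)
    return [sum(P1[:alpha + 1])] + list(P1[alpha + 1:])

punctures, N = [[3, 1, 1], [2, 1, 1, 1]], 5   # P^2 simple, so alpha = 1
print(collide(punctures, N), merge_rule(punctures, N))  # expect [4, 1] [4, 1]
```

Here both computations give the diagram $(4,1)$; the same functions reproduce the full-puncture case when $P^1$ has fewer than $\alpha +1$ rows.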
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{"diagrammatic_decoupling"}
\caption{The diagrammatic method.}
\label{fig:diagrammatic_decoupling}
\end{figure}
\subsection{Irregular punctures} \label{subsection:irregular_punctures}
We have given a simple result for the process of decoupling, which requires the use of the familiar regular punctures only. The same applies to the rest of the sections. However, if we would like to describe the resulting theories in the same $A_{N-1} $ theories (that is, the same $N$), we have to use irregular punctures. In this section we pause to discuss them. These irregular punctures are discussed in \cite{Chacaltana:2010ks,Chacaltana:2011thesis}. A priori it is not possible to say what the set of all irregular punctures is for general $N$ (for that, it would be necessary to check in which 3-punctured spheres they might appear). With the tools we gained, we can now classify all the irregular punctures. We introduce Young diagrams analogous to those of regular punctures. The forms of these diagrams that correspond to irregular punctures will be described.
By considering figures \ref{fig:two_spheres_SUSU},\ref{fig:two_spheres_SUSp1},\ref{fig:two_spheres_SUSp2} we see that all the irregular punctures can be obtained in a decoupling process on the RHS of \autoref{fig:two_spheres_SUSU} or \autoref{fig:two_spheres_SUSp1} (these include the irregular punctures of \autoref{fig:two_spheres_SUSp2}). Therefore, a general irregular puncture can be realized in the setup of \autoref{fig:bringing_many_punctures_together} in which $L$ is a regular puncture (a PRP). Necessarily the puncture $R$ is either an irregular puncture or a full puncture. It will then be convenient to refer to a puncture which is either a full puncture or an irregular puncture as a \textbf{PIP}. We will study PIPs in this setup of \autoref{fig:bringing_many_punctures_together}.
From figures \ref{fig:two_spheres_SUSU},\ref{fig:two_spheres_SUSp1}, there are two types of irregular punctures, depending on whether the punctures $P^i$ are of the SU type or the Sp type. We will call the corresponding irregular punctures SU and Sp irregular punctures. Their pole structure is given by
\begin{equation}
\begin{split}
SU \text{ irregular puncture : } &p_k=k-1 \text{ for } k \le T<N \text{ and }\\
p_k=2k-1&-\sum_i p_k^i \ge k \text{ for } k>T, \text{ with some set of regular punctures } p_k^i ,
\end{split}
\end{equation}
\begin{equation}
\begin{split}
Sp \text{ irregular puncture : } &p_k \text{ oscillates } k-1,k,k-1,k,\dots,k-1 \text{ up to some even } T \text{ and}\\
p_k=2k-1&-\sum_i p_k^i \ge k \text{ for } k>T, \text{ with some set of regular punctures } p_k^i .
\end{split}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{"bringing_many_punctures_together"}
\caption{Bringing several punctures together.}
\label{fig:bringing_many_punctures_together}
\end{figure}
In \autoref{fig:bringing_many_punctures_together}, while $L$ is given by \eqref{eq:general_PRP_equation}, $R$ can be expressed as (see figures \ref{fig:two_spheres_SUSU},\ref{fig:two_spheres_SUSp1})
\begin{equation} \label{eq:PIP_equation}
\begin{split}
r_k &= \begin{cases}
k-1 & \Delta _k \ge 0 \\
2k-1 - \sum_i p^i_k & \Delta _k < 0
\end{cases} \\
&=\max(k-1, 2k-1-\sum_i p^i_k) .
\end{split}
\end{equation}
In \autoref{subsection:diagrammatic_method} we gave several arguments that, when combined with \eqref{eq:general_PRP_equation}, resulted in the diagrammatic description. We now apply the results there and use \eqref{eq:PIP_equation}. First, consider the case where the $P^i$ have the SU behavior. In that case, we can refine the inequality in the range $k \le \min (2p^2,N)$ to $\sum _i p^i_k \ge k$ by \eqref{eq:diagrammatic_second_range} (since either $m >2$ or at least one of the $P^i$ has $P^i_1>2$). For $k \ge \min(2p^2,N)$, $\sum _{i \ge 2} p^i_k = \alpha $. Substituting in \eqref{eq:PIP_equation}, in this range, $r_k = \max(k-1, k-1 - \alpha + h(k,P^1))$.
This equation actually holds for all $k$ since for $k \le \min(2p^2,N)$ we have $k-1-\alpha +h(k,P^1) \le 2k-1 - \sum _i p^i_k \le k-1$.
Therefore, until row number $\alpha $ in $P^1$, $r_k=k-1$, and for a $k$ in row number $\alpha +n$ in $P^1$, $r_k = k-1 + n$.
We introduce Young diagrams for SU PIPs (that is an SU irregular or a full puncture) as on the left side of \autoref{fig:irregular_punctures_diagrams}. These are usual Young diagrams, colored in red to distinguish them. The box number $k$ in the $n$th row gives $r_k = k-2 + n$. We see by the discussion above that the puncture $R$ of SU PIP type is obtained diagrammatically from the $P^i$'s in a similar way to the regular $L$. Arranging $p^1 \ge p^i$ for all $i$, the diagram of $R$ is that of $P^1$ with the $\alpha $ first rows combined to a single row, $\alpha = \sum _{i \ge 2} p^i$. In a moment we will classify the diagrams that are obtainable.
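As a sanity check of \eqref{eq:PIP_equation} against the red-diagram rule $r_k=k-2+n$, the sketch below (helper names are ours) compares the two in the simplest SU example, where $P^2$ is a simple puncture so that $\alpha =1$ and $R$ has the diagram of $P^1$:

```python
def row_of(rows, k):
    """Row number of box number k in the Young diagram."""
    t = 0
    for n, w in enumerate(rows, 1):
        t += w
        if t >= k:
            return n

def r_poles(punctures, N):
    """r_k = max(k-1, 2k-1 - sum_i p_k^i) for k = 2..N."""
    return [max(k - 1, 2*k - 1 - sum(k - row_of(P, k) for P in punctures))
            for k in range(2, N + 1)]

def r_from_red(rows, N):
    """Box k in row n of a red SU PIP diagram gives r_k = k - 2 + n."""
    return [k - 2 + row_of(rows, k) for k in range(2, N + 1)]

P1, P2, N = [3, 1, 1], [2, 1, 1, 1], 5   # P2 simple => alpha = 1
print(r_poles([P1, P2], N) == r_from_red([3, 1, 1], N))  # expect True
```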
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{"irregular_punctures_diagrams"}
\caption{Diagrams for PIP (irregular and full) punctures. SU PIPs are marked in red, Sp PIPs are marked in blue.}
\label{fig:irregular_punctures_diagrams}
\end{figure}
Before that, let us address the remaining Sp case. We have only $P^1$ and $P^2$, $P^i_1=2$ for both, and $\alpha =p^2$ assuming $p^1 \ge p^2$. Necessarily $2p^2 \le N$ since $P^2_i \le 2$. In the range $k \le 2p^2$, by \eqref{eq:diagrammatic_second_range}, $\sum _i p^i_k = 2 \floor*{\frac{k}{2} }$. Using \eqref{eq:PIP_equation}, $r_k = \max(k-1, 2 \ceil*{ \frac{k}{2} } - 1) = 2 \ceil*{ \frac{k}{2} } - 1 $. For $k \ge 2p^2$, $p^2_k = p^2$ and $r_k = \max(k-1,k-1-\alpha +h(k,P^1))$. Thus again if $k$ is in row number $\alpha +n$ in $P^1$, $r_k=k-1+n$. Note that $p^1_{k=2p^2} = \floor*{\frac{k}{2} } = p^2$ and $h(2p^2,P^1) = 2p^2-p^1_{2p^2} = p^2 = \alpha $.
We see that for the $\alpha $ first rows of $P^1$, $r_k=2 \ceil*{\frac{k}{2} }-1$, while for the $(\alpha +n)$th row ($n>0$), $r_k=k-1+n$. We represent Sp PIPs (Sp irregular puncture or a full puncture) by a Young diagram, colored in blue, where a $k$ in the first row is associated with $r_k= 2 \ceil*{ \frac{k}{2} } - 1$ and for the $n$th row ($n>1$), $r_k = k-2+n$, see the right side of \autoref{fig:irregular_punctures_diagrams}. Diagrammatically, we get $R$ by just combining the first $\alpha =p^2$ rows of $P^1$ to a single row (where again $p^1\ge p^2$). Since $R_1 = 2\alpha$, the number of boxes in the first row of an Sp PIP diagram is always even.
We are now in a position to classify the PIPs and the irregular punctures. Any PIP is described by a Young diagram from \autoref{fig:irregular_punctures_diagrams}. The question is what subset of these diagrams gives precisely all the PIPs. Note that the two types of diagrams have an overlap, given by the Sp diagrams with $R_1=2$. These diagrams are realized precisely when $m=2$, $P^1_1=2$ and $P^2$ is a simple puncture. To avoid this redundancy, even though this set of $P^i$ is of Sp type, we will group it with the SU PIPs. First, we claim that any SU PIP diagram is obtainable (except for $R_1=1$ as usual). The reason is that we can simply take $P^1$ to be a general regular puncture and $P^2$ to be a simple puncture having $p^2=1$, and then by the diagrammatic description $R$ has the same diagram as $P^1$. The SU PIP diagrams are the same as the regular puncture diagrams (which are all the Young diagrams with more than one column). Next consider Sp PIPs. We have $P^1$ with $p^1$ rows of length 2 and the rest containing a single box, and $P^2$ with $p^2 \le p^1$ rows of 2 boxes and the rest with a single one. By the diagrammatic rules, $R$ has $R_1 = 2p^2$, then $p^1-p^2$ rows of length 2, and the rest with a single box. Thus all the Sp PIPs are such that $R_1$ is even and $R_i \le 2$ for $i>1$. To avoid the redundancy above we restrict to $R_1>2$.
To recover the irregular punctures we just need to throw away the full punctures, which are the SU PIPs with $R_2=0$. Summarizing, the irregular punctures are described by all the SU Young diagrams with at least 2 rows and columns and by the Sp Young diagrams with $R_1 >2$ even and $R_i \le 2$ for $i>1$. The Sp irregular punctures are described by just two numbers, $p^2 > 1$ and $p^1-p^2 \ge 0$.
\section{Gauging a given theory} \label{sec:gauging}
In the discussion of the decoupling limits we considered, almost always \footnote{Recall the exception for that is the third case mentioned at the end of \autoref{subsection:decoupling_sphere}, which occurs only for even $N$ and gives just $USp(N-2)$. } the Riemann surface in an $A_{N-1} $ theory degenerated to a surface with the same genus and same $N$, with an additional regular puncture $L$ by which it was connected through a tube to a sphere. This situation is demonstrated in \autoref{fig:bringing_many_punctures_together}. We classified what are the regular punctures (the possible $L$'s) that appear at the end of tubes (which we called PRPs). In this section we want to look at this situation from the other direction. Given a PRP $L$, we know it can be gauged, that is, connected through a tube to a sphere with punctures $P^i$. This amounts to gauging a diagonal global symmetry group of the symmetry of $L$ and of the additional sphere. We ask then in \autoref{subsection:gauging_puncture} what are the possibilities for this sphere given a PRP $L$. In \autoref{subsection:possible_gauge_groups} we consider what subgroups of the global symmetry associated with the puncture $L$ can be gauged (that is, what gauge group $G_T$ we can have along a tube). In \autoref{subsection:embedding_in_L} we describe how $G_T$ is embedded in the symmetry of $L$ and what restrictions that were obtained from class-$\mathcal{S} $ are still valid (from field theory considerations) when we replace the additional sphere by a non-class-$\mathcal{S} $ theory. A bound relating the symmetry of $G$ to the symmetries of the punctures $P^i$ is given in \autoref{subsection:bound_sym_PRP}.
\subsection{Gauging a puncture} \label{subsection:gauging_puncture}
Given a PRP $L$, we would like to see how we can find all the possibilities for the $P^i$ in \autoref{fig:bringing_many_punctures_together}. For this purpose, the diagrammatic description in \autoref{subsection:diagrammatic_method} (which applies here) will be very useful. We restrict ourselves to $L$ which is not a full puncture, and at the end we address the case of a full puncture. In the discussion in \autoref{subsection:diagrammatic_method} we saw that for $k \ge 2p^2$ (in the ordering of the punctures used there) the $p^i_k$ with $i \ge 2$ do not change with $k$ (we have used $2p^2 < N$ since $L$ is not a full puncture), and that $k=2p^2$ is in the first $\alpha $ rows of $P^1$. $L$ is obtained by combining the first $\alpha +1$ rows of $P^1$. It follows that after the $\alpha $th row of $P^1$, all other $P^i$ with $i \ge 2$ are at their tip.
Any PRP can be gauged by $2$ punctures as we saw, and therefore we start by addressing the question for $m=2$. Let us denote them by $A$ and $B$ instead of $P^1$ and $P^2$. We order them as in the diagrammatic description, with $a \ge b$. We saw that if we restrict their diagrams to the first $L_1$ boxes, we cover exactly the first $b+1$ rows of $A$, and in $B$ we are necessarily at the tip already. Additionally, since $A_{b+2} =L_2$, we must have $A_{b+1} \ge L_2$. The first $L_1$ boxes of $A$ necessarily belong to the following class of diagrams
\begin{itemize}
\item $A(L_1,L_2,n) $ = diagrams of $L_1$ boxes, $n$ rows, and $A_n \ge L_2$
\end{itemize}
with $n=b+1$. The full $A$ diagram is given by removing the first row of $L$ and placing the resulting diagram on top of a diagram in $A(L_1,L_2,b+1)$. The first $L_1$ boxes of $B$ give a diagram in the class
\begin{itemize}
\item $B(L_1,n)$ = diagrams of $L_1$ boxes, $n$ rows, and $B_n=1$ .
\end{itemize}
The full diagram of $B$ is given by extending the tip of this diagram.
Given any diagram in $A(L_1,L_2,n_A)$ and a diagram in $B(L_1,n_B)$, extended as above, their collision gives precisely $L$ if $n_A=b+1$ with $b=L_1-n_B$, that is if
\begin{equation}
n_A + n_B = L_1 + 1.
\end{equation}
For an example, see \autoref{fig:puncture_gauging_m2}.
Note that $n_A \le L_1/L_2$ and so if $L_2 > 1$, then $n_B > n_A$ and $A(L_1,L_2,n_A)>B(L_1,n_B)$ (we sometimes denote $A>B$ meaning $a>b$ for the punctures $A,B$). If $L_2=1$ there will be an overlap between the two classes of punctures. To avoid some of this redundancy we can restrict $n_A \le \frac{L_1+1}{2} $, which ensures that still $a \ge b$.
To summarize this part, to find all the possibilities with $m=2$ we do the following. For every $2 \le n_A \le \floor*{ \min \left( \frac{L_1}{L_2} ,\frac{L_1+1}{2} \right) }$ (the upper bound is $\floor*{\frac{L_1}{L_2} }$ unless $L_2=1$), we find all the diagrams in the class $A(L_1,L_2,n_A)$ (there is at least one diagram for each such $n_A$). Each such diagram, completed by appending the rows of $L$ starting from the second one, will be the first puncture. For each of them, we look for all the diagrams of $B(L_1,L_1+1-n_A)$ type, in which we complete the tip to get $N$ boxes in total (there is at least one such diagram for every $n_A$ above). All the pairs of diagrams that were obtained are the $m=2$ solutions that can appear in a gauging of $L$. If $L_2=1$ we might get the same configuration more than once; for $L_2>1$ we will get each possibility exactly once.
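This prescription lends itself to direct enumeration. The following sketch (function names and the enumeration strategy are ours; it assumes $L$ is neither a full nor a simple puncture) lists all $m=2$ pairs for a given $L$:

```python
def heads(total, n, max_w, last_min=1, last_exact=None):
    """Descending partitions of `total` into exactly n rows, with a
    constraint on the last row (A class: >= L2; B class: exactly 1)."""
    if n == 1:
        if (total <= max_w and total >= last_min
                and (last_exact is None or total == last_exact)):
            yield [total]
        return
    for w in range(min(total - (n - 1), max_w), 0, -1):
        for rest in heads(total - w, n - 1, w, last_min, last_exact):
            yield [w] + rest

def gaugings_m2(L, N):
    """All pairs (A, B) with n_A + n_B = L1 + 1 gauging the PRP L."""
    L1, L2 = L[0], (L[1] if len(L) > 1 else 0)
    hi = (L1 + 1) // 2 if L2 <= 1 else min(L1 // L2, (L1 + 1) // 2)
    out = []
    for nA in range(2, hi + 1):
        for a in heads(L1, nA, L1, last_min=max(L2, 1)):
            A = a + list(L[1:])                 # append rows L2, L3, ...
            for b in heads(L1, L1 + 1 - nA, L1, last_exact=1):
                out.append((A, b + [1] * (N - L1)))   # extend the tip
    return out

print(gaugings_m2([4, 2], 6))   # expect [([2, 2, 2], [2, 1, 1, 1, 1])]
```

For $L=(4,2)$ with $N=6$ it returns the single pair $A=(2,2,2)$, $B$ a simple puncture, in agreement with the witness used in the PRP classification.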
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{"puncture_gauging_m=2"}
\caption{Constructing $A$ and $B$. }
\label{fig:puncture_gauging_m2}
\end{figure}
The extension to a general number of punctures $m$ is given in a similar manner. Ordering the punctures as in the diagrammatic description $P^1 \ge P^2 \ge \dots $, the diagrams corresponding to the first $L_1$ boxes are of $A(L_1,L_2,n_1)$ class for $P^1$ and $B(L_1,n_i)$ for $P^i$, $i=2,\dots ,m$. The ordering of the punctures means that $n_1 \le n_2 \le \dots $. To get $L$ in the collision of the $P^i$ we need that $n_1 = \alpha +1 = \sum _{i \ge 2} p^i +1 = \sum _{i \ge 2} (L_1-n_i) + 1$, that is
\begin{equation} \label{eq:gauging_puncture_columns_constraint}
\sum _{i=1} ^m n_i=(m-1)L_1+1.
\end{equation}
For every diagram in $B(L_1,n)$, $n \le L_1-1$ (since a single column diagram is a no-puncture), thus $n_i \le L_1-1$ for $i \ge 2$ and from \eqref{eq:gauging_puncture_columns_constraint} $m \le n_1$. As before, $n_1 \le L_1/L_2 $ for $L_2>1$ and for $L_2=1$, $n_1 \le L_1-1$ again not to have a no-puncture. We have found
\begin{equation} \label{eq:gauging_punctures_allowed_m}
2 \le m \le \frac{L_1}{L_2} -\delta _{L_2,1} .
\end{equation}
Every $m$ in that range is indeed obtainable: take $m-1$ simple punctures, $n_1=m$ with the first row of length $L_1- (m-1)L_2$ and the rest of length $L_2$; all of these punctures are legitimate regular punctures, whose collision gives $L$.
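This construction can be verified mechanically; the sketch below (helper names are ours) collides $m-1$ simple punctures with the stated $P^1$ for every allowed $m$ and checks that the result is $L$, here for $L=(6,2)$, $N=8$:

```python
def p_of_k(rows, k):
    """p_k = k - h(k, P); boxes are numbered row by row."""
    t = 0
    for h, w in enumerate(rows, 1):
        t += w
        if t >= k:
            return k - h

def collide(punctures, N):
    """Diagram of L from l_k = min(k-1, sum_i p_k^i)."""
    hs = [k - min(k - 1, sum(p_of_k(P, k) for P in punctures))
          for k in range(1, N + 1)]
    return [hs.count(n) for n in range(1, max(hs) + 1)]

L, N = [6, 2], 8
L1, L2 = L
simple = [2] + [1] * (N - 2)
for m in range(2, L1 // L2 + 1):    # here L2 > 1, so no delta_{L2,1} shift
    # head of n_1 = m rows: first row L1 - (m-1)L2, then m-1 rows of L2
    P1 = [L1 - (m - 1) * L2] + [L2] * (m - 1) + list(L[1:])
    assert collide([P1] + [simple] * (m - 1), N) == L
print("ok")   # expect ok
```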
Note that the condition \eqref{eq:gauging_puncture_columns_constraint} ensures that the collision of the punctures $P^i$ gives $L$. If $L_2>1$ then necessarily $n_1 \le n_i$ for $i\ge 2$ (since otherwise $\sum n_i \le 2 \frac{L_1}{L_2} +(m-2)(L_1-1) \le L_1+(m-2)(L_1-1) = (m-1)L_1 - (m-2) < (m-1) L_1+1 $ which is impossible). Thus indeed $p^1 \ge p^i$ for $i \ge 2$ and the collision gives the combination of the first $L_1$ boxes of $P^1$ giving $L$. If $L_2=1$ then the extension of all the diagrams is with rows of a single box and it does not matter what is the biggest puncture (the combination of the first $L_1$ boxes of all of them gives an L-shaped diagram).
To get all the possible $P^i$ for every $m$ in \eqref{eq:gauging_punctures_allowed_m} we do as before the following. We find all sets of $m-1$ punctures of $B(L_1,n_i)$ class with $n_2 \le n_3 \le \dots \le n_m \le L_1-1$ such that $\sum _{i \ge 2} n_i \ge (m-2)L_1+2$ (these ensure $1 \le m \le n_1 \le L_1-1$). For each of them we look for all $A(L_1,L_2,n_1=(m-1)L_1+1-\sum _{i \ge 2} n_i)$ punctures. Extending the tip of the $B(L_1,n_i)$ punctures to get $N$ boxes in total, and appending to the $A(L_1,L_2,n_1)$ punctures the diagram of $L$ without the first row, we get all the possibilities for the $m$ punctures $P^i$. Note that again if $L_2=1$ we might get the same configurations more than once.
Finally we comment on the case where $L$ is a full puncture. From \autoref{fig:Delta_k_behavior} and \eqref{eq:general_PRP_equation} $L$ is a full puncture exactly when $ \sum _i p^i = \sum _i p^i_N \ge N-1$. We can get a full puncture with any number $m \ge 2$ of punctures $P^i$. All the possibilities of such punctures with $\sum _i p^i \ge N-1$ result in a full puncture.
\subsection{The possible gauge groups $G_T$} \label{subsection:possible_gauge_groups}
We have found what $P^i$ can appear for a given PRP $L$ in \autoref{fig:bringing_many_punctures_together}. Now we look for all the possible $G_T$.
We have seen in the diagrammatic method that $L$ is given by combining the first $\alpha +1 = \sum _{i \ge 2} p^i + 1$ rows of $P^1$ (where $p^1 \ge p^i$ for all $i$). For the purpose of this subsection it is more convenient to abuse notation and denote by $\alpha $ the quantity $\min\left( \sum _{i \ge 2} p^i ,h(N,P^1) \right)$, where $h(N,P^1)$ is the number of rows in $P^1$: there can be fewer rows in $P^1$ than $\alpha $, in which case $P^1_{\alpha +1} = P^1_{\alpha +2} = \dots = 0$ (this redefinition is not essential, and the original definition of $\alpha $ can be used in the formulae below). The diagrammatic rule is still the same with this definition of $\alpha $. With these conventions, by considering \autoref{section:maximal_gauge_group_along_tube} (in particular \autoref{fig:Delta_k_behavior}) and \eqref{eq:general_PRP_equation} (or alternatively from \autoref{subsection:irregular_punctures}, using the last $k$ for which $r_k=k-1$) one can see that the $T$ associated with the $P^i$ is
\begin{equation} \label{eq:T_diagrammatic}
T= \sum _{i=1} ^{\alpha } P^1_i
\end{equation}
(for both the SU and the Sp case; in the Sp case $p^2 \le h(N,P^1)$ always). We have seen in \autoref{sec:decoupling} that if the $P^i$ are of the SU behavior, then $G_T = SU(T)$ while if they are of the Sp behavior, $G_T = USp(T)$.
From what we have seen just now, $L_1= T+ P^1_{\alpha +1} $, $L_2=P^1_{\alpha +2} $, $\dots$. Therefore $T=L_1 - P^1_{\alpha +1} $, and we can bound $T$ from both sides. From the usual structure of Young diagrams we have $P^1_{\alpha +1} \ge L_2$, and so $T \le L_1-L_2$. Similarly, $L_1 - P^1_{\alpha +1} \ge P^1_{\alpha } \ge P^1_{\alpha +1} $, so that $P^1_{\alpha +1} \le L_1/2$ and $T \ge L_1/2$. Additionally, always $T \ge 2$. Combining the bounds
\begin{equation} \label{eq:G_T_rank_bound}
\max\left( 2, \frac{L_1}{2} \right) \le T \le L_1-L_2 .
\end{equation}
Note that as we will see later on, the bound $T \ge \frac{L_1}{2} $ also follows from demanding that the theory on the other side of the tube (not the one with $L$) is unitary.
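For concreteness, the integer ranks $T$ allowed by \eqref{eq:G_T_rank_bound} for a given $L_1, L_2$ can be enumerated directly (a small illustrative script of ours; the name \texttt{allowed\_T} is not used in the paper):

```python
import math

def allowed_T(L1, L2):
    """Integer T with max(2, L1/2) <= T <= L1 - L2, cf. eq. (G_T_rank_bound)."""
    lo = max(2, math.ceil(L1 / 2))
    hi = L1 - L2
    return list(range(lo, hi + 1))

assert allowed_T(6, 2) == [3, 4]
assert allowed_T(4, 1) == [2, 3]
assert allowed_T(9, 0) == [5, 6, 7, 8, 9]
```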
First we claim that precisely all the $SU(T)$ with $T$ in \eqref{eq:G_T_rank_bound} are the options for $G_T=SU(T)$. Indeed any integer $T$ is obtained in the following configuration. Take $m=2$, $P^2$ a simple puncture, and $P^1$ with rows of length $T$, $L_1-T$, $L_2$, $L_3, \dots$. By \eqref{eq:G_T_rank_bound} both punctures are legal, and they give $G_T=SU(T)$ and $L$ in their collision.
In the Sp case, we have $m=2$ and $P^1_1=P^1_2=2$. Therefore necessarily $L_i \le 2$ for $i \ge 2$. A puncture $P$ with $P_1=2$ is fixed by $p$ --- it has $p$ rows of width $2$ and the rest with width $1$. Suppose $L_2=2$; then necessarily $P^1_i=2$ for all $i \le L_1/2$, $L_1$ is even and $p^2=\frac{L_1}{2} -1 $. In this case $G_T = USp(L_1-2)$. If $L_2=1$, then $P^1_{\alpha +1} $ is $1$ or $2$ and $T=L_1-P^1_{\alpha +1} \ge L_1-2$ and $T \le L_1-1$ from \eqref{eq:G_T_rank_bound}. So if $L_2=1$ and $L_1$ is odd we have $G_T = USp(L_1-1)$ and if $L_1$ is even, then $G_T=USp(L_1-2)$. Both $G_T$ are obtained with a single $P^1,P^2$ configuration. The case $L_2=0$ is completed in the same way, with the result below.
To summarize, for a given PRP $L$, the possible $G_T$ are exactly
\begin{itemize}
\item $SU(T)$ with $T$ in \eqref{eq:G_T_rank_bound}.
\item If $L_2=2$ and $L_1$ even, $USp(L_1-2)$.
\item If $L_2=1$ then $USp(L_1-2)$ for even $L_1\ge 4$ and $USp(L_1-1)$ for odd $L_1$.
\item If $L_2=0$ and $L_1$ even, can have $USp(L_1)$ and $USp(L_1-2)$ (the latter for $L_1 \ge 4$).
\item If $L_2=0$ and $L_1$ odd, $USp(L_1-1)$.
\end{itemize}
The $Sp$ groups are obtained with a single possibility for $P^1,P^2$, and when $T=2$ the $G_T$ is just $SU(2)$.
\subsection{The embedding of $G_T$ in a PRP} \label{subsection:embedding_in_L}
The bound $T \le L_1-L_2$ in \eqref{eq:G_T_rank_bound} implies that $G_T$ comes from gauging a subgroup of the $U(L_1-L_2)$ factor only in the symmetry \eqref{eq:regular_puncture_symmetry} associated with $L$. As a verification of this, we expect to be left with a $\prod _{i \ge 2} U(L_i-L_{i+1} )$ global symmetry. Indeed, after the gauging, the bigger puncture $P^1$ contains a symmetry of $U(P^1_{\alpha +2} -P^1_{\alpha +3} ) \times U(P^1_{\alpha +3} -P^1_{\alpha +4} ) \times \dots = U(L_2-L_3) \times U(L_3-L_4) \times \dots $ (possibly without a $U(1)$ factor which will appear in another $P^i$) as we saw in the diagrammatic description (see \autoref{fig:diagrammatic_decoupling}). We show in this subsection how the possible $G_T$'s described above are embedded in this $U(L_1-L_2)$ and discuss the possibility of getting additional $G_T$'s if we gauge a diagonal subgroup of the symmetry of $L$ and a non-class-$\mathcal{S} $ theory (replacing the sphere containing the $P^i$). We will see that many of the restrictions on $G_T$ are purely field theoretic, valid for any $\mathcal{N} =2$ SCFT, not necessarily from class-$\mathcal{S} $.
Any gauge group $G_T$ along some tube in a superconformal theory should have a vanishing beta function. Before forming the tube, we had two theories described by two Riemann surfaces (alternatively, the tube can have both ends on the same surface). This is depicted in \autoref{fig:beta_function_picture}. Forming the tube amounts to gauging some diagonal group from the flavor symmetries of the two sides. The contribution of each side to the beta function is proportional to the central charge of the flavor symmetry currents that we gauge. The total contribution of the matter should cancel the contribution from the vector multiplets. The vanishing beta function equation is then
\begin{equation} \label{eq:zero_beta_function}
2 T(\text{adj}) = k^L+k^R
\end{equation}
where $k^L$ and $k^R$ are the contributions from the two sides of the tube. The vector multiplets are in the adjoint representation and therefore contribute $2T(\text{adj})$. The factor of $2$ is a matter of normalization of this equation. We will work in the normalization in which $T(\text{adj})=2K$ and $T(\text{fund})=1$ for $SU(K)$.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{"beta_function_picture"}
\caption{The picture of gauging a diagonal flavor symmetry.}
\label{fig:beta_function_picture}
\end{figure}
\subsubsection{$G_T=SU(T)$}
The central charge of the flavor symmetry of some puncture is determined by the puncture and the subgroup we gauge. We are still in the situation of \autoref{fig:bringing_many_punctures_together}, and suppose $G_T=SU(T)$, $T$ being in the range \eqref{eq:G_T_rank_bound}.
We can replace the right side of the tube by any other theory, and the contribution of $L$ to the beta function of $G_T$ will not change as long as we have the regular puncture $L$ on the left side of the tube and $G_T$ is the same.
We choose the sphere shown in \autoref{fig:embedding_replaced_sphere} for that. Note that the rightmost diagram is valid, $T \ge L_1 - T$ and $L_1-T \ge L_2$. Using the analysis in \autoref{subsection:decoupling_sphere} or the diagrammatic method, it is seen immediately that we indeed get $L$ and $G_T=SU(T)$.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{"embedding_replaced_sphere"}
\caption{Replaced sphere giving $L$ and $G_T=SU(T)$. We use a shorthand notation for a diagram: a box labeled $n$ means that the corresponding row contains $n$ boxes. The $2,1,1,\dots $ diagram is just a simple puncture.}
\label{fig:embedding_replaced_sphere}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{"embedding_replacement_in_quiver"}
\caption{A linear quiver in which the same $L$ and $G_T$ appear. A small disk represents a simple puncture.}
\label{fig:embedding_replacement_in_quiver}
\end{figure}
We can get the contribution of $L$ since the configuration of \autoref{fig:embedding_replaced_sphere} appears in the linear quiver shown in \autoref{fig:embedding_replacement_in_quiver}. The rightmost sphere is just a free theory of $2T-L_1$ fundamentals of $SU(T)$, while the contribution of $L$ to $SU(T)$ is the same as of $L_1$ fundamentals (the sphere to which it belongs gives a bifundamental hypermultiplet).
If we have hypermultiplets in the representation $ \oplus _i r_i$ of some group $G$, then $k_G=\sum _i 2T(r_i)$.
Therefore
\begin{equation}
k^L_{SU(T)} =2L_1 T(\text{fund})=2L_1 .
\end{equation}
We saw that any gauging of $L$ must come from some subgroup of $U(L_1-L_2)$. In general, if the central charge of a current of flavor symmetry $U$ ($U$ being a simple group) is $k_U$, then for a simple subgroup $W \subset U$, $k_W =xk_U$ where $x$ is the embedding index of $W$ in $U$. In our case, we found that $k_{SU(T)} ^L = k^L_{SU(L_1-L_2)}$. This means that the embedding index is $1$, and the embedding is the trivial embedding.
We can again check if this is consistent. After gauging $SU(T)$ from the $SU(L_1-L_2)$ part of the symmetry of the puncture $L$, we expect that if the embedding is trivial, there will be a leftover $SU(L_1-L_2-T)$ symmetry. This is indeed the case, as the bigger puncture in the diagrammatic picture of gauging, has an $SU(P^1_{\alpha +1} -P^1_{\alpha +2} )= SU(L_1-T-L_2)$ factor in its symmetry (as we saw in the previous subsection, with $\alpha $ defined as there), see \autoref{fig:diagrammatic_decoupling}.
We could ask whether, in gauging $L$ with a non-class-$\mathcal{S} $ theory, we could obtain a gauging which is not possible in class-$\mathcal{S} $, that is, some other gauged subgroup or a different (non-trivial) embedding. Suppose we could gauge an $SU(T)$ subgroup of $SU(L_1-L_2)$ with embedding index $x$, and at the other end of the tube we could have any other theory (not only of class-$\mathcal{S} $). Then the contribution of the RHS theory to the beta function is
\begin{equation}
k^R = 4T-k^L = 4T - 2L_1 x .
\end{equation}
We assume that the theory at the other end of the tube is unitary. The condition $k^R \ge 0$ means $T \ge L_1/2$. Therefore, we cannot gauge in general an $SU(T)$ that does not appear in class-$\mathcal{S} $. \\
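The balance can be made explicit numerically. In the normalization $T(\text{adj})=2K$, $T(\text{fund})=1$ for $SU(K)$, the puncture $L$ contributes $k^L=2L_1$ and the replaced sphere of \autoref{fig:embedding_replaced_sphere} contributes $2T-L_1$ fundamentals. The following check (our own, not part of the paper) confirms \eqref{eq:zero_beta_function} for this configuration and the unitarity bound $T \ge L_1/2$:

```python
def beta_balanced(L1, T):
    """k^L + k^R == 2*T(adj) for G_T = SU(T), with k^L = 2*L1 from the puncture L
    and k^R = 2*(2T - L1) from 2T - L1 fundamental hypermultiplets."""
    kL = 2 * L1
    kR = 2 * (2 * T - L1)
    return kL + kR == 2 * (2 * T)  # T(adj) = 2T for SU(T)

# The balance is an identity, and k^R >= 0 exactly when T >= L1/2.
for L1 in range(2, 20):
    for T in range(2, L1 + 1):
        assert beta_balanced(L1, T)
        assert (2 * (2 * T - L1) >= 0) == (T >= L1 / 2)
```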
Since the Dynkin embedding index is an integer, a non-trivial embedding has $x \ge 2$, implying $k^R \le 4(T-L_1)$. We saw that $T=L_1-P^1_{\alpha +1}$, and so $T-L_1<0$ unless $P^1_{\alpha +1}=0$, for which $T=N=L_1$. But for $T=N=L_1$, $x=1$. We see that assuming $x \neq 1$ leads to $T-L_1<0$ and $k^R<0$.
We conclude that the only possible gauging of $SU(T) \subset SU(L_1-L_2)$ is by the trivial embedding. \\
Note that from the same reason, we cannot gauge any subgroup of the other $SU(L_i-L_{i+1} )$, $i>1$, symmetries as in class-$\mathcal{S} $.
For any $L$ we can form the linear quiver presented at the bottom of \autoref{fig:general_quiver}. The corresponding curve is the one at the top of \autoref{fig:general_quiver}.
If we turn off the gauge coupling of the $SU(\sum _{j=1} ^i L_j)$ gauge group, we find that $k_{SU(L_i-L_{i+1} )} = 2 \sum _{j=1} ^i L_j$. By supersymmetry, this central charge is also an anomaly coefficient and is independent of the exactly marginal couplings. It is easily seen that $k^R \ge 0$ and $T \le L_i-L_{i+1} $ cannot hold simultaneously.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{"general_quiver"}
\caption{A tail of a general linear quiver. In the top we see the Riemann surface with a puncture $L$ and additional simple punctures. Below the corresponding linear quiver is displayed.}
\label{fig:general_quiver}
\end{figure}
\subsubsection{$G_T=USp(T)$}
The only other possibility for $G_T$ in class-$\mathcal{S}$ is $USp(T)$. We saw in the previous subsection the cases giving $USp(T)$.
Let us see what are the possible embeddings, given that we cannot have $k^R<0$. For $USp(2r)$, $2T(\text{adj})=4(r+1)$. The options giving $USp(2r)$ are
\begin{itemize}
\item $L_1=2r+2$, $L_2=0,1,2$. In these cases, $k^R=4(r+1)-x k_{SU(L_1-L_2)} =4(r+1)-2(2r+2)x$. Only $x=1$ is possible, and it gives $k^R=0$.
\item $L_1=2r+1$, $L_2=0,1$. $k^R=4(r+1)-2x(2r+1)$. For $x=1$ we get $k^R=2$. If $x \ge 2$, $k^R \le -4r < 0$, so it is not possible.
\item $L_1=2r$, $L_2=0$. $k^R=4(r+1)-4rx $. For $x=1$ we get $k^R=4$. If $x \ge 2$, $k^R \le 4(1-r) <0$, which is not possible again.
\end{itemize}
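These three cases can be verified mechanically (an illustrative check of ours; the function name is not from the paper):

```python
def kR_USp(r, L1, x=1):
    """k^R = 2*T(adj) - x*k^L = 4*(r+1) - 2*L1*x for G_T = USp(2r)."""
    return 4 * (r + 1) - 2 * L1 * x

for r in range(2, 12):
    assert kR_USp(r, 2 * r + 2) == 0   # L1 = 2r+2, L2 = 0,1,2
    assert kR_USp(r, 2 * r + 1) == 2   # L1 = 2r+1, L2 = 0,1
    assert kR_USp(r, 2 * r) == 4       # L1 = 2r,   L2 = 0
    # an embedding index x >= 2 would give k^R < 0 in every case
    assert all(kR_USp(r, L1, x) < 0
               for L1 in (2 * r, 2 * r + 1, 2 * r + 2) for x in (2, 3))
```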
For the $USp(2r)$ case, again only the trivial embedding in $SU(L_1-L_2)$ is possible. We already saw that there will indeed be a leftover $SU(L_1-L_2-T)$ symmetry after the gauging.
Let us ask once again about gauging $L$ with a non class-$\mathcal{S}$ theory.
As before, only an embedding with index $x=1$ is possible, even when the other theory that we gauge is not necessarily from class-$\mathcal{S} $. The reason for this is the following. An embedding of $USp(2r) \subset SU(L_1-L_2)$ implies $L_1 \ge 2r+L_2$. Again we use $k^R = 4(r+1)-2xL_1 \le 4(r+1)-2x(2r+L_2)$. An embedding with $x \ge 2$ would imply $k^R<0$ (using $r>1$ since otherwise the algebra is the same as that of $SU(2)$). We saw that in class-$\mathcal{S} $ we cannot gauge some $USp(2r)$ of the flavor symmetry of a puncture with $L_2 \ge 3$, and this still holds even when the other theory we couple to the gauge group is not from class-$\mathcal{S} $ by the same bound.
\subsubsection{$G_T=SO(T)$}
In the theories discussed here, it was not possible to have an $SO(n)$ group along the tube. Let us see that it is still impossible even if we would like to gauge a diagonal subgroup of the symmetry of the puncture $L$, and of some other theory which is not necessarily of class-$\mathcal{S} $. $2T(\text{adj}) = 4(n-2)$ for $SO(n)$. Additionally, in any embedding $SO(n) \subset SU(L_1-L_2)$, $x \ge 2$ necessarily (since the minimal index of a representation is 2 in $SO(n)$).
The exceptions for that are the low rank cases in which the two algebras are just the same, but we are not interested clearly in these cases.
In all the other cases, $x=2$ and $L_1-L_2 \ge n$. We obtain then $k^R = 4(n-2) -2x L_1 \le 4(n-2) - 4n <0 $.
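The same arithmetic excludes $SO(n)$ at a glance (again an illustrative check of ours, verifying only the inequality just stated):

```python
def kR_SO(n, L1, x=2):
    """k^R = 4*(n-2) - 2*x*L1 for G_T = SO(n); minimal embedding index is x = 2."""
    return 4 * (n - 2) - 2 * x * L1

# With L1 >= n (needed for SO(n) inside SU(L1 - L2), L2 >= 0), k^R <= -8 always.
assert all(kR_SO(n, L1) <= -8 for n in range(5, 25) for L1 in range(n, 50))
```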
\subsection{A bound on the rank of the symmetry of a PRP} \label{subsection:bound_sym_PRP}
Given $m\ge 2$ regular punctures $P^1, \dots ,P^m$, the regular puncture $L$ defined by \eqref{eq:general_PRP_equation} satisfies
\begin{enumerate}
\item $L_1 \ge \min( \sum _{i=1} ^m P_1^i - m+1 , N)$ or equivalently
\begin{equation} \label{eq:bound_rk_l_1}
\rk G (L) \ge \min( \sum _{i=1} ^m \rk G(P^i), N-1)
\end{equation}
(where $G(P)$ is the global symmetry of the puncture $P$).
\item If in addition at least one $P_2^i \ge 2$, then\\
$L_1 \ge \min( \sum _{i=1} ^m P_1^i - m +2 , N)$ or equivalently
\begin{equation} \label{eq:bound_rk_l_2}
\rk G(L) \ge \min( \sum _{i=1} ^m \rk G(P^i) + 1 , N-1)
\end{equation}
\end{enumerate}
These follow easily from equation \eqref{eq:general_PRP_equation}.
In any puncture $P_1 \ge 2$, because if it were $1$ it would be a no-puncture. Define $k_0 = \min(\sum _i P^i_1 - m+1,N)$. If $k_0 = \sum _i P^i_1-m+1$, then for any $i$, $k_0 = P^i_1+\sum _{j \neq i} P^j_1-m+1 \ge P^i_1 + 2(m-1)-m+1 \ge P^i_1$ and therefore $p^i_{k_0} \ge P^i_1-1$. This is clearly true also if $k_0=N$. It follows that $\sum _i p^i_{k_0} \ge \sum _i P^i_1 - m \ge k_0 -1$, and then by the definition of $l_k$, $l_{k_0} = k_0-1$. This means that $L_1 \ge k_0$, establishing \eqref{eq:bound_rk_l_1} (since $\rk G(P)=P_1-1$).
For the second statement, define $k'_0 = \min( \sum _i P^i_1 - m+2,N)$. It is not smaller than the previously defined $k_0$ and therefore still $p^i_{k'_0} \ge P^i_1-1$ for every $i$. For the particular $i$ for which $P^i_2 \ge 2$, if $k'_0=\sum _j P^j_1-m+2$ then $k'_0 = P^i_1 + \sum _{j \neq i} P^j_1 - m + 2 \ge P^i_1 + 2(m-1)-m+2 = P^i_1 +m \ge P^i_1 + 2$. Since $P^i_2 \ge 2$, it means that $p^i_{k'_0} \ge P^i_1$. If $k'_0=N$ this clearly still holds. Together, these give $\sum_i p^i_{k'_0} \ge \sum P^i_1-m+1 \ge k'_0 - 1$, and once again $L_1 \ge k'_0$.
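The purely arithmetic steps in these two proofs can be brute-forced (our check; it verifies only the inequalities on the first rows, not the full pole-structure statement):

```python
import itertools

def k0(firsts):
    """k0 = sum_i P^i_1 - m + 1 for the m = len(firsts) first-row lengths."""
    return sum(firsts) - len(firsts) + 1

# With P^i_1 >= 2 for all i: k0 >= P^i_1 for every i, and the shifted value
# k0 + 1 (= k0' of the second statement) even satisfies k0 + 1 >= P^i_1 + 2.
for m in range(2, 5):
    for firsts in itertools.product(range(2, 6), repeat=m):
        assert all(k0(firsts) >= P1 for P1 in firsts)
        assert all(k0(firsts) + 1 >= P1 + 2 for P1 in firsts)
```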
\section{Large $N$ and field theories on $AdS_5$} \label{sec:large_N}
Certain theories of the sort we have used are relevant in the large $N$ limit for describing field theories on $AdS_5$ with various boundary conditions. In this section we address this connection and apply the tools from the previous sections.
In \cite{Aharony:2015zea} the AdS/CFT correspondence between type IIB string theory on $AdS_5 \times S^5 / \mathbb{Z}_K $ and the four dimensional $\mathcal{N} =2$ theory with a circular quiver $SU(N)^K$ \cite{Kachru:1998ys} was considered. The four dimensional side is the theory where the Riemann surface $C$ is a torus with $K$ simple punctures. The singular limit where the integrals of the NS-NS and R-R 2-forms on the 2-cycles of the orbifold vanish is particularly interesting. This corresponds in the 4d side to bringing the simple punctures close to each other. In the large $N$ limit, we get the six-dimensional $(2,0)$ $A_{K-1} $ theory on $AdS_5 \times S^1$ decoupled from gravity.
In the 4d picture, this situation is a simple instance of the degeneration limits we considered in the previous sections. As the simple punctures are taken close, a sphere bubbles off, connected by a tube to the remaining torus on which a single puncture is created if the tube is completely decoupled. This puncture is an L-shaped puncture with rows of width $K+1,1,1,\dots $ and pole structure $1,2,\dots ,K,K,\dots $. The gauge group that becomes weakly coupled is an $SU(K)$. As we explained in \autoref{sec:decoupling}, the curve on the decoupling sphere terminates within a number of terms of the order of $K$. The sphere and the tube are independent of $N$.
In this limit where we have an $A_{K-1} $ singularity in the type IIB theory, we expect to get an $SU(K)$ gauge symmetry in the five dimensional theory on $AdS_5$ and an $SU(K)$ global symmetry in the corresponding four dimensional theory. This does not happen and the global symmetry in our 4d theory is $U(1)^K$. This conflict is avoided as described in \cite{Aharony:2015zea} by having the $SU(K)$ tube on the boundary gauging both the decoupling sphere with simple punctures sitting on the boundary as well, and the five dimensional theory on $AdS_5$ in the bulk. The 4d $SU(K)$ symmetry is thus gauged and not global, and the global $U(1)^K$ symmetry arises from the global symmetry of the theory on the decoupling sphere.
Having these theories on the boundary provides a specific choice of boundary conditions for the $(2,0)$ theory on $AdS_5\times S^1$, which is realized in the type IIB construction. We could give the $(2,0)$ theory on $AdS_5 \times S^1$ alternative boundary conditions in which we do not have the ingredients just mentioned on the boundary. In this case the corresponding four dimensional theory is a torus with the L-shaped puncture, having now $SU(K)$ in its global symmetry. It is not known how to realize these alternative boundary conditions in type IIB string theory. In principle an M-theory dual can be found along the lines of \cite{Gaiotto:2009gz}, but it is not reliable due to high curvatures.
We would like to see by how much this picture can be generalized. The configurations we need to examine appear in \autoref{fig:4d_part_correspondence}. The $Q(N,G)$ ingredient is the analog of the torus with the L-shaped puncture. It is now some surface including a puncture with global symmetry $G$ (as well as additional punctures possibly). The tube, gauging some subgroup of $G$, together with the $P(G)$ theory may provide different boundary conditions in case we have a decoupled theory on $AdS_5$. The tube and the $P(G)$ theories are finite in the large $N$ limit.
In order for the tube and $P(G)$ to be independent of $N$, we must be either in the first or the second general cases which were described at the end of \autoref{subsection:decoupling_sphere}. In particular, as was discussed in \autoref{subsection:decoupling_g_ge1}, $P(G)$ must be defined on a sphere. We are once again in the situation we have examined repeatedly, which appears in \autoref{fig:bringing_many_punctures_together}, with the puncture of symmetry $G$ being $L$.
As was mentioned above, a torus with an L-shaped puncture contains a decoupled $(2,0)$ theory on $AdS_5\times S^1$ \cite{Aharony:2015zea}. We first try to see in what theories $Q(N,G)$ we have a decoupled field theory on $AdS_5$. We will give a partial answer for this, not covering all the possible situations. For the cases where this does happen, we assume that different choices of the gauged subgroup $G_T \subset G$ and of $P(G)$ provide different boundary conditions for the decoupled theory. Under this assumption we can enumerate what are those possible boundary conditions.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{"4d_part_correspondence"}
\caption{General form of 4d theories which might give decoupled theories on $AdS_5$.}
\label{fig:4d_part_correspondence}
\end{figure}
\subsubsection*{Large $N$}
The first immediate generalization of an L-shaped puncture is a puncture with a tip (all rows are of width $1$ starting from some row number).
When $N$ is large, we will say that a puncture has a long tip if $p$ is independent of $N$.
All punctures of this sort have a finite large $N$ limit for their global symmetry. That is, these are precisely the punctures for which the global symmetry \eqref{eq:regular_puncture_symmetry} does not change as we append a box (or any number of boxes) starting from some point, and is independent of $N$. As will be explained in a moment, if all the punctures $P^i$ on $P(G)$ before the decoupling of the tube have a long tip in the large $N$ limit, then after the decoupling the $P(G)$ side will represent a theory which is independent of $N$.
By the requirement that after decoupling $P(G)$ is independent of $N$, we are led to consider additional punctures, namely punctures of ``finite $P_1$'', meaning that the first row of the Young diagram has a number of boxes independent of $N$. We will now explain that, except for exceptions of a certain type, the theory $P(G)$ after its decoupling will be independent of $N$ when all punctures except one have a long tip and the remaining puncture is of ``finite $P_1$''. Therefore the punctures $P^i$ we need to consider are of ``finite $P_1$'' (which includes punctures with a long tip in particular).
The explanation of the statement above is the following. As mentioned before, the $P(G)$ theory corresponds to the RHS in the first or the second general cases which were described at the end of \autoref{subsection:decoupling_sphere}. We will use the (by now) usual convention of $p^1 \ge p^i$ for $i>1$ and $\alpha =\sum _{j \ge 2} p^j$. By the description of the second case, in which we have only $m=2$, we basically need $\alpha $ to be independent of $N$ for $P(G)$ to be independent of $N$, and so $P^2$ has a long tip and $P^1$ automatically has ``finite $P^1_1$'' (it equals $2$). There is an exception to this, in which $p^2=\alpha $ is large and $P^1_{\alpha +1} =2$. In this case the decoupled theory is an empty theory.
Next, consider the first general case. Assume first that there is a $\Delta _k>0$ (the $\Delta _k$ here corresponds to the $P^i$ punctures). To remain with a finite theory we must have a finite $N'$ (see the discussion regarding $N'$ in \autoref{subsection:decoupling_sphere}) which means that $P^1_1$ and $\alpha $ are necessarily independent of $N$ and so $P^i$ for $i>1$ have a long tip. The number of additional free hypers in the resulting decoupled theory will also be finite. Finally consider what happens if there is no $\Delta _k>0$. As mentioned in \autoref{subsection:decoupling_sphere}, the decoupling theory is a theory of free hypers.
For the number of hypers (given by \eqref{eq:free_hypers_SU_RHS_when_delta_k_le_0}) to be independent of $N$ we need in principle that again $P^1_1$ and $\alpha $ are finite, and so again $P^1$ is of ``finite $P^1_1$'' and the $P^j$ for $j>1$ have long tips. The exception to this is when $m=2$, $P^2$ is a simple puncture, and $P^1_1=P^1_2$ which is large. In this case the decoupling theory is empty as in the previous exception. To summarize, we have found that $P(G)$ is independent of $N$ after decoupling exactly when $P^1$ is of ``finite $P^1_1$'' and the other $P^i$ have long tips, with the exceptions mentioned above in which the decoupled theory is empty.
Note that we cannot have $Q(N,G)$ a sphere with only a single puncture of symmetry $G$ in \autoref{fig:4d_part_correspondence}, because the number of Coulomb branch parameters of dimension $k$ in the sphere composed of $Q(N,G)$, $P(G)$ and the tube is $d_k=\Delta _k-k+1$, and for $P^i$ giving $P(G)$ finite, $\Delta _k$ and $d_k$ become negative. On a torus, and higher genus surfaces, the Coulomb branch graded dimension \eqref{eq:Coulomb_branch_graded_dimension} is always non-negative, $d_k \ge 0$, and we do not have this restriction.
\subsection{Decoupled field theories in the large $N$ limit} \label{subsection:decoupled_AdS5}
Consider a genus $g>1$ surface $C$ having punctures with a long tip. We would like to see if analogously to the case of a torus with an L-shaped puncture, there is in the large $N$ limit a field theory which is decoupled from gravity. Gaiotto and Maldacena constructed the gravity duals of such theories \cite{Gaiotto:2009gz}. These are given by M-theory on a background of the form $AdS_5 \times \mathcal{M} _6$ with $\mathcal{M} _6$ being compact. The M-theory spacetime is smooth for $g>1$, and in the large-$N$ limit adding to it punctures of the sort we consider is a small local deformation of the background.
The background is found by solving a Toda equation with boundary conditions specified by the different punctures.
Around each puncture, the Toda equation is analogous to three dimensional electrostatics with a cylindrical axial symmetry. Every puncture is associated with a line charge density profile denoted by $\lambda (\eta )$ which is piecewise linear. The slopes are integers and change at integer values of $\eta $. Specifically, for a given puncture $P$, the slopes are $P_1,P_2,\dots $ and they change at $\eta =1,2,\dots $. Whenever $P_i-P_{i+1} \ge 2$, the corresponding slope change gives rise, at the appropriate position in spacetime, to an $A_{k-1} $ singularity with $k=P_i-P_{i+1} $. Each $A_{k-1} $ singularity gives a non-abelian gauge theory on $AdS_5$ which is associated with an appropriate global symmetry in the four dimensional $\mathcal{N} =2$ theory. The radius of the $AdS_5$ in Planck units is $R_{AdS_5} /l_P \sim \lambda (\eta _k)^{1/3} $ (see equation (3.22) in \cite{Gaiotto:2009gz}), where $\lambda (\eta _k)$ is the value of $\lambda $ where the slope changes, and the five dimensional gauge coupling is $R_{AdS_5} /g_5^2\sim \lambda (\eta _k)$. To get a non-trivial theory decoupled from gravity we need a finite non-zero value of $R_{AdS_5} /g_5^2$ while $l_P \to 0$ (compared to $R_{AdS_5} $ and $g_5^2$).
For the punctures we consider, $\lambda (\eta _k)$ at the slope changes is independent of $N$, so $R_{AdS_5} /l_P$ remains finite and the gravitational scale cannot be separated from the gauge theory scale. Therefore we see that we do not have a non-trivial field theory decoupled from gravity.
We do not provide indications for or against the existence of decoupled field theories on $AdS_5$ in the large $N$ limit in the rest of the cases. For $g>1$ surfaces having only punctures of the type we considered there are no such decoupled theories, but if there are other punctures, different arguments should be used\footnote{For any theory having an M-theory dual which is given by a weakly curved spacetime with $A_{k-1}$ singularities, there are no such decoupled field theories on $AdS_5$, as was argued.}. For a $g=1$ surface with an L-shaped puncture there is a decoupled theory, but for other punctures, as well as for the sphere, the situation should be investigated further.
Let us assume that in the cases where we do have a decoupled field theory, the various choices for $P(G)$ in \autoref{fig:4d_part_correspondence} with the appropriate tube amount to different boundary conditions for the decoupled theory on $AdS_5$.
As was mentioned before, $P(G)$ is defined on the sphere and we are in the situation of \autoref{fig:bringing_many_punctures_together} with $L$ having symmetry $G$. We can use all the conclusions developed so far.
Under this assumption, \autoref{subsection:gauging_puncture} essentially is a classification of all the possible boundary conditions of this sort. In particular, this can be applied to the torus with a single L-shaped puncture where we know that we have a decoupled field theory.
If we do not restrict to boundary conditions which can be implemented in class-$\mathcal{S} $, the possible options for gauging some subgroup of $G$ depend only on the central charge of this subgroup,
which is related to the central charge of $G$ as reviewed above. The options for possible boundary conditions are thus determined by the central charge of $G$, which is related to the gauge coupling of $G$ in the bulk in units of the $AdS$ radius.
\subsection{Non-singular weakly curved gravitational duals}
Whether or not they contain a decoupled field theory on $AdS_5$, many of the 4d theories do not have a dual string or M-theory description on a non-singular background which is weakly curved.
The symmetry of a four dimensional theory having such a dual should be a product of $U(1)$'s. In such a case, all the punctures in $P(G)$ in \autoref{fig:4d_part_correspondence} must have $P_i - P_{i+1}=0,1$. Using the diagrammatic method, the resulting puncture with symmetry $G$ necessarily has $L_i-L_{i+1}=0,1$ for $i>1$, but not necessarily for $i=1$, see \autoref{fig:PRP_from_U1s_only}. Conversely, not every puncture of the form shown in \autoref{fig:PRP_from_U1s_only} can be gauged such that at the other side of the tube there is a sphere with punctures having symmetries $U(1)^{n_i} $ only. This is easily seen by considering \autoref{fig:diagrammatic_decoupling}, since it is not always possible to partition $L_1$ into $P^1_1, P^1_2, \dots, P^1_{\alpha +1} \ge L_2$ with $P_i-P_{i+1} \le 1$. Only surfaces with punctures for which this is possible might have boundary conditions such that the resulting theory has a gravitational dual which is weakly curved and non-singular. These boundary conditions are a generalization of \autoref{fig:4d_part_correspondence} in which each such puncture is connected through a tube to a sphere with punctures having a symmetry $U(1)^{n_i} $.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{"L_from_U1s_only"}
\caption{A PRP obtained by decoupling punctures with symmetries $U(1)^{n_i} $. It has $L_i-L_{i+1} \le 1$ for $i>1$.}
\label{fig:PRP_from_U1s_only}
\end{figure}
Concentrating on a single puncture as in \autoref{fig:4d_part_correspondence}, note that the case from the beginning of this section in which we have only simple punctures on $P(G)$ and $L$ is an L-shaped puncture is special in the following sense.
Assuming that $L$ is not a full puncture, it is the only case in which the rank of the gauge symmetry $G$ in the bulk is equal to the rank of the introduced abelian global symmetry group arising after gauging $G$ in a particular choice of boundary conditions.
This means, in the context of \autoref{fig:4d_part_correspondence}, that when $L$ is not a full puncture and is replaced by the punctures $P^i$ of symmetry $U(1)^{n_i} $ each, the case of simple punctures giving an L-shaped puncture is the only one in which $\rk G(L)= \sum _i \rk G(P^i)$. This is a simple consequence of \autoref{subsection:bound_sym_PRP}. Since $L$ is not a full puncture, $\rk G(L)\le N-2$ and we can choose the first argument of the minimum in both \eqref{eq:bound_rk_l_1} and \eqref{eq:bound_rk_l_2}. It then follows that all $P^i_2\le 1$, and to have symmetries of $U(1)^{n_i} $ all the $P^i$ must be simple punctures (and from \eqref{eq:general_PRP_equation}, the resulting $l_k$ is $(1,2, \dots ,m,m, \dots)$, an L-shaped puncture).
\section*{Acknowledgments}
We are thankful to Ofer Aharony for suggesting this project and for useful discussions.
This work was supported in part by the I-CORE program of the Planning and Budgeting
Committee and the Israel Science Foundation (grant number 1937/12), by an Israel Science
Foundation center for excellence grant, by the Minerva foundation with funding from the
Federal German Ministry for Education and Research, by a Henri Gutwirth award from
the Henri Gutwirth Fund for the Promotion of Research, and by the ISF within the ISF-UGC
joint research program framework (grant no. 1200/14).
\section{Introduction}
Plasma diagnostics in both laboratory and astrophysical settings relies
principally on spectroscopic and polarimetric methods of observation, where
the radiation emitted from the plasma is analyzed as a function of
wavelength and state of polarization. These allow one to determine important
properties of the observed plasma, such as bulk and turbulent velocity
fields, gas composition and density, existence of anisotropic processes
such as the presence of directed electric and/or magnetic fields, and
anisotropic sources of irradiation and of colliding particles.
Particularly in the case of low density gases, collisional processes can
be of such modest magnitude that the excitation state of the plasma is
typically far away from local thermodynamical equilibrium (LTE). In that
case, the excitation and de-excitation processes that are responsible
for the observed radiation may be statistically correlated, so that the
system is able to preserve a ``memory'' of the excitation conditions.
A typical example is the partially coherent scattering of
radiation, where the frequency of the outgoing photon is correlated
(via an intrinsically second-order atom--photon process) to the spectral
distribution of the incoming radiation, leading to the phenomenon
of partial redistribution of frequency.
These excitation conditions are rather common in low-density astrophysical
plasmas, such as the higher layers of the solar atmosphere, planetary
nebulae, and interstellar \ion{H}{1} regions, where outstanding spectral
lines of the observed spectrum (e.g., the first few lines of the
Lyman and Balmer series of hydrogen, and some lines of neutral sodium
and of singly ionized calcium) are known to be formed in
conditions of strong departure from LTE, which may often also include
the effects of partially coherent scattering of radiation.
In this paper, we derive the redistribution function in the
laboratory frame for the polarized three-term atom of the $\Lambda$-type,
i.e., for the transition system $(l,l')\to (u,u')\to f$ such that
$l,l',f\prec u,u'$ by energy ordering (see
Figure~\ref{fig:Lambda-model}). This model can be used to investigate,
for example, the formation of the \ion{Ca}{2} system of transitions
comprising the K and H lines and the infrared (IR) triplet of the solar spectrum
\citep{CM16}.
The expression we arrive at corresponds to the extension of the
well-known $R_{\rm II}$ redistribution function \cite[e.g.,][]{Mi78}
to the case of a three-term atom that can harbor atomic polarization in
all of its levels, including those of the lower (metastable) states.
In applications where plasma collisions are important in determining
the statistical equilibrium of the atomic system, this expression is
still useful as it provides the correct contribution to the total
emissivity of the plasma from radiation scattering. As long as the
collisional lifetime of the metastable states is much longer than
the \emph{total} lifetime of the upper state, the approximation of
sharp lower levels remains applicable in practice, and our expression
of $R_{\rm II}$ can then be used, with proper weights, alongside
$R_{\rm III}$, to completely describe the radiative and collisional
redistribution of radiation in the modeled atmosphere
\cite[see, e.g.,][]{BT12}.
\begin{figure}[t!]
\centering
\includegraphics[width=.45\hsize]{three_term.eps}
\caption{Schematic diagram for the fluorescent scattering in a
three-term model atom of the $\Lambda$-type, for an incoming
photon of frequency $\omega_k$ and an outgoing photon of frequency
$\omega_{k'}$. The model atom considered in this paper is restricted
to the case where all $(l,l')$ and $f$ levels are sharp (i.e.,
metastable). However, all the levels involved can be non-degenerate.
\label{fig:Lambda-model}}
\end{figure}
\section{The redistribution function for the $\Lambda$-type three-term atom}
\label{sec:derivation}
\renewcommand*{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
We rely on previously published work \cite[][hereafter, Paper~I]{Ca14}
on the redistribution function for the polarized two-term atom
in order to develop the framework for the generalization of
$R_{\rm II}$ to the case of a three-term atom of the
$\Lambda$-type, undergoing the two-photon transition
$(l,l')\to (u,u')\to f$ (see Figure~\ref{fig:Lambda-model}). The
bases for this generalization were laid out in a recent paper
\citep{CM16}, where we showed how the radiative transfer equation
describing the polarized line formation in a two-term atom in the
presence of partially coherent scattering (Equations (19) and (20)
of Paper~I) naturally extends to the case of the multi-term
atom of the $\Lambda$-type (Equations (1) and (2) of
\citealt{CM16}).
In practice, the main complication in deriving
the redistribution function for the three-term atom comes from
allowing $\epsilon_f\ne\epsilon_{l,l'}$ for the final state $f$
(here $\epsilon_a$ is the total width of the level $a$
due to all possible relaxation processes), and from the need to
assume generally different thermal widths for the two
transitions $l\to u$ and $u\to f$.
Accordingly, we introduce a characteristic Doppler width $\Delta_{mn}$
for the atomic transition between two generic terms $m$ and $n$ with
energy $E_m$ and $E_n$, respectively.
We indicate with $\omega_{mn}=(E_m-E_n)/\hbar$ the Bohr frequency
of such transition.
For each scattering event, the angle $\Theta$ between the propagation
directions of the incoming and outgoing photons is an essential
parameter of the laboratory frame redistribution function.
We introduce the associated quantities
\begin{equation}
C=\cos\Theta\;,\qquad S=\sin\Theta\;,
\end{equation}
and follow \cite{Hu82} for the choice of the Cartesian reference frame
for the projection of the thermal velocity of the plasma. This coincides
with the frame adopted in Paper~I \cite[see also][]{Mi78} when the
thermal widths of the two transitions are identical (i.e.,
when the terms $l$ and $f$ have the same energy).
Accordingly, we define for the $\Lambda$-type transition $l\to u\to f$,
\begin{equation} \label{eq:avg_Doppler}
\Delta=(\Delta_{ul}^2+\Delta_{uf}^2 - 2 C \Delta_{ul}\Delta_{uf})^{1/2}\;,
\end{equation}
\begin{equation} \label{eq:xi}
\xi_l=\Delta_{ul}/\Delta\;,\qquad
\xi_f=\Delta_{uf}/\Delta\;.
\end{equation}
We note that Equations (\ref{eq:avg_Doppler}) and (\ref{eq:xi}) imply
\begin{equation} \label{eq:cool}
\xi_l^2+\xi_f^2 - 2 C \xi_l\xi_f=1\;.
\end{equation}
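As a minimal numerical sanity check (the Doppler widths and scattering angle below are arbitrary illustrative values), the normalization property (\ref{eq:cool}) follows directly from the definitions (\ref{eq:avg_Doppler}) and (\ref{eq:xi}):

```python
import numpy as np

# Arbitrary illustrative Doppler widths and scattering angle (assumptions)
Delta_ul, Delta_uf = 1.0, 0.7
Theta = 1.0                              # scattering angle [rad]
C, S = np.cos(Theta), np.sin(Theta)

# Equation (eq:avg_Doppler): "reduced" Doppler width of the Lambda transition
Delta = np.sqrt(Delta_ul**2 + Delta_uf**2 - 2.0*C*Delta_ul*Delta_uf)

# Equation (eq:xi): dimensionless width ratios
xi_l, xi_f = Delta_ul/Delta, Delta_uf/Delta

# Equation (eq:cool): xi_l^2 + xi_f^2 - 2 C xi_l xi_f = 1 identically
identity = xi_l**2 + xi_f**2 - 2.0*C*xi_l*xi_f
assert abs(identity - 1.0) < 1e-12
```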
We define next the complex profile function \cite[e.g.,][]{LL04}
\begin{eqnarray} \label{eq:W}
W(v,a)&\equiv&
\frac{1}{\pi}\int_{-\infty}^{+\infty} dp\;
\frac{{\rm e}^{-p^2}}{a+{\rm i}(p-v)}
= {\rm e}^{(a-{\rm i}v)^2}\,\mbox{erfc}(a-{\rm i}v) \nonumber \\
&\equiv& H(v, a)+{\rm i}\,L(v, a)\;,
\end{eqnarray}
where $\mbox{erfc}(z)$ is the complementary error function
\cite[e.g.,][]{AS64},
and $H(v,a)$ and $L(v,a)$ are, respectively, the Voigt and Faraday--Voigt
functions. We further define the dimensionless frequency variables
\begin{equation} \label{eq:variables.1}
v_{mn}=(\hat\omega_k-\omega_{mn})/\Delta\;,\qquad
w_{mn}=(\hat\omega_{k'}-\omega_{mn})/\Delta\;.
\end{equation}
The incoming and outgoing radiation frequencies, $\hat\omega_k$ and
$\hat\omega_{k'}$, are expressed in the laboratory frame of reference,
and they differ from the corresponding frequencies in the atomic frame of
rest because of the thermal motion of the atoms (see Appendix of Paper~I).
We also introduce normalized damping parameters associated with the inverse
lifetimes of the transition levels, using the same ``reduced'' Doppler width
of Equation (\ref{eq:avg_Doppler})
\begin{equation}
\label{eq:variables.2}
a_m=\epsilon_m/\Delta\;.
\end{equation}
In the presence of pressure broadening, the numerator of
Equation (\ref{eq:variables.2}) should be augmented by the corresponding
collisional inverse lifetime.
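For numerical work, the profile function of Equation (\ref{eq:W}) coincides, for $a>0$, with the Faddeeva function evaluated at $z=v+{\rm i}a$, and is thus available through standard library routines. The following sketch (with arbitrary test values) checks this against a direct quadrature of the defining integral:

```python
import numpy as np
from scipy.special import wofz
from scipy.integrate import quad

def W(v, a):
    """Complex profile function of Eq. (eq:W): W(v,a) = H(v,a) + i L(v,a).
    For a > 0 it equals the Faddeeva function w(z) at z = v + i a."""
    return wofz(v + 1j*a)

# Cross-check against (1/pi) \int e^{-p^2} / (a + i(p - v)) dp
v, a = 0.8, 0.3                          # arbitrary test values
integrand = lambda p: np.exp(-p**2) / (a + 1j*(p - v))
re = quad(lambda p: integrand(p).real, -np.inf, np.inf)[0]
im = quad(lambda p: integrand(p).imag, -np.inf, np.inf)[0]
W_quad = (re + 1j*im) / np.pi

assert abs(W(v, a) - W_quad) < 1e-7
H, L = W(v, a).real, W(v, a).imag        # Voigt and Faraday-Voigt profiles
```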
We note in passing that the proposed normalizations (\ref{eq:variables.1})
and (\ref{eq:variables.2}) differ from those adopted in Paper~I, where
instead the frequency and inverse lifetimes pertaining to a
given transition were normalized to the Doppler width for that
transition. The choice adopted
here is dictated exclusively by convenience, as it significantly simplifies
the form of the resulting expressions.
Following
the approach of Paper~I, Appendix~A, we find that we can perform the
integration over two of the three components of the thermal velocity in
the adopted reference frame. Accordingly, after some algebra,
the redistribution function of the polarized three-term atom of the
$\Lambda$-type can be written in the following integral form
\begin{eqnarray} \label{eq:rlab}
R(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{l'},\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};\Theta) &=&
\frac{1}{\Delta^2\,S \xi_l \xi_f}
\int_{-\infty}^{+\infty} dq\; {\rm e}^{-q^2} \nonumber \\
&&\kern -2.2in\times\Biggl\{
\frac{W\bigl( \frac{v_{ul'}+q(C\xi_l\xi_f-\xi_l^2)}{S\xi_l\xi_f},
\frac{a_u+a_{l'}}{S\xi_l\xi_f} \bigr)}
{a_{l'}+a_f+{\rm i}\bigl(q-v_{ul'}+w_{uf}\bigr)}
+
\frac{\overline{W}\bigl( \frac{v_{u'l}+q(C\xi_l\xi_f-\xi_l^2)}{S\xi_l\xi_f},
\frac{a_{u'}+a_l}{S\xi_l\xi_f}\bigr)}
{a_l+a_f-{\rm i}\bigl(q-v_{u'l}+w_{u'f}\bigr)}
\nonumber \\
&&\kern -2.2in \,\,{}+
\frac{W\bigl( \frac{w_{uf}+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_u+a_f}{S\xi_l\xi_f} \bigr)}
{a_l+a_f-{\rm i}\bigl(q-v_{ul}+w_{uf}\bigr)}
+
\frac{\overline{W}\bigl( \frac{w_{u'f}+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_{u'}+a_f}{S\xi_l\xi_f}\bigr)}
{a_{l'}+a_f+{\rm i}\bigl(q-v_{u'l'}+w_{u'f}\bigr)}
\nonumber \\
&&\kern -2.2in\,\,{} +
\frac{W\bigl( \frac{v_{ul'}+q(C\xi_l\xi_f-\xi_l^2)}{S\xi_l\xi_f},
\frac{a_u+a_{l'}}{S\xi_l\xi_f} \bigr)
- W\bigl( \frac{w_{uf}+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_u+a_f}{S\xi_l\xi_f} \bigr)}
{a_f-a_{l'} - {\rm i}(q-v_{ul'}+w_{uf})} \nonumber \\
&&\kern -2.2in \,\,{}+
\frac{\overline{W}\bigl( \frac{v_{u'l}+q(C\xi_l\xi_f-\xi_l^2)}{S\xi_l\xi_f},
\frac{a_{u'}+a_l}{S\xi_l\xi_f} \bigr)
-\overline{W}\bigl( \frac{w_{u'f}+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_{u'}+a_f}{S\xi_l\xi_f} \bigr)}
{a_f-a_l + {\rm i}(q-v_{u'l}+w_{u'f})}
\Biggr\}\;,
\end{eqnarray}
where we have indicated with $\overline{W}$ the complex conjugate of $W$.
Equation (\ref{eq:rlab}) transforms exactly into Equation (A6) of
Paper~I when $\xi_l=\xi_f$ (and letting $a_u=a_{u'}$ and
$a_l=a_{l'}=a_f$), noting that Equation (\ref{eq:avg_Doppler})
gives $\Delta=2 S_2\Delta\omega_T$ in that case,
using the notation of Paper~I.
In order to facilitate further manipulation of Equation (\ref{eq:rlab}) for
specific applications, it is convenient to introduce barred quantities
${\bar x}=x/(S\xi_l\xi_f)$. Thus Equation (\ref{eq:rlab}) becomes
\begin{eqnarray*}
R(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{l'},\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};\Theta) &=&
\frac{1}{\Delta^2\,S\xi_l\xi_f}
\int_{-\infty}^{+\infty} d\bar q\; {\rm e}^{-S^2\xi_l^2\xi_f^2\bar q^2} \\
&&\kern -2.2in\times\Biggl\{
\frac{W\bigl( \bar v_{ul'}+\bar q(C\xi_l\xi_f-\xi_l^2),
\bar a_u+\bar a_{l'} \bigr)}
{\bar a_{l'}+\bar a_f+{\rm i}\bigl(\bar q
-\bar v_{ul'}+\bar w_{uf}\bigr)}
+
\frac{\overline{W}\bigl( \bar v_{u'l}+\bar q(C\xi_l\xi_f-\xi_l^2),
\bar a_{u'}+\bar a_l \bigr)}
{\bar a_l+\bar a_f-{\rm i}\bigl(\bar q
-\bar v_{u'l}+\bar w_{u'f}\bigr)}
\nonumber \\
&&\kern -2.2in \,\,{}+
\frac{W\bigl( \bar w_{uf}+\bar q(\xi_f^2-C\xi_l\xi_f),
\bar a_u+\bar a_f \bigr)}
{\bar a_l+\bar a_f-{\rm i}\bigl(\bar q
-\bar v_{ul}+\bar w_{uf}\bigr)}
+
\frac{\overline{W}\bigl( \bar w_{u'f}+\bar q(\xi_f^2-C\xi_l\xi_f),
\bar a_{u'}+\bar a_f \bigr)}
{\bar a_{l'}+\bar a_f+{\rm i}\bigl(\bar q
-\bar v_{u'l'}+\bar w_{u'f}\bigr)}
\nonumber \\
&&\kern -2.2in\,\,{} +
\frac{W\bigl( \bar v_{ul'}+\bar q(C\xi_l\xi_f-\xi_l^2),
\bar a_u+\bar a_{l'} \bigr)
- W\bigl( \bar w_{uf}+\bar q(\xi_f^2-C\xi_l\xi_f),
\bar a_u+\bar a_f \bigr)}
{\bar a_f-\bar a_{l'} - {\rm i}(\bar q
-\bar v_{ul'}+\bar w_{uf})} \nonumber \\
&&\kern -2.2in \,\,{}+
\frac{\overline{W}\bigl( \bar v_{u'l}+\bar q(C\xi_l\xi_f-\xi_l^2),
\bar a_{u'}+\bar a_l \bigr)
-\overline{W}\bigl( \bar w_{u'f}+\bar q(\xi_f^2-C\xi_l\xi_f),
\bar a_{u'}+\bar a_f \bigr)}
{\bar a_f-\bar a_l + {\rm i}(\bar q
-\bar v_{u'l}+\bar w_{u'f})}
\Biggr\}\;.
\end{eqnarray*}
We then define
\begin{equation} \label{eq:x.y.def}
x_{ab}=\bar v_{ab}+\bar q(C\xi_l\xi_f-\xi_l^2)\;,\qquad
y_{ab}=\bar w_{ab}+\bar q(\xi_f^2-C\xi_l\xi_f)\;,
\end{equation}
from which (see Equation (\ref{eq:cool}))
\begin{equation} \label{eq:ganzo}
x_{ab}-y_{ac}=\bar v_{ab}-\bar w_{ac}-\bar q\;.
\end{equation}
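Relation (\ref{eq:ganzo}) holds identically by virtue of Equation (\ref{eq:cool}); a quick numeric check with arbitrary sample values:

```python
import numpy as np

# Arbitrary illustrative inputs (assumptions for the check only)
Delta_ul, Delta_uf, Theta = 1.0, 0.7, 1.0
C, S = np.cos(Theta), np.sin(Theta)
Delta = np.sqrt(Delta_ul**2 + Delta_uf**2 - 2*C*Delta_ul*Delta_uf)
xi_l, xi_f = Delta_ul/Delta, Delta_uf/Delta

s = S*xi_l*xi_f                          # barred quantities: xbar = x/s
vbar, wbar, qbar = 0.4/s, -0.1/s, 0.25/s

x = vbar + qbar*(C*xi_l*xi_f - xi_l**2)  # Eq. (eq:x.y.def)
y = wbar + qbar*(xi_f**2 - C*xi_l*xi_f)

# Eq. (eq:ganzo): x - y = vbar - wbar - qbar
assert np.isclose(x - y, vbar - wbar - qbar)
```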
When $C=\xi_l/\xi_f$ or $C=\xi_f/\xi_l$, the
coefficient of $\bar q$ in one of the two definitions (\ref{eq:x.y.def})
vanishes.\footnote{Note that this can happen for \emph{both}
definitions at the same time only if $\xi_f=\xi_l$, but in that case
the problem becomes identical to the one for the two-term atom \citep{Ca14}.}
However, the relation (\ref{eq:ganzo}), on which we rely for
the following development, remains valid.
Using these two definitions, we can finally rewrite
\begin{eqnarray} \label{eq:rlab.bar}
R(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{l'},\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};\Theta) &=&
\frac{1}{\Delta^2\,S\xi_l\xi_f}
\int_{-\infty}^{+\infty} d\bar q\; {\rm e}^{-S^2\xi_l^2\xi_f^2\bar q^2}
\nonumber \\
&&\kern -2.2in\times\Biggl\{
\frac{W\bigl( x_{ul'},\bar a_u+\bar a_{l'} \bigr)}
{\bar a_{l'} + \bar a_f - {\rm i}(x_{ul'}-y_{uf})}
+
\frac{\overline{W}\bigl( x_{u'l},\bar a_{u'}+\bar a_l \bigr)}
{\bar a_l + \bar a_f + {\rm i}(x_{u'l}- y_{u'f})}
+\frac{W\bigl( y_{uf},\bar a_u+\bar a_f \bigr)}
{\bar a_l + \bar a_f + {\rm i}(x_{ul}- y_{uf})}
+
\frac{\overline{W}\bigl( y_{u'f},\bar a_{u'}+\bar a_f \bigr)}
{\bar a_{l'} + \bar a_f - {\rm i}(x_{u'l'}- y_{u'f})}
\nonumber \\
&&\kern -2.2in \,\,{}+
\frac{W\bigl( x_{ul'},\bar a_u+\bar a_{l'} \bigr)
- W\bigl( y_{uf},\bar a_u+\bar a_f \bigr)}
{\bar a_f - \bar a_{l'} + {\rm i}(x_{ul'}-y_{uf})}
+
\frac{\overline{W}\bigl( x_{u'l},\bar a_{u'}+\bar a_l \bigr)
-\overline{W}\bigl( y_{u'f},\bar a_{u'}+\bar a_f \bigr)}
{\bar a_f - \bar a_l - {\rm i}(x_{u'l}-y_{u'f})}
\Biggr\}\;.
\end{eqnarray}
This is the starting point for the derivation of the redistribution
function for the model atom considered in this paper.
\section{The case of the $\Lambda$-type three-term atom with metastable
lower states}
In order to treat the case of a $\Lambda$-type atom undergoing the
transition $(l,l')\to (u,u')\to f$, where the initial and final terms are
metastable, we must consider the limit $\bar a_{l,l',f}\to 0$ of
Equation (\ref{eq:rlab.bar}), using the identity
\begin{equation} \label{eq:zeta0}
\lim_{\epsilon\to0^+}\frac{1}{\epsilon\pm {\rm i} z} =
\pi\delta(z)\mp{\rm i}\,{\rm Pv}\frac{1}{z}\;,
\end{equation}
where $\delta(x)$ and $\mbox{Pv}$ are, respectively, the Dirac delta
and the Cauchy principal value distributions.
In general, we will assume that the
initial lower term is polarized and carries atomic coherence
(i.e., $\rho_{ll'}\ne0$). After some algebraic manipulation
(see Appendix~\ref{app:A}), and using
\begin{equation} \label{eq:useful}
v_{ul}-w_{uf}
=\frac{\hat\omega_k-\hat\omega_{k'}+\omega_{lf}}{\Delta}
=v_{u'l}-w_{u'f}\;,
\end{equation}
we obtain
\begin{eqnarray} \label{eq:R.last}
R(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{l'},\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};\Theta)_{\rm s.l.l.}
&=& \frac{\pi}{\Delta^2\,S\xi_l\xi_f} \nonumber \\
&&\kern -2.5in \times \Biggl\{
\exp\biggl[-\frac{(\hat\omega_k-\hat\omega_{k'}+\omega_{lf})^2}
{\Delta^2}\biggr]\,\biggl[
W\biggl(\frac{\kappa^+ v_{ul}+\kappa^- w_{uf}}{S\xi_l\xi_f},
\frac{a_u}{S\xi_l\xi_f} \biggr) +
\overline{W}\biggl(\frac{\kappa^+ v_{u'l}+\kappa^- w_{u'f}}{S\xi_l\xi_f},
\frac{a_{u'}}{S\xi_l\xi_f} \biggr) \biggr] \nonumber \\
&&\kern -2.4in \mathop{+}
\exp\biggl[-\frac{(\hat\omega_k-\hat\omega_{k'}+\omega_{l'f})^2}
{\Delta^2}\biggr]\,\biggl[
W\biggl(\frac{\kappa^+ v_{ul'}+\kappa^- w_{uf}}{S\xi_l\xi_f},
\frac{a_u}{S\xi_l\xi_f} \biggr) +
\overline{W}\biggl(\frac{\kappa^+ v_{u'l'}+\kappa^- w_{u'f}}{S\xi_l\xi_f},
\frac{a_{u'}}{S\xi_l\xi_f} \biggr) \biggr]
\Biggr\} \nonumber \\
&&\kern -2.5in \,\mathop{+}
\frac{\rm i}{\Delta^2\,S\xi_l\xi_f}
\;{-}\kern-1.07em\intop\nolimits_{-\infty}^{+\infty} dq\; {\rm e}^{-q^2}
\biggl(
\frac{1}{v_{ul'}-w_{uf}-q}-
\frac{1}{v_{ul}-w_{uf}-q} \biggr) \nonumber \\
\nonumber \\
&&\kern -1.5in \times \biggl[
W\biggl( \frac{w_{uf}
+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_u}{S\xi_l\xi_f} \biggr) +
\overline{W}\biggl( \frac{w_{u'f}
+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_{u'}}{S\xi_l\xi_f} \biggr)
\biggr]\;,
\end{eqnarray}
where we introduced the quantities defined in Equation (\ref{eq:chi}), and
where the symbol $\mbox{${-}\kern-.9em\intop$}$ indicates that the integral must be evaluated
as the Cauchy principal value.
We note that it is possible to rewrite
\begin{eqnarray} \label{eq:alter}
&&\biggl(
\frac{1}{v_{ul'}-w_{uf}-q}-
\frac{1}{v_{ul}-w_{uf}-q} \biggr)
\biggl[
W\biggl( \frac{w_{uf}+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_u}{S\xi_l\xi_f} \biggr) +
\overline{W}\biggl( \frac{w_{u'f}+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_{u'}}{S\xi_l\xi_f} \biggr)
\biggr] \nonumber \\
&&\kern 2in \equiv
\frac{\omega_{ll'}}{\Delta}\,
\frac{
W\Bigl( \frac{w_{uf}+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_u}{S\xi_l\xi_f} \Bigr) +
\overline{W}\Bigl( \frac{w_{u'f}+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a_{u'}}{S\xi_l\xi_f} \Bigr) }%
{(v_{ul'}-w_{uf}-q) (v_{ul}-w_{uf}-q)}\;,
\end{eqnarray}
where we observed that
\begin{displaymath}
v_{ul}-v_{ul'}
=\frac{\omega_{ll'}}{\Delta}
=v_{u'l}-v_{u'l'}\;.
\end{displaymath}
Thus, the integral term in Equation (\ref{eq:R.last})
vanishes when $\omega_{ll'}=0$, i.e., for completely degenerate
lower levels, or in the case of a non-coherent lower term (see
below).
In Appendix~\ref{app:C}, we show
that this integral contribution is simply a
frequency redistribution term that carries no net energy.
Despite the relative simplicity of Equation (\ref{eq:alter}), the original
form of the integrand as given in Equation (\ref{eq:R.last}) is more
convenient for numerical computation, since it can
be shown that (see Appendix~\ref{app:B})
\begin{eqnarray}
&&\;{-}\kern-1.07em\intop\nolimits_{-\infty}^{+\infty} dq\;
\frac{{\rm e}^{-q^2}}{v-w-q}\,
W\biggl(\frac{w+q(\xi_f^2-C\xi_l\xi_f)}{S\xi_l\xi_f},
\frac{a}{S\xi_l\xi_f}\biggr) \nonumber \\
&=&
\frac{2}{\sqrt{\pi}}
\int_{-\infty}^{+\infty} dp\;
\frac{{\rm e}^{-p^2/\xi_f^2}}{a+{\rm i}(p-w)}\,
F\biggl(\frac{v-w+p(1-C\xi_l/\xi_f)}{S\xi_l}\biggr)\;,
\end{eqnarray}
where $F(x)$ is Dawson's integral function, and so the need to evaluate integrals
in the principal-value sense is removed. With this transformation,
Equation (\ref{eq:R.last}) becomes
\begin{eqnarray} \label{eq:R.final}
R(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{l'},\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};\Theta)_{\rm s.l.l.}
&=& \frac{\pi}{\Delta^2\,S\xi_l\xi_f} \nonumber \\
&&\kern -2.5in \times \Biggl\{
\exp\biggl[-\frac{(\hat\omega_k-\hat\omega_{k'}+\omega_{lf})^2}
{\Delta^2}\biggr]\left[
W\biggl(\frac{\kappa^+ v_{ul}+\kappa^- w_{uf}}{S\xi_l\xi_f},
\frac{a_u}{S\xi_l\xi_f} \biggr) +
\overline{W}\biggl(\frac{\kappa^+ v_{u'l}+\kappa^- w_{u'f}}{S\xi_l\xi_f},
\frac{a_{u'}}{S\xi_l\xi_f} \biggr) \right] \nonumber \\
&&\kern -2.4in \mathop{+}
\exp\biggl[-\frac{(\hat\omega_k-\hat\omega_{k'}+\omega_{l'f})^2}
{\Delta^2}\biggr]\left[
W\biggl(\frac{\kappa^+ v_{ul'}+\kappa^- w_{uf}}{S\xi_l\xi_f},
\frac{a_u}{S\xi_l\xi_f} \biggr) +
\overline{W}\biggl(\frac{\kappa^+ v_{u'l'}+\kappa^- w_{u'f}}{S\xi_l\xi_f},
\frac{a_{u'}}{S\xi_l\xi_f} \biggr) \right]
\Biggr\} \nonumber \\
&&\kern -2.5in \,\mathop{+}
\frac{2}{\sqrt\pi}\,\frac{{\rm i}}{\Delta^2 S\xi_l\xi_f}
\int_{-\infty}^{+\infty} dp\;
{\rm e}^{-p^2/\xi_f^2} \biggl[
\frac{1}{a_u+{\rm i}(p-w_{uf})}+\frac{1}{a_{u'}-{\rm i}(p-w_{u'f})}
\biggr] \nonumber \\
&&\kern -1.2in\times
\biggl[
F\biggl(\frac{v_{ul'}-w_{uf}+p(1-C\xi_l/\xi_f)}{S\xi_l}\biggr) -
F\biggl(\frac{v_{ul}-w_{uf}+p(1-C\xi_l/\xi_f)}{S\xi_l}\biggr)
\biggr]\;,
\end{eqnarray}
where we used Equations (\ref{eq:variables.1}) and (\ref{eq:useful})
in order to combine similar terms.
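The quadrature identity of Appendix~\ref{app:B} used in this transformation can also be verified numerically. The sketch below (illustrative parameters only) evaluates the left-hand side with a Cauchy-weighted quadrature for the principal value and compares it with the Dawson-function form:

```python
import numpy as np
from scipy.special import wofz, dawsn
from scipy.integrate import quad

# Arbitrary illustrative parameters (assumptions for the check only)
Delta_ul, Delta_uf, Theta = 1.0, 0.7, 1.0
C, S = np.cos(Theta), np.sin(Theta)
Delta = np.sqrt(Delta_ul**2 + Delta_uf**2 - 2*C*Delta_ul*Delta_uf)
xi_l, xi_f = Delta_ul/Delta, Delta_uf/Delta
s = S*xi_l*xi_f
v, w, a = 0.3, -0.2, 0.05                # sample v, w, a values
c = v - w                                # principal-value pole position

W = lambda vv, aa: wofz(vv + 1j*aa)

# PV \int e^{-q^2} W(...)/(v - w - q) dq, using 1/(v-w-q) = -1/(q-c)
f = lambda q: np.exp(-q**2) * W((w + q*(xi_f**2 - C*xi_l*xi_f))/s, a/s)
lhs = -(quad(lambda q: f(q).real, -40, 40, weight='cauchy', wvar=c)[0]
        + 1j*quad(lambda q: f(q).imag, -40, 40, weight='cauchy', wvar=c)[0])

# Dawson-function form: no principal value needed
g = lambda p: (np.exp(-p**2/xi_f**2) / (a + 1j*(p - w))
               * dawsn((v - w + p*(1 - C*xi_l/xi_f))/(S*xi_l)))
rhs = (2/np.sqrt(np.pi)) * (
    quad(lambda p: g(p).real, -30, 30, points=[w], limit=200)[0]
    + 1j*quad(lambda p: g(p).imag, -30, 30, points=[w], limit=200)[0])

assert abs(lhs - rhs) < 1e-6
```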
\begin{figure}[t!]
\centering
\includegraphics[height=3.7truein]{llpint_plot.eps}
\caption{\label{fig:int_term}
The real (top) and imaginary (bottom) parts of the integral of the
redistribution function of Equation (\ref{eq:R.final}) over
the normalized output frequency $v_{k'}\equiv\frac{1}{2}(w_{uf}+w_{u'f})$,
plotted against the normalized input frequency
$v_k\equiv\frac{1}{2}(v_{ul}+v_{u'l'})$. The different
curves correspond to different values of the ratio
$v_{ll'}\equiv\omega_{ll'}/\Delta$. For this example, we assumed
a scattering angle $\Theta=$1\,rad, and a fully
degenerate upper level (i.e., $\omega_{uu'}=0$).}
\end{figure}
Owing to the fact that $|F'(x)|\le 1$ over the real domain
\cite[e.g.,][]{AS64}, from the mean-value theorem it follows that
\begin{displaymath}
\left|F\biggl(\frac{v_{ul'}-w_{uf}+p(1-C\xi_l/\xi_f)}{S\xi_l}\biggr)
-
F\biggl(\frac{v_{ul}-w_{uf}+p(1-C\xi_l/\xi_f)}{S\xi_l}\biggr)\right|
\le
\frac{|\omega_{ll'}|}{\Delta S\xi_l}\;.
\end{displaymath}
This allows us to estimate a bound to the integral contribution of
Equation (\ref{eq:R.final}). In fact,
\begin{eqnarray}
&&\frac{1}{\pi}\int_{-\infty}^{+\infty} dp\;
{\rm e}^{-p^2/\xi_f^2} \biggl|
\frac{1}{a_u+{\rm i}(p-w_{uf})}+\frac{1}{a_{u'}-{\rm i}(p-w_{u'f})}
\biggr| \nonumber \\
&\le& H\!\left(\frac{w_{uf}}{\xi_f},\frac{a_u}{\xi_f}\right) +
H\!\left(\frac{w_{u'f}}{\xi_f},\frac{a_{u'}}{\xi_f}\right) +
\frac{1}{\pi}\int_{-\infty}^{+\infty} dp\;
{\rm e}^{-p^2/\xi_f^2} \biggl[
\frac{|p-w_{uf}|}{a_u^2+(p-w_{uf})^2}
+\frac{|p-w_{u'f}|}{a_{u'}^2+(p-w_{u'f})^2}
\biggr] \nonumber \\
\noalign{\allowbreak}
&\le& H\!\left(\frac{w_{uf}}{\xi_f},\frac{a_u}{\xi_f}\right) +
H\!\left(\frac{w_{u'f}}{\xi_f},\frac{a_{u'}}{\xi_f}\right) +
\frac{1}{2\pi}\int_{-\infty}^{+\infty} dp\;
{\rm e}^{-p^2/\xi_f^2} \biggl( \frac{1}{a_u} + \frac{1}{a_{u'}} \biggr)
\nonumber \\
&=& H\!\left(\frac{w_{uf}}{\xi_f},\frac{a_u}{\xi_f}\right) +
H\!\left(\frac{w_{u'f}}{\xi_f},\frac{a_{u'}}{\xi_f}\right) +
\frac{\xi_f}{2\sqrt\pi}\biggl( \frac{1}{a_u} + \frac{1}{a_{u'}} \biggr)\;,
\end{eqnarray}
and so the integral contribution in Equation (\ref{eq:R.final}) can be
neglected when the quantity
\begin{displaymath}
\frac{2}{\sqrt\pi}\,\frac{|\omega_{ll'}|}{\Delta S\xi_l}\,
\biggl[H\!\left(\frac{w_{uf}}{\xi_f},\frac{a_u}{\xi_f}\right) +
H\!\left(\frac{w_{u'f}}{\xi_f},\frac{a_{u'}}{\xi_f}\right) +
\frac{\xi_f}{2\sqrt\pi}\biggl( \frac{1}{a_u} + \frac{1}{a_{u'}}
\biggr)\biggr]
\end{displaymath}
is sufficiently small compared to the absolute value of the contribution
within curly brackets in that same equation.
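A small helper (a sketch; all numerical inputs below are illustrative assumptions) makes this neglect criterion easy to evaluate in practice:

```python
import numpy as np
from scipy.special import wofz

H = lambda v, a: wofz(v + 1j*a).real     # Voigt function H(v,a)

def integral_term_bound(om_llp, Delta, S, xi_l, xi_f,
                        w_uf, w_upf, a_u, a_up):
    """Bound on the integral contribution to Eq. (eq:R.final), as in the
    criterion above (all quantities in the normalizations of the text)."""
    bracket = (H(w_uf/xi_f, a_u/xi_f) + H(w_upf/xi_f, a_up/xi_f)
               + xi_f/(2*np.sqrt(np.pi))*(1/a_u + 1/a_up))
    return 2/np.sqrt(np.pi) * abs(om_llp)/(Delta*S*xi_l) * bracket

# The bound scales linearly with the lower-term splitting omega_ll',
# vanishing for degenerate lower levels.
b1 = integral_term_bound(0.1, 0.86, 0.84, 1.17, 0.82, -0.2, -0.1, 0.05, 0.05)
b2 = integral_term_bound(0.2, 0.86, 0.84, 1.17, 0.82, -0.2, -0.1, 0.05, 0.05)
assert b1 > 0 and np.isclose(b2, 2*b1)
```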
\begin{figure}[t!]
\centering
\includegraphics[height=3.7truein]{Wfunc_plot.eps}
\caption{\label{fig:Wfunc}
The real (top) and imaginary (bottom) parts of the redistribution
function of Equation (\ref{eq:R.final}) plotted against the
normalized output frequency $v_{k'}\equiv\frac{1}{2}(w_{uf}+w_{u'f})$, for various values of the
normalized input frequency $v_k\equiv\frac{1}{2}(v_{ul}+v_{u'l'})$.
For this example, we assumed $\omega_{ll'}=0.5\,\Delta$ and
$\omega_{uu'}=0.2\,\Delta$.}
\end{figure}
Figure~\ref{fig:int_term} shows the integral over the normalized
output frequency $v_{k'}\equiv\frac{1}{2}(w_{uf}+w_{u'f})$ of the
redistribution function of Equation (\ref{eq:R.final}), plotted
against the normalized input frequency
$v_k\equiv\frac{1}{2}(v_{ul}+v_{u'l'})$. The upper (lower)
panel shows the real (imaginary) part of this integral. The different
curves correspond to different values of the quantity
$v_{ll'}\equiv\omega_{ll'}/\Delta$. For this example, we assumed a
scattering angle $\Theta=1$\,rad, and a fully degenerate upper
state, so that $\omega_{uu'}=0$. In
such a case, we see from Equation (\ref{eq:R.final}) that the
contribution within curly brackets is purely real. Therefore, the plots
in the lower panel are exclusively due to the integral term in
Equation (\ref{eq:R.final}). We can conclude from those plots
that the contribution of
the integral term is generally not negligible, so it can significantly
affect the shape of the redistributed profile depending on the
importance of the lower-term coherence.
In contrast, the integral term brings no contribution when the atom is
illuminated with a flat spectrum, owing to the fact that its integral
over $v_k$ vanishes identically (see Figure~\ref{fig:int_term}, and
the comment at the end of Appendix~\ref{app:C}).
In the particular case when the atomic coherence of the lower term
is completely relaxed ($\rho_{ll'}=\delta_{ll'}\,\rho_{ll}$), or if the
levels $l$ and $l'$ are completely degenerate, the integral contribution
vanishes, and Equation (\ref{eq:R.final}) provides the
generalization of the well-known $R_{\rm II}$ redistribution function
\citep[e.g.,][]{Mi78} to the case of a $\Lambda$-type three-term atom,
\begin{eqnarray} \label{eq:R2}
R_{\rm II}(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};\Theta)
&=& \frac{2\pi}{\Delta^2\,S\xi_l\xi_f}\,
\exp\biggl[-\frac{(\hat\omega_k-\hat\omega_{k'}+\omega_{lf})^2}
{\Delta^2}\biggr] \nonumber \\
&&\kern -3cm {}\times \biggl[
W\biggl(\frac{\kappa^+ v_{ul}+\kappa^- w_{uf}}{S\xi_l\xi_f},
\frac{a_u}{S\xi_l\xi_f} \biggr) +
\overline{W}\biggl(\frac{\kappa^+ v_{u'l}+\kappa^- w_{u'f}}{S\xi_l\xi_f},
\frac{a_{u'}}{S\xi_l\xi_f} \biggr) \biggr]\;.
\end{eqnarray}
This expression was applied in recent work that investigated the effects
of partial redistribution on the formation of polarized lines from
$\Lambda$-type three-term atoms in the solar spectrum \citep{CM16}.
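For completeness, Equation (\ref{eq:R2}) maps directly onto a Faddeeva-function call. In the sketch below, the coefficients $\kappa^\pm$ of Equation (\ref{eq:chi}) (defined in the appendix and not reproduced here) are left as explicit inputs, and all numerical values are placeholders; the check only exploits the fact that, for a degenerate upper term ($u=u'$), the two profile terms are complex conjugates, so that $R_{\rm II}$ is real and non-negative:

```python
import numpy as np
from scipy.special import wofz

def R_II(v_ul, w_uf, v_upl, w_upf, om_k, om_kp, om_lf,
         a_u, a_up, Delta, S, xi_l, xi_f, kp, km):
    """Generalized R_II of Eq. (eq:R2). kp, km stand for the kappa^+/-
    coefficients of Eq. (eq:chi) (appendix); here they are free inputs."""
    s = S*xi_l*xi_f
    W = lambda v, a: wofz(v + 1j*a)
    gauss = np.exp(-((om_k - om_kp + om_lf)/Delta)**2)
    bracket = (W((kp*v_ul + km*w_uf)/s, a_u/s)
               + np.conj(W((kp*v_upl + km*w_upf)/s, a_up/s)))
    return 2*np.pi/(Delta**2*s) * gauss * bracket

# Placeholder values; with u = u' the result must be real and positive.
r = R_II(0.3, 0.1, 0.3, 0.1, om_k=0.3, om_kp=0.2, om_lf=-0.1,
         a_u=0.05, a_up=0.05, Delta=0.86, S=0.84, xi_l=1.17, xi_f=0.82,
         kp=0.6, km=0.4)
assert abs(r.imag) < 1e-12 and r.real > 0
```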
Finally, Figure~\ref{fig:Wfunc} shows several realizations of the full
redistribution function of Equation (\ref{eq:R.final}) plotted against
the normalized output frequency $v_{k'}$, for different values
of the normalized input frequency $v_k$.
For the example in this figure, we assumed
$\omega_{ll'}=0.5\,\Delta$ and
$\omega_{uu'}=0.2\,\Delta$, and a scattering angle $\Theta=1$\,rad.
\section{The cases of forward and backward scattering}
\label{sec:limitcase}
In the two cases of forward and backward scattering
($\Theta=0$ and $\Theta=\pi$, respectively), the expression
(\ref{eq:R.final}) for the laboratory redistribution function breaks
down because of the condition $S=0$. We can, however, treat these two
cases by relying on a physical argument of continuity and taking the
limit of the redistribution function for $S\to 0^+$.
Using the asymptotic expansion of $W(v,a)$ for large values of
$|v+{\rm i}a|$ \citep[e.g.,][]{LL04}, and recalling
Equation (\ref{eq:F_def}), it can be shown that
\begin{equation} \label{eq:asympt}
\lim_{k\to 0^+} \frac{1}{k}\,W\Bigl(\frac{v}{k},\frac{a}{k}\Bigr)
=\frac{{\rm i}}{\sqrt{\pi}\,(v+{\rm i}a)}\;, \qquad
\lim_{k\to 0^+} \frac{1}{k}\,F\Bigl(\frac{v}{k}\Bigr)
=\frac{1}{2v}\;.
\end{equation}
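Both limits are easy to confirm numerically with library routines for the Faddeeva and Dawson functions (arbitrary test values below):

```python
import numpy as np
from scipy.special import wofz, dawsn

v, a, k = 0.7, 0.3, 1e-3                 # arbitrary test values; k -> 0+
W = lambda vv, aa: wofz(vv + 1j*aa)

# (1/k) W(v/k, a/k)  ->  i / (sqrt(pi) (v + i a))
lhs_W = W(v/k, a/k)/k
rhs_W = 1j/(np.sqrt(np.pi)*(v + 1j*a))
assert abs(lhs_W - rhs_W) < 1e-5

# (1/k) F(v/k)  ->  1/(2v)
lhs_F = dawsn(v/k)/k
assert abs(lhs_F - 1/(2*v)) < 1e-5
```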
Considering for simplicity the case of a non-coherent lower term,
Equation (\ref{eq:R2}) thus becomes, for the two cases of forward and
backward scattering,
\begin{eqnarray} \label{eq:R2.fb}
R_{\rm II}(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};C=\pm1)
&=& \frac{2\sqrt\pi}{\Delta_\mp}\,
\exp\biggl[-\frac{(\hat\omega_k-\hat\omega_{k'}+\omega_{lf})^2}
{\Delta_\mp^2}\biggr] \nonumber \\
&&\kern -2.5in {}\times \biggl[
\frac{{\rm i}}{\kappa^+ (\hat\omega_k-\omega_{ul})
+\kappa^- (\hat\omega_{k'}-\omega_{uf})+{\rm i}\epsilon_u} -
\frac{{\rm i}}{\kappa^+ (\hat\omega_k-\omega_{u'l})
+\kappa^- (\hat\omega_{k'}-\omega_{u'f})-{\rm i}\epsilon_{u'}}
\biggr]\;,
\end{eqnarray}
where $\Delta_\pm=\Delta_{ul}\pm\Delta_{uf}$.
\begin{figure}[t!]
\centering
\includegraphics[height=3.7truein]{test_FS.eps}
\caption{\label{fig:FS test}
Plots showing the transition between the forms (\ref{eq:R2}) and
(\ref{eq:R2.fb}) of $R_{\rm II}$ in approaching the condition of forward
scattering. The atomic model adopted is that of the three-term \ion{Ca}{2}
ion encompassing the formation of the H and K lines and the IR triplet.
The plots show the Raman scattered emissivity in the polarized
\ion{Ca}{2} K line due to the monochromatic excitation of the
\ion{Ca}{2} 854.2\,nm line at the exact value of its resonance
frequency, and for different values of the scattering angle $\Theta$
near $0^\circ$.}
\end{figure}
\subsection{The special case of the two-term atom}
When the two lower terms coincide, so that $\Delta_{uf}=\Delta_{ul}$,
the asymptotic result expressed
by Equation (\ref{eq:asympt}) is not generally applicable. In fact,
in the case of a two-term atom
with non-coherent lower term, Equation (\ref{eq:R2}) becomes
\begin{eqnarray} \label{eq:R2.two}
R_{\rm II}(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};\Theta)
&=&\frac{\pi}{\Delta\omega_T^2\,C_2 S_2}
\exp\!\left[-\frac{(\hat{\omega}_k-\hat{\omega}_{k'}+\omega_{lf})^2}%
{4 S_2^2\Delta\omega_T^2}\right] \nonumber \\
&&\kern -2in {}\times
\left[
W\biggl(\frac{\hat{\omega}_k+\hat{\omega}_{k'}
-\omega_{ul}-\omega_{uf}}{2 C_2\Delta\omega_T},
\frac{\epsilon_u}{C_2\Delta\omega_T}\biggr) +
\overline{W}\biggl(\frac{\hat{\omega}_k+\hat{\omega}_{k'}
-\omega_{u'l}-\omega_{u'f}}{2 C_2\Delta\omega_T},
\frac{\epsilon_{u'}}{C_2\Delta\omega_T}\biggr)\right]\;,
\end{eqnarray}
where we indicated with $\Delta\omega_T=\Delta_{ul}=\Delta_{uf}$
the single value of the Doppler width of the two-term transition,
and we also defined
$S_2=\sin(\Theta/2)$ and $C_2=\cos(\Theta/2)$ (cf.\ Paper~I).
Then the limit (\ref{eq:asympt}) applies only to the case of backward
scattering $(C_2\to 0)$, but not to forward scattering $(S_2\to 0)$.
In the first case, we obtain an expression formally identical to
Equation (\ref{eq:R2.fb}), namely,
\begin{eqnarray} \label{eq:R2.two.backward}
R_{\rm II}(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};\Theta=\pi)
&=& \frac{2\sqrt\pi}{\Delta\omega_T}\,
\exp\biggl[-\frac{(\hat\omega_k-\hat\omega_{k'}+\omega_{lf})^2}
{4\Delta\omega_T^2}\biggr] \nonumber \\
&&\kern -2in {}\times \Biggl(
\frac{{\rm i}}{\hat{\omega}_k+\hat{\omega}_{k'}
-\omega_{ul}-\omega_{uf}+2{\rm i}\epsilon_u} -
\frac{{\rm i}}{\hat{\omega}_k+\hat{\omega}_{k'}
-\omega_{u'l}-\omega_{u'f}-2{\rm i}\epsilon_{u'}}
\Biggr)\;,
\end{eqnarray}
whereas in the case of forward scattering,
\begin{eqnarray} \label{eq:R2.two.forward}
R_{\rm II}(\Omega_u,\Omega_{u'};\Omega_l,\Omega_{f};
\hat{\omega}_{k},\hat{\omega}_{k'};\Theta=0)
&=&\frac{2\pi\sqrt\pi}{\Delta\omega_T}\,
\delta(\hat{\omega}_k-\hat{\omega}_{k'}+\omega_{lf})
\nonumber \\
&&\kern -1.5in {}\times
\left[
W\biggl(\frac{\hat{\omega}_{k'}-\omega_{uf}}{\Delta\omega_T},
\frac{\epsilon_u}{\Delta\omega_T}\biggr) +
\overline{W}\biggl(\frac{\hat{\omega}_{k'}-\omega_{u'f}}{\Delta\omega_T},
\frac{\epsilon_{u'}}{\Delta\omega_T}\biggr)\right]\;,
\end{eqnarray}
owing to the fact that $\exp(-x^2/k^2)/k\to\sqrt{\pi}\,\delta(x)$ when
$k\to0^+$.
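As a purely numerical aside (not part of the original derivation), the nascent-delta limit quoted above can be checked directly. The following sketch is illustrative only: the function name \texttt{smeared} and the Lorentzian test profile are arbitrary choices, and the quadrature parameters are not tuned.

```python
import math

def smeared(f, k, n=200001, half_width=50.0):
    # Trapezoidal approximation of the nascent-delta integral
    #   int exp(-(x/k)^2)/k * f(x) dx  on [-half_width, half_width],
    # which should approach sqrt(pi) * f(0) as k -> 0+.
    h = 2.0 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        x = -half_width + i * h
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * math.exp(-(x / k) ** 2) / k * f(x)
    return total * h

f = lambda x: 1.0 / (1.0 + x ** 2)   # arbitrary smooth test profile
for k in (1.0, 0.1, 0.01):
    print(k, smeared(f, k), math.sqrt(math.pi) * f(0.0))
```

For decreasing $k$ the printed values approach $\sqrt{\pi}\,f(0)$, with a correction of order $k^2$, consistent with the limit used above.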
This last expression in particular shows that the process of forward
scattering \emph{in a two-term atom} is strictly coherent.
In fact, a simple inspection of Equation (\ref{eq:R2.fb}) already allows one
to conclude that both forward and backward scattering imply an
increased degree of correlation between the input and output
frequencies \citep[cf.][and references therein]{Le77},
with a typical spread that is dominated by the inverse
lifetime of the upper level rather than by the Doppler width. However,
strict coherence is attained only in the case of forward scattering in a
two-term atom, according to Equation (\ref{eq:R2.two.forward}).
The plots in Figure~\ref{fig:FS test} show an example of the behavior of
the scattered polarized emissivity in a three-term atomic system when the
scattering angle approaches $0^\circ$. For the example in
this figure, we adopted the three-term model of the \ion{Ca}{2} ion
underlying the formation of the H and K doublet around 395\,nm and the
IR triplet around 858\,nm. The line formation model also includes a
magnetic field of 10\,G normal to the direction of the incident light,
which is responsible for the appearance of line polarization, and in
particular of non-vanishing Stokes $U$ and $V$ signals when the
scattering direction forms an angle with the incidence direction.
The plotted profiles of the
\ion{Ca}{2} K line at 393.4\,nm are produced by Raman scattering of
monochromatic radiation at the exact resonance frequency of the 854.2\,nm
line of the IR triplet. For the modeling we assumed a plasma density of
$10^{12}\,\rm cm^{-3}$ and a temperature of 1000\,K, which corresponds to
a Doppler width of 8.5\,m\AA. However, because the geometry is very close
to forward scattering, the actual width of the scattered profiles is
instead dominated by the natural linewidth ($\sim 0.5$\,m\AA),
as expected.
Figure~\ref{fig:FS test} numerically demonstrates how
Equation (\ref{eq:R2.fb}) indeed provides the correct limit of the
general expression (\ref{eq:R2}) of $R_{\rm II}$, when approaching
the condition of forward scattering, as both Stokes $I$ and $Q$ for
$\Theta=0.1^\circ$ are already practically indistinguishable from
those at $\Theta=0^\circ$.
\begin{acknowledgements}
We thank T.\ del Pino Alem\'an (HAO) for internally reviewing the
manuscript and for helpful comments. We are deeply indebted
to the anonymous referee, for a very careful review of our manuscript and
for helpful comments and suggestions that have greatly improved the
presentation of this work.
\end{acknowledgements}
\section{Introduction and results}
The polaron models the interaction of an electron with a polar crystal. The Fröhlich Hamiltonian describing the interaction of the electron with the lattice vibrations has a fiber decomposition in terms of the Hamiltonians
\begin{equation*}
H(P) = \frac12 (P - P_f)^2 + \mathbf N + \frac{\sqrt{\alpha}}{\sqrt{2} \pi} \int_{\mathbb R^3}
\frac{1}{|k|}(a_k + a^\ast_k) \, \mathrm dk
\end{equation*}
at fixed total momentum $P\in \mathbb R^3$ that act on the bosonic Fock space over $L^2(\R^3)$. Here $a_k^\ast$ and $a_k$ are the creation and annihilation operators satisfying the canonical commutation relations $[a_k, a^\ast_{k'}] = \delta(k-k')$, $\mathbf N \equiv \int_{\mathbb R^3} a^\ast_k a_k\, \mathrm d k$ is the number operator, $P_f \equiv \int_{\R^3} k a_k^* a_k \mathrm dk$ is the momentum operator of the field and $\alpha>0$ is the coupling constant. Of particular interest has been the energy momentum relation
\begin{equation*}
E(P) \coloneqq \inf \operatorname{spec}(H(P)).
\end{equation*}
For small $|P|$ the system is believed to behave like a free particle with an increased ``effective mass''. $E$ is known to have a strict local minimum at $P=0$ and to be smooth in a neighbourhood of the origin. The effective mass is defined as the inverse of the curvature at the origin such that
\begin{equation}
\label{Equation: Asympotics around 0}
E(P) - E(0) = \frac{1}{2m_{\text{eff}}} |P|^2 + o(|P|^2)
\end{equation}
in the limit $P \to 0$. While significant effort has been put into the study of the asymptotic behaviour of $E(0)$ and $m_{\text{eff}}$ in the strong coupling limit $\alpha \to \infty$ (see e.g.\ \cite{DoVa83}, \cite{LiTh97}, \cite{LiSe20}, \cite{BP22}, \cite{MMS22}), we will be interested in the qualitative behaviour of $E$ at a fixed value of the coupling constant. One valuable tool for the analysis of $E(0)$ and $m_{\text{eff}}$ has been their probabilistic representation obtained via the Feynman-Kac formula. The approach taken below extends the probabilistic methods developed for the analysis of the effective mass to the whole energy momentum relation.\\
\\
Let
\begin{equation*}
E_\text{ess}(P) \coloneqq \inf \text{ess spec}(H(P))
\end{equation*}
be the bottom of the essential spectrum. It is known \cite{Sp88} that
\begin{equation*}
E_\text{ess}(P) = E(0) + 1
\end{equation*}
for all $P$. From now on, we will often abuse notation and identify a radially symmetric function on $\mathbb R^3$ with a function on $[0, \infty)$. Keeping that in mind, let
\begin{equation*}
\mathcal I_0 \coloneqq \{P\in [0, \infty): E(P) < E(0) + 1\}
\end{equation*}
(which is known to contain a neighbourhood of the origin). For Hamiltonians with stronger regularity assumptions (e.g.\ the Fröhlich polaron with an ultraviolet cutoff) it is known \cite{Mo06} that the spectral gap closes in the limit, i.e.\ that $\lim_{P \to \infty} E(P) = E_\text{ess}(0)$. For the Fröhlich polaron in dimensions 1 and 2 it is known \cite{Sp88} that $\mathcal I_0 = [0, \infty)$, i.e.\ that the spectral gap does not close at any finite momentum. In dimension 3, however, it has been predicted in the physics literature that $\mathcal I_0$ is bounded \cite{Fe72}. For sufficiently small coupling constants, this has been shown in \cite{Da17}. In the framework presented below, the question whether $\mathcal I_0$ is bounded or unbounded reduces to the study of the tails of a probability distribution on $(0, \infty)^2$. Not much seems to be known about the behaviour of $E$ in the intermediate $P$-regime. In \cite{DySp20} it was shown that $E$ is real analytic on $\mathcal I_0$ with $E(0) \leq E(P)$ for all $P$ and that the inequality is strict for $P$ outside of a compact set. In recent work it has been shown \cite{LMM22} that $E$ indeed has a strict global minimum at $0$. In the present text, we will prove some previously unknown properties of $E$, namely monotonicity and concavity of $P\mapsto E(\sqrt{P})$ on $[0, \infty)$, both of which are strict on $\mathcal I_0$.
The (strict) monotonicity additionally allows us to replicate the result of \cite{LMM22}.
\begin{thrm}
\label{Theorem: Main result}
The following holds.
\begin{enumerate}[label=(\roman*)]
\item $P \mapsto E(P)$ is non-decreasing on $[0, \infty)$ and strictly increasing on $\mathcal I_0$. In particular, $\mathcal I_0$ is a (potentially unbounded) interval.
\item $P \mapsto E(\sqrt{P})$ is strictly concave on $\mathcal I_0$. In particular
\begin{equation*}
E(P) - E(0) < \frac{1}{2m_{\text{eff}}} P^2
\end{equation*}
for all $P>0$, i.e. the correction to the quasi-particle energy is negative and $\big[0, \sqrt{2 m_\text{eff}}\, \big) \subset \mathcal I_0$.
\item For $|P| \notin \operatorname{cl}(\mathcal I_0)$ we have $\lim_{\lambda \uparrow E(P)}\langle \Omega, (H(P) - \lambda)^{-1} \Omega \rangle<\infty$, where $\Omega$ is the Fock vacuum; in particular, $H(P)$ does not have a ground state.
\end{enumerate}
\end{thrm}
For the polaron with an ultraviolet cut-off and in dimensions 3 and 4, the non-existence of a ground state for $|P|\notin \mathcal I_0$ has been shown in \cite{Mo06}. In a certain limit of strong coupling, the negativity of the correction to the quasi-particle energy has been shown in \cite{MMS22}.
In (iii) we used that if $H(P)$ has a ground state, then it is non-orthogonal to $\Omega$: The operator $e^{\mathrm i \pi \mathbf N} e^{-TH(P)} e^{-\mathrm i \pi \mathbf N}$ is positivity improving for all $T>0$ \cite[Theorem 6.3]{Miy10}, which in turn implies that if there exists a ground state $\psi_P$ of $H(P)$ then it is unique (up to a phase) and can be chosen such that $e^{\mathrm i \pi \mathbf N}\psi_P$ is strictly positive \cite[Theorem 2.12]{Miy10}. In \cite[Theorem 6.4]{Miy10} it was shown that there exists a ground state of $H(P)$ for $|P|< \sqrt{2}$. Part (ii) of our Theorem \ref{Theorem: Main result} allows us to improve this to existence of a ground state for $|P|< \sqrt{2m_\text{eff}}$.\\
\\
Before starting with the proof of Theorem \ref{Theorem: Main result}, we give a brief summary of our approach. An application of the Feynman-Kac formula to the semigroup generated by the Hamiltonian yields \cite{DySp20}
\begin{equation*}
\label{Equation: Feyman-Kac}
\langle \Omega, e^{-TH(P)} \Omega \rangle = \int_{C([0, \infty), \mathbb R^3)} \mathcal W(\mathrm dX)\, e^{- \mathrm i P \cdot X_T} \exp\bigg( \frac{\alpha}{2} \int_0^T \int_0^T \mathrm ds \mathrm dt \, \frac{e^{-|t-s|}}{|X_{s, t}|} \bigg)
\end{equation*}
for all $P\in \mathbb R^3$ and $T\geq 0$, where $\Omega$ is the Fock vacuum, $\mathcal W$ is the distribution of a three dimensional Brownian motion started in the origin and $X_{s, t} \coloneqq X_t -X_s$ for $X\in C([0, \infty), \R^3)$ and $s, t\geq 0$. After normalizing the expression above by dividing by $\langle \Omega, e^{-TH(0)} \Omega \rangle$, one can study $E$ by looking at the large $T$ asymptotics of Brownian motion perturbed by a pair potential. Herbert Spohn conjectured in \cite{Sp87} convergence of the resulting path measure under diffusive rescaling to Brownian motion and showed that the respective diffusion constant is then the inverse of the effective mass, see also \cite{DySp20}. The validity of this central limit theorem was shown by Mukherjee and Varadhan in \cite{MV19} for sufficiently small $\alpha$ and then, by extending the proof given in \cite{MV19}, for all $\alpha$ in \cite{BP21}. The proof given in \cite{MV19} relies on a representation of the path measure as a mixture of Gaussian measures, where the mixing measure can be expressed in terms of a perturbed birth and death process. An application of renewal theory then yielded the existence of an infinite volume measure and a central limit theorem provided that a certain technical condition holds whose validity was proven for sufficiently small coupling parameters. In \cite{BP21} this approach was continued. Rather than directly verifying the validity of said condition, the point process representation was used in order to derive a renewal equation for $T \mapsto \langle \Omega, e^{-TH(0)} \Omega \rangle$. It was then shown that the condition of Mukherjee and Varadhan is equivalent to the known existence of a ground state of $H(0)$ that is non-orthogonal to $\Omega$. We will use a similar approach and derive renewal equations for $T \mapsto \langle \Omega, e^{-TH(P)} \Omega \rangle$ for any $P$. 
We arrive at our results by comparing the asymptotic behaviour of the solutions in dependency of $P$. \\
\\
In our units the free electron has mass 1 and physically one would expect that
$1 < m_{\text{eff}} < \infty$.
The proof of the central limit theorem entails a formula for the diffusion constant that directly implies that this indeed holds. We will give an additional proof that yields (essentially) the same formula for the effective mass
but that does not rely on the validity of a central limit theorem. Numerous efforts have been made to establish central limit theorems for related models (see e.g. \cite{BeSp04}, \cite{Gu06}, \cite{Mu22}, \cite{BP21}) and a generalization of the method presented below may be a viable alternative to study the effective mass with probabilistic methods.
\section{Proof of Theorem \ref{Theorem: Main result}}
We define $\triangle \coloneqq \{(s, t)\in \mathbb [0, \infty)^2:\, s<t\}$ and $\mathcal Y \coloneqq \bigcup_{n=0}^\infty (\triangle \times [0, \infty))^n$, and equip the latter with the disjoint-union $\sigma$-algebra (i.e. the final $\sigma$-algebra with respect to the canonical injections $(\triangle \times [0, \infty))^n \hookrightarrow \mathcal Y$, $n\in \N$). For $\zeta = ((s_i, t_i, u_i))_{1\leq i \leq n}\in \mathcal Y$ let
\begin{equation*}
T_1(\zeta) \coloneqq \sup_i t_i, \quad \sigma^2(\zeta) \coloneqq \operatorname{dist}_{L^2}\Big(B_{T_1(\zeta)}, \operatorname{span}\{u_i B_{s_i, t_i} + Z_i: \, 1\leq i \leq n\}\Big)^2
\end{equation*}
where $(B_t)_{t\geq 0}$ is a one dimensional Brownian motion and $(Z_n)_n$ is an iid sequence of $\mathcal N(0, 1)$ distributed random variables that is independent of $(B_t)_{t\geq 0}$. For a measure $\mu$ on $\mathcal Y$ and a measurable function $f:\mathcal Y \to \mathbb R$ we abbreviate $\mu(f) \coloneqq \int_{\mathcal Y} \mu(\mathrm d\zeta) f(\zeta)$ provided that the integral exists in $\overline{\mathbb R}$. Additionally, we set $f_P(T) \coloneqq \langle \Omega, e^{-TH(P)} \Omega \rangle$ for $P\in \mathbb R^3$, $T\geq 0$.
\begin{prop}
\label{Proposition: Our renewal equations}
There exists a measure $\mu$ on $\mathcal Y$ such that
\begin{equation*}
f_P(T) = \mu\big(e^{-P^2 \sigma^2/2} f_P(T-T_1)\1_{\{T_1 \leq T\}}\big) + e^{-P^2T/2}
\end{equation*}
holds for all $P \in \mathbb R^3$ and $T \geq 0$.
\end{prop}
\begin{proof}
We define
\begin{equation*}
F_P(T_1, T_2, \xi) \coloneqq \int \mathcal W(\mathrm d X) \, e^{- \mathrm i P \cdot X_{T_1, T_2}} \prod_{i=1}^{n} |X_{s_i, t_i}|^{-1}
\end{equation*}
for $T_1, T_2 \geq 0$ and $\xi = ((s_i,t_i))_{1\leq i \leq n}$ such that the integral is well defined.
Let $\nu_T(\mathrm ds \mathrm dt) \coloneqq \alpha e^{-|t-s|} \1_{\{0<s<t<T\}} \mathrm ds \mathrm dt$. Expanding the exponential into a series and exchanging the order of integration leads to\footnote{Using that the integral is finite for $P=0$ shows that this is indeed justified.}
\begin{align}
\label{Equation: Paths measure by PPP}
f_P(T) &= \int \mathcal W(\mathrm d X) \, e^{-\mathrm i P \cdot X_{0, T}} \exp \left( \int \nu_T(\mathrm ds \mathrm dt) |X_{s, t}|^{-1} \right) \nonumber \\ \nonumber
&= \sum_{n=0}^\infty \frac{1}{n!} \int \nu_{T}^{\otimes n}(\mathrm d s_1 \mathrm d t_1, \hdots, \mathrm d s_n \mathrm d t_n) \int \mathcal W(\mathrm dX)\, e^{-\mathrm i P \cdot X_{0, T}} \prod_{i=1}^{n}|X_{s_i, t_i}|^{-1} \nonumber \\
&= e^{c_{T}} \int \Gamma_{T}(\mathrm d\xi) F_P(0, T, \xi)
\end{align}
where $\Gamma_T$ is the distribution of a Poisson point process on $\mathbb R^2$ with intensity measure $\nu_T$ and $c_T \coloneqq \nu_T(\mathbb R^2)$. Let $\Gamma$ be the distribution of a Poisson point process on $\mathbb R^2$ with intensity measure $\nu(\mathrm ds \mathrm dt) \coloneqq \alpha e^{-|t-s|} \1_{\{0<s<t\}} \mathrm ds \mathrm dt$. The measure $\Gamma$ can be seen as the distribution of a birth and death process with birth rate $\alpha$ and death rate 1 (started with no individual alive at time 0) by identifying an individual that is born at $s$ and that dies at $t$ with the point $(s, t)$. For $t \geq 0$ and a configuration $\xi = ((s_i, t_i))_{i}$ of individuals let $N_t(\xi) \coloneqq |\{i: \, s_i \leq t < t_i\}|$ be the number of individuals alive at time $t$. By the restriction theorem for Poisson point processes, $\Gamma_{T}$ can be obtained by restricting $\Gamma$ to the process of all individuals that are born before $T$ conditional on the event that no individual is alive at time $T$. One can easily verify that
\begin{equation*}
e^{c_T} = e^{\alpha T} e^{-\nu([0, T] \times (T, \infty))} = e^{\alpha T} \Gamma(N_T = 0).
\end{equation*}
Hence, if we denote by $\xi_{t_1, t_2}$ the restriction of $\xi$ to all individuals born in $[t_1, t_2)$, we can rewrite
\begin{equation*}
f_P(T) = e^{\alpha T} \int \Gamma(\mathrm d\xi) F_P(0, T, \xi_{0, T}) \1_{\{N_T(\xi) = 0\}}.
\end{equation*}
Let
\begin{equation*}
\tau(\xi) \coloneqq \inf\{t\geq \inf_i s_i: \, N_t(\xi) = 0\}
\end{equation*}
be the first time after the first birth at which no individual is alive. By independence of Wiener increments
\begin{equation*}
F_P(0, T, \xi_{0, T}) = F_P(0, \tau(\xi), \xi_{0, \tau(\xi)}) F_P(\tau(\xi), T, \xi_{\tau(\xi), T}).
\end{equation*}
for all $\xi \in \{\tau \leq T\}$ such that the left hand side is well defined.
Let $\Xi$ be the distribution of $\xi \mapsto \xi_{0, \tau(\xi)}$ under $\Gamma$. The process $\Gamma$ regenerates after $\tau$; hence, by the translation invariance of $F_P$ under a simultaneous time shift in all variables and since $e^{\alpha T} = e^{\alpha \tau}e^{\alpha (T-\tau)}$,
\begin{align*}
f_P(T) =& \int \Xi(\mathrm d \xi) \1_{\{\tau(\xi) \leq T\}}e^{\alpha \tau(\xi)} F_P(0, \tau(\xi), \xi) f_P(T-\tau(\xi)) \\
&+ e^{\alpha T}\int \Xi(\mathrm d \xi) \1_{\{\tau(\xi) > T, \, N_T(\xi) = 0\}} F_P(0, T, \xi_{0, T}).
\end{align*}
The event $\{\tau > T, \, N_T = 0\}$ happens if and only if there is no birth until time $T$. Then $\xi_{0, T}$ is the empty configuration and hence
\begin{align*}
F_P(0, T, \xi_{0, T}) = \mathbb E_\mathcal W\big[e^{-\mathrm i P \cdot X_T}\big] = e^{-P^2 T/2}
\end{align*}
for $\xi \in \{\tau> T, \, N_T = 0\}$. Under $\Xi$, the time until the first birth is $\operatorname{Exp}(\alpha)$ distributed and hence $ \Xi(\tau> T, \, N_T = 0) = e^{-\alpha T}$.
Combined, this gives us
\begin{equation*}
f_P(T) = e^{-P^2T/2} + \int \Xi(\mathrm d \xi) e^{\alpha \tau(\xi)} \1_{\{\tau(\xi) \leq T\}} F_P(0, \tau(\xi), \xi) f_P(T-\tau(\xi)).
\end{equation*}
For $(\xi, u) \in \triangle^n \times [0, \infty)^n$ we define $\mathbb P_{\xi, u}$ by
\begin{equation*}
\mathbb P_{\xi, u}(\mathrm dX) \coloneqq \frac{1}{\phi(\xi, u)} e^{-\sum_{i=1}^n u_i^2 |X_{s_i, t_i}|^2/2} \mathcal W(\mathrm dX)
\end{equation*}
where $\phi(\xi, u)$ is a normalization constant.
Then $\mathbb P_{\xi, u}$ is a centred and rotationally symmetric Gaussian measure and
\begin{equation*}
\frac{1}{3}\mathbb E_{\mathbb P_{\xi, u}}\big[|X_t|^2\big] = \operatorname{dist}_{L^2}\Big(B_{t}, \operatorname{span}\{u_i B_{s_i, t_i} + Z_i: \, 1\leq i \leq n\}\Big)^2 \eqqcolon \sigma^2_t(\xi, u)
\end{equation*}
for all $t\geq 0$, see the proof of Proposition 3.2 in \cite{BP22}. We thus have
\begin{align*}
F_P(0, t, \xi) &= \int \mathcal W(\mathrm dX) \int_{[0, \infty)^n} \mathrm du\, (2/\pi)^{n/2}\, e^{-\mathrm i P \cdot X_t} e^{-\sum_{i=1}^n u_i^2|X_{s_i, t_i}|^2/2} \\
&= \int_{[0, \infty)^n} \mathrm du\, (2/\pi)^{n/2} \phi(\xi, u) e^{-P^2 \sigma_t^2(\xi, u)/2}.
\end{align*}
Hence, the measure we are looking for is given by
\begin{equation}
\mu(\mathrm d \xi \mathrm du) \coloneqq \Xi(\mathrm d \xi) \mathrm du \, (2/\pi)^{n(\xi)/2} e^{\alpha \tau(\xi)} \phi(\xi, u)
\end{equation}
under the identification of $\triangle^n \times [0, \infty)^n$ with $(\triangle \times [0, \infty))^n$.
\end{proof}
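As a sanity check of the identity $e^{c_T}=e^{\alpha T}\,\Gamma(N_T=0)$ used in the proof, note that $\nu([0,T]\times(T,\infty))=\alpha\int_0^T e^{-(T-s)}\,\mathrm ds=\alpha(1-e^{-T})$, so $\Gamma(N_T=0)=e^{-\alpha(1-e^{-T})}$. The following Monte Carlo sketch (an illustrative aside; the values of $\alpha$, $T$ and the trial count are arbitrary) estimates this probability by sampling the birth and death process directly.

```python
import math, random

random.seed(1)
alpha, T, trials = 0.8, 2.0, 200_000

def sample_poisson(lam):
    # Knuth's method; adequate for the small means used here
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

hits = 0
for _ in range(trials):
    n = sample_poisson(alpha * T)                       # births in [0, T]
    births = [random.uniform(0.0, T) for _ in range(n)]
    deaths = [s + random.expovariate(1.0) for s in births]
    if all(d <= T for d in deaths):                     # nobody alive at time T
        hits += 1

estimate = hits / trials
exact = math.exp(-alpha * (1.0 - math.exp(-T)))
print(estimate, exact)
```

The estimate agrees with the closed-form value up to Monte Carlo error, illustrating that conditioning $\Gamma$ on $\{N_T=0\}$ carries exactly the normalization $e^{c_T-\alpha T}$.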
\begin{prop}
\label{Propposition: Matrix element of Resolvent}
We have $\mu(e^{-\sigma^2 P^2 /2 + E(P) T_1}) \leq 1$ for all $P\in \R^3$ and for $\lambda < E(P)$ we have
\begin{equation*}
\langle \Omega, (H(P) - \lambda)^{-1} \Omega \rangle = \frac{1}{P^2/2-\lambda} \cdot \frac{1}{1-\mu(e^{-P^2\sigma^2 /2 + \lambda T_1})}.
\end{equation*}
If $|P|\in \mathcal I_0$ then $E(P)$ is the unique real number satisfying
\begin{equation*}
\mu(e^{-P^2\sigma^2 /2 + E(P) T_1}) = 1.
\end{equation*}
\end{prop}
\begin{proof}
For $P\in \R^3$, let $\nu_P$ be the image measure of $e^{-P^2\sigma^2(\zeta)/2}\mu(\mathrm d \zeta)$ under the map $T_1$ and let $z_P(T) \coloneqq e^{-P^2T/2}$ for $T \geq 0$. By Proposition \ref{Proposition: Our renewal equations}, for any $P \geq 0$ the renewal equation
\begin{equation}
\label{Equation: Renewal equation}
f_P = \nu_P*f_P + z_P
\end{equation}
holds, where the convolution $\nu_P*f_P$ is defined as
\begin{equation*}
(\nu_P*f_P)(T) \coloneqq \int_{[0, T]} \nu_P(\mathrm dt) f_P(T-t)
\end{equation*}
for $T\geq 0$. As $f_P$ is continuous and strictly positive, $\inf_{0 \leq t \leq T} f_P(t) >0$ and hence the measure $\nu_P$ is locally finite. Renewal theory implies that the unique locally bounded solution to \eqref{Equation: Renewal equation} is given by
\begin{equation*}
f_P = \sum_{n=0}^\infty \nu_P^{*n}*z_P.
\end{equation*}
Taking the Laplace transform leads to
\begin{equation*}
\langle \Omega, (H(P) - \lambda)^{-1} \Omega \rangle = \mathcal L(f_P)(-\lambda) = \frac{1}{P^2/2-\lambda} \sum_{n=0}^\infty \mathcal L(\nu_P)^n(-\lambda)
\end{equation*}
for\footnote{The inequality $E(P)<P^2/2$ follows from the considerations above and can also be obtained directly from the definition of the Hamiltonians by using $E(0)<0$ and the estimate $E(P) \leq \langle \psi_0, H(P) \psi_0 \rangle$, where $\psi_0$ is the ground state of $H(0)$.} $\lambda < E(P)$. In particular, $\mathcal L(\nu_P)(-\lambda)<1$ for $\lambda < E(P)$ and
\begin{equation*}
\langle \Omega, (H(P) - \lambda)^{-1} \Omega \rangle = \frac{1}{P^2/2-\lambda} \cdot \frac{1}{1-\mu(e^{-P^2 \sigma^2/2 + \lambda T_1})}.
\end{equation*}
As mentioned earlier, if there exists a ground state of $H(P)$ then it is unique and non-orthogonal to $\Omega$. In combination with the spectral theorem this implies for $|P|\in \mathcal I_0$ that
\begin{equation*}
\lim_{\lambda \uparrow E(P)} \langle \Omega, (H(P) - \lambda)^{-1} \Omega \rangle = \infty
\end{equation*}
and hence $\mu(e^{-\sigma^2 P^2 /2 + E(P) T_1}) = 1$ by the monotone convergence theorem.
\end{proof}
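The renewal structure exploited in this proof can be illustrated on a discretized toy model (an aside with arbitrary parameters, not tied to the actual polaron kernel): solving $f=\nu*f+z$ by forward recursion and comparing with the Neumann series $\sum_n \nu^{*n}*z$. The toy kernel is taken with total mass below $1$ so that the series converges.

```python
# Discrete toy renewal equation f = nu*f + z on the grid t = 0, h, 2h, ...
import math

h, n = 0.1, 200
nu = [0.0] + [0.3 * math.exp(-i * h) * h for i in range(1, n)]  # toy kernel, mass < 1
z = [math.exp(-0.5 * i * h) for i in range(n)]                  # toy source term

# Forward recursion: f[i] = z[i] + sum_{j=1..i} nu[j] f[i-j]  (nu[0] = 0)
f = [0.0] * n
for i in range(n):
    f[i] = z[i] + sum(nu[j] * f[i - j] for j in range(1, i + 1))

# Neumann series sum_k nu^{*k} * z should reproduce the same solution
def conv(a, b):
    return [sum(a[j] * b[i - j] for j in range(i + 1)) for i in range(n)]

series, term = list(z), list(z)
for _ in range(60):
    term = conv(nu, term)
    series = [s + t for s, t in zip(series, term)]

print(max(abs(a - b) for a, b in zip(f, series)))  # negligibly small
```

The agreement mirrors the uniqueness statement from renewal theory invoked above: the locally bounded solution of $f=\nu*f+z$ is exactly the convolution series.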
\begin{remark}
Let $P\in \mathbb R^3$ such that $|P|\in \mathcal I_0$ and $\psi_P$ be the unique ground state of $H(P)$. By an application of the spectral theorem
\begin{equation*}
\lim_{T \to \infty} f_P(T)e^{TE(P)} = \lim_{T \to \infty} \langle \Omega, e^{-T(H(P)-E(P))} \Omega \rangle = |\langle \Omega, \psi_P \rangle|^2.
\end{equation*}
On the other hand, the limit $\lim_{T \to \infty} f_P(T)e^{TE(P)}$ can be calculated by using the renewal theorem. This gives us the identity
\begin{equation}
\label{Equation: Formular for overlap}
|\langle \Omega, \psi_P \rangle|^2 = \frac{1}{P^2/2 - E(P)} \frac{1}{\mu(T_1 e^{-P^2\sigma^2/2 + E(P)T_1})}.
\end{equation}
\end{remark}
\begin{cor}
$E$ is non-decreasing and strictly increasing on $\mathcal I_0$. For $|P| \notin \operatorname{cl}(\mathcal I_0)$ we have $\lim_{\lambda \uparrow E(P)}\langle \Omega, (H(P) - \lambda)^{-1} \Omega \rangle<\infty$ and $H(P)$ does not have a ground state.
\end{cor}
\begin{proof}
The strict monotonicity on $\mathcal I_0$ follows directly from Proposition \ref{Propposition: Matrix element of Resolvent}. For $P_1, P_2 \notin \mathcal I_0$ with $P_1 < P_2$ we always have
\begin{equation*}
\mu(e^{-P_2^2 \sigma^2/2 + E_{\operatorname{ess}}(0)T_1}) < \mu( e^{-P_1^2\sigma^2 /2 + E_{\operatorname{ess}}(0)T_1}) \leq 1
\end{equation*}
and hence
\begin{equation*}
\mu(e^{-P^2\sigma^2/2 + E_{\operatorname{ess}}(0)T_1}) < 1
\end{equation*}
for all $P\in \R^3$ such that $|P| \notin \operatorname{cl}(\mathcal I_0)$. Hence, for those $P$
\begin{equation*}
\lim_{\lambda \uparrow E(P)} \langle \Omega, (H(P) - \lambda)^{-1} \Omega \rangle = \frac{1}{P^2/2-E_{\operatorname{ess}}(0)} \cdot \frac{1}{1-\mu(e^{-P^2 \sigma^2 /2 + E_{\operatorname{ess}}(0) T_1})}
\end{equation*}
and $H(P)$ does not have a ground state (since it would need to be non-orthogonal to $\Omega$).
If $E$ were not non-decreasing, then $\mathcal I_0$ would not be an interval, i.e.\ there would exist $P_1 \in [0, \infty) \setminus \mathcal I_0$ and $P_2 \in \mathcal I_0$ such that $P_1 < P_2$. This, however, would imply
\begin{equation*}
1 = \mu(e^{-P_2^2\sigma^2/2 + E(P_2)T_1}) < \mu(e^{-P_1^2\sigma^2/2 + E_\text{ess}(0)T_1}) \leq 1. \qedhere
\end{equation*}
\end{proof}
\begin{cor}
\label{Corollary: Is I0 bounded?}
The interval $\mathcal I_0$ is bounded if and only if there exists a $P\geq 0$ such that
\begin{equation*}
\mu(e^{-P^2 \sigma^2/2 + E_\text{ess}(0)T_1}) = \widehat{\mu}(e^{-P^2 \sigma^2/2 + T_1}) < \infty
\end{equation*}
where $\widehat{\mu}$ is the probability measure defined by $\widehat{\mu}(\mathrm d \zeta) \coloneqq e^{E(0) T_1(\zeta)} \mu(\mathrm d \zeta)$.
\end{cor}
\begin{proof}
This easily follows from the monotone convergence theorem.
\end{proof}
\begin{cor}
\label{Corollary: Behavior arround origin}
We have
\begin{equation}
\label{Equation: FOrmular for effective mass}
m_{\text{eff}} = \frac{\widehat \mu(T_1)}{\widehat \mu(\sigma^2)} \in (1, \infty).
\end{equation}
\end{cor}
\begin{proof}
Let $P\in \mathcal I_0$ and $\lambda< E(P)$. Then
\begin{equation*}
\mu(e^{-P^2\sigma^2/2 + \lambda T_1}) = 1 - \frac{1}{P^2/2 - \lambda}\cdot \frac{1}{\langle \Omega, (H(P) - \lambda)^{-1} \Omega \rangle}.
\end{equation*}
The function $\lambda \mapsto \langle \Omega, (H(P) - \lambda)^{-1} \Omega \rangle^{-1}$ has a removable singularity at $E(P)$, since $E(P)$ is an isolated eigenvalue for $P \in \mathcal I_0$. This implies that there exists a $\tilde \varepsilon>0$ such that $\mu(e^{-P^2\sigma^2/2 + (E(P) + \tilde \varepsilon) T_1})<\infty$. Since $\sigma^2 \leq T_1$ there thus exist $\varepsilon, \delta>0$ such that
\begin{equation*}
\label{Equation: Differentiation under the integral}
\mu(e^{-(P-\delta)^2\sigma^2/2 + (E(P) + \varepsilon)T_1}) < \infty.
\end{equation*}
Hence, we may differentiate under the integral. Differentiating
\begin{equation*}
1 = \mu(e^{- P^2\sigma^2/2 + E(P) T_1})
\end{equation*}
twice with respect to $P$ and evaluating at $P=0$ yields the equality in \eqref{Equation: FOrmular for effective mass}. Notice that both integrals are finite by the previous considerations (or by \eqref{Equation: Formular for overlap} for that matter). Since $\sigma^2 \leq T_1$ and $\mu(\sigma^2 < T_1)>0$ the quotient is strictly larger than 1.
\end{proof}
\begin{cor}
$P \mapsto E(\sqrt{P})$ is strictly concave on $\mathcal I_0$. In particular
\begin{equation*}
E(P) - E(0) < \frac{1}{2m_{\text{eff}}} P^2
\end{equation*}
for all $P>0$, i.e. the correction to the quasi-particle energy is negative and $\big[0, \sqrt{2 m_\text{eff}}\, \big) \subset \mathcal I_0$.
\end{cor}
\begin{proof}
For $\lambda \in \tilde{\mathcal I_0} \coloneqq \{P^2:\, P \in \mathcal I_0\}$ let $h(\lambda)$ be the unique solution to
\begin{equation*}
\mu(e^{-\lambda \sigma^2/2 + h(\lambda) T_1}) = 1,
\end{equation*}
i.e. $h = E \circ \sqrt{\cdot}$.
Then, for $\lambda_1, \lambda_2 \in \tilde{\mathcal I_0}$ with $\lambda_1 \neq \lambda_2$ and $\beta \in (0, 1)$, Hölder's inequality with dual exponents $1/\beta$ and $1/(1-\beta)$ yields
\begin{align*}
&\mu(e^{-(\beta \lambda_1 + (1-\beta)\lambda_2) \sigma^2/2 + (\beta h(\lambda_1) + (1-\beta)h(\lambda_2)) T_1}) \\
&< \mu(e^{-\lambda_1 \sigma^2/2 + h(\lambda_1) T_1})^{\beta} \mu(e^{- \lambda_2 \sigma^2/2 + h(\lambda_2) T_1})^{1-\beta} = 1
\end{align*}
which means
$h(\beta \lambda_1 + (1-\beta) \lambda_2) > \beta h(\lambda_1) + (1-\beta)h(\lambda_2)$. Hence, $h$ is strictly concave on $\tilde{\mathcal I_0}$, which implies for all $P \in \mathcal I_0 \setminus \{0\}$
\begin{equation*}
E(P) - E(0) = h(P^2) - h(0) < h'(0) P^2 = \frac{1}{2}E''(0)P^2. \qedhere
\end{equation*}
\end{proof}
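Both the strict concavity of $h$ and the slope identity $h'(0)=1/(2m_{\text{eff}})$ from Corollary \ref{Corollary: Behavior arround origin} can be illustrated on a toy mixing measure with finitely many atoms $(\sigma^2_i, T_i)$ satisfying $\sigma^2_i\le T_i$. This is an illustrative sketch only: the atoms and weights are arbitrary, and the toy measure is normalized so that $h(0)=0$, unlike the actual measure $\mu$.

```python
import math

# Toy mixing measure: atoms (sigma2_i, T_i) with weights w_i, sigma2 <= T,
# weights summing to 1 so that h(0) = 0.
atoms = [(0.5, 1.0, 0.40), (1.0, 2.0, 0.35), (0.8, 3.0, 0.25)]  # (sigma2, T, w)

def G(lam, hval):
    return sum(w * math.exp(-lam * s2 / 2 + hval * T) for s2, T, w in atoms)

def h(lam):
    # Solve G(lam, h) = 1 for h by bisection (G is increasing in h)
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if G(lam, mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Strict midpoint concavity, as forced by Hoelder's inequality
l1, l2 = 0.2, 1.4
assert h((l1 + l2) / 2) > (h(l1) + h(l2)) / 2

# Slope at 0 matches 1/(2 m_eff) with m_eff = E[T_1]/E[sigma^2]
m_eff = sum(w * T for s2, T, w in atoms) / sum(w * s2 for s2, T, w in atoms)
eps = 1e-4
print((h(eps) - h(0.0)) / eps, 1.0 / (2 * m_eff))
```

Here $m_{\text{eff}}>1$ is automatic because $\sigma^2_i\le T_i$ with strict inequality on atoms of positive weight, mirroring the argument in the proof of Corollary \ref{Corollary: Behavior arround origin}.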
{\bf Acknowledgment:} The author would like to thank David Mitrouskas and Krzysztof Myśliwy for making him aware of some open problems concerning the energy-momentum relation. Additionally, he would like to thank Antti Knowles and Volker Betz for helpful comments on an earlier version of the paper. The author was partially supported by the Swiss
National Science Foundation grant 200020-200400.
\section{Introduction}
\label{IntroSect}\bigbreak
\noindent
Let $\mathfrak{F}_{\gk}$ be the category of finitely generated field extensions of a field~$\gk$,
and~$\CM_{\ast}$ a cycle module in the sense of Rost~\cite{Ro96} over~$\gk$, as for instance Milnor
$K$-theory or (abelian) Galois cohomology with finite coefficients. A {\it cohomological invariant} of
degree~$n$ of an algebraic group~$G$ over~$\gk$ with values in~$\CM_{\ast}$ is a natural
transformation
$$
a\, :\;\HM^{1}(\, -\, ,G)\,\longrightarrow\,\CM_{n}(\, -\, )
$$
of functors on~$\mathfrak{F}_{\gk}$. Here $\HM^{1}(\, -\, ,G)$ denotes the first non abelian Galois cohomology
of~$G$. Cohomological invariants are an old topic. For instance the discriminant, or the Clifford invariant
of a quadratic form can be interpreted as cohomological invariants of an orthogonal group. However the
formalization of this concept has been done only recently by Serre, see his lecture notes in~\cite{CohInv}
for a thorough account and some information on the history of the subject.
\smallbreak
In general the cohomological invariants of an algebraic group with values in a given cycle module are hard (if
not impossible) to compute. For most groups we know only some of the invariants, and even finding new ones
can be quite a task, as is exemplified in the construction of the Rost invariant, see \textsl{e.g.}\ Merkurjev's lecture
in~\cite{CohInv}. Besides the (natural) applications to the classification of algebraic groups and their torsors there
are further applications of the theory of cohomological invariants, as for instance to rationality questions around
Noether's problem, see \textsl{e.g.}~\cite[Part I, Sects.\ 33 and 34]{CohInv}.
\medbreak
The aim of this work, which is split into two parts, is the computation of the invariants of Weyl groups
with values in a cycle module, which is annihilated by~$2$, over a field of characteristic zero, which
contains a square root of~$-1$. The actual computation will be presented in the sequel~\cite{Hi19} to
this paper by the second named author, to which we also refer for a more precise description of the
result. Crucial for these investigations is the so called {\it splitting principle} for invariants of orthogonal
reflection groups. The proof of this principle is the content of this article. To formulate this result we recall
first the definition of an orthogonal reflection group over~$\gk$. Assume that $\khar\gk\not=2$. Let~$(V,b)$
be a regular symmetric bilinear space (of finite dimension) over~$\gk$ and~$\Orth (V,b)$ its orthogonal
group. A finite subgroup~$W$ of~$\Orth (V,b)$ is called a (finite) {\it orthogonal reflection group} over~$\gk$
if~$W$ is generated by reflections.
\medbreak
\noindent
{\bf Theorem.}
{\it
Let~$W$ be an orthogonal reflection group over the field~$\gk$. Assume
that $\khar {\gk}$ is coprime to the order of~$W$.
Then a cohomological invariant of degree~$n$ of~$W$ with values in a cycle
module~$\CM_{\ast}$ over~$\gk$
$$
a\, :\;\HM^{1}(\, -\, ,W)\,\longrightarrow\,\CM_{n}(\, -\, )
$$
is trivial if and only if its restrictions to all $2$-subgroups of~$W$, which are generated
by reflections, are trivial.
}
\medbreak
\noindent
Crucial for our proof of this theorem is the explicit description of a versal $W$-torsor over~$\gk$,
which we give in Section~\ref{ReflGrVerTorSubSect}. This construction uses the fact that~$W$ is
a subgroup of the orthogonal group of some regular symmetric bilinear space over~$\gk$. Hence,
although~$W$ is defined as a finite group scheme over an arbitrary field, we prove the
splitting principle only for invariants over fields where~$W$ has a faithful orthogonal representation.
\smallbreak
We want to point out that our arguments here work also for Witt- and Milnor-Witt $K$-theory invariants of
orthogonal reflection groups, \textsl{i.e.}\ the splitting principle holds also for such invariants. We explain the
necessary modifications in the last section of this work. However, the computations of Witt- and Milnor-Witt
$K$-theory invariants of Weyl groups are a different story and are not touched upon in the second part of this
work. In fact, at least the Milnor-Witt $K$-theory invariants even of symmetric groups seem to be unknown.
\medbreak
For~$W$ a Weyl group the splitting principle in our theorem above has already been announced
by Serre in his lectures~\cite[Part I, 25.15]{CohInv}. It plays an important role in his computation of
cohomological invariants of the symmetric group with values in $\HM^{\ast}(\, -\, ,\mathbb{Z}/2)$ over an
arbitrary field. Note also that Ducoat claims this principle for the special case of invariants of Coxeter
groups with values in Galois cohomology with finite coefficients over (big enough) fields of characteristic
zero in his unpublished preprint~\cite{Du11}.
\medbreak
This article (except for the last section) as well as its sequel~\cite{Hi19} are based on the
2010 Diploma thesis~\cite{Hi10} of the second named author. This diploma thesis
does not deal with cycle modules of Rost, but with $\mathbb{A}^{1}$-invariant sheaves with
$\MK_{\ast}/2$-structure. However, the proof of the splitting principle there has some
gaps and flaws.
\smallbreak
Our original intention was to write this article and its sequel in
the same setting but we refrained from this for the following two reasons. On the one hand,
it has turned out to be much easier and also shorter to give a complete proof of the splitting
principle in the slightly more restrictive setting of Rost's cycle modules. And on the other hand, we
believe that the most interesting invariants are in any case Galois cohomology or Milnor
$K$-theory (modulo some integer) invariants, which are both cycle modules, or Witt invariants.
An advantage of this restriction is also that the article is readable for readers only interested
in such invariants. They can assume throughout that the cycle module~$\CM_{\ast}$ in
question is one of their favorite theories.
\smallbreak
Moreover, according to Morel~\cite[Rem.\ 2.5]{A1AlgTop} such $\mathbb{A}^{1}$-invariant sheaves with
$\MK_{\ast}/2$-structure are the same as Rost cycle modules which are annihilated by~$2$, and
so we recover the computations of the diploma thesis~\cite{Hi10} in their full generality working
only with cycle modules.
\bigbreak
The content of this article is as follows. The main theorem is proven in Section~\ref{SpPrincipleSect}.
In Section~\ref{CycleModSect} we recall some definitions and facts about
Rost's cycle modules and Galois cohomology, mainly to fix notations and conventions. In the
following Section~\ref{WKSect} we recall the definition of a cohomological
invariant with values in a cycle module and prove some auxiliary results needed for
the proof of the main theorem. Except for the proof of a kind of specialization theorem,
our Theorem~\ref{specializationThm}, all arguments in Section~\ref{WKSect} are only
slight modifications of those of Serre in his lectures~\cite[Part I]{CohInv}. However,
the proof of Theorem~\ref{specializationThm} is more involved since a cycle module
over a field~$\gk$ is only defined for finitely generated field extensions of~$\gk$ and
so not for the completion or henselization of~$\gk$.
\smallbreak
Finally, in the last section we discuss the case of Witt-, and Milnor-Witt $K$-theory invariants.
Our proof of the splitting principle for invariants of reflection groups with values in cycle modules
carries over to this situation as well.
\bigbreak
\noindent
{\bf Acknowledgement.}
We would like to thank Fabien Morel for advice and fruitful discussions around this work.
This work started (and slept long in between) more than 10 years ago, when one of us
(S.G.) was an assistant and the other (C.H.) a Diploma student of Fabien at the LMU Munich.
The first named author would also like to thank Volodya Chernousov, now his colleague
at the University of Alberta. Visiting Volodya in March 2008 made a crucial impact on
this work.
\bigbreak\bigbreak
\begin{emptythm}
\label{NotationsSubSect}
{\bf Notations.}
Given a field~$\gk$ we denote by~$(\gk)_{s}$ its separable closure, and by
$\Gamma_{\gk}:=\Gal ((\gk)_{s}/\gk)$ its absolute Galois group.
\smallbreak
We denote by $\mathrm{Fields}_{\gk}$ the category of all field
extensions of~$\gk$. More precisely, the objects of $\mathrm{Fields}_{\gk}$ are pairs $(L,j)$,
where~$L$ is a field and $j:\gk\longrightarrow L$ a homomorphism of fields. A morphism
$(E,i)\longrightarrow (L,j)$ is a morphism of fields $\varphi:E\longrightarrow L$,
such that
$$
\xymatrix{
E \ar[rr]^-{\varphi} & & L
\\
& \gk \ar[ru]_-{j} \ar[lu]^-{i}
}
$$
commutes. For ease of notation the structure morphism will not be mentioned,
\textsl{i.e.}\ we write simply~$L$ instead of~$(L,j)$.
\smallbreak
If~$(\ell ,\iota)\in\mathrm{Fields}_{\gk}$ then $\mathrm{Fields}_{\ell}$ can be identified with a full subcategory
of~$\mathrm{Fields}_{\gk}$ via the embedding $(L,j)\mapsto (L,j\circ\iota)$, which depends on
the structure morphism $\iota:\gk\longrightarrow\ell$.
\smallbreak
The symbol~$\mathfrak{F}_{\gk}$ denotes the full subcategory of~$\mathrm{Fields}_{\gk}$ consisting of
finitely generated field extensions of~$\gk$, \textsl{i.e.}\ of pairs~$(L,j)$, where~$L$ is a field and
$j:\gk\longrightarrow L$ a homomorphism of fields giving~$L$ the structure of a finitely generated field
extension of~$\gk$. Again we can identify $\mathfrak{F}_{\ell}$ with a full subcategory of~$\mathfrak{F}_{\gk}$
for all~$\ell\in\mathfrak{F}_{\gk}$.
\end{emptythm}
\goodbreak
\section{Preliminaries: Cycle modules, Galois cohomology and torsors}
\label{CycleModSect}\bigbreak
\begin{emptythm}
\label{CMSubSect}
{\bf Cycle modules.}
These have been invented by Rost~\cite{Ro96}
to facilitate Chow group computations. We refer to this article for details and more
information.
\smallbreak
The prototype of a cycle module is Milnor $K$-theory, which has been introduced by Milnor~\cite{Mi69/70},
and which we denote by
$$
\MK_{\ast}(F)\, :=\;\bigoplus\limits_{n\geq 0}\MK_{n}(F)
$$
for a field~$F$. Recall that this is a graded ring, which as an abelian group is generated
by the pure symbols $\{ x_{1},\ldots ,x_{n}\}\in\MK_{n}(F)$, where $x_{1},\ldots ,x_{n}$
are non-zero elements of~$F$.
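\smallbreak
Recall also the defining relations: a pure symbol is multiplicative in each of its entries, the product of two pure symbols is given by concatenation,
$$
\{ x_{1},\ldots ,x_{n}\}\cdot\{ y_{1},\ldots ,y_{m}\}\, =\,\{ x_{1},\ldots ,x_{n},y_{1},\ldots ,y_{m}\}\, ,
$$
and the Steinberg relation $\{ x,1-x\}\, =\, 0$ holds in~$\MK_{2}(F)$ for all $x\in F\setminus\{ 0,1\}$.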
\medbreak
A {\it cycle module over a field~$\gk$} is a covariant functor
$$
\CM_{\ast}\, :\;\mathfrak{F}_{\gk}\,\longrightarrow\,\grAb\, ,\; F\,\longmapsto\,\CM_{\ast}(F)\, =\,
\bigoplus\limits_{n\in\mathbb{Z}}\CM_{n}(F)\, ,
$$
where $\grAb$ denotes the category of graded abelian groups, such that $\CM_{\ast}(F)$
is a graded $\MK_{\ast}(F)$-module for all $F\in\mathfrak{F}_{\gk}$. Halfway following
Rost~\cite{Ro96}, and deviating somewhat from the usual conventions, we denote by $\varphi_{\CM}$
the morphism $\CM_{\ast}(F)\longrightarrow\CM_{\ast}(E)$ induced by a morphism $\varphi:F\longrightarrow E$
in~$\mathfrak{F}_{\gk}$.
\end{emptythm}
\begin{emptythm}
\label{2edResMapSubSect}
{\bf The second residue map.}
Let~$v$ be a discrete valuation of $F\in\mathfrak{F}_{\gk}$ of geometric type which is trivial on~$\gk$.
By this we mean that there exists a normal integral $\gk$-scheme~$X$ of finite type, such that the function
field~$\gk (X)$ is equal to~$F$, and such that~$v$ corresponds to a codimension one point in~$X$. Then
there is a $\MK_{\ast}(\gk)$-linear homomorphism, the so called {\it (second) residue map}:
$$
\partial_{v}\, :\;\CM_{\ast}(F)\,\longrightarrow\,\CM_{\ast -1}(F(v))\, ,
$$
where~$F(v)$ is the residue field of~$v$.
\smallbreak
Associated with this homogeneous homomorphism of degree~$-1$ there is a
homogeneous homomorphism of degree~$0$, the so called {\it specialization
homomorphism}:
$$
s_{v}^{\pi}\, :\;\CM_{\ast}(F)\,\longrightarrow\,\CM_{\ast}(F(v))\, ,\; x\,\longmapsto\,\partial_{v}\big(\{\pi\}\cdot x\big)\, ,
$$
which depends on the choice of a uniformizer~$\pi$ for~$v$.
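\smallbreak
For the prototype $\CM_{\ast}=\MK_{\ast}$ both maps are classical: if $u_{1},\ldots ,u_{n}$ are units of the valuation ring of~$v$ with residue classes $\bar{u}_{1},\ldots ,\bar{u}_{n}\in F(v)$, then
$$
\partial_{v}\big(\,\{ u_{1},\ldots ,u_{n}\}\,\big)\, =\, 0\, ,\qquad
\partial_{v}\big(\,\{\pi ,u_{1},\ldots ,u_{n-1}\}\,\big)\, =\,\{\bar{u}_{1},\ldots ,\bar{u}_{n-1}\}\, ,
$$
and consequently $s_{v}^{\pi}\big(\{ u_{1},\ldots ,u_{n}\}\big)=\{\bar{u}_{1},\ldots ,\bar{u}_{n}\}$, \textsl{i.e.}\ $\partial_{v}$ is the classical tame symbol.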
\smallbreak
Recall the following three axioms, which play some role in the next section.
Let $F,v,F(v)$ be as above and $\varphi: F\longrightarrow E$ a finite field extension. Assume
there is a geometric valuation~$w$ on~$E$ with residue field~$E(w)$ and with
$w|_{F}=v$. Let~$e_{w|v}$ be the ramification index and $\bar{\varphi}:F(v)\longrightarrow E(w)$
the induced homomorphism of the residue fields. Then
the following holds (numbering as in Rost~\cite[p.\ 329]{Ro96}):
\smallbreak
\begin{itemize}
\item[{\bf (R3a)}]
$\partial_{w}\circ\varphi_{\CM}\, =\, e_{w|v}\cdot\bar{\varphi}_{\CM}\circ\partial_{v}$;
\smallbreak
\item[{\bf (R3c)}]
if~$w$ is trivial on~$F$, so that~$v$ is the trivial valuation and~$F(v)=F$, then $\partial_{w}\circ\varphi_{\CM}\, =0$; and
\smallbreak
\item[{\bf (R3d)}]
if~$w$ is as in {\bf (R3c)} and~$\pi$ is a uniformizer for~$w$ then $s_{w}^{\pi}\circ\varphi_{\CM}\, =\,\bar{\varphi}_{\CM}$.
\end{itemize}
\end{emptythm}
\begin{emptythm}
\label{UnrCMSubSect}
{\bf Unramified cycle modules.}
Let~$X$ be an integral scheme, which is essentially of finite type over a field~$\gk$. By
the latter we mean that~$X$ is a finite type $\gk$-scheme or a localization of such a
scheme. Denote by~$X^{(1)}$ the set of points of codimension~$1$ in~$X$. If a point~$x$ in~$X^{(1)}$
is regular, then its local ring~${\mathcal O}_{X,x}$ is a discrete valuation ring and we get a valuation~$v_{x}$
on the function field~$\gk (X)$ of~$X$.
\smallbreak
Given a cycle module~$\CM_{\ast}$ we have then a second residue map
$$
\partial_{x}\, :=\;\partial_{v_{x}}\, :\;\CM_{\ast}(\gk (X))\,\longrightarrow\,\CM_{\ast -1}(\gk (x))\, ,
$$
where~$\gk(x)$ denotes the residue field of~$x$, as well as a specialization map
$$
s_{x}^{\pi}\, :=\; s_{v_{x}}^{\pi}\, :\;\CM_{\ast}(\gk (X))\longrightarrow\CM_{\ast}(\gk (x))
$$
for every uniformizer $\pi\in{\mathcal O}_{X,x}$.
If~$X$ is regular in codimension one, then $\partial_{x}$ exists for all~$x\in X^{(1)}$ and so
we can define
$$
\CM_{n,unr}(X)\, :=\;\Ker\big(\,\CM_{n}(\gk (X))\,\xrightarrow{\; (\partial_{x})_{x\in X^{(1)}}\;}\,
\bigoplus\limits_{x\in X^{(1)}}\CM_{n-1}(\gk (x))\,\big)\, ,
$$
the so called {\it unramified $\CM_{n}$-cohomology group} of~$X$.
\end{emptythm}
\begin{emptythm}
\label{H1SubSect}
{\bf Non abelian Galois cohomology.}
We recall now -- mainly to fix notations -- some definitions and properties of torsors and
non abelian Galois cohomology sets. We refer to Serre's well known book~\cite{GalCoh}
and also to~\cite[\S 28 and \S 29]{BookInv} for details and more information.
\medbreak
Let~$F$ be a field and $G$ a linear algebraic group over~$F$. We denote by
$\HM^{1}(F,G)$ the {\it first non abelian Galois cohomology} set,
\textsl{i.e.}\ $\HM^{1}(F,G)=\HM^{1}(\Gamma_{F},G(F_{s}))$. If a continuous maps
$c:\Gamma_{F}\longrightarrow G(F_{s})$, $\sigma\mapsto c_{\sigma}$ is a cycle, \textsl{i.e.}\ represents
an element of~$\HM^{1}(F,G)$, then we denote its class in~$\HM^{1}(F,G)$ by~$[c]$.
\smallbreak
If $\varphi:F\longrightarrow E$ is a morphism of fields we denote the induced {\it restriction map}
$\HM^{1}(F,G)\longrightarrow\HM^{1}(E,G)$ by $r_{\varphi}$, or if~$\varphi$ is clear from the
context by $r_{E/F}$.
\smallbreak
If $\theta:H\longrightarrow G$ is a morphism of linear algebraic groups over~$F$ we denote
following~\cite{BookInv} the induced homomorphism $\HM^{1}(F,H)\longrightarrow\HM^{1}(F,G)$
by~$\theta^{1}$.
\smallbreak
In the proof of Theorem~\ref{specializationThm} below we consider also the first non
abelian \'etale cohomology set $\HM^{1}_{et}(X,G)$, where~$X$ is a scheme over~$F$
and~$G$ a linear algebraic group over~$F$. If $f:X\longrightarrow Y$ is a morphism of such schemes
we denote the pull-back map $\HM^{1}_{et}(Y,G)\longrightarrow\HM^{1}_{et}(X,G)$ by~$r_{f}$. We write
then also $T_{X}$ instead of $r_{f}(T)$ for $T\in\HM^{1}_{et}(Y,G)$ if~$f$ is clear from
the context.
\smallbreak
Note that since~$G$ is smooth the set~$\HM^{1}_{et}(X,G)$ can be identified with the isomorphism
classes of $G$-torsors $\pi:\sheaf{T}\longrightarrow X$ over~$X$, see \textsl{e.g.}\ ~\cite[Chap.\ III.4]{EC}. We denote
the class of a $G$-torsor $\pi:\sheaf{T}\longrightarrow X$ over~$X$ by $[\sheaf{T}\longrightarrow X]$.
\smallbreak
We use in the following also affine notations, \textsl{i.e.}\ $\HM^{1}_{et}(R,G)$ instead of $\HM^{1}_{et}(X,G)$
if $X=\Spec R$ is affine. Note that if~$X=\Spec K$ is the spectrum of a field then
$\HM^{1}_{et}(X,G)$ is naturally isomorphic to~$\HM^{1}(K,G)$.
\end{emptythm}
\begin{emptythm}
\label{GaloisExpl}
{\bf Example.}
Let~$G$ be a finite group with trivial $\Gamma_{F}$-action, where~$F$ is a field.
Then the non abelian Galois cohomology set $\HM^{1}(F,G)$ can be identified with
the isomorphism classes of $G$-Galois algebras, see \textsl{e.g.}\ ~\cite[V.14]{IGC},
or~\cite[\S 18B]{BookInv}.
\smallbreak
A particular example of such a $G$-Galois algebra is a finite Galois extension $E\supset F$
with group $\Gal (E/F)=G$. Then the continuous homomorphism of groups
$c:\Gamma_{F}\longrightarrow G$, $\sigma\mapsto\sigma|_{E}$, represents the class of the
$G$-Galois algebra~$E$ in~$\HM^{1}(F,G)$, see \textsl{e.g.}\ ~\cite[Thm.\ V.14.17]{IGC}.
\smallbreak
If $\theta:H\subset G$ is the inclusion of a subgroup with fixed field~$L=E^{H}$ then the
class of the restriction $r_{L/F}([c])$ is represented by the continuous homomorphism
$c|_{\Gamma_{L}}:\Gamma_{L}\subset\Gamma_{F}\xrightarrow{c}G$, whose
image is in the subgroup~$H$. It follows that
$$
r_{L/F}([c])\, =\,\theta^{1}([c'])\, ,
$$
where $[c']\in\HM^{1}(L,H)$ is represented by
$c':\Gamma_{L}\xrightarrow{c|_{\Gamma_{L}}}H$.
\end{emptythm}
\begin{emptythm}
\label{VerTorSubSect}
{\bf Versal torsors.}
Let~$T\in\HM^{1}_{et}(X,G)$ be a $G$-torsor over the smooth integral $F$-scheme~$X$
with function field $K=F(X)$. Assume that for every infinite field extension~$L$ of~$F$
and every element $\tilde{T}\in\HM^{1}(L,G)$ there exists an $L$-point~$x$ of~$X$, such that
$T_{F(x)}=\tilde{T}$ in $\HM^{1}(F(x),G)=\HM^{1}(L,G)$. Then~$T_{K}\in\HM^{1}(K,G)$ is called
a {\it versal $G$-torsor}, see~\cite[Part I, Def.\ 5.1]{CohInv}.
\end{emptythm}
\begin{emptythm}
\label{VerTorExpl}
{\bf Example.}
Let~$G$ be a finite group which acts faithfully on the finite dimensional
$\gk$-vector space~$V$, where~$\gk$ is a field. Then~$G$ acts on the
dual space $V^{\vee}:=\Hom_{\gk}(V,\gk)$ via $(h.f)(v):=f(h^{-1}.v)$
for all $f\in V^{\vee}$, $v\in V$, and~$h\in G$. This induces a
$G$-action on $\mathbb{A} (V):=\Spec\SymAlg (V^{\vee})$, where~$\SymAlg (V^{\vee})$
denotes the symmetric algebra of~$V^{\vee}$. For~$g\in G$ denote by
$\sheaf{V}_{g}$ the closed subset of~$\mathbb{A} (V)$ defined by the ideal generated
by all $f\circ (g-\id_{V})\in V^{\vee}$, $f\in V^{\vee}$. The group~$G$ acts freely
on the open set
$$
U\, :=\;\mathbb{A} (V)\,\setminus\;\bigcup\limits_{g\in G}\sheaf{V}_{g}\, ,
$$
and so the quotient morphism $q:U\longrightarrow U/G$ is a $G$-torsor. The generic
fiber of this torsor is a versal $G$-torsor, see~\cite[Part I, 5.4 and 5.5]{CohInv}
for a proof. Note that the function field~$\gk (U/G)$ is the fraction field of the
invariant ring $\SymAlg (V^{\vee})^{G}$. Hence the class of this versal $G$-torsor
in $\HM^{1}(\gk (U/G),G)$ is the class of the $G$-Galois algebra $\gk (U)$, \textsl{i.e.}\
the class of the Galois extension $\gk (U)\supseteq\gk(U)^{G}=\gk (U/G)$.
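\smallbreak
A classical instance is the symmetric group $G=S_{n}$ acting on $V=\gk^{n}$ by permuting the coordinates. Here the invariant ring $\SymAlg (V^{\vee})^{S_{n}}$ is the polynomial ring $\gk [e_{1},\ldots ,e_{n}]$ in the elementary symmetric functions, and $\gk (U)=\gk (t_{1},\ldots ,t_{n})$ is the splitting field of the generic polynomial
$$
T^{n}\, -\, e_{1}T^{n-1}\, +\,\ldots\, +\, (-1)^{n}e_{n}\,\in\,\gk (e_{1},\ldots ,e_{n})[T]\, ,
$$
so the versal $S_{n}$-torsor is the class of this Galois extension in $\HM^{1}(\gk (e_{1},\ldots ,e_{n}),S_{n})$.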
\end{emptythm}
\goodbreak
\section{Invariants in cycle modules}
\label{WKSect}\bigbreak
\begin{emptythm}
\label{InvCMDef}
{\bf Definition.}
Let~$G$ be a linear algebraic group and~$\CM_{\ast}$ a cycle module over the field~$\gk$.
A {\it cohomological invariant of degree~$n$} of~$G$ with values in the cycle module~$\CM_{\ast}$ is
a natural transformation of functors
$$
a\, :\;\HM^{1}(\, -\, ,G)\,\longrightarrow\,\CM_{n}(\, -\,)\, ,
$$
\textsl{i.e.}\ for all $\varphi:F\longrightarrow E$ in~$\mathfrak{F}_{\gk}$ the following diagram commutes:
$$
\xymatrix{
\HM^{1}(E,G) \ar[r]^-{a_{E}} & \CM_{n}(E)
\\
\HM^{1}(F,G) \ar[r]_-{a_{F}} \ar[u]^-{r_{\varphi}} & \CM_{n}(F) \ar[u]_-{\varphi_{\CM}} \rlap{\, .}
}
$$
This definition is due to Serre and is a special case of the one given in his lectures~\cite[Part I]{CohInv}, but
it includes the main player of Serre's text, cohomological invariants of algebraic groups with coefficients
in Galois cohomology. Note however that there is a subtle difference as we consider here only the
category~$\mathfrak{F}_{\gk}$ of finitely generated field extensions of~$\gk$ and not the category~$\mathrm{Fields}_{\gk}$
of all field extensions of~$\gk$. This is forced by the fact that for technical reasons an "abstract" cycle module
is not defined for all field extensions of a given field but only for the finitely generated ones. If one is
only interested in "concrete" cycle modules as for instance Milnor $K$-theory, Witt groups, or Galois
cohomology, there is no need for this restriction. However, one may wonder whether, although~$\HM^{1}(\, -\, ,G)$
and the value groups are defined for all $F\in\mathrm{Fields}_{\gk}$, there exist
invariants which are only defined for finitely generated field extensions of~$\gk$.
\medbreak
Following Serre's lectures~\cite[Part I]{CohInv} we denote the set of cohomological invariants of degree~$n$
of the group~$G$ with values in~$\CM_{\ast}$ by $\Inv_{\gk}^{n}(G,\CM_{\ast})$. The set $\Inv^{n}_{\gk}(G,\CM_{\ast})$
has the structure of an abelian group as~$\CM_{n}(F)$ is one for all $F\in\mathfrak{F}_{\gk}$.
\smallbreak
We set
$$
\Inv_{\gk}(G,\CM_{\ast})\, :=\;
\bigoplus\limits_{n\in\mathbb{Z}}\Inv^{n}_{\gk}(G,\CM_{\ast})\, ,
$$
and call elements of this direct sum {\it cohomological invariants} of~$G$ with
values in~$\CM_{\ast}$. Note that the $\MK_{\ast}$-structure of~$\CM_{\ast}$
induces an action of $\MK_{\ast}(\gk)$ on~$\Inv_{\gk}(G,\CM_{\ast})$, making
the set of cohomological invariants of~$G$ with values in~$\CM_{\ast}$ a
graded $\MK_{\ast}(\gk)$-module.
\end{emptythm}
\begin{emptythm}
\label{ConstInvSubSect}
{\bf Constant invariants.}
We have $\Inv_{\gk} (G,\CM_{\ast})\not= 0$ if and only if $\CM_{\ast}(\gk)\not= 0$. In fact,
if $x\in\CM_{\ast}(\gk)$ then
$$
c_{L}\, :\;\HM^{1}(L,G)\,\longrightarrow\,\CM_{\ast}(L)\, ,\; t\,\longmapsto\, (\iota_{L})_{\CM}(x)\, ,
$$
where $L\in\mathfrak{F}_{\gk}$ with structure map $\iota_{L}:\gk\longrightarrow L$, defines an
invariant. Such invariants are called {\it constant}, respectively, {\it constant of degree~$n$}
if $x\in\CM_{n}(\gk)$, and we write $c\equiv x\in\CM_{\ast}(\gk)$.
\end{emptythm}
\begin{emptythm}
\label{ResInvSubSect}
{\bf Restriction of invariants.}
Let $\theta:H\longrightarrow G$ be a morphism of linear algebraic groups over~$\gk$ and~$\CM_{\ast}$
a cycle module over~$\gk$. Composing $a\in\Inv_{\gk} (G,\CM_{\ast})$ with the map~$\theta^{1}$:
$$
\HM^{1}(\, -\, ,H)\,\xrightarrow{\;\theta^{1}\;}\HM^{1}(\, -\, ,G)\,\xrightarrow{\; a\;}\,\CM_{\ast}(\, -\,)
$$
we get an invariant $\theta^{\ast}(a)\in\Inv_{\gk} (H,\CM_{\ast})$. In case $\theta:H\subseteq G$
is the embedding of a closed subgroup we denote following Serre's lecture~\cite[Part I]{CohInv}
the induced homomorphism $\theta^{\ast}:\Inv_{\gk} (G,\CM_{\ast})\longrightarrow\Inv_{\gk} (H,\CM_{\ast})$ by
$\Res_{G}^{H}$ and call it the {\it restriction}.
\end{emptythm}
\begin{emptythm}
\label{InnAutExpl}
{\bf Example.}
Let~$H$ be a subgroup of a finite group~$G$, $\CM_{\ast}$ a cycle
module over~$\gk$, and $a\in\Inv_{\gk} (G,\CM_{\ast})$. If~$g$ is an
element of the normalizer~$\Norm_{G}(H)$ we denote by~$\iota_{g}$
the inner automorphism of~$G$ defined by~$g$. The automorphism $\iota_{g}$ restricts
to~$H$ and so acts via $\iota_{g}^{\ast}:a\mapsto a\circ\iota_{g}^{1}$
on $\Inv_{\gk} (H,\CM_{\ast})$ for all $g\in\Norm_{G}(H)$, giving~$\Inv_{\gk} (H,\CM_{\ast})$ the
structure of a $\Norm_{G}(H)$-module. We claim that $\Res_{G}^{H}$ maps
$\Inv_{\gk} (G,\CM_{\ast})$ into the subgroup $\Inv_{\gk} (H,\CM_{\ast})^{\Norm_{G}(H)}$ of
$\Norm_{G}(H)$-invariant elements in~$\Inv_{\gk} (H,\CM_{\ast})$. In fact, by~\cite[Part I, Prop.\ 13.1]{CohInv}
the map $\iota_{g}^{1}:\HM^{1}(L,G)\longrightarrow\HM^{1}(L,G)$ is the identity for all $L\in\mathfrak{F}_{\gk}$.
Since we have $\Res_{G}^{H}\circ\iota_{g}^{\ast}=\iota_{g}^{\ast}\circ\Res_{G}^{H}$ this implies the claim.
\medbreak
We now prove the following specialization theorem, which is an analog of a result
of Rost~\cite[Part I, Thm.\ 11.1]{CohInv} about Galois cohomology invariants.
\end{emptythm}
\begin{emptythm}
\label{specializationThm}
{\bf Theorem.}
{\it
Let~$X$ be an integral scheme with function field~$K=\gk(X)$, which is essentially of finite type
over the field~$\gk$. Let further~$\CM_{\ast}$ be a cycle module over~$\gk$, $a\in\Inv_{\gk} (G,\CM_{\ast})$,
where~$G$ is a linear algebraic group over~$\gk$, and $T\in\HM^{1}_{et}(X,G)$. Let~$x\in X$
be a regular codimension one point. Then we have:
\smallbreak
\begin{itemize}
\item[(i)]
$\partial_{x}\big(\, a_{K}(T_{K})\,\big)\, =0$; and
\smallbreak
\item[(ii)]
$s_{x}^{\pi}(a_{K}(T_{K}))\, =\, a_{\gk (x)}(T_{\gk (x)})$ for all local uniformizers~$\pi\in{\mathcal O}_{X,x}$.
\end{itemize}
In particular, if~$X$ is regular in codimension one, then $a_{K}(T_{K})\in\CM_{\ast,unr}(X)$.
}
\begin{proof}
Replacing~$X$ by~$\Spec{\mathcal O}_{X,x}$ we can assume that $X=\Spec R$ for a discrete valuation
ring~$R$, which is essentially of finite type over~$\gk$, and that~$x$ is the closed point of~$X$.
We denote by~$k=\gk (x)$ the residue field of~$R$, $q:R\longrightarrow k$ the quotient map, $\eta:R\longrightarrow K$ the embedding
of~$R$ into its field of fractions~$K$, and~$v$ the discrete valuation of~$K$ corresponding to~$R$.
Then $\partial_{x}=\partial_{v}:\CM_{\ast}(K)\longrightarrow\CM_{\ast -1}(k)$, and for~(i) we have to show
$\partial_{v}\big(\, a_{K}(T_{K})\,\big)=0$.
\smallbreak
To prove this let $\psi: R\longrightarrow R^{h}$ be the henselization of~$R$ with fraction field~$K^{h}$.
This is also a discrete valuation ring with the same residue field~$k$, and there exist local
\'etale $R$-algebras $\varphi_{i}:R\longrightarrow R_{i}$, $i\in I$, such that $R^{h}=\lim\limits_{i\in I} R_{i}$,
see~\cite[Chap.\ VIII]{ALH}. The rings~$R_{i}$ are also discrete valuation rings with~$k$ as
residue field. We denote by~$v_{i}$ the induced valuation on the fraction field~$K_{i}$
of~$R_{i}$. We get a commutative diagram of homomorphisms of rings for all~$i\in I$:
$$
\xymatrix{
K \ar[r]^-{\varphi'_{i}} & K_{i} \ar[r]^-{\psi'_{i}} & K^{h}
\\
R \ar[r]_-{\varphi_{i}} \ar[u]^-{\eta} & R_{i} \ar[r]_-{\psi_{i}} \ar[u]_-{\eta_{i}} & R^{h} \ar[u]_-{\eta^{h}} \rlap{\, ,}
}
$$
where the upward arrows are the respective inclusions of the rings~$R,R_{i}$, and~$R^{h}$ into
their fraction fields.
\medbreak
The homomorphism $\varphi_{i}:R\longrightarrow R_{i}$ is unramified at the maximal ideal of~$R_{i}$, and so
by the cycle module Axiom~{\bf (R3a)}, see Section~\ref{2edResMapSubSect}, the square on the
right hand side of the following diagram commutes
\begin{equation}
\label{SpPfEq0}
\xymatrix{
\HM^{1}(K_{i},G) \ar[r]^-{a_{K_{i}}} & \CM_{\ast}(K_{i}) \ar[r]^-{\partial_{v_{i}}} & \CM_{\ast -1}(k)
\\
\HM^{1}(K,G) \ar[r]^-{a_{K}} \ar[u]^-{r_{\varphi_{i}'}} & \CM_{\ast}(K) \ar[u]^-{(\varphi_{i}')_{\CM}}
\ar[r]^-{\partial_{v}} & \CM_{\ast -1}(k) \ar[u]_-{=}
}
\end{equation}
for all~$i\in I$. Since~$a$ is an invariant the square on the left hand side commutes as well.
Therefore to prove $\partial_{v}(a_{K}(T_{K}))=0$ it is enough to show that there exists~$i\in I$,
such that
$$
\partial_{v_{i}}\big(\, a_{K_{i}}(T_{K_{i}})\,\big)\, =\,
\partial_{v_{i}}\big(\,a_{K_{i}}(r_{\varphi'_{i}}(T_{K}))\,\big)\; =\, 0\, .
$$
To find this~$i\in I$ we use the fact that there exists a splitting $j: k\longrightarrow R^{h}$
of the quotient morphism $q^{h}:R^{h}\longrightarrow k$, \textsl{i.e.}\ $q^{h}\circ j=\id_{k}$. In fact,
if~$\hat{R}$ is the completion of~$R$ we have a splitting $\hat{j}:k\longrightarrow\hat{R}$
of the quotient map $\hat{R}\longrightarrow k$ by the Cohen structure theorem. This splitting
factors via~$R^{h}$ since the henselian local domain~$R^{h}$ is excellent
by~\cite[Cor.\ 18.7.6]{EGA4-4}, and therefore satisfies the approximation property
by~\cite[Sect.\ 3.6, Cor.\ 9]{NM}, which implies in particular that the splitting~$\hat{j}$
of $\hat{R}\longrightarrow k$ factors via~$R^{h}$.
\smallbreak
The composition of maps
$\HM^{1}(k,G)\xrightarrow{r_{j}}\HM^{1}_{et}(R^{h},G)\xrightarrow{r_{q^{h}}}\HM^{1}(k,G)$
is the identity, and by~\cite[Chap.\ XXIV, Prop.\ 8.1]{SGA3-3} the map
$r_{q^{h}}:\HM^{1}_{et}(R^{h},G)\longrightarrow\HM^{1}(k,G)$ is an isomorphism. Therefore
$r_{j}:\HM^{1}(k,G)\longrightarrow\HM^{1}_{et}(R^{h},G)$ is one as well, and moreover we have
$r_{j}(T_{k})=T_{R^{h}}$ since $r_{q^{h}}(T_{R^{h}})=r_{q^{h}}(r_{\psi}(T_{R}))=r_{q}(T_{R})=T_{k}$.
\smallbreak
Let~$k_{i}$ be the pre-image of $j(k)$ under the homomorphism $\psi_{i}:R_{i}\longrightarrow R^{h}$.
The set~$k_{i}\setminus\{ 0\}$ is contained in the set of units of~$R_{i}$, and so~$k_{i}$
is a field. This implies also that $v_{i}$ is trivial on~$k_{i}$. The $\gk$-linear quotient
homomorphism $q_{i}:R_{i}\longrightarrow k$ maps~$k_{i}$ onto a subfield of~$k$. This implies
by~\cite[Chap.\ 5, \S 14, No 7, Cor.\ 3]{ALG4-7} that~$k_{i}$ is also a finitely generated
field extension of~$\gk$ and so in~$\mathfrak{F}_{\gk}$ for all~$i\in I$.
\smallbreak
By the definition of the fields~$k_{i}$ we have a commutative diagram
\begin{equation}
\label{SpPfEq2}
\xymatrix{
k \ar[r]^-{j} & R^{h}
\\
k_{i} \ar[r]_-{j_{i}} \ar[u]^-{\bar{\psi}_{i}} & R_{i} \ar[u]_-{\psi_{i}}
}
\end{equation}
for all~$i\in I$, where $j_{i}$ is the inclusion $k_{i}\subset R_{i}$
and $\bar{\psi}_{i}$ the homomorphism induced by~$\psi_{i}$. Note that
$\bar{\psi}_{i}=q_{i}\circ j_{i}$ as $q^{h}\circ j=\id_{k}$.
\smallbreak
Diagram~(\ref{SpPfEq2}) gives in turn a commutative
diagram of pointed non abelian cohomology sets
\begin{equation}
\label{SpPfEq3}
\xymatrix{
\HM^{1}(k,G) \ar[r]^-{r_{j}} & \HM^{1}_{et}(R^{h},G)
\\
\HM^{1}(k_{i},G) \ar[r]_-{r_{j_{i}}} \ar[u]^-{r_{\bar{\psi}_{i}}} &
\HM^{1}_{et}(R_{i},G) \ar[u]_-{r_{\psi_{i}}} \rlap{\, .}
}
\end{equation}
\smallbreak
We have $k=\lim\limits_{i\in I}k_{i}$ and therefore, by~\cite[Chap.\ VII, Thm.\ 5.7]{SGA4-2}
(or by direct verification), $\HM^{1}(k,G)\, =\,\lim\limits_{i\in I}\HM^{1}(k_{i},G)$. Hence there
exists $i_{0}\in I$ and $T_{i_{0}}\in\HM^{1}(k_{i_{0}},G)$, such that
\begin{equation}
\label{SpPfEq4}
r_{\bar{\psi}_{i_{0}}}(T_{i_{0}})\, =\, T_{k}\in\HM^{1}(k,G)\, .
\end{equation}
\medbreak
By~(\ref{SpPfEq3}) we have
$$
r_{\psi_{i_{0}}}\big(\, r_{j_{i_{0}}}(T_{i_{0}})\,\big)\, =\,
r_{j}\big(\, r_{\bar{\psi}_{i_{0}}}(T_{i_{0}})\,\big)\, =\, r_{j}(T_{k})\, =\, T_{R^{h}}\, =\,
r_{\psi_{i_{0}}}(T_{R_{i_{0}}})\, .
$$
Now $R^{h}=\lim\limits_{i\in I}R_{i}$ and so by~\cite[Chap.\ VII, Thm.\ 5.7]{SGA4-2} again we
have $\lim\limits_{i\in I}\HM^{1}_{et}(R_{i},G)=\HM^{1}_{et}(R^{h},G)$. Hence replacing~$i_{0}$
by a 'larger' element of~$I$ if necessary we can assume that also
\begin{equation}
\label{SpPfEq5}
r_{j_{i_{0}}}(T_{i_{0}})\, =\, T_{R_{i_{0}}}\, .
\end{equation}
\medbreak
We claim that this index~$i_{0}$ does the job, \textsl{i.e.}\ we have
$\partial_{v_{i_{0}}}\big( a_{K_{i_{0}}}(T_{K_{i_{0}}})\big)=0$.
In fact, since~$a$ is a cohomological invariant we
have a commutative diagram
$$
\xymatrix{
\HM^{1}(K_{i_{0}},G) \ar[r]^-{a_{K_{i_{0}}}} & \CM_{\ast}(K_{i_{0}})
\\
\HM^{1}(k_{i_{0}},G) \ar[r]_-{a_{k_{i_{0}}}} \ar[u]^-{r_{(\eta_{i_{0}}\circ j_{i_{0}})}} &
\CM_{\ast}(k_{i_{0}}) \ar[u]_-{(\eta_{i_{0}}\circ j_{i_{0}})_{\CM}} \rlap{\, ,}
}
$$
and therefore taking~(\ref{SpPfEq5}) into account
$$
a_{K_{i_{0}}}(T_{K_{i_{0}}})\, =\, a_{K_{i_{0}}}\big(\, r_{(\eta_{i_{0}}\circ j_{i_{0}})}(T_{i_{0}})\,\big)\, =\,
(\eta_{i_{0}}\circ j_{i_{0}})_{\CM}\big(\, a_{k_{i_{0}}}(T_{i_{0}})\,\big)\, .
$$
But $v_{i_{0}}|_{k_{i_{0}}}\equiv 0$ and so by the cycle module Axiom~{\bf (R3c)}, see
Section~\ref{2edResMapSubSect}, we have $\partial_{v_{i_{0}}}(z)=0$ for all $z\in\CM_{\ast}(K_{i_{0}})$,
which are in the image of $(\eta_{i_{0}}\circ j_{i_{0}})_{\CM}$. We have proven~(i).
\bigbreak
For the proof of~(ii) we continue with the above notation, \textsl{i.e.}\ $R={\mathcal O}_{X,x}$,
$R^{h}=\lim\limits_{i\in I}R_{i}$, and so on. We fix further a uniformizer~$\pi$ of~$R$.
Since the extensions $\varphi_{i}:R\longrightarrow R_{i}$ are unramified the image of the uniformizer~$\pi$
in~$R_{i}$ is also a uniformizer, which we again denote by~$\pi$. We have $s_{x}^{\pi}=s_{v}^{\pi}$,
and so taking the right hand side of the commutative diagram~(\ref{SpPfEq0}) as well as the
definition of the specialization map, see Section~\ref{2edResMapSubSect}, into account we have
$s_{v_{i_{0}}}^{\pi}\circ (\varphi'_{i_{0}})_{\CM}\, =\, s_{v}^{\pi}$. Hence we have
$$
\begin{array}{r@{\; =\;}l@{\qquad}l}
s_{v}^{\pi}(a_{K}(T_{K})) & s_{v_{i_{0}}}^{\pi}\big(\, (\varphi'_{i_{0}})_{\CM}(a_{K}(T_{K}))\,\big) & \\[4mm]
& s_{v_{i_{0}}}^{\pi}\big(\, a_{K_{i_{0}}}(r_{\varphi'_{i_{0}}}(T_{K}))\,\big) & \mbox{$a$ is invariant} \\[4mm]
& s_{v_{i_{0}}}^{\pi}\big(\, a_{K_{i_{0}}}(r_{\eta_{i_{0}}}(T_{R_{i_{0}}}))\,\big) &
\mbox{since $\eta_{i_{0}}\circ\varphi_{i_{0}}=\varphi'_{i_{0}}\circ\eta$} \\[4mm]
& s_{v_{i_{0}}}^{\pi}\big(\, a_{K_{i_{0}}}(r_{(\eta_{i_{0}}\circ j_{i_{0}})}(T_{i_{0}}))\big) &
\mbox{by~(\ref{SpPfEq5})} \\[4mm]
& s_{v_{i_{0}}}^{\pi}\big(\, (\eta_{i_{0}}\circ j_{i_{0}})_{\CM}(a_{k_{i_{0}}}(T_{i_{0}}))\,\big) &
\mbox{$a$ is invariant} \\[4mm]
& (\bar{\psi}_{i_{0}})_{\CM}\big(\, a_{k_{i_{0}}}(T_{i_{0}})\,\big) & \mbox{by~{\bf (R3d)}} \\[4mm]
& a_{k}\big(\, r_{\bar{\psi}_{i_{0}}}(T_{i_{0}})\,\big) & \mbox{$a$ is invariant} \\[4mm]
& a_{k}(T_{k}) & \mbox{by (\ref{SpPfEq4}).}
\end{array}
$$
as claimed. We are done.
\end{proof}
\smallbreak
A consequence of this result is the following detection principle, which
is the analog of~\cite[Part I,12.2]{CohInv} for cycle module invariants.
\end{emptythm}
\begin{emptythm}
\label{SpCor}
{\bf Corollary.}
{\it
Let~$R$ be a regular local ring, which is essentially of finite type over the field~$\gk$.
Denote by~$K$ and~$k$ the fraction- and residue field, respectively, of~$R$. Let
further~$G$ be a linear algebraic group over~$\gk$, and~$\CM_{\ast}$ a cycle module
over~$\gk$. Then
$$
a_{K}(T_{K})\, =\, 0\quad\Longrightarrow\quad a_{k}(T_{k})\, =\, 0\
$$
for all $T\in\HM^{1}_{et}(R,G)$ and all $a\in\Inv_{\gk}(G,\CM_{\ast})$.
}
\begin{proof}
The proof is the same as the one of~\cite[Part I, 12.2]{CohInv}. We recall
the arguments for the convenience of the reader.
\smallbreak
If $\dim R=1$ this follows from part~(ii) of the theorem above, so let~$d:=\dim R\geq 2$,
and~$t\in R$ a regular parameter. Then $R/Rt$ is also a regular local ring with the same
residue field~$k$, and which is essentially of finite type over~$\gk$. The quotient field
of~$R/Rt$ is the residue field~$K_{t}$ of the discrete valuation ring $R_{Rt}$ (the localization at the
codimension one prime ideal~$Rt$). By the dimension one case we have $a_{K_{t}}(T_{K_{t}})=0$,
and so by induction $a_{k}(T_{k})=0$.
\end{proof}
\medbreak
Finally we state and prove the following detection principle, which is the cycle module
analog of~\cite[Part I, Thm.\ 12.3]{CohInv}. Again the proof is the same as in Serre's lecture
and only recalled for the convenience of our reader.
\end{emptythm}
\begin{emptythm}
\label{DetectionThm}
{\bf Theorem.}
{\it
Let~$G$ be a linear algebraic group over the field~$\gk$, and $T\in\HM^{1}(K,G)$
a versal $G$-torsor. Then we have for a given cycle module~$\CM_{\ast}$ over~$\gk$
and $a,b\in\Inv_{\gk} (G,\CM_{\ast})$:
$$
a_{K}(T)\, =\, b_{K}(T)\quad\Longrightarrow\quad a\, =\, b\, .
$$
}
\begin{proof}
Replacing~$a$ by $b-a$ it is enough to show that $a_{K}(T)=0$ implies~$a\equiv 0$.
We have to show $a_{k}(S)=0$ for all $k\in\mathfrak{F}_{\gk}$ and all $S\in\HM^{1}(k,G)$.
\smallbreak
Replacing~$k$ by the rational function field~$k(t)$ in one variable if necessary, we can assume
that~$k$ is an infinite field. In fact, since~$a$ is an invariant the following diagram
commutes:
$$
\xymatrix{
\HM^{1}(k(t),G) \ar[r]^-{a_{k(t)}} & \CM_{\ast}(k(t))
\\
\HM^{1}(k,G) \ar[r]^-{a_{k}} \ar[u]^-{r_{\iota}} & \CM_{\ast}(k)
\ar[u]_-{\iota_{\CM}} \rlap{\, ,}
}
$$
where $\iota:k\hookrightarrow k(t)$ is the natural embedding. By
Rost~\cite[Prop.\ 2.2 {\bf (H)}]{Ro96} the homomorphism
$\iota_{\CM}:\CM_{\ast}(k)\longrightarrow\CM_{\ast}(k(t))$ is injective and so $a_{k(t)}(S_{k(t)})=0$
implies $a_{k}(S)=0$.
\smallbreak
To prove the claim for an infinite field~$k$ we use that since~$T\in\HM^{1}(K,G)$ is a versal
$G$-torsor there exists a $G$-torsor $\sheaf{T}\longrightarrow X$ over a smooth integral scheme~$X$ with
function field~$K$, such that the generic fiber of $\sheaf{T}\longrightarrow X$ is isomorphic to~$T$,
and such that there exists $x\in X(k)$ with $S=[\sheaf{T}\longrightarrow X]_{\gk (x)}$. Now~${\mathcal O}_{X,x}$ is a
regular local ring since~$X$ is smooth, and $a_{K}(T_{K})=0$ by assumption. We conclude
that $a_{\gk (x)}([\sheaf{T}\longrightarrow X]_{\gk (x)})=0$ by Corollary~\ref{SpCor} above.
\end{proof}
\end{emptythm}
\goodbreak
\section{The splitting principle}
\label{SpPrincipleSect}\bigbreak
\begin{emptythm}
\label{ReflGrSubSect}
{\bf Orthogonal reflection groups.}
We recall here and in the following three subsections some definitions and properties of
orthogonal reflection groups, merely to fix our notations. We refer to the standard
reference Bourbaki~\cite{LIE4-6} for details and more information, but see also the
book~\cite{RGInvTh} by Kane.
\smallbreak
Throughout this section we denote by~$\gk$ a field of characteristic~$\not= 2$.
\medbreak
Let~$(V,b)$ be a regular symmetric bilinear space of finite dimension over~$\gk$, and
$v\in V$ an anisotropic vector, \textsl{i.e.}\ $b(v,v)\not=0$. Then
$$
s_{v}\, :\; V\,\longrightarrow\, V\, ,\; w\,\longmapsto w-\frac{2b(v,w)}{b(v,v)}\cdot v\, ,
$$
is an element of the orthogonal group~$\Orth (V,b)$, called
the {\it reflection} associated with~$v$.
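\smallbreak
For the convenience of the reader we recall the elementary verification that~$s_{v}$ is
indeed an involutive isometry. Since $b(v,s_{v}(w))=b(v,w)-2b(v,w)=-b(v,w)$ we get
$$
s_{v}\big(s_{v}(w)\big)\, =\, s_{v}(w)+\frac{2b(v,w)}{b(v,v)}\cdot v\, =\, w\, ,
$$
and a direct expansion of the bilinear form gives
$$
b\big(s_{v}(w),s_{v}(w')\big)\, =\, b(w,w')-\frac{2b(v,w')b(v,w)}{b(v,v)}
-\frac{2b(v,w)b(v,w')}{b(v,v)}+\frac{4b(v,w)b(v,w')}{b(v,v)}\, =\, b(w,w')
$$
for all $w,w'\in V$.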
\end{emptythm}
\begin{emptythm}
\label{OrthReflGrDef}
{\bf Definition.}
Let~$(V,b)$ be a regular symmetric bilinear space of finite dimension over~$\gk$.
A finite subgroup of~$\Orth (V,b)$, which is generated by reflections, is called a
{\it (finite) orthogonal reflection group} over the field~$\gk$.
\medbreak
Let~$W\subset\Orth (V,b)$ be such an orthogonal reflection group.
Since~$b$ is non singular the homomorphism
$$
\hat{b}\, :\; V\,\longrightarrow\, V^{\vee}\, :=\;\Hom_{\gk}(V,\gk)\, ,\; v\,\longmapsto\,
v^{\vee}\, :=\; b(\, -\, ,v)
$$
is an isomorphism. Then $b^{\vee}(v^{\vee},w^{\vee})=b(v,w)$ defines a regular
symmetric bilinear form on~$V^{\vee}$, and $v\mapsto v^{\vee}$ is an isometry
$(V,b)\xrightarrow{\simeq} (V^{\vee},b^{\vee})$. The group~$W$ acts on~$V^{\vee}$
as well via $(w.f)(x):=f(w^{-1}.x)$ for all $f\in V^{\vee}$, $w\in W$, and~$x\in V$.
\smallbreak
Then we have $s_{v}.f=s_{v^{\vee}}(f)$ for all $f\in V^{\vee}$ and anisotropic
vectors $v\in V$, where~$s_{v^{\vee}}$ is the reflection associated with~$v^{\vee}$ in~$(V^{\vee},b^{\vee})$.
Hence~$W$ is isomorphic to an orthogonal reflection group in $\Orth (V^{\vee},b^{\vee})$.
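\smallbreak
In fact, the identity $s_{v}.f=s_{v^{\vee}}(f)$ can be verified directly: writing $f=w^{\vee}$
for some $w\in V$, which is possible since~$\hat{b}$ is an isomorphism, we get, using
$s_{v}^{-1}=s_{v}\in\Orth (V,b)$,
$$
(s_{v}.w^{\vee})(x)\, =\, b(s_{v}(x),w)\, =\, b(x,s_{v}(w))\, =\, (s_{v}(w))^{\vee}(x)
$$
for all $x\in V$, and on the other hand
$$
s_{v^{\vee}}(w^{\vee})\, =\, w^{\vee}-\frac{2b^{\vee}(v^{\vee},w^{\vee})}{b^{\vee}(v^{\vee},v^{\vee})}\cdot v^{\vee}
\, =\,\Big(\, w-\frac{2b(v,w)}{b(v,v)}\cdot v\,\Big)^{\vee}\, =\, (s_{v}(w))^{\vee}\, ,
$$
which proves the claim.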
\smallbreak
We quote now two facts from Bourbaki~\cite[Chap.\ V, \S 5, Thm.\ 3 and Ex.\ 8]{LIE4-6}.
(Note for the second assertion that in a vector space with a regular bilinear form over
a field of characteristic~$\not= 2$ a pseudo reflection in the orthogonal group
is automatically a reflection, see~\cite[Chap.\ V, \S 2, no 3]{LIE4-6}.)
\end{emptythm}
\begin{emptythm}
\label{InvThm}
{\bf Theorem (Chevalley-Shephard-Todd-Bourbaki).}
{\it
Let~$W\subset\Orth (V,b)$ be an orthogonal reflection group, where~$(V,b)$ is a
regular symmetric bilinear space over the field~$\gk$. Denote by~$\SymAlg (V^{\vee})$ the
symmetric algebra of the dual space~$V^{\vee}$, and let~$f\in V^{\vee}$ be a non zero linear
form. Assume that $\khar\gk$ does not divide~$|W|$.
\smallbreak
Then:
\smallbreak
\begin{itemize}
\item[(i)]
the algebra of invariants $\SymAlg (V^{\vee})^{W}$ is a polynomial algebra; and
\smallbreak
\item[(ii)]
the isotropy group of~$f$,
$$
W_{f}\, :=\;\big\{\, w\in W\, |\, w.f=f\,\big\}
$$
is an orthogonal reflection group as well.
\end{itemize}
}
\medbreak
\noindent
Note that if~$f=v^{\vee}$ for some~$v\in V$ then we have
$$
W_{f}=W_{v}\, :=\;\big\{\, w\in W\, |\, w.v=v\,\big\}\, ,
$$
since $v^{\vee}(x)=v^{\vee}(w^{-1}.x)$ for all $x\in V$ is equivalent to
$b(w.v,x)=b(v,x)$ for all~$x\in V$ and so equivalent to $w.v=v$ since~$b$
is non singular.
\end{emptythm}
\begin{emptythm}
\label{rootSystemSubSect}
{\bf The root system of an orthogonal reflection group.}
Given such an orthogonal reflection group~$W\subset\Orth (V,b)$ let
$R_{W}$ be the set of reflections in~$W$. Recall now that
$$
w\circ s_{\alpha}\circ w^{-1}\, =\, s_{w.\alpha}
$$
for all $w\in W$ and all anisotropic vectors~$\alpha\in V$.
Hence the set~$R_{W}$ is the disjoint union of conjugacy classes
$R_{W}=\bigcup\limits_{i=1}^{m}R_{i}$. For every~$R_{i}$ we choose
an anisotropic vector $\beta_{i}$ with $s_{\beta_{i}}\in R_{i}$.
Then we have $R_{i}=\{ s_{w.\beta_{i}}|w\in W\}$ for all $1\leq i\leq m$. Let
$$
\Delta_{i}\, :=\;\big\{\, w.\beta_{i}\, |\, w\in W\,\big\}
$$
for all $1\leq i\leq m$, and set
$$
\Delta\, :=\;\bigcup\limits_{i=1}^{m}\Delta_{i}\, .
$$
Note that the sets~$\Delta_{i}$ are $W$-invariant by definition.
The set~$\Delta$ is called a {\it root system} associated with~$W$.
It has the following properties:
\smallbreak
\begin{itemize}
\item[{\bf (R1)}]
if $\alpha\in\Delta$ then $\lambda\cdot\alpha\in\Delta$ for~$\lambda\in\gk$ if and only
if~$\lambda=\pm 1$, and
\smallbreak
\item[{\bf (R2)}]
for all $\alpha,\beta\in\Delta$ we have $s_{\alpha}.\beta\in\Delta$.
\end{itemize}
\smallbreak
\noindent
(In fact, if $w.\alpha=\lambda\cdot\alpha$ then
$b(\alpha,\alpha)=b(w.\alpha,w.\alpha)=b(\lambda\cdot\alpha,\lambda\cdot\alpha)=
\lambda^{2}b(\alpha,\alpha)$, and so~$\lambda=\pm 1$, hence {\bf (R1)}. Property~{\bf (R2)}
is by construction.)
\smallbreak
\noindent
Moreover, also by construction the set $\{ s_{\alpha}|\alpha\in\Delta\}$ is the set
of all reflections in~$W$, and so in particular~$W$ is generated by
all $s_{\alpha}$, $\alpha\in\Delta$.
\smallbreak
We prove an easy lemma, which is crucial for the proof of
the main theorem.
\end{emptythm}
\begin{emptythm}
\label{VerTorLem}
{\bf Lemma.}
{\it
Let~$W\subset\Orth (V,b)$ be an orthogonal reflection group as above
and~$\Delta$ an associated root system. Let~$P_{\alpha}\in\Spec\SymAlg (V^{\vee})$
be the prime ideal generated by~$\alpha^{\vee}$ for some~$\alpha\in\Delta$ and
$$
W_{P_{\alpha}}\, :=\;\big\{\, w\in W\, |\, w.P_{\alpha}=P_{\alpha}\,\big\}
$$
the inertia group of~$P_{\alpha}$. Then we have
\smallbreak
\begin{itemize}
\item[(i)]
$$
W_{P_{\alpha}}\, =\, W_{\pm\alpha}\ :=\;\big\{ w\in W\, |\, w.\alpha=\pm\alpha\,\big\}\,
=\, <s_{\alpha}>.W_{\alpha}\,\simeq\,\mathbb{Z}/2\times W_{\alpha}\, ,
$$
where~$<s_{\alpha}>=\{\id_{V},s_{\alpha}\}$ is the subgroup generated by~$s_{\alpha}$,
and
\medbreak
\item[(ii)]
if~$\khar\gk$ does not divide~$|W|$
\begin{equation}
\label{W-freeEq}
\bigcup\limits_{\alpha\in\Delta}\Ker (\alpha^{\vee})\, =\,
\bigcup\limits_{\id_{V}\not= w\in W}\Ker (w-\id_{V})\, .
\end{equation}
\end{itemize}
}
\begin{proof}
(i)~If $w.P_{\alpha}=P_{\alpha}$ then $w.\alpha^{\vee}=(w.\alpha)^{\vee}$ generates~$P_{\alpha}$
as well, and so $w.\alpha=\lambda\cdot\alpha$ for some $\lambda\in\gk^{\times}$, whence
$w.\alpha=\pm\alpha$ by~{\bf (R1)}. Conversely~$s_{\alpha}$ and every element of~$W_{\alpha}$
fix~$P_{\alpha}$, and so $W_{P_{\alpha}}=W_{\pm\alpha}$. If $w.\alpha=-\alpha$ then
$s_{\alpha}\circ w\in W_{\alpha}$ and so $W_{\pm\alpha}=<s_{\alpha}>.W_{\alpha}$.
\smallbreak
For the isomorphism on the right hand side we have to show that $s_{\alpha}$
commutes with all elements of~$W_{\alpha}=\{ w\in W|w.\alpha=\alpha\}$.
This can be seen as follows. Since~$\alpha$ is anisotropic there exists a
regular subspace $H\subset V$ with $(\gk\cdot\alpha)\perp H=V$.
As $w\in\Orth (V,b)$ we have $w.h\in H$ for all $w\in W_{\alpha}$
and $h\in H$. It follows $w(s_{\alpha}(h))=w(h)=s_{\alpha}(w(h))$ for
all $h\in H$ and $w\in W_{\alpha}$. Since we have also
$w(s_{\alpha}(\alpha))=-\alpha=s_{\alpha}(w(\alpha))$ we are done.
\medbreak
(ii)~Since $\Ker\alpha^{\vee}=\Ker (s_{\alpha}-\id_{V})$ the left hand side
is contained in the right hand side.
\smallbreak
For the other direction let $x\in V\setminus\bigcup\limits_{\alpha\in\Delta}\Ker (\alpha^{\vee})$.
By Theorem~\ref{InvThm}~(ii) above we know that $W_{x}=\{ w\in W|w.x=x\}$ is an
orthogonal reflection group. Hence if~$W_{x}\not=\{\id_{V}\}$ there exists a reflection
$s_{\alpha}\in W_{x}$, $\alpha\in\Delta$, such that $s_{\alpha}.x=x$, or equivalently $\alpha^{\vee}(x)=0$,
a contradiction.
\end{proof}
\medbreak
We state and prove now our main theorem.
\end{emptythm}
\begin{emptythm}
\label{mainThm}
{\bf Theorem.}
{\it
Let~$W$ be an orthogonal reflection group over the field~$\gk$, whose characteristic
is coprime to the order of~$W$, and~$\CM_{\ast}$ a cycle module over~$\gk$. Then
a cohomological invariant
$$
a\, :\;\HM^{1}(\, -\, ,W)\,\longrightarrow\,\CM_{n}(\, -\, )
$$
over~$\gk$ is trivial if and only if its restrictions to all elementary abelian
$2$-subgroups of~$W$, which are generated by reflections, are trivial.
}
\bigbreak
For the proof we have to describe a versal $W$-torsor over~$\gk$. For this let~$(V,b)$
be a regular symmetric bilinear space over~$\gk$ with
$W\subset\Orth (V,b)$. Denote further by~$\Delta$ a root system
associated with~$W$.
\end{emptythm}
\begin{emptythm}
\label{ReflGrVerTorSubSect}
{\bf A versal torsor for~$W$.}
Define~$U\subset\mathbb{A} (V)=\Spec\SymAlg (V^{\vee})$ as in Example~\ref{VerTorExpl},
\textsl{i.e.}\ ~$U$ is the open complement of the union of the closed sets~$\sheaf{V}_{w}$, $w\in W$,
where~$\sheaf{V}_{w}$ is the closed set defined by the ideal generated by all $f\circ (\id_{V}-w)$,
$f\in V^{\vee}$. The group~$W$ acts freely on~$U$, and the generic fiber of the quotient
morphism $q:U\longrightarrow U/W$ is a versal $W$-torsor over~$\gk$, see Example~\ref{VerTorExpl}.
\smallbreak
By~(\ref{W-freeEq}) we have that
\begin{equation}
\label{W-free2Eq}
\bigcup\limits_{\alpha\in\Delta}\Ker (\id_{L}\otimes\,\alpha^{\vee})\, =\,
\bigcup\limits_{\id_{V}\not= w\in W}\Ker\big((\id_{L}\otimes\, w)-\id_{L\otimes_{\gk}V}\big)
\end{equation}
for all field extensions $L\supseteq\gk$. We get
$U=\Spec\big(\,\SymAlg (V^{\vee})[g_{\Delta}^{-1}]\,\big)$, where we have set
$$
g_{\Delta}\, :=\;\prod\limits_{\alpha\in\Delta}\alpha^{\vee}\;\in\,\SymAlg (V^{\vee})\, .
$$
Hence the quotient morphism $q:U\longrightarrow U/W$ corresponds to the embedding of rings
$$
\big(\,\SymAlg (V^{\vee})\,\big)^{W}[g_{\Delta}^{-1}]\,\longrightarrow\,
\SymAlg (V^{\vee})[g_{\Delta}^{-1}]\, ,
$$
and so a versal $W$-torsor over~$\gk$, which is the generic fiber of~$q$, is
equal to the Galois extension $\Spec E\longrightarrow\Spec E^{W}$ with Galois group~$W$.
Here~$E$ denotes the fraction field of~$\SymAlg (V^{\vee})$. We set in the
following $K:=E^{W}$, and denote by $[E/K]\in\HM^{1}(K,W)$ the class of the $W$-Galois
algebra $E\supset K$, which is a versal $W$-torsor over~$\gk$.
\end{emptythm}
\begin{emptythm}
\label{UnramifiedExtSubSect}
{\bf An unramified extension.}
Let~$Q$ be a prime ideal of height one in $\SymAlg (V^{\vee})^{W}$, which is not in the open
subscheme $U/W=\Spec\big(\SymAlg (V^{\vee})^{W}[g_{\Delta}^{-1}]\big)$, \textsl{i.e.}\ $g_{\Delta}\in Q$.
The local ring $R:=\big(\SymAlg (V^{\vee})^{W}\big)_{Q}$ at~$Q$ is a discrete valuation ring. Let~$P$
be a prime ideal in~$\SymAlg (V^{\vee})$ above~$Q$. Then there exists~$\alpha\in\Delta$,
such that $P=P_{\alpha}=\SymAlg (V^{\vee})\cdot\alpha^{\vee}$. The Galois group~$W$ of $E\supset K$
acts transitively on the prime ideals above~$Q$, and by Lemma~\ref{VerTorLem}~(i) we know
that the inertia group $W_{P_{\alpha}}=\{ w\in W|w.P_{\alpha}=P_{\alpha}\}$ is equal to
$$
W_{\pm\alpha}\, =\,\big\{ w\in W\, |\, w.\alpha=\pm\alpha\,\big\}
\, =\, <s_{\alpha}>.W_{\alpha}\,\simeq\,\mathbb{Z}/2\times\; W_{\alpha}\, .
$$
\smallbreak
Denote by~$F_{\alpha}$ the fixed field of~$W_{\pm\alpha}$ in~$E$, by $\iota_{\alpha}$ the
embedding $K\subseteq F_{\alpha}$, and by~$\tilde{S}$ the integral closure of~$R$ in~$F_{\alpha}$.
We set $S_{\alpha}:=\tilde{S}_{\tilde{S}\cap P}$. This is a discrete valuation ring with maximal
ideal $Q_{\alpha}:=(\tilde{S}\cap P)\cdot\tilde{S}_{\tilde{S}\cap P}$. By construction
$S_{\alpha}\supseteq R$ is unramified, and the residue field~$\gk (Q_{\alpha})$
of~$S_{\alpha}$ is equal to the one of~$R$, which we denote by~$\gk (Q)$. Hence by
the cycle module Axiom~{\bf (R3a)}, see Section~\ref{2edResMapSubSect}, we have
a commutative diagram
$$
\xymatrix{
\CM_{\ast}(F_{\alpha}) \ar[r]^-{\partial_{Q_{\alpha}}} & \CM_{\ast -1}(\gk (Q))
\\
\CM_{\ast}(K) \ar[r]^-{\partial_{Q}} \ar[u]^-{(\iota_{\alpha})_{\CM}} & \CM_{\ast -1}(\gk (Q))
\ar[u]_-{=}
}
$$
for all cycle modules~$\CM_{\ast}$ over~$\gk$.
\end{emptythm}
\begin{emptythm}
\label{PfmainThmSubSect}
{\bf Proof of Theorem~\ref{mainThm}.}
The proof is by induction on $m=|W|$. If~$m=1$ or $m=2$ there is nothing
to prove, so let $m\geq 4$. Using the induction hypothesis we show first the following.
\medbreak
\noindent
{\bf Claim.}
$a_{K}([E/K])\;\in\,\CM_{n,unr}(\mathbb{A} (V)/W)$.
\medbreak
To prove the claim we have to show
\begin{equation}
\label{PfmainThmEq1}
\partial_{Q}\big(\, a_{K}([E/K])\,\big)\, =0
\end{equation}
for all prime ideals~$Q$ of height one in~$\SymAlg (V^{\vee})^{W}$. This is clear by
Theorem~\ref{specializationThm} if~$Q$ is in the open subset
$U/W\subset\mathbb{A} (V)/W=\Spec\big(\SymAlg (V^{\vee})^{W}\big)$
since~$[E/K]$ is by construction the generic fiber of $U\longrightarrow U/W$.
\smallbreak
So assume~$Q\not\in U/W$. Then~$Q$ contains~$g_{\Delta}$. Let~$Q_{\alpha}$
and~$\iota_{\alpha}:K\subseteq F_{\alpha}$ be as in Section~\ref{UnramifiedExtSubSect}
above. By the diagram at the end of Section~\ref{UnramifiedExtSubSect} it is enough to
show
$$
\partial_{Q_{\alpha}}\big(\, (\iota_{\alpha})_{\CM}(a_{K}([E/K]))\,\big)\; =\, 0\, .
$$
Since~$a$ is an invariant we have
$(\iota_{\alpha})_{\CM}(a_{K}([E/K]))=a_{F_{\alpha}}(r_{\iota_{\alpha}}([E/K]))$,
and therefore this equation is equivalent to
\begin{equation}
\label{PfmainThmEq2}
\partial_{Q_{\alpha}}\big(\, a_{F_{\alpha}}(r_{\iota_{\alpha}}([E/K]))\,\big)\; =\, 0\, .
\end{equation}
\medbreak
Now we distinguish two cases:
\smallbreak
\begin{itemize}
\item[(a)]
$F_{\alpha}=K$:
Then~$W= <s_{\alpha}>.W_{\alpha}\simeq\mathbb{Z}/2\times W_{\alpha}$ (in the notation of
Section~\ref{UnramifiedExtSubSect}), and therefore we have
$$
\HM^{1}(\, -\, ,W)\,\simeq\,\HM^{1}(\, -\, ,\mathbb{Z}/2)\times\HM^{1}(\, -\, ,W_{\alpha})\, .
$$
We claim that $a_{\ell}(x,y)=0$
for all $(x,y)\in\HM^{1}(\ell,\mathbb{Z}/2)\times\HM^{1}(\ell,W_{\alpha})$ and
all $\ell\in\mathfrak{F}_{\gk}$. This clearly implies $a\equiv 0$, and so in particular
$\partial_{Q}(a_{K}([E/K]))=0$.
\smallbreak
For this let~$\ell\in\mathfrak{F}_{\gk}$ and $x\in\HM^{1}(\ell,\mathbb{Z}/2)$, and consider~$\mathfrak{F}_{\ell}$
as a full subcategory of~$\mathfrak{F}_{\gk}$, \textsl{cf.}\ ~Section~\ref{NotationsSubSect}. The maps
$$
b^{x}_{L}\, :\;\HM^{1}(L,W_{\alpha})\,\longrightarrow\CM_{n}(L)\, ,\; z\,\longmapsto a_{L}(r_{j}(x),z)\, ,
$$
where $j:\ell\longrightarrow L$ is the structure map in~$\mathfrak{F}_{\ell}$, define an invariant
of degree~$n$ of~$W_{\alpha}$ over~$\ell$ with values in~$\CM_{n}$, \textsl{i.e.}\ we have
$b^{x}\in\Inv^{n}_{\ell}(W_{\alpha},\CM_{\ast})$.
\smallbreak
Let~$H\subseteq W_{\alpha}$ be a $2$-subgroup generated by reflections. Then
the subgroup $H':=<s_{\alpha}>.H$ of~$W$ is a $2$-subgroup generated by reflections
as well, and therefore by assumption the restriction $\Res_{W}^{H'}(a)$ is trivial. Now
for~$L\in\mathfrak{F}_{\ell}$ with structure map $\varphi_{L}:\ell\longrightarrow L$ and $z\in\HM^{1}(L,H)$
we have
$$
\Res_{W_{\alpha}}^{H}(b^{x})_{L}(z)\, =\,\Res_{W}^{H'}(a)(r_{\varphi_{L}}(x),z)\, =\, 0\, .
$$
As~$z\in\HM^{1}(L,H)$ and~$L\in\mathfrak{F}_{\ell}$ were arbitrary this implies
$\Res_{W_{\alpha}}^{H}(b^{x})$ is trivial. This holds for all $2$-subgroups~$H$ of~$W_{\alpha}$
generated by reflections, and therefore, since $W_{\alpha}$ is also an orthogonal
reflection group by Theorem~\ref{InvThm}~(ii), we have by induction~$b^{x}\equiv 0$. In
particular, we have $b^{x}_{\ell}(y)=a_{\ell}(x,y)=0$ as claimed. We are done in the case
$F_{\alpha}=K$.
\bigbreak
\item[(b)]
$F_{\alpha}\not= K$: Then $W\not= <s_{\alpha}>.W_{\alpha}=W_{\pm\alpha}$.
By Example~\ref{GaloisExpl} we have
\begin{equation}
\label{PfmainThmEq3}
[E/K]_{F_{\alpha}}\, =\, r_{\iota_{\alpha}}([E/K])\, =\,\theta^{1}(T')
\end{equation}
for some~$T'\in\HM^{1}(F_{\alpha},W_{\pm\alpha})$, where
$\theta:W_{\pm\alpha}\hookrightarrow W$ is the inclusion.
\smallbreak
Let~$H$ be a $2$-subgroup of~$W_{\pm\alpha}$ generated by reflections.
Then
$$
\Res_{W_{\pm\alpha}}^{H}\big(\,\Res_{W}^{W_{\pm\alpha}}(a)\,\big)\, =\,\Res_{W}^{H}(a)\, ,
$$
and since~$H$ is also a $2$-subgroup of~$W$ generated by reflections we
have $\Res_{W}^{H}(a)\equiv 0$ by our assumption. It follows that
$\Res_{W_{\pm\alpha}}^{H}\big(\,\Res_{W}^{W_{\pm\alpha}}(a)\,\big)\equiv 0$
for all $2$-subgroups of~$W_{\pm\alpha}$ which are generated by reflections.
\smallbreak
Since~$W_{\alpha}$ is a reflection group by Theorem~\ref{InvThm}~(ii),
also $W_{\pm\alpha}=<s_{\alpha}>.W_{\alpha}$ is one, and so we can conclude by induction that
\begin{equation}
\label{PfmainThmEq4}
\Res_{W}^{W_{\pm\alpha}}(a)\;\equiv\, 0\, .
\end{equation}
\smallbreak
\noindent
We compute $\partial_{Q_{\alpha}}\big(\, a_{F_{\alpha}}(r_{\iota_{\alpha}}([E/K]))\,\big)$:
\smallbreak
$$
\begin{array}{r@{\; =\;}l@{\quad}l}
& \partial_{Q_{\alpha}}\big(\, a_{F_{\alpha}}([E/K]_{F_{\alpha}})\big) & \\[3mm]
& \partial_{Q_{\alpha}}\big(\, a_{F_{\alpha}}(\theta^{1}(T'))\,\big)
& \mbox{by~(\ref{PfmainThmEq3})} \\[3mm]
& \partial_{Q_{\alpha}}\big(\,\Res_{W}^{W_{\pm\alpha}}(a)_{F_{\alpha}}(T')\,\big) &
\mbox{by definition of~$\Res_{W}^{W_{\pm\alpha}}$} \\[3mm]
& 0 & \mbox{by~(\ref{PfmainThmEq4}).}
\end{array}
$$
Hence~(\ref{PfmainThmEq2}) holds if~$K\not= F_{\alpha}$ and we
therefore have $\partial_{Q}(a_{K}([E/K]))=0$ as claimed.
\end{itemize}
\bigbreak
\noindent
We have proven the claim, and can now finish the proof of the theorem. By the
Chevalley-Shephard-Todd-Bourbaki Theorem~\ref{InvThm}~(i)
the $\gk$-scheme $\mathbb{A} (V)/W=\Spec\SymAlg (V^{\vee})^{W}$ is an affine space over~$\gk$
and so by homotopy invariance of the cohomology of cycle modules, see Rost~\cite[Prop.\ 8.6]{Ro96},
we have $\CM_{n,unr}(\mathbb{A} (V)/W)\simeq\CM_{n}(\gk)$. Hence by the detection
Theorem~\ref{DetectionThm} the invariant~$a$ is constant. However by assumption the restriction
of~$a$ to a $2$-subgroup generated by reflections is zero, and so~$a$ has to be constant zero.
\end{emptythm}
\begin{emptythm}
\label{mainCor}
{\bf Corollary.}
{\it
Let~$W$ be as in Theorem~\ref{mainThm} an orthogonal reflection group
and~$\CM_{\ast}$ a cycle module over a field~$\gk$, whose characteristic
is coprime to~$|W|$. Let further $G_{1},\ldots ,G_{r}$ be pairwise different maximal
elementary abelian $2$-subgroups generated by reflections, which represent
all such subgroups up to conjugation, \textsl{i.e.}\ if~$G$ is a maximal elementary
abelian $2$-subgroup of~$W$ generated by reflections then $G=wG_{i}w^{-1}$
for some $1\leq i\leq r$ and some~$w\in W$.
\smallbreak
Then the product of restriction morphisms
$$
\big(\,\Res_{W}^{G_{i}}\,\big)_{i=1}^{r}\, :\;\Inv_{\gk}(W,\CM_{\ast})\,\longrightarrow\,
\bigoplus\limits_{i=1}^{r}\Inv_{\gk}(G_{i},\CM_{\ast})^{N_{W}(G_{i})}
$$
is injective. (Recall from Example~\ref{InnAutExpl} that the image of~$\Res_{W}^{G_{i}}$
is actually in the subgroup $\Inv_{\gk}(G_{i},\CM_{\ast})^{N_{W}(G_{i})}$ for all $1\leq i\leq r$.)
}
\begin{proof}
Let~$a\in\Inv^{n}_{\gk}(W,\CM_{\ast})$ be a non trivial invariant. Then by
Theorem~\ref{mainThm} there exists an elementary abelian $2$-subgroup~$H$
of~$W$, which is generated by reflections, such that $\Res_{W}^{H}(a)\not\equiv 0$.
Let~$G$ be a maximal elementary abelian $2$-subgroup generated by reflections,
which contains~$H$. Then there exists $1\leq i_{0}\leq r$ and~$w_{0}\in W$, such
that $w_{0}Gw_{0}^{-1}=G_{i_{0}}$. Let $H'\subseteq G_{i_{0}}$ be the image of~$H$
under the inner automorphism $\iota_{w_{0}}:g\mapsto w_{0}\cdot g\cdot w_{0}^{-1}$ of~$W$.
The morphism
$$
\iota_{w_{0}}^{\ast}\, :\;\Inv_{\gk}(H,\CM_{\ast})\,\longrightarrow\,\Inv_{\gk}(H',\CM_{\ast})
$$
is an isomorphism with inverse~$\iota_{w_{0}^{-1}}^{\ast}$, see Example~\ref{InnAutExpl}
for notations, and so
$$
0\,\not\equiv\;\iota_{w_{0}}^{\ast}\big(\,\Res_{W}^{H}(a)\,\big)\, =\,
\Res_{W}^{H'}\big(\,\iota_{w_{0}}^{\ast}(a)\,\big)\, .
$$
Now $\iota_{w_{0}}^{\ast}:\Inv_{\gk}(W,\CM_{\ast})\longrightarrow\Inv_{\gk}(W,\CM_{\ast})$
is the identity by~\cite[Part I, Prop.\ 13.1]{CohInv}, and therefore
$0\not\equiv\Res_{W}^{H'}(a)\, =\,\Res_{G_{i_{0}}}^{H'}\big(\,\Res_{W}^{G_{i_{0}}}(a)\,\big)$.
It follows $\Res_{W}^{G_{i_{0}}}(a)\not\equiv 0$.
\end{proof}
\end{emptythm}
\begin{emptythm}
\label{FDefRem}
{\bf Remarks.}
\begin{itemize}
\item[(i)]
For~$W$ a symmetric group the splitting principle holds more generally. The field~$\gk$
can be arbitrary, and in particular~$\khar\gk$ can divide~$|W|$. This has been
shown by Serre~\cite[Part I, Thm.\ 24.9]{CohInv}.
\smallbreak
\item[(ii)]
Let~$V$ be a finite dimensional vector space over the field~$\gk$. A $\gk$-linear
automorphism~$s$ of~$V$ is called a {\it pseudo-reflection} if~$s-\id_{V}$ has
rank~$1$. A finite subgroup~$W$ of $\Gl(V)$ is called a {\it (finite) pseudo-reflection
group} if it is generated by pseudo-reflections. One may wonder whether for such
a subgroup~$W$ of~$\Gl (V)$ the following generalization of our main
Theorem~\ref{mainThm} is true or not:
\smallbreak
\noindent
{\it
If~$\khar\gk$ and~$|W|$ are coprime then an invariant $a:\HM^{1}(\, -\, ,W)\longrightarrow\CM_{n}(\, -\, )$
is constant zero if and only if its restrictions to abelian subgroups generated by pseudo-reflections
are constant zero for all cycle modules~$\CM_{\ast}$ over~$\gk$.
}
\smallbreak
\noindent
Note that under the assumption that~$\khar\gk$ does not divide~$|W|$ all pseudo-reflections in~$W$
are diagonalizable, see \textsl{e.g.}\ ~\cite[Prop.\ in Sect.\ 14-6]{RGInvTh}.
\end{itemize}
\end{emptythm}
\goodbreak
\section{The splitting principle of Witt- and Milnor-Witt $K$-theory invariants of orthogonal reflection groups}
\label{W-MWKInvSect}\bigbreak
\begin{emptythm}
\label{WittInvSubSect}
{\bf Witt invariants.}
We refer to the book~\cite{QHF} for details and more information about
Witt groups.
\smallbreak
Given a field~$F$ of characteristic~$\not= 2$ we denote by~$\W (F)$ the Witt group of~$F$
and by~$\FdI^{n}(F)\subset\W (F)$ the $n$th power of the fundamental ideal of~$F$ for~$n\in\mathbb{Z}$,
where we set $\FdI^{n}(F)=\W (F)$ if~$n\leq 0$. Fixing a base field~$\gk$ these are functors
on~$\mathrm{Fields}_{\gk}$, the category of all field extensions of~$\gk$.
\smallbreak
In the following we assume $\khar\gk\not= 2$.
\smallbreak
A {\it Witt invariant} of a $\gk$-algebraic group~$G$ of degree~$n$ is a natural transformation
$$
a\, :\;\HM^{1}(\, -\, ,G)\,\longrightarrow\,\FdI^{n}(\, -\,)\, ,
$$
where we consider both sides as functors on~$\mathrm{Fields}_{\gk}$.
\smallbreak
We denote the set of all Witt invariants of~$G$ of degree~$n$ over~$\gk$ by
$\Inv_{\gk}^{n}(G,\FdI^{\ast})$, and set
$$
\Inv_{\gk}(G,\FdI^{\ast})\, :=\;\bigoplus\limits_{n\in\mathbb{Z}}\Inv_{\gk}^{n}(G,\FdI^{\ast})\, .
$$
\medbreak
For these invariants the analog of Theorem~\ref{DetectionThm}, the detection principle, holds
as well.
\end{emptythm}
\begin{emptythm}
\label{WInvDetectionThm}
{\bf Theorem.}
{\it
Let~$G$ be a linear algebraic group over the field~$\gk$, and $T\in\HM^{1}(K,G)$
a versal $G$-torsor. Then we have for all $a,b\in\Inv_{\gk} (G,\FdI^{\ast})$:
$$
a_{K}(T)\, =\, b_{K}(T)\quad\Longrightarrow\quad a\, =\, b\, .
$$
}
\medbreak
\noindent
For~$a,b$ in $\Inv_{\gk}^{0}(G,\FdI^{\ast})$ this is proven in Serre's lectures~\cite[Sect.\ 27]{CohInv},
and the same argument works also for Witt invariants of non zero degree. In fact, one can use the same
arguments as for invariants with values in Galois cohomology.
\smallbreak
Our proof for cycle modules can be copied verbatim as well. We have only to replace
the specialization map by the so-called first residue map and observe that given a field~$F$
with discrete valuation~$v$ and residue field~$F(v)$ the second residue map
$\partial_{v,\pi}:\W(F)\longrightarrow\W(F(v))$ associated with some uniformizer~$\pi$ of~$v$ maps
$\FdI^{n}(F)$ into~$\FdI^{n-1}(F(v))$ for all~$n\in\mathbb{Z}$, see Arason~\cite[Satz 3.1]{Ar75}.
\medbreak
Using the detection principle we can now copy the proof of the splitting principle in
Section~\ref{PfmainThmSubSect} verbatim to get the following result.
\end{emptythm}
\begin{emptythm}
\label{WInvmainThm}
{\bf Theorem.}
{\it
Let~$W$ be an orthogonal reflection group over the field~$\gk$, whose characteristic
is coprime to the order of~$W$. Then a Witt invariant of degree~$n$
$$
a\, :\;\HM^{1}(\, -\, ,W)\,\longrightarrow\,\FdI^{n}(\, -\, )
$$
over~$\gk$ is trivial if and only if its restrictions to all elementary abelian
$2$-subgroups of~$W$, which are generated by reflections, are trivial.
}
\end{emptythm}
\begin{emptythm}
\label{MWKInvSubSect}
{\bf Milnor-Witt $K$-theory invariants.}
The $n$th {\it Milnor-Witt $K$-group} of a field~$F$ of characteristic not~$2$, which we denote
by~$\MWK_{n}(F)$, can be defined using generators and relations as in Morel's book~\cite[Def.\ 3.1]{A1AlgTop},
or equivalently, see Morel~\cite{Mo04}, as the pull-back
$$
\xymatrix{
\MWK_{n}(F) \ar[r]^-{f_{n,F}} \ar[d]_-{g_{n,F}} & \MK_{n}(F) \ar[d]^-{e_{n,F}}
\\
\FdI^{n}(F) \ar[r]_-{q_{n,F}} & \FdI^{n}(F)/\FdI^{n+1}(F)\, ,
}
$$
where~$e_{n,F}$ maps the symbol $\{ a_{1},\ldots ,a_{n}\}\in\MK_{n}(F)$ to the class of the $n$-fold
Pfister form $\ll a_{1},\ldots ,a_{n}\gg$, and~$q_{n,F}$ is the quotient map. Considering Milnor $K$-theory,
the $n$th power of the fundamental ideal, as well as Milnor-Witt $K$-theory as functors
on the category of all field extensions of a given base field~$\gk$, the maps $f_{n}$ and~$g_{n}$
become natural transformations of functors.
\medbreak
Let now~$\gk$ be a field of characteristic not~$2$ and~$G$ a linear algebraic group over~$\gk$.
A {\it Milnor-Witt $K$-theory invariant} of~$G$ of degree~$n$ is a natural transformation
$$
a\, :\,\HM^{1}(\, -\, ,G)\,\longrightarrow\,\MWK_{n}(\, -\, )
$$
of functors on~$\mathrm{Fields}_{\gk}$. We denote the set of all Milnor-Witt $K$-theory invariants of~$G$ of
degree~$n$ by $\Inv^{n}_{\gk}(G,\MWK_{\ast})$, and set
$$
\Inv_{\gk}(G,\MWK_{\ast})\, :=\;\bigoplus\limits_{n\in\mathbb{Z}}\Inv^{n}_{\gk}(G,\MWK_{\ast})\, .
$$
The addition in Milnor-Witt $K$-theory induces one on~$\Inv_{\gk}^{n}(G,\MWK_{\ast})$
for all~$n\in\mathbb{Z}$ making the set of such invariants an abelian group.
\medbreak
Given $a\in\Inv^{n}_{\gk}(G,\MWK_{\ast})$, the compositions $f_{n}\circ a$ and $g_{n}\circ a$ are
Milnor $K$-theory and Witt invariants of degree~$n$, respectively, and by the very definition
of Milnor-Witt $K$-theory via the pull-back diagram above we have that $a\equiv 0$ if and only
if $f_{n}\circ a\equiv 0$ and $g_{n}\circ a\equiv 0$. Hence we have a monomorphism
$$
\Inv^{n}_{\gk}(G,\MWK_{\ast})\,\xrightarrow{\; a\mapsto (f_{n}\circ a,\, g_{n}\circ a)\;}
\Inv^{n}_{\gk}(G,\MK_{\ast})\oplus\Inv^{n}_{\gk}(G,\FdI^{\ast})\, .
$$
Consequently we have also the splitting principle for Milnor-Witt $K$-theory
invariants of reflection groups.
\end{emptythm}
\begin{emptythm}
\label{MWKmainThm}
{\bf Theorem.}
{\it
Let~$W$ be a orthogonal reflection group over the field~$\gk$, whose characteristic
is coprime to the order of~$W$. Then a Milnor-Witt $K$-theory invariant of degree~$n$
$$
a\, :\;\HM^{1}(\, -\, ,W)\,\longrightarrow\,\MWK_{n}(\, -\, )
$$
over~$\gk$ is trivial if and only if its restrictions to all elementary abelian
$2$-subgroups of~$W$, which are generated by reflections, are trivial.
}
\end{emptythm}
\bibliographystyle{amsalpha}
\section{Introduction}
\subsection{Background on $\mathbb{Z}_p$-index}
Let $p$ be a prime number.
When the finite group $\mathbb{Z}_p := \mathbb{Z}/p\mathbb{Z}$ freely acts on a topological space, we can define
its index. The $\mathbb{Z}_p$-index roughly measures the size of the given $\mathbb{Z}_p$-space.
It has several astonishing applications to combinatorics \cite{matouvsek2003using}.
Tsutaya, Yoshinaga and the second-named author \cite{tsukamoto2020markerproperty}
found an application of the $\mathbb{Z}_p$-index theory to \textit{topological dynamics}.
(One of their motivations is to solve a problem about the \textit{marker property} of dynamical systems.
This will be briefly explained in \S \ref{sec:marker property}.)
The purpose of this paper is to continue this investigation.
In particular we solve a problem posed by \cite{tsukamoto2020markerproperty}.
First we prepare the terminology of the $\mathbb{Z}_p$-index, following the book of
Matou\v{s}ek \cite{matouvsek2003using}.
A pair $(X, T)$ is called a \textbf{$\mathbb{Z}_p$-space} if $X$ is a topological space and $T:X\to X$
is a homeomorphism with $T^p = \mathrm{id}$.
It is said to be \textbf{free} if $T^a x\neq x$ for all $1\leq a\leq p-1$ and $x\in X$.
Since $p$ is a prime number, this condition is equivalent to the condition that $Tx\neq x$ for all $x\in X$.
Let $n\geq 0$ be an integer.
A free $\mathbb{Z}_p$-space $(X, T)$ is called an \textbf{$E_n \mathbb{Z}_p$-space} if it satisfies:
\begin{itemize}
\item $X$ is an $n$-dimensional finite simplicial complex and $T$ is a simplicial map
(i.e. sending each simplex to a simplex affinely).
\item $X$ is $(n-1)$-connected, i.e.\ $\pi_k(X) = 0$ for all $0\leq k \leq n-1$.
\end{itemize}
For example, $\mathbb{Z}_p$ itself (with the natural $\mathbb{Z}_p$-action) is an $E_0 \mathbb{Z}_p$-space.
(We consider that $\mathbb{Z}_p$ is $(-1)$-connected.)
The join\footnote{Recall that for two topological spaces $X$ and $Y$, the join $X*Y$ is defined by
$$X*Y:=[0,1]\times X\times Y/\sim$$ where the equivalence relation $\sim$ is given by
$$
(0, x, y)\sim (0,x, y')~\text{and}~(1, x, y)\sim (1,x', y),
$$
for any $x,x'\in X$ and any $y,y'\in Y$.
The equivalence class of $(t, x, y)$ is denoted by $(1-t)x\oplus ty$.
Given maps $T:X\to X$ and $S:Y\to Y$, we define the map $T*S: X*Y\to X*Y$ by
$$T*S\left((1-t)x\oplus ty\right) = (1-t)Tx \oplus t Sy. $$} of the $(n+1)$ copies of $\mathbb{Z}_p$
\[ \left(\mathbb{Z}_p\right)^{*(n+1)} = \underbrace{\mathbb{Z}_p* \dots *\mathbb{Z}_p}_{\text{$(n+1)$ times}} \]
is an $E_n\mathbb{Z}_p$-space. Here $\mathbb{Z}_p$ acts on each component of $\left(\mathbb{Z}_p\right)^{*(n+1)}$
simultaneously.
An $E_n\mathbb{Z}_p$-space is not unique.
But they are essentially unique for our purpose here \cite[Lemma 6.2.2]{matouvsek2003using}:
If $(X, T)$ and $(Y, S)$ are both $E_n \mathbb{Z}_p$-spaces then
there are equivariant continuous maps $f:X\to Y$ and $g:Y\to X$.
Let $(X, T)$ be a free $\mathbb{Z}_p$-space. We define its \textbf{index} and \textbf{coindex} by
\begin{equation*}
\begin{split}
\text{\rm ind}_p (X,T) &:=\min\{n\ge 0: \exists~\text{an equivariant continuous map}~X\to E_n\mathbb{Z}_p \}, \\
\text{\rm coind}_p (X,T) & :=\max\{n\ge 0: \exists~\text{an equivariant continuous map}~E_n\mathbb{Z}_p \to X \}.
\end{split}
\end{equation*}
We set $\text{\rm ind}_p (X,T)=\infty$ if there is no equivariant continuous map from $X$ to $E_n\mathbb{Z}_p$ for any $n\ge 0$. We use the convention that $\text{\rm ind}_p (X,T)=\text{\rm coind}_p(X,T)=-1$ for $X=\emptyset$.
We sometimes abbreviate $\text{\rm ind}_p (X,T)$ (resp. $\text{\rm coind}_p(X,T)$) as $\text{\rm ind}_pX$ (resp. $\text{\rm coind}_pX$).
It is known that if there exists an equivariant continuous map from $E_m\mathbb{Z}_p$ to $E_n\mathbb{Z}_p$ then $m\le n$ (see \cite[Theorem 6.2.5]{matouvsek2003using}).
From this,
$$
\text{\rm ind}_p (X,T)\ge \text{\rm coind}_p(X,T).
$$
Moreover,
$\text{\rm ind}_p E_n\mathbb{Z}_p = \text{\rm coind}_p E_n \mathbb{Z}_p = n$.
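For example, for $p=2$ the $E_n\mathbb{Z}_2$-space $\left(\mathbb{Z}_2\right)^{*(n+1)}$ is nothing but
the $n$-dimensional sphere with the antipodal $\mathbb{Z}_2$-action:
$$
\left(\mathbb{Z}_2\right)^{*(n+1)}\,\cong\,\left(S^0\right)^{*(n+1)}\,\cong\, S^n
$$
by the standard homeomorphism $S^k * S^l \cong S^{k+l+1}$. In this picture the equality
$\text{\rm ind}_2\, S^n = n$ is essentially the Borsuk--Ulam theorem; see \cite{matouvsek2003using}.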
\subsection{Background on dynamical systems}
A pair $(X, T)$ is called a \textbf{(topological) dynamical system} if $X$ is a compact metrizable space and
$T:X\to X$ is a homeomorphism.
(Notice that here we assume the compactness of $X$. This is essential for our result.)
Let $n\geq 1$ and $(X, T)$ a dynamical system.
We define $P_n(X, T)$ as the set of $n$-periodic points of $(X, T)$:
$$
P_n(X, T) := \{x\in X: \, T^n x= x\}.
$$
We often abbreviate this as $P_n(X)$.
A dynamical system $(X, T)$ is said to be \textbf{fixed-point free} if it has no fixed point, i.e.
$P_1(X, T) = \emptyset$.
It is said to be \textbf{aperiodic} (or \textbf{free}) if it has no periodic point, i.e. $P_n(X, T) = \emptyset$
for all $n\geq 1$.
Let $(X, T)$ be a dynamical system. For each prime number $p$, the pair
\[ \left(P_p(X, T), T\right) \]
is a $\mathbb{Z}_p$-space.
If $(X, T)$ is a fixed-point free dynamical system then
$\left(P_p(X, T), T\right)$ becomes a free $\mathbb{Z}_p$-space: since $p$ is prime, a point of $P_p(X, T)$ with a nontrivial stabilizer would be fixed by $T$.
The paper \cite{tsukamoto2020markerproperty} investigated its index and proved
\begin{thm}[\cite{tsukamoto2020markerproperty}, Theorem 1.2] \label{theorem: linear growth}
Let $(X, T)$ be a fixed-point free dynamical system.
The sequence
\[ \text{\rm ind}_p P_p(X), \quad (p=2,3,5,7, 11, \dots) \]
has at most linear growth in $p$. Namely there exists a positive number $C$ satisfying
\[ \text{\rm ind}_p P_p(X) < C\cdot p \]
for all prime numbers $p$.
\end{thm}
So the sequence $\text{\rm ind}_p P_p(X)$ $(p=2,3,5,\dots)$ cannot be an arbitrary sequence.
It has a nontrivial restriction.
But, \textit{is this restriction optimal?
Is there a fixed-point free dynamical system $(X, T)$ such that
\[ \text{\rm ind}_p P_p(X) >C\cdot p \]
for some positive number $C$ and all sufficiently large prime numbers $p$?}
This is a difficult question because (at least for our current technology) it is hard to estimate
$\text{\rm ind}_p P_p(X)$ from below.
Indeed even the following simpler question has been open.
\begin{problem}[\cite{tsukamoto2020markerproperty}, Problem 7.2]\label{prob:1}
Is there a fixed-point free dynamical system $(X, T)$ such that the sequence
$$\text{\rm ind}_p P_p(X), \quad (p = 2, 3, 5, 7, 11, \cdots)$$
is unbounded?
\end{problem}
The main purpose of this paper is to solve this problem affirmatively.
\subsection{Main result}
Let $\mathcal{S}=\mathbb{R}/2\mathbb{Z}$. Let $\rho$ be a $\mathcal{S}$-invariant metric on $\mathcal{S}$ defined by
$$
\rho(x, y)=\min_{n\in \mathbb{Z}} |x-y-2n|.
$$
Let $\sigma$ be the (left)-shift on $\mathcal{S}^\mathbb{Z}$.
Define a subsystem of $(\mathcal{S}^\mathbb{Z}, \sigma)$ by
$$
\mathcal{Z}:=\left\{(x_n)_{n\in \mathbb{Z}} \in \mathcal{S}^\mathbb{Z} :
\forall n\in \mathbb{Z}, ~\text{either}~\rho(x_{n-1}, x_{n})\ge \frac{1}{2} ~\text{or}~ \rho(x_{n}, x_{n+1})\ge \frac{1}{2} \right\}.
$$
Obviously, the dynamical system $(\mathcal{Z}, \sigma)$ has no fixed points and $P_p(\mathcal{Z}, \sigma)\not=\emptyset$ for all prime numbers $p$.
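The defining condition of $\mathcal{Z}$ is easy to check mechanically on periodic points. The following sketch (the helper names \texttt{rho} and \texttt{periodic\_point\_in\_Z} are ours, not from the text) implements the invariant metric on $\mathcal{S}=\mathbb{R}/2\mathbb{Z}$ and tests whether the periodic extension of a finite word lies in $\mathcal{Z}$:

```python
def rho(x, y):
    # the S-invariant metric on S = R/2Z: min over n of |x - y - 2n|
    d = (x - y) % 2.0
    return min(d, 2.0 - d)

def periodic_point_in_Z(word):
    # `word` is one period (x_0, ..., x_{p-1}); its periodic extension lies
    # in Z iff for every n, rho(x_{n-1}, x_n) >= 1/2 or rho(x_n, x_{n+1}) >= 1/2.
    p = len(word)
    return all(
        rho(word[(n - 1) % p], word[n]) >= 0.5
        or rho(word[n], word[(n + 1) % p]) >= 0.5
        for n in range(p)
    )
```

For example, the $3$-periodic word $(0,1,0)$ satisfies the condition, while $(0, 0.1, 0.2)$ does not.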
Now we state our main result.
\begin{thm}\label{main thm}
We have $$\lim_{p\to \infty} \text{\rm coind}_p\, P_p(\mathcal{Z}, \sigma) =\infty,$$
where $p$ runs over prime numbers.
\end{thm}
Since we know
\[ \text{\rm coind}_p P_p\left(\mathcal{Z}, \sigma\right) \leq \text{\rm ind}_p P_p\left(\mathcal{Z}, \sigma\right), \]
we also have
$$ \lim_{p\to \infty} \text{\rm ind}_p P_p\left(\mathcal{Z}, \sigma\right) = \infty. $$
So this solves Problem \ref{prob:1} affirmatively.
We would like to remark that our proof of Theorem \ref{main thm} is \textit{noneffective}.
We cannot figure out the actual growth rate of $\text{\rm coind}_p P_p(\mathcal{Z}, \sigma)$ from our proof.
This remains a task for future study.
The main difficulty is that (at least for the authors) it is very hard to directly study the topology of
$P_p(\mathcal{Z},\sigma)$.
Our proof is indirect and uses the \textit{marker property} (see \S \ref{sec:marker property}).
Using Theorem \ref{main thm}, we can also construct some other
fixed-point free dynamical systems
having divergent coindex sequence.
Let $\rho_N$ be a $\mathcal{S}^N$-invariant metric on $\mathcal{S}^N$ defined by
$$
\rho_N\left( (x_i)_{i=1}^N, (y_i)_{i=1}^N \right)=\max_{1\le i\le N} \rho(x_i, y_i).
$$
For a positive integer $N$ and $\delta>0$, we define
$$
\mathcal{X}(\mathcal{S}^N, 1, \delta):=\left\{(x_n)_{n\in \mathbb{Z}}\in (\mathcal{S}^N)^\mathbb{Z}: \rho_N(x_n, x_{n+1})\ge \delta, ~\forall n\in \mathbb{Z} \right\}.
$$
This notation might look a bit strange. Its meaning will become clearer in \S \ref{section: inverse limit}.
The system $\mathcal{X}(\mathcal{S}^N, 1, \delta)$ has no fixed point.
\begin{lem}\label{lem:embedding Z}
We have the following equivariant embeddings:
$$
\mathcal{X}(\mathcal{S}, 1, 1/2) \hookrightarrow (\mathcal{Z}, \sigma) \hookrightarrow \mathcal{X}(\mathcal{S}^2, 1, 1/2)\hookrightarrow \mathcal{X}(\mathcal{S}^N, 1, 1/2),
$$
for all integers $N\ge 2$.
\end{lem}
\begin{proof}
The embeddings $\mathcal{X}(\mathcal{S}, 1, 1/2) \hookrightarrow (\mathcal{Z}, \sigma)$
and $\mathcal{X}(\mathcal{S}^2, 1, 1/2)\hookrightarrow \mathcal{X}(\mathcal{S}^N, 1, 1/2)$ are canonical for $N\ge 2$.
Define $f: (\mathcal{Z}, \sigma) \to \mathcal{X}(\mathcal{S}^2, 1, 1/2)$ by $(x_k)_{k\in \mathbb{Z}} \mapsto (x_k, x_{k+1})_{k\in \mathbb{Z}}$.
It is easy to check that $f$ is an equivariant embedding.
\end{proof}
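The middle embedding can also be verified numerically on periodic points: the either/or condition defining $\mathcal{Z}$ is exactly the statement that consecutive pairs $(x_k, x_{k+1})$ move by at least $1/2$ in the max metric $\rho_2$. A sketch (helper names are ours):

```python
def rho(x, y):
    # the invariant metric on S = R/2Z
    d = (x - y) % 2.0
    return min(d, 2.0 - d)

def embed(word):
    # f: (x_k) -> (x_k, x_{k+1}), applied to one period of a periodic point
    p = len(word)
    return [(word[k], word[(k + 1) % p]) for k in range(p)]

def in_X_S2(pairs, delta=0.5):
    # defining condition of X(S^2, 1, delta) with the max metric rho_2
    p = len(pairs)
    return all(
        max(rho(pairs[n][0], pairs[(n + 1) % p][0]),
            rho(pairs[n][1], pairs[(n + 1) % p][1])) >= delta
        for n in range(p)
    )
```

Indeed, $\rho_2\left((x_n, x_{n+1}), (x_{n+1}, x_{n+2})\right)=\max\left(\rho(x_n, x_{n+1}), \rho(x_{n+1}, x_{n+2})\right)\ge 1/2$ by the definition of $\mathcal{Z}$.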
Combining Lemma \ref{lem:embedding Z} and Theorem \ref{main thm}, we get that for all integers $N\ge 2$
and $0<\delta<1/2$
$$\lim_{p\to \infty} \text{\rm coind}_p{P_p\left(\mathcal{X}(\mathcal{S}^N, 1, \delta)\right)}=\infty,$$
where $p$ runs over prime numbers.
\section{Preliminaries}
\subsection{Properties of $\mathbb{Z}_p$-coindex}
Let $(X, T)$ be a dynamical system.
Following \cite{shi2021marker}, for simplifying notations,
we define the \textit{periodic coindex} of $(X,T)$ as
$$
\text{\rm coind}_p^{\rm Per}(X,T)=\text{\rm coind}_p(P_p(X,T), T),
$$
for prime numbers $p$. The following lemma is essentially due to \cite[Proposition 3.1]{tsukamoto2020markerproperty}. See also the proof in \cite[Corollary 3.3]{shi2021marker}.
\begin{lem}\label{lem:basic property}
Let $(X, T)$ and $(Y, S)$ be fixed-point free dynamical systems. Let $p$ be a prime number. Then the following properties hold.
\begin{itemize}
\item [(1)] If there is an equivariant continuous map $f: X\to Y$ then $\text{\rm coind}^{\rm Per}_p(X,T)\le \text{\rm coind}^{\rm Per}_p(Y, S)$.
\item [(2)] The system $(X*Y, T* S)$ has no fixed points and $\text{\rm coind}^{\rm Per}_p(X*Y, T* S)\ge \text{\rm coind}^{\rm Per}_p(X,T)+\text{\rm coind}^{\rm Per}_p(Y, S)+1$.
\end{itemize}
\end{lem}
\subsection{Marker property}\label{sec:marker property}
A dynamical system $(X,T)$ is said to satisfy the {\bf marker property} if for each positive integer $N$ there exists an open set $U\subset X$ satisfying
$$
U\cap T^{-n}U=\emptyset~\text{for all}~0<n<N~\text{and,}~X=\bigcup_{n\in \mathbb{Z}} T^{n}U.
$$
For example, an extension of an aperiodic minimal system has
the marker property.
Gutman \cite[Theorem 6.1]{Gut15Jaworski} proved that
every finite dimensional aperiodic dynamical system has the marker property.
Here a dynamical system $(X, T)$ is said to be finite dimensional if the topological dimension (a.k.a. the Lebesgue covering
dimension) of $X$ is finite.
The marker property has been intensively used in the context of mean dimension theory.
At first sight, the marker property seems to have nothing to do with the study of the $\mathbb{Z}_p$-index.
But in fact it does.
We can easily see that if a dynamical system has the marker property then
it is aperiodic.
It had been an open problem for several years whether the converse holds or not.
This problem was solved by \cite{tsukamoto2020markerproperty}.
They constructed an aperiodic dynamical system which does not have the marker property.
The main ingredient of their proof is the $\mathbb{Z}_p$-index theory\footnote{This was a main motivation for
\cite{tsukamoto2020markerproperty} to study the interaction between $\mathbb{Z}_p$-index theory and topological dynamics.}.
The first-named author \cite{shi2021marker} further developed the argument and proved that
there exists a finite mean dimensional aperiodic dynamical system which does not have the marker property.
The proof of Theorem \ref{main thm} uses the method developed in \cite{shi2021marker}.
\section{Inverse limit of a family of dynamical systems} \label{section: inverse limit}
In this section, we follow \cite[Section 5]{shi2021marker} and generalize the results there from $(\mathbb{R}/2\mathbb{Z})^{\mathbb{Z}}$ to infinite products of a general compact metrizable abelian group.
Let $G$ be a compact metrizable abelian group. Then there is a $G$-invariant metric $\rho$ on $G$ which is compatible with its topology (\cite{struble1974metrics}), i.e. $\rho(x+g,y+g)=\rho(x,y)$ for any $x,y,g\in G$. Let $\sigma$ be the (left)-shift on $G^\mathbb{Z}$, i.e. $\sigma((x_k)_{k\in \mathbb{Z}} )=(x_{k+1})_{k\in \mathbb{Z}}$. For any positive integer $m$ and any number $\delta>0$, we define a subsystem $(\mathcal{X}(G, m, \delta), \sigma)$ of $(G^\mathbb{Z}, \sigma)$ by
$$
\mathcal{X}(G, m, \delta):=\{(x_n)_{n\in \mathbb{Z}}\in G^\mathbb{Z}: \rho(x_n, x_{n+m!})\ge \delta, ~\forall n\in \mathbb{Z} \},
$$
where $m!=m\cdot (m-1)\cdot \dots \cdot 2 \cdot 1$.
It is clear that $(\mathcal{X}(G, m, \delta), \sigma)$ has no fixed points. For convenience, we write $(X_m, T_m):=(\mathcal{X}(G, m, \delta), \sigma)$ when $G$ and $\delta$ are fixed.
For $m>1$, we define an equivariant continuous map $\theta_{m,m-1}$ from $X_m$ to $G^\mathbb{Z}$ by
$$
(x_{k})_{k\in \mathbb{Z}} \mapsto \left(\sum_{i=0}^{m-1}x_{k+i (m-1)!} \right)_{k\in \mathbb{Z}}.
$$
A simple computation shows that
\begin{equation*}
\begin{split}
&\rho\left(\sum_{i=0}^{m-1}x_{k+i\cdot (m-1)!} , \sum_{i=0}^{m-1}x_{(k+(m-1)!)+i\cdot (m-1)!} \right)\\
=&\rho\left(\sum_{i=0}^{m-1}x_{k+i\cdot (m-1)!} , \sum_{i=1}^{m}x_{k+i\cdot (m-1)!}\right)\\
=&\rho\left(x_{k} , x_{k+ m\cdot (m-1)!} \right)=\rho\left(x_{k} , x_{k+ m!} \right),~\forall k\in \mathbb{Z}.
\end{split}
\end{equation*}
Then we obtain that the image of $X_m$ under $\theta_{m,m-1}$ is contained in $X_{m-1}$. For $m>n$, we define $\theta_{m,n}=\theta_{m,m-1}\circ \theta_{m-1,m-2} \circ \dots \circ \theta_{n+1,n}$, which is an equivariant continuous map from $X_m$ to $X_n$.
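The telescoping computation above can be checked numerically. The sketch below (with $G=\mathbb{R}/2\mathbb{Z}$ and helper names of our own) verifies, on random windows of a sequence, that $\rho\left(\theta_{m,m-1}(x)_k, \theta_{m,m-1}(x)_{k+(m-1)!}\right)=\rho\left(x_k, x_{k+m!}\right)$:

```python
import math
import random

def rho(x, y):
    # the invariant metric on G = R/2Z
    d = (x - y) % 2.0
    return min(d, 2.0 - d)

def check_theta_identity(m):
    fm, fm1 = math.factorial(m), math.factorial(m - 1)
    # a random window of a sequence in G^Z, long enough for all indices used
    N = 3 * fm
    x = [random.uniform(0.0, 2.0) for _ in range(N)]
    # theta_{m,m-1}(x)_k = sum_{i=0}^{m-1} x_{k + i*(m-1)!}  (in G)
    tx = [sum(x[k + i * fm1] for i in range(m)) % 2.0
          for k in range(N - m * fm1)]
    # the telescoping identity from the text
    return all(
        abs(rho(tx[k], tx[k + fm1]) - rho(x[k], x[k + fm])) < 1e-9
        for k in range(fm)
    )
```

The check passes for arbitrary sequences, reflecting that the computation is purely algebraic in the abelian group.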
Fix $a=(a_k)_{k\in \mathbb{Z}}\in G^\mathbb{Z}.$ For $m\ge 2$, we define a map $\eta_{m-1,m}=\eta_{m-1,m}^a: X_{m-1} \to G^\mathbb{Z}$ by
$\eta_{m-1, m}((x_k)_{k\in \mathbb{Z}})=(y_k)_{k\in \mathbb{Z}}$ where
\begin{equation*}
y_k=
\begin{cases}
a_k~&\text{if}~0\le k\le (m-1)\cdot (m-1)!-1,\\
x_{k-(m-1)\cdot (m-1)!}-\sum_{i=1}^{m-1}a_{k-i\cdot (m-1)!}~&\text{if}~(m-1)\cdot (m-1)!\le k\le m!-1,\\
\sum_{i=0}^{n-1} \left( x_{i\cdot m!+(m-1)!+j}-x_{i\cdot m!+j} \right) +y_j &\text{if}~k=n\cdot m!+j \\
& ~\text{with}~n>0~\text{and}~0\le j\le m!-1,\\
\sum_{i=n}^{-1} \left( x_{i\cdot m!+j}-x_{i\cdot m!+(m-1)!+j} \right) +y_j &\text{if}~k= n\cdot m!+j\\
&~\text{with}~n<0~\text{and}~0\le j\le m!-1.
\end{cases}
\end{equation*}
Obviously, the map $\eta_{m-1,m}$ is continuous. We remark that the map $\eta_{m-1,m}$ is not equivariant. We show several properties of $\eta_{m-1,m}$ in the following lemmas.
\begin{lem}\label{lem:image of eta}
For $m\ge 2$, $\eta_{m-1,m}(X_{m-1})\subset X_m$.
\end{lem}
\begin{proof}
Let $x\in X_{m-1}$ and $y=\eta_{m-1,m}(x)$. Let $k=n\cdot m!+j$ with $n\in \mathbb{Z}$ and $0\le j\le m!-1$. We divide the proof into the following three cases according to the value of $n$.
\vspace{5pt}
\noindent Case 1. $n=1$. We have
$$
y_k-y_{k-m!}=x_{(m-1)!+j}-x_j+y_j-y_j=x_{(m-1)!+j}-x_{j}.
$$
Since $x\in X_{m-1}$, we have
$$
\rho(y_{k}, y_{k-m!})=\rho(x_{(m-1)!+j}, x_{j})\ge \delta.
$$
\vspace{5pt}
\noindent Case 2. $n\ge 2$. A simple computation shows that
\begin{equation*}
\begin{split}
&y_{k}-y_{k-m!}\\
=&\sum_{i=0}^{n-1} \left( x_{i\cdot m!+(m-1)!+j}-x_{i\cdot m!+j} \right) - \sum_{i=0}^{n-2} \left( x_{i\cdot m!+(m-1)!+j}-x_{i\cdot m!+j} \right)\\
=&x_{(n-1) m!+(m-1)!+j}-x_{(n-1)\cdot m!+j}=x_{k-m!+(m-1)!}-x_{k-m!}.
\end{split}
\end{equation*}
It follows that
$$
\rho(y_{k}, y_{k-m!})=\rho(x_{k-m!+(m-1)!}, x_{k-m!})\ge \delta.
$$
\vspace{5pt}
\noindent Case 3. $n\le 0$. Similarly to Cases 1 and 2, we have
\begin{equation*}
y_{k}-y_{k-m!}=x_{k-m!+(m-1)!}-x_{k-m!},
\end{equation*}
and consequently $\rho(y_{k}, y_{k-m!})\ge\delta.$
\vspace{5pt}
To sum up, we conclude that $y\in X_m$, and hence $\eta_{m-1,m}(X_{m-1})\subset X_m$.
\end{proof}
By Lemma \ref{lem:image of eta}, we see that $\eta_{m-1, m}$ maps $X_{m-1}$ into $X_m$. Moreover, we show in the following that $\eta_{m-1, m}$ is indeed a right inverse of $\theta_{m, m-1}$.
\begin{lem}\label{lem:identity}
$\theta_{m,m-1}\circ\eta_{m-1,m}=\text{id}, \forall m\ge 2.$
\end{lem}
\begin{proof}
Let $x\in X_{m-1}$ and $y=\eta_{m-1,m}(x)$. Then we have
$$
\theta_{m,m-1}\circ\eta_{m-1,m}(x)=\theta_{m,m-1}(y)=\left(\sum_{i=0}^{m-1}y_{k+i\cdot (m-1)!} \right)_{k\in \mathbb{Z}}.
$$
If $0\le k\le (m-1)!-1$, then
\begin{equation*}
\begin{split}
&\sum_{i=0}^{m-1}y_{k+i\cdot (m-1)!}\\
=&\sum_{i=0}^{m-2}a_{k+i\cdot (m-1)!}+ \left(x_{k}-\sum_{i=1}^{m-1}a_{k+(m-1-i)\cdot (m-1)!}\right)
=x_{k}.
\end{split}
\end{equation*}
If $k=s\cdot m!+t\cdot (m-1)!+j\ge (m-1)!$ for $s\ge 0$, $0\le t\le m-1$ and $0\le j\le (m-1)!-1$, then
\begin{equation*}
\begin{split}
&\sum_{i=0}^{m-1}y_{k+i\cdot (m-1)!}\\
=& \sum_{i=t}^{m-1}y_{s\cdot m!+i\cdot (m-1)!+j}+\sum_{i=0}^{t-1}y_{(s+1) m!+i \cdot (m-1)!+j}\\
=& \sum_{i=t}^{m-1}\sum_{\ell=0}^{s-1} \left( x_{\ell \cdot m!+(i+1)\cdot (m-1)!+j}-x_{\ell\cdot m!+i\cdot (m-1)!+j} \right) \\
&+\sum_{i=0}^{t-1}\sum_{\ell=0}^{s} \left( x_{\ell \cdot m!+(i+1)\cdot (m-1)!+j}-x_{\ell\cdot m!+i\cdot (m-1)!+j} \right) +\sum_{i=0}^{m-1}y_{i\cdot (m-1)!+j}\\
=&\sum_{\ell=0}^{s-1}\sum_{i=0}^{m-1} \left( x_{\ell \cdot m!+(i+1)\cdot (m-1)!+j}-x_{\ell\cdot m!+i\cdot (m-1)!+j} \right) \\
&+\sum_{i=0}^{t-1}\left( x_{s \cdot m!+(i+1)\cdot (m-1)!+j}-x_{s\cdot m!+i\cdot (m-1)!+j} \right)+ x_j\\
=&\sum_{\ell=0}^{s-1} \left( x_{(\ell+1) \cdot m!+j}-x_{\ell\cdot m! +j} \right)
+\left( x_{s \cdot m!+t\cdot (m-1)!+j}-x_{s\cdot m!+j} \right)+ x_j\\
=& x_{s\cdot m!+j}-x_j+x_{k}-x_{s\cdot m!+j} +x_j=x_k.
\end{split}
\end{equation*}
If $k=s\cdot m!+t\cdot (m-1)!+j$ for $s<0$, $0\le t\le m-1$ and $0\le j\le (m-1)!-1$, then by a computation similar to the case $s\ge 0$, we have $\sum_{i=0}^{m-1}y_{k+i\cdot (m-1)!}=x_k$. This completes the proof.
\end{proof}
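Since the proof of Lemma \ref{lem:identity} is a purely algebraic telescoping argument, the identity $\theta_{m,m-1}\circ\eta_{m-1,m}=\mathrm{id}$ can be verified mechanically, even on arbitrary random sequences. The following sketch (our own names, with $G=\mathbb{R}/2\mathbb{Z}$) implements the piecewise definition of $\eta_{m-1,m}$ verbatim, representing sequences as functions $\mathbb{Z}\to G$:

```python
import math
import random

def rho(x, y):
    # the invariant metric on G = R/2Z
    d = (x - y) % 2.0
    return min(d, 2.0 - d)

def make_eta(x, a, m):
    # eta_{m-1,m}: implements the piecewise definition, with x, a : Z -> G
    # given as functions and values taken mod 2
    fm, fm1 = math.factorial(m), math.factorial(m - 1)

    def base(j):  # y_j for 0 <= j <= m! - 1
        if j <= (m - 1) * fm1 - 1:
            return a(j)
        return (x(j - (m - 1) * fm1)
                - sum(a(j - i * fm1) for i in range(1, m))) % 2.0

    def y(k):
        n, j = divmod(k, fm)  # k = n*m! + j with 0 <= j <= m! - 1
        if n == 0:
            return base(j)
        if n > 0:
            s = sum(x(i * fm + fm1 + j) - x(i * fm + j) for i in range(n))
        else:
            s = sum(x(i * fm + j) - x(i * fm + fm1 + j) for i in range(n, 0))
        return (s + base(j)) % 2.0

    return y

def check_theta_eta(m, K=4):
    # verify theta_{m,m-1}(eta_{m-1,m}(x))_k == x_k on a window of indices
    fm, fm1 = math.factorial(m), math.factorial(m - 1)
    xs = {k: random.uniform(0.0, 2.0) for k in range(-K * fm, K * fm)}
    aa = {k: random.uniform(0.0, 2.0) for k in range(-K * fm, K * fm)}
    y = make_eta(xs.__getitem__, aa.__getitem__, m)
    return all(
        rho(sum(y(k + i * fm1) for i in range(m)) % 2.0, xs[k]) < 1e-9
        for k in range(-(K - 2) * fm, (K - 2) * fm - m * fm1)
    )
```

For instance, for $m=2$ one computes by hand $y_0=a_0$, $y_1=x_0-a_0$, so $\theta(y)_0=y_0+y_1=x_0$, in agreement with the check.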
\begin{cor}\label{cor:properties of maps}
Let $K$ be a non-negative integer. Then the following properties hold for $m>n$.
\begin{itemize}
\item [(i)] The map
$
\theta_{m,n}^{*(K+1)}: X_m^{*(K+1)} \to X_n^{*(K+1)}
$
is equivariant, continuous and surjective.
\item [(ii)] The map $\eta_{n,m}^{*(K+1)}: X_{n}^{*(K+1)} \to X_m^{*(K+1)}$ is equivariant and continuous.
\item [(iii)] The map $\eta_{n,m}^{*(K+1)}$ is a continuous right-inverse of $\theta_{m,n}^{*(K+1)}$, i.e. $\theta_{m,n}^{*(K+1)}\circ \eta_{n,m}^{*(K+1)}={\rm id}.$
\end{itemize}
\end{cor}
\begin{proof}
By the definition of the join of spaces and maps, (i), (ii) and (iii) follow immediately from the arguments in this section.
\end{proof}
\section{Proof of Theorem \ref{main thm}}
Let $\mathbb{Z}_3:=\mathbb{Z}/3\mathbb{Z}$ as before. A $\mathbb{Z}_3$-invariant metric $\rho$ on this finite abelian group is given by $\rho(x,y)=\delta_0(x-y)$, where $\delta_0(a)=0$ if $a=0$ and $\delta_0(a)=1$ otherwise, i.e. $\rho$ is the discrete metric. For $m\ge 1$ and $0<\delta<1$, the subshift $(\Sigma_m, \sigma):=\mathcal{X}(\mathbb{Z}_3, m, \delta)$ of the full shift $(\mathbb{Z}_3^\mathbb{Z}, \sigma)$ has the form:
$$
\Sigma_m=\{(x_n)_{n\in \mathbb{Z}}\in \mathbb{Z}_3^\mathbb{Z}: x_n\not= x_{n+m!}, ~\forall n\in \mathbb{Z} \}.
$$
\begin{lem}\label{lem:finite periodic point}
The dynamical system $(\Sigma_m, \sigma)$ has no fixed point. The set $P_p(\Sigma_m, \sigma)$ is nonempty and finite for every prime number $p>m!$.
\end{lem}
\begin{proof}
It is obvious that $P_1(\Sigma_m, \sigma)=\emptyset$. Let $p$ be a prime number with $p>m!$. Notice that
$$P_p(\Sigma_m, \sigma)=\{(x_i)_{i\in \mathbb{Z}_p}\in \mathbb{Z}_3^{\mathbb{Z}_p}: x_i\not= x_{i+m!}, ~\forall i\in \mathbb{Z}_p \}.$$
Since $p>m!$, we see that $p$ and $m!$ are coprime.
Let $y_k=x_{k\cdot m! \bmod p}$. It follows that
$$P_p(\Sigma_m, \sigma)\cong\{(y_k)_{k\in \mathbb{Z}_p}\in \mathbb{Z}_3^{\mathbb{Z}_p}: y_k\not=y_{k+1}, k\in \mathbb{Z}_p\}. $$
It is easy to check that the right-hand side set is nonempty and finite (see also \cite[Lemma 4.1]{tsukamoto2020markerproperty}).
\end{proof}
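For concreteness, the right-hand side set can be enumerated by brute force (the function name below is ours). The resulting count agrees with the number $2^p+2(-1)^p$ of proper $3$-colorings of a $p$-cycle:

```python
from itertools import product

def count_cyclic_words(p):
    # cyclic words (y_0, ..., y_{p-1}) over Z_3 with y_k != y_{k+1 mod p};
    # by the proof above, this is the cardinality of P_p(Sigma_m, sigma)
    # whenever p > m! is prime.
    return sum(
        all(w[k] != w[(k + 1) % p] for k in range(p))
        for w in product(range(3), repeat=p)
    )
```

For example, `count_cyclic_words(5)` returns $30 = 2^5 + 2(-1)^5$, which is nonzero and finite, as the lemma asserts.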
\begin{lem}\label{lem:p>m!}
Let $K\ge 0$. Then
$\text{\rm coind}^{\rm Per}_p(\Sigma_m^{*(K+1)}, \sigma^{*(K+1)})=K$ for all prime numbers $p>m!$.
\end{lem}
\begin{proof}
By Lemma \ref{lem:finite periodic point},
we get that $P_p(\Sigma_m^{*(K+1)}, \sigma^{*(K+1)})=P_p(\Sigma_m, \sigma)^{*(K+1)}$
which is an $E_K\mathbb{Z}_p$-space for any prime number $p>m!$.
Thus we have $\text{\rm coind}_p{P_p(\Sigma_m^{*(K+1)}, \sigma^{*(K+1)})}=K$.
This completes the proof.
\end{proof}
Let $\mathcal{S}=\mathbb{R}/2\mathbb{Z}$. Let $\rho$ be a $\mathcal{S}$-invariant metric on $\mathcal{S}$ defined by
$$
\rho(x, y)=\min_{n\in \mathbb{Z}} |x-y-2n|.
$$
Define
$$
\mathcal{Y}:=\{(x_n)_{n\in \mathbb{Z}}\in \mathcal{S}^\mathbb{Z}: \forall n\in \mathbb{Z}, ~\text{either}~\rho(x_{n-1}, x_{n})=1 ~\text{or}~ \rho(x_{n}, x_{n+1})=1 \}.
$$
This system is related to the marker property by the next lemma.
\begin{lem}[\cite{tsukamoto2020markerproperty}, Lemma 5.3]\label{lem:to Y}
Let $(X,T)$ be a dynamical system having the marker property. Then there
is an equivariant continuous map from $(X, T)$ to $(\mathcal{Y}, \sigma)$.
\end{lem}
We recall the definition of the dynamical system $\mathcal{Z}$.
It is a subsystem of $(\mathcal{S}^\mathbb{Z}, \sigma)$ defined by
$$
\mathcal{Z}:=\left\{(x_n)_{n\in \mathbb{Z}} \in \mathcal{S}^\mathbb{Z} :
\forall n\in \mathbb{Z}, ~\text{either}~\rho(x_{n-1}, x_{n})\ge \frac{1}{2} ~\text{or}~ \rho(x_{n}, x_{n+1})\ge \frac{1}{2} \right\}.
$$
The dynamical system $(\mathcal{Z}, \sigma)$ has no fixed points and
$P_p(\mathcal{Z}, \sigma)\not=\emptyset$ for all prime numbers $p$.
The following proposition is essentially due to \cite[Lemma 7.5]{shi2021marker}.
\begin{prop}\label{prop:inverse limit}
Let $(X,T)$ be the inverse limit of a family of dynamical systems $\{(X_n, T_n) \}_{n\in \mathbb{N}}$ via $\tau=(\tau_{m,n})_{m,n\in \mathbb{N}, m>n}$ where $\tau_{m,n}: X_m\to X_n$ are equivariant continuous maps. Suppose there is a continuous right-inverse $\gamma=(\gamma_{n,m})_{n,m\in \mathbb{N}, m>n}$, i.e. $\gamma_{n,m}: X_n\to X_m$ are continuous maps with $\tau_{m,n}\circ \gamma_{n,m}={\rm id}$ for $m>n$.
If there is an equivariant continuous map $f: (X,T) \to (\mathcal{Y}, \sigma)$, then
there exists an integer $M$ and an equivariant continuous map
$$g: (X_M, T_M) \longrightarrow (\mathcal{Z}, \sigma). $$
\end{prop}
\begin{proof}
Let $\pi_m: X\to X_m$ be the natural projection for $m\in \mathbb{N}$. Let $P_1: \mathcal{S}^\mathbb{Z} \to \mathcal{S}$ be the projection onto the $0$-th coordinate.
Define $\phi=P_1\circ f: X \to \mathcal{S}$. Then $f(x)=(\phi(T^nx))_{n\in \mathbb{Z}}$ for any $x\in X$. Since $X$ is compact and $\phi$ is continuous, there exists an integer $M>0$ such that
\begin{equation}\label{eq:3}
\pi_M(x)=\pi_M(y) \Longrightarrow \rho(\phi(x), \phi(y))<\frac{1}{4}.
\end{equation}
For $m\ge 1$, we define a continuous map $\gamma_m: X_m \to X$ by
$$
x\mapsto (\tau_{m,1}(x), \tau_{m,2}(x), \dots, \tau_{m,m-1}(x), x, \gamma_{m,m+1}(x), \gamma_{m,m+2}(x), \dots ).
$$Define a continuous map $\varphi=\phi\circ \gamma_M: X_M \to \mathcal{S}$ and an equivariant continuous map $g: X_M \to \mathcal{S}^\mathbb{Z}$ by
$$
x \mapsto (\varphi(T_M^n(x)))_{n\in \mathbb{Z}}.
$$
Since $\pi_M\circ \gamma_M={\rm id}$ and $\pi_M\circ T=T_M\circ \pi_M$, it follows from \eqref{eq:3} that
\begin{equation}\label{eq:4}
\rho\left(\phi(\gamma_M(T_M^n(x)) ), \phi(T^n(\gamma_M(x)) )\right)<\frac{1}{4}, \forall n\in \mathbb{Z}.
\end{equation}
Fix $x\in X_M$ and $n\in \mathbb{Z}$. By definitions of $\mathcal{Y}$ and $f$, there exists an $i\in \{0,1\}$ such that
\begin{equation}\label{eq:5}
\rho(\phi(T^{n+i}(\gamma_M(x))),\phi(T^{n+i+1}(\gamma_M(x))) )=1.
\end{equation}
Combining \eqref{eq:5} with \eqref{eq:4}, we obtain that
\begin{equation*}
\begin{split}
&\rho\left(\phi(\gamma_M(T_M^{n+i}x) ), \phi(\gamma_M(T_M^{n+i+1}x) )\right)\\
\ge
&~ \rho(\phi(T^{n+i}(\gamma_M(x))),\phi(T^{n+i+1}(\gamma_M(x))) ) \\
&\quad - \rho\left(\phi(\gamma_M(T_M^{n+i}x) ), \phi(T^{n+i}(\gamma_M(x)) )\right)\\
&\qquad -\rho\left(\phi(\gamma_M(T_M^{n+i+1}x) ), \phi(T^{n+i+1}(\gamma_M(x)) )\right)\\
\ge&~ 1-\frac{1}{4}-\frac{1}{4}=\frac{1}{2}.
\end{split}
\end{equation*}
Since $\varphi=\phi\circ \gamma_M$, we have that $$\rho\left(\varphi(T_M^{n+i}x), \varphi(T_M^{n+i+1}x) \right)\ge \frac{1}{2}.$$
By definition of $g$ and arbitrariness of $n$ and $x$, we conclude that the image of $X_M$ under $g$ is contained in $\mathcal{Z}$. This completes the proof.
\end{proof}
Now we present the proof of our main result.
\begin{proof}[Proof of Theorem \ref{main thm}]
Let $K\ge 0$.
Let $(X,T)$ be the inverse limit of the family $\{(\Sigma_n^{*(K+1)}, \sigma^{*(K+1)}) \}_{n\in \mathbb{N}}$ via $\theta=(\theta_{m,n}^{*(K+1)})_{m,n\in \mathbb{N}, m>n}$; by Corollary \ref{cor:properties of maps} (i), these bonding maps are equivariant, continuous and surjective. Since for every $m\ge 1$,
$$P_{m!}(\Sigma_m^{*(K+1)}, \sigma^{*(K+1)})=P_{m!}(\Sigma_m, \sigma)^{*(K+1)}=\emptyset,$$
we see that $(X,T)$ is aperiodic.
Since $\Sigma_m$ is $0$-dimensional, we have that $\Sigma_m^{*(K+1)}$ is at most of dimension $K$ for any $m\ge 1$ and consequently $X$ is at most of dimension $K$ (\cite[Section 6]{nagami1970dimension}). Since an aperiodic finite dimensional dynamical system has the marker property (\cite[Theorem 6.1]{Gut15Jaworski}), the dynamical system $(X, T)$ has the marker property. By Lemma \ref{lem:to Y}, there is an equivariant continuous map from $(X,T)$ to $(\mathcal{Y}, \sigma)$. It follows from Corollary \ref{cor:properties of maps} and Proposition \ref{prop:inverse limit} that there exist an integer $M$ and an equivariant continuous map from $(\Sigma_M^{*(K+1)}, \sigma^{*(K+1)})$ to $(\mathcal{Z}, \sigma)$. By Lemma \ref{lem:basic property}, we have
$$
\text{\rm coind}^{\rm Per}_p(\mathcal{Z}, \sigma)\ge \text{\rm coind}^{\rm Per}_p(\Sigma_M^{*(K+1)}, \sigma^{*(K+1)})=K,
$$
for all prime numbers $p>M!$. Since $K$ is chosen arbitrarily, we conclude that
$$
\lim_{p\to \infty} \text{\rm coind}^{\rm Per}_p(\mathcal{Z}, \sigma)=\infty.
$$
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
One of the main benefits of using Vector Quantization Variational Autoencoders (VQ-VAE) for speech synthesis is that this architecture facilitates learning rich representations of speech \cite{zhao2020improved, williams2020learning, oord2017neural, yasuda2020end} in the form of discrete latent sequences. These learned representations come from vector-quantized \textit{codebooks} that behave as a clustering space with prototype centroids. Each entry in a codebook is represented by a pair consisting of a \textit{code} (also known as an index or token) and its corresponding vector. The code is a discrete integer value, and the vector is a learned \textit{n}-dimensional array of continuous values. In this paper, we are interested in the content and usefulness of codebooks after a VQ-VAE model has been trained. Specifically, we are the first to compare multilingual and monolingual VQ-VAE codebook representations for phone and speaker, with the aim to observe how well they adapt to voice transformation, linguistic code-switching and content-masking.
The original VQ-VAE architecture design was based on a single VQ space: one encoder, one VQ codebook, and one decoder. That design proved to be useful across different objectives in image, video, and speech processing \cite{oord2017neural}. Since then, others have shown that the architecture could be expanded by stacking encoders which result in learning multiple different VQ spaces at the same time \cite{williams2020learning,zhang2019learning} or even hierarchical representations \cite{dhariwal2020jukebox}. These extended models provide more generalization capability, in part because they learn richer representations.
It is possible to model multiple types of information in the speech signal with little or no supervision. In the process of learning to represent different types of information, the stacked VQ-VAE architectures are also providing a means to separate informational factors. This act of separating information from representations is known by several names, including factorization and disentanglement. Traditionally, factorization has served the purpose of removing irrelevant information from a representation such as a speaker embedding -- and then discarding what had been deemed irrelevant \cite{dehak2010front}. After information has been removed, it could be argued that a representation is in some way more ``pure''. On the other hand, disentanglement retains information. At the time of this writing we use the term \textit{disentanglement} to describe the phenomenon of isolating multiple types of distributed information from one source, into separate external representations. Functionally, this is a form of distributed representation learning.
Currently there are no single-best techniques to measure the intrinsic goodness of disentangled representations apart from probing how well they perform in extrinsic tasks \cite{Raj_2019, peri2020empirical, williams2019disentangling, chung2020vector}. Recent efforts for phone and speaker disentanglement have been limited to contrastive tasks such as phone recognition and speaker recognition \cite{williams2020learning, ebbers2020contrastive}. Or observing that one representation ``gains'' information while another ``loses'' information \cite{williams2019disentangling, parthasarathi2012wordless} by measuring changes in classification accuracy.
Our work adds additional task-based evaluation by exploring disentanglement in both a multilingual and monolingual model. In order for the multilingual model to perform well at tasks such as voice transformation and linguistic code-switching, the learned representations must completely separate phonetic content and speaker information. We also introduce a novel technique that uses VQ phone codes to manipulate targeted content in the speech signal without altering the sound of a speaker's voice. Our exploration exposes some of the interesting capabilities of disentangled representations. We also offer ideas for improving the VQ-VAE architecture.
\section{Related Work}
Early versions of the VQ-VAE architecture with a single encoder and VQ phone codebook are known to be well-suited to voice conversion. Particularly \cite{ding2019group} showed that grouping latent embeddings together during the training process helps with mispronunciations. Their system relied on one-hot speaker encodings, but they suggest that the model could be made to generalize to unseen speakers by using externally-learned speaker embeddings instead. Our VQ-VAE implementation uses a similar approach to group latent embeddings, but goes one step further to simultaneously learn VQ speaker and phone embeddings.
In \cite{wu2020one}, they propose a VQ technique that disentangles speaker and content information in a fully unsupervised manner for monolingual one-shot voice conversion. Phone embeddings originate from a VQ codebook whereas speaker embeddings are learned as a difference between discrete VQ codes and continuous VQ vectors. Finally, the speaker and content representations are re-combined additively (instead of by concatenation) and passed to the decoder as local conditions. While the method works very well in one-shot voice conversion, it does require a target speaker sample. Since the speaker representations rely on differences between internal VQ embeddings, it is not clear how the content and speaker representations could be used externally to this system, or whether or not it works for multilingual data.
A dual-encoder VQ-VAE was proposed by \cite{zhao2020improved} which modeled the phone content and F0. This approach of using two encoders and learning two VQ codebooks was also used in \cite{williams2020learning} who sought to learn speaker identity as well as speech content at the same time. In \cite{williams2020learning}, they explored several variations of dual-encoder approach with different kinds of supervision. They found that the adversarial model performed disentanglement best between the speaker and content. In this paper, we utilize their pre-trained English VCTK model for multilingual adaptation as well as our experiments.
While VQ-VAE has received a lot of attention for its potential in voice conversion, other challenges remain for multilingual speech synthesis. In \cite{himawan2020speaker} and \cite{zhou2019novel}, they showed it is possible to use DNNs to synthesize voices across languages, but these methods perform speaker adaptation rather than learning embeddings that could be re-purposed. Therefore these methods require an exemplar sentence that contains specific words and phrases. Likewise \cite{zhang2019learning, yang2020towards, li2019bytes} propose universal multi-language multi-speaker TTS systems, but it is not clear that the internal embeddings are re-useable for other speech tasks and the number of evaluated languages is small.
Speech is often a primary medium for communicating sensitive information such as financial details or medical information. To date, most speech privacy scenarios reflect the need to protect speaker voice characteristics \cite{qian2018towards, tomashenko2020introducing}. The work of \cite{ahmed2020preech} proposes shuffling audio in a speech file to transform it into a speech ``bag of words'' so that the content and meaning cannot be easily gleaned from ASR. Likewise \cite{parthasarathi2012wordless} proposes using acoustic transformations to conceal the words of speech audio. Our approach to content privacy is inspired by \cite{hashimoto2016privacy} which created a \textit{speech privacy sound}. However, instead of privacy for speaker identity, we mask targeted words in a phrase by manipulating the sequence of discrete VQ phone codes.
\section{Data}
The multilingual SIWIS dataset \cite{goldman2016siwis} contains four languages: English, German, French, and Italian. There are 36 unique speakers. Each speaker is bilingual or trilingual and has been recorded in two or three languages. The dataset languages were imbalanced, so our train/test splits also preserved this imbalance as shown in Table~\ref{tab:data_splits}. The monolingual English VCTK dataset \cite{yamagishi2019cstr} contains 109 speakers with different accents. For VCTK, we used the same train/test splits as in \cite{williams2020learning}. All audio was downsampled to 16 kHz and normalized with sv56. The preprocessing steps were followed using scripts provided by \cite{zhao2020improved}.
\begin{table}[h]
\small
\caption{SIWIS data splits across languages and speakers.}
\centering
\begin{tabular}{|l|cc|cc|cc|}
\hline
Language & \multicolumn{2}{c|}{Training} & \multicolumn{2}{c|}{Validation} & \multicolumn{2}{c|}{Held-out}\\
& Spk & Utt & Spk & Utt & Spk & Utt\\\hline\hline
English (EN) & 18 & 2387 & 18 & 603 & 4 & 16\\ \hline
French (FR) & 26 & 3405 & 26 & 841 & 5 & 16 \\ \hline
German (DE) & 13 & 1719 & 13 & 376 & 4 & 18 \\ \hline
Italian (IT) & 13 & 1689 & 13 & 430 & 3 & 10\\\hline
\end{tabular}
\label{tab:data_splits}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{figures/siwis_model.png}
\caption{VQ-VAE overview from \cite{williams2020learning}, two encoders and VQ spaces which modeled speaker identity as a global condition, and speech phones as a local condition. We added a global one-hot language vector for our multilingual training.}
\label{fig:model}
\end{figure}
\section{VQ-VAE Model Adaptation}
We started with a dual-encoder VQ-VAE model that was pre-trained and provided by \cite{williams2020learning}. It learned two separate encoders and two separate VQ codebooks for speech content and speaker identity (Figure~\ref{fig:model}). They had trained the model to 500k steps using English VCTK data.
We used the pre-trained model from \cite{williams2020learning} and adapted it to multilingual SIWIS data. For the model adaptation, a projection layer from the pre-trained WaveRNN decoder was discarded but we kept all other parameters from the encoders and VQ codebooks. We also added a one-hot language vector as a global condition to the WaveRNN decoder. We trained the multilingual model on all four languages mixed together for 550k steps while monitoring the validation losses.
The goal is not to learn to disentangle languages, but to learn representations of content and speaker that are shared across multiple languages, for example by learning phone VQ representations from multiple languages in a single VQ codebook. During the model adaptation, we did not experiment with changing the codebook sizes from the pre-trained model. Therefore we used a codebook size of 256 for the speaker codebook, and 512 for the phone codebook.
The input to the encoder was a waveform. After the waveform was downsampled by each encoder, it was transformed into a sequence of VQ codes and vectors for phones, and a single VQ code and vector for speaker identity. The VQ vectors were then provided to the WaveRNN decoder. Finally the output was a reconstructed waveform.
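The quantization step just described can be sketched as a nearest-neighbour lookup against a learned codebook. This is a generic VQ illustration with assumed array shapes, not the actual implementation of \cite{williams2020learning}:

```python
import numpy as np

def quantize(frames, codebook):
    """Map each downsampled encoder frame to its nearest codebook entry.

    frames:   (T, D) encoder outputs after downsampling the waveform
    codebook: (K, D) learned VQ vectors
    Returns (codes, vectors): indices of shape (T,) and the selected
    codebook vectors of shape (T, D) that are passed to the decoder.
    """
    # Squared Euclidean distance from every frame to every codebook entry.
    dist = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    codes = dist.argmin(axis=1)
    return codes, codebook[codes]
```

For the speaker path, the same lookup is applied to a single utterance-level vector, yielding one code per utterance.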
\section{Task-Based Evaluation}
The purpose of a task-based evaluation is to understand how learned phone or speaker representations perform in tasks that benefit from disentanglement. We describe four very small ``proof-of-concept'' tasks and corresponding results. The synthesized speech\footnote{Speech examples: \url{https://rhoposit.github.io/ssw11}} was assessed using human listening judgements.
For the listening tests, participants were recruited from the Prolific\footnote{\url{https://www.prolific.co/}} platform and the listening test materials were hosted by Qualtrics\footnote{\url{https://www.qualtrics.com/uk/}}.
We grouped our listening test tasks on the basis of language and dataset in order to utilize similar participants. This resulted in a total of seven separate listening tests and also allowed for consistency among our listener pool. For example, the same set of French speakers evaluated French MOS copy-synthesis, French MOS voice transformation, and French voice transformation speaker similarity. All of our participants self-identified as ``fluent'' in their respective languages, including pairs for code-switching: English-French or English-German. While the multilingual model training included Italian data, this language was omitted from the evaluation because there were too few speakers in the held-out set to select samples representative of gender as well as of bilingual/trilingual overlap. For each of the seven listening tests, we recruited 20 people, compensated at the rate of \pounds\ 7.50 per hour.
\subsection{Copy-Synthesis}
One way to gauge the quality of a trained VQ-VAE is to perform copy-synthesis. Good copy-synthesis quality suggests, but does not guarantee, that the internal VQ representations are also good, so this section serves as a sanity check and a starting point rather than a direct assessment of the representations. Since the listening test was very small, the reported MOS values may not generalize.
Listeners rated the naturalness on a Likert scale of 1-5 (where 5 is natural). We evaluated 6 examples per language using data from the held-out set, for a total of 24 samples. We report the average MOS naturalness scores in Table~\ref{tab:vqvae_quality}. Synthetic speech received lower MOS scores than natural speech for both the monolingual and multilingual models. In the multilingual model, English and German naturalness was lower than French; the MOS for French had the smallest drop from natural to synthetic. Evaluating larger quantities of speech samples would give a better picture of the average MOS scores per language.
\begin{table}
\centering
\small
\caption{MOS naturalness scores for copy-synthesis. Results are reported for the multilingual model (SIWIS data) as well as the monolingual English model (VCTK data).}
\begin{tabular}{|l|c|cc|}
\hline
Data & Natural & Synthetic & $\Delta$\\
\hline\hline
SIWIS-EN & 4.1 & 1.6 & $\downarrow$ 2.5 \\
SIWIS-FR & 3.4 & 2.9 & $\downarrow$ 0.5 \\
SIWIS-DE & 3.7 & 2.5 & $\downarrow$ 1.2 \\ \hline
VCTK-EN & 4.0 & 3.3 & $\downarrow$ 0.7 \\\hline
\end{tabular}
\label{tab:vqvae_quality}
\vspace{-4mm}
\end{table}
\subsection{Voice Transformation}
We present results from a \textit{voice transformation} task. We changed the speaker identity by replacing the speaker code with one of the other codes obtained after VQ-VAE optimization. Individual speaker codes do not always correspond to speakers included in the training dataset, hence this is not conversion to the specific identity of a target speaker. Rather, we were able to change the speaker identity by replacing the VQ speaker codes while keeping the VQ phone codes unchanged. For each model, we identified which VQ speaker codes had been learned during training. Neither of the two models utilized all of the possible speaker codebook entries (the codebook size was 256 for both models), even though both models were trained with multi-speaker data. In the multilingual model (SIWIS), 11 VQ speaker codes were utilized for 36 unique speakers. In the monolingual model (VCTK), 18 VQ speaker codes were utilized for 110 unique speakers. The VQ-VAE models thus under-estimated the number of speakers and appear to merge several speakers into one cluster.
\subsubsection{Single-Representation}
This version of voice transformation changes one single speaker VQ code at a time, without mixing or combining speaker codes. For the multilingual model, we selected one male and female speaker (\textbf{spk13}-male, \textbf{spk04}-female) from the SIWIS data and seen conditions. Then we extracted the VQ phone and speaker codes. We replaced their speaker codes with each of the 11 multilingual VQ speaker codes from the codebook. We used 2 utterances per speaker, per language for a total of 12 examples. For the one-hot language vector, we used the language from the source sentence. For the monolingual model and codebook, we followed the same approach selecting a male and female speaker from the VCTK data and seen conditions (\textbf{p229}-female-English, \textbf{p302}-male-Canadian). We selected 2 utterances for each speaker, for a total of 4 examples.
\subsubsection{Mixed-Representations}
This version of voice transformation mixes speaker VQ codes to create new voices, in a spirit similar to \textit{zero-shot} voice conversion. Ideally, this could be done using various combinations of VQ speaker codes and weighting them. In this work, we mixed two representations by calculating an unweighted mean between two VQ codebook vectors. In a vector space, the resulting representation is a new centroid that is equidistant between the paired vectors. We randomly paired VQ speaker codes for each model, and then mixed them. We synthesized the same source utterances as before.
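Since the mixing is an unweighted mean, the operation reduces to the midpoint of two codebook vectors; a minimal sketch (the codebook here is hypothetical, not the trained one):

```python
import numpy as np

def mix_speaker_codes(codebook, i, j):
    """Unweighted mean of two speaker VQ vectors.

    The result is the centroid of the pair, equidistant from both
    original vectors, and is fed to the decoder in place of a single
    learned speaker vector.
    """
    return 0.5 * (codebook[i] + codebook[j])
```

A weighted mean would generalize this to points anywhere on the segment between the two vectors.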
\begin{figure*}
\centering
\subfloat[Multilingual model]{\label{fig1:a}\includegraphics[width=0.23\linewidth]{figures/SIWIS-EN.png}\includegraphics[width=0.165\textwidth]{figures/SIWIS-DE.png}\includegraphics[width=0.19\textwidth]{figures/SIWIS-FR.png}}\hfill
\subfloat[Monolingual model]{\label{fig1:b}\includegraphics[width=0.25\textwidth]{figures/VCTK.png}} \caption{Voice transformation speaker VQ code similarity matrix. Annotations represent the percent of listeners who marked a pair of utterances as the same speaker. Note that the monolingual and multilingual models utilize different speaker VQ codebooks. }
\label{fig:vc_similarity}
\end{figure*}
\begin{table}[ht]
\small
\caption{Multilingual (SIWIS) MOS naturalness scores for voice transformation and voice mixing.}
\centering
\begin{tabular}{|l|c|c|c|}
\hline
Speaker Code & English & French & German\\\hline\hline
Code 85 & 2.4 & 2.9 & 3.4\\
Code 192 & 2.6 & 3.0 & 3.1 \\
Code 238 & 2.5 & 3.0 & 3.2 \\ \hline
Code 131+248 & 2.4 & 3.1 & 3.3\\\hline
\end{tabular}
\label{tab:vc_siwis_quality}
\end{table}
\begin{table}[h]
\small
\caption{Monolingual English (VCTK) MOS naturalness scores for voice transformation and voice mixing.}
\centering
\begin{tabular}{|l|c|}
\hline
Speaker Code & English\\\hline\hline
Code 67 & 2.3\\
Code 109 & 2.3\\
Code 242 & 2.5\\ \hline
Code 109+242 & 2.4 \\\hline
\end{tabular}
\label{tab:vc_vctk_quality}
\vspace{-4mm}
\end{table}
\subsubsection{Results}
For the listening tests, we randomly selected 4 speaker VQ codes (3 single-representations, 1 mixed) from each model. Participants listened to all 8 samples in their language and marked naturalness on a scale of 1 to 5. The results for MOS naturalness are provided in Table~\ref{tab:vc_siwis_quality} and Table~\ref{tab:vc_vctk_quality}. MOS naturalness changes depending on the speaker VQ code and language. The mixed VQ speaker vectors did not degrade the quality of the synthesized speech overall. In the multilingual model, French and German had better naturalness than English for all four of the reported VQ speaker codes. This mirrors the naturalness pattern observed in the earlier copy-synthesis task.
We also asked our listeners about speaker similarity. The purpose of this was to understand the consistency of the VQ speaker codes. Listeners were provided with matched and unmatched pairs in an A/B test, and were asked to decide if the A/B examples were from the same or different speaker. For example, a matched pair was 2 synthetic speech utterances using the target speaker VQ code \textbf{238}. An unmatched pair was 2 synthetic speech samples using two different speaker codes such as \textbf{238} and \textbf{85}. There were 16 total matched pairs and 24 unmatched pairs per language and dataset. This format allowed us to observe similarities and differences across a particular language and speaker VQ code. Recall that our voice transformation task did not utilize target speakers, only the learned VQ codes from the speaker codebooks. Speaker similarity results are reported in Figure~\ref{fig:vc_similarity}. The annotations in the figure represent the percent of listeners who marked a pair of utterances as the same speaker. A clear diagonal would indicate that the speaker VQ codes are consistently unique. In the multilingual model codes \textbf{131+248} and \textbf{192} are less consistent. German appears to be more consistent than French or English. In the monolingual model, we observed a pair of VQ speaker codes that participants identified as being inconsistent: \textbf{67} and \textbf{242}.
\begin{table}[ht!]
\small
\caption{Speaker similarity for linguistic code-switching. \textit{A/B} measured how often listeners said the speaker was the same between synthetic and natural speech. \textit{Inter-Utt} measured how often listeners reported consistent speaker within an utterance. }
\centering
\begin{tabular}{|l|cc|}
\hline
& \multicolumn{2}{c|}{Speaker Similarity}\\
Data & A/B & Inter-Utt \\
\hline\hline
English-French & 57.9\% & 69.0\%\\
French-English & 30.8\% & 60.7\%\\\hline
English-German & 67.5\% & 77.5\%\\
German-English & 75.0\% & 77.5\%\\\hline
\end{tabular}
\label{tab:codeswitch}
\vspace{-4mm}
\end{table}
\subsection{Linguistic Code-Switching}
The purpose of the linguistic code-switching task was to find out whether we could generate speech using analysis-synthesis, wherein the speech contains multiple languages within the same utterance. We simulated code-switching by concatenating together VQ phone codes from utterances in different languages but from the same speaker. This was possible because the SIWIS data contained utterances from bilingual and trilingual speakers. We used the sequence of VQ phone codes from entire audio files instead of word or phrase level granularity, and we did not change or modify the VQ phone code contents. We selected 6 utterances for English and German, and 6 utterances for English and French using both male and female speakers from the held-out set. We also swapped the language order, essentially doubling the number of exemplars. This was to observe whether the WaveRNN decoder is sensitive to language ordering, since the decoder could only accept a single one-hot language code. This resulted in 24 code-switched files (6 per language and order pair). For the one-hot language vector, we used the language of the first utterance. The speech was synthesized from VQ phone and speaker codes without performing any modifications to the codes apart from the concatenation.
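The simulation itself is just a concatenation of full-utterance code sequences; a minimal sketch with illustrative names, where the one-hot language vector follows the first utterance as described:

```python
import numpy as np

def code_switch(phone_codes_a, phone_codes_b, speaker_code, lang_a):
    """Concatenate the full VQ phone code sequences of two utterances
    from the same speaker. The speaker code is unchanged and the
    language condition is taken from the first utterance."""
    codes = np.concatenate([phone_codes_a, phone_codes_b])
    return codes, speaker_code, lang_a
```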
Our main interest for this task was to find out if the multilingual model could preserve speaker similarity while also synthesizing the multilingual speech. Listeners were presented with (A) code-switched synthetic speech from concatenated VQ phone codes, and (B) code-switched speech from concatenated audio files. In this A/B test, participants were asked if the speaker was the same between the two A/B samples.
We also presented listeners with single code-switched examples from only (A) and asked the listeners to judge if the speaker voice was consistent throughout an utterance, or if it changed. This was measured because we had sometimes observed that the speaker voice was not consistent within an utterance. Results are reported in Table~\ref{tab:codeswitch}. We observed slightly more consistency for English-German pairs, compared to French. The A/B similarity for the French-English pair was particularly low, which means that the decoder had difficulty switching from French to English. This could be due to the language imbalances in the SIWIS dataset, or differences in the VQ phone code frequencies between these two languages. More investigation would uncover which part of the utterance was failing, and why the decoder was unable to recover. Better performance on German was also reflected in the other tasks.
This analysis-synthesis task does not reflect how code-switching works with speakers in real-life because it was done at the utterance level instead of the word or phrase-levels. As mentioned earlier, the purpose was to observe if the model, especially WaveRNN, is capable of it. More investigation is required to understand and quantify the limits and edge cases of VQ-VAE for code-switching. In addition, the quantity of evaluated samples was particularly small, which makes it difficult to generalize the results or draw strong conclusions. We attempted to also measure intelligibility, however the listeners did not follow instructions often enough to perform calculations of intelligibility scores. For example, some listeners identified the names of the languages rather than the words of the utterance.
\subsection{Content-Based Privacy Masking}
The purpose of exploring content-based privacy is to develop a capability that conceals certain sensitive words or phrases in a manner that does not disrupt the normal flow and feel of a speech utterance. For example, in some use-cases it might be preferable to transform a sensitive phrase into a speaker's mumbling voice instead of a cut, beep, silence or static. Different types of masks may affect speech recognition (ASR) or speaker verification (ASV) differently.
In this task, we used the monolingual model because we had reliable alignments for the VCTK data \cite{mcauliffe2017montreal}. We hand-selected phrases that occurred mid-utterance and concealed them to try and render the target phrases unintelligible, while keeping the surrounding words intelligible. First, we used the forced-alignments to determine the timestamp end-points of the target phrase. Next, we used those endpoints to determine the location of the target phrase in the sequence of VQ phone codes. Finally, we modified only the VQ phone codes corresponding to the target phrase. We experimented with two different masking positions, as shown in Figure~\ref{fig:masking}, as well as two different masking methods. We have taken advantage of forced-alignments in this toy problem as well as knowing the target phrases beforehand. In real-world applications it may require keyword spotting or another mechanism to decide which words and phrases get masked. Performing this in real-time versus from a speech database would introduce additional engineering challenges.
The first masking method was to replace the true VQ phone codes of the target phrase with VQ phone codes from ICRA noise signals \cite{dreschler2001icra}. Since the noise has speech-like spectral and temporal properties, it is expected to generate speech-like but meaningless phone codes. The speech-shaped noise (SSN) offers non-recoverable masking, which is useful for applications where speech content redaction must be persistent. First, we analyzed this noise to obtain its VQ phone codes. Even though the noise does not truly contain phones, the resulting VQ phone code sequence represented the noise quite well. Next, we replaced the sequence of true VQ phone codes for our target phrase with a randomly selected sequence of the SSN VQ codes of the same length. Our second technique was to simply reverse the order of the true VQ phone codes for the target phrase, while leaving the remaining VQ phone codes intact. The VQ code reversal method does render the target phrase unintelligible; however, it could be recovered by playing the audio backwards. We did not attempt other masking methods, although it may be possible to use silence or randomly selected VQ phone codes. It is also unknown whether VQ-VAE could be used for recoverable masking, wherein the mask could be undone. Whether or not this is desirable depends on the use-case.
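Both techniques amount to editing a span of the VQ phone code sequence; a minimal sketch, assuming the span endpoints have already been mapped from forced-alignment timestamps to code indices (function and argument names are illustrative, not the actual implementation):

```python
import numpy as np

def mask_phone_codes(codes, start, end, method, noise_codes=None, rng=None):
    """Conceal the VQ phone codes in codes[start:end].

    method="reverse": flip the span (recoverable by reversing the audio);
    method="ssn": overwrite it with a random same-length slice of codes
    extracted from speech-shaped noise (non-recoverable).
    """
    out = np.array(codes, copy=True)
    span = end - start
    if method == "reverse":
        out[start:end] = out[start:end][::-1]
    elif method == "ssn":
        if rng is None:
            rng = np.random.default_rng()
        s = int(rng.integers(0, len(noise_codes) - span + 1))
        out[start:end] = noise_codes[s:s + span]
    return out
```

Both methods preserve the length of the code sequence, so the surrounding context codes are untouched.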
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/masking.png}
\caption{Diagram showing two different content masking positions for VQ phone codes on a given phrase.}
\label{fig:masking}
\end{figure}
\subsubsection{Results}
We selected two utterances that were shared between a female and male speaker. Next, we selected two target phrases to mask, at different positions in the sentence. For the first utterance, the two target phrases were ``these things'' (position1) and ``three red bags'' (position2). For the second utterance, the two target phrases were ``sunlight strikes'' (position1) and ``raindrops in the air'' (position2). In total, 16 examples were evaluated.
Participants were instructed that one or more words had been removed from the utterance, but were not told which ones. They were asked whether or not the speaker voice was consistent throughout the utterance and we measured the proportion of positive responses as shown in Table~\ref{tab:masking}. Overall, the SSN was better for maintaining speaker identity throughout the utterance. In general, masking the phrase at position2 resulted in more consistency, which could be due to the challenges of using an auto-regressive decoder like WaveRNN. Listeners also performed an A/B preference test which revealed a slight preference for SSN over reversal masking. Finally, we measured ASR-based intelligibility as word error rate (WER) using the IBM Watson Speech-to-Text API\footnote{\url{https://www.ibm.com/cloud/watson-speech-to-text}}. We first calculated the WER on natural, unmasked audio as a baseline and found it was 24\%. This is higher than expected but likely due to pronunciations and the audio quality. The WER for the masked conditions is reported in Table~\ref{tab:masking}. Overall, the WER increased compared to natural, unmasked speech. Masking at position1 resulted in better intelligibility, and the two different techniques were comparable on average. It is unclear whether the rise in WER is due to the masking itself or whether intelligibility was also lost for unmasked words. Future work must provide a procedure to better evaluate content-based masking.
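The WER scoring itself is standard word-level edit distance between the reference transcript and the ASR hypothesis; a minimal reference implementation (the Watson API only supplies the hypothesis text):

```python
def wer(ref, hyp):
    """Word error rate: Levenshtein distance over word tokens,
    normalized by the number of reference words."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)
```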
\begin{table}[h]
\small
\caption{Speaker similarity and ASR-based WER for content masking, comparing two methods and target phrase positions.}
\centering
\begin{tabular}{|l|c|c|}
\hline
& Speaker & ASR-Based\\
Masks & Similarity & WER\\
\hline\hline
Reversal Position1 & 63.7\% & 47\% \\\hline
Reversal Position2 & 77.5\% & 68\% \\\hline
SSN Position1 & 70.0\% & 53\% \\\hline
SSN Position2 & 76.2\% & 61\% \\\hline
\end{tabular}
\label{tab:masking}
\vspace{-4mm}
\end{table}
\section{Discussion}
We have shown that it is possible to adapt an existing monolingual VQ-VAE model to a new multi-speaker multi-language dataset with reasonable performance on copy-synthesis, voice transformation, and linguistic code-switching\footnote{Code/models: \url{https://github.com/rhoposit/multilingual_VQVAE}}. This is an important finding for multi-lingual speech synthesis.
The manner in which the VQ speaker codebooks are under-utilized in both models has some implications for the limitations of the VQ-VAE architecture. This phenomenon is sometimes referred to as \textit{codebook collapse}, analogous to posterior collapse in VAEs. We observed similar codebook collapse in the VQ phone codebooks as in the VQ speaker codebooks. In both models, the phone codebook size was set to 256, however the multilingual model utilized 161 entries and the monolingual model utilized 170 entries. The quantity of utilized entries is far greater than the size of a requisite phone set -- even in the multilingual model. We examined the distribution of VQ phone codes for each language in the multilingual model and found that all four languages utilized similar codebook entries with similar frequencies.
The diversity of the learned codebooks should be improved. The size of codebooks must be pre-determined at the time of initializing the architecture. As we have shown, VQ-VAE models can be adapted to new datasets, but having hard-coded constraints (such as the codebook sizes) may be a limiting factor. Our recommendation is to develop a way to dynamically add or remove VQ codebooks during the training process. This would make it possible to learn only and all of the codebook vectors that matter. The true capabilities of VQ-VAE modeling are limited by its toolkit implementation: the nature of the tensor graph and how it is used in memory does not accommodate dynamic modeling to its fullest potential.
We have described a method to synthesize high-quality speech in multiple languages (including code-switching) from a single multilingual model, based on learned representations. This will be useful for speech-to-speech translation, controllable speech synthesis, and data augmentation. In future work, we are interested in adding additional internal representations to the dual-encoder VQ-VAE model in an effort to perform further disentanglement of speech signal characteristics.
\section{Acknowledgements}
We sincerely thank Evelyn Williams at the University of Edinburgh for helping implement the listening tests. This work was partially supported by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1) and University of Edinburgh; and by a JST CREST Grant (JPMJCR18A6, VoicePersonae project), Japan. Some of the numerical calculations were carried out on the TSUBAME 3.0 supercomputer at the Tokyo Institute of Technology.
\bibliographystyle{IEEEtran}
\section{Introduction}
An important diagnostic of the physical state of the interstellar medium
is its large-scale velocity dispersion. This parameter is however very
difficult to derive, since it is in general dominated by the
contribution of the systematic velocity gradients in the beam, which are
not well-known.
Exactly face-on galaxies are ideal objects for this study, since
the line-width can be attributed almost entirely to the z-velocity
dispersion $\sigma_v$. Indeed, the systematic gradients perpendicular
to the plane are expected negligible; for instance no systematic pattern
associated to spiral arms have been observed in face-on galaxies
(e.g. Shostak \& van der Kruit 1984, Dickey et al 1990), implying that
the z-streaming motions at the arm crossing are not predominant. In an
inclined galaxy on the contrary, it is very difficult
to obtain the true velocity
dispersion, since the systematic motions in the plane $z=0$ (rotation, arm
streaming motions) widen the spectra due to the finite spatial resolution of
the observations (e.g. Garcia-Burillo et al 1993, Vogel et al 1994).
Nearly face-on galaxies have already been extensively studied in the
atomic gas component, in order to derive the true \hI\ velocity dispersion
(van der Kruit \& Shostak
1982, 1984, Shostak \& van der Kruit 1984, Dickey et al 1990).
The evolution of $\sigma_v$ as a function of radius was derived: the velocity
dispersion is remarkably constant all over the galaxy
$\sigma_v$ = 6 \kms = $\Delta V_{FWHM}/2.35$, and only in the
inner parts it increases up to 12 \kms.
\medskip
The constancy of $\sigma_v$ in the plane, and in particular in the outer
parts of the galaxy disk, is not yet well understood; it might
be related to the large-scale gas stability and to the linear
flaring of the plane,
as is observed in the Milky-Way (Merrifield 1992) and M31 (Brinks \& Burton
1984). In the isothermal sheet model of a thin plane, where the z-velocity
dispersion $\sigma$ is independent of z, the height $h_g$(r) of the
gaseous plane, if assumed self-gravitating, is
$$
h_g(r) = {{\sigma_g^2(r)}\over{2 \pi G \mu_g(r)}}
$$
where $\sigma_g$ is the gas velocity dispersion,
and $\mu_g(r)$ the gas surface
density. The density profile is then a sech$^2$ law.
But to have the gas self-gravitating, we have to assume that either there is
no dark matter component, or the gas is the dark matter itself
(e.g. Pfenniger et al 1994). Since in general
the \hI\ surface density decreases as 1/r in the outer parts of galaxies
(e.g. Bosma 1981), a linear flaring ($h_g \propto r$) corresponds
to a constant velocity dispersion with radius.
Under the contrary hypothesis, with the gas plane embedded in an external
potential of larger scale height, where the gravitational acceleration
close to the plane can be approximated by $K_z z$, the z-density profile
is then a Gaussian:
$$
\rho_g=\rho_{0g} e^{-\frac12\frac{K_zz^2}{\sigma_g^2}}
$$
and the characteristic height, or gaussian scale height of the gas is:
$$
h_g = \frac{\sigma_g}{\sqrt{K_z}}
$$
and $K_z$ is $4\pi G \rho_0$, where $\rho_0(r)$ is the density in the plane
of the total matter, stellar component plus dark matter component, in which the
gas is embedded. If the dark component is assumed spherical, the
density in the plane is dominated by the stellar component, which is
distributed in an exponential disk.
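Combining the two expressions above makes this prediction explicit: with $K_z = 4\pi G \rho_0(r)$ and $\rho_0(r) \propto e^{-r/h_r}$ for a stellar disk of radial scale length $h_r$, a constant $\sigma_g$ gives

```latex
$$
h_g(r) = \frac{\sigma_g}{\sqrt{4\pi G \rho_0(r)}} \propto e^{\,r/2h_r}
$$
```

i.e. a scale height growing exponentially with radius.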
This hypothesis would predict an exponential flare in the gas,
while the gas flares appear more linear than exponential
(e.g. Merrifield 1992, Brinks \& Burton 1984). The knowledge of
their true shape is however hampered by the presence of warps.
Also, the flattening of the dark matter component, and its participation
to the density $\rho_0$ in the plane, is unknown.
\medskip
As for the stability arguments, let us assume
here that the z-velocity dispersion is
comparable to the radial velocity dispersion, or at least that their
ratio is constant with radius.
The velocity dispersion of the gas component is self-regulated by
dynamical instabilities. If the Toomre Q parameter for the gas
$$
Q_g = \frac{\sigma_g(r)\,\kappa(r)}{3.36\,G\,\mu_g(r)} = \sigma_g(r)/\sigma_{cg}(r)
$$
is lower than 1, instabilities set in, heat the medium and increase
$\sigma_g(r)$ until $Q_g$ is 1.
The critical velocity dispersion $\sigma_{cg}$ depends on the
epicyclic frequency $\kappa(r)$ and on the gas surface density $\mu_g(r)$;
assuming again an \hI\ surface density decreasing as 1/r in the outer parts
and a flat rotation curve, where $\kappa(r)$ also varies as
1/r, then $\sigma_{cg}$ is constant. To maintain $Q_g = 1$ all over the outer
parts, $\sigma_g$ should also remain constant.
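As a numerical illustration of this self-regulation argument, the critical dispersion $\sigma_{cg} = 3.36\,G\,\mu_g(r)/\kappa(r)$ can be evaluated; the values below are purely illustrative and are not measurements of NGC 628 or NGC 3938:

```python
import numpy as np

# Illustrative values only -- not fits to NGC 628 or NGC 3938.
G = 4.30e-3               # gravitational constant, pc (km/s)^2 / Msun
mu_g = 5.0                # gas surface density, Msun / pc^2
V, r = 200.0, 1.0e4       # flat rotation curve: V in km/s, radius in pc

# For a flat rotation curve the epicyclic frequency is sqrt(2) V / r.
kappa = np.sqrt(2.0) * V / r          # km/s per pc

# Critical velocity dispersion at the Q_g = 1 threshold.
sigma_crit = 3.36 * G * mu_g / kappa  # km/s
```

For these values $\sigma_{cg}$ comes out at a few \kms; since both $\mu_g$ and $\kappa$ scale as $1/r$ under the stated assumptions, the same value holds at every radius.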
However, the gas density gradient appears often steeper than $1/r$ and
the $Q_g$ parameter is increasing towards the outer parts. This has
been noticed by Kennicutt (1989), who concluded that there exists some radius
in every galaxy where the gas density reaches the threshold of global
instability ($Q_g\approx 1$); he identifies this radius with the onset
of star formation in the disk. In fact, this threshold does not occur
exactly at $Q_g$ = 1, but at a slightly higher value, around 1.4, which
could be due to the fact that the $Q$ criterion is a single-fluid
one, which does not take into account the coupling between gas and stars.
\medskip
The determination of the z-velocity dispersion
in the molecular component has not yet been done. It could bring
complementary insight to the \hI\ results,
since in general the center of galaxies is much better sampled
through CO emission (a central \hI\ depletion
is frequent), and also the thickness of the H$_2$ plane can be lower by a
factor 3 or 4 than the \hI\ layer (case of MW, M31, Boulanger et al 1981). In
the case of M51, an almost face-on galaxy (i=20$^{\circ}$), the estimated
$\sigma_v$ determined from the CO lines is surprisingly large (up to
$\sigma_v$ = 25 \kms in the southern arm) once the rotation field, and even
streaming motions, are taken into account at the beam scale.
An interpretation could be that the CO lines are broadened by
macroscopic opacity, i.e. cloud overlapping (Garcia-Burillo et al 1993),
since such large line-widths are not observed in galaxies with less CO
emission. However, one could also suspect turbulent motions, generated at
large-scale by gravitational instabilities or viscous shear.
The level of star formation could be another factor: as for turbulence, it
generally affects the molecular component more than the \hI, except for very
violent events like SNe. But the finite
inclination (20$^{\circ}$) of M51 makes the discrimination between in-plane
and z-dispersion very delicate. It is therefore necessary
to investigate this problem in more detail in
exactly face-on galaxies, and determine whether there exist spatial
variations of $\sigma_v$ over the galaxy plane.
\medskip
\begin{figure}
\psfig{figure=6130f1.ps,height=9.cm,width=9.cm}
\caption[]{Map of CO(1-0) spectra towards NGC 628. The scale in
velocity is from 556 to 756 \kms, and in T$_A^*$ from -0.05 to 0.18K.
The major axis has been rotated by 25$^\circ$ to be vertical. The
offsets marked on the box are in arcsec.}
\label{map628}
\end{figure}
\begin{figure}
\psfig{figure=6130f2.ps,height=9.cm,width=9.cm}
\caption[]{Map of CO(1-0) spectra towards NGC 3938. The scale in
velocity is from 608 to 908 \kms, and in T$_A^*$ from -0.05 to 0.18K.
The major axis has been rotated by 20$^\circ$ to be vertical. The
offsets marked on the box are in arcsec.}
\label{map3938}
\end{figure}
In this paper we report molecular gas observations of two face-on galaxies
NGC 628 (M74) and NGC 3938, in the CO(1-0), CO(2-1) and $^{13}$CO lines,
using the IRAM 30--m telescope. After a brief description of the
galaxy parameters in section 2, and the observational parameters
in section 3, we derive the amplitude and the spatial
variations of $\sigma_v$ perpendicular to the plane in NGC 628 and NGC 3938.
Section 5 summarises and discusses the physical interpretations.
\begin{table*}
\begin{flushleft}
\caption[]{Galaxy properties}
\scriptsize
\begin{tabular}{|lllrcccrrcccc|}
\hline
& & & & & & & & & & & & \\
\multicolumn{1}{|c}{Name} &
\multicolumn{1}{c}{Type} &
\multicolumn{2}{c}{Coordinates} &
\multicolumn{1}{c}{$V_{\odot}$} &
\multicolumn{1}{r}{Dist} &
\multicolumn{1}{c}{$L_B$} &
\multicolumn{1}{c}{f$_{60}$} &
\multicolumn{1}{c}{f$_{100}$} &
\multicolumn{1}{c}{$D_{25}$} &
\multicolumn{1}{c}{$PA$} &
\multicolumn{1}{c}{$i$} &
\multicolumn{1}{c|}{Environment} \\
& & \multicolumn{1}{c}{$\alpha$(1950)} &
\multicolumn{1}{c}{$\delta$(1950)} &
\multicolumn{1}{c}{$km\,s^{-1}$} &
\multicolumn{1}{c}{Mpc} &
\multicolumn{1}{c}{10$^9$ $L_\odot$} &
\multicolumn{2}{c}{Jy} &
\multicolumn{1}{c}{$'$} &
\multicolumn{1}{c}{$\circ$} &
\multicolumn{1}{c}{$\circ$} &
\\
& & & & & & & & & & & & \\
\hline
& & & & & & & & & & & & \\
NGC 628& Sc(s)I & 01 34 0.7& 15 31 55& 656 &10& 25 & 20& 65& 10.7 &
25 & 6.5 &loose group, comp. at 140 kpc \\
NGC 3938& Sc(s)I& 11 50 13.6& 44 24 07& 808&10& 11 &4.9& 22 & 5.4 &
20 & 11.5 & Ursa Major Cl., no comp.$<$ 100kpc \\
& & & & & & & & & & & & \\
\hline
\end{tabular}
\\
\end{flushleft}
\end{table*}
\section{Relevant galaxy properties}
\subsection{NGC 628}
NGC 628 (M74) is a large (Holmberg radius R$_{Ho}$ = 6\amin)
bright face-on galaxy, with a remarkable \hI\ extension, as large as
3.3 R$_{Ho}$ (Briggs et al 1980).
Sandage \& Tammann (1975) propose a distance of 19.6 Mpc from the size of
its \hII\ regions, but most authors adopt a distance of 10 Mpc,
based on a Hubble constant of 75 \hbox{${\rm km\,s}^{-1}{\rm Mpc}^{-1}$}. At this distance, 1\amin\,
is about 3 kpc, and our beams are 1.1 kpc and 580 pc in CO(1-0) and
CO(2-1) lines respectively.
Disk morphology and star formation activity were derived by
Natali et al (1992): they fit the I-band light distribution
by an exponential disk of scale length 4 kpc, and a bulge of 1.5 kpc extent,
which we will use below to interpret the rotation curve. NGC 628 has
a very modest star formation rate of 0.75\,M$_\odot$\,yr$^{-1}$, which is
fortunate: violent stellar activity agitates the interstellar medium through
the formation of bubbles, jets and stellar winds, and the intrinsic,
unperturbed z-velocity dispersion could not then be naturally measured.
The stellar velocity dispersion has been derived by van der Kruit \& Freeman
(1984): it is 60$\pm$20 \kms at about one luminosity scale-length, and
its evolution with radius is compatible with an exponential decrease,
with a radial scalelength twice that of the density distribution,
as expected if the stellar disk has a constant scale-height with radius.
This result has been derived for several galaxies by Bottema (1993).
The \hI\ distribution has been observed at Westerbork with 14\asec x 48\asec\,
beam by Shostak \& van der Kruit (1984)
who derived an almost constant z-velocity dispersion (9-10 \kms in the
center, to 7-8 \kms in the outer parts). Kamphuis \& Briggs (1992)
investigated the outer \hI\ disk in more detail with the VLA
(beam 53\asec x 43\asec). They found that the inner disk of NGC 628 is
relatively unperturbed, with an inclination of 6.5$^\circ$ and a
PA of 25$^\circ$, while at a radius of about 6\amin\, (18 kpc) the plane
begins to be tilted, warped and perturbed by high-velocity \hI\ clouds.
These clouds could
be accreting onto the outer parts, and fueling the warp.
As for the molecular component,
a cross has been observed in NGC 628 by Adler \& Liszt (1989)
with the Kitt Peak 12m telescope (beam 1\amin=3 kpc), and Wakker \&
Adler (1995) presented BIMA interferometric observations, with
a resolution of 7.1\asec x 11.6\asec\, (340 x 560 pc). They showed that there
might be a CO emission hole in the center, of size $\approx$ 10\asec, but
this could be only a relative deficiency, since only 45\% of the single-dish
flux is recovered in the interferometric observations.
The total masses derived are M(H$_2$)=2 10$^9$ M$_\odot$, M(HI)=
1.2 10$^{10}$ M$_\odot$, and from the flat rotation curve V$_{rot}$=200 \kms
an indicative total mass inside two Holmberg radii (12\amin=
36 kpc) M$_{tot}$ = 3.3 10$^{11}$ M$_\odot$, assuming a spherical
mass distribution.
\begin{figure}
\psfig{figure=6130f3.ps,height=12.cm,width=9.cm}
\caption[]{{\bf top} Rotation curve derived from the CO(1-0) (filled triangles)
and CO(2-1) (open squares) observed points in NGC 628. The adopted inclination
is 6.5$^\circ$.
{\bf bottom} Radial distribution of integrated CO emission in NGC 628. }
\label{rad628}
\end{figure}
\begin{figure}
\psfig{figure=6130f4.ps,height=12.cm,width=9.cm}
\caption[]{{\bf top} Rotation curve derived from the CO(1-0) (filled triangles)
and CO(2-1) (open squares) observed points in NGC 3938. The adopted inclination
is 11.5$^\circ$.
{\bf bottom} Radial distribution of integrated CO emission in NGC 3938. }
\label{rad3938}
\end{figure}
\subsection{NGC 3938}
NGC 3938 is also a nearly face-on galaxy, at about the same distance
as NGC 628. Its distance derived from its corrected radial velocity
(850 \kms) and a Hubble constant of 75 \hbox{${\rm km\,s}^{-1}{\rm Mpc}^{-1}$} is 11.3 Mpc, but Sandage
\& Tammann (1974) derive a distance of 19.5 Mpc from the \hII\
regions. We will adopt here a distance of 10 Mpc, to better compare with
the literature, where this value is more frequently used.
Its global star formation rate is comparable to that of NGC 628, its ratio
between far-infrared and blue luminosity is L$_{FIR}$/L$_B$=0.15
(while L$_{FIR}$/L$_B$=0.19 for NGC 628).
The stellar velocity dispersion was measured by Bottema (1988, 1993)
to be about 20 \kms at one scale-length: its radial variation is
also exponential, compatible with a constant stellar scale height.
An \hI\ map was obtained at Westerbork with 24\asec x 36\asec\, beam
(1.1 x 1.7 kpc) by van der Kruit \& Shostak (1982), and the z-velocity
dispersion
was also found almost constant with radius at a value of 10 \kms. They
found no evidence of a systematic pattern of z-motions in the \hI\ layer,
in excess of 5 \kms. CO emission has been reported by Young et al (1995)
towards 4 points, with a beam of 45\asec\, (2.1 kpc), and the central
line-width was 70 \kms.
The total masses derived are M(H$_2$)=1.6 10$^9$ M$_\odot$, M(HI)=
1.6 10$^{9}$ M$_\odot$, and from the flat rotation curve V$_{rot}$=180 \kms
an indicative total mass inside two Holmberg radii (7\amin=
21 kpc) M$_{tot}$ = 1.5 10$^{11}$ M$_\odot$, assuming a spherical
mass distribution.
There are some uncertainties about the inclination, as is usual
for nearly face-on galaxies. Danver (1942) adopted an inclination of
i=9.5$^\circ$, and van der Kruit \& Shostak (1982), after fitting
the \hI\ velocity field and from the galaxy type, deduce i=8$^\circ$-11$^\circ$.
This corresponds to a maximum velocity of 200-250\kms.
Bottema (1993) uses the Tully-Fisher relation established by
Rubin et al (1985) to deduce a maximum rotational velocity
of 150\kms, corresponding to an inclination of 15$^\circ$.
However, the Tully-Fisher relation has an intrinsic scatter.
Compared with NGC 628, NGC 3938 has about half the
luminosity and half the radial scale-length (the exponential
disk scale-lengths are h$_d$ = 4 and 1.75 kpc; see also D$_{25}$ in Table 1).
If we choose comparable M/L ratios, given they have the same type,
and about the same star formation rate, we expect comparable
maximum rotational velocities. We then adopt a compromise
of V$_{max}\approx$ 180\kms, and an inclination of i=11.5$^\circ$.
We note that this will not change our conclusions about
the vertical dispersions, except that the critical
dispersions for stability $\sigma_c$ scale as V$_{max}^{-1}$.
\section{Observations}
Most of the observations were made in August 1994 with the
IRAM 30--m telescope, equipped with single side--band tuned SIS receivers
for both the CO(1-0) and the CO(2-1) lines. The observations were done in
good weather conditions, but the relative humidity was typical of summer
time, which affected essentially the high frequency observations
(the CO(2-1) line). The typical system temperatures, measured in
the Rayleigh--Jeans antenna temperature (T$_A^*$) scale,
were 350\,K and 1000\,K for the 115 and 230 GHz lines, respectively.
We used two 512-channel filterbanks, with a channel separation of 2.6\,\kms
and 1.3\,\kms for the CO(1-0) and CO(2-1) lines respectively.
Each channel width could be slightly broader (by 10 or 20\%) than
the 1 MHz spacing, slightly degrading the spectral resolution, but
this is not critical given the half-power line widths of at
least 14\,\kms that we observed.
The half power beam sizes at 115 and 230 GHz are 23\asec\ and
12\asec. The observations were done using a nutating
secondary, with a beamthrow of $\pm$4\amin\ in azimuth. Pointing checks
were done at least every two hours on nearby continuum sources and planets,
with rms fluctuations of less than 3\asec. The calibration of the receivers
was checked by observing Galactic sources (Orion\,A and SgrB2).
The intensities are given here in T$_A^*$,
the chopper wheel calibrated antenna temperature. To convert into main
beam temperatures T$_{mb} =\,{T_{A}^{*} \over \eta_{mb}}\,$
where $\eta_{mb}$ is the main--beam efficiency.
For the IRAM telescope $\eta_{mb}$ is
0.60 and 0.45 for the CO(1-0) and CO(2-1) lines.
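As an illustration of this conversion (taking, e.g., the peak T$_A^*$ of
0.18\,K reached in the NGC 3938 map of fig. \ref{map3938}):
$$
T_{mb} = \frac{T_{A}^{*}}{\eta_{mb}} = \frac{0.18\,{\rm K}}{0.60} = 0.30\,{\rm K}
\ \ {\rm in\ the\ CO(1-0)\ line.}
$$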
For NGC 628, we also combined a series of
CO(1-0) and CO(2-1) spectra obtained
during the IRAM galaxy survey by Braine et al (1993).
These were two radial cuts along the RA and DEC axis. In August 1994, we
observed two radial cuts aligned along the major and minor axis
of the galaxy, with a position angle 25$^\circ$ with respect
to the previous cuts. We integrated about 30 min per point, and reached
a noise level of about 15 mK and 30 mK for the CO(1-0) and CO(2-1) lines,
at 2.6 \kms resolution. We have smoothed the CO(2-1) data to 2.6 \kms
resolution to increase the signal-to-noise ratio; given the
line-widths observed (at least 10 \kms), this never has a significant
broadening effect (less than 5\%).
Some $^{13}$CO spectra were also taken in November 1994 towards selected
points in NGC 628, to test the effect of cloud overlap, and ``macroscopic''
optical depth of the $^{12}$CO line on the derived line-width and
velocity dispersion. The typical system temperatures were then
240 and 350 K at the $^{13}$CO(1-0) and (2-1) (110 GHz and 220 GHz)
respectively.
\begin{figure}
\psfig{figure=6130f5.ps,height=6.cm,width=9.cm,angle=-90}
\caption[]{Radial distribution of CO(1-0) velocity dispersions
obtained through gaussian fits of the NGC 628 profiles. The points on
the minor axis are marked by filled triangles. The full lines are the
$\sigma_v$ expected from an axisymmetric velocity model, where the width
comes only from the beam-smearing of the rotational velocity gradients
projected on the sky plane. The top line is for the minor axis, the
bottom line for the major axis. The dashed horizontal line is the
average value adopted for the $^{12}$CO dispersion.}
\label{dv_628}
\end{figure}
\begin{figure}
\psfig{figure=6130f6.ps,height=6.cm,width=9.cm,angle=-90}
\caption[]{Radial distribution of CO(1-0) velocity dispersions
obtained through gaussian fits of the NGC 3938 profiles.
Markers and lines as in previous figure.}
\label{dv_3938}
\end{figure}
\section{Results}
\subsection{Spectra and radial distributions}
An overview of the CO(1-0) spectra of the observed galaxies is shown in
Figures \ref{map628} and \ref{map3938}. We derived the area, central velocity
and velocity width of each profile through gaussian fits. Positions where
the signal-to-noise ratio was not sufficient (below 3) were discarded.
The radial distributions of integrated CO emission are displayed
in figures \ref{rad628} and \ref{rad3938}, together with the derived
rotation curves. The adopted position angles and inclinations are
those found for the inner disk in \hI\ by Kamphuis \& Briggs (1992) for
NGC 628, and a compromise between the values from Bottema (1993) and
van der Kruit \& Shostak (1982) for NGC 3938. We minimised
the dispersion by fitting the central CO velocity, which is indicated
in Table 1.
\subsection{Vertical velocity dispersion}
The velocity dispersions, derived directly from
the gaussian fits are displayed as a function of radius in
figures \ref{dv_628} and \ref{dv_3938} for NGC 628 and NGC 3938
respectively. The striking feature is the near-constancy of
the velocity dispersion as a function of radius. The dispersion increases
slightly towards the center, but this can be entirely accounted for
by the rotational velocity gradients projected on the sky plane.
We estimated these gradients for each position by modeling
the radial distribution and velocity field of the molecular gas, assuming
axisymmetry. The radial distribution was taken from the present
observations: we fitted an exponential surface density model for the
CO integrated emission, with an exponential scale-length of 6 kpc
and 3 kpc respectively for NGC 628 and NGC 3938. This corresponds
also to the radial distribution found by Adler and Liszt (1989) for NGC 628.
We know that there might be a 10\asec\, ($\approx$ 500 pc) hole in the
NGC 628 center (Wakker \& Adler 1995), corresponding to a hint of depletion
in our CO(2-1) distribution; we have tested this in the model, and
the effect on the expected beam-smoothed line-width was negligible.
We also entered the values for the rotation curve obtained both from
the inner \hI\ disks (Shostak \& van der Kruit 1984, van der Kruit
\& Shostak 1982), and the present CO rotational velocities. Both
are compatible (cf fig. \ref{rad628} and \ref{rad3938}). We distributed
randomly 4 10$^4$ test particles according to these radial distributions
and kinematics, in a plane of thickness 100\,pc. No velocity
dispersion of any sort was added, so that the spectrum of each
particle is a delta function. After projection, and beam-smearing
with a gaussian beam of 23\asec\, and 12\asec\, to reproduce the CO(1-0)
and CO(2-1) respectively, the expected map of the velocity widths
coming from in-plane velocity gradients was derived. The corresponding
cuts along the minor and major axis are plotted in figures
\ref{dv_628} and \ref{dv_3938}.
\begin{figure}
\psfig{figure=6130f7.ps,height=6.cm,width=9.cm,angle=-90}
\caption[]{Same as fig \ref{dv_628} but for the CO(2-1) line.}
\label{dv21_628}
\end{figure}
\begin{figure}
\psfig{figure=6130f8.ps,height=6.cm,width=9.cm,angle=-90}
\caption[]{Same as fig \ref{dv_3938} but for the CO(2-1) line.}
\label{dv21_3938}
\end{figure}
We see that in the center, the velocity width can sometimes be entirely
due to the rotational gradients. This is not true for the outer parts, where
the measured $\sigma$ must represent the vertical velocity dispersion.
These large effects of the rotational gradients explain the much larger
line-widths found with a 45\asec\, beam by Young et al (1995); they also
explain the spiral shape of the velocity residuals found in the
\hI\ kinematics by Foster \& Nelson (1985), with comparable spatial
resolution.
When trying to deconvolve the measured $\sigma$ at the center, we obtain a
rather flat profile of vertical velocity dispersion with radius.
There are however some uncertainties: first, sometimes the expected gradient
is larger than the observed one; this could be explained if the CO-emitting
clouds are not spread all over the beam, and do not share the whole
expected rotational gradients (for instance, if they are confined into
arms). Also, the axisymmetric model might under-estimate the expected
rotational gradients, since no streaming motions have been taken into
account. We did not consider it worthwhile to refine the model further,
given the many sources of uncertainty.
The beam-smearing is less severe in the CO(2-1) line. The corresponding
curves are plotted in fig \ref{dv21_628} and \ref{dv21_3938}.
Unfortunately, the signal-to-noise ratio is lower for this line,
due to both a higher system temperature, and a CO(2-1)/CO(1-0) emission
ratio slightly lower than 1. The constancy of the line-width as a
function of radius is however confirmed.
The constant value of the vertical dispersion
$\sigma_v$ = (FWHM/2.35) is 6.5 \kms and 9 \kms
for NGC 628 and NGC 3938 respectively. One may question
whether this is the true molecular gas dispersion, if there is a
saturation effect in the $^{12}$CO line, as shown by Garcia-Burillo
et al (1993). The less saturated $^{13}$CO line profiles were found
systematically narrower in the galaxy M51. To investigate this point,
we observed a few $^{13}$CO spectra, as shown in fig \ref{co13}.
We indeed found slightly narrower $^{13}$CO line profiles,
with a $^{12}$CO/$^{13}$CO width ratio of 1.1 on average, in both the (1-0)
and the (2-1) lines. Only the (-17\asec,-16\asec) offset has a
significantly higher width ratio of 1.35 (FWHM = 12.7 $\pm$0.9 for
the $^{12}$CO line and FWHM = 9.1 $\pm$ 0.7 for the $^{13}$CO line).
We can therefore conclude that the saturation effect is no more than
about 10\% on average over the galaxy, and we estimate
the true velocity dispersion of the molecular gas perpendicular to the
plane to be 6 \kms and 8.5 \kms for NGC 628 and NGC 3938.
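For reference, the dispersion is related to the fitted full width at half
maximum by the standard Gaussian relation
$$
{\rm FWHM} = 2\sqrt{2 \ln 2}\ \sigma_v \simeq 2.35\ \sigma_v,
$$
so that the dispersions of 6.5 and 9 \kms quoted above correspond to fitted
FWHM of about 15 and 21 \kms.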
\begin{figure}
\psfig{figure=6130f9.ps,height=9.cm,width=9.cm}
\caption[]{ Some $^{13}$CO spectra taken towards NGC 628. The offsets
are indicated at the top left, in arcsec. }
\label{co13}
\end{figure}
\subsection{Comparison of the two galaxies}
It is interesting to compare the two galaxies NGC 628 and NGC 3938:
they are of the same type, and yet the stellar surface density
is about twice as high in NGC 3938, a property independent of the distance
adopted (both the total luminosity and the characteristic radius are
half as large in NGC 3938). We can also note that the gas surface
density is twice as high in NGC 3938 (figure \ref{sigma}).
How can we then explain that the vertical gas dispersion is higher in
NGC 3938, while the stellar vertical dispersion is half as large?
First let us note that the maximum rotational velocities
are comparable in the two galaxies, as expected if they have
similar M/L ratios: indeed the square of the velocity scales as M/R,
which is similar for both objects.
Now the critical velocity dispersions for stability in the plane
scale as $\sigma_c \propto \mu/\kappa \propto \mu R/V$, and are also
the same for both galaxies. We therefore expect the same planar
dispersions, if they are regulated by gravitational instabilities.
Since we observe a stellar vertical dispersion lower in NGC 3938,
this means that the anisotropy is much higher in this galaxy.
For the self-gravitating stellar disk, we can apply the isothermal
equilibrium, and find that the stellar scale-height $h_*$ scales
as $\sigma_*^2/\mu_*$ and should be 8 times smaller in NGC 3938.
The stellar density in the plane $\rho_0$ should then be 16 times
higher, and so should be the restoring force for the gas $K_z$. We
can then deduce that the scale height of the gas is also smaller,
but only by a factor 4 or less. In fact, there remains a free
parameter, the vertical/planar anisotropy, which must result from
the history of galaxy formation and evolution
(companion interactions, mergers, gas infall, etc.)
and should explain the differences between the two galaxies.
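The scalings invoked in this comparison can be written out explicitly
(an order-of-magnitude sketch, assuming a self-gravitating isothermal
stellar layer and a thin gas layer in the stellar potential):
$$
h_* \propto \frac{\sigma_{z,*}^2}{G \mu_*}, \qquad
\rho_0 = \frac{\mu_*}{2 h_*} \propto \frac{\mu_*^2}{\sigma_{z,*}^2}, \qquad
h_g \simeq \frac{\sigma_g}{(4 \pi G \rho_0)^{1/2}}.
$$
With $\mu_*$ twice as high and $\sigma_{z,*}$ half as large in NGC 3938,
$h_*$ is 8 times smaller and $\rho_0$ is 16 times higher; since $\sigma_g$
is only $\approx$ 1.4 times larger there, $h_g$ is smaller by
$\sqrt{16}/1.4 \approx 3$, i.e. a factor 4 or less.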
\section{Summary and Discussion}
One of the most important results is that the vertical velocity
dispersion of the molecular gas in NGC 628 and NGC 3938
is constant as a function of radius. Moreover
the value of this dispersion is $\sigma_v \approx$ 6-8 \kms, very
comparable to that of the \hI\ component.
\begin{figure}
\psfig{figure=6130f10.ps,angle=0,height=12.cm,width=9.cm}
\caption[]{ Radial distributions of gas surface densities in NGC 628 (left)
and NGC 3938 (right): \hI\ (dash) and H$_2$ from CO (dots) are combined to
estimate the total (full line) surface density. Helium is taken into
account.}
\label{sigma}
\end{figure}
This universality of the dispersion already tells us that it does
not correspond to a thermal width. If it did, the dispersion
would be only a function of gas temperature, and should vary with
the galactocentric distance, since the cooling and heating processes
depend strongly on the star formation efficiency. The temperature
of dust derived from infra-red emission for instance, is a function
of galactocentric radius. Also, the dispersion of the \hI\ should then
be very different from that of the colder molecular gas.
It is clear at least for the cold molecular component that the line widths
correspond to large-scale macroscopic turbulent motions between
a large number of clumps of internal dispersion possibly
much lower than 1 \kms.
The fact that the \hI\ and CO emissions reveal comparable z-dispersions
may appear surprising. It is well known, essentially from
our own Galaxy, that the \hI\ plane is broader than the molecular gas plane,
and this is attributed to a larger z-velocity dispersion
(e.g. Burton 1992). In external galaxies, the evidence is indirect,
because of the lack of spatial resolution. In M31, the comparison
between \hI\ and CO velocity dispersions led to the conclusion that
the \hI\ height was 3 or 4 times higher than the molecular one at
R=18 kpc (Boulanger et al 1981). Is the thin molecular component really
an independent layer embedded in a thicker \hI\ layer?
We discuss this further below.
\subsection{Critical dispersion for gravitational stability}
It is interesting to compare the observed vertical
velocity dispersions to the critical dispersion required in the plane
by stability criteria; as a function of gas surface
density $\mu_g$ and epicyclic frequency $\kappa$, the critical
dispersion can be expressed by (Toomre 1964):
$$
\sigma_{cg} = 3.36 G \mu_g/\kappa
$$
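Equivalently, the gas Toomre parameter used below is the ratio of the
observed to the critical dispersion; with illustrative round numbers
(assuming a flat rotation curve, for which $\kappa = \sqrt{2}\, V/R$):
$$
Q_g = \frac{\sigma_g}{\sigma_{cg}} = \frac{\sigma_g\, \kappa}{3.36\, G\, \mu_g}.
$$
For instance, V = 200 \kms at R = 10 kpc gives $\kappa \approx 28$ \kms
kpc$^{-1}$, and with $\mu_g$ = 10 M$_\odot$ pc$^{-2}$ one finds
$\sigma_{cg} \approx 5$ \kms, so an observed $\sigma_g$ = 6 \kms corresponds
to $Q_g \approx 1.2$.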
To obtain the total gas surface density $\mu_g$, we have combined the available
\hI\ data (Shostak \& van der Kruit 1984 for NGC 628; van der Kruit \& Shostak
1982, for NGC 3938) to the present CO data as a tracer of H$_2$ surface density.
We have multiplied the result by the factor 1.4, to take into account
the helium fraction.
We can see in figure \ref{sigma} that the apparent central hole detected
in \hI\ is filled out by the molecular gas.
This is not unexpected, since it is believed that above a certain gas
density threshold, the atomic gas becomes molecular.
This threshold involves essentially the gas column density, the pressure
of the interstellar medium, the radiation field and the metallicity, since the
main point is to shield molecules from photodestruction (Elmegreen 1993). It
is indeed observed that the average gas column density increases
sufficiently towards the galaxy centers to reach the threshold. Once the
shielding conditions are met, the chemistry time-scale is short enough (of
the order of 10$^5$ yrs) compared with the dynamical time-scale that the
\hI\ to H$_2$ phase transition occurs effectively. This
transition is obvious in most galaxies (e.g. Sofue et al 1995,
Honma et al 1995); the threshold at solar conditions for metallicity and
radiation field is around 10$^3$ cm$^{-3}$ and 10$^{21}$ cm$^{-2}$, but
it is difficult to determine it more precisely because of
spatial resolution effects, and because we rely upon CO emission
to trace H$_2$ (the observed thresholds concern
in fact CO excitation and photo-dissociation).
\medskip
We have then built a mass model of the two galaxies by fitting their
rotation curves, taking into account the constraints on the scale-lengths
and masses of the luminous components given by optical observations
(section 2). In these fits, the gas contribution to the rotation curve
has been found negligible. We include in the mass model a spherical bulge
represented by a Plummer (size r$_b$, mass M$_b$), an exponential
disk (scale-length h$_d$, mass M$_d$) and a flattened dark matter halo,
represented by an isothermal, pseudo-ellipsoidal density
(eg Binney \& Tremaine 1987):
\begin{eqnarray*}
\rho_{DM} & = & \frac{\rho_{0,DM}}{(1+ \frac{r^{2}}{r_h^{2}} +
\frac{z^{2}}{r_h^{2} q^{2}})}
\end{eqnarray*}
where $r_h$ is the characteristic scale of the DM halo and $q$ its
flattening.
All parameters of the fits are displayed in Table 2.
The fits are far from unique, but their essential use is to get
an analytical curve fitting the observed rotation curve,
in order to get derivatives and characteristic dynamical frequencies.
Given the functional forms adopted for the various components,
we can get easily the total mass, and compare the M/L with
expected values for galaxies of the same types.
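For reference, the circular-velocity contributions entering such fits take
standard forms (a sketch; $\Sigma_0$ is the central disk surface density,
$y = r/2h_d$, and $I_n$, $K_n$ are modified Bessel functions, cf Binney \&
Tremaine 1987):
$$
V_c^2(r) = V_b^2 + V_d^2 + V_h^2, \qquad
V_b^2(r) = \frac{G M_b\, r^2}{(r^2 + r_b^2)^{3/2}},
$$
$$
V_d^2(r) = 4 \pi G \Sigma_0 h_d\, y^2
\left[ I_0(y) K_0(y) - I_1(y) K_1(y) \right].
$$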
\begin{table}
\begin{flushleft}
\caption[]{Mass models derived from rotation curve fits}
\begin{tabular}{|lcc|}
\hline
& & \\
\multicolumn{1}{|l}{Galaxy} &
\multicolumn{1}{c}{NGC 628} &
\multicolumn{1}{c|}{NGC 3938} \\
& & \\
\hline
& & \\
r$_b$(kpc) & 1.5 & 0.8 \\
M$_b$(10$^{10}$ M$_\odot$) & 1.6 & 0.8 \\
h$_d$(kpc) & 3.9 & 1.75 \\
M$_d$(10$^{10}$ M$_\odot$) & 6.9 & 2.8 \\
r$_h$(kpc) & 14. & 7. \\
M$_h$(10$^{10}$ M$_\odot$)$^*$ & 2.7 & 1.3 \\
$q$ & 0.2 & 0.2 \\
M(stars)/L$_B$ & 3.4 & 3.3 \\
M$_{tot}(<R_{25})$/L$_B$ & 4.5 & 4.5 \\
& & \\
\hline
\end{tabular}
\\
\vskip 4truemm
$^*$ Mass inside R$_{25}$=15.5 kpc for NGC 628 and 7.85 kpc for NGC 3938\\
\end{flushleft}
\end{table}
From these mass models, we have derived the epicyclic frequency as a function
of radius (this does not depend on the precise model used,
as long as the rotation curve is fitted), and the critical velocity
dispersion required for axisymmetric stability,
for the stellar and gaseous components (figures \ref{vrot628} and
\ref{vrot3938}). The comparison with the observed vertical velocity
dispersions for \hI\ and CO is clear: the observed values are most of
the time larger, in particular for NGC 3938. This means that, if the gas
velocity dispersion can be considered isotropic, the Toomre stability
parameter in the galaxy plane
is always $Q_g \ga 1$, and most of the time $Q_g >$ 2-3, for NGC 3938.
For NGC 628, $Q_g$ is near 1 between 3 and 20kpc, and the threshold
for star formation, $Q_g=1.4$ according to Kennicutt (1989) is
reached at 23 kpc. This is far in the outer parts of the galaxy,
since R$_{25}$ = 15.5 kpc.
If the vertical dispersion is lower than in the plane, as could be
the case (e.g. Olling 1995), then $Q_g$ is even larger.
The gas then appears to be quite stable, unless the gas-star
coupling has a very large effect.
\begin{figure}
\psfig{figure=6130f11.ps,height=12.cm,width=9.cm}
\caption[]{ Rotation curve fit for NGC 628:
{\it top}: total fitted rotation curve (full line), with contributions
of bulge, exponential disk and dark matter halo (dashed lines) compared
with CO and \hI\ data;
{\it middle}: Derived critical velocity dispersions required for
axisymmetric stability for stars and gas (full lines). The \hI\ and CO observed
vertical velocity dispersions are also shown for comparison (noted $\sigma$(HI)
full line and $\sigma$(CO), horizontal dashed line). The \hI\ dispersion
data have been taken from the compilation in the thesis of Kamphuis (1992).
A fit to the observed
z-stellar velocity dispersion is also shown ($\sigma$(*), dotted line);
{\it bottom}: Corresponding Toomre $Q$ parameters, assuming
$\sigma_z/\sigma_r$ =0.6 for the stars (the CO dispersion has
been taken for the gas)}
\label{vrot628}
\end{figure}
\begin{figure}
\psfig{figure=6130f12.ps,height=12.cm,width=9.cm}
\caption[]{ Same as previous figure, for NGC 3938}
\label{vrot3938}
\end{figure}
Figures \ref{vrot628} and \ref{vrot3938} also plot the critical
velocity dispersion for the stellar component, together with a fit
to the observed stellar velocity dispersions, from van der Kruit \&
Freeman (1984) for NGC 628 and from Bottema (1988, 1993) for NGC 3938.
From a sample of 12 galaxies where such data are available,
Bottema (1993) concludes that the stellar velocity dispersion is
declining exponentially as $e^{-r/2h}$, as expected for an exponential
disk of scale-length $h$ and constant thickness, as found by
van der Kruit \& Searle (1981). Since it is mostly the vertical stellar
dispersion $\sigma_z$ that is measured, a constant ratio to the radial
dispersion $\sigma_r$ is assumed, comparable to that observed
in the solar neighbourhood, $\sigma_z/\sigma_r$ = 0.6. This is already
well above the minimum ratio required for vertical stability, i.e.
$\sigma_z/\sigma_r$ = 0.3 (Araki 1985, Merritt \& Sellwood 1994).
Within these assumptions, it can be derived that the Toomre parameter
for the stars $Q_*$ is about constant with radius, within the
optical disk; it depends of course
on the mass-to-light ratio adopted for the luminous component, and
is around $Q_* \approx$ 1 for M(stars)/L$_B$ = 3.
Figures \ref{vrot628} and \ref{vrot3938} confirm the result of almost constant
$Q_*$, but with low values, especially for NGC 3938.
This could be explained, if the vertical dispersion is indeed much lower
than the radial one. The minimum value for the ratio
$\sigma_z/\sigma_r$ is 0.3 (for stability reasons), so that the derived
$Q_*$ values displayed in figures \ref{vrot628} and \ref{vrot3938} could be
multiplied by $\approx$ 2. The idea of
stellar velocity dispersion regulated by gravitational instabilities
appears therefore supported by the data, within the uncertainties.
\medskip
The most intriguing result is the large gas vertical dispersion observed
for NGC 3938, and its distribution with radius. The correspondingly large $Q_g$
values, which would imply comfortable stability, are difficult to reconcile with
the observed large- and small-scale gas instabilities: clear spiral arms
are usually observed in the outer \hI\ disks, with small-scale structure as well
(see e.g. van der Hulst \& Sancisi 1988, Richter \& Sancisi 1994). This
is also the case here for NGC 628, which shows all signs of gravitational
instabilities in its outer \hI\ disk (Kamphuis \& Briggs 1992), and
for NGC 3938 (van der Kruit \& Shostak 1982).
One possibility to reduce $Q_g$ is that the gas dispersion is also
anisotropic, this time with the vertical dispersion larger than the planar one.
However we will see, through comparison with
gas dispersion in the plane of the Galaxy (cf next section)
that the anisotropy of gas dispersion does not appear so large.
Another explanation could be that
the present rough calculations of the $Q$-parameter concern only
a simplified one-component stability analysis, and could be
significantly modified by multi-components analysis.
It has been shown (Jog \& Solomon 1984, Romeo 1992, Jog 1992 \& 1996)
that the coupling between
several components de-stabilises every dynamical component.
The apparent stability ($Q\approx 2-3$) of the gas component
might therefore not be incompatible with an instability-regulated
velocity dispersion for the gas.
But then, in the vertical direction, the dispersion is much higher
than the minimum required for vertical stability.
Could this large velocity dispersion be powered by star formation?
This is not likely, at least for the majority of the \hI\ gas
well outside the optical disk, where no stellar activity is observed.
A possible explanation would be to suppose that the \hI\ is tracing a much larger
amount of gas, in the form of molecular clouds, which will then
be self-gravitating, with $Q_g \la 1$ (Pfenniger et al 1994;
Pfenniger \& Combes 1994).
With a flat rotation curve, and a gas surface density decreasing as $1/r$,
the critical dispersion would then be constant with radius.
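This follows directly from the Toomre expression: for a flat rotation curve
$\kappa = \sqrt{2}\, V/r$, so that
$$
\sigma_{cg} = \frac{3.36\, G\, \mu_g}{\kappa}
\propto \frac{\mu_g\, r}{V} \propto \frac{(1/r)\, r}{V} = {\rm const.}
$$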
\subsection{ Similar \hI\ and CO vertical dispersions}
\subsubsection{ Two phases of the same dynamical component }
Another puzzle is the similarity of the CO and \hI\ vertical velocity
dispersions. If the gas layers are indeed isothermal in z, we can deduce
that the atomic and molecular layers also have similar scale-heights.
This means that the atomic and molecular components can be considered as a
unique dynamical component, which can be observed under two phases,
according to the local physical conditions (density, excitation temperature,
etc.). The amplitudes of the z-oscillations of the molecular and atomic
gas are the same; we simply see the gas as molecular when it is at
heights lower than $\approx$ 50\,pc. At these heights,
the molecular fraction is
$f_{mol} \ga 0.8$ (Imamura \& Sofue 1997), which means that almost
all clouds are molecular, taking into account their atomic envelope.
In fact it is not clear whether we see the CO or H$_2$ formation and
destruction, since we can rely only on the CO tracer. Also, it is
possible that the density of clouds at high altitude is not enough to
excite the CO molecule, which means that the limit for observing CO
will not coincide with the limit for molecular presence itself.
The latter is strongly suggested by the observed vertical density profiles
of the H$_2$ and \hI\ number density: there is a sharp boundary where
the apparent $f_{mol}$ falls to zero, while we expect a smoother profile
for a unique dynamical gas component.
That the gas can change phase from molecular to atomic and vice-versa
several times in one z-oscillation is not unexpected,
since the time-scale of molecular formation and destruction
is smaller than the z-oscillation period, of $\approx$ 10$^8$ yrs
at the optical radius: the chemical time-scale
is of the order of 10$^5$ yrs (Leung et al 1984, Langer \& Graedel 1989).
Moreover, as discussed in the previous section ({\it 5.1}), the key factor
controlling the presence of molecules is photodestruction, which
explains why there is a column density threshold above which the
gas phase turns to molecular (Elmegreen 1993). This threshold could be
reached at some particular height above the plane.
\bigskip
\subsubsection{ Collisions }
Should we expect the existence of several layers of gas at different
temperatures, and therefore different thicknesses, in galaxy planes?
In the very simple model of a diffuse and homogeneous gas,
unperturbed by star-formation, we
can compute the mixing time-scale of two layers at different temperatures,
through atomic or molecule collisions: this is of the order of the
collisional time-scale, $\approx$ 10$^4$ yrs for an average volume density
of 1 cm$^{-3}$, and a thermal velocity of 0.3 \kms. This is very short
with respect to the z-oscillation time scale of $\approx$ 10$^8$ yrs,
and therefore mixing should occur, if differential dissipation or
gravitational heating is not taken into account.
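As a rough sanity check (not part of the original analysis), the collisional time-scale just quoted can be sketched as a back-of-envelope computation; the atomic cross-section used here is an assumed value not given in the text.

```python
# Rough check of the mixing time-scale quoted for the diffuse-gas
# model: t_coll ~ 1 / (n * sigma * v).  The geometric cross-section
# is an assumption introduced for this sketch.
YR_S = 3.156e7          # one year in seconds

n = 1.0                 # average volume density, cm^-3
sigma = 1e-15           # assumed atomic cross-section, cm^2
v = 0.3e5               # thermal velocity of 0.3 km/s, in cm/s

t_coll_yr = 1.0 / (n * sigma * v) / YR_S
print(f"t_coll ~ {t_coll_yr:.1e} yr")   # ~1e3 yr with these inputs
```

With these assumed inputs the time-scale comes out within an order of magnitude of the quoted $\approx$ 10$^4$ yrs, and in any case vastly shorter than the 10$^8$ yr oscillation period.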
This simple model is of course very far from realistic.
We know that the interstellar medium, atomic as well as
molecular, is distributed in a hierarchical ensemble of clouds,
similar to a fractal. Let us then consider another simple model,
that of an ideal gas whose particles are in fact the interstellar
clouds, undergoing collisions (cf Oort 1954, Cowie 1980).
For typical clouds of 1 pc size, and 10$^3$ cm$^{-3}$ volume density,
the collisional time-scale is of the order of 10$^8$ yrs,
comparable with the vertical oscillations time-scale.
This figure should not be taken too seriously, given the
rough simplifications, but it corresponds to what has
been known for a long time, i.e. the ensemble of clouds cannot
be considered as a fluid in equilibrium, since the collisional
time-scale is comparable to the dynamical time,
like the spiral-arm crossing time (cf Bash 1979, Kwan 1979, Casoli \&
Combes 1982, Combes \& Gerin 1985).
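The cloud-as-particle estimate above can likewise be sketched as a back-of-envelope computation; the mean ISM density used to set the cloud filling factor and the cloud-cloud velocity are assumptions of this sketch, not numbers stated at this point in the text.

```python
# Rough cloud-cloud collision time-scale, treating clouds as the
# "particles" of an ideal gas: t_coll ~ 1 / (n_cl * sigma_cl * v).
# The filling factor follows from requiring that clouds of internal
# density 1e3 cm^-3 average out to an assumed ~1 cm^-3 mean density.
import math

PC_CM = 3.086e18                      # one parsec in cm
YR_S = 3.156e7                        # one year in seconds

d = 1.0 * PC_CM                       # cloud diameter, 1 pc
n_in = 1e3                            # internal cloud density, cm^-3
n_mean = 1.0                          # assumed mean ISM density, cm^-3
v = 7e5                               # assumed cloud-cloud velocity, cm/s

vol = (math.pi / 6.0) * d**3          # volume of one cloud
filling = n_mean / n_in               # volume filling factor, ~1e-3
n_cl = filling / vol                  # cloud number density, cm^-3
sigma_cl = math.pi * d**2             # geometric cross-section
t_coll_yr = 1.0 / (n_cl * sigma_cl * v) / YR_S
print(f"cloud-cloud t_coll ~ {t_coll_yr:.1e} yr")   # a few 1e7 yr
```

The result, a few 10$^7$ yrs, is within the roughness of these assumptions comparable to the $\approx$ 10$^8$ yr oscillation time-scale quoted above.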
If the collisions were able to redistribute the kinetic
energy completely, there should be equipartition, i.e.
the velocity dispersion would decrease with the mass $m$ of the
clouds like $\sigma_v \propto m^{-1/2}$. In fact the cloud-cloud
relative velocities are roughly constant with mass
(between clouds of masses 100 M$_\odot$ and GMCs of 10$^6$
M$_\odot$, a ratio of 100 would be expected in velocity dispersions,
which is not observed, Stark 1979). Towards the Galactic anticenter, where
streaming motions should be minimised, the one-dimensional dispersion
for the low-mass and giant clouds are found to be about 9.1 and 6.6 \kms
respectively, with near constancy over several orders of magnitude,
and therefore no equipartition of energy (Stark 1984).
The near constancy of the velocity
dispersions with mass requires that other mechanisms
responsible for the heating be found.
\subsubsection{ Gravitational heating }
If relatively small clouds can
be heated by star-formation, supernovae, etc. (e.g.
Chi\`eze \& Lazareff 1980), the largest clouds could be
heated by gravitational scattering (Jog \& Ostriker 1988,
Gammie et al 1991). In the latter mechanism, encounters between
clouds with impact parameters of the order of their tidal radius
in a differentially rotating disk are equivalent to a gravitational
viscosity that pumps the rotational energy into random cloud
kinetic energy. A 1D velocity dispersion of 5-7\kms is the predicted
result, independent of mass. This value is still slightly lower than
the observed 1D dispersion of clouds observed in the Milky Way.
Stark \& Brand (1989) find 7.8\kms from a study within 3 kpc of the
sun. But collective effects, gravitational instabilities forming
structures like spiral arms, etc., have not yet been taken into account.
Given the high degree of structure and apparent permanent
instability of the gas, they must play a major role in the heating,
the source of energy being also the global rotational energy.
Dissipation lowering the gas dispersion continuously maintains
the gas at the limit of instability, closing the feedback loop
of the self-regulation (Lin \& Pringle 1987, Bertin \& Romeo 1988).
In the external parts of galaxies, where there is no star formation,
gravitational instabilities are certainly the essential heating
mechanism. This again will tend to an isothermal, or more exactly
isovelocity, ensemble of clouds, since the gravitational
mechanism does not depend on the particle mass.
The molecular and atomic gas are equivalent in this process,
and should reach the same equilibrium dispersion.
\medskip
\subsubsection { \hI\ and CO velocity dispersions in the Milky Way }
In the Milky Way, although the kinematics of the gas is much more
complicated due to our embedded perspective, the same puzzle arises.
The velocity dispersion has been estimated through several methods,
with intrinsic biases for each method, but essentially the dispersion
has been estimated in the plane.
Only with high-latitude molecular clouds can we get an idea of the
local vertical velocity dispersion. Magnani et al (1996) have recently made
a compilation of more than 100 of these high-latitude clouds. The velocity
dispersion of the ensemble is 5.8\kms if seven intermediate velocity objects
are excluded, and 9.9 \kms otherwise. This is interestingly close to
the values we find for NGC 628 (6\kms) and NGC 3938 (8.5\kms). Unfortunately
there is always some doubt in the Galaxy that all molecular clouds are
taken into account, due to many selection effects, while the measurement
is much more direct at large scale in external face-on galaxies.
In fact, it has been noticed by Magnani et al (1996) that there was an
inconsistency between the local measured scale-height of molecular
clouds (about 60pc) and the vertical velocity dispersion. However, they
interpret this in terms of a different population for the
local high-latitude clouds (HLC). Indeed, the
total mass of observed HLC is still a small fraction of the molecular
surface density at the solar radius.
The local gaussian scale height of the molecular component has
been derived to be 58pc (at R$_\odot$ = 8.5kpc) through
a detailed data modelling by Malhotra (1994); this is
also compatible with all previous values (Dame et al 1987, Clemens
et al 1988). The local \hI\ scale height is 220pc (Malhotra 1995).
We therefore would have expected a ratio of 3.8 between the dispersions
of the H$_2$ and \hI\ gas, but these are very similar, within the uncertainties,
which come mainly from the clumpiness of the clouds for the H$_2$
component. If we believe the more easily determined \hI\ dispersion
of 9\kms (Malhotra 1995), then the H$_2$ dispersion is expected to
be 2.4\kms, clearly outside of the error bars or intrinsic scatter:
the value at the solar radius is estimated at 7.8\kms by Malhotra (1994).
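The expected ratio and the implied H$_2$ dispersion are simple arithmetic, sketched here assuming, as the text does, that for isothermal layers the dispersion scales linearly with the scale height:

```python
# Ratio of dispersions implied by the local gaussian scale heights,
# and the H2 dispersion that would follow from the HI value.
h_HI, h_H2 = 220.0, 58.0       # pc (Malhotra 1995, 1994)
ratio = h_HI / h_H2
print(f"expected sigma(HI)/sigma(H2) ~ {ratio:.1f}")         # ~3.8

sigma_HI = 9.0                 # km/s (Malhotra 1995)
sigma_H2_expected = sigma_HI / ratio
print(f"expected sigma(H2) ~ {sigma_H2_expected:.1f} km/s")  # ~2.4
```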
Of course, all this discussion is hampered by the fact that we discuss
mainly horizontal dispersions in the case of the Milky Way, while
the gas dispersions could well be anisotropic. This is why the present results
on external face-on galaxies are more promising.
\bigskip
The vertical gas velocity dispersion in spiral galaxies
is an important parameter required to determine the flattening of
the dark matter component, combined with the observation of the gas
layer thickness (cf Olling 1995, Becquaert \& Combes 1997). We have shown here
that the gas dispersion does not appear very anisotropic, in the sense
that the vertical dispersion is not much smaller than what has been
derived in the plane of our Galaxy (for instance by the terminal
velocity method, Burton 1992, Malhotra 1994). Such vertical dispersion data
should be obtained in much larger samples, to consolidate
statistically this result.
\vspace{0.25cm}
\acknowledgements
We are very grateful to the referee, R. Bottema, for his interesting
and helpful comments. We acknowledge the support from the staff
at the Pico Veleta during the course of these observations.
\section{Introduction and basic definitions}
In recent years, various papers have shown some interplay between theoretical economics and mathematical logic. More specifically, some connections have risen between social welfare relations on infinite utility streams and descriptive set theory. In particular the following results have been proven:
\begin{itemize}
\item in \cite{Lau09} Lauwers proves that the existence of a total social welfare relation satisfying infinite Pareto and anonymity implies the existence of a non-Ramsey set.
\item in \cite{Z07} Zame proves that the existence of a total social welfare relation satisfying strong Pareto and anonymity implies the existence of a non-Lebesgue measurable set.
\end{itemize}
(Precise definitions of these combinatorial concepts from economic theory are introduced in Definition \ref{def1} below.)
So in terms of set-theoretical considerations, these results mean that the existence of these specific relations satisfying certain combinatorial principles from economic theory is connected to a fragment of the axiom of choice, AC.
As a consequence, from the set-theoretical point of view, it is natural and interesting to understand more deeply the \emph{exact} fragment of AC they correspond to, in particular compared to
other objects coming from measure theory, topology and infinitary combinatorics, extensively studied in the set-theoretic literature (for a detailed overview see \cite{Ikegami},\cite{Khomskii},\cite{BL99}). More precisely, we show that the reverse implications of Lauwers and Zame's results do not hold, and therefore total social welfare relations satisfying Pareto and anonymity need a strictly larger fragment of AC than non-Lebesgue measurable and non-Ramsey sets.
Moreover we are going to analyse social welfare relations from the topological point of view, and specifically the connection with the Baire property.
This question is explicitly asked in \cite[Problem 11.14]{Mathias} and it was the main motivation behind this paper. We deeply thank Adrian Mathias for such a fruitful inspiration.
\vspace{3mm}
Since the motivation of this paper comes from the study of some combinatorial concepts from economic theory, we briefly recall the basic notions about social welfare relations and infinite utility streams, in as much detail as required for our scope.
We consider a \emph{set of utility levels} $Y$ (or \emph{utility domain}) endowed with some topology and totally ordered, and we call $X:= Y^\omega$ the corresponding \emph{space of infinite utility streams}, endowed with the product topology.
Given $x,y \in X$ we write $x \leq y$ iff $\forall n \in \omega (x(n) \leq y(n))$, and $x < y$ iff $x \leq y \land \exists n \in \omega (x(n) < y(n))$. Furthermore we set $\mathcal{F}:= \{ \pi: \omega \rightarrow \omega: \text{ finite permutation} \}$, and we define, for $x \in X$, $f_\pi(x):= \langle x({\pi(n)}): n \in \omega \rangle$.
We say that a subset $\precsim$ of $X \times X$ is a \emph{social welfare relation (SWR) on $X$} iff $\precsim$ is reflexive and transitive.
Next we introduce the theoretical economic principles used in this paper.
\begin{definition} \label{def1}
Let $\precsim$ be a SWR on $X$. We say that $\precsim$ satisfies:
\begin{itemize}
\item \label{D1} \emph{Anonymity (A)} iff whenever given $x, y\in X$ there exist $i, j\in \omega$ such that $y(j) = x(i)$ and $x(j) = y(i)$, while $y(k) = x(k)$ for all $k\in \omega\setminus \{i, j\}$, then $x \sim y$.
\item \label{D2} \emph{Strong Pareto (SP)} iff for all $x, y\in X$, if $x\leq y$ and $x(i)<y(i)$ for some $i \in \omega$, then $x \prec y$.
\item \label{D3} \emph{Infinite Pareto (IP)} iff for all $x, y\in X$, if $x\leq y$ and $x(i)<y(i)$ for infinitely many $i \in \omega$, then $x \prec y$.
\item \label{D4} \emph{Weak Pareto (WP)} iff for all $x, y\in X$, if $x(i)<y(i)$ for all $i \in \omega$, then $x \prec y$.
\end{itemize}
\end{definition}
It is immediate to notice that SP $\Rightarrow$ IP $\Rightarrow$ WP.
From descriptive set theory of the reals we recall the following notions:
\begin{itemize}
\item $X \subseteq [\omega]^\omega$ is \emph{non-Ramsey} iff for every $F \in [\omega]^\omega$ one has $[F]^\omega \not \subseteq X$ and $[F]^\omega \cap X \neq \emptyset$.
\item $X \subseteq 2^\omega$ is Lebesgue measurable iff there exists a Borel set $B \subseteq 2^\omega$ such that $X \Delta B$ has measure zero. In case a set is not Lebesgue measurable we call it \emph{non-Lebesgue}.
\item $X \subseteq 2^\omega$ satisfies the \emph{Baire property} iff there exists an open set $O \subseteq 2^\omega$ such that $X \Delta O$ is meager. In case a set does not satisfy the Baire property we call it \emph{non-Baire}.
\end{itemize}
Throughout the paper, we use the following notation:
\[
\begin{split}
\textbf{SPA}_Y :=& \text{``There exists a total SWR on $Y^\omega$ satisfying A and SP"} \\
\textbf{IPA}_Y :=& \text{``There exists a total SWR on $Y^\omega$ satisfying A and IP"} \\
\textbf{WPA}_Y :=& \text{``There exists a total SWR on $Y^\omega$ satisfying A and WP"} \\
\textbf{NL} :=& \text{``There exists a non-Lebesgue set"}\\
\textbf{NR} :=& \text{``There exists a non-Ramsey set"}\\
\textbf{NB} :=& \text{``There exists a non-Baire set"}
\end{split}
\]
\begin{remark} \label{Remark2}
In the first three symbols, involving statements on total SWRs, we have specified the utility domain, as the nature of such SWRs strongly depends on $Y$. To see this, one can observe that combining A and WP is trivial when $Y=\{ 0,1 \}$, since for instance we can simply define $\prec$ so that for all $y \in 2^\omega$ which are not the constant sequence $e_0:= \langle 0, 0, \dots \rangle$ we have $e_0 \prec y$, but on the other hand the combination of A and WP gives non-constructive SWRs when $Y=[0,1]$ (for instance, see Proposition \ref{P-wp}).
\end{remark}
\begin{remark}
The study on infinite populations and these combinatorial principles has been rather extensively developed in the economic literature. Summarizing the reasons and analysing the intepretations in the context of economic theory is away from the aim of this paper, which should be meant as a contribution to a set-theoretic question coming from the study of combinatorial concepts introduced in economic theory, rather than an effective application of set theory to economic theory. The reader interested in detailed background from economic theory literature may consult the following selected list of papers: \cite{Asheim2010}, \cite{Au64}, \cite{Chi*}, \cite{Chi96}, \cite{Litak}, \cite{Lau09}, \cite{LV97}, \cite{Z07}.
\end{remark}
\vspace{3mm}
The technical tools from forcing and descriptive set theory of the reals are introduced through the paper when specifically needed.
\section{Topological side of SWRs} \label{section1}
In this section we investigate the topological properties of SWRs satisfying Pareto and anonymity (focusing on the Baire/product topology), and prove some interplay with non-Baire sets, answering Problem 11.14 posed in \cite{Mathias}.
We start with an example. Let $X := [0,1]^\omega$ and
define $\rhd$ (usually called Suppes-Sen principle) as follows: for every $x,y \in X$, we say
\[
\begin{split}
x \rhd y & \text{ iff there exists $\pi \in \mathcal{F}$ such that $f_\pi(x) > y$}.\\
x \sim y & \text{ iff there exists $\pi \in \mathcal{F}$ such that $f_\pi(x) = y$}.
\end{split}
\]
Let $\textsf{supp}_r(y):= \{ n \in \omega: y(n)\neq r \}$, for a given $r \in [0,1]$.
It is clear that $\rhd$ is a SWR satisfying SP and A.
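As an illustration (not part of the original text), on streams that agree beyond a finite coordinate the Suppes-Sen comparison can be decided by brute force over the finitely many relevant permutations; the helper name below is ours.

```python
# Finite sketch of the Suppes-Sen comparison: for streams agreeing
# beyond coordinate n, x > y (Suppes-Sen) iff some permutation pi of
# the first n coordinates gives f_pi(x) >= y everywhere, strictly
# somewhere.  Brute force over permutations; illustration only.
from itertools import permutations

def suppes_sen_gt(x, y):
    """x, y: equal-length tuples (streams truncated where they agree)."""
    n = len(x)
    for pi in permutations(range(n)):
        fx = [x[pi[i]] for i in range(n)]
        if all(fx[i] >= y[i] for i in range(n)) and any(fx[i] > y[i] for i in range(n)):
            return True
    return False

print(suppes_sen_gt((0.9, 0.2, 0.5), (0.4, 0.1, 0.8)))  # True
print(suppes_sen_gt((0.9, 0.1), (0.5, 0.5)))            # False
print(suppes_sen_gt((0.5, 0.5), (0.9, 0.1)))            # False
```

The last two calls exhibit an incomparable pair, anticipating the coarseness of $\rhd$ discussed in the next remark.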
We consider the standard euclidean topology on $[0,1]$ and then the corresponding product topology on $X:=[0,1]^\omega$.
\begin{remark}
The Suppes-Sen principle is rather coarse from the topological point of view, as many pairs $x,y \in X$ are incompatible w.r.t. $\rhd$.
More precisely, $S:= \{(x,y) \in X \times X: x \ntriangleright y \land y \ntriangleright x \land x \not \sim y \}$ is comeager.
Let $S'$ be the complement of $S$.
We show that $S'$ is meager.
First partition $S'$ into three pieces:
$E:= \{ (x,y) \in X \times X: x \rhd y \}$, $D:= \{ (x,y) \in X \times X: y \rhd x \}$ and $C:= \{ (x,y) \in X \times X: y \sim x\}$.
We check that $E$ is meager and then note that similar arguments work for $D$ and $C$ as well.
Fix $y \in X$ so that $\textsf{supp}_0(y)$ is infinite (i.e., $y$ is not eventually 0) and consider $E^y:= \{ x \in X : (x,y) \in E \}$. Let $H^y:= \{x \in X: x > y \}$.
Note that
\(
E^y:= \bigcup_{\pi \in \mathcal{F}} H^{f_\pi(y)}.
\)
Since $\mathcal{F}$ is countable it is enough to prove that for each $\pi \in \mathcal{F}$, $H^{f_\pi(y)}$ is meager.
Actually $H^y$ is nowhere dense, for every $y \in X$ with $|\textsf{supp}_0(y)| =\omega$; in fact, given $U:= \prod_{n \in \omega} U_n \subseteq X$ basic open set and $k \in \omega$ sufficiently large that for all $n \geq k$, $U_n=[0,1]$,
one can pick $n^* > k$ such that $n^* \in\textsf{supp}_0(y)$ and pick $U' \subseteq U$ so that:
$\forall n \neq n^*$, $U_n=U'_n$ and
$U'_{n^*}:= [0,y(n^*))$.
Then it is clear that $U' \cap H^y=\emptyset$. Note that if $\pi \in \mathcal{F}$ we get $|\textsf{supp}_0(f_\pi(y))|=\omega$ as well, and so $H^{f_\pi(y)}$ is nowhere dense too.
By the Kuratowski-Ulam theorem, the proof is concluded simply by noticing that the set $\{y \in X: |\textsf{supp}_0(y)|=\omega \}$ is comeager, which easily follows since each $B_n:= \{ y \in X: |\textsf{supp}_0(y)| \leq n \}$ is nowhere dense.
\end{remark}
From this point of view the Suppes-Sen principle can then be considered rather ``poor", for a positive characteristic of a SWR is to be able to compare as many elements as possible.
Part 1 of the following proposition shows that this coarse nature is not specific to the Suppes-Sen principle, but in a sense holds for any ``regular" SWR satisfying A and SP. Moreover, in part 2 we show that when assuming $\textbf{SPA}_Y$, the price to pay is to get a set without the Baire property. In the following we consider $X=[0,1]^\omega$.
\begin{proposition} \label{prop:non-BP}
Let $X=[0,1]^\omega$. Then the following hold:
\begin{enumerate}
\item Let $\precsim$ be a SWR satisfying A and SP on $X$, $E:= \{(x,y) \in X \times X: x \succ y\}$ and $D:=\{(x,y) \in X \times X: y \succ x \}$.
If both $E$ and $D$ have the Baire property, then $E \cup D$ is meager.
\item Let $\precsim$ be a total SWR satisfying A and SP on $X$, and let E,D as above.
Then either $E$ or $D$ does not have the Baire property.
\end{enumerate}
\end{proposition}
\begin{proof}
$1.$ Assuming that $E$ and $D$ both satisfy the Baire property, we show that $E$ is meager, and remark that the argument for $D$ is essentially the same.
Given $S \subseteq X \times X$ and $y \in X$, we use the notation $S_y:= \{x \in X: (x,y) \in S \}$.
Since we assume $E$ has the Baire property, we can find a Borel set $B \subseteq E$ such that $E \setminus B$ is meager; moreover for every $\pi, \pi' \in \mathcal{F}$ we can define $B(\pi,\pi'):= \{ (f_\pi(x), f_{\pi'}(y)): (x,y) \in B \}$. Put $B^*:= \bigcup \{ B(\pi,\pi'): \pi, \pi' \in \mathcal{F} \}$ and note that $B^* \subseteq E$, as $E$ is closed under finite permutations. Moreover, $E \setminus B^*$ is meager too.
Let $I_0:= \{ y \in X : B^*_y \text{ is meager} \}$ and $I_1:= \{ y \in X : B^*_y \text{ is comeager} \}$. Note that each $B^*_y$ is by definition invariant under finite permutations,
i.e., $x \in B^*_y \Leftrightarrow f_\pi(x) \in B^*_y$, where $\pi \in \mathcal{F}$.
Hence by \cite[Theorem 8.46]{Kechris} with $G$ being the group on $X$ induced by finite permutations, we have that each $B^*_y$ is either meager or comeager, and hence $I_0 \cup I_1 = X$. We also observe that both $I_0$ and $I_1$ are invariant under $\pi \in \mathcal{F}$. In fact, it is straightforward to check that if $\pi \in \mathcal{F}$ and $B^*_y$ is meager, then $B^*_{f_\pi(y)}$ is meager too.
So, if $I_1$ is comeager, by Kuratowski-Ulam theorem we get $E$ is comeager. But since an analogous argument could be done for $D$ too, we would have that also $D$ is comeager; however by definition $E \cap D = \emptyset$, which is a contradiction.
As a consequence, we get $I_0$ is comeager, which implies $E$ (and $D$ as well) is meager.
\vspace{2mm}
$2.$ Note that in this case the SWR is total and so, if $E$ and $D$ both satisfy the Baire property, it follows that the set $A:= \{ (x,y) \in X \times X: x \sim y \}$ is comeager.
Thus, by Kuratowski-Ulam's theorem there is $y \in X$ such that $A_{y}$ is comeager.
Pick $0<r<\frac{1}{2}$, define
\[
H:= [0,1-r] \times \prod_{i \geq 1} [0,1],
\]
and consider the injective function $\phi: H \rightarrow X$ defined by $\phi(x)(0):= x(0)+ r$ and $\phi(x)(n):=x(n)$ for all $n>0$. Note that
\[
\phi[H]= [r,1] \times \prod_{i \geq 1} [0,1].
\]
Note also that for every $x \in H$, $\phi(x) \succ x$ by SP, and so in particular if $x \sim y$ then $\phi(x) \not \sim y$. Hence, we have the following two mutually contradictory consequences.
\begin{itemize}
\item On the one side, $H \cap A_y \cap \phi[H \cap A_y]=\emptyset$;
indeed if there exists $z \in H \cap A_y \cap \phi[H \cap A_y]$, then there is $x \in H \cap A_y$ such that $z := \phi(x)$; then on the one hand we have $z \in A_y$ which gives $z \sim y$, but on the other hand we have $x \in H \cap A_y$ that in turn gives $x \sim y$ and so together with $x \prec \phi(x)=z$ we would get $y \prec z$; contradiction.
\item On the other side, $H \cap A_y \cap \phi[H \cap A_y]$ cannot be meager, since $H \cap \phi[H]$ is a non-empty open set, $H \cap A_y$ is comeager in $H$ and $\phi[H \cap A_y]$ is comeager in $\phi[H]$.
\end{itemize}
\end{proof}
\begin{remark}
Note that Proposition \ref{prop:non-BP} holds even when $Y=\{0,1\}$. For part (1) we can argue with the same proof, while for part (2) we only need to consider the map $\phi: X \rightarrow X$ such that $\phi(x)(0)\neq x(0)$ and for all $n > 0$, $\phi(x)(n)=x(n)$, and so $\phi(x) \not \sim x$.
\end{remark}
\subsection{Mathias-Silver trees}
We recall the standard basic notions and notation about tree-forcings.
A subset $T \subseteq Y^{<\omega}$ is called a \emph{tree} if and only if for every $t \in T$ every $s \subseteq t$ is in $T$ too, in other words, $T$ is \emph{closed} under initial segments.
We call the elements $t \in T$ the \emph{nodes} of $T$ and denote the \emph{length} of a node $t$ by $|t|$; $\textsf{stem}(T)$ is the longest node such that $\forall t \in T (t \subseteq \textsf{stem}(T) \vee \textsf{stem}(T) \subseteq t)$.
A node $t \in T$ is called \emph{splitting} if there are two distinct $n$, $m \in Y$ such that $t^\smallfrown n$, $t^\smallfrown m \in T$.
Given $x \in Y^{\omega}$ and $n \in \omega$, we denote by $x {\upharpoonright} n$ the cut of $x$ of length $n$, i.e., $x {\upharpoonright} n := \langle x(0), x(1), \cdots, x(n-1) \rangle$.
A tree $p \subseteq Y^{<\omega}$ is called \emph{perfect} if and only if for every $s \in p$ there exists $t \supseteq s$ splitting.
We define $[p]:=\{x \in Y^\omega: \forall n \in \omega (x{\upharpoonright} n \in p)\}$, and $x \in [p]$ is called a \emph{branch of $p$}.
A tree $p \subseteq 2^{<\omega}$ is called \emph{Silver} tree if and only if $p$ is perfect and for every $s$, $t \in p$, with $|s|=|t|$ one has $s^\smallfrown 0 \in p$ $\Leftrightarrow t^\smallfrown 0 \in p$ and $s^\smallfrown 1 \in p$ $\Leftrightarrow t^\smallfrown 1 \in p$.
If $t$ is a splitting node of $p$, we call $|t|+1$ a splitting level of $p$ and let $S(p)$ denote the set of splitting levels of $p$.
Then set $U(p):= \{n \in \omega: \forall x \in [p] (x(n)=1)\}$ and let $\{n^p_k: k \in \omega\}$ enumerate the set $S(p) \cup U(p)$.
We could also define a Silver tree $p$ and its corresponding set of branches $[p]$ relying on the notion of partial functions.
Consider a partial function $f: \omega \rightarrow \{0, 1\}$ such that $\text{dom}(f)$ is co-infinite (i.e. the complement of the domain of $f$ is infinite); then define $N_f:= \{x \in 2^{\omega}: \forall n \in \text{dom}(f) (f(n)=x(n))\}$.
It easily follows from the definitions that there is a one-to-one correspondence between Silver trees and sets of the form $N_f$:
given any Silver tree $p$ there is a unique partial function $f: \omega \rightarrow \{0, 1\}$ with co-infinite domain such that $[p]=N_f$.
In particular, the set of splitting levels $S(p)$ correspond to $\omega \setminus \text{dom}(f)$.
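This correspondence can be illustrated on finite prefixes (an illustration of ours, not part of the original text): membership of a truncated branch in $N_f$ just checks the sequence against the partial function $f$, represented as a dictionary on its domain.

```python
# Finite sketch of [p] = N_f: a 0/1 sequence x lies in N_f iff it
# agrees with the partial function f on dom(f).  Positions outside
# dom(f) are the splitting levels, where both values occur.
def in_N_f(x, f):
    """x: finite 0/1 sequence; f: dict {n: bit} with each n < len(x)."""
    return all(x[n] == bit for n, bit in f.items())

f = {0: 1, 2: 0, 3: 0}              # dom(f) co-infinite in the full setting
print(in_N_f([1, 0, 0, 0, 1], f))   # True: levels 1 and 4 are free
print(in_N_f([1, 1, 1, 0, 0], f))   # False: disagrees with f at n = 2
```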
Silver trees are extensively studied in the literature, as well as their topological properties (e.g., see \cite{Halbeisen2003} and \cite{BLH2005}).
We now introduce a variant of Silver trees which perfectly serves for our purpose.
\begin{definition}
Let $p \subseteq 2^{<\omega}$ be a Silver tree with $\{ n_k^p : k \geq 1 \}$ the increasing enumeration of $S(p) \cup U(p)$; $p$ is called a \emph{Mathias-Silver} tree ($p \in \mathbb{MV}$) if and only if there are infinitely many triples $(n^p_{m_j}, n^p_{m_j+1}, n^p_{m_j+2})$ such that:
\begin{enumerate}
\item for all $j \geq 1$, $m_j$ is even;
\item for all $j \geq 1$, $n^p_{m_j}, n^p_{m_j+1}, n^p_{m_j+2}$ are in $S(p)$ with $n^p_{m_j} +1< n^p_{m_j+1}$ and $n^p_{m_j+1}+1 < n^p_{m_j+2}$;
\item for all $j \geq 1$, $t \in p$, $i < |t|$ $(n^p_{m_j} < i < n^p_{m_j+1} \vee n^p_{m_j+1} < i < n^p_{m_j+2} \Rightarrow t(i)=0)$.
\end{enumerate}
We call $(n^p_{m_j}, n^p_{m_j+1}, n^p_{m_j+2})$ satisfying (1), (2) and (3) a \emph{Mathias triple}.
\end{definition}
\begin{remark}
The idea is that a Mathias-Silver tree is a special type of Silver tree that mimics infinitely often the feature of a Mathias tree, namely that in between the splitting levels occurring in any Mathias triple $(n^p_{m_j}, n^p_{m_j+1}, n^p_{m_j+2})$ all nodes of the tree $p$ take value 0. In the proofs of Propositions \ref{P-ip} and \ref{P-wp} this property will be crucial, and indeed it is not clear how to obtain, if possible, similar results working with Silver trees instead of Mathias-Silver trees.
\end{remark}
\begin{definition}
A set $X \subseteq 2^\omega$ is called \emph{Mathias-Silver measurable set} (or $\mathbb{MV}$-measurable set) if and only if there exists $p \in \mathbb{MV}$ such that $[p] \subseteq X$ or $[p] \cap X = \emptyset$.
A set $X \subseteq 2^\omega$ not satisfying this condition is called a \emph{non-$\mathbb{MV}$-measurable set}.
\end{definition}
\noindent The following lemma is the key step to prove that any set satisfying the Baire property is $\mathbb{MV}$-measurable, or in other words, that a non-$\mathbb{MV}$-measurable set is a particular instance of a non-Baire set.
The proof is a variant of the construction developed in \cite{Halbeisen2003} for standard Silver trees.
\begin{lemma}\label{l1}
Given any comeager set $C \subseteq 2^\omega$ there exists $p \in \mathbb{MV}$ such that $[p] \subseteq C$.
\end{lemma}
\begin{proof}
Let $\{D_n: n \in \omega\}$ be a $\subseteq$-decreasing sequence of open dense sets such that $\underset{n\in\omega}{\bigcap} D_n \subseteq C$.
Given $s \in 2^{<\omega}$, put $N_s:=\{x \in 2^\omega: x \supset s\}$.
Recall that if $D$ is open dense, then $\forall s \in 2^{<\omega}$ there exists $s^{\prime} \supseteq s$ such that $N_{s^{\prime}} \subseteq D$.
We construct $p \in \mathbb{MV}$ by recursively building up its nodes as follows: first of all let
\[
\begin{split}
s_1= (10000), s_2=(10001), s_3=(10100), s_4=(10101),\\
s_5=(00000), s_6=(00001), s_7=(00100), s_8=(00101).
\end{split}
\]
\begin{itemize}
\item Pick $t_{\emptyset} \in 2^{<\omega}$ such that $N_{t_\emptyset} \subseteq D_0$, and then let $F_0:= \overset{8}{\underset{k=1}{\bigcup}}\left\{t_\emptyset^\smallfrown s_k\right\}$ and $T_0$ be the downward closure of $F_0$, i.e., $T_0:= \left\{s \in 2^{<\omega}: \exists t \in F_0 (s \subseteq t) \right\}$;
\item Assume $F_n$ is already defined.
Let $\left\{t_j: j \leq J \right\}$ enumerate all nodes in $F_n$ (note that by construction $F_n$ has $8^{n+1}$ elements). We proceed inductively as follows: pick $r_0 \in 2^{<\omega}$ such that $N_{t_0^\smallfrown r_0} \subseteq D_{n+1}$; then pick $r_1 \supseteq r_0$ such that $N_{t_1^\smallfrown r_1} \subseteq D_{n+1}$; proceed inductively in this way for every $j \leq J$, picking $r_j \supseteq r_{j-1}$ such that $N_{t_j^\smallfrown r_j} \subseteq D_{n+1}$.
Finally put $r=r_J$.
Then define
\[
\begin{split}
F_{n+1}&:= \bigcup \left\{ t^\smallfrown r^\smallfrown s_k: t \in F_n, k=1,2, \dots 8 \right\} \\
T_{n+1}&:= \left\{ s \in 2^{<\omega}: \exists t \in F_{n+1} (s \subseteq t)\right\}.
\end{split}
\]
\end{itemize}
Note that by construction, for all $t \in F_{n+1}$ we have $N_t \subseteq D_{n+1}$.
Finally put $p := \underset{n\in\omega}{\bigcup} T_n$.
Then by construction $p$ is a Silver tree, and the use of $s_1, s_2, \cdots, s_8$ ensures that $p$ contains infinitely many Mathias triples, so $p \in \mathbb{MV}$.
It is left to show $[p] \subseteq \underset{n\in\omega}{\bigcap} D_n$.
For this, fix arbitrarily $x \in [p]$ and $n \in \omega$;
by construction there is $t \in F_n$ such that $t \subset x$.
Since $N_t \subseteq D_n$ we then get $x \in N_t \subseteq D_n$.
\end{proof}
\begin{corollary} \label{C1}
If $A \subseteq 2^{\omega}$ satisfies the Baire property, then $A$ is a $\mathbb{MV}$-measurable set.
\end{corollary}
\begin{proof}
The proof is a simple application of Lemma \ref{l1} and the fact that any set satisfying the Baire property is either meager or comeager relative to some basic open set $N_t$.
Indeed, if $A$ is meager, then we apply Lemma \ref{l1} to the complement of $A$ and find $p \in \mathbb{MV}$ such that $[p] \cap A= \emptyset$.
If there exists $t \in 2^{<\omega}$ such that $A$ is comeager in $N_t$, then we can find $p \in \mathbb{MV}$ such that $[p] \subseteq A$, simply by choosing $t_{\emptyset} \supseteq t$ with $N_{t_\emptyset} \subseteq D_0$ and then using the same construction as in the proof of Lemma \ref{l1}.
\end{proof}
\subsection{Infinite Pareto and Anonymity}\label{s4.1}
Given $x \in 2^\omega$, let $U(x):= \{ n \in \omega: x(n)=1 \}$ and $\{ n^x_k: k \geq 1 \}$ enumerate the numbers in $U(x)$.
Define
\begin{equation}
\begin{split}
o(x):= [n^x_1,n^x_2) \cup [n^x_3,n^x_4) \cup \cdots \cup [n^x_{2j+1}, n^x_{2j+2}) \cup \cdots \\
e(x):= [n^x_2,n^x_3) \cup [n^x_4,n^x_5) \cup \cdots \cup [n^x_{2j+2}, n^x_{2j+3}) \cup \cdots
\end{split}
\end{equation}
As usual we identify subsets of $\omega$ with their characteristic functions, so that we can write $o(x)$, $e(x) \in 2^\omega$.
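The construction of $o(x)$ and $e(x)$ can be illustrated on a finite initial segment of $U(x)$ (an illustration of ours, not part of the original text): the intervals determined by consecutive elements of $U(x)$ are assigned alternately to $o(x)$ and $e(x)$.

```python
# Finite sketch of the o(x)/e(x) construction: given the first few
# elements n_1 < n_2 < ... of U(x), o(x) collects [n_1,n_2),
# [n_3,n_4), ... and e(x) collects [n_2,n_3), [n_4,n_5), ...
# Only the intervals fully determined by the input are produced.
def o_and_e(u):
    """u: sorted list of the first few elements of U(x)."""
    o, e = set(), set()
    for j in range(len(u) - 1):
        target = o if j % 2 == 0 else e   # alternate: o, e, o, e, ...
        target.update(range(u[j], u[j + 1]))
    return o, e

o, e = o_and_e([2, 4, 7, 9])
print(sorted(o))   # [2, 3, 7, 8]  from [2,4) and [7,9)
print(sorted(e))   # [4, 5, 6]     from [4,7)
```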
\begin{proposition}\label{P-ip}
Let $\precsim$ denote a total SWR satisfying IP and A on $X= 2^{\omega}$.
Then there exists a subset of $X$ which is not $\mathbb{MV}$-measurable.
\end{proposition}
Note that Proposition \ref{P-ip} improves, in a sense, the result in part (2) of Proposition \ref{prop:non-BP}, as SP is stronger than IP and by Corollary \ref{C1} any non-$\mathbb{MV}$-measurable set is non-Baire too. However, Proposition \ref{prop:non-BP} is still relevant as it reveals some topological structural properties that cannot be deduced from Proposition \ref{P-ip}; for instance, part (1) of Proposition \ref{prop:non-BP} shows that any SWR satisfying the Baire property must have comeager many incompatible or equivalent pairs, which essentially means that any \emph{regular} SWR is necessarily rather coarse, as it has \emph{many} either incomparable or indistinguishable pairs, and the non-Baire set built in part (2) shows how the irregularity of total SWRs is intrinsically connected to the characteristic of excluding the presence of too many incompatible elements.
\begin{proof}[Proof of Proposition \ref{P-ip}]
Let $\Gamma:= \left\{ x \in 2^\omega: e(x) \prec o(x) \right\}$.
We show $\Gamma$ is not $\mathbb{MV}$-measurable.
Given any $p \in \mathbb{MV}$, let $\{n_k: k \geq 1 \}$ enumerate all natural numbers in $S(p) \cup U(p)$ (in the enumeration of the $n_k$'s we drop the index $p$ to make the notation less cumbersome, since the tree $p$ we refer to is fixed).
To prove our claim, we aim to find $x$, $z \in [p]$ such that $x \in \Gamma \Leftrightarrow z \notin \Gamma$.
We pick $x \in [p]$ such that for all $n_k \in S(p) \cup U(p)$, $x(n_k)=1$, i.e. for every $k \geq 1$, $n^x_k=n_k$.
Let $\left\{\left(n_{m_j}, n_{m_{j}+1}, n_{m_{j}+2}\right): j \geq 1 \right\}$ be an enumeration of all Mathias triples in $p$.
We need to consider three cases.
\begin{itemize}
\item Case $e(x) \prec o(x)$:
We remove $n_{m_1+1}$, $n_{m_j}$, $n_{m_j+1}$, for all $j>1$ from $U(x)$ to obtain $z \in 2^\omega$ as follows:
\[
z(n) := \Big \{
\begin{array}{ll}
x(n) & \text{if $n\notin \left\{n_{m_1+1}, n_{m_j}, n_{m_j+1}: j>1\right\}$}\\
0 & \text{otherwise}.
\end{array}
\]
Note that $z \in [p]$, since $n_{m_1+1}, n_{m_j}, n_{m_{j}+1}$ are all in $S(p)$. Let
\[
\begin{split}
O(m_1):=& [n_1,n_2) \cup [n_3,n_4) \cup \cdots \cup [n_{m_1-1}, n_{m_1}),\\
E(m_1):=& [n_2,n_3) \cup [n_4,n_5) \cup \cdots \cup [n_{m_1}, n_{m_1+1}).
\end{split}
\]
Let $\{k_1, k_2, \cdots, k_M\}$ enumerate the elements in $O(m_1)$, and let $\{k^1, \cdots, k^M\}$ enumerate the initial $M$ elements of the infinite set $\underset{j>1}{\bigcup}[n_{m_j}, n_{m_j+1})$.
We permute $e(z)(k_1)$ with $e(z)(k^1)$, $e(z)(k_2)$ with $e(z)(k^2)$, continuing likewise till $e(z)(k_M)$ with $e(z)(k^M)$ to obtain $e^{\pi}(z)$.
Further, $o^{\pi}(z)$ is obtained by carrying out identical permutation on $o(z)$.
Observe that $e^{\pi}(z)$ and $o^{\pi}(z)$ are finite permutations of $e(z)$ and $o(z)$ respectively.
Then,
\begin{itemize}
\item[-]{for all $n\in O(m_1)$, $e^{\pi}(z)(n) = 1 = o(x)(n)$ and $o^{\pi}(z)(n) = 0 = e(x)(n)$,}
\item[-]{for all $n\in E(m_1)$, $e^{\pi}(z)(n) = 1 > 0 = o(x)(n)$ and $o^{\pi}(z)(n) = 0 < 1 = e(x)(n)$,}
\item[-]{for all $n\in \underset{j>1}{\bigcup} [n_{m_j}, n_{m_j+1})\setminus \{k^1, \cdots, k^M\}$, $e^{\pi}(z)(n) = 1 > 0 = o(x)(n)$ and $o^{\pi}(z)(n) = 0 <1 = e(x)(n)$,}
\item[-]{for $n\in \{k^1, \cdots, k^M\}$, $e^{\pi}(z)(n) = 0= o(x)(n)$ and $o^{\pi}(z)(n) = 1= e(x)(n)$, and }
\item[-]{for all remaining $n\in \omega$, $e^{\pi}(z)(n) = o(x)(n)$ and $o^{\pi}(z)(n) = e(x)(n)$.}
\end{itemize}
Observe that A implies
$e^{\pi}(z)\sim e(z)\; \text{and}\; o^{\pi}(z)\sim o(z)$
and IP implies
$o(x) \prec e^{\pi}(z) \; \text{and}\; o^{\pi}(z) \prec e(x)$.
Combining them with transitivity, we thus get
$o(z) \sim o^{\pi}(z)\prec e(x) \prec o(x) \prec e^{\pi}(z) \sim e(z)$ and so $o(z) \prec e(z)$,
which implies $z\notin \Gamma$.
\vspace{2mm}
\item Case $o(x) \prec e(x)$: the argument is similar to the above case and we just need to arrange the details accordingly. We remove $n_{m_1}$, $n_{m_j+1}$, $n_{m_j+2}$, for all $j>1$ from $U(x)$ to obtain $z \in 2^\omega$ as follows:
\[
z(n) := \Big \{
\begin{array}{ll}
x(n) & \text{if $n\notin \left\{n_{m_1}, n_{m_j+1}, n_{m_j+2}: j>1\right\}$}\\
0 & \text{otherwise}.
\end{array}
\]
Let
\[
\begin{split}
O(m_1):=& [n_1,n_2) \cup [n_3,n_4) \cup \cdots \cup [n_{m_1-1}, n_{m_1}), \\
E(m_1):=& [n_2,n_3) \cup [n_4,n_5) \cup \cdots \cup [n_{m_1-2}, n_{m_1-1}).
\end{split}
\]
(In case $m_1=2$ put $E(m_1)=\emptyset$.)
Let $\{k_1, k_2, \cdots, k_M\}$ enumerate the elements in $E(m_1)$, and let $\{k^1, \cdots, k^M\}$ enumerate the initial $M$ elements of the infinite set $\underset{j>1}{\bigcup}[n_{m_j+1}, n_{m_j+2})$.
We permute $e(z)(k_1)$ with $e(z)(k^1)$, $e(z)(k_2)$ with $e(z)(k^2)$, continuing likewise till $e(z)(k_M)$ with $e(z)(k^M)$ to obtain $e^{\pi}(z)$.
Further, $o^{\pi}(z)$ is obtained by carrying out identical permutation on $o(z)$.
Observe that $e^{\pi}(z)$ and $o^{\pi}(z)$ are finite permutations of $e(z)$ and $o(z)$, respectively.
Then, by arguing as in the previous case, one can
observe that A implies
$e^{\pi}(z)\sim e(z)\; \text{and}\; o^{\pi}(z)\sim o(z)$
and IP gives
$e^{\pi}(z) \prec o(x) \; \text{and}\; e(x) \prec o^{\pi}(z)$.
Combining them we obtain
$e(z) \sim e^{\pi}(z) \prec o(x) \prec e(x) \prec o^{\pi}(z)\sim o(z)$ and so $e(z) \prec o(z)$,
which implies $z\in \Gamma$.
\vspace{2mm}
\item Case $e(x) \sim o(x)$: We remove $n_{m_j}$, $n_{m_j+1}$, for all $j>1$ from $U(x)$ to obtain $z \in 2^\omega$ as follows:
\[
z(n) = \Big \{
\begin{array}{ll}
x(n) & \text{if $n\notin \left\{n_{m_j}, n_{m_j+1}: j>1\right\}$}\\
0 & \text{otherwise}.
\end{array}
\]
By construction we obtain $o(z)(n) \geq o(x)(n)$ and $e(z)(n) \leq e(x)(n)$ for all $n\in \omega$.
Further, for all $n \in \underset{j\in \omega}{\bigcup} \left[n_{m_j}, n_{m_{j}+1}\right)$, $o(z)(n) = 1 > 0 = o(x)(n)$ and $e(z)(n) = 0 < 1 = e(x)(n)$.
Hence, by IP, we get
$o(x) \prec o(z) \; \text{and}\; e(z) \prec e(x)$,
and so by transitivity it follows
$e(z) \prec o(z)$, which implies $z\in \Gamma$.
\end{itemize}
\end{proof}
\section{Welfare-regularity Diagram} \label{section2}
The results proved in the previous section, together with the results of Lauwers and Zame mentioned in the introduction, yield the following \emph{Welfare-Regularity Diagram} (\emph{WR-diagram}), which essentially represents the fragments of AC corresponding to the non-constructive sets involved in our investigation. Since the utility domain $Y=\{ 0,1 \}$ is fixed throughout this section, we simply write $\textbf{SPA}$ (resp. $\textbf{IPA}$) instead of $\textbf{SPA}_{\{0,1\}}$ (resp. $\textbf{IPA}_{\{ 0,1 \}}$).
\begin{center}
\begin{tikzpicture}[-, auto, node distance=3.0cm, thick]
\tikzstyle{every state}=[fill=white, draw=none, text=black]
\node[state](U){\textbf{U}};
\node[state](SP)[right of=U]{\textbf{SPA}};
\node[state](IP)[right of=SP, node distance=2.0cm]{\textbf{IPA}};
\node[state] (NL) [above of=SP]{\textbf{NL}};
\node[state] (NB) [above of=IP]{\textbf{NB}};
\node[state](NR)[right of=IP]{\textbf{NR}};
\path (U) edge (SP)
(SP) edge (IP)
(SP) edge (NL)
(IP) edge (NB)
(IP) edge (NR)
;
\end{tikzpicture}
\end{center}
This WR-diagram mimics other popular diagrams in set theory of the reals (such as Cicho\'n's diagram) and should be read similarly: moving left-to-right or bottom-up means moving from a stronger to a weaker statement (in terms of ZF-implications). As with Cicho\'n's diagram, we consider combinations of $\square$'s and $\blacksquare$'s with the following convention: $\square$ means that the corresponding statement is true, $\blacksquare$ means that the corresponding statement is false.
It is then interesting to understand whether the various ZF-implications can be reversed and, more generally, whether, given a combination of $\blacksquare$'s and $\square$'s, one can find a suitable model satisfying it.
\subsection{A model for $\textbf{IPA}=\blacksquare$, $\textbf{NB}=\square$, $\textbf{NL}=\square$, $\textbf{NR}=\square$}
The proof uses some ideas from the previous section on Mathias-Silver trees, together with the proof method used in \cite[Proposition 3.7]{BLH2005}.
\begin{lemma} \label{L2}
Let $c$ be a Cohen generic real. Then
\[
V[c] \models \exists q \in \mathbb{MV} \forall z \in [q] (z \text{ is a Cohen real}).
\]
\end{lemma}
\begin{proof}
It follows the same idea as in the proof of Lemma \ref{l1}. Consider the poset $\mathbb{F}$ consisting of all $F \subseteq 2^{<\omega}$ finite uniform trees, i.e., such that:
\begin{itemize}
\item all terminal nodes of $F$ have the same length;
\item $\forall s,t \in F \forall i \in \{ 0,1 \}(|s|=|t| \Rightarrow (s^\smallfrown i \in F \Leftrightarrow t^\smallfrown i \in F))$.
\end{itemize}
$\mathbb{F}$ is ordered by end-extension: $F' \leq F$ iff $F' \supseteq F$ and for all $t \in F' \setminus F$ there is $s \in \textsc{Term}(F)$ such that $s \subseteq t$.
Since $\mathbb{F}$ is countable (and non-trivial), it is equivalent to $\mathbb{C}$.
Let $G$ be $\mathbb{F}$-generic over $V$ and put $q_G:= \bigcup G$.
We claim that $q_G \in \mathbb{MV}$ and every branch of $q_G$ is Cohen over $V$.
To show this, it suffices to prove that, given any $F \in \mathbb{F}$ and any $D \subseteq \mathbb{C}$ open dense from the ground model $V$,
there exists $F' \leq F$ such that
\(
F' \Vdash \forall t \in \textsc{Term}(F') (t \in D).
\)
It is easy to see that one can use the same argument as in the proof of Lemma \ref{l1} (in this case a single step suffices, with no need for $\omega$-many steps), using the same eight sequences $s_1, s_2, \dots, s_8$ in order to ensure that $q_G \in \mathbb{MV}$, and then uniformly extend the nodes in order to get that $t \in D$ for all $t \in \textsc{Term}(F')$.
\end{proof}
\begin{proposition}
Let $\mathbb{C}_{\omega_1}$ be an $\omega_1$-product of $\mathbb{C}$ with finite support and $G$ be $\mathbb{C}_{\omega_1}$-generic over $L$. Then
\[
L(\mathbb{R})^{L[G]} \models \neg \mathbf{IPA} \land \mathbf{NB} \land \mathbf{NL} \land \mathbf{NR}.
\]
\end{proposition}
\begin{proof}
It follows the same argument as in the proof of \cite[Proposition 3.7]{BLH2005}; we give the proof here for completeness and arrange some details according to our setting. We aim to show that in $L[G]$, for every $\text{On}^\omega$-definable set $X \subseteq 2^\omega$, there exists $q \in \mathbb{MV}$ such that $[q] \subseteq X$ or $[q] \cap X = \emptyset$. The key idea is that Cohen forcing adds a generic Mathias-Silver tree of Cohen branches and Cohen forcing is strongly homogeneous, so we have a factoring lemma \emph{\`a la Solovay}. More precisely, we can argue as follows. Given $X:=\{x \in 2^\omega: \varphi(x,v) \}$ with $\varphi$ a formula and $v \in \text{On}^\omega$, we can use a standard argument to absorb $v$ into the ground model, i.e., pick $\alpha<\omega_1$ such that $v \in L[G{\upharpoonright} \alpha]$. Let $\Phi(x,v)$ be the formula asserting that $\Vdash_{\mathbb{C}_{\omega_1}} \varphi(x,v)$.
By strong homogeneity of $\mathbb{C}_{\omega_1}$, one has the following factoring lemma: for every $x \in 2^\omega \cap L[G]$, there exists a $\mathbb{C}_{\omega_1}$-generic filter $H$ over $L[G {\upharpoonright} \alpha][x]$ such that $L[G]=L[G{\upharpoonright} \alpha][x][H]$.
Then Lemma \ref{L2} gives $q \in L[G {\upharpoonright} \alpha+1]$ such that all $x \in [q]^{L[G]}$ are Cohen over $L[G {\upharpoonright} \alpha]$; moreover, an easy refinement of the argument shows that we can take $\textsf{stem}(q)$ to be any $t \in 2^{<\omega}$. Finally, combining the latter with the factoring lemma gives: for all $x \in 2^\omega \cap L[G]$, if $x$ is Cohen over $L[G {\upharpoonright} \alpha]$, then
\[
L[G {\upharpoonright} \alpha][x] \models \Phi(x,v) \quad \Leftrightarrow \quad L[G] \models \varphi(x,v).
\]
Since $q$ only consists of Cohen branches, and by homogeneity of $\mathbb{C}$, we thus obtain
\[
L[G] \models \forall x \in 2^\omega (x \in [q] \Rightarrow \varphi(x,v)) \quad \text{or} \quad L[G] \models \forall x \in 2^\omega (x \in [q] \Rightarrow \neg \varphi(x,v)).
\]
Finally, from various characterizations proved in \cite{BL99} and some known preservation theorems, it follows that in $L[G]$:
\begin{itemize}
\item there exists a $\mathbf{\Sigma}^1_2$ non-Baire set, as $\mathbb{C}_{\omega_1}$ does not add a comeager set of Cohen reals (\cite[Theorem 5.8]{BL99} and \cite[Lemma 6.5.3, p. 313]{BJ1995});
\item there exists a $\mathbf{\Delta}^1_2$ non-Ramsey set, as $\mathbb{C}_{\omega_1}$ does not add dominating reals (\cite[Theorem 4.1]{BL99} and \cite[Lemma 6.5.3, p. 313]{BJ1995}: note that a non-Laver measurable set is a special case of a non-Ramsey set);
\item there exists a $\mathbf{\Delta}^1_2$ non-Lebesgue set, as $\mathbb{C}_{\omega_1}$ does not add random reals (\cite[Theorem 6.5.28, p. 322]{BJ1995} and \cite[Theorem 9.2.1, p. 452]{BJ1995}).
\end{itemize}
Hence, passing into the inner model $L(\mathbb{R})$ of $L[G]$ we have $\neg \textbf{IPA} \land \textbf{NB} \land \textbf{NL} \land \textbf{NR}$.
\end{proof}
In particular, it follows that $L(\mathbb{R})^{L[G]}$ is a model for the diagram
\begin{center}
\begin{tikzpicture}[-, auto, node distance=3.0cm, thick]
\tikzstyle{every state}=[fill=white, draw=none, text=black]
\node[state](U){$\blacksquare$};
\node[state](SP)[right of=U]{$\blacksquare$};
\node[state](IP)[right of=SP, node distance=2.0cm]{$\blacksquare$};
\node[state] (NL) [above of=SP]{$\square$};
\node[state] (NB) [above of=IP]{$\square$};
\node[state](NR)[right of=IP]{$\square$};
\path (U) edge (SP)
(SP) edge (IP)
(SP) edge (NL)
(IP) edge (NB)
(IP) edge (NR)
;
\end{tikzpicture}
\end{center}
\subsection{A model for $\textbf{NL}=\blacksquare$, $\textbf{NR}=\square$}
We recall the following well-known forcing notions, which we use throughout this section.
\begin{itemize}
\item Random forcing $\poset{B}:=\{ C \subseteq 2^{\omega}: C \text{ closed } \land \mu(C) > 0 \}$, where $\mu$ is the standard Lebesgue measure on $2^\omega$. The order is given by: $C' \leq C \Leftrightarrow C' \subseteq C$.
\item Mathias forcing $\mathbb{M}$ consisting of pairs $(s,x)$ such that $x \in [\omega]^\omega$, $s \in [\omega]^{<\omega}$ and $\max s < \min x$, ordered by $(t,y) \leq (s,x)$ iff $t \supseteq s$, $t \setminus s \subseteq x$ and $y \subseteq x$. Moreover we denote
\[
[s,x]:= \{y \in [\omega]^\omega: s \subseteq y \land y \setminus s \subseteq x \}.
\]
\item Given $\kappa > \omega$ cardinal, let
\[
\mathsf{Fn}(\omega,\kappa):= \{ f: f \text{ is a function} \land |\text{dom}(f)| < \omega \land \text{dom}(f) \subseteq \omega \land \text{ran} (f) \subseteq \kappa \},
\]
ordered by: $f' \leq f \Leftrightarrow f' \supseteq f$. Note $\mathsf{Fn}(\omega,\kappa)$ is the standard poset adding a surjection $f_G: \omega \rightarrow \kappa$, i.e., the forcing collapsing $\kappa$ to $\omega$.
\end{itemize}
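For the reader's convenience, we recall the standard density argument showing that $\mathsf{Fn}(\omega,\kappa)$ indeed collapses $\kappa$ to $\omega$ (the sets $D_\alpha$, $D^n$ below are ad hoc notation for this sketch):

```latex
For each $\alpha < \kappa$ and $n \in \omega$ let
\[
D_\alpha := \{ f \in \mathsf{Fn}(\omega,\kappa) : \alpha \in \operatorname{ran}(f) \},
\qquad
D^n := \{ f \in \mathsf{Fn}(\omega,\kappa) : n \in \operatorname{dom}(f) \}.
\]
Each $D_\alpha$ is dense: given $f$, pick $n \in \omega \setminus \operatorname{dom}(f)$
(possible since $\operatorname{dom}(f)$ is finite) and observe that
$f \cup \{(n,\alpha)\} \leq f$ lies in $D_\alpha$; similarly for each $D^n$.
Hence a generic filter $G$ meets all of them, so $f_G := \bigcup G$ is a total
function from $\omega$ onto $\kappa$.
```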
\begin{theorem} \label{thm1}
There is a ZF-model $N$ such that
\[
N \models \mathbf{NR} \land \neg \mathbf{NL}.
\]
\end{theorem}
The model $N$ is going to be the inner model of a certain forcing extension that we are going to define in the proof of Theorem \ref{thm1} below.
The key idea to obtain such a forcing extension is to use Shelah's amalgamation over random forcing, with respect to a certain name $Y$ for a set of elements of $2^\omega$, in order to get a complete Boolean algebra $B$ such that, if $G$ is $B$-generic over $V$, then in $V[G]$ the following hold:
\begin{enumerate}
\item every subset of $2^\omega$ in $L(\mathbb{R},\{ Y \})$ is Lebesgue measurable;
\item $Y$ is non-Ramsey.
\end{enumerate}
Hence, we obtain that in $L(\mathbb{R},\{Y\})^{V[G]}$ every subset of $2^\omega$ is Lebesgue measurable (and so by Zame's result there cannot be any total SWR satisfying A and SP), but there is a non-Ramsey set.
Shelah's amalgamation (\cite{Sh85}) is the main tool we need for our forcing construction. Since it is rather demanding machinery, we refer the reader to the Appendix for a more detailed treatment and an exposition of its main properties.
The reader already familiar with Shelah's amalgamation may skip the Appendix.
Before going into the detailed and technical proof, we give a short overview of its structure.
Starting from a Boolean algebra $B$, two complete subalgebras $\mathbb{B}_0, \mathbb{B}_1 \lessdot B$ isomorphic to the random algebra, and an isomorphism $\phi$ between them, the amalgamation process provides us with a pair $(B^*,\phi^*)$ such that $B \lessdot B^*$ and $\phi^* \supseteq \phi$ is an automorphism of $B^*$. We denote this amalgamation process by $\textsf{Am}^\omega(B,\phi)$, so that $B^*=\textsf{Am}^\omega(B,\phi)$.
Since the process itself generates more and more copies of random algebras, we have to iterate it until all such copies have been treated. For this, a recursive book-keeping argument of length $\kappa$ inaccessible is sufficient (and necessary to ensure that the final construction satisfies $\kappa$-cc).
The idea to obtain (1) and (2) above is based on the following main parts:
\begin{itemize}
\item[(a)] The Boolean algebra $B$ is built via a recursive construction, alternating the amalgamation, iteration with $\mathsf{Fn}$, iteration with Mathias forcing and picking direct limits at limit steps.
\item[(b)] The set $Y$ is also recursively built by carefully adding Mathias reals cofinally often in order to get a non-Ramsey set.
\item[(c)] In order to obtain that all sets of reals in $L(\mathbb{R}, \{ Y \})$ be Lebesgue measurable, we have to amalgamate over random forcing, and we also need to recursively close $Y$ under the isomorphisms between copies of the random algebras generated by the amalgamation process, in order to get $\Vdash \phi[Y]=Y$, for every such isomorphism $\phi$.
\end{itemize}
In particular, to get (c), the algebra $B$ we are going to construct will satisfy \emph{$(\poset{B},Y)$-homogeneity}, i.e., for every pair of random algebras $\mathbb{B}_0,\mathbb{B}_1 \lessdot B$ with $\phi: \mathbb{B}_0 \rightarrow \mathbb{B}_1$ an isomorphism, there exists an automorphism $\phi^* \supseteq \phi$ of $B$ such that $\Vdash_B \phi^*[Y]=Y$. (Roughly speaking: any isomorphism between copies of the random algebra can be extended to an automorphism fixing $Y$.) See \cite[Theorem 6.2.b]{JR93} for a proof that $(\poset{B},Y)$-homogeneity is the crucial ingredient to force that all sets in $L(\mathbb{R},\{ Y \})$ are Lebesgue measurable.
We now see the construction of the complete Boolean algebra $B$ and the proof of Theorem \ref{thm1} in detail.
\begin{proof}[Proof of Theorem \ref{thm1}]
Starting from a ground model $V$, we recursively define a sequence $\{B_\alpha: \alpha < \kappa \}$ of complete Boolean algebras such that $B_\alpha \lessdot B_\beta$ for $\alpha < \beta$, and a $\subseteq$-increasing sequence $\{ Y_\alpha: \alpha <\kappa \}$ of sets of names for reals, and then put $B:= \lim_{\alpha < \kappa} B_\alpha$ and $Y:= \bigcup_{\alpha<\kappa} Y_\alpha$. The construction follows the line of the one presented in \cite{JR93}, even though it differs significantly in the construction of the set $Y$ and in proving that $Y$ is non-Ramsey, rather than a set without the Baire property. We also use the forcing $\mathsf{Fn}$ instead of the amoeba for measure, as it also serves the purpose of collapsing the additivity of the null ideal and ensures that the inaccessible $\kappa$ is gently collapsed to $\omega_1$ in the forcing extension via $B$. We start with $B_0 = \{ 0 \}$ and $Y_0=\emptyset$.
\begin{enumerate}
\item In order to obtain the $(\poset{B},\dot{Y})$-homogeneity we use a standard book-keeping argument to hand us down all possible situations of the following type:
if ${B}_\alpha \lessdot {B}' \lessdot {B}$ and ${B}_\alpha \lessdot {B}'' \lessdot {B}$ are such that
${B}_\alpha$ forces $({B}'/{B}_\alpha) \approx ({B}''/{B}_\alpha) \approx \poset{B}$
and $\phi_0: \algebra{B}' \rightarrow \algebra{B}''$ is an isomorphism such that $\phi_0 {\upharpoonright} {B}_\alpha= \text{Id}_{{B}_\alpha}$, then there exists
a sequence of functions in order to extend the isomorphism $\phi_0$ to an automorphism $\phi: {B}
\rightarrow {B}$, i.e., $\exists \langle \alpha_\eta : \eta
< \kappa \rangle$ increasing, cofinal in $\kappa$, with $\alpha_0=\alpha$, and $ \exists \langle
\phi_\eta : \eta < \kappa \rangle$ such that
\begin{itemize}
\item for $\eta >0$ successor ordinal, ${B}_{\alpha_{\eta}+1} :=\textsf{Am}^\omega({B}_{\alpha_{\eta}},\phi_{\eta-1})$,
and $\phi_\eta$ be the automorphism on ${B}_{\alpha_\eta+1}$ generated by the amalgamation;
\item for $\eta$ limit ordinal, let ${B}_{\alpha_\eta}:= \lim_{\xi < \eta} {B}_{\alpha_\xi}$ and $\phi_\eta= \lim_{\xi < \eta} \phi_\xi$, in the obvious sense;
\item for every $\eta< \kappa$, we have ${B}_{\alpha_{\eta}+1} \lessdot {B}_{\alpha_{\eta+1}}$.
\end{itemize}
In order to fix the set of names by each automorphism $\phi_\eta$, one then sets
\begin{itemize}
\item successor case $\eta>0$:
\[
B_{\alpha_{\eta}+1} \Vdash Y_{\alpha_{\eta} +1} := Y_{\alpha_{\eta}} \cup \{
\phi^j_{\eta}(\dot{y}), \phi^{-j}_{\eta}(\dot{y}): \dot{y} \in
Y_{\alpha_{\eta}}, j \in \omega \},
\]
\item limit case: $B_{\alpha_\eta} \Vdash Y_{\alpha_\eta} := \bigcup_{\xi < \eta} Y_{\alpha_\xi}$.
\end{itemize}
\item In order to get that $Y$ is non-Ramsey, for cofinally many $\alpha$'s, put ${B}_{\alpha + 1}:=
{B}_\alpha * \dot{\mathbb{M}}$ and
\[
B_{\alpha+1} \Vdash Y_{\alpha +1}:= Y_\alpha \cup \{
\dot{y}_{(s,x)}: (s,x) \in \mathbb{M} \},
\]
where $\dot{y}_{(s,x)}$ is a name for a Mathias real over
$V^{{B}_\alpha}$ such that $(s,x) \Vdash \dot{y}_{(s,x)} \in [s,x]$.
\item For cofinally many $\alpha$'s pick a cardinal $\lambda_\alpha < \kappa$ such that $B_\alpha \Vdash \lambda_\alpha > \omega$, put ${B}_{\alpha + 1}:=
{B}_\alpha * \mathsf{Fn}(\omega, \lambda_\alpha)$, and let
$B_{\alpha+1} \Vdash Y_{\alpha +1}:= Y_\alpha$, where $\mathsf{Fn}(\omega, \lambda_\alpha)$ is the forcing adding a surjection $F_\alpha: \omega \rightarrow \lambda_\alpha$.
\item For any limit ordinal, put
${B}_\lambda := \lim_{\alpha < \lambda} {B}_\alpha$ and $B_\lambda \Vdash Y_\lambda := \bigcup_{\alpha < \lambda} Y_\alpha$.
\end{enumerate}
Let $G$ be $B$-generic over $V$.
As mentioned above, the proof of ``every set of reals in $L(\mathbb{R},\{Y\})$ is Lebesgue measurable'' is a standard Solovay-style argument, and can be found in \cite{JR93}. The only difference we adopt is the use of $\mathsf{Fn}(\omega, \lambda_\alpha)$, i.e., the poset ``collapsing'' $\lambda_\alpha$ to $\omega$, instead of the amoeba for measure. The property needed for our purpose, namely turning the union of all Borel null sets coded in the ``ground model'' $V[G {\upharpoonright} \alpha +1]$ into a null set, is fulfilled by $\mathsf{Fn}(\omega, \lambda_\alpha)$ as well, i.e.
\[
\mathsf{Fn}(\omega, \lambda_\alpha) \Vdash \bigcup \{N_c: c \text{ is a Borel code for a null set in $V[G {\upharpoonright} \alpha +1]$}\} \text{ is null},
\]
where $N_c \subseteq 2^\omega$ is the Borel null set coded by $c$.
What is left to show is that
\begin{equation} \label{eq-non-ramsey}
B \Vdash \text{``$Y$ is non-Ramsey''}.
\end{equation}
To prove this, pick an arbitrary $(s,x) \in \mathbb{M}$; we have to show $$B \Vdash Y \cap [s,x] \neq \emptyset \text{ and } [s,x] \not \subseteq Y.$$
For the former, let $\dot{(s,x)}$ be a
${B}$-name for a Mathias condition. By $\kappa$-cc and part (2) of the recursive construction, there is $\alpha < \kappa$
such that $\dot{(s,x)}$ is a ${B}_\alpha$-name,
${B}_{\alpha +1}={B}_\alpha
* \dot{\mathbb{M}}$ and
$B_{\alpha+1} \Vdash Y_{\alpha+1}=Y_\alpha \cup \{
\dot{y}_{(s,x)}: (s,x) \in \mathbb{M}^{B_\alpha} \}$. Consider $\dot{y}_{(s,x)}$, a name for a Mathias
real over $V^{B_\alpha}$ such that $B_{\alpha+1} \Vdash \dot{y}_{(s,x)} \in [s,x]$. Thus,
\[
B \Vdash \dot{y}_{(s,x)} \in Y \cap [s,x].
\]
On the other hand, by part (3) of the construction, there is also $\alpha < \kappa$, such that
$\dot{(s,x)}$ is a ${B}_\alpha$-name,
${B}_{\alpha+1}= {B}_{\alpha}* \mathsf{Fn}(\omega,\lambda_\alpha)$
and $B_{\alpha+1} \Vdash Y_{\alpha+1}=Y_\alpha$. Let
$\dot{y}$ be a $B_{\alpha+1}$-name for a Mathias real
over $V^{B_\alpha}$ such that
$B \Vdash \dot y \in [s,x]$. Obviously, $B \Vdash
\dot{y} \notin Y_{\alpha}$ (since ``the real $y$ is added at
stage $\alpha+1$''), and hence
\[
B \Vdash \dot{y} \in [s,x] \setminus
Y_{\alpha+1},
\]
since
$B \Vdash Y_{\alpha+1}=Y_{\alpha}$.
It remains to show that, for every $\beta>\alpha+1$, $B \Vdash \dot{y} \notin Y_{\beta} \setminus Y_{\alpha}$; intuitively speaking, $\dot{y}$ cannot fall into $Y$ at any later stage $\beta>\alpha+1$.
To prove this we show the following Claim \ref{claim-mathias}. We fix some notation: given $x \in 2^\omega$, we denote by $f_x$ the increasing enumeration of the set $\{ n \in \omega: x(n)=1 \}$. It is well known (and straightforward to check) that if $x$ is a Mathias real over $V$, then $f_x$ is dominating over $V \cap \omega^\omega$.
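For completeness, here is a sketch of the standard density argument behind this last fact (the set $D_g$ below is ad hoc notation for this sketch):

```latex
Given $g \in V \cap \omega^\omega$, let
\[
D_g := \{ (s,x) \in \mathbb{M} : \text{for every } k, \text{ the $k$-th element
of } x \text{ is } \geq g(k+|s|) \}.
\]
$D_g$ is dense: given $(s,x)$, thin $x$ out to an infinite $x' \subseteq x$
whose $k$-th element is $\geq g(k+|s|)$; then $(s,x') \leq (s,x)$ and
$(s,x') \in D_g$. Moreover, if $(s,x) \in D_g$ and $y \in [s,x]$, then for
every $n \geq |s|$ the $n$-th element of $y$ is at least the $(n-|s|)$-th
element of $x$, whence $f_y(n) \geq g(n)$. Genericity then yields that the
increasing enumeration of the Mathias generic real dominates every
ground-model function on a tail.
```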
\begin{claim} \label{claim-mathias}
For $\beta < \kappa$, $\beta > \alpha+1$ and $\dot{y} \in Y_\beta \setminus Y_{\alpha+1}$, one has
\[
B \Vdash \text{``}f_{\dot{y}} \text{ is dominating over } V^{B_{\alpha+1}} \cap \omega^\omega \text{''}.
\]
\end{claim}
For $\beta$ limit the proof is trivial. For $\beta+1$, we have two cases.
Case as in part (2) of the recursive construction, i.e. $Y_{\beta+1}= Y_\beta \cup \{\dot{y}_{(s,x)}: (s,x) \in \mathbb{M} \}$. In this case $\dot{y}$ has to be a Mathias real over $V^{B_{\alpha+1}}$ and therefore $f_{\dot y}$ is dominating over $V^{B_{\alpha+1}} \cap \omega^\omega$.
Case as in part (1) of the construction, i.e.
\[
B_{\beta+1} \Vdash Y_{\beta +1} := Y_{\beta} \cup \{
\phi^j(\dot{y}), \phi^{-j}(\dot{y}): \dot{y} \in
Y_{\beta}, j \in \omega \},
\]
where $\phi$'s are the associated automorphisms generated by the amalgamation.
The aim is to show that the property of ``being dominating'' is preserved through the construction unfolded in part (1), both by the amalgamation process and by the iteration of random forcing. More precisely, we need the following lemma.
\begin{lemma} \label{lemma:preserve-dominating}
Let $\eta > 0$ be a successor ordinal. Let ${B}', {B}'' \lessdot {B}_{\alpha_\eta}$ and $\dot{x} \in V^{{B}_{\alpha_\eta}} \cap 2^\omega$
such that
\[
{B}_{\alpha_\eta} \Vdash \text{``$f_{\dot{x}}$ is dominating over both $V^{{B}'} \cap \omega^\omega$ and $V^{{B}''} \cap \omega^\omega$''},
\]
and $\psi: {B}' \rightarrow {B}''$ isomorphism.
Then, for every $j \in \omega$,
\[
{B}_{\alpha_\eta+1} \Vdash \text{``$f_{\phi^j_\eta (\dot{x})}$ and $f_{\phi^{-j}_\eta (\dot{x})}$ are dominating over $V^{{B}_{\alpha_\eta}} \cap \omega^\omega$''}.
\]
where ${B}_{\alpha_\eta+1}=\textsf{Am}^\omega({B}_{\alpha_\eta}, \psi)$, and $\phi_\eta$ is the automorphism extending $\psi$, generated
by the amalgamation.
\end{lemma}
\begin{sublemma}
[Preservation by one-step amalgamation] \label{sublemma-1}
Let ${B}, {B}_1, {B}_2, \phi_0$, $e_1$, $e_2$ be as in the Appendix and $\dot{x}$ a ${B}$-name for an element of $2^\omega$ such that
${B}$ forces $f_{\dot{x}}$ is dominating over $V^{{B}_1} \cap \omega^\omega$ and $V^{{B}_2} \cap \omega^\omega$.
Then
\begin{equation} \label{eq-5}
\textsf{Am}({B},\phi_0) \Vdash \text{``$f_{e_1(\dot{x})}$ is dominating over $V^{e_2[{B}]} \cap \omega^\omega$''}.
\end{equation}
(And analogously $\textsf{Am}({B},\phi_0) \Vdash \text{``$f_{e_2(\dot{x})}$ is dominating over $V^{e_1[{B}]} \cap \omega^\omega$''}$.)
\end{sublemma}
\begin{proof}[Proof of Sublemma \ref{sublemma-1}]
By Lemma \ref{lemma-amal-product} in the Appendix, putting $V=N[H]$, $A_1= (B / B_1)^H$, $A_2= (B / B_2)^H$, it is sufficient to prove that, given complete Boolean algebras $A_1,A_2$ and an $A_1$-name $\dot{f}$ for an element of $\omega^\omega$, if
$$A_1 \Vdash \text{``$\dot{f}$ is dominating over $V \cap \omega^\omega$''},$$
then
\[
A_1 \times A_2 \Vdash \text{``$\dot{f}$ is dominating over $V[G] \cap \omega^\omega$'' },
\]
where $G$ is $A_2$-generic over $V$.
To reach a contradiction, assume there are $z \in \omega^\omega \cap V[G]$ and $(a_1,a_2) \in A_1 \times A_2$ such that $(a_1,a_2) \Vdash \exists^\infty n \in \omega ( z(n) > f(n))$. Let $\{ n_j: j \in \omega \}$ enumerate all such $n$'s, and for every $j \in \omega$ pick $b_j \in A_2$, $b_j \leq a_2$ and $k_j \in \omega$ such that $(a_1,b_j) \Vdash z(n_j)=k_j$; note that this can be done since $z \in V[G]$ and $G$ is $A_2$-generic over $V$, hence $z$ can be seen as an $A_2$-name and so it is sufficient to strengthen conditions in $A_2$ in order to decide its values. Since $A_1$ forces $f$ to be dominating over $V \cap \omega^\omega$, one can pick $a \leq a_1$ such that $(a,a_2) \Vdash \exists m \forall j \geq m(k_j \leq f(n_j))$. Pick $j' > m$; then
\begin{itemize}
\item[-] on the one side, since $(a,b_{j'}) \leq (a_1,a_2)$, it follows $(a,b_{j'}) \Vdash f(n_{j'}) < k_{j'} = z(n_{j'})$
\item[-] on the other side, since $(a,b_{j'}) \leq (a,a_2)$, it follows $(a,b_{j'}) \Vdash f(n_{j'}) \geq k_{j'}=z(n_{j'})$,
\end{itemize}
which is a contradiction.
\end{proof}
\begin{sublemma}[Preservation by $\omega$-step amalgamation] \label{sublemma1bis}
Let $B$ be a complete Boolean algebra, ${B}', {B}'' \lessdot {B}$ and $\dot{x} \in V^{{B}} \cap 2^\omega$
such that
\[
{B} \Vdash \text{``$f_{\dot{x}}$ is dominating over both $V^{{B}'} \cap \omega^\omega$ and $V^{{B}''} \cap \omega^\omega$''},
\]
with $\psi: {B}' \rightarrow {B}''$ isomorphism.
Then, for every $j \in \omega$,
\[
\textsf{Am}^\omega({B},\psi) \Vdash \text{``$f_{\phi^{j} (\dot{x})}$ and $f_{\phi^{-j} (\dot{x})}$ are dominating over $V^{{B}} \cap \omega^\omega$''}.
\]
where $\phi: \textsf{Am}^\omega({B},\psi) \rightarrow \textsf{Am}^\omega({B},\psi)$ is the automorphism extending $\psi$, generated
by the amalgamation.
\end{sublemma}
The proof simply consists of a recursive application of Sublemma \ref{sublemma-1}, following the line of the proof of \cite[Lemma 3.4]{JR93} and replacing the notion of ``unbounded'' with ``dominating''.
Note that Sublemma \ref{sublemma1bis} is enough to show Lemma \ref{lemma:preserve-dominating} when $\eta \geq 2$ is a successor, by considering $B=B_{\alpha_\eta}$, $\textsf{Am}^\omega (B, \phi)=B_{\alpha_{\eta}+1}$, $B'=B_{\alpha_{\eta-1}}$, $B''=\phi_{\eta-1}[B_{\alpha_{\eta-1}}]$ and $\psi=\phi_{\eta-1}$.
It only remains to show the case $\eta=1$, namely: ${B}_{\alpha_0} \lessdot {B}', {B}'' \lessdot {B}_{\alpha_1}$
such that
${B}_{\alpha_0}$ forces $({B}'/ {B}_{\alpha_0}) \approx ({B}'' / {B}_{\alpha_0}) \approx \poset{B}$,
and $\phi_0: {B}' \rightarrow {B}''$ isomorphism such that
$\phi_0 {\upharpoonright} {B}_{\alpha_0} = \text{Id}_{{B}_{\alpha_0}}$. Then for every $\dot{x} \in V^{{B}_{\alpha_1}} \cap 2^\omega$ such that
${B}_{\alpha_1} \Vdash \text{``$f_{\dot{x}}$ is dominating over $V^{{B}_{\alpha_0}} \cap \omega^\omega$''} $, one has, for every $j \in \omega$,
\[
{B}_{\alpha_1+1} \Vdash \text{`` $f_{\phi^j_1 (\dot{x})}$ and $f_{\phi^{-j}_1 (\dot{x})}$ are dominating over $V^{{B}_{\alpha_1}} \cap \omega^\omega$''}.
\]
But, since ${B}_{\alpha_0}$ forces both $({B}'/{B}_{\alpha_0}) \approx ({B}''/{B}_{\alpha_0}) \approx \poset{B}$, by Sublemma \ref{sublemma-1} and the fact that random forcing is $\omega^\omega$-bounding (and thus it preserves dominating reals), we obtain $\textsf{Am}^\omega(B_{\alpha_1}, \phi_0) = {B}_{{\alpha_1}+1}$ and
\[
{B}_{{\alpha_1}+1} \Vdash \text{``$f_{\dot{x}}$ is dominating over both $V^{{B}_{\alpha_0}*({B}'/ {B}_{\alpha_0})} \cap \omega^\omega$ and
$V^{{B}_{\alpha_0}*({B}''/ {B}_{\alpha_0})} \cap \omega^\omega$''}.
\]
\end{proof}
It is easy to see that the construction developed here can be combined with Shelah's original one, simply by recursively constructing in parallel a set without the Baire property. As a consequence, one can obtain a model satisfying the following WR-diagram
\begin{center}
\begin{tikzpicture}[-, auto, node distance=3.0cm, thick]
\tikzstyle{every state}=[fill=white, draw=none, text=black]
\node[state](U){$\blacksquare$};
\node[state](SP)[right of=U]{$\blacksquare$};
\node[state](IP)[right of=SP, node distance=2.0cm]{?};
\node[state] (NL) [above of=SP]{$\blacksquare$};
\node[state] (NB) [above of=IP]{$\square$};
\node[state](NR)[right of=IP]{$\square$};
\path (U) edge (SP)
(SP) edge (IP)
(SP) edge (NL)
(IP) edge (NB)
(IP) edge (NR)
;
\end{tikzpicture}
\end{center}
Note that the status of $\textbf{IPA}$ is not clear in this model.
\begin{remark}
Some other combinations in the WR-diagram are already known or follow easily from known results.
For instance, in order to obtain a model for $\textbf{NB}=\blacksquare \land \textbf{NL}=\square$, we can take $N$ to be Shelah's model constructed in \cite{Sh85}, where every set of reals has the Baire property. Note that such a model is obtained without the need of an inaccessible cardinal. Since in \cite{Sh85} it is also shown that an inaccessible is needed to get a model where every set of reals is Lebesgue measurable, we can deduce that in $N$ there is a set that is not Lebesgue measurable. Note that in such a model the status of $\textbf{NR}$ is not clear. More generally, the interplay between $\textbf{NB}$ and $\textbf{NR}$ is still open, since the lemmata about the preservation of dominating and/or unbounded reals do not extend when we amalgamate over Cohen or Mathias forcing in place of random forcing.
\end{remark}
\subsection{Weak Pareto for larger utility domain}
\label{s4.2}
In this last subsection we make a digression away from the WR-diagram and deal with WP, thus providing an answer to \cite[Problem 11.14]{Mathias} also in the case where the Paretian condition is the weakest possible. As we already noticed in Remark \ref{Remark2}, WP is trivial when $Y=\{ 0,1 \}$. Moreover, whenever $Y$ is well-founded, one can simply consider the function $f: Y^\omega \rightarrow \mathbb{R}$ such that $f(x):= \min\{ x(n): n \in \omega\}$; then define $x \prec y :\Leftrightarrow f(x) < f(y)$ and $x \sim y :\Leftrightarrow f(x) = f(y)$ in order to get a total SWR on $Y^\omega$ satisfying WP and A.
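A quick check, included here for the reader's convenience, that the relation just defined satisfies both axioms (with A understood as invariance under permutations of the coordinates):

```latex
\emph{Anonymity.} The value $f(x) = \min\{x(n) : n \in \omega\}$ is invariant
under (finite) permutations of the coordinates of $x$, hence any permuted copy
of $x$ is $\sim$-equivalent to $x$.

\emph{Weak Pareto.} Suppose $x(n) < y(n)$ for every $n \in \omega$. Since $Y$ is
well-founded, $\min_n y(n)$ is attained at some $n_0$; then
\[
f(x) \leq x(n_0) < y(n_0) = f(y),
\]
so $x \prec y$.
```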
Here we give a proof that the existence of a total SWR satisfying WP and A gives a non-$\mathbb{MV}$-measurable set when $Y \subseteq [0,1]$ contains a subset with order type $\mathbb{Z}$; we present a proof for $Y=\mathbb{Z}$ to make the notation less cumbersome, but it is straightforward to notice that precisely the same argument works for any $Y$ with order type $\mathbb{Z}$.
Given $x \in 2^\omega$, let $U(x):= \{n \in \omega: x(n)=1\}$ and $\{n^x_k: k \in \omega \}$ enumerate $U(x)$. As in the case of Proposition \ref{P-ip},
define $o(x)$ and $e(x)$; next use the following notation:
\begin{itemize}
\item let $\{l_k: k \geq 1\}$ enumerate all elements in $o(x)$ and $\{u_k: k \geq 1\}$ enumerate all elements in $\omega \setminus o(x)$;
\item let $\{l^{\prime}_k: k \geq 1\}$ enumerate all elements in $e(x)$ and $\{u^{\prime}_k: k \geq 1\}$ enumerate all elements in $\omega \setminus e(x)$;
\end{itemize}
Note that for every $k \geq 1$, one has $l^{\prime}_k=u_{n_1+(k-1)}$ and $l_k=u'_{n_1+(k-1)}$.
Next we define the following pair of sequences $o(\textbf{x}), e(\textbf{x})$ in $\mathbb{Z}^\omega$:
\begin{equation}\label{T3e0}
o(\textbf{x})(n)= \Big \{
\begin{array}{ll}
k & \text{if $n=l_k$, for some $k \geq 1$}\\
-k& \text{if $n=u_k$, for some $k \geq 1$},
\end{array}
\end{equation}
\begin{equation}\label{T3e00}
e(\textbf{x})(n) = \Big \{
\begin{array}{ll}
k & \text{if $n=l^{\prime}_k$, for some $k \geq 1$}\\
-k& \text{if $n=u^{\prime}_k$, for some $k \geq 1$}.
\end{array}
\end{equation}
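For illustration only, the assignment in \eqref{T3e0} and \eqref{T3e00} is easy to compute on a finite prefix once the relevant set of coordinates is given. The following sketch (with hypothetical names, not part of the paper) builds the first $N$ values of such a stream from a set $L$ of coordinates playing the role of the $l_k$; its complement supplies the $u_k$.

```python
# Illustrative sketch (hypothetical names): given a set L of coordinates
# playing the role of the l_k (its complement gives the u_k), produce the
# first N values of the integer stream defined in the displayed equations.
def stream(L, N):
    seq, k_low, k_up = [], 0, 0
    for n in range(N):
        if n in L:
            k_low += 1
            seq.append(k_low)    # n = l_k for k = k_low, value +k
        else:
            k_up += 1
            seq.append(-k_up)    # n = u_k for k = k_up, value -k
    return seq

print(stream({0, 2, 4, 6}, 8))   # [1, -1, 2, -2, 3, -3, 4, -4]
```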
\begin{proposition}\label{P-wp}
Let $\precsim$ denote a total SWR satisfying WP and A on $X= \mathbb{Z}^{\omega}$.
Then there exists a subset of $2^{\omega}$ which is not $\mathbb{MV}$-measurable.%
\footnote{It is clear from the proof that one could get the same result in an even slightly more general setting, namely with $Y$ any set of utilities with order type $\mathbb{Z}$.}
\end{proposition}
\begin{proof}
The structure of the proof is similar to Proposition \ref{P-ip}, but some technical details are different.
Let $\precsim$ be a total SWR satisfying WP and A, and put $\Gamma:= \{x \in 2^\omega: e(\textbf{x}) \prec o(\textbf{x})\}$.
Given any $p \in \mathbb{MV}$, let $\{n_k: k \geq 1 \}$ enumerate all natural numbers in $S(p) \cup U(p)$.
We aim to find $x$, $z \in [p]$ such that $x \in \Gamma \Leftrightarrow z \notin \Gamma$.
We proceed as follows: pick $x \in [p]$ such that for all $n_k \in S(p) \cup U(p)$, $x(n_k)=1$.
Let $\left\{\left(n_{m_j}, n_{m_{j}+1}, n_{m_{j}+2}\right): j \in \omega \right\}$ be an enumeration of all Mathias triples in $p$.
We need to consider three cases.
\begin{itemize}
\item Case $e(\textbf{x}) \prec o(\textbf{x})$: We remove $n_{m_1+1}$, $n_{m_j}$, $n_{m_j+1}$, for all $j>1$ from $U(x)$ to obtain $z \in 2^\omega$ as follows:
\[
z(n) = \Big \{
\begin{array}{ll}
x(n) & \text{if $n\notin \left\{n_{m_1+1}, n_{m_j}, n_{m_j+1}: j>1\right\}$}\\
0 & \text{otherwise}.
\end{array}
\]
Let
\[
\begin{split}
O(m_1):=& [n_1,n_2) \cup [n_3,n_4) \cup \cdots \cup [n_{m_1-1}, n_{m_1}), \\
E(m_1):=& [n_2,n_3) \cup [n_4,n_5) \cup \cdots \cup [n_{m_1}, n_{m_1+1}).
\end{split}
\]
\begin{claim}\label{C1}
There exists $N\in \omega$ such that $e(\textbf{x})(n)> o(\textbf{z})(n)$ holds for all $n>N$.
\end{claim}
\begin{proof}
We distinguish two cases.
\begin{enumerate}
\item{$|O(m_1)| < |E(m_1)|$: Among coordinates $n<n_{m_1+1}$,
\begin{itemize}
\item fewer negative integers have been assigned in $e(\textbf{x})(n)$ as compared to $o(\textbf{z})(n)$.
Then $0> e(\textbf{x})(n_{m_1+1}) > o(\textbf{z})(n_{m_1+1})$ and for all subsequent coordinates $n$ with both $e(\textbf{x})(n)$ and $o(\textbf{z})(n)$ being negative, $0> e(\textbf{x})(n) > o(\textbf{z})(n)$ holds.
\item fewer positive integers have been assigned in $o(\textbf{z})(n)$ as compared to $e(\textbf{x})(n)$.
Then $e(\textbf{x})(n_{m_1+2}) > o(\textbf{z})(n_{m_1+2})>0$ and for all subsequent coordinates $n$ with both $e(\textbf{x})(n)$ and $o(\textbf{z})(n)$ being positive, $e(\textbf{x})(n) > o(\textbf{z})(n)>0$ holds.
\end{itemize}
We take $N = n_{m_1+1}$ in this case.}
\item{$|O(m_1)| \geq |E(m_1)|$: Among the coordinates $[n_{m_j+1}, n_{m_{j+1}})$ for all $j\in \omega$, $e(\textbf{x})(n)$ and $o(\textbf{z})(n)$ contain equally many elements of the same sign.
Further, for the coordinates in $[n_{m_j}, n_{m_{j}+1})$, $o(\textbf{z})(n)$ is negative but $e(\textbf{x})(n)$ is not.
Thus for some $J\in \omega$,
\[
|O(m_1)| < |E(m_1)| + \left|\underset{j\in \{2, \cdots, J\}}{\bigcup} \left[n_{m_j}, n_{m_j+1}\right)\right|
\]
will be true.
In this case, we can apply the argument of the first case above for $n_{m_J+1}$ and therefore obtain $N =n_{m_J+1}$.}
\end{enumerate}
Thus we have shown that for all $n>N$, if $e(\textbf{x})(n)$ and $o(\textbf{z})(n)$ share the same sign then $e(\textbf{x})(n)>o(\textbf{z})(n)$.
The remaining situation is $e(\textbf{x})(n)>0>o(\textbf{z})(n)$.
This completes the proof.
\end{proof}
\begin{claim}\label{C2}
There exists a finite permutation $o^{\pi}(\textbf{z})$ of $o(\textbf{z})$ such that $e(\textbf{x})(n)> o^{\pi}(\textbf{z})(n)$ holds for all $n\in \omega$.
\end{claim}
\begin{proof}
In Claim \ref{C1}, it has been shown that $e(\textbf{x})(n)> o(\textbf{z})(n)$ for all $n>N$.
Let $K:= \{k^0, k^1, \cdots, k^N\}$ be an increasing enumeration of the first $N+1$ elements of the set $\underset{j>J}{\bigcup} \left[n_{m_j}, n_{m_j+1}\right)$.
We permute $o(\textbf{z})(0)$ and $o(\textbf{z})(k^0)$; $o(\textbf{z})(1)$ and $o(\textbf{z})(k^1)$ and so on till $o(\textbf{z})(N)$ and $o(\textbf{z})(k^N)$ to obtain $o^{\pi}(\textbf{z})$.
Hence, $o^{\pi}(\textbf{z})$ is obtained via a finite permutation of $o(\textbf{z})$.
It is immediate to check that $\pi$ has the desired properties.
\end{proof}
Applying Claims \ref{C1} and \ref{C2}, we have obtained $o^{\pi}(\textbf{z})$ such that
$e(\textbf{x})(n)>o^{\pi}(\textbf{z})(n)$ for all $n\in \omega$.
A implies
$o(\textbf{z})\sim o^{\pi}(\textbf{z})$, and by WP we get
$e(\textbf{x})\succ o^{\pi}(\textbf{z})$.
By applying transitivity, we obtain
$e(\textbf{x})\succ o(\textbf{z})$.
Notice that the arguments of Claims \ref{C1} and \ref{C2} could also be applied to the pair of sequences $e(\textbf{z})$ and $o(\textbf{x})$.
Thus we are able to obtain $o^{\pi}(\textbf{x})$ such that applying A we get
$
o(\textbf{x})\sim o^{\pi}(\textbf{x}),
$
by WP we get
$
o^{\pi}(\textbf{x})\prec e(\textbf{z}),
$
and finally, by transitivity it follows
$
o(\textbf{x})\prec e(\textbf{z}).
$
Combining all together we obtain
$o(\textbf{z}) \prec e(\textbf{x}) \prec o(\textbf{x}) \prec e(\textbf{z})$, and so $o(\textbf{z}) \prec e(\textbf{z})$,
which implies
$
z\notin \Gamma.
$
\vspace{2mm}
\item Case $o(\textbf{x}) \prec e(\textbf{x})$: Similar to the previous case, only with some different technical details. We remove $n_{m_1}$, $n_{m_j+1}$, $n_{m_j+2}$, for all $j>1$ from $U(x)$ to obtain $z \in 2^\omega$ as follows:
\[
z(n) = \Big \{
\begin{array}{ll}
x(n) & \text{if $n\notin \left\{n_{m_1}, n_{m_j+1}, n_{m_j+2}: j>1\right\}$}\\
0 & \text{otherwise}.
\end{array}
\]
Let
\[
\begin{split}
O(m_1):=& [n_1,n_2) \cup [n_3,n_4) \cup \cdots \cup [n_{m_1-1}, n_{m_1}), \\
E(m_1):=& [n_2,n_3) \cup [n_4,n_5) \cup \cdots \cup [n_{m_1-2}, n_{m_1-1}).
\end{split}
\]
(In case $m_1=2$ put $E(m_1)=\emptyset$.)
Applying Claims \ref{C1} and \ref{C2}, we are able to obtain $e^{\pi}(\textbf{z})$ and $o^{\pi}(\textbf{z})$ such that
\(
o(\textbf{x})(n)>e^{\pi}(\textbf{z})(n), \;\text{and}\; o^{\pi}(\textbf{z})(n)>e(\textbf{x})(n)\;\text{for all}\; n\in \omega.
\)
A implies
$
o(\textbf{z})\sim o^{\pi}(\textbf{z}), \;\text{and}\; e(\textbf{z})\sim e^{\pi}(\textbf{z}),
$
and by WP we get
$
e^{\pi}(\textbf{z})\prec o(\textbf{x}), \;\text{and}\; e(\textbf{x})\prec o^{\pi}(\textbf{z}).
$
By transitivity, it follows
$
e(\textbf{z})\prec o(\textbf{x}), \;\text{and}\; e(\textbf{x})\prec o(\textbf{z}),
$
which leads to
$
z \in \Gamma.
$
\vspace{2mm}
\item Case $e(\textbf{x}) \sim o(\textbf{x})$: We remove $n_{m_j}$, $n_{m_j+1}$, for all $j>1$ from $U(x)$ to obtain $z \in 2^\omega$ as follows:
\[
z(n) = \Big \{
\begin{array}{ll}
x(n) & \text{if $n\notin \left\{n_{m_j}, n_{m_j+1}: j>1\right\}$}\\
0 & \text{otherwise}.
\end{array}
\]
By construction we obtain $o(\textbf{z})(n) \geq o(\textbf{x})(n)$ and $e(\textbf{z})(n) \leq e(\textbf{x})(n)$ for all $n\in \omega$.
Further, for all $n > m_1$, $o(\textbf{z})(n) > o(\textbf{x})(n)$ and $e(\textbf{z})(n) < e(\textbf{x})(n)$.
Applying a similar argument as in the proof of Claim \ref{C2}, by permuting finitely many elements, we are able to obtain $e^{\pi}(\textbf{z})$ and $o^{\pi}(\textbf{z})$ such that
\[
o(\textbf{x})(n) < o^{\pi}(\textbf{z})(n), \;\text{and}\; e^{\pi}(\textbf{z})(n) < e(\textbf{x})(n),\;\text{for all}\; n\in \omega.
\]
Again, A implies
$
o(\textbf{z})\sim o^{\pi}(\textbf{z}), \;\text{and}\; e(\textbf{z})\sim e^{\pi}(\textbf{z})
$,
WP implies
$
o(\textbf{x})\prec o^\pi(\textbf{z}), \;\text{and}\; e^{\pi}(\textbf{z}) \prec e(\textbf{x}),
$
and therefore, by transitivity, it follows
\(
e(\textbf{z})\prec e(\textbf{x}), \;\text{and}\; o(\textbf{x})\prec o(\textbf{z}),
\)
which leads to
$
z\in \Gamma.
$
\end{itemize}
\end{proof}
\section{Concluding remarks}
The aim of this paper was firstly motivated by answering Problem 11.14 in \cite{Mathias}, but we have then elaborated more systematically on the relationships between total SWRs and other irregular sets. These results might be just the tip of the iceberg of a potentially rather interesting research project, using tools from infinitary combinatorics, forcing theory and descriptive set theory to give a theoretical structure to the several social welfare relations on infinite utility streams defined in economic theory.
Other economic combinatorial principles which can be investigated are those \emph{\`a la Hammond}: given infinite utility streams $x,y \in X=Y^\omega$, we say that $x \leq_H y$ whenever there are $i \neq j$ such that $x(i) < y(i) < y(j) < x(j)$ and for all $k \neq i,j$, $x(k)=y(k)$.
Intuitively this type of pre-orders assert that a stream is better-off than another one if the distribution reduces the inequality among generations.
So we propose to elaborate on the following idea: comparing different types of social welfare relations, in particular with respect to the following three categories: procedural equity principles (e.g.\ anonymity), efficiency principles (e.g.\ Pareto), and consequentialist equity principles (e.g.\ Hammond), and describing a hierarchy of such relations based on the associated fragment of AC. From a purely theoretical point of view, this may suggest a ranking-method among combinations of the three kinds of principles, analysing the degree of compatibility between them.
This specifically means that one can expand the WR-diagram also with other statements combining these economic principles, or even introduce other similar WR-diagrams and then try to study the possible combinations of $\square$'s and $\blacksquare$'s.
As a specific question left open in this paper, we consider the following being the most relevant: can one find a ZF-model satisfying $\textbf{IPA} \land \neg \textbf{SPA}$?
\section{Introduction}
As the title suggests, the main idea of this paper is to use backward error analysis (BEA) to assess and interpret solutions obtained by perturbation methods. The idea will seem natural, perhaps even obvious, to those who are familiar with the way in which backward error analysis has seen its scope increase dramatically since the pioneering work of Wilkinson in the 60s, e.g., \cite{Wilkinson(1963),Wilkinson(1965)}. From early results in numerical linear algebraic problems and computer arithmetic, it has become a general method fruitfully applied to problems involving root finding, interpolation, numerical differentiation, quadrature, and the numerical solutions of ODEs, BVPs, DDEs, and PDEs, see, e.g., \cite{CorlessFillion(2013),Deuflhard(2003),Higham(1996)}. This is hardly a surprise when one considers that BEA offers several interesting advantages over a purely forward-error approach.
BEA is often used in conjunction with perturbation methods. Not only is it the case that many algorithms' backward error analyses rely on perturbation methods, but the backward error is related to the forward error by a coefficient of sensitivity known as the condition number, which is itself a kind of sensitivity to perturbation. In this paper, we examine an apparently new idea, namely, that perturbation methods themselves can also be interpreted within the backward error analysis framework. Our examples will have a classical feel, but the analysis and interpretation is what differs, and we will make general remarks about the benefits of this mode of analysis and interpretation.
However, due to the breadth of the literature in perturbation theory, we cannot determine with certainty the extent to which applying backward error analysis to perturbation methods is new. Still, none of the works we know, apart from \cite{Boyd(2014)}, \cite{Corless(1993)b}, and \cite{Corless(2014)}, even mention the possibility of using BEA to explain or measure the success of a perturbation computation. Among the books we have consulted, only \cite[p.~251 \& p.~289]{Boyd(2014)} mentions the residual by name, but does not use it systematically. At the very least, therefore, the idea of using BEA in relation to perturbation methods might benefit from a wider discussion.
\section{The basic method from the BEA point of view} \label{genframe}
The basic idea of BEA is increasingly well-known in the context of numerical methods. The slogan \textsl{a good numerical method gives the exact solution to a nearby problem} very nearly sums up the whole perspective. Any number of more formal definitions and discussions exist---we like the one given in \cite[chap.~1]{CorlessFillion(2013)}, as one might suppose is natural, but one could hardly do better than go straight to the source and consult, e.g., \cite{Wilkinson(1963),Wilkinson(1965),Wilkinson(1971),Wilkinson(1984)}. More recently \cite{Grcar(2011)} has offered a good historical perspective. In what follows we give a brief formal presentation and then give detailed analyses by examples in subsequent sections.
Problems can generally be represented as maps from an input space $\mathcal{I}$ to an output space $\mathcal{O}$.
If we have a problem $\varphi:\mathcal{I}\to\mathcal{O}$ and wish to find $y=\varphi(x)$ for some putative input $x\in\mathcal{I}$, lack of tractability might instead lead you to engineer a simpler problem $\hat{\varphi}$ from which you would compute $\hat{y}=\hat{\varphi}(x)$. Then $\hat{y}-y$ is the \textsl{forward error} and, provided it is small enough for your application, you can treat $\hat{y}$ as an approximation in the sense that $\hat{y}\approx \varphi(x)$. In BEA, instead of focusing on the forward error, we try to find an $\hat{x}$ such that $\hat{y}=\varphi(\hat{x})$ by considering the \textsl{backward error} $\Delta x=\hat{x}-x$, i.e., we try to find for which set of data our approximation method $\hat{\varphi}$ has exactly solved our reference problem $\varphi$. The general picture can be represented by the following commutative diagram:
\begin{center}
\begin{tikzpicture}
\def2{2}
\draw (0,0) node (x) {$x$};
\draw (2,0) node (y) {$y$};
\draw (0,-2) node (xhat) {$\hat{x}$};
\draw (2,-2) node (yhat) {$\hat{y}$};
\draw (x) edge[->] node[above] {$\varphi$} (y);
\draw (xhat) edge[->,dashed] node[below] {$\varphi$} (yhat);
\draw (x) edge[->,dashed] node[left] {$+\Delta x$} (xhat);
\draw (y) edge[->] node[right] {$+\Delta y$} (yhat);
\draw (x) edge[->] node[above right] {$\hat{\varphi}$} (yhat);
\end{tikzpicture}
\end{center}
We can see that, whenever $x$ itself has many components, different backward error analyses will be possible since we will have the option of reflecting the forward error back into different selections of the components.
It is often the case that the map $\varphi$ can be defined as the solution to $\phi(x,y)=0$ for some operator $\phi$, i.e., as having the form
\begin{align} x\xrightarrow{\varphi} \left\{ y\mid \phi(x,y)=0\right\}\>.\end{align}
In this case, there will in particular be a simple and useful backward error resulting from computing the residual $r=\phi(x,\hat{y})$. Trivially $\hat{y}$ then exactly solves the reverse-engineered problem $\hat{\varphi}$ given by $\hat{\phi}(x,y)=\phi(x,y)-r=0$.
Thus, when the residual can be used as a backward error, this directly computes a reverse-engineered problem that our method has solved exactly. We are then in the fortunate position of having both a problem and its solution, and the challenge then consists in determining how similar the reference problem $\varphi$ and the modified problems $\hat{\varphi}$ are, \textsl{and whether or not the modified problem is a good model for the phenomenon being studied}.
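As a toy numerical illustration of this idea (ours, purely for concreteness): take $\phi(x)=x^2-2$ and the approximation $\hat{y}=1.414$; the residual tells us exactly which nearby problem $\hat{y}$ solves.

```python
# The residual r = phi(x_hat) reverse-engineers a nearby problem:
# x_hat is an *exact* root of x^2 - (2 + r) = 0.
x_hat = 1.414
r = x_hat**2 - 2                 # residual, about -6.04e-4
check = x_hat**2 - (2 + r)       # zero, up to rounding
print(r, check)
```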
\paragraph{Regular perturbation BEA-style}
Now let us introduce a \textsl{general framework for perturbation methods} that relies on the general framework for BEA introduced above.
Perturbation methods are so numerous and varied, and the problems tackled are from so many areas, that it seems a general scheme of solution would necessarily be so abstract as to be difficult to use in any particular case.
Actually, the following framework covers many methods. For simplicity of exposition, we will introduce it using the simple gauge functions $1,\ensuremath{\varepsilon},\ensuremath{\varepsilon}^2,\ldots$, but note that extension to other gauges is usually straightforward (such as Puiseux, $\ensuremath{\varepsilon}^n\ln^m\ensuremath{\varepsilon}$, etc), as we will show in the examples.
To begin with, let
\begin{align}
F(x,u;\ensuremath{\varepsilon})=0 \label{operatoreq}
\end{align}
be the operator equation we are attempting to solve for the unknown $u$. The dependence of $F$ on the scalar parameter $\ensuremath{\varepsilon}$ and on any data $x$ is assumed but henceforth not written explicitly. In the case of a simple power series perturbation, we will take the $m$th order approximation to $u$ to be given by the \textsl{finite} sum
\begin{align}
z_m = \sum_{k=0}^m \ensuremath{\varepsilon}^ku_k\>.
\end{align}
The operator $F$ is assumed to be Fr\'echet differentiable. For convenience we assume slightly more, namely, that for any $u$ and $v$ in a suitable region, there exists a linear invertible operator $F_1(v)$ such that
\begin{align}
F(u) = F(v) + F_1(v)(u-v) + O\left(\|u-v\|^2\right)\>.
\end{align}
Here, $\|\cdot\|$ denotes any convenient norm. We denote the \textsl{residual} of $z_m$ by
\begin{align}
\Delta_m := F(z_m)\>,
\end{align}
\emph{i.e.}, $\Delta_m$ results from evaluating $F$ at $z_m$ instead of evaluating it at the reference solution $u$ as in equation \eqref{operatoreq}. If $\|\Delta_m\|$ is small, we say we have solved a ``nearby'' problem, namely, the reverse-engineered problem for the unknown $u$ defined by
\begin{align}
F(u)-F(z_m) = 0\>,
\end{align}
which is exactly solved by $u=z_m$. Of course this is trivial. Its consequences are \textsl{not} trivial, however, if $\|\Delta_m\|$ is small compared to data errors or modelling errors in the operator $F$. We will exemplify this point more concretely later.
We now suppose that we have somehow found $z_0=u_0$, a solution with a residual whose size is such that
\begin{align}
\|\Delta_0\|=\|F(u_0)\| = O(\ensuremath{\varepsilon})\qquad \textrm{as} \qquad \ensuremath{\varepsilon}\to0\>.
\end{align}
Finding this $u_0$ is part of the art of perturbation; much of the rest is mechanical.
Suppose now inductively that we have found $z_n$ with residual of size
\[
\|\Delta_n\| = O\left(\ensuremath{\varepsilon}^{n+1}\right) \quad\textrm{ as }\quad \ensuremath{\varepsilon}\to0\>.
\]
Consider $F(z_{n+1})$ which, by definition, is just $F(z_n+\ensuremath{\varepsilon}^{n+1}u_{n+1})$. We wish to choose the term $u_{n+1}$ in such a way that $z_{n+1}$ has residual of size $\|\Delta_{n+1}\|=O(\ensuremath{\varepsilon}^{n+2})$ as $\ensuremath{\varepsilon}\to0$. Using the Fr\'echet derivative of the residual of $z_{n+1}$ at $z_n$, we see that
\begin{align}
\Delta_{n+1} &= F(z_n+\ensuremath{\varepsilon}^{n+1}u_{n+1})= F(z_n)+F_1(z_n)\ensuremath{\varepsilon}^{n+1}u_{n+1}+O\left(\ensuremath{\varepsilon}^{2n+2}\right)\>. \label{resseries1}
\end{align}
By linearity of the Fr\'echet derivative, we also obtain $F_1(z_n) = F_1(z_0)+O(\ensuremath{\varepsilon})= [\ensuremath{\varepsilon}^0]F_1(z_0)+O(\ensuremath{\varepsilon})$. Here, $[\ensuremath{\varepsilon}^k]G$ refers to the coefficient of $\ensuremath{\varepsilon}^k$ in the expansion of $G$. Let
\begin{align}
A=[\ensuremath{\varepsilon}^0]F_1(z_0)\>,
\end{align}
that is, the zeroth order term in $F_1(z_0)$. Thus, we reach the following expansion of $\Delta_{n+1}$:
\begin{align}
\Delta_{n+1} = F(z_n) + A\ensuremath{\varepsilon}^{n+1}u_{n+1}+O\left(\ensuremath{\varepsilon}^{n+2}\right)\>.\label{eqDnp1}
\end{align}
Note that, in equation \eqref{resseries1}, one could keep $F_1(z_n)$, not simplifying to $A$, and, just as in Newton's method, double the number of correct terms rather than computing just $u_{n+1}$. However, this is often too expensive in practice \cite[chap.~6]{Geddes(1992)b}, and so we will in general use this simplification. As noted, we only need $F_1(z_0)$ accurate to $O(\ensuremath{\varepsilon})$, so in place of $F_1(z_0)$ in equation \eqref{eqDnp1} we use $A$.
As a result of the above expansion of $\Delta_{n+1}$, we now see that to make $\Delta_{n+1} = O\left(\ensuremath{\varepsilon}^{n+2}\right)$, we must have $F(z_n)+A\ensuremath{\varepsilon}^{n+1}u_{n+1}=O(\ensuremath{\varepsilon}^{n+2})$, in which case
\begin{align}
A u_{n+1} +\frac{F(z_n)}{\ensuremath{\varepsilon}^{n+1}} = Au_{n+1} +\frac{\Delta_n}{\ensuremath{\varepsilon}^{n+1}} =O(\ensuremath{\varepsilon})\>.
\end{align}
Since by hypothesis $\Delta_n=F(z_n)=O(\ensuremath{\varepsilon}^{n+1})$, we know that $\sfrac{\Delta_n}{\ensuremath{\varepsilon}^{n+1}}=O(1)$.
In other words, to find $u_{n+1}$ we solve the linear operator equation
\begin{align*}
A u_{n+1} =
-[\ensuremath{\varepsilon}^{n+1}]\Delta_n\>,
\end{align*}
where, again, $[\ensuremath{\varepsilon}^{n+1}]$ is the coefficient of the $(n+1)$th power of $\ensuremath{\varepsilon}$ in the series expansion of $\Delta$. Note that by the inductive hypothesis the right hand side has norm $O(1)$ as $\ensuremath{\varepsilon}\to0$. Then $\|\Delta_{n+1}\| = O(\ensuremath{\varepsilon}^{n+2})$ as desired, so $u_{n+1}$ is indeed the coefficient we were seeking.
We thus need $A=[\ensuremath{\varepsilon}^0]F_1(z_0)$ to be invertible. If not, the problem is singular, and essentially requires reformulation.\footnote{We remark that being able to find our initial point $u_0$ and having an invertible $A=F_1(u_0;0)$ is a sufficient but not necessary condition for a regular expansion. A regular perturbation problem can be defined in many ways, not just in the way we have done, with invertible $A$. For example, \cite[Sec 7.2]{Bender(1978)} essentially uses continuity in $\ensuremath{\varepsilon}$ as $\ensuremath{\varepsilon}\to0$ to characterize it. Another characterization is that for regular perturbation problems infinite perturbation series are convergent for some non-zero radius of convergence.} We shall see examples. If $A$ is invertible, the problem is regular.
This general scheme can be compared to that of, say, \cite{Bellman(1972)}. Essential similarities can be seen. In Bellman's treatment, however, the residual is used implicitly, but not named or noted, and instead the equation defining $u_{n+1}$ is derived by postulating an infinite expansion
\begin{align}
u=u_0+\ensuremath{\varepsilon} u_1+\ensuremath{\varepsilon}^2u_2+\cdots\>.
\end{align}
By taking the coefficient of $\ensuremath{\varepsilon}^{n+1}$ in the expansion of $\Delta_n$ we are implicitly doing the same work, but we will see advantages of this point of view. %
Also, note that in the frequent case of more general asymptotic sequences, namely Puiseux series or generalized approximations containing logarithmic terms, we can make the appropriate changes in a straightforward manner, as we will show below.
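Before turning to examples, here is a minimal generic implementation of this scheme for truncated power series with exact rational coefficients (a Python sketch of our own, with hypothetical names; the scripts later in the paper use Maple). The driver solves $Au_{n+1}=-[\ensuremath{\varepsilon}^{n+1}]\Delta_n$ at each step, given a routine that returns the residual of a truncated series.

```python
from fractions import Fraction as Fr

def series_mul(p, q, order):
    # truncated product of eps-series given as coefficient lists
    r = [Fr(0)] * order
    for i, a in enumerate(p[:order]):
        for j, b in enumerate(q[:order - i]):
            r[i + j] += a * b
    return r

def perturb(residual, A, u0, N):
    # z_{n+1} = z_n + u_{n+1} eps^{n+1},  with  A u_{n+1} = -[eps^{n+1}] Delta_n
    z = [Fr(u0)]
    for n in range(N):
        delta = residual(z, N + 1)
        z.append(-delta[n + 1] / A)
    return z

# Example: F(u) = u^2 - eps*u - 1, with u0 = 1 and A = 2*u0 = 2.
def res_quadratic(z, order):
    d = series_mul(z, z, order)          # z^2
    for i, a in enumerate(z[:order - 1]):
        d[i + 1] -= a                    # - eps*z
    d[0] -= 1                            # - 1
    return d

print(perturb(res_quadratic, 2, 1, 4))   # coefficients 1, 1/2, 1/8, 0, -1/128
```

For this example the exact root $(\ensuremath{\varepsilon}+\sqrt{\ensuremath{\varepsilon}^2+4})/2$ expands as $1+\sfrac{\ensuremath{\varepsilon}}{2}+\sfrac{\ensuremath{\varepsilon}^2}{8}-\sfrac{\ensuremath{\varepsilon}^4}{128}+\cdots$, in agreement with the computed coefficients.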
\section{Algebraic equations}
We begin by applying the regular method from section \ref{genframe} to algebraic equations, starting with a simple scalar equation and gradually increasing the difficulty, thereby demonstrating the flexibility of the backward error point of view.
\subsection{Regular perturbation}\label{RegularPert}
In this section, after applying the method from section \ref{genframe} to a scalar equation, we use the same method to solve a $2\times2$ system; higher dimensional systems can be solved similarly. We give some computer algebra implementations (scripts that the reader may modify) of the basic method. Finally, in this section, we give an alternative method based on the Davidenko equation that is simpler to use in Maple.
\subsubsection{Scalar equations}
Let us consider a simple example similar to many used in textbooks for classical perturbation analysis. Suppose we wish to find a real root of
\begin{align}
x^5 -x-1=0 \label{refprobalgeq}
\end{align}
and, since the Abel-Ruffini theorem---which says that in general there are no solutions in radicals to equations of degree 5 or more---suggests it is unlikely that we can find an elementary expression for the solution of this \textsl{particular} equation of degree 5, we introduce a parameter which we call $\ensuremath{\varepsilon}$, and moreover which we suppose to be small. That is, we embed our problem in a parametrized family of similar problems. If we decide to introduce $\ensuremath{\varepsilon}$ in the degree-1 term, so that
\begin{align}
u^5-\ensuremath{\varepsilon} u-1=0\>, \label{pertalgeq}
\end{align}
we will see that we have a so-called regular perturbation problem.
To begin with, we wish to find a $z_0$ such that $\Delta_0=F(z_0) = z_0^5-\ensuremath{\varepsilon} z_0-1=O(\ensuremath{\varepsilon})$. Quite clearly, this can happen only if $z_0^5-1=0$. Ignoring the complex roots in this example, we take $z_0=1$. To continue the solution process, we now suppose that we have found
\begin{align}
z_n = \sum_{k=0}^n u_k\ensuremath{\varepsilon}^k
\end{align}
such that $\Delta_n=F(z_n) = z_n^5-\ensuremath{\varepsilon} z_n-1=O(\ensuremath{\varepsilon}^{n+1})$ and we wish to use our iterative procedure. We need the Fr\'echet derivative of $F$, which in this case is just
\begin{align}
F_1(u) &= 5u^4-\ensuremath{\varepsilon}\>,
\end{align}
because
\begin{align}
F(u) = u^5-\ensuremath{\varepsilon} u-1 &= v^5-\ensuremath{\varepsilon} v-1 + F_1(v)(u-v)+O\left((u-v)^2\right)\>.
\end{align}
Hence, $A=5z_0^4=5$, which is invertible. As a result, writing $\Delta_n=F(z_n)$, our iteration is
\begin{align}
5u_{n+1} = -[\ensuremath{\varepsilon}^{n+1}]\Delta_n\>.
\end{align}
Carrying out a few steps we have
\begin{align}
\Delta_0 = F(z_0) = F(1) = 1-\ensuremath{\varepsilon}-1 = -\ensuremath{\varepsilon}
\end{align}
so
\begin{align}
5\cdot u_1 = -[\ensuremath{\varepsilon}]\Delta_0 = -[\ensuremath{\varepsilon}](-\ensuremath{\varepsilon}) = 1\>.
\end{align}
Thus, $u_1=\sfrac{1}{5}$. Therefore, $z_1=1+\sfrac{\ensuremath{\varepsilon}}{5}$ and
\begin{align}
\Delta_1 &= \left(1+\frac{\ensuremath{\varepsilon}}{5}\right)^5 -\ensuremath{\varepsilon}\left(1+\frac{\ensuremath{\varepsilon}}{5}\right)-1\\
& = \left(1+5\frac{\ensuremath{\varepsilon}}{5}+10\frac{\ensuremath{\varepsilon}^2}{25}+O\left(\ensuremath{\varepsilon}^3\right)\right) - \ensuremath{\varepsilon}-\frac{\ensuremath{\varepsilon}^2}{5}-1\\
&=\left(\frac{2}{5}-\frac{1}{5}\right)\ensuremath{\varepsilon}^2+O\left(\ensuremath{\varepsilon}^3\right) = \frac{1}{5}\ensuremath{\varepsilon}^2+O\left(\ensuremath{\varepsilon}^3\right)\>.
\end{align}
Then we find that $Au_2=-\sfrac{1}{5}$ and thus $u_2=-\sfrac{1}{25}$. So, $u=1+\sfrac{\ensuremath{\varepsilon}}{5}-\sfrac{\ensuremath{\varepsilon}^2}{25}+O(\ensuremath{\varepsilon}^3)$. Finding more terms by this method is clearly possible although tedium might be expected at higher orders.
Luckily, computers and programs that can solve such problems without much human effort are nowadays widely available, but before we demonstrate that, let's compute the residual of our computed solution so far:
\[
z_2 = 1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2\>.
\]
Then $\Delta_2 = z_2^5-\ensuremath{\varepsilon} z_2-1$ is
\begin{align}
\Delta_2 &= \left(1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2\right)^5-\ensuremath{\varepsilon}\left(1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2\right)-1 \nonumber \\
& = -\frac{1}{25}\ensuremath{\varepsilon}^3 - \frac{3}{125}\ensuremath{\varepsilon}^4+\frac{11}{3125}\ensuremath{\varepsilon}^5 +\frac{3}{3125}\ensuremath{\varepsilon}^6 -\frac{2}{15625}\ensuremath{\varepsilon}^7 \nonumber\\ &\qquad -\frac{1}{78125}\ensuremath{\varepsilon}^8 +\frac{1}{390625}\ensuremath{\varepsilon}^9 -\frac{1}{9765625}\ensuremath{\varepsilon}^{10}\>.
\end{align}
We note the following. First, $z_2$ exactly solves the modified equation
\begin{align}
x^5-\ensuremath{\varepsilon} x-1\enskip +\frac{1}{25}\ensuremath{\varepsilon}^3 + \frac{3}{125}\ensuremath{\varepsilon}^4-\ldots + \frac{1}{9765625}\ensuremath{\varepsilon}^{10}=0 \label{starred}
\end{align}
which is $O(\ensuremath{\varepsilon}^3)$ different to the original. Second, the complete residual was computed rationally: there is no error in saying that $z_2=1+\sfrac{\ensuremath{\varepsilon}}{5}-\sfrac{\ensuremath{\varepsilon}^2}{25}$ solves equation \eqref{starred} exactly. Third, if $\ensuremath{\varepsilon}=1$ then $z_2=1+\sfrac{1}{5}-\sfrac{1}{25}=1.16$ exactly (or $1\sfrac{4}{25}$ if you prefer), and the residual is then $(\sfrac{29}{25})^5-\sfrac{29}{25}-1\doteq -0.059658$, showing that $1.16$ is the exact root of an equation about 6\% different to the original.
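These claims are cheap to check with exact rational arithmetic. The following sketch (ours, purely illustrative) recomputes the full residual of $z_2$ as a degree-10 polynomial in $\ensuremath{\varepsilon}$ and evaluates it at $\ensuremath{\varepsilon}=1$.

```python
from fractions import Fraction as Fr

# z_2 = 1 + eps/5 - eps^2/25, as a list of coefficients in powers of eps
z2 = [Fr(1), Fr(1, 5), Fr(-1, 25)]

def mul(p, q):
    # exact polynomial product
    r = [Fr(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

z5 = [Fr(1)]
for _ in range(5):
    z5 = mul(z5, z2)

# Delta_2 = z2^5 - eps*z2 - 1, computed exactly (a degree-10 polynomial in eps)
delta = z5[:]
for i, a in enumerate(z2):
    delta[i + 1] -= a          # subtract eps*z2
delta[0] -= 1                  # subtract 1

print(delta[3], delta[10])     # -1/25 and -1/9765625
print(float(sum(delta)))       # residual at eps = 1: about -0.0596583
```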
Something simple but importantly different to the usual treatment of perturbation methods has happened here. We have assessed the quality of the solution in an explicit fashion without concern for convergence issues or for the exact solution to $x^5-x-1=0$, which we term the reference problem. We use this term because its solution will be the reference solution. We can't call it the ``exact'' solution because $z_2$ is \textsl{also} an ``exact'' solution, namely to equation~\eqref{starred}.
Every numerical analyst and applied mathematician knows that this isn't the whole story---we need some evaluation or estimate of
the effects of such perturbations of the problem. One effect is the difference between $z_2$ and $x$, the reference solution, and this is what people focus on. We believe this focus is sometimes excessive. There are other possible views: for instance, one can ask whether the backward error is physically reasonable.
As an example, if $\ensuremath{\varepsilon}=1$ and $z_2=1.16$ then $z_2$ exactly solves $y^5-y-a=0$ where $a\neq 1$ but rather $a\doteq 0.9403$. If the original equation was really $u^5-u-\alpha=0$ where $\alpha=1\pm 5\%$ we might be inclined to accept $z_2=1.16$ because, for all we know, we might have the true solution (even though we're outside the $\pm 5\%$ range, we're only just outside; and how confident are we in the $\pm5\%$, after all?).
\subsubsection{Simple computer algebra solution}
The following Maple script can be used to solve this or similar problems $f(u;\ensuremath{\varepsilon})=0$. Other computer algebra systems can also be used.
\lstinputlisting{RegularScalar}
That code is a straightforward implementation of the general scheme presented in subsection \ref{genframe}. Its results, translated into \LaTeX\ and cleaned up a bit, are that
\begin{align}
z = 1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2+\frac{1}{125}\ensuremath{\varepsilon}^3
\end{align}
and that the residual of this solution is
\begin{align}
\Delta = \frac{21}{3125}\ensuremath{\varepsilon}^5+O\left( \ensuremath{\varepsilon}^6 \right) \>.
\end{align}
With $N=3$, we get an extra order of accuracy as the next term in the series is zero, but this result is serendipitous.
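Since the residual is itself just a truncated power-series evaluation, the claim is easy to cross-check independently of Maple. Here is a minimal Python sketch using exact rational arithmetic (Python is used in this paper only for such independent checks; the computations in the text are in Maple):

```python
from fractions import Fraction as F

N = 8  # truncation order for series in eps

def mul(a, b):
    """Product of two truncated power series (coefficient lists in eps)."""
    c = [F(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# z = 1 + eps/5 - eps^2/25 + eps^3/125
z = [F(1), F(1, 5), F(-1, 25), F(1, 125)] + [F(0)] * (N - 4)

z5 = z
for _ in range(4):
    z5 = mul(z5, z)

# residual Delta = z^5 - eps*z - 1
res = list(z5)
for k in range(1, N):
    res[k] -= z[k - 1]
res[0] -= F(1)
print(res[4], res[5])  # -> 0 21/3125
```

The residual vanishes through $\ensuremath{\varepsilon}^4$, and its first surviving term is $\frac{21}{3125}\ensuremath{\varepsilon}^5$, exactly as reported.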
\subsubsection{Systems of algebraic equations}\label{systems}
Regular perturbation for systems of equations using the framework from section \ref{genframe} is straightforward. We include an example to show some computer algebra and for completeness. Consider the following two equations in two unknowns:
\begin{align}
f_1(v_1,v_2) &=v_1^2+v_2^2 -1-\ensuremath{\varepsilon} v_1v_2 = 0\\
f_2(v_1,v_2) &= 25v_1v_2-12+2\ensuremath{\varepsilon} v_1 =0
\end{align}
When $\ensuremath{\varepsilon}=0$ these equations determine the intersections of a hyperbola with the unit circle. There are four such intersections: $(\sfrac{3}{5},\sfrac{4}{5}), (\sfrac{4}{5},\sfrac{3}{5}), (-\sfrac{3}{5},-\sfrac{4}{5})$ and $(-\sfrac{4}{5},-\sfrac{3}{5})$. The Jacobian matrix (which gives us the Fr\'echet derivative in the case of algebraic equations) is
\begin{align}
F_1(v) = \begin{bmatrix} \frac{\partial f_1}{\partial v_1} & \frac{\partial f_1}{\partial v_2} \\[.25cm] \frac{\partial f_2}{\partial v_1} & \frac{\partial f_2}{\partial v_2} \end{bmatrix} = \begin{bmatrix} 2v_1 & 2v_2 \\ 25 v_2 & 25 v_1\end{bmatrix} + O(\ensuremath{\varepsilon})\>.
\end{align}
Taking for instance $u_0=[\sfrac{3}{5},\sfrac{4}{5}]^T$ we have
\begin{align}
A= F_1(u_0) = \begin{bmatrix} \sfrac{6}{5} & \sfrac{8}{5} \\ 20 & 15\end{bmatrix}\>.
\end{align}
Since $\det A=-14\neq 0$, $A$ is invertible and indeed
\begin{align}
A^{-1} = \begin{bmatrix} -\sfrac{15}{14} & \sfrac{4}{35} \\ \sfrac{10}{7} & -\sfrac{3}{35} \end{bmatrix}\>.
\end{align}
The residual of the zeroth order solution is
\begin{align}
\Delta_0 = F\left(\frac{3}{5},\frac{4}{5}\right) = \ensuremath{\varepsilon}\begin{bmatrix}-\sfrac{12}{25} \\ \sfrac{6}{5} \end{bmatrix}\>,
\end{align}
so $-[\ensuremath{\varepsilon}]\Delta_0 = [\sfrac{12}{25},-\sfrac{6}{5}]^T$. Therefore
\begin{align}
u_1 = \begin{bmatrix} u_{11} \\ u_{12}\end{bmatrix} = A^{-1}\begin{bmatrix}\sfrac{12}{25} \\ -\sfrac{6}{5}\end{bmatrix} = \begin{bmatrix} -\sfrac{114}{175} \\ \sfrac{138}{175}\end{bmatrix}
\end{align}
and $z_1=u_0+\ensuremath{\varepsilon} u_1$ is our improved solution:
\begin{align}
z_1 = \begin{bmatrix}\sfrac{3}{5} \\ \sfrac{4}{5} \end{bmatrix} + \ensuremath{\varepsilon} \begin{bmatrix} -\sfrac{114}{175} \\ \sfrac{138}{175}\end{bmatrix}\>.
\end{align}
To guard against slips, blunders, and bugs (some of those calculations were done by hand, and some were done in Sage on an Android phone) we compute
\begin{align}
\Delta_1 = F(z_1) = \ensuremath{\varepsilon}^2\begin{bmatrix}\sfrac{6702}{6125} \\ -\sfrac{17328}{1225}\end{bmatrix} + O\left(\ensuremath{\varepsilon}^3\right)\>.
\end{align}
That computation was done in Maple, completely independently. Initially it came out $O(\ensuremath{\varepsilon})$ indicating that something was not right; tracking the error down we found a typo in the Maple data entry ($183$ was entered instead of $138$). Correcting that typo we find $\Delta_1=O(\ensuremath{\varepsilon}^2)$ as it should be. Here is the corrected Maple code:
\lstinputlisting{ResidualSystem}
Just as for the scalar case, this process can be systematized and we give one way to do so in Maple, below. The code is not as pretty as the scalar case is, and one has to explicitly ``map'' the series function and the extraction of coefficients onto matrices and vectors, but this demonstrates feasibility.
\lstinputlisting{RegularSystem.tex}
This code computes $z_3$ correctly and gives a residual of $O(\ensuremath{\varepsilon}^4)$. From the backward error point of view, this code finds the intersection of curves that differ from the specified ones by terms of $O(\ensuremath{\varepsilon}^4)$. In the next section, we show a way to use a built-in feature of Maple to do the same thing with less human labour.
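As an independent cross-check on the arithmetic of this subsection (done above partly by hand and partly in Maple and Sage), the first-order correction and the order of its residual can be verified in exact rational arithmetic with a few lines of Python:

```python
from fractions import Fraction as F

def f(v1, v2, e):
    # the system f_1, f_2 from above
    return (v1*v1 + v2*v2 - 1 - e*v1*v2,
            25*v1*v2 - 12 + 2*e*v1)

u0 = (F(3, 5), F(4, 5))
# A = [eps^0] F_1(u0)
A = [[2*u0[0], 2*u0[1]],
     [25*u0[1], 25*u0[0]]]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]

b = (F(12, 25), F(-6, 5))                    # -[eps] Delta_0
u1 = ((b[0]*A[1][1] - A[0][1]*b[1]) / det,   # Cramer's rule
      (A[0][0]*b[1] - A[1][0]*b[0]) / det)
print(u1)  # -> (Fraction(-114, 175), Fraction(138, 175))
```

Cramer's rule stands in for the explicit inverse here; for larger systems one would factor $A$ instead.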
\subsubsection{The Davidenko equation}
Maple has a built-in facility for solving differential equations in series that (at the time of writing) is superior to its built-in facility for solving algebraic equations in series, because the latter can only handle scalar equations. This may change in the future, but it may not because there is the following simple workaround. To solve
\begin{align}
F(u;\ensuremath{\varepsilon})=0
\end{align}
for a function $u(\ensuremath{\varepsilon})$ expressed as a series, simply differentiate to get
\begin{align}
D_1(F)(u,\ensuremath{\varepsilon})\frac{du}{d\ensuremath{\varepsilon}} + D_2(F)(u,\ensuremath{\varepsilon})=0\>.
\end{align}
Boyd \cite{Boyd(2014)} calls this the Davidenko equation. If we solve this in Taylor series with the initial condition $u(0)=u_0$, we have our perturbation series. Notice that what we were calling $A=[\ensuremath{\varepsilon}^0]F_1(u_0)$ occurs here as $D_1(F)(u_0,0)$ and this needs to be nonsingular to be solved as an ordinary differential equation; if $\mathrm{rank}(D_1(F)(u_0,0))<n$ then this is in fact a nontrivial differential algebraic equation that Maple may still be able to solve using advanced techniques (see, e.g., \cite{Avrachenkov(2013)}). Let us just show a simple case here:
\lstinputlisting{RegularDavidenko}
This generates (to the specified value of the order, namely, \verb|Order=4|) the solution
\begin{align}
x(\ensuremath{\varepsilon}) &=\frac{3}{5}-\frac{114}{175}\ensuremath{\varepsilon}+\frac{119577}{42875}\ensuremath{\varepsilon}^2-\frac{43543632}{2100875}\ensuremath{\varepsilon}^3\\
y(\ensuremath{\varepsilon}) &=\frac{4}{5}+\frac{138}{175}\ensuremath{\varepsilon}-\frac{119004}{42875}\ensuremath{\varepsilon}^2+\frac{43245168}{2100875}\ensuremath{\varepsilon}^3\>,
\end{align}
whose residual is $O(\ensuremath{\varepsilon}^4)$. Internally, Maple uses its own algorithms, which occasionally get improved as algorithmic knowledge advances.
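In the spirit of this paper, the quoted series can be accepted or rejected on the evidence of its residual alone; a quick floating-point check in Python (independent of Maple) shows each residual component shrinking like $\ensuremath{\varepsilon}^4$:

```python
# residual check of the degree-3 Davidenko series at small eps
def f(v1, v2, e):
    return (v1*v1 + v2*v2 - 1 - e*v1*v2,
            25*v1*v2 - 12 + 2*e*v1)

def x(e):
    return 3/5 - 114/175*e + 119577/42875*e**2 - 43543632/2100875*e**3

def y(e):
    return 4/5 + 138/175*e - 119004/42875*e**2 + 43245168/2100875*e**3

for e in (1e-2, 1e-3):
    print(e, f(x(e), y(e), e))  # shrinks roughly by 1e4 per decade in e
```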
\subsection{Puiseux series}\label{Puiseux}
Puiseux series
are simply Taylor series or Laurent series with fractional powers. A standard example is
\begin{align}
\sin\sqrt{x} = x^{\sfrac{1}{2}} - \frac{1}{3!}x^{\sfrac{3}{2}} + \frac{1}{5!}x^{\sfrac{5}{2}}+\cdots
\end{align}
A simple change of variable (e.g. $t=\sqrt{x}$ so $x=t^2$) is enough to convert to Taylor series. Once the appropriate power $n$ is known for $\ensuremath{\varepsilon}=\mu^n$, perturbation by Puiseux expansion reduces to computations similar to those we've seen already.
For instance, had we chosen to embed $u^5-u-1$ in the family $u^5-\ensuremath{\varepsilon}(u+1)$ (which is, in a sense, conjugate to the family of the last section), then because the equation becomes $u^5=0$ when $\ensuremath{\varepsilon}=0$ we see that we have a five-fold root to perturb, and we thus suspect that we will need Puiseux series.
For scalar equations, there are built-in facilities in Maple for Puiseux series, which give yet another way to solve scalar algebraic equations perturbatively. One can use the \texttt{RootOf} construct to do so as follows:
\lstinputlisting{Puiseux}
This yields
\begin{align}
z = \alpha\ensuremath{\varepsilon}^{\sfrac{1}{5}}+\frac{1}{5}\alpha^2\ensuremath{\varepsilon}^{\sfrac{2}{5}} -\frac{1}{25}\alpha^3\ensuremath{\varepsilon}^{\sfrac{3}{5}} +\frac{1}{125}\alpha^4\ensuremath{\varepsilon}^{\sfrac{4}{5}} - \frac{21}{15625}\alpha \ensuremath{\varepsilon}^{\sfrac{6}{5}} \>.
\end{align}
This one series describes all five branches (one for each fifth root of unity $\alpha$), accurately for small $\ensuremath{\varepsilon}$. Note that the command
\begin{lstlisting}
alias(alpha = RootOf(u^5-1,u))
\end{lstlisting}
is a way to tell Maple that $\alpha$ represents a fixed fifth root of unity. Exactly which fixed root can be deferred till later. Working instead with the default value for the environment variable \texttt{Order}, namely \texttt{Order := 6}, gets us a longer series for $z$ containing terms up to $\ensuremath{\varepsilon}^{\sfrac{29}{5}}$ but not $\ensuremath{\varepsilon}^{\sfrac{30}{5}}=\ensuremath{\varepsilon}^6$. Putting the resulting $z_6$ back into $f(u)$ we get a residual
\begin{align}
\Delta_6 = f(z_6) = \frac{23927804441356816}{14551915228366851806640625}\ensuremath{\varepsilon}^7 + O(\ensuremath{\varepsilon}^8)
\end{align}
Thus we expect that for small $\ensuremath{\varepsilon}$ the residual will be quite small. For instance, with $\ensuremath{\varepsilon}=1$ the exact residual is, for $\alpha=1$, $\Delta_6=1.2\cdot 10^{-9}$. This tells us that this approximation ought to get us quite accurate roots, and indeed it does.
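This quality claim can also be checked in exact rational arithmetic. Writing $\ensuremath{\varepsilon}=\mu^5$, the $\alpha=1$ branch becomes a polynomial in $\mu$, with $\ensuremath{\varepsilon}^{\sfrac{6}{5}}$ coefficient $-\sfrac{21}{15625}$ (note $15625=5^6$), and its residual in $u^5-\ensuremath{\varepsilon}(u+1)$ must then vanish through $\mu^{10}$. A short Python sketch, again only as an independent check on the Maple computation:

```python
from fractions import Fraction as F

N = 12  # truncation order for series in mu, where eps = mu^5

def mul(a, b):
    """Product of two truncated power series (coefficient lists in mu)."""
    c = [F(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# alpha = 1 branch: z = mu + mu^2/5 - mu^3/25 + mu^4/125 - (21/15625) mu^6
z = [F(0)] * N
z[1], z[2], z[3], z[4], z[6] = F(1), F(1, 5), F(-1, 25), F(1, 125), F(-21, 15625)

z5 = z
for _ in range(4):
    z5 = mul(z5, z)

# residual z^5 - mu^5*(z + 1)
res = list(z5)
res[5] -= F(1)
for k in range(5, N):
    res[k] -= z[k - 5]
print(res[:11])  # all eleven leading coefficients are zero
```

That the residual vanishes through $\mu^{10}$ confirms both the exponents and the rational coefficients of the Puiseux series.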
We conclude this discussion with two remarks. The first is that by a discriminant analysis as we describe in section \ref{SingPert}, we find that the nearest singularity is at $\ensuremath{\varepsilon}=\sfrac{3125}{256}$, and so we expect this series to actually converge for $\ensuremath{\varepsilon}=1$. Again, this fact was not used in our analysis above. Secondly, we could have used the \verb|series/RootOf| technique to do both the regular perturbation in subsection \ref{RegularPert} or the singular one we will do in subsection \ref{SingPert}. The Maple commands are quite similar:
\begin{lstlisting}
series(RootOf(u^5-e*u-1,u),e);
\end{lstlisting}
and
\begin{lstlisting}
series(RootOf(e*u^5-u-1,u),e);
\end{lstlisting}
However, in both cases only the real root is expanded. Some ``Maple art'' (that one of us more readily characterizes as black magic) can be used to complete the computation, but the previous codes (both the loop and the Davidenko equation) are easier to generalize. Making the \texttt{dsolve/series} code for the Davidenko equation work in the case of Puiseux series requires a preliminary scaling.
\subsection{Singular perturbation}\label{SingPert}
Suppose that instead of embedding $u^5-u-1=0$ in the regular family we used in the previous section, we had used $\ensuremath{\varepsilon} u^5-u-1=0$. If we run our previous Maple programs, we find that the zeroth order solution is unique, and $z_0=-1$. The Fr\'echet derivative is $-1$ to $O(\ensuremath{\varepsilon})$, and so $u_{n+1} = [\ensuremath{\varepsilon}^{n+1}]\Delta_n$ for all $n\geq 0$. We find, for instance,
\begin{align}
z_7 = -1-\ensuremath{\varepsilon} -5\ensuremath{\varepsilon}^2 - 35\ensuremath{\varepsilon}^3 -285\ensuremath{\varepsilon}^4 -2530\ensuremath{\varepsilon}^5 -23751\ensuremath{\varepsilon}^6 -231880\ensuremath{\varepsilon}^7
\end{align}
which has residual $\Delta_7 = O(\ensuremath{\varepsilon}^8)$ but with a larger integer as the constant hidden in that $O$ symbol. For $\ensuremath{\varepsilon}=0.2$, the value of $z_7$ becomes \begin{align}z_7\doteq -7.4337280\end{align} while $\Delta_7=-4533.64404$, which is not small at all. Thus we have no evidence this perturbation solution is any good: we have the exact solution to $u^5-0.2 u-1=-4533.64404$ or $u^5-0.2 u+4532.64404=0$, probably not what was intended (and if it was, it would be a colossal fluke). Note that we do not need to know a reference value of a root of $u^5-0.2u-1$ to determine this.
Trying a smaller $\ensuremath{\varepsilon}$, we find that if $\ensuremath{\varepsilon}=0.05$ we have $z_7\doteq -1.07$ and $\Delta_7\doteq -1.2\cdot 10^{-4}$. This means $z_7$ is an exact root of $u^5-0.05 u-0.99988$, which may very well be what we want.
The following remark is not really germane to the method but it's interesting. Taking the discriminant with respect to $u$, i.e., the resultant of $f$ and $\sfrac{\partial f}{\partial u}$, we find $\mathrm{discrim}(f) = \ensuremath{\varepsilon}^3(3125\ensuremath{\varepsilon}-256)$. Thus $f$ will have multiple roots if $\ensuremath{\varepsilon}=0$ (there are 4 multiple roots at infinity) or if $\ensuremath{\varepsilon} = \sfrac{256}{3125}=0.08192$. Thus our perturbation expansion can be expected to diverge\footnote{A separate analysis leads to the identification of $u_k = \frac{1}{5k+1}\binom{5k+1}{k}$ (via \cite{OEIS}). The ratio test confirms that the radius of convergence is $\sfrac{256}{3125}$; since $u_k$ grows like $(\sfrac{3125}{256})^k k^{-\sfrac{3}{2}}$, the series in fact still converges (slowly) at $\ensuremath{\varepsilon} = \sfrac{256}{3125}$ itself.} for $\ensuremath{\varepsilon}> 0.08192$. What happens to $z_7$ if $\ensuremath{\varepsilon}=\sfrac{256}{3125}$? $z_7\doteq -1.1698$ and $\Delta_7=-9.65\cdot10^{-3}$, so we have an exact solution for $u^5-\sfrac{256}{3125}u-1.00965$; this is not bad. The reference double root is $-1.25$, about $0.1$ away, although this fact was not used in the previous discussion.
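Both the closed form in the footnote and the residual claim are easy to replicate in exact arithmetic; a short Python cross-check (the text's computations are in Maple):

```python
from fractions import Fraction as F
from math import comb

N = 16  # truncation order for series in eps

def mul(a, b):
    """Product of two truncated power series (coefficient lists in eps)."""
    c = [F(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# z_7 = -1 - sum_{k=1}^{7} u_k eps^k, with u_k = C(5k+1, k)/(5k+1)
z = [-F(comb(5*k + 1, k), 5*k + 1) for k in range(8)] + [F(0)] * (N - 8)
print([int(c) for c in z[:8]])
# -> [-1, -1, -5, -35, -285, -2530, -23751, -231880]

z5 = z
for _ in range(4):
    z5 = mul(z5, z)

# residual of z_7 in eps*u^5 - u - 1
res = [F(0)] * N
for k in range(1, N):
    res[k] = z5[k - 1] - z[k]
res[0] = -z[0] - 1
print(all(r == 0 for r in res[:8]))  # -> True
```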
But this computation, valid as it is, only found one root out of five, and then only for sufficiently small $\ensuremath{\varepsilon}$. We now turn to the roots that go to infinity as $\ensuremath{\varepsilon}\to0$. Preliminary investigation similar to that of subsection \ref{Puiseux} shows that it is convenient to replace $\ensuremath{\varepsilon}$ by $\mu^4$.
Many singular perturbation problems including this one can be turned into regular ones by rescaling. Putting $u=\sfrac{y}{\mu}$, we get
\begin{align}
\mu^4\left(\frac{y}{\mu}\right)^5-\frac{y}{\mu}-1=0\>,
\end{align}
which reduces to
\begin{align}
y^5-y-\mu=0\>.
\end{align}
This is now regular in $\mu$. At zeroth order the equation is $y(y^4-1)=0$, and the root $y=0$ just recovers the regular series previously obtained; so we let $\alpha$ be a root of $y^4-1$, i.e., $\alpha\in\{1,-1,i,-i\}$. A very similar Maple program (to either of the previous two) gives
\begin{align}
y_5= \alpha +\frac{1}{4}\mu - \frac{5}{32}\alpha^3\mu^2 +\frac{5}{32}\alpha^2\mu^3-\frac{385}{2048}\alpha\mu^4 + \frac{1}{4}\mu^5
\end{align}
so our approximate solution is $\sfrac{y_5}{\mu}$ or
\begin{align}
z_5 = \frac{\alpha}{\mu}+\frac{1}{4}-\frac{5}{32}\alpha^3\mu+\frac{5}{32}\alpha^2\mu^2-\frac{385}{2048}\alpha\mu^3+\frac{1}{4}\mu^4
\end{align}
which has residual \textsl{in the original equation}
\begin{align}
\Delta_5 = \mu^4 z^5 -z-1= \frac{23205}{16384}\alpha^3\mu^5 - \frac{21255}{65536}\alpha^2\mu^6 +O(\mu^7)\>.\label{residorigeq}
\end{align}
That is, $z_5$ exactly solves $\mu^4u^5-u-1-\sfrac{23205}{16384}\>\alpha^3\mu^5=O(\mu^6)$ instead of the one we had wanted to solve. This differs from the original by $O(|\ensuremath{\varepsilon}|^{\sfrac{5}{4}})$, and for small enough $\ensuremath{\varepsilon}$ this may suffice.
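For the $\alpha=1$ branch this residual computation is easy to replicate exactly: clearing the $\sfrac{1}{\mu}$ by working with $y_5=\mu z_5$, the residual $\mu^4z_5^5-z_5-1$ becomes $(y_5^5-y_5-\mu)/\mu$. A short Python check in exact rational arithmetic (independent of Maple):

```python
from fractions import Fraction as F

N = 10  # truncation order for series in mu

def mul(a, b):
    """Product of two truncated power series (coefficient lists in mu)."""
    c = [F(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# y_5 for alpha = 1: 1 + mu/4 - 5 mu^2/32 + 5 mu^3/32 - 385 mu^4/2048 + mu^5/4
y = [F(1), F(1, 4), F(-5, 32), F(5, 32), F(-385, 2048), F(1, 4)] + [F(0)] * (N - 6)

y5 = y
for _ in range(4):
    y5 = mul(y5, y)

# mu * Delta_5 = y^5 - y - mu
res = [y5[k] - y[k] for k in range(N)]
res[1] -= F(1)
print(res[6])  # -> 23205/16384, i.e. [mu^5] Delta_5 at alpha = 1
```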
\paragraph{Optimal backward error}
Interestingly enough, we can do better. The residual is only one kind of backward error. Taking the lead from the Oettli-Prager theorem \cite[chap.~6]{CorlessFillion(2013)}, we look for equations of the form
\begin{align}
\left(\mu^4 +\sum_{j=10}^{15} a_j\mu^j\right) u^5 - u -1
\end{align}
for which $z_5$ is a better solution yet. Simply equating coefficients of the residual
\begin{align}
\tilde{\Delta}_5 = \left(\mu^4+\sum_{j=10}^{15}a_j\mu^j\right)z_5^5-z_5-1
\end{align}
to zero, we find
\begin{align}
(\mu^4 - \frac{23205}{16384}\alpha^2\mu^{10}+ \frac{2145}{1024}\alpha\mu^{11})z_5^5 -z_5-1 = \frac{12165535425}{1073741824}\alpha\mu^{11}+O(\mu^{12})
\end{align}
and thus $z_5$ solves an equation that is $O(\mu^{10})=O(\ensuremath{\varepsilon}^{\sfrac{5}{2}})$ close to the original, not just an equation \eqref{residorigeq} that is $O(\mu^{5})=O(|\ensuremath{\varepsilon}|^{\sfrac{5}{4}})$ close. This is a superior explanation of the quality of $z_5$.
This was obtained with the following Maple code:
\lstinputlisting{SingularPertOettli}
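The cancellation can also be replicated independently of Maple: multiplying through by $\mu$ and writing $y=\mu u$, the perturbed equation reads $(1+a_{10}\mu^6+a_{11}\mu^7)y^5-y-\mu$, and for $\alpha=1$ the two displayed corrections already annihilate the residual through $\mu^7$. A Python sketch in exact rational arithmetic:

```python
from fractions import Fraction as F

N = 10  # truncation order for series in mu

def mul(a, b):
    """Product of two truncated power series (coefficient lists in mu)."""
    c = [F(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# y_5 for alpha = 1, as before
y = [F(1), F(1, 4), F(-5, 32), F(5, 32), F(-385, 2048), F(1, 4)] + [F(0)] * (N - 6)
y5 = y
for _ in range(4):
    y5 = mul(y5, y)

# leading-coefficient corrections a10, a11 for alpha = 1
a10, a11 = F(-23205, 16384), F(2145, 1024)
lead = [F(0)] * N
lead[0], lead[6], lead[7] = F(1), a10, a11

# residual (1 + a10 mu^6 + a11 mu^7) y^5 - y - mu
res = mul(lead, y5)
for k in range(N):
    res[k] -= y[k]
res[1] -= F(1)
print([k for k in range(N) if res[k] != 0])  # every surviving index is past 7
```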
Computing to higher orders (see the worksheet) shows, e.g., that $z_8$ is the exact solution to an equation that differs by $O(\mu^{13})$ from the original, or better than $O(\ensuremath{\varepsilon}^3)$. This is in spite of the fact that the basic residual $\Delta_8=O(\ensuremath{\varepsilon}^{9/4})$ is only slightly better than $O(\ensuremath{\varepsilon}^2)$.
We will see other examples of improved backward error over residual for singularly-perturbed problems. In retrospect it's not so surprising, or shouldn't have been: singular problems are sensitive to changes in the leading term, and so it takes less effort to match a given solution.
\subsection{Perturbing all roots at once}
The preceding analysis found a nearby equation for each root independently; this might suffice, but there are circumstances in which it might not. Perhaps we want a ``nearby'' equation satisfied by all roots at once. Sadly this is more difficult, and in general may not be possible. But it is possible for the example we've considered and we demonstrate how the backward error is used in such a case. Let
\begin{align}
\zeta_1 &= z_5(1)= \frac{1}{\mu}+\frac{1}{4} -\frac{5}{32}\mu+\frac{5}{32}\mu^2-\frac{385}{2048}\mu^3+\frac{1}{4}\mu^4\\
\zeta_2 &= z_5(-1) = -\frac{1}{\mu}+\frac{1}{4}+\frac{5}{32}\mu +\frac{5}{32}\mu^2+\frac{385}{2048}\mu^3 + \frac{1}{4}\mu^4\\
\zeta_3 &= z_5(i) = \frac{i}{\mu}+\frac{1}{4} + \frac{5i}{32}\mu -\frac{5}{32}\mu^2-\frac{385i}{2048}\mu^3 + \frac{1}{4}\mu^4\\
\zeta_4 &= z_5(-i) = -\frac{i}{\mu}+\frac{1}{4} - \frac{5i}{32}\mu -\frac{5}{32}\mu^2+\frac{385i}{2048}\mu^3 + \frac{1}{4}\mu^4\\
\zeta_5 &= -1-\mu^4-5\mu^8 ,
\end{align}
where $\zeta_5$ is the regular root found in the previous subsection. Now put
\begin{align}
\tilde{p}(x)=\mu^4(x-\zeta_1)(x-\zeta_2)(x-\zeta_3)(x-\zeta_4)(x-\zeta_5)
\end{align}
and expand it. The result, by Maple, is
\begin{multline}
\mu^4x^5-5\mu^{12}x^4 + \left(\frac{23205}{16384}\mu^8+\frac{45}{8}\mu^{12}\right)x^3
-\left(\frac{5435}{32768}\mu^8+\frac{195697915}{33554432}\mu^{12}\right)x^2 \\
+ \left( \frac{2575665}{2097152}\mu^8+\frac{5696429035}{1073741824}\mu^{12}-1 \right)x +
\frac{8453745}{2097152}\mu^8 -\frac{5355037365}{1073741824}\mu^{12}-1
\end{multline}
which equals
\begin{align}
\ensuremath{\varepsilon} x^5 -x-1-5\ensuremath{\varepsilon}^3 x^4 + (\frac{23205}{16384}\ensuremath{\varepsilon}^2+\frac{45}{8}\ensuremath{\varepsilon}^3)x^3 - (\frac{5435}{32768}\ensuremath{\varepsilon}^2+\cdots)x^2+O(\ensuremath{\varepsilon}^2)
\end{align}
As we see, this equation is remarkably close to the original, although we see changes in all the coefficients. The backward error is $O(\mu^8)$, i.e., $O(\ensuremath{\varepsilon}^2)$. Thus for algebraic equations it's possible to talk about simultaneous backward error.
\subsection{A hyperasymptotic example}
In \cite[sect.~15.3, pp.~285-288]{Boyd(2014)}, Boyd takes up the perturbation series expansion of the root near $-1$ of
\begin{align}
f(x,\ensuremath{\varepsilon})=1+x+\ensuremath{\varepsilon} \mathrm{sech}\left(\frac{x}{\ensuremath{\varepsilon}}\right) = 0\>,
\end{align}
a problem he took from \cite[p.~22]{Holmes(1995)}. After computing the desired expansion using a two-variable technique, Boyd then sketches an alternative approach suggested by one of us (based on \cite{Corless(1996)}), namely to use the Lambert $W$ function. Unfortunately, there are a number of sign errors in Boyd's equation (15.28). We take the opportunity here to offer a correction, together with a residual-based analysis that confirms the validity of the correction. First, the erroneous formula: Boyd has
\begin{align}
z_0 = \frac{W(-2e^{\sfrac{1}{\ensuremath{\varepsilon}}})\ensuremath{\varepsilon}-1}{\ensuremath{\varepsilon}}
\end{align}
and $x_0=-\ensuremath{\varepsilon} z_0$, so allegedly $x_0=1-\ensuremath{\varepsilon} W(-2e^{\sfrac{1}{\ensuremath{\varepsilon}}})$. This can't be right: as $\ensuremath{\varepsilon}\to0^+$, $e^{\sfrac{1}{\ensuremath{\varepsilon}}}\to\infty$ and the argument to $W$ is negative and large; but $W$ is real only if its argument is between $-e^{-1}$ and $0$, if it's negative at all. We claim that the correct formula is
\begin{align}
x_0 = -1-\ensuremath{\varepsilon} W(2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}) \label{star}
\end{align}
which shows that the errors in Boyd's equation (15.28) are mere sign slips. Indeed, Boyd's derivation is correct up to the last step; rather than fill in the algebraic details of the derivation of formula~\eqref{star}, we here verify that it works by computing the residual:
\begin{align}
\Delta_0 = 1+x_0 + \ensuremath{\varepsilon} \mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right).
\end{align}
For notational simplicity, we will omit the argument to the Lambert $W$ function and just write $W$ for $W(2e^{-\sfrac{1}{\ensuremath{\varepsilon}}})$. Then, note that $\mathrm{sech}(\sfrac{x_0}{\ensuremath{\varepsilon}}) = \mathrm{sech}(\sfrac{1+\ensuremath{\varepsilon} W}{\ensuremath{\varepsilon}})$ since $\mathrm{sech}$ is even, and that
\begin{align}
\mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = \frac{2}{\displaystyle e^{\sfrac{x_0}{\ensuremath{\varepsilon}}}+e^{-\sfrac{x_0}{\ensuremath{\varepsilon}}}} = \frac{2}{\displaystyle e^{(\sfrac{1}{\ensuremath{\varepsilon}}) +W}+e^{-\sfrac{1}{\ensuremath{\varepsilon}}-W}}\>.
\end{align}
Now, by definition,
\begin{align}
We^W = 2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}
\end{align}
and thus we obtain
\begin{align}
e^W = \frac{2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}}{W} \qquad \textrm{and} \qquad e^{-W} = \frac{We^{\sfrac{1}{\ensuremath{\varepsilon}}}}{2}\>.
\end{align}
It follows that
\begin{align}
\mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = \frac{2}{\displaystyle \sfrac{2}{W}+\sfrac{W}{2}} = \frac{W}{\displaystyle 1+\sfrac{W^2}{4}}\>, \label{sechW}
\end{align}
and hence the residual is
\begin{align}
\Delta_0 &= 1+(-1-\ensuremath{\varepsilon} W)+\ensuremath{\varepsilon} \frac{W}{\displaystyle 1+\sfrac{W^2}{4}}
= \frac{\displaystyle -\ensuremath{\varepsilon} W(1+\sfrac{W^2}{4}) + \ensuremath{\varepsilon} W}{\displaystyle 1+\sfrac{W^2}{4} } \\
&= \frac{\displaystyle -\sfrac{\ensuremath{\varepsilon} W^3}{4}}{\displaystyle 1+\sfrac{W^2}{4}} = \frac{-\ensuremath{\varepsilon} W^3}{4+ W^2} \nonumber \>.
\end{align}
Now $W= W(2e^{-1/\ensuremath{\varepsilon}})$ and as $\ensuremath{\varepsilon}\to 0^+$, $2e^{-1/\ensuremath{\varepsilon}}\to 0$ rapidly; since the Taylor series for $W(z)$ starts as $W(z)= z-z^2+\frac{3}{2}z^3+\ldots$, we have that $W(2e^{-\sfrac{1}{\ensuremath{\varepsilon}}})\sim 2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}$ and therefore
\begin{align}
\Delta_0 = -2\ensuremath{\varepsilon} e^{-\sfrac{3}{\ensuremath{\varepsilon}}}+O\left(\ensuremath{\varepsilon} e^{-\sfrac{4}{\ensuremath{\varepsilon}}}\right)\>.
\end{align}
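The closed form for $\Delta_0$ can be checked numerically as well. The Python sketch below computes $W$ by a simple Newton iteration (our own, just to stay library-free) and compares the directly evaluated residual with the formula:

```python
import math

def lambert_w(t, w=0.5):
    # principal branch of W via Newton iteration on w*exp(w) = t, for t > 0
    for _ in range(60):
        ew = math.exp(w)
        w -= (w * ew - t) / (ew * (1.0 + w))
    return w

eps = 0.5
W = lambert_w(2.0 * math.exp(-1.0 / eps))
x0 = -1.0 - eps * W
resid = 1.0 + x0 + eps / math.cosh(x0 / eps)   # direct residual
closed = -eps * W**3 / (4.0 + W * W)           # the formula above
print(resid, closed)  # the two agree to machine accuracy
```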
We see that this residual is very small indeed. But we can say even more. Boyd leaves us the exercise of computing higher order terms; here is our solution to the exercise. A Newton correction would give us
\begin{align}
x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}
\end{align}
and we have already computed $f(x_0)=\Delta_0$. What is $f'(x_0)$? Since $f(x) = 1+x+\ensuremath{\varepsilon}\mathrm{sech}(\sfrac{x}{\ensuremath{\varepsilon}})$, this derivative is
\begin{align}
f'(x) = 1-\mathrm{sech}\left(\frac{x}{\ensuremath{\varepsilon}}\right)\mathrm{tanh}\left(\frac{x}{\ensuremath{\varepsilon}}\right)\>.
\end{align}
Simplifying similarly to equation \eqref{sechW}, and recalling that $\tanh$ is odd while $\mathrm{sech}$ is even, we obtain
\begin{align}
\mathrm{tanh}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = -\frac{e^{1/\ensuremath{\varepsilon} +W} - e^{-1/\ensuremath{\varepsilon}-W}}{e^{1/\ensuremath{\varepsilon}+W}+e^{-1/\ensuremath{\varepsilon}-W}} = -\frac{\frac{2}{W}-\frac{W}{2}}{\frac{2}{W}+\frac{W}{2}} = \frac{W^2-4}{4+W^2}\>.
\end{align}
Thus
\begin{align}
f'(x_0) &= 1-\mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right)\mathrm{tanh}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right)
= 1+ \frac{\displaystyle W(1-\sfrac{W^2}{4})}{\displaystyle (1+\sfrac{W^2}{4})^2}\>.
\end{align}
It follows that
\begin{align}
x_1 &= x_0 - \frac{\Delta_0}{f'(x_0)}
= -1-\ensuremath{\varepsilon} W+\frac{\displaystyle\sfrac{\ensuremath{\varepsilon} W^3}{4+W^2}}{\displaystyle 1- \frac{W(1-\sfrac{W^2}{4})}{(1+\sfrac{W^2}{4})^2}} \\
&= -1 -\ensuremath{\varepsilon} W+ \frac{\ensuremath{\varepsilon} W^3(4+W^2)}{16-16W+8W^2+4W^3+W^4}\\
&= -1-\ensuremath{\varepsilon} W+ \frac{\ensuremath{\varepsilon}}{4}W^3+\frac{\ensuremath{\varepsilon}}{4}W^4+\frac{3}{16}\ensuremath{\varepsilon} W^5-\frac{11}{64}\ensuremath{\varepsilon} W^6+O(W^7)
\end{align}
Finally, the residual of $x_1$ is
\begin{align}
\Delta_1 = 4\ensuremath{\varepsilon} e^{-\sfrac{7}{\ensuremath{\varepsilon}}}+O(\ensuremath{\varepsilon} e^{-\sfrac{8}{\ensuremath{\varepsilon}}})\>. \label{Newtresid}
\end{align}
We thus see an example of the use of $f'(x_0)$ instead of just $A$, as discussed in section \ref{genframe}, to approximately double the number of correct terms in the approximation.
This analysis can be implemented in Maple as follows:
\lstinputlisting{Hyperasymptotic}
Note that we had to use the MultiSeries package \cite{Salvy(2010)} to expand the series in equation \eqref{Newtresid}, in order to understand how accurate $z_2$ was. The expansion $z_2$ is slightly more lacunary than the two-variable expansion in \cite{Boyd(2014)}, because we have a zero coefficient for $W^2$.
\section{Divergent Asymptotic Series}
Before we begin, a note about the section title: some authors give the impression that the word ``asymptotic'' is used \textsl{only} for divergent series,
and so the title might seem redundant. But the proper definition of an asymptotic series can include convergent series (see, e.g., \cite{Bruijn(1981)}), as it means that the relevant limit is not as the number of terms $N$ goes to infinity, but rather as the variable in question (be it \ensuremath{\varepsilon}, or $x$, or whatever) approaches a distinguished point (be it 0, or infinity, or whatever). In this sense, an asymptotic series might diverge as $N$ goes to infinity, or it might converge, but typically we don't care. We concentrate in this section on divergent asymptotic series.
Beginning students are often confused when they learn the usual ``rule of thumb'' for optimal accuracy when using divergent asymptotic series, namely to truncate the series \textsl{before} adding in the smallest (magnitude) term. This rule is usually motivated by an analogy with \textsl{convergent} alternating series, where the error is less than the magnitude of the first term neglected. But why should this work (if it does) for divergent series?
The answer we present in this section isn't as clear-cut as we would like, but nonetheless we find it explanatory. Perhaps you and your students will, too. The basis for the answer is that one can measure the residual $\Delta$ that arises on truncating the series at, say, $M$ terms, and choose $M$ to minimize the residual. Since the forward error is bounded by the condition number times the size of the residual, by minimizing $\|\Delta\|$ one minimizes a bound on the forward error. It often turns out that this method gives the same $M$ as the rule of thumb, though not always.
An example may clarify this. We use the large-$x$ asymptotics of $J_0(x)$, the zeroth-order Bessel function of the first kind. In \cite[section 10.17(i)]{NIST:DLMF}, we find the following asymptotic series, which is attributed to Hankel:
\begin{align}
J_0(x) = \left(\frac{2}{\pi x}\right)^{\sfrac{1}{2}}\left( A(x)\cos\left(x-\frac{\pi}{4}\right)-B(x)\sin\left(x-\frac{\pi}{4}\right)\right)
\end{align}
where
\begin{align}
A(x) = \sum_{k\geq 0} \frac{a_{2k}}{x^{2k}} \qquad \textrm{and}\qquad
B(x) = \sum_{k\geq 0} \frac{a_{2k+1}}{x^{2k+1}} \label{twoseries}
\end{align}
and where
\begin{align}
a_0 &= 1 \nonumber\\
a_k &= \frac{(-1)^{\lceil k/2 \rceil} }{k! 8^k}\prod_{j=1}^k (2j-1)^2\>.
\end{align}
For the first few $a_k$s, we get
\begin{align}
a_0=1, a_1 = -\frac{1}{8}, a_2= -\frac{9}{128}, a_3 = \frac{75}{1024}\>,
\end{align}
and so on. The ratio test immediately shows the two series \eqref{twoseries} diverge for all finite~$x$.
Luckily, we always have to truncate anyway, and if we do, the forward errors get arbitrarily small so long as we take $x$ arbitrarily large. Because the Bessel functions are so well-studied, we have alternative methods for computation, for instance
\begin{align}
J_0(x) = \frac{1}{\pi}\int_0^\pi \cos(x\sin\theta)d\theta
\end{align}
which, given $x$, can be evaluated numerically (although it's ill-conditioned in a relative sense near any zero of $J_0(x)$). So we can directly compute the forward error.
But let's pretend that we can't. We have the asymptotic series, and not much more. Of course we have to have a defining equation---Bessel's differential equation
\begin{align}
x^2y''+xy'+x^2y=0
\end{align}
with the appropriate normalizations at $\infty$. We look at
\begin{align}
y_{N,M} = \left(\frac{2}{\pi x}\right)^{\sfrac{1}{2}}\left(A_N(x)\cos\left(x-\frac{\pi}{4}\right)-B_M(x)\sin\left(x-\frac{\pi}{4}\right)\right)
\end{align}
where
\begin{align}
A_N(x) = \sum_{k=0}^N \frac{a_{2k}}{x^{2k}}\qquad \textrm{and}\qquad
B_M(x) = \sum_{k=0}^M \frac{a_{2k+1}}{x^{2k+1}}\>.
\end{align}
Inspection shows that there are only two cases that matter: whether we end on an even term $a_{2k}$ or on an odd term $a_{2k+1}$; the first term omitted is then odd or even, respectively. A little work shows that the residual
\begin{align}
\Delta = x^2y''_{N,M} + x y'_{N,M} + x^2y_{N,M}
\end{align}
is just
\begin{align}
\frac{\displaystyle (k+\sfrac{1}{2})^2 a_k}{\displaystyle x^{k+\sfrac{1}{2}}} \cdot \left\{ \begin{array}{c} \cos(x-\sfrac{\pi}{4})\\ \sin(x-\sfrac{\pi}{4})\end{array}\right\}
\end{align}
if the final term \textsl{kept}, odd or even, is $a_k$. If even, then multiply by $\cos(x-\sfrac{\pi}{4})$; if odd, then $\sin(x-\sfrac{\pi}{4})$.
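This exact residual formula can be spot-checked numerically. Keeping only the $a_0$ term (and dropping the constant factor $\sqrt{\sfrac{2}{\pi}}$, which the linear operator simply carries along), a central-difference evaluation of $x^2y''+xy'+x^2y$ reproduces $\frac{1}{4}x^{-\sfrac{1}{2}}\cos(x-\sfrac{\pi}{4})$; a Python sketch:

```python
import math

def y(x):
    # a_0 term only, without the sqrt(2/pi) factor
    return math.cos(x - math.pi / 4) / math.sqrt(x)

def residual(x, h=1e-4):
    # x^2 y'' + x y' + x^2 y by central differences
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return x * x * d2 + x * d1 + x * x * y(x)

x = 2.3
exact = 0.25 * math.cos(x - math.pi / 4) / math.sqrt(x)
print(residual(x), exact)  # agree to about the finite-difference accuracy
```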
Let's pause a moment. The algebra to show this is a bit finicky but not hard (the equation is, after all, linear). This end result is an extremely simple (and exact!) formula for $\Delta$. The finite series $y_{N,M}$ is then the exact solution to
\begin{align}
x^2y''+xy'+x^2y &= \Delta\\
&= \frac{\displaystyle (k+\sfrac{1}{2})^2 a_k}{x^{k+\sfrac{1}{2}}} \cdot \left\{ \begin{array}{c} \cos(x-\frac{\pi}{4})\\ \sin(x-\frac{\pi}{4})\end{array}\right\}
\end{align}
and, provided $x$ is large enough, this is only a small perturbation of Bessel's equation. In many modelling situations, such a small perturbation may be of direct physical significance, and we'd be done. Here, though, Bessel's equation typically arises as an intermediate step, after separation of variables, say. Hence one might be interested in the forward error. By the theory of Green's functions, we may express this as
\begin{align}
J_0(x) - y_{N,M}(x) = \int_x^\infty K(x,\xi)\Delta(\xi)d\xi
\end{align}
for a suitable kernel $K(x,\xi)$. The obvious conclusion is that if $\Delta$ is small then so will $J_0(x)-y_{N,M}(x)$; but $K(x,\xi)$ will have some effect, possibly amplifying the effects of $\Delta$, or perhaps even damping its effects. Hence, the connection is indirect.
To have an error in $\Delta$ of at most $\ensuremath{\varepsilon}$, we must have
\begin{align}
\left(k+\frac{1}{2}\right)^2\frac{|a_k|}{x^{k+\sfrac{1}{2}}}\leq\ensuremath{\varepsilon}
\end{align}
(remember, $x>0$). This will happen only if
\begin{align}
x\geq \left(\left(k+\frac{1}{2}\right)^2 \frac{|a_k|}{\ensuremath{\varepsilon}}\right)^{2/(2k+1)}
\end{align}
and this, for fixed $k$, goes to $\infty$ as $\ensuremath{\varepsilon}\to0$.
Alternatively, we may ask which $k$, for a fixed $x$, minimizes
\begin{align}
\left(k+\frac{1}{2}\right)^2\frac{|a_k|}{x^{k+\sfrac{1}{2}}}
\end{align}
and this answers the truncation question in a rational way. In this particular case, minimizing $\|\Delta\|$ doesn't necessarily minimize the forward error (although it's close). For $x=2.3$, for instance, the sequence $(k+\sfrac{1}{2})^2|a_k|x^{-k-\sfrac{1}{2}}$ is (omitting the factor $\sqrt{\sfrac{2}{\pi}}$)
\begin{align}
\begin{array}{ccccccc}
k & 0 & 1 & 2 & 3 & 4 & 5\\
A_k & 0.165 & 0.081 & 0.055 & 0.049 & 0.054 & 0.070
\end{array}
\end{align}
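These numbers are easy to regenerate; since only magnitudes enter, the product formula $|a_k|=\prod_{j=1}^k(2j-1)^2/(k!\,8^k)$ suffices. A few lines of Python:

```python
from fractions import Fraction as F

def abs_a(k):
    # |a_k| = (1 * 3 * ... * (2k-1))^2 / (k! * 8^k)
    num, den = 1, 1
    for j in range(1, k + 1):
        num *= (2 * j - 1) ** 2
        den *= j * 8
    return F(num, den)

x = 2.3
vals = [float(abs_a(k)) * (k + 0.5) ** 2 / x ** (k + 0.5) for k in range(6)]
print([round(v, 3) for v in vals])
# -> [0.165, 0.081, 0.055, 0.049, 0.054, 0.07]
```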
The clear winner seems to be $k=3$. This suggests that for $x=2.3$, the best series to take is
\begin{align}
y_3 = \left(\frac{2}{\pi x}\right)^{\sfrac{1}{2}} \left( \left(1-\frac{9}{128x^2}\right)\cos\left(x-\frac{\pi}{4}\right) + \left(\frac{1}{8x}-\frac{75}{1024x^3}\right)\sin\left(x-\frac{\pi}{4}\right)\right)\>.
\end{align}
This gives $5.454\cdot 10^{-2}$ for $x=2.3$. But the cosine versus sine plays a role here: $\cos(2.3-\sfrac{\pi}{4})\doteq 0.056$ while $\sin(2.3-\sfrac{\pi}{4})\doteq0.998$, so we should have included this. When we do, the estimates for $\Delta_0,\Delta_2$ and $\Delta_4$ are all significantly reduced---and this changes our selection, and makes $k=4$ the right choice; $\Delta_6>\Delta_4$ as well (either way). But the influence of the integral is mollifying.
Comparing to a better answer (computed via the integral formula), $0.0555398$, we see that the error is about $8.8\cdot 10^{-4}$ whereas $((4+\sfrac{1}{2})^2a_4/2.3^{4+\sfrac{1}{2}})\cos(2.3-\sfrac{\pi}{4})$ is $3.06\cdot 10^{-3}$; hence the residual overestimates the error slightly.
How does the rule of thumb do? The first term that is neglected here is $(\sfrac{1}{x})^{\sfrac{1}{2}}a_5x^{-5}\sin(x-\sfrac{\pi}{4})$ which is $\sim2.3\cdot 10^{-3}$ apart from the $(\sfrac{2}{\pi})^{\sfrac{1}{2}}=0.797$ factor, so about $1.86\cdot10^{-3}$. The \textsl{next} term is, however, $(\sfrac{2}{\pi x})^{\sfrac{1}{2}}a_6x^{-6}\cos(x-\sfrac{\pi}{4})\doteq -1.14\cdot 10^{-4}$ which is smaller yet, suggesting that we should keep the $a_5$ term.
But we shouldn't. Stopping with $a_4$ gives a better answer, just as the residual suggests that it should.
We emphasize that this is only a slightly more rational rule of thumb, because minimizing $\|\Delta\|$ only minimizes a bound on the forward error, not the forward error itself. Still, we have not seen this discussed in the literature before. A final comment is that the defining equation, and its scale, also set the scale for what counts as a ``small'' residual.
So, a justification for the ``rule of thumb'' would be as follows. In our general scheme,
\begin{align}
Au_{n+1} = -[\ensuremath{\varepsilon}^{n+1}]\Delta_n
\end{align}
and thus, loosely speaking,
\begin{align}
u_{n+1} \sim -A^{-1}\Delta_n + O(\ensuremath{\varepsilon}^{n+1})\>.
\end{align}
Thus, if we stop when $u_{n+1}$ is smallest, this would tend to happen at the same integer $n$ at which $\Delta_n$ is smallest.
This isn't always going to be true. For instance, suppose $A$ is a matrix with largest singular value $\sigma_1$ and smallest $\sigma_N>0$, with associated singular vectors $\hat{u}_k$ and $\hat{v}_k$, so that
\begin{align}
A\hat{v}_k = \sigma_k\hat{u}_k\>.
\end{align}
Then, if $u_{n+1}$ is like $\hat{v}_1$, $\Delta_n$ will be like $\sigma_1\hat{u}_1$, which can be substantially larger; contrariwise, if $u_{n+1}$ is like $\hat{v}_N$ then $A\hat{v}_N=\sigma_N\hat{u}_N$ and $\Delta_n$ can be substantially smaller. The point is that the directions of $\Delta_n$ can change between steps in the perturbation expansion; we thus expect correlation but not identity.
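A two-by-two numerical illustration (our own) makes the point concrete: equal-sized corrections aligned with different right singular vectors of $A$ produce residuals that differ by the full ratio $\sigma_1/\sigma_N$.

```python
# A deliberately ill-conditioned 2x2 example: unit corrections aligned
# with different right singular vectors of A give residuals that differ
# by the full ratio sigma_1/sigma_2.
import numpy as np

A = np.diag([100.0, 0.01])      # singular values sigma_1 = 100, sigma_2 = 0.01
v1 = np.array([1.0, 0.0])       # right singular vector paired with sigma_1
v2 = np.array([0.0, 1.0])       # right singular vector paired with sigma_2

r1 = np.linalg.norm(A @ v1)     # residual scale if u_{n+1} is like v1
r2 = np.linalg.norm(A @ v2)     # residual scale if u_{n+1} is like v2
print(r1, r2, r1 / r2)          # equal-sized corrections, residuals 10^4 apart
```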
\section{Initial-Value problems}
BEA has successfully been applied to the \textsl{numerical} solution of differential equations for a long time, now. Examples include the works of Enright since the 1980s, e.g., \cite{Enright(1989)b,Enright(1989)a}, and indeed the Lanczos $\tau$-method is yet older~\cite{Lanczos(1988)}. It was pointed out in \cite{Corless(1992)} and \cite{Corless(1993)b} that BEA could be used for perturbation and other series solutions of differential equations, also. We here display several examples illustrating this fact. We use regular expansion, matched asymptotic expansions, the renormalization group method, and the method of multiple scales.
\subsection{Duffing's Equation}
This proposed way of interpreting solutions obtained by perturbation methods has interesting advantages for the analysis of series solutions to differential equations. Consider for example an unforced weakly nonlinear Duffing oscillator, which we take from \cite{Bender(1978)}:
\begin{align}
y''+y+\varepsilon y^3=0 \label{Duffing}
\end{align}
with initial conditions $y(0)=1$ and $y'(0)=0$. As usual, we assume that $0<\varepsilon\ll 1$.
Our discussion of this example does not provide a new method of solving this problem, but instead it improves the interpretation of the quality of solutions obtained by various methods.
\subsubsection{Regular expansion}
The classical perturbation analysis supposes that the solution to this equation can be written as the power series
\begin{align}
y(t) = y_0(t) + y_1(t)\ensuremath{\varepsilon} + y_2(t)\ensuremath{\varepsilon}^2+y_3(t)\ensuremath{\varepsilon}^3+\cdots\>.
\end{align}
Substituting this series in equation \eqref{Duffing} and solving the equations obtained by equating to zero the coefficients of powers of $\ensuremath{\varepsilon}$ in the residual, we find $y_0(t)$ and $y_1(t)$ and we thus have the solution
\begin{align}
z_1(t)= \cos( t) +\ensuremath{\varepsilon} \left( \frac{1}{32}\cos(3t) -\frac{1}{32}\cos( t) -\frac{3}{8}t\sin( t) \right)\>. \label{classical1st}
\end{align}
The difficulty with this solution
is typically characterized in one of two ways. Physically, the secular term $t\sin t$ shows that our simple perturbative method has failed, since energy conservation prohibits unbounded solutions. Mathematically, the secular term $t\sin t$ shows that our method has failed, since the periodicity of the solution contradicts the existence of secular terms.
Both these characterizations are correct, but require foreknowledge of what is physically meaningful or of whether the solutions are bounded. In contrast, interpreting \eqref{classical1st} from the backward error viewpoint is much simpler. To compute the residual, we simply substitute $z_1$ in equation \eqref{Duffing}; that is, the residual is defined by
\begin{align}
\Delta_1(t) = z_1'' + z_1 + \ensuremath{\varepsilon} z_1^3\>.
\end{align}
For the first-order solution of equation \eqref{classical1st}, the residual is
\begin{multline}
\Delta_1(t) = \Big( -\tfrac {3}{64}\cos( t) +\tfrac{3}{128}\cos( 5t) +\tfrac{3}{128}\cos( 3t) -
\tfrac{9}{32}t\sin(t)\\ -\tfrac{9}{32}t\sin( 3t)\Big) \ensuremath{\varepsilon}^2+O( \ensuremath{\varepsilon}^3) \>. \label{ClassicalRes1st}
\end{multline}
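The same check can be scripted in any computer algebra system; here is a Python (sympy) sketch of our own, standing in for the Maple computation, which confirms that the $O(1)$ and $O(\ensuremath{\varepsilon})$ parts of the residual vanish identically and that the $O(\ensuremath{\varepsilon}^2)$ part is as printed in equation \eqref{ClassicalRes1st}:

```python
# sympy check (standing in for the Maple computation) of the residual of the
# first-order regular perturbation solution z_1 of Duffing's equation.
import sympy as sp

t, eps = sp.symbols('t epsilon')
y0 = sp.cos(t)
y1 = (sp.cos(3*t) - sp.cos(t))/32 - sp.Rational(3, 8)*t*sp.sin(t)
z1 = y0 + eps*y1

Delta = sp.expand(sp.diff(z1, t, 2) + z1 + eps*z1**3)

# the O(eps^2) coefficient printed in eq. (ClassicalRes1st)
printed = (-sp.Rational(3, 64)*sp.cos(t) + sp.Rational(3, 128)*sp.cos(5*t)
           + sp.Rational(3, 128)*sp.cos(3*t)
           - sp.Rational(9, 32)*t*sp.sin(t) - sp.Rational(9, 32)*t*sp.sin(3*t))

checks = [Delta.coeff(eps, 0),             # O(1) part: should vanish
          Delta.coeff(eps, 1),             # O(eps) part: should vanish
          Delta.coeff(eps, 2) - printed]   # O(eps^2) part minus the printed one
for c in checks:
    # all three are zero; numeric sampling shows only rounding-level values
    print(max(abs(float(c.subs(t, v))) for v in (0.3, 0.7, 1.9)))
```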
$\Delta_1(t)$ is exactly computable. We don't print it all here because it's too ugly, but in figure \ref{ClassicalDuffingRes}, we see that the complete residual grows rapidly.
\begin{figure}
\centering
\includegraphics[width=.55\textwidth]{ClassicalDuffingRes.png}
\caption{Absolute residual for the first-order classical perturbative solution of the unforced weakly nonlinear Duffing equation with $\ensuremath{\varepsilon}=0.1$.}
\label{ClassicalDuffingRes}
\end{figure}
This is due to the secular term $-\tfrac{9}{32}t(\sin(t)+\sin(3t))$ of equation \eqref{ClassicalRes1st}. Thus we come to the conclusion that the secular term contained in the first-order solution obtained in equation \eqref{classical1st} invalidates it, but this time we do not need to know in advance what to expect physically, or to prove that the solution is bounded. This is a slight but sometimes useful gain in simplicity.\footnote{In addition, this method makes it easy to find mistakes of various kinds. For instance, a typo in the 1978 edition of \cite{Bender(1978)} was uncovered by computing the residual. That typo does not seem to be in the later editions, so it's likely that the authors found and fixed it themselves.}
A simple Maple code makes it possible to easily obtain higher-order solutions:
\lstinputlisting{DuffingClassical}
Experiments with this code suggest the conjecture that $\Delta_n=O(t^n\ensuremath{\varepsilon}^{n+1})$. For this to be small, we must have $\ensuremath{\varepsilon} t=o(1)$, or $t<O(\sfrac{1}{\ensuremath{\varepsilon}})$.
\subsubsection{Lindstedt's method}
The failure to obtain an accurate solution on unbounded time intervals by means of the classical perturbation method suggests that another method that eliminates the secular terms will be preferable. A natural choice is Lindstedt's method, which rescales the time variable $t$ in order to cancel the secular terms.
The idea is that if we use a rescaling $\tau=\omega t$ of the time variable and choose $\omega$ wisely, the secular terms from the classical perturbation method will cancel each other out.\footnote{Interpret this as: we choose $\omega$ to keep the residual small over as long a time-interval as possible.} Applying this transformation, equation \eqref{Duffing} becomes
\begin{align}
\omega^2 y''(\tau)+y(\tau)+\ensuremath{\varepsilon} y^3(\tau)=0, \qquad y(0)=1,\enskip y'(0)=0\>.\label{DuffingTau}
\end{align}
In addition to writing the solution as a truncated series
\begin{align}
z_1(\tau) = y_0(\tau)+y_1(\tau)\ensuremath{\varepsilon} \label{ytau}
\end{align}
we expand the scaling factor as a truncated power series in \ensuremath{\varepsilon}:
\begin{align}
\omega=1+\omega_1\ensuremath{\varepsilon}\>. \label{omeg}
\end{align}
Substituting \eqref{ytau} and \eqref{omeg} back in equation \eqref{DuffingTau} to obtain the residual and setting the terms of the residual to zero in sequence, we find the equations
\begin{align}
y_0'' + y_0 =0\>,
\end{align}
so that $y_0=\cos(\tau)$, and
\begin{align}
y_1'' + y_1 = -y_0^3 - 2\omega_1 y_0''
\end{align}
subject to the same initial conditions, $y_0(0)=1, y'_0(0)=0, y_1(0)=0$, and $y_1'(0)=0$. By solving this last equation, we find
\begin{align}
y_1(\tau) =\frac{31}{32}\cos(\tau) +\frac{1}{32}\cos(3\tau) -\frac{3}{8}\tau \sin(\tau)+\omega_1\tau\sin(\tau)\>.
\end{align}
So, we only need to choose $\omega_1=\sfrac{3}{8}$ to cancel out the secular terms containing $\tau\sin(\tau)$. Finally, we simply write the solution $y(t)$ by taking the first two terms of $y(\tau)$ and plug in $\tau=(1+\sfrac{3\ensuremath{\varepsilon}}{8})t$:
\begin{align}
z_1(t) = \cos \tau +\ensuremath{\varepsilon} \left( \frac{31}{32}\cos \tau +\frac{1}{32}\cos 3\tau \right)\>.
\end{align}
This truncated power series can be substituted back in the left-hand side of equation \eqref{Duffing} to obtain an expression for the residual:
\begin{align}
\Delta_1(t) = \left( \frac{171}{128}\cos \left( t \right) +\frac {3}{128}\cos \left( 5t \right) +\frac {9}{16}\cos \left( 3t \right) \right) \ensuremath{\varepsilon}^2+O \left( \ensuremath{\varepsilon}^3\right)
\end{align}
See figure \ref{FstLindstedt}.
\begin{figure}
\centering
\subfigure[First-Order\label{FstLindstedt}]{\includegraphics[width=.48\textwidth]{FstLindstedt.png}}
\subfigure[Second-Order\label{SndLindstedt}]{\includegraphics[width=.48\textwidth]{SndLindstedt.png}}
\caption{Absolute residual for the Lindstedt solutions of the unforced weakly nonlinear Duffing equation with $\ensuremath{\varepsilon}=0.1$.}
\end{figure}
We then do the same with the second term $\omega_2$. The following Maple code has been tested up to order $12$:
\lstinputlisting{DuffingLindstedt}
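As an independent spot-check of the Lindstedt solution (a Python/sympy sketch of our own, not the Maple code above), we can verify numerically that the residual is $O(\ensuremath{\varepsilon}^2)$ with no secular growth in $t$:

```python
# Numeric spot-check that the Lindstedt solution's residual is O(eps^2)
# uniformly in t (no secular growth), using the printed z_1 and omega.
import sympy as sp

t, eps = sp.symbols('t epsilon')
tau = (1 + sp.Rational(3, 8)*eps)*t
z = sp.cos(tau) + eps*(sp.Rational(31, 32)*sp.cos(tau)
                       + sp.Rational(1, 32)*sp.cos(3*tau))
Delta = sp.diff(z, t, 2) + z + eps*z**3

f = sp.lambdify((t, eps), Delta)
vals = [abs(f(tv, 1e-3)) for tv in (1.0, 7.0, 50.0, 500.0)]
print(max(vals))             # stays below ~2e-6, i.e. O(eps^2), even at t = 500
print(abs(f(1.0, 1e-6)))     # shrinks like eps^2 when eps is reduced further
```

Contrast this with the regular perturbation residual, which grows secularly with $t$.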
The significance of this is as follows: The normal presentation of the method first requires a proof (an independent proof) that the reference solution is bounded and therefore the secular term $\ensuremath{\varepsilon} t \sin t$ in the classical solution is spurious. \textsl{But} the residual analysis needs no such proof. It says directly that the classical solution solves not
\begin{align}
f(t,y,y',y'')=0
\end{align}
nor $f+\Delta f=0$ for uniformly small $\Delta$ but rather that the residual \textsl{departs} from 0 and is \textsl{not} uniformly small whereas the residual for the Lindstedt solution \textsl{is} uniformly small.
\subsection{Morrison's counterexample}
In \cite[pp.~192-193]{Omalley(2014)}, we find a discussion of the equation
\begin{align}
y''+y+\ensuremath{\varepsilon}(y')^3+3\ensuremath{\varepsilon}^2(y')=0\>.
\end{align}
O'Malley attributed the equation to \cite{Morrison(1966)}. The equation is one that is supposed to illustrate a difficulty with the (very popular and effective) method of multiple scales. We give a relatively full treatment here because a residual-based approach shows that the method of multiple scales, applied somewhat artfully, can be quite successful and moreover we can demonstrate \textsl{a posteriori} that the method was successful. The solution sketched in \cite{Omalley(2014)} uses the complex exponential format, which one of us used to good effect in his PhD, but in this case the real trigonometric form leads to slightly simpler formul\ae. We are very much indebted to our colleague, Professor Pei Yu at Western, for his careful solution, which we follow and analyze here.\footnote{We had asked him to solve this problem using one of his many computer algebra programs; instead, he presented us with an elegant handwritten solution.}
The first thing to note is that we will use three time scales, $T_0=t$, $T_1=\ensuremath{\varepsilon} t$, and $T_2=\ensuremath{\varepsilon}^2 t$ because the DE contains an $\ensuremath{\varepsilon}^2$ term, which will prove to be important. Then the multiple scales formalism gives
\begin{align}
\frac{d}{dt} = \frac{\partial}{\partial T_0} + \ensuremath{\varepsilon} \frac{\partial}{\partial T_1} + \ensuremath{\varepsilon}^2 \frac{\partial}{\partial T_2} \label{msformalism}
\end{align}
This formalism gives most students some pause, at first: replace an ordinary derivative by a sum of partial derivatives using the chain rule? What could this mean? But soon the student, emboldened by success on simple problems, gets used to the idea, and eventually the conceptual headaches are forgotten.\footnote{This can be made to make sense, after the fact. We imagine $F(T_0,T_1,T_2)$ describing the problem, and $\sfrac{dF}{dt}=\sfrac{\partial F}{\partial T_0}\sfrac{\partial T_0}{\partial t} + \sfrac{\partial F}{\partial T_1}\sfrac{\partial T_1}{\partial t} + \sfrac{\partial F}{\partial T_2}\sfrac{\partial T_2}{\partial t}$, which gives $\sfrac{dF}{dt}=\sfrac{\partial F}{\partial T_0}+\ensuremath{\varepsilon} \sfrac{\partial F}{\partial T_1} + \ensuremath{\varepsilon}^2 \sfrac{\partial F}{\partial T_2}$ if $T_0=t$, $T_1=\ensuremath{\varepsilon} t$, and $T_2=\ensuremath{\varepsilon}^2t$.} But sometimes they return, as with this example.
To proceed, we take
\begin{align}
y=y_0+\ensuremath{\varepsilon} y_1+\ensuremath{\varepsilon}^2 y_2+O(\ensuremath{\varepsilon}^3)
\end{align}
and equate to zero like powers of $\ensuremath{\varepsilon}$ in the residual. The expansion of $\sfrac{d^2 y}{dt^2}$ is straightforward:
\begin{multline}
\left( \frac{\partial}{\partial T_0} + \ensuremath{\varepsilon}\frac{\partial}{\partial T_1}+\ensuremath{\varepsilon}^2\frac{\partial}{\partial T_2}\right)^2(y_0+\ensuremath{\varepsilon} y_1+\ensuremath{\varepsilon}^2 y_2) =\\
\frac{\partial^2 y_0}{\partial T_0^2} + \ensuremath{\varepsilon}\left(\frac{\partial^2 y_1}{\partial T_0^2}+2\frac{\partial^2 y_0}{\partial T_0\partial T_1}\right)
+\ensuremath{\varepsilon}^2\left(\frac{\partial^2 y_2}{\partial T_0^2}+2\frac{\partial^2 y_1}{\partial T_0\partial T_1}+\frac{\partial^2 y_0}{\partial T_1^2}+2\frac{\partial^2 y_0}{\partial T_0\partial T_2}\right)
\end{multline}
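The mixed-derivative bookkeeping here is easy to mistype, so it is worth letting a computer algebra system confirm the $\ensuremath{\varepsilon}^2$ coefficient; a small sympy check (our own):

```python
# sympy confirmation of the eps^2 coefficient of d^2y/dt^2 under the
# multiple-scales substitution d/dt -> d/dT0 + eps d/dT1 + eps^2 d/dT2.
import sympy as sp

eps = sp.symbols('epsilon')
T0, T1, T2 = sp.symbols('T_0 T_1 T_2')
y0, y1, y2 = (sp.Function(name)(T0, T1, T2) for name in ('y_0', 'y_1', 'y_2'))

D = lambda f: sp.diff(f, T0) + eps*sp.diff(f, T1) + eps**2*sp.diff(f, T2)
c2 = sp.expand(D(D(y0 + eps*y1 + eps**2*y2))).coeff(eps, 2)

expected = (sp.diff(y2, T0, 2) + 2*sp.diff(y1, T0, T1)
            + sp.diff(y0, T1, 2) + 2*sp.diff(y0, T0, T2))
print(sp.simplify(c2 - expected))   # 0
```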
For completeness we include the other necessary terms, even though this construction may be familiar to the reader. We have
\begin{multline}
\ensuremath{\varepsilon}\left(\frac{dy}{dt}\right)^3 = \ensuremath{\varepsilon}\left( \left(\frac{\partial}{\partial T_0}+\ensuremath{\varepsilon}\frac{\partial}{\partial T_1}\right)(y_0+\ensuremath{\varepsilon} y_1)\right)^3\\
= \ensuremath{\varepsilon}\left(\frac{\partial y_0}{\partial T_0}\right)^3 + 3\ensuremath{\varepsilon}^2\left(\frac{\partial y_0}{\partial T_0}\right)^2 \left(\frac{\partial y_0}{\partial T_1}+\frac{\partial y_1}{\partial T_0}\right)+\cdots\>,
\end{multline}
and the expansion of $y=y_0+\ensuremath{\varepsilon} y_1+\ensuremath{\varepsilon}^2 y_2$ itself is straightforward; also
\begin{align}
3\ensuremath{\varepsilon}^2\left( \left(\frac{\partial}{\partial T_0}+\cdots\right)(y_0+\cdots)\right) = 3\ensuremath{\varepsilon}^2\frac{\partial y_0}{\partial T_0}+\cdots
\end{align}
is at this order likewise straightforward. Setting the $O(\ensuremath{\varepsilon}^0)$ coefficient of the residual to zero gives
\begin{align}
\frac{\partial^2 y_0}{\partial T_0^2}+y_0=0
\end{align}
and without loss of generality we take as solution
\begin{align}
y_0 = a(T_1,T_2)\cos(T_0+\varphi(T_1,T_2))
\end{align}
by shifting the origin to a local maximum when $T_0=0$. For notational simplicity put $\theta=T_0+\varphi(T_1,T_2)$. At $O(\ensuremath{\varepsilon}^1)$ the equation is
\begin{align}
\frac{\partial^2 y_1}{\partial T_0^2} + y_1 = -\left(\frac{\partial y_0}{\partial T_0}\right)^3 - 2\frac{\partial^2 y_0}{\partial T_0\partial T_1}
\end{align}
where the first term on the right comes from the $\ensuremath{\varepsilon}\dot{y}^3$ term whilst the second comes from the multiple scales formalism. Using $\sin^3\theta=\sfrac{3}{4}\sin\theta-\sfrac{1}{4}\sin 3\theta$, this gives
\begin{align}
\frac{\partial^2 y_1}{\partial T_0^2}+y_1 = \left(2\frac{\partial a}{\partial T_1}+\frac{3}{4} a^3\right)\sin\theta + 2a\frac{\partial \varphi}{\partial T_1}\cos\theta - \frac{a^3}{4}\sin 3\theta
\end{align}
and to suppress the resonance that would generate secular terms we put
\begin{align}
\frac{\partial a}{\partial T_1} = -\frac{3}{8}a^3 \quad\textrm{and}\qquad \frac{\partial\varphi}{\partial T_1}=0\>. \label{525}
\end{align}
Then $y_1 = \frac{a^3}{32}\sin 3\theta$ solves this equation and has $y_1(0)=0$, which does not disturb the initial condition $y_0(0)=a_0$, although since $\sfrac{dy_1}{dT_0}=\sfrac{3a^3}{32}\cos3\theta$ the derivative of $y_0+\ensuremath{\varepsilon} y_1$ will differ by $O(\ensuremath{\varepsilon})$ from zero at $T_0=0$. This does not matter, and we may adjust it by the choice of initial conditions for $\varphi$, later.
The $O(\ensuremath{\varepsilon}^2)$ term is somewhat finicky, being
\begin{multline}
\frac{\partial^2 y_2}{\partial T_0^2}+y_2 = -2\frac{\partial^2 y_0}{\partial T_0\partial T_2} -2\frac{\partial^2 y_1}{\partial T_0\partial T_1} \\
-3\left(\frac{\partial y_0}{\partial T_0}\right)^2 \left(\frac{\partial y_0}{\partial T_1}+\frac{\partial y_1}{\partial T_0}\right) - \frac{\partial^2 y_0}{\partial T_1^2}-3\frac{\partial y_0}{\partial T_0}
\end{multline}
where the last term came from $3(\dot{y})\ensuremath{\varepsilon}^2$. Proceeding as before, and using $\partial\varphi/\partial T_1=0$ and $\sfrac{\partial a}{\partial T_1}=-\sfrac{3}{8}\>a^3$ as well as some other trigonometric identities, we find the right-hand side can be written as
\begin{align}
\left(2\frac{\partial a}{\partial T_2}+3a\right)\sin\theta+\left(2a\frac{\partial\varphi}{\partial T_2}-\frac{9}{128}a^5\right)\cos\theta-\frac{27}{128}a^5\cos3\theta+\frac{9}{128}a^5\cos5\theta\>.
\end{align}
Again setting the coefficients of $\sin\theta$ and $\cos\theta$ to zero to prevent resonance we have
\begin{align}
\frac{\partial a}{\partial T_2}=-\frac{3}{2}a \label{528}
\end{align}
and
\begin{align}
\frac{\partial \varphi}{\partial T_2} = \frac{9}{256}a^4\qquad (a\neq0).
\end{align}
This leaves
\begin{align}
y_2= \frac{27}{1024}a^5\cos3\theta - \frac{3 a^5}{1024}\cos5\theta
\end{align}
again setting the homogeneous part to zero.
Now comes a bit of multiple scales magic: instead of solving equations \eqref{525} and \eqref{528} in sequence, as would be usual, we write
\begin{align}
\frac{da}{dt} &= \frac{\partial a}{\partial T_0} + \ensuremath{\varepsilon} \frac{\partial a}{\partial T_1} + \ensuremath{\varepsilon}^2\frac{\partial a}{\partial T_2}
= 0 + \ensuremath{\varepsilon}\left(-\frac{3}{8}a^3\right) + \ensuremath{\varepsilon}^2\left(-\frac{3}{2}a\right) \nonumber \\
&= -\frac{3}{8}\ensuremath{\varepsilon} a(a^2+4\ensuremath{\varepsilon})\>. \label{magic}
\end{align}
Using $a=2R$ this is equation (6.50) in \cite{Omalley(2014)}. Similarly
\begin{align}
\frac{d\varphi}{dt} &= \ensuremath{\varepsilon} \frac{\partial\varphi}{\partial T_1}+\ensuremath{\varepsilon}^2 \frac{\partial\varphi}{\partial T_2}
= 0+\ensuremath{\varepsilon}^2 \frac{9}{256} a^4 \label{moremagic}
\end{align}
and once $a$ has been identified, $\varphi$ can be found by quadrature. Solving \eqref{magic} and \eqref{moremagic} by Maple,
\begin{align}
a = \frac{\sqrt{\ensuremath{\varepsilon}} a_0}{\displaystyle \sqrt{\ensuremath{\varepsilon} e^{3\ensuremath{\varepsilon}^2 t}+\frac{a_0^2}{4}(e^{3\ensuremath{\varepsilon}^2 t}-1)}} = 2\frac{\sqrt{\ensuremath{\varepsilon}} a_0}{\sqrt{u}}
\end{align}
and
\begin{align}
\varphi = -\frac{3}{16}\ensuremath{\varepsilon}^2\ln u + \frac{9}{16}\ensuremath{\varepsilon}^4 t-\frac{3}{16}\frac{\ensuremath{\varepsilon}^2a_0^2}{u}
\end{align}
where $u=4\ensuremath{\varepsilon} e^{3\ensuremath{\varepsilon}^2 t}+a_0^2(e^{3\ensuremath{\varepsilon}^2 t}-1)$. The residual is (again by Maple)
\begin{align}
\footnotesize \ensuremath{\varepsilon}^3\left( \frac{9}{16}a_0^3\cos3t+a_0^7\left( -\frac{351}{4096}\sin t - \frac{9}{512} \sin 7t+\frac{333}{4096}\sin 3t + \frac{459}{4096}\sin 5t\right)\right)+O(\ensuremath{\varepsilon}^4)
\end{align}
and there is no secularity visible in this term.
It is important to note that the construction of the equation \eqref{magic} for $a(t)$ required both $\sfrac{\partial a}{\partial T_1}$ and $\sfrac{\partial a}{\partial T_2}$. Either one alone gives misleading or inconsistent answers. While it may be obvious to an expert that both terms must be used at once, the situation is somewhat unusual and a novice or casual user of perturbation methods may well wish reassurance. (We did!) Computing (and plotting) the residual $\Delta=\ddot{z}+z+\ensuremath{\varepsilon}(\dot{z})^3+3\ensuremath{\varepsilon}^2\dot{z}$ does just that (see figure \ref{YuResidual}).
\begin{figure}
\centering
\includegraphics[width=.55\textwidth]{YuResidual.png}
\caption{The residual $|\Delta_3|$ divided by $\ensuremath{\varepsilon}^3a$, with $\ensuremath{\varepsilon}=0.1$, where $a=O(e^{-\sfrac{3}{2}\>\ensuremath{\varepsilon}^2 t})$, on $0\leq t\leq \sfrac{10\ln(10)}{\ensuremath{\varepsilon}^2}$ (at which point $a=10^{-15}$). We see that $|\sfrac{\Delta_3}{\ensuremath{\varepsilon}^3a}|<1$ on this entire interval.}
\label{YuResidual}
\end{figure}
It is simple to verify that, say, for $\ensuremath{\varepsilon}=1/100$, $|\Delta|<\ensuremath{\varepsilon}^3a$ on $0<t<10^5\pi$.
Notice that $a\sim O(e^{-\sfrac{3}{2}\>\ensuremath{\varepsilon}^2 t})$ and $e^{-\sfrac{3}{2}\cdot 10^{-4}\cdot 10^5\cdot\pi}=e^{-15\pi} \doteq 10^{-15}$ by the end of this range. The method of multiple scales has thus produced $z$, the exact solution of an equation uniformly and relatively near to the original equation. In trigonometric form,
\begin{multline}
z = a\cos(t+\varphi)+\ensuremath{\varepsilon}\frac{a^3}{32}\sin(3(t+\varphi)) \\
+ \ensuremath{\varepsilon}^2\left(\frac{27}{1024}a^5\cos(3(t+\varphi))
-\frac{3}{1024}a^5\cos(5(t+\varphi)) \right) \label{zeqn}
\end{multline}
and $a$ and $\varphi$ are as in equations \eqref{magic} and \eqref{moremagic}. Note that $\varphi$ asymptotically approaches zero. Note that the trigonometric solution we have demonstrated here to be correct, which was derived for us by our colleague Pei Yu, appears to differ from that given in \cite{Omalley(2014)}, which is
\begin{align}
y= Ae^{it} + \ensuremath{\varepsilon} Be^{3it} + \ensuremath{\varepsilon}^2 Ce^{5it}+\cdots
\end{align}
where (with $\tau=\ensuremath{\varepsilon} t$)
\begin{align}
C\sim \frac{3}{64}A^5+\cdots \qquad \textrm{and}\qquad B\sim -\frac{A^3}{8}(i+\frac{45}{8}\ensuremath{\varepsilon}|A|^2+\cdots)
\end{align}
and, if $A=Re^{i\varphi}$,
\begin{align}
\frac{dR}{d\tau} = -\frac{3}{2}(R^3+\ensuremath{\varepsilon} R+\cdots)
\qquad \textrm{and}\qquad
\frac{d\varphi}{d\tau} = -\frac{3}{2}R^2 (1+\frac{3\ensuremath{\varepsilon}}{8}R^2+\cdots)
\end{align}
Of course with the trigonometric form $y=a\cos(t+\varphi)$, the equivalent complex form is
\begin{align}
y &= a \left( \frac{e^{it+i\varphi}+ e^{-it-i\varphi}}{2}\right)
= \frac{a}{2}e^{i\varphi}e^{it}+c.c.
\end{align}
and so $R=\sfrac{a}{2}$. As expected, equation (6.50) in \cite{Omalley(2014)} becomes
\begin{align}
\frac{d}{d\tau}\left(\frac{a}{2}\right) = -\frac{3}{2}\frac{a}{2}\left(\frac{a^2}{4}+\ensuremath{\varepsilon}\right)
\end{align}
or, alternatively, multiplying by $2$ and converting to the original time via $\sfrac{d}{dt}=\ensuremath{\varepsilon}\>\sfrac{d}{d\tau}$,
\begin{align}
\frac{da}{dt} = -\frac{3}{8}\ensuremath{\varepsilon} a(a^2+4\ensuremath{\varepsilon})
\end{align}
which agrees with that computed for us by Pei Yu. However, O'Malley's equation (6.48) gives
\begin{align}
C\cdot e^{i\cdot 5t} &= \frac{3}{64}A^5 e^{i5t} = \frac{3}{64}R^5e^{i5\theta} = \frac{3}{2048}a^5 e^{i5\theta}\>,
\end{align}
so that
\begin{align}
Ce^{i5t}+c.c = \frac{3}{1024}a^5\cos5\theta\>,
\end{align}
whereas Pei Yu has $-\sfrac{3}{1024}$. As demonstrated by the residual in figure \ref{YuResidual}, Pei Yu is correct. Well, sign errors are trivial enough.
More differences occur for $B$, however. The $-\sfrac{A^3}{8} \> ie^{3it}$ term becomes $\sfrac{a^3}{32}\>\sin 3\theta$, as expected, but $-\sfrac{45}{64}A^3\cdot |A|^2e^{3it}+c.c.$ becomes $-\sfrac{45}{32}\sfrac{a^5}{32}\>\cos3\theta = -\sfrac{45}{1024}\>a^5\cos3\theta$, not $\sfrac{27}{1024}\>a^5\cos3\theta$. Thus we believe there has been an arithmetic error in \cite{Omalley(2014)}. This is also present in \cite{Omalley(2010)}. Similarly, we believe the $\sfrac{d\varphi}{dt}$ equation there is wrong.
Arithmetic errors in perturbation solutions are, obviously, a constant hazard even for experts. We do not point out this error (or the other errors highlighted in this paper) in a spirit of glee---goodness knows we've made our own share. No, the reason we do so is to emphasize the value of a separate, independent check using the residual. Because we have done so here, we are certain that equation \eqref{zeqn} is correct: it produces a residual that is uniformly $O(\ensuremath{\varepsilon}^3)$ for bounded time, and which is $O(\ensuremath{\varepsilon}^{9/2}e^{-\sfrac{3}{2}\>\ensuremath{\varepsilon}^2 t})$ as $t\to \infty$. (We do not know why there is extra accuracy for large times).
Finally, we remark that the difficulty this example presents for the method of multiple scales is that equation \eqref{magic} cannot be solved itself by perturbation methods (or, at least, we couldn't do it). One has to use all three terms at once; the fact that this works is amply demonstrated afterwards.
Indeed the whole multiple scales procedure based on equation \eqref{msformalism} is really very strange when you think about it, but it can be justified afterwards. It really doesn't matter how we find equation \eqref{zeqn}. Once we have done so, verifying that it is the exact solution of a small perturbation of the original equation is quite straightforward. The implementation is described in the following Maple code:
\lstinputlisting{Morrison}
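For readers without Maple, the same verification can be sketched in Python with sympy (our own script; we take $a_0=1$, use the first-order term $\sfrac{a^3}{32}\sin 3\theta$ from the derivation above, and note that the integration constant of $\varphi$ is immaterial for the residual):

```python
# Numeric residual check for Morrison's equation: z built from a(t), phi(t)
# and the y_1, y_2 found above, with a0 = 1 (our choice of initial amplitude).
import sympy as sp

t, eps = sp.symbols('t epsilon', positive=True)
E = sp.exp(3*eps**2*t)
u = 4*eps*E + (E - 1)                          # u with a0 = 1
a = 2*sp.sqrt(eps)/sp.sqrt(u)
phi = (-sp.Rational(3, 16)*eps**2*sp.log(u)
       + sp.Rational(9, 16)*eps**4*t - sp.Rational(3, 16)*eps**2/u)
th = t + phi
z = (a*sp.cos(th) + eps*a**3/32*sp.sin(3*th)
     + eps**2*(sp.Rational(27, 1024)*a**5*sp.cos(3*th)
               - sp.Rational(3, 1024)*a**5*sp.cos(5*th)))

zp = sp.diff(z, t)
Delta = sp.diff(zp, t) + z + eps*zp**3 + 3*eps**2*zp
f = sp.lambdify((t, eps), Delta)

results = {ev: max(abs(f(tv, ev)) for tv in (0.0, 1.0, 3.0))
           for ev in (0.1, 0.01)}
print(results)   # bounded by roughly eps^3: orders 0 through 2 all cancel
```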
\subsection{The lengthening pendulum}
As an interesting example with a genuine secular term, \cite{Boas(1966)} discusses the lengthening pendulum. There, Boas solves the linearized equation exactly in terms of Bessel functions. We use the model here as an example of a perturbation solution in a physical context. The original Lagrangian leads to
\begin{align}
\frac{d}{dt} \left(m\ell^2\frac{d\theta}{dt}\right)+mg\ell\sin\theta =0
\end{align}
(having already neglected any system damping). The length of the pendulum at time $t$ is modelled as $\ell =\ell_0+vt$, and implicitly $v$ is small compared to the oscillatory speed $\sfrac{d\theta}{dt}$ (else why would it be a pendulum at all?). The presence of $\sin\theta$ makes this a nonlinear problem; when $v=0$ there is an analytic solution using elliptic functions \cite[chap.~4]{Lawden(2013)}.
We \textsl{could} do a perturbation solution about that analytic solution; indeed there is computer algebra code to do so automatically \cite{Rand(2012)}. For the purpose of this illustration, however, we make the same small-amplitude linearization that Boas did and replace $\sin\theta$ by $\theta$. Dividing the resulting equation by $\ell_0$, putting $\ensuremath{\varepsilon}=\sfrac{v}{\ell_0\omega}$ with $\omega=\sqrt{\sfrac{g}{\ell_0}}$ and rescaling time to $\tau=\omega t$, we get
\begin{align}
(1+\ensuremath{\varepsilon}\tau)\frac{d^2\theta}{d\tau^2}+2\ensuremath{\varepsilon}\frac{d\theta}{d\tau}+\theta=0\>.
\end{align}
This supposes, of course, that the pin holding the base of the pendulum is held perfectly still (and is frictionless besides).
Computing a regular perturbation approximation
\begin{align}
z_{\textrm{reg}} = \sum_{k=0}^N \theta_k(\tau)\ensuremath{\varepsilon}^k
\end{align}
is straightforward, for any reasonable $N$, by using computer algebra. For instance, with $N=1$ we have
\begin{align}
z_{\textrm{reg}} = \cos\tau + \ensuremath{\varepsilon}\left(\frac{3}{4}\sin\tau+\frac{\tau^2}{4}\sin\tau-\frac{3}{4}\tau\cos\tau\right)\>.
\end{align}
This has residual
\begin{align}
\Delta_{\textrm{reg}} &= (1+\ensuremath{\varepsilon}\tau)z''_{\textrm{reg}}+2\ensuremath{\varepsilon} z'_{\textrm{reg}}+z_{\textrm{reg}}\\
&= -\frac{\ensuremath{\varepsilon}^2}{4}\left(\tau^3\sin\tau-9\tau^2\cos\tau-15\tau\sin\tau\right)
\end{align}
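The claim that this residual is exact (the $O(\ensuremath{\varepsilon})$ part cancels and nothing is hidden at higher orders) is easy to confirm; a sympy sketch (our own, in place of the Maple computation):

```python
# sympy verification that the printed residual for the regular perturbation
# solution of the lengthening pendulum is exact, not merely its leading term:
# the problem is linear, so Delta is a polynomial of degree 2 in eps.
import sympy as sp

tau, eps = sp.symbols('tau epsilon')
z = sp.cos(tau) + eps*(sp.Rational(3, 4)*sp.sin(tau) + tau**2/4*sp.sin(tau)
                       - sp.Rational(3, 4)*tau*sp.cos(tau))
Delta = (1 + eps*tau)*sp.diff(z, tau, 2) + 2*eps*sp.diff(z, tau) + z
printed = -eps**2/4*(tau**3*sp.sin(tau) - 9*tau**2*sp.cos(tau)
                     - 15*tau*sp.sin(tau))
print(sp.expand(Delta - printed))   # 0: exactly the printed residual
```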
also computed straightforwardly with computer algebra. By experiment with various $N$ we find that the residuals are always of $O(\ensuremath{\varepsilon}^{N+1})$ but contain powers of $\tau$, as high as $\tau^{2N+1}$. This naturally raises the question of just when this can be considered ``small.'' We thus have the \textsl{exact} solution of
\begin{align}
(1+\ensuremath{\varepsilon}\tau)\frac{d^2\theta}{d\tau^2}+2\ensuremath{\varepsilon}\frac{d\theta}{d\tau}+\theta = \Delta_{\textrm{reg}}(\tau)=O(\ensuremath{\varepsilon}^{N+1}\tau^{2N+1})
\end{align}
and it seems clear that if $\ensuremath{\varepsilon}^{N+1}\tau^{2N+1}$ is to be considered small it should at least be smaller than $\ensuremath{\varepsilon}\tau$, which appears on the left-hand side of the equation. [$\sfrac{d^2\theta}{d\tau^2}$ is $-\cos\tau$ to leading order, so this term is periodically $O(1)$.] This means $\ensuremath{\varepsilon}^N\tau^{2N}$ should be smaller than $1$, which forces $\tau\leq T$ where $T=O(\ensuremath{\varepsilon}^{-q})$ with $q<\frac{1}{2}$. That is, this regular perturbation solution is valid only on a limited range of $\tau$, namely $\tau=o(\ensuremath{\varepsilon}^{-\sfrac{1}{2}})$.
Of course, the original equation contains a term $\ensuremath{\varepsilon}\tau$, and this itself is small only if $\tau\leq T_{\max}$ with $T_{\max}=O(\ensuremath{\varepsilon}^{-1+\delta})$ for $\delta>0$. Notice that we have discovered this limitation of the regular perturbation solution without reference to the `exact' Bessel function solution of this linearized equation. Notice also that $\Delta_{\textrm{reg}}$ can be interpreted as a small forcing term; a vibration of the pin holding the pendulum, say. Knowing that, say, such physical vibrations, perhaps caused by trucks driving past the laboratory holding the pendulum, are bounded in size by a certain amount, can help to decide what $N$ to take, and over which $\tau$-interval the resulting solution is valid.
Of course, one might be interested in the forward error $\theta-z_{\textrm{reg}}$; but then one should be interested in the forward errors caused by neglecting physical vibrations (e.g. of trucks passing by) and the same theory---what a numerical analyst calls a condition number---can be used for both.
But before we pursue that farther, let us first try to improve the perturbation solution. We use the method of multiple scales or, equivalently but more simply in this case, the renormalization group method \cite{Kirkinis(2012)}. For a linear problem this consists of taking the regular perturbation solution, replacing $\cos\tau$ by $\sfrac{(e^{i\tau}+e^{-i\tau})}{2}$ and $\sin\tau$ by $\sfrac{(e^{i\tau}-e^{-i\tau})}{2i}$, and gathering up the result as $\sfrac{1}{2}\>A(\tau;\ensuremath{\varepsilon})e^{i\tau}+\sfrac{1}{2}\>\bar{A}(\tau;\ensuremath{\varepsilon})e^{-i\tau}$. One then writes $A(\tau;\ensuremath{\varepsilon}) = e^{L(\tau;\ensuremath{\varepsilon})}+O(\ensuremath{\varepsilon}^{N+1})$; that is, one takes the logarithm of the $\ensuremath{\varepsilon}$-series $A(\tau;\ensuremath{\varepsilon})=A_0(\tau)+\ensuremath{\varepsilon} A_1(\tau)+\cdots+\ensuremath{\varepsilon}^NA_N(\tau)+O(\ensuremath{\varepsilon}^{N+1})$, a straightforward exercise (especially in a computer algebra system). Rewriting $\sfrac{1}{2}\>e^{L(\tau;\ensuremath{\varepsilon})+i\tau}+$ c.c. in real trigonometric form again, if one likes, gives an excellent result. If $N=1$, we get
\begin{align}
\tilde{z}_{\textrm{renorm}}=e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\cos\left(\frac{3}{4}\ensuremath{\varepsilon}+\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right)
\end{align}
which contains an irrelevant phase change $\frac{3}{4}\ensuremath{\varepsilon}$; we remove it here as a distraction, to get
\begin{align}
z_{\textrm{renorm}} = e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\cos\left(\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right)\>.
\end{align}
This has residual:
\begin{align}
\Delta_{\textrm{renorm}} &= (1+\ensuremath{\varepsilon}\tau)\frac{d^2z_{\textrm{renorm}}}{d\tau^2} +2\ensuremath{\varepsilon}\frac{dz_{\textrm{renorm}}}{d\tau}+z_{\textrm{renorm}} \nonumber\\
&= \ensuremath{\varepsilon}^2e^{-\frac{3}{4}\ensuremath{\varepsilon}\tau} \left( (\frac{3}{4}\tau^2-\frac{15}{16})\cos(\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4})+\frac{9}{4}\tau\sin(\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4})\right)+O(\ensuremath{\varepsilon}^3\tau^3e^{-\frac{3}{4}\ensuremath{\varepsilon}\tau})
\>.
\end{align}
By inspection, we see that this is superior in several ways to the residual from the regular perturbation method. First, it contains the damping factor $e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}$ just as the computed solution does; this residual will be small compared even to the decaying solution. Second, the $O(\ensuremath{\varepsilon}^{N+1})$ term at order $N$ contains only $\tau^{N+1}$ as its highest power of $\tau$, not $\tau^{2N+1}$. This will be small compared to $\ensuremath{\varepsilon}\tau$ for times $\tau< T$ with $T=O(\ensuremath{\varepsilon}^{-1+\delta})$ for \emph{any} $\delta>0$; that is, this perturbation solution will provide a good solution so long as its fundamental assumption, namely that the $\ensuremath{\varepsilon}\tau$ term in the original equation can be considered `small', holds.
Note that again the quality of this perturbation solution has been judged without reference to the exact solution, and quite independently of whatever assumptions are usually made to argue for multiple scales solutions (such as boundedness of $\theta$) or the renormalization group method.
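Independently of any particular CAS implementation, the order of this residual can be confirmed directly; the following sketch in Python uses sympy (our choice of tooling, not the paper's) to expand $\Delta_{\textrm{renorm}}$ in powers of $\ensuremath{\varepsilon}$ and check that the $O(1)$ and $O(\ensuremath{\varepsilon})$ parts vanish identically.

```python
import sympy as sp

tau, eps = sp.symbols('tau epsilon')

# Renormalized solution and the lengthening-pendulum residual
z = sp.exp(-sp.Rational(3, 4)*eps*tau)*sp.cos(tau - eps*tau**2/4)
Delta = (1 + eps*tau)*sp.diff(z, tau, 2) + 2*eps*sp.diff(z, tau) + z

# Taylor expansion in eps at fixed tau: the residual starts at O(eps^2)
ser = sp.expand(sp.series(Delta, eps, 0, 3).removeO())
lead = sp.trigsimp(ser.coeff(eps, 2))
print(lead)
```

The printed $O(\ensuremath{\varepsilon}^2)$ coefficient is a trigonometric polynomial in $\tau$, exhibiting the $\tau^2$ growth discussed above.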
Thus, we conclude that the renormalization group method gives a superior solution in this case, and this judgement was made possible by computing the residual. We have used the following Maple implementation:
\lstinputlisting{LengtheningPendulum}
See figure \ref{pendulum}.
\begin{figure}
\includegraphics[width=.45\textwidth]{LengtheningSols.png}\quad
\includegraphics[width=.45\textwidth]{RenormRes.png}
\caption{On the left, solutions to the lengthening pendulum equation (the renormalized solution is the solid line). On the right, residual of the renormalized solution, which is orders of magnitudes smaller than that of the regular expansion.}
\label{pendulum}
\end{figure}
Note that this renormalized residual contains terms of the form $(\ensuremath{\varepsilon}\tau)^k e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon} \tau}$. No matter what order we compute to, these have maxima $O(1)$ when $\tau=O(\sfrac{1}{\ensuremath{\varepsilon}})$, but as noted previously the fundamental assumption of perturbation has been violated by that large a $\tau$.
\paragraph{Optimal backward error again} Now, one further refinement is possible. We may look for an $O(\ensuremath{\varepsilon}^2)$ perturbation of the lengthening of the pendulum, that explains part of this computed residual! That is, we look for $p(t)$, say, so that
\begin{align}
\Delta_2 := (1+\ensuremath{\varepsilon}\tau+\ensuremath{\varepsilon}^2 p(\tau)) z_{\textrm{renorm}}'' + 2(\ensuremath{\varepsilon}+\ensuremath{\varepsilon}^2 p'(\tau))z_{\textrm{renorm}}'+z_{\textrm{renorm}} \label{renormeqs}
\end{align}
has only \textsl{smaller} terms in it than $\Delta_{\textrm{renorm}}$. Note the correlated changes, $\ensuremath{\varepsilon}^2 p(\tau)$ and $\ensuremath{\varepsilon}^2 p'(\tau)$.
At this point, we don't know if this is possible or useful, but it's a good thing to try. In numerical analysis terms, we are trying to find a structured backward error for this computed solution.
The procedure for identifying $p(\tau)$ in equation \eqref{renormeqs} is straightforward. We put $p(\tau)=a_0+a_1\tau+a_2\tau^2$ with unknown coefficients, compute $\Delta_2$, and try to choose $a_0$, $a_1$, and $a_2$ so as to make as many coefficients of powers of $\ensuremath{\varepsilon}$ in $\Delta_2$ zero as we can. When we do this, we find that
\begin{align}
p = -\frac{15}{16}+\frac{3}{4}\tau^2
\end{align}
makes
\begin{align}
\Delta_{\textrm{mod}} &= \left(1+\ensuremath{\varepsilon}\tau+\ensuremath{\varepsilon}^2\left(\frac{3}{4}\tau^2-\frac{15}{16}\right)\right)z_{\textrm{renorm}}'' + 2\left(\ensuremath{\varepsilon}+\ensuremath{\varepsilon}^2\left(\frac{3}{2}\tau\right)\right) z_{\textrm{renorm}}' + z_{\textrm{renorm}}\\
&= \ensuremath{\varepsilon}^2e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\left(-\frac{3}{4}\tau\sin\left(\tau-\sfrac{1}{4}\>\ensuremath{\varepsilon} \tau^2\right)\right) + O(\ensuremath{\varepsilon}^3\tau^3 e^{-\sfrac{3\ensuremath{\varepsilon}\tau}{4}})\>.
\end{align}
This is $O(\ensuremath{\varepsilon}^2\tau e^{-\sfrac{3\ensuremath{\varepsilon}\tau}{4}})$ instead of $O(\ensuremath{\varepsilon}^2\tau^2 e^{-\sfrac{3\ensuremath{\varepsilon}\tau}{4}})$, and therefore smaller. This \textsl{interprets} the largest term of the original residual, the $O(\ensuremath{\varepsilon}^2\tau^2)$ term, as a perturbation in the lengthening of the pendulum. The gain is one of interpretation; the solution is the same, but the equation it solves exactly is slightly different. For $O(\ensuremath{\varepsilon}^N\tau^N)$ solutions the modifications will probably be similar.
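The elimination just described can be automated in any CAS; here is a sketch in Python with sympy (a tooling assumption on our part) that recovers the coefficients of $p$:

```python
import sympy as sp

tau, eps, a0, a1, a2 = sp.symbols('tau epsilon a0 a1 a2')

z = sp.exp(-sp.Rational(3, 4)*eps*tau)*sp.cos(tau - eps*tau**2/4)
p = a0 + a1*tau + a2*tau**2

# Residual of the model with the perturbed length 1 + eps*tau + eps^2*p(tau)
Delta2 = ((1 + eps*tau + eps**2*p)*sp.diff(z, tau, 2)
          + 2*(eps + eps**2*sp.diff(p, tau))*sp.diff(z, tau) + z)

# The O(eps^2) part is linear in cos(tau) and sin(tau); kill the cos part
c2 = sp.expand(sp.series(Delta2, eps, 0, 3).removeO()).coeff(eps, 2)
cos_part = sp.Poly(c2.coeff(sp.cos(tau)), tau)
sol = sp.solve(cos_part.coeffs(), [a0, a1, a2])
print(sol)
```

The $\sin\tau$ part cannot be removed by the same three coefficients, which is why a damping interpretation is needed for the remaining $O(\ensuremath{\varepsilon}^2\tau)$ term.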
Now, if $z\doteq\cos\tau$ then $z'\doteq-\sin\tau$; so if we include a damping term
\begin{align}
\left( -\ensuremath{\varepsilon}^2\cdot\frac{3}{4}\cdot\tau \theta' \right)
\end{align}
in the model, we have
\begin{align}
\left(1+\ensuremath{\varepsilon}\tau+\ensuremath{\varepsilon}^2\left(\frac{3}{4}\tau^2-\frac{15}{16}\right)\right) z_{\textrm{renorm}}'' + 2\left(\ensuremath{\varepsilon}+\ensuremath{\varepsilon}^2\left(\frac{3}{2}\tau\right)-\ensuremath{\varepsilon}^2\frac{3}{8}\tau\right)z_{\textrm{renorm}}' + z_{\textrm{renorm}} \nonumber\\
= O\left(\ensuremath{\varepsilon}^3\tau^3e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\right)
\end{align}
and \textsl{all} of the leading terms of the residual have been ``explained'' in the physical context.
If the correction had made the total damping negative, we might have rejected it; having the damping change with time also isn't very physical (although one might imagine heating effects or some such).
\subsection{Vanishing lag delay DE}
For another example we consider an expansion that ``everybody knows'' can be problematic. We take the DDE
\begin{align}
\dot{y}(t)+ay(t-\ensuremath{\varepsilon})+b y(t)=0
\end{align}
from \cite[p.~52]{Bellman(1972)} as a simple instance. Expanding $y(t-\ensuremath{\varepsilon})=y(t)-\dot{y}(t)\ensuremath{\varepsilon}+O(\ensuremath{\varepsilon}^2)$ we get
\begin{align}
(1-a\ensuremath{\varepsilon})\dot{y}(t) + (b+a)y(t)=0
\end{align}
by ignoring $O(\ensuremath{\varepsilon}^2)$ terms, with solution
\begin{align}
z(t) = \mathrm{exp}(-\frac{b+a}{1-a\ensuremath{\varepsilon}}t)u_0
\end{align}
if a simple initial condition $y(0)=u_0$ is given. Direct computation of the residual shows
\begin{align}
\Delta &= \dot{z} + az(t-\ensuremath{\varepsilon})+bz(t)\\
&= O(\ensuremath{\varepsilon}^2)z(t)
\end{align}
uniformly for all $t$; in other words, our computed solution $z(t)$ exactly solves
\begin{align}
\dot{y} + ay(t-\ensuremath{\varepsilon}) + (b+O(\ensuremath{\varepsilon}^2))y(t)=0
\end{align}
which is an equation of the same type as the original, with only $O(\ensuremath{\varepsilon}^2)$ perturbed coefficients. The initial history for the DDE should be prescribed on $-\ensuremath{\varepsilon}\leq t<0$ as well as the initial condition, and that's an issue, but often that history is an issue anyway. So, in this case, contrary to the usual vague folklore that Taylor series expansion in the vanishing lag ``can lead to difficulties'', we have a successful solution and we know that it's successful.
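The residual computation here is a one-line series expansion; the following sketch in Python with sympy (our tooling choice) checks the ratio $\Delta/z$:

```python
import sympy as sp

t, eps, a, b = sp.symbols('t epsilon a b')

# z(t) = u0*exp(lam*t) with lam = -(b + a)/(1 - a*eps)
lam = -(b + a)/(1 - a*eps)

# Residual over z: z'(t)/z + a*z(t-eps)/z + b, and z(t-eps)/z(t) = exp(-lam*eps)
ratio = lam + a*sp.exp(-lam*eps) + b

ser = sp.expand(sp.series(ratio, eps, 0, 3).removeO())
print(ser)  # the O(1) and O(eps) terms cancel; leading term a*(a + b)**2/2 * eps**2
```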
We now need to assess the sensitivity of the problem to small changes in $b$, but we all know that has to be done anyway, even if we often ignore it.
Another example of Bellman's on the same page, $\ddot{y}(t)+ay(t-\ensuremath{\varepsilon})=0$, can be treated in the same manner. Bellman cautions there that seemingly similar approaches can lead to singular perturbation problems, which can indeed lead to difficulties, but even there a residual/backward error analysis can help to navigate those difficulties.
\subsection{Artificial viscosity in a nonlinear wave equation}
Suppose we are trying to understand a particular numerical solution, by the method of lines, of
\begin{align}
u_t + uu_x = 0 \label{waveeq}
\end{align}
with initial condition $u(0,x)=e^{i\pi x}$ on $-1\leq x\leq 1$ and periodic boundary conditions. Suppose that we use the method of modified equations (see, for example, \cite{Griffiths(1986)}, \cite{Warming(1974)}, or \cite[chap~12]{CorlessFillion(2013)}) to find a perturbed equation that the numerical solution more nearly solves. Suppose also that we analyze the same numerical method applied to the divergence form
\begin{align}
u_t + \frac{1}{2}(u^2)_x=0\>. \label{waveeq2}
\end{align}
Finally, suppose that the method in question uses backward differences $f'(x) = \sfrac{(f(x)-f(x-2\ensuremath{\varepsilon}))}{2\ensuremath{\varepsilon}}$ (the factor 2 is for convenience) on an equally-spaced $x$-grid, so $\Delta x=-2\ensuremath{\varepsilon}$.
The method of modified equations gives
\begin{align}
u_t + uu_x -\ensuremath{\varepsilon}(uu_{xx})+O(\ensuremath{\varepsilon}^2)=0
\end{align}
for equation \eqref{waveeq} and
\begin{align}
u_t+uu_x -\ensuremath{\varepsilon} (u_x^2 + uu_{xx})+O(\ensuremath{\varepsilon}^2) = 0
\end{align}
for equation \eqref{waveeq2}.
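Both modified equations follow from Taylor-expanding the backward difference; a sketch with sympy (Python; the choice of tool is ours) makes the bookkeeping explicit:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon')
f = sp.Function('f')

# Backward difference with step 2*eps, expanded in powers of eps
bd = (f(x) - f(x - 2*eps))/(2*eps)
ser = sp.series(bd, eps, 0, 2).removeO().doit()

# ser equals f'(x) - eps*f''(x); the O(eps) term is the artificial viscosity
check = sp.simplify(ser - (f(x).diff(x) - eps*f(x).diff(x, 2)))

# In the divergence form, f = u^2/2 and (u^2/2)_xx = u_x^2 + u*u_xx
u = sp.Function('u')(x)
check2 = sp.simplify(sp.diff(u**2/2, x, 2) - (u.diff(x)**2 + u*u.diff(x, 2)))
print(check, check2)
```

Substituting $f=u$ into the expanded difference gives the first modified equation, and $f=\sfrac{u^2}{2}$ gives the second.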
The outer solution to each of these equations is just the reference solution to both equations \eqref{waveeq} and \eqref{waveeq2}, namely,
\begin{align}
u = \frac{1}{i\pi t} W(i\pi t e^{i\pi x})
\end{align}
where $W(z)$ is the principal branch of the Lambert $W$ function, which satisfies $W(z) e^{W(z)}=z$. See \cite{Corless(1996)} for more on the Lambert $W$ function. That $u$ is the solution for this initial condition was first noticed by \cite{weideman(2003)}.
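That this $u$ satisfies equation \eqref{waveeq} exactly, along with a closed form for $uu_{xx}$, can be verified symbolically; here is a sketch in Python with sympy, whose \texttt{LambertW} is the principal branch (the tooling is our assumption):

```python
import sympy as sp

x, t = sp.symbols('x t')

g = sp.I*sp.pi*t*sp.exp(sp.I*sp.pi*x)
W = sp.LambertW(g)
u = W/(sp.I*sp.pi*t)

# Residual in u_t + u*u_x = 0, using W'(z) = W/(z*(1 + W))
pde_res = sp.simplify(u.diff(t) + u*u.diff(x))

# The second x-derivative obeys u*u_xx = W^2/(t^2*(1 + W)^3)
id_res = sp.simplify(u*u.diff(x, 2) - W**2/(t**2*(1 + W)**3))
print(pde_res, id_res)
```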
The residuals of these outer solutions are just $-\ensuremath{\varepsilon} uu_{xx}$ and $-\ensuremath{\varepsilon}(u_x^2+uu_{xx})$ respectively. Simplifying, and again suppressing the argument of $W$ for tidiness, we find that
\begin{align}
-\ensuremath{\varepsilon} uu_{xx} = -\frac{\ensuremath{\varepsilon} W^2}{t^2(1+W)^3}
\end{align}
and
\begin{align}
-\ensuremath{\varepsilon}(u_x^2 + uu_{xx}) = -\frac{\ensuremath{\varepsilon} W^2(2+W)}{t^2(1+W)^3}
\end{align}
where $W$ is short for $W(i\pi t e^{i\pi x})$. We see that if $x=\sfrac{1}{2}$ and $t=\sfrac{1}{(\pi e)}$, both of these are singular:
\begin{align}
-\ensuremath{\varepsilon} uu_{xx} \sim -\ensuremath{\varepsilon} \left( \frac{i\pi^2 e^2\sqrt{2}}{4(et\pi-1)^{\sfrac{3}{2}}}+O\left(\frac{1}{et\pi-1}\right)\right)
\end{align}
and
\begin{align}
-\ensuremath{\varepsilon} (u^2_x +uu_{xx}) \sim -\ensuremath{\varepsilon} \left( \frac{i\pi^2e^2\sqrt{2}}{4(et\pi-1)^{\sfrac{3}{2}}} + O\left(\frac{1}{\sqrt{et\pi-1}}\right)\right)\>.
\end{align}
We see that the outer solution makes the residual very large near $x=\sfrac{1}{2}$ as $t\to \sfrac{1}{(\pi e)}^-$ suggesting that the solution of the modified equation---and thus the numerical solution---will depart from the outer solution. Both the original form and the divergence form are predicted to have similar behaviour, and this is confirmed by numerical experiments.
We remark that using forward differences instead just changes the sign of $\ensuremath{\varepsilon}$, and given the similarity of $\ensuremath{\varepsilon} uu_{xx}$ to $\ensuremath{\varepsilon} u_{xx}$, we intuit that this will blow up rather quickly, like the backward heat equation, because the exact solution to Burgers' equation $u_t+uu_x=\ensuremath{\varepsilon} u_{xx}$ involves a change of variable to the heat equation \cite[pp.~352-353]{Kevorkian(2013)}. We remark also that this use of the residual is a bit perverse: we here substitute the reference solution into an approximate (reverse-engineered) equation. Some authors do use `residual' or even `defect' in this sense, e.g., \cite{Chiba(2009)}. It only fits our usage because the reference solution to the original equation is just the outer solution of the perturbation problem of interest here.
Finally, we can interpolate the numerical solution using a trigonometric interpolant in $x$ tensor producted with the interpolant in $t$ provided by the numerical solver (e.g., \texttt{ode15s} in Matlab). We can then compute the residual $\Delta(t,x) = z_t+zz_x$ in the original equation and we find that, away from the singularity, it is $O(\ensuremath{\varepsilon})$. If we compute the residual in the modified equation
\begin{align}
\Delta_1(t,x)=z_t+zz_x-\ensuremath{\varepsilon} zz_{xx}
\end{align}
we find that, away from the singularity, it is $O(\ensuremath{\varepsilon}^2)$. This is a more traditional use of residual in a numerical computation, and is done without knowledge of any reference solution. The analogous use we are making for perturbation methods can be understood from this numerical perspective.
\section{Concluding Remarks}
Decades ago, van Dyke had already made the point that, in perturbation theory, ``[t]he possibilities are too diverse to be subject to rules'' \cite[p.~31]{vanDyke(1964)}. Van Dyke was talking about the useful freedom to choose expansion variables artfully, but the same might be said for perturbation methods generally. This paper has attempted (in the face of that observation) to lift a known technique, namely the residual as a backward error, out of numerical analysis and apply it to perturbation theory. The approach is surprisingly useful and clarifies several issues, namely
\begin{itemize}
\item BEA allows one to directly use approximations taken from divergent series in an optimal fashion without appealing to ``rules of thumb'' such as stopping before including the smallest term.
\item BEA allows the justification of removing spurious secular terms, even when true secular terms are present.
\item Not least, residual computation and \emph{a posteriori} BEA makes detection of slips, blunders, and bugs all but certain, as illustrated in our examples.
\item Finally, BEA interprets the computed solution $z$ as the exact solution to just as good a model.
\end{itemize}
In this paper we have used BEA to demonstrate the validity of solutions obtained by the iterative method, by Lindstedt's method, by the method of multiple scales, by the renormalization group method, and by matched asymptotic expansions.
We have also successfully used the residual and BEA in many problems not shown here: eigenvalue problems from \cite{Nayfeh(2011)}; an example from \cite{vanDyke(1964)} using the method of strained coordinates; and many more.
The examples here have largely been for algebraic equations and for ODEs, but the method was used to good effect in \cite{Corless(2014)} for a PDE system describing heat transfer between concentric cylinders, with a high-order perturbation series in Rayleigh number. Aside from the amount of computational work required, there is no theoretical obstacle to using the technique for other PDE; indeed the residual of a computed solution $z$ (perturbation solution, in this paper) to an operator equation $\varphi(y;x)=0$ is usually computable: $\Delta = \varphi(z;x)$ and its size (in our case, leading term in the expansion in the gauge functions) easily assessed.
It's remarkable to us that the notion, while present here and there in the literature, is not used more to justify the validity of the perturbation series.
We end with a caution. Of course, BEA is not a panacea. There are problems for which it is not possible. For instance, there may be hidden constraints, something like solvability conditions, that play a crucial role and where the residual tells you nothing. A residual can even be zero; if there are multiple solutions, one needs a way to get the right one.
There are things that can go wrong with this backward error approach. First, the final residual computation might not be independent enough from the computation of $z$, and repeat the same error. An example is if one correctly solves
\begin{align}
\ddot{y}+y+\ensuremath{\varepsilon} \dot{y}^3+3\ensuremath{\varepsilon}^2\dot{y}=0
\end{align}
and verifies that the residual is small, while \textsl{intending} to solve
\begin{align}
\ddot{y}+y+\ensuremath{\varepsilon} \dot{y}^3-3\ensuremath{\varepsilon}^2\dot{y}=0\>,
\end{align}
i.e., getting the wrong sign on the $\dot{y}$ term, both times. Another thing that can go wrong is to have an error in your independent check but not your solution. This happened to us with 183 instead of 138 in subsection \ref{systems}; the discrepancy alerted us that there \textsl{was} a problem, so this at least was noticeable. A third thing that can go wrong is that you verify the residual is small but forget to check the boundary conditions. A fourth thing that can go wrong is that the residual may be small in an absolute sense but still larger than important terms in the equation---the residual may need to be smaller than you expect, in order to get good qualitative results. A fifth thing is that the residual may be small but of the `wrong character', i.e., be unphysical. Perhaps the method has introduced the equivalent of negative damping, for instance. This point can be very subtle.
A final point is that a good solution needs not just a small backward error, but also information about the sensitivity (or robustness) of the model to physical perturbations. We have not discussed computation of sensitivity, but we emphasize that even if $\Delta\equiv 0$, you still have to do it, because real situations have real perturbations. Nonetheless, we hope that we have convinced you that BEA can be helpful.
\bibliographystyle{plain}
\section{Introduction} \label{S-introduction}
Throughout this article, let $\mathbb{N}$ denote the set of nonnegative integers. Let $\mathbb{C}$ denote the complex number field. Let $\mathbb{T}=\{z\in\mathbb{C}:|z|=1\}$ and $\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}$.
An {\it automorphism} of $\mathbb{D}$ is a bijective analytic function $\varphi:\mathbb{D}\rightarrow\mathbb{D}$. The set of all automorphisms of $\mathbb{D}$ is denoted by $Aut(\mathbb{D})$. It is well known that the automorphisms of $\mathbb{D}$ are the linear fractional transformations
$$\varphi(z)=b\frac{z-a}{1-\overline{a}z}, |a|<1, |b|=1.$$
Moreover, every $\varphi\in Aut(\mathbb{D})$ maps $\mathbb{T}$ bijectively onto itself (see \cite[pages 131-132]{Conway}).
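For instance, the fact that every $\varphi\in Aut(\mathbb{D})$ maps $\mathbb{T}$ into itself can be checked directly: if $|z|=1$, then
$$|1-\overline{a}z|=|\overline{z}(1-\overline{a}z)|=|\overline{z}-\overline{a}|z|^{2}|=|\overline{z}-\overline{a}|=|z-a|,$$
so that $|\varphi(z)|=|b|\frac{|z-a|}{|1-\overline{a}z|}=1$.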
For $1\leqslant p<+\infty$, let $H^{p}$ denote the space of all analytic functions on $\mathbb{D}$ for which
$$\sup\limits_{0\leqslant r<1}(\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}d\theta)^{\frac{1}{p}}<+\infty.$$
For any $f\in H^{p}$, let
$$\|f\|_{p}=\sup\limits_{0\leqslant r<1}(\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}d\theta)^{\frac{1}{p}}.$$
Then $(H^{p},\|\cdot\|_{p})$ is a Banach space.
Let $\varphi\in Aut(\mathbb{D})$ and let $C_{\varphi}(f)=f\circ\varphi(f\in H^{p})$ be the corresponding composition operator on $H^{p}$. It is well known that for any $1\leqslant p<+\infty$ and $\varphi\in Aut(\mathbb{D})$, $C_{\varphi}$ defines a continuous linear operator on $H^{p}$ (see \cite[pages 220-221]{Zhu}).
A continuous linear operator $T$ on a Banach space $X$ is called {\it hypercyclic} if there is an element $x$ in $X$ whose orbit $\{T^{n}x:n\in\mathbb{N}\}$ under $T$ is dense in $X$; {\it topologically transitive} if for any pair $U,V$ of nonempty open subsets of $X$, there exists some nonnegative integer $n$ such that $T^{n}(U)\cap V\neq\emptyset$; and {\it mixing} if for any pair $U,V$ of nonempty open subsets of $X$, there exists some nonnegative integer $N$ such that $T^{n}(U)\cap V\neq\emptyset$ for all $n\geqslant N$.
The historical interest in hypercyclicity is related to the invariant subset problem. The invariant subset problem, which is open to this day, asks whether every continuous linear operator on any infinite dimensional separable Hilbert space possesses an invariant closed subset other than the trivial ones given by $\{0\}$ and the whole space. Counterexamples do exist for continuous linear operators on non-reflexive spaces like $l^{1}$. By a simple observation, a continuous linear operator $T$ on a Banach space $X$ has no nontrivial invariant closed subsets if and only if every nonzero vector $x$ is hypercyclic (i.e. the orbit $\{T^{n}x:n\in\mathbb{N}\}$ under $T$ is dense in $X$).
The best known examples of hypercyclic operators are due to Birkhoff \cite{Birkhoff}, MacLane \cite{MacLane} and Rolewicz \cite{Rolewicz}. Each of these papers had a profound influence on the literature on hypercyclicity. Birkhoff's result on the hypercyclicity of the translation operator $T_{a}(f)(z)=f(z+a),a\neq0,$ on the space $H(\mathbb{C})$ of entire functions has led to an extensive study of hypercyclic composition operators (see \cite[pages 110-118]{Grosse-Erdmann-Peris}), while MacLane's result on the hypercyclicity of the differentiation operator $Df=f^{\prime}$ on $H(\mathbb{C})$ initiated the study of hypercyclic differential operators (see \cite[pages 104-110]{Grosse-Erdmann-Peris}). Shift operators on Banach sequence spaces were first studied by Rolewicz \cite{Rolewicz} who showed that for any $\lambda>1$ the multiple $\lambda B$ of the unilateral unweighted backward shift $B$ on the Banach sequence spaces $l^{p}(1\leqslant p<+\infty)$ or $c_{0}$ is hypercyclic. Since then the hypercyclic, mixing, weakly mixing and chaotic properties of shift operators have been studied by several authors (see \cite{Beauzamy,Bernal-Gonzalez,Bes-Peris,Birkhoff,Chan,Costakis-Sambarino,Gethner-Shapiro,
Godefroy-Shapiro,Grosse-Erdmann,Grosse-Erdmann-Peris,Gulisashvili-MacCluer,Kitai,MacLane,Martinez-Peris,Mathew,Rolewicz,Salas91,Salas95}).
Recently Bourdon and Shapiro \cite{Bourdon-Shapiro1,Bourdon-Shapiro2} have done an extensive study of cyclic and hypercyclic linear fractional composition operators on $H^{2}$. Zorboska \cite{Zorboska} has determined hypercyclic and cyclic composition operators induced by a linear fractional self map of $\mathbb{D}$, acting on a special class of smooth weighted Hardy spaces $H^{2}(\beta)$. Gallardo and Montes \cite{Gallardo-Montes} have obtained a complete characterization of the cyclic, supercyclic and hypercyclic composition operators $C_{\varphi}$ for linear fractional self-maps $\varphi$ of $\mathbb{D}$ on any of the spaces $\mathcal{S}_{\nu},\nu\in\mathbb{R}$. In particular, $\mathcal{S}_{0}$ is the Hardy space $H^{2}$, $\mathcal{S}_{-1/2}$ is the Bergman space $A^{2}$, and $\mathcal{S}_{1/2}$ is the Dirichlet space $\mathcal{D}$ under an equivalent norm. Since the Hardy space $H^{2}$ is a particular case of the spaces $H^{p}$ with $1\leqslant p<+\infty$, it is therefore very natural to try to characterize the cyclic, supercyclic and hypercyclic composition operators $C_{\varphi}$ for linear fractional self-maps $\varphi$ of $\mathbb{D}$ on any of the spaces $H^{p}$ with $1\leqslant p<+\infty$. In this paper we will characterize the hypercyclic and mixing composition operators $C_{\varphi}$ for the automorphisms of $\mathbb{D}$ on any of the spaces $H^{p}$ with $1\leqslant p<+\infty$, generalizing the corresponding results in \cite{Bourdon-Shapiro1,Bourdon-Shapiro2}.
\begin{theorem}
Let $1\leqslant p<+\infty$. Let $\varphi\in Aut(\mathbb{D})$ and $C_{\varphi}$ be the corresponding composition operator on $H^{p}$. Then the following assertions are equivalent:
(1) $C_{\varphi}$ is hypercyclic;
(2) $C_{\varphi}$ is mixing;
(3) $\varphi$ has no fixed point in $\mathbb{D}$.
\end{theorem}
Bourdon and Shapiro \cite{Bourdon-Shapiro1,Bourdon-Shapiro2} proved the above result in the case $p=2$. Hence the above result generalizes the corresponding results in \cite{Bourdon-Shapiro1,Bourdon-Shapiro2}.
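For a concrete instance of the theorem, take $0<a<1$ and
$$\varphi(z)=\frac{z+a}{1+az},$$
which is an automorphism of the above form with $b=1$. Its fixed points satisfy $az^{2}=a$, so they are $z=\pm1$, both on $\mathbb{T}$; since $\varphi$ has no fixed point in $\mathbb{D}$, the operator $C_{\varphi}$ is mixing, hence hypercyclic, on every $H^{p}$ with $1\leqslant p<+\infty$. By contrast, for a rotation $\varphi(z)=bz$ the origin is a fixed point in $\mathbb{D}$, and $C_{\varphi}$ is not hypercyclic.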
This paper is organized as follows. In Section~\ref{S-hypercyclic} we characterize the hypercyclic and mixing composition operators $C_{\varphi}$ for the automorphisms of $\mathbb{D}$ on any of the spaces $H^{p}$ with $1\leqslant p<+\infty$.
\noindent{\it Acknowledgments.}
Z.~R. was supported by Research Program of Science at Universities of Inner Mongolia Autonomous Region (Grant No.
NJZY22328).
\section{Hypercyclic and mixing composition operators on $H^{p}$} \label{S-hypercyclic}
In this section we characterize the hypercyclic and mixing composition operators $C_{\varphi}$ for the automorphisms of $\mathbb{D}$ on any of the spaces $H^{p}$ with $1\leqslant p<+\infty$, generalizing the corresponding results in \cite{Bourdon-Shapiro1,Bourdon-Shapiro2}.
Recall the notion of annihilator introduced in \cite[page 163]{Taylor-Lay}.
\begin{definition}
Let $X$ be a normed linear space. If $A\subseteq X$, the annihilator $A^{\bot}$ of $A$ is the set
$$A^{\bot}=\{x^{\prime}\in X^{\ast}:x^{\prime}(x)=0\text{ for all }x\in A\},$$
where $X^{\ast}$ is the set of continuous linear functionals on $X$.
If $F\subseteq X^{\ast}$, the annihilator $F^{\bot}$ of $F$ is the set
$$F^{\bot}=\{x\in X:x^{\prime}(x)=0\text{ for all }x^{\prime}\in F\}.$$
\end{definition}
The following technical results will help us characterize hypercyclic and mixing composition operators on $H^{p}$ for $1\leqslant p<+\infty$.
The following proposition is well known (see \cite[page 164]{Taylor-Lay}).
\begin{proposition}
A normed linear space $X$ is a reflexive Banach space if and only if every norm-closed linear subspace in $X^{\ast}$ is $\sigma(X^{\ast},X)$-closed, where $\sigma(X^{\ast},X)$ is the weak$^{\ast}$ topology on $X^{\ast}$.
\end{proposition}
We need the following proposition (see \cite[pages 163-164]{Taylor-Lay}).
\begin{proposition}
Let $X$ be a normed linear space. If $F$ is a nonempty subset of $X^{\ast}$, then $F^{\bot\bot}$ is the $\sigma(X^{\ast},X)$-closed linear subspace generated by $F$, where $F^{\bot\bot}=(F^{\bot})^{\bot}$.
\end{proposition}
We need the following Kitai criterion (see \cite[page 71]{Grosse-Erdmann-Peris}).
\begin{proposition}
Let $T$ be a continuous linear operator on a Banach space $X$. If there are dense subsets $X_{0},Y_{0}\subseteq X$ and a map $S:Y_{0}\rightarrow Y_{0}$ such that, for any $x\in X_{0},y\in Y_{0}$,
(i) $T^{n}x\rightarrow0$,
(ii) $S^{n}y\rightarrow0$,
(iii) $TSy=y$,
then $T$ is mixing.
\end{proposition}
The following propositions are the major techniques we need.
\begin{proposition}
For any $1\leqslant p<+\infty$, each point evaluation $k_{\lambda}:H^{p}\rightarrow\mathbb{C}(\lambda\in\mathbb{D})$ is continuous on $H^{p}$, where $k_{\lambda}(f)=f(\lambda)(f\in H^{p})$.
\end{proposition}
\begin{proof}
Let $\lambda\in\mathbb{D}$, $\{f_{n}\}_{n=1}^{\infty}$ be a sequence in $H^{p}$, $f\in H^{p}$ and $\lim\limits_{n\rightarrow\infty}\|f_{n}-f\|_{p}=0$. We will show that $\lim\limits_{n\rightarrow\infty}f_{n}(\lambda)=f(\lambda)$. Since $\lambda\in\mathbb{D}$, we may choose $r,R\in(0,1)$ with $|\lambda|<r<R<1$. Notice that
\begin{align*}
\|f_{n}-f\|_{p}\geqslant&(\frac{1}{2\pi}\int_{0}^{2\pi}|f_{n}(Re^{i\theta})-f(Re^{i\theta})|^{p}d\theta)^{\frac{1}{p}}\\
\geqslant&\frac{1}{(2\pi)^{\frac{1}{p}}}(\int_{0}^{2\pi}|f_{n}(Re^{i\theta})-f(Re^{i\theta})|^{p}d\theta)^{\frac{1}{p}}\\
\geqslant&\frac{1}{(2\pi)^{\frac{1}{p}}}\frac{1}{(2\pi)^{\frac{1}{q}}}\int_{0}^{2\pi}|f_{n}(Re^{i\theta})-f(Re^{i\theta})|d\theta(\text{ where }\frac{1}{p}+\frac{1}{q}=1)\\
=&\frac{1}{2\pi}\int_{0}^{2\pi}|f_{n}(Re^{i\theta})-f(Re^{i\theta})|d\theta.
\end{align*}
By Cauchy's integral formula, we have
\begin{align*}
(f_{n}-f)(\lambda)=&\frac{1}{2\pi i}\int_{\gamma}\frac{(f_{n}-f)(\omega)}{\omega-\lambda}d\omega(\text{ where }\gamma(\theta)=Re^{i\theta},0\leqslant\theta\leqslant2\pi)\\
=&\frac{1}{2\pi i}\int_{0}^{2\pi}\frac{f_{n}(Re^{i\theta})-f(Re^{i\theta})}{Re^{i\theta}-\lambda}d(Re^{i\theta})\\
=&\frac{R}{2\pi}\int_{0}^{2\pi}\frac{f_{n}(Re^{i\theta})-f(Re^{i\theta})}{Re^{i\theta}-\lambda}e^{i\theta}d\theta.
\end{align*}
Therefore
\begin{align*}
&\frac{1}{2\pi}\int_{0}^{2\pi}|f_{n}(Re^{i\theta})-f(Re^{i\theta})|d\theta\\
&=\frac{1}{2\pi}\int_{0}^{2\pi}|\frac{f_{n}(Re^{i\theta})-f(Re^{i\theta})}{Re^{i\theta}-\lambda}e^{i\theta}|\cdot |Re^{i\theta}-\lambda|d\theta\\
&\geqslant\frac{R-r}{2\pi}\int_{0}^{2\pi}|\frac{f_{n}(Re^{i\theta})-f(Re^{i\theta})}{Re^{i\theta}-\lambda}e^{i\theta}|d\theta\\
&\geqslant\frac{R-r}{2\pi}|\int_{0}^{2\pi}\frac{f_{n}(Re^{i\theta})-f(Re^{i\theta})}{Re^{i\theta}-\lambda}e^{i\theta}d\theta|\\
&=\frac{R-r}{2\pi}|\frac{2\pi}{R}(f_{n}(\lambda)-f(\lambda))|\\
&=\frac{R-r}{R}|f_{n}(\lambda)-f(\lambda)|.
\end{align*}
Hence $\|f_{n}-f\|_{p}\geqslant\frac{R-r}{R}|f_{n}(\lambda)-f(\lambda)|$. Since $\lim\limits_{n\rightarrow\infty}\|f_{n}-f\|_{p}=0$, $\lim\limits_{n\rightarrow\infty}f_{n}(\lambda)=f(\lambda)$.
\end{proof}
\begin{flushright}
$\Box$
\end{flushright}
If $f\in H^{p}(1\leqslant p<+\infty)$, then $\lim\limits_{r\rightarrow1^{-}}f(re^{i\theta})$ exists for almost all values of $\theta$ (see \cite[page 17]{Duren}), thus defining a function which we denote by $f(e^{i\theta})$.
\begin{proposition}
For any $1<p<+\infty$, $H^{p}$ is reflexive.
\end{proposition}
\begin{proof}
Since $1<p<+\infty$, each $\phi\in(H^{p})^{\ast}$ is representable in the following form
$$\phi(f)=\frac{1}{2\pi}\int_{0}^{2\pi}f(e^{i\theta})g(e^{-i\theta})d\theta(f\in H^{p})$$
by a unique function $g\in H^{q}$, where $\frac{1}{p}+\frac{1}{q}=1$. Furthermore, the linear operator $T:(H^{p})^{\ast}\rightarrow H^{q}$ defined by $T(\phi)=g(\phi\in(H^{p})^{\ast})$ is a topological isomorphism (see \cite[pages 112-113]{Duren}). Let $x^{\prime\prime}\in(H^{p})^{\ast\ast}$. We will show that there exists a $g\in H^{p}$ such that $x^{\prime\prime}(x^{\prime})=x^{\prime}(g)$ for all $x^{\prime}\in(H^{p})^{\ast}$. Let $y^{\prime\prime}=x^{\prime\prime}\circ T^{-1}$. Then $y^{\prime\prime}\in(H^{q})^{\ast}$ and $y^{\prime\prime}(Tx^{\prime})=x^{\prime\prime}(x^{\prime})$ for $x^{\prime}\in(H^{p})^{\ast}$. Hence there exists a unique function $g\in H^{p}$ such that
$$y^{\prime\prime}(f)=\frac{1}{2\pi}\int_{0}^{2\pi}f(e^{i\theta})g(e^{-i\theta})d\theta(f\in H^{q}).$$
Therefore, if $x^{\prime}\in(H^{p})^{\ast}$,
\begin{align*}
x^{\prime\prime}(x^{\prime})=&y^{\prime\prime}(Tx^{\prime})\\
=&\frac{1}{2\pi}\int_{0}^{2\pi}(Tx^{\prime})(e^{i\theta})g(e^{-i\theta})d\theta\\
=&-\frac{1}{2\pi}\int_{0}^{-2\pi}g(e^{i\theta})(Tx^{\prime})(e^{-i\theta})d\theta\\
=&\frac{1}{2\pi}\int_{-2\pi}^{0}g(e^{i\theta})(Tx^{\prime})(e^{-i\theta})d\theta\\
=&\frac{1}{2\pi}\int_{0}^{2\pi}g(e^{i\theta})(Tx^{\prime})(e^{-i\theta})d\theta\\
=&x^{\prime}(g).
\end{align*}
This implies that $x^{\prime\prime}(x^{\prime})=x^{\prime}(g)$ for all $x^{\prime}\in(H^{p})^{\ast}$ and $H^{p}$ is reflexive.
\end{proof}
\begin{flushright}
$\Box$
\end{flushright}
We need the following important properties of the $H^{p}$-spaces (see \cite[pages 9, 12, 21]{Duren}).
\begin{proposition}
Let $f\in H^{p},1\leqslant p<+\infty$. Then
(1) $\|f\|_{p}=\lim\limits_{r\rightarrow1^{-}}(\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}d\theta)^{\frac{1}{p}}$;
(2) $\lim\limits_{r\rightarrow1^{-}}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}d\theta=\int_{0}^{2\pi}|f(e^{i\theta})|^{p}d\theta$;
(3) $\lim\limits_{r\rightarrow1^{-}}\int_{0}^{2\pi}|f(re^{i\theta})-f(e^{i\theta})|^{p}d\theta=0$.
\end{proposition}
\begin{proposition}
Let $1\leqslant p<+\infty$. Then the polynomials form a dense set in $H^{p}$. Hence $H^{p}$ is separable.
\end{proposition}
\begin{proof}
It is obvious that every polynomial belongs to $H^{p}$. Next we will show that the polynomials form a dense set in $H^{p}$.
Let $f\in H^{p}$ and $\varepsilon>0$ be given. By Proposition 2.7 we have
$$\lim_{r\rightarrow1^{-}}\int_{0}^{2\pi}|f(re^{i\theta})-f(e^{i\theta})|^{p}d\theta=0.$$
Hence we may choose $0<\rho<1$ such that
$$\int_{0}^{2\pi}|f(\rho e^{i\theta})-f(e^{i\theta})|^{p}d\theta<2\pi(\frac{\varepsilon}{2})^{p}.$$
Now let $S_{n}(z)$ denote the $n$th partial sum of the Taylor series of $f$ at the origin. Since $S_{n}(z)\rightarrow f(z)$ uniformly on $\{z\in\mathbb{C}:|z|=\rho\}$, we may choose a positive integer $n_{0}$ such that
$$\int_{0}^{2\pi}|S_{n_{0}}(\rho e^{i\theta})-f(\rho e^{i\theta})|^{p}d\theta<2\pi(\frac{\varepsilon}{2})^{p}.$$
Thus, using Minkowski's inequality in the case $1\leqslant p<+\infty$, we find
\begin{align*}
&(\int_{0}^{2\pi}|S_{n_{0}}(\rho e^{i\theta})-f(e^{i\theta})|^{p}d\theta)^{\frac{1}{p}}\\
&\leqslant(\int_{0}^{2\pi}|S_{n_{0}}(\rho e^{i\theta})-f(\rho e^{i\theta})|^{p}d\theta)^{\frac{1}{p}}+(\int_{0}^{2\pi}|f(\rho e^{i\theta})-f(e^{i\theta})|^{p}d\theta)^{\frac{1}{p}}\\
&<(2\pi)^{\frac{1}{p}}\frac{\varepsilon}{2}+(2\pi)^{\frac{1}{p}}\frac{\varepsilon}{2}\\
&=(2\pi)^{\frac{1}{p}}\varepsilon.
\end{align*}
Let $(S_{n_{0}})_{\rho}(z)=S_{n_{0}}(\rho z)(z\in\mathbb{C})$. Next we will show that $\|(S_{n_{0}})_{\rho}-f\|_{p}<\varepsilon$.
\begin{align*}
\|(S_{n_{0}})_{\rho}-f\|_{p}=&\sup\limits_{0\leqslant r<1}(\frac{1}{2\pi}\int_{0}^{2\pi}|S_{n_{0}}(r\rho e^{i\theta})-f(re^{i\theta})|^{p}d\theta)^{\frac{1}{p}}\\
=&\lim_{r\rightarrow1^{-}}(\frac{1}{2\pi}\int_{0}^{2\pi}|S_{n_{0}}(r\rho e^{i\theta})-f(re^{i\theta})|^{p}d\theta)^{\frac{1}{p}}(\text{ By Proposition 2.7})\\
=&\frac{1}{(2\pi)^{\frac{1}{p}}}\lim_{r\rightarrow1^{-}}(\int_{0}^{2\pi}|S_{n_{0}}(r\rho e^{i\theta})-f(re^{i\theta})|^{p}d\theta)^{\frac{1}{p}}\\
=&\frac{1}{(2\pi)^{\frac{1}{p}}}(\int_{0}^{2\pi}|S_{n_{0}}(\rho e^{i\theta})-f(e^{i\theta})|^{p}d\theta)^{\frac{1}{p}}(\text{ By Proposition 2.7})\\
<&\frac{1}{(2\pi)^{\frac{1}{p}}}(2\pi)^{\frac{1}{p}}\varepsilon\\
=&\varepsilon.
\end{align*}
Notice that $(S_{n_{0}})_{\rho}$ is also a polynomial. Hence the polynomials form a dense set in $H^{p}$.
\end{proof}
\begin{flushright}
$\Box$
\end{flushright}
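For $p=2$ the construction in this proof can be illustrated numerically: by Parseval's identity, $\|f\|_{2}^{2}=\sum|a_{n}|^{2}$, so the error $\|(S_{n_{0}})_{\rho}-f\|_{2}$ is computable directly from the Taylor coefficients. The sketch below uses the test function $f(z)=\sum_{n\geqslant1}z^{n}/n\in H^{2}$, which is our choice for illustration and is not taken from the text:

```python
import math

# Taylor coefficients of f(z) = sum_{n>=1} z^n / n = -log(1-z); f lies in
# H^2 because sum 1/n^2 < infinity, and ||f||_2^2 = sum |a_n|^2 (Parseval).
N = 200000  # truncation level approximating the infinite coefficient tail

def h2_error(rho, n0):
    """H^2 distance between f and the dilated partial sum (S_{n0})_rho.

    (S_{n0})_rho has coefficients a_n rho^n for n <= n0 and 0 beyond, so
    the squared error is sum_{n<=n0} a_n^2 (rho^n - 1)^2 plus the tail.
    """
    err2 = sum((rho ** n - 1.0) ** 2 / n ** 2 for n in range(1, n0 + 1))
    err2 += sum(1.0 / n ** 2 for n in range(n0 + 1, N))
    return math.sqrt(err2)

# As rho -> 1^- and n0 -> infinity the error tends to 0, as the proof shows.
errors = [h2_error(0.9, 50), h2_error(0.99, 500), h2_error(0.999, 5000)]
print(errors)  # strictly decreasing
assert errors[0] > errors[1] > errors[2]
```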
We need the following property of the Taylor coefficients of $H^{p}$ functions (see \cite[page 94]{Duren}).
\begin{proposition}
Let $1\leqslant p\leqslant2$ and $f\in H^{p}$. Let $\sum\limits_{n=0}^{\infty}a_{n}z^{n}$ be the Taylor series of $f$ at the origin. Then $\{a_{n}\}_{n=0}^{\infty}\in l^{q}$, where $\frac{1}{p}+\frac{1}{q}=1$.
\end{proposition}
Our aim now is to characterize when $C_{\varphi}$ is hypercyclic on $H^{p}$. To this end we need some important dynamical properties of automorphisms $\varphi$ of $\mathbb{D}$.
Let
$$\varphi(z)=\frac{az+b}{cz+d},\qquad ad-bc\neq0$$
be an arbitrary linear fractional transformation, which we consider as a map on the extended complex plane $\widehat{\mathbb{C}}$. Then $\varphi$ has either one or two fixed points in $\widehat{\mathbb{C}}$, or it is the identity.
Suppose that $\varphi$ has two distinct fixed points $z_{0}$ and $z_{1}$, and let $\sigma$ be a linear fractional transformation that maps $z_{0}$ to 0 and $z_{1}$ to $\infty$. Then $\psi:=\sigma\circ\varphi\circ\sigma^{-1}$ has fixed points 0 and $\infty$, which easily implies that $\psi(z)=\lambda z$ for some $\lambda\neq0$. The constant $\lambda$ is called the \emph{multiplier} of $\varphi$. Replacing $\sigma$ by $1/\sigma$ one sees that also $1/\lambda$ is a multiplier, which, however, causes no problem in the following.
\begin{definition}
Let $\varphi$ be a linear fractional transformation that is not the identity.
(a) If $\varphi$ has a single fixed point then it is called \emph{parabolic}.
(b) Suppose that $\varphi$ has two distinct fixed points, and let $\lambda$ be its multiplier. If $|\lambda|=1$ then $\varphi$ is called \emph{elliptic}; if $\lambda>0$ then $\varphi$ is called \emph{hyperbolic}; in all other cases, $\varphi$ is called \emph{loxodromic}.
\end{definition}
We need the following dynamical properties of automorphisms $\varphi$ of $\mathbb{D}$ (see \cite[pages 125-126]{Grosse-Erdmann-Peris}).
\begin{proposition}
Let $\varphi\in Aut(\mathbb{D})$, not the identity. Then we have the following:
(i) if $\varphi$ is parabolic then its fixed point $z_{0}$ lies in $\mathbb{T}$, and $\varphi^{n}(z)\rightarrow z_{0}, \varphi^{-n}(z)\rightarrow z_{0}$ for all $z\in\widehat{\mathbb{C}}$;
(ii) if $\varphi$ is elliptic then it has a fixed point in $\mathbb{D}$;
(iii) if $\varphi$ is hyperbolic then it has distinct fixed points $z_{0}$ and $z_{1}$ in $\mathbb{T}$ such that $\varphi^{n}(z)\rightarrow z_{0}$ for all $z\in\widehat{\mathbb{C}}, z\neq z_{1}$, and $\varphi^{-n}(z)\rightarrow z_{1}$ for all $z\in\widehat{\mathbb{C}}, z\neq z_{0}$;
(iv) $\varphi$ cannot be loxodromic.
\end{proposition}
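The hyperbolic case of Proposition 2.11 is easy to observe numerically. As an illustration (our example, not taken from the text), the automorphism $\varphi(z)=(z+a)/(1+az)$ with $0<a<1$ is hyperbolic with fixed points $z_{0}=1$, $z_{1}=-1$ and multiplier $\lambda=(1-a)/(1+a)$:

```python
A = 0.5  # parameter of the automorphism; any 0 < A < 1 gives the same picture

def phi(z, a=A):
    """Hyperbolic automorphism of the unit disc with fixed points +1 and -1."""
    return (z + a) / (1 + a * z)

def phi_inv(w, a=A):
    """Inverse automorphism (replace a by -a)."""
    return (w - a) / (1 - a * w)

def iterate(f, z, n):
    for _ in range(n):
        z = f(z)
    return z

# Forward orbits converge to z0 = 1 and backward orbits to z1 = -1, for
# every starting point other than the opposite fixed point.
for z in (0.0, 0.3 + 0.4j, -0.9, 0.99j):
    assert abs(iterate(phi, z, 60) - 1.0) < 1e-10
    assert abs(iterate(phi_inv, z, 60) + 1.0) < 1e-10

# The fixed points themselves do not move.
assert phi(1.0) == 1.0 and phi(-1.0) == -1.0
```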
The dynamical properties of $\varphi\in Aut(\mathbb{D})$ thus determine the dynamical properties of $C_{\varphi}$.
Finally we prove Theorem 1.1.
$\mathbf{Proof~of~Theorem~1.1.}$
(2)$\Rightarrow$(1) Assume that $C_{\varphi}$ is mixing. Then $C_{\varphi}$ is topologically transitive. Since a continuous linear operator on a separable Banach space is topologically transitive if and only if it is hypercyclic (see \cite[page 10]{Grosse-Erdmann-Peris}), and $H^{p}$ is separable by Proposition 2.8, $C_{\varphi}$ is hypercyclic.
(1)$\Rightarrow$(3) Assume that $C_{\varphi}$ is hypercyclic. We will show that $\varphi$ has no fixed point in $\mathbb{D}$. Suppose that $\varphi$ has a fixed point $z_{0}\in\mathbb{D}$. Since $C_{\varphi}$ is hypercyclic, there exists $f\in H^{p}$ such that $\{(C_{\varphi})^{n}f:n\geqslant0\}$ is dense in $H^{p}$. Choose $g\in H^{p}$ with $g(z_{0})\neq f(z_{0})$. Since $\overline{\{(C_{\varphi})^{n}f:n\geqslant0\}}=H^{p}$, we may choose a strictly increasing sequence $\{n_{k}\}_{k=1}^{\infty}$ of positive integers such that $\lim\limits_{k\rightarrow\infty}(C_{\varphi})^{n_{k}}f=g$. By Proposition 2.5 we have $\lim\limits_{k\rightarrow\infty}((C_{\varphi})^{n_{k}}f)(z_{0})=g(z_{0})$. Notice that
$$((C_{\varphi})^{n_{k}}f)(z_{0})=(f\circ\varphi^{n_{k}})(z_{0})=f(\varphi^{n_{k}}(z_{0}))=f(z_{0}).$$
Hence $f(z_{0})=g(z_{0})$, which contradicts the choice $g(z_{0})\neq f(z_{0})$. Therefore $\varphi$ has no fixed point in $\mathbb{D}$.
(3)$\Rightarrow$(2) Suppose that $\varphi$ has no fixed point in $\mathbb{D}$. It suffices to show that $C_{\varphi}$ satisfies Kitai's criterion. By Proposition 2.11, $\varphi$ is either parabolic or hyperbolic, and in both cases $\varphi$ has fixed points $z_{0}$ and $z_{1}$ in $\mathbb{T}$ (possibly with $z_{0}=z_{1}$) such that $\varphi^{n}(z)\rightarrow z_{0}$ for all $z\in\mathbb{T}\backslash\{z_{1}\}$ and $\varphi^{-n}(z)\rightarrow z_{1}$ for all $z\in\mathbb{T}\backslash\{z_{0}\}$.
Now, for $X_{0}$ we will take the subspace of $H^{p}$ of all functions that are analytic on a neighbourhood of $\overline{\mathbb{D}}$ and that vanish at $z_{0}$. Since $z_{0}$ is a fixed point of $\varphi$, $C_{\varphi}$ maps $X_{0}$ into itself.
$\mathbf{Claim~1.}$ For any $1\leqslant p<+\infty$ we have $\overline{X_{0}}=H^{p}$.
We divide it into two cases.
Case i: $1<p<+\infty$. First we show that $X_{0}^{\bot}=\{0\}$. Let $x^{\ast}\in X_{0}^{\bot}$; we will show that $x^{\ast}=0$. Since $1<p<+\infty$ and $x^{\ast}\in(H^{p})^{\ast}$, there exists a unique function $g\in H^{q}$ such that
$$x^{\ast}(f)=\frac{1}{2\pi}\int_{0}^{2\pi}f(e^{i\theta})g(e^{-i\theta})d\theta(f\in H^{p}),$$
where $\frac{1}{p}+\frac{1}{q}=1$ (see \cite[pages 112-113]{Duren}). By Proposition 2.7 we have
$$\lim_{r\rightarrow1^{-}}\int_{0}^{2\pi}|g(re^{i\theta})-g(e^{i\theta})|^{q}d\theta=0.$$
Hence for each $f\in H^{p}$ we have
$$\lim_{r\rightarrow1^{-}}\frac{1}{2\pi}\int_{0}^{2\pi}f(e^{i\theta})g(re^{-i\theta})d\theta=\frac{1}{2\pi}\int_{0}^{2\pi}f(e^{i\theta})g(e^{-i\theta})d\theta.$$
Since, for each $n\geqslant0$, the function $g_{n}:\mathbb{C}\rightarrow\mathbb{C}$ defined by $g_{n}(z)=z_{0}z^{n}-z^{n+1}$ satisfies $g_{n}(z_{0})=0$ and hence belongs to $X_{0}$, we have $x^{\ast}(g_{n})=0$ for all $n\geqslant0$. Notice that
\begin{align*}
x^{\ast}(g_{n})=&\frac{1}{2\pi}\int_{0}^{2\pi}g_{n}(e^{i\theta})g(e^{-i\theta})d\theta\\
=&\lim_{r\rightarrow1^{-}}\frac{1}{2\pi}\int_{0}^{2\pi}g_{n}(e^{i\theta})g(re^{-i\theta})d\theta.
\end{align*}
Hence for any $n\geqslant0$ we have
$$\lim_{r\rightarrow1^{-}}\frac{1}{2\pi}\int_{0}^{2\pi}g_{n}(e^{i\theta})g(re^{-i\theta})d\theta=0.$$
Let $g(z)=\sum\limits_{n=0}^{\infty}a_{n}z^{n}(z\in\mathbb{D})$, $0<r<1$ and $n\geqslant0$. Then
\begin{align*}
&\frac{1}{2\pi}\int_{0}^{2\pi}g_{n}(e^{i\theta})g(re^{-i\theta})d\theta\\
&=\frac{1}{2\pi}\int_{0}^{2\pi}(z_{0}e^{in\theta}-e^{i(n+1)\theta})g(re^{-i\theta})d\theta\\
&=\frac{1}{2\pi}\int_{0}^{2\pi}z_{0}e^{in\theta}g(re^{-i\theta})d\theta-\frac{1}{2\pi}\int_{0}^{2\pi}e^{i(n+1)\theta}g(re^{-i\theta})d\theta\\
&=\frac{1}{2\pi}\int_{0}^{2\pi}z_{0}e^{in\theta}(\sum_{k=0}^{\infty}a_{k}r^{k}e^{-ik\theta})d\theta-\frac{1}{2\pi}\int_{0}^{2\pi}e^{i(n+1)\theta}
(\sum_{k=0}^{\infty}a_{k}r^{k}e^{-ik\theta})d\theta\\
&=\sum_{k=0}^{\infty}\frac{1}{2\pi}\int_{0}^{2\pi}z_{0}e^{in\theta}a_{k}r^{k}e^{-ik\theta}d\theta-\sum_{k=0}^{\infty}\frac{1}{2\pi}\int_{0}^{2\pi}e^{i(n+1)\theta}
a_{k}r^{k}e^{-ik\theta}d\theta\\
&=\frac{1}{2\pi}\int_{0}^{2\pi}z_{0}a_{n}r^{n}d\theta-\frac{1}{2\pi}\int_{0}^{2\pi}a_{n+1}r^{n+1}d\theta\\
&=z_{0}a_{n}r^{n}-a_{n+1}r^{n+1}.
\end{align*}
Since $\lim\limits_{r\rightarrow1^{-}}\frac{1}{2\pi}\int_{0}^{2\pi}g_{n}(e^{i\theta})g(re^{-i\theta})d\theta=0$, we have $$\lim\limits_{r\rightarrow1^{-}}(z_{0}a_{n}r^{n}-a_{n+1}r^{n+1})=z_{0}a_{n}-a_{n+1}=0.$$
Hence $a_{n}=a_{0}z_{0}^{n}(n\geqslant0)$. Since $q=\frac{p}{p-1}>1$, we may choose $1<q_{1}\leqslant2$ with $q_{1}<q$. It is evident that $H^{q}\subseteq H^{q_{1}}$. Since $g\in H^{q}$, $g\in H^{q_{1}}$. By Proposition 2.9 we have $\{a_{n}\}_{n=0}^{\infty}\in l^{p_{1}}$, where $\frac{1}{p_{1}}+\frac{1}{q_{1}}=1$. Notice that $$\sum\limits_{n=0}^{\infty}|a_{n}|^{p_{1}}=\sum\limits_{n=0}^{\infty}|a_{0}z_{0}^{n}|^{p_{1}}=\sum\limits_{n=0}^{\infty}|a_{0}|^{p_{1}}<+\infty.$$
Hence $a_{0}=0$ and $a_{n}=0$ for $n\geqslant0$. Therefore $g(z)=0$ for all $|z|<1$ and $x^{\ast}=0$.
Second we show that $\overline{X_{0}}=H^{p}$. Since $\overline{X_{0}}^{\bot}=X_{0}^{\bot}$ and $X_{0}^{\bot}=\{0\}$, we have $\overline{X_{0}}^{\bot}=\{0\}$, and hence $\overline{X_{0}}^{\bot\bot}=\{0\}^{\bot}=H^{p}$. Since $1<p<+\infty$, $H^{p}$ is reflexive by Proposition 2.6. Since $\overline{X_{0}}$ is norm-closed, Proposition 2.2 shows that $\overline{X_{0}}$ is weakly closed. Finally, by Proposition 2.3 we have $\overline{X_{0}}^{\bot\bot}=\overline{X_{0}}$. Hence $\overline{X_{0}}=H^{p}$. This proves the case $1<p<+\infty$.
Case ii: $p=1$. We will show that $\overline{X_{0}}=H^{1}$. Let $f\in H^{1}$ and $\varepsilon>0$; we will find $g\in X_{0}$ such that $\|f-g\|_{1}<\varepsilon$. By Proposition 2.8, there exists a polynomial $h$ such that $\|f-h\|_{1}<\frac{\varepsilon}{2}$. By Case i with $p=2$, $X_{0}$ is dense in $H^{2}$, so there exists $g\in X_{0}$ such that $\|g-h\|_{2}<\frac{\varepsilon}{2}$. Since $\|g-h\|_{1}\leqslant\|g-h\|_{2}$, we get $\|g-h\|_{1}<\frac{\varepsilon}{2}$, and hence $\|f-g\|_{1}\leqslant\|f-h\|_{1}+\|h-g\|_{1}<\varepsilon$. This proves the case $p=1$.
$\mathbf{Claim~2.}$ $(C_{\varphi})^{n}f\rightarrow0$ for all $f\in X_{0}$.
Let $f\in X_{0}$. Since $f\circ\varphi^{n}$ is continuous on $\overline{\mathbb{D}}$, by Proposition 2.7 we have
$$\|(C_{\varphi})^{n}f\|_{p}^{p}=\frac{1}{2\pi}\int_{0}^{2\pi}|f(\varphi^{n}(e^{i\theta}))|^{p}d\theta.$$
Since the integrands are uniformly bounded and converge to $|f(z_{0})|^{p}=0$ for every $\theta$, with at most one exception, the dominated convergence theorem implies that $(C_{\varphi})^{n}f\rightarrow0$. This proves Claim 2.
Next, for $Y_{0}$ we will take the subspace of $H^{p}$ of all functions that are analytic on a neighbourhood of $\overline{\mathbb{D}}$ and that vanish at $z_{1}$, and for $S$ we take the map $S=C_{\varphi^{-1}}$. Since $z_{1}$ is a fixed point of $\varphi^{-1}$, $S$ maps $Y_{0}$ into itself, and clearly $C_{\varphi}S=I$. It follows as above that $Y_{0}$ is dense in $H^{p}$ and that $S^{n}f\rightarrow0$ for all $f\in Y_{0}$. Therefore the conditions of Kitai's criterion are satisfied, so that $C_{\varphi}$ is mixing.
\begin{flushright}
$\Box$
\end{flushright}
Bourdon and Shapiro \cite{Bourdon-Shapiro1,Bourdon-Shapiro2} proved Theorem 1.1 in the case $p=2$. Hence Theorem 1.1 generalizes the corresponding results in \cite{Bourdon-Shapiro1,Bourdon-Shapiro2}.
\section{Introduction}
Carbon onions (or fullerene onions) are concentric fullerenes nested in each other. Their formation was first observed in 1992 by Ugarte, who focused an electron beam on a sample of amorphous carbon \cite{1}. Nowadays carbon onions can be produced by nanodiamond annealing \cite{2}, by arc discharge between two graphite electrodes in water \cite{3,4}, or by naphthalene combustion \cite{5}. The unique properties of carbon onions make them a building block for various electronic devices. Carbon onions are used as components of electric double-layer capacitors, also called supercapacitors \cite{6,7}. Pech et al. prepared ultrahigh-power micrometre-sized supercapacitors based on carbon onions \cite{8}. In combination with ${\rm Co_3O_4}$ and ${\rm MnO_2}$, carbon onions can serve as electrode materials for lithium-ion batteries \cite{9,10}. Nanofilters based on carbon onions and their composites can be applied as electromagnetic interference shields for terahertz waves \cite{11}. The application of carbon onions in electric devices requires information about their electronic parameters, such as the energy gap and the Fermi levels.
In the case of the bilayer onions investigated in this paper, the energy gap is given by the HOMO-LUMO gap: the energy difference between the lowest unoccupied molecular orbital of the inner onion shell and the highest occupied molecular orbital of the outer onion shell. This difference is influenced by the difference between the Fermi levels of the two shells.
\begin{figure}[htbp]
\includegraphics[width=6cm]{onion1.jpg}
\caption{Bilayer carbon onion.}\label{fg1}
\end{figure}
In our previous paper we calculated the energy gap of the onion ${\rm C_{60}@C_{240}}$ (Fig. \ref{fg1}) by treating the hybridization of the fullerene orbitals \cite{12}. To calculate this value we used the experimental data on the isolated fullerenes ${\rm C_{60}}$ and ${\rm C_{240}}$ \cite{13,14}. To extend our knowledge of the electronic properties to other bilayer onions, we need the corresponding information about their isolated shells. In this paper we obtain this information with the tight-binding method that was successfully used to investigate the electronic and geometric properties of bilayer onions in \cite{15}.
In section 2, we describe the tight-binding model with the original parametrization. From this model we obtain the geometry parameters needed for the calculation of the Fermi levels of the considered onions. We also calculate the van der Waals interaction and, on this basis, determine which forms of carbon onions can exist.
In section 3, we briefly describe how to obtain the HOMO-LUMO gaps of the bilayer onions by combining the calculations of the Fermi levels and of the HOMO and LUMO energies. How to calculate the energy difference between the Fermi levels is described in \cite{12}; it is computed with a parametrization based on the hybridization of the orbitals, using the geometry parameters obtained in the previous section.\\
\section{The results obtained by tight-binding model with original parametrization}
To calculate the geometry and the electronic structure of the isolated fullerenes, an original parametrization of the tight-binding method with the Harrison-Goodwin modification was used. To obtain the metric features of the investigated object, the total energy $E$ is minimized with respect to the bond lengths:
\begin{equation}E = {E_{bond}} + {E_{rep}},\end{equation}
where $E_{bond}$ is the energy of the occupied energy states (levels) and $E_{rep}$ is the repulsive energy, which accounts for the internuclear and electron-electron interactions. The energy of the band structure is determined by the formula
\begin{equation}{E_{bond}} = 2\mathop \sum \limits_n {\varepsilon _n},\end{equation}
where $\varepsilon_n$ is the $n$-th energy level, corresponding to an eigenvalue of the Hamiltonian (the factor 2 accounts for the electron spin). The repulsion energy is represented as a sum of pair potentials:
\begin{equation}{E_{rep}} = \mathop \sum \limits_{i < j} {V_{rep}}\left( {\left| {{r_i} - {r_j}} \right|} \right)\end{equation}
where $i, j$ are the numbers of the interacting atoms and $r_i, r_j$ their Cartesian coordinates. The term $V_{rep}$ is determined by Goodwin's expression \cite{16}:
\begin{equation}{V_{rep}}\left( r \right) = V_{rep}^0{\left(\frac{{1.54}}{r}\right)^{4.455}} \times\exp\left\{4.455\left[ - {{\left( {\frac{r}{{2.32}}} \right)}^{22}} + {{\left( {\frac{{1.54}}{{2.32}}} \right)}^{22}}\right] \right\},\end{equation}
where $V_{rep}^0= 10.92$ eV.
To find $E_{bond}$ we need to fill in the Hamiltonian. For this we have to determine the carbon terms $\varepsilon_s$ and $\varepsilon_p$ and the overlap integrals $V_{ss\sigma }^0,V_{sp\sigma }^0,V_{pp\sigma }^0,V_{pp\pi }^0$. The interatomic matrix elements of the Hamiltonian were determined as follows \cite{19}:
\begin{equation}{V_{ij\alpha }}\left( r \right) = V_{ij\alpha }^0{\left(\frac{{1.54}}{r}\right)^{2.796}} \times \exp\left\{ {2.796\left[ - {{\left( {\frac{r}{{2.32}}} \right)}^{22}} + {{\left( {\frac{{1.54}}{{2.32}}} \right)}^{22}}\right]} \right\},\end{equation}
where $r$ is the distance between the atoms, $i, j$ are the orbital momenta of the wave functions, and $\alpha$ is an index denoting the bond type ($\sigma$ or $\pi$). The initial Goodwin modification had a number of disadvantages: it did not allow the ionization potential to be calculated, and the energy gaps obtained by this method differed from the experimental data. We therefore developed an original parametrization of the Hamiltonian by comparing the calculated values with the experimental data for fullerene ${\rm C_{60}}$ (bond lengths $r_1$ and $r_2$, energy gap $E_g$ and ionization potential $I$ \cite{17}). The obtained matrix elements of the Hamiltonian are shown in Table \ref{tab1}. With this parametrization we found for the fullerene ${\rm C_{60}}$: ${r_1} = 1.4495\,{\rm \AA},\,\,{r_2} = 1.4005\,{\rm \AA},\,\,{E_g} = 1.96\,{\rm eV},\,\,I = 7.6099\,{\rm eV}$.
\renewcommand{\arraystretch}{2}
\begin{table}[htbp]
\caption{Atom terms of carbon and ground overlapping integrals (in eV).}
\begin{tabular}{|C{1.5cm}||C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|}
\hline
term & $\varepsilon_s$ & $\varepsilon_p$ & $V_{ss\sigma}^0$ & $V_{sp\sigma}^0$ & $V_{pp\sigma}^0$ & $V_{pp\pi}^0$\\
\hline
value & -10.932 & -5.991 & -4.344 & 3.969 & 5.457 & -1.938\\
\hline
\end{tabular}
\label{tab1}
\end{table}
\renewcommand{\arraystretch}{1}
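Both radial laws above — the repulsive potential and the hopping integrals — share the same functional form, and both scaling factors equal one at $r=1.54\,{\rm \AA}$, where the functions reduce to their ground values $V_{rep}^{0}$ and $V_{ij\alpha }^{0}$. A minimal numerical sketch of these two expressions, using the values of Table \ref{tab1}:

```python
import math

V_REP0 = 10.92  # eV, prefactor of the repulsive pair potential
# Ground overlap integrals from Table 1 (eV).
V0 = {"ss_sigma": -4.344, "sp_sigma": 3.969, "pp_sigma": 5.457, "pp_pi": -1.938}

def scaling(r, n):
    """Common radial factor (1.54/r)^n * exp{n[-(r/2.32)^22 + (1.54/2.32)^22]}."""
    return (1.54 / r) ** n * math.exp(
        n * (-((r / 2.32) ** 22) + (1.54 / 2.32) ** 22)
    )

def v_rep(r):
    """Repulsive pair potential, r in angstroms, result in eV."""
    return V_REP0 * scaling(r, 4.455)

def hop(kind, r):
    """Interatomic Hamiltonian matrix element V_{ij,alpha}(r)."""
    return V0[kind] * scaling(r, 2.796)

# At r = 1.54 A the scaling equals 1 exactly, recovering the ground values.
assert abs(v_rep(1.54) - V_REP0) < 1e-12
assert abs(hop("pp_pi", 1.54) - V0["pp_pi"]) < 1e-12
# Both terms decay very rapidly beyond the cutoff region near 2.32 A.
assert v_rep(2.6) < 1e-3 and abs(hop("pp_sigma", 2.6)) < 1e-2
```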
Let us note that our parametrization of the Harrison-Goodwin tight-binding modification allows the energy gap and the ionization potential of isolated fullerenes to be calculated with good accuracy, although it cannot determine the electronic structure of the bilayer onions. It can, however, be used to find their ground state and, thus, their geometry parameters. We have already successfully applied this method to find the geometric structure and the ground states of the molecules ${\rm C_{20}@C_{240}}$ and ${\rm C_{60}@C_{540}}$ \cite{15}. In this paper we expand the number of considered isolated fullerenes and onions. The HOMO and LUMO energies of the isolated fullerenes are listed in Table \ref{tab1a}, and their geometry parameters are given in Table \ref{tab2}.
\renewcommand{\arraystretch}{2}
\begin{table}
\caption{HOMO and LUMO energies and the corresponding gaps for isolated fullerenes (in eV).}
\begin{tabular}{|C{1.7cm}||C{1.7cm}|C{1.7cm}|C{1.7cm}|C{1.7cm}|C{1.7cm}|C{1.7cm}|C{1.7cm}|C{1.7cm}|}
\hline
& ${\rm C_{20}}$ & ${\rm C_{28}}$ & ${\rm C_{32}}$ & ${\rm C_{36}}$ & ${\rm C_{60}}$ & ${\rm C_{80}}$ & ${\rm C_{240}}$ & ${\rm C_{540}}$\\ \hline\hline
HOMO & $-6.7$ & $-7.19$ & $-6.85$ & $-7.02$ & $-7.6$ & $-6.81$ & $-7.13$ & $-7.02$\\ \hline
LUMO & $-6.31$ & $-7.02$ & $-5.8$ & $-6.92$ & $-5.57$ & $-6.7$ & $-5.88$ & $-5.93$\\ \hline
H-L gap & $0.38$ & $0.17$ & $1.04$ & $0.09$ & $2.03$ & $0.1$ & $1.25$ & $1.09$\\ \hline
\end{tabular}
\label{tab1a}
\end{table}
\renewcommand{\arraystretch}{1}
\begin{table}[htbp]
\caption{The geometry parameters of some isolated fullerene obtained by original tight-binding method.}
\begin{tabular}{|C{3cm}|C{3cm}|C{3cm}|}
\hline
Fullerene & Radius,${\rm \,\AA}$ & Average bond length,${\rm \,\AA}$\\
\hline\hline
${\rm C_{20}}$ & 2.06 & 1.47\\ \hline
${\rm C_{28}}$ & 2.53 & 1.46\\ \hline
${\rm C_{32}}$ & 2.65 & 1.51\\ \hline
${\rm C_{36}}$ & 2.37 & 1.44\\ \hline
${\rm C_{60}}$ & 3.4 & 1.445\\ \hline
${\rm C_{80}}$ & 3.94 & 1.44\\ \hline
${\rm C_{240}}$ & 7.2 & 1.415\\ \hline
${\rm C_{540}}$ & 10.1 & 1.435\\
\hline
\end{tabular}
\label{tab2}
\end{table}
The intershell interaction in bilayer onions is mainly determined by the van der Waals interaction and the overlap energy of the electron clouds. We used the Lennard-Jones potential to describe the interaction between the non-bonded atoms. As we showed earlier \cite{15}, for bilayer onions with an icosahedral external shell (${\rm C_{60}}$, ${\rm C_{240}}$, ${\rm C_{540}}$) there are three potential wells where the internal fullerene can be located. In this paper, for such onions we consider the well with the lowest energy. The intershell interaction between the internal and the external shells of the onions is shown in Table \ref{tab3}.
\begin{table}
\caption{The intershell interaction (van der Waals interaction) for some onions (in eV, model \#1).}
\begin{tabular}{|C{2cm}||C{2cm}|C{2cm}|C{2cm}|C{2cm}|C{2cm}|C{2cm}|C{2cm}|}
\hline
${\rm C_{80}}$ & -1.256 & X & X & X & X & X & X \\ \hline
${\rm C_{240}}$ & -0.786 & -1.066 & -1.228 & -1.228 & -2.66 & -2.41 & X \\ \hline
${\rm C_{540}}$ & -0.518 & -0.656 & -0.658 & -0.743 & -1.209 & -1.16 & -7.106 \\ \hline\hline
@ & ${\rm C_{20}}$ & ${\rm C_{28}}$ & ${\rm C_{32}}$ & ${\rm C_{36}}$ & ${\rm C_{60}}$ & ${\rm C_{80}}$ &
${\rm C_{240}}$ \\ \hline
\end{tabular}
\label{tab3}
\end{table}
Let us note that the energy of the van der Waals interaction is positive for onions with ${\rm C_{20}}$, ${\rm C_{28}}$, ${\rm C_{32}}$, ${\rm C_{36}}$ or ${\rm C_{60}}$ as the external shell, which means that bilayer onions with these external shells cannot exist under usual conditions. Among the onions with external shell ${\rm C_{80}}$, only ${\rm C_{20}@C_{80}}$ has negative energy.\\
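The atomistic Lennard-Jones sums behind Table \ref{tab3} require the relaxed atomic coordinates, but the sign and order of magnitude can be sketched with a continuum model in which both shells are smeared into uniform spheres, so that the double sum over atoms reduces to a single angular integral. The graphitic parameters used below ($\epsilon\approx2.4$ meV, $\sigma\approx3.4\,{\rm \AA}$) are common literature values and are our assumption, not data from this paper:

```python
import math

EPS = 2.4e-3   # eV, assumed carbon-carbon Lennard-Jones well depth
SIG = 3.4      # angstroms, assumed Lennard-Jones length parameter

def lj(r):
    """12-6 Lennard-Jones pair potential (eV, r in angstroms)."""
    s6 = (SIG / r) ** 6
    return 4.0 * EPS * (s6 * s6 - s6)

def shell_shell(n_in, r_in, n_out, r_out, m=20000):
    """Continuum estimate of the intershell van der Waals energy.

    Both shells are smeared into uniform spheres; every inner atom then sits
    at distance r_in from the centre of the outer sphere, so the double sum
    reduces to n_in * (n_out / 2) * int_0^pi lj(rho(mu)) sin(mu) dmu, where
    rho(mu) is the distance to a point of the outer sphere at polar angle mu.
    """
    dmu = math.pi / m
    total = 0.0
    for j in range(m):
        mu = (j + 0.5) * dmu  # midpoint rule
        rho = math.sqrt(r_out ** 2 + r_in ** 2
                        - 2.0 * r_out * r_in * math.cos(mu))
        total += lj(rho) * math.sin(mu) * dmu
    return n_in * (n_out / 2.0) * total

# C60@C240 with the radii from the table of geometry parameters: the shells
# attract (negative energy) on the eV scale, consistent in sign and order of
# magnitude with the tabulated intershell energies.
u = shell_shell(60, 3.4, 240, 7.2)
print(u)
assert u < 0.0
```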
\section{The results obtained by the parametrization based on the hybridization of the orbitals}
The main content of this method consists in the calculation of the difference between the Fermi levels of the outer and the inner shell, which is given by the energies of the $\pi$-bonds perpendicular to the molecular surface. In \cite{12}, the spatial orientation of the corresponding bonds is considered for this purpose. Here, the wave function of the $\pi$-bond corresponding to the inner sphere of the bilayer onion has the form
\begin{equation}|\pi \rangle = {D_1}|s\rangle + {D_2}|{p_x}\rangle + {D_4}|{p_z}\rangle,\end{equation}
where the following equations are satisfied:
\begin{equation}\langle {\sigma _i}|{\sigma _j}\rangle = {\delta _{ij}},\langle \pi |{\sigma _j}\rangle = 0,\langle \pi |\pi \rangle = 1.\end{equation}
Here, $|\sigma_i\rangle$ correspond to the bonds lying on the molecular surface. The values of $D_1, D_2, D_4$ depend on the radius and average bond length of the given form of fullerene. For different kinds of fullerenes, these geometry parameters were calculated in the previous section.
Then, the energy of the corresponding $\pi$-bond is given by
\begin{equation}\epsilon = \langle \pi |H|\pi \rangle = D_1^2\langle s|H|s\rangle + D_2^2\langle {p_x}|H|{p_x}\rangle + D_4^2\langle {p_z}|H|{p_z}\rangle, \end{equation}
where \cite{18}
\begin{equation}\langle s|H|s\rangle \approx - 12{\rm{eV}},\langle {p_x}|H|{p_x}\rangle {\rm{ = }}\langle {p_y}|H|{p_y}\rangle {\rm{ = }}\langle {p_z}|H|{p_z}\rangle \approx - 4{\rm{eV}}\end{equation}
In Table \ref{tab4}, the values of ${D_1},{D_2},{D_4}$ are listed together with the energy of $\pi$-bonds.
\begin{table}
\caption{Parameters ${D_1},{D_2},{D_4}$ and the energy of $\pi$-bonds (in eV).}
\begin{tabular}{|C{3cm}||C{3cm}|C{3cm}|C{3cm}|C{3cm}|}
\hline
Fullerene & $D_1$ & $D_2$ & $D_4$ & $\varepsilon$ \\ \hline\hline
${\rm C_{20}}$ & 0.225i & -0.517 & 0.885 & -3.595 \\ \hline
${\rm C_{28}}$ & 0.367 & -0.178 & 0.913 & -5.076 \\ \hline
${\rm C_{32}}$ & 0.366 & -0.169 & 0.915 & -5.071 \\ \hline
${\rm C_{36}}$ & 0.364 & -0.223 & 0.904 & -5.060 \\ \hline
${\rm C_{60}}$ & 0.297 & -0.058 & 0.953 & -4.705 \\ \hline
${\rm C_{80}}$ & 0.257 & -0.033 & 0.966 & -4.530 \\ \hline
${\rm C_{240}}$ & 0.139 & -0.005 & 0.990 & -4.155 \\ \hline
${\rm C_{540}}$ & 0.101 & $\sim$ 0 & 0.995 & -4.081 \\ \hline
\end{tabular}
\label{tab4}
\end{table}
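The expression for $\epsilon$ above, together with the atomic matrix elements, allows the last column of Table \ref{tab4} to be recomputed from the listed $D_{1},D_{2},D_{4}$; note that $D_{1}$ is imaginary for ${\rm C_{20}}$, so $D_{1}^{2}$ enters with a positive sign. A short consistency check for several rows:

```python
S_TERM, P_TERM = -12.0, -4.0  # eV: <s|H|s> and <p|H|p> matrix elements

def pi_energy(d1, d2, d4):
    """pi-bond energy epsilon = D1^2 <s|H|s> + (D2^2 + D4^2) <p|H|p>.

    d1 may be purely imaginary (the C20 row), making D1^2 negative."""
    return (d1 ** 2).real * S_TERM + (d2 ** 2 + d4 ** 2) * P_TERM

# (D1, D2, D4, epsilon) rows taken from the table of pi-bond parameters.
rows = {
    "C20": (0.225j, -0.517, 0.885, -3.595),
    "C28": (0.367, -0.178, 0.913, -5.076),
    "C32": (0.366, -0.169, 0.915, -5.071),
    "C36": (0.364, -0.223, 0.904, -5.060),
    "C240": (0.139, -0.005, 0.990, -4.155),
}
for name, (d1, d2, d4, eps_tab) in rows.items():
    assert abs(pi_energy(d1, d2, d4) - eps_tab) < 0.01, name
```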
On the basis of these results, the difference between the Fermi levels of the onion shells was calculated using the formula $\Delta = {\varepsilon _{out}} - {\varepsilon _{in}}$; the results are listed in Table \ref{tab5}. Taking into account the corresponding van der Waals interaction (Table \ref{tab3}), we exclude all inadmissible forms of the onions.
\begin{table}
\caption{Difference between Fermi levels of onion shells (model \#2).}
\begin{tabular}{|C{2cm}||C{2cm}|C{2cm}|C{2cm}|C{2cm}|C{2cm}|C{2cm}|C{2cm}|}
\hline
${\rm C_{80}}$ & -0.936 & X & X & X & X & X & X \\ \hline
${\rm C_{240}}$ & -0.560 & 0.921 & 0.916 & 0.905 & 0.550 & 0.375 & X \\ \hline
${\rm C_{540}}$ & -0.486 & 0.995 & 0.990 & 0.979 & 0.624 & 0.449 & 0.074 \\ \hline\hline
@ & ${\rm C_{20}}$ & ${\rm C_{28}}$ & ${\rm C_{32}}$ & ${\rm C_{36}}$ & ${\rm C_{60}}$ & ${\rm C_{80}}$ &
${\rm C_{240}}$ \\ \hline
\end{tabular}
\label{tab5}
\end{table}
Now we can calculate the HOMO-LUMO gap for the bilayer onions. The procedure is given by the scheme in Fig. \ref{fg2}. Here, the symbols $F_1$, $F_2$ correspond to ${\varepsilon _{in}},{\varepsilon _{out}}$ in the above formula: they denote the Fermi levels of the corresponding shells. $H_1$, $L_1$, $H_2$, $L_2$ denote the highest occupied and the lowest unoccupied molecular orbital energy levels of the inner and the outer shell, respectively, while $H_{2a}$ and $L_{2a}$ denote the $H_2$ and $L_2$ levels shifted by the energy difference $\Delta$ of the Fermi levels. Then, the HOMO-LUMO gap of the bilayer onion can be calculated as
\begin{equation}{E_{H - Lgap}} = {L_1} - {H_{2a}} = {L_1} - ({H_2} + \Delta ) = {L_1} - {H_2} - \Delta \end{equation}
\begin{figure}
\includegraphics[width=15cm]{HOMOLUMO.jpg}
\caption{Calculation of the HOMO-LUMO gap in bilayer onions.}\label{fg2}
\end{figure}
The values of the HOMO and LUMO energy levels of the isolated fullerenes are calculated using the method described in the previous section. The corresponding HOMO-LUMO gaps are listed in Table \ref{tab1a}.
Then, using the values of the energy difference between the Fermi levels listed in Table \ref{tab5}, we get the results in Table \ref{tab7}.
\begin{table}
\caption{HOMO-LUMO gap of bilayer onions.}
\begin{tabular}{|C{2cm}||C{2cm}|C{2cm}|C{2cm}|C{2cm}|C{2cm}|C{2cm}|C{2cm}|}
\hline
${\rm C_{80}}$ & 1.44 & X & X & X & X & X & X \\ \hline
${\rm C_{240}}$ & 1.38 & -0.81 & 0.41 & -0.70 & 1.01 & 0.05 & X \\ \hline
${\rm C_{540}}$ & 1.01 & -1.18 & 0.04 & -1.07 & 0.64 & -0.32 & 0.88 \\ \hline\hline
@ & ${\rm C_{20}}$ & ${\rm C_{28}}$ & ${\rm C_{32}}$ & ${\rm C_{36}}$ & ${\rm C_{60}}$ & ${\rm C_{80}}$ &
${\rm C_{240}}$ \\ \hline
\end{tabular}
\label{tab7}
\end{table}
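The formula ${E_{H - Lgap}} = {L_{1}} - {H_{2}} - \Delta$ can be checked directly against Tables \ref{tab1a} and \ref{tab5}. The sketch below recomputes a few entries of Table \ref{tab7} from the inner-shell LUMO, the outer-shell HOMO and the Fermi-level difference $\Delta$:

```python
# HOMO and LUMO energies of the isolated shells (eV), from the table of
# HOMO/LUMO energies of isolated fullerenes.
HOMO = {"C20": -6.70, "C60": -7.60, "C80": -6.81, "C240": -7.13}
LUMO = {"C20": -6.31, "C60": -5.57, "C80": -6.70, "C240": -5.88}
# Fermi-level differences Delta = eps_out - eps_in (eV) for inner@outer pairs.
DELTA = {("C20", "C80"): -0.936, ("C60", "C240"): 0.550, ("C80", "C240"): 0.375}

def homo_lumo_gap(inner, outer):
    """Gap of the bilayer onion: L1(inner) - H2(outer) - Delta."""
    return LUMO[inner] - HOMO[outer] - DELTA[(inner, outer)]

# Reproduces the corresponding tabulated gaps to within ~0.01 eV.
assert abs(homo_lumo_gap("C60", "C240") - 1.01) < 0.01
assert abs(homo_lumo_gap("C20", "C80") - 1.44) < 0.01
assert abs(homo_lumo_gap("C80", "C240") - 0.05) < 0.01
```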
\section{Conclusion}
We verified which forms of bilayer fullerene onions can really exist and found their HOMO-LUMO gaps, which are listed in Table \ref{tab7}. Some of the values have a negative sign, which means that the LUMO energy level of the inner shell lies below the HOMO energy level of the outer shell. To get an insight into these results, we can outline, on the basis of Table \ref{tab7}, the distribution of the gaps, regarding either the real values or their moduli (Fig. \ref{fg3}).
\begin{figure}[htbp]
\includegraphics[width=15cm]{HLgaps.jpg}
\caption{Distribution of the HOMO-LUMO gaps on the base of Table \ref{tab7}: a) regarding the order of the energy values for the outer and the inner shell, b) disregarding the order of the energy values for the outer and the inner shell.}\label{fg3}
\end{figure}
No special rule follows from the outlined distributions; for different kinds of bilayer onions the mutual placement is accidental. It is worth mentioning the coincidence of the HOMO-LUMO gaps for ${\rm C_{20}@C_{540}}$ and ${\rm C_{60}@C_{240}}$ (1.01 eV) and for ${\rm C_{32}@C_{540}}$ and ${\rm C_{80}@C_{240}}$ (0.04 eV and 0.05 eV).
To conclude, the HOMO-LUMO gap serves as a characteristic of the electronic properties: the lower the gap, the more metallic the corresponding material. From our investigation it follows that the least metallic onion is ${\rm C_{20}@C_{80}}$, while the most metallic are ${\rm C_{32}@C_{540}}$ and ${\rm C_{80}@C_{240}}$.\\
ACKNOWLEDGEMENTS --- The authors gratefully acknowledge funding of this work by Presidential scholarship 2016-2018 (project No. SP-3135.2016.1).
\section{Conclusion of the Local-existence}
\label{regu}
Since this step is classical, we only sketch the procedure.
We regularize the problem as follows:
\begin{align*}
&z^{\varepsilon}_{t}(\alpha,t)=\brzep{z^{\varepsilon}}(\alpha,t)+\brhep{z^{\varepsilon}}(\alpha,t)+c^{\varepsilon}(\alpha,t)\partial_{\alpha}z^{\varepsilon}(\alpha,t)\\
&z^{\varepsilon}(\alpha,0)=\phi_{\varepsilon}*z_{0}(\alpha)\\
\end{align*}
where
\begin{align*}
&c^{\varepsilon}(\alpha,t)=\frac{\alpha+\pi}{2\pi A^{\varepsilon}(t)}\int_{{{\mathbb T}}}\partial_{\alpha}z^{\varepsilon}(\beta,t)\cdot\partial_{\alpha}(\brzep{z^{\varepsilon}}+\brhep{z^{\varepsilon}})d\beta\\
&-\int_{-\pi}^{\alpha}\frac{\partial_{\alpha}z^{\varepsilon}(\beta,t)}{A^{\varepsilon}(t)}\cdot\partial_{\alpha}(\brzep{z^{\varepsilon}}+\brhep{z^{\varepsilon}})d\beta,\\
&\varpi_{1}^{\varepsilon}(\alpha,t)=-2\frac{\mu^{2}-\mu^{1}}{\mu^{2}+\mu^{1}}\phi_{\varepsilon}*\phi_{\varepsilon}*(\brzep{z^{\varepsilon}}+\brhep{z^{\varepsilon}})\cdot\partial_{\alpha}z^{\varepsilon}(\alpha,t)\\
&-2\kappa^{1}\frac{\rho^{2}-\rho^{1}}{\mu^{2}+\mu^{1}}g\phi_{\varepsilon}*\phi_{\varepsilon}*\partial_{\alpha}z^{\varepsilon}_{2}(\alpha,t)\\
&\varpi_{2}(\alpha,t)=-2\frac{\kappa^{2}-\kappa^{1}}{\kappa^{2}+\kappa^{1}}\phi_{\varepsilon}*\phi_{\varepsilon}*(\brzep{h^{\varepsilon}}+\brhep{h^{\varepsilon}})\cdot\partial_{\alpha}h^{\varepsilon}(\alpha)
\end{align*}
for $\phi\in\mathcal{C}^{\infty}_{c}$, $\phi(\alpha)\ge 0$, $\phi(-\alpha)=\phi(\alpha)$, $\int_{{{\mathbb R}}}\phi(\alpha)d\alpha=1$ and $\phi_{\varepsilon}(\alpha)=\phi(\frac{\alpha}{\varepsilon})/\varepsilon$.
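The construction only uses standard properties of the mollifier: unit mass, evenness, and the convergence $\phi_{\varepsilon}*f\to f$ as $\varepsilon\to0$. A numerical sketch with the standard bump function (our concrete choice of $\phi$; any admissible $\phi$ works):

```python
import numpy as np

N = 1024
x = -np.pi + 2 * np.pi * np.arange(N) / N   # periodic grid on [-pi, pi)
dx = 2 * np.pi / N

def mollifier(eps):
    """Grid samples of phi_eps = phi(./eps)/eps for the standard bump phi,
    normalized so that the discrete integral is exactly one."""
    u = x / eps
    phi = np.zeros(N)
    inside = np.abs(u) < 1
    phi[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
    return phi / (phi.sum() * dx)

def mollify(f, eps):
    """Periodic convolution phi_eps * f computed with the FFT."""
    ker = np.fft.fft(np.fft.ifftshift(mollifier(eps)))
    return np.real(np.fft.ifft(np.fft.fft(f) * ker)) * dx

f = np.cos(x) + 0.3 * np.sin(2 * x)
errs = [float(np.max(np.abs(mollify(f, e) - f))) for e in (0.5, 0.25, 0.1)]
print(errs)  # decreasing: phi_eps * f -> f as eps -> 0^+
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-2

# phi_eps has unit mass and is even, as required of a mollifier.
m = mollifier(0.3)
assert abs(m.sum() * dx - 1.0) < 1e-12
assert np.allclose(m, np.roll(m[::-1], 1))
```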
Using the same techniques as in the previous section, we can prove that:
\begin{align*}
&\frac{d}{dt}\norm{z^{\varepsilon}}^{2}_{H^{k}}(t)\le\estheps{k}\\
&-\frac{\kappa^{1}}{2\pi(\mu^{2}+\mu^{1})}\int_{{{\mathbb T}}}\frac{\sigma^{\varepsilon}(\alpha,t)}{A^{\varepsilon}(t)}\phi_{\varepsilon}*\pa{k}z^{\varepsilon}\cdot\Lambda(\phi_{\varepsilon}*\pa{k}z^{\varepsilon})d\alpha
\end{align*}
where
\begin{align*}
\sigma^{\varepsilon}(\alpha,t)=\frac{\mu^{2}-\mu^{1}}{\kappa^{1}}(\brzep{z^{\varepsilon}}+\brhep{z^{\varepsilon}})\cdot\pa{\bot}z^{\varepsilon}(\alpha,t)+g(\rho^{2}-\rho^{1})\pa{}z^{\varepsilon}(\alpha,t)
\end{align*}
The next step is to integrate over a time $T$ independent of $\varepsilon$. Let us observe that if $\phi_{\varepsilon}*z_{0}(\alpha)\in H^{k}$, then we have a solution $z^{\varepsilon}\in\mathcal{C}^{1}([0,T^{\varepsilon}],H^{k})$. If $\sigma(\alpha,0)>0$, there exists $T^{\varepsilon}$, depending on $\varepsilon$, such that $\sigma^{\varepsilon}(\alpha,t)>0$ for $t\le T^{\varepsilon}$. Then, for $t\le T^{\varepsilon}$, using our a priori estimates and the fact that
\begin{displaymath}
f(\alpha)\Lambda f(\alpha)-\frac{1}{2}\Lambda(f^{2})(\alpha)\ge 0
\end{displaymath}
we get
\begin{align*}
&\frac{d}{dt}\norm{z^{\varepsilon}}^{2}_{H^{k}}(t)\le\estheps{k}\\
&-\frac{\kappa^{1}}{4\pi(\mu^{2}+\mu^{1})A^{\varepsilon}(t)}\int_{{{\mathbb T}}}\sigma^{\varepsilon}(\alpha,t)\Lambda(\abs{\phi_{\varepsilon}*\pa{k}z^{\varepsilon}}^{2})(\alpha)d\alpha.
\end{align*}
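The pointwise inequality used above can be checked directly from the kernel representation of $\Lambda=(-\partial_{\alpha}^{2})^{\frac{1}{2}}$ on the torus; the following is a standard computation, sketched here for completeness (up to the normalization constant of the kernel):

```latex
% With the periodic kernel representation
% \Lambda f(\alpha)=\frac{1}{4\pi}\,\mathrm{p.v.}\int_{-\pi}^{\pi}
% \frac{f(\alpha)-f(\alpha-\beta)}{\sin^{2}(\beta/2)}\,d\beta,
% expanding both terms and combining the numerators gives
f(\alpha)\Lambda f(\alpha)-\frac{1}{2}\Lambda(f^{2})(\alpha)
=\frac{1}{8\pi}\,\mathrm{p.v.}\int_{-\pi}^{\pi}
\frac{\big(f(\alpha)-f(\alpha-\beta)\big)^{2}}{\sin^{2}(\beta/2)}\,d\beta\ge 0.
```

Indeed, $2f(\alpha)^{2}-2f(\alpha)f(\alpha-\beta)-\big(f(\alpha)^{2}-f(\alpha-\beta)^{2}\big)=\big(f(\alpha)-f(\alpha-\beta)\big)^{2}$.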
Since
\begin{align*}
&\norm{\Lambda\sigma^{\varepsilon}}_{L^{\infty}}\le C\norm{\sigma^{\varepsilon}}_{H^{2}}\le C(\norm{\brzep{z^{\varepsilon}}+\brhep{z^{\varepsilon}}}_{L^{2}}\\
&+\norm{\pa{2}\brzep{z^{\varepsilon}}+\pa{2}\brhep{z^{\varepsilon}}}_{L^{2}}+1)\norm{z^{\varepsilon}}_{H^{3}},
\end{align*}
then
\begin{displaymath}
\frac{d}{dt}\norm{z^{\varepsilon}}^{2}_{H^{k}}(t)\le\estheps{k}.
\end{displaymath}
As in \ref{evoldi}, we can check that
\begin{align*}
\frac{d}{dt}\norm{\mathcal{F}(z^{\varepsilon})}_{L^{\infty}}^{2}\le\estheps{k}.
\end{align*}
Similarly,
\begin{align*}
\frac{d}{dt}\norm{d(z^{\varepsilon},h^{\varepsilon})}_{L^{\infty}}^{2}\le\estheps{k}.
\end{align*}
Therefore,
\begin{align*}
\frac{d}{dt}(\norm{\mathcal{F}(z^{\varepsilon})}_{L^{\infty}}^{2}+\norm{d(z^{\varepsilon},h^{\varepsilon})}_{L^{\infty}}^{2}+\norm{z^{\varepsilon}}^{2}_{H^{k}})\le\estheps{k}.
\end{align*}
Integrating,
\begin{align*}
&\norm{\mathcal{F}(z^{\varepsilon})}_{L^{\infty}}^{2}+\norm{d(z^{\varepsilon},h^{\varepsilon})}_{L^{\infty}}^{2}+\norm{z^{\varepsilon}}^{2}_{H^{k}}\\
&\le -\frac{1}{C}\ln(-t+\exp(-C(\norm{\mathcal{F}(z_{0})}_{L^{\infty}}^{2}+\norm{d(z_{0},h)}_{L^{\infty}}^{2}+\norm{z_{0}}^{2}_{H^{k}}))).
\end{align*}
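For completeness, here is how the logarithmic bound is obtained, assuming, as the form of $\estheps{k}$ suggests, that the right-hand side is of the form $Ce^{Cy(t)}$, with $y(t)$ the controlled quantity $\norm{\mathcal{F}(z^{\varepsilon})}_{L^{\infty}}^{2}+\norm{d(z^{\varepsilon},h^{\varepsilon})}_{L^{\infty}}^{2}+\norm{z^{\varepsilon}}^{2}_{H^{k}}$ (the constants $C$ may change from step to step and are absorbed in the final bound):

```latex
% From y'(t) \le C e^{C y(t)}:
\frac{d}{dt}e^{-Cy(t)}=-Cy'(t)\,e^{-Cy(t)}\ge-C^{2}
\;\Longrightarrow\;
e^{-Cy(t)}\ge e^{-Cy(0)}-C^{2}t
\;\Longrightarrow\;
y(t)\le-\frac{1}{C}\ln\big(e^{-Cy(0)}-C^{2}t\big).
```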
Since $m^{\varepsilon}(t)\ge m(0)-\int_{0}^{t}\estheps{k}(s)ds$, where $m^{\varepsilon}(t)=\min_{\alpha\in{{\mathbb T}}}\sigma^{\varepsilon}(\alpha,t)$ for $t\le T^{\varepsilon}$, the above estimates give
\begin{align*}
&m^{\varepsilon}(t)\ge m(0)+C(\norm{\mathcal{F}(z_{0})}_{L^{\infty}}^{2}+\norm{d(z_{0},h)}_{L^{\infty}}^{2}+\norm{z_{0}}^{2}_{H^{k}})\\
&+\ln(-t+\exp(-C(\norm{\mathcal{F}(z_{0})}_{L^{\infty}}^{2}+\norm{d(z_{0},h)}_{L^{\infty}}^{2}+\norm{z_{0}}^{2}_{H^{k}})))
\end{align*}
for $t\le T^{\varepsilon}$.
Now, as $\varepsilon\to 0$, we have $T^{\varepsilon}\nrightarrow 0$. This is because we can take $T=\min(T^{1},T^{2})$, where $T^{1}$ satisfies
\begin{align*}
&m(0)+C(\norm{\mathcal{F}(z_{0})}_{L^{\infty}}^{2}+\norm{d(z_{0},h)}_{L^{\infty}}^{2}+\norm{z_{0}}^{2}_{H^{k}})\\
&+\ln(-T^{1}+\exp(-C(\norm{\mathcal{F}(z_{0})}_{L^{\infty}}^{2}+\norm{d(z_{0},h)}_{L^{\infty}}^{2}+\norm{z_{0}}^{2}_{H^{k}})))>0
\end{align*}
and $T^{2}$ satisfies
\begin{align*}
-\frac{1}{C}\ln(-T^{2}+\exp(-C(\norm{\mathcal{F}(z_{0})}_{L^{\infty}}^{2}+\norm{d(z_{0},h)}_{L^{\infty}}^{2}+\norm{z_{0}}^{2}_{H^{k}})))<\infty.
\end{align*}
For $t\le T$ we have $m^{\varepsilon}(t)>0$ and
\begin{align*}
&\norm{\mathcal{F}(z^{\varepsilon})}_{L^{\infty}}^{2}+\norm{d(z^{\varepsilon},h)}_{L^{\infty}}^{2}+\norm{z^{\varepsilon}}^{2}_{H^{k}}\\
&\le-\frac{1}{C}\ln(-T^{2}+\exp(-C(\norm{\mathcal{F}(z_{0})}_{L^{\infty}}^{2}+\norm{d(z_{0},h)}_{L^{\infty}}^{2}+\norm{z_{0}}^{2}_{H^{k}})))<\infty
\end{align*}
and $T$ depends only on $z_{0}$. Then, letting $\varepsilon\to 0$, we obtain local existence.
\section{Estimates on $\varpi$}\label{estiamplitud}
In this section we show that the amplitude of the vorticity $\varpi=(\varpi_{1},\varpi_{2})$ is bounded in $H^{k}$, for $k\ge 2$.
\begin{lem}
Let $\varpi=(\varpi_{1},\varpi_{2})$ be a function given by
\begin{align}
\label{var1}
&\varpi_{1}(\alpha)=-\gamma_{1}T_{1}(\varpi_{1})(\alpha)-\gamma_{1}T_{2}(\varpi_{2})(\alpha)-N\pa{}z_{2}(\alpha),\\
\label{var2}
&\varpi_{2}(\alpha)=-\gamma_{2}T_{3}(\varpi_{1})(\alpha)-\gamma_{2}T_{4}(\varpi_{2})(\alpha)
\end{align}
where $\gamma_{1}=\frac{\mu_{2}-\mu_{1}}{\mu_{1}+\mu_{2}}$, $\gamma_{2}=\frac{\kappa_{1}-\kappa_{2}}{\kappa_{1}+\kappa_{2}}$ and $N=2\kappa_{1}g\frac{\rho_{2}-\rho_{1}}{\mu_{2}+\mu_{1}}$.
Then
\begin{displaymath}
\norm{\varpi}_{H^{k}}\le\esth{k+1}
\end{displaymath}
for $k\ge 2$.
\end{lem}
\begin{proof}
We can write,
\begin{equation}
\label{amplieq}
\varpi+M{{\mathcal T}}\varpi=-v
\end{equation}
where $M=\left(\begin{matrix}
\gamma_{1}&0\\
0&\gamma_{2}
\end{matrix}\right)$, ${{\mathcal T}}=\left(\begin{matrix}
T_{1}&T_{2}\\
T_{3}&T_{4}
\end{matrix}\right)$ and $v=\left(\begin{matrix}
N\pa{}z_{2}(\alpha)\\
0
\end{matrix}\right)$.
The formula (\ref{amplieq}) is equivalent to
\begin{displaymath}
\varpi=-(I+M{{\mathcal T}})^{-1}v.
\end{displaymath}
It yields
\begin{displaymath}
\norm{\varpi}_{H^{\frac{1}{2}}}\le\norm{(I+M{{\mathcal T}})^{-1}}_{H^{\frac{1}{2}}}\norm{\pa{}z_{2}}_{H^{\frac{1}{2}}}.
\end{displaymath}
Since $\abs{\gamma_{i}}<1$ for all $i$, Proposition \ref{invertoperhunmedio} gives
\begin{displaymath}
\norm{\varpi}_{H^{\frac{1}{2}}}\le e^{C\nor{z,h}^{2}}.
\end{displaymath}
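The actual inversion is the content of Proposition \ref{invertoperhunmedio}; as a heuristic, if one knew that $\norm{M{{\mathcal T}}}_{H^{\frac{1}{2}}}<1$ as an operator norm, the inverse could be written as a Neumann series:

```latex
(I+M{{\mathcal T}})^{-1}=\sum_{n=0}^{\infty}(-M{{\mathcal T}})^{n},
\qquad
\norm{(I+M{{\mathcal T}})^{-1}}_{H^{\frac{1}{2}}}
\le\frac{1}{1-\norm{M{{\mathcal T}}}_{H^{\frac{1}{2}}}}.
```

In general only $\abs{\gamma_{i}}<1$ is available, and the proposition exploits the structure of ${{\mathcal T}}$ instead of a smallness assumption.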
Recall that $\nor{z,h}^{2}=\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}+\norm{d(z,h)}^{2}_{L^{\infty}}+\norm{z}^{2}_{H^{3}}$.
Now, we consider the $H^{k+1}$-norm
\begin{displaymath}
\norm{\varpi}_{H^{k+1}}=\norm{\varpi_{1}}_{H^{k+1}}+\norm{\varpi_{2}}_{H^{k+1}}.
\end{displaymath}
We study each component separately.
Taking $k$ derivatives of (\ref{var1}), we get:
\begin{displaymath}
\pa{k}\varpi_{1}(\alpha)=-\gamma_{1}\pa{k}(2\brz{z}\cdot\pa{}z(\alpha))-\gamma_{1}\pa{k}(2\brh{z}\cdot\pa{}z(\alpha))-N\pa{k+1}z_{2}(\alpha).
\end{displaymath}
Using Leibniz's rule we have,
\begin{align*}
&2\pa{k}(\brz{z}\cdot\pa{}z(\alpha))=\sum_{j=0}^{k}\frac{C_{k}}{\pi}\int_{{{\mathbb R}}}\pa{k-j}(\frac{\Delta z^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta z}^{2}})\pa{j}\varpi_{1}(\alpha-\beta)d\beta\\
&=\sum_{j=0}^{k-1}\frac{C_{k}}{\pi}\int_{{{\mathbb R}}}\pa{k-j}(\frac{\Delta z^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta z}^{2}})\pa{j}\varpi_{1}(\alpha-\beta)d\beta+T_{1}(\pa{k}\varpi_{1})(\alpha)
\end{align*}
and
\begin{align*}
&2\pa{k}(\brh{z}\cdot\pa{}z(\alpha))\\
&=\sum_{j=0}^{k-1}\frac{C_{k}}{\pi}\int_{{{\mathbb R}}}\pa{k-j}(\frac{\Delta zh^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta zh}^{2}})\pa{j}\varpi_{2}(\alpha-\beta)d\beta+T_{2}(\pa{k}\varpi_{2})(\alpha).
\end{align*}
Recall that $\Delta z=z(\alpha)-z(\alpha-\beta)$ and $\Delta zh=z(\alpha)-h(\alpha-\beta)$.
Therefore, we obtain
\begin{displaymath}
\pa{k}\varpi_{1}(\alpha)+\gamma_{1}T_{1}(\pa{k}\varpi_{1})(\alpha)+\gamma_{1}T_{2}(\pa{k}\varpi_{2})(\alpha)=R_{k}^{1}(\varpi_{1})+R_{k}^{2}(\varpi_{2})-N\pa{k+1}z_{2}(\alpha)
\end{displaymath}
where
\begin{align*}
&R^{1}_{k}(\varpi_{1})=\sum_{j=0}^{k-1}\frac{C_{k}}{\pi}\int_{{{\mathbb R}}}\pa{k-j}(\frac{\Delta z^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta z}^{2}})\pa{j}\varpi_{1}(\alpha-\beta)d\beta,\\
&R^{2}_{k}(\varpi_{2})=\sum_{j=0}^{k-1}\frac{C_{k}}{\pi}\int_{{{\mathbb R}}}\pa{k-j}(\frac{\Delta zh^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta zh}^{2}})\pa{j}\varpi_{2}(\alpha-\beta)d\beta.
\end{align*}
If we observe that
\begin{displaymath}
\pa{}T_{1}(\varpi_{1})(\alpha)=\frac{1}{\pi}\int_{{{\mathbb R}}}\pa{}(\frac{\Delta z^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta z}^{2}})\varpi_{1}(\alpha-\beta)d\beta+T_{1}(\pa{}\varpi_{1})(\alpha)
\end{displaymath}
we get
\begin{align*}
&R_{k}^{1}(\varpi_{1})=\pa{k-1}(\frac{1}{\pi}\int_{{{\mathbb R}}}\pa{}(\frac{\Delta z^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta z}^{2}})\varpi_{1}(\alpha-\beta)d\beta)\\
&=\pa{k-1}(\pa{}T_{1}(\varpi_{1})(\alpha)-T_{1}(\pa{}\varpi_{1})(\alpha))=\pa{k}T_{1}(\varpi_{1})(\alpha)-\pa{k-1}T_{1}(\pa{}\varpi_{1})(\alpha)
\end{align*}
and
\begin{displaymath}
R_{k}^{2}(\varpi_{2})=\pa{k}T_{2}(\varpi_{2})(\alpha)-\pa{k-1}T_{2}(\pa{}\varpi_{2})(\alpha).
\end{displaymath}
Taking $k$ derivatives in (\ref{var2}), we get
\begin{displaymath}
\pa{k}\varpi_{2}(\alpha)+\gamma_{2}T_{3}(\pa{k}\varpi_{1})(\alpha)+\gamma_{2}T_{4}(\pa{k}\varpi_{2})(\alpha)=R_{k}^{3}(\varpi_{1})+R^{4}_{k}(\varpi_{2})
\end{displaymath}
with
\begin{align*}
&R^{3}_{k}(\varpi_{1})=\pa{k}T_{3}(\varpi_{1})(\alpha)-\pa{k-1}T_{3}(\pa{}\varpi_{1})(\alpha),\\
&R^{4}_{k}(\varpi_{2})=\pa{k}T_{4}(\varpi_{2})(\alpha)-\pa{k-1}T_{4}(\pa{}\varpi_{2})(\alpha).
\end{align*}
Next let us consider
\begin{align}
\label{s1}
&\Lambda^{\frac{1}{2}}\pa{k}\varpi_{1}(\alpha)+\gamma_{1}T_{1}(\Lambda^{\frac{1}{2}}\pa{k}\varpi_{1})(\alpha)+\gamma_{1}T_{2}(\Lambda^{\frac{1}{2}}\pa{k}\varpi_{2})(\alpha)\\\nonumber
&=\gamma_{1}T_{1}(\Lambda^{\frac{1}{2}}\pa{k}\varpi_{1})(\alpha)+\gamma_{1}T_{2}(\Lambda^{\frac{1}{2}}\pa{k}\varpi_{2})(\alpha)-\gamma_{1}\Lambda^{\frac{1}{2}}T_{1}(\pa{k}\varpi_{1})(\alpha)-\gamma_{1}\Lambda^{\frac{1}{2}}T_{2}(\pa{k}\varpi_{2})(\alpha)\\
&+\Lambda^{\frac{1}{2}}R^{1}_{k}(\varpi_{1})(\alpha)+\Lambda^{\frac{1}{2}}R^{2}_{k}(\varpi_{2})(\alpha)-N\Lambda^{\frac{1}{2}}\pa{k+1}z_{2}(\alpha)\nonumber
\end{align}
and
\begin{align}
\label{s2}
&\Lambda^{\frac{1}{2}}\pa{k}\varpi_{2}(\alpha)+\gamma_{2}T_{3}(\Lambda^{\frac{1}{2}}\pa{k}\varpi_{1})(\alpha)+\gamma_{2}T_{4}(\Lambda^{\frac{1}{2}}\pa{k}\varpi_{2})(\alpha)\\\nonumber
&=\gamma_{2}T_{3}(\Lambda^{\frac{1}{2}}\pa{k}\varpi_{1})(\alpha)+\gamma_{2}T_{4}(\Lambda^{\frac{1}{2}}\pa{k}\varpi_{2})(\alpha)-\gamma_{2}\Lambda^{\frac{1}{2}}T_{3}(\pa{k}\varpi_{1})(\alpha)-\gamma_{2}\Lambda^{\frac{1}{2}}T_{4}(\pa{k}\varpi_{2})(\alpha)\\
&+\Lambda^{\frac{1}{2}}R^{3}_{k}(\varpi_{1})(\alpha)+\Lambda^{\frac{1}{2}}R^{4}_{k}(\varpi_{2})(\alpha).\nonumber
\end{align}
Then, we write
\begin{displaymath}
\left(\begin{matrix}
\Lambda^{\frac{1}{2}}\pa{k}\varpi_{1}\\
\Lambda^{\frac{1}{2}}\pa{k}\varpi_{2}
\end{matrix}\right)+\left(\begin{matrix}
\gamma_{1}&0\\
0&\gamma_{2}
\end{matrix}\right)\left(\begin{matrix}
T_{1}&T_{2}\\
T_{3}&T_{4}
\end{matrix}\right)\left(\begin{matrix}
\Lambda^{\frac{1}{2}}\pa{k}\varpi_{1}\\
\Lambda^{\frac{1}{2}}\pa{k}\varpi_{2}
\end{matrix}\right)=\left(\begin{matrix}
S_{1}\\
S_{2}
\end{matrix}\right)
\end{displaymath}
where $S_{1}$ is the right hand side of (\ref{s1}) and $S_{2}$ the right hand side of (\ref{s2}).
Using the estimate for the inverse $(I+M{{\mathcal T}})^{-1}$ in the space $H^{\frac{1}{2}}$ we get
\begin{displaymath}
\norm{\varpi}_{H^{k+1}}\le\norm{\Lambda^{\frac{1}{2}}\pa{k}\varpi}_{H^{\frac{1}{2}}}\le C\norm{(I+M{{\mathcal T}})^{-1}}_{H^{\frac{1}{2}}}\norm{S}_{H^{\frac{1}{2}}}\le e^{C\nor{z,h}^{2}}\norm{S}_{H^{\frac{1}{2}}}.
\end{displaymath}
We have that
\begin{displaymath}
S=\left(\begin{matrix}
S_{1}\\
S_{2}
\end{matrix}\right)=M{{\mathcal T}}(\Lambda^{\frac{1}{2}}\pa{k}\varpi)-M\Lambda^{\frac{1}{2}}({{\mathcal T}}(\pa{k}\varpi))+\Lambda^{\frac{1}{2}}({{\mathbb R}}_{k}(\varpi))-\left(\begin{matrix}
N\Lambda^{\frac{1}{2}}\pa{k+1}z_{2}(\alpha)\\
0
\end{matrix}\right)
\end{displaymath}
where ${{\mathbb R}}_{k}(\varpi)=\left(\begin{matrix}
R^{1}_{k}&R^{2}_{k}\\
R^{3}_{k}&R^{4}_{k}
\end{matrix}\right)\left(\begin{matrix}
\varpi_{1}\\
\varpi_{2}
\end{matrix}\right)$.
Thus,
\begin{displaymath}
\norm{S}_{H^{\frac{1}{2}}}\le C\norm{{{\mathcal T}}(\Lambda^{\frac{1}{2}}\pa{k}\varpi)}_{H^{\frac{1}{2}}}+\norm{{{\mathcal T}}(\pa{k}\varpi)}_{H^{1}}+\norm{{{\mathbb R}}_{k}(\varpi)}_{H^{1}}+\norm{z}_{H^{k+2}}.
\end{displaymath}
Using Lemma \ref{tl2h1},
\begin{displaymath}
\norm{{{\mathcal T}}(\pa{k}\varpi)}_{H^{1}}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}^{2}\norm{{{\mathcal{F}(h)}}}^{2}_{L^{\infty}}\norm{d(z,h)}^{2}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}^{4}\norm{h}^{4}_{\mathcal{C}^{2}}\norm{\varpi}_{H^{k}}.
\end{displaymath}
Since $R^{i}_{k}(\varpi_{j})=\pa{k}T_{i}(\varpi_{j})(\alpha)-\pa{k-1}T_{i}(\pa{}\varpi_{j})(\alpha)$ for each pair $(i,j)\in\{(1,1),(2,2),(3,1),(4,2)\}$, we can write
\begin{displaymath}
{{\mathbb R}}_{k}(\varpi)=\pa{k}{{\mathcal T}}(\varpi)-\pa{k-1}{{\mathcal T}}(\pa{}\varpi).
\end{displaymath}
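Equivalently, the remainder ${{\mathbb R}}_{k}$ consists of $k-1$ derivatives of a first-order commutator, which is why it is one derivative better than $\pa{k}{{\mathcal T}}(\varpi)$ alone:

```latex
{{\mathbb R}}_{k}(\varpi)
=\pa{k-1}\big(\pa{}{{\mathcal T}}(\varpi)-{{\mathcal T}}(\pa{}\varpi)\big)
=\pa{k-1}\,[\partial_{\alpha},{{\mathcal T}}]\,\varpi.
```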
Then, using Lemma \ref{thkhk1} (proved below),
\begin{align*}
&\norm{{{\mathbb R}}_{k}(\varpi)}_{H^{1}}\le\norm{\pa{k}{{\mathcal T}}(\varpi)}_{H^{1}}+\norm{\pa{k-1}{{\mathcal T}}(\pa{}\varpi)}_{H^{1}}\le\norm{{{\mathcal T}}(\varpi)}_{H^{k+1}}+\norm{{{\mathcal T}}(\pa{}\varpi)}_{H^{k}}\\
&\le C\nor{z,h}^{2}(\norm{z}_{H^{k+2}}^{2}+\norm{h}_{H^{k+2}}^{2})\norm{\varpi}_{H^{k}}+C\nor{z,h}^{2}(\norm{z}_{H^{k+1}}^{2}+\norm{h}_{H^{k+1}}^{2})\norm{\pa{}\varpi}_{H^{k-1}}.
\end{align*}
Finally, using Lemma \ref{tl2h1},
\begin{align*}
&\norm{{{\mathcal T}}(\Lambda^{\frac{1}{2}}\pa{k}\varpi)}_{H^{\frac{1}{2}}}\le\norm{{{\mathcal T}}(\Lambda^{\frac{1}{2}}\pa{k}\varpi)}_{H^{1}}\\
&\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{{{\mathcal{F}(h)}}}^{2}_{L^{\infty}}\norm{d(z,h)}^{2}_{L^{\infty}}\norm{z}^{4}_{\mathcal{C}^{2}}\norm{h}^{4}_{\mathcal{C}^{2}}\norm{\Lambda^{\frac{1}{2}}\pa{k}\varpi}_{L^{2}}.
\end{align*}
Since
\begin{displaymath}
\pa{k}\varpi+M{{\mathcal T}}\pa{k}\varpi={{\mathbb R}}_{k}(\varpi)-\left(\begin{matrix}
N\pa{k+1}z_{2}\\
0
\end{matrix}\right),
\end{displaymath}
then
\begin{displaymath}
\pa{k}\varpi=(I+M{{\mathcal T}})^{-1}({{\mathbb R}}_{k}(\varpi)-\left(\begin{matrix}
N\pa{k+1}z_{2}\\
0
\end{matrix}\right)).
\end{displaymath}
Therefore,
\begin{align*}
&\norm{\Lambda^{\frac{1}{2}}\pa{k}\varpi}_{L^{2}}=\norm{\pa{k}\varpi}_{H^{\frac{1}{2}}}\le\norm{(I+M{{\mathcal T}})^{-1}}_{H^{\frac{1}{2}}}(\norm{{{\mathbb R}}_{k}(\varpi)}_{H^{\frac{1}{2}}}+\norm{z}_{H^{k+\frac{3}{2}}})\\
&\le e^{C\nor{z,h}^{2}}(C\nor{z,h}^{2}(\norm{z}_{H^{k+2}}^{2}+\norm{h}^{2}_{H^{k+2}})\norm{\varpi}_{H^{k}}+\norm{z}_{H^{k+\frac{3}{2}}}).
\end{align*}
In conclusion,
\begin{displaymath}
\norm{\varpi}_{H^{k+1}}\le e^{C\nor{z,h}^{2}}(\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{d(z,h)}^{2}_{L^{\infty}}\norm{z}_{H^{k+2}}^{2}\norm{\varpi}_{H^{k}}+\norm{z}_{H^{k+2}}).
\end{displaymath}
For $k=\frac{1}{2}$, since $\norm{\varpi}_{H^{\frac{1}{2}}}\le e^{C\nor{z,h}^{2}}$, we get
\begin{displaymath}
\norm{\varpi}_{H^{\frac{3}{2}}}\le \exp{C(\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}+\norm{d(z,h)}^{2}_{L^{\infty}}+\norm{z}^{2}_{H^{\frac{5}{2}}})}.
\end{displaymath}
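The induction step can be sketched as follows, writing $\esth{k}$ for an expression of the form $\exp{C(\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}+\norm{d(z,h)}^{2}_{L^{\infty}}+\norm{z}^{2}_{H^{k}})}$, as its use throughout suggests: assuming $\norm{\varpi}_{H^{k}}\le\esth{k+1}$ and inserting this into the inequality above gives

```latex
\norm{\varpi}_{H^{k+1}}
\le e^{C\nor{z,h}^{2}}
\big(\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{d(z,h)}^{2}_{L^{\infty}}
\norm{z}_{H^{k+2}}^{2}\,\esth{k+1}+\norm{z}_{H^{k+2}}\big)
\le\esth{k+2},
```

which is the claimed estimate at level $k+1$, since $\esth{k}$ absorbs powers and exponentials of the controlled quantities.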
Therefore, induction on $k\ge 2$ allows us to finish the proof.
\end{proof}
\begin{lem}
\label{thkhk1}
The operator ${{\mathcal T}}$ maps the Sobolev space $H^{k}\times H^{k}$, $k\ge 1$, into $H^{k+1}\times H^{k+1}$, provided $z,h\in H^{k+2}$, and satisfies the estimate
\begin{displaymath}
\norm{{{\mathcal T}}}_{H^{k}\times H^{k}\to H^{k+1}\times H^{k+1}}\le C\nor{z,h}^{2}\norm{z}^{2}_{H^{k+2}}.
\end{displaymath}
\end{lem}
\begin{proof}
By Lemma $5.2$ in \cite{hele} we have
\begin{displaymath}
\norm{T_{1}(\varpi_{1})}_{H^{k+1}}\le C(\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}+\norm{z}^{2}_{H^{3}})\norm{z}^{2}_{H^{k+2}}\norm{\varpi_{1}}_{H^{k}}
\end{displaymath}
and, changing $z$ for $h$,
\begin{displaymath}
\norm{T_{4}(\varpi_{2})}_{H^{k+1}}\le C(\norm{{{\mathcal{F}(h)}}}^{2}_{L^{\infty}}+\norm{h}^{2}_{H^{3}})\norm{h}^{2}_{H^{k+2}}\norm{\varpi_{2}}_{H^{k}}.
\end{displaymath}
Let us see what happens with $T_{2}(\varpi_{2})$. Taking $k+1$ derivatives,
\begin{align*}
&\pa{k+1}T_{2}(\varpi_{2})(\alpha)=\sum_{j=0}^{k+1}\frac{C_{k}}{\pi}\int_{{{\mathbb R}}}\pa{k+1-j}(\frac{\Delta zh^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta zh}^{2}})\pa{j}\varpi_{2}(\alpha-\beta)d\beta\\
&=T_{2}(\pa{k+1}\varpi_{2})(\alpha)+\frac{1}{\pi}\int_{{{\mathbb R}}}\pa{k+1}(\frac{\Delta zh^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta zh}^{2}})\varpi_{2}(\alpha-\beta)d\beta\\
&+\sum_{j=1}^{k}\frac{C_{k}}{\pi}\int_{{{\mathbb R}}}\pa{k+1-j}(\frac{\Delta zh^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta zh}^{2}})\pa{j}\varpi_{2}(\alpha-\beta)d\beta\\
&=T_{2}(\pa{k+1}\varpi_{2})(\alpha)+\frac{1}{\pi}\int_{{{\mathbb R}}}\frac{\Delta zh^{\bot}\cdot\pa{k+2}z(\alpha)}{\abs{\Delta zh}^{2}}\varpi_{2}(\alpha-\beta)d\beta\\
&+ \text{``other terms''}\\
&=T_{2}(\pa{k+1}\varpi_{2})(\alpha)+J_{1}+\text{``other terms''}
\end{align*}
The estimate for ``other terms'' is straightforward. For $T_{2}(\pa{k+1}\varpi_{2})(\alpha)$ we integrate by parts:
\begin{align*}
&T_{2}(\pa{k+1}\varpi_{2})(\alpha)=\frac{-1}{\pi}\int_{{{\mathbb R}}}\frac{\Delta zh^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta zh}^{2}}\partial_{\beta}(\pa{k}\varpi_{2}(\alpha-\beta))d\beta\\
&=\frac{1}{\pi}\int_{{{\mathbb R}}}\partial_{\beta}(\frac{\Delta zh^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta zh}^{2}})\pa{k}\varpi_{2}(\alpha-\beta)d\beta\\
&=-\frac{1}{\pi}\int_{{{\mathbb R}}}\frac{\pa{\bot}h(\alpha-\beta)\cdot\pa{}z(\alpha)}{\abs{\Delta zh}^{2}}\pa{k}\varpi_{2}(\alpha-\beta)d\beta\\
&+\frac{2}{\pi}\int_{{{\mathbb R}}}\frac{\Delta zh^{\bot}\cdot\pa{}z(\alpha)\Delta zh\cdot\pa{}h(\alpha-\beta)}{\abs{\Delta zh}^{4}}\pa{k}\varpi_{2}(\alpha-\beta)d\beta\equiv I_{1}+I_{2}.
\end{align*}
It is easy to estimate $I_{1}$:
\begin{displaymath}
\abs{I_{1}}\le C\norm{\pa{}z}_{L^{\infty}}\norm{d(z,h)}_{L^{\infty}}\norm{h}_{H^{1}}\norm{\varpi_{2}}_{H^{k}}.
\end{displaymath}
For $I_{2}$, using Cauchy's inequality,
\begin{align*}
&\abs{I_{2}}\le \frac{1}{2\pi}\int_{{{\mathbb R}}}\frac{(\abs{\Delta zh}^{2}+\abs{\pa{}z(\alpha)}^{2})(\abs{\Delta zh}^{2}+\abs{\pa{}h(\alpha-\beta)}^{2})}{\abs{\Delta zh}^{4}}\abs{\pa{k}\varpi_{2}(\alpha-\beta)}d\beta\\
&\le C\norm{\varpi_{2}}_{H^{k}}+C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}^{2}\norm{\varpi_{2}}_{H^{k}}+C\norm{d(z,h)}_{L^{\infty}}\norm{h}_{\mathcal{C}^{1}}^{2}\norm{\varpi_{2}}_{H^{k}}+C\norm{d(z,h)}^{2}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}^{2}\norm{h}_{\mathcal{C}^{1}}^{2}\norm{\varpi_{2}}_{H^{k}}.
\end{align*}
If we use the same procedure to estimate $J_{1}$,
\begin{align*}
\abs{J_{1}}\le\frac{1}{2\pi}\int_{{{\mathbb R}}}\frac{\abs{\Delta zh}^{2}+\abs{\pa{k+2}z(\alpha)}^{2}}{\abs{\Delta zh}^{2}}\abs{\varpi_{2}(\alpha-\beta)}d\beta\le C\norm{\varpi_{2}}_{L^{2}}+C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{H^{k+2}}^{2}\norm{\varpi_{2}}_{L^{2}}.
\end{align*}
Then,
\begin{align*}
\norm{T_{2}(\varpi_{2})}_{H^{k+1}}\le C\nor{z,h}^{2}\norm{z}^{2}_{H^{k+2}}\norm{\varpi_{2}}_{H^{k}}.
\end{align*}
Since $T_{3}(\varpi_{1})$ is $T_{2}(\varpi_{2})$ with $z$ and $h$ interchanged, the estimate reads
\begin{align*}
\norm{T_{3}(\varpi_{1})}_{H^{k+1}}\le C\nor{z,h}^{2}\norm{h}^{2}_{H^{k+2}}\norm{\varpi_{1}}_{H^{k}}.
\end{align*}
Therefore,
\begin{displaymath}
\norm{{{\mathcal T}}\varpi}_{H^{k+1}}\le C\nor{z,h}^{2}(\norm{z}_{H^{k+2}}^{2}+\norm{h}_{H^{k+2}}^{2})\norm{\varpi}_{H^{k}}.
\end{displaymath}
Since $h$ does not depend on time, we get the desired estimate.
\end{proof}
\section{Estimates on $\brz{z}+\brh{z}+\brz{h}+\brh{h}$}\label{estibr}
This section is devoted to showing that the Birkhoff-Rott integral is as regular as $\pa{}z$.
\begin{lem}
The following estimate holds
\begin{align}
\label{brzh}
&\norm{\brz{z}}_{H^{k}}+\norm{\brh{z}}_{H^{k}}+\norm{\brz{h}}_{H^{k}}+\norm{\brh{h}}_{H^{k}}\\\nonumber
&\le\esth{k+1}
\end{align}
for $k\ge 2$.
\end{lem}
\begin{proof}
Lemma $6.1$ in \cite{hele} gives
\begin{displaymath}
\norm{\brz{z}}_{H^{k}}\le\exp{C(\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}^{2}+\norm{z}^{2}_{H^{k+1}})}
\end{displaymath}
and
\begin{displaymath}
\norm{\brh{h}}_{H^{k}}\le\exp{C(\norm{{{\mathcal{F}(h)}}}_{L^{\infty}}^{2}+\norm{h}^{2}_{H^{k+1}})}.
\end{displaymath}
Using that $\norm{h}^{2}_{H^{k+1}}$ and $\norm{{{\mathcal{F}(h)}}}_{L^{\infty}}^{2}$ do not depend on time,
\begin{align*}
&\norm{\brz{z}}_{H^{k}}+\norm{\brh{h}}_{H^{k}}\\
&\le\esth{k+1}.
\end{align*}
Let us see what happens with $\brz{h}$ and $\brh{z}$.
It is enough to study one of them, for example $\brh{z}$.
For $k=2$,
\begin{align*}
&\norm{\brh{z}}_{L^{2}}\le C\norm{d(z,h)}_{L^{\infty}}(\norm{z}_{L^{2}}+\norm{h}_{L^{2}})\norm{\varpi_{2}}_{L^{2}}\\
&\le\esth{1}.
\end{align*}
If we take two derivatives, we get $\pa{2}\brh{z}=B_{1}+B_{2}+B_{3}+\text{``other terms''}$ where
\begin{align*}
&B_{1}=\frac{1}{2\pi}\int_{{{\mathbb R}}}\frac{(\Delta zh)^{\bot}}{\abs{\Delta zh}^{2}}\pa{2}\varpi_{2}(\alpha-\beta)d\beta,\\
&B_{2}=\frac{1}{2\pi}\int_{{{\mathbb R}}}\frac{(\pa{2}z(\alpha)-\pa{2}h(\alpha-\beta))^{\bot}}{\abs{\Delta zh}^{2}}\varpi_{2}(\alpha-\beta)d\beta,\\
&B_{3}=-\frac{1}{\pi}\int_{{{\mathbb R}}}\frac{(\Delta zh)^{\bot}(\Delta zh\cdot(\pa{2}z(\alpha)-\pa{2}h(\alpha-\beta)))}{\abs{\Delta zh}^{4}}\varpi_{2}(\alpha-\beta)d\beta.
\end{align*}
Using the estimates on $\varpi$ and on the distance between $z$ and $h$,
\begin{align*}
&\norm{B_{1}}_{L^{2}}\le C\norm{d(z,h)}_{L^{\infty}}^{\frac{1}{2}}\norm{\varpi_{2}}_{H^{2}}\\
&\le\esth{3}.
\end{align*}
For $B_{2}$,
\begin{align*}
&\norm{B_{2}}_{L^{2}}\le C\norm{d(z,h)}_{L^{\infty}}(\norm{z}_{H^{2}}+\norm{h}_{H^{2}})\norm{\varpi_{2}}_{L^{2}}\\
&\le\esth{2}.
\end{align*}
The term $B_{3}$ is estimated in the same way,
\begin{align*}
&\norm{B_{3}}_{L^{2}}\le C\norm{d(z,h)}_{L^{\infty}}(\norm{z}_{H^{2}}+\norm{h}_{H^{2}})\norm{\varpi_{2}}_{L^{2}}\\
&\le\esth{2}.
\end{align*}
These estimates allow us to get the desired result.
\end{proof}
\subsection{Estimates for the $L^{2}$ norm of the curve}\label{estimal2}
We have
\begin{displaymath}
z_{t}(\alpha)=\brz{z}+\brh{z}+c(\alpha)\pa{}z(\alpha)
\end{displaymath}
where
\begin{align*}
c(\alpha)=&\frac{\alpha+\pi}{2\pi A(t)}\int_{{{\mathbb T}}}\pa{}z(\beta)\cdot(\pa{}\brz{z}+\pa{}\brh{z})d\beta\\
&-\int_{-\pi}^{\alpha}\frac{\pa{}z(\beta)}{A(t)}\cdot(\pa{}\brz{z}+\pa{}\brh{z})d\beta.
\end{align*}
Recall that $A(t)=\abs{\pa{}z(\alpha)}^{2}$.
Then,
\begin{align*}
&\frac{1}{2}\frac{d}{dt}\int_{{{\mathbb T}}}\abs{z(\alpha)}^{2}d\alpha=\int_{{{\mathbb T}}}z(\alpha)\cdot z_{t}(\alpha)d\alpha=\int_{{{\mathbb T}}}z(\alpha)\cdot\brz{z}d\alpha\\
&+\int_{{{\mathbb T}}}z(\alpha)\cdot\brh{z}d\alpha+\int_{{{\mathbb T}}}c(\alpha)z(\alpha)\cdot\pa{}z(\alpha)d\alpha\equiv I_{1}+I_{2}+I_{3}.
\end{align*}
Since $I_{1}+I_{2}\le\norm{z}_{L^{2}}(\norm{\brz{z}}_{L^{2}}+\norm{\brh{z}}_{L^{2}})$, the inequality (\ref{brzh}) allows us to write
\begin{displaymath}
I_{1}+I_{2}\le\esth{1}.
\end{displaymath}
Next we get,
\begin{align*}
I_{3}&\le A^{\frac{1}{2}}(t)\norm{c}_{L^{\infty}}\int_{{{\mathbb T}}}\abs{z(\alpha)}d\alpha\le 2\int_{{{\mathbb T}}}\abs{\brz{z}}+\abs{\brh{z}}d\alpha\int_{{{\mathbb T}}}\abs{z(\alpha)}d\alpha\\
&\le\esth{1}.
\end{align*}
Therefore,
\begin{displaymath}
\frac{d}{dt}\norm{z}_{L^{2}}^{2}(t)\le\esth{3}.
\end{displaymath}
\subsection{Estimates on the $H^{3}$ norm}
Taking three derivatives of the curve, we get
\begin{align*}
&\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{3}z_{t}(\alpha)d\alpha=\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{3}\brz{z}d\alpha\\
&+\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{3}\brh{z}d\alpha+\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{3}(c(\alpha)\pa{}z(\alpha))d\alpha\\
&\equiv I_{1}+I_{2}+I_{3}.
\end{align*}
Here and in the next section we study $I_{1}+I_{2}$; we shall estimate $I_{3}$ in Section \ref{estimacionesc}. Let us first estimate the term $I_{2}$. We can split $I_{2}=J_{1}+J_{2}+J_{3}+J_{4}$, where
\begin{align*}
&J_{1}=\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\pa{3}(\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}})\varpi_{2}(\alpha-\beta)d\beta d\alpha,\\
&J_{2}=\frac{3}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\pa{2}(\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}})\pa{}\varpi_{2}(\alpha-\beta)d\beta d\alpha,\\
&J_{3}=\frac{3}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\pa{}(\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}})\pa{2}\varpi_{2}(\alpha-\beta)d\beta d\alpha,\\
&J_{4}=\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}\pa{3}\varpi_{2}(\alpha-\beta)d\beta d\alpha.\\
\end{align*}
The most singular terms for $J_{1}$ are:
\begin{align*}
&J_{1}^{1}=\frac{1}{4\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot(\frac{(\pa{3}z(\alpha)-\pa{3}h(\alpha-\beta))^{\bot}}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}})\varpi_{2}(\alpha-\beta)d\beta d\alpha,\\
&J_{1}^{2}=-\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot(\frac{(\Delta zh)^{\bot}\Delta zh\cdot(\pa{3}z(\alpha)-\pa{3}h(\alpha-\beta))}{\abs{z(\alpha)-h(\alpha-\beta)}^{4}})\varpi_{2}(\alpha-\beta)d\beta d\alpha.\\
\end{align*}
Using $\pa{3}z\cdot\pa{3}z^{\bot}=0$,
\begin{displaymath}
\abs{J_{1}^{1}}\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{H^{3}}\norm{h}_{H^{3}}\norm{\varpi_{2}}_{L^{\infty}}.
\end{displaymath}
Using the same technique,
\begin{align*}
&\abs{J_{1}^{2}}\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{H^{3}}(\norm{z}_{H^{3}}+\norm{h}_{H^{3}})\norm{\varpi_{2}}_{L^{\infty}}.
\end{align*}
Then,
\begin{displaymath}
J_{1}\le\esth{3}.
\end{displaymath}
The most singular term in $J_{2}$ is:
\begin{align*}
&J_{2}^{1}=C\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\frac{(\pa{2}z(\alpha)-\pa{2}h(\alpha-\beta))^{\bot}}{\abs{\Delta zh}^{2}}\pa{}\varpi_{2}(\alpha-\beta)d\beta d\alpha\\
&\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{H^{3}}(\norm{z}_{\mathcal{C}^{2}}+\norm{h}_{\mathcal{C}^{2}})\norm{\varpi_{2}}_{H^{1}}.
\end{align*}
Then,
\begin{displaymath}
J_{2}\le\esth{3}.
\end{displaymath}
For $J_{3}$,
\begin{align*}
&J_{3}^{1}=C\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\frac{(\pa{}z(\alpha)-\pa{}h(\alpha-\beta))^{\bot}}{\abs{\Delta zh}^{2}}\pa{2}\varpi_{2}(\alpha-\beta)d\beta d\alpha\\
&\le C\norm{d(z,h)}_{L^{\infty}}(\norm{z}_{\mathcal{C}^{1}}+\norm{h}_{\mathcal{C}^{1}})\norm{z}_{H^{3}}\norm{\varpi_{2}}_{H^{2}}\\
&\le\esth{3}.
\end{align*}
Using integration by parts we will estimate $J_{4}$,
\begin{align*}
&J_{4}=-\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\frac{(\Delta zh)^{\bot}}{\abs{\Delta zh}^{2}}\partial_{\beta}\pa{2}\varpi_{2}(\alpha-\beta)d\beta d\alpha\\
&=\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\frac{\pa{\bot}h(\alpha-\beta)}{\abs{\Delta zh}^{2}}\pa{2}\varpi_{2}(\alpha-\beta)d\beta d\alpha\\
&-\frac{1}{\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\frac{(\Delta zh)^{\bot}\Delta zh\cdot\pa{}h(\alpha-\beta)}{\abs{\Delta zh}^{4}}\pa{2}\varpi_{2}(\alpha-\beta)d\beta d\alpha\equiv J_{4}^{1}+J_{4}^{2}.\\
\end{align*}
It is clear,
\begin{align*}
&J_{4}^{1}\le C\norm{d(z,h)}_{L^{\infty}}\norm{h}_{\mathcal{C}^{1}}\norm{z}_{H^{3}}\norm{\varpi_{2}}_{H^{2}},\\
&J_{4}^{2}\le C\norm{d(z,h)}_{L^{\infty}}\norm{h}_{\mathcal{C}^{1}}\norm{z}_{H^{3}}\norm{\varpi_{2}}_{H^{2}}.
\end{align*}
Therefore,
\begin{displaymath}
I_{2}\le\esth{3}.
\end{displaymath}
\subsection{Estimates on $I_{1}$}\label{estimini7}
We can split $I_{1}$ in the following terms:
\begin{align*}
&I_{1}^{1}=\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\pa{3}(\frac{(z(\alpha)-z(\alpha-\beta))^{\bot}}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}})\varpi_{1}(\alpha-\beta)d\beta d\alpha,\\
&I_{1}^{2}=\frac{3}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\pa{2}(\frac{(z(\alpha)-z(\alpha-\beta))^{\bot}}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}})\pa{}\varpi_{1}(\alpha-\beta)d\beta d\alpha,\\
&I_{1}^{3}=\frac{3}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\pa{}(\frac{(z(\alpha)-z(\alpha-\beta))^{\bot}}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}})\pa{2}\varpi_{1}(\alpha-\beta)d\beta d\alpha,\\
&I_{1}^{4}=\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\frac{(z(\alpha)-z(\alpha-\beta))^{\bot}}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}}\pa{3}\varpi_{1}(\alpha-\beta)d\beta d\alpha.\\
\end{align*}
The terms $I_{1}^{1}$, $I_{1}^{2}$ and $I_{1}^{3}$ can be estimated as in Section $7.2$ of \cite{hele}. It remains to estimate $I_{1}^{4}$.
\begin{align*}
&I_{1}^{4}=\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot(\frac{(\Delta z)^{\bot}}{\abs{\Delta z}^{2}}-\frac{\pa{\bot}z(\alpha)}{\beta\abs{\pa{}z(\alpha)}^{2}})\pa{3}\varpi_{1}(\alpha-\beta)d\beta d\alpha\\
&+\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot(\frac{\pa{\bot}z(\alpha)}{\beta\abs{\pa{}z(\alpha)}^{2}})\pa{3}\varpi_{1}(\alpha-\beta)d\beta d\alpha\equiv I_{1}^{41}+I_{1}^{42}.\\
\end{align*}
Using integration by parts,
\begin{align*}
&I_{1}^{41}=-\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot(\frac{(\Delta z)^{\bot}}{\abs{\Delta z}^{2}}-\frac{\pa{\bot}z(\alpha)}{\beta\abs{\pa{}z(\alpha)}^{2}})\partial_{\beta}\pa{2}\varpi_{1}(\alpha-\beta)d\beta d\alpha\\
&=\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot\partial_{\beta}(\frac{(\Delta z)^{\bot}}{\abs{\Delta z}^{2}}-\frac{\pa{\bot}z(\alpha)}{\beta\abs{\pa{}z(\alpha)}^{2}})\pa{2}\varpi_{1}(\alpha-\beta)d\beta d\alpha.
\end{align*}
If we decompose,
\begin{align}
&\partial_{\beta}(\frac{(\Delta z)^{\bot}}{\abs{\Delta z}^{2}}-\frac{\partial^{\bot}_{\alpha}z(\alpha)}{\abs{\partial_{\alpha}z(\alpha)}^{2}\beta})\nonumber\\
&=\frac{(\Delta\partial_{\alpha}z)^{\bot}}{\abs{\Delta z}^{2}}+\partial^{\bot}_{\alpha}z(\alpha)(\frac{1}{\abs{\Delta z}^{2}}-\frac{1}{\abs{\partial_{\alpha}z(\alpha)}^{2}\beta^{2}})-2\frac{(\Delta z)^{\bot}\Delta z\cdot\Delta\partial_{\alpha}z}{\abs{\Delta z}^{4}}\nonumber\\
&-2\frac{(\Delta z)^{\bot}(\Delta z-\beta\partial_{\alpha}z(\alpha))\cdot\partial_{\alpha}z(\alpha)}{\abs{\Delta z}^{4}}-2\frac{(\Delta z-\beta\partial_{\alpha}z(\alpha))^{\bot}\beta\abs{\partial_{\alpha}z(\alpha)}^{2}}{\abs{\Delta z}^{4}}\nonumber\\
&+(\frac{2\partial^{\bot}_{\alpha}z(\alpha)}{\abs{\partial_{\alpha}z(\alpha)}^{2}\beta^{2}}-\frac{2\beta^{2}\partial^{\bot}_{\alpha}z(\alpha)\abs{\partial_{\alpha}z(\alpha)}^{2}}{\abs{\Delta z}^{4}})\nonumber\\
&\equiv F_{1}(\alpha,\beta)+F_{2}(\alpha,\beta)+F_{3}(\alpha,\beta)+F_{4}(\alpha,\beta)+F_{5}(\alpha,\beta)+F_{6}(\alpha,\beta).\label{descomposicion}
\end{align}
We have
\begin{align*}
I_{1}^{41}&=\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot F_{1}(\alpha,\beta)\pa{2}\varpi_{1}(\alpha-\beta)d\beta d\alpha\\
&+\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot F_{2}(\alpha,\beta)\pa{2}\varpi_{1}(\alpha-\beta)d\beta d\alpha\\
&+\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot F_{3}(\alpha,\beta)\pa{2}\varpi_{1}(\alpha-\beta)d\beta d\alpha\\
&+\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot F_{4}(\alpha,\beta)\pa{2}\varpi_{1}(\alpha-\beta)d\beta d\alpha\\
&+\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot F_{5}(\alpha,\beta)\pa{2}\varpi_{1}(\alpha-\beta)d\beta d\alpha\\
&+\frac{1}{2\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot F_{6}(\alpha,\beta)\pa{2}\varpi_{1}(\alpha-\beta)d\beta d\alpha\\
&\equiv I_{1}^{411}+I_{1}^{412}+I_{1}^{413}+I_{1}^{414}+I_{1}^{415}+I_{1}^{416}.
\end{align*}
Computing
\begin{align*}
&F_{1}(\alpha,\beta)-\frac{\pa{2}z(\alpha)^{\bot}}{\beta\abs{\pa{}z(\alpha)}^{2}}=\frac{\beta^{2}\int_{0}^{1}\int_{0}^{1}(\pa{2}z(\alpha-\beta ts)^{\bot}-\pa{2}z(\alpha)^{\bot})dsdt}{\abs{\Delta z}^{2}}\\
&+\frac{\beta^{2}\pa{2}z(\alpha)^{\bot}\int_{0}^{1}\int_{0}^{1}(1-t)\pa{2}z(\eta)dsdt\cdot\int_{0}^{1}(\pa{}z(\alpha)+\pa{}z(\alpha-\beta+\beta t))dt}{\abs{\Delta z}^{2}\abs{\pa{}z(\alpha)}^{2}}
\end{align*}
where $\eta=\alpha-\beta+\beta t+s\beta+\beta ts$, we get
\begin{align*}
&I_{1}^{411}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2,\delta}}\norm{z}_{H^{3}}\norm{\varpi_{1}}_{H^{2}}+C\norm{{{\mathcal{F}(z)}}}^{\frac{3}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}\norm{z}_{H^{3}}\norm{\varpi_{1}}_{H^{2}}\\
&+\frac{1}{2\pi}\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\frac{\pa{2}z(\alpha)^{\bot}}{\abs{\pa{}z(\alpha)}^{2}}H(\pa{2}\varpi_{1})(\alpha)d\alpha.\\
\end{align*}
Then,
\begin{displaymath}
I_{1}^{411}\le\esth{3}.
\end{displaymath}
Analogously,
\begin{align*}
I_{1}^{412}\le C\norm{{{\mathcal{F}(z)}}}^{\frac{3}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}^{2}\norm{z}_{H^{3}}\norm{\varpi_{1}}_{H^{2}}+\frac{1}{\pi}\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{\bot}z(\alpha)\frac{\pa{2}z(\alpha)\cdot\pa{}z(\alpha)}{\abs{\pa{}z(\alpha)}^{2}}H(\pa{2}\varpi_{1})(\alpha)d\alpha.
\end{align*}
Using the fact that we can split $F_{3}$, with $\phi=\alpha-\beta+\beta t$ and $\psi=\alpha-\beta+\beta t+s\beta-\beta ts$, as
\begin{align*}
&F_{3}(\alpha,\beta)=-2\beta^{3}\int_{0}^{1}\pa{\bot}z(\phi)dt\int_{0}^{1}\pa{}z(\phi)dt\cdot\int_{0}^{1}\pa{2}z(\phi)dt(\frac{1}{\abs{\Delta z}^{4}}-\frac{1}{\beta^{4}\abs{\pa{}z(\alpha)}^{4}})\\
&-\frac{2\int_{0}^{1}\pa{\bot}z(\phi)dt\int_{0}^{1}\pa{}z(\phi)dt\cdot\int_{0}^{1}\pa{2}z(\phi)dt}{\beta\abs{\pa{}z(\alpha)}^{4}}
\end{align*}
and
\begin{align*}
&\frac{1}{\abs{\Delta z}^{4}}-\frac{1}{\beta^{4}\abs{\pa{}z(\alpha)}^{4}}\\
&=\frac{\beta\int_{0}^{1}\int_{0}^{1}\pa{2}z(\psi)(t-1)dtds\cdot\int_{0}^{1}\pa{}z(\alpha)+\pa{}z(\phi)dt\int_{0}^{1}\abs{\pa{}z(\alpha)}^{2}+\abs{\pa{}z(\phi)}^{2}}{\abs{\Delta z}^{4}\abs{\pa{}z(\alpha)}^{4}},
\end{align*}
we get,
\begin{align*}
&I_{1}^{413}\le\esth{3}\\
&-\frac{1}{\pi}\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{\bot}z(\alpha)\frac{\pa{}z(\alpha)\cdot\pa{2}z(\alpha)}{\abs{\pa{}z(\alpha)}^{4}}H(\pa{2}\varpi_{1})(\alpha)d\alpha\\
&\le\esth{3}+C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}\norm{z}_{H^{3}}\norm{\varpi_{1}}_{H^{2}}.
\end{align*}
For $I_{1}^{414}$ and $I_{1}^{415}$ it is easy to see that they are bounded in the same way as the terms above. Let us study the term $I_{1}^{416}$.
Since,
\begin{align*}
&-\frac{1}{2}F_{6}(\alpha,\beta)=\partial^{\bot}_{\alpha}z(\alpha)\frac{\beta^{4}\abs{\partial_{\alpha}z(\alpha)}^{4}-\abs{\Delta z}^{4}}{\abs{\Delta z}^{4}\abs{\partial_{\alpha}z(\alpha)}^{2}\beta^{2}}\\
&=\partial^{\bot}_{\alpha}z(\alpha)\frac{\beta^{3}\int_{0}^{1}\int_{0}^{1}(\partial^{2}_{\alpha}z(\psi)-\partial^{2}_{\alpha}z(\alpha))(1-s)dtds\int^{1}_{0}[\partial_{\alpha}z(\alpha)+\partial_{\alpha}z(\phi)]ds\int^{1}_{0}[\abs{\partial_{\alpha}z(\alpha)}^{2}+\abs{\partial_{\alpha}z(\phi)}^{2}]ds}{\abs{\Delta z}^{4}\abs{\partial_{\alpha}z(\alpha)}^{2}}\\
&+\frac{\partial^{\bot}_{\alpha}z(\alpha)}{2}\frac{\beta^{4}\partial^{2}_{\alpha}z(\alpha)\int_{0}^{1}\int_{0}^{1}\partial^{2}_{\alpha}z(\eta)(s-1)dtds\int^{1}_{0}[\abs{\partial_{\alpha}z(\alpha)}^{2}+\abs{\partial_{\alpha}z(\phi)}^{2}]ds}{\abs{\Delta z}^{4}\abs{\partial_{\alpha}z(\alpha)}^{2}}\\
&+\partial^{\bot}_{\alpha}z(\alpha)\frac{\beta^{4}\partial^{2}_{\alpha}z(\alpha)\partial_{\alpha}z(\alpha)\int_{0}^{1}\int_{0}^{1}\partial_{\alpha}z(\eta)\cdot\partial^{2}_{\alpha}z(\eta)(s-1)dtds}{\abs{\Delta z}^{4}\abs{\partial_{\alpha}z(\alpha)}^{2}}+\partial^{\bot}_{\alpha}z(\alpha)\frac{\beta^{3}\partial^{2}_{\alpha}z(\alpha)\partial_{\alpha}z(\alpha)}{\abs{\Delta z}^{4}}\\
&\equiv U_{1}(\alpha,\beta)+U_{2}(\alpha,\beta)+U_{3}(\alpha,\beta)+U_{4}(\alpha,\beta)
\end{align*}
we get,
\begin{align*}
I_{1}^{416}=&-\frac{1}{\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot U_{1}(\alpha,\beta)\partial^{2}_{\alpha}\varpi(\alpha-\beta)d\alpha d\beta-\frac{1}{\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot U_{2}(\alpha,\beta)\partial^{2}_{\alpha}\varpi(\alpha-\beta)d\alpha d\beta\\
&-\frac{1}{\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot U_{3}(\alpha,\beta)\partial^{2}_{\alpha}\varpi(\alpha-\beta)d\alpha d\beta-\frac{1}{\pi}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}\pa{3}z(\alpha)\cdot U_{4}(\alpha,\beta)\partial^{2}_{\alpha}\varpi(\alpha-\beta)d\alpha d\beta\\
&\equiv Q_{1}+Q_{2}+Q_{3}+Q_{4}.
\end{align*}
It is clear,
\begin{align*}
&Q_{1}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2,\delta}}\norm{z}_{H^{3}}\norm{\varpi_{1}}_{H^{2}},\\
&Q_{2}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}^{2}\norm{z}_{\mathcal{C}^{1}}\norm{z}_{H^{3}}\norm{\varpi_{1}}_{H^{2}},\\
&Q_{3}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}^{2}\norm{z}_{\mathcal{C}^{1}}\norm{z}_{H^{3}}\norm{\varpi_{1}}_{H^{2}},\\
&Q_{4}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}^{2}\norm{z}_{\mathcal{C}^{1}}\norm{z}_{H^{3}}\norm{\varpi_{1}}_{H^{2}}\\
&-\frac{1}{\pi}\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{\bot}z(\alpha)\frac{\pa{2}z(\alpha)\cdot\pa{}z(\alpha)}{\abs{\pa{}z(\alpha)}^{4}}H(\pa{2}\varpi_{1})(\alpha)d\alpha.\\
\end{align*}
Therefore,
\begin{displaymath}
I_{1}^{41}\le\esth{3}.
\end{displaymath}
Now, we have to study $I_{1}^{42}$. Using $\pa{}H=\Lambda$,
\begin{align*}
&I_{1}^{42}=\frac{1}{2\pi}\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\frac{\pa{\bot}z(\alpha)}{\abs{\pa{}z(\alpha)}^{2}}H(\pa{3}\varpi_{1})(\alpha)d\alpha=\frac{1}{2\pi}\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\frac{\pa{\bot}z(\alpha)}{\abs{\pa{}z(\alpha)}^{2}}\Lambda(\pa{2}\varpi_{1})(\alpha)d\alpha\\
&=\frac{1}{2\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z(\alpha)\cdot\pa{\bot}z(\alpha))\pa{2}\varpi_{1}(\alpha)d\alpha.\\
\end{align*}
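The last equality uses that $\abs{\pa{}z(\alpha)}^{2}=A(t)$ does not depend on $\alpha$, together with the self-adjointness of $\Lambda=\pa{}H$; a minimal justification, assuming the functions involved are smooth and periodic:
\begin{displaymath}
\int_{{{\mathbb T}}}f\,\Lambda g\,d\alpha=\int_{{{\mathbb T}}}f\,\pa{}(Hg)\,d\alpha=-\int_{{{\mathbb T}}}\pa{}f\,Hg\,d\alpha=\int_{{{\mathbb T}}}H(\pa{}f)\,g\,d\alpha=\int_{{{\mathbb T}}}\Lambda f\,g\,d\alpha,
\end{displaymath}
where we integrated by parts and used that the Hilbert transform $H$ is skew-adjoint.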
Since $\varpi_{1}=-\gamma_{1}T_{1}(\varpi_{1})-\gamma_{1}T_{2}(\varpi_{2})-N\pa{}z_{2}(\alpha)$ we split $I_{1}^{42}$ into
\begin{align*}
&I_{1}^{421}=\frac{-N}{2\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z(\alpha)\cdot\pa{\bot}z(\alpha))\pa{3}z_{2}(\alpha)d\alpha,\\
&I_{1}^{422}=\frac{-\gamma_{1}}{2\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z\cdot\pa{\bot}z)(\alpha)(\pa{2}T_{1}(\varpi_{1})+\pa{2}T_{2}(\varpi_{2}))d\alpha.\\
\end{align*}
We can write $I_{1}^{421}=L_{1}+L_{2}$ where
\begin{align*}
&L_{1}=\frac{N}{2\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z_{1}\pa{}z_{2})(\alpha)\pa{3}z_{2}(\alpha)d\alpha,\\
&L_{2}=\frac{-N}{2\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z_{2}\pa{}z_{1})(\alpha)\pa{3}z_{2}(\alpha)d\alpha.\\
\end{align*}
Using the commutator estimate, we obtain
\begin{align*}
&L_{1}\le C\norm{z}_{\mathcal{C}^{2,\delta}}\norm{z}_{H^{3}}^{2}+\frac{N}{2\pi A(t)}\int_{{{\mathbb T}}}\pa{}z_{2}(\alpha)\pa{3}z_{2}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha\\
&\le\esth{3}+L_{1}^{1}.
\end{align*}
Since $A(t)=\abs{\pa{}z(\alpha)}^{2}$, differentiating twice with $\pa{}$ we get
\begin{displaymath}
\pa{}z_{2}(\alpha)\pa{3}z_{2}(\alpha)=-\pa{}z_{1}(\alpha)\pa{3}z_{1}(\alpha)-\abs{\pa{2}z(\alpha)}^{2}.
\end{displaymath}
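For completeness, this identity is just the second derivative of the arc-chord normalization: since $\abs{\pa{}z(\alpha)}^{2}=A(t)$ does not depend on $\alpha$,
\begin{displaymath}
\pa{}\abs{\pa{}z}^{2}=2\pa{}z\cdot\pa{2}z=0,\qquad \pa{2}\abs{\pa{}z}^{2}=2\abs{\pa{2}z}^{2}+2\pa{}z\cdot\pa{3}z=0,
\end{displaymath}
and writing the last equation in components gives $\pa{}z_{1}\pa{3}z_{1}+\pa{}z_{2}\pa{3}z_{2}=-\abs{\pa{2}z}^{2}$.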
Then,
\begin{align*}
&L_{1}^{1}=-\frac{N}{2\pi A(t)}\int_{{{\mathbb T}}}\abs{\pa{2}z(\alpha)}^{2}\Lambda(\pa{3}z_{1})(\alpha)d\alpha-\frac{N}{2\pi A(t)}\int_{{{\mathbb T}}}\pa{}z_{1}(\alpha)\pa{3}z_{1}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha\\
&=\frac{N}{2\pi A(t)}\int_{{{\mathbb T}}}\pa{}\abs{\pa{2}z(\alpha)}^{2}H(\pa{3}z_{1})(\alpha)d\alpha-\frac{N}{2\pi A(t)}\int_{{{\mathbb T}}}\pa{}z_{1}(\alpha)\pa{3}z_{1}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha\\
&\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}\norm{z}_{H^{3}}^{2}-\frac{N}{2\pi A(t)}\int_{{{\mathbb T}}}\pa{}z_{1}(\alpha)\pa{3}z_{1}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha.\\
\end{align*}
In the same way, using the commutator estimate we have,
\begin{displaymath}
L_{2}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2,\delta}}\norm{z}_{H^{3}}^{2}-\frac{N}{2\pi A(t)}\int_{{{\mathbb T}}}\pa{}z_{1}(\alpha)\pa{3}z_{2}(\alpha)\Lambda(\pa{3}z_{2})(\alpha)d\alpha.
\end{displaymath}
Therefore,
\begin{align*}
&I_{1}^{421}\le\esth{3}\\
&-\frac{N}{2\pi A(t)}\int_{{{\mathbb T}}}\pa{}z_{1}(\alpha)\pa{3}z(\alpha)\cdot\Lambda(\pa{3}z)(\alpha)d\alpha.
\end{align*}
Here we can observe that a part of the Rayleigh-Taylor condition appears.
Let us estimate the term $I_{1}^{422}$. We can split it into the following terms:
\begin{align*}
&L_{3}=\frac{-\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z\cdot\pa{\bot}z)(\alpha)(\pa{2}\brz{z}+\pa{2}\brh{z})\cdot\pa{}z(\alpha)d\alpha,\\
&L_{4}=\frac{-\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z\cdot\pa{\bot}z)(\alpha)(\pa{}\brz{z}+\pa{}\brh{z})\cdot\pa{2}z(\alpha)d\alpha,\\
&L_{5}=\frac{-\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z\cdot\pa{\bot}z)(\alpha)(\brz{z}+\brh{z})\cdot\pa{3}z(\alpha)d\alpha.\\
\end{align*}
We will estimate $L_{3}+L_{4}$ and then we will find the rest of the R-T condition in the estimates of the term $L_{5}$.
For $L_{3}$, using integration by parts,
\begin{align*}
&L_{3}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)(\pa{2}\brz{z}+\pa{2}\brh{z})\cdot\pa{2}z(\alpha)d\alpha\\
&+\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)(\pa{3}\brz{z}+\pa{3}\brh{z})\cdot\pa{}z(\alpha)d\alpha\\
&\equiv L_{3}^{1}+L_{3}^{2}.
\end{align*}
Directly, using (\ref{brzh})
\begin{align*}
&L_{3}^{1}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{\pa{3}z\cdot\pa{\bot}z}_{L^{2}}(\norm{\pa{2}\brz{z}}_{L^{2}}+\norm{\pa{2}\brh{z}}_{L^{2}})\norm{z}_{\mathcal{C}^{2}}\\
&\le\esth{3}.
\end{align*}
For $L_{3}^{2}$, we write
\begin{align*}
&L_{3}^{2}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{3}\brz{z}\cdot\pa{}z(\alpha)d\alpha\\
&+\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{3}\brh{z}\cdot\pa{}z(\alpha)d\alpha\equiv L_{3}^{21}+L_{3}^{22}.
\end{align*}
Applying Leibniz's rule to $\pa{3}\brz{z}$ produces terms which can be estimated with the same tools used before. The most singular terms for $L_{3}^{21}$ are
\begin{align*}
&L_{3}^{211}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{}(BR(\pa{2}\varpi_{1},z)_{z})\cdot\pa{}z(\alpha)d\alpha,\\
&L_{3}^{212}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\frac{(\Delta\pa{3}z)^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta z}^{2}}\varpi_{1}(\alpha-\beta)d\beta d\alpha,\\
&L_{3}^{213}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\frac{(\Delta z)^{\bot}\cdot\pa{}z(\alpha)\Delta z\cdot\Delta\pa{3}z}{\abs{\Delta z}^{4}}\varpi_{1}(\alpha-\beta)d\beta d\alpha.\\
\end{align*}
Since
\begin{displaymath}
\pa{}(BR(\pa{2}\varpi_{1},z)_{z})\cdot\pa{}z(\alpha)=\pa{}(T_{1}(\pa{2}\varpi_{1}))-BR(\pa{2}\varpi_{1},z)_{z}\cdot\pa{2}z(\alpha),
\end{displaymath}
using the estimates on $\norm{{{\mathcal T}}}_{L^{2}\times L^{2}\to H^{1}\times H^{1}}$ and on $\brz{z}$ we get
\begin{align*}
L_{3}^{211}&\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{\pa{3}z\cdot\pa{\bot}z}_{L^{2}}(\norm{T_{1}(\pa{2}\varpi_{1})}_{H^{1}}+\norm{BR(\pa{2}\varpi_{1},z)_{z}}_{L^{2}}\norm{z}_{\mathcal{C}^{2}})\\
&\le\esth{3}.
\end{align*}
For $L_{3}^{212}$, we split it into
\begin{align*}
&M_{1}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Delta\pa{3}z^{\bot}\cdot\pa{}z(\alpha)\varpi_{1}(\alpha-\beta)(\frac{1}{\abs{\Delta z}^{2}}-\frac{1}{\beta^{2}\abs{\pa{}z(\alpha)}^{2}})d\beta d\alpha,\\
&M_{2}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Delta\pa{3}z^{\bot}\cdot\pa{}z(\alpha)\frac{\varpi_{1}(\alpha-\beta)}{\beta^{2}\abs{\pa{}z(\alpha)}^{2}}d\beta d\alpha.\\
\end{align*}
Computing $\frac{1}{\abs{\Delta z}^{2}}-\frac{1}{\beta^{2}\abs{\pa{}z(\alpha)}^{2}}=B_{1}+B_{2}+B_{3}$ where
\begin{align*}
&B_1(\alpha,\beta)=\frac{\beta\int_{0}^{1}\int_{0}^{1}\frac{\partial^{2}_{\alpha}z(\psi)-\partial^{2}_{\alpha}z(\alpha)}{\abs{\psi-\alpha}^{\delta}}\beta^{\delta}(1+s+t-st)^{\delta}(1-s)dtds\int^{1}_{0}[\partial_{\alpha}z(\alpha)+\partial_{\alpha}z(\phi)]ds}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}\abs{\partial_{\alpha}z(\alpha)}^{2}},\\
&B_2(\alpha,\beta)=\frac{\beta^{2}\partial^{2}_{\alpha}z(\alpha)\int_{0}^{1}\int_{0}^{1}\partial^{2}_{\alpha}z(\eta)(s-1)dtds}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}\abs{\partial_{\alpha}z(\alpha)}^{2}},\\
&B_3(\alpha,\beta)=\frac{2\beta\,\partial^{2}_{\alpha}z(\alpha)\cdot\partial_{\alpha}z(\alpha)}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}\abs{\partial_{\alpha}z(\alpha)}^{2}},
\end{align*}
we can split $M_{1}=M_{1}^{1}+M_{1}^{2}+M_{1}^{3}$ with
\begin{align*}
&M_{1}^{1}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Delta\pa{3}z^{\bot}\cdot\pa{}z(\alpha)\varpi_{1}(\alpha-\beta)B_{1}(\alpha,\beta)d\beta d\alpha,\\
&M_{1}^{2}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Delta\pa{3}z^{\bot}\cdot\pa{}z(\alpha)\varpi_{1}(\alpha-\beta)B_{2}(\alpha,\beta)d\beta d\alpha,\\
&M_{1}^{3}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Delta\pa{3}z^{\bot}\cdot\pa{}z(\alpha)\varpi_{1}(\alpha-\beta)B_{3}(\alpha,\beta)d\beta d\alpha.\\
\end{align*}
It is easy to see that
\begin{align*}
&M_{1}^{1}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2,\delta}}\norm{z}_{\mathcal{C}^{1}}\norm{\varpi_{1}}_{L^{\infty}}\norm{z}^{2}_{H^{3}},\\
&M_{1}^{2}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{z}^{2}_{\mathcal{C}^{2}}\norm{\varpi_{1}}_{L^{\infty}}\norm{z}^{2}_{H^{3}}.\\
\end{align*}
We need to study the term $M_{1}^{3}$ more precisely. Again, we decompose $M_{1}^{3}$ into the following terms:
\begin{align*}
&M_{1}^{31}=\frac{2\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Delta\pa{3}z^{\bot}\cdot\pa{}z(\alpha)\varpi_{1}(\alpha-\beta)\beta\pa{2}z(\alpha)\cdot\pa{}z(\alpha)B(\alpha,\beta)d\beta d\alpha,\\
&M_{1}^{32}=\frac{2\gamma_{1}}{\pi A^{3}(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Delta\pa{3}z^{\bot}\cdot\pa{}z(\alpha)\frac{\varpi_{1}(\alpha-\beta)}{\beta}\pa{2}z(\alpha)\cdot\pa{}z(\alpha)d\beta d\alpha,\\
\end{align*}
where
\begin{displaymath}
B(\alpha,\beta)=\frac{1}{\abs{\Delta z}^{2}}-\frac{1}{\beta^{2}\abs{\pa{}z(\alpha)}^{2}}=\frac{\beta\int_{0}^{1}\int_{0}^{1}\partial^{2}_{\alpha}z(\psi)(1-s)dtds\cdot\int^{1}_{0}[\partial_{\alpha}z(\alpha)+\partial_{\alpha}z(\phi)]ds}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}\abs{\partial_{\alpha}z(\alpha)}^{2}}.
\end{displaymath}
Directly,
\begin{displaymath}
M_{1}^{31}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}^{2}\norm{\varpi_{1}}_{L^{\infty}}\norm{z}_{H^{3}}.
\end{displaymath}
For $M_{1}^{32}$, we split it into the following terms:
\begin{align*}
&M_{1}^{321}=\frac{2\gamma_{1}}{\pi A^{3}(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{3}z(\alpha)^{\bot}\cdot\pa{}z(\alpha)\frac{\varpi_{1}(\alpha-\beta)}{\beta}\pa{2}z(\alpha)\cdot\pa{}z(\alpha)d\beta d\alpha\\
&=\frac{2\gamma_{1}}{\pi A^{3}(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{3}z(\alpha)^{\bot}\cdot\pa{}z(\alpha)H\varpi_{1}(\alpha)\pa{2}z(\alpha)\cdot\pa{}z(\alpha)d\alpha\\
&\le C\norm{{{\mathcal{F}(z)}}}^{\frac{3}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}\norm{H\varpi_{1}}_{L^{\infty}}\norm{z}^{2}_{H^{3}}
\end{align*}
and
\begin{align*}
&M_{1}^{322}=\frac{2\gamma_{1}}{\pi A^{3}(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{3}z(\alpha-\beta)^{\bot}\cdot\pa{}z(\alpha)\frac{\varpi_{1}(\alpha-\beta)}{\beta}\pa{2}z(\alpha)\cdot\pa{}z(\alpha)d\beta d\alpha\\
&\le C\norm{{{\mathcal{F}(z)}}}^{\frac{3}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}\norm{\varpi}_{\mathcal{C}^{1}}\norm{z}^{2}_{H^{3}}\\
&+\frac{2\gamma_{1}}{\pi A^{3}(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)H(\pa{3}z^{\bot})(\alpha)\cdot\pa{}z(\alpha)\varpi_{1}(\alpha)\pa{2}z(\alpha)\cdot\pa{}z(\alpha)d\alpha\\
&\le C\norm{{{\mathcal{F}(z)}}}^{\frac{3}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}\norm{\varpi}_{\mathcal{C}^{1}}\norm{z}^{2}_{H^{3}}.\\
\end{align*}
Therefore, $M_{1}\le\esth{3}$.
For $M_{2}$ we proceed as follows:
\begin{align*}
&M_{2}^{1}=\frac{\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Delta\pa{3}z^{\bot}\cdot\pa{}z(\alpha)\frac{\varpi_{1}(\alpha-\beta)-\varpi_{1}(\alpha)}{\beta^{2}}d\beta d\alpha,\\
&M_{2}^{2}=\frac{\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Lambda(\pa{3}z^{\bot})(\alpha)\cdot\pa{}z(\alpha)\varpi_{1}(\alpha)d\alpha.\\
\end{align*}
In the same way as before,
\begin{align*}
&M_{2}^{1}=\frac{\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{3}z(\alpha)^{\bot}\cdot\pa{}z(\alpha)\Lambda\varpi_{1}(\alpha)d\alpha\\
&+\frac{\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{3}z(\alpha-\beta)^{\bot}\cdot\pa{}z(\alpha)\frac{\int_{0}^{1}\pa{}\varpi_{1}(\alpha-\beta+\beta t)-\pa{}\varpi_{1}(\alpha)dt}{\beta}d\beta d\alpha\\
&+\frac{\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)H(\pa{3}z^{\bot})(\alpha)\cdot\pa{}z(\alpha)\pa{}\varpi_{1}(\alpha)d\alpha\\
&\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{\varpi_{1}}_{\mathcal{C}^{1,\delta}}\norm{z}^{2}_{H^{3}}.
\end{align*}
Using the commutator estimate and $\Lambda H=-\pa{}$,
\begin{align*}
&M_{2}^{2}=\frac{\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)(\Lambda(\pa{3}z^{\bot})(\alpha)\cdot\pa{}z(\alpha)\varpi_{1}(\alpha)-\Lambda(\pa{3}z^{\bot}\cdot\pa{}z\varpi_{1})(\alpha))d\alpha\\
&+\frac{\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\Lambda(\pa{3}z^{\bot}\cdot\pa{}z\varpi_{1})(\alpha)d\alpha\\
&\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}^{2}\norm{\pa{3}z\pa{}z}_{L^{2}}\norm{\varpi_{1}\pa{}z}_{\mathcal{C}^{1,\delta}}\norm{\pa{3}z}_{L^{2}}\\
&-\frac{\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}\pa{}(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{3}z^{\bot}(\alpha)\cdot\pa{}z(\alpha)\varpi_{1}(\alpha)d\alpha\\
&\le\esth{3}\\
&+\frac{\gamma_{1}}{\pi A^{2}(t)}\int_{{{\mathbb T}}}\pa{}(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{3}z(\alpha)\cdot\pa{\bot}z(\alpha)\varpi_{1}(\alpha)d\alpha\\
&\le\esth{3}\\
&+C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{\varpi_{1}}_{\mathcal{C}^{1}}\norm{z}_{H^{3}}.
\end{align*}
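The identity $\Lambda H=-\pa{}$ used above follows, on mean-zero periodic functions, from $\Lambda=\pa{}H$ and $H^{2}=-\operatorname{Id}$:
\begin{displaymath}
\Lambda Hf=\pa{}H(Hf)=\pa{}(H^{2}f)=-\pa{}f.
\end{displaymath}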
We can estimate $L_{3}^{213}$ as before. Then, we get the estimate for $L_{3}^{21}$.
Let us estimate $L_{3}^{22}$. The most singular terms are
\begin{align*}
&L_{3}^{221}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\pa{}(BR(\pa{2}\varpi_{2},h)_{z})\cdot\pa{}z(\alpha)d\alpha,\\
&L_{3}^{222}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\frac{(\Delta\pa{3}zh)^{\bot}\cdot\pa{}z(\alpha)}{\abs{\Delta zh}^{2}}\varpi_{2}(\alpha-\beta)d\beta d\alpha,\\
&L_{3}^{223}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\int_{{{\mathbb R}}}H(\pa{3}z\cdot\pa{\bot}z)(\alpha)\frac{(\Delta zh)^{\bot}\cdot\pa{}z(\alpha)\Delta zh\cdot\Delta\pa{3}zh}{\abs{\Delta zh}^{4}}\varpi_{2}(\alpha-\beta)d\beta d\alpha.\\
\end{align*}
Since
\begin{align*}
\pa{}(BR(\pa{2}\varpi_{2},h)_{z})\cdot\pa{}z=\pa{}(T_{2}(\pa{2}\varpi_{2}))-BR(\pa{2}\varpi_{2},h)_{z}\cdot\pa{2}z
\end{align*}
then,
\begin{align*}
L_{3}^{221}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{\pa{3}z\pa{\bot}z}_{L^{2}}(\norm{T_{2}(\pa{2}\varpi_{2})}_{H^{1}}+\norm{BR(\pa{2}\varpi_{2},h)_{z}}_{L^{2}}\norm{\pa{2}z}_{L^{\infty}})
\end{align*}
Using the estimate on $\norm{{{\mathcal T}}\varpi}_{H^{1}}$ and on $\brh{z}$, $L_{3}^{221}$ is controlled.
We also get
\begin{align*}
L_{3}^{222}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{d(z,h)}_{L^{\infty}}\norm{\pa{3}z\pa{}z}_{L^{2}}(\norm{\pa{3}z}_{L^{2}}+\norm{\pa{3}h}_{L^{2}})\norm{z}_{\mathcal{C}^{1}}\norm{\varpi_{2}}_{L^{\infty}}.
\end{align*}
For $L_{3}^{223}$ we obtain the analogous bound
\begin{align*}
L_{3}^{223}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{d(z,h)}_{L^{\infty}}\norm{\pa{3}z}_{L^{2}}(\norm{\pa{3}z}_{L^{2}}+\norm{\pa{3}h}_{L^{2}})\norm{z}^{2}_{\mathcal{C}^{1}}\norm{\varpi_{2}}_{L^{\infty}}.
\end{align*}
For $L_{4}$, integrating by parts, we obtain:
\begin{align*}
&L_{4}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{\pa{3}z\pa{}z}_{L^{2}}((\norm{\pa{2}\brz{z}}_{L^{2}}+\norm{\pa{2}\brh{z}}_{L^{2}})\norm{\pa{2}z}_{L^{\infty}}\\
&+(\norm{\pa{}\brz{z}}_{L^{\infty}}+\norm{\pa{}\brh{z}}_{L^{\infty}})\norm{\pa{3}z}_{L^{2}})\\
&\le\esth{3}.
\end{align*}
Finally, we have to find $\sigma(\alpha)$ in $L_{5}$ to finish the estimates. To do that, let us split $L_{5}=L_{5}^{1}+L_{5}^{2}+L_{5}^{3}+L_{5}^{4}$ where
\begin{align*}
&L_{5}^{1}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z_{1}\pa{}z_{2})(\alpha)(BR_{1}(\varpi_{1},z)_{z}+BR_{1}(\varpi_{2},h)_{z})\pa{3}z_{1}(\alpha)d\alpha,\\
&L_{5}^{2}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z_{1}\pa{}z_{2})(\alpha)(BR_{2}(\varpi_{1},z)_{z}+BR_{2}(\varpi_{2},h)_{z})\pa{3}z_{2}(\alpha)d\alpha,\\
&L_{5}^{3}=-\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z_{2}\pa{}z_{1})(\alpha)(BR_{1}(\varpi_{1},z)_{z}+BR_{1}(\varpi_{2},h)_{z})\pa{3}z_{1}(\alpha)d\alpha,\\
&L_{5}^{4}=-\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\Lambda(\pa{3}z_{2}\pa{}z_{1})(\alpha)(BR_{2}(\varpi_{1},z)_{z}+BR_{2}(\varpi_{2},h)_{z})\pa{3}z_{2}(\alpha)d\alpha.\\
\end{align*}
To shorten the notation, we denote $BR_{i}=BR_{i}(\varpi_{1},z)_{z}+BR_{i}(\varpi_{2},h)_{z}$ for $i=1,2$.
Then we can write,
\begin{align*}
&L_{5}^{1}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}(\Lambda(\pa{3}z_{1}\pa{}z_{2})(\alpha)-\pa{}z_{2}\Lambda(\pa{3}z_{1})(\alpha))BR_{1}\pa{3}z_{1}(\alpha)d\alpha\\
&+\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}BR_{1}\pa{}z_{2}(\alpha)\pa{3}z_{1}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha\\
&\le C\norm{\pa{}z}_{\mathcal{C}^{1,\delta}}\norm{BR_{1}}_{L^{\infty}}\norm{z}_{H^{3}}^{2}+\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}BR_{1}\pa{}z_{2}(\alpha)\pa{3}z_{1}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha.
\end{align*}
In the same way,
\begin{align*}
&L_{5}^{2}\le\esth{3}+ L^{21}_{5}
\end{align*}
where
\begin{align*}
&L_{5}^{21}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}BR_{2}\pa{}z_{2}(\alpha)\pa{3}z_{2}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha.
\end{align*}
Using $\pa{}z_{2}\pa{3}z_{2}=-\pa{}z_{1}\pa{3}z_{1}-\abs{\pa{2}z}^{2}$ we separate $L_{5}^{21}$ into
\begin{align*}
&L_{5}^{211}=-\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}BR_{2}\abs{\pa{2}z(\alpha)}^{2}\Lambda(\pa{3}z_{1})(\alpha)d\alpha,\\
&L_{5}^{212}=-\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}BR_{2}\pa{}z_{1}(\alpha)\pa{3}z_{1}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha.
\end{align*}
The fact that $\Lambda=\pa{}H$ allows us to integrate by parts:
\begin{align*}
&L_{5}^{211}=\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}\pa{}(BR_{2}\abs{\pa{2}z(\alpha)}^{2})H(\pa{3}z_{1})(\alpha)d\alpha\\
&\le C(\norm{\pa{}BR_{2}}_{L^{2}}\norm{z}^{2}_{\mathcal{C}^{2}}+\norm{BR_{2}}_{L^{2}}\norm{z}_{\mathcal{C}^{2}}^{2})\norm{z}_{H^{3}}\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\\
&\le\esth{3}.
\end{align*}
Then we get,
\begin{align*}
&L_{5}^{2}\le\esth{3}\\
&-\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}BR_{2}\pa{}z_{1}(\alpha)\pa{3}z_{1}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha.
\end{align*}
Now, we add $L_{5}^{1}+L_{5}^{2}$:
\begin{align*}
&L_{5}^{1}+L_{5}^{2}\le\esth{3}\\
&-\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}(\brz{z}+\brh{z})\cdot\pa{\bot}z(\alpha)\pa{3}z_{1}(\alpha)\Lambda(\pa{3}z_{1})(\alpha)d\alpha.
\end{align*}
Analogously, using $\pa{}z_{1}\pa{3}z_{1}=-\pa{}z_{2}\pa{3}z_{2}-\abs{\pa{2}z}^{2}$ we get:
\begin{align*}
&L_{5}^{3}+L_{5}^{4}\le\esth{3}\\
&-\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}(\brz{z}+\brh{z})\cdot\pa{\bot}z(\alpha)\pa{3}z_{2}(\alpha)\Lambda(\pa{3}z_{2})(\alpha)d\alpha.
\end{align*}
Therefore,
\begin{align*}
&L_{5}\le\esth{3}\\
&-\frac{\gamma_{1}}{\pi A(t)}\int_{{{\mathbb T}}}(\brz{z}+\brh{z})\cdot\pa{\bot}z(\alpha)\pa{3}z(\alpha)\cdot\Lambda(\pa{3}z)(\alpha)d\alpha.
\end{align*}
In conclusion,
\begin{align*}
&I_{1}^{4}\le\esth{3}\\
&-\frac{1}{\pi A(t)}\int_{{{\mathbb T}}}\left[\gamma_{1}(\brz{z}+\brh{z})\cdot\pa{\bot}z(\alpha)+N\pa{}z_{1}(\alpha)\right]\pa{3}z(\alpha)\cdot\Lambda(\pa{3}z)(\alpha)d\alpha.
\end{align*}
Since,
\begin{displaymath}
\sigma(\alpha,t)=\frac{\mu_{2}-\mu_{1}}{\kappa_{1}}(\brz{z}+\brh{z})\cdot\pa{\bot}z(\alpha)+(\rho_{2}-\rho_{1})g\pa{}z_{1}(\alpha)
\end{displaymath}
then,
\begin{align*}
&I_{1}\le\esth{3}\\
&-\frac{\kappa_{1}}{2\pi(\mu_{2}+\mu_{1})A(t)}\int_{{{\mathbb T}}}\sigma(\alpha,t)\pa{3}z(\alpha)\cdot\Lambda(\pa{3}z)(\alpha)d\alpha.
\end{align*}
\subsection{Estimates on $I_{3}$}
\label{estimacionesc}
To finish the estimates on $z$, we consider:
\begin{align*}
&I_{3}=\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{4}z(\alpha)c(\alpha)d\alpha+3\int_{{{\mathbb T}}}\abs{\pa{3}z(\alpha)}^{2}\pa{}c(\alpha)d\alpha\\
&+3\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{2}z(\alpha)\pa{2}c(\alpha)d\alpha+\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{}z(\alpha)\pa{3}c(\alpha)d\alpha\\
&\equiv I_{3}^{1}+I_{3}^{2}+I_{3}^{3}+I_{3}^{4}.
\end{align*}
Integrating by parts and using the definition of $c(\alpha)$,
\begin{align*}
&I_{3}^{1}=-\frac{1}{2}\int_{{{\mathbb T}}}\abs{\pa{3}z(\alpha)}^{2}\pa{}c(\alpha)d\alpha\le C\norm{\pa{}c}_{L^{\infty}}\norm{\pa{3}z}^{2}_{L^{2}}\\
&\le 2C\norm{{{\mathcal{F}(z)}}}^{\frac{1}{2}}_{L^{\infty}}(\norm{\pa{}\brz{z}}_{L^{\infty}}+\norm{\pa{}\brh{z}}_{L^{\infty}})\norm{\pa{3}z}^{2}_{L^{2}}.
\end{align*}
Since $I_{3}^{2}=-6I_{3}^{1}$ we have $I_{3}^{2}$ controlled.
Computing $\pa{2}c$,
\begin{align*}
&\pa{2}c(\alpha)=-\frac{\pa{2}z(\alpha)}{A(t)}\cdot(\pa{}\brz{z}+\pa{}\brh{z})\\
&-\frac{\pa{}z(\alpha)}{A(t)}\cdot(\pa{2}\brz{z}+\pa{2}\brh{z}).
\end{align*}
Thus,
\begin{align*}
&I_{3}^{3}=-3\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{2}z(\alpha)\frac{\pa{2}z(\alpha)}{A(t)}\cdot(\pa{}\brz{z}+\pa{}\brh{z})d\alpha\\
&-3\int_{{{\mathbb T}}}\pa{3}z(\alpha)\cdot\pa{2}z(\alpha)\frac{\pa{}z(\alpha)}{A(t)}\cdot(\pa{2}\brz{z}+\pa{2}\brh{z})d\alpha\\
&\equiv I_{3}^{31}+I_{3}^{32}
\end{align*}
where
\begin{align*}
&I_{3}^{31}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}^{2}(\norm{\pa{}\brz{z}}_{L^{\infty}}+\norm{\pa{}\brh{z}}_{L^{\infty}})\norm{\pa{3}z}_{L^{2}},\\
&I_{3}^{32}\le C\norm{{{\mathcal{F}(z)}}}^{\frac{1}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}(\norm{\pa{2}\brz{z}}_{L^{2}}+\norm{\pa{2}\brh{z}}_{L^{2}})\norm{\pa{3}z}_{L^{2}}.\\
\end{align*}
Using the estimate on $\norm{\brz{z}}_{H^{k}}+\norm{\brh{z}}_{H^{k}}$ we obtain,
\begin{displaymath}
I_{3}^{3}\le\esth{3}.
\end{displaymath}
Since $\abs{\pa{}z(\alpha)}^{2}=A(t)$, if we differentiate twice with respect to $\alpha$:
\begin{align*}
0=2\abs{\pa{2}z(\alpha)}^{2}+2\pa{}z(\alpha)\cdot\pa{3}z(\alpha)\Rightarrow \pa{}z(\alpha)\cdot\pa{3}z(\alpha)=-\abs{\pa{2}z(\alpha)}^{2}.
\end{align*}
Then, integrating by parts
\begin{align*}
I_{3}^{4}=-\int_{{{\mathbb T}}}\abs{\pa{2}z(\alpha)}^{2}\pa{3}c(\alpha)d\alpha=2\int_{{{\mathbb T}}}\pa{2}z(\alpha)\cdot\pa{3}z(\alpha)\pa{2}c(\alpha)d\alpha=\frac{2}{3}I_{3}^{3}.
\end{align*}
Therefore,
\begin{displaymath}
I_{3}^{4}\le\esth{3}.
\end{displaymath}
Putting together all the above estimates, and since the case $k>3$ is straightforward, we have
\begin{align*}
&\frac{d}{dt}\norm{z}_{H^{k}}^{2}\le\esth{k}\\
&-\frac{\kappa^{1}}{2\pi(\mu^{1}+\mu^{2})}\int_{{{\mathbb T}}}\frac{\sigma(\alpha)}{A(t)}\pa{k}z(\alpha)\cdot\Lambda(\pa{k}z)(\alpha)d\alpha
\end{align*}
for $k\ge 3$.
\section{Evolution of the arc-chord condition}
\section{Evolution of the distance between $z$ and $h$}
\label{evoldi}
Recall that we relate the distance between the curves $z$ and $h$ through the function
\begin{displaymath}
d(z,h)=\frac{1}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}
\end{displaymath}
\begin{lem}
The following estimate holds
\begin{align*}
\frac{d}{dt}\norm{d(z,h)}_{L^{\infty}}^{2}\le\esth{3}.
\end{align*}
\end{lem}
\begin{proof}
If we take $p>1$, we get
\begin{align*}
&\frac{d}{dt}\norm{d(z,h)}_{L^{p}}^{p}(t)=\frac{d}{dt}\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{1}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p}}d\alpha d\beta\\
&=-2p\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{(z(\alpha)-h(\alpha-\beta))\cdot z_{t}(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p+2}}d\alpha d\beta\\
&=-2p\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{z(\alpha)\cdot z_{t}(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p+2}}d\alpha d\beta+2p\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{h(\alpha-\beta)\cdot z_{t}(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p+2}}d\alpha d\beta\\
&=-2p\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{z(\alpha)\cdot(\brz{z}+\brh{z})}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p+2}}d\alpha d\beta\\
&-2p\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{z(\alpha)\cdot\pa{}z(\alpha)c(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p+2}}d\alpha d\beta+2p\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{h(\alpha-\beta)\cdot(\brz{z}+\brh{z})}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p+2}}d\alpha d\beta\\
&+2p\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{h(\alpha-\beta)\cdot \pa{}z(\alpha)c(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p+2}}d\alpha d\beta\equiv J_{1}+J_{2}+J_{3}+J_{4}.
\end{align*}
It is easy to see that
\begin{align*}
&J_{1}\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{L^{2}}\norm{\brz{z}+\brh{z}}_{L^{2}}\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{1}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p}}d\alpha d\beta\\
&\le\esth{3}\norm{d(z,h)}_{L^{p}}^{p},\\
&J_{3}\le C\norm{d(z,h)}_{L^{\infty}}\norm{h}_{L^{2}}\norm{\brz{z}+\brh{z}}_{L^{2}}\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{1}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p}}d\alpha d\beta\\
&\le\esth{3}\norm{d(z,h)}_{L^{p}}^{p},\\
&J_{2}\le C\norm{d(z,h)}_{L^{\infty}}\norm{c}_{L^{\infty}}\norm{z}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{1}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p}}d\alpha d\beta\\
&\le\esth{3}\norm{d(z,h)}_{L^{p}}^{p},
\end{align*}
and
\begin{align*}
&J_{4}\le C\norm{d(z,h)}_{L^{\infty}}\norm{c}_{L^{\infty}}\norm{h}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}\int_{{{\mathbb T}}}\int_{{{\mathbb T}}}\frac{1}{\abs{z(\alpha)-h(\alpha-\beta)}^{2p}}d\alpha d\beta\\
&\le\esth{3}\norm{d(z,h)}_{L^{p}}^{p}.
\end{align*}
Therefore,
\begin{align*}
\frac{d}{dt}\norm{d(z,h)}_{L^{p}}^{p}\le\exp C\nor{z,h}^{2}\norm{d(z,h)}_{L^{p}}^{p}.
\end{align*}
Integrating in $t$,
\begin{align*}
&\norm{d(z,h)}_{L^{p}}(t+h)\le\norm{d(z,h)}_{L^{p}}(t)\exp(\int_{t}^{t+h}\exp C\nor{z,h}^{2}(s)ds).
\end{align*}
Letting $p\to\infty$ we get
\begin{align*}
\norm{d(z,h)}_{L^{\infty}}(t+h)\le\norm{d(z,h)}_{L^{\infty}}(t)\exp(\int_{t}^{t+h}\exp C\nor{z,h}^{2}(s)ds),
\end{align*}
then
\begin{align*}
&\frac{d}{dt}\norm{d(z,h)}_{L^{\infty}}(t)=\lim_{h\to 0}(\frac{\norm{d(z,h)}_{L^{\infty}}(t+h)-\norm{d(z,h)}_{L^{\infty}}(t)}{h})\\
&\le\norm{d(z,h)}_{L^{\infty}}(t)\lim_{h\to 0}(\frac{\exp\int_{t}^{t+h}\exp\nor{z,h}^{2}(s)ds-1}{h})\\
&\le\norm{d(z,h)}_{L^{\infty}}(t)\exp\nor{z,h}^{2}(t).
\end{align*}
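The last limit is the standard argument, assuming $t\mapsto\nor{z,h}(t)$ is continuous: writing $g(s)=\exp C\nor{z,h}^{2}(s)$, the expansion $e^{x}=1+x+O(x^{2})$ and the fundamental theorem of calculus give
\begin{displaymath}
\lim_{h\to 0}\frac{\exp\left(\int_{t}^{t+h}g(s)ds\right)-1}{h}=\lim_{h\to 0}\frac{1}{h}\int_{t}^{t+h}g(s)ds=g(t).
\end{displaymath}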
\end{proof}
\section{Evolution of the minimum of $\sigma(\alpha,t)$}
\label{evolsig}
We know that
\begin{displaymath}
\sigma(\alpha,t)=\frac{\mu^{2}-\mu^{1}}{\kappa^{1}}(\brz{z}+\brh{z})\cdot\pa{\bot}z(\alpha)+(\rho^{2}-\rho^{1})g\pa{}z_{1}(\alpha).
\end{displaymath}
\begin{lem}
Let $z(\alpha,t)$ be a solution of the system with $z(\alpha,t)\in\mathcal{C}^{1}(\left[0,T\right];H^{3})$ and $m(t)=\min_{\alpha\in{{\mathbb T}}}\sigma(\alpha,t)$.
Then
\begin{displaymath}
m(t)\ge m(0)-\int_{0}^{t}\exp C\nor{z,h}^{2}(s)ds.
\end{displaymath}
\end{lem}
Recall that
\begin{align*}
&\exp C\nor{z,h}^{2}=\esth{3}.
\end{align*}
\begin{proof}
We consider $\alpha_{t}\in{{\mathbb T}}$ such that
\begin{displaymath}
m(t)=\min_{\alpha\in{{\mathbb T}}}\sigma(\alpha,t)=\sigma(\alpha_{t},t).
\end{displaymath}
We may calculate the derivative of $m(t)$ to obtain
\begin{displaymath}
m'(t)=\sigma_{t}(\alpha_{t},t).
\end{displaymath}
Using the definition,
\begin{align*}
&\sigma_{t}(\alpha,t)=\frac{\mu^{2}-\mu^{1}}{\kappa^{1}}(\partial_{t}\brz{z}+\partial_{t}\brh{z})\cdot\pa{\bot}z(\alpha)\\
&+(\frac{\mu^{2}-\mu^{1}}{\kappa^{1}}(\brz{z}+\brh{z})\cdot\pa{\bot}z_{t}(\alpha)+(\rho^{2}-\rho^{1})g\pa{}\partial_{t}z_{1}(\alpha))\\
&\equiv I_{1}+I_{2}.
\end{align*}
We have,
\begin{align*}
&\abs{I_{2}}\le C(\norm{\brz{z}+\brh{z}}_{L^{\infty}}+1)\norm{\pa{}z_{t}}_{L^{\infty}}\\
&\le\exp C\nor{z,h}^{2}\norm{\pa{}z_{t}}_{L^{\infty}}.
\end{align*}
Using the equation for $z_{t}$ we can estimate $\norm{\pa{}z_{t}}_{L^{\infty}}$. We have,
\begin{align}
\label{paz}
&\norm{\pa{}z_{t}}_{L^{\infty}}\le\norm{\pa{}\brz{z}}_{L^{\infty}}+\norm{\pa{}\brh{z}}_{L^{\infty}}+\norm{\pa{}c}_{L^{\infty}}\norm{\pa{}z}_{L^{\infty}}\\
&+\norm{c}_{L^{\infty}}\norm{\pa{2}z}_{L^{\infty}}\le\exp C\nor{z,h}^{2}\nonumber
\end{align}
then we obtain
\begin{displaymath}
\abs{I_{2}}\le\exp C\nor{z,h}^{2}.
\end{displaymath}
Let us write $\partial_{t}\brz{z}=B_{1}+B_{2}+B_{3}$ where
\begin{align*}
&B_{1}=\frac{1}{2\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z)^{\bot}}{\abs{\Delta z}^{2}}\partial_{t}\varpi_{1}(\alpha-\beta)d\beta,\\
&B_{2}=\frac{1}{2\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z_{t})^{\bot}}{\abs{\Delta z}^{2}}\varpi_{1}(\alpha-\beta)d\beta,\\
&B_{3}=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z)^{\bot}\Delta z\cdot\Delta z_{t}}{\abs{\Delta z}^{4}}\varpi_{1}(\alpha-\beta)d\beta.\\
\end{align*}
We split $B_{1}$ in the following way,
\begin{align*}
&B_{1}=\frac{1}{2\pi}PV\int_{{{\mathbb R}}}(\frac{(\Delta z)^{\bot}}{\abs{\Delta z}^{2}}-\frac{\pa{\bot}z(\alpha)}{\beta\abs{\pa{}z(\alpha)}^{2}})\partial_{t}\varpi_{1}(\alpha-\beta)d\beta+\frac{\pa{\bot}z(\alpha)}{\abs{\pa{}z(\alpha)}^{2}}H(\partial_{t}\varpi_{1})(\alpha).
\end{align*}
Then,
\begin{align*}
&\abs{B_{1}}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}\norm{\partial_{t}\varpi_{1}}_{L^{2}}+\norm{{{\mathcal{F}(z)}}}^{\frac{1}{2}}_{L^{\infty}}\norm{\partial_{t}\varpi_{1}}_{\mathcal{C}^{\delta}}.
\end{align*}
To estimate $B_{2}$ we split
\begin{align*}
&B_{2}=\frac{1}{2\pi}PV\int_{{{\mathbb R}}}(\Delta z_{t})^{\bot}(\frac{1}{\abs{\Delta z}^{2}}-\frac{1}{\beta^{2}\abs{\pa{}z(\alpha)}^{2}})\varpi_{1}(\alpha-\beta)d\beta\\
&+\frac{1}{2\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z_{t})^{\bot}}{\beta^{2}\abs{\pa{}z(\alpha)}^{2}}\varpi_{1}(\alpha-\beta)d\beta\equiv B_{2}^{1}+B_{2}^{2}.
\end{align*}
Since,
\begin{align*}
&z_{t}(\alpha)-z_{t}(\alpha-\beta)=((\brz{z}(\alpha)-\brz{z}(\alpha-\beta))\\
&+(\brh{z}(\alpha)-\brh{z}(\alpha-\beta)))+ (c(\alpha)-c(\alpha-\beta))\pa{}z(\alpha-\beta)\\
&+c(\alpha-\beta)(\pa{}z(\alpha)-\pa{}z(\alpha-\beta))\equiv J_{1}+J_{2}+J_{3}
\end{align*}
we have,
\begin{align}
\label{j1}
&J_{1}=\beta\int_{0}^{1}\big(\pa{}\brz{z}(\alpha-\beta+t\beta)+\pa{}\brh{z}(\alpha-\beta+t\beta)\big)dt\\\nonumber
&\abs{J_{1}}\le\abs{\beta}(\norm{\pa{}\brz{z}}_{L^{\infty}}+\norm{\pa{}\brh{z}}_{L^{\infty}}),
\end{align}
\begin{align}
\label{j2}
&\abs{J_{2}}\le\frac{\abs{\beta}}{A^{\frac{1}{2}}(t)}(\norm{\pa{}\brz{z}}_{L^{\infty}}+\norm{\pa{}\brh{z}}_{L^{\infty}})
\end{align}
and
\begin{align}
\label{j3}
&J_{3}=c(\alpha-\beta)\beta\int_{0}^{1}\pa{2}z(\alpha-\beta+t\beta)dt,\qquad\abs{J_{3}}\le\norm{c}_{L^{\infty}}\abs{\beta}\norm{z}_{\mathcal{C}^{2}}.
\end{align}
Computing $\frac{1}{\abs{\Delta z}^{2}}-\frac{1}{\beta^{2}\abs{\pa{}z(\alpha)}^{2}}$ and using (\ref{j1}), (\ref{j2}) and (\ref{j3}), we get
\begin{displaymath}
\abs{B_{2}^{1}}\le C\exp C\nor{z,h}^{2}\norm{{{\mathcal{F}(z)}}}^{\frac{3}{2}}_{L^{\infty}}\norm{z}_{H^{2}}\norm{\varpi_{1}}_{L^{2}}.
\end{displaymath}
Since,
\begin{align*}
\Delta z_{t}=\beta^{2}\int_{0}^{1}\int_{0}^{1}\pa{2}z(\alpha-\beta s+\beta ts)(1-t)dsdt+\beta\pa{}z_{t}(\alpha)
\end{align*}
then
\begin{align*}
&B_{2}^{2}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{\pa{2}z_{t}}_{L^{2}}\norm{\varpi_{1}}_{L^{2}}+C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{\pa{}z_{t}}_{L^{\infty}}\norm{H(\varpi_{1})}_{L^{2}}.
\end{align*}
Using (\ref{paz}) and
\begin{align}
\label{pa2z}
&\norm{\pa{2}z_{t}}_{L^{2}}\le\norm{\pa{2}\brz{z}+\pa{2}\brh{z}}_{L^{2}}+\norm{\pa{2}c}_{L^{2}}\norm{\pa{}z}_{L^{\infty}}\\\nonumber
&+\norm{\pa{}c}_{L^{\infty}}\norm{\pa{2}z}_{L^{2}}+\norm{c}_{L^{\infty}}\norm{\pa{3}z}_{L^{2}},
\end{align}
since
\begin{align*}
\pa{2}c(\alpha)=&-\frac{\pa{2}z(\alpha)}{A(t)}(\pa{}\brz{z}+\pa{}\brh{z})\\
&-\frac{\pa{}z(\alpha)}{A(t)}(\pa{2}\brz{z}+\pa{2}\brh{z}),
\end{align*}
we get
\begin{align*}
B_{2}^{2}\le\exp C\nor{z,h}^{2}.
\end{align*}
Proceeding in the same way, we have $B_{3}\le\exp C\nor{z,h}^{2}$.
On the other hand, we split $\partial_{t}\brh{z}=C_{1}+C_{2}+C_{3}$ where,
\begin{align*}
&C_{1}=\frac{1}{2\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta zh)^{\bot}}{\abs{\Delta zh}^{2}}\partial_{t}\varpi_{2}(\alpha-\beta)d\beta,\\
&C_{2}=\frac{1}{2\pi}PV\int_{{{\mathbb R}}}\frac{\partial_{t}^{\bot}z(\alpha)}{\abs{\Delta zh}^{2}}\varpi_{2}(\alpha-\beta)d\beta,\\
&C_{3}=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta zh)^{\bot}\Delta zh\cdot\partial_{t}z(\alpha)}{\abs{\Delta zh}^{4}}\varpi_{2}(\alpha-\beta)d\beta.
\end{align*}
Thus we have
\begin{align*}
&C_{1}\le C\norm{d(z,h)}_{L^{\infty}}^{\frac{1}{2}}\norm{\partial_{t}\varpi_{2}}_{L^{2}},\\
&C_{2}\le C\norm{d(z,h)}_{L^{\infty}}\norm{\partial_{t}z}_{L^{\infty}}\norm{\varpi_{2}}_{L^{2}}\le\exp C\nor{z,h}^{2},\\
&C_{3}\le C\norm{d(z,h)}_{L^{\infty}}\norm{\pa{}z_{t}}_{L^{\infty}}\norm{\varpi_{2}}_{L^{2}}\le\exp C\nor{z,h}^{2}.
\end{align*}
It remains to control $\norm{\partial_{t}\varpi_{1}}_{L^{2}}$, $\norm{\partial_{t}\varpi_{2}}_{L^{2}}$ and $\norm{\partial_{t}\varpi_{1}}_{\mathcal{C}^{\delta}}$.
Using the definitions of $\partial_{t}\varpi_{1}$ and $\partial_{t}\varpi_{2}$ we can see that
\begin{align*}
\varpi_{t}+M{{\mathcal T}}(\varpi_{t})=-M{{\mathbb R}}\varpi-\left(\begin{matrix}
N\partial_{t}\pa{}z_{2}(\alpha)\\
0
\end{matrix}\right)
\end{align*}
where
\begin{displaymath}
{{\mathbb R}}=\left(\begin{matrix}
R_{1}&R_{2}\\
R_{3}&0
\end{matrix}\right)
\end{displaymath}
with
\begin{align*}
&R_{1}(\varpi_{1})=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\partial_{t}(\frac{(z(\alpha)-z(\alpha-\beta))^{\bot}\cdot\pa{}z(\alpha)}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}})\varpi_{1}(\alpha-\beta)d\beta,\\
&R_{2}(\varpi_{2})=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\partial_{t}(\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}\cdot\pa{}z(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}})\varpi_{2}(\alpha-\beta)d\beta,\\
&R_{3}(\varpi_{1})=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\partial_{t}(\frac{(h(\alpha)-z(\alpha-\beta))^{\bot}\cdot\pa{}z(\alpha)}{\abs{h(\alpha)-z(\alpha-\beta)}^{2}})\varpi_{1}(\alpha-\beta)d\beta.\\
\end{align*}
Then,
\begin{displaymath}
\norm{\varpi_{t}}_{H^{\frac{1}{2}}}\le\norm{(I+M{{\mathcal T}})^{-1}}_{H^{\frac{1}{2}}}(\norm{{{\mathbb R}}\varpi}_{H^{\frac{1}{2}}}+\norm{\pa{}z_{t}}_{H^{\frac{1}{2}}}).
\end{displaymath}
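Let us also record the elementary bound behind estimates of this type: whenever $\norm{M{{\mathcal T}}}_{X\to X}<1$ on a Banach space $X$, the inverse is given by the Neumann series
\begin{displaymath}
(I+M{{\mathcal T}})^{-1}=\sum_{k\ge0}(-M{{\mathcal T}})^{k},\qquad\norm{(I+M{{\mathcal T}})^{-1}}_{X\to X}\le\frac{1}{1-\norm{M{{\mathcal T}}}_{X\to X}}.
\end{displaymath}
In general, the estimates on $(I+M{{\mathcal T}})^{-1}$ needed here are obtained in Section \ref{estiminversesect}.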
Therefore, in order to control $\norm{\varpi_{t}}_{L^{2}}$ we only need to estimate $\norm{{{\mathbb R}}\varpi}_{H^{\frac{1}{2}}}$. To do that, we will estimate $\norm{{{\mathbb R}}\varpi}_{H^{1}}$.
Let us split $R_{1}(\varpi_{1})=S_{1}+S_{2}+S_{3}$ where
\begin{align*}
&S_{1}=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z_{t}(\alpha)-z_{t}(\alpha-\beta))^{\bot}\cdot\pa{}z(\alpha)}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}}\varpi_{1}(\alpha-\beta)d\beta,\\
&S_{2}=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-z(\alpha-\beta))^{\bot}\cdot\pa{}z_{t}(\alpha)}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}}\varpi_{1}(\alpha-\beta)d\beta,\\
&S_{3}=-\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z)^{\bot}\cdot\pa{}z(\alpha)\Delta z\cdot\Delta z_{t}}{\abs{z(\alpha)-z(\alpha-\beta)}^{4}}\varpi_{1}(\alpha-\beta)d\beta.\\
\end{align*}
We will estimate $\pa{}S_{1}$; the other terms $\pa{}S_{2}$ and $\pa{}S_{3}$ are estimated by the same procedure.
\begin{align*}
&\pa{}S_{1}=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\pa{}z_{t}(\alpha)-\pa{}z_{t}(\alpha-\beta))^{\bot}\cdot\pa{}z(\alpha)}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}}\varpi_{1}(\alpha-\beta)d\beta\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z_{t}(\alpha)-z_{t}(\alpha-\beta))^{\bot}\cdot\pa{}z(\alpha)}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}}\pa{}\varpi_{1}(\alpha-\beta)d\beta\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z_{t}(\alpha)-z_{t}(\alpha-\beta))^{\bot}\cdot\pa{2}z(\alpha)}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}}\varpi_{1}(\alpha-\beta)d\beta\\
&-\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z_{t})^{\bot}\cdot\pa{}z(\alpha)\Delta z\cdot\Delta\pa{}z}{\abs{z(\alpha)-z(\alpha-\beta)}^{4}}\varpi_{1}(\alpha-\beta)d\beta\\
&\equiv S_{1}^{1}+S_{1}^{2}+S_{1}^{3}+S_{1}^{4}.
\end{align*}
As in the study of the evolution of the arc-chord condition, using the decompositions (\ref{j1}), (\ref{j2}) and (\ref{j3}), we can write $\Delta z_{t}=J_{1}+J_{2}+J_{3}$.
For $S_{1}^{1}$, we split
\begin{align*}
&S_{1}^{1}=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\pa{}z_{t}(\alpha)-\pa{}z_{t}(\alpha-\beta))^{\bot}\cdot\pa{}z(\alpha)}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}}(\varpi_{1}(\alpha-\beta)-\varpi_{1}(\alpha))d\beta\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\pa{}z_{t}(\alpha)-\pa{}z_{t}(\alpha-\beta))^{\bot}\cdot\pa{}z(\alpha)}{\abs{z(\alpha)-z(\alpha-\beta)}^{2}}\varpi_{1}(\alpha)d\beta\\
&\equiv S_{1}^{11}+S_{1}^{12}.
\end{align*}
Since $\varpi_{1}(\alpha-\beta)-\varpi_{1}(\alpha)=-\beta\int_{0}^{1}\pa{}\varpi_{1}(\alpha-\beta t)dt$ and $\pa{}z_{t}(\alpha)-\pa{}z_{t}(\alpha-\beta)=\beta\int_{0}^{1}\pa{2}z_{t}(\alpha-\beta+\beta t)dt$, the estimate (\ref{pa2z}) allows us to control $S_{1}^{11}$.
For $S_{1}^{12}$, computing
\begin{displaymath}
B(\alpha,\beta)=\frac{1}{\abs{\Delta z}^{2}}-\frac{1}{\beta^{2}\abs{\pa{}z(\alpha)}^{2}}=\frac{\beta\int_{0}^{1}\int_{0}^{1}\pa{2}z(\psi)(t-1)dtds\cdot\int_{0}^{1}(\pa{}z(\alpha)+\pa{}z(\phi))dt}{\abs{\pa{}z(\alpha)}^{2}\abs{\Delta z}^{2}}
\end{displaymath} we split
\begin{align*}
&S_{1}^{12}=\frac{1}{\pi}PV\int_{{{\mathbb R}}}(\beta\int_{0}^{1}\pa{2}z_{t}(\alpha-\beta+\beta t)^{\bot})dt\cdot\pa{}z(\alpha)B(\alpha,\beta)\varpi_{1}(\alpha)d\beta\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\pa{}z_{t}(\alpha)-\pa{}z_{t}(\alpha-\beta))^{\bot}\cdot\pa{}z(\alpha)}{\beta^{2}\abs{\pa{}z(\alpha)}^{2}}\varpi_{1}(\alpha)d\beta\\
&\le\exp C\nor{z,h}^{2}+S_{1}^{121}.
\end{align*}
Here, we have
\begin{align*}
&S_{1}^{121}=\Lambda(\pa{\bot}z_{t})\cdot\pa{}z(\alpha)\frac{\varpi_{1}(\alpha)}{\abs{\pa{}z(\alpha)}^{2}},
\end{align*}
then
\begin{align*}
\abs{S_{1}^{121}}\le\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}\norm{\varpi_{1}}_{L^{\infty}}\abs{\Lambda(\pa{}z_{t})}.
\end{align*}
Thus,
\begin{align*}
\norm{S_{1}^{1}}_{L^{2}}\le\exp C\nor{z,h}^{2}\norm{\Lambda(\pa{}z_{t})}_{L^{2}}\le\exp C\nor{z,h}^{2}\norm{\pa{2}z_{t}}_{L^{2}}\le\exp C\nor{z,h}^{2}.
\end{align*}
For $S_{1}^{2}$ we proceed in the same way,
\begin{align*}
&S_{1}^{2}=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\beta\int_{0}^{1}\pa{\bot}z_{t}(\alpha-\beta+\beta t)dt\cdot\pa{}z(\alpha)B(\alpha,\beta)\pa{}\varpi_{1}(\alpha-\beta)d\beta\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{\int_{0}^{1}\pa{\bot}z_{t}(\alpha-\beta+\beta t)dt\cdot\pa{}z(\alpha)}{\beta\abs{\pa{}z(\alpha)}^{2}}\pa{}\varpi_{1}(\alpha-\beta)d\beta\\
&\le\exp C\nor{z,h}^{2}\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{\int_{0}^{1}\int_{0}^{1}\pa{2}z_{t}(\alpha-\beta s+\beta ts)^{\bot}(1-t)dsdt\cdot\pa{}z(\alpha)}{\abs{\pa{}z(\alpha)}^{2}}\pa{}\varpi_{1}(\alpha-\beta)d\beta\\
&+\frac{\pa{\bot}z_{t}(\alpha)\cdot\pa{}z(\alpha)}{\abs{\pa{}z(\alpha)}^{2}}H(\pa{}\varpi_{1}).
\end{align*}
Therefore, using (\ref{paz}) and (\ref{pa2z})
\begin{align*}
\norm{S_{1}^{2}}_{L^{2}}\le\exp C\nor{z,h}^{2}.
\end{align*}
The term $S_{1}^{3}$ is estimated exactly as $S_{1}^{2}$. For $S_{1}^{4}$, we compute:
\begin{align*}
&C(\alpha,\beta)=\frac{1}{\abs{\Delta z}^{4}}-\frac{1}{\beta^{4}\abs{\pa{}z(\alpha)}^{4}}\\
&=\frac{\beta\int_{0}^{1}\int_{0}^{1}\pa{2}z(\psi)(t-1)dsdt\cdot\int_{0}^{1}(\pa{}z(\alpha)+\pa{}z(\phi))dt\int_{0}^{1}(\abs{\pa{}z(\alpha)}^{2}+\abs{\pa{}z(\phi)}^{2})dt}{\abs{\Delta z}^{4}\abs{\pa{}z(\alpha)}^{4}}
\end{align*}
where $\psi=\alpha-\beta+\beta t+\beta s-\beta ts$ and $\phi=\alpha-\beta+\beta t$. We have,
\begin{align*}
&S_{1}^{4}=-\frac{2}{\pi}PV\int_{{{\mathbb R}}}\beta^{3}\int_{0}^{1}\pa{}z_{t}(\phi)^{\bot}dt\cdot\pa{}z(\alpha)\int_{0}^{1}\pa{}z(\phi)dt\cdot\int_{0}^{1}\pa{2}z(\phi)dtC(\alpha,\beta)\varpi_{1}(\alpha-\beta)d\beta\\
&-\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{\int_{0}^{1}\pa{}z_{t}(\phi)^{\bot}dt\cdot\pa{}z(\alpha)\int_{0}^{1}\pa{}z(\phi)dt\cdot\int_{0}^{1}\pa{2}z(\phi)dt}{\beta\abs{\pa{}z(\alpha)}^{4}}\varpi_{1}(\alpha-\beta)d\beta\\
&\le\exp C\nor{z,h}^{2}+S_{1}^{41}.
\end{align*}
It is easy to get
\begin{align*}
S_{1}^{41}\le\exp C\nor{z,h}^{2}-2\frac{\pa{\bot}z_{t}(\alpha)\cdot\pa{}z(\alpha)\pa{}z(\alpha)\cdot\pa{2}z(\alpha)}{\abs{\pa{}z(\alpha)}^{4}}H(\varpi_{1}).
\end{align*}
Then,
\begin{displaymath}
\norm{S_{1}^{4}}_{L^{2}}\le\exp C\nor{z,h}^{2}.
\end{displaymath}
Therefore, $\norm{\pa{}S_{1}}_{L^{2}}\le\exp C\nor{z,h}^{2}$.
The norms $\norm{\pa{}S_{2}}_{L^{2}}$ and $\norm{\pa{}S_{3}}_{L^{2}}$ are controlled in the same way.
Now let us estimate $\norm{\pa{}R_{2}}_{L^{2}}$. We have,
\begin{align*}
&R_{2}(\varpi_{2})=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{z^{\bot}_{t}(\alpha)\cdot\pa{}z(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}\varpi_{2}(\alpha-\beta)d\beta\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}\cdot\pa{}z_{t}(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}\varpi_{2}(\alpha-\beta)d\beta\\
&-\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta zh)^{\bot}\cdot\pa{}z(\alpha)\Delta zh\cdot z_{t}(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{4}}\varpi_{2}(\alpha-\beta)d\beta\\
&\equiv S_{4}+S_{5}+S_{6}.
\end{align*}
Then,
\begin{align*}
&S_{4}\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}\norm{z_{t}}_{L^{2}}\norm{\varpi_{2}}_{L^{2}},\\
&S_{5}\le C\norm{d(z,h)}_{L^{\infty}}^{\frac{1}{2}}\norm{\pa{}z_{t}}_{L^{\infty}}\norm{\varpi_{2}}_{L^{2}},\\
&S_{6}\le C\norm{d(z,h)}^{\frac{1}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}\norm{z_{t}}_{L^{2}}\norm{\varpi_{2}}_{L^{2}}.
\end{align*}
In conclusion,
\begin{displaymath}
\norm{\pa{}R_{2}(\varpi_{2})}_{L^{2}}\le\exp C\nor{z,h}^{2}.
\end{displaymath}
Moreover, $\pa{}R_{3}$ is analogous to $\pa{}R_{2}$ after exchanging $z$ and $h$, so $\norm{\pa{}R_{3}(\varpi_{1})}_{L^{2}}\le\exp C\nor{z,h}^{2}$. Thus, we have controlled $\norm{\partial_{t}\varpi_{1}}_{L^{2}}$ and $\norm{\partial_{t}\varpi_{2}}_{L^{2}}$.
Finally, in order to control $\norm{\partial_{t}\varpi_{1}}_{\mathcal{C}^{\delta}}$ we will use
\begin{align*}
\norm{\partial_{t}\varpi_{1}}_{\mathcal{C}^{\delta}}\le C(\norm{T_{1}(\partial_{t}\varpi_{1})}_{\mathcal{C}^{\delta}}+\norm{T_{2}(\partial_{t}\varpi_{2})}_{\mathcal{C}^{\delta}}+\norm{R_{1}(\varpi_{1})}_{\mathcal{C}^{\delta}}+\norm{R_{2}(\varpi_{2})}_{\mathcal{C}^{\delta}}+\norm{\pa{}z_{t}}_{\mathcal{C}^{\delta}}).
\end{align*}
Using Lemma \ref{tl2h1},
\begin{align*}
&\norm{T_{1}(\partial_{t}\varpi_{1})}_{\mathcal{C}^{\delta}}\le\norm{T_{1}(\partial_{t}\varpi_{1})}_{H^{1}}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{z}^{4}_{\mathcal{C}^{2,\delta}}\norm{\partial_{t}\varpi_{1}}_{L^{2}},\\
&\norm{T_{2}(\partial_{t}\varpi_{2})}_{\mathcal{C}^{\delta}}\le\norm{T_{2}(\partial_{t}\varpi_{2})}_{H^{1}}\le C\norm{d(z,h)}^{2}_{L^{\infty}}\norm{h}^{4}_{\mathcal{C}^{2,\delta}}\norm{\partial_{t}\varpi_{2}}_{L^{2}},
\end{align*}
for $\delta\le\frac{1}{2}$.
We have already seen that $\norm{R_{1}(\varpi_{1})}_{H^{1}}+\norm{R_{2}(\varpi_{2})}_{H^{1}}\le\exp C\nor{z,h}^{2}$, hence
\begin{displaymath}
\norm{R_{1}(\varpi_{1})}_{\mathcal{C}^{\delta}}+\norm{R_{2}(\varpi_{2})}_{\mathcal{C}^{\delta}}\le\exp C\nor{z,h}^{2}.
\end{displaymath}
Finally, let us observe that $\norm{\pa{}z_{t}}_{\mathcal{C}^{\delta}}\le\norm{z_{t}}_{H^{2}}$, which is controlled by $\norm{\pa{2}z_{t}}_{L^{2}}$ through (\ref{pa2z}).
The upper bound
\begin{displaymath}
\abs{\sigma_{t}(\alpha,t)}\le\exp C\nor{z,h}^{2}
\end{displaymath}
gives us
\begin{displaymath}
m'(t)\ge -\exp C\nor{z,h}^{2}
\end{displaymath}
for almost every $t$. A further integration in time yields
\begin{displaymath}
m(t)\ge m(0)-\int_{0}^{t}\exp C\nor{z,h}^{2}(s)ds.
\end{displaymath}
\end{proof}
\subsection{The basic operator}
Let us consider the operator
\begin{displaymath}
{{\mathcal T}}(u_{1},u_{2})(\alpha) = \;
\begin{pmatrix}
T_{1} & T_{2} \\
T_{3} & T_{4}
\end{pmatrix}
\begin{pmatrix}
u_{1}\\
u_{2}
\end{pmatrix}
\end{displaymath}
where
\begin{align*}
&T_{1}(u)(\alpha)=2BR(u,z)_{z}(\alpha)\cdot\partial_{\alpha}z(\alpha)\\
&T_{2}(u)(\alpha)=2BR(u,h)_{z}(\alpha)\cdot\partial_{\alpha}z(\alpha)\\
&T_{3}(u)(\alpha)=2BR(u,z)_{h}(\alpha)\cdot\partial_{\alpha}h(\alpha)\\
&T_{4}(u)(\alpha)=2BR(u,h)_{h}(\alpha)\cdot\partial_{\alpha}h(\alpha)
\end{align*}
\begin{lem}
\label{tl2h1}
Suppose that $\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}<\infty$, $\norm{{{\mathcal{F}(h)}}}_{L^{\infty}}<\infty$, $\norm{d(z,h)}_{L^{\infty}}<\infty$ and $z\in\mathcal{C}^{2,\delta}$, $h\in\mathcal{C}^{2,\delta}$. Then ${{\mathcal T}}:L^{2}\times L^{2}\to H^{1}\times H^{1}$ is compact and
\begin{displaymath}
\norm{{{\mathcal T}}}_{L^{2}\times L^{2}\to H^{1}\times H^{1}}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{d(z,h)}^{2}_{L^{\infty}}\norm{z}^{4}_{\mathcal{C}^{2,\delta}}
\end{displaymath}
\end{lem}
\begin{proof}
We have
\begin{displaymath}
{{\mathcal T}}(w)(\alpha) = \;
\begin{pmatrix}
T_{1} & T_{2} \\
T_{3} & T_{4}
\end{pmatrix}
\begin{pmatrix}
u\\
v
\end{pmatrix}=\begin{pmatrix}
T_{1}(u)+T_{2}(v)\\
T_{3}(u)+T_{4}(v)
\end{pmatrix}
\end{displaymath}
and we consider $\norm{(u,v)}_{L^{2}}=\norm{u}_{L^{2}}+\norm{v}_{L^{2}},$
then
\begin{displaymath}
\norm{{{\mathcal T}}(w)}_{L^{2}}=\norm{T_{1}(u)+T_{2}(v)}_{L^{2}}+\norm{T_{3}(u)+T_{4}(v)}_{L^{2}}
\end{displaymath}
We want to estimate $\norm{\partial_{\alpha}{{\mathcal T}}(w)}_{L^{2}}$. Since
\begin{align*}
&\norm{\partial_{\alpha}T_{1}(u)+\partial_{\alpha}T_{2}(v)}_{L^{2}}\le\norm{\partial_{\alpha}T_{1}(u)}_{L^{2}}+\norm{\partial_{\alpha}T_{2}(v)}_{L^{2}},\\
&\norm{\partial_{\alpha}T_{3}(u)+\partial_{\alpha}T_{4}(v)}_{L^{2}}\le\norm{\partial_{\alpha}T_{3}(u)}_{L^{2}}+\norm{\partial_{\alpha}T_{4}(v)}_{L^{2}},
\end{align*}
it is enough to estimate each $T_{i}$ for $i=1,2,3,4$ separately.
The operators $T_{1}$ and $T_{4}$ are exactly the same as the operator $T$ in \cite{hele}. Therefore, by Lemma 3.1 in \cite{hele} we have:
\begin{align*}
&\norm{\partial_{\alpha}T_{1}(u)}_{L^{2}}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{z}^{4}_{\mathcal{C}^{2,\delta}}\norm{u}_{L^{2}},\\
&\norm{\partial_{\alpha}T_{4}(v)}_{L^{2}}\le C\norm{{{\mathcal{F}(h)}}}^{2}_{L^{\infty}}\norm{h}^{4}_{\mathcal{C}^{2,\delta}}\norm{v}_{L^{2}}.
\end{align*}
Let us estimate the operators $T_{2}$ and $T_{3}$.
We first write
\begin{align*}
\partial_{\alpha}T_{2}(v)&=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\pa{} (\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}\cdot\pa{} z(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}})v(\alpha-\beta)d\beta\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}\cdot\pa{} z(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}\pa{} v(\alpha-\beta)d\beta\equiv I_{1}+I_{2}.
\end{align*}
Then, writing $\Delta zh=z(\alpha)-h(\alpha-\beta)$ to shorten notation,
\begin{align*}
&I_{1}=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\pa{} z(\alpha)-\pa{} h(\alpha-\beta))^{\bot}\cdot\pa{} z(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}v(\alpha-\beta)d\beta\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}\cdot\partial^{2}_{\alpha} z(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}v(\alpha-\beta)d\beta\\
&-\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta zh)^{\bot}\cdot\pa{} z(\alpha)(\Delta zh)\cdot\pa{}\Delta zh}{\abs{z(\alpha)-h(\alpha-\beta)}^{4}}v(\alpha-\beta)d\beta\\
&\equiv I_{1}^{1}+I_{1}^{2}+I_{1}^{3}.
\end{align*}
Since $\pa{} z(\alpha)\cdot\pa{} z(\alpha)^{\bot}=0$ we have,
\begin{displaymath}
I_{1}^{1}=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{\pa{\bot} h(\alpha-\beta)\cdot\pa{} z(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}v(\alpha-\beta)d\beta\le C\norm{d(z,h)}_{L^{\infty}}\norm{\pa{} z}_{L^{\infty}}\norm{h}_{H^{1}}\norm{v}_{L^{2}}.
\end{displaymath}
Using the Cauchy inequality, $u\cdot v\le\frac{\abs{u}^{2}}{2}+\frac{\abs{v}^{2}}{2}$; hence
\begin{align*}
I_{1}^{2}&\le\frac{1}{2\pi}PV\int_{{{\mathbb R}}}v(\alpha-\beta)d\beta+\frac{1}{2\pi}PV\int_{{{\mathbb R}}}\frac{\abs{\partial^{2}_{\alpha}z(\alpha)}^{2}}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}v(\alpha-\beta)d\beta\\
&\le C\norm{v}_{L^{2}}+C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}^{2}\norm{v}_{L^{2}}.
\end{align*}
Moreover,
\begin{align*}
&\abs{I_{1}^{3}}\le\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{\abs{z(\alpha)-h(\alpha-\beta)}^{2}\abs{\pa{} z(\alpha)}\abs{\pa{} z(\alpha)-\pa{} h(\alpha-\beta)}}{\abs{z(\alpha)-h(\alpha-\beta)}^{4}}\abs{v(\alpha-\beta)}d\beta\\
&\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}(\norm{z}_{\mathcal{C}^{1}}+\norm{h}_{\mathcal{C}^{1}})\norm{v}_{L^{2}}
\end{align*}
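The integration by parts used below rests on the identity $\pa{}v(\alpha-\beta)=-\partial_{\beta}\big(v(\alpha-\beta)\big)$: formally, for a kernel $K$ and $v$ decaying at infinity (so that the boundary terms vanish),
\begin{displaymath}
PV\int_{{{\mathbb R}}}K(\alpha,\beta)\,\pa{}v(\alpha-\beta)d\beta=-PV\int_{{{\mathbb R}}}K(\alpha,\beta)\,\partial_{\beta}\big(v(\alpha-\beta)\big)d\beta=PV\int_{{{\mathbb R}}}\partial_{\beta}K(\alpha,\beta)\,v(\alpha-\beta)d\beta.
\end{displaymath}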
On the other hand, using integration by parts
\begin{align*}
I_{2}&=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\partial_{\beta}(\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}\cdot\pa{} z(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}})v(\alpha-\beta)d\beta\\
&=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\pa{} h(\alpha-\beta))^{\bot}\cdot\pa{} z(\alpha)}{\abs{z(\alpha)-h(\alpha-\beta)}^{2}}v(\alpha-\beta)d\beta\\
&-\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-h(\alpha-\beta))^{\bot}\cdot\pa{} z(\alpha)(z(\alpha)-h(\alpha-\beta))\cdot\pa{} h(\alpha-\beta)}{\abs{z(\alpha)-h(\alpha-\beta)}^{4}}v(\alpha-\beta)d\beta\\
&\le C\norm{d(z,h)}_{L^{\infty}}\norm{\pa{} z}_{L^{\infty}}\norm{h}_{\mathcal{C}^{1}}\norm{v}_{L^{2}}+C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}\norm{h}_{\mathcal{C}^{1}}\norm{v}_{L^{2}}.
\end{align*}
Then,
\begin{displaymath}
\norm{\pa{} T_{2}(v)}_{L^{2}}\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}^{2}_{\mathcal{C}^{2}}\norm{h}_{\mathcal{C}^{1}}\norm{v}_{L^{2}}.
\end{displaymath}
Finally, we have to estimate $\pa{} T_{3}$. We have,
\begin{align*}
\pa{} T_{3}(u)(\alpha)&=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\pa{} (\frac{(h(\alpha)-z(\alpha-\beta))^{\bot}\cdot\pa{} h(\alpha)}{\abs{h(\alpha)-z(\alpha-\beta)}^{2}})u(\alpha-\beta)d\beta\\
&+\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(h(\alpha)-z(\alpha-\beta))^{\bot}\cdot\pa{} h(\alpha)}{\abs{h(\alpha)-z(\alpha-\beta)}^{2}}\pa{} u(\alpha-\beta)d\beta.
\end{align*}
Changing $z$ for $h$, we can check that we have the same estimates as in $T_{2}$. Thus,
\begin{displaymath}
\norm{\pa{}T_{3}(u)}_{L^{2}}\le C\norm{d(z,h)}_{L^{\infty}}\norm{h}^{2}_{\mathcal{C}^{2}}\norm{z}_{\mathcal{C}^{1}}\norm{u}_{L^{2}}.
\end{displaymath}
Therefore,
\begin{displaymath}
\norm{\pa{}{{\mathcal T}}(u,v)}_{L^{2}}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{{{\mathcal{F}(h)}}}^{2}_{L^{\infty}}\norm{d(z,h)}^{2}_{L^{\infty}}\norm{h}^{4}_{\mathcal{C}^{2,\delta}}\norm{z}^{4}_{\mathcal{C}^{2,\delta}}\norm{(u,v)}_{L^{2}}.
\end{displaymath}
Since $h$ is fixed in time, $\norm{{{\mathcal{F}(h)}}}_{L^{\infty}}^{2}$ and $\norm{h}^{4}_{\mathcal{C}^{2,\delta}}$ do not depend on time. Thus we get
\begin{displaymath}
\norm{\pa{}{{\mathcal T}}(w)}_{L^{2}}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{d(z,h)}^{2}_{L^{\infty}}\norm{z}^{4}_{\mathcal{C}^{2,\delta}}\norm{w}_{L^{2}}.
\end{displaymath}
\end{proof}
\subsection{Estimates on the inverse operator}\label{estiminversesect}
We are going to work with the adjoint operator of ${{\mathcal T}}$ in order to estimate the inverse operator $(I+M{{\mathcal T}})^{-1}$.
We have,
\begin{align*}
&\left( \begin{pmatrix}
u_{1}\\
u_{2}
\end{pmatrix},
\begin{pmatrix}
T_{1} & T_{2} \\
T_{3} & T_{4}
\end{pmatrix}
\begin{pmatrix}
w_{1}\\
w_{2}
\end{pmatrix}
\right) =\left(
\begin{pmatrix}
u_{1}\\
u_{2}
\end{pmatrix},
\begin{pmatrix}
T_{1}(w_{1})+ T_{2}(w_{2}) \\
T_{3}(w_{1})+ T_{4}(w_{2})
\end{pmatrix}\right)\\
&=(T_{1}(w_{1}),u_{1})+(T_{2}(w_{2}),u_{1})+(T_{3}(w_{1}),u_{2})+(T_{4}(w_{2}),u_{2})\\
&=(w_{1},T^{*}_{1}(u_{1}))+(w_{2},T^{*}_{2}(u_{1}))+(w_{1},T^{*}_{3}(u_{2}))+(w_{2},T^{*}_{4}(u_{2}))=\\
&\left( \begin{pmatrix}
w_{1}\\
w_{2}
\end{pmatrix},
\begin{pmatrix}
T^{*}_{1}(u_{1}) + T^{*}_{3}(u_{2}) \\
T^{*}_{2}(u_{1}) + T^{*}_{4}(u_{2})
\end{pmatrix}
\right) =\left(
\begin{pmatrix}
w_{1}\\
w_{2}
\end{pmatrix},
\begin{pmatrix}
T^{*}_{1} & T^{*}_{3} \\
T^{*}_{2} & T^{*}_{4}
\end{pmatrix}
\begin{pmatrix}
u_{1} \\
u_{2}
\end{pmatrix}\right)
\end{align*}
The adjoint operator is given by
\begin{displaymath}
{{\mathcal T}}^{*}(u_{1},u_{2})(\alpha) = \;
\begin{pmatrix}
T^{*}_{1} & T^{*}_{3} \\
T^{*}_{2} & T^{*}_{4}
\end{pmatrix}
\begin{pmatrix}
u_{1}\\
u_{2}
\end{pmatrix}
\end{displaymath}
where we can compute:
\begin{displaymath}
T_{1}^{*}(u)(\alpha)=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-z(\beta))^{\bot}\cdot\partial_{\alpha}z(\beta)}{\abs{z(\alpha)-z(\beta)}^{2}}u(\beta)d\beta,
\end{displaymath}
\begin{displaymath}
T_{2}^{*}(u)(\alpha)=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(h(\alpha)-z(\beta))^{\bot}\cdot\partial_{\alpha}z(\beta)}{\abs{h(\alpha)-z(\beta)}^{2}}u(\beta)d\beta,
\end{displaymath}
\begin{displaymath}
T_{3}^{*}(u)(\alpha)=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-h(\beta))^{\bot}\cdot\partial_{\alpha}h(\beta)}{\abs{z(\alpha)-h(\beta)}^{2}}u(\beta)d\beta,
\end{displaymath}
and
\begin{displaymath}
T_{4}^{*}(u)(\alpha)=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(h(\alpha)-h(\beta))^{\bot}\cdot\partial_{\alpha}h(\beta)}{\abs{h(\alpha)-h(\beta)}^{2}}u(\beta)d\beta.
\end{displaymath}
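For instance, the formula for $T_{1}^{*}$ can be checked, formally applying Fubini's theorem to the principal value integrals, from
\begin{align*}
(T_{1}(u),w)&=\frac{1}{\pi}\int_{{{\mathbb R}}}\Big(PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-z(\beta))^{\bot}\cdot\partial_{\alpha}z(\alpha)}{\abs{z(\alpha)-z(\beta)}^{2}}u(\beta)d\beta\Big)w(\alpha)d\alpha\\
&=\int_{{{\mathbb R}}}u(\beta)\Big(\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-z(\beta))^{\bot}\cdot\partial_{\alpha}z(\alpha)}{\abs{z(\alpha)-z(\beta)}^{2}}w(\alpha)d\alpha\Big)d\beta.
\end{align*}
Renaming $\alpha\leftrightarrow\beta$ in the inner integral and using $(z(\beta)-z(\alpha))^{\bot}=-(z(\alpha)-z(\beta))^{\bot}$ gives the minus sign in $T_{1}^{*}$; the remaining formulas are obtained in the same way.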
\begin{prop}\label{estimaT}
Suppose that $\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}<\infty$, $\norm{{{\mathcal{F}(h)}}}_{L^{\infty}}<\infty$,\\ $\norm{d(z,h)}_{L^{\infty}}<\infty$ and $z,h\in\mathcal{C}^{2,\delta}$. Then ${{\mathcal T}}^{*}:L^{2}\times L^{2}\to H^{1}\times H^{1}$ is bounded and
\begin{displaymath}
\norm{{{\mathcal T}}^{*}}_{L^{2}\times L^{2}\to H^{1}\times H^{1}}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}^{2}\norm{d(z,h)}_{L^{\infty}}^{2}\norm{z}^{2}_{\mathcal{C}^{2,\delta}}.
\end{displaymath}
\end{prop}
\begin{proof}
In the same way as in the study of ${{\mathcal T}}$, we can prove this estimate by studying each $T_{i}^{*}$ separately. We have
\begin{displaymath}
T_{1}^{*}(u)=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-z(\beta))^{\bot}\cdot\partial_{\alpha}z(\beta)}{\abs{z(\alpha)-z(\beta)}^{2}}u(\beta)d\beta
\end{displaymath}
then
\begin{align*}
\partial_{\alpha}T_{1}^{*}(u)=&-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\partial_{\alpha}(\frac{(\Delta z)^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{\Delta z}^{2}})u(\alpha-\beta)d\beta\\
&-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z)^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{\Delta z}^{2}}\partial_{\alpha}u(\alpha-\beta)d\beta\equiv I_{1}+I_{2}.
\end{align*}
$I_{1}$ is estimated in the same way as the operator $T_{1}$. Using integration by parts,
\begin{align*}
I_{2}&=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z)^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{\Delta z}^{2}}\partial_{\beta}u(\alpha-\beta)d\beta\\
&=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\partial_{\beta}(\frac{(\Delta z)^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{\Delta z}^{2}})u(\alpha-\beta)d\beta\\
&=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{\partial^{\bot}_{\alpha} z(\alpha-\beta)\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{\Delta z}^{2}}u(\alpha-\beta)d\beta\\
&-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z)^{\bot}\cdot\partial^{2}_{\alpha}z(\alpha-\beta)}{\abs{\Delta z}^{2}}u(\alpha-\beta)d\beta\\
&+\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta z)^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)\Delta z\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{\Delta z}^{2}}u(\alpha-\beta)d\beta\equiv I_{2}^{1}+I_{2}^{2}+I_{2}^{3}.
\end{align*}
Since $\partial_{\alpha}^{\bot}z\cdot\partial_{\alpha}z=0$, $I_{2}^{1}=0$.
We can write
\begin{align*}
I_{2}^{2}&=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}(\frac{(\Delta z)^{\bot}}{\abs{\Delta z}^{2}}-\frac{\pa{\bot}z(\alpha)}{\beta\abs{\pa{} z(\alpha)}^{2}})\cdot\partial^{2}_{\alpha}z(\alpha-\beta)u(\alpha-\beta)d\beta\\
&-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{\pa{\bot}z(\alpha)}{\beta\abs{\pa{} z(\alpha)}^{2}}\cdot\partial^{2}_{\alpha}z(\alpha-\beta)u(\alpha-\beta)d\beta\equiv I_{2}^{21}+I_{2}^{22}.
\end{align*}
We compute:
\begin{align*}
&\frac{(\Delta z)^{\bot}}{\abs{\Delta z}^{2}}-\frac{\pa{\bot}z(\alpha)}{\beta\abs{\pa{} z(\alpha)}^{2}}=\frac{\beta\abs{\pa{} z(\alpha)}^{2}\Delta z^{\bot}-\pa{\bot}z(\alpha)\abs{\Delta z}^{2}}{\beta\abs{\pa{} z(\alpha)}^{2}\abs{\Delta z}^{2}}\\
&=\frac{\beta^{2}\abs{\pa{} z(\alpha)}^{2}\int_{0}^{1}\pa{\bot}z(\alpha-\beta+t\beta)dt-\pa{\bot}z(\alpha)\abs{\Delta z}^{2}}{\beta\abs{\pa{} z(\alpha)}^{2}\abs{\Delta z}^{2}}\\
&=\frac{\beta^{3}\abs{\pa{} z(\alpha)}^{2}\int_{0}^{1}\int_{0}^{1}\pa{2\bot}z(\alpha-s\beta+st\beta)(t-1)dsdt+\pa{\bot}z(\alpha)(\beta^{2}\abs{\pa{}z(\alpha)}^{2}-\abs{\Delta z}^{2})}{\beta\abs{\pa{} z(\alpha)}^{2}\abs{\Delta z}^{2}}\\
&=\frac{\beta^{2}\int_{0}^{1}\int_{0}^{1}\pa{2}z(\alpha-s\beta+st\beta)(t-1)dsdt}{\abs{\Delta z}^{2}}\\
&+\frac{\beta^{2}\pa{\bot}z(\alpha)\int_{0}^{1}\int_{0}^{1}\pa{2}z(\alpha-\beta+t\beta+s\beta-ts\beta)(1-t)dsdt\cdot\int_{0}^{1}(\pa{}z(\alpha)+\pa{}z(\alpha-\beta+t\beta))dt}{\abs{\pa{} z(\alpha)}^{2}\abs{\Delta z}^{2}},
\end{align*}
therefore
\begin{align*}
I_{2}^{21}\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}^{2}\norm{u}_{L^{2}}.
\end{align*}
For the term $I_{2}^{22}$ we write
\begin{align*}
I_{2}^{22}=&-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{\pa{\bot}z(\alpha)}{\abs{\pa{} z(\alpha)}^{2}}\cdot\frac{\partial^{2}_{\alpha}z(\alpha-\beta)-\pa{2}z(\alpha)}{\beta}u(\alpha-\beta)d\beta\\
&-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{\pa{\bot}z(\alpha)}{\abs{\pa{} z(\alpha)}^{2}}\cdot\partial^{2}_{\alpha}z(\alpha)\frac{u(\alpha-\beta)}{\beta}d\beta\\
&\le C\norm{{{\mathcal{F}(z)}}}^{\frac{1}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2,\delta}}\norm{u}_{L^{2}}-\frac{1}{\pi}\frac{\pa{\bot}z(\alpha)\cdot\pa{2}z(\alpha)}{\abs{\pa{}z(\alpha)}^{2}}H(u)\\
&\le C\norm{{{\mathcal{F}(z)}}}^{\frac{1}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2,\delta}}\norm{u}_{L^{2}}.
\end{align*}
Setting $\phi=\alpha-\beta+t\beta$, we can easily see that
\begin{align*}
I_{2}^{3}&= \frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{\beta^{2}\int_{0}^{1} \pa{\bot}z(\phi)dt\cdot\partial_{\alpha}z(\alpha-\beta)\int_{0}^{1}\pa{}z(\phi)dt\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{\Delta z}^{2}}u(\alpha-\beta)d\beta\\
&\le C\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}\norm{z}^{2}_{\mathcal{C}^{1}}\norm{z}^{2}_{H^{1}}\norm{u}_{L^{2}}.
\end{align*}
Now, we consider
\begin{align*}
T_{2}^{*}(v)=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(h(\alpha)-z(\beta))^{\bot}\cdot\partial_{\alpha}z(\beta)}{\abs{h(\alpha)-z(\beta)}^{2}}v(\beta)d\beta,
\end{align*}
then
\begin{align*}
\pa{}T_{2}^{*}(v)&=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\pa{}(\frac{(h(\alpha)-z(\alpha-\beta))^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{h(\alpha)-z(\alpha-\beta)}^{2}})v(\alpha-\beta)d\beta\\
&-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(h(\alpha)-z(\alpha-\beta))^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{h(\alpha)-z(\alpha-\beta)}^{2}}\pa{}v(\alpha-\beta)d\beta\equiv J_{1}+J_{2}.
\end{align*}
Using $\pa{\bot}z\cdot\pa{}z=0$,
\begin{align*}
J_{1}&=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{\pa{\bot}h(\alpha)\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{h(\alpha)-z(\alpha-\beta)}^{2}}v(\alpha-\beta)d\beta\\
&-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(h(\alpha)-z(\alpha-\beta))^{\bot}\cdot\pa{2}z(\alpha-\beta)}{\abs{h(\alpha)-z(\alpha-\beta)}^{2}}v(\alpha-\beta)d\beta\\
&+\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta hz)^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)\Delta hz\cdot(\pa{}h(\alpha)-\pa{}z(\alpha-\beta))}{\abs{h(\alpha)-z(\alpha-\beta)}^{4}}v(\alpha-\beta)d\beta\\
&\equiv J_{1}^{1}+J_{1}^{2}+J_{1}^{3}.
\end{align*}
Directly,
\begin{align*}
\abs{J_{1}^{1}}\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}\norm{h}_{\mathcal{C}^{1}}\norm{v}_{L^{2}},
\end{align*}
\begin{align*}
\abs{J_{1}^{2}}\le C\norm{d(z,h)}_{L^{\infty}}^{\frac{1}{2}}\norm{z}^{2}_{\mathcal{C}^{2}}\norm{v}_{L^{2}}
\end{align*}
and
\begin{align*}
&J_{1}^{3}\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}_{\mathcal{C}^{1}}(\norm{z}_{\mathcal{C}^{1}}+\norm{h}_{\mathcal{C}^{1}})\norm{v}_{L^{2}}.
\end{align*}
Now, we study the term $J_{2}$. Since $\pa{}z\cdot\pa{\bot}z=0$,
\begin{align*}
J_{2}&=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\partial_{\beta}(\frac{(h(\alpha)-z(\alpha-\beta))^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)}{\abs{h(\alpha)-z(\alpha-\beta)}^{2}})v(\alpha-\beta)d\beta\\
&=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(h(\alpha)-z(\alpha-\beta))^{\bot}\cdot\partial_{\alpha}^{2}z(\alpha-\beta)}{\abs{h(\alpha)-z(\alpha-\beta)}^{2}}v(\alpha-\beta)d\beta\\
&-\frac{2}{\pi}PV\int_{{{\mathbb R}}}\frac{(\Delta hz)^{\bot}\cdot\partial_{\alpha}z(\alpha-\beta)\Delta hz\cdot\pa{}z(\alpha-\beta)}{\abs{h(\alpha)-z(\alpha-\beta)}^{4}}v(\alpha-\beta)d\beta\equiv J_{2}^{1}+J_{2}^{2}.\\
\end{align*}
Using the same procedure as in term $J_{1}$,
\begin{align*}
\abs{J_{2}^{1}}\le C\norm{d(z,h)}^{\frac{1}{2}}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2}}\norm{v}_{L^{2}}
\end{align*}
and
\begin{align*}
\abs{J_{2}^{2}}\le C\norm{d(z,h)}_{L^{\infty}}\norm{z}^{2}_{\mathcal{C}^{1}}\norm{v}_{L^{2}}.
\end{align*}
The operator $T_{3}^{*}(v)(\alpha)$ is estimated in the same way as $T_{2}^{*}(u)(\alpha)$, exchanging $z$ and $h$. For $T_{4}^{*}(v)(\alpha)$ we proceed as for $T_{1}^{*}(u)(\alpha)$, exchanging $z$ and $h$ and using the arc-chord condition for $h$, ${{\mathcal{F}(h)}}$, instead of ${{\mathcal{F}(z)}}$.
In conclusion,
\begin{displaymath}
\norm{\pa{}{{\mathcal T}}^{*}w}_{L^{2}}\le C\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}\norm{{{\mathcal{F}(h)}}}_{L^{\infty}}^{2}\norm{d(z,h)}^{2}_{L^{\infty}}\norm{z}_{\mathcal{C}^{2,\delta}}^{2}\norm{h}_{\mathcal{C}^{2,\delta}}^{2}\norm{w}_{L^{2}}.
\end{displaymath}
\end{proof}
Now it will be useful to consider the following functions. Let $x$ be a point outside the curves $z(\alpha)$ and $h(\alpha)$; then we define
\begin{align*}
f_{1}(x)=&-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(x-z(\beta))^{\bot}\cdot\partial_{\alpha}z(\beta)}{\abs{x-z(\beta)}^{2}}u(\beta)d\beta\\
&=\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(x_{2}-z_{2}(\beta))\partial_{\alpha}z_{1}(\beta)}{\abs{x-z(\beta)}^{2}}u(\beta)d\beta-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(x_{1}-z_{1}(\beta))\partial_{\alpha}z_{2}(\beta)}{\abs{x-z(\beta)}^{2}}u(\beta)d\beta.
\end{align*}
In the following we identify $(u_{1},u_{2})$ with $u_{1}+iu_{2}$. Since $-u^{\bot}\cdot v=u_{2}v_{1}-u_{1}v_{2}$ and $(u_{1}+iu_{2})\overline{(v_{1}+iv_{2})}=(u_{1}v_{1}+u_{2}v_{2})+i(u_{2}v_{1}-u_{1}v_{2})$
we get,
\begin{displaymath}
-u^{\bot}\cdot v={{\mathfrak{I}}}(u\bar{v}).
\end{displaymath}
Therefore, we can write
\begin{displaymath}
f_{1}(x)=\frac{1}{\pi}{{\mathfrak{I}}}\int_{{{\mathbb R}}}\frac{(x-z(\beta))\overline{\partial_{\alpha}z(\beta)}}{\abs{x-z(\beta)}^{2}}u(\beta)d\beta.
\end{displaymath}
In the same way,
\begin{displaymath}
f_{2}(x)=\frac{1}{\pi}{{\mathfrak{I}}}\int_{{{\mathbb R}}}\frac{(x-h(\beta))\overline{\partial_{\alpha}h(\beta)}}{\abs{x-h(\beta)}^{2}}v(\beta)d\beta.
\end{displaymath}
Both are the real part of the following Cauchy integrals
\begin{align*}
F_{1}(x)=f_{1}(x)+ig_{1}(x)=\frac{1}{i\pi}\int_{{{\mathbb R}}}\frac{(x-z(\beta))\overline{\partial_{\alpha}z(\beta)}}{\abs{x-z(\beta)}^{2}}u(\beta)d\beta,\\
F_{2}(x)=f_{2}(x)+ig_{2}(x)=\frac{1}{i\pi}\int_{{{\mathbb R}}}\frac{(x-h(\beta))\overline{\partial_{\alpha}h(\beta)}}{\abs{x-h(\beta)}^{2}}v(\beta)d\beta.
\end{align*}
Taking $x=z(\alpha)+\epsilon\partial^{\bot}_{\alpha}z(\alpha)$ and letting $\epsilon\to 0$, we obtain
\begin{equation}
\label{f1}
f_{1}(z(\alpha))=T_{1}^{*}(u)(\alpha)-sign(\epsilon)u(\alpha),
\end{equation}
and taking $x=h(\alpha)+\epsilon\partial^{\bot}_{\alpha}h(\alpha)$ and letting $\epsilon\to 0$
\begin{equation}
\label{f4}
f_{2}(h(\alpha))=T_{4}^{*}(v)(\alpha)-sign(\epsilon)v(\alpha).
\end{equation}
Since the curve $z(\alpha)$ does not touch the curve $h(\alpha)$, we have
\begin{equation}
\label{f2}
f_{1}(h(\alpha))=T_{2}^{*}(u)(\alpha)
\end{equation}
and
\begin{equation}
\label{f3}
f_{2}(z(\alpha))=T_{3}^{*}(v)(\alpha).
\end{equation}
On the other hand,
\begin{displaymath}
\lim_{\epsilon\to 0}g_{1}(z(\alpha)\pm\epsilon\partial^{\bot}_{\alpha}z(\alpha))=\lim_{\epsilon\to 0}{{\mathfrak{I}}}(F_{1}(z(\alpha)\pm\epsilon\partial^{\bot}_{\alpha}z(\alpha)))\equiv G_{1}(u)(\alpha)
\end{displaymath}
where
\begin{displaymath}
G_{1}(u)(\alpha)=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(z(\alpha)-z(\beta))\cdot\partial_{\alpha}z(\beta)}{\abs{z(\alpha)-z(\beta)}^{2}}u(\beta)d\beta.
\end{displaymath}
In the same way, taking limits
\begin{displaymath}
\lim_{\epsilon\to 0}g_{2}(h(\alpha)\pm\epsilon\partial^{\bot}_{\alpha}h(\alpha))=\lim_{\epsilon\to 0}{{\mathfrak{I}}}(F_{2}(h(\alpha)\pm\epsilon\partial^{\bot}_{\alpha}h(\alpha)))\equiv G_{2}(v)(\alpha)
\end{displaymath}
where
\begin{displaymath}
G_{2}(v)(\alpha)=-\frac{1}{\pi}PV\int_{{{\mathbb R}}}\frac{(h(\alpha)-h(\beta))\cdot\partial_{\alpha}h(\beta)}{\abs{h(\alpha)-h(\beta)}^{2}}v(\beta)d\beta.
\end{displaymath}
Therefore, $g_{i}^{+}(z(\alpha))=g_{i}^{-}(z(\alpha))$ and $g_{i}^{+}(h(\alpha))=g_{i}^{-}(h(\alpha))$ for $i=1,2$, where $(\cdot)^{+}$ denotes the limit obtained by approaching the boundaries from above in the normal direction and $(\cdot)^{-}$ the limit from below. (This fact will be used in Subsection \ref{operhi}.)
Now we will show that ${{\mathcal T}}^{*}w=\lambda w$ implies $\abs{\lambda}<1$.
If $w=(u,v)$ is an eigenvector of ${{\mathcal T}}^{*}$, we have
\begin{displaymath}
{{\mathcal T}}^{*}w = \;
\begin{pmatrix}
T_{1}^{*} & T_{3}^{*} \\
T_{2}^{*} & T_{4}^{*}
\end{pmatrix}
\begin{pmatrix}
u \\
v
\end{pmatrix}=
\begin{pmatrix}
T_{1}^{*}u + T_{3}^{*}v \\
T_{2}^{*}u + T_{4}^{*}v
\end{pmatrix}=
\begin{pmatrix}
\lambda u \\
\lambda v
\end{pmatrix}= \lambda w.
\end{displaymath}
Let us compute $\nabla f_{i}$ for $i=1,2$.
The identity
\begin{align*}
f_{1}(x)&=\frac{1}{\pi}{{\mathfrak{I}}}\int_{{{\mathbb R}}}\frac{(x-z(\beta))\overline{\partial_{\alpha}z(\beta)}}{\abs{x-z(\beta)}^{2}}u(\beta)d\beta=\frac{1}{\pi}{{\mathfrak{I}}}\int_{{{\mathbb R}}}\frac{\overline{\partial_{\alpha}z(\beta)}}{\overline{(x-z(\beta))}}u(\beta)d\beta\\
&=-\frac{1}{\pi}{{\mathfrak{I}}}\int_{{{\mathbb R}}}\partial_{\beta}\ln(x-z(\beta))u(\beta)d\beta=\frac{1}{\pi}{{\mathfrak{I}}}\int_{{{\mathbb R}}}\ln(x-z(\beta))\partial_{\beta}u(\beta)d\beta
\end{align*}
yields
\begin{displaymath}
\nabla f_{1}(x)=\frac{1}{\pi}{{\mathfrak{I}}}\int_{{{\mathbb R}}}\partial_{\beta}u(\beta)\nabla\ln(x-z(\beta))d\beta.
\end{displaymath}
That is
\begin{displaymath}
\nabla f_{1}(x)=\frac{1}{\pi}\int_{{{\mathbb R}}}\partial_{\beta}u(\beta)\nabla\arg(x-z(\beta))d\beta=\frac{1}{\pi}\int_{{{\mathbb R}}}\frac{(x-z(\beta))^{\bot}}{\abs{x-z(\beta)}^{2}}\partial_{\beta}u(\beta)d\beta.
\end{displaymath}
In the same way,
\begin{displaymath}
\nabla f_{2}(x)=\frac{1}{\pi}\int_{{{\mathbb R}}}\frac{(x-h(\beta))^{\bot}}{\abs{x-h(\beta)}^{2}}\partial_{\beta}v(\beta)d\beta.
\end{displaymath}
Taking $x=z(\alpha)+\epsilon\partial^{\bot}_{\alpha}z(\alpha)$ and letting $\epsilon\to 0$ in $\nabla f_{1}$ we have
\begin{equation}
\label{nf1}
\nabla f_{1}(z(\alpha))=2BR(\partial_{\alpha}u,z)_{z}-sign(\epsilon)\frac{\pa{}u(\alpha)\partial_{\alpha}z(\alpha)}{2\abs{\partial_{\alpha}z(\alpha)}^{2}}.
\end{equation}
On the other hand, taking $x=h(\alpha)+\epsilon\partial^{\bot}_{\alpha}h(\alpha)$ in $\nabla f_{2}$ and letting $\epsilon\to 0$,
\begin{equation}
\label{nf4}
\nabla f_{2}(h(\alpha))=2BR(\partial_{\alpha}v,h)_{h}-sign(\epsilon)\frac{\pa{}v(\alpha)\partial_{\alpha}h(\alpha)}{2\abs{\partial_{\alpha}h(\alpha)}^{2}}.
\end{equation}
Obviously,
\begin{equation}
\label{nf2}
\nabla f_{1}(h(\alpha))=2BR(\partial_{\alpha}u,z)_{h}
\end{equation}
and
\begin{equation}
\label{nf3}
\nabla f_{2}(z(\alpha))=2BR(\partial_{\alpha}v,h)_{z}.
\end{equation}
Assume now that ${{\mathcal T}}^{*}w=\lambda w$. Let $\Omega_{1}$ be the domain above the curve $z(\alpha)$, $\Omega_{2}$ the domain between $z(\alpha)$ and $h(\alpha)$, and $\Omega_{3}$ the domain below the curve $h(\alpha)$. The analyticity of $F_{i}$ for $i=1,2$ allows us to obtain:
\begin{align}
\label{lambda1}
&0<\int_{\Omega_{1}}\abs{F'_{1}(x)+F_{2}'(x)}^{2}dx=2\int_{\Omega_{1}}\abs{(\nabla f_{1}(x)+\nabla f_{2}(x))}^{2}dx\\\nonumber
&=-2\int_{\Omega_{1}}\Delta(f_{1}(x)+f_{2}(x))(f_{1}(x)+f_{2}(x))dx\\\nonumber
&-2\int_{{{\mathbb T}}}(f_{1}^{+}(z(\alpha))+f_{2}^{+}(z(\alpha)))(\nabla f_{1}^{+}(z(\alpha))+\nabla f_{2}^{+}(z(\alpha)))\cdot\frac{\partial_{\alpha}^{\bot}z(\alpha)}{\abs{\partial_{\alpha}z(\alpha)}}d\alpha\\\nonumber
&=2\int_{{{\mathbb T}}}(-T_{1}^{*}(u)(\alpha)+u(\alpha)-T_{3}^{*}(v)(\alpha))(2BR(\partial_{\alpha}u,z)_{z}+2BR(\partial_{\alpha}v,h)_{z})\cdot\frac{\partial_{\alpha}^{\bot}z(\alpha)}{\abs{\partial_{\alpha}z(\alpha)}}d\alpha\\\nonumber
&=\int_{{{\mathbb T}}}(u(\alpha)-\lambda u(\alpha))M(u,v,h,z)d\alpha=(1-\lambda)\int_{{{\mathbb T}}}u(\alpha)M(u,v,h,z)d\alpha\\\nonumber
&\equiv(1-\lambda)A,
\end{align}
\begin{align}
\label{lambda2}
&0<\int_{\Omega_{2}}\abs{F'_{1}(x)+F_{2}'(x)}^{2}dx\\\nonumber
&=2\int_{{{\mathbb T}}}(f_{1}(z(\alpha))+f_{2}(z(\alpha)))(\nabla f_{1}(z(\alpha))+\nabla f_{2}(z(\alpha)))\cdot\frac{\partial_{\alpha}^{\bot}z(\alpha)}{\abs{\partial_{\alpha}z(\alpha)}}d\alpha\\\nonumber
&-2\int_{{{\mathbb T}}}(f_{1}^{+}(h(\alpha))+f_{2}^{+}(h(\alpha)))(\nabla f_{1}^{+}(h(\alpha))+\nabla f_{2}^{+}(h(\alpha)))\cdot\frac{\partial_{\alpha}^{\bot}h(\alpha)}{\abs{\partial_{\alpha}h(\alpha)}}d\alpha\\\nonumber
&=2\int_{{{\mathbb T}}}(u(\alpha)+T_{1}^{*}(u)(\alpha)+T_{3}^{*}(v)(\alpha))M(u,v,h,z)d\alpha\\\nonumber
&+2\int_{{{\mathbb T}}}(v(\alpha)-T_{2}^{*}(u)(\alpha)-T_{4}^{*}(v)(\alpha))(2BR(\partial_{\alpha}u,z)_{h}+2BR(\partial_{\alpha}v,h)_{h})\cdot\frac{\partial_{\alpha}^{\bot}z(\alpha)}{\abs{\partial_{\alpha}z(\alpha)}}d\alpha\\\nonumber
&=\int_{{{\mathbb T}}}(u(\alpha)+\lambda u(\alpha))M(u,v,h,z)d\alpha+2\int_{{{\mathbb T}}}(v(\alpha)-\lambda v(\alpha))N(u,v,h,z)\\\nonumber
&=(1+\lambda)\int_{{{\mathbb T}}}u(\alpha)M(u,v,h,z)d\alpha+(1-\lambda)\int_{{{\mathbb T}}}v(\alpha)N(u,v,h,z)d\alpha\\\nonumber
&\equiv (1+\lambda)A+(1-\lambda)B
\end{align}
and
\begin{align}
\label{lambda3}
&0<\int_{\Omega_{3}}\abs{F'_{1}(x)+F_{2}'(x)}^{2}dx\\\nonumber
&=2\int_{{{\mathbb T}}}(f_{1}^{-}(h(\alpha))+f_{2}^{-}(h(\alpha)))(\nabla f_{1}^{-}(h(\alpha))+\nabla f_{2}^{-}(h(\alpha)))\cdot\frac{\partial_{\alpha}^{\bot}h(\alpha)}{\abs{\partial_{\alpha}h(\alpha)}}d\alpha\\\nonumber
&=2\int_{{{\mathbb T}}}(v(\alpha)+T_{2}^{*}(u)(\alpha)+T_{4}^{*}(v)(\alpha))N(u,v,h,z)d\alpha\\\nonumber
&=\int_{{{\mathbb T}}}(v(\alpha)+\lambda v(\alpha))N(u,v,h,z)d\alpha=(1+\lambda)B
\end{align}
where we have used (\ref{f1})-(\ref{nf3}).
Suppose that $\abs{\lambda}\ge 1$; then $\lambda\in(-\infty,-1]\cup[1,\infty)$:
\begin{itemize}
\item[$\to$] If $\lambda\in(-\infty,-1]$ then
\begin{itemize}
\item[i)] From (\ref{lambda1}) we get that $A>0$.
\item[ii)] From (\ref{lambda3}) we get that $B<0$ and $\lambda\neq -1$.
\item[iii)] Therefore, (\ref{lambda2}) yields a contradiction.
\end{itemize}
\item[$\to$] If $\lambda\in[1,\infty)$ then
\begin{itemize}
\item[i)] From (\ref{lambda1}) we get that $A<0$ and $\lambda\neq 1$.
\item[ii)] From (\ref{lambda3}) we get that $B>0$.
\item[iii)] Therefore, (\ref{lambda2}) yields a contradiction.
\end{itemize}
\end{itemize}
Thus $\abs{\lambda}<1$. At this point, since ${{\mathcal T}}^{*}$ is a compact operator, we know that there exists $(I-M{{\mathcal T}}^{*})^{-1}$ for $M=\left (\begin{matrix}
\mu_{1} & 0 \\
0 & \mu_{2}
\end{matrix}\right)$ with $\abs{\mu_{i}}<1$ for $i=1,2$.
Our purpose is to prove that the $H^{\frac{1}{2}}$-norm of the inverse operator is bounded by $\exp(C\nor{z,h}^{2})$, where $\nor{z,h}^{2}=\norm{{{\mathcal{F}(z)}}}^{2}_{L^{\infty}}+\norm{d(z,h)}^{2}_{L^{\infty}}+\norm{z}^{2}_{H^{3}}$.
We start with the following proposition:
\begin{prop}\label{norml2}
The norms $\norm{(I\pm{{\mathcal T}}^{*})^{-1}}_{L^{2}_{0}}$ are bounded by $\exp(C\nor{z,h}^{2})$ for some universal constant $C$. Here the space $L^{2}_{0}$ is the usual $L^{2}$ with the extra condition of mean value zero.
\end{prop}
\begin{proof}
The proof follows if we establish the estimate
\begin{equation}
\label{esti}
e^{-C\nor{z,h}^{2}}\le\frac{\norm{\varpi-{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}}{\norm{\varpi+{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}}\le e^{C\nor{z,h}^{2}}
\end{equation}
valid for every nonzero $\varpi\in L^{2}_{0}\times L^2_{0}$.
Indeed, assume $\norm{\varpi-{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}\le e^{-2C\nor{z,h}^{2}}$ for some $\varpi$ with $\norm{\varpi}_{L^{2}_{0}}=1$; then we obtain $\norm{\varpi+{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}\ge 2\norm{\varpi}_{L^{2}_{0}}-e^{-2C\nor{z,h}^{2}}\ge 1$, which contradicts (\ref{esti}). Therefore we must have $\norm{\varpi-{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}\ge e^{-2C\nor{z,h}^{2}}$ for all $\norm{\varpi}_{L^{2}_{0}}=1$, i.e. $\norm{(I-{{\mathcal T}}^{*})^{-1}}_{L^{2}_{0}}\le e^{2C\nor{z,h}^{2}}$. Similarly we also obtain $\norm{(I+{{\mathcal T}}^{*})^{-1}}_{L^{2}_{0}}\le e^{2C\nor{z,h}^{2}}$.
Since
\begin{displaymath}
\varpi+{{\mathcal T}}^{*}\varpi=\left (\begin{matrix} u+T^{*}_{1}u+T^{*}_{3}v\\
v+T_{2}^{*}u+T_{4}^{*}v
\end{matrix}\right)=\left (\begin{matrix} f_{1}^{-}(z(\alpha))+f_{2}(z(\alpha))\\
f_{1}(h(\alpha))+f_{2}^{-}(h(\alpha))
\end{matrix}\right)\equiv\left(\begin{matrix}
m^{+}\\
w \end{matrix}\right)
\end{displaymath}
\begin{displaymath}
\varpi-{{\mathcal T}}^{*}\varpi=\left (\begin{matrix} u-T^{*}_{1}u-T^{*}_{3}v\\
v-T_{2}^{*}u-T_{4}^{*}v
\end{matrix}\right)=(-1)\left (\begin{matrix} f_{1}^{+}(z(\alpha))+f_{2}(z(\alpha))\\
f_{1}(h(\alpha))+f_{2}^{+}(h(\alpha))
\end{matrix}\right)\equiv(-1)\left(\begin{matrix}
f\\
m^{-}
\end{matrix}\right)
\end{displaymath}
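Indeed, these expressions follow from (\ref{f1})--(\ref{f3}): taking $\epsilon<0$ in (\ref{f1}) gives $f_{1}^{-}(z(\alpha))=T_{1}^{*}(u)(\alpha)+u(\alpha)$, so that, using (\ref{f3}),
\begin{displaymath}
u+T^{*}_{1}u+T^{*}_{3}v=\big(T_{1}^{*}(u)+u\big)+T_{3}^{*}(v)=f_{1}^{-}(z(\alpha))+f_{2}(z(\alpha)),
\end{displaymath}
and the remaining components are obtained in the same way from (\ref{f2}) and (\ref{f4}).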
Next we will see that we can write the above functions in terms of some operators, which we call $\mathcal{H}_{i}$ for $i=1,2,3$,
where $i$ denotes the corresponding domain $\Omega_{1}$, $\Omega_{2}$ and $\Omega_{3}$ (see Subsection \ref{operhi}). The relations with these operators are:
\begin{align*}
&m^{+}=\mathcal{H}_{2}^{z}(f,m^{-}),\\
&w=\mathcal{H}_{3}(f,m^{-}),\\
&f=\mathcal{H}_{1}(m^{+},w),\\
&m^{-}=\mathcal{H}_{2}^{h}(m^{+},w).
\end{align*}
And we will prove that
\begin{displaymath}
\norm{\mathcal{H}_{i}(\varpi)}_{L^{2}}\le e^{C\nor{z,h}^{2}}\norm{\varpi}_{L^{2}},
\end{displaymath}
where $C$ denotes a universal constant not necessarily the same at each occurrence.
With all these assumptions, the proof is as follows:
\begin{align*}
&\norm{\varpi+{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}=\norm{\left(\begin{matrix}
\mathcal{H}_{2}^{z}(f,m^{-})\\
\mathcal{H}_{3}(f,m^{-})
\end{matrix}\right)}_{L^{2}_{0}}\le e^{C\nor{z,h}^{2}}\norm{\left(\begin{matrix}
f\\
m^{-}
\end{matrix}\right)}_{L^{2}_{0}}\\
&=e^{C\nor{z,h}^{2}}\norm{\varpi-{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}.
\end{align*}
In the same way,
\begin{align*}
&\norm{\varpi-{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}=\norm{\left(\begin{matrix}
\mathcal{H}_{1}(m^{+},w)\\
\mathcal{H}_{2}^{h}(m^{+},w)
\end{matrix}\right)}_{L^{2}_{0}}\le e^{C\nor{z,h}^{2}}\norm{\left(\begin{matrix}
m^{+}\\
w
\end{matrix}\right)}_{L^{2}_{0}}\\
&=e^{C\nor{z,h}^{2}}\norm{\varpi+{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}.
\end{align*}
\end{proof}
Once we have the estimate for $(I\pm{{\mathcal T}}^{*})^{-1}$, we introduce the matrix $M=\left (\begin{matrix}
\mu_{1} & 0 \\
0 & \mu_{2}
\end{matrix}\right)$ with $\abs{\mu_{i}}<1$ for all $i=1,2$.
\begin{lem}\label{l2conm}
The following estimate holds:
\begin{displaymath}
\norm{(I+ M{{\mathcal T}}^{*})^{-1}}_{L^{2}_{0}}\le e^{C\nor{z,h}^{2}}
\end{displaymath}
for a universal constant $C$ and $\abs{\mu_{i}}\le 1$ for $i=1,2$.
\end{lem}
\begin{proof}
Using the identity $I+ M{{\mathcal T}}^{*}=M(I+{{\mathcal T}}^{*})+(I-M)$ and the estimate in Proposition \ref{norml2}, we can conclude that
\begin{displaymath}
\norm{(I+ M{{\mathcal T}}^{*})^{-1}}_{L^{2}_{0}}\le\exp{C\nor{z,h}^{2}}
\end{displaymath}
for $1-e^{-C_{1}\nor{z,h}^{2}}\le\abs{\mu_{i}}\le 1$.
For $\abs{\mu_{i}}\le 1-e^{-C_{1}\nor{z,h}^{2}}$, since $\norm{M{{\mathcal T}}^{*}}_{L^{2}}<1$ we can write the Neumann series $(I+ M{{\mathcal T}}^{*})^{-1}=\sum_{n}(-M{{\mathcal T}}^{*})^{n}$. Taking norms,
\begin{displaymath}
\norm{(I+ M{{\mathcal T}}^{*})^{-1}}_{L^{2}_{0}}\le\sum_{n}\norm{M{{\mathcal T}}^{*}}^{n}_{L^{2}_{0}}\le\sum_{n}(1-e^{-C_{1}\nor{z,h}^{2}})^{n}=e^{C_{1}\nor{z,h}^{2}}.
\end{displaymath}
\end{proof}
Now we are in a position to prove the $H^{\frac{1}{2}}$ estimate.
\begin{prop}
\label{invertoperhunmedio}
For $\abs{\mu_{i}}\le 1$ the following estimate holds
\begin{displaymath}
\norm{(I+M{{\mathcal T}})^{-1}}_{H^{\frac{1}{2}}_{0}}=\norm{(I+M{{\mathcal T}}^{*})^{-1}}_{H^{\frac{1}{2}}_{0}}\le e^{C\nor{z,h}^{2}},\\
\end{displaymath}
where $C$ is a universal constant and $M=\left(\begin{matrix}
\mu_{1} & 0\\
0 & \mu_{2}
\end{matrix}\right)$.
\end{prop}
\begin{proof}
We will use the same idea as in Proposition \ref{norml2}; therefore we are going to prove:
\begin{equation*}
e^{-C\nor{z,h}^{2}}\le\frac{\norm{\varpi-M{{\mathcal T}}^{*}\varpi}_{H^{\frac{1}{2}}_{0}}}{\norm{\varpi+M{{\mathcal T}}^{*}\varpi}_{H^{\frac{1}{2}}_{0}}}\le e^{C\nor{z,h}^{2}}.
\end{equation*}
To do that, using (\ref{estimaT}) and $\abs{\mu_{i}}<1$,
\begin{align*}
&\norm{\Lambda^{\frac{1}{2}}(\varpi+M{{\mathcal T}}^{*}\varpi)}_{L^{2}_{0}}\le\norm{\Lambda^{\frac{1}{2}}(\varpi-M{{\mathcal T}}^{*}\varpi)}_{L^{2}_{0}}+2\norm{M\Lambda^{\frac{1}{2}}({{\mathcal T}}^{*}\varpi)}_{L^{2}_{0}}\\
&\le\norm{\Lambda^{\frac{1}{2}}(\varpi-M{{\mathcal T}}^{*}\varpi)}_{L^{2}_{0}}+2\norm{{{\mathcal T}}^{*}\varpi}_{H^{1}}\\
&\le\norm{\Lambda^{\frac{1}{2}}(\varpi-M{{\mathcal T}}^{*}\varpi)}_{L^{2}_{0}}+e^{C\nor{z,h}^{2}}\norm{\varpi}_{L^{2}_{0}}.
\end{align*}
Using the estimate of Lemma \ref{l2conm},
\begin{displaymath}
\norm{\varpi}_{L^{2}_{0}}=\norm{(I-M{{\mathcal T}}^{*})^{-1}(I-M{{\mathcal T}}^{*})\varpi}_{L^{2}_{0}}\le e^{C\nor{z,h}^{2}}\norm{\varpi-M{{\mathcal T}}^{*}\varpi}_{L^{2}_{0}}.
\end{displaymath}
Therefore,
\begin{displaymath}
\norm{\varpi+M{{\mathcal T}}^{*}\varpi}_{H^{\frac{1}{2}}_{0}}\le e^{C\nor{z,h}^{2}}\norm{\varpi-M{{\mathcal T}}^{*}\varpi}_{H^{\frac{1}{2}}_{0}}.
\end{displaymath}
Analogously, we get
\begin{displaymath}
\norm{\varpi-M{{\mathcal T}}^{*}\varpi}_{H^{\frac{1}{2}}_{0}}\le e^{C\nor{z,h}^{2}}\norm{\varpi+M{{\mathcal T}}^{*}\varpi}_{H^{\frac{1}{2}}_{0}}
\end{displaymath}
and we finish the proof.
\end{proof}
\subsection{$\mathcal{H}_{i}$ Operators}\label{operhi}
The validity of the above results depends on the existence of the operators $\mathcal{H}_{i}$ introduced in Proposition \ref{norml2}.
We start by considering a flat domain, whose boundaries are the lines $(x,0)$ and $(x,1)$.
Let $F$ be a harmonic function, decaying at infinity, above $(x,1)$ such that
\begin{displaymath}
\left\{ \begin{array}{ll}
\Delta F=0\\
F(x,1)=f(x)
\end{array} \right.
\end{displaymath}
Taking Fourier transform, we can get $\hat{F}(\xi,y)=e^{-\abs{\xi}(y-1)}\hat{f}(\xi)$.
Now, if we compute the harmonic conjugate, which we will call $G$, we get $\hat{G}(\xi,y)=-i sign(\xi)\hat{f}(\xi)e^{-\abs{\xi}(y-1)}$ and, therefore, $\hat{G}(\xi,1)=-i sign(\xi)\hat{f}(\xi)$.
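Indeed, writing the Cauchy--Riemann equations for the analytic function $F+iG$ in Fourier variables,
\begin{displaymath}
i\xi\hat{F}(\xi,y)=\partial_{y}\hat{G}(\xi,y),\qquad \partial_{y}\hat{F}(\xi,y)=-i\xi\hat{G}(\xi,y),
\end{displaymath}
and using that $\partial_{y}\hat{F}(\xi,y)=-\abs{\xi}\hat{F}(\xi,y)$, we recover $\hat{G}(\xi,y)=\frac{\abs{\xi}}{i\xi}\hat{F}(\xi,y)=-i sign(\xi)e^{-\abs{\xi}(y-1)}\hat{f}(\xi)$.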
Now we consider between the boundaries the harmonic function $M$ such that,
\begin{displaymath}
\left\{ \begin{array}{ll}
\Delta M=0\\
M(x,1)=m^{+}(x)\\
M(x,0)=m^{-}(x)
\end{array} \right.
\end{displaymath}
Taking the Fourier transform, we can write $\hat{M}(\xi,y)=A(\xi)\sinh(\xi y)+B(\xi)\cosh(\xi y)$ and, computing the harmonic conjugate, we get $\hat{N}(\xi,y)=iA\cosh(\xi y)+iB\sinh(\xi y)$.
In the end, we want to relate these harmonic functions to our functions $F_{i}$ described in Subsection \ref{estiminversesect}. We saw that $g_{i}^{+}(z(\alpha))=g_{i}^{-}(z(\alpha))$ and $g_{i}^{+}(h(\alpha))=g_{i}^{-}(h(\alpha))$ for $i=1,2$. That is why we now impose $G(x,1)=N(x,1)$ and, below, $N(x,0)=R(x,0)$.
Therefore, since
\begin{align*}
&\hat{N}(\xi,1)=iA\cosh(\xi)+iB\sinh(\xi)=\hat{G}(\xi,1)=-i sign(\xi)\hat{f}(\xi),\\
&\hat{M}(\xi,0)=B=\hat{m}^{-}(\xi)
\end{align*}
then
\begin{displaymath}
A=\frac{sign(\xi)\hat{f}(\xi)-\hat{m}^{-}(\xi)\sinh(\xi)}{\cosh(\xi)}
\end{displaymath}
and
\begin{displaymath}
\hat{m}^{+}(\xi)=\hat{M}(\xi,1)=\frac{sign(\xi)\hat{f}(\xi)\sinh(\xi)+\hat{m}^{-}(\xi)}{\cosh(\xi)}.
\end{displaymath}
Moreover,
\begin{displaymath}
\hat{N}(\xi,0)=\frac{isign(\xi)\hat{f}(\xi)-i\hat{m}^{-}(\xi)\sinh(\xi)}{\cosh(\xi)}.
\end{displaymath}
Finally, we consider a harmonic function $W$ below $(x,0)$ such that
\begin{displaymath}
\left\{ \begin{array}{ll}
\Delta W=0\\
W(x,0)=w(x)
\end{array} \right.
\end{displaymath}
With the same procedure as before, we get the harmonic conjugate $\hat{R}(\xi,y)=i sign(\xi)\hat{w}(\xi)e^{\abs{\xi}y}$.
Since $\hat{R}(\xi,0)=\hat{N}(\xi,0)$, we obtain
\begin{displaymath}
\hat{w}(\xi)=\frac{\hat{f}(\xi)-sign(\xi)\hat{m}^{-}(\xi)\sinh(\xi)}{\cosh(\xi)}.
\end{displaymath}
Thus we have expressed $\hat{m}^{+}$ and $\hat{w}$ as functions of $\hat{f}$ and $\hat{m}^{-}$, going from the top of the domain to the bottom. If we do the same going from the bottom to the top we obtain
\begin{align*}
&\hat{f}(\xi)=\frac{-\hat{w}(\xi)-sign(\xi)\hat{m}^{+}(\xi)\sinh(\xi)}{\cosh(\xi)},\\
&\hat{m}^{-}(\xi)=\frac{\hat{m}^{+}(\xi)-sign(\xi)\hat{w}(\xi)\sinh(\xi)}{\cosh(\xi)}.
\end{align*}
We now define our operators as follows:
\begin{align*}
&\widehat{H_{1}(m^{+},w)}=\frac{-\hat{w}(\xi)-sign(\xi)\hat{m}^{+}(\xi)\sinh(\xi)}{\cosh(\xi)},\\
&\widehat{H_{2}^{h}(m^{+},w)}=\frac{\hat{m}^{+}(\xi)-sign(\xi)\hat{w}(\xi)\sinh(\xi)}{\cosh(\xi)},\\
&\widehat{H_{2}^{z}(f,m^{-})}=\frac{\hat{m}^{-}(\xi)+sign(\xi)\hat{f}(\xi)\sinh(\xi)}{\cosh(\xi)},\\
&\widehat{H_{3}(f,m^{-})}=\frac{\hat{f}(\xi)-sign(\xi)\hat{m}^{-}(\xi)\sinh(\xi)}{\cosh(\xi)},
\end{align*}
which are bounded in $L^{2}$.
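Indeed, each of the symbols above is, up to sign, of the form $(\hat{a}(\xi)\pm sign(\xi)\hat{b}(\xi)\sinh(\xi))/\cosh(\xi)$ and, since $1/\cosh(\xi)\le 1$ and $\abs{\tanh(\xi)}\le 1$, Plancherel's theorem (with the unitary normalisation of the Fourier transform) yields
\begin{displaymath}
\norm{H(a,b)}^{2}_{L^{2}}\le 2\int_{{{\mathbb R}}}\frac{\abs{\hat{a}(\xi)}^{2}}{\cosh^{2}(\xi)}d\xi+2\int_{{{\mathbb R}}}\abs{\hat{b}(\xi)}^{2}\tanh^{2}(\xi)d\xi\le 2\norm{a}^{2}_{L^{2}}+2\norm{b}^{2}_{L^{2}},
\end{displaymath}
where $H(a,b)$ denotes generically any of the four operators.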
\begin{figure}[htb]
\centering
\includegraphics[width=65mm]{curve_flat_domains.pdf}
\caption{Conformal maps $\phi_{i}$}\label{fig:flat}
\end{figure}
Let $\phi_{i}$ be the conformal map from the domain $\Omega_{i}$ to the ``flat'' domain (see Figure \ref{fig:flat}); then the corresponding operators in the ``curved'' domain are denoted by $\mathcal{H}_{i}$.
For the $L^{2}$-norm of the $\mathcal{H}_{i}$ Operator we can repeat the proofs in \cite{hele} for their Hilbert operator $\mathcal{H}_{1}$.
To do this we only have to look at the formulas:
\begin{align*}
&\mathcal{H}_{1}(m^{+},w)=H_{1}(m^{+}\circ\phi_{1}^{-1},w\circ\phi_{1}^{-1})\circ\phi_{1},\\
&\mathcal{H}_{2}^{h}(m^{+},w)=H_{2}^{h}(m^{+}\circ\phi_{2}^{-1},w\circ\phi_{2}^{-1})\circ\phi_{2},\\
&\mathcal{H}_{2}^{z}(f,m^{-})=H_{2}^{z}(f\circ\phi_{2}^{-1},m^{-}\circ\phi_{2}^{-1})\circ\phi_{2},\\
&\mathcal{H}_{3}(f,m^{-})=H_{3}(f\circ\phi_{3}^{-1},m^{-}\circ\phi_{3}^{-1})\circ\phi_{3}.
\end{align*}
Our parametric curves $z(\alpha)$ and $h(\alpha)$ are $\mathcal{C}^{2,\delta}$ and satisfy the arc-chord conditions $\norm{{{\mathcal{F}(z)}}}_{L^{\infty}}<\infty$ and $\norm{{{\mathcal{F}(h)}}}_{L^{\infty}}<\infty$, together with $\norm{{{d(z,h)}}}_{L^{\infty}}<\infty$. Hence we have balls tangent to the boundary contained inside the domains $\Omega_{i}$. Furthermore, we can estimate from below the radius of those balls by $C\nor{z,h}^{-1}$ (as in Lemma 4.3 in \cite{hele}).
Following the steps of the proof of Lemma 4.4 in \cite{hele} we can conclude that
\begin{displaymath}
\norm{\mathcal{H}_{i}}_{L^{2}}\le e^{C\nor{z,h}^{2}}
\end{displaymath}
for all $i=1,2,3$.
\section{Introduction}
\input{./muskatconpermeabilidades}
\section{Inverse Operator}\label{invertiroperador}
\input{./invertiroperador}
\input{./estimacionesenlaamplitud}
\section{A priori estimates on $z(\alpha,t)$}\label{estimacionesz}
\input{./estimacionesenz}
\input{./evoluciondelascondiciones}
\input{./conclusionexistencia}
\section{Introduction}\label{sec:introduction}
Several hybrid discretisation methods have been proposed in recent years for the numerical discretisation of partial differential equations \cite{di-pietro.ern.ea:2014:arbitrary,beirao-da-veiga.brezzi.ea:2013:basic,Cockburn2009}. One of the selling points of these schemes is their geometrical flexibility. Discretisation spaces are not bound to specific element topologies and can readily be used on general polytopal meshes. Body-fitted unstructured mesh generation is one of the main bottlenecks in complex numerical simulations, which requires intensive human intervention. Usually, these meshes are composed of tetrahedral (and/or hexahedral) elements. Polytopal methods can provide sought-after flexibility in the mesh generation step.
In this work, we focus on the \ac{hho} method. Developed in \cite{di-pietro.ern.ea:2014:arbitrary, di-pietro.ern:2015:hybrid}, the HHO method is a modern polytopal method for elliptic PDEs. A key aspect of HHO is its applicability to generic meshes with arbitrarily shaped elements. Additionally, HHO methods are of arbitrary order, dimension independent, and are amenable to static condensation. We refer the reader to \cite{di-pietro.droniou:2020:hybrid} for a thorough review of the method and its applications. An analysis on skewed meshes has been carried out for a diffusion problem in \cite{droniou:2020:interplay} and identifies how the error estimate is impacted by the element distortion and local diffusion tensor. The recent work of \cite{droniou.yemm:2021:robust} shows the HHO method to be accurate on meshes possessing elements with arbitrarily many small faces.
Unfitted (a.k.a.~embedded and immersed) discretisations can also simplify the geometrical discretisation step. The domain of interest is embedded in a simple background mesh (e.g., a Cartesian grid). The boundary (or interface) treatment is tackled at the numerical integration and discretisation step.
Many unfitted \ac{fe} schemes that rely on a standard \ac{fe} space on the background mesh have been proposed;~see, e.g.~the \ac{xfem}~\cite{belytschko_arbitrary_2001}, the cutFEM method~\cite{burman_cutfem_2015}, the aggregated \ac{fem} ~\cite{badia.verdugo.ea:2018:aggregated}, the finite cell method~\cite{Schillinger2015} and \ac{dg} methods with element aggregation~\cite{johansson2013high}. We also make note of the reference \cite{beirao-da-veiga.canuto.ea:2021:equilibrium}, which uses a \ac{vem} to model a rigid leaflet submerged in a fluid and fixed to a rotational spring at one end. The thin leaflet `cuts' through an isotropic background mesh, thus requiring the model to be applied on cut meshes.
Unfitted formulations can produce arbitrarily ill-conditioned linear systems~\cite{de-prenter.erhoosel.ea:2017:condition}. The intersection of a background element with the physical domain can be arbitrarily small and with an unbounded aspect ratio. It is known as the \emph{small cut element problem}. This problem is also present on unfitted interfaces with a high contrast of physical properties~\cite{Neiva2021}. Few unfitted formulations are fully robust and optimal regardless of cut location or material contrast. The ill-conditioning issue was addressed in~\cite{burman2010ghost} via the so-called ghost penalty stabilisation. Instead of adding stabilisation terms, the small cut element problem can be fixed by element aggregation (or agglomeration). This approach has been proposed in~\cite{johansson2013high} for \ac{dg} methods. While aggregation is natural in \ac{dg} methods (these schemes can readily be used on polytopal meshes), its extension to conforming spaces is more involved. The design of well-posed $\mathcal{C}^{0}$ Lagrangian finite elements on agglomerated meshes has been proposed in \cite{badia.verdugo.ea:2018:aggregated}. The aggregated \ac{fem} constructs a discrete extension operator from well-posed to ill-posed degrees of freedom that preserves continuity. All these formulations enjoy good numerical properties, such as stability, condition number bounds, optimal convergence and continuity with respect to data.
The \ac{fe} discretisation of linear second-order elliptic operators (e.g. the Laplacian) in weak form produces linear systems such that the $\ell^2$-condition number (on shape regular, quasi-uniform meshes) scales as the inverse square of the mesh size \cite{Ern2006Jan}. Likewise, the condition number on regular triangular meshes of interior penalty Galerkin and local discontinuous Galerkin methods scale as the inverse square of the mesh size, whereas the condition number of non-symmetric DG methods can potentially scale sub-optimally \cite{castillo:2002:performance}. We refer to \cite{Cockburn2013} for the condition number analysis of \ac{hdg} methods on regular simplicial quasi-uniform meshes. The authors in \cite{Mascotto2018} investigate experimentally the ill-conditioning of the \ac{vem} for high-order bases on distorted meshes. In this work, we analyse the properties of the linear systems that arise from \ac{hho} formulations. We prove estimates for the condition number arising from such systems. Under general assumptions on the stabilisation term (allowing for standard choices of HHO stabilisation) and when $L^2$-orthonormal bases are chosen for face polynomial spaces, we show that the estimates remain robust with respect to small element faces and track the dependence in the estimates of the polynomial degree of the unknowns.
The linear systems in HHO methods are obtained after the static condensation of the element unknowns.
This process allows the global system to depend only on the face unknowns \cite[Appendix B.3.2]{di-pietro.droniou:2020:hybrid}. In Section \ref{sec:eig.estimates} we state some estimates on the spectrum and conditioning of this condensed operator. We find that the condition number of the statically condensed system scales at worst like $\hmin^{-2}$ (where $\hmin$ denotes the minimum element diameter in the mesh) and that this bound is not affected by small faces. We also prove that if each face is attached (or close) to at least one element of diameter comparable to a characteristic mesh size $h_{\rm{max}}$, then the condition number scales as $\hmin^{-1}h_{\rm{max}}^{-1}$.
This sharper result is of practical interest when using cut meshes, since it is common to find small cells on the boundary in contact with larger ones.
To the best of our knowledge, no condition number estimates on general meshes exist for \ac{hho} or \ac{vem}. We note that, given the links between HHO and other polytopal methods (see e.g.~\cite[Sec.~5.5.5]{di-pietro.droniou:2020:hybrid} for the relationship between \ac{hho} and non-conforming \ac{vem}, or \cite{Cockburn.Di-Pietro.ea:16} for the link \ac{hho}--\ac{hdg}), our results could easily be extended to such methods; more generally, the analysis of condition number we carry out here uses a rather general approach and would certainly extend to even more polytopal methods.
Next, we apply the \ac{hho} method on \emph{cut} meshes obtained by the intersection of cells in a background (usually Cartesian) mesh and the physical domain (represented as the interior of an oriented boundary representation, e.g., a surface mesh). The intersection is cell-wise and represents a sound alternative to unstructured mesh generation \cite{2110.01378}. The analysis tells us the potential conditioning issues of \ac{hho} schemes on such meshes, for which arbitrarily small elements (and faces) appear scattered among large elements \cite{burman.cicuttin.ea:2021:unfitted}. Based on the analysis, we know that we must aggregate highly distorted small cut elements (e.g., due to sliver cuts) to interior elements. Since arbitrary small faces do not affect condition number bounds, there is no need for face aggregation or stabilisation. This way, we end up with an \ac{hho} method on aggregated cut meshes that leads to well-posed linear systems and optimal condition numbers.
Hybrid methods on cut meshes have some benefits compared to more standard unfitted \acp{fe}. First, we can enforce Dirichlet boundary conditions strongly; there are degrees of freedom located on boundaries faces. In unfitted standard \acp{fe}, degrees of freedom are defined in the background mesh. Dirichlet boundary conditions and trace continuity on interfaces are weakly enforced (using, e.g., Nitsche's method \cite{hansbo2002unfitted}). Second, the method does not involve the tuning of additional stabilisation parameters, which can have an impact on results \cite{badia2021linking}. Third, the extension to high order is straightforward. It is more complicated in face-based ghost penalty (it involves penalty terms on jumps of high-order derivatives) \cite{burman2010ghost} or aggregated \acp{fe} (extension operators for high order can amplify rounding errors) \cite{badia.verdugo.ea:2018:aggregated}.
The remainder of this paper is organised as follows: In Section \ref{sec:model.and.results} we introduce the HHO method and state our key findings. In Section \ref{sec:proofs} we prove the results and discuss viable stabilisation options, in Section \ref{sec:unfitted} we include a brief discussion of HHO on cut meshes, and in Section \ref{sec:numerical} we conduct a thorough numerical study of the condition number on various meshes.
\section{Presentation of the HHO method and main result}\label{sec:model.and.results}
\subsection{Model problem}
We take a polytopal domain \(\Omega\subset \bbR^d\), \(d\ge 2\) and a source term \(f\in \LP{2}(\Omega)\), and consider the Dirichlet problem: find \(u\) such that
\[
\begin{aligned}
-\Delta u ={}& f \quad\text{in}\quad\Omega,\\
u ={}& 0 \quad\text{on}\quad\partial\Omega.
\end{aligned}
\]
The variational problem reads: find \(u\in \HONE_0( \Omega) \) such that
\begin{equation}\label{eq:weak.form}
\rma(u, v) = \mathcal{L}(v), \qquad\forall v\in \HONE_0(\Omega),
\end{equation}
where \(\rma(u, v) \vcentcolon= \brac[\Omega]{\nabla u, \nabla v}\) and \(\mathcal{L}(v) \vcentcolon= \brac[\Omega]{f, v}\). Here and in the following, \(\brac[X]{\cdot, \cdot}\) is the \(\LP{2}\)-inner product of scalar- or vector-valued functions on a set \(X\) for its natural measure.
\subsection{HHO scheme}
Let \(\mathcal{H}\subset(0, \infty)\) be a countable set of mesh sizes with a unique cluster point at \(0\). For each \(h\in\mathcal{H}\), we partition the domain \(\Omega\) into a mesh \(\Mh=(\Th, \Fh)\), for which a detailed definition can be found in \cite[Definition 1.4]{di-pietro.droniou:2020:hybrid}. The set of mesh elements \(\Th\) is a disjoint set of polytopes such that \(\ol{\Omega}=\bigcup_{T\in\Th}\ol{T}\). The set \(\Fh\) is a collection of mesh faces forming a partition of the mesh skeleton, i.e. \(\bigcup_{T\in\Th}{\bdry \T}=\bigcup_{F\in\Fh}\ol{F}\). The boundary faces \(F\subset\partial \Omega\) are gathered in the set \(\Fh^{\rmb}\), and the remaining (interior) faces in \(\Fh^{\rmi}\vcentcolon=\Fh\setminus\Fh^{\rmb}\). The parameter \(h\) is given by \(h\vcentcolon=\max_{T\in\Th}h_\T\) where, for \(X=T\in\Th\) or \(X=F\in\Fh\), \(h_X\) denotes the diameter of \(X\). We also collect the faces attached to an element \(T\in\Th\) in the set \(\Fh[T]:=\{F\in\Fh:F\subset \partial T\}\). The (constant) unit normal to \(F\in\Fh[T]\) pointing outside \(T\) is denoted by \(\nor_{\T\F}\), and \(\nor_{\bdryT}:{\bdry \T}\to\bbR^d\) is the piecewise constant outer unit normal defined by \((\nor_{\bdryT})|_F=\nor_{\T\F}\) for all \(F\in\Fh[T]\). Throughout this work we make the following assumption on the meshes, which allows for meshes with arbitrarily large numbers of faces in each element, or faces of arbitrarily small diameter compared to their elements' diameters.
\begin{assumption}[Regular mesh sequence]\label{assum:star.shaped}
There exists a constant \(\varrho>0\) such that, for each \(h\in\mathcal{H}\), each \(T\in\Th\) is connected by star-shaped sets with parameter \(\varrho\), as defined in \cite[Definition 1.41]{di-pietro.droniou:2020:hybrid}.
\end{assumption}
From here on, we write \(f\lesssim g\) to mean \(f \le Cg\), where \(C\) is a constant depending only on \(\Omega\), \(d\) and \(\varrho\), but independent of the considered face/element, of the degrees of the considered polynomial spaces, and of the quantities \(f,g\). We also write \(f\approx g\) if \(f\lesssim g\) and \(g\lesssim f\). When necessary, we make additional dependencies of the constant \(C\) explicit.
\subsubsection{Local construction}
Let \(X=T\in\Th\) or \(X=F\in\Fh\) be a face or an element in a mesh \(\Mh\), and let \(\POLY{\ell}(X)\) be the set of \(d_X\)-variate polynomials of degree \(\le \ell\) on \(X\), where \(d_X\) is the dimension of \(X\).
The space of piecewise discontinuous polynomial functions on an element boundary is given by
\begin{equation}\label{eq:bdry.space.def}
\POLY{\ell}(\Fh[T]) \vcentcolon= \{v\in L^1({\bdry \T}):v|_F\in\POLY{\ell}(F)\quad\forall F\in\Fh[T]\}.
\end{equation}
The \(L^2\)-orthogonal projector \(\piX{0, \ell}:L^1(X) \to \POLY{\ell}(X)\) is defined such that, for all \(v\in L^1(X)\), \(\piX{0,\ell}v\) is the unique polynomial satisfying
\begin{equation}\label{eq:L2proj.def}
\brac[X]{v - \piX{0, \ell}v, w} = 0 \qquad \forall w \in \POLY{\ell}(X).
\end{equation}
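In practice, the projection \eqref{eq:L2proj.def} is computed by solving a small mass-matrix system. Below is a minimal one-dimensional sketch in Python (monomial basis on an interval, Gauss quadrature; the function name and setup are ours and purely illustrative, not taken from any HHO library):

```python
import numpy as np

def l2_project(f, a, b, ell, nquad=20):
    """L2-orthogonal projection of f onto polynomials of degree <= ell on [a, b].

    Solves M c = r with M_ij = (x^i, x^j)_[a,b] and r_i = (f, x^i)_[a,b],
    which is exactly the defining orthogonality (v - pi v, w) = 0.
    """
    x, w = np.polynomial.legendre.leggauss(nquad)
    x = 0.5 * (b - a) * x + 0.5 * (a + b)       # map nodes to [a, b]
    w = 0.5 * (b - a) * w
    V = np.vander(x, ell + 1, increasing=True)  # columns: 1, x, ..., x^ell
    M = V.T @ (w[:, None] * V)                  # mass matrix
    r = V.T @ (w * f(x))                        # right-hand side
    return np.linalg.solve(M, r)                # monomial coefficients

# The projection reproduces polynomials of degree <= ell ...
c = l2_project(lambda x: 1 + 2*x + 3*x**2, 0.0, 1.0, ell=2)
print(np.round(c, 10))  # -> [1. 2. 3.]
# ... and otherwise returns the best L2 approximation (the mean value for ell=0).
c0 = l2_project(np.exp, 0.0, 1.0, ell=0)
print(np.isclose(c0[0], np.e - 1))  # mean of exp on [0,1] is e - 1
```

The same normal-equation structure underlies the projection on faces and elements in any dimension; only the quadrature and basis change.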
Fix two natural numbers $k,l\in\bbN$ with $l \ge k-1$. For each element $T\in\Th$, the local space of unknowns is defined as
\[
\UT{k,l}\vcentcolon= \POLY{l}(T)\times\POLY{k}(\Fh[T]).
\]
The interpolator $\IT{k,l}:\HS{1}(T)\to\UT{k,l}$ is defined for all $v\in\HS{1}(T)$ as
\[
\IT{k,l} v = (\piTzr{l}v, \piFTzr{k}v)
\]
where \(\piFTzr{k}\) is the projector onto the space \(\POLY{k}(\Fh[T])\) satisfying \(\piFTzr{k}v|_F = \piFzr{k}v\) for all \(F\in\Fh[T]\) and $v\in L^1(\partial T)$. We endow the space \(\UT{k,l}\) with the discrete energy-like seminorm \(\norm[1, T]{{\cdot}}\) defined for all \(\ul{v}_T = (v_T, v_{\FT})\in\UT{k,l}\) via
\begin{equation}\label{eq:discrete.norm.def}
\norm[1, T]{\ul{v}_T}^2 \vcentcolon= \norm[T]{\nabla v_T}^2 + h_\T^{-1}\norm[{\bdry \T}]{v_{\FT} - v_T}^2.
\end{equation}
On each element we locally reconstruct a potential from the space of unknowns via the operator $\pT{k+1}:\UT{k,l}\to\POLY{k+1}(T)$ defined to satisfy, for all $\ul{v}_T\in\UT{k,l}$ and \(w\in\POLY{k+1}(T)\),
\begin{equation}\label{eq:pT.def}
\brac[{T}]{\nabla\pT{k+1}\ul{v}_T, \nabla w} = -\brac[{T}]{v_T, \Delta w} + \brac[{\bdry \T}]{v_{\FT},\nabla w\cdot\nor_{\bdryT}},
\end{equation}
\begin{equation}\label{eq:pT.closure}
\brac[{T}]{v_T-\pT{k+1}\ul{v}_T, 1} = 0.
\end{equation}
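To make \eqref{eq:pT.def}--\eqref{eq:pT.closure} concrete, here is a hedged one-dimensional toy example of our own (on $T=(0,h)$ with $l=0$ and reconstruction in $\POLY{2}$, monomial basis): the face unknowns reduce to the two endpoint values, and the reconstruction recovers any quadratic from its mean value and traces, illustrating the polynomial exactness underlying \eqref{eq:polynomial.consistency}.

```python
import numpy as np

def reconstruct_1d(vT, vL, vR, h):
    """HHO-style potential reconstruction on T = (0, h): given the cell mean vT
    and the two face (endpoint) values vL, vR, return the coefficients
    (c0, c1, c2) of p(x) = c0 + c1 x + c2 x^2 solving the 1D analogue of
    (p', w')_T = -(vT, w'')_T + (vF, w' n)_dT for all w in P^2,
    together with the mean-value closure (p - vT, 1)_T = 0.
    """
    A = np.array([
        [0.0, h,      h**2    ],  # test function w = x
        [0.0, h**2,   4*h**3/3],  # test function w = x^2
        [h,   h**2/2, h**3/3  ],  # closure: mean of p equals vT
    ])
    b = np.array([vR - vL, 2*h*(vR - vT), vT*h])
    return np.linalg.solve(A, b)

# Exactness: interpolating q(x) = 1 + 2x + 3x^2 on (0, 1)
# (mean = 3, q(0) = 1, q(1) = 6) returns q itself.
print(np.round(reconstruct_1d(vT=3.0, vL=1.0, vR=6.0, h=1.0), 10))  # [1. 2. 3.]
```

The key point, visible in the second equation row, is that only $\piTzr{0}q$ enters the volumetric term, yet the reconstruction is still exact on $\POLY{2}$ because $w''$ is constant there.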
This potential reconstruction allows us to approximate $\rma(u,v)$ on each element by the bilinear form $\rma_T:\UT{k,l}\times\UT{k,l}\to\bbR$ defined as
\begin{equation*}
\rma_T(\ul{u}_T, \ul{v}_T) \vcentcolon= \brac[{T}]{\nabla\pT{k+1}\ul{u}_T, \nabla\pT{k+1}\ul{v}_T} + \rms_T(\ul{u}_T,\ul{v}_T),
\end{equation*}
where $\rms_T:\UT{k,l}\times\UT{k,l}\to\bbR$ is a symmetric, positive semi-definite stabilisation such that
\begin{equation}\label{eq:norm.equivalence}
C_{{\rm{a}}}^{-1}\norm[1, T]{\ul{v}_T}^2 \le \rma_T(\ul{v}_T,\ul{v}_T) \le C_{{\rm{a}}}\norm[1, T]{\ul{v}_T}^2
\end{equation}
and for all $\ul{v}_T\in\UT{k,l}$, $w\in\POLY{k+1}(T)$,
\begin{equation}\label{eq:polynomial.consistency}
\rms_T(\ul{v}_T,\IT{k,l} w) = 0,
\end{equation}
where $C_{{\rm{a}}}$ is a positive constant that possibly depends on the polynomial degrees $l$, $k$, the mesh regularity $\varrho$, and the dimension $d$, but is independent of the element diameter $h_\T$. Equation \eqref{eq:norm.equivalence} is required to ensure that the global bilinear form defines a norm on the discrete space, and that optimal approximation rates with respect to $h$ are achieved \cite[Lemma 2.18]{di-pietro.droniou:2020:hybrid}. However, tracking the dependency of $C_{{\rm{a}}}$ on $l$ and $k$, and obtaining condition number estimates via equation \eqref{eq:norm.equivalence}, leads to sub-optimal results. As such, we assume the following extra, more precise, conditions on the bilinear form $\rms_T$, in which the difference operators \(\deltaT{l}:\UT{k,l}\to\POLY{l}(T)\) and \(\deltaFT{k}:\UT{k,l}\to\POLY{k}(\Fh[T])\) are defined as: for all \(\ul{v}_T\in\UT{k,l}\),
\[
\deltaT{l} \ul{v}_T \vcentcolon= \piTzr{l}(\pT{k+1}\ul{v}_T - v_T) \quad\textrm{and}\quad \deltaFT{k} \ul{v}_T \vcentcolon= \piFTzr{k}(\pT{k+1}\ul{v}_T - v_{\FT}).
\]
\begin{assumption}\label{assum:aT}
For all $\ul{v}_T\in\UT{k,l}$ it holds that
\begin{equation}\label{eq:aT.lower.bound}
\norm[T]{\nabla\pT{k+1}\ul{v}_T}^2 + h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_T}^2 \lesssim \rma_T(\ul{v}_T,\ul{v}_T),
\end{equation}
and for all $\ul{v}_{T,\partial}=(0,v_{\FT})\in\UT{k,l}$ it holds that
\begin{equation}\label{eq:aT.faces.upper.bound}
\rma_T(\ul{v}_{T,\partial},\ul{v}_{T,\partial}) \lesssim \norm[T]{\nabla\pT{k+1}\ul{v}_{T,\partial}}^2 + h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_{T,\partial}}^2,
\end{equation}
where the hidden constants in \eqref{eq:aT.lower.bound} and \eqref{eq:aT.faces.upper.bound} depend on $\varrho$ and $d$ but are independent of $l$, $k$ and $h$.
\end{assumption}
We consider throughout this work the stabilisation form defined in \cite[Example 2.8]{di-pietro.droniou:2020:hybrid} (with the scaling change $\hF^{-1}\to h_\T^{-1}$):
\begin{equation}\label{eq:stab.def}
\rms_T(\ul{u}_T, \ul{v}_T) \vcentcolon= h_\T^{-2}\brac[{T}]{\deltaT{l}\ul{u}_T, \deltaT{l}\ul{v}_T} + h_\T^{-1}\brac[{\bdry \T}]{\deltaFT{k}\ul{u}_T, \deltaFT{k}\ul{v}_T}.
\end{equation}
We show in Section \ref{sec:analysis.stab} that the stabilisation \eqref{eq:stab.def} satisfies Assumption \ref{assum:aT}.
\subsubsection{Global formulation}
The global space of unknowns is defined as
\[
\Uh{k,l}\vcentcolon= \Big\{\ul{v}_h=((v_T)_{T\in\Th},(v_F)_{F\in\Fh})\,:\,v_T\in\POLY{l}(T)\quad\forall T\in\Th\,,
v_F\in\POLY{k}(F)\quad\forall F\in\Fh \Big\}.
\]
To account for the homogeneous boundary conditions, the following subspace is also introduced:
\[
\U{h, 0}{k,l}\vcentcolon=\{\ul{v}_h \in\Uh{k,l}:v_{F}=0\quad\forall F\in\Fh^{\rmb}\}.
\]
For any \(\ul{v}_h\in\Uh{k,l}\) we denote its restriction to an element \(T\) by \(\ul{v}_T=(v_T,v_{\FT})\in\UT{k,l}\) (where, naturally, \(v_{\FT}\) is defined from \((v_F)_{F\in\Fh[T]}\)). We also denote by \(v_h\) the piecewise polynomial function satisfying \(v_h|_T=v_T\) for all \(T\in\Th\).
The global bilinear forms \(\rma_h:\Uh{k,l}\times\Uh{k,l}\to\mathbb{R}\) and \(\rms_h:\Uh{k,l}\times\Uh{k,l}\to\mathbb{R}\) are defined as
\[
\rma_h(\ul{u}_h, \ul{v}_h) \vcentcolon= \sum_{T\in\Th} \rma_T(\ul{u}_T,\ul{v}_T)
\quad\textrm{and}\quad
\rms_h(\ul{u}_h, \ul{v}_h) \vcentcolon= \sum_{T\in\Th} \rms_T(\ul{u}_T,\ul{v}_T).
\]
We also define the discrete energy norm \(\energynorm{{\cdot}}\) on \(\U{h, 0}{k,l}\) as
\begin{equation}\label{eq:energy.norm.def}
\energynorm{\ul{v}_h}\vcentcolon= \rma_h(\ul{v}_h,\ul{v}_h)^\frac{1}{2} \qquad \forall \ul{v}_h\in\U{h, 0}{k,l}.
\end{equation}
The HHO scheme reads: find \(\ul{u}_h\in\U{h, 0}{k,l}\) such that
\begin{equation}\label{eq:discrete.problem}
\rma_h(\ul{u}_h, \ul{v}_h) = \mathcal L_h(\ul{v}_h) \qquad\forall \ul{v}_h\in\U{h, 0}{k,l},
\end{equation}
where \(\mathcal L_h:\U{h, 0}{k,l}\to\bbR\) is a linear form defined as
\begin{equation*}
\mathcal L_h(\ul{v}_h) \vcentcolon= \sum_{T\in\Th}\brac[T]{f,v_T}.
\end{equation*}
Under assumptions \eqref{eq:norm.equivalence} and \eqref{eq:polynomial.consistency} on the bilinear form $\rma_T$, the scheme \eqref{eq:discrete.problem} satisfies the energy error estimate
\[
\energynorm{\ul{u}_h - \Ih{k,l} u} \le Ch^{k+1}\seminorm[\HS{k+2}(\Th)]{u},
\]
where $\Ih{k,l}|_T=\IT{k,l}$ for all $T\in\Th$, and $C$ is a positive constant that depends on $l$, $k$, $\varrho$, and $d$, but is independent of $h$ \cite[Theorem 2.27]{di-pietro.droniou:2020:hybrid}. An estimate of the dependency with respect to $l$ and $k$ for a diffusion scheme with a boundary based stabilisation is provided in \cite{aghili.di-pietro.ea:2017:hp-HHO}.
\subsubsection{Statically condensed system and eigenvalue estimates}\label{sec:eig.estimates}
The static condensation procedure, as outlined in \cite[Appendix B.3]{di-pietro.droniou:2020:hybrid}, allows for the elimination of the element unknowns. Selecting $\ul{v}_h$ with one free element component $v_T$, and all other element and face components vanishing, we see that the solution \(\ul{u}_h\) to problem \eqref{eq:discrete.problem} satisfies for all \(T\in\Th\) and \(v_T\in\POLY{l}(T)\)
\[
\rma_T((u_T, u_{\FT}), (v_T, 0)) = \brac[{T}]{f, v_T}.
\]
This can be alternatively written as
\[
\rma_T((u_T, 0), (v_T, 0)) = \brac[{T}]{f, v_T} - \rma_T((0, u_{\FT}), (v_T, 0)).
\]
Noting that the bilinear form $(u_T,v_T)\in\POLY{l}(T)\times\POLY{l}(T)\mapsto \rma_T((u_T,0),(v_T,0))$ is coercive (due to \eqref{eq:norm.equivalence}),
we can define the polynomial \(g_\T\in\POLY{l}(T)\) and the linear operator \(\calS_\T:\POLY{k}(\Fh[T])\to\POLY{l}(T)\) via
\begin{alignat}{2}
\rma_T((g_\T, 0) , (v_T, 0)) ={}& \brac[{T}]{f, v_T} \quad&\forall v_T\in\POLY{l}(T), \label{eq:def:gT} \\
\rma_T((\calST u_{\FT}, 0) , (v_T, 0)) ={}& -\rma_T((0, u_{\FT}) , (v_T, 0)) \quad&\forall v_T\in\POLY{l}(T). \label{eq:ST.def}
\end{alignat}
Therefore, \(u_T\) is calculated from \(u_{\FT}\) via the affine transformation
\begin{equation}\label{eq:uT}
u_T = \calST u_{\FT} + g_\T.
\end{equation}
Substituting \eqref{eq:uT} into \eqref{eq:discrete.problem} and testing against \(\ul{v}_h = (0, v_{\Fh}) = ((0)_{T\in\Th}, (v_F)_{F\in\Fh}) \in \U{h, 0}{k,l}\) yields
\begin{equation*}
\sum_{T\in\Th}\rma_T((\calST u_{\FT}, u_{\FT}) , (0, v_{\FT})) + \sum_{T\in\Th} \rma_T((g_\T, 0) , (0, v_{\FT})) = 0.
\end{equation*}
Setting
\[
\POLY{k}_0(\Fh):=\{u_{\Fh}=(u_F)_{F\in\Fh}\,:\,u_F\in\POLY{k}(F)\quad\forall F\in\Fh\,,\quad u_F=0\mbox{ if $F\subset\partial\Omega$}\},
\]
the statically condensed problem then reads: find \(u_{\Fh}\in\POLY{k}_0(\Fh)\) such that
\begin{equation}\label{hho:statically.condensed}
\rmA_h(u_{\Fh}, v_{\Fh}) = \rmL_h(v_{\Fh})\quad\forall v_{\Fh}\in\POLY{k}_0(\Fh),
\end{equation}
where
\begin{align}
\rmA_h(u_{\Fh}, v_{\Fh}) \vcentcolon={}& \sum_{T\in\Th}\rma_T((\calST u_{\FT}, u_{\FT}) , (0, v_{\FT})),
\label{def:Ah.sc}\\
\rmL_h(v_{\Fh}) \vcentcolon={}& \sum_{T\in\Th} -\rma_T((g_\T, 0) , (0, v_{\FT})).\nonumber
\end{align}
Upon choosing bases of the spaces $\POLY{k}(F)$ for $F\in\Fh^{\rmi}$, \eqref{hho:statically.condensed} takes the equivalent algebraic form
\[
\mat{A}_h\bm{U} = \bm{F}
\]
where $\mat{A}_h$ is the matrix of the bilinear form $\rmA_h$, $\bm{U}$ the vector of unknowns and $\bm{F}$ the source term corresponding to $\rmL_h$.
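In matrix terms, the elimination \eqref{eq:uT} is a Schur complement with respect to the (block-diagonal, hence cheaply invertible) element-element block. The following is a hedged sketch of this linear-algebra step only, on a generic symmetric positive definite block system (all names are ours and not tied to any particular HHO code):

```python
import numpy as np

rng = np.random.default_rng(0)

# A model hybrid system: 'c' = cell unknowns (eliminated element-wise),
# 'f' = face unknowns (kept). Build a random SPD matrix and split it in blocks.
n_c, n_f = 6, 4
M = rng.standard_normal((n_c + n_f, n_c + n_f))
K = M @ M.T + (n_c + n_f) * np.eye(n_c + n_f)   # SPD by construction
Kcc, Kcf = K[:n_c, :n_c], K[:n_c, n_c:]
Kfc, Kff = K[n_c:, :n_c], K[n_c:, n_c:]
b = rng.standard_normal(n_c + n_f)
bc, bf = b[:n_c], b[n_c:]

# Static condensation: u_c = Kcc^{-1}(bc - Kcf u_f)  (cf. u_T = S_T u_FT + g_T),
# leaving the condensed system (Kff - Kfc Kcc^{-1} Kcf) u_f = bf - Kfc Kcc^{-1} bc.
S = Kff - Kfc @ np.linalg.solve(Kcc, Kcf)       # Schur complement, like A_h
g = bf - Kfc @ np.linalg.solve(Kcc, bc)
u_f = np.linalg.solve(S, g)
u_c = np.linalg.solve(Kcc, bc - Kcf @ u_f)      # local recovery of cell unknowns

# The condensed solve agrees with the monolithic one.
u_full = np.linalg.solve(K, b)
print(np.allclose(np.concatenate([u_c, u_f]), u_full))  # True
```

In the HHO setting the cell block is block-diagonal over elements, so both the condensation and the recovery step are purely local operations.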
Our main result is the following; its proof is given in Section \ref{sec:proof.eigs}.
\begin{theorem}[Eigenvalue and condition number estimates]\label{th:estimates}
For each $F\in\Fh^{\rmi}$, denote by $T_F^+,T_F^-$ the two elements on each side of $F$, and define the characteristic lengths $\Hmin{\Fh}$ and $\Hmax{\Fh}$ by
\[
\Hmin{\Fh}=\min_{F\in\Fh^{\rmi}}\left(h_{T_F^+}+h_{T_F^-}\right)\,,\quad\Hmax{\Fh}^{-1}=\max_{F\in\Fh^{\rmi}}\left(h_{T_F^+}^{-1}+h_{T_F^-}^{-1}\right).
\]
If, for each $F\in\Fh^{\rmi}$, the basis on $\POLY{k}(F)$ is orthonormal for the $L^2(F)$-inner product, then
the minimal eigenvalue, maximal eigenvalue and condition number of $\mat{A}_h$ satisfy
\begin{subequations}\label{est:lambda.kappa}
\begin{align}
\label{est:lambda.min}
\lambda_{\rm min}(\mat{A}_h)\gtrsim{}& \Hmin{\Fh}\,,\\
\label{est:lambda.max}
\lambda_{\rm max}(\mat{A}_h)\lesssim{}& (k+1)^2 \Hmax{\Fh}^{-1}\,,\\
\label{est:kappa}
\kappa(\mat{A}_h)\lesssim{}& (k+1)^2 \Hmax{\Fh}^{-1}\Hmin{\Fh}^{-1}.
\end{align}
\end{subequations}
\end{theorem}
\begin{remark}[Characteristic lengths]\label{rem:HminHmax}
Setting $h_{\rm min}=\min_{T\in\Th}h_\T$, we have $\Hmin{\Fh}\gtrsim h_{\rm min}$ and $\Hmax{\Fh}^{-1}\lesssim h_{\rm min}^{-1}$.
Hence, \eqref{est:lambda.kappa} leads to the bounds
\[
\lambda_{\rm min}(\mat{A}_h)\gtrsim h_{\rm min}\,,\quad\lambda_{\rm max}(\mat{A}_h)\lesssim (k+1)^2h_{\rm min}^{-1}\,,\quad\kappa(\mat{A}_h)\lesssim (k+1)^2 h_{\rm min}^{-2}.
\]
For quasi-uniform meshes, $h_{\rm min}$ can be replaced by $h$ in these estimates.
However, on specific meshes (especially cut meshes with small cut elements), \eqref{est:lambda.kappa} can lead to much better estimates than those purely based on $h_{\rm min}$; see Section \ref{sec:numerical}.
Note that the factor $(k+1)^2$ appearing in \eqref{est:lambda.max} and \eqref{est:kappa} is due to the dependency on the polynomial degree of
the generic discrete trace inequality \eqref{eq:discrete.trace}.
\end{remark}
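The gain described in Remark \ref{rem:HminHmax} can be quantified on toy data: a single tiny element of diameter $\epsilon$ surrounded by unit-size elements barely affects $\Hmin{\Fh}$, whereas it drives $h_{\rm min}$ to $\epsilon$. In the short sketch below (made-up numbers, our notation, not output of an actual mesh), each interior face is encoded by the pair of diameters of its two neighbouring elements.

```python
# Each interior face F is represented by the pair (h_{T_F^+}, h_{T_F^-}) of
# diameters of its two neighbouring elements. One tiny element of size eps
# sits among unit-size elements.
eps = 1e-6
faces = [(1.0, 1.0), (1.0, 1.0), (eps, 1.0), (eps, 1.0)]

h_min = min(min(pair) for pair in faces)           # classical h_min = eps
H_min = min(hp + hm for hp, hm in faces)           # stays close to 1 here
H_max_inv = max(1/hp + 1/hm for hp, hm in faces)   # of order 1/eps

# kappa(A_h) <~ (k+1)^2 / (H_max * H_min): for k = 1,
k = 1
bound_thm = (k + 1)**2 * H_max_inv / H_min         # of order 1/eps
bound_hmin = (k + 1)**2 / h_min**2                 # of order 1/eps^2
print(bound_thm < 1e-5 * bound_hmin)  # True: the face-wise bound is far sharper
```

Here the face-wise bound improves on the $h_{\rm min}$-based one by a factor of order $1/\epsilon$, which is the situation encountered with small cut elements.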
\begin{remark}[Small faces]
The estimates \eqref{est:lambda.kappa} are fully independent of the maximum number of faces in each element and of their diameters, and are therefore fully robust with respect to small faces.
\end{remark}
\section{Proofs}\label{sec:proofs}
\subsection{Estimate on the eigenvalues}\label{sec:proof.eigs}
Let us start with two preliminary estimates. The proof of the following trace inequality, under Assumption \ref{assum:star.shaped} (and therefore with hidden constants in $\lesssim$ that are not impacted by the presence of small faces in $T$), can be found in \cite[Section 3]{cangiani.dong.ea:2017:discontinuous}.
\begin{lemma}[Trace Inequality]
For all \(v \in \HS{1}(T)\),
\begin{equation}\label{eq:continuous.trace}
\norm[{\bdry \T}]{v}^2 \lesssim h_\T^{-1} \Big(\norm[{T}]{v}^2 + h_\T^2\norm[{T}]{\nabla v}^2\Big).
\end{equation}
For $v\in \POLY{\ell}(T)$, the following discrete trace inequality also holds:
\begin{equation}\label{eq:discrete.trace}
\norm[{\bdry \T}]{v}^2 \lesssim h_\T^{-1}(\ell+1)(\ell+d)\norm[{T}]{v}^2.
\end{equation}
\end{lemma}
\begin{lemma}[Poincar\'{e}--Wirtinger]
For all \(v\in H^1(T)\) the following Poincar\'{e}--Wirtinger inequality holds:
\begin{equation}\label{eq:poincare}
\norm[T]{v - \piTzr{0}v} \lesssim h_\T \seminorm[\HS{1}(T)]{v}.
\end{equation}
\end{lemma}
\begin{proof}
See \cite[Remark 1.46]{di-pietro.droniou:2020:hybrid}.
\end{proof}
\begin{lemma}[Discrete Poincar\'{e} inequality]
For all $\ul{v}_h\in\U{h, 0}{k,l}$ it holds that
\begin{equation}\label{eq:discrete.poincare.faces}
\sum_{T\in\Th}h_\T\norm[{\bdry \T}]{v_{\FT}}^2 \lesssim \sum_{T\in\Th}\Brac{\norm[T]{\nabla\pT{k+1}\ul{v}_T}^2 + h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_T}^2},
\end{equation}
where the hidden constant depends on $d$, $\varrho$ and $\Omega$ but is independent of $l$, $k$ and $h$.
\end{lemma}
\begin{proof}
By a triangle inequality it holds that
\[
\sum_{T\in\Th}h_\T\norm[{\bdry \T}]{v_{\FT}}^2 \lesssim \sum_{T\in\Th}h_\T\norm[{\bdry \T}]{\piFTzr{k}\pT{k+1}\ul{v}_T}^2 + \sum_{T\in\Th}h_\T\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_T}^2.
\]
The second term clearly satisfies the desired bound since $h_\T \le {\rm diam}(\Omega)^2 h_\T^{-1}$. By the $\LP{2}$-boundedness of $\piFTzr{k}$ and the continuous trace inequality \eqref{eq:continuous.trace}, it holds that
\[
\sum_{T\in\Th}h_\T\norm[{\bdry \T}]{\piFTzr{k}\pT{k+1}\ul{v}_T}^2 \lesssim \sum_{T\in\Th}\Brac{\norm[T]{\pT{k+1}\ul{v}_T}^2 + h_\T^2\norm[T]{\nabla\pT{k+1}\ul{v}_T}^2}.
\]
Thus it remains to prove that
\[
\norm[\Omega]{\ph{k+1}\ul{v}_h}^2 \lesssim \sum_{T\in\Th}\Brac{\norm[T]{\nabla\pT{k+1}\ul{v}_T}^2 + h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_T}^2}.
\]
As the divergence operator \(\nabla\cdot:\HS{1}(\Omega)^d\to\LP{2}(\Omega)\) is onto, there exists \({\bm{\tau}}\in\HS{1}(\Omega)^d\) such that \(-\nabla\cdot{\bm{\tau}} = \ph{k+1}\ul{v}_h\) and \(\norm[\HS{1}(\Omega)^d]{{\bm{\tau}}} \lesssim \norm[\Omega]{\ph{k+1}\ul{v}_h}\) \cite[Lemma 8.3]{di-pietro.droniou:2020:hybrid}. Therefore
\begin{align*}
\norm[\Omega]{\ph{k+1}\ul{v}_h}^2 = -\brac[\Omega]{\ph{k+1}\ul{v}_h,\nabla \cdot {\bm{\tau}}} ={}& \sum_{T\in\Th}\Brac{\brac[T]{\nabla\pT{k+1}\ul{v}_T, {\bm{\tau}}} - \brac[{\bdry \T}]{\pT{k+1}\ul{v}_T, {\bm{\tau}}\cdot \nor_{\bdryT}}} \nn\\
={}& \sum_{T\in\Th}\Brac{\brac[T]{\nabla\pT{k+1}\ul{v}_T, {\bm{\tau}}} + \brac[{\bdry \T}]{v_{\FT} - \pT{k+1}\ul{v}_T, {\bm{\tau}}\cdot\nor_{\bdryT}}},
\end{align*}
where we have invoked the homogeneous conditions on the space $\U{h, 0}{k,l}$ and the fact that ${\bm{\tau}}\cdot{\bm{n}}_{TF}+{\bm{\tau}}\cdot{\bm{n}}_{T'F}=0$ whenever $T,T'$ are the two elements on each side of an internal face $F\in\Fh^{\rmi}$. Thus, by the Cauchy--Schwarz inequality and continuous trace inequalities it holds that
\begin{align*}
\norm[\Omega]{\ph{k+1}\ul{v}_h}^2 \lesssim{}& \sum_{T\in\Th}\norm[\HS{1}(T)]{{\bm{\tau}}}\Brac{\norm[T]{\nabla\pT{k+1}\ul{v}_T} + h_\T^{-\frac12}\norm[{\bdry \T}]{v_{\FT} - \pT{k+1}\ul{v}_T}} \nn\\
\lesssim{}& \sum_{T\in\Th}\norm[\HS{1}(T)]{{\bm{\tau}}}\Brac{\norm[T]{\nabla\pT{k+1}\ul{v}_T} + h_\T^{-\frac12}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_T}},
\end{align*}
where in the second line we have added and subtracted $\piFTzr{k}\pT{k+1}\ul{v}_T$ in the boundary term, invoked the $\LP{2}$-minimisation property of $\piFTzr{k}$ and applied a continuous trace inequality. The proof concludes with a discrete Cauchy--Schwarz inequality and the bound $\norm[\HS{1}(\Omega)^d]{{\bm{\tau}}} \lesssim \norm[\Omega]{\ph{k+1}\ul{v}_h}$.
\end{proof}
\begin{lemma}
For all $\ul{v}_{T,\partial}=(0,v_{\FT})\in\UT{k,l}$, it holds that
\begin{equation}\label{eq:discrete.upper.bound}
\norm[T]{\nabla\pT{k+1}\ul{v}_{T,\partial}}^2 + h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_{T,\partial}}^2 \lesssim (k+1)^2h_\T^{-1}\norm[{\bdry \T}]{v_{\FT}}^2,
\end{equation}
where the hidden constant depends on $d$ and $\varrho$ but is independent of $l$, $k$ and $h$.
\end{lemma}
\begin{proof}
By a triangle inequality, the boundedness of $\piFTzr{k}$, and the continuous trace inequality \eqref{eq:continuous.trace} it holds that
\[
h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_{T,\partial}}^2 \lesssim h_\T^{-1}\norm[{\bdry \T}]{v_{\FT}}^2 + h_\T^{-2}\norm[T]{\pT{k+1}\ul{v}_{T,\partial}}^2 + \norm[T]{\nabla\pT{k+1}\ul{v}_{T,\partial}}^2.
\]
As the element unknown is zero, it holds by \eqref{eq:pT.closure} that $\piTzr{0} \pT{k+1}\ul{v}_{T,\partial}=0$. Thus, we may apply the Poincar\'{e}--Wirtinger inequality \eqref{eq:poincare} to obtain $h_\T^{-2}\norm[T]{\pT{k+1}\ul{v}_{T,\partial}}^2\lesssim \norm[T]{\nabla\pT{k+1}\ul{v}_{T,\partial}}^2$. Hence, it remains to prove that
\begin{equation}\label{eq:pT.bdry}
\norm[T]{\nabla\pT{k+1}\ul{v}_{T,\partial}}^2 \lesssim (k+1)^2h_\T^{-1}\norm[{\bdry \T}]{v_{\FT}}^2.
\end{equation}
It follows from equation \eqref{eq:pT.def} with $w=\pT{k+1}\ul{v}_{T,\partial}$ that
\[
\norm[T]{\nabla\pT{k+1}\ul{v}_{T,\partial}}^2 = \brac[{\bdry \T}]{v_{\FT}, \nabla \pT{k+1}\ul{v}_{T,\partial}\cdot\nor_{\bdryT}}.
\]
Applying the discrete trace inequality \eqref{eq:discrete.trace} and $(k+1)(k+d)\lesssim (k+1)^2$ yields
\[
\norm[T]{\nabla\pT{k+1}\ul{v}_{T,\partial}}^2 \lesssim h_\T^{-\frac12}(k+1)\norm[{\bdry \T}]{v_{\FT}}\norm[T]{\nabla \pT{k+1}\ul{v}_{T,\partial}}.
\]
Simplifying by $\norm[T]{\nabla \pT{k+1}\ul{v}_{T,\partial}}$ and squaring yields the desired result \eqref{eq:pT.bdry}.
\end{proof}
We can now prove the estimates \eqref{est:lambda.kappa} on the eigenvalues and condition number of $\mat{A}_h$.
\begin{proof}[Proof of Theorem \ref{th:estimates}]
We note that
\begin{align*}
\rma_T((\mathcal{S}_Tu_{\FT}, u_{\FT}) , (0, u_{\FT})) ={}& \rma_T((\mathcal{S}_Tu_{\FT}, u_{\FT}) , (\mathcal{S}_Tu_{\FT}, u_{\FT})) - \cancel{\rma_T((\mathcal{S}_Tu_{\FT}, u_{\FT}) , (\mathcal{S}_Tu_{\FT}, 0))}
\end{align*}
where the cancellation follows from setting \(v_T = \mathcal{S}_Tu_{\FT}\) in \eqref{eq:ST.def}. By equations \eqref{eq:discrete.poincare.faces} and \eqref{eq:aT.lower.bound}, and recalling the definition \eqref{def:Ah.sc} of $\rmA_h$, it thus holds that
\begin{equation}\label{eq:Ah.lower.bound}
\sum_{T\in\Th}h_\T\norm[{\bdry \T}]{u_{\FT}}^2 \lesssim \rmA_h(u_{\Fh}, u_{\Fh}).
\end{equation}
Consider also
\begin{align*}
\rma_T((\mathcal{S}_Tu_{\FT}, u_{\FT}) , (0, u_{\FT})) ={}& \rma_T((0, u_{\FT}) , (0, u_{\FT})) + \rma_T((\mathcal{S}_Tu_{\FT}, 0) , (0, u_{\FT})) \nn\\
={}& \rma_T((0, u_{\FT}) , (0, u_{\FT})) - \rma_T((\mathcal{S}_Tu_{\FT}, 0) , (\mathcal{S}_Tu_{\FT}, 0)) \nn\\
\le{}& \rma_T((0, u_{\FT}) , (0, u_{\FT})),
\end{align*}
where the second line follows from equation \eqref{eq:ST.def} with $v_T=\mathcal{S}_Tu_{\FT}$ and the symmetry of \(\rma_T\), and the last inequality from the fact that $\rma_T$ is positive semi-definite. Therefore, by equations \eqref{eq:aT.faces.upper.bound} and \eqref{eq:discrete.upper.bound},
\begin{equation}\label{eq:Ah.upper.bound}
\rma_T((\mathcal{S}_Tu_{\FT}, u_{\FT}) , (0, u_{\FT})) \lesssim (k+1)^2h_\T^{-1}\norm[{\bdry \T}]{u_{\FT}}^2.
\end{equation}
Thus, combining \eqref{eq:Ah.lower.bound} and \eqref{eq:Ah.upper.bound} it holds that
\[
\sum_{T\in\Th}h_\T\norm[{\bdry \T}]{u_{\FT}}^2 \lesssim \rmA_h(u_{\Fh}, u_{\Fh}) \lesssim (k+1)^2\sum_{T\in\Th}h_\T^{-1}\norm[{\bdry \T}]{u_{\FT}}^2.
\]
Gathering by faces (and recalling that $u_{\Fh}$ vanishes on boundary faces), we obtain
\begin{equation}\label{final.before.orthonormal}
\sum_{F\in\Fh^{\rmi}}(h_{T_F^+}+h_{T_F^-})\norm[F]{u_F}^2 \lesssim \rmA_h(u_{\Fh}, u_{\Fh}) \lesssim (k+1)^2\sum_{F\in\Fh^{\rmi}}(h_{T_F^+}^{-1}+h_{T_F^-}^{-1})\norm[F]{u_F}^2.
\end{equation}
Having chosen orthonormal bases on the space $\POLY{k}(F)$, and recalling the definitions of $\Hmin{\Fh}$ and $\Hmax{\Fh}$, this relation reduces to
\begin{equation}\label{final.after.orthonormal}
\Hmin{\Fh}\bm{U}\cdot\bm{U} \lesssim \mat{A}_h\bm{U}\cdot\bm{U}\lesssim (k+1)^2\Hmax{\Fh}^{-1}\bm{U}\cdot\bm{U}.
\end{equation}
The estimates \eqref{est:lambda.kappa} classically follow from these bounds.
\end{proof}
\begin{remark}[Non-orthonormal polynomial bases]
The choice of orthonormal bases allows us, in the proof above, to substitute each $\norm[F]{u_F}^2$ in \eqref{final.before.orthonormal} with the Euclidean norm of the coefficients of $u_F$ on the basis of $\POLY{k}(F)$, thus leading to the global expressions $\Hmin{\Fh}\bm{U}\cdot\bm{U}$ and $\Hmax{\Fh}^{-1}\bm{U}\cdot\bm{U}$ in \eqref{final.after.orthonormal}.
If non-orthonormal polynomial bases are chosen in some $\POLY{k}(F)$, the proof shows that $\Hmin{\Fh}$ and $\Hmax{\Fh}$ have to be adjusted as follows: for each $F$, letting $c_F,C_F$ be positive constants such that, for the chosen basis $(q^F_i)_{i\in I_F}$ of $\POLY{k}(F)$, we have
\[
c_F\sum_{i\in I_F} \lambda_i^2 \le \norm[F]{\sum_{i\in I_F}\lambda_iq^F_i}^2\le C_F\sum_{i\in I_F} \lambda_i^2\qquad\forall (\lambda_i)_{i\in I_F}\in\bbR^{I_F},
\]
we set
\[
\Hmin{\Fh}=\min_{F\in\Fh^{\rmi}}c_F\left(h_{T_F^+}+h_{T_F^-}\right)\,,\quad\Hmax{\Fh}^{-1}=\max_{F\in\Fh^{\rmi}}C_F\left(h_{T_F^+}^{-1}+h_{T_F^-}^{-1}\right).
\]
It should be noted that $c_F$ and $C_F$ might depend, for some choices of polynomial bases, on the face geometry and its size. In this case, the resulting estimates on the eigenvalues and condition number may not be robust with respect to small faces in the mesh, contrary to those obtained using orthonormal bases (the importance, for meshes containing distorted elements, of using orthonormal bases over, say, monomial bases was already noticed in \cite[Section B.1]{di-pietro.droniou:2020:hybrid}).
\end{remark}
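This effect is easy to observe numerically: on a reference face $F=(0,1)$, the mass matrix of the monomial basis is the notoriously ill-conditioned Hilbert matrix, so $C_F/c_F$ blows up with the degree, while an $L^2$-orthonormalised basis gives $c_F=C_F=1$ by construction. A short check (setup ours):

```python
import numpy as np

k = 6  # face polynomial degree

# Monomial basis 1, x, ..., x^k on F = (0,1): the mass matrix is the Hilbert
# matrix H_ij = (x^i, x^j)_F = 1/(i+j+1), and c_F, C_F are its extreme
# eigenvalues.
H = np.array([[1.0/(i + j + 1) for j in range(k + 1)] for i in range(k + 1)])
eigs = np.linalg.eigvalsh(H)   # ascending order
print(f"monomials: C_F/c_F = {eigs[-1]/eigs[0]:.2e}")   # huge ratio

# L2-orthonormalised basis (a Cholesky change of basis, equivalent to
# Gram-Schmidt on the monomials): the mass matrix becomes the identity,
# so c_F = C_F = 1.
L = np.linalg.cholesky(H)
G = np.linalg.inv(L) @ H @ np.linalg.inv(L).T
print(np.allclose(G, np.eye(k + 1)))  # True
```

Already for $k=6$ the monomial ratio $C_F/c_F$ exceeds $10^8$, so the basis choice alone can dominate the conditioning of $\mat{A}_h$.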
\subsection{Analysis of the stabilisation}\label{sec:analysis.stab}
We prove here the validity of the stabilisation term $\rms_T$ defined by \eqref{eq:stab.def}, and provide a brief discussion of alternate choices of stabilisation bilinear form. As the coercivity and boundedness \eqref{eq:norm.equivalence}, and polynomial consistency \eqref{eq:polynomial.consistency} are well established for the stabilisations considered here, we only wish to show that Assumption \ref{assum:aT} holds true.
\begin{lemma}
The stabilisation bilinear form defined by \eqref{eq:stab.def} satisfies Assumption \ref{assum:aT}.
\end{lemma}
\begin{proof}
The lower bound \eqref{eq:aT.lower.bound} follows trivially by noting that for all $\ul{v}_T\in\UT{k,l}$, we have
\[
\rma_T(\ul{v}_T, \ul{v}_T) = \norm[T]{\nabla\pT{k+1}\ul{v}_T}^2 + h_\T^{-2}\norm[T]{\deltaT{l}\ul{v}_T}^2 + h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_T}^2.
\]
To prove the bound \eqref{eq:aT.faces.upper.bound} it remains to show that, for all $\ul{v}_{T,\partial}=(0,v_{\FT})\in\UT{k,l}$,
\[
h_\T^{-2}\norm[T]{\deltaT{l}\ul{v}_{T,\partial}}^2 \lesssim \norm[T]{\nabla\pT{k+1}\ul{v}_{T,\partial}}^2 + h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_{T,\partial}}^2.
\]
We invoke the boundedness of $\piTzr{l}$ and the Poincar\'{e}--Wirtinger inequality \eqref{eq:poincare}, valid since $\piTzr{0}\pT{k+1}\ul{v}_{T,\partial}=0$ by \eqref{eq:pT.closure}, to see that
\begin{equation*}
h_\T^{-2}\norm[T]{\deltaT{l}\ul{v}_{T,\partial}}^2 = h_\T^{-2}\norm[T]{\piTzr{l}\pT{k+1}\ul{v}_{T,\partial}}^2 \le h_\T^{-2}\norm[T]{\pT{k+1}\ul{v}_{T,\partial}}^2 \lesssim \norm[T]{\nabla\pT{k+1}\ul{v}_{T,\partial}}^2,
\end{equation*}
thus completing the proof.
\end{proof}
\subsubsection{Alternate choices for the stabilisation bilinear form}
We briefly comment here on a variety of different choices for the stabilisation term $\rms_T$. For the choice of element polynomial degree $l=k-1$, the stabilisation bilinear form $\sT^{(k-1)}:\UT{k,k-1}\times\UT{k,k-1}\to\bbR$ defined for all $\ul{v}_T,\ul{w}_T\in\UT{k,k-1}$ via
\[
\sT^{(k-1)}(\ul{v}_T, \ul{w}_T) \vcentcolon= h_\T^{-1}\brac[{\bdry \T}]{\deltaFT{k}\ul{v}_T, \deltaFT{k}\ul{w}_T}
\]
satisfies the requirements \eqref{eq:norm.equivalence} and \eqref{eq:polynomial.consistency} \cite[Section 4.3]{droniou.yemm:2021:robust}. Moreover, it is clear that $\sT^{(k-1)}$ satisfies Assumption \ref{assum:aT} with hidden constants in \eqref{eq:aT.lower.bound} and \eqref{eq:aT.faces.upper.bound} equal to $1$. We emphasise, however, that when $l>k-1$ the coercivity \eqref{eq:norm.equivalence} of $\rma_h$ fails for this choice of stabilisation, and the discrete problem \eqref{eq:discrete.problem} is ill-posed.
Another choice of stabilisation with only boundary terms is the ``original HHO stabilisation'' $\sT^{\partial}:\UT{k,l}\times\UT{k,l}\to\bbR$ defined for all $\ul{v}_T,\ul{w}_T\in\UT{k,l}$ via
\begin{equation}\label{eq:bdry.stab.def}
\sT^{\partial}(\ul{v}_T, \ul{w}_T) \vcentcolon= h_\T^{-1}\brac[{\bdry \T}]{(\deltaFT{k} - \deltaT{l})\ul{v}_T, (\deltaFT{k} - \deltaT{l})\ul{w}_T}.
\end{equation}
It satisfies the coercivity and boundedness requirements \eqref{eq:norm.equivalence} for all $l\le k+1$ \cite[Proposition 2.13]{di-pietro.droniou:2020:hybrid}. This choice of stabilisation also satisfies the upper bound \eqref{eq:aT.faces.upper.bound} in Assumption \ref{assum:aT}; however, we have so far been unable to prove the lower bound \eqref{eq:aT.lower.bound} with a constant that does not depend on $k$.
\begin{remark}[HDG stabilisation]
In the case $l=k+1$, the following \ac{hdg}-inspired stabilisation can also be considered (see \cite[Section 5.1.6]{di-pietro.droniou:2020:hybrid} and \cite{Cockburn.Di-Pietro.ea:16}):
\[
\mathrm{s}_T^{\textsc{hdg}}(\ul{v}_T, \ul{w}_T) = h_\T^{-1}\brac[{\bdry \T}]{\piFTzr{k}(v_{\FT}-v_T), \piFTzr{k}(w_{\FT}-w_T)}.
\]
As for $\sT^{\partial}$ above, we can prove a uniform-in-$k$ upper bound \eqref{eq:aT.faces.upper.bound} for $\mathrm{s}_T^{\textsc{hdg}}$, but the lower bounds we could establish depend on $k$.
\end{remark}
The gradient-based stabilisation $\sT^{\nabla}:\UT{k,l}\times\UT{k,l}\to\bbR$ introduced in \cite[Section 4]{droniou.yemm:2021:robust} is defined for all $\ul{v}_T,\ul{w}_T\in\UT{k,l}$ via
\begin{equation}\label{eq:grad.stab.def}
\sT^{\nabla}(\ul{v}_T, \ul{w}_T) \vcentcolon= \brac[T]{\nabla \deltaT{l}\ul{v}_T, \nabla \deltaT{l}\ul{w}_T} + h_\T^{-1}\brac[{\bdry \T}]{\deltaFT{k}\ul{v}_T, \deltaFT{k}\ul{w}_T}.
\end{equation}
The gradient-based stabilisation satisfies coercivity, boundedness, and polynomial consistency for all $l\ge k-1$. Moreover, it is clear that $\sT^{\nabla}$ satisfies equation \eqref{eq:aT.lower.bound} in Assumption \ref{assum:aT}. For $l\ge k+1$, the upper bound \eqref{eq:aT.faces.upper.bound} also follows trivially. However, for $l=k-1,k$ we have been unable to prove that this choice of stabilisation satisfies \eqref{eq:aT.faces.upper.bound} without an extra dependency on $k$.
Despite these shortcomings in the analysis, numerical tests suggest that $\sT^{\partial}$ and $\sT^{\nabla}$ satisfy the eigenvalue estimates stated in Theorem \ref{th:estimates}; this is illustrated in Figure \ref{fig:k.test}. Moreover, these choices of stabilisation might be preferable, as the error they induce, measured in certain norms, can be significantly lower than for the choice \eqref{eq:stab.def} \cite[Figures 2, 3]{droniou.yemm:2021:robust}.
\section{HHO on cut meshes}\label{sec:unfitted}
As discussed in the introduction, the generation of unstructured body-fitted meshes of geometrically complex regions -- such as those with curved boundaries and high curvatures -- can present great difficulties. Unfitted finite element methods avoid this issue because they are defined on a simple (e.g., Cartesian or octree) background mesh covering the domain of interest. The elements intersected by the domain boundary can be locally cut to produce polytopal elements fitting the physical boundary \cite{2110.01378}. These cuts can produce narrow, anisotropic `sliver-cut' elements, as well as small but otherwise well-shaped `small-cut' elements.
The design of a variant of the HHO method on cut meshes, with potentially curved elements, is presented and analysed in \cite{burman.cicuttin.ea:2021:unfitted} for elliptic interface problems. The unfitted \ac{hho} method therein makes use of Nitsche's method for the local reconstruction operator. Instead, we consider a standard \ac{hho} method on cut meshes. In particular, we define a simple structured background mesh $\mathcal{T}_{h}^{\mathrm{bg}}$ and extract the submesh of active elements $\mathcal{T}_{h}^{\mathrm{act}}$. The active mesh is split into interior elements $\mathcal{T}_{h}^{\mathrm{in}}$ and cut elements $\mathcal{T}_{h}^{\mathrm{cut}}$.
Based on the condition number bounds in Theorem \ref{th:estimates}, we know that the conditioning of the system matrix can be severely affected by the presence of small-cut and sliver-cut elements. To attain condition number bounds on cut meshes that are independent of the cut location, sliver-cut and small-cut elements in $\mathcal{T}_{h}^{\mathrm{cut}}$ are aggregated to their neighbours to form an isotropic, quasi-uniform mesh. In particular, we iterate over elements $T \in \mathcal{T}_{h}^{\mathrm{cut}}$ and merge $T$ with its neighbour sharing the longest edge (or face) if
\[
\frac{|T|_d}{|{\bdry \T}|_{d-1}} < \epsilon_1 h_\T \qquad\textrm{or}\qquad h_\T < \epsilon_2 h_{\rm{max}}.
\]
The algorithm is re-run until no ill-posed elements are found. The convergence of this algorithm is assured, since any ill-posed cell is at a finite distance from a well-posed cell. The size of the aggregates is bounded by the maximum of such distances over all ill-posed cells, which depends on the scale of the geometrical features (see \cite[Lemma 2.2]{badia.verdugo.ea:2018:aggregated}). We take $\epsilon_1 = 0.05$ and $\epsilon_2 = 0.3$ in the numerical experiments section. After this aggregation step, we end up with a new mesh $\mathcal{T}_{h}^{\mathrm{ag}}$. Let us note that arbitrarily small faces can still be present in $\mathcal{T}_{h}^{\mathrm{ag}}$. The following corollary is a direct consequence of Theorem~\ref{th:estimates} and the aggregation algorithm.
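The ill-posedness test driving this aggregation can be sketched as follows; the element measures (`volume` $=|T|_d$, `perimeter` $=|\partial T|_{d-1}$, `diam` $=h_T$) are hypothetical inputs, and the merging of a flagged element with the neighbour sharing its longest face would follow in a separate, mesh-dependent step:

```python
def classify_cut_elements(elements, eps1=0.05, eps2=0.3):
    """Return the indices of ill-posed (sliver-cut or small-cut) elements.

    Each element is a dict with keys 'volume' (|T|_d), 'perimeter'
    (|dT|_{d-1}) and 'diam' (h_T); eps1 and eps2 are the thresholds
    of the aggregation criterion."""
    h_max = max(T['diam'] for T in elements)
    bad = []
    for i, T in enumerate(elements):
        sliver = T['volume'] / T['perimeter'] < eps1 * T['diam']
        small = T['diam'] < eps2 * h_max
        if sliver or small:
            bad.append(i)
    return bad

# A well-shaped cell, a sliver-cut cell and a small-cut cell
elements = [
    {'volume': 1.0, 'perimeter': 4.0, 'diam': 2**0.5},          # kept
    {'volume': 0.01, 'perimeter': 2.02, 'diam': 1.0},           # sliver-cut
    {'volume': 0.04, 'perimeter': 0.8, 'diam': 0.1 * 2**0.5},   # small-cut
]
flagged = classify_cut_elements(elements)   # → [1, 2]
```

Each flagged element is then merged with the neighbour sharing its longest edge (or face), and the classification is re-run until the flagged list is empty.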
\begin{corollary}[Eigenvalues and condition numbers on cut meshes]
Let $\mathcal{T}_{h}^{\mathrm{bg}}$ be a background mesh covering $\Omega$ with characteristic mesh size $h$ and $\mathcal{T}_{h}^{\mathrm{ag}}$ the corresponding aggregated mesh obtained using the algorithm described above. Let $\mat{A}_h$ be the linear system matrix
corresponding to the \ac{hho} discretisation (\ref{hho:statically.condensed}) for $\mathcal{T}_{h}^{\mathrm{ag}}$. Under the assumptions in Theorem~\ref{th:estimates}, it holds:
\[
\lambda_{\rm min}(\mat{A}_h)\gtrsim h, \qquad
\lambda_{\rm max}(\mat{A}_h)\lesssim (k+1)^2 h^{-1}, \qquad
\kappa(\mat{A}_h)\lesssim (k+1)^2 h^{-2},
\]
where the constants are independent of the cut location but depend on the choice of $\epsilon_1$ and $\epsilon_2$.
\end{corollary}
We note that the ill-conditioning of systems arising in unfitted $C^0$-Lagrangian \acp{fe} can be remedied by aggregating ill-conditioned elements into their neighbours \cite{badia.verdugo.ea:2018:aggregated}. However, the strategy we consider here is simpler because there is no need to eliminate ill-posed nodes via constraints in each aggregate.
\section{Numerical Results}\label{sec:numerical}
We provide here a numerical study of the condition number to illustrate the results derived in previous sections. The linear system \eqref{hho:statically.condensed} is assembled using the \texttt{HArDCore} open source C++ library \cite{hhocode}. We compute the condition number using the \texttt{SymEigsSolver} solver found in the \texttt{Spectra} library, with documentation available at \url{https://spectralib.org/doc/index.html}.
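The same extreme-eigenvalue computation can be reproduced with SciPy's sparse symmetric eigensolver (`scipy.sparse.linalg.eigsh`, playing the role of Spectra's `SymEigsSolver`); the following minimal sketch uses a model symmetric positive definite matrix and is not tied to the \texttt{HArDCore} assembly:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

def extreme_eigenvalues(A):
    """Largest eigenvalue by Lanczos, smallest by shift-invert Lanczos,
    for a symmetric positive definite sparse matrix A."""
    lmax = eigsh(A, k=1, which='LA', return_eigenvectors=False)[0]
    lmin = eigsh(A, k=1, sigma=0.0, which='LM', return_eigenvectors=False)[0]
    return lmin, lmax

# Model problem: 1D Dirichlet Laplacian stencil on n interior nodes
n = 50
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
lmin, lmax = extreme_eigenvalues(A)
kappa = lmax / lmin   # condition number of the SPD matrix
```

Shift-invert around $\sigma=0$ is the standard way to resolve the smallest eigenvalue accurately, since plain Lanczos converges slowly at the lower end of the spectrum.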
All numerical tests in this section are performed using element degree $l=k$, and $L^2$-orthonormalised basis functions. The orthonormalisation is achieved using a classical Gram-Schmidt algorithm.
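The $L^2$-orthonormalisation amounts to a Gram-Schmidt process in the inner product induced by the element mass matrix; a minimal sketch on coefficient vectors follows, assuming the Gram (mass) matrix `M` of the initial basis is available:

```python
import numpy as np

def gram_schmidt(M):
    """Given the Gram (mass) matrix M of a basis, return coefficients C
    whose rows define an L2-orthonormal basis, i.e. C @ M @ C.T = I."""
    n = M.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        v = np.zeros(n)
        v[i] = 1.0                            # i-th original basis function
        for j in range(i):
            v -= (C[j] @ M @ v) * C[j]        # remove previous components
        C[i] = v / np.sqrt(v @ M @ v)         # normalise in the L2 norm
    return C

# Monomials {1, x} on (0,1): mass matrix [[1, 1/2], [1/2, 1/3]]
C = gram_schmidt(np.array([[1.0, 0.5], [0.5, 1.0 / 3.0]]))
# Rows of C give the orthonormal basis {1, sqrt(3)(2x - 1)}
```

Orthonormalising the basis in this way removes the (possibly severe) conditioning of the raw monomial basis from the local mass and stiffness blocks.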
\subsection{Coarsened meshes}
In order to capture intricate geometric details in a given domain, it is sometimes sensible to start with a regular, fine mesh of small element diameter and agglomerate elements together to save computation time. These coarsened meshes are (relatively) isotropic and quasi-uniform; however, they can have many faces per element and arbitrarily small face diameters. Thus, Theorem \ref{th:estimates} predicts the minimum and maximum eigenvalues to scale as $\lambda_{\rm{min}}(\mat{A}_h)\approx h$ and $\lambda_{\rm{max}}(\mat{A}_h)\approx h^{-1}$ respectively, independently of the number and size of faces in each element. We consider the unit box $\Omega=(0,1)^2\subset\bbR^2$, and a fine triangular mesh of $\Omega$. We then design successive coarsenings of this mesh and observe how the condition number evolves. Such meshes are plotted in Figure \ref{fig:coarse.mesh} with the data of the mesh sequence presented in Table \ref{table:coarse.mesh}.
\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{data/mesh1_5_0.pdf}
\includegraphics[width=0.3\textwidth]{data/mesh1_5_2.pdf}
\includegraphics[width=0.3\textwidth]{data/mesh1_5_6.pdf}
\caption{Coarsened meshes}
\label{fig:coarse.mesh}
\end{figure}
\begin{table}[H]
\centering
\pgfplotstableread{data/coarse_mesh_test_k0.dat}\loadedtable
\pgfplotstabletypeset
[
columns={hMin,hMax,NbCells,NbInternalEdges},
columns/hMin/.style={column name=\(\hmin\)},
columns/hMax/.style={column name=\(h_{\rm{max}}\)},
columns/NbCells/.style={column name=Nb. Elements},
columns/NbInternalEdges/.style={column name=Nb. Internal Edges},
every head row/.style={before row=\toprule,after row=\midrule},
every last row/.style={after row=\bottomrule}
]\loadedtable
\caption{Coarsened meshes}
\label{table:coarse.mesh}
\end{table}
The condition number and eigenvalues on each mesh are plotted in Figure \ref{fig:coarse.mesh.test}. As the mesh is coarsened, the condition number appears to decay slightly slower than $h^{-2}$. This is explained by the successive meshes becoming less `round', so that the mesh regularity parameter increases slightly with each level of coarsening.
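Observed rates such as ``slightly slower than $h^{-2}$'' are read off as the slope of a least-squares fit of $\log\kappa(\mat{A}_h)$ against $\log h^{-1}$; a minimal sketch of this post-processing (on synthetic data, not the actual \texttt{.dat} files) is:

```python
import numpy as np

def loglog_slope(x, y):
    """Least-squares slope of log(y) against log(x), i.e. the observed
    rate r in y ~ C * x^r."""
    return np.polyfit(np.log(x), np.log(y), 1)[0]

# Synthetic condition numbers scaling exactly as (h^{-1})^2
h_inv = np.array([4.0, 8.0, 16.0, 32.0])
kappa = 3.0 * h_inv**2
rate = loglog_slope(h_inv, kappa)   # → 2.0 (up to rounding)
```

A fitted slope just below $2$ on the data of Figure \ref{fig:coarse.mesh.test} is consistent with the mild growth of the mesh regularity parameter under coarsening.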
\begin{figure}[H]
\centering
\ref{coarse_mesh_test}
\vspace{0.5cm}\\
\subcaptionbox{$\kappa(\mat{A}_h)$ vs $h^{-1}$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, legend to name=coarse_mesh_test, xlabel={$h^{-1}$}, ylabel={$\kappa(\mat{A}_h)$}]
\addplot table[x=hMaxInv, y=Condition] {data/coarse_mesh_test_vol_stab_k0.dat};
\addlegendentry{\(k=0\)};
\addplot table[x=hMaxInv,y=Condition] {data/coarse_mesh_test_vol_stab_k1.dat};
\addlegendentry{\(k=1\)};
\addplot table[x=hMaxInv,y=Condition] {data/coarse_mesh_test_vol_stab_k2.dat};
\addlegendentry{\(k=2\)};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{2}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{min}}(\mat{A}_h)$ vs $h$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[xlabel={$h$}, ylabel={$\lambda_{\rm{min}}(\mat{A}_h)$}]
\addplot table[x=hMax,y=MinEig] {data/coarse_mesh_test_vol_stab_k0.dat};
\addplot table[x=hMax,y=MinEig] {data/coarse_mesh_test_vol_stab_k1.dat};
\addplot table[x=hMax,y=MinEig] {data/coarse_mesh_test_vol_stab_k2.dat};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{max}}(\mat{A}_h)$ vs $h^{-1}$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[xlabel={$h^{-1}$}, ylabel={$\lambda_{\rm{max}}(\mat{A}_h)$}]
\addplot table[x=hMaxInv,y=MaxEig] {data/coarse_mesh_test_vol_stab_k0.dat};
\addplot table[x=hMaxInv,y=MaxEig] {data/coarse_mesh_test_vol_stab_k1.dat};
\addplot table[x=hMaxInv,y=MaxEig] {data/coarse_mesh_test_vol_stab_k2.dat};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Coarse square meshes}
\label{fig:coarse.mesh.test}
\end{figure}
In Figure \ref{fig:k.test} we fix the mesh (the mesh with $\hmin=0.11$ in Table \ref{table:coarse.mesh}) and vary the polynomial degree $k$. We test with the stabilisation defined by \eqref{eq:stab.def} as well as the gradient-based \eqref{eq:grad.stab.def} and boundary \eqref{eq:bdry.stab.def} stabilisations. It is apparent from Figure \ref{fig:k.test} that all three stabilisations result in a system matrix whose eigenvalues behave as predicted by Theorem \ref{th:estimates}.
\begin{figure}[H]
\centering
\ref{ktest}
\vspace{0.5cm}\\
\subcaptionbox{$\kappa(\mat{A}_h)$ vs $k$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}
[
legend columns=3,
legend to name=ktest,
xtick={1,2,3,4,5,6,7,8,9},
xticklabels={1,2,3,4,5,6,7,8,9}
]
\addplot table[x=EdgeDegree, y=Condition] {data/k_test_coarse_mesh_vol_stab.dat};
\addlegendentry{$\rms_T$};
\addplot table[x=EdgeDegree, y=Condition] {data/k_test_mesh1_5_coarse_8.dat};
\addlegendentry{$\sT^{\nabla}$};
\addplot table[x=EdgeDegree, y=Condition] {data/k_test_coarse_mesh_bdry_stab.dat};
\addlegendentry{$\sT^{\partial}$};
\logLogSlopeTriangle{0.9}{0.3}{0.1}{2}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{min}}(\mat{A}_h)$ vs $k$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}
[
xtick={1,2,3,4,5,6,7,8,9},
xticklabels={1,2,3,4,5,6,7,8,9},
ymin=0.6,
ymax=0.7,
ytick={0.6,0.65,0.7},
yticklabels={0.6,0.65,0.7}
]
\addplot table[x=EdgeDegree, y=MinEig] {data/k_test_coarse_mesh_vol_stab.dat};
\addplot table[x=EdgeDegree, y=MinEig] {data/k_test_mesh1_5_coarse_8.dat};
\addplot table[x=EdgeDegree, y=MinEig] {data/k_test_coarse_mesh_bdry_stab.dat};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{max}}(\mat{A}_h)$ vs $k$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}
[
xtick={1,2,3,4,5,6,7,8,9},
xticklabels={1,2,3,4,5,6,7,8,9}
]
\addplot table[x=EdgeDegree, y=MaxEig] {data/k_test_coarse_mesh_vol_stab.dat};
\addplot table[x=EdgeDegree, y=MaxEig] {data/k_test_mesh1_5_coarse_8.dat};
\addplot table[x=EdgeDegree, y=MaxEig] {data/k_test_coarse_mesh_bdry_stab.dat};
\logLogSlopeTriangle{0.9}{0.3}{0.1}{2}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Condition number vs polynomial degree}
\label{fig:k.test}
\end{figure}
\subsection{Cut meshes}
In this section, we apply the \ac{hho} method to cut meshes using the aggregation strategy proposed above. The computation of the cut meshes and the boundary-element intersections has been carried out using the \texttt{Gridap} open-source Julia library \cite{Badia2020b} version 0.16.3 and its extension package for unfitted methods \texttt{GridapEmbedded.jl}~\cite{GridapEmbedded-jl} version 0.7 (see \cite{2110.01378} for more details). We consider first-order boundary representations -- that is, a piecewise linear approximation of the curved boundary.
\subsubsection{Test A}
Take the circular domain \(\Omega = \{(x, y)\in\bbR^2:x^2 + y^2 < 1\} \subset\bbR^2\) and consider three similar cut meshes of $\Omega$, with a parameter $\epsilon$ controlling the diameter of the smallest cut elements ($\epsilon < h_\T$ for all $T\in\Th$). The mesh data is given in Table \ref{table:mesh.epsilon.cut}, and we plot values of the condition number and eigenvalues versus $\epsilon$ in Figure \ref{fig:small.cut.cells}. It is clear that both the maximum eigenvalue and the condition number become unbounded as $\epsilon\to 0$. The minimum eigenvalue, however, stays approximately constant. This is consistent with the theory: each face is connected to at least one element with diameter proportional to $h_{\rm{max}}$, so we expect $\lambda_{\rm{min}}(\mat{A}_h)\sim h_{\rm{max}} = {\rm const}$.
\begin{table}[H]
\centering
\pgfplotstableread{data/small_cut_cells_test_k0.dat}\loadedtable
\pgfplotstabletypeset
[
columns={Epsilon,hMin,hMax,NbCells,NbInternalEdges},
columns/Epsilon/.style={column name=\(\epsilon\)},
columns/hMin/.style={column name=\(\hmin\)},
columns/hMax/.style={column name=\(h_{\rm{max}}\)},
columns/NbCells/.style={column name=Nb. Elements},
columns/NbInternalEdges/.style={column name=Nb. Internal Edges},
every head row/.style={before row=\toprule,after row=\midrule},
every last row/.style={after row=\bottomrule}
]\loadedtable
\caption{Parameters of the circular meshes with varying values of $\epsilon$}
\label{table:mesh.epsilon.cut}
\end{table}
\begin{figure}[H]
\centering
\ref{small_cut_cells}
\vspace{0.5cm}\\
\subcaptionbox{$\kappa(\mat{A}_h)$ vs $\epsilon$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, legend to name=small_cut_cells, xlabel={$\epsilon$}, ylabel={$\kappa(\mat{A}_h)$}]
\addplot table[x=Epsilon,y=Condition] {data/small_cut_cells_test_vol_stab_k0.dat};
\addlegendentry{\(k=0\)};
\addplot table[x=Epsilon,y=Condition] {data/small_cut_cells_test_vol_stab_k1.dat};
\addlegendentry{\(k=1\)};
\addplot table[x=Epsilon,y=Condition] {data/small_cut_cells_test_vol_stab_k2.dat};
\addlegendentry{\(k=2\)};
\reverseLogLogSlopeTriangle{0.5}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{min}}(\mat{A}_h)$ vs $\epsilon$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[
legend columns=3,
xlabel={$\epsilon$},
ylabel={$\lambda_{\rm{min}}(\mat{A}_h)$},
ymin=0.5,
ymax=0.6,
ytick={0.5,0.55,0.6},
yticklabels={0.5,0.55,0.6}
]
\addplot table[x=Epsilon,y=MinEig] {data/small_cut_cells_test_vol_stab_k0.dat};
\addplot table[x=Epsilon,y=MinEig] {data/small_cut_cells_test_vol_stab_k1.dat};
\addplot table[x=Epsilon,y=MinEig] {data/small_cut_cells_test_vol_stab_k2.dat};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{max}}(\mat{A}_h)$ vs $\epsilon$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, xlabel={$\epsilon$}, ylabel={$\lambda_{\rm{max}}(\mat{A}_h)$}]
\addplot table[x=Epsilon,y=MaxEig] {data/small_cut_cells_test_vol_stab_k0.dat};
\addplot table[x=Epsilon,y=MaxEig] {data/small_cut_cells_test_vol_stab_k1.dat};
\addplot table[x=Epsilon,y=MaxEig] {data/small_cut_cells_test_vol_stab_k2.dat};
\reverseLogLogSlopeTriangle{0.5}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Cut meshes with small-cut elements}
\label{fig:small.cut.cells}
\end{figure}
To avoid unbounded condition numbers on cut meshes, sliver and small-cut elements are aggregated as explained above.
A portion of the resulting aggregated mesh $\mathcal{T}_{h}^{\mathrm{ag}}$ of $\Omega$ is plotted in Figure \ref{fig:cut.mesh.aggr.zoom} showing the aggregation of sliver-cut and small-cut elements. We note the existence of arbitrarily small faces after the aggregation of small-cut elements.
\begin{figure}[H]
\centering
\subcaptionbox{No aggregation}{\includegraphics[width=0.3\textwidth]{data/tri2_no_aggr_zoom.pdf}}$\ $
\subcaptionbox{Sliver-cut elements aggregated}{\includegraphics[width=0.3\textwidth]{data/tri2_iso_aggr_zoom.pdf}}$\ $
\subcaptionbox{Sliver-cut and small-cut elements aggregated}{\includegraphics[width=0.3\textwidth]{data/tri2_iso_hom_aggr_zoom.pdf}}
\caption{Aggregation of cut meshes (local)}
\label{fig:cut.mesh.aggr.zoom}
\end{figure}
\begin{figure}[H]
\centering
\subcaptionbox{No aggregation}{\includegraphics[width=0.495\textwidth]{data/epsilon_2_no_aggr.pdf}}
\subcaptionbox{Sliver-cut and small-cut elements aggregated}{\includegraphics[width=0.495\textwidth]{data/epsilon_2_iso_hom_aggr.pdf}}
\caption{Aggregation of cut meshes (global)}
\label{fig:cut.mesh.aggr.mesh1.global}
\end{figure}
For each mesh in Table \ref{table:mesh.epsilon.cut} we consider a corresponding aggregated mesh, and in Figure \ref{fig:aggr.cells} we test the condition number and eigenvalues of the system matrix for various polynomial degrees $k$. It is clear that after aggregation the minimal eigenvalue, maximal eigenvalue and condition number are independent of $\epsilon$.
\begin{figure}[H]
\centering
\ref{aggr_cut_cells}
\vspace{0.5cm}\\
\subcaptionbox{$\kappa(\mat{A}_h)$ vs $\epsilon$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[
legend columns=3,
legend to name=aggr_cut_cells,
xlabel={$\epsilon$},
ylabel={$\kappa(\mat{A}_h)$}
]
\addplot table[x=Epsilon,y=Condition] {data/aggr_cut_cells_test_vol_stab_k0.dat};
\addlegendentry{\(k=0\)};
\addplot table[x=Epsilon,y=Condition] {data/aggr_cut_cells_test_vol_stab_k1.dat};
\addlegendentry{\(k=1\)};
\addplot table[x=Epsilon,y=Condition] {data/aggr_cut_cells_test_vol_stab_k2.dat};
\addlegendentry{\(k=2\)};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{min}}(\mat{A}_h)$ vs $\epsilon$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[
legend columns=3,
xlabel={$\epsilon$},
ylabel={$\lambda_{\rm{min}}(\mat{A}_h)$},
ymin=0.5,
ymax=0.6,
ytick={0.5,0.55,0.6},
yticklabels={0.5,0.55,0.6}
]
\addplot table[x=Epsilon,y=MinEig] {data/aggr_cut_cells_test_vol_stab_k0.dat};
\addplot table[x=Epsilon,y=MinEig] {data/aggr_cut_cells_test_vol_stab_k1.dat};
\addplot table[x=Epsilon,y=MinEig] {data/aggr_cut_cells_test_vol_stab_k2.dat};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{max}}(\mat{A}_h)$ vs $\epsilon$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, xlabel={$\epsilon$}, ylabel={$\lambda_{\rm{max}}(\mat{A}_h)$}]
\addplot table[x=Epsilon,y=MaxEig] {data/aggr_cut_cells_test_vol_stab_k0.dat};
\addplot table[x=Epsilon,y=MaxEig] {data/aggr_cut_cells_test_vol_stab_k1.dat};
\addplot table[x=Epsilon,y=MaxEig] {data/aggr_cut_cells_test_vol_stab_k2.dat};
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Cut meshes with aggregated elements}
\label{fig:aggr.cells}
\end{figure}
\subsubsection{Test B}
We now consider a sequence of cut and approximate meshes of the circular domain $\Omega = \{(x, y)\in\bbR^2:x^2 + y^2 < 1\}$ and track the conditioning of the scheme before and after the agglomeration of sliver-cut and small-cut elements. The parameters of this sequence of meshes are given in Table \ref{table:mesh.circular.no.aggr} and three of the meshes are plotted in Figure \ref{fig:mesh.circular.no.aggr}.
\begin{table}[H]
\centering
\pgfplotstableread{data/circular_tri_no_aggr_k0.dat}\loadedtable
\pgfplotstabletypeset
[
columns={hMin,hMax,NbCells,NbInternalEdges},
columns/hMin/.style={column name=\(\hmin\)},
columns/hMax/.style={column name=\(h_{\rm{max}}\)},
columns/NbCells/.style={column name=Nb. Elements},
columns/NbInternalEdges/.style={column name=Nb. Internal Edges},
every head row/.style={before row=\toprule,after row=\midrule},
every last row/.style={after row=\bottomrule}
]\loadedtable
\caption{Parameters of the meshes used in Test B prior to aggregation}
\label{table:mesh.circular.no.aggr}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{data/circular_tri_1.pdf}
\includegraphics[width=0.3\textwidth]{data/circular_tri_3.pdf}
\includegraphics[width=0.3\textwidth]{data/circular_tri_5.pdf}
\caption{Three of the meshes used in Test B prior to aggregation}
\label{fig:mesh.circular.no.aggr}
\end{figure}
Prior to aggregation of small-cut elements, each face is attached to at least one element of diameter proportional to $h_{\rm{max}}$. Thus, we expect to observe $\lambda_{\rm min}(\mat{A}_h) \sim h_{\rm{max}}$ and $\lambda_{\rm max}(\mat{A}_h) \sim \hmin^{-1}$. In Figure \ref{fig:circular.no.aggr} we plot the condition number and eigenvalues for each mesh prior to aggregation. The results are not smooth due to the presence of sliver-cut elements, which can have very large mesh regularity parameters. In Figure \ref{fig:circular.iso}, we observe that after the agglomeration of sliver-cut elements the results behave as predicted by the theory.
\begin{figure}[H]
\centering
\ref{circular_no_aggr}
\vspace{0.5cm}\\
\subcaptionbox{$\kappa(\mat{A}_h)$ vs $h_{\rm{max}}^{-1}\hmin^{-1}$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, legend to name=circular_no_aggr, xlabel={$h_{\rm{max}}^{-1}\hmin^{-1}$}, ylabel={$\kappa(\mat{A}_h)$}]
\addplot table[x=InvMaxMin,y=Condition] {data/circular_tri_no_aggr_vol_stab_k0.dat};
\addlegendentry{\(k=0\)};
\addplot table[x=InvMaxMin,y=Condition] {data/circular_tri_no_aggr_vol_stab_k1.dat};
\addlegendentry{\(k=1\)};
\addplot table[x=InvMaxMin,y=Condition] {data/circular_tri_no_aggr_vol_stab_k2.dat};
\addlegendentry{\(k=2\)};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{min}}(\mat{A}_h)$ vs $h_{\rm{max}}$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, xlabel={$h_{\rm{max}}$}, ylabel={$\lambda_{\rm{min}}(\mat{A}_h)$}]
\addplot table[x=hMax,y=MinEig] {data/circular_tri_no_aggr_vol_stab_k0.dat};
\addplot table[x=hMax,y=MinEig] {data/circular_tri_no_aggr_vol_stab_k1.dat};
\addplot table[x=hMax,y=MinEig] {data/circular_tri_no_aggr_vol_stab_k2.dat};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{max}}(\mat{A}_h)$ vs $\hmin^{-1}$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, xlabel={$\hmin^{-1}$}, ylabel={$\lambda_{\rm{max}}(\mat{A}_h)$}]
\addplot table[x=hMinInv,y=MaxEig] {data/circular_tri_no_aggr_vol_stab_k0.dat};
\addplot table[x=hMinInv,y=MaxEig] {data/circular_tri_no_aggr_vol_stab_k1.dat};
\addplot table[x=hMinInv,y=MaxEig] {data/circular_tri_no_aggr_vol_stab_k2.dat};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Circular meshes with no aggregation}
\label{fig:circular.no.aggr}
\end{figure}
\begin{figure}[H]
\centering
\ref{circular_iso}
\vspace{0.5cm}\\
\subcaptionbox{$\kappa(\mat{A}_h)$ vs $h_{\textrm{min}}^{-1}h_{\textrm{max}}^{-1}$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, legend to name=circular_iso, xlabel={$h_{\textrm{min}}^{-1}h_{\textrm{max}}^{-1}$}, ylabel={$\kappa(\mat{A}_h)$}]
\addplot table[x=InvMaxMin, y=Condition] {data/circular_tri_isometric_vol_stab_k0.dat};
\addlegendentry{\(k=0\)};
\addplot table[x=InvMaxMin,y=Condition] {data/circular_tri_isometric_vol_stab_k1.dat};
\addlegendentry{\(k=1\)};
\addplot table[x=InvMaxMin,y=Condition] {data/circular_tri_isometric_vol_stab_k2.dat};
\addlegendentry{\(k=2\)};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{min}}(\mat{A}_h)$ vs $h_{\textrm{max}}$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, xlabel={$h_{\textrm{max}}$}, ylabel={$\lambda_{\rm{min}}(\mat{A}_h)$}]
\addplot table[x=hMax,y=MinEig] {data/circular_tri_isometric_vol_stab_k0.dat};
\addplot table[x=hMax,y=MinEig] {data/circular_tri_isometric_vol_stab_k1.dat};
\addplot table[x=hMax,y=MinEig] {data/circular_tri_isometric_vol_stab_k2.dat};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{max}}(\mat{A}_h)$ vs $h_{\textrm{min}}^{-1}$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, xlabel={$h_{\textrm{min}}^{-1}$}, ylabel={$\lambda_{\rm{max}}(\mat{A}_h)$}]
\addplot table[x=hMinInv,y=MaxEig] {data/circular_tri_isometric_vol_stab_k0.dat};
\addplot table[x=hMinInv,y=MaxEig] {data/circular_tri_isometric_vol_stab_k1.dat};
\addplot table[x=hMinInv,y=MaxEig] {data/circular_tri_isometric_vol_stab_k2.dat};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Circular meshes with sliver-cut elements aggregated}
\label{fig:circular.iso}
\end{figure}
In Figure \ref{fig:circular.iso.uniform}, results are plotted with both sliver-cut and small-cut elements aggregated. The condition number is one order of magnitude smaller than prior to aggregation, and scales as $h^{-2}$. Again, this is expected since the aggregated meshes are quasi-uniform ($h=h_{\rm{max}}\sim\hmin$).
\begin{figure}[H]
\centering
\ref{circular_iso_uniform}
\vspace{0.5cm}\\
\subcaptionbox{$\kappa(\mat{A}_h)$ vs $h^{-1}$}
{
\begin{tikzpicture}[scale=0.56]
\begin{loglogaxis}[legend columns=3, legend to name=circular_iso_uniform, xlabel={$h^{-1}$}, ylabel={$\kappa(\mat{A}_h)$}]
\addplot table[x=hMaxInv,y=Condition] {data/circular_tri_isometric_hom_vol_stab_k0.dat};
\addlegendentry{\(k=0\)};
\addplot table[x=hMaxInv,y=Condition] {data/circular_tri_isometric_hom_vol_stab_k1.dat};
\addlegendentry{\(k=1\)};
\addplot table[x=hMaxInv,y=Condition] {data/circular_tri_isometric_hom_vol_stab_k2.dat};
\addlegendentry{\(k=2\)};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{2}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{min}}(\mat{A}_h)$ vs $h$}
{
\begin{tikzpicture}[scale=0.56]
\begin{loglogaxis}[legend columns=3, xlabel={$h$}, ylabel={$\lambda_{\rm{min}}(\mat{A}_h)$}]
\addplot table[x=hMax,y=MinEig] {data/circular_tri_isometric_hom_vol_stab_k0.dat};
\addplot table[x=hMax,y=MinEig] {data/circular_tri_isometric_hom_vol_stab_k1.dat};
\addplot table[x=hMax,y=MinEig] {data/circular_tri_isometric_hom_vol_stab_k2.dat};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{max}}(\mat{A}_h)$ vs $h^{-1}$}
{
\begin{tikzpicture}[scale=0.56]
\begin{loglogaxis}[legend columns=3, xlabel={$h^{-1}$}, ylabel={$\lambda_{\rm{max}}(\mat{A}_h)$}]
\addplot table[x=hMaxInv,y=MaxEig] {data/circular_tri_isometric_hom_vol_stab_k0.dat};
\addplot table[x=hMaxInv,y=MaxEig] {data/circular_tri_isometric_hom_vol_stab_k1.dat};
\addplot table[x=hMaxInv,y=MaxEig] {data/circular_tri_isometric_hom_vol_stab_k2.dat};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Circular meshes with sliver-cut and small-cut elements aggregated}
\label{fig:circular.iso.uniform}
\end{figure}
\subsection{Penta-diagonal meshes}
We consider in this section a family of meshes with a penta-diagonal band of elements being refined, and two large elements, one on each side (see Figure \ref{fig:penta.mesh}). The purpose of this test is to assess the sharpness of our estimates, and the robustness of the HHO condition number itself, when some large elements neighbour very small elements and thus acquire an increasing number of faces. While testing on such extreme meshes is possibly contrived, the behaviour of the condition number illustrates that in some situations the estimates of Theorem \ref{th:estimates} can be improved.
\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{data/mesh2_3_pentadiag.pdf}
\includegraphics[width=0.3\textwidth]{data/mesh2_5_pentadiag.pdf}
\caption{Penta-diagonal square meshes}\label{fig:penta.mesh}
\end{figure}
\begin{figure}[H]
\centering
\ref{square_penta_diag_no_condition}
\vspace{0.5cm}\\
\subcaptionbox{$\kappa(\mat{A}_h)$ vs $h_{\textrm{min}}^{-1}$\label{fig:penta.kappa}}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, legend to name=square_penta_diag_no_condition, xlabel={$h_{\textrm{min}}^{-1}$}, ylabel={$\kappa(\mat{A}_h)$}]
\addplot table[x=hMinInv, y=Condition] {data/square_pentadiag_vol_stab_k0.dat};
\addlegendentry{\(k=0\)};
\addplot table[x=hMinInv,y=Condition] {data/square_pentadiag_vol_stab_k1.dat};
\addlegendentry{\(k=1\)};
\addplot table[x=hMinInv,y=Condition] {data/square_pentadiag_vol_stab_k2.dat};
\addlegendentry{\(k=2\)};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{2}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{min}}(\mat{A}_h)$ vs $h_{\textrm{min}}^{-1}$\label{fig:penta.lambdamin}}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, xlabel={$h_{\textrm{min}}^{-1}$}, ylabel={$\lambda_{\rm{min}}(\mat{A}_h)$}]
\addplot table[x=hMinInv,y=MinEig] {data/square_pentadiag_vol_stab_k0.dat};
\addplot table[x=hMinInv,y=MinEig] {data/square_pentadiag_vol_stab_k1.dat};
\addplot table[x=hMinInv,y=MinEig] {data/square_pentadiag_vol_stab_k2.dat};
\end{loglogaxis}
\end{tikzpicture}
}
\subcaptionbox{$\lambda_{\rm{max}}(\mat{A}_h)$ vs $h_{\textrm{min}}^{-1}$}
{
\begin{tikzpicture}[scale=0.57]
\begin{loglogaxis}[legend columns=3, xlabel={$h_{\textrm{min}}^{-1}$}, ylabel={$\lambda_{\rm{max}}(\mat{A}_h)$}]
\addplot table[x=hMinInv,y=MaxEig] {data/square_pentadiag_vol_stab_k0.dat};
\addplot table[x=hMinInv,y=MaxEig] {data/square_pentadiag_vol_stab_k1.dat};
\addplot table[x=hMinInv,y=MaxEig] {data/square_pentadiag_vol_stab_k2.dat};
\logLogSlopeTriangle{0.9}{0.4}{0.1}{1}{black};
\end{loglogaxis}
\end{tikzpicture}
}
\caption{Penta-diagonal square meshes}\label{fig:penta.results}
\end{figure}
The results presented in Figure \ref{fig:penta.results} show a growth of the maximum eigenvalue as $\mathcal O(h_{\textrm{min}}^{-1})$, which is consistent with the estimate \eqref{est:lambda.max}. Figure \ref{fig:penta.lambdamin}, however, seems to indicate that, for this family of meshes, $\lambda_{\rm min}(\mat{A}_h)$ actually remains bounded below, suggesting that the estimate \eqref{est:lambda.min} is not optimal; it can in fact be proved (see Lemma \ref{lem:lmin.penta}) that for these meshes the minimal eigenvalue indeed remains bounded below. As a consequence, the condition number $\kappa(\mat{A}_h)$ does not grow as $\mathcal O(h_{\textrm{min}}^{-2})$ but as $\mathcal O(h_{\textrm{min}}^{-1})$, as illustrated in Figure \ref{fig:penta.kappa}.
\begin{lemma}\label{lem:lmin.penta}
For the family of penta-diagonal meshes, it holds that
$\lambda_{\rm min}(\mat{A}_h)\gtrsim 1$.
\end{lemma}
\begin{proof}
We first note that even though the penta-diagonal meshes do not satisfy Assumption \ref{assum:star.shaped} (due to the two large elements with ``stairs'' boundary), the analysis carried out in the previous sections still applies. Indeed, we can easily find uniform bi-Lipschitz mappings between each of these elements and a ball of size comparable to these elements, which ensures that the trace inequality \eqref{eq:continuous.trace} still holds; since all elements contain a ball of size comparable to their diameters, the other relevant inequalities (approximation properties of projectors, discrete inverse inequalities) also remain valid.
An inspection of the proof of Theorem \ref{th:estimates} (see in particular \eqref{eq:Ah.lower.bound}) reveals that the bound on $\lambda_{\rm min}(\mat{A}_h)$ is a direct consequence of \eqref{eq:discrete.poincare.faces}. The result thus follows if we establish this improved version of \eqref{eq:discrete.poincare.faces}, in which the scaling factor $h_T$ has been removed from the left-hand side: for all $\ul{v}_h\in\U{h, 0}{k,l}$,
\begin{equation}\label{eq:discrete.poincare.faces.penta}
\sum_{T\in\Th}\norm[{\bdry \T}]{v_{\FT}}^2 \lesssim \sum_{T\in\Th}\Brac{\norm[T]{\nabla\pT{k+1}\ul{v}_T}^2 + h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_T}^2}.
\end{equation}
Let us take a face $F$ in one of the small elements. Assuming for example that $F$ is a vertical face, we can create a finite sequence of vertical faces $(F=F_1,F_2,\ldots,F_r)$ (with $r\le 3$) such that $F_r$ is a face of one of the two big elements in the mesh, say $T_\star$; see Figure \ref{fig:proof.penta} for an illustration.
\begin{figure}
\begin{center}
\input{fig-penta_proof.pdf_t}
\caption{Illustration of the proof of Lemma \ref{lem:lmin.penta}.}
\label{fig:proof.penta}
\end{center}
\end{figure}
Denoting by $(T_1,\ldots,T_r,T_{r+1}=T_\star)$ the elements encountered along the sequence $(F_1,\ldots,F_r)$, we can then write
\begin{align}\label{eq:penta.proof.1}
\norm[F]{v_F}^2 \le{}& \norm[F]{v_{F_1}-\pi_{F_{1}}^{0, k}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}}^2 + \norm[F]{{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2} - \pi_{T_{2}}^{0, 0}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}}^2 + \norm[F]{\pi_{T_2}^{0,0}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}}^2 \nn\\
\lesssim{}& \norm[\partial T_2]{\delta_{\partial T_2}^{k}\ul{v}_{T_2}}^2 + h_{T_2}\norm[T_2]{\nabla {\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}}^2 + \norm[F_2]{\pi_{T_2}^{0,0}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}}^2
\end{align}
where we have introduced $\pm\pi_{F_{1}}^{0, k}({\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}-\pi_{T_2}^{0,0}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2})=\pm\pi_{F_{1}}^{0, k}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}-\pi_{T_2}^{0,0}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}$ and used the $L^2(F)$-boundedness of $\pi_{F_{1}}^{0, k}$ and a triangle inequality in the first line, and invoked in the second line the bound $\norm[F_1]{\delta_{F_1}^{k}\ul{v}_{T_2}}^2 \le \norm[\partial T_2]{\delta_{\partial T_2}^{k}\ul{v}_{T_2}}^2$, the continuous trace inequality \eqref{eq:continuous.trace} and Poincar\'{e}--Wirtinger inequality \eqref{eq:poincare}, and the fact that $\pi_{T_2}^{0,0}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}$ is constant and $|F|=|F_2|$. By a similar argument it holds that
\begin{align}\label{eq:penta.proof.2}
\norm[F_2]{\pi_{T_2}^{0,0}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}}^2 \le{}& \norm[F_2]{\pi_{T_2}^{0,0}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2} - \pi_{F_2}^{0,k}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}}^2 + \norm[F_2]{\pi_{F_2}^{0,k}{\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2} - v_{F_2}}^2 + \norm[F_2]{v_{F_2}}^2\nn\\
\lesssim{}& h_{T_2}\norm[T_2]{\nabla {\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}}^2 +\norm[\partial T_2]{\delta_{\partial T_2}^{k}\ul{v}_{T_2}}^2 + \norm[F_2]{v_{F_2}}^2.
\end{align}
Thus, combining \eqref{eq:penta.proof.1} and \eqref{eq:penta.proof.2} we are able to write
\[
\norm[F_1]{v_{F_1}}^2 \lesssim h_{T_2}\norm[T_2]{\nabla {\rm{p}}_{T_2}^{k+1}\ul{v}_{T_2}}^2 +\norm[\partial T_2]{\delta_{\partial T_2}^{k}\ul{v}_{T_2}}^2+ \norm[F_2]{v_{F_2}}^2.
\]
Iterating these estimates along the family $(F_1,F_2,\ldots,F_r)$, using $r\le 3$ and $h_{T_i}\lesssim 1$ we deduce that
\[
\norm[F]{v_F}^2\lesssim \sum_{i=2}^r \left(\norm[T_i]{\nabla\pTi{k+1}\ul{v}_{T_i}}^2 + h_{\T_i}^{-1}\norm[{\bdry \T_i}]{\deltaFTi{k}\ul{v}_{T_i}}^2\right)+\norm[F_r]{v_{F_r}}^2.
\]
Summing this inequality over $F\in\Fh[T]$ and then over the small elements $T$ on the diagonal of the mesh, we see that each of the small diagonal elements
appears at most $3$ times in the right-hand side, and that the last boundary term is bounded above by $\norm[\partial T_\star]{v_{\partial T_\star}}^2$. This term can be estimated by introducing $\pi_{\partial T_\star}^{0,k}{\rm{p}}^{k+1}_{T_\star}\ul{v}_{T_\star}$ as follows:
\begin{align*}
\norm[\partial T_\star]{v_{\partial T_\star}}^2\lesssim{}& \norm[\partial T_\star]{v_{\partial T_\star}-\pi_{\partial T_\star}^{0,k}{\rm{p}}^{k+1}_{T_\star}\ul{v}_{T_\star}}^2+
\norm[\partial T_\star]{\pi_{\partial T_\star}^{0,k}{\rm{p}}^{k+1}_{T_\star}\ul{v}_{T_\star}}^2\\
\lesssim{}& \norm[{\bdry \T_\star}]{\deltaFTs{k}\ul{v}_{T_\star}}^2 +h_{T_\star}\norm[T_\star]{\nabla\pTs{k+1}\ul{v}_{T_\star}}^2+
h_{T_\star}^{-1}\norm[T_\star]{{\rm{p}}^{k+1}_{T_\star}\ul{v}_{T_\star}}^2,\\
\lesssim{}& h_{\T_\star}^{-1}\norm[{\bdry \T_\star}]{\deltaFTs{k}\ul{v}_{T_\star}}^2 +\norm[T_\star]{\nabla\pTs{k+1}\ul{v}_{T_\star}}^2+ \norm[T_\star]{{\rm{p}}^{k+1}_{T_\star}\ul{v}_{T_\star}}^2,
\end{align*}
where we have invoked the boundedness of $\pi_{\partial T_\star}^{0,k}$ and the continuous trace inequality \eqref{eq:continuous.trace}, and in the final line we have used $h_{T_\star}\approx 1$, since $T_\star$ is one of the large elements whose diameter does not go to zero. Combining all these estimates leads to
$$
\sum_{T\in\Th}\norm[{\bdry \T}]{v_{\FT}}^2\lesssim \sum_{T\in\Th}\left(\norm[T]{\nabla\pT{k+1}\ul{v}_T}^2 + h_\T^{-1}\norm[{\bdry \T}]{\deltaFT{k}\ul{v}_T}^2\right) + \norm[\Omega]{\ph{k+1}\ul{v}_h}^2.
$$
The estimate \eqref{eq:discrete.poincare.faces.penta} then follows in the same manner as in the proof of \eqref{eq:discrete.poincare.faces}.
\end{proof}
\section{Conclusions}\label{sec:conclusions}
In this work, we prove detailed eigenvalue and condition number bounds for the linear system matrix that arises from \ac{hho} discretisations of the Laplace problem. The analysis applies to general polytopal meshes and polynomial orders. It reveals the effect of small and highly distorted elements and faces on the conditioning of the linear system. Whereas highly distorted elements negatively impact condition numbers, face shapes and sizes do not affect these bounds. With this information, we apply \ac{hho} methods on cut meshes. We combine simple background meshes, element intersection algorithms and an aggregation strategy to end up with well-posed \ac{hho} methods on cut meshes.
We carry out a detailed set of numerical experiments that are in agreement with the numerical analysis. First, we analyse the condition number as one coarsens polytopal meshes with many faces per element and arbitrarily small faces. Next, we show that the \ac{hho} method on aggregated cut meshes provides the expected condition number with respect to the mesh size. We also observe that the condition number of the algorithm is not affected by increasingly small cut elements. Finally, we consider a limit case with penta-diagonal square meshes that motivates sharper condition number bounds for some specific mesh configurations.
Future work includes the combination of \ac{hho} methods with higher-order cut geometrical discretisations (curved boundaries) and the design of optimal and scalable preconditioners for these linear systems.
\section*{Declarations}
{\footnotesize{
\textbf{Funding} This work was partially supported by the Australian Government through the \emph{Australian Research Council}'s Discovery Projects funding scheme (grant number DP210103092).
\medskip
\noindent\textbf{Competing Interests} The corresponding author states on behalf of all authors, that there is no conflict of interest.
\medskip
\noindent\textbf{Code Availability} All code used in this article is available in open source libraries and duly cited.
\medskip
\noindent\textbf{Data Availability} The data generated in this article is available from the corresponding author on reasonable request.}}
\bibliographystyle{plain}
\section{Introduction}
One of the most important and well studied questions in particle physics is why the observable Universe
has a tiny but non-zero ratio of baryons to photons without which there would be no stars, planets or life. The measurement of cosmic microwave background (CMB) anisotropies and the successful prediction of light element abundances from big bang nucleosynthesis (BBN), both lead to a consistent value of this ratio at the recombination time when atoms are formed \cite{Komatsu:2010fb},
\begin{equation}
\label{eqn:eta}
\eta=\frac{n_B}{n_{\gamma}}\approx 6.2\times10^{-10},
\end{equation}
where $n_B$ and $n_{\gamma}$ are baryon and photon number densities respectively.\footnote{Corresponding to a portion of comoving volume containing 1 photon at temperatures where the right-handed neutrinos are relativistic.} Any theory which successfully produces such a baryon asymmetry must fulfil the famous Sakharov conditions \cite{Sakharov:1967dj} of C and CP violation, $B$ violation and departure from thermal equilibrium. One of the most popular mechanisms satisfying these conditions is leptogenesis \cite{leptogenesis}, which takes advantage of the fact that non-perturbative, $B-L$ conserving, $B+L$ violating sphaleron processes can convert a lepton number asymmetry into a $B$ asymmetry. The lepton number asymmetry is obtained from the decays of heavy Majorana neutrinos, and so leptogenesis is intimately linked to neutrino mass, mixing and CP violation.
The discovery of neutrino mass and mixing is arguably one of the most influential observations in particle physics in the last 15 years. It has inspired a large number of works aimed at explaining both the extremely small mass and, in particular, the striking mixing pattern (which is very different from the close to diagonal quark sector) \cite{S3-L,Dn-L,A4-L,A4-L-Altarelli,S4-L,A5-L,Dn-LQ,A4-LQ,King:2007A4,S4-LQsum,PSL-LQ,Z7Z3-LQ,delta27-LQ,delta27-LQ-Dterms,SO3-LQ,SU3-LQ,Cooper:A4typeIII,Reviews}. Seesaw mechanisms provide the most common explanation for small neutrino masses; a heavy particle is introduced which has a Yukawa coupling with the lepton doublet. When this particle is integrated out, the effective theory has a Majorana mass term for the left-handed neutrinos which is suppressed by the large mass of the new particle. This new particle must be a colour singlet but can be a weak fermionic singlet with zero hypercharge (type~I) \cite{Minkowski:1977type1}, a weak scalar triplet with two units of hypercharge (type~II) \cite{type2} or a weak fermionic triplet with zero hypercharge (type~III) \cite{Foot:1989type3}. The issue of neutrino mixing is also very well studied, and a popular technique is to postulate the existence of an extra family symmetry at high energies. This symmetry is then broken in a specific way, by the vacuum expectation value (VEV) of heavy scalars called flavons. The remnants of the breaking pattern show up in the observed mixing of neutrinos at low energies.
Many of these models of neutrino mixing (predominantly employing the type I seesaw) exhibit a property known as form dominance (FD) \cite{Chen:2009um}, defined by the condition that the columns of the neutrino Yukawa matrix are proportional to columns of the mixing matrix in a particular basis corresponding to diagonal charged lepton and right-handed neutrino mass matrices. As discussed in several papers \cite{Choubey:2010vs,Antusch:2006cw,Jenkins:2008rb,diBari:lepto,Felipe:2009rr,AristizabalSierra:2009ex,King:2010bk}, models with family symmetry typically predict vanishing CP violating lepton asymmetry parameters~$\epsilon$ and hence zero leptogenesis.\footnote{For a discussion of how to achieve leptogenesis in the flavour symmetric phase, see e.g. \cite{AristizabalSierra:2011Arx}.}
As pointed out in \cite{Choubey:2010vs}, this can be understood very simply from the FD property
that the columns of the neutrino Yukawa matrix are mutually orthogonal since
they are proportional to the columns of the mixing matrix which is
unitary.\footnote{The vanishing of leptogenesis due to the orthogonality of
the columns of the neutrino Yukawa matrix was first observed in the case of
hierarchical neutrinos and constrained sequential dominance with
tri-bimaximal mixing in \cite{Antusch:2006cw} and was subsequently generalised to the case of FD with any neutrino mass pattern and any mixing pattern in \cite{Choubey:2010vs}.}
However in family symmetry models the Yukawa matrices are predicted at the scale of family symmetry breaking,
which may be close to the grand unified theory (GUT) scale, and above the mass scale of right-handed neutrinos.
Therefore in such models the Yukawa matrix will be subject to renormalisation group (RG) running from the
family breaking scale down to the scale of right-handed neutrino masses relevant for leptogenesis.
To illustrate the effects of RG corrections, we analyse two specific models involving
sizeable neutrino and $\tau$ Yukawa couplings and satisfying FD at leading order (LO):
the first model \cite{A4-L-Altarelli} reproduces the well studied
tri-bimaximal (TB) mixing pattern \cite{Harrison:2002tbm}; and the second model \cite{King:2011zj}
reproduces the tri-maximal (TM) mixing pattern \cite{Haba:2006dz}
consistent with the results from T2K \cite{Abe:2011sj}. Although in both models RG running occurs over only one or two orders of magnitude in the energy scale, we shall show that this leads to sufficient violation of FD to allow successful leptogenesis in each case.
One could ask why RG effects should be considered when higher order (HO) operators in the $A_4$ model have been shown to produce a realistic value of $\eta$ \cite{diBari:lepto}. The answer is that RG effects turn out to be of equal importance to HO operators in determining leptogenesis and so in general both effects should be considered together. Here we choose to drop the effect of HO operators for clarity: we want to study the effects of RG corrections to leptogenesis in isolation in order to illustrate the magnitude of the effect. Moreover, there are ultraviolet completions of the $A_4$ model of both TB \cite{Varzielas:2010mp} and TM mixing \cite{King:2011zj} in which HO operators play a negligible role, and the viability of leptogenesis in such cases then relies exclusively on the effects of RG corrections considered here.
We emphasise that this paper represents the first study which takes into account RG corrections to leptogenesis in family symmetry models. The results in this paper show that RG corrections have a large impact on leptogenesis in any family symmetry models involving neutrino Yukawa couplings of order unity. Therefore, when considering leptogenesis in such models, RG corrections should not be ignored even when corrections arising from HO operators are also present.
The rest of the paper is organised as follows. Section \ref{sec:lepto} briefly outlines the process of calculating the baryon asymmetry of the universe $\eta$ arising from leptogenesis. Then in section \ref{sec:formdom}, the idea of FD is introduced and it is shown that the CP violating parameter in leptogenesis is indeed zero under the condition of FD. Section \ref{sec:AFModel} introduces the Altarelli-Feruglio $A_4$ model of TB neutrino mixing, while section \ref{sec:tri} introduces the parameters of the $A_4$ model of TM mixing. In section \ref{sec:RGE} we analytically estimate the RG running of the neutrino Yukawa matrices in leading log approximation. Numerical results for the baryon asymmetry of the universe arising from leptogenesis in both TB and TM models are presented in section \ref{sec:results} including contour plots of input parameters reproducing the physical value of $\eta$. Section \ref{sec:conclusion} concludes the paper.
\section{Leptogenesis}
\label{sec:lepto}
Leptogenesis takes advantage of the heavy right-handed neutrinos introduced in many models to account for the smallness of the left-handed neutrino mass. The addition of these right-handed neutrino fields $N_i$ introduces two new terms into the superpotential
\begin{equation}
\label{eqn:heavyseesaw}
W_{\nu}=\left(Y_{\nu}\right)_{\alpha i}\left(l_{\alpha}\cdot H_u\right)N_i+\frac{1}{2}N_i\left(M_{RR}\right)_{ij}N_{j},
\end{equation}
which then lead to an effective light neutrino mass once the heavy degrees of freedom are integrated out. In \eqref{eqn:heavyseesaw}, $N_i$ are the heavy right-handed neutrinos with Majorana mass matrix $M_{RR}$; $l_{\alpha}$ are the lepton doublets and $H_u$ the (hypercharge $+1/2$) Higgs doublet, which interact with $N_i$ through the Yukawa couplings $\left(Y_{\nu}\right)_{\alpha i}$.
These interactions also fulfil the well known Sakharov conditions \cite{Sakharov:1967dj} required to generate a baryon asymmetry: 1) C and CP violation (coming from the complex Yukawa coupling); 2) $B$ violation (the Majorana mass of $N$s violates $L$; sphalerons convert $\sim \frac{1}{3}$ of this into $B$ violation); 3) Departure from thermal equilibrium (due to out-of-equilibrium decays of the right-handed neutrinos). The procedure for calculation of this asymmetry is first to calculate the amount of CP violation in the decays of the right-handed neutrinos. This is then used as an input parameter to find the $B-L$ asymmetry through integration of the Boltzmann equations \cite{Buchmuller:2004nz}. These equations take into account the evolution of a $B-L$ asymmetry generated by $N$ decays against the background of
$N$ inverse decays partially washing it out. Finally, this $B-L$ asymmetry is converted into a $B$ asymmetry using previously calculated results for sphaleron processes \cite{Khlebnikov:1988sr,Harvey:1990qw}.
\subsection{Unflavoured asymmetry}
To one-loop order, the CP asymmetry arises from the interference of the
diagrams in Fig.~\ref{fig:lepto}.
\begin{figure*}
\centerline{
\mbox{\includegraphics[width=6in]{CPdiags.eps}}
}
\caption{Diagrams contributing to the CP violating parameter
$\epsilon_{i,\alpha i}$; it is the interference of $(a)$ with $(b)$ and $(c)$ which gives rise to non-zero $\epsilon_{i,\alpha i}$. Lines labelled $N$ can be any one of the seesaw particles.}
\label{fig:lepto}
\end{figure*}
Using the standard supersymmetric Feynman rules, one can calculate the decay
widths for the decay $N_i\rightarrow l_{\alpha}+H_u$,
$\Gamma_i=\sum_{\alpha}\Gamma_{\alpha i}$; these are then used to find the CP
asymmetry for $N_i$ by summing over all lepton flavours $\alpha$ \cite{Covi:Lepto},
\begin{equation}
\label{eqn:cp}
\epsilon_i=\frac{\Gamma_i-\overline{\Gamma}_i}{\Gamma_i+\overline{\Gamma}_i}=\frac{1}{8\pi\left(Y_{\nu}^{\dagger}Y_{\nu}\right)_{ii}}\sum_{j\neq i}\mathrm{Im}\left(\left(Y_{\nu}^{\dagger}Y_{\nu}\right)_{ij}^2\right)f\left(\frac{M_j^2}{M_i^2}\right).
\end{equation}
Here, $M_i$ are the real mass eigenvalues of $M_{RR}$, and \cite{Choubey:2010vs,Antusch:2006cw,Davidson:2008bu}
\begin{equation}
f(x_{ij})=f_{ij}=\sqrt{x_{ij}}\left(\frac{2}{1-x_{ij}}-\ln\left(\frac{1+x_{ij}}{x_{ij}}\right)\right),
\end{equation}
with $x_{ij}=\frac{M_j^2}{M_i^2}$, is the loop factor. Note that $\epsilon_i$ is summed over all flavours of the outgoing lepton and is called the \emph{unflavoured} asymmetry.
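As an illustration, Eq.~\eqref{eqn:cp} is straightforward to evaluate numerically. The sketch below (function names are ours, and all parameter values are illustrative) confirms that any Yukawa matrix whose columns are proportional to the columns of a unitary matrix gives $\epsilon_i=0$ exactly, anticipating the form dominance discussion of section \ref{sec:formdom}:

```python
import numpy as np

def loop_f(x):
    """Loop factor f(x_ij) appearing in the CP asymmetry."""
    return np.sqrt(x) * (2.0 / (1.0 - x) - np.log((1.0 + x) / x))

def epsilon_unflavoured(Y, M):
    """Unflavoured CP asymmetries epsilon_i, one per right-handed neutrino N_i."""
    H = Y.conj().T @ Y                      # (Y_nu^dagger Y_nu)
    n = len(M)
    eps = np.zeros(n)
    for i in range(n):
        s = 0.0
        for j in range(n):
            if j != i:
                s += np.imag(H[i, j] ** 2) * loop_f((M[j] / M[i]) ** 2)
        eps[i] = s / (8.0 * np.pi * H[i, i].real)
    return eps
```

For a Yukawa matrix with mutually orthogonal columns, $H$ is diagonal and every $\epsilon_i$ vanishes identically, whatever the right-handed masses.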
\subsection{Flavoured asymmetry}
\label{sub:flavoured}
The above discussion and formula for $\epsilon_i$ are relevant when the lepton doublets produced are a coherent superposition of the three flavours. This is only the case above a certain energy, when the expansion rate of the universe is greater than all charged lepton interaction rates. However, as the universe cools, the $\tau$ lepton Yukawa coupling will start to come into equilibrium at an energy of around \cite{Antusch:2006cw} $\left(1+\tan^2\beta\right)\times10^{12}$ GeV,\footnote{$\tan\beta$ here is the ratio of MSSM Higgs VEVs, and will be defined algebraically in section \ref{sec:AFModel}.} breaking the coherence of the single-state superposition $e+\mu + \tau$ down into two states: the $\tau$ and the remaining coherent combination $e+\mu$. Thus, if the dynamics of leptogenesis occur below this temperature,\footnote{Strictly speaking the $\tau$ interaction rate must be faster than the $N$ inverse decay rate to overcome the Quantum Zeno effect \cite{Blanchet:2006ch}, but this is a small effect and beyond the scope of this paper.} one should take such differences into account in the calculations. The CP parameter taking into account such flavour effects is \cite{Choubey:2010vs,Antusch:2006cw,Davidson:2008bu}
\begin{equation}
\label{eqn:flavcp}
\epsilon_{\alpha i}=\frac{1}{8\pi\left(Y^{\dagger}_{\nu}Y_{\nu}\right)_{ii}}\sum_{j\neq i}\left(\mathrm{Im}\left(Y^{*}_{\alpha i}Y_{\alpha j}(Y^{\dagger}_{\nu}Y_{\nu})_{ij}\right)f(x_{ij})+\mathrm{Im}\left(Y^{*}_{\alpha i}Y_{\alpha j}(Y^{\dagger}_{\nu}Y_{\nu})_{ji}\right)g(x_{ij})\right),
\end{equation}
with $g(x_{ij})=g_{ij}=\frac{1}{(1-x_{ij})}$ and $f_{ij}$ as above.
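Summing Eq.~\eqref{eqn:flavcp} over the flavour index $\alpha$ recovers the unflavoured asymmetry of Eq.~\eqref{eqn:cp}, since $\sum_\alpha Y^*_{\alpha i}Y_{\alpha j}=(Y^{\dagger}_{\nu}Y_{\nu})_{ij}$ and the $g$-term then contributes $\mathrm{Im}\,|(Y^{\dagger}_{\nu}Y_{\nu})_{ij}|^2=0$. This consistency check can be coded directly (a sketch; names ours, values illustrative):

```python
import numpy as np

def epsilon_flavoured(Y, M):
    """Flavoured asymmetries epsilon_{alpha i}; row = flavour alpha, column = N_i."""
    H = Y.conj().T @ Y
    n = len(M)
    eps = np.zeros((n, n))
    for i in range(n):
        for a in range(n):
            s = 0.0
            for j in range(n):
                if j == i:
                    continue
                x = (M[j] / M[i]) ** 2
                f = np.sqrt(x) * (2.0 / (1.0 - x) - np.log((1.0 + x) / x))
                g = 1.0 / (1.0 - x)
                s += np.imag(Y[a, i].conj() * Y[a, j] * H[i, j]) * f
                s += np.imag(Y[a, i].conj() * Y[a, j] * H[j, i]) * g
            eps[a, i] = s / (8.0 * np.pi * H[i, i].real)
    return eps
```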
\subsection{Final asymmetry}
\label{sub:asymmetry}
Ultimately we want to estimate a value for the baryon to photon ratio at recombination; this is related to the $B-L$ asymmetry $N_{B-L}$ at the leptogenesis scale by \cite{DiBari:2004en}
\begin{equation}
\eta=0.89\times10^{-2}N_{B-L}.
\end{equation}
The numerical coefficient above has two contributions: 1) from the $B-L$ conserving sphaleron processes (which are only $\sim33\%$ efficient at converting $B-L$ into $B$); 2) from scaling by photon number density in the relevant comoving volume (recall that we are calculating the baryon to photon ratio at recombination). The sphalerons convert part of the $L$ asymmetry into a $B$ asymmetry via a suppressed dimension $18$ operator active at the energies we consider, $\gg M_{EW}$. The CP asymmetries calculated in the previous section are then related to $N_{B-L}$ via
\begin{equation}
\label{eqn:washout}
N_{B-L}=\sum_{\alpha, i}\epsilon_{\alpha i}\kappa_{\alpha i},
\end{equation}
which defines the efficiency parameters $\kappa_{\alpha i}$; these encode how efficiently the decays of $N$ produce a $B-L$ asymmetry at the leptogenesis scale. In the strong washout regime, the $\kappa_{\alpha i}$ are approximated analytically by (up to superpartner effects which increase $N_{B-L}$ by a factor of $\sqrt{2}$; see, for example, \cite{DiBari:2004en}):
\begin{equation}
\kappa_{\alpha i}\approx\frac{2}{K_{\alpha i}z_B(K_{\alpha i})}\left(1-\exp{\left(-\frac{1}{2}K_{\alpha i}z_B(K_{\alpha i})\right)}\right),
\end{equation}
with
\begin{equation}
z_B(K_{\alpha i})\approx 2+4(K_{\alpha i})^{0.13}\exp{\left(-\frac{2.5}{K_{\alpha i}}\right)},
\end{equation}
the decay parameter
\begin{equation}
K_{\alpha i}=\frac{\widetilde{m}_{\alpha i}}{m^*_{MSSM}},
\end{equation}
and effective neutrino mass
\begin{equation}
\widetilde{m}_{\alpha i}=\left(Y^{\dagger}_{\nu}\right)_{i \alpha}\left(Y_{\nu}\right)_{\alpha i}\frac{v_u^2}{M_i}.
\end{equation}
The $\widetilde{m}_{\alpha i}$ are model specific and are presented below for
the model in question (in Table~\ref{tab:Flavoured}), while
$m^*_{MSSM}=1.58\times10^{-3}\sin^2\beta$ eV \cite{Antusch:2006cw} is the
equilibrium neutrino mass. The up-type Higgs VEV is denoted by $v_u$.
The main point to address is then the form that the Yukawa matrices take. This is discussed in the context of family symmetries \cite{S3-L,Dn-L,A4-L,A4-L-Altarelli,S4-L,A5-L,Dn-LQ,A4-LQ,King:2007A4,S4-LQsum,PSL-LQ,Z7Z3-LQ,delta27-LQ,delta27-LQ-Dterms,SO3-LQ,SU3-LQ,Cooper:A4typeIII,Reviews} which we turn to now.
\section{Form dominance}
\label{sec:formdom}
In order to explain the observed pattern of neutrino mixing, many models
invoke the idea that a high energy family symmetry unifying the three flavours
is spontaneously broken in a specific way that leaves some imprint in the
neutrino sector at low energies. This method introduces relationships between
the parameters of $Y_{\nu}$ leading to predictions for $\epsilon_{\alpha i}$
and $\epsilon_{i}$. As discussed in the introduction, it is a striking fact
that many of these family symmetry models exhibit a property known as FD \cite{Chen:2009um}, which constrains the CP violating parameter of leptogenesis to be identically zero \cite{Choubey:2010vs}, as we now discuss.
The FD \cite{Chen:2009um} condition is that the columns of $Y_{\nu}$
in Eq. \eqref{eqn:heavyseesaw} are proportional to the columns of $U$,
\begin{equation}
\label{eqn:form}
A_i=\alpha U_{i1},\quad B_i=\beta U_{i2}, \quad C_i=\gamma U_{i3},
\end{equation}
where $U$ is the unitary Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix which is parameterised by three mixing angles and three complex phases (one Dirac and two Majorana).
The consequence of FD for leptogenesis is then very simple to understand: since $U$ is unitary, the columns of $Y_{\nu}$ must be mutually orthogonal. This means that the contractions $(Y^{\dagger}_{\nu}Y_{\nu})_{ij}$,
with $i\neq j$, appearing in Eqs \eqref{eqn:cp} and \eqref{eqn:flavcp} are identically zero, and so leptogenesis gives $\eta=0$.
The FD condition also greatly simplifies the form of the effective neutrino
mass matrix arising from the type I seesaw formula.
In terms of parameters in Eq. \eqref{eqn:heavyseesaw}, the effective neutrino mass matrix can be written,
\begin{equation}
\label{eqn:neutrinomass}
m_{\nu}=-v_u^2Y_{\nu}M_{RR}^{-1}Y_{\nu}^T.
\end{equation}
In the basis where the right-handed neutrinos are diagonal, i.e. that in which $M_{RR}=\mathrm{diag}\left(M_A,M_B,M_C\right)$, and writing $Y_{\nu}=\left(A,B,C\right)$, Eq. \eqref{eqn:neutrinomass} gives
\begin{equation}
\label{eqn:columns}
m_{\nu}=-v_u^2\left(\frac{AA^T}{M_A}+\frac{BB^T}{M_B}+\frac{CC^T}{M_C}\right).
\end{equation}
In the charged lepton diagonal basis, $m_{\nu}$ is diagonalised by $U$.
Assuming FD, $m_{\nu}$ is diagonalisable independently of the parameters $\alpha,\beta,\gamma$, and, from
(\ref{eqn:columns}) and (\ref{eqn:form}), one finds
\begin{equation}
\label{eqn:lhnmass}
m_{\nu}^{diag}=v_u^2\mathrm{diag}\left(\frac{\alpha^2}{M_A},\frac{\beta^2}{M_B},\frac{\gamma^2}{M_C}\right).
\end{equation}
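The FD diagonalisation property is easy to check numerically: for any unitary $U$ and any $\alpha,\beta,\gamma$, the seesaw mass matrix built from Eq.~\eqref{eqn:form} is diagonalised by $U$ with entries $v_u^2\alpha^2/M_A$, etc. (up to the overall sign of Eq.~\eqref{eqn:columns}, which can be absorbed into the Majorana phases). A sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary stand-in for the PMNS matrix (QR of a random complex matrix).
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

alpha = 0.7 * np.exp(0.4j)              # illustrative complex proportionality constants
beta  = 0.3 * np.exp(-1.2j)
gamma = 1.1 * np.exp(2.0j)
MA, MB, MC = 1.0e12, 4.0e11, 1.0e11     # illustrative right-handed masses (GeV)
vu = 174.0                              # up-type Higgs VEV (GeV), large tan(beta)

# Form dominance: columns of Y_nu proportional to columns of U.
Y = np.column_stack([alpha * U[:, 0], beta * U[:, 1], gamma * U[:, 2]])

# Type I seesaw as in the displayed equation above.
mnu = -vu**2 * (np.outer(Y[:, 0], Y[:, 0]) / MA
                + np.outer(Y[:, 1], Y[:, 1]) / MB
                + np.outer(Y[:, 2], Y[:, 2]) / MC)

# U diagonalises m_nu independently of alpha, beta, gamma.
D = U.conj().T @ mnu @ U.conj()
```

The same setup also shows that $Y^{\dagger}_{\nu}Y_{\nu}$ is diagonal, which is the orthogonality at the root of vanishing leptogenesis.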
A particularly well studied case is that of TB mixing \cite{Harrison:2002tbm}. However, as emphasised in \cite{King:2010bk}, TB mixing is not linked to FD. Indeed we shall study two $A_4$ family symmetry models, one with TB mixing and one with TM mixing, where FD is present in both cases, leading to zero leptogenesis at LO, before RG corrections are included.
\section{Parameters of the ${\boldsymbol{A_4}}$ model of TB mixing}
\label{sec:AFModel}
We first consider the parameters of the Altarelli-Feruglio $A_4$ model of TB mixing
with renormalisable neutrino Dirac coupling \cite{A4-L-Altarelli},
\begin{equation}
\label{eqn:Altarellinu}
W_{\nu}=y(lN)H_u+(x_A\xi+\widetilde{x}_A\widetilde{\xi})(NN)+x_B(\varphi_SNN),
\end{equation}
where, under $A_4$, the conventionally defined fields transform as
$N\sim {\bf 3}$, $l\sim {\bf 3}$, $\varphi_S\sim {\bf 3}$, while $H_u\sim {\bf
1}$ and $\xi , \tilde{\xi} \sim {\bf 1}$; the $x_i$ are constant complex parameters.
Since this model is well known, we refer the reader to \cite{A4-L-Altarelli} for more details.
The charged lepton mass matrix in the basis used in \cite{A4-L-Altarelli} is diagonal so the mixing structure in the neutrino sector will not receive corrections from charged lepton rotations.
The TB structure in the neutrino sector arises from the flavon fields obtaining vacuum expectation values (VEVs) in particular directions,
\begin{equation}
\vev{\varphi_S}=v_s\begin{pmatrix}
1 \\
1 \\
1
\end{pmatrix} , \;\vev{\xi}=u \quad \mathrm{and} \quad\vev{\widetilde\xi}=0,
\end{equation}
where the dynamics responsible for vacuum alignment has been extensively studied (for instance, in \cite{A4-L-Altarelli} for $F$-term alignment or in \cite{King:2007A4} for $D$-term alignment).
The TB structure arises in the Majorana sector of Eq. \eqref{eqn:Altarellinu}, explicitly
\begin{equation}
\label{eqn:AFMaj}
M_{RR}=\begin{pmatrix}
a+\frac{2b}{3} & -\frac{b}{3} & -\frac{b}{3} \\
-\frac{b}{3} & \frac{2b}{3} & a-\frac{b}{3} \\
-\frac{b}{3} & a-\frac{b}{3} & \frac{2b}{3}
\end{pmatrix},
\end{equation}
where we define $a=2x_Au$ and $b=2x_Bv_s$ as complex parameters with phases $\phi_{a}$ and $\phi_{b}$. For the purposes of leptogenesis it is convenient to rotate the $N$ such that their mass matrix is diagonal.
The resulting neutrino Yukawa matrix in the diagonal $N$ basis is then,
\begin{equation}
\label{eqn:Altyuk}
Y_{TB}=y\begin{pmatrix}
\frac{-2}{\sqrt{6}}e^{i\phi_A} & \frac{1}{\sqrt{3}}e^{i\phi_B} & 0 \\
\frac{1}{\sqrt{6}}e^{i\phi_A} & \frac{1}{\sqrt{3}}e^{i\phi_B} & \frac{-1}{\sqrt{2}}e^{i\phi_C} \\
\frac{1}{\sqrt{6}}e^{i\phi_A} & \frac{1}{\sqrt{3}}e^{i\phi_B} & \frac{1}{\sqrt{2}}e^{i\phi_C}
\end{pmatrix}.
\end{equation}
One can see explicitly that FD is present in this model, since the columns of $Y_{TB}$ are manifestly proportional to the columns of the TB mixing matrix,
and thus it immediately follows that $\epsilon_i=\epsilon_{\alpha i}=0$ at the scale of $A_4$ breaking.
The phases defined in \eqref{eqn:Altyuk} are given as,
\begin{align}
\label{eqn:phases}
\phi_A&=-\frac{1}{2}\left(\phi_{b}+\tan^{-1}\left(\frac{-|a|\sin\left(\phi_{b}-\phi_a\right)}{|b|+|a|\cos\left(\phi_{b}-\phi_a\right)}\right)\right), \\
\phi_B&=-\frac{1}{2}\phi_{a}, \\
\phi_C&=-\frac{1}{2}\left(\phi_{b}+\tan^{-1}\left(\frac{|a|\sin\left(\phi_{b}-\phi_a\right)}{|b|-|a|\cos\left(\phi_{b}-\phi_a\right)}\right)\right).
\end{align}
Therefore, there are actually only two phases ($\phi_{a}$ and $\phi_b$) and
two magnitudes ($|a|$ and $|b|$)
in the model, although
only phase differences appear when considering physical quantities. This means that we may set one phase to zero without loss of generality; for our calculations we choose $\phi_a=0$.
In this basis, the Majorana neutrino mass matrix is real and diagonal and is given by
\begin{equation}
\label{eqn:majmass}
M_{RR}^{diag}=\mathrm{diag}\left(M_1,M_2,M_3\right)=\begin{pmatrix}
\left|a+b\right| & 0 & 0 \\
0 & \left|a\right| & 0 \\
0 & 0 & \left|-a+b\right|
\end{pmatrix}.
\end{equation}
The effective left-handed neutrino masses are then given by\footnote{Note that in this paper we consider a normal ordering of light neutrino masses, therefore $M_1$ is the heaviest right-handed neutrino mass. This means that $\epsilon_{3}$ and $\epsilon_{\alpha 3}$ will be dominant contributions to leptogenesis, coming from the lightest right-handed neutrino. This is simply a notational consideration, and does not affect the physics.}
\begin{equation}
\label{eqn:effective}
m_i=\frac{y_{\beta}^2v^2}{M_i},
\end{equation}
which incorporates the SUSY parameter $\tan\beta$
\begin{equation}
\label{eqn:beta}
y_{\beta}=y\sin{\beta}\;,\quad\;\tan\beta=\frac{v_u}{v_d}\;,\quad\;v=\sqrt{v_u^2+v_d^2}=\sqrt{\left\langle H_u\right\rangle^2+\left\langle H_d\right\rangle^2}\approx174\;\mathrm{GeV}.
\end{equation}
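As a consistency check on Eqs \eqref{eqn:AFMaj} and \eqref{eqn:majmass}, the eigenvalues of $M_{RR}$ are $a+b$, $a$ and $-a+b$ for arbitrary complex $a$ and $b$, which is precisely why the TB rotation diagonalises it. The sketch below verifies this for illustrative parameter values:

```python
import numpy as np

a = 0.8 * np.exp(0.6j)    # illustrative complex parameters a = 2 x_A u, b = 2 x_B v_s
b = 1.3 * np.exp(-0.9j)

MRR = np.array([[a + 2*b/3, -b/3,      -b/3    ],
                [-b/3,       2*b/3,     a - b/3],
                [-b/3,       a - b/3,   2*b/3  ]])

eig = np.linalg.eigvals(MRR)   # should reproduce {a + b, a, -a + b}
```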
\section{Parameters of the ${\boldsymbol{A_4}}$ model of TM mixing}
\label{sec:tri}
In light of results from T2K \cite{Abe:2011sj} indicating a sizeable reactor angle, models predicting TB mixing can potentially be ruled out. Instead, schemes such as TM mixing remain viable \cite{Haba:2006dz}:
\begin{equation}
\label{eqn:tm}
U_{TM}=\begin{pmatrix}
\frac{2}{\sqrt{6}}\cos\theta & \frac{1}{\sqrt{3}} & \frac{2}{\sqrt{6}}\sin\theta e^{i\rho} \\
-\frac{1}{\sqrt{6}}\cos\theta-\frac{1}{\sqrt{2}}\sin\theta e^{-i\rho} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}}\cos\theta-\frac{1}{\sqrt{6}}\sin\theta e^{i\rho} \\
-\frac{1}{\sqrt{6}}\cos\theta+\frac{1}{\sqrt{2}}\sin\theta e^{-i\rho} & \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}}\cos\theta-\frac{1}{\sqrt{6}}\sin\theta e^{i\rho}
\end{pmatrix}.
\end{equation}
Here $\frac{2}{\sqrt{6}}\sin\theta=\sin\theta_{13}$ and $\rho$ is related to the Dirac phase. It is possible to minimally extend the Altarelli-Feruglio model above by adding a flavon in the ${\bf 1'}$ representation of $A_4$ which reproduces this pattern \cite{King:2011zj}:
\begin{equation}
\label{eqn:oneprime}
W_{{\bf 1'}}=x_C\xi'NN ,
\end{equation}
where we define the complex parameter $c=x_C\left\langle\xi'\right\rangle$,
with phase $\phi_c$. It has been shown in \cite{King:2011zj} that the addition of this flavon does not affect the right-handed neutrino masses to first order, and so the parameters in common with the previous section will be unaffected. Analogously to Eq.~(\ref{eqn:Altyuk}), in the basis where charged leptons are diagonal and right-handed neutrinos are real and diagonal, the Yukawa matrix for TM mixing is,
\begin{equation}
\label{eqn:TMYuk}
Y_{TM}\!\!=y\!\!\begin{pmatrix}
\frac{2}{\sqrt{6}} \! & \!\! \frac{1}{\sqrt{3}} \! & \!\! \frac{2}{\sqrt{6}}\alpha_{13}^* \! \\
-\frac{1}{\sqrt{6}}-\frac{1}{\sqrt{2}}\alpha_{13} \! & \!\!
\frac{1}{\sqrt{3}} \! & \!\! \frac{1}{\sqrt{2}}
-\frac{1}{\sqrt{6}}\alpha_{13}^* \! \\
-\frac{1}{\sqrt{6}}+\frac{1}{\sqrt{2}}\alpha_{13} \! & \!\!
\frac{1}{\sqrt{3}} \! & \!\! -\frac{1}{\sqrt{2}}
-\frac{1}{\sqrt{6}}\alpha_{13}^* \!
\end{pmatrix}
\!\!\begin{pmatrix}
\exp\left(i\phi_A\right) \!\! & \!\! 0 \!\! & \!\! 0 \! \\
0 \!\! & \!\! \exp\left(i\phi_B\right) \!\! & \!\! 0 \! \\
0 \!\! & \!\! 0 \!\! & \!\! \exp\left(i\phi_C\right) \!
\end{pmatrix},
\end{equation}
where the $\phi_{A,B,C}$ are as in Eq. \eqref{eqn:phases}. We can see that the columns of this matrix are proportional to columns of $U_{TM}$ and therefore the model respects FD.
Therefore, as for the previous model of TB mixing, this model of TM mixing
also gives zero leptogenesis and $\eta=0$, to leading order.
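One can verify numerically that the columns of $Y_{TM}$ in Eq.~\eqref{eqn:TMYuk} are exactly mutually orthogonal for any complex $\alpha_{13}$, so that FD, and hence vanishing leptogenesis at LO, survives the addition of the ${\bf 1'}$ flavon. A sketch with illustrative values of $y$, $\alpha_{13}$ and the phases:

```python
import numpy as np

y = 0.5
a13 = 0.15 * np.exp(0.7j)                       # illustrative alpha_13
s6, s3, s2 = np.sqrt(6), np.sqrt(3), np.sqrt(2)

YTM = y * np.array([
    [ 2/s6,                1/s3,  2/s6 * a13.conjugate()    ],
    [-1/s6 - a13/s2,       1/s3,  1/s2 - a13.conjugate()/s6 ],
    [-1/s6 + a13/s2,       1/s3, -1/s2 - a13.conjugate()/s6 ],
]) @ np.diag(np.exp(1j * np.array([0.3, -0.5, 1.7])))   # phases phi_A, phi_B, phi_C

H = YTM.conj().T @ YTM   # diagonal if and only if the columns are orthogonal
```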
The parameter $\alpha_{13}$ measures the deviation from TB mixing and is given
by \cite{King:2011zj}
\begin{equation}
\label{eqn:alpha}
\alpha_{13}=\frac{\sqrt{3}}{2}\left(\Re{\frac{c}{2\left(a-\frac{c}{2}\right)}}+\Im{\frac{c}{2\left(a-\frac{c}{2}\right)}}\frac{\Im{\frac{b}{a-\frac{c}{2}}}}{\Re{\frac{b}{a-\frac{c}{2}}}}-i\frac{\Im{\frac{c}{2\left(a-\frac{c}{2}\right)}}}{\Re{\frac{b}{a-\frac{c}{2}}}}\right).
\end{equation}
\section{Renormalisation group evolution of the Yukawa couplings}
\label{sec:RGE}
In order to generate a non-zero $\epsilon_{\alpha i}$ and $\epsilon_{i}$, we now consider the effects of running the neutrino Yukawa couplings from the scale at which $A_4$ is broken down to the scale at which leptogenesis takes place. At one-loop, the RG equation for the neutrino Yukawa couplings in the MSSM above the scale of right-handed neutrino masses is given by \cite{King:2000hk,Chung:2003fi},
\begin{equation}
\label{eqn:RGE}
\frac{\mathrm{d}Y_{\nu}}{\mathrm{d}t}=\frac{1}{16\pi^2}\left[N_l\cdot Y_{\nu}+Y_{\nu}\cdot N_{\nu}+\left(N_{H_u}\right)Y_{\nu}\right],
\end{equation}
where
\begin{align}
N_l&=Y_eY_e^{\dagger}+Y_{\nu}Y_{\nu}^{\dagger}-\left(\frac{3}{2}g_2^2+\frac{3}{10}g_1^2\right)\cdot I_3, \\ N_{\nu}&=2Y_{\nu}^{\dagger}Y_{\nu}, \\ N_{H_u}&=3\mathrm{Tr}\left(Y_u^{\dagger}Y_u\right)+\mathrm{Tr}\left(Y_{\nu}^{\dagger}Y_{\nu}\right)-\left(\frac{3}{2}g_2^2+\frac{3}{10}g_1^2\right).
\end{align}
In these equations,
$t=\log\left(\frac{Q_1}{Q_0}\right)$ with $Q_1$ being the renormalisation scale and $Q_0$ the family symmetry breaking scale; $Y_{e,u}$ are the charged lepton and up-type quark Yukawa couplings respectively; $g_{1,2}$ are the\footnote{Note that $g_1$ is the GUT normalised hypercharge coupling, related to the standard hypercharge coupling $g'$ by $g_1=\sqrt{\frac{5}{3}}g'$.} ${U}(1)_Y$ and ${SU}(2)_L$ gauge couplings respectively; and $I_3$ is the $3\times3$ identity matrix. Each $N_X$ arises from all one-loop insertions allowed by gauge symmetry on the $X$-leg of the vertex.
In the leading log approximation, replacing the continuous derivative by a single discrete step,
Eq.~(\ref{eqn:RGE}) may be approximated as:
\begin{equation}
\label{eqn:leadinglog}
\frac{\mathrm{d}Y_{\nu}}{\mathrm{d}t}\approx\frac{\Delta Y_{\nu}}{\Delta t}=\frac{Y_{\nu}(Q_0)-Y_{\nu}(Q_1)}{t({Q_0})-t({Q_1})}\equiv Z,
\end{equation}
yielding the solution,
\begin{equation}
\label{eqn:RGY}
Y_{\nu}(Q_1)\approx Y_{\nu}(Q_0)-Z\Delta t.
\end{equation}
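In code, the leading log solution amounts to a single Euler update in $t$; with $t(Q)=\log(Q/Q_0)$ as defined above, $\Delta t = t(Q_0)-t(Q_1)=\log(Q_0/Q_1)$. A minimal sketch:

```python
from math import log

def leading_log_step(Y_Q0, Z, Q0, Q1):
    """Eq. (RGY): Y(Q1) ~ Y(Q0) - Z * dt, with dt = t(Q0) - t(Q1) = log(Q0/Q1).
    Works for scalar Yukawas or (elementwise) for matrix-valued Y and Z."""
    dt = log(Q0 / Q1)
    return Y_Q0 - Z * dt
```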
As an example, we demonstrate the RG evolution of the TB Yukawa matrix in \eqref{eqn:Altyuk}
$Y_{\nu}=Y_{TB}$ (the case of $Y_{TM}$ is completely analogous).
Inserting \eqref{eqn:Altyuk} into \eqref{eqn:RGE} and using the third family approximation then gives
\begin{equation}
\begin{split}
\frac{\mathrm{d}Y_{TB}}{\mathrm{d}t}&\approx\frac{y}{16\pi^2}\left(\left(J\!+\!3\left|y\right|^2\right)\begin{pmatrix}
\frac{-2}{\sqrt{6}}e^{i\phi_A} & \frac{1}{\sqrt{3}}e^{i\phi_B} & 0 \\
\frac{1}{\sqrt{6}}e^{i\phi_A} & \frac{1}{\sqrt{3}}e^{i\phi_B} & \frac{-1}{\sqrt{2}}e^{i\phi_C} \\
\frac{1}{\sqrt{6}}e^{i\phi_A} & \frac{1}{\sqrt{3}}e^{i\phi_B} & \frac{1}{\sqrt{2}}e^{i\phi_C} \\
\end{pmatrix}\right. \\
&\left. +y_{\tau}^2\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & 0 \\
\frac{1}{\sqrt{6}}e^{i\phi_A} &
\frac{1}{\sqrt{3}}e^{i\phi_B} & \frac{1}{\sqrt{2}}e^{i\phi_C}
\end{pmatrix}\right)\equiv Z_{TB}
\end{split}
\label{RGTB}
\end{equation}
where $J=N_{H_u}-\left(\frac{3}{2}g_2^2+\frac{3}{10}g_1^2\right)$ and $y_{\tau}$ is the Yukawa coupling of the $\tau$ lepton. This shows that the contributions from the charged lepton Yukawa couplings break the orthogonality of the columns, since they appear only in the third component of each column. This is the
effect which gives rise to a non-zero CP violating parameter. The leading log solution for the TB case is then given by
\begin{equation}
\label{eqn:RGYTB}
Y_{TB}(Q_1)\approx Y_{TB}(Q_0)-Z_{TB}\Delta t.
\end{equation}
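One can verify numerically that it is the $y_\tau^2$ term that breaks the orthogonality of the columns. The sketch below builds the TB pattern and $Z_{TB}$ in the forms displayed in Eq.~(\ref{RGTB}); the value of $J$ and the scales used are illustrative placeholders, not computed quantities.

```python
import numpy as np

def Y_TB(y, phiA=0.0, phiB=0.0, phiC=0.0):
    """TB-form Yukawa matrix: columns A, B, C with the normalisation
    and phases appearing in Eq. (RGTB) (assumed form of Eq. (eqn:Altyuk))."""
    A = np.exp(1j * phiA) * np.array([-2.0, 1.0, 1.0]) / np.sqrt(6)
    B = np.exp(1j * phiB) * np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
    C = np.exp(1j * phiC) * np.array([0.0, -1.0, 1.0]) / np.sqrt(2)
    return y * np.column_stack([A, B, C])

def Z_TB(y, ytau, J, phiA=0.0, phiB=0.0, phiC=0.0):
    """Right-hand side of Eq. (RGTB); J is treated as an external input here."""
    M1 = Y_TB(1.0, phiA, phiB, phiC)      # the TB pattern with unit y
    M2 = np.zeros((3, 3), dtype=complex)
    M2[2, :] = M1[2, :]                   # tau-Yukawa piece: third row only
    return y / (16 * np.pi ** 2) * ((J + 3 * abs(y) ** 2) * M1 + ytau ** 2 * M2)
```

With $y_\tau=0$, $Z_{TB}$ is proportional to the TB pattern itself, so the evolved columns stay orthogonal; switching on $y_\tau$ generates a nonzero $A^\dagger B$ of order $y\,y_\tau^2\,\Delta t/16\pi^2$.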
\section{Results}
\label{sec:results}
\begin{table}
\centering
\begin{tabular}{|c||c|c|}
\hline
& Asymmetry & $\widetilde{m}_{\alpha i}$ \\ \hline\hline
$\epsilon_{\alpha 1}$ & $\frac{1}{8\pi A^{\dagger}A}\left[\mathrm{Im}\left(A^*_{\alpha}B_{\alpha}\left(A^{\dagger}B\right)\right)f_{12}+\mathrm{Im}\left(A^*_{\alpha}B_{\alpha}\left(B^{\dagger}A\right)\right)g_{12}\right.$ & $\frac{\left|A_{\alpha}\right|^2}{M_1}v_u^2$
\\
& $\quad\quad\;\,\left.+\mathrm{Im}\left(A^*_{\alpha}C_{\alpha}\left(A^{\dagger}C\right)\right)f_{13}+\mathrm{Im}\left(A^*_{\alpha}C_{\alpha}\left(C^{\dagger}A\right)\right)g_{13}\right]$ & \\ \hline
$\epsilon_{\alpha 2}$ & $\frac{1}{8\pi B^{\dagger}B}\left[\mathrm{Im}\left(B^*_{\alpha}A_{\alpha}\left(B^{\dagger}A\right)\right)f_{21}+\mathrm{Im}\left(B^*_{\alpha}A_{\alpha}\left(A^{\dagger}B\right)\right)g_{21}\right.$& $\frac{\left|B_{\alpha}\right|^2}{M_2}v_u^2$
\\
& $\quad\quad\;\,\left.+\mathrm{Im}\left(B^*_{\alpha}C_{\alpha}\left(B^{\dagger}C\right)\right)f_{23}+\mathrm{Im}\left(B^*_{\alpha}C_{\alpha}\left(C^{\dagger}B\right)\right)g_{23}\right]$ & \\ \hline
$\epsilon_{\alpha 3}$ & $\frac{1}{8\pi C^{\dagger}C}\left[\mathrm{Im}\left(C^*_{\alpha}A_{\alpha}\left(C^{\dagger}A\right)\right)f_{31}+\mathrm{Im}\left(C^*_{\alpha}A_{\alpha}\left(A^{\dagger}C\right)\right)g_{31}\right.$ & $\frac{\left|C_{\alpha}\right|^2}{M_3}v_u^2$
\\
& $\quad\quad\;\,\left.+\mathrm{Im}\left(C^*_{\alpha}B_{\alpha}\left(C^{\dagger}B\right)\right)f_{32}+\mathrm{Im}\left(C^*_{\alpha}B_{\alpha}\left(B^{\dagger}C\right)\right)g_{32}\right]$ & \\ \hline
\end{tabular}
\caption{Flavoured asymmetries and washout parameters.}
\label{tab:Flavoured}
\end{table}
In this section we present the results of our analysis for both the TB and TM models in leading log approximation.
The use of leading log approximation is justified by the small interval of energies over which the running takes place.
As before, one can represent the Yukawa matrix derived in \eqref{eqn:RGY} as $Y_{\nu}(Q_1)=\left(A(Q_1),B(Q_1),C(Q_1)\right)$, where $A(Q_1)$, $B(Q_1)$ and $C(Q_1)$ are the RG evolved versions of the column vectors in section \ref{sec:formdom}, which, as clearly seen in \eqref{RGTB} and \eqref{eqn:RGYTB}, are no longer orthogonal after RG corrections are included.
This allows us to write the flavoured asymmetries as in Table \ref{tab:Flavoured}.
Using Eq. \eqref{eqn:flavcp} one notices
immediately that $\epsilon_{13}=0$ since $C_1(Q_1)=0$.
We can see that the $\epsilon_{\alpha i}$ receive a correction from RG running since, e.g.
\begin{equation}
A(Q_1)^{\dagger}B(Q_1)=\left(A(Q_0)-\left(Z\Delta t\right)_{\alpha1}\right)^{\dagger}\left(B(Q_0)-\left(Z\Delta t\right)_{\alpha2}\right),
\end{equation}
where the leading term on the right-hand side vanishes since FD implies that $A(Q_0)$ and $B(Q_0)$ (and $C(Q_0)$) are orthogonal.
In order to progress further, one will need to insert specific values for the parameters in the matrix, which are model dependent. Here,
we make use of work presented in \cite{diBari:lepto} to fix the parameters consistently with experimental data. We take the leptogenesis
scale $Q_1$ to be approximately the seesaw scale, $Q_1\sim (1.74^2y^2)10^{14}$ GeV (using the basic seesaw formula).\footnote{This mass scale may look quite large especially when compared to the upper bound on the reheating temperature due to the
over-production of late-decaying gravitinos \cite{Khlopov:1984pf}. However, heavy gravitinos with masses
$m_{3/2}>40$ TeV, will decay before nucleosynthesis. Assuming dark matter to have a significant axion/axino component, then allows reheat temperatures to be sufficiently high to produce right-handed neutrinos of mass
$\sim 10^{14}\,\mathrm{GeV}$, as recently discussed in
e.g. \cite{Baer:2010kw}, \cite{Baer:2010gr} (and references therein).} This indicates that for small $y$ we are in the two flavour regime for $\tan \beta >10$; for larger values of $y$, $\tan\beta$ needs to be larger for us to be in the two flavour regime. However, in the forthcoming plots, the parts of the contours at large $y$ also correspond to larger $y_{\tau}$, and hence to sufficiently large $\tan\beta$. The family
symmetry scale is around an order of magnitude below the GUT scale, roughly $Q_0 \sim 1.5\times10^{15}$ GeV; and $y_t\sim1$. We then calculate the
asymmetry for $0<y<2\sqrt{\pi}$ (to keep the coupling perturbative) and $0<y_{\tau}<0.5$ (to remain within bounds for $\tan\beta$ \cite{Benjamin:2010xb}).
\begin{figure}[t]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=3.3in]{NO2flav1y.eps} &
\includegraphics[width=3.3in]{NO2flav1yt.eps} \\
\includegraphics[width=3.3in]{NO2flav2y.eps} &
\includegraphics[width=3.3in]{NO2flav2yt.eps} \\
\includegraphics[width=3.3in]{NO2flav3y.eps} &
\includegraphics[width=3.3in]{NO2flav3yt.eps}
\end{array}$
\end{center}
\caption{Flavoured asymmetries plotted against neutrino Yukawa $y$ and tau lepton Yukawa $y_{\tau}$ in the two flavour regime (i.e. $e-\mu$ and $\tau$) for the TB model. In the $y_{\tau}$ graphs, $y$ is fixed to be $3$, while in the $y$ graphs, $y_{\tau}$ is fixed to be $0.5$. $\epsilon_{e\mu, i}$ are black solid lines while $\epsilon_{\tau,i}$ are red dashed lines.}
\label{fig:flavouredgraphs}
\end{figure}
\subsection{TB mixing}
\label{sub:TB}
We now specialise to the case of RG improved leptogenesis in the TB model, where the
TB Yukawa couplings are given in (\ref{eqn:RGYTB}), repeated below,
\begin{equation}
\label{eqn:RGYTB2}
Y_{TB}(Q_1)\approx Y_{TB}(Q_0)-Z_{TB}\Delta t.
\end{equation}
The results for the flavoured asymmetries versus $y$ and $y_{\tau}$ are presented in Fig.~\ref{fig:flavouredgraphs}, in the two-flavour regime. It can be seen that the contributions from $\epsilon_{\alpha 3}$ are the dominant ones, as expected.
\begin{figure}[t]
\begin{center}
\includegraphics[width=6in]{contour2.jpg}
\end{center}
\caption{A plot showing the contours of the baryon to photon ratio $\eta$ in the tau Yukawa, $y_{\tau}$, versus neutrino Yukawa, $y$, plane. The dotted and dashed lines are $\eta=8.2\times10^{-10}$ and $\eta=4.2\times10^{-10}$ while the solid line is the measured value of $\eta=6.2\times10^{-10}.$}
\label{fig:expcontour}
\end{figure}
Following the procedure set out in section \ref{sub:asymmetry}, we then calculate the baryon to photon ratio $\eta$. Fig. \ref{fig:expcontour} displays the contour matching the experimentally measured value of $6.2\times10^{-10}$, along with two others, demonstrating the sensitivity of the required Yukawa couplings to the value of $\eta$. This shows that there is a definite range of Yukawa couplings for which a realistic matter-antimatter asymmetry can be obtained purely by considering RG evolution of the neutrino Yukawa matrix, without the need for any extra particles or HO operators to be considered.
\subsection{TM mixing}
\label{sub:TM}
We now perform a similar analysis in the TM model, using the RG improved Yukawa matrix
analogous to (\ref{eqn:RGYTB}), namely,
\begin{equation}
\label{eqn:RGYTM}
Y_{TM}(Q_1)\approx Y_{TM}(Q_0)-Z_{TM}\Delta t.
\end{equation}
where the high energy Yukawa matrix $Y_{TM}(Q_0)$ is given in (\ref{eqn:TMYuk}),
with $Z_{TM}$ analogous to (\ref{RGTB})
and otherwise assuming similar parameters to the case of TB mixing.
However one must choose the new complex parameter $c$ carefully in order to satisfy the relation \cite{King:2011zj}
\begin{equation}
\label{eqn:theta13}
\frac{\sqrt{6}}{2}\sin{\theta_{13}}=\left|\alpha_{13}\right|.
\end{equation}
We present flavoured asymmetries for $\theta_{13}=8\ensuremath{^\circ}$ (the current T2K central value \cite{Abe:2011sj}) and $\phi_c=0$ in Fig. \ref{fig:TMflavouredgraphs}. We also plot contours of $\eta=4.2\times10^{-10},\;6.2\times10^{-10},\;8.2\times10^{-10}$ for $\theta_{13}=8\ensuremath{^\circ}$ and $\eta=6.2\times10^{-10}$ for $\theta_{13}=0.1\ensuremath{^\circ},\;3\ensuremath{^\circ},\;6\ensuremath{^\circ},\;9\ensuremath{^\circ},\;12\ensuremath{^\circ}$; and for each value of $\theta_{13}$, we present four different choices of phase and modulus of $c$ which satisfy \eqref{eqn:theta13}. These can be seen in Figs \ref{fig:TMcontourscentral} and \ref{fig:TMcontoursoverlay}. For small $\theta_{13}$, and therefore small $c$, the results are very similar to those for TB mixing (cf. Fig. \ref{fig:expcontour} and the purple line in Fig. \ref{fig:TMcontoursoverlay}), which is expected since the only difference between the two models is the presence of the $\xi'$ flavon. For the larger values of $\theta_{13}$, it is clear that changing $c$ has a significant effect, as one can see from the variation of the contours in Fig. \ref{fig:TMcontoursoverlay}; for instance, the $12\ensuremath{^\circ}$ contour for a phase of $\phi_c=0.91$ rad does not appear anywhere in the displayed plane.
\begin{figure}[ht]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=3.3in]{TMe1vsy.eps} &
\includegraphics[width=3.3in]{TMe1vsyt.eps} \\
\includegraphics[width=3.3in]{TMe2vsy.eps} &
\includegraphics[width=3.3in]{TMe2vsyt.eps} \\
\includegraphics[width=3.3in]{TMe3vsy.eps} &
\includegraphics[width=3.3in]{TMe3vsyt.eps}
\end{array}$
\end{center}
\caption{Flavoured asymmetries plotted against neutrino Yukawa $y$ and tau
lepton Yukawa $y_{\tau}$ in the two flavour regime (i.e. $e-\mu$ and $\tau$)
for the TM model with $\theta_{13}=8\ensuremath{^\circ}$ and a real parameter $c=x_C
\langle \xi'\rangle$. In the $y_{\tau}$ graphs, $y$ is fixed to be $3$, while in the $y$ graphs, $y_{\tau}$ is fixed to be $0.5$. $\epsilon_{e\mu,i}$ are black solid lines while $\epsilon_{\tau,i}$ are red dashed lines.}
\label{fig:TMflavouredgraphs}
\end{figure}
\begin{figure}[ht]%
$\begin{array}{cc}
\subfloat[][$\phi_c=0 \; \mathrm{rad}$]{\includegraphics[width=3.3in]{TM8a.jpg}}%
&
\subfloat[][$\phi_c=0.91 \; \mathrm{rad}$]{\includegraphics[width=3.3in]{TM8b.jpg}}%
\\
\subfloat[][$\phi_c=3.28 \; \mathrm{rad}$]{\includegraphics[width=3.3in]{TM8c.jpg}}%
&
\subfloat[][$\phi_c=6 \; \mathrm{rad}$]{\includegraphics[width=3.3in]{TM8d.jpg}}%
\end{array}$
\caption{These plots show contours of the baryon to photon ratio $\eta$ from
the TM model with $\theta_{13}=8\ensuremath{^\circ}$ in the tau Yukawa, $y_{\tau}$,
versus neutrino Yukawa, $y$, plane. The dotted and dashed lines are
$\eta=8.2\times10^{-10}$ and $\eta=4.2\times10^{-10}$ while the solid line
is the measured value of $\eta=6.2\times10^{-10}$. Each plot is for a
different value of the phase of $c=x_C \langle \xi'\rangle$, given above the relevant panel.}
\label{fig:TMcontourscentral}%
\end{figure}
\begin{figure}[ht]%
$\begin{array}{cc}
\subfloat[][$\phi_c=0 \; \mathrm{rad}$]{\includegraphics[width=3.3in]{Overlay1.jpg}}%
&
\subfloat[][$\phi_c=0.91 \; \mathrm{rad}$]{\includegraphics[width=3.3in]{Overlay2.jpg}}%
\\
\subfloat[][$\phi_c=3.28 \; \mathrm{rad}$]{\includegraphics[width=3.3in]{Overlay3.jpg}}%
&
\subfloat[][$\phi_c=6 \; \mathrm{rad}$]{\includegraphics[width=3.3in]{Overlay4.jpg}}%
\end{array}$
\caption{These plots show contours of the baryon to photon ratio
$\eta=6.2\times10^{-10}$ from the TM model with
$\theta_{13}=0.1\ensuremath{^\circ},3\ensuremath{^\circ},6\ensuremath{^\circ},9\ensuremath{^\circ},12\ensuremath{^\circ}$ (purple, red,
yellow, green, blue respectively) in the tau Yukawa, $y_{\tau}$, versus
neutrino Yukawa, $y$, plane. Each plot is for a different value of the phase
of $c=x_C \langle\xi' \rangle$, given above the relevant panel. Note that the $\theta_{13}=12\ensuremath{^\circ}$ contour is not possible for
$\phi_c=0.91$ radians.}
\label{fig:TMcontoursoverlay}%
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper we have studied RG corrections relevant for leptogenesis in the case of
family symmetry models such as the Altarelli-Feruglio
$A_4$ model of tri-bimaximal lepton mixing or its extension to tri-maximal mixing.
Such corrections are particularly relevant since in large classes of family
symmetry models, to leading order, the CP violating parameters of leptogenesis
would be identically zero at the family symmetry breaking scale, due to the
form dominance property. We have used the third family approximation, keeping only the largest Yukawa couplings,
subject to the constraint of perturbativity. In addition,
the $\tau$ Yukawa coupling is related to the SUSY parameter $\tan\beta$, which is constrained experimentally.
Our results demonstrate that it is possible to obtain the observed value for
the baryon asymmetry of the Universe in models with FD by exploiting RG
running of the neutrino Yukawa matrix over the small energy interval between
the family symmetry breaking scale and the right-handed neutrino mass scale
$\sim 10^{14}\,\mathrm{GeV}$.
Of course, the importance of RG corrections applies more generally than to the particular models we have considered here for illustrative purposes, and the right-handed neutrino masses may be lower in some models.
In conclusion, the results in this paper show that RG corrections have a large impact on leptogenesis in any family symmetry models involving neutrino and charged lepton Yukawa couplings of order unity, even though the range of RG running between the flavour scale and the leptogenesis scale may be only one or two orders of magnitude in energy. Therefore, when considering leptogenesis in such models, RG corrections should not be ignored, even when corrections arising from HO operators are also present.
\section*{Acknowledgements}
We thank Pasquale Di Bari and David A. Jones for discussions regarding
leptogenesis calculations throughout this work. The authors acknowledge
support from the STFC Rolling Grant No. ST/G000557/1.
By a {\it generated} group $(G, S)$ we mean a group $G$ with a specific generating set $S$. In the case when $S$ is finite, we say that $(G, S)$ is a {\it finitely generated} group. Throughout this article, we assume that no generating set of a group contains the group identity. One of the most important metrics defined on a \mbox{generated} group is presented below.
\begin{definition}[Word metrics]
Let $(G, S)$ be a generated group. The {\it word metric} with respect to $S$, denoted by $d_W$, is defined on $G$ by
\begin{equation}
d_W(g, h) = \min{\cset{n\in\mathbb{N}}{g^{-1}h = s_1^{\epsilon_1}s_2^{\epsilon_2}\cdots s_n^{\epsilon_n}, s_i\in S, \epsilon_i\in\set{\pm 1}}}
\end{equation}
for all $g, h\in G$ with $g\ne h$ and $d_W(g, g) = 0$ for all $g\in G$.
\end{definition}
Let $(G, S)$ be a generated group. Recall that the {\it Cayley digraph} of $G$ with respect to $S$, denoted by $\dCay{G, S}$, is a directed graph (also called a digraph) such that
\begin{enumerate}[label=(\roman*)]
\item the vertex set is $G$ and
\item the set of arcs is $\cset{(g, gs)}{g\in G, s\in S}$.
\end{enumerate}
The (undirected) {\it Cayley graph} of $G$ with respect to $S$, denoted by $\Cay{G, S}$, is defined as the underlying graph of $\dCay{G, S}$; that is, the vertex sets of $\Cay{G, S}$ and $\dCay{G, S}$ are the same and $\set{g, h}$ is an edge in $\Cay{G, S}$ if and only if $(g, h)$ or $(h, g)$ is an arc in $\dCay{G, S}$. A strong connection between word metrics and Cayley graphs is reflected in the fact that the distance between two arbitrary points $g$ and $h$ in $G$ measured by the word metric coincides with the shortest length of a path joining vertices $g$ and $h$ in $\Cay{G, S}$.
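For a finite generated group, this coincidence gives a direct way to compute $d_W$: breadth-first search on $\Cay{G, S}$. The following sketch is illustrative only; it handles permutation groups, with each permutation encoded as a tuple $p$ where $p[i]$ is the image of $i$.

```python
from collections import deque

def word_metric(S, g, h):
    """d_W(g, h) for permutations, via BFS on the Cayley graph of <S>.
    Equals the word length of g^{-1} h in the generators S and their inverses."""
    n = len(g)
    def mul(a, b):                       # (a*b)(i) = a(b(i))
        return tuple(a[b[i]] for i in range(n))
    def inv(a):
        r = [0] * n
        for i, ai in enumerate(a):
            r[ai] = i
        return tuple(r)
    gens = set(S) | {inv(s) for s in S}  # allow exponents +1 and -1
    e = tuple(range(n))
    start = mul(inv(g), h)
    if start == e:
        return 0
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        x, d = queue.popleft()
        for s in gens:
            y = mul(x, s)
            if y == e:
                return d + 1
            if y not in seen:
                seen.add(y)
                queue.append((y, d + 1))
    return None                          # g^{-1} h is not in <S>
```

For instance, in $S_3$ with $S=\set{(1\ 2), (1\ 2\ 3)}$, the transposition $(1\ 3)$ has word length $2$, since $(1\ 3)=(1\ 2\ 3)(1\ 2)$ and $(1\ 3)$ is not itself a generator or an inverse of one.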
The word metric of a generated group leads to a {\it group-norm} \cite{NBAONVT2010}, which is a function analogous to a norm on a linear space. In fact, if $(G, S)$ is a generated group, then
\begin{equation}
\|g\|_W= \min{\cset{n\in\mathbb{N}}{g = s_1^{\epsilon_1}s_2^{\epsilon_2}\cdots s_n^{\epsilon_n}, s_i\in S, \epsilon_i\in\set{\pm 1}}}
\end{equation}
for all $g\ne e$ defines a group-norm on $G$. For basic knowledge of geometric group theory, we refer the reader to \cite{MR3729310}.
\section{Generated groups endowed with cardinal metrics}
Motivated by word metrics, we introduce a new metric on a group with a \mbox{specific} (finite or infinite) generating set. This metric enriches the group structure and so the group becomes a metric space with a norm-like function. We then examine its geometric structure. In particular, we show that large scale geometry of a certain finitely generated group is different when it is equipped with this new metric instead of the word metric. This enables us to investigate finitely generated groups from another point of view.
Let $G$ be a group and let $S$ be a subset of $G$. Recall that $G = \gen{S}$ if and only if every element of $G$ is of the form $s_1^{\epsilon_1}s_2^{\epsilon_2}\cdots s_n^{\epsilon_n}$, where $s_i\in S$ and $\epsilon_i\in\set{\pm 1}$ for all $i = 1,2,\ldots, n$. Let $(G, S)$ be a generated group. According to the well-ordering principle, we can define a function $\norm{\cdot}$ corresponding to $S$, called the {\it cardinal norm} on $G$, by
\begin{equation}\label{eqn: cardinal norm}
\norm{g} = \min\cset{\abs{A}}{A\subseteq S\textrm{ and }g\in \gen{A}}
\end{equation}
for all $g\in G$. For emphasis and clarity, we sometimes use the notation $\norm{\cdot}_S$. Note that the cardinal norm defined by \eqref{eqn: cardinal norm} does depend on a given generating set $S$. For example, consider the symmetric group $S_3$ and let $S = \set{(1\ 2), (1\ 2\ 3)}$ and $T = \set{(1\ 2), (1\ 3), (1\ 2\ 3)}$. Then $S$ and $T$ are generating sets of $S_3$. In this case, $\norm{(1\ 3)}_S = 2$, whereas $\norm{(1\ 3)}_T = 1$.
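The $S_3$ computation above can be checked mechanically by enumerating subsets of the generating set. The brute-force sketch below is illustrative only (permutation groups, elements encoded as tuples) and makes no attempt at efficiency.

```python
from itertools import combinations

def generated(A, n):
    """All elements of the subgroup of S_n generated by the permutations in A
    (naive closure under composition and inversion; fine for tiny examples)."""
    def mul(a, b):
        return tuple(a[b[i]] for i in range(n))
    def inv(a):
        r = [0] * n
        for i, ai in enumerate(a):
            r[ai] = i
        return tuple(r)
    gens = set(A) | {inv(a) for a in A}
    elems = {tuple(range(n))}
    changed = True
    while changed:
        changed = False
        for x in list(elems):
            for s in gens:
                y = mul(x, s)
                if y not in elems:
                    elems.add(y)
                    changed = True
    return elems

def cardinal_norm(g, S, n):
    """||g||_S: the least |A| over subsets A of S with g in <A>."""
    for k in range(len(S) + 1):
        for A in combinations(S, k):
            if g in generated(A, n):
                return k
    return None   # g is not in <S>
```

With $S=\set{(1\ 2),(1\ 2\ 3)}$ neither singleton subgroup contains $(1\ 3)$, so $\norm{(1\ 3)}_S=2$, whereas adding $(1\ 3)$ to the generating set brings the norm down to $1$.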
\begin{theorem}\label{thm: group-norm induced by cardinal metric}
Let $(G, S)$ be a generated group. The cardinal norm induced by $S$ is a group-norm; that is, it satisfies the following properties:
\begin{enumerate}
\item\label{item: positivity} $\norm{g}\geq 0$ for all $g\in G$ and $\norm{g} = 0$ if and only if $g$ is the identity of $G$;
\item\label{item: invariant under taking inverses} $\norm{g^{-1}} = \norm{g}$ for all $g\in G$;
\item\label{item: subadditivity} $\norm{gh}\leq \norm{g}+\norm{h}$ for all $g, h\in G$.
\end{enumerate}
\end{theorem}
\begin{proof}
The proofs of items \ref{item: positivity} and \ref{item: invariant under taking inverses} are straightforward. To prove item \ref{item: subadditivity}, let $g, h\in G$. Then $g\in \gen{T}$ and $h\in \gen{R}$, where $T$ and $R$ are subsets of $S$ such that $\norm{g} = \abs{T}$ and $\norm{h} = \abs{R}$. Note that $T\cup R$ is a finite subset of $S$ and that $gh\in\gen{T\cup R}$. Hence, $\norm{gh}\leq \abs{T\cup R}\leq \abs{T}+\abs{R} = \norm{g}+\norm{h}$.
\end{proof}
It follows from Theorem \ref{thm: group-norm induced by cardinal metric} that the cardinal norm of a generated group $G$ induces a metric given by
\begin{equation}\label{eqn: cardinal metric}
d_C(g, h) = \norm{g^{-1}h},\qquad g,h\in G,
\end{equation}
called the {\it cardinal metric}, and so $G$ becomes a metric space. This gives another way to define a geometric structure on a (finitely) generated group. One of the most important geometric structures defined on a finitely generated group is the word metric, of course. By definition, $d_C(g, h)$ equals the smallest cardinality of a subset $A$ of $S$ such that $g^{-1}h\in\gen{A}$, which justifies the use of the term \qt{cardinal}. It will be apparent that the structure of a generated group depends on its diameter with respect to the cardinal metric as well as the word metric.
\begin{remark}
Throughout the remainder of this article, the word and cardinal metrics mentioned in the same place are induced by the same given generating set unless stated otherwise.
\end{remark}
\begin{lemma}\label{lem: upper bound for cardinal metric}
If $(G, S)$ is a finitely generated group, then
\begin{equation}
d_C(g, h)\leq \abs{S}
\end{equation}
for all $g, h\in G$.
\end{lemma}
\begin{proof}
The lemma follows from the fact that $a\in\gen{S}$ for all $a\in G$.
\end{proof}
The following theorem shows that every finitely generated group has finite \mbox{diameter} with respect to the cardinal metric. Furthermore, there is an example of a generated group of infinite diameter.
\begin{theorem}\label{thm: sufficient condition to be finite diameter}
Let $(G, S)$ be a generated group. If $S$ contains a finite subset that generates $G$, then $(G, d_C)$ is of finite diameter.
\end{theorem}
\begin{proof}
Suppose that $T$ is a finite subset of $S$ such that $\gen{T} = G$. As in Lemma \ref{lem: upper bound for cardinal metric}, $d_C(g, h)\leq \abs{T}$ for all $g, h\in G$. Hence, $\sup{\cset{d_C(g,h)}{g, h\in G}} < \infty$ and so $\diam{G, d_C}$ is finite.
\end{proof}
The converse to Theorem \ref{thm: sufficient condition to be finite diameter} does not hold. For example, $(\mathbb{R}, \mathbb{R})$ is a generated group and $(\mathbb{R}, d_C)$ is of finite diameter since $d_C(x, y)\leq 1$ for all $x, y\in \mathbb{R}$. However, $\mathbb{R}$ does not contain a finite subset that generates $\mathbb{R}$.
\begin{example}
Let $F$ be a free abelian group with infinite countable basis $B = \cset{b_i}{i\in\mathbb{N}}$ (for example, $F$ can be chosen as the group of all functions from $\mathbb{N}$ to $\mathbb{Z}$ with finitely many nonzero values under pointwise addition). For each $n\in\mathbb{N}$, define $s_n = b_1+b_2+\cdots+b_n$. Then $s_n\in\gen{b_1, b_2,\ldots, b_n}$ and so $d_C(0, s_n)\leq n$.
Suppose to the contrary that $d_C(0, s_n) = m < n$. Then there are distinct elements $c_1, c_2, \ldots, c_m\in B$ such that $s_n\in\gen{c_1, c_2,\ldots, c_m}$. Hence,
$$
b_1+b_2+\cdots+b_n = k_1c_1 + k_2c_2+\cdots +k_mc_m
$$
for some $k_1, k_2,\ldots, k_m\in\mathbb{Z}$. Since $m<n$, the previous equation is a contradiction for $b_1, b_2,\ldots, b_n, c_1, c_2, \ldots, c_m$ are basis elements. This proves that $d_C(0, s_n) = n$. It follows that $\sup{\cset{d_C(x, y)}{x, y\in F}} = \infty$ and so $(F, d_C)$ is of infinite diameter.
\end{example}
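In this example the cardinal norm admits a closed form: since $B$ is a basis, $g\in\gen{A}$ for $A\subseteq B$ exactly when $A$ contains every basis element occurring in $g$ with a nonzero coefficient, so the minimum $\abs{A}$ is the number of nonzero coordinates of $g$. A one-line transcription, encoding elements as finite tuples of coefficients:

```python
def cardinal_norm_free_abelian(g):
    """||g|| in (F, B): the number of nonzero coordinates of g, since <A> for
    a subset A of the basis B consists of the integer combinations of A."""
    return sum(1 for coeff in g if coeff != 0)

def s(n):
    """s_n = b_1 + b_2 + ... + b_n, encoded as a coefficient tuple."""
    return (1,) * n
```

In particular $\norm{s_n}=n$, so the diameter of $(F, d_C)$ is infinite, as shown above.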
\subsection{Geometric structures}
Let $(G, S)$ be a generated group. It is not difficult to check that the following are classes of (surjective) isometries of $G$ with respect to the cardinal metric:
\begin{itemize}
\item the left multiplication maps $L_g\colon h\mapsto gh$;
\item the automorphisms $\tau$ of $G$ with the property that $\norm{\tau(g)} = \norm{g}$ for all $g\in G$;
\item the automorphisms $\tau$ of $G$ with the property that $\tau(S) = S$.
\end{itemize}
An immediate consequence of the previous result is that the space $(G, d_C)$ is homogeneous; that is, if $x$ and $y$ are arbitrary points of $G$, then there is an isometry $T$ of $(G, d_C)$ for which $T(x) = y$. In fact, $T = L_{yx^{-1}}$ is the desired isometry.
As mentioned previously, the cardinal metric depends on its generating set. However, in the case of finitely generated groups, the cardinal metrics are unique up to bi-Lipschitz equivalence, as shown in the following theorem.
\begin{theorem}
Let $G$ be a group with finite generating sets $S$ and $T$ and let $d_S$ and $d_T$ be the cardinal metrics induced by $S$ and $T$, respectively. Every injective map from $(G, d_T)$ to $(G, d_S)$ is bi-Lipschitz. In particular, every permutation of $G$ is a bi-Lipschitz equivalence and so $(G, d_S)$ and $(G, d_T)$ are bi-Lipschitz equivalent.
\end{theorem}
\begin{proof}
We may assume without loss of generality that $\abs{S}\leq \abs{T}$. Suppose that $f\colon G\to G$ is an injective map. Let $g, h\in G$ and let $g\ne h$. Then $f(g)\ne f(h)$ and so $d_S(f(g), f(h))>0$. By the defining property of $d_S$, $d_S(f(g), f(h))\geq 1$. By Lemma \ref{lem: upper bound for cardinal metric}, $d_T(g, h) < \abs{T}+1$. Hence, $\dfrac{1}{\abs{T}+1}d_T(g, h) < 1\leq d_S(f(g), f(h))$. By the same lemma, $d_S(f(g), f(h)) \leq \abs{S}\leq\abs{T} < (\abs{T}+1)d_T(g, h)$. This proves that
$$
\dfrac{1}{\abs{T}+1} d_T(g, h) \leq d_S(f(g), f(h)) \leq (\abs{T}+1)d_T(g, h)
$$
and so $f$ is bi-Lipschitz. The remaining part of the theorem is immediate.
\end{proof}
In some instances large scale geometry of a generated group is variant when its word metric is replaced by the cardinal metric, as we will see shortly. The next theorem gives a comparison between cardinal and word metrics.
\begin{theorem}\label{thm: comparison between word and cardinal metric}
Let $(G, S)$ be a generated group. Then
\begin{equation}\label{eqn: comparison of dW and dC}
d_C(g, h)\leq d_W(g, h)
\end{equation}
for all $g, h\in G$. Further, there is an example of a group such that the equality in \eqref{eqn: comparison of dW and dC} does not hold.
\end{theorem}
\begin{proof}
Let $g, h\in G$. If $g = h$, then $d_C(g, h) = 0 = d_W(g,h)$. Suppose that $g\ne h$ and that $d_W(g, h) = m$. Then there are elements $s_1, s_2, \ldots, s_m$ in $S$ such that $g^{-1}h = s_1^{\epsilon_1}s_2^{\epsilon_2}\cdots s_m^{\epsilon_m}$, where $\epsilon_i\in\set{\pm 1}$ for all $i = 1,2,\ldots, m$. Thus, $g^{-1}h\in \gen{s_1, s_2,\ldots, s_m}$ and so $\norm{g^{-1}h}\leq m$. Hence, $d_C(g, h)\leq d_W(g, h)$.
For the remaining part of the theorem, consider the additive group $\mathbb{Z}$. Since $\mathbb{Z} = \gen{1}$, it follows that $\norm{k} = 1$ for all nonzero $k\in\mathbb{Z}$. Hence, $d_C(m, n) \leq 1$ for all $m, n\in\mathbb{Z}$. If $-m + n\geq 2$, then $d_W(m, n)\geq 2$. This proves that $d_C(m, n)<d_W(m, n)$ whenever $-m+n\geq 2$.
\end{proof}
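The $\mathbb{Z}$ example in the proof is easy to transcribe: with $S=\set{1}$, the word metric is $\abs{n-m}$ while the cardinal metric only distinguishes equal from distinct elements.

```python
def d_W_Z(m, n):
    """Word metric on (Z, {1}): each generator step changes the element by 1."""
    return abs(n - m)

def d_C_Z(m, n):
    """Cardinal metric on (Z, {1}): every nonzero element already lies in <{1}>,
    so distinct elements are at distance exactly 1."""
    return 0 if m == n else 1
```

This exhibits the inequality $d_C\leq d_W$ of the theorem, with strict inequality whenever $\abs{n-m}\geq 2$.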
\begin{theorem}\label{thm: dW and dC, infinite diameter}
Let $(G, S)$ be a finitely generated group. If $(G, d_W)$ is of infinite diameter, then $(G, d_W)$ and $(G, d_C)$ are not quasi-isometric.
\end{theorem}
\begin{proof}
We show that there is no quasi-isometric embedding from $(G, d_W)$ to $(G, d_C)$. Let $T$ be a self-map of $G$. Let $K$ and $c$ be arbitrary positive constants. Since $\diam{G, d_W} =\sup{\cset{d_W(x, y)}{x, y\in G}}= \infty$ and $K(\abs{S}+c)$ is a constant, there must be points $g$ and $h$ in $G$ such that $d_W(g, h) > K(\abs{S}+c)$. It follows that $\dfrac{1}{K}d_W(g, h) - c > \abs{S}$. By Lemma \ref{lem: upper bound for cardinal metric}, $d_C(T(g), T(h)) \leq \abs{S}$. Hence, $$\dfrac{1}{K}d_W(g, h) - c > d_C(T(g), T(h)).$$ This proves that $T$ cannot define a quasi-isometric embedding and so $(G, d_W)$ and $(G, d_C)$ are not quasi-isometric.
\end{proof}
\begin{theorem}\label{thm: bilipschitz equivanet, depending on diameter}
Let $G$ be a finitely generated group.
\begin{enumerate}
\item\label{item: infinite diameter} If $(G, d_W)$ is of infinite diameter, then $(G, d_W)$ and $(G, d_C)$ are not bi-Lipschitz equivalent.
\item\label{item: finite diameter} If $(G, d_W)$ is of finite diameter, then $(G, d_W)$ and $(G, d_C)$ are bi-Lipschitz equivalent. Therefore, they are quasi-isometric.
\end{enumerate}
\end{theorem}
\begin{proof}\hfill
\eqref{item: infinite diameter} If $(G, d_W)$ is of infinite diameter, then by Theorem \ref{thm: dW and dC, infinite diameter}, $(G, d_W)$ and $(G, d_C)$ are not quasi-isometric and so they are not bi-Lipschitz equivalent.
\eqref{item: finite diameter} Suppose that $(G, d_W)$ is of finite diameter and let $T$ be an injective self-map of $G$. We claim that $T$ is a bi-Lipschitz embedding. Set $K = \diam{G, d_W}$. Using Theorem \ref{thm: comparison between word and cardinal metric}, we obtain
$$
\dfrac{1}{K}d_W(g, h)\leq d_C(T(g), T(h)) \leq Kd_W(g, h)
$$
for all $g, h\in G$. Hence, $T$ is bi-Lipschitz. This implies that every permutation of $G$ is a bi-Lipschitz equivalence between $(G, d_W)$ and $(G, d_C)$.
\end{proof}
By Theorem \ref{thm: bilipschitz equivanet, depending on diameter} \eqref{item: finite diameter}, if $(G, d_W)$ has finite diameter, then $(G, d_W)$ and $(G, d_C)$ are bi-Lipschitz equivalent. Therefore, the next obvious question is whether they are isometric. It turns out that they need not be isometric, in general. The following two examples support our claim.
\begin{example}\label{exm: isometric (G, dW) and (G, dC)}
Let $G$ be a finite group. Let $d_W$ and $d_C$ be the word and cardinal metrics with respect to $G$ itself. Note that $(G, d_W)$ is of finite diameter. In fact, if $g, h\in G$ with $g\ne h$, then $d_W(g, h) = 1$ since $\Cay{G, G}$ is a complete graph and so $\set{g,h}$ is an edge in $\Cay{G, G}$. This implies that the diameter of $(G,d_W)$ equals $1$. The identity map on $G$ is easily seen to be an isometry between $(G, d_W)$ and $(G, d_C)$.
\end{example}
\begin{example}\label{exm: not isometric (G, dW) and (G, dC)}
Let $G$ be a cyclic group of finite order $n \geq 4$ with a generator $a$. Let $d_W$ and $d_C$ be the word and cardinal metrics induced by $\set{a}$ and let $T$ be a bijection from $G$ to itself. We claim that $T$ cannot define an isometry between $(G, d_W)$ and $(G, d_C)$. Since $T$ is surjective, there are elements $g$ and $h$ of $G$ such that $T(g) = e$ and $T(h) = a^2$. Note that $d_W(T(g), T(h)) = d_W(e, a^2) = 2$ since $e^{-1}a^2 = a^2$ is a word of length $2$ ($a^2\ne e$, $a^2\ne a$, and $a^2\ne a^{-1}$). Moreover, $d_C(g, h) \leq 1$ since $g^{-1}h\in G = \gen{a}$. Hence, $d_W(T(g), T(h))\ne d_C(g, h)$ and so $T$ is not an isometry.
\end{example}
We close this section with a description of isometries between finitely \mbox{generated} groups equipped with word and cardinal metrics.
\begin{theorem}
Let $(G, S)$ be a finitely generated group. Every isometry from $(G, d_C)$ to $(G, d_W)$ is of the form
$L_a\circ\tilde{T}$, where $a\in G$ and $\tilde{T}\colon (G, d_C)\to (G, d_W)$ is an isometry that preserves the group identity and $\tilde{T}(S)\subseteq S\cup S^{-1}$. Furthermore, $\tilde{T}$ is a nonexpansive mapping on $(G, d_W)$.
\end{theorem}
\begin{proof}
Denote by $e$ the identity of $G$ and let $T\colon (G, d_C)\to (G, d_W)$ be an isometry. Define $\tilde{T} = L_{T(e)^{-1}}\circ T$. Then $\tilde{T}(e) = T(e)^{-1}T(e) = e$. Let $g, h\in G$. Since $L_{T(e)^{-1}}$ is an isometry of $(G, d_W)$, it follows that
$$
d_W(\tilde{T}(g), \tilde{T}(h)) = d_W(T(g), T(h)) = d_C(g, h).
$$
Hence, $\tilde{T}$ is an isometry from $(G, d_C)$ to $(G, d_W)$. Let $s\in S$. Then
$$
1 = d_C(e, s) = d_W(\tilde{T}(e), \tilde{T}(s)) = d_W(e, \tilde{T}(s)).
$$
This implies that $\tilde{T}(s)$ can be expressed as a word of length $1$; that is, $\tilde{T}(s) = t^{\epsilon}$ for some $t\in S$ and $\epsilon\in\set{\pm 1}$. This proves that $\tilde{T}(S)\subseteq S\cup S^{-1}$.
Let $g,h\in G$. By Theorem \ref{thm: comparison between word and cardinal metric},
$$
d_W(\tilde{T}(g), \tilde{T}(h)) = d_C(g, h) \leq d_W(g, h)
$$
and so $\tilde{T}$ is nonexpansive.
\end{proof}
\subsection{Topological structures}
Let $(G, S)$ be a generated group. It is clear that the distance between two arbitrary points in $G$ measured by the cardinal metric is a nonnegative integer and so the cardinal metric induces the {\it discrete} topology on $G$ (the same is true for the word metric). Actually, the open ball centered at $x$ of radius $1/2$ is the singleton set $\set{x}$. This implies that if $G$ is finite, then $(G, d_C)$ is compact and hence is complete and totally bounded. In contrast, if $G$ is infinite, then $(G, d_C)$ is neither compact nor totally bounded. Nevertheless, it is complete since any Cauchy sequence in $(G, d_C)$ must become eventually constant: taking $\varepsilon = 1/2$ in the Cauchy condition forces $d_C(x_n, x_m) = 0$, that is, $x_n = x_m$, for all sufficiently large $n$ and $m$. It is well known that any finitely generated group is countable. Therefore, if $G$ is a finitely generated group, then $(G, d_C)$ is separable since $G$ is a countable dense subset of itself. In this case, $(G, d_C)$ forms a {\it Polish} metric space \cite{GBPTC1991} (and even a Polish group).
\section{Isometries of cardinal metrics}
In this section, we give an alternative description of cardinal metrics by using Cayley color graphs. This leads to a remarkable connection between cardinal metrics and color-permuting automorphisms of Cayley graphs of generated groups. More precisely, cardinal metrics are invariant under color-permuting automorphisms (and hence also color-preserving automorphisms).
Let $(G, S)$ be a generated group. To the elements of $S$ we associate distinct colors, labeled by their names. The (right) {\it Cayley color digraph} of $G$ with \mbox{respect} to $S$, denoted by $\dcCay{G, S}$, is a digraph with $G$ as the vertex set and, for all $g, h\in G$, there is an arc from $g$ to $h$ if and only if $h = gc$ for some $c\in S$. In this case, we say that $(g, h)$ is an arc with color $c$. Recall that an (undirected) {\it path} from $g$ to $h$ in $\dcCay{G, S}$ is an alternating sequence of vertices and arcs, $g = g_0, e_1, g_1, e_2, g_2,\ldots, g_{n-1}, e_n, g_n = h$, such that $e_i\in\set{(g_{i-1}, g_i), (g_i, g_{i-1})}$ for all $i = 1,2,\ldots, n$.
\begin{theorem}[An alternative description of cardinal metrics]\label{thm characterization of cardinal metric}
Let $(G, S)$ be a generated group and let $g$ and $h$ be distinct elements of $G$. Then $d_C(g, h)$ equals the minimum number of colors associated with a path connecting $g$ and $h$ in $\dcCay{G, S}$.
\end{theorem}
\begin{proof}
Let $n$ be the minimum number of colors associated with a path connecting $g$ and $h$ in $\dcCay{G, S}$. Then there is a path $g = x_0, x_1,\ldots, x_m = h$ in $\dcCay{G, S}$ with $c_1, c_2,\ldots, c_n$ as its colors. It follows that $g^{-1}h = a_1^{\epsilon_1}a_2^{\epsilon_2}\cdots a_m^{\epsilon_m}$, where $a_i\in\set{c_1, c_2, \ldots, c_n}$ and $\epsilon_i\in\set{\pm 1}$ for all $i = 1,2,\ldots, m$. This implies that $g^{-1}h\in\gen{c_1, c_2,\ldots, c_n}$. Hence, $d_C(g, h)\leq n$.
By definition, there is a subset $T$ of $S$ with $\abs{T} = d_C(g,h)$ such that $g^{-1}h\in\gen{T}$. It follows that $g^{-1}h = b_1^{\epsilon_1}b_2^{\epsilon_2}\cdots b_k^{\epsilon_k}$, where $b_i\in T$ and $\epsilon_i\in\set{\pm 1}$ for all $i = 1,2,\ldots, k$. Further, we may assume that $b_1^{\epsilon_1}b_2^{\epsilon_2}\cdots b_k^{\epsilon_k}$ does not contain a subword equal to the group identity. Define $y_0 = g$ and $y_i = y_{i-1}b_i^{\epsilon_i}$. Then $g = y_0, y_1, y_2,\ldots, y_k = h$ is a path connecting $g$ and $h$ in $\dcCay{G, S}$ and so the \mbox{number} of colors associated with this path, say $\ell$, does not exceed $\abs{T}$. It follows from the minimality of $n$ that $n\leq \ell\leq \abs{T} = d_C(g, h)$. Thus, $d_C(g, h)=n$.
\end{proof}
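As a simple illustration of Theorem \ref{thm characterization of cardinal metric}, consider $G = \mathbb{Z}$ (written additively) with the generating set $S = \set{2, 3}$. Since $1\notin\gen{2}$ and $1\notin\gen{3}$, while $1 = 3 - 2\in\gen{2, 3}$, we have $d_C(0, 1) = 2$. Accordingly, the path $0, (0, 3), 3, (1, 3), 1$ in $\dcCay{\mathbb{Z}, S}$ connects $0$ and $1$ using the two colors $3$ and $2$ (the second arc being traversed against its direction), whereas no path using a single color can join $0$ and $1$: a path colored only by $2$ visits even integers only, and a path colored only by $3$ visits only integers congruent to its starting vertex modulo $3$.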
Let $(G, S)$ be a generated group. Denote by $\aut{\dcCay{G, S}}$ the group of graph automorphisms of $\dcCay{G, S}$.
\begin{definition}[Color-permuting automorphisms, \cite{MR3546658}]\label{def: color-permuting automorphisms}
Let $(G, S)$ be a generated group. A map $\alpha$ in $\aut{\dcCay{G, S}}$ is called a {\it color-permuting} automorphism of $\dcCay{G, S}$ if there exists a permutation $\sigma\in\sym{S}$ such that $(g, h)$ has color $c$ if and only if $(\alpha(g), \alpha(h))$ has color $\sigma(c)$ for all $g, h$ in $G$.
\end{definition}
A natural characterization of color-permuting automorphisms of $\dcCay{G, S}$ is presented in the following theorem.
\begin{theorem}[p. 66, \cite{MR1206550}]\label{thm: characterization of color-permuting automorphism}
Let $(G, S)$ be a generated group and let $\alpha$ be an automorphism of $\dcCay{G, S}$. Then $\alpha$ is a color-permuting automorphism of $\dcCay{G, S}$ if and only if there exists a permutation $\sigma\in\sym{S}$ such that $\alpha(gc) = \alpha(g)\sigma(c)$
for all $g\in G$ and $c\in S$.
\end{theorem}
Clearly, the color-permuting automorphisms of $\dcCay{G, S}$ corresponding to the identity permutation of $S$ are precisely the {\it color-preserving} automorphisms of $\dcCay{G, S}$. It is a standard result in graph theory that the group of all color-preserving automorphisms of $\dcCay{G, S}$, denoted by $\caut{\dcCay{G, S}}$, is isomorphic to $G$ \cite{RFHVG1939}. In fact,
\begin{equation}
\caut{\dcCay{G, S}} = \cset{L_a}{a\in G},\qquad L_a\colon g\mapsto ag,
\end{equation}
and $\cset{L_a}{a\in G}$ is isomorphic to $G$ by the famous Cayley theorem in abstract algebra; see, for instance, Theorem 7.12 of \cite{MR2014620}.
From the characterization of a cardinal metric described in Theorem \ref{thm characterization of cardinal metric}, we immediately obtain a class of isometries of $(G, d_C)$:
\begin{theorem}
If $(G, S)$ is a generated group, then the color-preserving automorphisms of $\dcCay{G, S}$ are isometries of $G$ with respect to the cardinal metric.
\end{theorem}
Denote by $\paut{\dcCay{G, S}}$ the group of {\it color-permuting} automorphisms of $\dcCay{G, S}$. The well-known characterization of $\paut{\dcCay{G, S}}$ is given by
\begin{equation}
\paut{\dcCay{G, S}} = \cset{L_a\circ \tau}{a\in G, \tau\in\aut{G, S}},
\end{equation}
where $\aut{G, S}$ denotes the group of automorphisms $\tau$ of $G$ such that $\tau(S) = S$; see, for instance, \cite[Lemma 2.1]{MR1206550}. In view of Theorem \ref{thm characterization of cardinal metric}, we obtain another class of isometries of $(G, d_C)$.
\begin{theorem}
If $(G, S)$ is a generated group, then the color-permuting automorphisms of $\dcCay{G, S}$ are isometries of $G$ with respect to the cardinal metric.
\end{theorem}
\begin{corollary}\label{cor: class of isometry La and automorphism}
If $(G, S)$ is a generated group, then
\begin{equation}
\cset{L_a\circ \tau}{a\in G, \tau\in\aut{G, S}}\subseteq\Iso{G, d_C}.
\end{equation}
\end{corollary}
In general, the inclusion in Corollary \ref{cor: class of isometry La and automorphism} is proper. For instance, if $G = \mathbb{Z}$, the additive infinite cyclic group, then there exists an isometry of $(\mathbb{Z}, d_C)$ that is not an automorphism of $\mathbb{Z}$. In fact, define a map $T$ by $T(2) = 3, T(3) = 2$, and $T(x) = x$ for all $x\in\mathbb{Z}\setminus\set{2, 3}$. It is clear that $T$ is a bijection from $\mathbb{Z}$ to itself. If $x = y$, then $T(x) = T(y)$ and so
$$
d_C(T(x), T(y)) = 0 = d_C(x, y).
$$
If $x\ne y$, then $T(x)\ne T(y)$ and so
$$
d_C(T(x), T(y)) = 1 = d_C(x, y).
$$
This proves that $T$ is an isometry of $(\mathbb{Z}, d_C)$. However, $T$ does not define an automorphism of $\mathbb{Z}$; for example, $T(4) = 4$, whereas $T(2)+T(2) = 6$.
\vspace{0.3cm}
\noindent{\bf Acknowledgements.} This work was financially supported by the Research Center in Mathematics and Applied Mathematics, Chiang Mai University. The author would like to thank the referee for comments that improve Examples \ref{exm: isometric (G, dW) and (G, dC)} and \ref{exm: not isometric (G, dW) and (G, dC)}.
\bibliographystyle{amsplain}\addcontentsline{toc}{section}{References}
\section{Abstract}
The \textbf{stopp} \texttt{R} package deals with spatio-temporal point processes observed on the Euclidean plane or on a linear network, such as the roads of a city.
The package contains functions to summarize, plot, and perform different kinds of analyses on point processes, mainly following methods proposed in a recent stream of scientific literature. The main topics of such works, and of the package in turn, include
modelling, statistical inference, and simulation issues for spatio-temporal point processes on Euclidean space and linear networks, with a focus on their local characteristics. We contribute to the existing literature by collecting many of the most widespread methods for the analysis of spatio-temporal point processes into a unique package, which is intended to welcome many further proposals and extensions.
\section{Introduction}
Modelling real problems through space-time point processes is crucial in many scientific and engineering fields such as environmental sciences, meteorology, image analysis, seismology, astronomy, epidemiology and criminology.
The growing availability of data offers a challenging opportunity for scientific research, aiming at more detailed information through the application of statistical methodologies suitable for describing complex phenomena.
The aim of the present work is to contribute to the existing literature by gathering many of the most widespread methods for the analysis of spatio-temporal point processes into a unique package, which is intended to host many further extensions.
The \textbf{stopp} \citep{R} package provides codes, related to methods and models, for analysing complex spatio-temporal point processes, proposed in the papers \cite{siino2018joint,siino2018testing,adelfio2020some,dangelo2021assessing,dangelo2021local,d2022locally}.
The main topics include modelling, statistical inference, and simulation issues on spatial and spatio-temporal point processes, including point processes on linear networks and other non-Euclidean spaces.
The context of application is very broad, as the proposed methods are of interest in describing any phenomenon with a complex spatio-temporal dependence.
Some examples include seismic events \citep{dangelo2021locall}, GPS data \citep{dangelo2021inhomogeneous}, crimes \citep{dangelo2021self}, and traffic accidents.
Moreover, local methods and models can be applied to different scientific fields and could be suitable for all those phenomena for which it makes sense to hypothesize interdependence in space and time.
The main dependencies of the \textbf{stopp} package are \textbf{spatstat} \cite{spatstat}, \textbf{stpp} \cite{gabriel2009second}, and \textbf{stlnpp} \cite{moradi2020first}.
In the purely spatial context, \textbf{spatstat} is by far the most comprehensive open-source toolbox for analysing spatial point patterns, focused mainly on two-dimensional point patterns. We exploit many functions from this package when needing purely spatial tools while performing spatio-temporal analyses.
Turning to the spatio-temporal context, \textbf{stpp} represents the main reference of statistical tools for analyzing the global and local second-order properties of spatio-temporal point processes, including estimators of the space-time inhomogeneous $K$-function and pair correlation function. The package is documented in the paper \cite{gabriel:rowlingson:diggle:2013}.
While \textbf{stpp} allows for the simulation of Poisson, inhibitive and clustered patterns, the \textbf{stppSim} \citep{stppSim} package generates artificial spatio-temporal point patterns through the integration of microsimulation and agent-based models.
Moreover, \textbf{splancs} \citep{splancs} offers many tools for the analysis of both spatial and spatio-temporal point patterns \citep{rowlingson1993splancs,bivand2000implementing}.
Moving to spatio-temporal point patterns on linear networks, the package \textbf{stlnpp} provides tools to visualise and analyse such patterns using the first- and second-order summary statistics developed in \cite{moradi2020first,mateu2020spatio}.
Other worth-to-mention packages dealing with spatio-temporal point pattern analysis include \textbf{etasFLP} \cite{chiodi:adelfio:14}, mainly devoted to the estimation of the components of an ETAS (Epidemic Type Aftershock Sequence) model for earthquake description with the non-parametric background seismicity estimated through FLP (Forward Likelihood Predictive) \cite{adelfio2020including}, and
\textbf{SAPP},
the Institute of Statistical Mathematics package \citep{ogata2006timsac84,ogata2006statistical}, which provides functions for the statistical analysis of series of events and seismicity.
Finally, we highlight some \texttt{R} packages that implement routines to simulate and fit log-Gaussian Cox processes (LGCPs). In particular, the package \textbf{stpp} implements code to simulate spatio-temporal LGCPs with a separable or non-separable covariance
structure for the Gaussian Random Field (GRF). Instead, the package \textbf{lgcp} \cite{taylor:davies:barry:15} implements code to fit LGCP models using the method
of moments and Bayesian inference for spatial, spatio-temporal,
multivariate and aggregated point processes. Furthermore, the minimum contrast method is used to estimate parameters assuming a separable structure of the covariance of
the GRF. Neither package handles non-separable (and anisotropic)
correlation structures for the covariance of the GRF.
The outline of the paper is as follows.
First, we set the notation of spatio-temporal point processes, both occurring on Euclidean space and on linear networks. Then, we introduce the main functions for handling point processes objects, data, and simulations from different point process models. We then move to the Local Indicators of Spatio-Temporal Association functions, recalling their definition on the spatio-temporal Euclidean space and introducing the new functions to compute the LISTA functions on linear networks. Then, we illustrate how to perform a local test for assessing the local differences in two point patterns occurring on the same metric space. Hence, the functions available in the package for fitting models are illustrated, including separable Poisson process models on both the Euclidean space and networks, global and local non-separable inhomogeneous Poisson processes and LGCPs. Then, methods to perform global and local diagnostics on both models for point patterns on planar and linear network spaces are presented. The paper ends with some conclusions.
\section{Spatio-temporal point processes and their second-order properties}
\label{sec:stpp}
We consider a spatio-temporal point process with no multiple points as a random countable subset $X$ of $\mathbb{R}^2 \times \mathbb{R}$, where a point $(\textbf{u}, t) \in X$ corresponds to an event at $ \textbf{u} \in \mathbb{R}^2$ occurring at time $t \in \mathbb{R}$.
A typical realisation of a spatio-temporal point process $X$ on $\mathbb{R}^2 \times \mathbb{R}$ is a finite set $\{(\textbf{u}_i, t_i)\}^n_{
i=1}$ of distinct points within a
bounded spatio-temporal region $W \times T \subset \mathbb{R}^2 \times \mathbb{R}$, with area $\vert W\vert > 0$ and length $\vert T\vert > 0$, where $n \geq 0$ is not fixed in
advance.
In this context, $N(A \times B)$ denotes the number of points of a set $(A \times B) \cap X$, where $A \subseteq W$ and $B \subseteq T$. As usual \citep{daley:vere-jones:08}, when $N(W \times T) < \infty $ with probability 1, which holds e.g. if $X$ is defined on a bounded set, we call $X$ a finite spatio-temporal point process.
For a given event $(\textbf{u}, t)$, the events that are close to $(\textbf{u}, t)$ in both space and time, for each spatial distance $r$ and time lag $h$, are given by the corresponding spatio-temporal cylindrical neighbourhood of the event $(\textbf{u}, t)$, which can be expressed by the Cartesian product as
$$
b((\textbf{u}, t), r, h) = \{(\textbf{v}, s) : \vert \vert\textbf{u} - \textbf{v}\vert \vert \leq r, \vert t - s \vert \leq h\} , \quad \quad
(\textbf{u}, t), (\textbf{v}, s) \in W \times T,
$$
where $ \vert \vert \cdot \vert \vert$ denotes the Euclidean distance in $\mathbb{R}^2$. Note that $b((\textbf{u}, t), r, h)$ is a cylinder with centre $(\textbf{u}, t)$, radius $r$, and height $2h$.
Product densities $\lambda^{(k)}, k \in \mathbb{N} \text{ and } k \geq 1 $, arguably the main tools in the statistical analysis of point processes, may be defined through the so-called Campbell Theorem (see \cite{daley:vere-jones:08}), that constitutes an essential result in spatio-temporal point process theory, stating that, given a spatio-temporal point process $X$, for any non-negative function $f$ on $( \mathbb{R}^2 \times \mathbb{R} )^k$
\begin{equation*}
\mathbb{E} \Bigg[ \sum_{\zeta_1,\dots,\zeta_k \in X}^{\ne} f( \zeta_1,\dots,\zeta_k)\Bigg]=\int_{\mathbb{R}^2 \times \mathbb{R}} \dots \int_{\mathbb{R}^2 \times \mathbb{R}} f(\zeta_1,\dots,\zeta_k) \lambda^{(k)} (\zeta_1,\dots,\zeta_k) \prod_{i=1}^{k}\text{d}\zeta_i,
\label{eq:campbell0}
\end{equation*}
where $\neq$ indicates that the sum is over distinct values. In particular, for $k=1$ and $k=2$, these functions are respectively called the \textit{intensity function} $\lambda$ and the \textit{(second-order) product density} $\lambda^{(2)}$.
Broadly speaking, the intensity function describes the rate at which the events occur in the given spatio-temporal region, while the second-order product densities are used for describing spatio-temporal variability and correlations between pair of points of a pattern. They represent the point process analogues of the mean function and the covariance function of a real-valued process, respectively.
Then, the first-order intensity function is defined as
\begin{equation*}
\lambda(\textbf{u},t)=\lim_{\vert \text{d}\textbf{u} \times \text{d}t\vert \rightarrow 0} \frac{\mathbb{E}[N(\text{d}\textbf{u} \times \text{d}t )]}{\vert \text{d}\textbf{u} \times \text{d}t\vert },
\end{equation*}
where $\text{d}\textbf{u} \times \text{d}t $ defines a small region around the point $(\textbf{u},t)$ and $\vert \text{d}\textbf{u} \times \text{d}t\vert $ is its volume. The second-order intensity function is given by
\begin{equation*}
\lambda^{(2)}((\textbf{u},t),(\textbf{v},s))=\lim_{\vert \text{d}\textbf{u} \times \text{d}t\vert ,\vert \text{d}\textbf{v} \times \text{d}s\vert \rightarrow 0} \frac{\mathbb{E}[N(\text{d}\textbf{u} \times \text{d}t )N(\text{d}\textbf{v} \times \text{d}s )]}{\vert \text{d}\textbf{u} \times \text{d}t\vert \vert \text{d}\textbf{v} \times \text{d}s\vert }.
\end{equation*}
Finally, the pair correlation function
$g((\textbf{u},t),(\textbf{v},s))=\frac{ \lambda^{(2)}((\textbf{u},t),(\textbf{v},s))}{\lambda(\textbf{u},t)\lambda(\textbf{v},s)}$
can be interpreted formally as the standardised probability density that an event occurs in each of two small volumes, $\text{d}\textbf{u} \times \text{d}t$ and $\text{d}\textbf{v} \times \text{d}s$, in the sense that for a Poisson process, $g((\textbf{u},t),(\textbf{v},s))=1.$
In this package, the focus is on second-order characteristics of spatio-temporal point patterns, with an emphasis on the $K$-function \citep{ripley:76}.
This is a measure of the distribution of the inter-point distances and captures the spatio-temporal dependence of a point process.
A spatio-temporal point process is second-order intensity reweighted stationary and isotropic if its intensity function is bounded away from zero and its pair correlation function depends only on the spatio-temporal difference vector $(r,h)$, where $r= \vert \vert \textbf{u}-\textbf{v} \vert \vert $ and $h= \vert t-s \vert$ \citep{gabriel2009second}.
For a second-order intensity reweighted stationary, isotropic spatio-temporal point process, the space-time inhomogeneous $K$-function takes the form
\begin{equation}
K(r,h)=2 \pi \int_{0}^{h} \int_0^{r} g(r',h')r'\text{d}r'\text{d}h'
\end{equation}
where $g(r,h)=\lambda^{(2)}(r,h)/(\lambda(\textbf{u},t)\lambda(\textbf{v},s)), r=\vert \vert\textbf{u}-\textbf{v}\vert \vert,h= \vert t-s \vert$ \citep{gabriel2009second}.
The simplest expression of an estimator of the spatio-temporal $K$-function is given as
\begin{equation}
\hat{K}(r,h)=\frac{1}{\hat{\lambda}^{2} \vert W \vert \vert T \vert}\sum_{i=1}^n \sum_{j > i} I( \vert \vert \textbf{u}_i-\textbf{u}_j \vert \vert \leq r, \vert t_i-t_j \vert \leq h), \qquad \hat{\lambda}=\frac{n}{\vert W \vert \vert T \vert}.
\label{eq:k}
\end{equation}
For a homogeneous Poisson process $\mathbb{E}[\hat{K}(r,h)]=\pi r^2 h$, regardless of the intensity $\lambda$.
The $K$-function can be used as a measure of spatio-temporal clustering and interaction \citep{gabriel2009second,moller2012aspects}.
Usually, $\hat{K}(r,h)$ is compared with the theoretical $\mathbb{E}[\hat{K}(r,h)]=\pi r^2 h$. Values $\hat{K}(r,h) > \pi r^{2} h$ suggest clustering, while $\hat{K}(r,h) < \pi r^2 h$ points to a regular pattern.
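To fix ideas, the $K$-function estimator can be transcribed directly in base \texttt{R}; here it is normalised by the squared overall intensity estimate $\hat{\lambda}^{2}=(n/(\vert W\vert \vert T\vert))^2$, so that the output is directly comparable with the Poisson benchmark $\pi r^2 h$. This didactic sketch is not part of the package interface and ignores edge corrections.
\begin{example}
> Khat <- function(x, y, t, r, h, areaW, lenT) {
+   n <- length(x)
+   lambda2 <- (n / (areaW * lenT))^2   # squared overall intensity estimate
+   count <- 0
+   for (i in seq_len(n - 1)) {
+     for (j in (i + 1):n) {
+       close_s <- sqrt((x[i] - x[j])^2 + (y[i] - y[j])^2) <= r
+       close_t <- abs(t[i] - t[j]) <= h
+       if (close_s && close_t) count <- count + 1
+     }
+   }
+   count / (lambda2 * areaW * lenT)
+ }
\end{example}
Values of \texttt{Khat} larger than $\pi r^2 h$ suggest clustering at the scales $(r, h)$, while smaller values point to regularity.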
Point processes on linear networks have recently been considered for analysing events occurring on particular network structures, such as traffic accidents on a road network.
Spatial patterns of points along a network of lines are indeed found in many applications.
The network might reflect a map of railways, rivers, electrical wires, nerve fibres, airline routes,
irrigation canals, geological faults or soil cracks \citep{baddeley2020analysing}. Observations of interest could be the locations of
traffic accidents, bicycle incidents, vehicle thefts or street crimes, and many others.
%
A linear network $ L=\cup_{i=1}^{n} l_{i} \subset \mathbb{R}^{2} $ is commonly taken as a finite union of line segments $l_i\subset \mathbb{R}^{2}$ of positive length.
A line segment is defined as $l_i=[u_i,v_i]=\{ku_i+(1-k)v_i: 0 \leq k \leq 1\}$, where $u_i,v_i \in \mathbb{R}^2$ are the endpoints of $l_i$. For any $i \ne j$, the intersection of $l_i$ and $l_j$ is either empty or an endpoint of both segments.\\
A spatio-temporal linear network point process is a point process on the product space $L \times T$, where $L$ is a linear network and $T$ is a subset (interval) of $\mathbb{R}$.
We hereafter focus on a spatio-temporal point process $X$ on a linear network $L$ with no
overlapping points $(\textbf{u},t)$, where $\textbf{u} \in L$ is the location of an event and $t \in T$ ($T \subseteq \mathbb{R}^+$)
is the corresponding time occurrence of $\textbf{u}$. Note that the temporal state-space $T$ might be
either a continuous or a discrete set. A realisation of $X$ with $n$ points is represented by
$\textbf{x} = \{(\textbf{u}_i ,t_{i} ), i = 1,\dots,n\}$, where $(\textbf{u}_i ,t_{i} ) \in L \times T$.
A spatio-temporal disc with centre
$(\textbf{u},t) \in L \times T$, network radius $r > 0$ and temporal radius $h > 0$ is defined as
$b((\textbf{u},t ),r,h) = \{(\textbf{v},s ) : d_L (\textbf{u},\textbf{v}) \leq r , \vert t - s \vert \leq h\},
(\textbf{u}, t), (\textbf{v}, s) \in L \times T $
where $\vert \cdot\vert $ is a numerical distance, and $d_L(\cdot,\cdot)$ stands for the appropriate distance in the network, typically taken as the shortest-path distance between any two points. The cardinality of any subset $A \subseteq L \times T$, $N(X \cap A) \in
\{0,1,\dots\}$, is the number of points of $X$ restricted to $A$, whose expected value is denoted
by
$\nu(A) = \mathbb{E}[N(X \cap A)],
A \subseteq L \times T,$
where $\nu$, the intensity measure of $X$, is a locally finite product measure on $L\times T$ \citep{baddeley2006stochastic}.
We now recall Campbell's theorem for point processes on linear networks \citep{cronie2020inhomogeneous}.
Assuming that the product densities/intensity functions $\lambda^{(k)}$ exist, for any non-negative measurable function $f(\cdot)$ on the product space $L^k$, we have
\begin{equation}
\mathbb{E} \Bigg[ \sum_{\zeta_1,\dots,\zeta_k \in X}^{\ne} f( \zeta_1,\dots,\zeta_k)\Bigg]=\int_{L^k}
f(\zeta_1,\dots,\zeta_k) \lambda^{(k)} (\zeta_1,\dots,\zeta_k) \prod_{i=1}^{k}\text{d}\zeta_i.
\label{eq:campbelL}
\end{equation}
Assume that $X$ has an intensity function $\lambda(\cdot,\cdot)$, hence Equation \eqref{eq:campbelL} reduces to
$\mathbb{E}[N(X \cap A)] =\int_{A} \nu(d(\textbf{u},t )) =
\int_{A} \lambda(\textbf{u},t)d_2(\textbf{u},t), A \subseteq L \times T,$
where $d_2 (\textbf{u},t)$ corresponds to integration over $L \times T$.
%
The second-order Campbell's theorem is obtained from \eqref{eq:campbelL} with $k=2$
\begin{equation}
\mathbb{E} \Bigg[ \sum_{(\textbf{u},t),(\textbf{v},s)\in X}^{\ne} f\big((\textbf{u},t),(\textbf{v},s)\big)
\Bigg] =
\int_{L \times T} \int_{L \times T} f\big((\textbf{u},t),(\textbf{v},s)\big) \lambda^{(2)}\big((\textbf{u},t),(\textbf{v},s)\big)\text{d}_2(\textbf{u},t)\text{d}_2(\textbf{v},s).
\label{eq:campbell}
\end{equation}
Assuming that $X$ has a second-order product density function $\lambda^{(2)} (\cdot,\cdot)$, we then obtain
\begin{equation*}
\mathbb{E}[N(X \cap A)N(X \cap B)] =
\int_{A} \int_{B}
\lambda^{(2)} ((\textbf{u},t ),(\textbf{v},s ))d_2 (\textbf{u},t)d_2 (\textbf{v},s ), \quad A,B \subseteq L \times T.
\end{equation*}
Finally, an important result concerns the conversion of the integration over $L \times T$ to that over $\mathbb{R} \times \mathbb{R}$ \citep{rakshit2017second}.
For any measurable function $f: L \times T \rightarrow \mathbb{R}$
\begin{equation}
\int_{L \times T} f(\textbf{u},t)\text{d}_2(\textbf{u},t)=\int_0^{\infty} \int_0^{\infty} \sum_{\substack{ (\textbf{u},t)\in L \times T:\\
d_L(\textbf{u},\textbf{v})=r,\\
|t-s|=h }} f(\textbf{u},t) \text{d}r\text{d}h.
\label{eq:change}
\end{equation}
Letting $f(\textbf{u},t) = \eta(d_L(\textbf{u},\textbf{v}), \vert t-s\vert)$, we obtain
$$
\int_{L \times T} \eta(d_L(\textbf{u},\textbf{v}), \vert t-s\vert) \text{d}_2(\textbf{u},t)= \int_0^{\infty} \int_0^{\infty} \eta(r,h)M((\textbf{u},t),r,h)\text{d}r \text{d}h
$$
where $M((\textbf{u},t),r,h)$ is the number of points lying exactly at the shortest-path distance $r \geq 0$ and the time distance $h \geq 0$ away from $(\textbf{u},t)$.
\section{Main functions for handling point processes objects, data, and simulations}\label{sec:main}
The \texttt{stp} function creates a \texttt{stp} object as a dataframe with three columns: \texttt{x}, \texttt{y}, and \texttt{t}. If the linear network \texttt{L}, of class \texttt{linnet}, is also provided, a \texttt{stlp} object is created instead.
The methods for this class of objects: (1) print the main information on the spatio-temporal point pattern stored in the \texttt{stp} object: the number of points, the enclosing spatial window, the temporal time period; (2) print the summary statistics of the spatial and temporal coordinates of the spatio-temporal point pattern stored in the \texttt{stp} object; (3) plot the point pattern stored in the \texttt{stp} object given in input, in a three-panel plot representing the 3D plot of the coordinates, and the marginal spatial and temporal coordinates.
\begin{example}
> set.seed(12345)
> rpp1 <- stpp::rpp(lambda = 200, replace = FALSE)
> is.stp(rpp1)
[1] FALSE
> stp1 <- stp(cbind(rpp1$xyt[, 1], rpp1$xyt[, 2], rpp1$xyt[, 3]))
> is.stp(stp1)
[1] TRUE
> stp1
Spatio-temporal point pattern
208 points
Enclosing window: rectangle = [0.0011366, 0.9933775] x [0.0155277, 0.9960438] units
Time period: [0.004, 0.997]
\end{example}
Some functions are implemented to convert the \texttt{stp} and \texttt{stlp} classes to those of the \textbf{stpp} and \textbf{stlnpp} packages, and vice-versa.
Moreover, the package is furnished with the \texttt{greececatalog} dataset in the \texttt{stp} format, containing the catalog of
Greek earthquakes of magnitude at least 4.0 from year 2005 to year 2014,
analysed by means of local log-Gaussian Cox processes in \cite{dangelo2021locall}
and \cite{d2022locally}.
Data come from the Hellenic Unified Seismic Network (H.U.S.N.).
The same data have been analysed in \cite{siino2017spatial} by hybrids of Gibbs models,
and more recently by \cite{gabriel2022mapping}.
\begin{example}
> plot(greececatalog, tcum = TRUE)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{Art2.pdf}
\caption{Plots of Greek data.}
\label{fig:p2}
\end{figure}
A dataset of crimes that occurred in Valencia, Spain, in 2019 is available, together with the linear
network of class \texttt{linnet} of the Valencian roads, named \texttt{valenciacrimes} and \texttt{valencianet}, respectively.
Finally, the linear network of class \texttt{linnet} of the roads of Chicago (Illinois, USA), close to the University of Chicago, is also available.
It represents the linear network of the Chicago dataset published and analysed in \cite{ang2012geometrically}. The network adjacency
matrix is stored as a sparse matrix.
Moving to simulations, the \texttt{rstpp} function creates a \texttt{stp} object, simulating a spatio-temporal Poisson point pattern, following either a homogeneous or inhomogeneous intensity.
\begin{example}
> h1 <- rstpp(lambda = 500, nsim = 1, seed = 2, verbose = TRUE)
> plot(h1, tcum = TRUE)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{Art3.pdf}
\caption{Simulated homogeneous point pattern.}
\label{fig:p3}
\end{figure}
\begin{example}
> inh <- rstpp(lambda = function(x, y, t, a) {exp(a[1] + a[2]*x)}, par = c(2, 6),
nsim = 1, seed = 2, verbose = TRUE)
> plot(inh, tcum = TRUE)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{Art4.pdf}
\caption{Simulated inhomogeneous point pattern.}
\label{fig:p4}
\end{figure}
The \texttt{rstlpp} function creates a \texttt{stlp} object instead, simulating a spatio-temporal Poisson point pattern
on a linear network.
Furthermore, the \texttt{rETASp} function creates a \texttt{stp} object, simulating a spatio-temporal ETAS (Epidemic Type Aftershock Sequence) process.
It follows the generating scheme for simulating a pattern from an ETAS process \citep{ogata:1988likelihood} with conditional intensity function (CIF) as in \cite{adelfio2020including}.
The \texttt{rETAStlp} function creates a \texttt{stlp} object, simulating a spatio-temporal ETAS process on a linear network. The simulation scheme previously introduced is adapted for the space location of events to be constrained on a linear network, being firstly introduced and employed for simulation studies in \cite{dangelo2021assessing}.
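For reference, a typical parametrisation of the conditional intensity of a space-time ETAS process, which the cited works follow up to variations in the triggering components, is
$$
\lambda(\textbf{u}, t \mid \mathcal{H}_t)= \mu f(\textbf{u})+\sum_{t_j<t} \frac{\kappa_0 \, e^{\alpha (m_j-m_0)}}{(t-t_j+c)^p}\, \ell(\textbf{u}-\textbf{u}_j),
$$
where $\mathcal{H}_t$ denotes the past history of the process, $\mu$ and $f(\cdot)$ describe the large-scale background component, and the sum describes the clustered component triggered by the previous events $(\textbf{u}_j, t_j)$ with magnitudes $m_j$ above a threshold $m_0$, through an Omori-type temporal decay with parameters $c$ and $p$, a productivity term with parameters $\kappa_0$ and $\alpha$, and a spatial triggering density $\ell(\cdot)$.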
\section{Local Indicators of Spatio-Temporal Association functions}
\label{sec:lista}
Local Indicators of Spatio-Temporal Association (LISTA) are a set of functions that are individually associated with each one of the points of the point pattern, and can provide information about the local behaviour of the pattern.
This operational definition of local indicators was introduced by \cite{anselin:95} for the spatial case, and extended by \cite{siino2018testing} to the spatio-temporal context.\\
If $\lambda^{(2)i}(\cdot,\cdot)$ denotes the local version of the spatio-temporal product density for the event $(\textbf{u}_i,t_i)$,
then, for fixed $r$ and $h$, it holds that
\begin{equation}
\hat{\lambda}^{(2)}_{\epsilon,\delta}(r,h)=\frac{1}{n-1}\sum_{i=1}^n\hat{\lambda}^{(2)i}_{\epsilon,\delta}(r,h),
\label{eq:op}
\end{equation}
where $
\hat{\lambda}^{(2)i}_{\epsilon,\delta}(r,h)=\frac{n-1}{4\pi r \vert W \times T \vert}\sum_{j\ne i}\kappa_{\epsilon,\delta}( \vert \vert \textbf{u}_i-\textbf{v}_j \vert \vert -r, \vert t_i-s_j \vert -h),
$
with $r>\epsilon>0$ and $h>\delta>0$, and $\kappa$ a kernel function with spatial and temporal bandwidths $\epsilon$ and $\delta$, respectively.
Any second-order spatio-temporal summary statistic that satisfies the operational definition in \eqref{eq:op}, which means that the sum of spatio-temporal local indicator functions is proportional to the global statistic, can be called a LISTA statistic \citep{siino2018testing}.\\
In \cite{adelfio2020some}, local versions of both the homogeneous and inhomogeneous spatio-temporal $K$-functions on the Euclidean space are introduced.
Defining an estimator of the overall intensity by $\hat{\lambda}=n/(\vert W \vert \vert T \vert)$, they propose the local version of \eqref{eq:k} for the $i$-th event $(\textbf{u}_i,t_i)$
\begin{equation}
\hat{K}^i(r,h)=\frac{1}{\hat{\lambda}^2 \vert W \vert \vert T \vert}\sum_{(\textbf{u}_i,t_i)\ne (\textbf{v},s)} I( \vert \vert \textbf{u}_i-\textbf{v} \vert \vert\leq r,\vert t_i-s\vert \leq h)
\label{eq:kl}
\end{equation}
and the inhomogeneous version
\begin{equation}
\hat{K}^i_{I}(r,h)=\frac{1}{ \vert W \vert \vert T \vert}\sum_{(\textbf{u}_i,t_i)\ne (\textbf{v},s)} \frac{I(||\textbf{u}_i-\textbf{v} \vert \vert \leq r,\vert t_i-s\vert \leq h)}{\hat{\lambda}(\textbf{u}_i,t_i)\hat{\lambda}(\textbf{v},s)},
\label{eq:kinhl}
\end{equation}
with $(\textbf{v},s)$ being the spatial and temporal coordinates of any other point.
The authors extended the spatial weighting approach of \cite{veen2006assessing} to spatio-temporal local second-order statistics, showing that the inhomogeneous second-order statistics behave as the corresponding homogeneous ones, that is, that the expectation of both \eqref{eq:kl} and \eqref{eq:kinhl} equals $\pi r^2 h$.
\subsection{LISTA on linear networks}
The functions \texttt{localSTLKinhom} and \texttt{localSTLginhom} implement the inhomogeneous LISTA functions proposed in \cite{dangelo2021local}.
The \textit{local spatio-temporal inhomogeneous}
K-function for the $i$-th event $(\boldsymbol{u}_i,t_i)$ on a linear network
is $$\hat{K}^i_{L,I}(r,h)=\frac{1}{ \vert L \vert \vert T \vert}\sum_{(\boldsymbol{u}_i,t_i)\ne (\boldsymbol{v},s)} \frac{I\{ d_L(\boldsymbol{u}_i,\boldsymbol{v})<r,\vert t_i-s\vert <h\} }{\hat{\lambda}(\boldsymbol{u}_i,t_i)\hat{\lambda}(\boldsymbol{v},s)M((\boldsymbol{u}_i,t_i),d_L(\boldsymbol{u}_i,\boldsymbol{v}),\vert t_i-s\vert )},$$
and the corresponding \textit{local pair correlation function} (pcf)
$$\hat{g}^i_{L,I}(r,h)=\frac{1}{ \vert L \vert \vert T \vert}\sum_{(\boldsymbol{u}_i,t_i)\ne (\boldsymbol{v},s)} \frac{\kappa( d_L(\boldsymbol{u}_i,\boldsymbol{v})-r)\kappa(\vert t_i-s\vert -h) }{\hat{\lambda}(\boldsymbol{u}_i,t_i)\hat{\lambda}(\boldsymbol{v},s)M((\boldsymbol{u}_i,t_i),d_L(\boldsymbol{u}_i,\boldsymbol{v}),\vert t_i-s\vert )},$$
where
$$D(X) = \frac{n-1}{ \vert L \vert \vert T \vert}\sum_{i=1}^n\sum_{j \ne i}\frac{1}{\hat{\lambda}(\textbf{u}_i,t_i)\hat{\lambda}(\textbf{u}_j,t_j)}$$
is a normalization factor. This leads to the unbiased estimators $\frac{1}{D(X)}\hat{K}^i_{L,I}(r,h)$ and
$\frac{1}{D(X)}\hat{g}^i_{L,I}(r,h)$.
The homogeneous versions \citep{dangelo2021assessing} can be obtained by weighting the second-order
summary statistics (either K or pcf) by a constant intensity
$\hat{\lambda}=n/( \vert L \vert \vert T \vert)$, giving
$$\hat{K}_L^i(r,h)=\frac{1}{\hat{\lambda}^{2} \vert L \vert \vert T \vert}\sum_{(\boldsymbol{u}_i,t_i)\ne (\boldsymbol{v},s)} \frac{I\{ d_L(\boldsymbol{u}_i,\boldsymbol{v})<r,\vert t_i-s\vert <h\} }{M((\boldsymbol{u}_i,t_i),d_L(\boldsymbol{u}_i,\boldsymbol{v}),\vert t_i-s\vert )},$$
and
$$\hat{g}_L^i(r,h)=\frac{1}{\hat{\lambda}^{2} \vert L \vert \vert T \vert}\sum_{(\boldsymbol{u}_i,t_i)\ne (\boldsymbol{v},s)} \frac{\kappa( d_L(\boldsymbol{u}_i,\boldsymbol{v})-r)\kappa(\vert t_i-s\vert -h) }{M((\boldsymbol{u}_i,t_i),d_L(\boldsymbol{u}_i,\boldsymbol{v}),\vert t_i-s\vert )}.$$
These can be computed easily with the functions \texttt{localSTLKinhom} and \texttt{localSTLginhom}, by supplying a vector of constant intensity values, the same for each point.
The proposed functions are the local counterparts of \texttt{STLKinhom} and \texttt{STLginhom} by \cite{moradi2020first}, available in the \texttt{stlnpp} package \citep{stlnpp}.
\begin{example}
> set.seed(10)
> X <- stlnpp::rpoistlpp(.2, a = 0, b = 5, L = stlnpp::easynet)
> lambda <- density(X, at = "points")
> x <- as.stlp(X)
> k <- localSTLKinhom(x, lambda = lambda, normalize = TRUE)
## select an individual point
> j = 1
> k[[j]]
## plot the lista function and compare it with its theoretical value
> inhom <- list(x = k[[j]]$r, y = k[[j]]$t, z = k[[j]]$Kinhom)
> theo <- list(x = k[[j]]$r, y = k[[j]]$t, z = k[[j]]$Ktheo)
> diff <- list(x = k[[j]]$r, y = k[[j]]$t, z = k[[j]]$Kinhom - k[[j]]$Ktheo)
> oldpar <- par(no.readonly = TRUE)
> par(mfrow = c(1, 3))
> fields::image.plot(inhom, main= "Kinhom", col = hcl.colors(12, "YlOrRd", rev = FALSE),
xlab = "Spatial distance", ylab = "Temporal distance")
> fields::image.plot(theo, main = "Ktheo", col = hcl.colors(12, "YlOrRd", rev = FALSE),
xlab = "Spatial distance", ylab = "Temporal distance")
> fields::image.plot(diff, main = "Kinhom - Ktheo", col = hcl.colors(12, "YlOrRd", rev = FALSE),
xlab = "Spatial distance", ylab = "Temporal distance")
> par(oldpar)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.9\textwidth]{Art5.pdf}
\caption{Observed vs theoretical K-function.}
\label{fig:p5}
\end{figure}
\subsection{Local test for assessing the second-order differences between two point patterns}\label{sec:test}
The function \texttt{localtest} performs the permutation test of the local structure of spatio-temporal point pattern data, proposed in \cite{siino2018testing}.
The network counterpart is also implemented, following \cite{dangelo2021assessing}.
This test detects local differences in the second-order structure of two observed point patterns $\textbf{x}$ and $\textbf{z}$
occurring on the same space-time region.
This procedure was first introduced in \cite{moraga:montes:11} for the purely spatial case, and then extended to
the spatio-temporal context by \cite{siino2018testing}. Finally,
the test was made suitable also for spatio-temporal point patterns
whose spatial domain coincides with a linear network by \cite{dangelo2021assessing}.
In general, for each point $(\textbf{u},t)$ in the spatio-temporal observed
point pattern $\textbf{x}$, we test
$$
\begin{cases}
\mathcal{H}_{0}: & \text{no difference in the second-order local structure of } (\textbf{u},t) \quad \text{ w.r.t } \quad \{ \{ \textbf{x} \setminus (\textbf{u},t) \} \cup \textbf{z} \}\\
\mathcal{H}_{1}: & \text{significant difference in the second-order local structure of } (\textbf{u},t) \quad \text{ w.r.t } \quad \{ \{ \textbf{x} \setminus (\textbf{u},t) \} \cup \textbf{z} \}
\end{cases}$$
The sketch of the test is as follows:
\begin{enumerate}
\item Set $k$ as the number of permutations
\item For each point $(\textbf{u}_i,t_i) \in \textbf{x}, i = 1, \ldots, n$:
\begin{itemize}
\item Estimate the LISTA function $\hat{L}^{(i)}(r,h)$
\item Compute the local deviation test
$$T^i=\int_{0}^{t_0} \int_{0}^{r_0} \Big(
\hat{L}^{(i)}(r,h)- \hat{L}^{-(i)}_{H_0}(r,h)
\Big)^2 \text{d}r \text{d}h,$$
where $\hat{L}^{-(i)}_{H_0}(r,h)$
is the LISTA function for the $i^{th}$ point,
averaged over the $j=1,\dots,k$ permutations
\item Compute a $p$-value as
$p^i=\sum_{j=1}^{k} \textbf{1}(T^{i,j}_{H_0} \geq T^i)/k$
\end{itemize}
\end{enumerate}
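The steps above can be sketched as follows. This is an illustrative Python sketch of the permutation scheme only, not the package's R implementation: the simple neighbour-count summary standing in for the LISTA function $\hat{L}^{(i)}$, the point layout as rows of $(x, y, t)$, and the grid sizes are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

def lista(point, others, r_grid, h_grid):
    # placeholder LISTA summary: neighbour counts within (r, h);
    # the package uses local K-functions instead
    ds = np.linalg.norm(others[:, :2] - point[:2], axis=1)
    dt = np.abs(others[:, 2] - point[2])
    return np.array([[np.sum((ds <= r) & (dt <= h)) for r in r_grid]
                     for h in h_grid], dtype=float)

def local_permutation_test(x, z, k=19, r_grid=(0.1, 0.2), h_grid=(0.1, 0.2)):
    n = len(x)
    pvals = np.empty(n)
    for i in range(n):
        rest = np.delete(x, i, axis=0)
        pool = np.vstack([rest, z])          # {x \ (u,t)} U z
        L_obs = lista(x[i], rest, r_grid, h_grid)
        # k permutations: resample n-1 points from the pooled pattern
        L_perm = np.array([
            lista(x[i], pool[rng.choice(len(pool), n - 1, replace=False)],
                  r_grid, h_grid)
            for _ in range(k)])
        L_bar = L_perm.mean(axis=0)          # LISTA averaged under H0
        T_obs = np.sum((L_obs - L_bar) ** 2)             # discretised integral
        T_perm = np.sum((L_perm - L_bar) ** 2, axis=(1, 2))
        pvals[i] = np.sum(T_perm >= T_obs) / k
    return pvals

x = np.column_stack([rng.random(15), rng.random(15), rng.random(15)])
z = np.column_stack([rng.random(20), rng.random(20), rng.random(20)])
p = local_permutation_test(x, z)
assert len(p) == len(x) and np.all((p >= 0) & (p <= 1))
```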
The test ends providing a vector $p$ of $p$-values, one for each point
in $\textbf{x}$.
If the test is performed for spatio-temporal point patterns as in
\cite{siino2018testing}, that is, on an object of class \texttt{stp}, the LISTA
functions $\hat{L}^{(i)}$ employed are the local $K$-functions of
\cite{adelfio2020some}, computed by the function
\texttt{KLISTAhat}
of the \textbf{stpp} package \citep{gabriel:rowlingson:diggle:2013}
.
%
If the function is applied to a \texttt{stlp} object, that is, on two spatio-temporal
point patterns observed on the same linear network \texttt{L},
the local $K$-functions
used are the ones proposed in \cite{dangelo2021assessing}, documented
in \texttt{localSTLKinhom}.
%
Details on the performance of the test are found in \cite{siino2018testing} and
\cite{dangelo2021assessing} for Euclidean and network spaces, respectively.
Alternative LISTA functions that can be employed to run the test are \texttt{LISTAhat} of \textbf{stpp} and \texttt{localSTLginhom} of \textbf{stopp}, that is, the pcfs on Euclidean space and
linear networks respectively.
The methods for this class of objects do the following:
\begin{enumerate}
\item \texttt{print} displays the main information on the result of the local permutation test performed with \texttt{localtest} on either a \texttt{stp} or \texttt{stlp} object: whether the local test was run on point patterns lying on a linear network or not; the number of points in the background \texttt{X} and alternative \texttt{Z} patterns; and the number of points in \texttt{X} which exhibit local differences in the second-order structure with respect to \texttt{Z}, according to the performed test.
\item \texttt{plot} shows the result of the local permutation test performed with \texttt{localtest}: it highlights the points of the background pattern \texttt{X} which exhibit local differences in the second-order structure with respect to \texttt{Z}, according to the previously performed test, together with the remaining points of \texttt{X}. It also shows the underlying linear network, if the local test has been applied to point patterns occurring on the same linear network, that is, if \texttt{localtest} has been applied to a \texttt{stlp} object.
\end{enumerate}
In the following, we provide an example with two point processes, both occurring on the unit cube.
\begin{example}
## background pattern
> set.seed(12345)
> X <- rstpp(lambda = function(x, y, t, a) {exp(a[1] + a[2]*x)}, par = c(.05, 4),
nsim = 1, seed = 2, verbose = TRUE)
## alternative pattern
> set.seed(12345)
> Z <- rstpp(lambda = 25, nsim = 1, seed = 2, verbose = TRUE)
## run the local test
> test <- localtest(X, Z, method = "K", k = 9, verbose = FALSE)
> test
Test for local differences between two
spatio-temporal point patterns
--------------------------------------
Background pattern X: 17
Alternative pattern Z: 20
1 significant points at alpha = 0.05
> plot(test)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art14.pdf}
\caption{Output of the local test.}
\label{fig:p14}
\end{figure}
\section{Model fitting}\label{sec:models}
The description of the observed point pattern intensity is a crucial issue when dealing with spatio-temporal point pattern data, and specifying a statistical model is often far more effective than analysing the data only through summary statistics. Formulating and fitting a statistical model to the data allows taking into account effects that could otherwise introduce distortion in the analysis \citep{baddeley2015spatial}. In this section, we outline the main functions to fit different specifications of inhomogeneous spatio-temporal Poisson process models.
\subsection{Spatio-temporal Poisson point processes with separable intensity}
When dealing with intensity estimation for spatio-temporal point processes, it is quite common to assume that the intensity function $\lambda(\textbf{u},t)$ is separable \citep{diggle2013statistical,gabriel2009second}. Under this assumption, the intensity function is given by the product
\begin{equation}
\lambda(\textbf{u},t)={\lambda}(\textbf{u}){\lambda}(t)
\label{eq:sep}
\end{equation}
where ${\lambda}(\textbf{u})$ and ${\lambda}(t)$ are non-negative functions on $W$ and $T$, respectively \citep{gabriel2009second}.
Under this assumption, any non-separable effects are interpreted as second-order, rather than first-order. Suitable estimates of $\lambda(\textbf{u})$ and $\lambda(t)$ in \eqref{eq:sep} depend on the characteristics of each application. The functions here implemented use a combination of a parametric spatial point pattern model, potentially depending on the spatial coordinates and/or spatial covariates, and a parametric log-linear model for the temporal component. Non-parametric kernel estimates would also be legitimate choices, but they are not yet implemented.
The spatio-temporal intensity is therefore obtained by multiplying the purely spatial and purely temporal intensities, previously fitted separately. The resulting intensity is normalised to make the estimator unbiased, so that the expected number of points satisfies
$$\mathbb{E}\bigg[ \int_{W \times T} \hat{\lambda}(\textbf{u},t)d_2(\textbf{u},t) \bigg] = \int_{W \times T} \lambda(\textbf{u},t)d_2(\textbf{u},t)=n,$$
and the final intensity function is obtained as
$$\hat{\lambda}(\textbf{u},t)=\frac{\hat{\lambda}(\textbf{u})\hat{\lambda}(t)}{\int_{W \times T} \hat{\lambda}(\textbf{u},t)d_2(\textbf{u},t)}.$$
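On a discretised window, the normalisation amounts to rescaling the product of the two fitted intensities so that it integrates to the observed number of points. A minimal sketch follows (Python for concreteness; the package itself is written in R, and the two marginal intensity surfaces below are arbitrary placeholders, not fitted models):

```python
import numpy as np

n = 100                        # observed number of points
xs = np.linspace(0, 1, 50)     # spatial grid (one dimension for brevity)
ts = np.linspace(0, 1, 40)     # temporal grid
dx, dt = xs[1] - xs[0], ts[1] - ts[0]

lam_s = np.exp(0.5 + 2 * xs)         # placeholder fitted spatial intensity
lam_t = 1 + np.cos(2 * np.pi * ts)   # placeholder fitted temporal intensity

prod = np.outer(lam_s, lam_t)        # separable product lambda(u) * lambda(t)
integral = prod.sum() * dx * dt      # Riemann approximation of the integral
lam_st = n * prod / integral         # normalised separable intensity

# the normalised intensity integrates to the number of points n
assert np.isclose(lam_st.sum() * dx * dt, n)
```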
The function \texttt{sepstppm} fits such a separable spatio-temporal Poisson process model.
The function \texttt{plot.sepstppm} shows the fitted intensity, displayed both in space and in space and time.
\begin{example}
> df1 <- valenciacrimes[valenciacrimes$x < 210000 & valenciacrimes$x > 206000
& valenciacrimes$y < 4377000 & valenciacrimes$y > 4373000, ]
> mod1 <- sepstppm(df1, spaceformula = ~x * y, timeformula = ~ crime_hour + week_day)
\end{example}
For linear network point patterns, non-parametric estimators of the intensity function $\lambda(\cdot,\cdot)$ have been proposed \citep{mateu2020spatio}, which capture any variation of the distribution of the process over its state-space $L \times T$.
A kernel-based intensity estimator for spatio-temporal linear network point processes, based on the first-order separability assumption, considered in \cite{moradi2020first}, is obtainable with the package \textbf{stlnpp}.
The functions \texttt{sepstlppm} and \texttt{plot.sepstlppm} implement the network counterparts of the spatio-temporal Poisson point process with separable intensity and fully parametric specification.
\begin{example}
> mod1 <- sepstlppm(valenciacrimes[1:2500, ], spaceformula = ~x,
timeformula = ~ crime_hour + week_day, L = valencianet)
\end{example}
\subsection{Global inhomogeneous spatio-temporal Poisson processes through a quadrature scheme}
For a non-separable spatio-temporal specification, we assume that the template model is a Poisson process, with a parametric intensity or rate function
\begin{equation}
\lambda(\textbf{u}, t; \theta), \quad \textbf{u} \in
W,\quad t \in T, \quad \theta \in \Theta.
\label{eq:pois}
\end{equation}
The log-likelihood of the template model is
$$\log L(\theta) = \sum_i
\log \lambda(\textbf{u}_i, t_i; \theta) - \int_W\int_T
\lambda(\textbf{u}, t; \theta) \text{d}t\text{d}u$$
up to an additive constant, where the sum is over all points $(\textbf{u}_i,t_i)$
of the spatio-temporal point process $X$.
We might consider intensity models of log-linear form
\begin{equation}
\lambda(\textbf{u}, t; \theta) = \exp(\theta Z(\textbf{u}, t) + B(\textbf{u},t )), \quad
\textbf{u} \in W,\quad t \in T
\label{eq:glo_mod}
\end{equation}
where $Z(\textbf{u}, t)$ is a vector-valued covariate function, and $B(\textbf{u}, t)$ is a scalar offset.
In point process theory, the variables $Z(\textbf{u}, t)$ are referred to as spatio-temporal covariates. Their values are assumed to be knowable, at least in principle, at each location in the spatio-temporal window; for inferential purposes, they must be known at each point of the data point pattern and at least at some other locations.
This is why we first implemented the dependence of the intensity function $\lambda(\textbf{u}, t; \theta)$ on the space and time coordinates.\\
The \texttt{stppm} function fits a Poisson process model to an observed spatio-temporal point pattern stored in a \texttt{stp} object, assuming the template model \eqref{eq:pois}.
Estimation is performed by fitting a \texttt{glm} using a spatio-temporal version of the quadrature scheme by \cite{berman1992approximating}.
We use a finite quadrature approximation
to the log-likelihood. Renaming the data points as $\textbf{x}_1,\dots ,
\textbf{x}_n$ with $(\textbf{u}_i,t_i) = \textbf{x}_i$ for $i = 1, \dots , n$,
we then generate $m$ additional 'dummy points' $(\textbf{u}_{n+1},t_{n+1})
\dots , (\textbf{u}_{m+n},t_{m+n})$ to
form a set of $n + m$ quadrature points (where $m > n$).
%
Then we determine quadrature weights $a_1, \dots , a_{n+m}$
so that a Riemann sum can approximate the integral in the log-likelihood
$$ \int_W \int_T \lambda(\textbf{u},t;\theta)\text{d}t\text{d}u \approx \sum_{k = 1}^{n + m}a_k\lambda(\textbf{u}_{k},t_{k};\theta)$$
where $a_k$ are the quadrature weights such that
$\sum_{k = 1}^{n + m}a_k = l(W \times T)$ where $l$ is the Lebesgue measure.
%
Then the log-likelihood of the template model can be approximated by
$$ \log L(\theta) \approx \sum_i \log \lambda(\textbf{x}_i; \theta) +\sum_j(1 - \lambda(\textbf{u}_j,t_j; \theta))a_j=\sum_je_j \log \lambda(\textbf{u}_j, t_j; \theta) + (1 - \lambda(\textbf{u}_j, t_j; \theta))a_j$$
where $e_j = 1\{j \leq n\}$ is the indicator that equals $1$ if
$(\textbf{u}_j,t_j)$ is a data point. Writing $y_j = e_j/a_j$ this becomes
$$ \log L(\theta) \approx
\sum_j
a_j
(y_j \log \lambda(\textbf{u}_j, t_j; \theta) - \lambda(\textbf{u}_j, t_j; \theta))
+
\sum_j
a_j.$$
%
Apart from the constant $\sum_j a_j$, this expression is formally equivalent
to the weighted log-likelihood of
a Poisson regression model with responses $y_j$ and means
$\lambda(\textbf{u}_j,t_j; \theta) = \exp(\theta Z(\textbf{u}_j,t_j) +
B(\textbf{u}_j,t_j))$.
%
This expression is then
maximised by \texttt{stppm} using standard GLM software.
%
In detail, we define the spatio-temporal quadrature scheme by considering a
spatio-temporal
partition of $W \times T$ into cubes $C_k$ of equal volume $\nu$,
assigning the weight $a_k=\nu/n_k$
to each quadrature point (dummy or data) where $n_k$ is the number of
points that lie in the same cube as the point $(\textbf{u}_k,t_k)$ \citep{raeisi2021spatio}.
%
The number of dummy points should be sufficient for an accurate estimate of the
likelihood. Following \cite{baddeley2000non} and \cite{raeisi2021spatio},
we start with a number of dummy points $m \approx 4 n$, increasing it until
$\sum_k a_k = l(W \times T)$.
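The counting-cube weights just described can be sketched as follows (Python for concreteness; the package itself is written in R, and the unit-cube window, grid resolution, and uniform dummy points below are assumptions made for the example, not the package's defaults):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
data = np.column_stack([rng.random(n), rng.random(n), rng.random(n)])
m = 4 * n                                    # dummy points, m ~ 4n
dummy = np.column_stack([rng.random(m), rng.random(m), rng.random(m)])
quad = np.vstack([data, dummy])              # n + m quadrature points
e = np.concatenate([np.ones(n), np.zeros(m)])

# partition W x T = [0,1]^3 into cubes C_k of equal volume nu
g = 5                                        # cubes per axis
nu = 1.0 / g**3
idx = np.minimum((quad * g).astype(int), g - 1)
flat = idx[:, 0] * g * g + idx[:, 1] * g + idx[:, 2]
counts = np.bincount(flat, minlength=g**3)   # n_k: points per cube
a = nu / counts[flat]                        # weight a_k = nu / n_k
y = e / a                                    # GLM responses y_j = e_j / a_j

# each occupied cube contributes exactly nu, so the weights sum to
# l(W x T) once every cube holds at least one quadrature point --
# which is why m is increased until that holds
assert np.isclose(a.sum(), np.sum(counts > 0) * nu)
```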
The \texttt{AIC.stppm} and \texttt{BIC.stppm} functions return the $AIC = 2k - 2 \log(\hat{L})$ and $BIC = k\log{n} - 2 \log(\hat{L})$ of a point process
model fitted through the
function \texttt{stppm} applied to an observed
spatio-temporal point pattern of class \texttt{stp}.
%
As the model returned by \texttt{stppm} is fitted through a quadrature scheme,
the log-likelihood is computed through the quantity
$$- \log{L(\hat{\theta}; \boldsymbol{x})} = \frac{D}{2} + \sum_{j = 1}^{n}I_j\log{w_j}+n(\boldsymbol{x}),$$
where $D$ is the deviance of the fitted GLM, $w_j$ are the quadrature weights, $I_j$ indicates whether the $j$-th quadrature point is a data point, and $n(\boldsymbol{x})$ is the number of observed points.
\begin{example}
## Homogeneous
> set.seed(2)
> ph <- rstpp(lambda = 200, nsim = 1, seed = 2, verbose = TRUE)
> hom1 <- stppm(ph, formula = ~ 1)
> hom1
Homogeneous Poisson process
with Intensity: 202.093
Estimated coefficients:
(Intercept)
5.309
## plot(hom1) won't show any plot, due to the constant intensity
> coef(hom1)
(Intercept)
5.308728
## Inhomogeneous
> set.seed(2)
> pin <- rstpp(lambda = function(x, y, t, a) {exp(a[1] + a[2]*x)}, par = c(2, 6),
nsim = 1, seed = 2, verbose = TRUE)
1.
> inh1 <- stppm(pin, formula = ~ x)
> inh1
Inhomogeneous Poisson process
with Trend: ~x
Estimated coefficients:
(Intercept) x
2.180 5.783
> plot(inh1)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=.8\textwidth]{Art6.pdf}
\caption{Output of the model fitting.}
\label{fig:p6}
\end{figure}
\subsection{Local inhomogeneous spatio-temporal Poisson processes through local log-likelihood}
The \texttt{locstppm} function fits a Poisson process model to an observed spatio-temporal
point pattern stored in a \texttt{stp} object, that is, a Poisson model with
a set of parameters $\theta_i$ for each point $i$.
We assume that the template model is a Poisson process, with a parametric
intensity or rate function $\lambda(\textbf{u}, t; \theta_i)$ with space
and time locations $\textbf{u} \in W, t \in T$ and parameters $\theta_i \in \Theta.$
Estimation is performed through the fitting of a \texttt{glm} using a localised version of the quadrature scheme by \cite{berman1992approximating}, first introduced
in the purely spatial context by \cite{baddeley:2017local}, and in the spatio-temporal
framework by \cite{d2022locally}.
The local log-likelihood associated with the spatio-temporal location
$(\textbf{v},s)$ is given by
$$\log L((\textbf{v},s);\theta) = \sum_i w_{\sigma_s}(\textbf{u}_i - \textbf{v}) w_{\sigma_t}(t_i - s)
\lambda(\textbf{u}_i, t_i; \theta) - \int_W \int_T
\lambda(\textbf{u}, t; \theta) w_{\sigma_s}(\textbf{u}_i - \textbf{v}) w_{\sigma_t}(t_i - s) \text{d}t \text{d}u$$
where $w_{\sigma_s}$ and $w_{\sigma_t}$ are weight functions, and
$\sigma_s, \sigma_t > 0$ are the smoothing bandwidths. It is not
necessary to assume that $w_{\sigma_s}$ and $w_{\sigma_t}$
are probability densities. For simplicity, we shall consider only kernels of fixed
bandwidth, even though spatially adaptive kernels could also be used.
%
Note that if the template model is the homogeneous Poisson process with intensity
$\lambda$, then the local
likelihood estimate $\hat{\lambda}(\textbf{v}, s)$
reduces to the kernel estimator of the point process intensity with
kernel proportional to $w_{\sigma_s}w_{\sigma_t}$.
%
We now use an approximation similar to
$\log L(\theta) \approx
\sum_j
a_j
(y_j \log \lambda(\textbf{u}_j, t_j; \theta) - \lambda(\textbf{u}_j, t_j; \theta))
+
\sum_j
a_j,$
but for the local log-likelihood associated
with each desired location $(\textbf{v},s) \in W \times T$, that is:
$$\log L((\textbf{v},s); \theta) \approx
\sum_j
w_j(\textbf{v},s)a_j
(y_j \log \lambda(\textbf{u}_j,t_j; \theta) - \lambda(\textbf{u}_j,t_j; \theta))
+
\sum_j
w_j(\textbf{v},s)a_j ,$$
where $w_j(\textbf{v},s) = w_{\sigma_s}(\textbf{v} - \textbf{u}_j)
w_{\sigma_t}(s - t_j)$.
%
Basically, for each
desired location $(\textbf{v},s)$,
we replace the vector of quadrature weights $a_j$ by
$a_j(\textbf{v},s)= w_j(\textbf{v},s)a_j$ where
$w_j (\textbf{v},s) = w_{\sigma_s}(\textbf{v} - \textbf{u}_j)w_{\sigma_t}(s - t_j)$,
and use the GLM software to fit the Poisson regression.
%
The local likelihood is defined at any location $(\textbf{v},s)$ in continuous space.
In practice, it is sufficient to
consider a grid of points $(\textbf{v},s)$.
%
We refer to \cite{d2022locally} for further discussion on bandwidth selection
and on computational costs.
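The localised reweighting can be sketched as follows (Python for concreteness; the package itself is written in R). Gaussian kernels are one possible choice for $w_{\sigma_s}$ and $w_{\sigma_t}$, and the placeholder points and weights are assumptions made for the example:

```python
import numpy as np

def gauss(d, sigma):
    # unnormalised Gaussian kernel; as noted above, w need not
    # be a probability density
    return np.exp(-0.5 * (d / sigma) ** 2)

rng = np.random.default_rng(5)
pts = np.column_stack([rng.random(30), rng.random(30), rng.random(30)])
a = np.full(len(pts), 1.0 / len(pts))   # placeholder quadrature weights a_j

def local_weights(v, s, sigma_s, sigma_t):
    # a_j(v, s) = w_sigma_s(v - u_j) * w_sigma_t(s - t_j) * a_j
    ds = np.linalg.norm(pts[:, :2] - v, axis=1)
    dt = np.abs(pts[:, 2] - s)
    return gauss(ds, sigma_s) * gauss(dt, sigma_t) * a

aw = local_weights(np.array([0.5, 0.5]), 0.5, 0.1, 0.1)
assert np.all(aw >= 0) and np.all(aw <= a)
# as the bandwidths grow, the local weights recover the global ones,
# i.e. the fit reduces to the global quadrature scheme
aw_wide = local_weights(np.array([0.5, 0.5]), 0.5, 1e6, 1e6)
assert np.allclose(aw_wide, a)
```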
\begin{example}
> inh00_local <- locstppm(pin, formula = ~ 1)
> inh00_local
Homogeneous Poisson process
with median Intensity: 7.564067
Summary of estimated coefficients
V1
Min. :3.981
1st Qu.:7.291
Median :7.564
Mean :7.316
3rd Qu.:7.669
Max. :7.854
> inh01_local <- locstppm(pin, formula = ~ x)
> inh01_local
Inhomogeneous Poisson process
with Trend: ~x
Summary of estimated coefficients
V1 V2
Min. :1.282 Min. :0.7667
1st Qu.:2.634 1st Qu.:4.5470
Median :3.059 Median :5.0662
Mean :3.082 Mean :5.0373
3rd Qu.:3.528 3rd Qu.:5.5636
Max. :4.709 Max. :6.9729
\end{example}
\subsection{Log-Gaussian Cox process estimation through (locally weighted) joint minimum contrast}
In the Euclidean context, log-Gaussian Cox processes (LGCPs) are among the most prominent clustering models. By specifying the intensity of the process and the moments of the underlying Gaussian random field (GRF), it is possible to estimate both the first- and second-order characteristics of the process.
Following the inhomogeneous specification in \cite{diggle:moraga:13}, a LGCP for a generic point in space and time has the intensity
\begin{equation*}
\Lambda(\textbf{u},t)=\lambda(\textbf{u},t)\exp(S(\textbf{u},t))
\end{equation*}
where $S$ is a Gaussian process with $\mathbb{E}(S(\textbf{u},t))=\mu=-0.5\sigma^2$, so that $\mathbb{E}(\exp(S(\textbf{u},t)))=1$, and with covariance $\mathbb{C}(S(\textbf{u}_i,t_i),S(\textbf{u}_j,t_j))=\sigma^2 \gamma(r,h)$ under the stationarity assumption, with $\gamma(\cdot)$ the correlation function of the GRF, and $r$ and $h$ some spatial and temporal distances. Following \cite{moller1998log}, the first-order product density and the pair correlation function of an LGCP are $\mathbb{E}(\Lambda(\textbf{u},t))=\lambda(\textbf{u},t)$ and $g(r,h)=\exp(\sigma^2\gamma(r,h))$, respectively.
The \texttt{stlgcppm} function estimates an LGCP, optionally following the locally weighted minimum contrast procedure introduced in \cite{d2022locally}.
Three covariances are available: separable exponential, Gneiting, and Iaco-Cesare.
If both the first and second arguments are set to global, a log-Gaussian Cox process is fitted by means of the joint minimum contrast procedure proposed in \cite{siino2018joint}.
We may consider a separable structure for the covariance function of the GRF \citep{brix2001spatiotemporal} that has exponential form for both the spatial and the temporal components,
\begin{equation}
\mathbb{C}(r,h)=\sigma^2\exp \bigg(\frac{-r}{\alpha}\bigg)\exp\bigg(\frac{-h}{\beta}\bigg),
\label{eq:cov}
\end{equation}
where $\sigma^2$ is the variance, $\alpha$ is the scale parameter for the spatial distance and $\beta$ is the scale parameter for the temporal one.
The exponential form is widely used in this context and nicely reflects the decaying correlation structure with distance or time.\\
Moreover, we may consider a non-separable covariance of the GRF useful to describe
more general situations.
Following the parametrisation in \cite{schlather2015analysis}, the Gneiting covariance function \citep{gneiting2006geostatistical} can be written as
$$
\mathbb{C}(r,h) = (\psi(h) + 1)^{ - d/2} \varphi \bigg( \frac{r}{\sqrt{\psi(h) + 1}} \bigg) \qquad r \geq 0, \quad h \geq 0,
$$
where $\varphi(\cdot)$ is a complete monotone function associated to
the spatial structure, and $\psi(\cdot)$ is a positive function with a
completely monotone derivative associated to the temporal
structure of the data. For example, the choice $d = 2$,
$\varphi(r)=\sigma^2 \exp ( - (\frac{r}{\alpha})^{\gamma_s})$ and
$\psi(h)=((\frac{h}{\beta})^{\gamma_t} + 1)^{\delta/\gamma_t}$
yields the parametric family
\begin{equation}
\mathbb{C}(r,h) = \frac{\sigma^2}{((\frac{h}{\beta})^{\gamma_t} + 1)^{\delta/\gamma_t}} \exp \Biggl( - \frac{(\frac{r}{\alpha})^{\gamma_s}}{((\frac{h}{\beta})^{\gamma_t} + 1)^{\delta/(2\gamma_t)}} \Biggr),
\label{eq:nonsep}
\end{equation}
where $\alpha > 0$ and $\beta > 0$ are scale parameters of space and time, $\delta$ takes values in $(0, 2]$, and $\sigma^2$ is the variance.\\
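Both parametric families are straightforward to evaluate. The following Python sketch of \eqref{eq:cov} and \eqref{eq:nonsep} is only illustrative (the package itself is written in R); it checks that both covariances equal $\sigma^2$ at the origin and decay with distance:

```python
import numpy as np

def cov_sep(r, h, sigma2, alpha, beta):
    # separable exponential covariance, eq. (cov)
    return sigma2 * np.exp(-r / alpha) * np.exp(-h / beta)

def cov_gneiting(r, h, sigma2, alpha, beta, gs, gt, delta):
    # Gneiting family, eq. (nonsep); psi = ((h/beta)^gt + 1)^(delta/gt)
    psi = ((h / beta) ** gt + 1) ** (delta / gt)
    return sigma2 / psi * np.exp(-((r / alpha) ** gs) / psi ** 0.5)

s2 = 2.0
# both covariances equal the variance sigma^2 at (r, h) = (0, 0)
assert np.isclose(cov_sep(0, 0, s2, 1, 1), s2)
assert np.isclose(cov_gneiting(0, 0, s2, 1, 1, 1, 1, 1), s2)
# and both decay with increasing space-time distance
assert cov_sep(1, 0, s2, 1, 1) < cov_sep(0.5, 0, s2, 1, 1)
assert cov_gneiting(1, 1, s2, 1, 1, 1, 1, 1) < cov_gneiting(0.5, 0.5, s2, 1, 1, 1, 1, 1)
```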
Another parametric covariance implemented belongs to the Iaco-Cesare family \citep{de2002fortran,de2002nonseparable}, and there is a wealth of covariance families that could well be used for our purposes.
%
Following \cite{siino2018joint}, the second-order
parameters $\boldsymbol{\psi}$ are found by minimising
$$M_J\{ \boldsymbol{\psi}\}=\int_{h_0}^{h_{max}} \int_{r_0}^{r_{max}} \phi(r,h) \{\nu[\hat{J}(r,h)]-\nu[J(r,h;\boldsymbol{\psi})]\}^2 \text{d}r \text{d}h,$$
where $\phi(r, h)$ is a weight that depends on the space-time
distance and $\nu$ is a transformation function.
%
They suggest $\phi(r,h)=1$ and $\nu$ as
the identity function, while $r_{max}$ and $h_{max}$ are selected as 1/4
of the maximum observable spatial and temporal distances.\\
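A toy illustration of the contrast criterion follows (Python for concreteness; the package's R routines use a numerical optimiser rather than the grid search below). With $\phi = 1$, $\nu$ the identity, and $J$ the pair correlation function $g(r,h)=\exp(\sigma^2\gamma(r,h))$ of the separable model, minimising the discretised contrast recovers the generating parameters from a noise-free "empirical" surface; the grids and parameter values are assumptions made for the example:

```python
import numpy as np
from itertools import product

rs = np.linspace(0.1, 1.0, 10)   # spatial distances (r_0, r_max]
hs = np.linspace(0.1, 1.0, 10)   # temporal distances (h_0, h_max]

def g_model(sigma2, alpha, beta):
    # pcf of an LGCP with separable exponential covariance
    R, H = np.meshgrid(rs, hs, indexing="ij")
    return np.exp(sigma2 * np.exp(-R / alpha) * np.exp(-H / beta))

g_hat = g_model(1.5, 0.4, 0.7)   # noise-free stand-in for the empirical pcf

def contrast(params):
    # discretised M_J with phi = 1 and nu the identity
    return np.sum((g_hat - g_model(*params)) ** 2)

# coarse grid search over (sigma^2, alpha, beta)
grid = product([0.5, 1.0, 1.5], [0.2, 0.4, 0.6], [0.3, 0.7, 1.1])
best = min(grid, key=contrast)
assert best == (1.5, 0.4, 0.7)   # the generating parameters are recovered
```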
%
Following \cite{d2022locally}, we can fit a localised version of the LGCP,
that is, obtain a
vector of parameters $\boldsymbol{\psi}_i$ for each point $i$, by
minimising
$$M_{J,i}\{ \boldsymbol{\psi}_i \}=\int_{h_0}^{h_{max}}\int_{r_0}^{r_{max}}
\phi(r,h) \{ \nu[\bar{J}_i(r,h)]-\nu[J(r,h;\boldsymbol{\psi})]\}^2 \text{d}r \text{d}h,
\qquad \text{where} \qquad
\bar{J}_i(r,h)= \frac{\sum_{j=1}^{n}\hat{J}_j(r,h)w_j}{\sum_{j=1}^{n}w_j}$$
is the weighted average of the local functions
$\hat{J}_j(r,h)$, with point-wise kernel estimates as weights.
In particular, we consider $\hat{J}_i(\cdot)$ as the local
spatio-temporal pair correlation function \citep{gabriel:rowlingson:diggle:2013} documented in \texttt{LISTAhat}.
The \texttt{print} and \texttt{summary} functions give the main information on the fitted model. In case of local parameters (both first- and second-order), the summary function contains information on their distributions.
Next, we present an example with a complex seismic point pattern.
\begin{example}
> data("greececatalog")
\end{example}
If both first and second arguments are set to "global", a log-Gaussian Cox process is fitted by means of the joint minimum contrast.
\begin{example}
> lgcp1 <- stlgcppm(greececatalog, formula = ~ 1, first = "global", second = "global")
> lgcp1
Joint minimum contrast fit
for a log-Gaussian Cox process with
global first-order intensity and
global second-order intensity
--------------------------------------------------
Homogeneous Poisson process
with Intensity: 0.00643
Estimated coefficients of the first-order intensity:
(Intercept)
-5.046
--------------------------------------------------
Covariance function: separable
Estimated coefficients of the second-order intensity:
sigma alpha beta
6.989 0.225 156.353
--------------------------------------------------
Model fitted in 1.014 minutes
\end{example}
If first = "local", local parameters for the first-order intensity are provided. In this case, the summary function contains information on their distributions.
\begin{example}
> lgcp2 <- stlgcppm(greececatalog, formula = ~ x, first = "local", second = "global")
> lgcp2
Joint minimum contrast fit
for a log-Gaussian Cox process with
local first-order intensity and
global second-order intensity
--------------------------------------------------
Inhomogeneous Poisson process
with Trend: ~x
Summary of estimated coefficients of the first-order intensity
(Intercept) x
Min. :-6.400 Min. :-0.90689
1st Qu.:-2.526 1st Qu.:-0.38710
Median : 2.333 Median :-0.26876
Mean : 2.153 Mean :-0.26744
3rd Qu.: 5.070 3rd Qu.:-0.06707
Max. :16.323 Max. : 0.10822
--------------------------------------------------
Covariance function: separable
Estimated coefficients of the second-order intensity:
sigma alpha beta
2.612 0.001 36.415
--------------------------------------------------
Model fitted in 3.634 minutes
\end{example}
The \texttt{plot} function shows the fitted intensity, displayed
both in space (by means of a density kernel smoothing) and in space and time. In the case of local covariance parameters, the function returns the mean of the random intensity, displayed both in space (by means of a density kernel smoothing) and in space and time.
The \texttt{localsummary.stlgcppm} function breaks up the contribution of the local estimates to the fitted intensity, by plotting the overall intensity and the density kernel smoothing of some artificial intensities obtained by imputing the quartiles of the local parameters' distributions.
Finally, the \texttt{localplot.stlgcppm} function plots the local estimates. In the case of local covariance parameters, the function displays the local estimates of the chosen covariance function.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art7.pdf}
\caption{Output of the \texttt{localsummary} function.}
\label{fig:p7}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art8.pdf}
\caption{Estimated local coefficients.}
\label{fig:p75}
\end{figure}
\section{Diagnostics}\label{sec:diag}
Inhomogeneous second-order statistics can be constructed and used for assessing the goodness-of-fit of fitted first-order intensities.
Nevertheless, it is widespread practice in the statistical analysis of spatial and spatio-temporal point pattern data to primarily compare the data with a homogeneous Poisson process, which generally plays the role of the null model in applications. Indeed, when dealing with diagnostics in point processes, two steps are often needed: the transformation of data into residuals (thinning or rescaling \citep{schoenberg2003multidimensional}) and the use of tests to assess the consistency of the residuals with the homogeneous Poisson process \citep{adelfio:schoenberg:09}. Usually, second-order statistics estimated for the residual process (i.e. the result of a thinning or rescaling procedure) are analysed.
Essentially, to each observed point a weight inversely proportional to the conditional intensity at that point is given. This method was adopted by \cite{veen2006assessing} in constructing a weighted version of the $K$-function of \cite{ripley1977markov}; the resulting weighted statistic is in many cases more powerful than residual methods \citep{veen2006assessing}. \\
The spatio-temporal inhomogeneous version of the $K$-function in \eqref{eq:k} is given by \cite{gabriel2009second} as
\begin{equation}
\hat{K}_{I}(r,h)=\frac{ \vert W \vert \vert T \vert }{n(n-1)}\sum_{i=1}^n \sum_{j > i} \frac{I( \vert \vert \textbf{u}_i-\textbf{u}_j \vert \vert \leq r,\vert t_i-t_j\vert \leq h)}{\hat{\lambda}(\textbf{u}_i,t_i)\hat{\lambda}(\textbf{u}_j,t_j)},
\label{eq:kinh}
\end{equation}
where $\hat{\lambda}(\cdot,\cdot)$ denotes the estimated first-order intensity at an arbitrary point.
We know that $\mathbb{E}[\hat{K}_{I}(r,h)]=\pi r^2 h$, which is the same as the expectation of $\hat{K}(r,h)$ in \eqref{eq:k}, when the intensity used for the weighting is the true generator model.
This is a crucial result that allows the use of the weighted estimator $\hat{K}_{I}(r,h)$ as a diagnostic tool for assessing the goodness-of-fit of spatio-temporal point processes with generic first-order intensity functions.
Indeed, if the weighting intensity function is close to the true one $\lambda(\textbf{u},t)$, the expectation of $\hat{K}_{I}(r,h)$ should be close to $\mathbb{E}[\hat{K}(r,h)]=\pi r^2 h$ for the Poisson process. For instance, values of $\hat{K}_{I}(r,h)$ greater than $\pi r^{2} h$ indicate that the fitted model is not appropriate, since the distances computed among points exceed the theoretical Poisson ones.
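To fix ideas, the estimator \eqref{eq:kinh} can be evaluated directly from the data. The following Python sketch is a plain illustration of the formula, with argument names of our own choosing; it is not the package's implementation, which relies on \texttt{STIKhat}.

```python
import numpy as np

def k_inhom(x, y, t, lam, r, h, W_area, T_len):
    """Weighted (inhomogeneous) spatio-temporal K-function of Eq. (eq:kinh).

    x, y, t : coordinates and event times of the n observed points
    lam     : fitted first-order intensity evaluated at each point
    r, h    : spatial and temporal distance thresholds
    W_area, T_len : area |W| of the spatial window and length |T| of
                    the time interval
    """
    n = len(x)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):  # each unordered pair once, as in the sum over j > i
            if np.hypot(x[i] - x[j], y[i] - y[j]) <= r and abs(t[i] - t[j]) <= h:
                total += 1.0 / (lam[i] * lam[j])
    return W_area * T_len * total / (n * (n - 1))
```

Values of the returned statistic well above $\pi r^2 h$ would then flag a poorly fitting intensity, as discussed above.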
The \texttt{globaldiag} function performs global diagnostics of a model fitted for the first-order intensity of a spatio-temporal point pattern, using the spatio-temporal inhomogeneous K-function \citep{gabriel2009second} documented by the function \texttt{STIKhat} of the \textbf{stpp} package \citep{stpp}.
It can also perform global diagnostics of a model fitted for the first-order intensity of a spatio-temporal point pattern on a linear network, by means of the spatio-temporal inhomogeneous K-function on a linear network \citep{moradi2020first} documented by the function \texttt{STLKinhom} of the \textbf{stlnpp} package \citep{stlnpp}.
In both cases, the function returns plots of the inhomogeneous K-function weighted by the intensity under diagnosis, its theoretical value, and their difference.
\begin{example}
> globaldiag(greececatalog, lgcp1$l)
[1] "Sum of squared differences = 318213525081.852"
> globaldiag(greececatalog, lgcp2$l)
[1] "Sum of squared differences = 147029066885.741"
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art9.pdf}\\
\vspace{-.5cm}
\includegraphics[width=\textwidth]{Art10.pdf}
\caption{Output of the global diagnostics for the two fitted LGPCs.}
\label{fig:p9}
\end{figure}
Moving to the local diagnostics, \cite{adelfio2020some} derived the expectation of the local
inhomogeneous spatio-temporal K-function, under the Poisson case:
$\mathbb{E}[\hat{K}^i(r,h)]=\pi r^2 h.$
%
Moreover, they found that when the local estimator is weighted by the true
intensity function,
its
expectation, $\mathbb{E}[\hat{K}_{I}^i(r,h)]$, is the same as the expectation of
$\hat{K}^i(r,h)$.
%
These results motivate the use of the local estimator
$\hat{K}_{I}^i(r,h)$ as a diagnostic tool for assessing the
goodness-of-fit of spatio-temporal point processes with any generic
first-order intensity function $\lambda$.
%
Indeed, if the estimated intensity function
used for weighting in our proposed LISTA functions is
the true one, then the LISTA functions should behave as the
corresponding ones of a homogeneous Poisson process,
resulting in small discrepancies between the two.
%
Therefore, this function computes such discrepancies
by means of the $\chi_i^2$ values, obtained following the expression
$$ \chi_i^2=\int_L \int_T \Bigg(
\frac{\big(\hat{K}^i_{I}(r,h)- \mathbb{E}[\hat{K}^i(r,h) ]
\big)^2}{\mathbb{E}[\hat{K}^i(r,h) ]}
\Bigg) \text{d}h \text{d}r ,$$
one for each point in the point pattern.
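The $\chi_i^2$ statistic above can be approximated numerically once the local weighted K-function and its expectation have been evaluated on a grid of distances. The following Python sketch (our own illustration, using the trapezoidal rule; it is not the package's internal code) shows the computation for a single point $i$.

```python
import numpy as np

def chi2_discrepancy(k_local, k_expected, r_grid, h_grid):
    """chi_i^2 for one point i: the squared, scaled departure of the
    local weighted K-function from its expectation, integrated over
    the (r, h) grid by the trapezoidal rule.

    k_local, k_expected : arrays of shape (len(r_grid), len(h_grid))
    """
    dev = (k_local - k_expected) ** 2 / k_expected
    # integrate over temporal distances h (last axis) ...
    inner = 0.5 * np.sum((dev[:, 1:] + dev[:, :-1]) * np.diff(h_grid), axis=1)
    # ... then over spatial distances r
    return 0.5 * np.sum((inner[1:] + inner[:-1]) * np.diff(r_grid))
```

A point whose $\chi_i^2$ value lies above a chosen percentile of these discrepancies would be flagged as outlying.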
%
Basically, departures of the LISTA functions $\hat{K}_{I}^i(r,h)$ from
their Poisson expected value directly suggest the unsuitability of
the intensity function $\lambda(\cdot)$ used in the weighting of the
LISTA functions for that specific point; such a point can be referred to as an \textit{outlying point}.
%
Given that \cite{dangelo2021local} proved the same results for the network case,
that is,
$\mathbb{E}[\hat{K}_{L}^i(r,h) ]= rh$ and
$\mathbb{E}[\hat{K}_{L,I}^i(r,h) ]=\mathbb{E}[\hat{K}_{L}^i(r,h) ]$
when $\hat{K}_{L,I}^i(r,h)$ is weighted by the true intensity function,
we implemented the same above-mentioned diagnostics procedure to work on
intensity functions fitted on spatio-temporal point patterns occurring on
linear networks.
%
Note that the Euclidean procedure is implemented by means of the
local K-functions of
\cite{adelfio2020some}, documented in
\texttt{KLISTAhat} of the \textbf{stpp} package \citep{gabriel:rowlingson:diggle:2013}.
The network case uses the local K-functions on networks \citep{dangelo2021local},
documented
in \texttt{localSTLKinhom}.
The \texttt{localdiag} function performs local diagnostics of a model fitted for the first-order intensity of a spatio-temporal point pattern, by means of the local spatio-temporal inhomogeneous K-function \citep{adelfio2020some} documented by the function \texttt{KLISTAhat} of the \textbf{stpp} package \citep{gabriel:rowlingson:diggle:2013}.
It returns the points identified as outlying following the diagnostics procedure on individual points of an observed point pattern, as introduced in \cite{adelfio2020some}.
The points resulting from the local diagnostic procedure provided by this function can be inspected via the \texttt{plot}, \texttt{print}, \texttt{summary}, and \texttt{infl} functions.
\texttt{localdiag} is also able to perform local diagnostics of a model fitted for the first-order intensity of a spatio-temporal point pattern on a linear network, by means of the local spatio-temporal inhomogeneous K-function on linear networks \citep{dangelo2021assessing} documented by the function \texttt{localSTLKinhom}.
It returns the points identified as outlying following the diagnostics procedure on individual points of an observed point pattern, as introduced in \cite{adelfio2020some}, and applied in \cite{dangelo2021local} for the linear network case.
\begin{example}
> set.seed(12345)
> stlp1 <- rETASlp(cat = NULL, params = c(0.078915 / 2, 0.003696, 0.013362, 1.2,
0.424466, 1.164793),
betacov = 0.5, m0 = 2.5, b = 1.0789, tmin = 0, t.lag = 200,
xmin = 600, xmax = 2200, ymin = 4000, ymax = 5300,
iprint = TRUE, covdiag = FALSE, covsim = FALSE, L = chicagonet)
> res <- localdiag(stlp1, intensity = density(as.stlpp(stlp1), at = "points"))
> res
Points outlying from the 0.95 percentile
of the analysed spatio-temporal point pattern on a linear network
--------------------------------------------------
Analysed pattern X: 65 points
4 outlying points
> plot(res)
> infl(res)
\end{example}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Art12.pdf}
\caption{Output of the local diagnostics via the \texttt{plot.localdiag} function.}
\label{fig:p12}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=.95\textwidth]{Art13.pdf}
\caption{Output of the local diagnostics via the \texttt{infl.localdiag} function.}
\label{fig:p13}
\end{figure}
\section{Conclusions}\label{sec:concl}
This work has introduced the \textbf{stopp} \texttt{R} package, which deals with spatio-temporal point processes occurring either on the Euclidean space or on some specific linear networks, such as streets of a city.
The package includes functions for summarizing, plotting, and performing various analyses on point processes; these functions largely implement approaches proposed in recent scientific literature. Modelling, statistical inference, and simulation of spatio-temporal point processes on Euclidean space and linear networks, with a focus on their local properties, are the core topics of this research stream and, in turn, of the package.
To start with, we set the notation for spatio-temporal point processes occurring both on linear networks and in Euclidean spaces. We then reviewed the main methods implemented in the \textbf{stopp} package for dealing with simulations, data, and point process objects. After recalling the definition of Local Indicators of Spatio-Temporal Association (LISTA) functions, we introduced the new functions that compute the LISTAs on linear networks. We then illustrated functions to run a local test to evaluate the local differences between two point patterns occurring on the same metric space. Moreover, many examples of the models included in the package were provided, including models for separable Poisson processes on both Euclidean space and networks, global and local non-separable inhomogeneous Poisson processes, and LGCPs. Finally, techniques for performing both global and local diagnostics on such models (but not limited to those) for point patterns on linear networks and planar spaces were provided.
The package tools are not exhaustive. This work represents the creation of a toolbox for different kinds of spatio-temporal analyses to be performed on observed point patterns, following the growing stream of literature on point process theory.
The presented work contributes to the existing literature by framing many of the most widespread methods for the analysis of spatio-temporal point processes into a unique package, which is intended to foster many further extensions.
\section{Introduction}\label{sec-intro}
High-dimensional data sets have posed both statistical and computational challenges in recent decades \citep{babu2004some, fan2014challenges, wainwright2014structured}. In the ``large $n$, small $m$'' regime, where $n$ refers to the problem dimension and $m$ refers to the sample size, it is well known that obtaining consistent estimators is impossible unless the model is endowed with some additional structure. Consequently, on the statistical side, a variety of studies have imposed some low-dimensional constraint on the parameter space, such as sparse vectors \citep{bickel2009simultaneous}, low-rank matrices \citep{recht2010guaranteed}, or structured covariance matrices \citep{li2018efficient}. On the computational side, many well-known estimators are formulated as solutions to optimization problems comprised of a loss function with a weighted regularizer, where the loss function measures the data fidelity and the regularizer represents the low-dimensional constraint. Estimators with this formulation are usually referred to as regularized $M$-estimators \citep{agarwal2012fast, negahban2012unified}. For instance, in high-dimensional linear regression, the Lasso \citep{tibshirani1996regression} is based upon solving a convex optimization problem, formed by a combination of the least squares loss and the $\ell_1$-norm regularizer. Significant progress has been achieved in studying the recovery bounds of convex $M$-estimators and designing both effective and efficient numerical algorithms for optimization; see \cite{agarwal2012fast, bickel2009simultaneous, negahban2012unified} and references therein.
Though convex $M$-estimation problems have achieved great success, nonconvex regularized $M$-estimators have recently attracted increasing attention thanks to the better statistical properties they might enjoy. As an example, nonconvex regularizers such as the smoothly clipped absolute deviation penalty (SCAD) \citep{fan2001variable} and minimax concave penalty (MCP) \citep{zhang2010nearly} can eliminate the estimation bias to some extent and achieve more refined statistical rates of convergence, while the convex $\ell_1$-norm regularizer always induces significant estimation bias for parameters with large absolute values \citep{wang2014optimal, zhang2008sparsity}. Meanwhile, the loss function can also be nonconvex in real applications, such as error-in-variables linear regression; see \cite{carroll2006measurement, loh2012high} and references therein.
Standard statistical results for nonconvex $M$-estimators often only provide recovery bounds for global solutions \citep{fan2001variable, zhang2010nearly, zhang2012general}, while several numerical methods previously proposed to optimize nonconvex functions, such as local quadratic approximation (LQA) \citep{fan2001variable}, minorization-maximization (MM) \citep{hunter2005variable}, local linear approximation (LLA) \citep{zou2008one}, and coordinate descent \citep{breheny2011coordinate}, may only attain local solutions. This results in a noticeable gap between theory and practice. Therefore, it is necessary to analyse the statistical properties of the local solutions obtained by certain numerical procedures.
Recently, researchers in \cite{wang2014optimal} and \cite{loh2015regularized} have independently investigated the local solutions of nonconvex regularized $M$-estimators that can be formulated as
\begin{equation}\label{M-esti-intr}
\hat{\beta} \in \argmin_{\beta\in \Omega\subseteq {\mathbb{R}}^n}\{{\mathcal{L}}_m(\beta)+{\mathcal{R}}_\lambda(\beta)\},
\end{equation}
where ${\mathcal{L}}_m(\cdot)$ is the loss function and ${\mathcal{R}}_\lambda(\cdot)$ is the regularizer with regularization parameter $\lambda$. Both of ${\mathcal{L}}_m$ and ${\mathcal{R}}_\lambda$ can be nonconvex. \cite{wang2014optimal} proposed an approximate regularization path-following method leveraging the proximal gradient method \citep{nesterov2007gradient} within each path-following stage. The recovery bounds for all the approximate local solutions along the full regularization path were established. \cite{loh2015regularized} considered the $M$-estimator \eqref{M-esti-intr} with a convex side constraint $\Omega=\{\beta: g(\beta)\leq r\}$. They proved that any stationary points of the nonconvex optimization problem lie within statistical precision of the true parameter and modified the proximal gradient method to obtain a near-global optimum.
However, these works both focus on the assumption that the underlying parameter $\beta^*$ is exactly sparse, i.e., $\|\beta^*\|_0=s$, which may be too strict for some problems. Let us consider the standard linear regression $y=\sum_{j=1}^n\beta^*_jx_j+e$ with $e$ being the observation noise. The exact sparsity assumption means that only a small subset of entries of the regression coefficients $\beta^*_j$'s are nonzero or, equivalently, that most of the covariates $x_j$'s have no effect at all on the response $y$, which is sometimes too restrictive in real applications. For instance, in image processing, it is standard that wavelet coefficients for images always exhibit an exponential decay, but need not be almost $0$ (see, e.g., \cite{joshi1995image, lustig2007sparse}). Other applications in high-dimensional scenarios include signal processing \citep{candes2006stable}, medical imaging reconstruction \citep{lustig2008compressed}, data mining \citep{orre2000bayesian} and so on, where it is not reasonable to impose the exact sparsity assumption on the model space. Therefore, it is necessary to investigate the statistical properties of nonconvex $M$-estimators when the exact sparsity assumption does not hold. In addition, as for the algorithmic aspect, the numerical procedure proposed in \cite{loh2015regularized} is based on regularity conditions on the regularizer. Particularly, for commonly-used regularizers such as the SCAD and MCP, the side constraint is set as $g(\cdot)=\frac{1}{\lambda}\left\{{\mathcal{R}}_\lambda(\cdot)+\frac{\mu}{2}\|\cdot\|_2^2\right\}$ for a suitable constant $\mu>0$, and thus $g(\cdot)$ is a piecewise function. The iteration takes the form
\begin{equation}\label{Loh-ite}
\beta^{t+1}\in \argmin_{\beta\in {\mathbb{R}}^n, g(\beta)\leq r}\left\{\frac{1}{2}\Big{\|}\beta-\left(\beta^t-\frac{\nabla{\bar{{\mathcal{L}}}_m(\beta^t)}}{v}\right)\Big{\|}_2^2+\frac{\lambda}{v}g(\beta)\right\},
\end{equation}
where $\bar{{\mathcal{L}}}_m(\cdot)={\mathcal{L}}_m(\cdot)-\frac{\mu}{2}\|\cdot\|_2^2$, and $\frac{1}{v}$ is the step size. This process involves projection onto the sublevel set of a piecewise function, which may be computationally expensive in the high-dimensional scenario.
Our main purpose in the present paper is to deal with the more general case in which the coefficients of the true parameter are not almost all zeros, and to design an algorithm with better recovery performance. More precisely, we assume that for $q\in [0,1]$ fixed, the $\ell_q$-norm of $\beta^*$ defined as $\|\beta^*\|_q^q:=\sum_{j=1}^n|\beta^*_j|^q$ is bounded from above by a constant. Note that this assumption reduces to the exact sparsity assumption aforementioned when $q=0$. When $q\in (0,1]$, this type of sparsity is known as soft sparsity and has been used to analyse the minimax rate for linear regression \citep{raskutti2011minimax}. In the aspect of computation, we apply the proximal gradient method \citep{nesterov2007gradient} to solve a modified version of the nonconvex optimization problem \eqref{M-esti-intr}. The main contributions of this paper are as follows. First, under the general sparsity assumption on the true parameter $\beta^*$ (i.e., $\|\beta^*\|_q^q\leq R_q,\ q\in[0,1]$), we provide the $\ell_2$ recovery bound for any stationary point $\tilde{\beta}$ of the nonconvex optimization problem as $\|\tilde{\beta}-\beta^*\|_2^2=O(\lambda^{2-q}R_q)$. When $\lambda$ is chosen as $\lambda=\Omega\left(\sqrt{\frac{\log n}{m}}\right)$, the recovery bound implies that any stationary point is statistically consistent; see Theorem \ref{thm-sta}. Second, we consider the more general case in which the regularizer can be decomposed as ${\mathcal{R}}_\lambda(\cdot)={\mathcal{H}}_\lambda(\cdot) +{\mathcal{Q}}_\lambda(\cdot)$, where ${\mathcal{H}}_\lambda$ is convex and ${\mathcal{Q}}_\lambda$ is concave. By virtue of this assumption, we establish that the proximal gradient algorithm converges linearly to a global solution of the nonconvex regularized problem; see Theorem \ref{thm-algo}.
Since the proposed algorithm relies highly on the decomposition of the regularizer, this more general condition provides us the potential to consider different decompositions for the regularizer so as to construct different numerical iterations. In particular, for the SCAD and MCP regularizer, we can choose ${\mathcal{H}}_\lambda(\cdot)=\lambda\|\cdot\|_1$, then the iterative sequence is generated as follows
\begin{equation}\label{Mine-ite}
\beta^{t+1}\in \argmin_{\beta\in {\mathbb{R}}^n, \|\beta\|_1\leq r}\left\{\frac{1}{2}\Big{\|}\beta-\left(\beta^t-\frac{\nabla{\bar{{\mathcal{L}}}_m(\beta^t)}}{v}\right)\Big{\|}_2^2+\frac{\lambda}{v}\|\beta\|_1\right\},
\end{equation}
where $\bar{{\mathcal{L}}}_m(\cdot)={\mathcal{L}}_m(\cdot)+{\mathcal{Q}}_\lambda(\cdot)$, and $\frac{1}{v}$ is the step size. This numerical procedure involves a soft-thresholding operator and an $\ell_2$ projection onto the $\ell_1$-ball of radius $r$, and thus requires lower computational cost than \eqref{Loh-ite}. The advantage of iteration \eqref{Mine-ite} is illustrated in Figs. \ref{f-com-add} and \ref{f-com-mis}.
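To make the per-iteration cost concrete, the update \eqref{Mine-ite} can be sketched as follows (an illustrative Python version under our notation, not a definitive implementation): a gradient step on $\bar{{\mathcal{L}}}_m$, soft thresholding at level $\lambda/v$, and a sorting-based Euclidean projection onto the $\ell_1$-ball.

```python
import numpy as np

def soft_threshold(z, tau):
    """Componentwise soft thresholding, the proximal map of tau*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def project_l1_ball(z, radius):
    """Euclidean projection onto {beta : ||beta||_1 <= radius}
    (standard sorting-based routine)."""
    if np.abs(z).sum() <= radius:
        return z
    u = np.sort(np.abs(z))[::-1]          # magnitudes in decreasing order
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1.0)
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def prox_step(beta, grad_Lbar, v, lam, radius):
    """One update of Eq. (Mine-ite): gradient step on bar-L_m, soft
    thresholding at level lam/v, then l1-ball projection.  Composing the
    two maps is valid here because both shrink magnitudes while
    preserving signs, so the result has the required thresholded form."""
    z = beta - grad_Lbar / v
    return project_l1_ball(soft_threshold(z, lam / v), radius)
```

Each step thus costs $O(n\log n)$ (dominated by the sort), in contrast to \eqref{Loh-ite}, whose projection onto the sublevel set of a piecewise function is harder to carry out.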
The remainder of this paper is organized as follows. In Section \ref{sec-prob}, we provide background on nonconvex $M$-estimation problems and some regularity conditions on the loss function and the regularizer. In Section \ref{sec-main}, we establish our main results on statistical consistency and the algorithmic rate of convergence. In Section \ref{sec-simul}, we perform several numerical experiments to demonstrate our theoretical results. We conclude this paper in Section \ref{sec-concl}. Technical proofs are presented in the Appendix.
We end this section by introducing some notations for future reference. We use Greek lowercase letters $\beta,\delta$ to denote the vectors, capital letters $J,S$ to denote the index sets. For a vector $\beta\in {\mathbb{R}}^n$ and an index set $S\subseteq \{1,2,\dots,n\}$, we use $\beta_S$ to denote the vector in which $(\beta_S)_i=\beta_i$ for $i\in S$ and zero elsewhere, $|S|$ to denote the cardinality of $S$, and $S^c=\{1,2,\dots,n\}\setminus S$ to denote the complement of $S$. A vector $\beta$ is supported on $S$ if and only if $S=\{i\in \{1,2,\dots,n\}:\beta_i\neq 0\}$, and $S$ is the support of $\beta$ denoted by ${\text{supp}}(\beta)$, namely ${\text{supp}}(\beta)=S$. For $m\geq 1$, let $\mathbb{I}_m$ stand for the $m\times m$ identity matrix. For a matrix $X\in {\mathbb{R}}^{m\times n}$, let $X_{ij}\ (i=1,\dots,m,j=1,2,\cdots,n)$ denote its $ij$-th entry, $X_{i\cdot}\ (i=1,\dots,m)$ denote its $i$-th row, $X_{\cdot j}\ (j=1,2,\cdots,n)$ denote its $j$-th column, and diag$(X)$ stand for the diagonal matrix with its diagonal entries equal to $X_{11},X_{22},\cdots,X_{nn}$. We write $\lambda_{\text{min}}(X)$ and $\lambda_{\text{max}}(X)$ to denote the minimal and maximum eigenvalues of a matrix $X$, respectively. For a function $f:{\mathbb{R}}^n\to {\mathbb{R}}$, $\nabla f$ is used to denote a gradient or subgradient depending on whether $f$ is differentiable or nondifferentiable but convex, respectively.
\section{Problem setup}\label{sec-prob}
In this section we begin with a precise formulation
of the problem, and then impose some suitable assumptions on the loss function as well as the regularizer.
\subsection{Nonconvex regularized $M$-estimation}
Following the work of \cite{negahban2012unified, wainwright2014structured}, we first review some basic concepts on the $M$-estimation problem. Let $Z_1^m:=(Z_1,Z_2,\cdots,Z_m)$ denote a sample of $m$ independent and identically distributed observations of a given random variable $Z:\mathcal{S} \to \mathcal{Z}$ defined on the probability space $(\mathcal{S},\mathcal{F},\mathbb{P})$, where $\mathbb{P}$ lies within a parameterized set $\mathcal{P}=\{\mathbb{P}_\beta: \beta\in \Theta \subseteq {\mathbb{R}}^n\}$. It is always assumed that there is a ``true'' probability distribution $\mathbb{P}_{\beta^*}\in \mathcal{P}$ that generates the observed data $Z_1^m$, and the goal is to estimate the unknown true parameter $\beta^*\in \Theta$. To this end, a loss function ${\mathcal{L}}_m:{\mathbb{R}}^n\times \mathcal{Z}^m\to {\mathbb{R}}_+$ is introduced, whose value ${\mathcal{L}}_m(\beta;Z_1^m)$ measures the ``fit'' between any parameter $\beta\in \Theta$ and the observed data set $Z_1^m\in \mathcal{Z}^m$, with a smaller value meaning a better fit.
However, when the number of observations $m$ is smaller than the ambient dimension $n$, consistent estimators can no longer be obtained. Fortunately, there is empirical evidence showing that the underlying true parameter $\beta^*$ in the high-dimensional space is sparse in a wide range of applications; see, e.g., \cite{joshi1995image, lustig2007sparse}. One popular way to measure the degree of sparsity is to use the $\ell_q$-ball\footnote{Strictly speaking, when $q\in[0,1)$, these sets are not real ``balls'', as they fail to be convex.}, which is defined, for $q\in [0,1]$ and a radius $R_q>0$, as
\begin{equation}\label{lq-ball}
{\mathbb{B}}_q(R_q):=\{\beta\in {\mathbb{R}}^n:\|\beta\|_q^q=\sum_{j=1}^n|\beta_j|^q\leq R_q\}.
\end{equation}
Note that the $\ell_0$-ball corresponds to the case of exact sparsity, meaning that any vector $\beta\in {\mathbb{B}}_0(R_0)$ is supported on a set of cardinality at most $R_0$, while the $\ell_q$-ball for $q\in (0,1]$ corresponds to the case of soft sparsity, which enforces a certain decay rate on the ordered elements of $\beta\in {\mathbb{B}}_q(R_q)$. The exact sparsity assumption has been widely used for establishing statistical recovery bounds, while the soft sparsity assumption attracts relatively little attention. Throughout this paper, we fix $q\in [0,1]$, and assume that the true parameter $\beta^*\in {\mathbb{B}}_q(R_q)$ unless otherwise specified.
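As a toy illustration of the difference between exact ($q=0$) and soft ($q\in(0,1]$) sparsity, the following Python helper (our own, purely for illustration) tests membership in the $\ell_q$-ball \eqref{lq-ball}.

```python
import numpy as np

def in_lq_ball(beta, q, R_q):
    """Membership test for the lq-ball B_q(R_q) of Eq. (lq-ball):
    q = 0 counts nonzero entries (exact sparsity), while q in (0, 1]
    bounds sum_j |beta_j|**q (soft sparsity)."""
    if q == 0:
        return int(np.count_nonzero(beta)) <= R_q
    return float(np.sum(np.abs(beta) ** q)) <= R_q
```

For instance, a vector with a few large and many small-but-nonzero coordinates can lie in ${\mathbb{B}}_q(R_q)$ for some $q\in(0,1]$ while violating the $\ell_0$ constraint.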
Now for the purpose of estimating $\beta^*$ based on the observed data $Z_1^m$, many researchers proposed to consider the regularized $M$-estimator (see, e.g., \cite{agarwal2012fast, negahban2012unified}), which is formulated as
\begin{equation}\label{M-esti}
\hat{\beta} \in \argmin_{\beta\in \Omega\subseteq {\mathbb{R}}^n}\{{\mathcal{L}}_m(\beta;Z_1^m)+{\mathcal{R}}_\lambda(\beta)\},
\end{equation}
where $\lambda>0$ is a user-defined regularization parameter, and ${\mathcal{R}}_\lambda:{\mathbb{R}}^n\to {\mathbb{R}}$ is a regularizer depending on $\lambda$ and is assumed to be separable across coordinates, written as ${\mathcal{R}}_\lambda(\beta)=\sum_{j=1}^n\rho_\lambda(\beta_j)$ with the decomposable component $\rho_\lambda:{\mathbb{R}}\to {\mathbb{R}}$ specified in the following. The loss function ${\mathcal{L}}_m$ is required to be differentiable, but do not need to be convex. The regularizer ${\mathcal{R}}_\lambda$, which serves to impose certain type of sparsity constraint on the estimator, can also be nonconvex. Due to this potential nonconvexity, we include a side constraint $g:{\mathbb{R}}^n\to {\mathbb{R}}_+$, which is required to be convex and satisfy
\begin{equation}\label{g-l1}
g(\beta)\geq \omega\|\beta\|_1,\quad \forall\beta\in {\mathbb{R}}^n
\end{equation}
for some positive number $\omega>0$. The feasible region is then specialized as
\begin{equation}\label{feasible}
\Omega:=\{\beta\in {\mathbb{R}}^n: g(\beta)\leq r\}.
\end{equation}
The parameter $r>0$ must be chosen carefully to ensure $\beta^*\in \Omega$. Any point $\beta\in \Omega$ also satisfies $\|\beta\|_1\leq r/\omega$, and provided that ${\mathcal{L}}_m$ and ${\mathcal{R}}_\lambda$ are continuous, the Weierstrass extreme value theorem guarantees that a global solution $\hat{\beta}$ always exists. Hereinafter, in order to ease the notation, we adopt the shorthand ${\mathcal{L}}_m(\cdot)$ for ${\mathcal{L}}_m(\cdot;Z_1^m)$.
\subsection{Nonconvex loss function and restricted strong convexity/smoothness}
Throughout this paper, the loss function ${\mathcal{L}}_m$ is required to be differentiable with respect to $\beta$, but need not be convex. Instead, some weaker conditions known as restricted strong convexity (RSC) and restricted strong smoothness (RSM) are required, which have been discussed in detail in the literature \citep{agarwal2012fast, loh2015regularized}. Specifically, the RSC/RSM conditions imposed on the loss function ${\mathcal{L}}_m$ are the same as those used in \cite{loh2015regularized}; thus we only provide the expressions here for completeness. For more detailed discussions, see \cite{loh2015regularized}.
We begin with defining the first-order Taylor series expansion around a vector $\beta'$ in the direction of $\beta$ as
\begin{equation}\label{taylor}
{\mathcal{T}}(\beta,\beta'):={\mathcal{L}}_m(\beta)-{\mathcal{L}}_m(\beta')-\langle \nabla{\mathcal{L}}_m(\beta'),\beta-\beta' \rangle.
\end{equation}
Concretely, the RSC condition takes two forms: one is used for the analysis of statistical recovery bounds and is defined as
\begin{subequations}\label{sta-RSC}
\begin{numcases}{\langle \nabla{\mathcal{L}}_m(\beta^*+\delta)-\nabla{\mathcal{L}}_m(\beta^*),\delta \rangle\geq}
\gamma_1\|\delta\|_2^2-\tau_1\frac{\log n}{m}\|\delta\|_1^2,\quad \forall\|\delta\|_2\leq 3,\label{sta-RSC1}\\
\gamma_2\|\delta\|_2-\tau_2\sqrt{\frac{\log n}{m}}\|\delta\|_1,\quad \forall\|\delta\|_2>3,\label{sta-RSC2}
\end{numcases}
\end{subequations}
where $\gamma_i\ (i=1,2)$ are positive constants and $\tau_i\ (i=1,2)$ are nonnegative constants; the other is used for the analysis of the algorithmic convergence rate and is defined in terms of the Taylor series error \eqref{taylor} as
\begin{subequations}\label{alg-RSC}
\begin{numcases}{{\mathcal{T}}(\beta,\beta')\geq}
\gamma_3\|\beta-\beta'\|_2^2-\tau_3\frac{\log n}{m}\|\beta-\beta'\|_1^2,\quad \forall\|\beta-\beta'\|_2\leq 3,\label{alg-RSC1}\\
\gamma_4\|\beta-\beta'\|_2-\tau_4\sqrt{\frac{\log n}{m}}\|\beta-\beta'\|_1,\quad \forall\|\beta-\beta'\|_2>3,\label{alg-RSC2}
\end{numcases}
\end{subequations}
where $\gamma_i\ (i=3,4)$ are positive constants and $\tau_i\ (i=3,4)$ are nonnegative constants. In addition, the RSM condition is also defined by the Taylor series error \eqref{taylor} as follows:
\begin{equation}\label{alg-RSM}
{\mathcal{T}}(\beta,\beta')\leq \gamma_5\|\beta-\beta'\|_2^2+\tau_5\frac{\log n}{m}\|\beta-\beta'\|_1^2,\quad \forall \beta,\beta'\in {\mathbb{R}}^n,
\end{equation}
where $\gamma_5$ is a positive constant and $\tau_5$ is a nonnegative constant.
\subsection{Nonconvex regularizer and regularity conditions}
Now we impose some regularity conditions on the nonconvex regularizer, which are defined in terms of the decomposable component $\rho_\lambda:{\mathbb{R}}\to {\mathbb{R}}$.
\begin{Assumption}\mbox{}\par\label{asup-regu}
\begin{enumerate}[\rm(i)]
\item $\rho_\lambda$ satisfies $\rho_\lambda(0)=0$ and is symmetric around zero, that is, $\rho_\lambda(t)=\rho_\lambda(-t)$ for all $t\in {\mathbb{R}}$;
\item On the nonnegative real line, $\rho_\lambda$ is nondecreasing;
\item For $t>0$, the function $t\mapsto \frac{\rho_\lambda(t)}{t}$ is nonincreasing in $t$;
\item $\rho_\lambda$ is differentiable for all $t\neq 0$ and subdifferentiable at $t=0$, with $\lim\limits_{t\to 0^+}\rho'_\lambda(t)=\lambda L$;
\item $\rho_\lambda$ is subadditive, that is, $\rho_\lambda(t+t')\leq \rho_\lambda(t)+\rho_\lambda(t')$ for all $t,t'\in {\mathbb{R}}$;
\item $\rho_\lambda$ can be decomposed as $\rho_\lambda(\cdot)=h_\lambda(\cdot)+q_\lambda(\cdot)$, where $h_\lambda$ is convex, and $q_\lambda$ is concave with $q_\lambda(0)=q'_\lambda(0)=0$, $q_\lambda(t)=q_\lambda(-t)$ for all $t\in {\mathbb{R}}$, and for $t>t'$, there exists two constants $\mu_1\geq 0$ and $\mu_2\geq 0$ such that
\begin{equation}\label{cond-qlambda}
-\mu_1\leq \frac{q'_\lambda(t)-q'_\lambda(t')}{t-t'}\leq -\mu_2\leq 0.
\end{equation}
\end{enumerate}
\end{Assumption}
Conditions (i)-(iv) are the same as those proposed in \cite{loh2015regularized}, and we here explicitly add the condition of subadditivity, though it is relatively mild and is satisfied by a wide range of regularizers. Note that the last condition is a generalization of the weak convexity assumption \cite[Assumption 1(v)]{loh2015regularized}: there exists $\mu>0$ such that $\rho_{\lambda,\mu}(t):=\rho_\lambda(t)+\frac{\mu}{2}t^2$ is convex.
As we will see in the next section, one of the main advantages of adopting condition (vi) in Assumption \ref{asup-regu} is that the proposed algorithm for solving the optimization problem \eqref{M-esti} depends heavily on the decomposition in Assumption \ref{asup-regu}(vi) of the regularizer. For general regularizers, condition (vi) provides us with the potential to consider different decompositions so as to construct different estimators as well as iterations.
In particular, as is shown in Example \ref{decomp}, besides the natural decomposition for the SCAD and MCP regularizers inspired by \citet[Assumption 1(v)]{loh2015regularized}, there exists another, simpler decomposition, which leads to iterations with simpler forms. Moreover, it is easy to construct functions that satisfy our condition (vi) while failing to satisfy \citet[Assumption 1(v)]{loh2015regularized}; we omit the construction here since many commonly-used regularizers such as the $\ell_1$-norm regularizer $\lambda\|\cdot\|_1$ (Lasso), SCAD and MCP satisfy both our condition (vi) and \citet[Assumption 1(v)]{loh2015regularized}.
It is easy to check that the Lasso regularizer satisfies all the conditions in Assumption \ref{asup-regu}. Other nonconvex regularizers such as SCAD and MCP are also contained in our framework. More precisely, it has been shown in \cite{loh2015regularized} that both SCAD and MCP satisfy conditions (i)-(v) with $L=1$ in condition (iv). To verify condition (vi), we provide below an example showing two different decompositions.
\begin{Example}\label{decomp}
\indent Consider the SCAD regularizer:
\begin{equation*}
\rho_\lambda(t):=\left\{
\begin{array}{l}
\lambda|t|,\ \ \text{if}\ \ |t|\leq \lambda,\\
-\frac{t^2-2a\lambda|t|+\lambda^2}{2(a-1)},\ \ \text{if}\ \ \lambda<|t|\leq a\lambda,\\
\frac{(a+1)\lambda^2}{2},\ \ \text{if}\ \ |t|>a\lambda,
\end{array}
\right.
\end{equation*}
where $a>2$ is a fixed parameter, and the MCP regularizer:
\begin{equation*}
\rho_\lambda(t):=\left\{
\begin{array}{l}
\lambda|t|-\frac{t^2}{2b},\ \ \text{if}\ \ |t|\leq b\lambda,\\
\frac{b\lambda^2}{2},\ \ \text{if}\ \ |t|>b\lambda,
\end{array}
\right.
\end{equation*}
where $b>0$ is a fixed parameter.
To verify condition (vi), the first way is to set
\begin{equation}\label{SCAD-h-1}
h_\lambda(t)=\left\{
\begin{array}{l}
\lambda|t|+\frac{t^2}{2(a-1)},\ \ \text{if}\ \ |t|\leq \lambda,\\
\frac{2a\lambda|t|-\lambda^2}{2(a-1)},\ \ \text{if}\ \ \lambda<|t|\leq a\lambda,\\
\frac{t^2}{2(a-1)}+\frac{(a+1)\lambda^2}{2},\ \ \text{if}\ \ |t|>a\lambda,
\end{array}
\right.
\qquad q_\lambda(t)=-\frac{t^2}{2(a-1)},
\end{equation}
for SCAD, and
\begin{equation}\label{MCP-h-1}
h_\lambda(t)=\left\{
\begin{array}{l}
\lambda|t|,\ \ \text{if}\ \ |t|\leq b\lambda,\\
\frac{t^2}{2b}+\frac{b\lambda^2}{2},\ \ \text{if}\ \ |t|>b\lambda,
\end{array}
\right.
\qquad q_\lambda(t)=-\frac{t^2}{2b},
\end{equation}
for MCP; then both regularizers satisfy condition (vi). In fact, this decomposition is inspired by \citet[Assumption 1(v)]{loh2015regularized}.
The other way is to set $h_\lambda(\cdot)=\lambda|\cdot|$ for both SCAD and MCP, then
\begin{equation}\label{SCAD-q-2}
q_\lambda(t)=\left\{
\begin{array}{l}
0,\ \ \text{if}\ \ |t|\leq \lambda,\\
-\frac{t^2-2\lambda|t|+\lambda^2}{2(a-1)},\ \ \text{if}\ \ \lambda<|t|\leq a\lambda,\\
\frac{(a+1)\lambda^2}{2}-\lambda|t|,\ \ \text{if}\ \ |t|>a\lambda,
\end{array}
\right.
\end{equation}
for SCAD with $\mu_1=\frac{1}{a-1}$ and $\mu_2=0$, and
\begin{equation}\label{MCP-q-2}
q_\lambda(t)=\left\{
\begin{array}{l}
-\frac{t^2}{2b},\ \ \text{if}\ \ |t|\leq b\lambda,\\
\frac{b\lambda^2}{2}-\lambda|t|,\ \ \text{if}\ \ |t|> b\lambda,
\end{array}
\right.
\end{equation}
for MCP with $\mu_1=\frac{1}{b}$ and $\mu_2=0$, respectively. Hence, both the SCAD and MCP regularizers satisfy condition (vi) in Assumption \ref{asup-regu}.
\end{Example}
We shall see in the next section that the second decomposition plays an important role in constructing iterations of simpler form, which can be solved more efficiently, when SCAD or MCP is used as the regularizer.
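As a sanity check, both decompositions in Example \ref{decomp} can be verified numerically. The following Python sketch (the parameter values $a=3.7$ and $b=2$ and all function names are illustrative choices, not prescribed by the text) implements $\rho_\lambda$ for SCAD and MCP together with the two decompositions and confirms $\rho_\lambda=h_\lambda+q_\lambda$ on a grid.

```python
import numpy as np

def scad(t, lam, a=3.7):
    """SCAD penalty rho_lambda(t); a > 2 is a fixed parameter."""
    u = np.abs(t)
    return np.where(u <= lam, lam * u,
           np.where(u <= a * lam,
                    -(u**2 - 2*a*lam*u + lam**2) / (2*(a - 1)),
                    (a + 1) * lam**2 / 2))

def scad_h1(t, lam, a=3.7):
    """Convex part h_lambda of the first SCAD decomposition."""
    u = np.abs(t)
    return np.where(u <= lam, lam*u + u**2 / (2*(a - 1)),
           np.where(u <= a * lam, (2*a*lam*u - lam**2) / (2*(a - 1)),
                    u**2 / (2*(a - 1)) + (a + 1) * lam**2 / 2))

def scad_q1(t, lam, a=3.7):
    """Concave part of the first SCAD decomposition: q = -t^2/(2(a-1))."""
    return -t**2 / (2*(a - 1))

def scad_q2(t, lam, a=3.7):
    """Concave part of the second SCAD decomposition, with h_lambda(t) = lam*|t|."""
    u = np.abs(t)
    return np.where(u <= lam, 0.0,
           np.where(u <= a * lam, -(u - lam)**2 / (2*(a - 1)),
                    (a + 1) * lam**2 / 2 - lam * u))

def mcp(t, lam, b=2.0):
    """MCP penalty rho_lambda(t); b > 0 is a fixed parameter."""
    u = np.abs(t)
    return np.where(u <= b * lam, lam*u - u**2 / (2*b), b * lam**2 / 2)

def mcp_q2(t, lam, b=2.0):
    """Concave part of the second MCP decomposition, with h_lambda(t) = lam*|t|."""
    u = np.abs(t)
    return np.where(u <= b * lam, -u**2 / (2*b), b * lam**2 / 2 - lam * u)
```

Both decompositions recover the same penalty, but the second keeps $h_\lambda$ equal to the plain $\ell_1$ penalty, which is what makes the simple soft-thresholding iteration available later.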
Now for notational simplicity we define
\begin{equation}\label{HQ-lambda}
{\mathcal{H}}_\lambda(\beta):=\sum_{j=1}^{n}h_\lambda(\beta_j)\quad \mbox{and}\quad
{\mathcal{Q}}_\lambda(\beta):=\sum_{j=1}^{n}q_\lambda(\beta_j),\quad \forall \beta\in{\mathbb{R}}^n,
\end{equation}
that is, ${\mathcal{H}}_\lambda$ and ${\mathcal{Q}}_\lambda$ denote the decomposable convex component and concave component of the nonconvex regularizer ${\mathcal{R}}_\lambda$, respectively.
\section{Main results}\label{sec-main}
In this section, we establish our main results, including the statistical guarantee and the algorithmic convergence rate, together with their proofs. We begin with several lemmas that will be used in the proofs of Theorems \ref{thm-sta} and \ref{thm-algo}. Recall the true parameter $\beta^*\in {\mathbb{B}}_q(R_q)$. Then, for any $\eta>0$, we define the set associated with $\beta^*$:
\begin{equation}\label{S-eta}
S_\eta:=\{j\in\{1,2,\cdots,n\}:|\beta^*_j|>\eta\}.
\end{equation}
Then, by a standard argument (see, e.g., \cite{negahban2012unified}), one checks that
\begin{equation}\label{s-eta}
|S_\eta|\leq \eta^{-q}R_q\quad \mbox{and}\quad \|\beta^*_{S_\eta^c}\|_1\leq\eta^{1-q}R_q.
\end{equation}
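The two bounds in \eqref{s-eta} can be checked directly on any concrete vector. The sketch below (the random vector, $q=0.5$, and $\eta=0.1$ used in the accompanying check are illustrative choices) computes $R_q=\sum_j|\beta^*_j|^q$ together with both sides of the inequalities.

```python
import numpy as np

def lq_sparsity_bounds(beta_star, q, eta):
    """Return (|S_eta|, eta^{-q} R_q, ||beta*_{S_eta^c}||_1, eta^{1-q} R_q),
    where R_q = sum_j |beta*_j|^q, so beta* lies in the l_q-ball B_q(R_q)."""
    Rq = np.sum(np.abs(beta_star) ** q)
    S = np.abs(beta_star) > eta          # indices j with |beta*_j| > eta
    card = int(S.sum())
    tail = np.abs(beta_star[~S]).sum()   # l_1-norm of the small coordinates
    return card, eta ** (-q) * Rq, tail, eta ** (1 - q) * Rq
```

Both bounds hold deterministically for any vector once $R_q$ is computed from the vector itself, which is exactly the content of \eqref{s-eta}.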
\begin{Lemma}\label{lem-regu}
Let $\beta,\ \delta\in {\mathbb{R}}^n$ and $S\subseteq \{1,2,\cdots,n\}$ be any subset with $|S|=s$.
Let $J\subseteq \{1,2,\cdots,n\}$ be the index set of the $s$ largest elements of $\delta$ in absolute value.
Then one has that
\begin{equation}\label{lem-regu-1}
{\mathcal{R}}_\lambda(\beta)-{\mathcal{R}}_\lambda(\beta+\delta)\leq
\lambda L(\|\delta_J\|_1-\|\delta_{J^c}\|_1)+2\lambda L\|\beta_{S^c}\|_1.
\end{equation}
\end{Lemma}
\begin{proof}
By the decomposability and the subadditivity of the regularizer ${\mathcal{R}}_\lambda$, one has that
\begin{equation}\label{lem-regu-2}
\begin{aligned}
{\mathcal{R}}_\lambda(\beta)-{\mathcal{R}}_\lambda(\beta+\delta)
&\leq {\mathcal{R}}_\lambda(\delta_S)+{\mathcal{R}}_\lambda(\beta_{S^c})-{\mathcal{R}}_\lambda(\beta_{S^c}+\delta_{S^c})\\ &\leq {\mathcal{R}}_\lambda(\delta_S)-{\mathcal{R}}_\lambda(\delta_{S^c})+2{\mathcal{R}}_\lambda(\beta_{S^c})\\
&\leq {\mathcal{R}}_\lambda(\delta_J)-{\mathcal{R}}_\lambda(\delta_{J^c})+2{\mathcal{R}}_\lambda(\beta_{S^c}),
\end{aligned}
\end{equation}
where the last inequality follows from the definition of the set $J$. Then it follows from \citet[Lemma 6]{loh2013local} (with $A=J$ and $k=s$) and \citet[Lemma 4(a)]{loh2015regularized} that
$${\mathcal{R}}_\lambda(\delta_J)-{\mathcal{R}}_\lambda(\delta_{J^c})\leq \lambda L(\|\delta_J\|_1-\|\delta_{J^c}\|_1),$$
$${\mathcal{R}}_\lambda(\beta_{S^c})\leq \lambda L\|\beta_{S^c}\|_1.$$
Combining these two inequalities with \eqref{lem-regu-2}, we obtain \eqref{lem-regu-1}. The proof is complete.
\end{proof}
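For the Lasso regularizer ${\mathcal{R}}_\lambda(\beta)=\lambda\|\beta\|_1$ (so $L=1$), inequality \eqref{lem-regu-1} can be verified on random instances. The following sketch (dimensions, seed, and $\lambda$ are arbitrary illustrative choices) evaluates the gap between the two sides, which should always be nonnegative.

```python
import numpy as np

def lasso_lemma_gap(beta, delta, S, lam):
    """Right-hand side minus left-hand side of the lemma inequality for the Lasso,
    with R_lambda(.) = lam * ||.||_1 and L = 1."""
    n, s = beta.size, len(S)
    J = np.argsort(-np.abs(delta))[:s]          # s largest entries of delta in magnitude
    Jc = np.setdiff1d(np.arange(n), J)
    Sc = np.setdiff1d(np.arange(n), S)
    lhs = lam * (np.abs(beta).sum() - np.abs(beta + delta).sum())
    rhs = lam * (np.abs(delta[J]).sum() - np.abs(delta[Jc]).sum()) \
        + 2 * lam * np.abs(beta[Sc]).sum()
    return rhs - lhs
```

The gap is nonnegative for every choice of $\beta$, $\delta$, and $S$, exactly as the lemma asserts.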
The following two lemmas collect some general properties of ${\mathcal{H}}_\lambda$ and ${\mathcal{Q}}_\lambda$ defined in \eqref{HQ-lambda}, respectively.
\begin{Lemma}\label{lem-hlambda}
Let ${\mathcal{H}}_\lambda$ be defined in \eqref{HQ-lambda}. Then it holds that
\begin{equation*}
{\mathcal{H}}_\lambda(\beta)\geq \lambda L\|\beta\|_1,\quad \forall\beta\in {\mathbb{R}}^n.
\end{equation*}
\end{Lemma}
\begin{proof}
It suffices to show that for all $t\in {\mathbb{R}}$,
\begin{equation}\label{H-lambda-1}
h_\lambda(t)\geq \lambda L|t|.
\end{equation}
When $t=0$, \eqref{H-lambda-1} holds trivially. For $t\neq 0$, by symmetry, we may assume without loss of generality that $t>0$. Since $h_\lambda$ is convex, the difference quotient $t'\mapsto \frac{h_\lambda(t')-h_\lambda(0)}{t'}$ is nondecreasing on $(0,t]$, and hence
\begin{equation*}
\frac{h_\lambda(t)-h_\lambda(0)}{t-0}\geq \lim_{t'\to 0^+}\frac{h_\lambda(t')-h_\lambda(0)}{t'}=\lim_{t'\to 0^+}h'_\lambda(t')=\lim_{t'\to 0^+}\left(\rho'_\lambda(t')-q'_\lambda(t')\right)\geq \lambda L,
\end{equation*}
where the last inequality follows from Assumption \ref{asup-regu}. Noting that $h_\lambda(0)=\rho_\lambda(0)-q_\lambda(0)=0$, we conclude that \eqref{H-lambda-1} holds.
The proof is complete.
\end{proof}
\begin{Lemma}\label{lem-qlambda}
Let ${\mathcal{Q}}_\lambda$ be defined in \eqref{HQ-lambda}. Then for any $\beta,\beta'\in {\mathbb{R}}^n$, the following relations are true:
\begin{subequations}
\begin{align}
\langle \nabla{\mathcal{Q}}_\lambda(\beta)-\nabla{\mathcal{Q}}_\lambda(\beta'),\beta-\beta' \rangle \geq -\mu_1\|\beta-\beta'\|_2^2,\label{lem-qlambda-11}\\
\langle \nabla{\mathcal{Q}}_\lambda(\beta)-\nabla{\mathcal{Q}}_\lambda(\beta'),\beta-\beta' \rangle \leq -\mu_2\|\beta-\beta'\|_2^2, \label{lem-qlambda-12}\\
{\mathcal{Q}}_\lambda(\beta)\geq {\mathcal{Q}}_\lambda(\beta')+\langle \nabla{\mathcal{Q}}_\lambda(\beta'),\beta-\beta' \rangle- \frac{\mu_1}{2}\|\beta-\beta'\|_2^2,\label{lem-qlambda-13}\\
{\mathcal{Q}}_\lambda(\beta)\leq {\mathcal{Q}}_\lambda(\beta')+\langle \nabla{\mathcal{Q}}_\lambda(\beta'),\beta-\beta' \rangle- \frac{\mu_2}{2}\|\beta-\beta'\|_2^2.\label{lem-qlambda-14}
\end{align}
\end{subequations}
\end{Lemma}
\begin{proof}
By \eqref{cond-qlambda}, we have that for any $j=1,2,\cdots,n$,
\begin{equation*}
-\mu_1(\beta_j-\beta_j')^2\leq (q'_\lambda(\beta_j)-q'_\lambda(\beta_j'))(\beta_j-\beta_j')\leq -\mu_2(\beta_j-\beta_j')^2,
\end{equation*}
from which \eqref{lem-qlambda-11} and \eqref{lem-qlambda-12} follow directly. Then by \citet[Theorem 2.1.5 and Theorem 2.1.9]{nesterov2013introductory}, it follows from \eqref{lem-qlambda-11} and \eqref{lem-qlambda-12} that
the convex function $-{\mathcal{Q}}_\lambda(\beta)$ satisfies
\begin{align*}
-{\mathcal{Q}}_\lambda(\beta)&\leq -{\mathcal{Q}}_\lambda(\beta')+\langle \nabla(-{\mathcal{Q}}_\lambda(\beta')),\beta-\beta' \rangle+ \frac{\mu_1}{2}\|\beta-\beta'\|_2^2,\\
-{\mathcal{Q}}_\lambda(\beta)&\geq -{\mathcal{Q}}_\lambda(\beta')+\langle \nabla(-{\mathcal{Q}}_\lambda(\beta')),\beta-\beta' \rangle- \frac{\mu_2}{2}\|\beta-\beta'\|_2^2,
\end{align*}
which respectively imply that the function ${\mathcal{Q}}_\lambda$ satisfies \eqref{lem-qlambda-13} and \eqref{lem-qlambda-14}.
The proof is complete.
\end{proof}
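For the second SCAD decomposition of Example \ref{decomp}, condition \eqref{cond-qlambda}, the coordinatewise inequality underlying Lemma \ref{lem-qlambda}, holds with $\mu_1=\frac{1}{a-1}$ and $\mu_2=0$ and can be checked numerically. The sketch below (with the illustrative choice $a=3.7$) evaluates $q'_\lambda$ in closed form and tests both bounds on random pairs.

```python
import numpy as np

def scad_q2_grad(t, lam, a=3.7):
    """Derivative of the concave part q_lambda of the second SCAD decomposition:
    0 on [0, lam], -(|t|-lam)/(a-1) on (lam, a*lam], and -lam beyond, extended oddly."""
    u, s = np.abs(t), np.sign(t)
    return s * np.where(u <= lam, 0.0,
               np.where(u <= a * lam, -(u - lam) / (a - 1), -lam))

def weak_convexity_gap(t1, t2, lam, a=3.7):
    """Slack of (q'(t1)-q'(t2))(t1-t2) against its two bounds:
    it must lie in [-mu1*(t1-t2)^2, -mu2*(t1-t2)^2] with mu1 = 1/(a-1), mu2 = 0.
    Returns (lower-bound slack, upper-bound slack); both should be >= 0."""
    inner = (scad_q2_grad(t1, lam, a) - scad_q2_grad(t2, lam, a)) * (t1 - t2)
    mu1, mu2 = 1.0 / (a - 1), 0.0
    return inner + mu1 * (t1 - t2) ** 2, -mu2 * (t1 - t2) ** 2 - inner
```

Since $q'_\lambda$ is continuous, odd, and nonincreasing with slope bounded below by $-\frac{1}{a-1}$, both slacks are nonnegative for every pair of points.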
\subsection{Statistical results}
Recall that the feasible region $\Omega$ is specified in \eqref{feasible}. We shall provide the recovery bound for each stationary point $\tilde{\beta}\in \Omega$ of the optimization problem \eqref{M-esti}, that is, $\tilde{\beta}$ satisfies the first-order necessary condition:
\begin{equation}\label{1st-cond}
\langle \nabla{\mathcal{L}}_m(\tilde{\beta})+\nabla{\mathcal{R}}_\lambda(\tilde{\beta}),\beta-\tilde{\beta} \rangle\geq 0,\quad \text{for all}\ \beta\in \Omega.
\end{equation}
\begin{Theorem}\label{thm-sta}
Let $R_q>0$ and $r>0$ be positive numbers such that $\beta^*\in {\mathbb{B}}_q(R_q)\cap \Omega$. Let $\tilde{\beta}$ be a stationary point of the optimization problem \eqref{M-esti}. Suppose that the empirical loss function ${\mathcal{L}}_m$ satisfies the RSC conditions \eqref{sta-RSC}, and $\gamma_1>\frac{2\mu_1-\mu_2}{2}$.
Assume that the regularization parameter $\lambda$ is chosen to satisfy
\begin{equation}\label{thm1-lambda}
\frac{2}{L}\max\left\{\|\nabla{\mathcal{L}}_m(\beta^*)\|_\infty, \gamma_2\sqrt{\frac{\log n}{m}}
\right\}\leq \lambda\leq \frac{\gamma_2\omega}{2rL},
\end{equation}
and the sample size satisfies
\begin{equation}\label{thm1-m}
m\geq \frac{16r^2\max(\tau_1^2,\tau_2^2)}{\gamma_2^2\omega^2}\log n.
\end{equation}
Then we have that
\begin{equation}\label{l2-rate}
\|\tilde{\beta}-\beta^*\|_2^2\leq (\sqrt{57}+7)^2 R_q\left(\frac{2\lambda L}{2\gamma_1-2\mu_1+\mu_2}\right)^{2-q},
\end{equation}
\begin{equation}\label{l1-rate}
\|\tilde{\beta}-\beta^*\|_1\leq 4(2\sqrt{57}+15)R_q\left(\frac{2\lambda L}{2\gamma_1-2\mu_1+\mu_2}\right)^{1-q}.
\end{equation}
\end{Theorem}
\begin{proof}
Set $\tilde{\delta}:=\tilde{\beta}-\beta^*$. We first show that $\|\tilde{\delta}\|_2\leq 3$. Suppose on the contrary that $\|\tilde{\delta}\|_2>3$. Then one has the following inequality by \eqref{sta-RSC2}:
\begin{equation}\label{thm1-1}
\langle \nabla{\mathcal{L}}_m(\tilde{\beta})-\nabla{\mathcal{L}}_m(\beta^*),\tilde{\delta} \rangle\geq \gamma_2\|\tilde{\delta}\|_2-\tau_2\sqrt{\frac{\log n}{m}}\|\tilde{\delta}\|_1.
\end{equation}
Noting $\beta^*\in \Omega$, and combining \eqref{thm1-1} and \eqref{1st-cond} (with $\beta^*$ in place of $\beta$), we arrive at
\begin{equation}\label{thm1-2}
\langle -\nabla{\mathcal{R}}_\lambda(\tilde{\beta})-\nabla{\mathcal{L}}_m(\beta^*),\tilde{\delta} \rangle\geq \gamma_2\|\tilde{\delta}\|_2-\tau_2\sqrt{\frac{\log n}{m}}\|\tilde{\delta}\|_1.
\end{equation}
Applying H{\"o}lder's inequality and the triangle inequality to the left-hand side of \eqref{thm1-2}, and noting that ${\mathcal{R}}_\lambda$ satisfies Assumption \ref{asup-regu}, one has by \citet[Lemma 4]{loh2015regularized} and \eqref{thm1-lambda} that
\begin{equation*}
\langle -\nabla{\mathcal{R}}_\lambda(\tilde{\beta})-\nabla{\mathcal{L}}_m(\beta^*),\tilde{\delta} \rangle
\leq \{\|\nabla{\mathcal{R}}_\lambda(\tilde{\beta})\|_\infty+\|\nabla{\mathcal{L}}_m(\beta^*)\|_\infty\}\|\tilde{\delta}\|_1\\ \leq \left\{\lambda L+\frac{\lambda L}{2}\right\}\|\tilde{\delta}\|_1.
\end{equation*}
Then combining this inequality with \eqref{thm1-2} and noting that $\|\tilde{\delta}\|_1\leq \|\tilde{\beta}\|_1+\|\beta^*\|_1\leq g(\tilde{\beta})/\omega+g(\beta^*)/\omega\leq 2r/\omega$
(due to \eqref{g-l1}),
we obtain that
\begin{equation*}
\|\tilde{\delta}\|_2\leq \frac{\|\tilde{\delta}\|_1}{\gamma_2}\left(\frac{3\lambda L}{2}+\tau_2\sqrt{\frac{\log n}{m}}\right)\leq \frac{2r}{\gamma_2\omega}\left(\frac{3\lambda L}{2}+\tau_2\sqrt{\frac{\log n}{m}}\right).
\end{equation*}
Since $\lambda$ satisfies \eqref{thm1-lambda} and $m$ satisfies \eqref{thm1-m},
we obtain that $\|\tilde{\delta}\|_2\leq 3$, a contradiction. Thus, $\|\tilde{\delta}\|_2\leq 3$.
Therefore, by \eqref{sta-RSC1}, one has that
\begin{equation}\label{thm1-3}
\langle \nabla{\mathcal{L}}_m(\tilde{\beta})-\nabla{\mathcal{L}}_m(\beta^*),\tilde{\delta} \rangle\geq \gamma_1\|\tilde{\delta}\|_2^2-\tau_1\frac{\log n}{m}\|\tilde{\delta}\|_1^2.
\end{equation}
On the other hand, it follows from \eqref{lem-qlambda-11} and \eqref{lem-qlambda-14} in Lemma \ref{lem-qlambda} that
\begin{equation*}
\begin{aligned}
\langle \nabla{\mathcal{R}}_\lambda(\tilde{\beta}),\beta^*-\tilde{\beta} \rangle
&= \langle \nabla{\mathcal{Q}}_\lambda(\tilde{\beta})+\nabla{\mathcal{H}}_\lambda(\tilde{\beta}),\beta^*-\tilde{\beta} \rangle \\
&\leq \langle \nabla{\mathcal{Q}}_\lambda(\beta^*),\beta^*-\tilde{\beta} \rangle+\mu_1\|\beta^*-\tilde{\beta}\|_2^2+\langle \nabla{\mathcal{H}}_\lambda(\tilde{\beta}),\beta^*-\tilde{\beta} \rangle \\
&\leq {\mathcal{Q}}_\lambda(\beta^*)-{\mathcal{Q}}_\lambda(\tilde{\beta})+\frac{2\mu_1-\mu_2}{2}\|\beta^*-\tilde{\beta}\|_2^2+\langle \nabla{\mathcal{H}}_\lambda(\tilde{\beta}),\beta^*-\tilde{\beta} \rangle.
\end{aligned}
\end{equation*}
Moreover, since the function ${\mathcal{H}}_\lambda$ is convex, one has that
\begin{equation*}
{\mathcal{H}}_\lambda(\beta^*)-{\mathcal{H}}_\lambda(\tilde{\beta})\geq \langle \nabla{\mathcal{H}}_\lambda(\tilde{\beta}),\beta^*-\tilde{\beta} \rangle.
\end{equation*}
This, together with the former inequality, implies that
\begin{equation}\label{thm1-6}
\langle \nabla{\mathcal{R}}_\lambda(\tilde{\beta}),\beta^*-\tilde{\beta} \rangle \leq {\mathcal{R}}_\lambda(\beta^*)-{\mathcal{R}}_\lambda(\tilde{\beta})+\frac{2\mu_1-\mu_2}{2}\|\beta^*-\tilde{\beta}\|_2^2.
\end{equation}
Then combining \eqref{thm1-3}, \eqref{thm1-6} and \eqref{1st-cond} (with $\beta^*$ in place of $\beta$), we obtain that
\begin{equation}\label{thm1-9}
\begin{aligned}
\gamma_1\|\tilde{\delta}\|_2^2-\tau_1\frac{\log n}{m}\|\tilde{\delta}\|_1^2
&\leq -\langle \nabla{\mathcal{L}}_m(\beta^*),\tilde{\delta} \rangle+{\mathcal{R}}_\lambda(\beta^*)-{\mathcal{R}}_\lambda(\tilde{\beta})+\frac{2\mu_1-\mu_2}{2}\|\beta^*-\tilde{\beta}\|_2^2\\
&\leq \|\nabla{\mathcal{L}}_m(\beta^*)\|_\infty\|\tilde{\delta}\|_1+{\mathcal{R}}_\lambda(\beta^*)-{\mathcal{R}}_\lambda(\tilde{\beta})+\frac{2\mu_1-\mu_2}{2}\|\tilde{\delta}\|_2^2.
\end{aligned}
\end{equation}
Let $J$ denote the index set corresponding to the $|S_\eta|$ largest coordinates of $\tilde{\delta}$ in absolute value (recall the set $S_\eta$ defined in \eqref{S-eta}). It then follows from Lemma \ref{lem-regu} (with $S=S_\eta$) that
\begin{equation}\label{thm1-8}
{\mathcal{R}}_\lambda(\beta^*)-{\mathcal{R}}_\lambda(\tilde{\beta})\leq
\lambda L(\|\tilde{\delta}_J\|_1-\|\tilde{\delta}_{J^c}\|_1)+2\lambda L\|\beta^*_{S_\eta^c}\|_1.
\end{equation}
Then combining \eqref{thm1-8} with \eqref{thm1-9} and noting assumption \eqref{thm1-lambda}, one has that
\begin{equation*}
\begin{aligned}
\gamma_1\|\tilde{\delta}\|_2^2-\tau_1\frac{\log n}{m}\|\tilde{\delta}\|_1^2
&\leq \|\nabla{\mathcal{L}}_m(\beta^*)\|_\infty\|\tilde{\delta}\|_1+\lambda L(\|\tilde{\delta}_J\|_1-\|\tilde{\delta}_{J^c}\|_1)+2\lambda L\|\beta^*_{S_\eta^c}\|_1+\frac{2\mu_1-\mu_2}{2}\|\tilde{\delta}\|_2^2\\
&\leq \frac{3\lambda L}{2}\|\tilde{\delta}_J\|_1-\frac{\lambda L}{2}\|\tilde{\delta}_{J^c}\|_1+2\lambda L\|\beta^*_{S_\eta^c}\|_1+\frac{2\mu_1-\mu_2}{2}\|\tilde{\delta}\|_2^2.
\end{aligned}
\end{equation*}
This, together with the fact $\|\tilde{\delta}\|_1\leq 2r/\omega$ and assumptions \eqref{thm1-lambda} and \eqref{thm1-m}, implies that
\begin{equation}\label{thm1-15}
\begin{aligned}
2(\gamma_1-\frac{2\mu_1-\mu_2}{2})\|\tilde{\delta}\|_2^2
&\leq 3\lambda L\|\tilde{\delta}_J\|_1-\lambda L\|\tilde{\delta}_{J^c}\|_1+4\tau_1\frac{r}{\omega}\frac{\log n}{m}\|\tilde{\delta}\|_1+2\lambda L\|\beta^*_{S_\eta^c}\|_1\\
&\leq 3\lambda L\|\tilde{\delta}_J\|_1-\lambda L\|\tilde{\delta}_{J^c}\|_1+\gamma_2\sqrt{\frac{\log n}{m}}\|\tilde{\delta}\|_1+2\lambda L\|\beta^*_{S_\eta^c}\|_1\\
&\leq \frac{7\lambda L}{2}\|\tilde{\delta}_J\|_1-\frac{\lambda L}{2}\|\tilde{\delta}_{J^c}\|_1+2\lambda L\|\beta^*_{S_\eta^c}\|_1.
\end{aligned}
\end{equation}
Since $\gamma_1>\frac{2\mu_1-\mu_2}{2}$ by assumption, we have by \eqref{thm1-15} that $\|\tilde{\delta}_{J^c}\|_1\leq 7\|\tilde{\delta}_J\|_1+4\|\beta^*_{S_\eta^c}\|_1$. Consequently,
\begin{equation}\label{thm1-12}
\|\tilde{\delta}\|_1\leq
8\|\tilde{\delta}_J\|_1+4\|\beta^*_{S_\eta^c}\|_1\leq 8\sqrt{|S_\eta|}\|\tilde{\delta}_J\|_2+4\|\beta^*_{S_\eta^c}\|_1
\leq 8\sqrt{|S_\eta|}\|\tilde{\delta}\|_2+4\|\beta^*_{S_\eta^c}\|_1.
\end{equation}
Furthermore, \eqref{thm1-15} implies that
\begin{equation}\label{thm1-13}
2\left(\gamma_1-\frac{2\mu_1-\mu_2}{2}\right)\|\tilde{\delta}\|_2^2 \leq \frac{7\lambda L}{2}\|\tilde{\delta}_J\|_1+2\lambda L\|\beta^*_{S_\eta^c}\|_1
\leq 28\lambda L\sqrt{|S_\eta|}\|\tilde{\delta}\|_2+16\lambda L\|\beta^*_{S_\eta^c}\|_1.
\end{equation}
Combining \eqref{s-eta} with \eqref{thm1-13} and setting $\eta=\frac{\lambda L}{\gamma_1-\frac{2\mu_1-\mu_2}{2}}$, we obtain \eqref{l2-rate}.
Moreover, it follows from \eqref{thm1-12} that \eqref{l1-rate} holds. The proof is complete.
\end{proof}
\begin{Remark}
{\rm (i)} Theorem \ref{thm-sta} tells us that the $\ell_2$ recovery bound for all the stationary points of the nonconvex optimization problem \eqref{M-esti} scales as $\|\tilde{\beta}-\beta^*\|_2^2=O(\lambda^{2-q}R_q)$. When $\lambda$ is chosen as $\lambda=\Omega\left(\sqrt{\frac{\log n}{m}}\right)$, the $\ell_2$ recovery bound implies that the estimator $\tilde{\beta}$ is statistically consistent. Note that this result is independent of the specific algorithm used, meaning that any numerical method for solving the nonconvex optimization problem \eqref{M-esti} stably recovers the true sparse parameter as long as it is guaranteed to converge to a stationary point.
{\rm (ii)} In the case when $q=0$, the underlying parameter $\beta^*$ is exactly sparse with $\|\beta^*\|_0=R_0$, and Theorem \ref{thm-sta} reduces to \citet[Theorem 1]{loh2015regularized} up to constant factors. More generally, Theorem \ref{thm-sta} provides the $\ell_2$ recovery bound when $\beta^*\in {\mathbb{B}}_q(R_q)$ with $q\in [0,1]$. Since the sparsity of $\beta^*$ is measured via the $\ell_q$-ball, with larger $q$ meaning weaker sparsity, \eqref{l2-rate} indicates that the rate of the recovery bound slows down as $q$ increases to 1.
\end{Remark}
\subsection{Algorithmic results}
We now apply the proximal gradient method \citep{nesterov2007gradient} to solve a modified version of the nonconvex optimization problem \eqref{M-esti} and then establish the linear convergence rate under the RSC/RSM conditions. Recall that the regularizer can be decomposed as ${\mathcal{R}}_\lambda(\cdot)={\mathcal{Q}}_\lambda(\cdot)+{\mathcal{H}}_\lambda(\cdot)$ by Assumption \ref{asup-regu}. In the following, we shall consider the side constraint function specialized as
\begin{equation*}
g(\cdot):=\frac{1}{\lambda}{\mathcal{H}}_\lambda(\cdot),
\end{equation*}
which is convex by Assumption \ref{asup-regu} and satisfies $g(\beta)\geq L\|\beta\|_1$ for all $\beta\in {\mathbb{R}}^n$ by Lemma \ref{lem-hlambda}, meeting our requirement \eqref{g-l1} with $\omega=L$. The optimization problem \eqref{M-esti} can now be written as
\begin{equation}\label{M-esti-alg}
\hat{\beta} \in \argmin_{\beta\in {\mathbb{R}}^n, g(\beta)\leq r}\{\bar{{\mathcal{L}}}_m(\beta)+{\mathcal{H}}_\lambda(\beta)\},
\end{equation}
with $\bar{{\mathcal{L}}}_m(\cdot):={\mathcal{L}}_m(\cdot)+{\mathcal{Q}}_\lambda(\cdot)$. By means of this decomposition, the objective function is decomposed into a differentiable but possibly nonconvex function and a possibly nonsmooth but convex function. Applying the proximal gradient method proposed in \cite{nesterov2007gradient} to \eqref{M-esti-alg}, we obtain a sequence of iterates $\{\beta^t\}_{t=0}^\infty$ as
\begin{equation}\label{algo-pga}
\beta^{t+1}\in \argmin_{\beta\in {\mathbb{R}}^n, g(\beta)\leq r}\left\{\frac{1}{2}\Big{\|}\beta-\left(\beta^t-\frac{\nabla{\bar{{\mathcal{L}}}_m(\beta^t)}}{v}\right)\Big{\|}_2^2+\frac{1}{v}{\mathcal{H}}_\lambda(\beta)\right\},
\end{equation}
where $\frac{1}{v}$ is the step size.
Given $\beta^t$, one can follow \cite{loh2015regularized} to generate the next iterate $\beta^{t+1}$ via the following three steps; see \citet[Appendix C.1]{loh2015regularized} for details.
\begin{enumerate}[(1)]
\item First solve the unconstrained optimization problem
\begin{equation*}
\hat{\beta}^t\in \argmin_{\beta\in {\mathbb{R}}^n}\left\{\frac{1}{2}\Big{\|}\beta-\left(\beta^t-\frac{\nabla{\bar{{\mathcal{L}}}_m(\beta^t)}}{v}\right)\Big{\|}_2^2+\frac{1}{v}{\mathcal{H}}_\lambda(\beta)\right\}.
\end{equation*}
\item If $g(\hat{\beta}^t)\leq r$, define $\beta^{t+1}=\hat{\beta}^t$.
\item Otherwise, if $g(\hat{\beta}^t)> r$, solve the constrained optimization problem
\begin{equation*}
\beta^{t+1}\in \argmin_{\beta\in {\mathbb{R}}^n, g(\beta)\leq r}\left\{\frac{1}{2}\Big{\|}\beta-\left(\beta^t-\frac{\nabla{\bar{{\mathcal{L}}}_m(\beta^t)}}{v}\right)\Big{\|}_2^2\right\}.
\end{equation*}
\end{enumerate}
\indent For specific regularizers such as SCAD and MCP, one could consider two different decompositions of the regularizer, as in Example \ref{decomp}. In particular, if we use the first decomposition,
then ${\mathcal{H}}_\lambda$ is a piecewise function (cf. \eqref{SCAD-h-1}, \eqref{MCP-h-1}, and \eqref{HQ-lambda}), and implementing iteration \eqref{algo-pga} may incur a large computational cost. However, if we use the second decomposition,
then \eqref{algo-pga} becomes
\begin{equation}\label{algo-pga-1}
\beta^{t+1}\in \argmin_{\beta\in {\mathbb{R}}^n, \|\beta\|_1\leq r}\left\{\frac{1}{2}\Big{\|}\beta-\left(\beta^t-\frac{\nabla{\bar{{\mathcal{L}}}_m(\beta^t)}}{v}\right)\Big{\|}_2^2+\frac{\lambda}{v}\|\beta\|_1\right\},
\end{equation}
corresponding to first applying the soft-thresholding operator and then performing the $\ell_2$ projection onto the $\ell_1$-ball of radius $r$, which can be computed rapidly in $\mathcal{O}(n)$ time using a procedure proposed in \cite{Duchi2008Efficient}. The advantage of iteration \eqref{algo-pga-1} stems from the more general condition (vi) in Assumption \ref{asup-regu}. We shall further compare these two decompositions via simulations in Section \ref{sec-simul}.
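A minimal Python sketch of this update follows. The sort-based $O(n\log n)$ projection below is a simple stand-in for the expected-$O(n)$ procedure of \cite{Duchi2008Efficient}, and the gradient callback `grad_Lbar` is an assumed interface for $\nabla\bar{{\mathcal{L}}}_m$; both are illustrative, not the paper's prescribed implementation.

```python
import numpy as np

def soft_threshold(z, thr):
    """Componentwise soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

def project_l1_ball(v, r):
    """Euclidean projection onto {w : ||w||_1 <= r} (sort-based variant)."""
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]           # magnitudes in decreasing order
    css = np.cumsum(u)
    ks = np.arange(1, v.size + 1)
    rho = ks[u - (css - r) / ks > 0][-1]   # largest feasible support size
    theta = (css[rho - 1] - r) / rho       # optimal shrinkage level
    return soft_threshold(v, theta)

def prox_grad_step(beta, grad_Lbar, lam, v, r):
    """One iteration: gradient step with step size 1/v, soft-threshold at lam/v,
    then l2-projection onto the l1-ball of radius r."""
    z = soft_threshold(beta - grad_Lbar(beta) / v, lam / v)
    return project_l1_ball(z, r)
```

Because ${\mathcal{H}}_\lambda(\cdot)=\lambda\|\cdot\|_1$ under the second decomposition, the entire update reduces to these two cheap componentwise operations.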
Before we state our main result that the algorithm defined by \eqref{algo-pga} converges linearly to a small neighbourhood of any global solution $\hat{\beta}$, we shall need some notation to simplify the exposition.
Let $\phi(\cdot):={\mathcal{L}}_m(\cdot)+{\mathcal{R}}_\lambda(\cdot)=\bar{{\mathcal{L}}}_m(\cdot)+{\mathcal{H}}_\lambda(\cdot)$ denote the optimization objective function. The Taylor error $\bar{{\mathcal{T}}}(\beta,\beta')$ for the modified loss function $\bar{{\mathcal{L}}}_m$ is defined as follows:
\begin{equation}\label{taylor-modified}
\bar{{\mathcal{T}}}(\beta,\beta')={\mathcal{T}}(\beta,\beta')+{\mathcal{Q}}_\lambda(\beta)-{\mathcal{Q}}_\lambda(\beta')-\langle \nabla{\mathcal{Q}}_\lambda(\beta'),\beta-\beta' \rangle.
\end{equation}
Recall the RSC and RSM conditions in \eqref{alg-RSC} and \eqref{alg-RSM}, respectively. Throughout this section, we assume $2\gamma_i>\mu_1$ for all $i=3,4,5$, and set $\gamma:=\min\{\gamma_3,\gamma_4\}$ and $\tau:=\max\{\tau_3,\tau_4,\tau_5\}$. Recall that the true underlying parameter $\beta^*\in {\mathbb{B}}_q(R_q)$ (cf. \eqref{lq-ball}). Let $\hat{\beta}$ be a global solution of the optimization problem \eqref{M-esti}. Then unless otherwise specified, we define
\begin{align}
&\bar{\epsilon}_{\text{stat}}:=8R_q^{\frac12}\left(\frac{\log n}{m}\right)^{-\frac{q}{4}}\left(\|\hat{\beta}-\beta^*\|_2+R_q^{\frac12}\left(\frac{\log n}{m}\right)^{\frac{1}{2}-\frac{q}{4}}\right),\label{bar-epsilon}\\
&\kappa:= \left\{1-\frac{2\gamma-\mu_1}{8v}+\frac{256R_q\tau\left(\frac{\log n}{m}\right)^{1-\frac{q}2}}{2\gamma-\mu_1}\right\}\left\{1-\frac{256R_q\tau\left(\frac{\log n}{m}\right)^{1-\frac{q}2}}{2\gamma-\mu_1}\right\}^{-1},\label{lem-bound-kappa}\\
&\xi:= 2\tau\frac{\log n}{m}\left\{\frac{2\gamma-\mu_1}{8v}+\frac{512R_q\tau\left(\frac{\log n}{m}\right)^{1-\frac{q}2}}{2\gamma-\mu_1}+5\right\}\left\{1-\frac{256R_q\tau\left(\frac{\log n}{m}\right)^{1-\frac{q}2}}{2\gamma-\mu_1}\right\}^{-1}.\label{lem-bound-xi}
\end{align}
For a given number $\Delta>0$ and an integer $T>0$ such that
\begin{equation}\label{lem-cone-De1}
\phi(\beta^t)-\phi(\hat{\beta})\leq \Delta, \quad \forall\ t\geq T,
\end{equation}
define
\begin{equation*}
\epsilon(\Delta):=\frac{2}{L}\min\left(\frac{\Delta}{\lambda},r\right).
\end{equation*}
With this setup, we now state our main algorithmic result.
\begin{Theorem}\label{thm-algo}
Let $R_q>0$ and $r>0$ be positive numbers such that $\beta^*\in {\mathbb{B}}_q(R_q)\cap \Omega$. Let $\hat{\beta}$ be a global solution of the optimization problem \eqref{M-esti}.
Suppose that the empirical loss function ${\mathcal{L}}_m$ satisfies the RSC/RSM conditions \eqref{alg-RSC} and \eqref{alg-RSM}, and $\gamma>\frac{1}{2}\mu_1$. Let $\{\beta^t\}_{t=0}^\infty$ be a sequence of iterates generated via \eqref{algo-pga} with an initial point $\beta^0$ satisfying $\|\beta^0-\hat{\beta}\|_2\leq 3$ and step size $\frac{1}{v}$ with $v\geq \max\{2\gamma_5-\mu_2,\mu_1\}$.
Assume that the regularization parameter $\lambda$ is chosen to satisfy
\begin{equation}\label{thm2-lambda}
\frac{4}{L}\max\left\{\|\nabla{\mathcal{L}}_m(\beta^*)\|_\infty,
\tau\sqrt{\frac{\log n}{m}}\right\}\leq \lambda\leq \frac{6\gamma-9\mu_1}{4r},
\end{equation}
and the sample size satisfies
\begin{equation}\label{thm2-m}
m\geq \max\left\{\frac{4r^2}{L^2}, \left(\frac{128R_q\tau}{2\gamma-\mu_1}\right)^{1-\frac{q}{2}}\right\}\log n.
\end{equation}
Then for any tolerance $\Delta^*\geq\frac{8\xi}{1-\kappa}\bar{\epsilon}_{\emph{stat}}^2$ and any iteration $t\geq T(\Delta^*)$, we have that
\begin{equation}\label{thm2-error}
\|\beta^t-\hat{\beta}\|_2^2\leq \left(\frac{4}{2\gamma-\mu_1}\right)\left(\Delta^*+\frac{{\Delta^*}^2}{2\tau}+4\tau\frac{\log n}{m}\bar{\epsilon}_{\emph{stat}}^2\right),
\end{equation}
where
\begin{equation*}
\begin{aligned}
T(\Delta^*)&:=\log_2\log_2\left(\frac{r\lambda}{\Delta^*}\right)\left(1+\frac{\log 2}{\log(1/\kappa)}\right) +\frac{\log((\phi(\beta^0)-\phi(\hat{\beta}))/\Delta^*)}{\log(1/\kappa)},
\end{aligned}
\end{equation*}
and $\bar{\epsilon}_{\emph{stat}}$, $\kappa$, $\xi$ are defined in
\eqref{bar-epsilon}-\eqref{lem-bound-xi}, respectively.
\end{Theorem}
\begin{Remark}
{\rm (i)} Note that in the case when $q=0$, the underlying parameter $\beta^*$ is exactly sparse with $\|\beta^*\|_0=R_0$, and Theorem \ref{thm-algo} reduces to \citet[Theorem 2]{loh2015regularized} up to constant factors.\\
{\rm (ii)} Generally speaking, Theorem \ref{thm-algo} establishes the linear convergence rate when $\beta^*\in {\mathbb{B}}_q(R_q)$ with $q\in [0,1]$ and points out some significant differences between the case of exact sparsity and that of soft sparsity. Specifically, the algorithm in \citet[Theorem 2]{loh2015regularized} is guaranteed to converge linearly to a small neighbourhood of the global solution $\hat{\beta}$, with an optimization error depending only on the statistical recovery bound $\|\hat{\beta}-\beta^*\|_2$. In contrast, besides the statistical error $\|\hat{\beta}-\beta^*\|_2$, our optimization error \eqref{thm2-error}
in the case when $q\in (0,1]$ also involves an additional term $R_q\left(\frac{\log n}{m}\right)^{1-\frac q2}$ (cf. \eqref{bar-epsilon}), which appears because of the statistical nonidentifiability over the $\ell_q$-ball, and which is no larger than $\|\hat{\beta}-\beta^*\|_2$
with overwhelming probability.
\end{Remark}
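The iteration complexity $T(\Delta^*)$ of Theorem \ref{thm-algo} is a closed-form expression and is easy to evaluate. The sketch below (all constant values in the accompanying check are illustrative) does so in Python.

```python
import math

def iteration_complexity(r, lam, kappa, phi_gap, delta_star):
    """Evaluate T(Delta*) from the theorem:
    log2(log2(r*lam/Delta*)) * (1 + log 2 / log(1/kappa))
      + log(phi_gap / Delta*) / log(1/kappa),
    where phi_gap = phi(beta^0) - phi(beta_hat) is the initial optimality gap."""
    doubling = math.log2(math.log2(r * lam / delta_star))
    rate = math.log(1.0 / kappa)
    return doubling * (1.0 + math.log(2.0) / rate) + math.log(phi_gap / delta_star) / rate
```

For instance, with contraction factor $\kappa=1/2$ the prefactor of the doubly-logarithmic term is exactly 2, so the total iteration count grows only logarithmically in the initial optimality gap.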
Before providing the proof of Theorem \ref{thm-algo}, we give several useful lemmas, with the corresponding proofs deferred to the Appendix.
\begin{Lemma}\label{lem-radius}
Under the conditions of Theorem \ref{thm-algo}, it holds that for all $t\geq 0$
\begin{equation}\label{lem-radius-1}
\|\beta^t-\hat{\beta}\|_2\leq 3.
\end{equation}
\end{Lemma}
\begin{Lemma}\label{lem-cone}
Suppose that the conditions of Theorem \ref{thm-algo} are satisfied, and
that there exists a pair $(\Delta, T)$ such that \eqref{lem-cone-De1} holds.
Then for any iteration $t\geq T$, it holds that
\begin{equation*}
\begin{aligned}
\|\beta^t-\hat{\beta}\|_1&\leq 4R_q^{\frac12}\left(\frac{\log n}{m}\right)^{-\frac{q}{4}}\|\beta^t-\hat{\beta}\|_2+\bar{\epsilon}_{\emph{stat}}+\epsilon(\Delta).
\end{aligned}
\end{equation*}
\end{Lemma}
\begin{Lemma}\label{lem-Tphi-bound}
Suppose that the conditions of Theorem \ref{thm-algo} are satisfied and that there exists a pair $(\Delta, T)$ such that \eqref{lem-cone-De1} holds.
Then for any iteration $t\geq T$, we have that
\begin{align}
\bar{{\mathcal{T}}}(\hat{\beta},\beta^t)&\geq -2\tau\frac{\log n}{m}(\bar{\epsilon}_{\emph{stat}}+\epsilon(\Delta))^2,\label{lem-bound-T1}\\
\phi(\beta^t)-\phi(\hat{\beta})&\geq \left(\frac{2\gamma-\mu_1}{4}\right)\|\beta^t-\hat{\beta}\|_2^2-2\tau\frac{\log n}{m}(\bar{\epsilon}_{\emph{stat}}+\epsilon(\Delta))^2, \label{lem-bound-T2}\\
\phi(\beta^t)-\phi(\hat{\beta})&\leq \kappa^{t-T}(\phi(\beta^T)-\phi(\hat{\beta}))+\frac{2\xi}{1-\kappa}(\bar{\epsilon}_{\emph{stat}}^2+\epsilon^2(\Delta)).\label{lem-bound-0}
\end{align}
\end{Lemma}
By virtue of the above lemmas, we are now ready to prove Theorem \ref{thm-algo}.
\begin{proof}[Proof of Theorem \ref{thm-algo}]
We first prove the following inequality:
\begin{equation}\label{thm2-1}
\phi(\beta^t)-\phi(\hat{\beta})\leq \Delta^*,\quad \forall t\geq T(\Delta^*).
\end{equation}
Divide the iterations $t=0,1,\cdots$ into a series of disjoint epochs $[T_k,T_{k+1}]$ and define an associated sequence of tolerances $\Delta_0>\Delta_1>\cdots$ such that
\begin{equation*}
\phi(\beta^t)-\phi(\hat{\beta})\leq \Delta_k,\quad \forall t\geq T_k,
\end{equation*}
as well as the associated error term $\epsilon_k:=\frac{2}{L}\min \left\{\frac{\Delta_k}{\lambda},r\right\}$. The values of $\{(\Delta_k,T_k)\}_{k\geq 1}$ will be chosen later.
Then, for the first epoch,
we apply Lemma \ref{lem-Tphi-bound} (cf. \eqref{lem-bound-0}) with $\epsilon_0=2r/L$ and $T_0=0$
to conclude that
\begin{equation}\label{thm2-T0}
\phi(\beta^t)-\phi(\hat{\beta})\leq \kappa^t(\phi(\beta^0)-\phi(\hat{\beta}))+\frac{2\xi}{1-\kappa}(\bar{\epsilon}_{\text{stat}}^2 +\frac{4r^2}{L^2}),\quad \forall t\geq T_0.
\end{equation}
Set $\Delta_1:=\frac{4\xi}{1-\kappa}(\bar{\epsilon}_{\text{stat}}^2 +\frac{4r^2}{L^2})$. Noting that $\kappa\in (0,1)$ by assumption, one has by \eqref{thm2-T0} that for $T_1:=\lceil \frac{\log (2\Delta_0/\Delta_1)}{\log (1/\kappa)} \rceil$,
\begin{equation*}
\begin{aligned}
\phi(\beta^t)-\phi(\hat{\beta})&\leq \frac{\Delta_1}{2}+\frac{2\xi}{1-\kappa}\left(\bar{\epsilon}_{\text{stat}}^2 +\frac{4r^2}{L^2}\right)=\Delta_1\leq \frac{8\xi}{1-\kappa}\max\left\{\bar{\epsilon}_{\text{stat}}^2,\frac{4r^2}{L^2}\right\},\quad \forall t\geq T_1.
\end{aligned}
\end{equation*}
For $k\geq 1$, we define
\begin{equation}\label{thm2-DeltaT}
\Delta_{k+1}:= \frac{4\xi}{1-\kappa}(\bar{\epsilon}_{\text{stat}}^2+\epsilon_k^2)
\quad \mbox{and}\quad
T_{k+1}:= \left\lceil \frac{\log (2\Delta_k/\Delta_{k+1})}{\log (1/\kappa)}+T_k \right\rceil.
\end{equation}
Then Lemma \ref{lem-Tphi-bound} (cf. \eqref{lem-bound-0}) applies, and we conclude that for all $t\geq T_k$,
\begin{equation*}
\phi(\beta^t)-\phi(\hat{\beta})\leq \kappa^{t-T_k}(\phi(\beta^{T_k})-\phi(\hat{\beta}))+\frac{2\xi}{1-\kappa}(\bar{\epsilon}_{\text{stat}}^2+\epsilon_k^2),
\end{equation*}
which implies that
\begin{equation*}
\phi(\beta^t)-\phi(\hat{\beta})\leq \Delta_{k+1}\leq \frac{8\xi}{1-\kappa}\max\{\bar{\epsilon}_{\text{stat}}^2,\epsilon_k^2\},\quad \forall t\geq T_{k+1}.
\end{equation*}
From \eqref{thm2-DeltaT}, we obtain the following recursion for $\{(\Delta_k,T_k)\}_{k=0}^\infty$
\begin{subequations}
\begin{align}
\Delta_{k+1}&\leq \frac{8\xi}{1-\kappa}\max\{\epsilon_k^2,\bar{\epsilon}_{\text{stat}}^2\},\label{thm2-recur-Delta}\\
T_k&\leq k+\frac{\log (2^k\Delta_0/\Delta_k)}{\log (1/\kappa)}.\label{thm2-recur-T}
\end{align}
\end{subequations}
Then by \cite[Section 7.2]{agarwal2012supplementaryMF}, one sees that \eqref{thm2-recur-Delta} implies that
\begin{equation}\label{thm2-recur2}
\Delta_{k+1}\leq \frac{\Delta_k}{4^{2^{k+1}}}\quad \mbox{and}\quad \frac{\Delta_{k+1}}{\lambda}\leq \frac{r}{4^{2^k}},\quad \forall k\geq 1.
\end{equation}
Now let us determine the smallest $k$ such that $\Delta_k\leq \Delta^*$ by applying \eqref{thm2-recur2}. If we are already in the first epoch, then \eqref{thm2-1} follows directly from \eqref{thm2-recur-Delta}.
Otherwise, by \eqref{thm2-recur2}, we see that $\Delta_k\leq \Delta^*$ holds after at most $$k(\Delta^*)\geq \frac{\log(\log(r\lambda/\Delta^*)/\log 4)}{\log 2}+1=\log_2\log_2(r\lambda/\Delta^*)$$ epochs.
Combining the above bound on $k(\Delta^*)$ with \eqref{thm2-recur-T}, we conclude
that $\phi(\beta^t)-\phi(\hat{\beta})\leq \Delta^*$ holds for all iterations
\begin{equation*}
t\geq
\log_2\log_2\left(\frac{r\lambda}{\Delta^*}\right)\left(1+\frac{\log 2}{\log(1/\kappa)}\right)+\frac{\log(\Delta_0/\Delta^*)}{\log(1/\kappa)},
\end{equation*}
which proves \eqref{thm2-1}.
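As a quick arithmetic check, the iteration bound above is straightforward to evaluate numerically. The sketch below (with hypothetical values of $\kappa$, $\Delta_0$, $\Delta^*$, $r$ and $\lambda$, chosen only for illustration) computes it.

```python
import math

def iteration_bound(kappa, Delta0, Delta_star, r, lam):
    """Iterations after which phi(beta^t) - phi(beta_hat) <= Delta*,
    following the bound above (illustrative parameter values only)."""
    epochs = math.log2(math.log2(r * lam / Delta_star))
    return (epochs * (1.0 + math.log(2) / math.log(1.0 / kappa))
            + math.log(Delta0 / Delta_star) / math.log(1.0 / kappa))
```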
Finally, with \eqref{thm2-1} established, it follows from \eqref{lem-bound-T2} in Lemma \ref{lem-Tphi-bound}
and assumption \eqref{thm2-m} that, for any $t\geq T(\Delta^*)$,
\begin{equation*}
\left(\frac{2\gamma-\mu_1}{4}\right)\|\beta^t-\hat{\beta}\|_2^2
\leq \phi(\beta^t)-\phi(\hat{\beta}) +2\tau\frac{\log n}{m}\left(\epsilon(\Delta^*)+\bar{\epsilon}_{\text{stat}}\right)^2
\leq \Delta^*+2\tau\frac{\log n}{m}\left(\frac{2\Delta^*}{\lambda L}+\bar{\epsilon}_{\text{stat}}\right)^2.
\end{equation*}
Consequently,
by assumption \eqref{thm2-lambda}, we conclude that for any $t\geq T(\Delta^*)$,
\begin{equation*}
\|\beta^t-\hat{\beta}\|_2^2\leq \left(\frac{4}{2\gamma-\mu_1}\right)\left(\Delta^*+\frac{{\Delta^*}^2}{2\tau}+4\tau\frac{\log n}{m}\bar{\epsilon}_{\text{stat}}^2\right).
\end{equation*}
The proof is complete.
\end{proof}
\section{Simulations on the corrupted linear regression model}\label{sec-simul}
In this section, we carry out several numerical experiments to illustrate our theoretical results and compare
the performance of the estimators obtained by two different decompositions for the regularizer. Specifically, we consider high-dimensional linear regression with corrupted observations. Recall the standard linear regression model
\begin{equation}\label{ordi-linear}
y_i=\langle \beta^*,X_{i\cdot} \rangle+e_i,\quad \text{for}\ i=1,2,\cdots,m,
\end{equation}
where $\beta^*\in {\mathbb{R}}^n$ is the unknown parameter and $\{(X_{i\cdot},y_i)\}_{i=1}^m$ are i.i.d. observations, which are assumed to be fully observed in standard formulations. However, this assumption is unrealistic in many applications, where the covariates may be observed only partially, so that one can only observe the pairs $\{(Z_{i\cdot},y_i)\}_{i=1}^m$ instead, where the $Z_{i\cdot}$'s are corrupted versions of the corresponding $X_{i\cdot}$'s. As has been discussed in \cite{loh2012high, loh2015regularized}, there are mainly two types of corruption:
\begin{enumerate}[(a)]
\item Additive noise: For each $i=1,2,\cdots,m$, we observe $Z_{i\cdot}=X_{i\cdot}+W_{i\cdot}$, where $W_{i\cdot}\in {\mathbb{R}}^n$ is a random vector independent of $X_{i\cdot}$ with mean \textbf{0} and known covariance matrix $\Sigma_w$.
\item Missing data: For each $i=1,2,\cdots,m$, we observe a random vector $Z_{i\cdot}\in {\mathbb{R}}^n$, such that for each $j=1,2,\cdots,n$, we independently observe $Z_{ij}=X_{ij}$ with probability $1-\vartheta$, and $Z_{ij}=0$ with probability $\vartheta$, where $\vartheta\in [0,1)$.
\end{enumerate}
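Both corruption mechanisms are straightforward to simulate. The following sketch (with illustrative sizes and noise levels of our own choosing, not tied to any particular experiment below) generates corrupted covariates under the two models.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 256

X = rng.standard_normal((m, n))            # clean covariates, Sigma_x = I_n

# (a) additive noise: Z = X + W with W ~ N(0, sigma_w^2 I_n)
sigma_w = 0.2
Z_add = X + sigma_w * rng.standard_normal((m, n))

# (b) missing data: each entry is zeroed out independently with probability theta
theta = 0.2
mask = rng.random((m, n)) >= theta         # True = entry is observed
Z_mis = X * mask
```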
Following a line of past works \citep{loh2012high, loh2015regularized}, we fix $i\in \{1,2,\cdots,m\}$ and use $\Sigma_x$ to denote the covariance matrix of the covariates $X_{i\cdot}$ (i.e., $\Sigma_x=\text{cov}(X_{i\cdot})$). The population loss function is
${\mathcal{L}}(\beta) =\frac12\beta^\top\Sigma_x\beta-{\beta^*}^\top\Sigma_x\beta$.
Let $(\hat{\Gamma},\hat{\Upsilon})$ denote the estimators for $(\Sigma_x,\Sigma_x\beta^*)$ that depend only on the observed data $\{(Z_{i\cdot},y_i)\}_{i=1}^m$, and the empirical loss function is then written as
\begin{equation}\label{empi-corr}
{\mathcal{L}}_m(\beta)=\frac12\beta^\top\hat{\Gamma}\beta-\hat{\Upsilon}^\top\beta.
\end{equation}
Substituting the empirical loss function \eqref{empi-corr} into \eqref{M-esti}, and recalling the side constraint \eqref{g-l1} as well as the feasible region $\Omega$ in \eqref{feasible}, the following optimization problem is used to estimate $\beta^*$ in the corrupted linear regression model:
\begin{equation*}
\hat{\beta} \in \argmin_{\beta\in \Omega}\left\{\left(\frac12\beta^\top\hat{\Gamma}\beta-\hat{\Upsilon}^\top\beta\right)+{\mathcal{R}}_\lambda(\beta)\right\}.
\end{equation*}
As discussed in \cite{loh2012high}, an appropriate choice of the surrogate pair $(\hat{\Gamma},\hat{\Upsilon})$ for the additive noise and missing data cases is given respectively by
\begin{equation*}
\begin{aligned}
\hat{\Gamma}_{\text{add}}&:= \frac{Z^\top Z}{m}-\Sigma_w \quad \mbox{and}\quad \hat{\Upsilon}_{\text{add}}:=\frac{Z^\top y}{m}, \\
\hat{\Gamma}_{\text{mis}}&:= \frac{\tilde{Z}^\top \tilde{Z}}{m}-\vartheta\cdot \text{diag}\left(\frac{\tilde{Z}^\top \tilde{Z}}{m}\right) \quad \mbox{and}\quad \hat{\Upsilon}_{\text{mis}}:=\frac{\tilde{Z}^\top y}{m}\quad \left(\tilde{Z}=\frac{Z}{1-\vartheta}\right).
\end{aligned}
\end{equation*}
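A minimal sketch of these surrogate pairs (the function names are ours; $\Sigma_w$ and $\vartheta$ are assumed known, as in the text):

```python
import numpy as np

def surrogate_additive(Z, y, Sigma_w):
    """(Gamma_hat, Upsilon_hat) for the additive-noise case."""
    m = Z.shape[0]
    Gamma = Z.T @ Z / m - Sigma_w
    Upsilon = Z.T @ y / m
    return Gamma, Upsilon

def surrogate_missing(Z, y, theta):
    """(Gamma_hat, Upsilon_hat) for the missing-data case."""
    m = Z.shape[0]
    Zt = Z / (1.0 - theta)                 # rescaled observations
    M = Zt.T @ Zt / m
    Gamma = M - theta * np.diag(np.diag(M))
    Upsilon = Zt.T @ y / m
    return Gamma, Upsilon
```

Note that these surrogate matrices $\hat{\Gamma}$ need not be positive semidefinite when $n>m$, which is precisely the source of nonconvexity in the resulting objective.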
The following simulations will be performed with the loss function ${\mathcal{L}}_m$ corresponding to linear regression with additive noise and missing data, respectively, and three regularizers, namely the Lasso, SCAD and MCP. All numerical experiments are performed in MATLAB R2014b and executed on a personal desktop (Intel Core i7-4790, 3.60 GHz, 8.00 GB of RAM).
The numerical data are generated as follows. We first generate i.i.d. samples $X_{i\cdot}\sim N(0,\mathbb{I}_n)$ and the noise term $e\sim N(0,(0.1)^2\mathbb{I}_m)$. Then the true parameter $\beta^*$ is generated as a compressible signal whose entries are all nonzero but obey a power-law decay. Specifically, the signal $\beta^*$ is generated by taking the fixed sequence
$\{5.0\times i^{-2}:i=1,2,\cdots,n\}$, multiplying it by a random sign sequence, and finally permuting it at random. The data $y$ are generated according to \eqref{ordi-linear}. The corruption is set as $W_{i\cdot}\sim N(0,(0.2)^2\mathbb{I}_n)$ and $\vartheta=0.2$ for the additive noise and missing data cases, respectively. The problem dimensions $n$ and $m$ will be specified in each experiment. The data are then randomly generated 100 times. The performance of the $M$-estimator $\hat{\beta}$ is measured by the relative error $\|\hat{\beta}-\beta^*\|_2/\|\beta^*\|_2$, averaged across the 100 runs.
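The signal-generation recipe just described can be sketched as follows (the helper names are ours):

```python
import numpy as np

def compressible_signal(n, scale=5.0, rng=None):
    """beta*: power-law decaying magnitudes scale * i^{-2},
    multiplied by random signs and randomly permuted."""
    rng = np.random.default_rng() if rng is None else rng
    mags = scale * np.arange(1, n + 1, dtype=float) ** (-2)
    signs = rng.choice([-1.0, 1.0], size=n)
    return rng.permutation(mags * signs)

def relative_error(beta_hat, beta_star):
    """Performance measure ||beta_hat - beta*||_2 / ||beta*||_2."""
    return np.linalg.norm(beta_hat - beta_star) / np.linalg.norm(beta_star)
```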
As we have mentioned in the preceding sections, there are two different decompositions for the SCAD and MCP regularizers, respectively, which result in two specific forms for \eqref{algo-pga} as follows:
\begin{align}
\beta^{t+1}&\in \argmin_{\beta\in {\mathbb{R}}^n, \frac{1}{\lambda}{\mathcal{H}}_\lambda(\beta)\leq r}\left\{\frac{1}{2}\Big{\|}\beta-\left(\beta^t-\frac{\nabla{\bar{{\mathcal{L}}}_m(\beta^t)}}{v}\right)\Big{\|}_2^2+\frac{1}{v}{\mathcal{H}}_\lambda(\beta)\right\},\label{simu-pga-1}\\
\beta^{t+1}&\in \argmin_{\beta\in {\mathbb{R}}^n, \|\beta\|_1\leq r}\left\{\frac{1}{2}\Big{\|}\beta-\left(\beta^t-\frac{\nabla{\bar{{\mathcal{L}}}_m(\beta^t)}}{v}\right)\Big{\|}_2^2+\frac{\lambda}{v}\|\beta\|_1\right\},\label{simu-pga-2}
\end{align}
where $\bar{{\mathcal{L}}}_m(\cdot)={\mathcal{L}}_m(\cdot)+{\mathcal{Q}}_\lambda(\cdot)$ with ${\mathcal{Q}}_\lambda(\cdot)$ and ${\mathcal{H}}_\lambda(\cdot)$ specified in \eqref{SCAD-h-1}, \eqref{MCP-h-1}, and \eqref{HQ-lambda} for \eqref{simu-pga-1}, and ${\mathcal{Q}}_\lambda(\cdot)$ given by \eqref{SCAD-q-2}, \eqref{MCP-q-2} and \eqref{HQ-lambda} for \eqref{simu-pga-2}. In the following, we use SCAD$\_1$ and SCAD$\_2$ to denote the estimators obtained by iterations \eqref{simu-pga-1} and \eqref{simu-pga-2} with SCAD as the regularizer, respectively, and use MCP$\_1$ and MCP$\_2$ to denote the estimators produced by iterations \eqref{simu-pga-1} and \eqref{simu-pga-2} with MCP as the regularizer, respectively. Note that for the Lasso regularizer, these two iterations coincide, and we use Lasso to stand for the estimator produced by either \eqref{simu-pga-1} or \eqref{simu-pga-2} with ${\mathcal{R}}_\lambda(\cdot)=\lambda\|\cdot\|_1$.
For all simulations, the regularization parameter is set to $\lambda=\sqrt{\frac{\log n}{m}}$, $r=\frac{1.1}{\lambda}{\mathcal{H}}_\lambda(\beta^*)$ for \eqref{simu-pga-1} and $r=1.1\|\beta^*\|_1$ for \eqref{simu-pga-2}, to ensure the feasibility of $\beta^*$. Both \eqref{simu-pga-1} and \eqref{simu-pga-2} are implemented with the step size $\frac1{v}=\frac{1}{2\lambda_{\max}(\Sigma_x)}$ and the initial point $\beta^0=\textbf{0}$. We choose the parameter $a=3.7$ for SCAD and $b=1.5$ for MCP.
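For the Lasso instance of \eqref{simu-pga-2} (where ${\mathcal{Q}}_\lambda\equiv 0$, so $\nabla\bar{{\mathcal{L}}}_m(\beta)=\hat{\Gamma}\beta-\hat{\Upsilon}$), the update admits a closed form: a gradient step, soft-thresholding at level $\lambda/v$, and then Euclidean projection onto the $\ell_1$-ball of radius $r$; since soft-thresholding at $a$ followed by soft-thresholding at $b$ equals soft-thresholding at $a+b$, this two-stage rule solves the constrained subproblem exactly. A minimal sketch (helper names are ours):

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding at level t >= 0."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def project_l1(z, r):
    """Euclidean projection onto the l1 ball of radius r (sort-based)."""
    if np.sum(np.abs(z)) <= r:
        return z.copy()
    u = np.sort(np.abs(z))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - r)[0][-1]
    tau = (css[k] - r) / (k + 1.0)
    return soft_threshold(z, tau)

def pga_step(beta, Gamma, Upsilon, lam, r, v):
    """One Lasso update of (simu-pga-2): gradient step on the quadratic
    loss, soft-threshold at lam/v, then project onto the l1 ball."""
    grad = Gamma @ beta - Upsilon          # gradient of the empirical loss
    z = soft_threshold(beta - grad / v, lam / v)
    return project_l1(z, r)
```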
The first experiment is performed to demonstrate the statistical guarantee for corrupted linear regression in the additive noise and missing data cases with Lasso, SCAD and MCP as regularizers, respectively. For simplicity, we here only report results obtained by iteration \eqref{simu-pga-2}, though iteration \eqref{simu-pga-1} is also applicable. Fig. \ref{f-stat} plots the relative error versus the rescaled sample size $\frac{m}{\log n}$ for three different vector dimensions $n\in \{256,512,1024\}$. The estimators Lasso, SCAD$\_2$ and MCP$\_2$ are represented by solid, dotted and dashed lines, respectively. We can see from Fig. \ref{f-stat} that the three curves corresponding to the same regularizer in each of panels (a) and (b) nearly match one another under different problem dimensions $n$,
which coincides with Theorem \ref{thm-sta}. Moreover, as the sample size increases, the relative error decreases to zero, implying the statistical consistency of the estimators.
\begin{figure}[htbp]
\centering
\subfigure[]{
\includegraphics[width=0.49\textwidth]{li1.pdf}}
\subfigure[]{
\includegraphics[width=0.49\textwidth]{li2.pdf}}
\caption{Statistical consistency for corrupted linear regression with Lasso, SCAD and MCP as the regularizers.}
\label{f-stat}
\end{figure}
The second experiment is designed to compare the performance of the estimators produced by two different decompositions for the SCAD and MCP regularizer, namely the estimators obtained by iterations \eqref{simu-pga-1} and \eqref{simu-pga-2}, respectively. We have investigated the performance for a broad range of dimensions $n$ and $m$, and the results are comparatively consistent across these choices. Hence we here report results for $n=1024$ and a range of the sample sizes $m=\lceil\alpha\log n\rceil$ specified by $\alpha\in\{10,30,80\}$.
In the additive noise case, we can see from Fig. \ref{f-add-scad} that for the SCAD regularizer, there seems to be no difference in accuracy between the two decompositions across the range of sample sizes.
However, for the MCP regularizer, Fig. \ref{f-add-mcp} shows that estimators obtained by iteration \eqref{simu-pga-2} achieve higher accuracy and a faster convergence rate than those produced by iteration \eqref{simu-pga-1}, for both small and large sample sizes.
\begin{figure}[htbp]
\centering
\subfigure[]{
\label{f-add-scad}
\includegraphics[width=0.48\textwidth]{li3.pdf}}
\subfigure[]{
\label{f-add-mcp}
\includegraphics[width=0.48\textwidth]{li4.pdf}}
\caption{Comparison of decompositions for the SCAD and MCP regularizers in the additive noise case.}
\label{f-com-add}
\end{figure}
Fig. \ref{f-com-mis} depicts analogous results to Fig. \ref{f-com-add} in the case of missing data. For the SCAD regularizer, we can see from Fig. \ref{f-mis-scad} that when the sample size is small (e.g., $\alpha=10$), the difference in accuracy between the two decompositions is slight. As the sample size becomes larger (e.g., $\alpha=30,80$), estimators obtained by iteration \eqref{simu-pga-2} achieve higher accuracy and a faster convergence rate than those produced by iteration \eqref{simu-pga-1}. For the MCP regularizer, Fig. \ref{f-mis-mcp} shows that estimators obtained by iteration \eqref{simu-pga-2} outperform those obtained by iteration \eqref{simu-pga-1} in both accuracy and convergence rate, for both small and large sample sizes.
\begin{figure}[htbp]
\centering
\subfigure[]{
\label{f-mis-scad}
\includegraphics[width=0.48\textwidth]{li5.pdf}}
\subfigure[]{
\label{f-mis-mcp}
\includegraphics[width=0.48\textwidth]{li6.pdf}}
\caption{Comparison of decompositions for the SCAD and MCP regularizers in the missing data case.}
\label{f-com-mis}
\end{figure}
In summary, estimators produced by iteration \eqref{simu-pga-2} outperform those obtained by iteration \eqref{simu-pga-1} in both accuracy and convergence speed, especially in the missing data case. This advantage stems from the more general condition (cf. Assumption \ref{asup-regu}(vi)), which makes it possible to consider different decompositions for specific regularizers and thereby to design a more efficient algorithm.
\section{Conclusion}\label{sec-concl}
In this work, we investigated the theoretical properties of local solutions of nonconvex regularized $M$-estimators, where the underlying true parameter is assumed to be of soft sparsity. We provided statistical consistency guarantees for all stationary points of the nonconvex regularized $M$-estimators. We then applied the proximal gradient algorithm to solve a modified version of the nonconvex optimization problem and established a linear convergence rate. In particular, for SCAD and MCP, our assumption on the regularizer makes it possible to consider different decompositions and thereby to construct estimators with better performance. Finally, the theoretical consequences and the advantage of this assumption were demonstrated by several simulations. However, some other regularizers, such as the bridge regularizers widely used in compressed sensing and machine learning, do not satisfy our assumptions; extending the analysis to cover them is left for future work.
\section*{Appendix}\label{sec-appe}
\subsection*{Proof of Lemma \ref{lem-radius}}
The conclusion will be proved by induction on the iteration count $t$. Note that in the base case when $t=0$, the conclusion holds by assumption.
Now in the induction step, let $k\geq 0$ be given and suppose that \eqref{lem-radius-1} holds for $t=k$. Then it remains to show that \eqref{lem-radius-1} holds for $t=k+1$.
Suppose on the contrary that $\|\beta^{k+1}-\hat{\beta}\|_2>3$. By the RSC condition \eqref{alg-RSC2} and \eqref{taylor-modified}, one has that
\begin{equation*}
\bar{{\mathcal{T}}}(\beta^{k+1},\hat{\beta})\geq \gamma_4\|\beta^{k+1}-\hat{\beta}\|_2-\tau_4\sqrt{\frac{\log n}{m}}\|\beta^{k+1}-\hat{\beta}\|_1+{\mathcal{Q}}_\lambda(\beta^{k+1})-{\mathcal{Q}}_\lambda(\hat{\beta})-\langle \nabla{\mathcal{Q}}_\lambda(\hat{\beta}),\beta^{k+1}-\hat{\beta} \rangle.
\end{equation*}
It then follows from \eqref{lem-qlambda-13} in Lemma \ref{lem-qlambda} that
\begin{equation*}
\begin{aligned}
\bar{{\mathcal{T}}}(\beta^{k+1},\hat{\beta})&\geq \gamma_4\|\beta^{k+1}-\hat{\beta}\|_2-\tau_4\sqrt{\frac{\log n}{m}}\|\beta^{k+1}-\hat{\beta}\|_1-\frac{\mu_1}{2}\|\beta^{k+1}-\hat{\beta}\|_2^2.
\end{aligned}
\end{equation*}
Moreover, since the function ${\mathcal{H}}_\lambda$ is convex, one has that
\begin{equation*}
{\mathcal{H}}_\lambda(\beta^{k+1})-{\mathcal{H}}_\lambda(\hat{\beta})\geq \langle \nabla{\mathcal{H}}_\lambda(\hat{\beta}),\beta^{k+1}-\hat{\beta} \rangle.
\end{equation*}
This, together with the former inequality, implies that
\begin{equation*}
\phi(\beta^{k+1})-\phi(\hat{\beta})-\langle \nabla\phi(\hat{\beta}),\beta^{k+1}-\hat{\beta} \rangle\geq \gamma_4\|\beta^{k+1}-\hat{\beta}\|_2-\tau_4\sqrt{\frac{\log n}{m}}\|\beta^{k+1}-\hat{\beta}\|_1-\frac{\mu_1}{2}\|\beta^{k+1}-\hat{\beta}\|_2^2.
\end{equation*}
Since $\hat{\beta}$ is the optimal solution, one has by the first-order optimality condition $\langle \nabla\phi(\hat{\beta}),\beta^{k+1}-\hat{\beta} \rangle\geq 0$ that
\begin{equation}\label{lem-radius-3}
\begin{aligned}
\phi(\beta^{k+1})-\phi(\hat{\beta})&\geq \gamma_4\|\beta^{k+1}-\hat{\beta}\|_2-\tau_4\sqrt{\frac{\log n}{m}}\|\beta^{k+1}-\hat{\beta}\|_1-\frac{\mu_1}{2}\|\beta^{k+1}-\hat{\beta}\|_2^2.
\end{aligned}
\end{equation}
On the other hand, since $\|\beta^k-\hat{\beta}\|_2\leq 3$ by the induction hypothesis, applying the RSC condition \eqref{alg-RSC1} on the pair $(\hat{\beta},\beta^k)$, we have by \eqref{lem-qlambda-13} that
\begin{equation*}
\begin{aligned}
\bar{{\mathcal{L}}}_m(\hat{\beta})&\geq \bar{{\mathcal{L}}}_m(\beta^k)+\langle \nabla\bar{{\mathcal{L}}}_m(\beta^k),\hat{\beta}-\beta^k \rangle+\left(\gamma_3-\frac{\mu_1}{2}\right)\|\hat{\beta}-\beta^k\|_2^2-\tau_3\frac{\log n}{m}\|\hat{\beta}-\beta^k\|_1^2.
\end{aligned}
\end{equation*}
This, together with ${\mathcal{H}}_\lambda(\hat{\beta})\geq {\mathcal{H}}_\lambda(\beta^{k+1})+\langle \nabla{\mathcal{H}}_\lambda(\beta^{k+1}),\hat{\beta}-\beta^{k+1} \rangle$
and the assumption that $\gamma>\frac12\mu_1$, implies that
\begin{equation}\label{lem-radius-4}
\begin{aligned}
\phi(\hat{\beta})
&\geq \bar{{\mathcal{L}}}_m(\beta^k)+\langle \nabla\bar{{\mathcal{L}}}_m(\beta^k),\hat{\beta}-\beta^k \rangle+{\mathcal{H}}_\lambda(\beta^{k+1})+\langle \nabla{\mathcal{H}}_\lambda(\beta^{k+1}),\hat{\beta}-\beta^{k+1} \rangle+ \left(\gamma_3-\frac{\mu_1}{2}\right)\|\hat{\beta}-\beta^k\|_2^2-\tau_3\frac{\log n}{m}\|\hat{\beta}-\beta^k\|_1^2\\
&\geq \bar{{\mathcal{L}}}_m(\beta^k)+\langle \nabla\bar{{\mathcal{L}}}_m(\beta^k),\hat{\beta}-\beta^k \rangle+
{\mathcal{H}}_\lambda(\beta^{k+1})+\langle \nabla{\mathcal{H}}_\lambda(\beta^{k+1}),\hat{\beta}-\beta^{k+1} \rangle-\tau_3\frac{\log n}{m}\|\hat{\beta}-\beta^k\|_1^2.
\end{aligned}
\end{equation}
Applying the RSM condition \eqref{alg-RSM} on the pair $(\beta^{k+1},\beta^k)$, one has by \eqref{lem-qlambda-14} and the assumption $v\geq 2\gamma_5-\mu_2$ that
\begin{equation}\label{lem-radius-5}
\begin{aligned}
\phi(\beta^{k+1})
&\leq \bar{{\mathcal{L}}}_m(\beta^k)+\langle \nabla\bar{{\mathcal{L}}}_m(\beta^k),\beta^{k+1}-\beta^k \rangle+{\mathcal{H}}_\lambda(\beta^{k+1})+ \left(\gamma_5-\frac{\mu_2}{2}\right)\|\beta^{k+1}-\beta^k\|_2^2+\tau_5\frac{\log n}{m}\|\beta^{k+1}-\beta^k\|_1^2\\
&\leq \bar{{\mathcal{L}}}_m(\beta^k)+\langle \nabla\bar{{\mathcal{L}}}_m(\beta^k),\beta^{k+1}-\beta^k \rangle+{\mathcal{H}}_\lambda(\beta^{k+1})+ \frac{v}{2}\|\beta^{k+1}-\beta^k\|_2^2+4\frac{r^2\tau_5}{L^2}\frac{\log n}{m}.
\end{aligned}
\end{equation}
Moreover, it is easy to verify that update \eqref{algo-pga} can be written equivalently as
\begin{equation}\label{lem-radius-8}
\begin{aligned}
\beta^{k+1}\in \argmin_{\beta\in {\mathbb{R}}^n, g(\beta)\leq r}\left\{\bar{{\mathcal{L}}}_m(\beta^k)+\langle \nabla{\bar{{\mathcal{L}}}_m(\beta^k)},\beta-\beta^k \rangle+\frac{v}{2}\|\beta-\beta^k\|_2^2+{\mathcal{H}}_\lambda(\beta)\right\}.
\end{aligned}
\end{equation}
Since $\beta^{k+1}$ is the optimal solution of \eqref{lem-radius-8}, it follows that
\begin{equation}\label{lem-radius-6}
\langle \nabla{\bar{{\mathcal{L}}}_m(\beta^k)}+v(\beta^{k+1}-\beta^k)+\nabla{\mathcal{H}}_\lambda(\beta^{k+1}),\beta^{k+1}-\hat{\beta} \rangle\leq 0.
\end{equation}
Combining \eqref{lem-radius-4}, \eqref{lem-radius-5} and \eqref{lem-radius-6}, one has that
\begin{equation*}
\begin{aligned}
\phi(\beta^{k+1})-\phi(\hat{\beta})
&\leq \frac{v}{2}\|\beta^{k+1}-\beta^k\|_2^2+v\langle \beta^k-\beta^{k+1},\beta^{k+1}-\hat{\beta} \rangle +\tau_3\frac{\log n}{m}\|\hat{\beta}-\beta^k\|_1^2+4\frac{r^2\tau_5}{L^2}\frac{\log n}{m}\\
&\leq \frac{v}{2}\|\beta^k-\hat{\beta}\|_2^2-\frac{v}{2}\|\beta^{k+1}-\hat{\beta}\|_2^2+\tau\frac{\log n}{m}\|\hat{\beta}-\beta^k\|_1^2+4\frac{r^2\tau}{L^2}\frac{\log n}{m}\\
&\leq \frac{v}{2}\|\beta^k-\hat{\beta}\|_2^2-\frac{v}{2}\|\beta^{k+1}-\hat{\beta}\|_2^2+8\frac{r^2\tau}{L^2}\frac{\log n}{m}.
\end{aligned}
\end{equation*}
This, together with \eqref{lem-radius-3} and the assumption $v\geq \mu_1$, implies that
\begin{equation*}
\begin{aligned}
\gamma_4\|\beta^{k+1}-\hat{\beta}\|_2-\tau_4\sqrt{\frac{\log n}{m}}\|\beta^{k+1}-\hat{\beta}\|_1
&\leq
\frac{v}{2}\|\beta^k-\hat{\beta}\|_2^2-\frac{v-\mu_1}{2}\|\beta^{k+1}-\hat{\beta}\|_2^2+8\frac{r^2\tau}{L^2}\frac{\log n}{m}\\
&\leq \frac{9v}{2}-\frac{3(v-\mu_1)}{2}\|\beta^{k+1}-\hat{\beta}\|_2+8\frac{r^2\tau}{L^2}\frac{\log n}{m},
\end{aligned}
\end{equation*}
where the last inequality follows from $\|\beta^k-\hat{\beta}\|_2\leq 3$ by the induction hypothesis and $\|\beta^{k+1}-\hat{\beta}\|_2>3$ by assumption.
Since $\|\beta^{k+1}-\hat{\beta}\|_1\leq \|\beta^{k+1}\|_1+\|\hat{\beta}\|_1\leq g(\beta^{k+1})/L+g(\hat{\beta})/L\leq 2r/L$, it follows that
\begin{equation}\label{lem-radius-contra}
\begin{aligned}
\left(\gamma+\frac{3(v-\mu_1)}{2}\right)\|\beta^{k+1}-\hat{\beta}\|_2
&\leq \frac{9v}{2}+\tau\sqrt{\frac{\log n}{m}}\|\beta^{k+1}-\hat{\beta}\|_1+8\frac{r^2\tau}{L^2}\frac{\log n}{m}\\
&\leq \frac{9v}{2}+2\frac{r\tau}{L}\sqrt{\frac{\log n}{m}}+8\frac{r^2\tau}{L^2}\frac{\log n}{m}.
\end{aligned}
\end{equation}
By assumptions \eqref{thm2-lambda} and \eqref{thm2-m},
we obtain that $2\frac{r\tau}{L}\sqrt{\frac{\log n}{m}}\leq \frac32(\gamma-\frac{3\mu_1}{2})$ and that $8\frac{r^2\tau}{L^2}\frac{\log n}{m}\leq \frac32(\gamma-\frac{3\mu_1}{2})$. Combining these two inequalities with \eqref{lem-radius-contra}, one has that
\begin{equation*}
\left(\gamma+\frac{3(v-\mu_1)}{2}\right)\|\beta^{k+1}-\hat{\beta}\|_2
\leq 3\left(\gamma+\frac{3(v-\mu_1)}{2}\right).
\end{equation*}
Hence, $\|\beta^{k+1}-\hat{\beta}\|_2\leq 3$, a contradiction. Thus, \eqref{lem-radius-1} holds for $t=k+1$.
By the principle of induction, \eqref{lem-radius-1} holds for all $t\geq 0$. The proof is complete.
\subsection*{Proof of Lemma \ref{lem-cone}}
We first show that if $\lambda\geq \frac{4}{L}\|\nabla{\mathcal{L}}_m(\beta^*)\|_\infty$, then for any $\beta\in \Omega$ satisfying
\begin{equation}\label{lem-cone-De2}
\phi(\beta)-\phi(\beta^*)\leq \Delta,
\end{equation}
it holds that
\begin{equation}\label{lem-cone-1}
\begin{aligned}
\|\beta-\beta^*\|_1&\leq 4R_q^{\frac12}\left(\frac{\log n}{m}\right)^{-\frac{q}{4}}\|\beta-\beta^*\|_2+4R_q\left(\frac{\log n}{m}\right)^{\frac12-\frac{q}2}+\frac{2}{L}\min\left(\frac{\Delta}{\lambda},r\right).
\end{aligned}
\end{equation}
Set $\delta:=\beta-\beta^*$. From \eqref{lem-cone-De2}, we obtain that
\begin{equation*}
{\mathcal{L}}_m(\beta^*+\delta)+{\mathcal{R}}_\lambda(\beta^*+\delta)\leq {\mathcal{L}}_m(\beta^*)+{\mathcal{R}}_\lambda(\beta^*)+\Delta.
\end{equation*}
Then subtracting $\langle \nabla{\mathcal{L}}_m(\beta^*),\delta \rangle$ from both sides of the former inequality, one has that
\begin{equation}\label{lem-cone-2}
{\mathcal{T}}(\beta^*+\delta)+{\mathcal{R}}_\lambda(\beta^*+\delta)-{\mathcal{R}}_\lambda(\beta^*)\leq -\langle \nabla{\mathcal{L}}_m(\beta^*),\delta \rangle+\Delta.
\end{equation}
We now claim that
\begin{equation}\label{lem-cone-claim}
{\mathcal{R}}_\lambda(\beta^*+\delta)-{\mathcal{R}}_\lambda(\beta^*)\leq \frac{\lambda L}{2}\|\delta\|_1+\Delta.
\end{equation}
The following argument is divided into two cases. First assume $\|\delta\|_2\leq 3$.
Then it follows from the RSC condition \eqref{alg-RSC1} and \eqref{lem-cone-2} that
\begin{equation*}
\gamma_3\|\delta\|_2^2-\tau_3\frac{\log n}{m}\|\delta\|_1^2+{\mathcal{R}}_\lambda(\beta^*+\delta)-{\mathcal{R}}_\lambda(\beta^*)
\leq \|\nabla{\mathcal{L}}_m(\beta^*)\|_\infty\|\delta\|_1+\Delta\leq \frac{\lambda L}{4}\|\delta\|_1+\Delta.
\end{equation*}
By assumptions \eqref{thm2-lambda} and \eqref{thm2-m}, we obtain that $\lambda L\geq 8\frac{r\tau}{L}\frac{\log n}{m}$. This, together with the facts that $\gamma_3>0$ and that $\|\delta\|_1\leq \|\beta^*\|_1+\|\beta\|_1\leq g(\beta^*)/L+g(\beta)/L\leq 2r/L$, implies \eqref{lem-cone-claim}.
In the case when $\|\delta\|_2>3$, the RSC condition \eqref{alg-RSC2} yields that
\begin{equation*}
\gamma_4\|\delta\|_2-\tau_4\sqrt{\frac{\log n}{m}}\|\delta\|_1+{\mathcal{R}}_\lambda(\beta^*+\delta)-{\mathcal{R}}_\lambda(\beta^*)
\leq \|\nabla{\mathcal{L}}_m(\beta^*)\|_\infty\|\delta\|_1+\Delta
\leq \frac{\lambda L}{4}\|\delta\|_1+\Delta,
\end{equation*}
thus by assumption \eqref{thm2-lambda},
we also arrive at \eqref{lem-cone-claim}.
Let $J$ denote the index set corresponding to the $|S_\eta|$ largest coordinates in absolute value of $\delta$ (recall the set $S_\eta$ defined in \eqref{S-eta}). It then follows from Lemma \ref{lem-regu} (with $S=S_\eta$) that
\begin{equation}\label{lem-cone-8}
{\mathcal{R}}_\lambda(\beta^*)-{\mathcal{R}}_\lambda(\beta^*+\delta)\leq
\lambda L(\|\delta_J\|_1-\|\delta_{J^c}\|_1)+2\lambda L\|\beta^*_{S_\eta^c}\|_1.
\end{equation}
Summing up \eqref{lem-cone-8} and \eqref{lem-cone-claim}, one has that $0\leq \frac{3\lambda L}{2}\|\delta_J\|_1-\frac{\lambda L}{2}\|\delta_{J^c}\|_1+2\lambda L\|\beta^*_{S_\eta^c}\|_1+\Delta$,
and consequently, $\|\delta_{J^c}\|_1\leq 3\|\delta_J\|_1+4\|\beta^*_{S_\eta^c}\|_1+\frac{2\Delta}{\lambda L}$. By the definition of the index set $J$ and using the trivial bound $\|\delta\|_1\leq
2r/L$, one has that
\begin{equation}\label{lem-cone-9}
\|\delta\|_1\leq 4\sqrt{|S_\eta|}\|\delta\|_2+4\|\beta^*_{S_\eta^c}\|_1+\frac{2}{L}\min\left(\frac{\Delta}{\lambda},r\right).
\end{equation}
Combining \eqref{s-eta} with \eqref{lem-cone-9} and setting $\eta=\sqrt{\frac{\log n}{m}}$, we arrive at \eqref{lem-cone-1}.
We now verify that \eqref{lem-cone-De2} is satisfied by the vectors $\hat{\beta}$ and $\beta^t$, respectively. Since $\hat{\beta}$ is the optimal solution, it holds that $\phi(\hat{\beta})\leq \phi(\beta^*)$, and by assumption \eqref{lem-cone-De1}, it holds that $\phi(\beta^t)\leq \phi(\hat{\beta})+\Delta\leq \phi(\beta^*)+\Delta$. Consequently, it follows from \eqref{lem-cone-1} that
\begin{equation*}
\begin{aligned}
\|\hat{\beta}-\beta^*\|_1
&\leq 4R_q^{\frac12}\left(\frac{\log n}{m}\right)^{-\frac{q}{4}}\|\hat{\beta}-\beta^*\|_2+4R_q\left(\frac{\log n}{m}\right)^{\frac12-\frac{q}2},\quad \mbox{and}\\
\|\beta^t-\beta^*\|_1
&\leq 4R_q^{\frac12}\left(\frac{\log n}{m}\right)^{-\frac{q}{4}}\|\beta^t-\beta^*\|_2+4R_q\left(\frac{\log n}{m}\right)^{\frac12-\frac{q}2} +\frac{2}{L}\min\left(\frac{\Delta}{\lambda},r\right).\\
\end{aligned}
\end{equation*}
By the triangle inequality, we then conclude that
\begin{equation*}
\begin{aligned}
\|\beta^t-\hat{\beta}\|_1
&\leq \|\hat{\beta}-\beta^*\|_1+\|\beta^t-\beta^*\|_1
\leq 4R_q^{\frac12}\left(\frac{\log n}{m}\right)^{-\frac{q}{4}}(\|\hat{\beta}-\beta^*\|_2+\|\beta^t-\beta^*\|_2)+8R_q\left(\frac{\log n}{m}\right)^{\frac12-\frac{q}2}+\frac{2}{L}\min\left(\frac{\Delta}{\lambda},r\right)\\
&\leq 4R_q^{\frac12}\left(\frac{\log n}{m}\right)^{-\frac{q}{4}}\|\beta^t-\hat{\beta}\|_2+\bar{\epsilon}_{\text{stat}}+\epsilon(\Delta).
\end{aligned}
\end{equation*}
The proof is complete.
\subsection*{Proof of Lemma \ref{lem-Tphi-bound}}
By the RSC condition \eqref{alg-RSC1}, Lemma 2 and Lemma \ref{lem-qlambda} (cf. \eqref{lem-qlambda-13}), one has that
\begin{equation}\label{lem-bound-1}
\bar{{\mathcal{T}}}(\beta^t,\hat{\beta})\geq (\gamma_3-\frac{\mu_1}{2})\|\beta^t-\hat{\beta}\|_2^2-\tau_3\frac{\log n}{m}\|\beta^t-\hat{\beta}\|_1^2.
\end{equation}
It then follows from Lemma \ref{lem-cone} and assumption \eqref{thm2-m} that
\begin{equation*}
\bar{{\mathcal{T}}}(\hat{\beta},\beta^t)\geq (\gamma_3-\frac{\mu_1}{2})\|\beta^t-\hat{\beta}\|_2^2-\tau_3\frac{\log n}{m}\|\beta^t-\hat{\beta}\|_1^2\geq -2\tau\frac{\log n}{m}(\bar{\epsilon}_{\text{stat}}+\epsilon(\Delta))^2,
\end{equation*}
which establishes \eqref{lem-bound-T1}.
Furthermore, it follows from the convexity of ${\mathcal{H}}_\lambda$ that
\begin{equation}\label{lem-bound-2}
{\mathcal{H}}_\lambda(\beta^t)-{\mathcal{H}}_\lambda(\hat{\beta})-\langle \nabla{\mathcal{H}}_\lambda(\hat{\beta}),\beta^t-\hat{\beta} \rangle\geq 0,
\end{equation}
and the first-order optimality condition for $\hat{\beta}$ that
\begin{equation}\label{lem-bound-3}
\langle \nabla\phi(\hat{\beta}),\beta^t-\hat{\beta} \rangle\geq 0.
\end{equation}
Combining \eqref{lem-bound-1}, \eqref{lem-bound-2} and \eqref{lem-bound-3}, one has that
\begin{equation*}
\phi(\beta^t)-\phi(\hat{\beta})\geq (\gamma_3-\frac{\mu_1}{2})\|\beta^t-\hat{\beta}\|_2^2-\tau_3\frac{\log n}{m}\|\beta^t-\hat{\beta}\|_1^2.
\end{equation*}
Then using Lemma \ref{lem-cone} to bound the term $\|\beta^t-\hat{\beta}\|_2^2$ and noting assumption \eqref{thm2-m},
we arrive at \eqref{lem-bound-T2}.
Now we turn to prove \eqref{lem-bound-0}. Define
\begin{equation*}
\phi_t(\beta):=\bar{{\mathcal{L}}}_m(\beta^t)+\langle \nabla{\bar{{\mathcal{L}}}_m(\beta^t)},\beta-\beta^t \rangle+\frac{v}{2}\|\beta-\beta^t\|_2^2+{\mathcal{H}}_\lambda(\beta),
\end{equation*}
which is the optimization objective function minimized over the feasible region $\Omega=\{\beta:g(\beta)\leq r\}$ at iteration count $t$. For any $a\in [0,1]$, it is easy to see that the vector $\beta_a=a\hat{\beta}+(1-a)\beta^t$ belongs to $\Omega$ by the convexity of $\Omega$. Since $\beta^{t+1}$ is the optimal solution of the optimization problem \eqref{algo-pga}, we have that
\begin{equation*}
\begin{aligned}
\phi_t(\beta^{t+1}) &\leq \phi_t(\beta_a)=\bar{{\mathcal{L}}}_m(\beta^t)+\langle \nabla{\bar{{\mathcal{L}}}_m(\beta^t)},\beta_a-\beta^t \rangle +\frac{v}{2}\|\beta_a-\beta^t\|_2^2+{\mathcal{H}}_\lambda(\beta_a)\\
&\leq \bar{{\mathcal{L}}}_m(\beta^t)+\langle \nabla{\bar{{\mathcal{L}}}_m(\beta^t)},a\hat{\beta}-a\beta^t \rangle +\frac{va^2}{2}\|\hat{\beta}-\beta^t\|_2^2+a{\mathcal{H}}_\lambda(\hat{\beta})+(1-a){\mathcal{H}}_\lambda(\beta^t),
\end{aligned}
\end{equation*}
where the last inequality is from the convexity of ${\mathcal{H}}_\lambda$.
Then by \eqref{lem-bound-T1}, one has that
\begin{equation}\label{lem-bound-4}
\begin{aligned}
\phi_t(\beta^{t+1})
&\leq (1-a)\bar{{\mathcal{L}}}_m(\beta^t)+a\bar{{\mathcal{L}}}_m(\hat{\beta})+2a\tau\frac{\log n}{m}(\epsilon(\Delta)+\bar{\epsilon}_{\text{stat}})^2+\frac{va^2}{2}\|\hat{\beta}-\beta^t\|_2^2+a{\mathcal{H}}_\lambda(\hat{\beta})+(1-a){\mathcal{H}}_\lambda(\beta^t)\\
&\leq \phi(\beta^t)-a(\phi(\beta^t)-\phi(\hat{\beta}))+2\tau\frac{\log n}{m}(\epsilon(\Delta)+\bar{\epsilon}_{\text{stat}})^2+\frac{va^2}{2}\|\hat{\beta}-\beta^t\|_2^2.
\end{aligned}
\end{equation}
Applying the RSM condition \eqref{alg-RSM} on the pair $(\beta^{t+1},\beta^t)$, one has by \eqref{lem-qlambda-11} and the assumption $v\geq 2\gamma_5-\mu_2$ that
\begin{equation*}
\bar{{\mathcal{T}}}(\beta^{t+1},\beta^t)\leq \left(\gamma_5-\frac{\mu_2}{2}\right)\|\beta^{t+1}-\beta^t\|_2^2+\tau_5\frac{\log n}{m}\|\beta^{t+1}-\beta^t\|_1^2\leq \frac{v}{2}\|\beta^{t+1}-\beta^t\|_2^2+\tau\frac{\log n}{m}\|\beta^{t+1}-\beta^t\|_1^2.
\end{equation*}
Adding ${\mathcal{H}}_\lambda(\beta^{t+1})$ to both sides of the former inequality yields that
\begin{equation*}
\begin{aligned}
\phi(\beta^{t+1})
&\leq \bar{{\mathcal{L}}}_m(\beta^t)+\langle \nabla{\bar{{\mathcal{L}}}_m(\beta^t)},\beta^{t+1}-\beta^t \rangle+{\mathcal{H}}_\lambda(\beta^{t+1})+\frac{v}{2}\|\beta^{t+1}-\beta^t\|_2^2+\tau\frac{\log n}{m}\|\beta^{t+1}-\beta^t\|_1^2\\
&=\phi_t(\beta^{t+1})+\tau\frac{\log n}{m}\|\beta^{t+1}-\beta^t\|_1^2.
\end{aligned}
\end{equation*}
This, together with \eqref{lem-bound-4}, implies that
\begin{equation}\label{lem-bound-5}
\phi(\beta^{t+1})\leq \phi(\beta^t)-a(\phi(\beta^t)-\phi(\hat{\beta}))+\frac{va^2}{2}\|\hat{\beta}-\beta^t\|_2^2+\tau\frac{\log n}{m}\|\beta^{t+1}-\beta^t\|_1^2+2\tau\frac{\log n}{m}(\epsilon(\Delta)+\bar{\epsilon}_{\text{stat}})^2.
\end{equation}
Define $\delta^t:=\beta^t-\hat{\beta}$. Then it follows that
$\|\beta^{t+1}-\beta^t\|_1^2\leq (\|\delta^{t+1}\|_1+\|\delta^t\|_1)^2\leq 2\|\delta^{t+1}\|_1^2+2\|\delta^t\|_1^2$. Combining this inequality with \eqref{lem-bound-5}, one has that
\begin{equation*}
\phi(\beta^{t+1})\leq \phi(\beta^t)-a(\phi(\beta^t)-\phi(\hat{\beta}))+\frac{va^2}{2}\|\hat{\beta}-\beta^t\|_2^2+2\tau\frac{\log n}{m}(\|\delta^{t+1}\|_1^2+\|\delta^t\|_1^2)+2\tau\frac{\log n}{m}(\bar{\epsilon}_{\text{stat}}+\epsilon(\Delta))^2.
\end{equation*}
To simplify the notation, we define $\psi:=\tau\frac{\log n}{m}(\bar{\epsilon}_{\text{stat}}+\epsilon(\Delta))^2$, $\zeta:=\tau R_q\left(\frac{\log n}{m}\right)^{1-\frac{q}2}$ and $\Delta_t:=\phi(\beta^t)-\phi(\hat{\beta})$. Applying Lemma \ref{lem-cone} to bound the terms $\|\delta^{t+1}\|_1^2$ and $\|\delta^t\|_1^2$, we obtain that
\begin{equation}\label{lem-bound-6}
\begin{aligned}
\phi(\beta^{t+1})
&\leq \phi(\beta^t)-a(\phi(\beta^t)-\phi(\hat{\beta}))+\frac{va^2}{2}\|\delta^t\|_2^2+64R_q\tau\left(\frac{\log n}{m}\right)^{1-\frac{q}2}(\|\delta^{t+1}\|_2^2+\|\delta^t\|_2^2)+10\psi\\
&= \phi(\beta^t)-a(\phi(\beta^t)-\phi(\hat{\beta}))+\left(\frac{va^2}{2}+64\zeta\right)\|\delta^t\|_2^2+64\zeta\|\delta^{t+1}\|_2^2+10\psi.\\
\end{aligned}
\end{equation}
Subtracting $\phi(\hat{\beta})$ from both sides of \eqref{lem-bound-6}, we have by \eqref{lem-bound-T2} that
\begin{equation*}
\begin{aligned}
\Delta_{t+1}
&\leq (1-a)\Delta_t+\frac{va^2+128\zeta}{\gamma-\mu_1/2}(\Delta_t+2\psi) +\frac{128\zeta}{\gamma-\mu_1/2}(\Delta_{t+1}+2\psi)+10\psi.
\end{aligned}
\end{equation*}
Setting $a=\frac{2\gamma-\mu_1}{4v}\in (0,1)$, one has by the former inequality that
\begin{equation*}
\begin{aligned}
\left(1-\frac{256\zeta}{2\gamma-\mu_1}\right)\Delta_{t+1}&\leq \left(1-\frac{2\gamma-\mu_1}{8v}+\frac{256\zeta}{2\gamma-\mu_1}\right)\Delta_t +2\left(\frac{2\gamma-\mu_1}{8v}+\frac{512\zeta}{2\gamma-\mu_1}+5\right)\psi,
\end{aligned}
\end{equation*}
or equivalently, $\Delta_{t+1}\leq \kappa\Delta_t+\xi(\bar{\epsilon}_{\text{stat}}+\epsilon(\Delta))^2$, where $\kappa$ and $\xi$ were previously defined in \eqref{lem-bound-kappa} and \eqref{lem-bound-xi}, respectively. Finally, we obtain that
\begin{equation*}
\begin{aligned}
\Delta_t &\leq \kappa^{t-T}\Delta_T+\xi(\bar{\epsilon}_{\text{stat}}+\epsilon(\Delta))^2(1+\kappa+\kappa^2+\cdots+\kappa^{t-T-1})\\
&\leq \kappa^{t-T}\Delta_T+\frac{\xi}{1-\kappa}(\bar{\epsilon}_{\text{stat}}+\epsilon(\Delta))^2\leq
\kappa^{t-T}\Delta_T+\frac{2\xi}{1-\kappa}(\bar{\epsilon}_{\text{stat}}^2+\epsilon^2(\Delta)).
\end{aligned}
\end{equation*}
The proof is complete.
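As a numerical sanity check of the closing geometric-series step, the sketch below (with hypothetical placeholder values for $\kappa$, $\xi$ and the error term, not the constants of the theorem) iterates the recursion $\Delta_{t+1}\leq\kappa\Delta_t+\xi(\bar{\epsilon}_{\text{stat}}+\epsilon(\Delta))^2$ at equality and verifies the stated bound:

```python
# Sanity check of the contraction step: iterate the worst case
#   Delta_{t+1} = kappa * Delta_t + xi * eps2
# and compare with the closed-form bound
#   Delta_t <= kappa**(t - T) * Delta_T + xi * eps2 / (1 - kappa).
# All constants below are hypothetical placeholders.
kappa, xi_const, eps2 = 0.5, 1.0, 0.01   # kappa in (0, 1)
delta_T = 5.0                            # value of Delta at the index T
delta = delta_T
for k in range(1, 51):
    delta = kappa * delta + xi_const * eps2
    bound = kappa**k * delta_T + xi_const * eps2 / (1 - kappa)
    assert delta <= bound + 1e-12
# the error settles at the statistical floor xi * eps2 / (1 - kappa)
assert abs(delta - xi_const * eps2 / (1 - kappa)) < 1e-9
```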
\section*{Acknowledgments}
\noindent Chong Li was supported in part by the National Natural Science Foundation of China [grant number 11971429] and Zhejiang Provincial Natural Science Foundation of China [grant number LY18A010004]; Jinhua Wang was supported in part by the National Natural Science Foundation of China [grant number 11771397] and Zhejiang Provincial Natural Science Foundation of China [grant numbers LY17A010021, LY17A010006]; Jen-Chih Yao was supported in part by the Grant MOST [grant number 108-2115-M-039-005-MY3].
\bibliographystyle{elsarticle-harv}
\section{Introduction}
The seminal paper of K.J. Palmer \cite{Palmer} provides sufficient conditions ensuring the
topological equivalence between the solutions of the linear system
\begin{equation}
\label{lin}
y'=A(t)y,
\end{equation}
and the solutions of the quasilinear one
\begin{equation}
\label{no-lin}
x'=A(t)x+f(t,x),
\end{equation}
where the bounded and continuous $n\times n$ matrix $A(t)$ and the continuous function $f\colon \mathbb{R}\times \mathbb{R}^{n}\to \mathbb{R}^{n}$
satisfy some technical conditions.
Roughly speaking, (\ref{lin}) and (\ref{no-lin}) are topologically equivalent
if there exists a map $H\colon \mathbb{R}\times \mathbb{R}^{n}\to \mathbb{R}^{n}$
such that $\nu \mapsto H(t,\nu)$ is a homeomorphism for any fixed $t$. In particular,
if $x(t)$ is a solution of (\ref{no-lin}), then $H[t,x(t)]$ is a solution of (\ref{lin}).
To the best of our knowledge, there are no results concerning the differentiability of the map
$H$, and the purpose of this paper is to find sufficient conditions
ensuring that this map is an orientation-preserving diffeomorphism of class $C^{1}$ (Theorem \ref{anexo-palmer})
and $C^{2}$ (Theorem \ref{SegDer}), both under the assumption that (\ref{lin}) is uniformly asymptotically stable.
As an application of our results, we construct a density
function for the system (\ref{no-lin}) when $f(t,0)=0$ (Theorem \ref{ex-dens0}), generalizing a
converse result for the autonomous case presented
in \cite{Monzon-0}.
\subsection{Palmer's linearization Theorem}
We are interested in the particular case:
\begin{proposition}[Palmer \cite{Palmer}]
\label{difeo}
If the assumptions:
\begin{itemize}
\item[\textbf{(H1)}] $|f(t,x)|\leq \mu$ for any $t\in \mathbb{R}$ and $x\in \mathbb{R}^{n}$.
\item[\textbf{(H2)}] $|f(t,x_{1})-f(t,x_{2})|\leq \gamma |x_{1}-x_{2}|$ for any $t\in \mathbb{R}$ and $x_{1},x_{2}\in \mathbb{R}^{n}$,
where $|\cdot|$ denotes a norm in $\mathbb{R}^{n}$.
\item[\textbf{(H3)}] There exist some constants $K\geq 1$ and $\alpha>0$ such that
the transition matrix $\Psi(t,s)=\Psi(t)\Psi^{-1}(s)$ of \textnormal{(\ref{lin})} verifies
\begin{equation}
\label{cota-exponencial}
||\Psi(t,s)||\leq Ke^{-\alpha(t-s)}, \quad \textnormal{for any} \quad t\geq s.
\end{equation}
\item[\textbf{(H4)}] The Lipschitz constant of $f$ verifies:
\begin{equation}
\label{palmer}
\gamma \leq \alpha/4K,
\end{equation}
\end{itemize}
are satisfied, there exists a unique function
$H\colon \mathbb{R}\times \mathbb{R}^{n}\to \mathbb{R}^{n}$ satisfying
\begin{itemize}
\item[i)] $H(t,x)-x$ is bounded in $\mathbb{R}\times \mathbb{R}^{n}$,
\item[ii)] If $t\mapsto x(t)$ is a solution of \textnormal{(\ref{no-lin})}, then
$H[t,x(t)]$ is a solution of \textnormal{(\ref{lin})}.
\end{itemize}
Moreover, $H$ is continuous in $\mathbb{R}\times \mathbb{R}^{n}$ and
$$
|H(t,x)-x|\leq 4K\mu\alpha^{-1}
$$
for any $(t,x)\in \mathbb{R}\times \mathbb{R}^{n}$. For each fixed $t$, $H(t,x)$ is a homeomorphism
of $\mathbb{R}^{n}$. $L(t,x)=H^{-1}(t,x)$ is continuous in $\mathbb{R}\times \mathbb{R}^{n}$ and if $y(t)$ is any solution
of \textnormal{(\ref{lin})}, then $L[t,y(t)]$ is a solution of \textnormal{(\ref{no-lin})}.
\end{proposition}
\begin{remark}
Palmer's original result assumes that (\ref{lin}) has an exponential dichotomy.
Condition \textbf{(H3)} is the particular case in which the projector is the identity,
which implies that (\ref{lin}) is exponentially stable at $+\infty$. In addition, let us
recall that uniform asymptotic stability and exponential stability are equivalent in the linear case
(see \cite{Coppel} or Theorem 4.11 from \cite{Khalil}).
\end{remark}
This result has been extended and generalized in several directions \cite{Barreira}, \cite{Jiang}, \cite{Lopez},
\cite{Palmer-79}, \cite{Palmer-79-2}, \cite{Xia}, but there are no results about the differentiability
of $x\mapsto H(t,x)$. In this article we provide sufficient conditions, described in terms of $\Psi(t,s)$,
$Df$ and $D^{2}f$, under which $H$ is a $C^{p}$ ($p=1,2$) orientation-preserving diffeomorphism.
\subsection{Density functions}
Let us consider the system
\begin{equation}
\label{generico}
z'=g(t,z) \quad \textnormal{with} \quad g(t,0)=0 \quad \textnormal{for any $t\in \mathbb{R}$},
\end{equation}
where $g\colon \mathbb{R}\times \mathbb{R}^{n}\to \mathbb{R}^{n}$ is such that the existence, uniqueness
and global continuation of the solutions are guaranteed.
\begin{definition}
\label{density}
A density function of \textnormal{(\ref{generico})} is a $C^{1}$ function
$\rho\colon \mathbb{R}\times \mathbb{R}^{n}\setminus \{0\}\to [0,+\infty)$, integrable outside a ball
centered at the origin that satisfies
\begin{displaymath}
\frac{\partial \rho(t,z)}{\partial t}+\triangledown \cdot [\rho(t,z)g(t,z)]>0
\end{displaymath}
almost everywhere with respect to $\mathbb{R}^{n}$ and for every $t\in \mathbb{R}$, where
$$
\triangledown \cdot [\rho g] = \triangledown \rho \cdot g + \rho[\triangledown\cdot g],
$$
and $\triangledown \rho$, $\triangledown \cdot g$ denote respectively the gradient of $\rho$ and divergence
of $g$.
\end{definition}
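As a toy illustration of this definition (autonomous scalar case, with the illustrative choices $g(z)=-z$ and $\rho(z)=1/z^{2}$, which are ours and not taken from the references), the divergence condition can be checked directly:

```python
# Toy check of the divergence condition for the autonomous scalar case
# z' = g(z) = -z with the candidate rho(z) = 1/z**2 (hypothetical choices).
# With no t-dependence the condition reduces to (rho * g)'(z) > 0; here
# rho(z)*g(z) = -1/z, whose derivative is 1/z**2 > 0 for every z != 0.
def rho_times_g(z):
    return (1.0 / z**2) * (-z)      # = -1/z

def divergence(z, h=1e-6):
    # centered finite difference for d/dz [rho(z) g(z)]
    return (rho_times_g(z + h) - rho_times_g(z - h)) / (2 * h)

for z in [0.5, 1.0, 2.0, -1.5, 10.0]:
    assert divergence(z) > 0
    assert abs(divergence(z) - 1 / z**2) < 1e-6
```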
Density functions were introduced by Rantzer in 2001 \cite{Rantzer} in order to obtain sufficient conditions
for almost global stability of autonomous systems; we refer
to \cite{Angeli}, \cite{Dimarogonas}, \cite{Vasconcelos}, \cite{Loizou} and \cite{Meinsma} for a deeper discussion
and applications. The extension to the nonautonomous case has been proved in \cite{Monzon}, \cite{Schlanbusch}:
\begin{proposition}[Theorem 4, \cite{Schlanbusch}]
Consider the system \textnormal{(\ref{generico})} and suppose that $z=0$ is a locally stable equilibrium point.
If there exists a density function associated to \textnormal{(\ref{generico})}, then for every initial time $t_{0}$, the set
of points that are not asymptotically attracted by the origin has zero Lebesgue measure.
\end{proposition}
Converse results (\emph{i.e.}, global asymptotic stability implies the existence of a density function) were
presented simultaneously by Rantzer \cite{Rantzer-2002} and Monz\'on \cite{Monzon-0} in the autonomous case by using different
methods. In particular, in \cite{Monzon-0} the author constructs a density function associated to the system
\begin{equation}
\label{no-lin-PM}
z'=g(z) \quad \textnormal{with} \quad g(0)=0,
\end{equation}
where $0$ is a globally asymptotically stable equilibrium and $g\colon \mathbb{R}^{n}\to \mathbb{R}^{n}$ is a $C^{2}$ function
whose Jacobian matrix at $z=0$ has eigenvalues with negative real part.
The construction has two steps: i) As $u'=Dg(0)u$ is globally asymptotically stable, it is well known that
there exists a density function $\rho(z)$ (we refer to Proposition 1 from \cite{Rantzer} for details). ii)
A $C^{2}$ orientation-preserving diffeomorphism $h\colon \mathbb{R}^{n}\to \mathbb{R}^{n}$ is constructed, such that
$\bar{\rho}(z)$ defined by
\begin{equation}
\label{DR}
\bar{\rho}(z)=\rho(h(z))\det Dh(z)
\end{equation}
is a density function for (\ref{no-lin-PM}).
To the best of our knowledge, there are few converse results in the nonautonomous framework. A first one was presented
by Monz\'on in 2006:
\begin{proposition}[Monz\'on, \cite{Monzon}]
\label{dens-lin}
If \textnormal{(\ref{lin})} is globally asymptotically stable, then there exists a $C^{1}$ density
function $\rho\colon \mathbb{R}\times \mathbb{R}^{n}\setminus \{0\}\to [0,+\infty)$, associated to \textnormal{(\ref{lin})}.
\end{proposition}
If we additionally assume that
\begin{itemize}
\item[\textbf{(H5)}] The function $f$ satisfies $f(t,0)=0$ for any $t\in \mathbb{R}$,
\end{itemize}
then, by Gronwall's inequality combined with \textbf{(H4)}, it is easy to deduce that
any solution $t\mapsto \phi(t,t_{0},x_{0})$ of (\ref{no-lin}) passing through $x_{0}$ at $t=t_{0}$
verifies
$$
|\phi(t,t_{0},x_{0})|\leq Ke^{\{-\alpha+K\gamma\}(t-t_{0})}|x_{0}|, \quad \textnormal{for} \quad t\geq t_{0},
$$
which implies the exponential stability of (\ref{no-lin}) at $t=+\infty$.
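For the reader's convenience, the Gronwall argument can be sketched as follows. By the variation of constants formula, \textbf{(H2)}, \textbf{(H3)} and \textbf{(H5)},
\begin{displaymath}
|\phi(t,t_{0},x_{0})|\leq Ke^{-\alpha(t-t_{0})}|x_{0}|+\int_{t_{0}}^{t}Ke^{-\alpha(t-s)}\gamma\,|\phi(s,t_{0},x_{0})|\,ds,
\end{displaymath}
and setting $u(t)=e^{\alpha(t-t_{0})}|\phi(t,t_{0},x_{0})|$ we obtain
\begin{displaymath}
u(t)\leq K|x_{0}|+K\gamma\int_{t_{0}}^{t}u(s)\,ds \quad \Longrightarrow \quad u(t)\leq K|x_{0}|\,e^{K\gamma(t-t_{0})},
\end{displaymath}
which is the stated estimate after dividing by $e^{\alpha(t-t_{0})}$.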
This property prompts us to extend Monz\'on's converse result \cite{Monzon-0} to the nonlinear system
(\ref{no-lin}) by constructing a density function in the same way as in (\ref{DR}), where $\rho$ is defined
by Proposition \ref{dens-lin} and $h$ is replaced by the map $H(t,x)$ from Theorem \ref{anexo-palmer}.
The paper is organized as follows. Section 2 states our main results and their proofs. Section 3 is devoted
to the converse result and the construction of a density function for (\ref{no-lin}), and an illustrative example is presented
in Section 5.
\section{Main Results}
As usual, given a matrix $M(t)\in M_{n}(\mathbb{R})$, its trace will be denoted by $\tr M(t)$ and its
determinant by $\det M(t)$; the identity matrix is denoted by $I$.
The solution of (\ref{no-lin}) passing through $\xi$ at $t_{0}$ will be denoted
by $\phi(t,t_{0},\xi)$. It will be interesting to consider the map $\xi\mapsto \phi(t,t_{0},\xi)$
and its properties. Indeed, if $f$ is $C^{1}$, it is well known (see \emph{e.g.} \cite[Chap. 2]{Coddington}) that
$\partial \phi(t,t_{0},\xi)/\partial \xi=\phi_{\xi}(t,t_{0},\xi)$ satisfies the matrix differential equation
\begin{equation}
\label{MDE1}
\left\{
\begin{array}{rcl}
\displaystyle \frac{d}{dt}\phi_{\xi}(t,t_{0},\xi)&=&\{A(t)+Df(t,\phi(t,t_{0},\xi))\}\phi_{\xi}(t,t_{0},\xi).\\\\
\phi_{\xi}(t_{0},t_{0},\xi)&=&I.
\end{array}\right.
\end{equation}
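The variational equation (\ref{MDE1}) can be checked numerically in a scalar toy case. The sketch below uses the hypothetical data $A(t)=-1$ and $f(t,x)=0.1\sin x$ (our choices, not taken from the text), integrates the solution and its variational equation jointly, and compares $\phi_{\xi}$ with a centered finite difference:

```python
import math

def rk4(rhs, v0, t0, t1, n=4000):
    # classical 4th-order Runge-Kutta on a list-valued state
    h = (t1 - t0) / n
    t, v = t0, list(v0)
    for _ in range(n):
        k1 = rhs(t, v)
        k2 = rhs(t + h/2, [vi + h/2*ki for vi, ki in zip(v, k1)])
        k3 = rhs(t + h/2, [vi + h/2*ki for vi, ki in zip(v, k2)])
        k4 = rhs(t + h, [vi + h*ki for vi, ki in zip(v, k3)])
        v = [vi + h/6*(a + 2*b + 2*c + d)
             for vi, a, b, c, d in zip(v, k1, k2, k3, k4)]
        t += h
    return v

A = -1.0                                # hypothetical scalar A(t) = -1
f = lambda t, x: 0.1 * math.sin(x)      # hypothetical nonlinearity
Df = lambda t, x: 0.1 * math.cos(x)

def rhs(t, v):
    # joint system: x' = A x + f(t,x) and the variational equation (MDE1)
    x, y = v
    return [A*x + f(t, x), (A + Df(t, x)) * y]

xi, T = 1.0, 2.0
x_T, phi_xi = rk4(rhs, [xi, 1.0], 0.0, T)   # phi and d(phi)/d(xi) at time T
eps = 1e-5                                   # centered finite difference
fd = (rk4(rhs, [xi + eps, 1.0], 0.0, T)[0]
      - rk4(rhs, [xi - eps, 1.0], 0.0, T)[0]) / (2 * eps)
assert abs(phi_xi - fd) < 1e-7
```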
Moreover, it is known (see \emph{e.g.}, Theorem 4.1 from \cite[Ch.V]{Hartman}) that if $f$ is $C^{r}$ with $r>1$, then the map
$\xi \mapsto \phi(t,t_{0},\xi)$ is also $C^{r}$. In particular, if $f$ is $C^{2}$, we can verify that
the second derivatives $\partial^{2}\phi(s,t_{0},\xi)/\partial \xi_{j}\partial\xi_{i}$ are solutions
of the system of differential equations
\begin{equation}
\label{MDE15}
\left\{
\begin{array}{rcl}
\displaystyle \frac{d}{dt}\frac{\partial^{2}\phi}{\partial\xi_{j}\partial\xi_{i}}&=&\displaystyle
\{A(t)+Df(t,\phi)\}\frac{\partial^{2}\phi}{\partial\xi_{j}\partial\xi_{i}}
+D^{2}f(t,\phi)\frac{\partial\phi}{\partial\xi_{j}}\frac{\partial\phi}{\partial\xi_{i}} \\\\
\displaystyle \frac{\partial^{2}\phi}{\partial\xi_{j}\partial\xi_{i}}(t_{0},t_{0},\xi) &=&0,
\end{array}\right.
\end{equation}
with $\phi=\phi(t,t_{0},\xi)$, for any $i,j=1,\ldots,n$.
Now, let us introduce the following conditions
\begin{enumerate}
\item[\textbf{(D1)}] $f(\cdot,\cdot)$ is
$C^{2}$ and, for any fixed $t$, its first derivative is such that
\begin{displaymath}
\int_{-\infty}^{t}||\Psi(t,r)Df(r,\phi(r,0,\xi))\Psi(r,t)||_{\infty}\,dr<1.
\end{displaymath}
\item[\textbf{(D2)}] For any fixed $t$, $A(t)$ and $Df(t,\phi(t,0,\xi))$ are such that
\begin{displaymath}
\liminf\limits_{s\to-\infty}-\int_{s}^{t}\hspace{-0.2cm}\tr A(r)\,dr>-\infty \hspace{0.2cm} \textnormal{and} \hspace{0.2cm}
\liminf\limits_{s\to-\infty}-\int_{s}^{t}\hspace{-0.2cm}\tr \{A(r)+Df(r,\phi(r,0,\xi))\}\,dr>-\infty.
\end{displaymath}
\item[\textbf{(D3)}] For any fixed $t$ and $i,j=1,\ldots,n$, the following limit exists
\begin{equation}
\label{D2}
\lim\limits_{s\to -\infty}\frac{\partial Z(s,x(t))}{\partial x_{j}(t)}e_{i},
\end{equation}
where $x(t)=(x_{1}(t),\ldots,x_{n}(t))=\phi(t,0,\xi)$, $e_{i}$ is the $i$--th component of the canonical
basis of $\mathbb{R}^{n}$ and $Z(s,x(t))$ is a fundamental matrix of the $x(t)$--parameter dependent system
\begin{equation}
\label{L2}
z'=F(s,x(t))z
\end{equation}
satisfying $Z(t,x(t))=I$, where $F(r,x(t))$ is defined as follows
\begin{equation}
\label{L1}
F(r,x(t))=\Psi(t,r)Df(r,\phi(r,t,x(t)))\Psi(r,t),
\end{equation}
\end{enumerate}
\begin{remark}
\label{about-hyp}
We will see that the construction of the homeomorphism $H$ involves the behavior of
$\phi(t,0,\xi)$ for any $t\in (-\infty,\infty)$. In particular, proving that $H$ is a $C^{2}$ orientation-preserving diffeomorphism
requires knowledge of this behavior on $(-\infty,t]$. Indeed, notice that:
\textbf{(D1)} is a technical assumption, introduced to ensure that the homeomorphism $H(t,x)$
stated in Proposition \ref{difeo} is a $C^{1}$ diffeomorphism. It is interesting to point out that,
by using a result of p. 11 of \cite{Coppel}, we know that $|H[s,\phi(s,0,\xi)]|\to +\infty$ when $s\to -\infty$;
this fact combined with statement i) of Proposition \ref{difeo} implies that $|\phi(s,0,\xi)|\to +\infty$. In consequence,
\textbf{(D1)} suggests that the asymptotic behavior of $Df(s,x)$ (when $|x|\to +\infty$ and $s\to -\infty$) must ensure integrability.
Moreover, let us note that this condition appears in some results about asymptotic equivalence (see \emph{e.g.}, \cite{Akhmet},\cite{Rab}).
\textbf{(D2)} is introduced in order to ensure that $H$ is an orientation-preserving diffeomorphism.
We emphasize that this assumption is related to Liouville's formula and is used in the asymptotic integration
literature (see \emph{e.g.}, \cite{Eastham}).
\textbf{(D3)} is introduced to ensure that $H$ is a $C^{2}$ diffeomorphism. Notice that, as stressed by Palmer
\cite[p.757]{Palmer}, the uniqueness of the solution of (\ref{no-lin}) implies
the identity
\begin{equation}
\label{identity-IC}
\phi(s,t,\phi(t,0,\xi))=\phi(s,0,\xi),
\end{equation}
and this fact combined with $x(t)=\phi(t,0,\xi)$ allows us to see that $F(r,x(t))$ is the same function
as in \textbf{(D1)}.
\end{remark}
\begin{theorem}
\label{anexo-palmer}
If \textnormal{\textbf{(H1)--(H4)}} and \textnormal{\textbf{(D1)--(D2)}} are satisfied, then, for
any fixed $t$, the function $x\mapsto H(t,x)$ is an orientation-preserving diffeomorphism.
In particular, if $t\mapsto x(t)$ is a solution of \textnormal{(\ref{no-lin})}, then, for any fixed $t$,
$x(t)\mapsto H[t,x(t)]$ is an orientation-preserving diffeomorphism.
\end{theorem}
\begin{proof}
In order to make the proof self-contained, we will recall some facts of Palmer's proof \cite[Lemma 2]{Palmer}
tailored for our purposes.
Firstly, let us consider the system
$$
z'=A(t)z-f(t,\phi(t,\tau,\nu)),
$$
where $t\mapsto \phi(t,\tau,\nu)$ is the unique solution of (\ref{no-lin}) passing through $\nu$ at $t=\tau$. Moreover,
it is easy to prove that the unique bounded solution of the above system is given by
$$
\chi(t,(\tau,\nu))=-\int_{-\infty}^{t}\Psi(t,s)f(s,\phi(s,\tau,\nu))\,ds.
$$
The map $H$ is constructed as follows:
\begin{displaymath}
H(\tau,\nu)=\nu+\chi(\tau,(\tau,\nu))=\nu-\int_{-\infty}^{\tau}\Psi(\tau,s)f(s,\phi(s,\tau,\nu))\,ds.
\end{displaymath}
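Palmer's formula can be explored numerically in a scalar toy case. The sketch below uses the hypothetical data $A(t)=-1$ (so $\Psi(t,s)=e^{-(t-s)}$, $K=\alpha=1$) and $f(t,x)=0.1/(1+x^{2})$ (so $\mu=0.1$ and \textbf{(H4)} holds), truncates the improper integral, and checks both the bound $|H(t,x)-x|\leq 4K\mu\alpha^{-1}$ and the conjugacy property ii):

```python
import math

def rk4(rhs, v0, t0, t1, n=6000):
    # classical 4th-order Runge-Kutta; works backward when t1 < t0
    h = (t1 - t0) / n
    t, v = t0, list(v0)
    for _ in range(n):
        k1 = rhs(t, v)
        k2 = rhs(t + h/2, [vi + h/2*ki for vi, ki in zip(v, k1)])
        k3 = rhs(t + h/2, [vi + h/2*ki for vi, ki in zip(v, k2)])
        k4 = rhs(t + h, [vi + h*ki for vi, ki in zip(v, k3)])
        v = [vi + h/6*(a + 2*b + 2*c + d)
             for vi, a, b, c, d in zip(v, k1, k2, k3, k4)]
        t += h
    return v

A = -1.0                         # hypothetical scalar data: Psi(t,s) = e^{-(t-s)}
f = lambda t, x: 0.1 / (1 + x*x) # mu = 0.1, Lipschitz constant <= 0.25 = alpha/4K

def H(tau, nu, L=15.0):
    # integrate backward from s = tau to s = tau - L the pair (x, q), where
    # x(s) = phi(s,tau,nu) and q(s) = int_s^tau e^{r-tau} f(r, x(r)) dr
    def rhs(s, v):
        x, q = v
        return [A*x + f(s, x), -math.exp(s - tau) * f(s, x)]
    x_end, q_end = rk4(rhs, [nu, 0.0], tau, tau - L)
    return nu - q_end            # truncated version of Palmer's formula

xi = 1.0
H0 = H(0.0, xi)
assert abs(H0 - xi) <= 0.4       # |H(t,x) - x| <= 4 K mu / alpha = 0.4

# conjugacy check: H[t, phi(t,0,xi)] solves y' = -y, so it equals e^{-t} H[0,xi]
phi1 = rk4(lambda t, v: [A*v[0] + f(t, v[0])], [xi], 0.0, 1.0)[0]
H1 = H(1.0, phi1)
assert abs(H1 - math.exp(-1.0) * H0) < 1e-6
```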
It will be essential to note that the particular case $(\tau,\nu)=(t,\phi(t,0,\xi))$ leads to
\begin{equation}
\label{lect1}
\chi(t,(t,\phi(t,0,\xi)))=-\int_{-\infty}^{t}\Psi(t,s)f(s,\phi(s,t,\phi(t,0,\xi)))\,ds.
\end{equation}
In addition, (\ref{identity-IC}) allows us to reinterpret (\ref{lect1}) as
\begin{equation}
\label{lect2}
\chi(t,(t,\phi(t,0,\xi)))=-\int_{-\infty}^{t}\Psi(t,s)f(s,\phi(s,0,\xi))\,ds.
\end{equation}
In consequence, when $(\tau,\nu)=(t,\phi(t,0,\xi))$, we have:
\begin{equation}
\label{hom-palm}
H[t,\phi(t,0,\xi)]=\phi(t,0,\xi)+\chi(t,(t,\phi(t,0,\xi)))
\end{equation}
and the reader can notice that the notation $H[\cdot,\cdot]$ is reserved for the case where
$H$ is evaluated along a solution of (\ref{no-lin}).
Having in mind the double representation (\ref{lect1})--(\ref{lect2})
of $\chi(t,(t,\phi(t,0,\xi)))$, the map $H[t,\phi(t,0,\xi)]$ can be written as
\begin{equation}
\label{hom-palm1}
H[t,\phi(t,0,\xi)]=\phi(t,0,\xi)-\int_{-\infty}^{t}\Psi(t,s)f(s,\phi(s,0,\xi))\,ds,
\end{equation}
or
\begin{equation}
\label{hom-palm2}
H[t,\phi(t,0,\xi)]=\phi(t,0,\xi)-\int_{-\infty}^{t}\Psi(t,s)f(s,\phi(s,t,\phi(t,0,\xi)))\,ds.
\end{equation}
The proof that $\phi(t,0,\xi)\mapsto H[t,\phi(t,0,\xi)]$ is a homeomorphism for any fixed $t$ is given by Palmer in
\cite[pp.756--757]{Palmer}. In addition, it is straightforward to verify that
(\ref{hom-palm1}) is a solution of (\ref{lin}) passing through $H[0,\xi]$ at $t=0$.
Turning now to the proof that $\nu\mapsto H(\tau,\nu)$ is an
orientation-preserving diffeomorphism for any fixed $\tau$, we will only consider the case $(\tau,\nu)=(t,\phi(t,0,\xi))$
by using (\ref{hom-palm2}). The general case can be proved analogously and is left to the reader.
The proof is decomposed into several steps.
\noindent\textit{Step 1:} Differentiability of the map $\phi(t,0,\xi)\mapsto H[t,\phi(t,0,\xi)]$.
Let us denote $x(t)=\phi(t,0,\xi)$ for any fixed $t$. By using the fact that $f$ is $C^{1}$
combined with (\ref{MDE1}) and
\begin{equation}
\label{MDE2}
\frac{d}{dt}\Psi(t,s)=A(t)\Psi(t,s) \quad \textnormal{and} \quad \frac{d}{ds}\Psi(t,s)=-\Psi(t,s)A(s),
\end{equation}
we can deduce that:
\begin{equation}
\label{delicat}
\begin{array}{rcl}
\displaystyle \frac{\partial H[t,x(t)]}{\partial x(t)}&=& \displaystyle I-\int_{-\infty}^{t}\Psi(t,s)Df(s,\phi(s,t,x(t)))\frac{\partial \phi(s,t,x(t))}{\partial x(t)}\,ds\\\\
\displaystyle &=&\displaystyle I-\int_{-\infty}^{t}\frac{d}{ds}\Big\{\Psi(t,s)\frac{\partial \phi(s,t,x(t))}{\partial x(t)}\Big\}\,ds\\\\
\displaystyle &=&\displaystyle \lim\limits_{s\to -\infty}\Psi(t,s)\frac{\partial \phi(s,t,x(t))}{\partial x(t)}.
\end{array}
\end{equation}
In consequence, the differentiability of $x(t)\mapsto H[t,x(t)]$ follows if and only if the limit above exists.
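For the record, the second equality in (\ref{delicat}) is simply the product rule: by (\ref{MDE1}) and (\ref{MDE2}),
\begin{displaymath}
\frac{d}{ds}\Big\{\Psi(t,s)\frac{\partial \phi(s,t,x(t))}{\partial x(t)}\Big\}
=\Psi(t,s)\big\{-A(s)+A(s)+Df(s,\phi(s,t,x(t)))\big\}\frac{\partial \phi(s,t,x(t))}{\partial x(t)}
=\Psi(t,s)Df(s,\phi(s,t,x(t)))\frac{\partial \phi(s,t,x(t))}{\partial x(t)},
\end{displaymath}
while the third one applies the fundamental theorem of calculus on $(-\infty,t]$ together with $\Psi(t,t)\,\partial\phi(t,t,x(t))/\partial x(t)=I$ at the upper limit.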
\noindent\textit{Step 2:} (\ref{delicat}) is well defined.
By (\ref{MDE1}), we know that $\partial \phi(s,t,x(t))/\partial x(t)$ is solution of the equation:
\begin{equation}
\label{perturbacion1}
\left\{\begin{array}{rcl}
Y'(s)&=&\{A(s)+Df(s,\phi(s,t,x(t)))\}Y(s)\\
Y(t)&=&I.
\end{array}\right.
\end{equation}
By (\ref{MDE2}),(\ref{identity-IC}) and (\ref{perturbacion1}), the reader can
verify that $Z(s,x(t))=\Psi(t,s)\frac{\displaystyle \partial \phi(s,t,x(t))}{\displaystyle \partial x(t)}$
is solution of the $x(t)$--parameter dependent matrix differential equation
\begin{equation}
\label{modif1}
\left\{\begin{array}{rcl}
\displaystyle \frac{dZ}{ds}&=&\big\{\Psi(t,s)Df(s,\phi(s,0,\xi))\Psi(s,t)\big\}Z(s) \\\\
Z(t)&=&I.
\end{array}\right.
\end{equation}
A well known result on successive approximations (see \emph{e.g.}, \cite{Adrianova},\cite{Bellman}) states that
\begin{displaymath}
Z(s,x(t))=I-
\int_{s}^{t}F(r,\xi)\,dr+\sum\limits_{k=2}^{+\infty}(-1)^{k}\Bigg(\int_{s}^{t}F(r_{1},\xi)\,dr_{1}\cdots\int_{r_{k-1}}^{t}F(r_{k},\xi)\,dr_{k}\Bigg),
\end{displaymath}
where $F(r,\xi)$ is defined by (\ref{L1}). Moreover, we also know that
$$
||Z(s,x(t))||\leq \exp\Big(\Big|\int_{s}^{t}||F(r,\xi)||\,dr\Big|\Big)
$$
and \textbf{(D1)} implies that (\ref{delicat}) is well defined.
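Note that the last estimate is again of Gronwall type: integrating (\ref{modif1}) between $s$ and $t$ gives
\begin{displaymath}
Z(s,x(t))=I-\int_{s}^{t}F(r,\xi)Z(r,x(t))\,dr \quad \Longrightarrow \quad
\|Z(s,x(t))\|\leq 1+\Big|\int_{s}^{t}\|F(r,\xi)\|\,\|Z(r,x(t))\|\,dr\Big|,
\end{displaymath}
and Gronwall's inequality, applied backward in $s$, yields the exponential bound above.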
\noindent \emph{Step 3:} $x(t)\mapsto H[t,x(t)]$ is an orientation-preserving diffeomorphism.
Notice that the continuity of $\partial \phi(s,t,x(t))/\partial x(t)$ for any $s\leq t$ (ensured by Theorem 4.1 from \cite[Ch.V]{Hartman}) implies
the continuity of $\partial H[t,x(t)]/\partial x(t)$ and we conclude that $H[t,x(t)]$ is a diffeomorphism.
Liouville's formula (see \emph{e.g.}, Theorems 7.2 and 7.3 from \cite[Ch.1]{Coddington}) implies that
$$
\det\Psi(t,s)>0 \quad \textnormal{and} \quad \det\frac{\partial \phi(s,t,x(t))}{\partial x(t)}>0 \quad
\textnormal{for any} \quad s\leq t
$$
and \textbf{(D2)} implies that these inequalities are preserved as $s\to -\infty$, so we conclude
that $x(t)\mapsto H[t,x(t)]$ is an orientation-preserving diffeomorphism.
\end{proof}
\begin{remark}
As $t\mapsto H[t,x(t)]$ is a solution of (\ref{lin}), the uniqueness of the solution implies that
\begin{equation}
\label{TH}
H[t,\phi(t,0,\xi)]=\Psi(t,0)H[0,\xi].
\end{equation}
\end{remark}
\begin{remark}
\label{equiv-asympt}
The matrix differential equation (\ref{perturbacion1}) can be seen as a perturbation of the matrix equation
\begin{equation}
\label{Lin-one}
\left\{\begin{array}{rcl}
X'(s)&=&A(s)X(s)\\
X(t)&=&I
\end{array}\right.
\end{equation}
related to (\ref{lin}). In addition, (\ref{Lin-one}) has a solution $s\mapsto X(s)=\Psi(s,t)$.
Notice that $\Psi(t,s)X(s)=I$, while Theorem \ref{anexo-palmer} says
that $s\mapsto \Psi(t,s)Y(s)$ has a limit as $s\to -\infty$. This fact suggests that the behaviors of
(\ref{perturbacion1}) and (\ref{Lin-one}) as $s\to -\infty$ are linked by a relation weaker than asymptotic equivalence. Indeed,
in \cite{Akhmet},\cite{Rab} it is proved that \textbf{(D1)} is a necessary condition for asymptotic equivalence between a linear system
and a linear perturbation.
\end{remark}
\begin{theorem}
\label{SegDer}
If \textnormal{\textbf{(H1)--(H4)}} and \textnormal{\textbf{(D1)--(D3)}} are satisfied, then, for
any fixed $t$, the function $x\mapsto H(t,x)$ is a $C^{2}$ orientation-preserving diffeomorphism. In particular, if
$t\mapsto x(t)$ is a solution of \textnormal{(\ref{no-lin})}, then, for any fixed $t$,
$x(t)\mapsto H[t,x(t)]$ is a $C^{2}$ orientation-preserving diffeomorphism.
\end{theorem}
\begin{proof}
Let us denote $x(t)=\big(x_{1}(t),\ldots,x_{n}(t)\big)=\phi(t,0,\xi)$. As in the previous result, the proof will
be decomposed into several steps:\\
\noindent \textit{Step 1:} About $\partial^{2}H[t,x(t)]/\partial x_{j}(t)\partial x_{i}(t)$.\\
\noindent For any $i,j\in \{1,\ldots,n\}$, we can verify that
\begin{displaymath}
\begin{array}{rcl}
\displaystyle \frac{\partial^{2}H}{\partial x_{j}\partial x_{i}}[t,x(t)]&=& \displaystyle -\int_{-\infty}^{t}\hspace{-0.3cm}\Psi(t,s)D^{2}f(s,\phi(s,t,x(t)))
\frac{\partial \phi(s,t,x(t))}{\partial x_{j}}\frac{\partial \phi(s,t,x(t))}{\partial x_{i}}\,ds\\\\
&&\displaystyle -\int_{-\infty}^{t}\Psi(t,s)Df(s,\phi(s,t,x(t)))\frac{\partial^{2}\phi(s,t,x(t))}{\partial x_{j}\partial x_{i}}\,ds,
\end{array}
\end{displaymath}
where $x_{i}=x_{i}(t)$ and $x_{j}=x_{j}(t)$.
Now, by using (\ref{MDE15}) and (\ref{MDE2}), the reader can verify that
\begin{displaymath}
\begin{array}{rcl}
\displaystyle \frac{\partial^{2}H}{\partial x_{j}\partial x_{i}}[t,x(t)] &=&\displaystyle -\int_{-\infty}^{t}\frac{d}{ds}\Big\{\Psi(t,s)\frac{\partial^{2}\phi(s,t,x(t))}{\partial x_{j}\partial x_{i}}\Big\}\,ds\\\\
&=&\displaystyle \lim\limits_{s\to -\infty}\Psi(t,s)\frac{\partial^{2}\phi(s,t,x(t))}{\partial x_{j}\partial x_{i}}
\end{array}
\end{displaymath}
and the existence of $\partial^{2}H[t,x(t)]/\partial x_{j}(t)\partial x_{i}(t)$ follows if and only if the limit above exists.
\noindent\textit{Step 2:} $\partial^{2}H[t,x(t)]/\partial x_{j}(t)\partial x_{i}(t)$ is well defined.\\
By using (\ref{MDE1}) and (\ref{MDE2}), we can see that $s\mapsto \Psi(t,s)\partial\phi(s,t,x(t))/\partial x_{i}$ is a
solution of (\ref{L2}) passing through $e_{i}$ at $s=t$. In consequence, we can deduce that
$$
\Psi(t,s)\frac{\partial \phi(s,t,x(t))}{\partial x_{i}}=Z(s,x(t))e_{i}
$$
and
$$
\Psi(t,s)\frac{\partial^{2} \phi(s,t,x(t))}{\partial x_{j}\partial x_{i}}=\frac{\partial Z}{\partial x_{j}}(s,x(t))e_{i}.
$$
By \textbf{(D3)}, the last identity has a limit as $s\to -\infty$, so
$\partial^{2}H[t,x(t)]/\partial x_{j}\partial x_{i}$ is well defined and continuous with respect to
$x(t)$.
\end{proof}
\begin{remark}
A careful reading of the results above shows that our methods can be generalized in order to prove that
$H$ is a $C^{r}$ diffeomorphism with $r\geq 2$.
\end{remark}
\section{Density function}
As we pointed out in subsection 1.1, \textbf{(H3)} implies that (\ref{lin})
is uniformly asymptotically stable, which is a particular case of global asymptotic stability.
Now, by Proposition \ref{dens-lin}, there exists a density function
$\rho\in C^{1}(\mathbb{R}\times \mathbb{R}^{n}\setminus\{0\},[0,+\infty))$ associated
to (\ref{lin}). By following the ideas for the autonomous case studied by Monz\'on \cite[Prop. III.1]{Monzon-0}
combined with the function $\rho$, we state the following result:
\begin{theorem}
\label{ex-dens0}
If \textnormal{\textbf{(H1)--(H5)}} and \textnormal{\textbf{(D1)--(D3)}} are satisfied,
then there exists a density function $\bar{\rho}\in
C^{1}(\mathbb{R}\times \mathbb{R}^{n}\setminus\{0\},[0,+\infty))$
associated to \textnormal{(\ref{no-lin})}, defined by
\begin{equation}
\label{dens-nl}
\bar{\rho}(t,x)=\rho(t,H(t,x))\Big|\frac{\partial H(t,x)}{\partial x}\Big|,
\end{equation}
where $H(\cdot,\cdot)$ is the $C^{2}$ orientation-preserving diffeomorphism defined
above, $x$ is any initial condition of \textnormal{(\ref{no-lin})} and
$|\cdot|$ denotes a determinant.
\end{theorem}
\begin{proof}
Although in the previous sections the initial condition and the determinant were respectively denoted
by $\xi$ and $\det(\cdot)$, the notation in (\ref{dens-nl}) is classical in the density function literature and should
cause no confusion.
We shall prove that (\ref{dens-nl}) satisfies the properties of Definition \ref{density}
with $g(t,x)=A(t)x+f(t,x)$. Indeed, $\bar{\rho}$ is non--negative since $\rho$ is non--negative
and $H$ is orientation-preserving. In addition, $\bar{\rho}$ is $C^{1}$ since $H$ is $C^{2}$.
The rest of the proof will be decomposed in several steps:
\noindent\emph{Step 1:} $\bar{\rho}(t,x)$ is integrable outside any ball centered at the origin.
Let $B$ be an open ball centered at the origin. By using $H(t,0)=0$ and statement i) from Proposition \ref{difeo}, we can
conclude that $H(t,B)$ is an open and bounded set containing the origin. In consequence, for any fixed $t$, the outside of $B$ is mapped into the
outside of another ball centered at the origin and contained in $H(t,B)$.
Let $\mathcal{Z}$ be a measurable set whose closure does not contain the origin. The property stated above implies that
$H(t,\mathcal{Z})$ is outside of some ball centered at the origin. Now, by the change of variables
theorem, we can see that
$$
\displaystyle \int_{\mathcal{Z}}\bar{\rho}(t,x)\,dx
=
\int_{\mathcal{Z}}\rho(t,H(t,x))\Big|\frac{\partial H(t,x)}{\partial x}\Big|\,dx
=
\int_{H(t,\mathcal{Z})}\rho(t,y)\,dy.
$$
Finally, as $\rho(t,\cdot)$ is integrable outside any open ball centered at the origin, the same follows for $\bar{\rho}(t,\cdot)$.
\noindent\emph{Step 2:} $\bar{\rho}(t,x)$ verifies
\begin{equation}
\label{postividad}
\frac{\partial \bar{\rho}}{\partial t}(t,x)+\triangledown\cdot (\bar{\rho}g)(t,x)>0 \quad
\textnormal{a.e. in $\mathbb{R}^{n}$}.
\end{equation}
Firstly, by using Liouville's formula (see \emph{e.g.}, \cite[Corollary 3.1]{Hartman}), we know that
\begin{displaymath}
\frac{\partial}{\partial \eta}\Big|\frac{\partial \phi(\tau+t,t,x)}{\partial x}\Big|\Bigg|_{\tau=0}=\triangledown \cdot g(t,x),
\end{displaymath}
where $\eta=\tau+t$. Now, it is easy to verify that:
\begin{displaymath}
\begin{array}{rcl}
\displaystyle \frac{\partial \bar{\rho}}{\partial t}(t,x)+\triangledown \cdot (\bar{\rho}g)(t,x)&=&
\displaystyle \frac{\partial}{\partial \eta}\Big\{\bar{\rho}(\tau+t,\phi(\tau+t,t,x))\displaystyle \Big|\frac{\partial \phi(\tau+t,t,x)}{\partial x}\Big|\Big\}\Big|_{\tau=0}\\\\
&=&\displaystyle \frac{\partial}{\partial \eta}\Big\{\rho(\tau+t,H[\tau+t,\phi(\tau+t,t,x)])\\\\
& & \displaystyle \Big|\frac{\partial H[\tau+t,\phi(\tau+t,t,x)]}{\partial \phi(\tau+t,t,x)}\Big|\Big|\frac{\partial \phi(\tau+t,t,x)}{\partial x}\Big|\Big\}\Big|_{\tau=0}\\\\
&=& \displaystyle \frac{\partial}{\partial \eta}\Big\{\rho(\tau+t,H[\tau+t,\phi(\tau+t,t,x)])\\\\
& & \displaystyle \Big|\frac{\partial H[\tau+t,\phi(\tau+t,t,x)]}{\partial x}\Big|\Big\}\Big|_{\tau=0}.\\\\
\end{array}
\end{displaymath}
Secondly, a consequence of (\ref{TH}) is
\begin{displaymath}
H[\tau+t,\phi(\tau+t,t,x)]=\Psi(\tau+t,t)H(t,x),
\end{displaymath}
which implies:
\begin{displaymath}
\begin{array}{rcl}
\displaystyle \frac{\partial \bar{\rho}}{\partial t}(t,x)+\triangledown \cdot (\bar{\rho}g)(t,x) &=& \displaystyle \frac{\partial}{\partial \eta}\Big\{\rho(\tau+t,\Psi(\tau+t,t)H(t,x))
\displaystyle \Big|\frac{\partial \Psi(\tau+t,t)H(t,x)}{\partial x}\Big|\Big\}\Big|_{\tau=0}\\\\
&=&A_{1}(\tau+t,x)+A_{2}(\tau+t,x)\Big|_{\tau=0},
\end{array}
\end{displaymath}
where $A_{1}(\cdot,\cdot)$ and $A_{2}(\cdot,\cdot)$ are respectively defined by
\begin{displaymath}
\begin{array}{rcl}
A_{1}(\tau+t,x)&=&\displaystyle \frac{\partial }{\partial \eta}\Big\{\rho(\tau+t,\Psi(\tau+t,t)H(t,x))\Big\}\Big|\frac{\partial \Psi(\tau+t,t)H(t,x)}{\partial x}\Big|\\\\
&=&\displaystyle \Big\{\frac{\partial \rho}{\partial \eta}(\tau+t,\Psi(\tau+t,t)H(t,x))+\\\\
& &\triangledown\rho\Big(\tau+t,\Psi(\tau+t,t)H(t,x)\Big)A(\tau+t)\Psi(\tau+t,t)H(t,x)\Big\}\\\\
& &\displaystyle \Big|\frac{\partial \Psi(\tau+t,t)H(t,x)}{\partial x}\Big|
\end{array}
\end{displaymath}
and
\begin{displaymath}
\begin{array}{rcl}
A_{2}(\tau+t,x)&=&\rho(\tau+t,\Psi(\tau+t,t)H(t,x))\displaystyle \frac{\partial}{\partial \eta}
\Big\{\Big|\frac{\partial \Psi(\tau+t,t)H(t,x)}{\partial x}\Big|\Big\}\\\\
&=&\rho(\tau+t,\Psi(\tau+t,t)H(t,x))\\\\
& &\displaystyle \frac{\partial}{\partial \eta}\Big\{\Big|\frac{\partial \Psi(\tau+t,t)H(t,x)}{\partial H(t,x)}\Big|\Big|\frac{\partial H(t,x)}{\partial x}\Big|\Big\}
\end{array}
\end{displaymath}
As
\begin{displaymath}
\begin{array}{rcl}
\displaystyle A_{1}(t,x)&=&\displaystyle \Big\{\frac{\partial \rho}{\partial \eta}(t,H(t,x))+\triangledown\rho(t,H(t,x))A(t)H(t,x)\Big\}\Big|\frac{\partial H(t,x)}{\partial x}\Big|
\end{array}
\end{displaymath}
and
\begin{displaymath}
\begin{array}{rcl}
A_{2}(t,x)&=&\displaystyle \rho(t,H(t,x))\tr A(t)\Big|\frac{\partial H(t,x)}{\partial x}\Big|,
\end{array}
\end{displaymath}
we can conclude that
\begin{displaymath}
\begin{array}{rcl}
\displaystyle \frac{\partial \bar{\rho}}{\partial t}(t,x)+\triangledown \cdot (\bar{\rho}g)(t,x)&=&A_{1}(t,x)+A_{2}(t,x)\\\\
&=&\displaystyle \Big\{\frac{\partial \rho}{\partial \eta}(t,H(t,x))+\triangledown \cdot \big[\rho(t,y)A(t)y\big]\big|_{y=H(t,x)}\Big\}\Big|\frac{\partial H(t,x)}{\partial x}\Big|,
\end{array}
\end{displaymath}
which is positive since it is the product of two positive terms. The positivity of the first one is ensured by Proposition \ref{dens-lin},
while that of the second follows from Theorem \ref{anexo-palmer}.
\noindent\emph{Step 3:} End of proof.
As we commented before, the existence of a density function associated to (\ref{no-lin}) is based on the homeomorphism $H$ constructed by
Palmer (Proposition \ref{difeo}) and on the existence of the density function $\rho(t,x)$ associated to (\ref{lin}) constructed by
Monz\'on (Proposition \ref{dens-lin}). Proposition \ref{anexo-palmer} and Theorem \ref{SegDer} ensure that $H$ is an orientation-preserving $C^{2}$
diffeomorphism, while the previous steps show that (\ref{dens-nl}) is indeed a density function associated to (\ref{no-lin}), and the result follows.
\end{proof}
\subsection{An application to nonlinear systems}
Let us consider the nonlinear system
\begin{equation}
\label{dugma}
x'=g(t,x)
\end{equation}
where $g$ is a $C^{2}$ function satisfying
\begin{itemize}
\item[\textbf{(H1')}] $g(t,0)=0$ and $|g(t,x)|\leq \tilde{\mu}$ for any $t\in \mathbb{R}$ and $x\in \mathbb{R}^{n}$.
\item[\textbf{(H2')}] $|g(t,x_{1})-g(t,x_{2})|\leq L |x_{1}-x_{2}|$ for any $t\in \mathbb{R}$ and $x_{1},x_{2}\in \mathbb{R}^{n}$.
\end{itemize}
\begin{corollary}
\label{application}
If:
\begin{itemize}
\item[\textbf{(G1)}] The linear system $y'=Dg(t,0)y$ is exponentially stable and its transition matrix satisfies
$$
||\Phi(t,s)||\leq Ke^{-\alpha(t-s)} \quad \textnormal{for some} \quad K\geq 1 \quad \textnormal{and} \quad \alpha>0.
$$
\item[\textbf{(G2)}] The Lipschitz constant $L$ satisfies
$$
L+||Dg(t,0)|| \leq \alpha/4K \quad \textnormal{for any} \quad t\in \mathbb{R},
$$
\item[\textbf{(G3)}] The first derivative of $g$ is such that
\begin{displaymath}
\int_{-\infty}^{t}||\widetilde{F}(r,\xi)||_{\infty}\,dr< 1
\end{displaymath}
for any fixed $t$, with
\begin{displaymath}
\widetilde{F}(r,\xi)=\Phi(t,r)\{Dg(r,\varphi(r,0,\xi))-Dg(r,0)\}\Phi(r,t),
\end{displaymath}
where $\varphi(r,0,\xi)$ is the solution of \textnormal{(\ref{dugma})} passing through $\xi$ at $r=0$.
\item[\textbf{(G4)}] For any fixed $t$, $Dg(t,0)$ and $Dg(t,\varphi(t,0,\xi))$ are such that
\begin{displaymath}
\liminf\limits_{s\to-\infty}-\int_{s}^{t}\tr Dg(r,0)\,dr>-\infty \hspace{0.2cm} \textnormal{and} \hspace{0.2cm}
\liminf\limits_{s\to-\infty}-\int_{s}^{t}\tr Dg(r,\varphi(r,0,\xi))\,dr>-\infty
\end{displaymath}
for any initial condition $\xi$.
\item[\textbf{(G5)}] For any fixed $t$ and $i,j=1,\ldots,n$, the following limit exists
\begin{displaymath}
\lim\limits_{s\to -\infty}\frac{\partial \widetilde{Z}(s,x(t))}{\partial x_{j}(t)}e_{i},
\end{displaymath}
where $x(t)=\varphi(t,0,\xi)$ and $\widetilde{Z}(s,x(t))$ is a fundamental matrix of
\begin{displaymath}
\widetilde{Z}'=\widetilde{F}(s,x(t))\widetilde{Z}.
\end{displaymath}
\end{itemize}
then there exists a density function $\bar{\rho}\in
C(\mathbb{R}\times \mathbb{R}^{n}\setminus\{0\},[0,+\infty))$ associated to \textnormal{(\ref{dugma})}.
\end{corollary}
\section{Illustrative Example}
Let us consider the scalar equation
\begin{equation}
\label{baby}
x'=-ax+h(t)\arctan(x),
\end{equation}
where $a>0$ and $h\colon \mathbb{R}\to \mathbb{R}_{+}$ is bounded and continuous. In addition, we will suppose that
\begin{equation}
\label{baby1}
r\mapsto h(r)e^{-ar} \quad \textnormal{is integrable on} \quad (-\infty,\infty).
\end{equation}
It is easy to see that \textbf{(H1)--(H2)} are satisfied with $\mu=||h||_{\infty}\pi/2$
and $\gamma=||h||_{\infty}$.
Notice that \textbf{(H3)} is satisfied since $\Psi(t,s)=e^{-a(t-s)}$ and \textbf{(H4)}
is satisfied if and only if $4||h||_{\infty}\leq a$.
Moreover, \textbf{(D1)} is satisfied if for any solution $r\mapsto \phi(r,0,\xi)$ of (\ref{baby})
\begin{equation}
\label{int-conver}
\int_{-\infty}^{\infty}\frac{h(r)}{1+\phi^{2}(r,0,\xi)}\,dr<1.
\end{equation}
It is interesting to point out that $\phi(t,0,\xi)$ is unbounded and has exponential growth as $t\to-\infty$. Now, it is easy to note that
$$
\lim\limits_{s\to -\infty} -as=+\infty,
$$
which implies that
$$
\liminf\limits_{s\to -\infty}-\int_{s}^{t}\Big\{-a+\frac{h(r)}{1+\phi^{2}(r,0,\xi)}\Big\}\,dr>-\infty
$$
for any fixed $t$, and \textbf{(D2)} is satisfied.
Letting $f(t,x)=h(t)\arctan(x)$
and noticing that
$$
Z(s,x(t))=\exp\Big\{-\int_{s}^{t}Df(u,\phi(u,t,x(t)))\,du\Big\},
$$
and
$$
\frac{\partial \phi(s,t,x(t))}{\partial x(t)}=\exp\Big\{a(t-s)-\int_{s}^{t}Df(u,\phi(u,t,x(t)))\,du\Big\}
$$
with $x(t)=\phi(t,0,\xi)$, a straightforward computation shows that \textbf{(D3)} is satisfied if and only if
$$
Z(s,x(t))\Big[\int_{s}^{t}\exp\Big(a\{t-u\}-\int_{u}^{t}Df(r,\phi(r,t,x(t)))\,dr\Big)D^{2}f(u,\phi(u,t,x(t)))\,du\Big],
$$
has a limit as $s\to -\infty$.
Finally, (\ref{baby1}) and (\ref{int-conver}) imply that \textbf{(D3)} is satisfied since the function
$$
u\mapsto \frac{h(u)e^{-a(u-t)}\phi(u,t,x(t))}{(1+\phi^{2}(u,t,x(t)))^{2}}=\frac{h(u)e^{-a(u-t)}\phi(u,0,\xi)}{(1+\phi^{2}(u,0,\xi))^{2}}
$$
is integrable on $(-\infty,t]$ for any $t\in \mathbb{R}$.
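The conditions above can be checked numerically for a concrete choice of $h$. The sketch below is illustrative only: the choices $a=1$, $h(t)=0.1e^{-t^{2}}$ and the initial condition $\xi=2$ are assumptions, not taken from the text. It approximates the integral in (\ref{int-conver}) along a solution of (\ref{baby}) obtained by integrating forward and backward from $r=0$:

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Illustrative choices (not from the text): a = 1, h(t) = 0.1 exp(-t^2),
# so h is bounded, continuous, positive, and h(r)e^{-ar} is integrable.
a = 1.0
h = lambda t: 0.1 * np.exp(-t ** 2)
rhs = lambda t, x: -a * x + h(t) * np.arctan(x)

xi = 2.0  # initial condition at r = 0
fwd = solve_ivp(rhs, (0.0, 20.0), [xi], dense_output=True, rtol=1e-9, atol=1e-12)
bwd = solve_ivp(rhs, (0.0, -20.0), [xi], dense_output=True, rtol=1e-9, atol=1e-12)

# piece together phi(r, 0, xi) on [-20, 20] and approximate the integral
r = np.linspace(-20.0, 20.0, 4001)
phi = np.where(r >= 0,
               fwd.sol(np.clip(r, 0.0, None))[0],
               bwd.sol(np.clip(r, None, 0.0))[0])
integral = trapezoid(h(r) / (1.0 + phi ** 2), r)
print(integral)  # bounded above by 0.1*sqrt(pi) ~ 0.177, hence < 1
```

For this particular $h$ the condition holds trivially, since the integrand is dominated by $h$ itself; the same script can be reused to test less obvious choices of $h$.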
\section{Introduction}
Engineering has relied on identification of system dynamics from first-principle methods for decades in order to understand the underlying mechanisms driving dynamics. These first-principle equations form the fundamentals that are used to design and operate systems for desired outputs such as heat transfer, operation of process plants, fluid flow operations, etc. The equations are also augmented with observation-driven empirical relationships which are not fundamentally laws but form basic governing rules. Combined, both mechanistic and empirical rules form the core of engineering in design and operations.
However, in several cases the first-principle-based equations may not be available for the system or might be extremely complicated for quick computation and analysis. In these scenarios, it becomes necessary to develop an understanding of the system using data-driven methods. With the advancement in tools to generate, store, transport and analyze high-quality and high-quantity data, it has become inevitable to rely on data-driven methods to extract governing equations and patterns for a system. However, the use of data-driven methods to understand dynamical systems has been limited. In chemical engineering systems, a single unit can exhibit complex dynamics. This motivates the need to explore the potential of data-based methods to extract dynamic governing equations in such systems.
In this work, we try to identify the system dynamics of a distillation column. Distillation columns are one of the most well-studied and established units in chemical engineering. We first build a dynamic process flow simulation of this distillation column to generate time series data. We then apply data-driven system identification to this system and try to answer the question of whether data-based machine learning methods can replace first principles.
The paper is organized as follows. In Section \ref{sec:sparse} we explain the method of sparse regression of dynamical systems. This method has been shown \cite{Brunton3932, hoffmann2019} to possess the ability to extract sparse governing equations for dynamic systems and can balance model complexity with model accuracy. In Section \ref{subsec:column}, we explain the process flow simulation developed in Aspen Plus\textsuperscript{\textregistered} for the distillation column. The rest of Section \ref{sec:method} deals with creating a dynamic simulation, data generation, model training, selection and testing. In Section \ref{sec:results} we show the results of the research and interpret them with the goal to establish the extent to which machine learning can substitute for first principles. We finally discuss the key takeaways and the prospects for future research in Section \ref{sec:discuss}.
\section{Approach : Physical System Identification Using Sparse Identification of Non-Linear Dynamics (SINDy)}
\subsection{Method of Sparse Identification of Non-Linear Dynamics}
\label{sec:sparse}
Sparse regression is a machine learning methodology which works under the assumption that the governing equations of most dynamical systems contain very few terms. These equations can be considered sparse in the function space, and the system is expected to evolve on a low-dimensional manifold. Sparse identification tries to discover these equations from noisy time series data.\\
We consider systems whose governing equations are non-linear ODEs of the form,\\
$$
\frac{d\bm{x(t)}}{dt} = f(\bm{x(t)}, u(t))
$$
where $\bm{x(t)} \in \mathbb{R}^n$ denotes the state of the system at time $t$, $u(t) \in \mathbb{R}$ is the input function value at time $t$ and $f\left(\bm{x(t)}, u(t)\right)$ is a linear combination of non-linear functions of $\bm{x(t)}$ and $u(t)$. Mathematically,
\begin{align*}
\bm{\dot{x}(t)} &=
\displaystyle\Sigma_{i=1}^k\xi_i\theta_i\left(\bm{x(t)}, u(t)\right)\\
\bm{x(t)} &= \begin{bmatrix}
x_1(t) & x_2(t) & \hdots x_n(t)
\end{bmatrix}
\end{align*}
Where $x_1, \hdots x_n$ are the states of the system, $\theta$s are non-linear functions called the candidate terms of $f(x)$, and $\xi$s are the coefficients of the terms. We expect most of $\xi$s to be $0$ making $f(x, u)$ sparse in the number of terms. The goal of the algorithm is to identify the very few terms which make up $f(x, u)$ from a very large set of candidates. Instead of a combinatorial search for these terms by brute force, the algorithm includes a penalty for model complexity. This forces the selected function to be sparse. By forcing sparse functions the algorithm also ensures that the model obtained does not overfit the data.\\
The data required for the algorithm is a time series of states arranged in a matrix $\bm{X(t)} \in \mathbb{R}^{m\times n}$ of the form
\begin{align*}
\bm{X(t)} = \begin{bmatrix}
x_1(t) & x_2(t) & \hdots & x_n(t)\\
x_1(t-1) & x_2(t-1) & \hdots & x_n(t-1)\\
\vdots & & &\vdots\\
x_1(t-m+1) & x_2(t-m+1) & \hdots & x_n(t-m+1)
\end{bmatrix}
\end{align*}
The derivative of $\bm{X(t)}, \bm{\dot{X}(t)} \in \mathbb{R}^{m\times n}$ is matrix of the form
\begin{align*}
\bm{\dot{X}(t)} = \begin{bmatrix}
\dot{x_1}(t) & \dot{x_2}(t) & \hdots & \dot{x_n}(t)\\
\dot{x_1}(t-1) & \dot{x_2}(t-1) & \hdots & \dot{x_n}(t-1)\\
\vdots & & &\vdots\\
\dot{x_1}(t-m+1) & \dot{x_2}(t-m+1) & \hdots & \dot{x_n}(t-m+1)
\end{bmatrix}
\end{align*}
obtained by numerically differentiating $X(t)$. And,
$$
\bm{u(t)} =
\begin{bmatrix} u(t), u(t-1), \hdots u(t-m+1) \end{bmatrix}^T
$$
The governing equation becomes
$$
\bm{\dot{X}(t)} = \Theta\left(\bm{X(t), u(t)}\right)\Xi
$$
With $\Theta\left(\bm{X(t)}\right) \in \mathbb{R}^{m\times k}$ given by
\begin{math}
\begin{bmatrix}
\theta_1\left(\bm{X(t)}\right) & \theta_2\left(\bm{X(t)}\right) \hdots
\end{bmatrix}
\end{math}\\
And $\Xi \in \mathbb{R}^{k\times n}$ given by
$ \begin{bmatrix}
\xi_1 & \xi_2 & \hdots \xi_n
\end{bmatrix} $\\~\\
We try to identify the sparse matrix $\Xi$ by solving the least squares optimization problem. However, this includes an optimization for every column of $\bm{\dot{X}(t)}$. So in this case, we have to solve n optimization problems, one for each of the n states of the system.\\
The algorithm forces sparsity by adding a regularization term to the objective function. The ideal regularization to force sparsity would be minimizing the $L_0$ norm of the coefficients (the number of non-zero terms in the vector). But this is an NP-hard problem \cite{natarjan1995sparse}. However, it has been shown \cite{Donoho2197} that minimizing the $L_1$ norm is a convex optimization and also produces solutions which are sparse. This is referred to as the least absolute shrinkage and selection operator (LASSO). The LASSO optimization problem is
$$
\xi_i^* = \underset{\xi_i}{\text{argmin }}\left\|\bm{\dot{x}_i}-\Theta\left(\bm{X(t)}\right)\xi_i\right\|_2+\alpha\left\|\xi_i\right\|_1 \quad \quad i = 1,2 \hdots n
$$
$\alpha$ is the regularization parameter which has to be tuned in order to achieve a trade-off between accuracy and sparsity. This optimization problem can be solved by standard convex optimization algorithms. We have used the coordinate descent algorithm, which is available as a prebuilt function in the \textit{scikit-learn} Python library.\\
The capability of the algorithm to capture the dynamics of the system depends mainly on the candidate functions provided. Some prior knowledge of the process might help identify these candidate functions. This is a place where domain knowledge becomes important. The method also depends on the quality of the data. Therefore, we need to filter the derivatives and/or variables as we are using numerical differentiation to obtain the derivatives.
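To make the workflow concrete, the following minimal sketch recovers a sparse $\Xi$ with the same LASSO formulation, using \textit{scikit-learn}'s coordinate-descent solver. The toy one-state system $\dot{x}=-x+u$, the hand-picked five-term library and the value of $\alpha$ are illustrative assumptions, not taken from the column study:

```python
import numpy as np
from sklearn.linear_model import Lasso

# simulate a toy system xdot = -x + u with forward Euler
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
u = np.sin(t)
x = np.empty_like(t)
x[0] = 1.0
for i in range(t.size - 1):
    x[i + 1] = x[i] + dt * (-x[i] + u[i])

xdot = np.gradient(x, t)  # numerical derivative of the state
# candidate library Theta(x, u); only the first and second terms are "true"
theta = np.column_stack([x, u, x * u, x ** 2, u ** 2])
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100000).fit(theta, xdot)
print(np.round(model.coef_, 2))  # close to [-1, 1, 0, 0, 0]
```

Each state of the full system yields one such regression, so the identified model consists of $n$ independent LASSO fits against a shared candidate library.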
\subsection{Physical System for Identification : A Distillation Column}
\label{subsec:column}
In order to study the application of a machine learning approach to identifying the governing equations for dynamics in engineered systems, we selected the unit operation of a distillation column. Distillation columns are one of the most ubiquitous unit operations in process industries, ranging from petrochemicals and biomass to the next generation of biorefineries. While the column looks simple from the operations perspective (after multiple decades of theory and design development), the dynamics of this system are complex. The dynamics of the system depend on multiple physical laws such as heat transfer, diffusion principles, mass flow dynamics, heuristics that relate pressure to chemical properties, temperature-pressure relationships, etc. A standard software package used to model the operation of such a unit can include up to thousands of equations. While control principles using linearized models are already being used to develop control systems for these units, an overall simple law that governs the dynamics of these systems is not known. Our goal in using a machine learning based approach was to test the applicability of a simplified data-driven approach to identify the governing laws, as data can be more easily generated. A first-principles approach to identify the complex dynamics of these systems would certainly be a much more difficult task. Next we describe the system selected, the generation of time series data, and the selected system variables that describe the state of the system to apply the SINDy method.
\subsubsection{Test Distillation Column} \label{distColumnStudied}
The system considered was an extractive distillation column used to recover methylcyclohexane (MCH) from a mixture of MCH and toluene. Since MCH (Boiling Point = $101 \degree$ C) and toluene (Boiling Point = $110.6 \degree$ C) have very close boiling points, they cannot be separated by a conventional distillation column. Therefore, we use phenol (Boiling Point = $181.7 \degree$ C), which has a higher affinity towards toluene, to alter the relative volatility and promote separation. An equimolar mixture of MCH and toluene forming the feed stream and a pure phenol stream are fed to the distillation column. MCH is extracted as the overhead product while toluene and phenol leave as the bottoms products. The process flow diagram for the column is given in Fig.\ref{fig:column}. The column was modelled as a RadFrac unit. The specifications of the distillation column used are listed in Table.\ref{tab:column_specs} and the feed conditions are given in Table.\ref{tab:feed_specs}.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{Column.PNG}
\caption{Process Flow Diagram - Distillation Column}
\label{fig:column}
\end{figure}
The column is able to recover $97.3\%$ of the MCH originally present in the feed stream. The process flow diagram is then exported to Aspen Dynamics for running dynamic simulations that allow extracting the rules governing the dynamics of this system. The first-principle based mechanistic model has 2403 variables and 1848 equations as identified by Aspen Dynamics; however, the structure of all these equations is not known.
\begin{longtable}[h!]{|ll|r|}
\caption{Distillation Column Specifications}\\
\hline
\multicolumn{2}{|c|}{\textbf{Specification}} & \multicolumn{1}{l|}{Value} \\ \hline
%
%
No. of stages & & 22 \\ \hline
Reflux Ratio & & 8 \\ \hline
Distillate Rate (lbmol/hr) & & 200 \\ \hline
FEED Stage & & 14 \\ \hline
PHENOL Stage & & 7 \\ \hline
Stage 1 (Condenser) Pressure (psia) & & 16 \\ \hline
Stage 22 (Reboiler) Pressure (psia) & & 20.2 \\ \hline
\multicolumn{1}{|l|}{} & Diameter (ft) & 5 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Spacing (ft) & 2 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Weir Height (ft) & 0.164 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Lw/D & 0.7267 \\ \cline{2-3}
\multicolumn{1}{|l|}{Tray Geometry} & \% Active Area & 90 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Overall Efficiency & 1 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & \% Hole Area & 10 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Hole Diameter (ft) & 0.0833 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & \% Downcomer Escape Area & 10 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Foaming Factor & 1 \\ \hline
\multicolumn{1}{|l|}{} & Length (ft) & 6 \\ \cline{2-3}
\multicolumn{1}{|l|}{Reflux Drum} & Diameter (ft) & 3 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Head Type & Horizontal \\ \hline
%
\multicolumn{1}{|l|}{} & Height (ft) & 5 \\ \cline{2-3}
\multicolumn{1}{|l|}{Sump} & Diameter (ft) & 3 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Head Type & Elliptical \\ \hline
%
\multicolumn{1}{|l|}{} & Type & Total \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Heat Transfer & LMTD \\ \cline{2-3}
\multicolumn{1}{|l|}{Condenser} & Medium Temperature (F) & 68 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Temperature Approach (F) & 18 \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Heat Capacity (Btu/lb-R) & 1.00315 \\ \hline
%
\multicolumn{1}{|l|}{} & Type & Kettle \\ \cline{2-3}
\multicolumn{1}{|l|}{Reboiler} & Heat Transfer & Constant Duty \\ \cline{2-3}
\multicolumn{1}{|l|}{} & Heat Duty (Btu/hr) & 31615232.6 \\ \hline
\label{tab:column_specs}
\end{longtable}
\begin{longtable}[h!]{|ll|l|l|}
\caption{Feed Specifications}\\
\hline
\multicolumn{2}{|c|}{} & PHENOL & FEED \\ \hline
%
%
Molar Flow (lbmol/hr) & & 1200 & 400 \\ \hline
\multicolumn{1}{|l|}{} & Phenol & 1 & 0 \\ \cline{2-4}
\multicolumn{1}{|l|}{Mole Fraction} & Toluene & 0 & 0.5 \\ \cline{2-4}
\multicolumn{1}{|l|}{} & MCH & 0 & 0.5 \\ \hline
Temperature (F) & & 220 & 220 \\ \hline
Pressure (psia) & & 20 & 20 \\ \hline
\label{tab:feed_specs}
\end{longtable}
\subsubsection{Dynamics and Time Series Generation}
In order to capture the dynamics, the system was perturbed by adding perturbations to the phenol feed stream while the feed flow rate was kept constant. This allows the approach to identify equations that govern the dynamics developed in the system due to changes in the extracting agent's flow rate. An initial sensitivity analysis was carried out in Aspen Plus Steady State (results shown in Fig.\ref{fig:sensitivity}) to identify the valid values of the phenol flow rate for which the column can operate without errors.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{Sensitivity.PNG}
\caption{Sensitivity Analysis on Exit Streams for Phenol Flow Rate Variations}
\label{fig:sensitivity}
\end{figure}
The perturbations were restricted to a fraction of this zone and the rest of the valid region was used for testing. \\
\noindent{\textbf{Perturbations :}}
The phenol feed perturbation was implemented by executing a Task in Aspen Dynamics. The perturbation was a random mix of step changes, linear ramps and sigmoidal ramps with a time period of 1 hour each and amplitudes between 1000 lbmol/hr and 3000 lbmol/hr generated randomly with a uniform probability distribution function. The simulation was run for 100 hours with a calculation step size of 0.01 hours. One such feed flow rate time series plot is given in Fig.\ref{fig:feed_perturb}. This generates 50001 equally spaced (in time) data points. The phenol feed time series becomes $\bm{u(t)}$, which is equivalent to a forcing function that drives the dynamics of the system.
\begin{figure}[H]
\centering
\includegraphics[width=1\linewidth]{Perturbations.PNG}
\caption{Perturbations - Phenol Flow Rate (lbmol/hr) vs Time (hr)}
\label{fig:feed_perturb}
\end{figure}
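An input signal of this type can also be generated programmatically. The sketch below is a minimal stand-in for the Aspen Dynamics Task: the one-hour segment length and the 1000--3000 lbmol/hr amplitude range follow the text, while the sigmoid steepness of 12 and the 0.01 h sampling over 100 h are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
n_hours = 100
t = np.linspace(0.0, n_hours, 10001)  # 0.01 h step over 100 h
u = np.empty_like(t)

level = rng.uniform(1000.0, 3000.0)  # lbmol/hr
for k in range(n_hours):  # one randomly chosen segment per hour
    target = rng.uniform(1000.0, 3000.0)
    seg = (t >= k) & (t < k + 1)
    s = t[seg] - k  # local time within the hour, in [0, 1)
    kind = rng.choice(["step", "ramp", "sigmoid"])
    if kind == "step":
        u[seg] = target
    elif kind == "ramp":
        u[seg] = level + (target - level) * s
    else:  # smooth sigmoidal ramp between the two levels
        u[seg] = level + (target - level) / (1.0 + np.exp(-12.0 * (s - 0.5)))
    level = target
u[-1] = level  # assign the final sample at t = 100
print(t.size, round(u.min()), round(u.max()))
```

Every sample is a convex combination of two levels drawn from the amplitude range, so the generated signal stays within the column's valid operating window by construction.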
\noindent{\textbf{Operating Conditions :}}
In order to define the system, the following variables were fixed as operating condition parameters: reflux ratio, toluene feed rate, MCH feed rate, distillation column sizing, tray geometry, reboiler geometry and sizing, condenser geometry and sizing, reboiler duty and condenser heat transfer coefficients. These conditions play a crucial role in the operation of the selected distillation column; hence fixing these parameters allows us to identify the governing equations for the mechanisms that drive the dynamics of the flow streams. Further, in order to test the robustness of the equations extracted, the structure of the obtained equations was compared across different operating conditions obtained by altering these parameters. The different operating conditions tested are listed in Table.\ref{tab:op_cond}. This testing method is further explained in Section \ref{subsec:testing}\\
\vspace{-1.0in}
\begin{table}[H]
\caption{Different Operating Conditions Tested}
\begin{center}
\label{tab:op_cond}
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{Parameter} & \textbf{System 1}& \textbf{System 2} & \textbf{System 3} & \textbf{System 4} \\ \hline
\textbf{Reflux Ratio} & 6 & 10 & 8 & 8 \\ \hline
\textbf{Toluene Feed} & 200 & 200 & 200 & 400 \\ \hline
\textbf{MCH Feed} & 200 & 200 & 400 & 200 \\ \hline
\end{tabular}
\end{center}
\end{table}
\subsubsubsection{\textbf{States of the System}}
Studying the dynamics of a system requires following the state of the system by mapping the state to observable variables. In this case, for the machine learning algorithm to capture the complete dynamics of the system, we included the set of variables which change with the perturbations and are not fixed as operating conditions. The result will be a system of ODEs that can describe the evolution of the whole system as state space dynamics for these variables. For the system under consideration, we initially chose the following variables:
\begin{table}[H]
\begin{center}
\begin{tabular}{|p{1.3in}|p{0.58in}|p{1.1in}|}
\hline
Variables & Description & Symbol Used in ODEs \\
\hline
OVERHEAD Stream Temperature& Top T & \(TOP_{T}\)\\
\hline
OVERHEAD Stream – Phenol Flow Rate & Top Ph & ${Top}_{\text{Ph}}$\\
\hline
OVERHEAD Stream –Toluene Flow Rate & Top Tol & ${Top}_{\text{Tol}}$\\
\hline
BOTTOMS Stream - MCH Flow Rate & Bot MCH & ${Bot}_{\text{MCH}}$\\
\hline
BOTTOMS Stream – Phenol Flow Rate & Bot Ph & ${Bot}_{\text{Ph}}$ \\
\hline
BOTTOMS Stream –Toluene Flow Rate & Bot Tol & $\text{Bot}_{\text{Tol}}$\\
\hline
BOTTOMS Stream Temperature & Bot T & $\text{Bot}_{\text{T}}$\\
\hline
Condenser Duty & Q Cond & $\text{Q}_{\text{cond}}$\\
\hline
Reboiler Vapour Flow Rate & Vep Reb & $\text{Vap}_{\text{Reb}}$\\
\hline
Stage 1 (Condenser) Pressure & P1 & $\text{P}_{1}$\\
\hline
Stage 22 (Reboiler) Pressure & P22 & ${P}_{22}$ \\
\hline
\end{tabular}
\end{center}
\caption{State Space Variables for Distillation Column Dynamics}
\label{StateVariables}
\end{table}
These variables hold significance in terms of column requirements as the equations developed can later be used for obtaining a specific extent of separation, quality of product, ensure pressure in the column within safety limits or estimate energy requirements. ODEs in terms of these variables will make these use cases possible.\\
However, due to the presence of trace quantities of chemicals in streams, the equations for those chemicals produced inaccurate results. This can be attributed to the low signal-to-noise ratio (noise arising from numerical integration) for these variables. To improve the model, the total flow rate of the overhead stream was also included as a state variable. The total flow rate is a redundant variable as it can be estimated as a summation of the individual flow rates. But, since the total flow has a higher SNR, it is expected to produce better results.
\subsubsubsection{\textbf{Candidate Functions}}
The variables were first mean shifted and auto scaled before generating the candidate functions.\\
We used 360 candidate functions of the form,
\begin{align*}
f_i &=\; x_1^{a_1^{(i)}}x_2^{a_2^{(i)}}\dotsm x_{14}^{a_{14}^{(i)}} \quad \quad i = 1,2\hdots k \\
&\text{Where,} \\
&\displaystyle\Sigma_{j=1}^{14} |a_j^{(i)}| \leq 2\\
&-2 \leq a_j^{(i)} \leq 2\\
& a_j^{(i)} \in \mathbb{Z}
\end{align*}
And 70 candidate functions of the form, $\sin(x_i),\,\cos(x_i),\, ln(|x_i|),\, e^{x_i},\,\sqrt{|x_i|}\; \forall \; i = 1,2\dots14$. These functions were chosen without using any strong understanding of the system to check if the algorithm can work with very little to no domain knowledge.
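This enumeration can be reproduced with \textit{itertools}. One plausible reading of the exponent constraint, namely $\sum_{j}|a_j^{(i)}|\leq 2$, is sketched below; under this reading the count comes to 420 rational monomials rather than the stated 360, so the exact constraint used may differ slightly, while the 70 unary functions match exactly:

```python
import itertools
import numpy as np

n_vars = 14
exps = []
for j in range(n_vars):  # one active variable, exponent in {-2, -1, 1, 2}
    for a in (-2, -1, 1, 2):
        e = [0] * n_vars
        e[j] = a
        exps.append(tuple(e))
for j, k in itertools.combinations(range(n_vars), 2):  # two active, exps +-1
    for aj, ak in itertools.product((-1, 1), repeat=2):
        e = [0] * n_vars
        e[j], e[k] = aj, ak
        exps.append(tuple(e))

# dummy positive state data; real data would be the scaled column variables
X = np.abs(np.random.default_rng(1).normal(size=(100, n_vars))) + 0.1
theta_poly = np.column_stack([np.prod(X ** np.array(e), axis=1) for e in exps])
unary = (np.sin, np.cos, lambda z: np.log(np.abs(z)), np.exp,
         lambda z: np.sqrt(np.abs(z)))
theta_unary = np.column_stack([f(X) for f in unary])
print(theta_poly.shape, theta_unary.shape)  # (100, 420) and (100, 70)
```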
\subsubsection{Algorithm Execution}
The algorithm was implemented in Python 3.6.5 using the libraries \textit{pandas}, \textit{numpy}, \textit{sklearn}, \textit{scipy}, \textit{matplotlib} and \textit{itertools}. We used the numerical differentiation with total variation regularization method developed in \cite{chartrand2011numdiff} to obtain the derivatives of the variables. The data was split in the ratio 3:1:1 for training, cross validation and testing.
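The total-variation-regularized differentiation of \cite{chartrand2011numdiff} has no off-the-shelf SciPy implementation. As a simpler illustrative stand-in, a Savitzky--Golay filter also produces smoothed derivatives from noisy samples; the signal, noise level and filter settings below are arbitrary choices for demonstration:

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 0.01
t = np.arange(0.0, 10.0, dt)
x = np.exp(-t) + 0.01 * np.random.default_rng(0).normal(size=t.size)  # noisy data

# local cubic fits over a 51-sample window, differentiated analytically
xdot = savgol_filter(x, window_length=51, polyorder=3, deriv=1, delta=dt)

err = np.max(np.abs(xdot + np.exp(-t))[50:-50])  # compare with d/dt e^{-t}
print(err)  # small away from the boundaries
```

The window length and polynomial order play the same role as the regularization weight in the TV method: they trade derivative sharpness against noise amplification.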
\subsection{Model Selection Metrics}
Two methods were used to select models. These two methods differ on the final use of the model and navigate a trade-off between accuracy of prediction and interpretability of the obtained equations. The models differed in the value of the L1 norm regularization parameter. Models with low regularization parameter had a higher number of terms and higher training set accuracy than those with high regularization.
\subsubsection{Cross Validation Accuracy}
This method selects the model with the highest cross validation accuracy. For some variables, cross validation had a clear peak, as shown in Fig.\ref{fig:x8}. The accuracy is expected to initially increase with reducing regularization but later reduce due to overfitting. However, some of the selected models had too many terms, making it difficult to interpret their physical meaning. Also, some variables, as in Fig.\ref{fig:x3}, did not exhibit this clear peak characteristic and the accuracy kept increasing with smaller regularization. The implications of this observation are discussed in Section \ref{sec:results}.
\begin{figure*}[h!]
\centering
\begin{subfigure}[t]{0.49\textwidth}
\centering
\includegraphics[height=1.2in]{x8.PNG}
\caption{Clear cross-validation peak}
\label{fig:x8}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[height=1.2in]{x3.PNG}
\caption{Without a clear cross-validation peak}
\label{fig:x3}
\end{subfigure}
\caption{Cross Validation Model Selection}
\end{figure*}
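The peak-picking described above amounts to a one-dimensional sweep over the regularization parameter. A minimal illustration on synthetic data (the library size, sparse ground truth, noise level and $\alpha$ grid are arbitrary choices):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
theta = rng.normal(size=(500, 30))  # synthetic candidate library
xdot = theta[:, 0] - 2.0 * theta[:, 5] + 0.05 * rng.normal(size=500)  # sparse truth

th_tr, th_cv, y_tr, y_cv = train_test_split(theta, xdot, test_size=0.25,
                                            random_state=0)
alphas = np.logspace(-4, 0, 20)
scores = [Lasso(alpha=a, max_iter=100000).fit(th_tr, y_tr).score(th_cv, y_cv)
          for a in alphas]
best = alphas[int(np.argmax(scores))]
print(best, round(max(scores), 3))  # held-out R^2 peaks close to 1
```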
\subsubsection{Cross Validation Accuracy with Model Complexity Penalty}
To avoid selecting too many terms, and to break ties in cases without a clear cross validation peak, a selection score based on model complexity was defined. Based on the score given by Eq.\ref{eq:score}, which penalizes both the number of terms and low cross validation accuracy, the model with the lowest score was selected.
\begin{align}
\alpha k-\beta \ln\left(R^2_{CV}\right) \label{eq:score}
\end{align}
where $\alpha$ and $\beta$ are weights selected based on inspection of the trade-off graphs, $k$ denotes the number of terms in the obtained equation and $R^2_{CV}$ is the cross validation $R^2$ accuracy.
\subsection{Testing}
\label{subsec:testing}
Different testing methods were employed to quantify the goodness of the developed equations for different purposes. We looked at the accuracy of predicting $\bm{\dot{X}(t)}$ given $\bm{X(t)}$ and $\bm{u(t)}$ and the accuracy in predicting $\bm{X(t)}$ from $\bm{u(t)}$ and the initial condition, by integrating the ODEs obtained. The results of these tests along with their interpretations are available in Section \ref{sec:results} and Appendix \ref{app:result}. The methods employed were:
\begin{description}
\item [Test Data] Tests the accuracy of the developed model on the $20\%$ data selected randomly and excluded from training. This gives an idea about the overfitting and the predictive ability of the model under conditions similar to which the training data was obtained. Low success under this test could indicate overfitting.
\item [Outside Perturbation Region] This creates a new data set by changing the feed perturbation region and testing the model on this new data. This checks if the model was able to capture the complete dynamics of the system. Low accuracy under this test would indicate incompleteness of the model in terms of missing critical state variables or insufficient candidate functions.
\item [Long Time Accuracy] In this testing, the dynamical system is run for a longer time (250 hours) than the training time (100 hours) to generate test data. This will help identify long time dynamic effects or time based evolution of the system which could have been missed by the algorithm.
\item [Similar System Structural Comparison] 4 additional systems were created as mentioned in Table. \ref{tab:op_cond} by altering the operating conditions. The model was trained on these 4 systems. The structure of the equations obtained were compared across these 5 systems for similar terms (only for the presence or absence of terms and not for the similarity of regression coefficients). If the algorithm is able to extract the entire dynamics of the system, irrespective of the operating condition, the equation would contain the same terms and differ only in the parameter values.
\end{description}
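For the integration-based tests, the identified coefficients are plugged into an ODE solver and the reconstructed trajectory is compared against the data. A minimal sketch for the toy system $\dot{x}=-x+u$; the values in \texttt{xi\_hat} are hypothetical fitted coefficients made up for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

xi_hat = np.array([-0.98, 0.0, 0.99])  # hypothetical fitted coefficients
library = lambda x, u: np.array([x, x * u, u])
u_of_t = np.sin

# integrate the identified model from the known initial condition
ident = solve_ivp(lambda t, x: [library(x[0], u_of_t(t)) @ xi_hat],
                  (0.0, 10.0), [1.0], t_eval=np.linspace(0.0, 10.0, 501),
                  rtol=1e-8, atol=1e-10)
# reference trajectory from the "true" dynamics xdot = -x + u
ref = solve_ivp(lambda t, x: -x + u_of_t(t), (0.0, 10.0), [1.0],
                t_eval=ident.t, rtol=1e-8, atol=1e-10)

max_err = np.max(np.abs(ident.y[0] - ref.y[0]))
print(max_err)  # small, since the fitted coefficients are near the truth
```

This is a stricter check than pointwise derivative accuracy, since small coefficient errors can accumulate over the integration horizon.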
\section{Results and Analysis}
\label{sec:results}
\subsection{Derivative Predictions}
\label{subsec:accuracy}
The model was trained on the four systems mentioned in Table. \ref{tab:op_cond}. The developed equations were used to predict $\bm{\dot{X}(t)}$ from $\bm{X(t)}$ and the input for the test data. These results along with the sparsity of the models given by the number of terms N are given in Table. \ref{tab:test_accuracy1} and \ref{tab:test_accuracy2}. The training and testing were done for 2 values of $\alpha$ corresponding to low regularization and high regularization.
\begin{longtable}[c]{|l|
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |r|r|r|r|r|r|}
\caption{Training and Test $R^2$ values for the 4 systems}
\label{tab:test_accuracy1}\\
\hline
\multicolumn{1}{|c|}{} & \multicolumn{6}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Basic}} & \multicolumn{6}{c|}{\textbf{MCH400}} \\ \cline{2-13}
\multicolumn{1}{|c|}{} & \multicolumn{3}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{\begin{tabular}[c]{@{}c@{}}Low\\ Regularization\end{tabular}}} & \multicolumn{3}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{High Regularization}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Low\\ Regularization\end{tabular}}} & \multicolumn{3}{c|}{\textbf{High Regularization}} \\ \cline{2-13}
\multicolumn{1}{|c|}{\multirow{-3}{*}{\textbf{Variable}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Train}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Test}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{N}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Train}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Test}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{N}} & \multicolumn{1}{c|}{\textbf{Train}} & \multicolumn{1}{c|}{\textbf{Test}} & \multicolumn{1}{c|}{\textbf{N}} & \multicolumn{1}{c|}{\textbf{Train}} & \multicolumn{1}{c|}{\textbf{Test}} & \multicolumn{1}{c|}{\textbf{N}} \\ \hline
\endhead
\textbf{Top F} & {\color[HTML]{333333} 0.99} & 0.986 & 25 & 0.979 & 0.979 & 14 & 0.959 & 0.942 & 39 & 0.93 & 0.897 & 19 \\ \hline
\textbf{Top T} & 0.99 & 0.985 & 27 & 0.978 & 0.977 & 15 & 0.965 & 0.943 & 36 & 0.933 & 0.894 & 18 \\ \hline
\textbf{Top MCH} & 0.99 & 0.986 & 28 & 0.979 & 0.979 & 14 & 0.96 & 0.943 & 39 & 0.93 & 0.898 & 19 \\ \hline
\textbf{Top Ph} & 0.282 & 0.315 & 6 & 0.282 & 0.315 & 6 & 0.435 & 0.389 & 50 & 0.377 & 0.358 & 40 \\ \hline
\textbf{Top Tol} & 0.962 & 0.974 & 25 & 0.959 & 0.969 & 18 & 0.954 & 0.924 & 41 & 0.866 & 0.712 & 9 \\ \hline
\textbf{Bot T} & 0.972 & 0.966 & 50 & 0.854 & 0.835 & 19 & 0.875 & 0.774 & 60 & 0.748 & 0.669 & 23 \\ \hline
\textbf{Bot MCH} & 0.975 & 0.977 & 45 & 0.827 & 0.815 & 22 & 0.856 & 0.772 & 63 & 0.698 & 0.58 & 22 \\ \hline
\textbf{Bot Ph} & 0.904 & 0.871 & 57 & 0.718 & 0.632 & 23 & 0.795 & 0.755 & 78 & 0.544 & 0.495 & 23 \\ \hline
\textbf{Bot Tol} & 0.873 & 0.722 & 50 & 0.723 & 0.5 & 15 & 0.77 & 0.769 & 63 & 0.58 & 0.53 & 21 \\ \hline
\textbf{Cond Q} & 0.972 & 0.956 & 36 & 0.958 & 0.948 & 14 & 0.955 & 0.926 & 50 & 0.909 & 0.858 & 20 \\ \hline
\textbf{Vap Reb} & 0.914 & 0.866 & 37 & 0.844 & 0.783 & 20 & 0.861 & 0.822 & 64 & 0.686 & 0.651 & 20 \\ \hline
\textbf{P1} & 0.976 & 0.96 & 31 & 0.963 & 0.939 & 14 & 0.952 & 0.908 & 32 & 0.927 & 0.862 & 19 \\ \hline
\textbf{P22} & 0.965 & 0.948 & 30 & 0.947 & 0.923 & 16 & 0.932 & 0.902 & 52 & 0.873 & 0.812 & 18 \\ \hline
\end{longtable}
\begin{longtable}[c]{|l|
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |r|r|r|r|r|r|}
\caption{Training and test $R^2$ values for the T400 and RR6 systems}
\label{tab:test_accuracy2}\\
\hline
\multicolumn{1}{|c|}{} & \multicolumn{6}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{T400}} & \multicolumn{6}{c|}{\textbf{RR6}} \\ \cline{2-13}
\multicolumn{1}{|c|}{} & \multicolumn{3}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{\begin{tabular}[c]{@{}c@{}}Low\\ Regularization\end{tabular}}} & \multicolumn{3}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{High Regularization}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Low\\ Regularization\end{tabular}}} & \multicolumn{3}{c|}{\textbf{High Regularization}} \\ \cline{2-13}
\multicolumn{1}{|c|}{\multirow{-3}{*}{\textbf{Variable}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Train}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Test}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{N}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Train}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Test}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{N}} & \multicolumn{1}{c|}{\textbf{Train}} & \multicolumn{1}{c|}{\textbf{Test}} & \multicolumn{1}{c|}{\textbf{N}} & \multicolumn{1}{c|}{\textbf{Train}} & \multicolumn{1}{c|}{\textbf{Test}} & \multicolumn{1}{c|}{\textbf{N}} \\ \hline
\endhead
\textbf{Top F} & 0.983 & 0.957 & 53 & 0.944 & 0.921 & 21 & 0.989 & 0.963 & 37 & 0.97 & 0.955 & 16 \\ \hline
\textbf{Top T} & 0.985 & 0.981 & 52 & 0.951 & 0.951 & 19 & 0.989 & 0.966 & 40 & 0.964 & 0.949 & 16 \\ \hline
\textbf{Top MCH} & 0.974 & 0.963 & 39 & 0.942 & 0.926 & 20 & 0.989 & 0.963 & 36 & 0.97 & 0.955 & 16 \\ \hline
\textbf{Top Ph} & 0.624 & 0.605 & 34 & 0.625 & 0.605 & 34 & 0.562 & 0.565 & 43 & 0.511 & 0.554 & 30 \\ \hline
\textbf{Top Tol} & 0.97 & 0.609 & 33 & 0.933 & 0.704 & 20 & 0.892 & 0.826 & 8 & 0.892 & 0.826 & 8 \\ \hline
\textbf{Bot T} & 0.878 & 0.734 & 70 & 0.733 & 0.696 & 27 & 0.98 & 0.889 & 41 & 0.892 & 0.491 & 18 \\ \hline
\textbf{Bot MCH} & 0.832 & 0.7 & 67 & 0.649 & 0.49 & 24 & 0.863 & 0.794 & 48 & 0.759 & 0.66 & 31 \\ \hline
\textbf{Bot Ph} & 0.792 & 0.493 & 87 & 0.792 & 0.493 & 87 & 0.832 & 0.762 & 48 & 0.703 & 0.504 & 23 \\ \hline
\textbf{Bot Tol} & 0.803 & 0.389 & 88 & 0.803 & 0.389 & 88 & 0.952 & 0.866 & 30 & 0.923 & 0.89 & 14 \\ \hline
\textbf{Cond Q} & 0.969 & 0.964 & 50 & 0.935 & 0.93 & 18 & 0.966 & 0.926 & 50 & 0.93 & 0.904 & 15 \\ \hline
\textbf{Vap Reb} & 0.889 & 0.875 & 67 & 0.758 & 0.864 & 38 & 0.91 & 0.843 & 51 & 0.822 & 0.677 & 21 \\ \hline
\textbf{P1} & 0.976 & 0.934 & 53 & 0.939 & 0.901 & 25 & 0.982 & 0.97 & 43 & 0.968 & 0.946 & 20 \\ \hline
\textbf{P22} & 0.958 & 0.935 & 51 & 0.91 & 0.908 & 21 & 0.977 & 0.948 & 38 & 0.958 & 0.906 & 21 \\ \hline
\end{longtable}
We find that the model is able to predict $\bm{\dot{X}(t)}$ with reasonable accuracy from $\bm{X(t)}$ and $u(t)$. Reducing the regularization increases the accuracy on the test data. This trend holds across variables and persists down to very small values of the regularization parameter, which indicates that we are unable to capture enough information from the data using the provided candidate functions and number of terms. This could point either to insufficient candidate functions and state variables, or to the absence of a low-dimensional function-space representation for the system. Ways to analyze and possibly overcome this are discussed in Section~\ref{sec:discuss}.\\~\\
System 1 was also tested on two other simulations (one run for a longer time and the other outside the training perturbation region), as explained in Section~\ref{subsec:testing}. The results for these two tests are in Table~\ref{tab:long_out}. Sample plots of predicted vs.\ true $\bm{\dot{X}(t)}$ as a function of $t$ for these tests are shown in Figs.~\ref{fig:dydx_vs_t_long} and \ref{fig:dydx_vs_t_out}.
\begin{longtable}[c]{|l|
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |r|r|}
\caption{Test $R^2$ values for the long-time run and for testing outside the training perturbation region}
\label{tab:long_out}\\
\hline
\multicolumn{1}{|c|}{} & \multicolumn{2}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Long Time}} & \multicolumn{2}{c|}{\textbf{Outside Training}} \\ \cline{2-5}
\multicolumn{1}{|c|}{\multirow{-2}{*}{\textbf{Variable}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Low $\alpha$}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{High $\alpha$}} & \multicolumn{1}{c|}{\textbf{Low $\alpha$}} & \multicolumn{1}{c|}{\textbf{High $\alpha$}} \\ \hline
\endfirsthead
\endhead
\textbf{Top F} & 0.948 & 0.945 & 0.778 & 0.803 \\ \hline
\textbf{Top T} & 0.953 & 0.944 & 0.704 & 0.817 \\ \hline
\textbf{Top MCH} & 0.951 & 0.946 & 0.819 & 0.799 \\ \hline
\textbf{Top Ph} & 0.198 & 0.198 & -0.588 & -0.588 \\ \hline
\textbf{Top Tol} & 0.516 & 0.523 & 0 & 0.19 \\ \hline
\textbf{Bot T} & 0.95 & 0.813 & -4.94 & -0.477 \\ \hline
\textbf{Bot MCH} & 0.931 & 0.76 & 0 & 0.13 \\ \hline
\textbf{Bot Ph} & 0.786 & 0.585 & -9.034 & -9.034 \\ \hline
\textbf{Bot Tol} & 0.75 & 0.514 & -30.744 & -30.744 \\ \hline
\textbf{Cond Q} & 0.949 & 0.922 & 0.626 & 0.789 \\ \hline
\textbf{Vap Reb} & 0.852 & 0.78 & -0.498 & -1.424 \\ \hline
\textbf{P1} & 0.868 & 0.86 & 0.24 & 0.783 \\ \hline
\textbf{P22} & 0.851 & 0.877 & 0 & 0.726 \\ \hline
\end{longtable}
We see that the model performs very well on the long-time data. This confirms that the evolution of the system with time (if present) has been captured; if it had not been, the model performance would have deteriorated over the longer test.
\begin{figure}[H]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{P1vsT_Long.PNG}
\caption{Condenser Pressure Derivative vs Time - Good Predictions}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{QcondvsT_Long.PNG}
\caption{Reboiler Duty Derivative vs Time - Good Predictions}
\end{subfigure}
\caption{Long Time Data Set}
\label{fig:dydx_vs_t_long}
\end{figure}
However, the performance is subpar for most of the states in the region outside the training perturbation. Moreover, with higher regularization the model marginally improves, in contrast to all the previous observations, where the model kept getting better on the test set with decreasing regularization. This indicates that the available variables and candidate functions are overfitting not the training data but the state of the system in the training region. This can be resolved by including new state variables that make the resulting model invariant to the training perturbation region.
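The negative $R^2$ values in Table~\ref{tab:long_out} can be reproduced qualitatively with a toy extrapolation experiment: fit a model on one input range and score it both inside and outside that range (the function and ranges below are purely illustrative, not the paper's system):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination; goes negative when predictions are
    worse than simply predicting the mean of the reference data."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
true_f = np.sin  # stand-in for the "true" dynamics

# Fit a cubic polynomial only on the training perturbation region [0, 2].
x_train = rng.uniform(0.0, 2.0, 200)
coeffs = np.polyfit(x_train, true_f(x_train), deg=3)

x_in = rng.uniform(0.0, 2.0, 200)    # test inside the training region
x_out = rng.uniform(4.0, 6.0, 200)   # test outside it

r2_in = r2_score(true_f(x_in), np.polyval(coeffs, x_in))
r2_out = r2_score(true_f(x_out), np.polyval(coeffs, x_out))
```

Inside the training range the cubic tracks the function almost perfectly, while the extrapolated predictions are far worse than the mean, driving $R^2$ below zero — the same signature seen for the bottom-stream variables above.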
\begin{figure}[H]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{Top_MCH_vs_T_Out.PNG}
\caption{Outside Training Perturbation - Good Predictions}
\label{fig:out_good}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.8\linewidth]{Bot_MCH_vs_T_Out.PNG}
\caption{Outside Training Perturbation - Poor Predictions}
\label{img:out_bad}
\end{subfigure}
\caption{Outside Training Perturbation Data Set}
\label{fig:dydx_vs_t_out}
\end{figure}
In Fig.~\ref{img:out_bad} we can see a variable for which the algorithm performs poorly. The model predicts peaks in regions of steady operation. This could be because a limiting variable that dictates the dynamics in this region has been missed: the system does not exhibit significant dynamics here, while the model predicts some. In Fig.~\ref{fig:out_good}, by contrast, the prediction closely follows the true values. By comparing these two equations for the same component in different outlet streams, we can try to understand which states might be missing, and by incorporating them we can iteratively improve the model.
\subsection{ODE Structure Comparison}
The ODEs obtained for the 4 systems were compared with each other for similarity in the terms selected. The number of such shared terms for the two levels of regularization, along with the total number of terms, is provided in Appendix \ref{app:ode_common}, which also lists the terms with the most repetitions across the 4 systems tested. These results can be interpreted as a dynamic equivalent of sensitivity analysis in steady-state systems. If the complete dynamics had been captured, most of the terms in the ODEs would have been repeated across the systems. However, this was not observed: barely $10\%$ of the total terms were common across the 3 systems in which the feed compositions were altered. This could mean that we have not completely described our system with the current set of states, and that we need to look for variables crucial to the dynamics by performing sensitivity analyses on the operating conditions as well.\\
However, by reducing regularization we notice that the fraction of terms retained across the systems either increases or remains the same in most cases. This indicates that the additional candidate functions selected are able to explain the model better, even if only by a small increment. This result correlates with the prediction accuracy discussed in Section~\ref{subsec:accuracy}, which kept improving with smaller regularization. We also find the same terms repeating across all 4 systems more commonly. The system with a different reflux ratio (the only column specification varied) had no common terms with the other systems under high regularization, but an increasing number of common terms under low regularization. This could further indicate that the system might not be truly sparse in function space, highlighting the possible limitations of using SINDy to identify the complex dynamics of unknown systems without some knowledge of the function space that may govern them.
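Counting shared terms between two identified ODEs reduces to intersecting the supports of their sparse coefficient vectors; a minimal sketch with hypothetical coefficients:

```python
import numpy as np

def support(xi, tol=1e-10):
    """Indices of the candidate functions retained in a sparse coefficient vector."""
    return {i for i, c in enumerate(xi) if abs(c) > tol}

def common_terms(xi_a, xi_b):
    """Number of library terms shared by two identified ODEs, plus each total."""
    sa, sb = support(xi_a), support(xi_b)
    return len(sa & sb), len(sa), len(sb)

# Hypothetical coefficient vectors for the same state variable in two systems.
xi_sys1 = np.array([0.0, 1.2, 0.0, -0.4, 0.0, 0.7])
xi_sys2 = np.array([0.3, 1.1, 0.0, 0.0, 0.0, 0.5])

n_common, n1, n2 = common_terms(xi_sys1, xi_sys2)  # 2 shared of 3 and 3
```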
A similar analysis was carried out between the training set and the test set with phenol feed outside the training perturbation region. The results of this analysis are listed in Table~\ref{tab:ode_comp_out}.
\begin{longtable}[c]{|l|
>{\columncolor[HTML]{EFEFEF}}r |
>{\columncolor[HTML]{EFEFEF}}r |r|r|}
\caption{Structural Similarity of ODEs}
\label{tab:ode_comp_out}\\
\hline
\multicolumn{1}{|c|}{} & \multicolumn{2}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Low $\alpha$}} & \multicolumn{2}{c|}{\textbf{High $\alpha$}} \\ \cline{2-5}
\multicolumn{1}{|c|}{\multirow{-2}{*}{\textbf{Variable}}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Common}} & \multicolumn{1}{c|}{\cellcolor[HTML]{EFEFEF}\textbf{Total}} & \multicolumn{1}{c|}{\textbf{Common}} & \multicolumn{1}{c|}{\textbf{Total}} \\ \hline
\endhead
\textbf{Top F} & 5 & 23 & 5 & 14 \\ \hline
\textbf{Top T} & 4 & 24 & 1 & 3 \\ \hline
\textbf{Top MCH} & 7 & 28 & 4 & 14 \\ \hline
\textbf{Top Ph} & 1 & 6 & 1 & 6 \\ \hline
\textbf{Top Tol} & 13 & 28 & 5 & 18 \\ \hline
\textbf{Bot T} & 18 & 39 & 5 & 16 \\ \hline
\textbf{Bot MCH} & 21 & 39 & 3 & 18 \\ \hline
\textbf{Bot Ph} & 19 & 35 & 6 & 13 \\ \hline
\textbf{Bot Tol} & 21 & 37 & 4 & 12 \\ \hline
\textbf{Cond Q} & 13 & 36 & 3 & 14 \\ \hline
\textbf{Vap Reb} & 17 & 35 & 5 & 17 \\ \hline
\textbf{P1} & 5 & 30 & 3 & 9 \\ \hline
\textbf{P22} & 17 & 34 & 3 & 11 \\ \hline
\end{longtable}
We see that lowering regularization decreases the fraction of common terms in most of the cases, whereas for the other systems tested inside the perturbation region this was not the case. Together with the interpretation of Table~\ref{tab:long_out}, this further confirms that some crucial state variables or functional forms are being missed. Even though the prediction accuracy is high, if the true mechanisms were captured for these complex systems, the same terms should appear in the governing equations. If the aim of the work is to obtain simpler equations that capture the non-linear dynamics, the algorithm performs well; but in order to understand the true physical mechanisms, the SINDy algorithm perhaps needs to be provided with functional forms determined by domain experts. In the reaction kinetics identification of Ref.~\cite{hoffmann2019}, for instance, the authors provided functional forms determined by the ``law of mass action'', a known physical law that drives rate kinetics and mechanisms. To improve the identification of the distillation column differential equations, analogous knowledge about the relationships between top and bottom flows, temperature and pressure needs to be used to construct appropriate candidate functions. This is challenging for the distillation column because several heuristic equations are used in the design of the separation system. Our future work will address the need to convert these complex design equations, which govern the non-equilibrium system in a distillation column, into appropriate functional forms for extracting the governing differential equations.
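One way to act on this suggestion is to build the candidate library directly from domain-motivated functional forms — second-order concentration terms, concentration ratios, and concentration-to-pressure ratios — rather than from generic polynomials alone. A hypothetical sketch (the specific feature choices here are ours, for illustration, not the paper's library):

```python
import numpy as np

def domain_library(conc_mch, conc_tol, pressure):
    """Assemble a small candidate library from domain-motivated forms:
    second-order concentration terms (diffusion-like), concentration ratios,
    and concentration/pressure ratios (Henry's-law-like)."""
    eps = 1e-12  # guard against division by zero in the ratio features
    features = {
        "MCH^2": conc_mch ** 2,
        "MCH*Tol": conc_mch * conc_tol,
        "MCH/Tol": conc_mch / (conc_tol + eps),
        "Tol/P": conc_tol / (pressure + eps),
    }
    names = list(features)
    theta = np.column_stack([features[n] for n in names])
    return theta, names

# Three illustrative samples of two concentrations and one plate pressure.
c_mch = np.array([0.2, 0.4, 0.6])
c_tol = np.array([0.5, 0.3, 0.1])
p = np.array([1.0, 1.1, 1.2])
theta, names = domain_library(c_mch, c_tol, p)
```

The resulting matrix `theta` can be fed to the same sparse solver, so that any retained term is interpretable by construction.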
\subsection{Structure of ODEs and Physical Interpretations}
The structure of the ODEs obtained for the basic system under high regularization is available in Appendix \ref{app:ode}. While these ODEs are very complex to interpret, it is still a win to represent the dynamics of this system using one equation per state variable, as compared to the over 1000 coupled equations that otherwise relate the dynamics of the system. However, most of the terms had no direct physical interpretation. Some, such as $\sin(Top_{Tol})$, the sine of the toluene concentration in the top flow, are not physically interpretable at all. Among the commonly recurring terms that we found physically relevant were $Conc^{2}$ terms, meaning that second-order terms in concentration were found relevant for the dynamics; the only feasible interpretation is that diffusion of two components, or cross diffusion, is driving the dynamics, possibly through Fick's law of diffusion acting on both components involved. For example, one term in the Appendix is $Bot_{MCH}/Bot_{Tol}$, the ratio of the MCH and toluene concentrations in the bottom flow. The appearance of this term in the equation driving MCH in the top stream suggests some relationship involving the difference in diffusivities of MCH and toluene in the extracting component phenol. While the form of the equation is surprising, because Fick's law actually involves concentration gradients rather than concentrations themselves, the appearance of this term gives some hope that these data-driven approaches can learn about the mechanisms governing the dynamics of complex systems. Another term we relate to a physical law is the ratio of a concentration to a pressure, such as $Bot_{Tol}/P_{22}$, the concentration of toluene in the bottom flow divided by the pressure on the last plate. We relate this term to Henry's law, which connects the concentration of a solute in the liquid phase to its partial pressure in the gas phase.
This term probably reflects the fact that the dynamics of the toluene concentration in the bottom stream is related to the pressure on the plates where the component may exist in the vapor phase. The gas--liquid mass transfer in these systems is interconnected and complex, hence it is difficult to pinpoint a single mechanism driving the dynamics. However, it is encouraging to see functional forms that may be related to physical laws being picked up in these equations. In order to identify the laws themselves, more refined rules for applying machine learning to complex engineered systems must be developed.
\section{Discussions}
\label{sec:discuss}
In this work our goal was to apply a machine learning approach to data generated from a mechanistic model of a distillation column, to test the hypothesis that the governing physical laws for the dynamics of the system can be identified. We chose the Sparse Identification of Non-Linear Dynamics (SINDy) approach because of its ability to give white-box models and allow interpretation of the terms that drive the dynamics. We tested both the accuracy and the changes in the structure of the equations obtained under different design considerations of the system. We picked a distillation column because of its ubiquitous role in chemical engineering, from petrochemical industries to biomass refining. The predictions on test data generated from the mechanistic models were very encouraging, with most variables showing more than 80\% accuracy. Outside the perturbation range, the equations did not perform well, which may be due to a change in dynamic regime: if the training data captured only a particular dynamic regime, the model cannot capture the dynamics in a different one. This remains an unresolved question from a mechanistic perspective, since if the DEs captured truly physical mechanisms they should also provide insight into an impending regime change. From the perspective of physically interpreting the equations obtained, it was encouraging to see terms such as $Concentration^{2}$ and ratios of concentration to pressure. The former can be related to Fick's law of diffusion for two components in the column, whereas the latter can be related to Henry's law, which controls the solubility of the components in the mixture through the pressure at different plates in the column. One interesting finding from extracting these DEs is the simplified relationship obtained between the component flow rates in the top stream and those in the bottom stream, together with the pressure of the last plate.
In an actual distillation column design, a mass balance equation is solved for each plate, which finally relates the component concentrations in the top stream to those in the bottom stream; here, a single simplified equation captures the whole dynamics. We think this is the strength of the machine learning approach. Given that the prediction accuracy holds within certain time horizons, a moving time window to train the model would be more appropriate. We expect that this can be used in better control-system design, because the method captures the non-linear dynamics much better, so the need for linearization prevalent in traditional control design may be relaxed. In the end, novel machine learning advances are opening up new ways of looking at complex engineered systems where the traditional first-principles method of extracting governing equations may fail. However, there is still a long way to go: greater cross-communication between engineering and data science will be required to achieve breakthroughs beyond the current limitations of engineering dynamical studies using machine learning, and both fields must inform each other to overcome the limitations of the algorithms as well.
\bibliographystyle{unsrtnat}
\section{Introduction}
Direct photons produced in Pb+Pb collisions can be divided into prompt
photons produced in hard processes in the initial collision, and
non-prompt photons produced by jet fragmentation, in-medium gluon
conversion and medium-induced bremsstrahlung. Prompt processes such as
\mbox{$q+g \rightarrow q+\gamma$} and \mbox{$q+\bar{q} \rightarrow
g+\gamma$} lead to final states with a high $p_{\rm T}$ parton (gluon
or quark) balanced by a prompt photon with roughly comparable $p_{\rm
T}$~\cite{incnll}. They thus provide {\em a calibrated parton} inside
of the medium, allowing a direct, quantitative measurement of the
energy loss of partons in the medium and of the medium response.
ATLAS has a unique capability to study such processes because of the
large-acceptance calorimeter with longitudinal and fine-transverse
segmentation~\cite{ATLAScal}. In particular the first main layer of
the calorimeter is read out in narrow transverse strips. This
segmentation allows us to purify our sample of $\gamma$-jet events by
rejecting jet-jet background. It further allows us to identify photons
which are near or even inside of a jet, where isolation cuts cannot be
used. This provides access to non-prompt photons from jet
fragmentation, from in-medium gluon conversion and from the
medium-induced bremsstrahlung.
\section{Technique}
The design of the ATLAS electromagnetic calorimeter is optimal for
direct photon identification. The first layer of the electromagnetic
calorimeter, which covers the full azimuth and $|\eta|<2.4$, has very
fine segmentation along the $\eta$ direction (ranging from 0.003 to
0.006 units). This layer provides detailed information on the shower
shape, which allows a direct separation of $\gamma$'s, $\pi^0$'s, and
$\eta$'s on a particle-by-particle level. Deposited strip energy
distributions as a function of eta relative to the cluster centroid
for a typical single $\gamma$, single $\pi^0$, and single $\eta$ meson
are shown in the upper panels of Fig.~\ref{fig:strip6}.
Characteristically different shower profiles are seen. The energy of a
single photon is concentrated across a few strips, with a single
maximum in the center, while the showers for $\pi^0\rightarrow
\gamma\gamma$ and $\eta\rightarrow \gamma\gamma$ are distributed
across more strips, often with two or more peaks. The broad shower
profile for $\pi^0$ and $\eta$ reflects the overlap of showers for two
or more decay photons. Even when the two peaks are not resolved, the
multi-photon showers are measurably broader on a statistical basis.
The lower panels of Fig.~\ref{fig:strip6} show the strip layer energy
distributions surrounding the direction of single particles embedded
in central Pb+Pb events. The $\gamma$, $\pi^0$ and $\eta$ in these
panels are the same ones used in the upper panels. Despite the large
background of low-energy particles produced in Pb+Pb
events~($dN_{ch}/d\eta=2650$ in this case), the shower shape for the
embedded particle is almost unchanged by the background. Thus the
strip layer allows the rejection of $\pi^0$ and $\eta$ clusters over a
very broad energy range, and the performance for the background
rejection and identification efficiency should not depend strongly on
the event centrality.
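The actual ATLAS shower-shape variables are more elaborate than this, but the two handles described above — shower width and double-peak structure in the strip layer — can be illustrated with a toy discriminant (the strip profiles below are invented, not simulated):

```python
import numpy as np

def shower_width(strip_e, strip_eta):
    """Energy-weighted RMS width of a cluster across the strip layer."""
    w = strip_e / strip_e.sum()
    mean = np.sum(w * strip_eta)
    return np.sqrt(np.sum(w * (strip_eta - mean) ** 2))

def n_local_maxima(strip_e, min_frac=0.1):
    """Count significant local maxima (two peaks hint at pi0 -> gamma gamma)."""
    thresh = min_frac * strip_e.max()
    return sum(
        1
        for i in range(1, len(strip_e) - 1)
        if strip_e[i] > thresh
        and strip_e[i] >= strip_e[i - 1]
        and strip_e[i] > strip_e[i + 1]
    )

eta = np.arange(9) * 0.003  # 9 strips of 0.003 in eta (illustrative)
photon = np.array([0.5, 1, 8, 40, 8, 1, 0.5, 0.2, 0.1])  # single narrow peak
pi0 = np.array([0.5, 6, 25, 7, 5, 22, 8, 1, 0.5])        # two merged showers

wide = shower_width(pi0, eta) > shower_width(photon, eta)
peaks_photon = n_local_maxima(photon)
peaks_pi0 = n_local_maxima(pi0)
```

Cutting on the width and on the number of maxima is, in spirit, what the loose and tight cut sets below do.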
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.31\textwidth]{Gamma_prelim.pdf}
\includegraphics[width=0.31\textwidth]{Piz_prelim.pdf}
\includegraphics[width=0.31\textwidth]{Eta_prelim.pdf}
\includegraphics[width=0.31\textwidth]{GamHIJb2_prelim.pdf}
\includegraphics[width=0.31\textwidth]{PizHIJb2_prelim.pdf}
\includegraphics[width=0.31\textwidth]{EtaHIJb2_prelim.pdf}
\caption[]{\label{fig:strip6} The energy deposition in the
strip layers around the direction of (upper left) a single photon,
(upper middle) a single $\pi^0$ and (upper right) a single $\eta$ as
well as for (lower panels) the identical particles embedded in a
central ($b=2$~fm, $dN_{ch}/d\eta=2650$) Pb+Pb event. Reconstructed $E_{\rm T}$
values are indicated.}
\end{center}
\end{figure}
\section{Results}
To distinguish direct photons from neutral hadrons, cuts have been
developed based on the shower shape in the strip layer. These cuts
reject those showers that are anomalously wide or exhibit a double
peak around the maximum. In general, better rejection can be achieved
using a tighter cut, but at the expense of reduced efficiency. The
performance has been quantified via photon efficiency
($\epsilon_{\gamma}$) and relative rejection ($R_{\rm rel} \equiv
\epsilon_{\gamma}/\epsilon_{\rm hadron}$). The relative rejection
basically reflects the gain in the signal (direct photon yield)
relative to background (neutral hadron yield).
In this analysis, two sets of cuts have been developed, a ``loose''
cut set and a ``tight'' cut set. The performance for these two sets is
summarized in Fig.~\ref{fig:bothcuts}. The loose cuts (upper panels)
yield a factor of 1.3--3 relative rejection with a photon efficiency
of about 90\%; the tight cuts (lower panels) yield a factor of 2.5--5
relative rejection with an efficiency of about 50\%.
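The two figures of merit used here follow directly from pass counts; the counts below are illustrative numbers chosen to roughly match the quoted loose-cut performance (about 90\% photon efficiency and a relative rejection near 2):

```python
def cut_performance(n_gamma_pass, n_gamma_total, n_had_pass, n_had_total):
    """Photon efficiency and relative rejection R_rel = eff_gamma / eff_hadron."""
    eff_gamma = n_gamma_pass / n_gamma_total
    eff_had = n_had_pass / n_had_total
    return eff_gamma, eff_gamma / eff_had

# Illustrative: 900 of 1000 photons pass, 450 of 1000 neutral hadrons pass.
eff, r_rel = cut_performance(900, 1000, 450, 1000)
```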
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.75\linewidth]{striploose_prelim.pdf}
\includegraphics[width=0.75\linewidth]{striptight_prelim.pdf}
\caption[] {\label{fig:bothcuts} (upper panels) Photon identification
efficiency and relative rejection factor (averaged over $|\eta|<2.4$)
for neutral hadrons for the loose cut set for single particles (open
circles) and central ($b=2$~fm, $dN_{ch}/d\eta=2650$) Pb+Pb collisions
(filled triangles). (lower panels) As above but for the tight cut set.
Note the change in scale between the upper and lower right-hand
panels.}
\end{center}
\end{figure}
In addition to the photon identification cuts, isolation cuts have
been developed which, on their own, provide relative rejection factors
of 7--10 for \mbox{$E_{\rm T}>50$~GeV}. These isolation cuts cannot be
used to study non-isolated photons, but in the case of $\gamma$-jet,
they can be combined with the photon identification cuts to
significantly reduce the background from jet-jet
events. Figure~\ref{fig:rejsncent} shows the signal-to-background
ratio after applying the loose shower shape cuts, the isolation cuts,
and the combined cuts. The signal-to-background ratio is best in
p+p collisions, being about a factor of 4--5 larger than that for
the most central Pb+Pb events. However, taking into account the benefit
gained from the likely hadron suppression~($R_{AA}=0.2$), we expect
to achieve a similar level of performance, approximately
independent of the event centrality.
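The expected benefit of hadron suppression is a simple rescaling: if the neutral-hadron yield is suppressed by $R_{AA}=0.2$, the background drops by a factor of five and the signal-to-background ratio rises accordingly (the starting value below is illustrative):

```python
def signal_to_background(sb_no_suppression, r_aa):
    """Rescale a photon/hadron signal-to-background ratio by hadron suppression:
    a hadron yield suppressed by R_AA reduces the background by 1/R_AA."""
    return sb_no_suppression / r_aa

# Hypothetical central Pb+Pb S/B of 4 after combined cuts, with R_AA = 0.2:
sb_central = signal_to_background(4.0, 0.2)
```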
\begin{figure}[h!t]
\begin{center}
\includegraphics[width=1.0\linewidth]{rejsncent_prelim.pdf}
\caption[]
{\label{fig:rejsncent} The ratio of direct photons over background
neutral hadrons passing the loose shower shape cuts only (solid
squares), isolation cuts only (open circles) and combined cuts (solid
circles) for different occupancies under the assumption that there is
no hadron suppression for any centrality.}
\end{center}
\end{figure}
The left-hand panel of Fig.~\ref{fig:phospect} shows the performance
for reconstructing the direct photon spectrum for a central Pb+Pb data
sample, indicating that the spectrum can be measured out to at least
200~GeV at the expected luminosity per LHC Pb+Pb year (0.5 nb$^{-1}
\times $50\%). The right-hand panel shows the $\gamma$-jet correlation
for 60--80~GeV photons and jets in central Pb+Pb collisions (without
jet quenching or modification). For more details on the jet
reconstruction, see Ref.~\cite{jets}.
\begin{figure}[h!t]
\begin{center}
\includegraphics[width=0.49\linewidth]{photonspectrum_prelim.pdf}
\includegraphics[width=0.50\linewidth]
{GammaJet_Gamma_60-80_Jet_60-80_Overlay_Prelim.pdf}
\caption[]
{\label{fig:phospect} (left panel) A simulated photon spectrum is
shown along with expected statistical error bars after background
subtraction for a central 10\% Pb+Pb sample with $dN_{ch}/d\eta=2650$
from a nominal Pb+Pb run. (right panel) Correlations in $\Delta\phi$
for $\gamma$-jet pairs embedded in central Pb+Pb events, where both the
photon and jet have an $E_{\rm T}$ of 60--80~GeV. Filled circles refer to
jets passing a tighter jet quality cut than those represented by the
open circles.}
\end{center}
\end{figure}
\section{Conclusions}
This writeup has presented the ATLAS performance for direct photon
identification. The first layer of the ATLAS electromagnetic
calorimeter provides an unbiased relative rejection factor of either
1.3--3 (loose shower shape cuts) or 2.5--5 (tight shower shape cuts)
for neutral hadrons. The loose $\gamma$ identification cuts can be
combined with isolation cuts, resulting in a total relative rejection
of about 20, even in central Pb+Pb collisions, providing a relatively
pure sample of calibrated partons interacting with the medium. The
expected luminosity per LHC Pb+Pb year (0.5 nb$^{-1} \times $50\%)
will provide 200k photons above 30~GeV, and 10k above 70~GeV per LHC
year.
The tight shower shape cuts alone provide sufficient rejection against
hadron decays within jets to allow the study of fragmentation photons,
in-medium gluon conversion and medium-induced bremsstrahlung. This
capability combined with a large acceptance is unique to ATLAS.
\section{Introduction}
\label{intro}
Modern theoretical cosmology includes an early period of accelerated expansion
named {\em inflation\/}~\cite{inflation}, whose driving force is commonly modeled
using a scalar field (the {\em inflaton\/}) of uncertain nature.
A similarly accelerated phase is undergoing now~\cite{supernovae} and has led
to conceive the existence of an equally unspecified {\em dark energy\/} component
in the matter content of the Universe \cite{DarkEnergy}.
\par
An alternative scenario has been recently proposed in
Refs.~\cite{Easson:2010av,Easson:2010xf}, based on the idea of entropic gravity
introduced in Ref.~\cite{verlinde}.
In this context, the equations governing the time-evolution of the cosmic scale
factor contain terms proportional to the Hubble function squared $H^2$ and
its time derivative $\dot H$ originating at the boundary of spatial sections of
our universe.
According to the authors of Refs.~\cite{Easson:2010av,Easson:2010xf},
such terms could explain the acceleration occurring both in the early stages
and at present.
Boundary terms, whose nature is well-known in
General Relativity~\cite{Carroll:1997ar},
have indeed been analyzed in various contexts,
for example in Refs.~\cite{otherPaper}.
\par
In this work, we will not analyze how these terms emerge from an
action principle, nor if a unique Lagrangian can be defined at all.
We shall instead assume general modifications of the form considered in
Refs.~\cite{Easson:2010av,Easson:2010xf} and then try to constrain
their possible effectiveness by comparing the corresponding Cosmic Microwave
Background (CMB) acoustic scale (see, e.g., Refs.~\cite{Page:2003fa,Gruppuso:2005xy})
with the most recent available WMAP data~\cite{Komatsu:2010fb}.
Note that a standard Monte-Carlo Markov Chain analysis
(usually employed to extract the cosmological parameters
by comparison with available observations)
is not feasible if the model is unknown at the linear order.
On the contrary, the CMB acoustic scale can be computed directly
from the background equations, and this will allow us to obtain a constraint
for the free parameters of the model by comparing with the most
recent CMB data.
\par
The paper is organized as follows:
in Section~\ref{mfe} we obtain the complete set of Friedman equations
for a generalization of the entropic models introduced in
Refs.~\cite{Easson:2010av,Easson:2010xf}.
This, in Section~\ref{effDE}, will allow us to regard the model in terms of an effective
dark energy contribution depending on one parameter $\gamma$.
In particular, bounds on $\gamma$ will be obtained by comparing
the CMB acoustic scale with the 7yr~WMAP data in Section~\ref{acousticscale},
after computing the deceleration parameter in Section~\ref{decpar}.
Conclusions will be drawn in Section~\ref{conclusions}.
\section{Modified Friedman equations}
\label{mfe}
In the flat Friedman-Robertson-Walker metric
\begin{eqnarray}
ds^2 = -dt^2 + a^2(t)\, d \vec x\cdot d\vec x
\ ,
\end{eqnarray}
with the scale factor $a(t)$ normalized so that $a(t_{\rm now})=1$,
the model of universe considered in Refs.~\cite{Easson:2010av,Easson:2010xf}
features a Friedman equation given by
\begin{eqnarray}
\frac{\ddot a}{a}
=
- \frac{4\, \pi\, \tilde G}{ 3}\, \sum_i \left( \tilde\rho_i + 3\, \tilde p_i \right) + C_H\, H^2 + C_{\dot H}\, \dot H
\ ,
\label{eqFriedman}
\end{eqnarray}
where $H = \dot a / a\equiv a^{-1}\,da/dt$, $\tilde G$ is the ``bare'' Newton constant,
$\tilde \rho_i$ and $\tilde p_i$ are the ``bare'' energy density and pressure of
the $i$-th fluid filling the universe, while $C_H$ and $C_{\dot H}$ are constants
coming from the boundary terms.
As already stated in the Introduction, we take such terms as given and do
not derive them from the Einstein-Hilbert action on a manifold with boundaries.
Instead, we shall determine the full set of cosmological (and continuity) equations
consistent with Eq.~(\ref{eqFriedman}) without {\em a priori\/} fixing
$C_H$ and $C_{\dot H}$.
\par
We first note that Eq.~(\ref{eqFriedman}) can be rewritten as
\begin{eqnarray}
\dot H + \gamma \,H^2
=
- \frac{4\, \pi\, G}{ 3}\, \sum_i \left( \tilde \rho_i + 3\,\tilde p_i \right)
\, ,
\label{eqFriedman2}
\end{eqnarray}
where
\begin{eqnarray}
\gamma=\frac{1 - C_H}{1- C_{\dot H}}
\ ,
\label{ab}
\end{eqnarray}
and we rescaled~\footnote{This rescaling~\cite{sorbo} and Eq.~(\ref{ab}) are meaningful only
if $C_{\dot H} \neq 1$, a condition we assume throughout the paper.
If $C_{\dot H} =1$, Eq.~(\ref{eqFriedman2}) does not contain $\dot H$ and is
therefore not an equation of motion but a constraint.}
\begin{eqnarray}
G=\tilde G/(1- C_{\dot H})
\ .
\label{newton}
\end{eqnarray}
Noting that $d/dt=(a\,H)\,d/da$ and assuming
\begin{eqnarray}
\tilde \rho_i = \tilde\rho_i^{(0)} \, a^{-k_i}
\ ,
\quad
\tilde p_i=w_i\,\tilde \rho_i
\ ,
\label{w}
\end{eqnarray}
with $ \tilde\rho_i^{(0)}$ and $w_i$ constant,
Eq.~(\ref{eqFriedman2}) can be integrated exactly and yields
\begin{eqnarray}
H^2 = \frac{8\, \pi\, G}{ 3}\, \left( \sum_i \, c_i \,\tilde\rho_i + \frac{C}{a^{2\,\gamma}} \right)
\ ,
\label{eqFriedman3}
\end{eqnarray}
where the coefficients
\begin{eqnarray}
c_i=\frac{1+3\,w_i}{k_i-2\,\gamma}
\label{ci}
\end{eqnarray}
are well-defined only for $k_i\not=2\,\gamma$
and $C$ is a constant of integration.
Further, on differentiating Eq.~(\ref{eqFriedman3}) with respect to time and using
Eqs.~(\ref{eqFriedman2}) and~(\ref{ci}), we obtain the continuity equation
\begin{eqnarray}
\dot{\tilde \rho}_i + \frac{H}{c_i} \left[ \left(2 \, \gamma \, c_i + 1\right)\tilde\rho _i+ 3 \,\tilde p_i \right]
=\dot{\tilde\rho}_i+H\,k_i\,\tilde\rho_i
=0
\ ,
\label{coneq}
\end{eqnarray}
which is identically satisfied for the fluids~(\ref{w}).
For example, for dust we have $w_{\rm dust} =0$ and requiring $k_{\rm dust}=3$
yields $c_{\rm dust}=1/(3 -2\, \gamma)$.
Likewise, radiation has $w_{\rm rad}=1/3$ and requiring $k_{\rm rad}=4$ results in
$c_{\rm rad}=1/(2 - \gamma)$.
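As a quick numerical sanity check (our own sketch, with arbitrary illustrative parameter values and $G=1$), one can verify that the solution~(\ref{eqFriedman3}) with the coefficients~(\ref{ci}) indeed satisfies Eq.~(\ref{eqFriedman2}) for a single fluid, after rewriting the time derivative with $d/dt=(a\,H)\,d/da$:

```python
import math

def H2(a, gamma, k, w, rho0, C, G=1.0):
    # Exact solution (eqFriedman3) for a single fluid with rho = rho0 * a**(-k):
    # H^2 = (8 pi G / 3) * (c * rho + C * a**(-2 gamma)),  c = (1 + 3 w)/(k - 2 gamma)
    c = (1 + 3*w) / (k - 2*gamma)
    return (8*math.pi*G/3) * (c*rho0*a**(-k) + C*a**(-2*gamma))

def friedman2_residual(a, gamma, k, w, rho0, C, G=1.0, h=1e-6):
    # Eq. (eqFriedman2) rewritten with d/dt = a H d/da reads
    #   (a/2) d(H^2)/da + gamma H^2 = -(4 pi G / 3)(1 + 3 w) rho0 a**(-k)
    args = (gamma, k, w, rho0, C, G)
    dH2 = (H2(a + h, *args) - H2(a - h, *args)) / (2*h)  # central difference
    lhs = 0.5*a*dH2 + gamma*H2(a, *args)
    rhs = -(4*math.pi*G/3) * (1 + 3*w) * rho0 * a**(-k)
    return lhs - rhs
```

For both radiation ($w=1/3$, $k=4$) and dust ($w=0$, $k=3$) the residual vanishes to the accuracy of the finite difference.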
Specifying these parameters and assuming that the matter content of the universe
is a mixture of dust and radiation, the Friedman equations~(\ref{eqFriedman2}) and
(\ref{eqFriedman3}) can be rewritten as (see also Appendix \ref{appendice})
\begin{eqnarray}
\frac{\ddot a}{a}
&\!=\!&
- \frac{4\, \pi\, G}{ 3} \left[
\frac{\rho_{\rm dust}^{(0)}}{a^3}
+ 2 \, \frac{\rho_{\rm rad}^{(0)}}{a^4}
-2\,(1-\gamma) \, \frac{C}{a^{2 \gamma}}
\right]
\label{EqFriedman_ddota_eff}
\\
H^2
&\!=\!&
\frac{8\, \pi\, G}{ 3} \left[
\frac{\rho_{\rm dust}^{(0) }}{a^3}
+ \frac{\rho_{\rm rad}^{(0) }}{a^4}
+ \frac{C}{a^{2 \gamma}}
\right]
\label{EqFriedman_dota_eff}
\ ,
\end{eqnarray}
where $\rho_{\rm dust}^{(0) } = c_{\rm dust }\,\tilde\rho_{\rm dust}^{(0)}$
and $\rho_{\rm rad}^{(0) } = c_{\rm rad}\,\tilde\rho_{\rm rad}^{(0)}$ are
the present matter and radiation densities involved in observations.
Note that the (bare) densities $\tilde\rho_{\rm rad}$ and $\tilde\rho_{\rm dust}$
and the corresponding constant and dimensionless coefficients $c_{\rm rad}$
and $c_{\rm dust}$ are not observable separately, since only their products
appear in the equations (as we remark in Appendix~\ref{appendice}).
\par
Eqs.~(\ref{EqFriedman_ddota_eff}) and (\ref{EqFriedman_dota_eff})
are precisely the standard Friedman equations for a universe filled with dust
and radiation that scale in the usual way, namely
\begin{eqnarray}
\rho_{\rm dust}=\rho_{\rm dust}^{(0)}/a^3
\ ,
\qquad
\rho_{\rm rad}=\rho_{\rm rad}^{(0)}/a^4
\ ,
\label{newrho}
\end{eqnarray}
corrected by terms proportional to $C$.
Note also that, in the limit $\gamma \to 1$, we recover
the standard cosmological equations (with no dark energy component!)
with the $C$-term playing the role of an effective curvature contribution.
However, for $\gamma\not=1$, there is a region where the corrections
can be interpreted as an effective dark energy component if $C >0$.
This is the case we consider in the following, with
$\rho_{\rm dust}^{(0)}$ and $\rho_{\rm rad}^{(0)}$
equal to the present dust and radiation densities.
For example, $\rho_{\rm rad}^{(0)}$ is the present energy density of
the black-body radiation with temperature $T=2.725\,$K (multiplied by the
contribution from neutrinos).
\section{Effective Dark Energy}
\label{effDE}
The next step is to find whether there exist values of $\gamma$
corresponding to an accelerating universe, i.e.,~such that $\ddot a >0$.
This can be understood by analyzing Eq.~(\ref{EqFriedman_ddota_eff}).
The effective dark energy term proportional to $C$ will then drive
the present acceleration of the universe if it dominates in the r.h.s.~of
Eq.~(\ref{EqFriedman_ddota_eff}) at recent times.
We therefore require that $\gamma< 3/2$.
Moreover, since we also assume $C>0$, $\ddot a>0$ implies
\begin{eqnarray}
\gamma <1
\ .
\label{b<a}
\end{eqnarray}
Hence, when Eq.~(\ref{b<a}) is satisfied, the $C$-term
mimics the behavior of a dark energy fluid
[see Eq.~(\ref{EqFriedman_dota_eff})]
with constant parameter of state $w_X = -1 + 2\, \gamma / 3$.
\subsection{Deceleration parameter}
\label{decpar}
The deceleration parameter is defined as
\begin{eqnarray}
q = - \frac{\ddot a}{a\, H^2}
\ .
\label{defq}
\end{eqnarray}
Plugging Eqs.~(\ref{EqFriedman_ddota_eff}) and~(\ref{EqFriedman_dota_eff})
into the above definition and neglecting radiation~\footnote{Of course, this
approximation is valid for recent cosmological times (like in Fig.~\ref{FigDue}),
when the transition from matter to dark energy dominated epochs was taking place
and the contribution of radiation was subleading.}
yields
\begin{eqnarray}
q =
\frac{1}{2}
\left( \frac{\Omega_C\,a^{-3} - 2 \, (1-\gamma)\, \Omega_{\Lambda}\,a^{-2 \gamma}}
{\Omega_C\,a^{-3} + \Omega_{\Lambda}\,a^{-2\gamma}}\right)
\ ,
\end{eqnarray}
where $\Omega_{\rm C} = \rho_{\rm dust}^{(0)} / \rho_{\rm c}$,
$\Omega_{\Lambda} = C / \rho_{\rm c}$ and $\rho_{\rm c} \equiv 3\, H_0^2/ (8\, \pi\, G)$
with $H_0$ the present value of the Hubble function.
\par
In Fig.~\ref{FigDue}, we show $q$ as a function of the redshift $z$ for various values of
$\gamma$ from $-0.5$ to $0.5$ in steps of $0.1$.
Note that, on specializing $q$ at present time, we find
\begin{eqnarray}
q = \frac{1}{2}\, \Omega_C - (1-\gamma)\, \Omega_{\Lambda}
\ ,
\end{eqnarray}
which turns out to be the standard expression for the $\Lambda$CDM model when
$\gamma =0$~\cite{Sahni:2002fz,Visser:2003vq,Gruppuso:2005xy}.
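The behavior shown in Fig.~\ref{FigDue} is easy to reproduce. The sketch below (our own illustration, neglecting radiation and assuming flatness, $\Omega_C+\Omega_\Lambda=1$) evaluates $q=-\ddot a/(a\,H^2)$ directly from Eqs.~(\ref{EqFriedman_ddota_eff}) and~(\ref{EqFriedman_dota_eff}):

```python
def q_of_a(a, Om_C, Om_L, gamma):
    # ddot(a)/a and H^2 from the effective Friedman equations, radiation
    # neglected, both expressed in units of H0^2
    addot_over_a = -0.5 * (Om_C*a**-3 - 2*(1 - gamma)*Om_L*a**(-2*gamma))
    H2 = Om_C*a**-3 + Om_L*a**(-2*gamma)
    return -addot_over_a / H2  # q = -ddot(a)/(a H^2)
```

At $a=1$ this reduces to the present-day expression above, while deep in matter domination ($a\ll1$) it approaches the standard value $q=1/2$.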
\begin{figure}
\includegraphics[width=8.0cm]{Fig1.eps}
\caption{$q$ vs $z$ for $\gamma$ from $-0.5$ to $0.5$ with step equal to $0.1$.
Dotted line is for $\gamma=-0.5$, dashed line for $\gamma=0.5$ and
solid lines for values in between.}
\label{FigDue}
\end{figure}
\subsection{CMB acoustic scale}
\label{acousticscale}
The characteristic angular scale $\theta_A$ of the peaks of the angular power
spectrum in CMB anisotropies is defined as~\cite{Page:2003fa}
\begin{eqnarray}
\theta_A = {\frac{r_{\rm s}(z_{\rm dec})}{r(z_{\rm dec})}} = \frac{\pi}{\ell_A}
\ ,
\label{acousticangularscale}
\end{eqnarray}
where $r_{\rm s}(z_{\rm dec})$ is the comoving size of the sound horizon at decoupling,
$r(z_{\rm dec})$ the comoving distance at decoupling and $\ell _A$ the multipole associated
with the angular scale $\theta_A$, also called the {\em acoustic scale\/}.
Let us recall that $\ell_A$ is not exactly the scale of the first peak.
In general, the position of the $m$-th peak is given by $ \ell_m = \left( m - \phi_m \right) \ell_A $
where $\phi_m$ is a phase that depends on other cosmological parameters~\cite{Page:2003fa}.
\par
In order to make explicit the dependence of $\ell_A$ on the cosmological parameters,
we now consider separately numerator and denominator of Eq.~(\ref{acousticangularscale}).
The comoving size of the sound horizon at decoupling can be written as~\cite{huandsugiyama}
\begin{eqnarray}
r_{s}(z_{\rm dec})
&\!\!=\!\!&
\frac{4}{3\, H_0}\, \sqrt{\frac{\Omega_{\gamma}}{\Omega_C\, \Omega_b}}
\nonumber
\\
&&
\times
\ln \left[
\frac{\sqrt{1+ R_{\rm dec}} + \sqrt{R_{\rm dec}+R_{\rm eq}}}{1+\sqrt{R_{\rm eq}}}
\right]
\ ,
\label{rscomputed}
\end{eqnarray}
with $R(z)= 3 (\Omega_b/ (4 \Omega_{\gamma})) / (1+z)$ and where $\Omega_b$
and $\Omega_{\gamma}$ are the present density ratios for baryons
and photons respectively [note the index $\gamma$ in $\Omega_{\gamma}$
must not be confused with the parameter $\gamma$ defined in Eq.~(\ref{ab})].
Moreover, the label ``dec'' stands for ``computed at decoupling'', while ``eq''
stands for ``computed at equivalence'' (between radiation and matter).
By definition the comoving distance at decoupling reads
\begin{eqnarray}
r (z_{\rm dec}) = \int_0^{z_{\rm dec}} \frac{d z'}{H(z')}
\ ,
\label{cdz}
\end{eqnarray}
where $H(z)$ is given by Eq.~(\ref{EqFriedman_dota_eff}) and can be recast as
\begin{eqnarray}
H(z)
&\!\!=\!\!&
H_0 \left[ (1+z)^3 \, \Omega_C + (1+z)^4 \, \Omega_{\rm rad} +
\right.
\nonumber
\\
&&
\left.
+ (1+z)^{2 \gamma} \, \Omega_{\Lambda} \right]^{1/2}
\ ,
\label{Hznew}
\end{eqnarray}
where $ \Omega_{\rm rad} = \rho_{\rm rad}^{(0)} / \rho_{\rm c}$.
We can therefore write the acoustic scale $\ell_A$ as
\begin{widetext}
\begin{eqnarray}
\ell_A
=
\frac{3\, \pi}{4}\, \sqrt{\frac{\Omega_b}{\Omega_\gamma}}\,
\frac{ \int_0^{z_{\rm dec}} dz
\left[ (1+z)^3 + (1+z)^4 (\Omega_{\rm rad}/\Omega_C)
+ (1 +z)^{2 \gamma} (\Omega_{\Lambda}/\Omega_C) \right]^{-1/2}}
{\ln \left[ \sqrt{1+ R_{\rm dec}} +
\sqrt{R_{\rm dec}+R_{\rm eq}} \right]
- \ln\left[ {1+\sqrt{R_{\rm eq}}} \right]}
\, .
\label{la}
\end{eqnarray}
\end{widetext}
Let us remark that Eq.~(\ref{la}) was obtained by neglecting $\Omega_{\Lambda}$
in $r_{\rm s}(z_{\rm dec})$ (the comoving size of the sound horizon at decoupling).
However, it was shown in Ref.~\cite{Gruppuso:2005xy} that this approximation
at most leads to $10^{-5}\,\%$ error, much smaller than the precision of our
result below [see Eq.~(\ref{rangepergamma})].
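To illustrate how the constraint arises, Eq.~(\ref{la}) can be evaluated numerically. The sketch below is our own simplified implementation (a plain trapezoidal rule in $x=\ln(1+z)$; default arguments are the 7yr~WMAP best-fit values quoted below), not the code used for the figures:

```python
import math

def acoustic_scale(gamma, Ob=0.0449, Og=4.89e-5, Oc=0.266, OL=0.734,
                   z_dec=1088.2, z_eq=3196.0, f_nu=1.69, N=20000):
    Orad = f_nu * Og  # radiation includes the neutrino contribution
    def integrand(z):
        return ((1+z)**3 + (1+z)**4*Orad/Oc + (1+z)**(2*gamma)*OL/Oc)**-0.5
    # integrate the numerator of Eq. (la) in x = ln(1+z), trapezoidal rule
    xmax = math.log(1 + z_dec)
    h = xmax / N
    s = 0.0
    for i in range(N + 1):
        x = i * h
        z = math.exp(x) - 1
        weight = 0.5 if i in (0, N) else 1.0
        s += weight * integrand(z) * math.exp(x)
    I = s * h
    R = lambda z: 0.75 * (Ob/Og) / (1 + z)
    Rd, Re = R(z_dec), R(z_eq)
    log_term = math.log((math.sqrt(1 + Rd) + math.sqrt(Rd + Re))
                        / (1 + math.sqrt(Re)))
    return 0.75 * math.pi * math.sqrt(Ob/Og) * I / log_term
```

For $\gamma=0$ this gives $\ell_A$ close to $300$, within a few units of the WMAP value~(\ref{WMAPvalue}), and $\ell_A$ decreases monotonically with $\gamma$, which is what drives the tight bound~(\ref{rangepergamma}).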
\par
Eq.~(\ref{la}) can now be used to constrain the models under study by
comparing with the value obtained from the recent 7yr~WMAP
data~\cite{Komatsu:2010fb}~\footnote{http://lambda.gsfc.nasa.gov/
\label{foot1}}
\begin{eqnarray}
\ell_A^{\rm WMAP} = 302.44 \pm 0.8
\ .
\label{WMAPvalue}
\end{eqnarray}
Note that our choices for $\rho_{\rm dust}^{(0)}$ and $\rho_{\rm rad}^{(0)}$
were made in order to minimize deviations from the $\Lambda$CDM model.
In fact, departures of the background equations~(\ref{EqFriedman_ddota_eff})
and (\ref{EqFriedman_dota_eff}) from $\Lambda$CDM are completely parameterized
by the single parameter $\gamma$ and we are therefore allowed to estimate
Eq.~(\ref{la}) with the values of the other parameters that best fit WMAP data.
We insert in Eq.~(\ref{la}) the 7yr~WMAP best fit values~\cite{Komatsu:2010fb}
$\Omega_b = 0.0449$,
$\Omega_{\gamma}=4.89 \times 10^{-5}$, $z_{\rm dec}=1088.2$, $z_{\rm eq}=3196$,
$\Omega_C=0.266$, $\Omega_{\Lambda}=0.734$ and
$\Omega_{\rm rad}=1.69 \, \Omega_{\gamma}$.
In Fig.~\ref{FigTwo}, we show the acoustic scale (long and short dashed lines) versus
$\gamma$, along with the 1-$\sigma$ levels of the WMAP measurement
(solid horizontal lines).
We also display the dependence of the acoustic scale on $z_{\rm eq}$:
the long dashed line stands for the 7yr~WMAP best fit ($z_{\rm eq}=3196$) whereas
the short dashed lines stand for its 1-$\sigma$ values, $z_{\rm eq}=3196^{+ 134}_{-133}$.
\par
As Fig.~\ref{FigTwo} shows clearly, the parameter $\gamma$ must be very close to $0$
in order to have consistency with the WMAP observations.
More precisely, from Eq.~(\ref{la}) computed at the best fit values of the
7yr~WMAP parameters~\footnote{This means taking into
account only the long dashed line in Fig.~\ref{FigTwo}.},
we obtain
\begin{eqnarray}
\gamma = -0.02 \pm 0.04
\ .
\label{rangepergamma}
\end{eqnarray}
This result is consistent with Eq.~(\ref{b<a}) and
implies $-1.040 < w_X < -0.986$,
so that the added contributions must closely mimic a
cosmological constant.
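The translation from the bound on $\gamma$ to the equation-of-state interval is simple arithmetic (a sketch of our own, recovering the quoted range up to rounding):

```python
def w_X(gamma):
    # effective constant equation-of-state parameter of the C-term
    return -1 + 2*gamma/3

g_best, g_err = -0.02, 0.04  # constraint (rangepergamma)
w_lo, w_hi = w_X(g_best - g_err), w_X(g_best + g_err)
```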
Further, from Eq.~(\ref{ab}), this implies that
\begin{eqnarray}
|1-C_H|
\ll
|1-C_{\dot H}|
\label{cCC}
\end{eqnarray}
in Eq.~(\ref{eqFriedman}).
For example, if $0<C_H, C_{\dot H}<1$, then the strong inequality~(\ref{cCC})
is satisfied for $C_H \approx 1$.
\begin{figure}
\includegraphics[width=8.0cm]{Fig2.eps}
\caption{Acoustic scale for entropic universe as function of $\gamma$.
Horizontal lines represent 1-$\sigma$ WMAP measurement (colored region displays
1-$\sigma$ contour).
Long dashed line is for $\ell_A$ computed at best fit of $z_{\rm eq}=3196$.
Short dashed lines are for $\ell_A$ computed at 1-$\sigma$ level of
$z_{\rm eq}=3196^{+ 134}_{-133}$.}
\label{FigTwo}
\end{figure}
\section{Conclusions}
\label{conclusions}
In the present paper, we have shown how it is possible to recover standard
background scalings for radiation and matter and standard effective cosmological
equations [see Eqs.~(\ref{EqFriedman_ddota_eff}) and (\ref{EqFriedman_dota_eff})]
when the Friedman equation for $\ddot a$ is modified by adding
terms proportional to $H^2$ and $\dot H$ like in Eq.~(\ref{eqFriedman}).
An example that requires such a modification is given by the
entropic accelerating universe of Refs.~\cite{Easson:2010av,Easson:2010xf},
although our considerations are more general.
Moreover we have shown how to obtain the recent cosmological acceleration
within the considered model, without adding a dark energy fluid.
We note that for the range of parameters considered here,
the model under analysis does not modify the evolution of the universe when
it was matter or radiation dominated.
Therefore, none of the standard cosmological constraints coming from such early
epochs, such as Big Bang Nucleosynthesis (BBN), are affected.
\par
Specifically, we have shown that the parameter space admits a region
(i.e., $\gamma < 1$) where the universe accelerates at recent cosmological times
(i.e., $z \sim 0.5$).
In fact, the additional terms mimic the behavior of a fluid with a constant
parameter of state $w_X = -1 + 2\, \gamma /3$.
This has been studied by computing the deceleration parameter $q$
[see Fig.~\ref{FigDue} and Section~\ref{decpar}] and stringent constraints
have been obtained comparing the CMB acoustic scale $\ell_A$
with the WMAP~7yr release data.
Note that a standard Monte-Carlo Markov Chain analysis
(usually employed to extract the cosmological parameters
by comparison with available observations) is not feasible if the model
is not known at linear order~\footnote{In fact, as far as we know, no Lagrangian
is known for these models.
However, as we stated in the Introduction, we are not debating
the theoretical ground the ``entropic-like'' proposal is based on,
but are rather interested in which constraints we can provide
for such class of models from what we have at hand, i.e.~the
background equations.}.
On the contrary, the CMB acoustic scale can be computed directly
from the background equations, and this has allowed us to obtain a constraint
for the free parameters of the model by comparing with the most
recent CMB data.
This comparison has told us that $|\gamma|\ll 1$ (so that $w_X\simeq -1$) and the
coefficients of $\dot H$ and of $H^2$ in Eq.~(\ref{eqFriedman}) must therefore
satisfy Eq.~(\ref{cCC}) for the model to be phenomenologically viable.
In particular, the entropic accelerating universe corresponds to a specific choice
of the constants $C_H$ and $C_{\dot H}$, that
is $\gamma_{\rm I} = 0$ and $\gamma_{\rm II}=0.68$ for the two cases
explicitly mentioned in Ref.~\cite{Easson:2010av}.
The latter is at odds with the constraint~(\ref{rangepergamma}),
whereas the former is consistent.
\par
Future CMB observations coming from the {\sc Planck} satellite~\footnote{Planck
(http://www.esa.int/Planck) is a project of the European Space Agency, ESA.}
are expected to improve the error on the acoustic scale by about
one order of magnitude~\cite{Colombo:2008ta}.
The same improvement is therefore expected for the estimate of the parameter
$\gamma$, which, in principle, should allow us to distinguish these
models from the pure $\Lambda$CDM model with $\gamma=0$.
We finally mention that our findings are in agreement with those of
Ref.~\cite{koivisto}.
\begin{acknowledgments}
We thank F.~Finelli for fruitful discussions.
R.C.~would like to thank D.~Easson and R.~Woodard.
A.G.~also thanks L.~Sorbo for comments on the manuscript
and L.~Colombo for some clarification on Ref.~\cite{Colombo:2008ta}.
We acknowledge the use of the Legacy Archive for Microwave Background
Data Analysis (LAMBDA).
Support for LAMBDA is provided by the NASA Office of Space Science.
A.G.~acknowledges support by ASI through ASI/INAF agreement
I/072/09/0 for the Planck LFI activity of Phase E2.
\end{acknowledgments}
\section{Introduction}
Post-Newtonian (PN) approximation methods in general relativity
are based on the weak-field limit in which the metric is close to the
Minkowski metric and the assumption that the typical
velocity $v$ in a system divided by the speed of light is
very small. In post-Minkowskian (PM) approximation methods only the weakness
of the gravitational field is assumed but no assumption
about slowness of motion is made. In the PM approximation we obtain\cite{LSB}
the Hamiltonian for gravitationally interacting particles
that includes all terms linear in the gravitational constant $G$.
It thus yields PN approximations to {\it any} order in $1/c$ when terms linear
in $G$ are considered; and it can also describe
particles with ultrarelativistic velocities or with zero rest mass.
We use the canonical formalism of Arnowitt, Deser, and Misner
(ADM) \cite{ADM} where the independent degrees of freedom of the gravitational field are described
by $h_{ij}^{TT}$, the transverse-traceless part of $h_{ij}=g_{ij}-\delta_{ij}$
($h^{TT}_{ii}=0$, $h^{TT}_{ij,j}=0$, $i,j=1,2,3$), and by conjugate
momenta $c^3/(16\pi G) {\pi}^{ij\,TT}$. The field is generated by $N$ particles with rest
masses $m_a$ located at ${\bf x}_a$, $a=1, ... N$, and with momenta ${\bf p}_a$.
We start with the Hamiltonian\cite{S86} correct up to $G^2$ found by
the expansion of the Einstein equations (the energy and momentum constraints)
in powers of $G$ and by the use of suitable regularization procedures.
When we consider only terms linear in $G$ and put $c=1$ this Hamiltonian reads
\begin{align}
\label{HlinGS}
H_{\rm lin}=&
\sum_a {\overline{m}}_a - \frac{1}{2}G\sum_{a,b\ne a} \frac{{\overline{m}}_a {\overline{m}}_b }{ r_{ab} }
\left( 1+ \frac{p_a^2}{{\overline{m}}_a^2}+\frac{p_b^2}{{\overline{m}}_b^2}\right)
\\
\nonumber
&+ \frac{1}{4}G\sum_{a,b\ne a} \frac{1}{r_{ab}}\left( 7\, {\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b + ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ab})({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ab}) \right)
-\frac{1}{2}\sum_a \frac{p_{ai}p_{aj}}{{\overline{m}}_a}\,h_{ij}^{TT}({\bf x}={\bf x}_a)
\\\nonumber
\nonumber
&+\frac{1}{16\pi G}\int d^3x~ \left( \frac{1}{4} h_{ij,k}^{TT}\, h_{ij,k}^{TT} +\pi^{ij\,TT} \pi^{ij\,TT}\right)~,
\end{align}
where ${\overline{m}}_a=\left( m_a^2+{\bf p}^2_a \right)^\frac{1}{2}$,
${\bf n}_{ab} r_{ab} = {\bf x}_a-{\bf x}_b$, $|{\bf n}_{ab}|=1$.
The equations of motion for particles are standard Hamilton equations, the Hamilton equations for the field read
\begin{equation}
\dot {\pi}^{ij\,TT}~=~-16\pi G~\delta_{kl}^{TT\,ij} \frac{\delta H}{\delta h_{kl}^{TT}}
~,~~~
\dot h_{ij}^{TT}~=~~16\pi G~\delta_{ij}^{TT\,kl} \frac{\delta H}{\delta {\pi}^{kl\,TT}}~;
\end{equation}
here the variational derivatives and the TT-projection operator
$\delta_{kl}^{TT\,ij} = \frac{1}{2}\left( \Delta_{ik}\Delta_{jl}+\Delta_{il}\Delta_{jk}-\Delta_{ij}\Delta_{kl}\right){\Delta^{-2}}$,
$\Delta_{ij} = \delta_{ij}\Delta - \partial_i\,\partial_j$, appear.
These equations imply the equations for the gravitational field
in the first PM approximation to be the wave equations with point-like sources $\sim\delta^{(3)}( {\bf x}-{\bf x}_a)$.
Since both the field and the accelerations $\dot {\bf p}_a$
are proportional to $G$, the changes of the field due to the accelerations of particles are
of the order $O(G^2)$. Thus, in this approximation,
wave equations can be solved assuming the field to be generated by the unaccelerated motion of the particles,
i.e., it can be written as a sum of boosted static spherical fields:
\begin{equation}
\label{LiWi4h}
h_{ij}^{TT}({\bf x}) =
\delta_{ij}^{TT\,kl} \sum_b
\frac{4G}{{\overline{m}}_b}
\frac{1}{|{\bf x}-{\bf x}_b|}
\frac{p_{bk}p_{bl}}{\sqrt {1-{\dot {\bf x}_b}^2\sin^2 \theta_b}}~,
\end{equation}
where ${\bf x}-{\bf x}_a={\bf n}_a |{\bf x}-{\bf x}_a|$ and $\cos \theta_a={{\bf n}_a{\hspace{-1.3pt}\cdot\!\,} \dot {\bf x}_a /|\dot {\bf x}_a|}$.
Surprisingly, it is possible to convert the projection $\delta_{ij}^{TT\,kl}$ (which involves solving two Poisson
equations) into an inhomogeneous linear second order ordinary differential equation and write
\begin{align}
&h_{ij}^{TT}({\bf x}) ~=~ \sum_b
\frac{G}{|{\bf x}-{\bf x}_b|} \frac{1}{{\overline{m}}_b}\frac{1}{y(1+y)^2}
\Big\{
\left[y{\bf p}_b^2-({\bf n}_b{\!\,\cdot\!\,}{\bf p}_b)^2(3y+2)\right]\delta_{ij}
\\\nonumber&
+2\left[
1- \dot {\bf x}_b^2(1 -2\cos^2 \theta_b)\right]{p_{bi}p_{bj}}
+\left[
\left( 2+y\right)({\bf n}_b{\!\,\cdot\!\,}{\bf p}_b)^2
\!-\!\left( 2+{3}y -2\dot {\bf x}_b^2\right){\bf p}_b^2
\right]{n_{bi}n_{bj}}
\\\nonumber&
+2({\bf n}_b{\!\,\cdot\!\,}{\bf p}_b) \left(1-\dot {\bf x}_b^2+2y\right) \left(n_{bi}p_{bj}+p_{bi}n_{bj}\right)
\Big\}
+O({\overline{m}}_b \dot {\bf x}_b-{\bf p}_b)G\!+\!O(G^2)~;
\label{unprojected_h}
\end{align}
here $y = y_b \equiv\sqrt {1-{\dot {\bf x}_b}^2\sin^2 \theta_b}$ and we anticipate $O({\overline{m}}_b \dot {\bf x}_b-{\bf p}_b)\sim G$.
In the next step we use the Routh functional (see, e.g., Ref.\cite{DJS98})
\begin{equation}
R( {\bf x}_a,{\bf p}_a, h_{ij}^{TT}, \dot h_{ij}^{TT} ) =
H - \frac{1}{16\pi G}\int d^3x~ \pi^{TT\,ij}\, \dot h_{ij}^{TT}~,
\end{equation}
which is ``the Hamiltonian for the particles but the Lagrangian for the field.''
Since the functional derivatives of the Routhian vanish if the field equations hold,
the (non-radiative) solution (\ref{LiWi4h}) can be substituted into the Routh functional
without changing the Hamilton equations for the particles.
Using Gauss's law, an integration by parts and similar standard steps
(such as dropping out total time derivatives, i.e. a canonical transformation)
and the explicit substitution for $h_{ij}^{TT}({\bf x}={\bf x}_a)$
we get the Hamiltonian for a $N$-particle gravitating system
in the PM approximation:
\begin{align}
\label{H1PM}
\nonumber
&H_{\rm lin}=
\sum_a {\overline{m}}_a
+ \frac{1}{4}G\sum_{a,b\ne a} \frac{1}{r_{ab}}\left( 7\, {\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b + ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ab})({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ab}) \right)
- \frac{1}{2}G\sum_{a,b\ne a} \frac{{\overline{m}}_a {\overline{m}}_b }{ r_{ab}}
\\ &
\times\left( 1+ \frac{p_a^2}{ {\overline{m}}_a^2}+\frac{p_b^2}{{\overline{m}}_b^2}\right)
-\frac{1}{4}
G\sum_{a,b\ne a} \frac{1}{r_{ab}}
\frac{({\overline{m}}_a {\overline{m}}_b)^{-1}}{ (y_{ba}+1)^2 y_{ba}}
\Bigg[
2\Big(2
({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b)^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2
\\\nonumber&
\!-\!2 ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba}) ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba}) ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b) {\bf p}_b^2
\!+\!({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 {\bf p}_b^4
\!-\!({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b)^2 {\bf p}_b^2
\Big ) \frac{1}{{\overline{m}}_b^2}
+2 \Big[-\!{\bf p}_a^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2
\\ \nonumber&
+ ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2 +
2 ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba}) ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba}) ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b) +
({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b)^2 - ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 {\bf p}_b^2\Big]
\\ \nonumber&
+
\Big[-3 {\bf p}_a^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2 +({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba})^2
+8 ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba}) ({\bf p}_b{\!\,\cdot\!\,}{\bf n}_{ba}) ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf p}_b)
\\ \nonumber&
+ {\bf p}_a^2 {\bf p}_b^2 - 3 ({\bf p}_a{\hspace{-1.3pt}\cdot\!\,}{\bf n}_{ba})^2 {\bf p}_b^2 \Big]y_{ba}
\Bigg]~,~~~~~~~~~~~~~~~~~~~~~~~~~y_{ba} = \frac{1}{{\overline{m}}_b} \sqrt{ m_b^2+ \left ({\bf n}_{ba}{\!\,\cdot\!\,}{\bf p}_b\right)^2}~.
\end{align}
Since the PM approximation can describe ultrarelativistic or zero-rest-mass particles,
we calculated gravitational scattering of two such particles using the Hamiltonian (\ref{H1PM}).
If the perpendicular separation ${\bf b}$ of the trajectories ($|{\bf b}|$ is the impact parameter)
in the center-of-mass system (${\bf p}_1=-{\bf p}_2\equiv{\bf p}$) is used, ${\bf p}{{\!\,\cdot\!\,}}{\bf b}=0$,
we find, after evaluating a few simple integrals, that the exchanged momentum in the
system is given by
\begin{align}
\label{delta_p}
\Delta {\bf p} &= -2\frac{{\bf b}}{{\bf b}^2} \frac{G}{|{\bf p}|}
\frac{{\overline{m}}_1^2 {\overline{m}}_2^2}{{\overline{m}}_1 +{\overline{m}}_2 }
\left[
1+\left(\frac{1}{{\overline{m}}_1^2}+\frac{1}{{\overline{m}}_2^2}+\frac{4}{{\overline{m}}_1{\overline{m}}_2} \right){\bf p}^2
+\frac{{\bf p}^4}{{\overline{m}}_1^2 {\overline{m}}_2^2 }
\right]~.
\end{align}
The quartic term is all that remains from the field part $h^{TT}_{ij}$ in agreement with Westpfahl\cite{W85}
who used a very different approach.
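Two limits of Eq.~(\ref{delta_p}) provide useful consistency checks (our own sketch, in units $c=1$ with the illustrative choice $G=1$): for slow particles its magnitude reduces to the Newtonian small-angle impulse $2\,G\,m_1^2 m_2^2/[b\,|{\bf p}|\,(m_1+m_2)]$, while for two massless particles it stays finite and gives $|\Delta{\bf p}|=8\,G\,{\bf p}^2/b$:

```python
import math

def dp_mag(p, b, m1, m2, G=1.0):
    # magnitude of the exchanged momentum, Eq. (delta_p); p = |p| in the
    # center-of-mass frame, b = impact parameter
    mb1, mb2 = math.hypot(m1, p), math.hypot(m2, p)  # m-bar = sqrt(m^2 + p^2)
    bracket = (1 + (1/mb1**2 + 1/mb2**2 + 4/(mb1*mb2))*p**2
                 + p**4 / (mb1**2 * mb2**2))
    return (2*G/(b*p)) * mb1**2*mb2**2/(mb1 + mb2) * bracket
```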
The Hamiltonian (\ref{H1PM}) can also describe a binary system with one massless and one massive particle orbiting around each other. This is not obvious: the second-, fourth- or even sixth-order post-Newtonian approximation would not be able to describe massless test particles orbiting around a Schwarzschild black hole.
\bigskip
We acknowledge the support from SFB/TR7 in Jena,
from the Grant GA\v CR 202/09/0772 of the Czech Republic, and of Grants No LC 06014 and the MSM 0021620860
of Ministry of Education.
\section{Introduction}
The banded structures observed at Jupiter's surface correlate with strong
prograde (or eastward) and retrograde (or westward) winds. A strong prograde
equatorial jet reaching $150$~m/s extends over $\pm 15^\circ$ latitude.
It is flanked by alternating jets with weaker amplitudes around
$10-20$~m/s up to the polar regions.
The depth to which those winds penetrate into Jupiter
has been debated intensely over the last
decades \citep[for a review, see][]{Vasavada05}. In the ``weather layer''
scenario, the zonal jets are confined to a thin layer close to the cloud levels
\citep[e.g.][]{Cho96,Lian10}, while under the ``deep convection'' hypothesis
the zonal winds could penetrate deep over $10^3$ to $10^4$~km
\citep[e.g.][]{Busse76,Christensen02,Heimpel05,Jones09}. Those two end-member
scenarios also differ in the nature of the physical mechanism responsible for
sustaining the jets. Possible candidates range from shallow moist convection at
the cloud level to deep convective motions in Jupiter's interior.
For both physical forcings, rapid rotation is instrumental
for providing a statistical correlation that allows feeding energy from small
scale convection into the larger scale jets \citep{Rhines75}.
Because of rapid rotation and the associated Taylor-Proudman theorem, the
jets could penetrate deep into the molecular envelope even when they are only
driven in a shallow weather layer \citep{Showman06}.
Determining the actual depth of the Jovian zonal jets is one of the
main goals of the ongoing NASA Juno mission \citep{Bolton17}.
Using Juno's gravity measurements \citep{Iess18}, \cite{Kaspi18} infer that the
equatorially-antisymmetric component of the zonal jets is reduced to an
amplitude of $1\%$ of their surface values $3000$~km below the one bar level.
However, the interpretation of gravity perturbations in terms of zonal flows is
complicated and other zonal flow profiles could be envisioned
\citep[e.g.][]{Kong18,Wicht20a,Galanti20}.
Several additional arguments favour comparable
depths of $3000$ to $4000$~km. A first indication comes from studies of
rapidly-rotating convection in thin spherical shells. Such numerical models
have been developed to focus on the dynamics of the molecular envelope
of the gas giants. They succeed in reproducing several
key features of the observed zonal flow pattern such as a dominant prograde
equatorial jet \citep[e.g.][]{Christensen02}, multiple jets of alternated
directions \citep{Heimpel05,Jones09,Gastine14}, or the formation
of large scale vorticies \citep{Heimpel16}. The width of the main prograde
equatorial jet directly depends on the thickness of the simulated
spherical shell \citep[e.g.][]{Heimpel07}. Best agreement with Jupiter is
obtained when the lower boundary is set to $0.95\,R_J$.
\begin{figure}
\centering
\includegraphics[width=8.3cm]{Br_Juno}
\caption{ Hammer projections of the
radial component of the magnetic field at the surface of Jupiter (upper
panel) and at $0.9\,R_J$ (lower panel). These maps have been reconstructed using
the JRM09 Jovian field model by \cite{Connerney18}.}
\label{fig:BrJup}
\end{figure}
Another set of constraints on the zonal winds depth comes from the
Jovian magnetic field. Using Juno's first nine orbits, \cite{Connerney18} have
constructed the JRM09 internal field model up to the harmonic degree $\ell=10$
shown in Fig.~\ref{fig:BrJup}. The surface field (upper panel) is dominated by
a tilted dipole and features intense localised flux concentrations.
The downward continuation of the surface field to $0.9\,R_J$
(Fig.~\ref{fig:BrJup}\textit{b}) reveals an
intricate field morphology with
clear differences between the northern and southern hemispheres. In the
northern hemisphere, the field is strongly concentrated in a latitudinal band,
while the southern hemisphere is dominated by a pronounced field concentration
just below the equator \citep{Moore18}.
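The downward continuation used for Fig.~\ref{fig:BrJup}\textit{b} relies on the standard potential-field assumption: in the source-free envelope the degree-$\ell$ contribution to $B_r$ scales as $(R_J/r)^{\ell+2}$. A minimal sketch of the resulting amplification (our own illustration, not part of the JRM09 construction itself):

```python
def br_amplification(ell, r_over_RJ):
    # potential-field downward continuation: the degree-ell part of B_r
    # grows by (R_J/r)**(ell+2) between the surface and radius r
    return (1.0 / r_over_RJ)**(ell + 2)
```

Continuing down to $0.9\,R_J$ thus amplifies the highest JRM09 degree, $\ell=10$, by a factor $(1/0.9)^{12}\approx3.5$, which is why the small-scale structure is much more prominent in the lower panel.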
A comparison of Juno's measurements with magnetic data from previous
space missions, such as Pioneer or Voyager, shows only mild changes over a time
span of 45 years \citep{Ridley16,Moore19}. This suggests an upper bound for the
jet speed of roughly $1$~cm/s at a depth where magnetic effects start to
matter at about $0.94\,R_J$.
Different lines of argument therefore suggest a lower boundary for the jets
located around $0.94-0.96\,R_J$. Which mechanism could possibly quench the jets
in this depth range? Two alternatives have been suggested so far: Lorentz forces
or a stably stratified layer.
Lorentz forces rely on electric currents and thus depend on the electrical
conductivity. Experimental data
\citep[e.g.][]{Weir96,Nellis99,Knudson18} and \textit{ab initio} simulations
\citep[see][and references therein]{French12,Knudson18} indicate that the
electrical conductivity increases at at super-exponential rate with depth due to
the ionization of molecular hydrogen. At pressures of about one Mbar, however,
hydrogen assumes a metallic state and the conductivity increases much more
mildly. Here we use a model based on the \textit{ab initio} simulations by
\cite{French12}, which puts the transition to metallic hydrogen at about
$0.9\,R_J$.
However, many aspects of the conductivity profile remain debated. This includes
the depth of the phase transition and the question of whether it is a first
order or a gradual second order transition \citep[for a review
see][]{Stevenson20}.
A key parameter for estimating dynamo action is the magnetic Reynolds
number $Rm$ which quantifies the ratio of induction and magnetic diffusion. In
the outer envelope where the electrical conductivity increases
extremely steeply, \cite{Liu08} showed that $Rm = U_z d_\sigma \sigma
\mu_0 $ provides a more appropriate definition of the magnetic Reynolds number
associated with zonal motions \citep[see also][]{Cao17}. Here $U_z$ is the
typical zonal flow velocity, $\mu_0$ the vacuum permeability and
$d_\sigma=|\partial \ln \sigma / \partial r|^{-1}$ the electrical
conductivity scale height. As long as $Rm$ remains smaller than unity, the
zonal flows merely modify the field that is produced in the deeper
interior \citep{Wicht19a}. In Jupiter, this region extends
down to about $0.96\,R_J$ \citep{Wicht19}.
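The scale-height-based magnetic Reynolds number can be illustrated with a short
numerical sketch. For a purely exponential conductivity profile, the scale
height $d_\sigma$ reduces to the e-folding length $H$, so $Rm$ simply tracks
$\sigma(r)$. All numbers below ($\sigma_0$, $H$, $U_z$, $r_0$) are illustrative
placeholders, not Jupiter values.

```python
import math

# Sketch of the scale-height-based magnetic Reynolds number
# Rm = U_z * d_sigma * sigma * mu_0 of Liu et al. (2008), for a purely
# exponential conductivity profile. All values are illustrative.
mu_0 = 4.0e-7 * math.pi   # vacuum permeability [H/m]
sigma0 = 1.0              # conductivity at reference radius r0 [S/m]
r0 = 6.85e7               # reference radius [m], placeholder
H = 1.0e6                 # conductivity scale height [m], placeholder
U_z = 10.0                # zonal flow speed [m/s], placeholder

def sigma(r):
    """Exponential conductivity, decaying outward."""
    return sigma0 * math.exp(-(r - r0) / H)

def d_sigma(r, dr=1.0):
    """Scale height |d ln(sigma)/dr|^-1 via a central difference."""
    dlns = (math.log(sigma(r + dr)) - math.log(sigma(r - dr))) / (2.0 * dr)
    return 1.0 / abs(dlns)

def Rm(r):
    return U_z * d_sigma(r) * sigma(r) * mu_0

# For a pure exponential, d_sigma = H everywhere, so Rm falls off
# outward with the same e-folding length as sigma itself.
print(Rm(r0), Rm(r0 + 5 * H))
```

With the real super-exponential profile, $d_\sigma$ itself shrinks outward,
which makes the outward drop of $Rm$ even steeper than in this toy case.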
Lorentz forces then simply scale with $\sigma$ \citep{Wicht19a} and thus
remain negligible in the very outer region but kick in abruptly at a certain
depth.
While this suggests that Lorentz forces are a good candidate for quenching the
jets, several numerical simulations reveal a different picture.
Instead of producing multiple alternating jets as non-magnetic models do,
global dynamo simulations that adopt Jupiter's electrical conductivity profile
only feature one main prograde equatorial jet
aligned with the rotation axis, which mostly resides in the outer weakly
conducting region.
Strong azimuthal Lorentz forces in the metallic interior
suppress zonal motions along the axis of rotation
and kill or significantly brake all other jets
\citep{Heimpel11,Duarte13,Jones14,Gastine14a,Dietrich18,Duarte18}.
Instead of explaining the
observed depth, Lorentz forces seem to yield an unrealistic jet amplitude
and structure \citep{Christensen20}.
Another candidate that could prevent
jets from penetrating deeper is a stably stratified layer that would
inhibit the convective mixing.
The Juno gravity observations suggest that Jupiter consists of several distinct
layers: (\textit{i}) an outer envelope with reduced
He (and Ne) abundance compared to the primordial solar value,
(\textit{ii}) an intermediate envelope with a higher He abundance and possibly a
lower abundance of heavier elements,
(\textit{iii}) a deeper interior sometimes called a diluted core with an
increased heavier element abundance, and (\textit{iv}) possibly a denser core
\citep{Wahl17,Debras19,Stevenson20}.
Stable stratification could help to explain how the different layers formed
and were preserved over time. In gas giant planets, such stable
layers could possibly occur when helium segregates from hydrogen due to its
poor miscibility \citep[e.g.][]{Stevenson80,Lorenzen11}.
Below a critical temperature, helium tends to separate from hydrogen and
forms droplets that rain towards the interior. This leads to a helium depletion
of the outer envelope and leaves behind a stabilising helium gradient that
separates the outer envelope from the interior. However, it remains unclear
whether this process has already started in Jupiter
\citep{Militzer16,Schottler18}.
If so, estimates put the upper boundary of the related stable layer around
$1$~Mbar, which roughly corresponds to $0.9\,R_J$.
The recent interior models by \cite{Debras19} put the upper boundary of
the stable layer at $0.93\,R_J$ and the lower bound somewhere between
$0.8\,R_J$ and $0.9\,R_J$.
In the limit of rapid rotation, the dynamical influence of a stably-stratified
layer (hereafter SSL) depends on the ratio of the
Brunt-V\"ais\"al\"a frequency $N$ to the rotation rate $\Omega$.
Using a linear model of non-magnetic rotating convection,
\cite{Takehiro01} have shown that the distance of penetration $\delta$ of a
convective feature of size $d_c$ into a stratified layer follows $\delta \sim
(N/\Omega)^{-1}\,d_c$. Numerical models by \cite{Gastine20} showed that
this scaling still holds in nonlinear dynamo models.
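As a minimal illustration of this penetration scaling (the feature size $d_c$
and the $N/\Omega$ values are placeholders; the range $1$--$10$ brackets the
estimates quoted for Jupiter below):

```python
# Penetration distance of a convective feature of size d_c into a
# stratified layer, delta ~ (N/Omega)^-1 * d_c (Takehiro & Lister 2001).
# The numbers are placeholders, not Jupiter values.
def penetration_depth(d_c, N_over_Omega):
    """Rough penetration estimate; meaningful for N/Omega >~ 1."""
    return d_c / N_over_Omega

d_c = 1.0e6  # convective feature size [m], placeholder
for ratio in (1.0, 3.0, 10.0):
    print(ratio, penetration_depth(d_c, ratio))
```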
The penetration of zonal flows into such layers is more intricate since it
directly depends on the thermal structure at the edge of the SSL
\citep[e.g.][]{Showman06}. The global numerical models of
solar-type stars by \cite{Brun17} show that the zonal motions do not penetrate
into the stably-stratified interior when $N /\Omega \gg 1$ \citep[see
also][]{Browning04,Augustson16}.
This ratio, however, remains poorly known in Jupiter's interior. The internal
models by \cite{Debras19} suggest $1 \leq N/\Omega \leq 3$
\citep[see also][]{Christensen20}.
Considering simplified 2-D axisymmetric numerical models where the zonal
flows are forced by an analytical source term, \cite{Christensen20} claim
that stable stratification alone is not sufficient to brake the geostrophic
zonal winds.
They suggest that weak Lorentz forces drive a weak meridional flow
that penetrates the upper edge of the SSL, where it encounters the strong
stable stratification. This in turn alters the latitudinal entropy
structure, which quenches the jets according to the thermal
wind balance.
Here we adopt the idea of a stably-stratified sandwich layer and, for the first
time, study its impact on the zonal jets and overall dynamics in a full 3-D
global dynamo simulation.
The paper is organised as follows. Numerical model and methods are detailed in
\S~\ref{sec:models}. Section~\ref{sec:results} is dedicated to the description
of the results, while the implications for Jupiter are further discussed in
\S~\ref{sec:disc}.
\section{Model and methods}
\label{sec:models}
\subsection{Defining a non-adiabatic reference state}
We consider a magnetohydrodynamic simulation of a conducting fluid in a
spherical shell of radius ratio $r_i/r_o$ rotating at a constant rotation rate
$\Omega$ about the $z$-axis. We adopt the so-called ``Lantz-Braginsky-Roberts''
anelastic approximation of the Navier-Stokes equations introduced by
\cite{Brag95} and \cite{Lantz99}. It allows the incorporation of the
radial dependence
of the background state while filtering out the fast acoustic waves that would
otherwise significantly hamper the timestep size. Within the anelastic
approximation, one actually solves for small perturbations around a
background state that is frequently assumed to be well-mixed and adiabatic
\citep[e.g.][]{Jones11,Verhoeven15}.
Here we follow a slightly different approach. Since we aim at modelling the
effects of a stably stratified layer at the top of the metallic region, we
define a reference state that can depart from the adiabat.
This is a common approach in solar convection models that
incorporate both the radiative core and the convective envelope
\citep[e.g.][]{Alvan14}. Practically, this implies that any physical quantity
$x$ is expanded in spherical coordinates $(r,\theta,\phi)$ as follows
\begin{equation}
x(r,\theta,\phi,t)=\tilde{x}(r)+x'(r,\theta,\phi,t),
\end{equation}
where the tilde denotes the spherically-symmetric and static
background state, while the primes correspond to fluctuations about this mean.
To ensure the validity of the anelastic approximation
when using a non-adiabatic reference state, the perturbations should remain
small as compared to the background state
\citep[e.g.][]{Gough69}, i.e.
\[
\dfrac{| x'|}{|\tilde{x}|} \ll 1,\quad \forall (r,\theta,\phi,t)\,.
\]
In the following, we adopt a dimensionless formulation of the MHD equations.
Starting with the background reference state, the physical quantities such as
the background density $\tilde{\rho}$, temperature $\tilde{T}$, gravity $\tilde{g}$ and entropy
gradient $\mathrm{d}\tilde{s}/\mathrm{d} r$ are non-dimensionalised with respect
to their value at the outer radius $r_o$. We adopt the spherical shell gap
$d=r_o-r_i$ as the reference lengthscale.
To precisely control the location and the degree of stratification of the SSL, a
possible approach consists in prescribing the functional form of the background
entropy gradient $\mathrm{d}\tilde{s}/\mathrm{d}r$ \citep[see for instance ][for
geodynamo models]{Takehiro01,Gastine20}. Regions with a
negative gradient $\mathrm{d}\tilde{s}/\mathrm{d} r < 0$ are super-adiabatic and
hence prone to harbour convective motions, while the fluid layers
with $\mathrm{d}\tilde{s}/\mathrm{d} r > 0$ are stably
stratified. In the following, we assume a constant degree of
stratification
$\mathrm{d}\tilde{s}/\mathrm{d} r = \Gamma_s$ between the radii $\mathcal{R}_i$
and $\mathcal{R}_o$ and a constant dimensionless negative gradient
$\mathrm{d}\tilde{s}/\mathrm{d} r = -1$ in the surrounding convective layers.
Those regions are then smoothly connected with $\tanh$
functions centered at $\mathcal{R}_i$ and $\mathcal{R}_o$:
\begin{equation}
\dfrac{\mathrm{d}\tilde{s}}{\mathrm{d}r}=\dfrac{1+\Gamma_s}{4}\left[1+f_{\mathcal{R}_i}
(r)\right ]\left[1-f_{\mathcal{R}_o}(r)\right]-1,
\label{eq:entropy}
\end{equation}
where
\[
f_{a}(r)=\tanh[\zeta_s(r-a)],\quad \mathcal{R}_o=\mathcal{R}_i+\mathcal{H}_s,
\]
$\mathcal{H}_s$ is the thickness of the stably stratified layer and $\zeta_s$ the
stiffness of the transition.
As we will see below, the degree of stratification $\Gamma_s$ can be directly
related to the value of the Brunt-V\"ais\"al\"a frequency of the
stably-stratified layer.
Figure~\ref{fig:dsdr} shows the radial profile of $\mathrm{d}\tilde{s} /\mathrm{d}
r$ employed in this study. It features a stably-stratified layer between the
radii
$\mathcal{R}_i=0.84\,r_o$ and $\mathcal{R}_o=0.88\,r_o$, which corresponds to $\mathcal{H}_s=0.05$ (in units of $d$).
The degree of stratification is set to $\Gamma_s=2000$, while the
stiffness of the transition is $\zeta_s=200$. The
location and thickness of the SSL have been chosen according to the internal
models by \cite{Militzer16} and \cite{Wahl17}.
Because of the finite size of the transitions, $\mathrm{d}\tilde{s}/\mathrm{d} r$
changes sign before (after) $\mathcal{R}_i$ ($\mathcal{R}_o$), yielding a stably-stratified
layer with an effective thickness larger than $\mathcal{H}_s$.
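The profile of Eq.~(\ref{eq:entropy}) is straightforward to evaluate directly.
The sketch below uses the parameter values quoted above ($\Gamma_s=2000$,
$\zeta_s=200$, $\mathcal{R}_i=0.84\,r_o$, $\mathcal{R}_o=0.88\,r_o$), with radii in units of
$d$ (so $r_i=0.25$ and $r_o=1.25$ for $r_i/r_o=0.2$), and confirms that
$\mathrm{d}\tilde{s}/\mathrm{d}r$ plateaus at $-1$ in the convective regions and
close to $+\Gamma_s$ inside the SSL:

```python
import math

# Background entropy gradient of Eq. (entropy) with the parameters
# quoted in the text; radii are expressed in units of d = r_o - r_i.
r_i, r_o = 0.25, 1.25
Gamma_s, zeta_s = 2000.0, 200.0
R_i, R_o = 0.84 * r_o, 0.88 * r_o

def f(r, a):
    return math.tanh(zeta_s * (r - a))

def dsdr(r):
    return 0.25 * (1.0 + Gamma_s) * (1.0 + f(r, R_i)) * (1.0 - f(r, R_o)) - 1.0

rs = [r_i + (r_o - r_i) * k / 2000 for k in range(2001)]
vals = [dsdr(r) for r in rs]
# Far from the SSL the gradient is -1 (convective); inside the SSL it
# plateaus near +Gamma_s (stably stratified).
print(min(vals), max(vals))
```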
Imposing a background entropy gradient derived from stellar evolution
models is a common way of introducing a stably-stratified region in
simulations of stellar interior dynamics
\citep[e.g.][]{Browning04,Augustson16,Brun17}.
In the absence of a more realistic entropy profile coming from internal models
of Jupiter, we adopt here a parametrized background entropy gradient.
While convenient, this approach lacks a proper physical justification and simply
maintains the stratification by introducing an effective entropy or heat sink.
A more realistic distribution of entropy or heat sources in the convective
layer of Jupiter is discussed by \cite{Jones14}.
\begin{figure}
\centering
\includegraphics[width=8.3cm]{dsdr}
\caption{Background entropy gradient $\mathrm{d}
\tilde{s}/\mathrm{d} r$ as a function of the normalised radius $r/r_o$ as defined
by Eq.~(\ref{eq:entropy}) with $\mathcal{H}_s=0.05$, $\Gamma_s=2000$,
$\mathcal{R}_i/r_o=0.84$, $\mathcal{R}_o/r_o=0.88$ and $\zeta_s=200$. The two vertical solid
lines mark the boundaries of the SSL $\mathcal{R}_i$ and $\mathcal{R}_o$. The horizontal
dashed line corresponds to the neutral stratification $\mathrm{d}
\tilde{s}/\mathrm{d} r=0$, which delineates the separation between super adiabatic
and stable stratification. To highlight the values of the profile in the
convective regions, the $y$ axis has been split into logarithmic scale when
$\mathrm{d}\tilde{s}/\mathrm{d} r > 1.5$ (upper panel) and linear scale for
the values between $-1.5$ and $1.5$ (lower panel).}
\label{fig:dsdr}
\end{figure}
Once the background entropy gradient has been specified,
the reference temperature and density gradients can be expressed via
the following thermodynamic relations
\begin{equation}
\dfrac{\mathrm{d}\ln \tilde{T}}{\mathrm{d}r} = \epsilon_s
\dfrac{\mathrm{d}\tilde{s}}{\mathrm{d}r}-Di\,\tilde{\alpha} \tilde{g},
\label{eq:temp}
\end{equation}
and
\begin{equation}
\dfrac{\mathrm{d}\ln \tilde{\rho}}{\mathrm{d}r} = -Co\,\epsilon_s \tilde{\alpha} \tilde{T}
\dfrac{\mathrm{d}\tilde{s}}{\mathrm{d} r} - \dfrac{Di}{\Gamma_o} \dfrac{\tilde{\alpha}
\tilde{g}}{\tilde{\Gamma}},
\label{eq:rho}
\end{equation}
where $\tilde{\alpha}$ denotes the dimensionless expansion
coefficient, while $\tilde{\Gamma}$ is the Gr\"uneisen parameter normalised by
its value at $r_o$. The equations
(\ref{eq:temp}-\ref{eq:rho}) involve four dimensionless parameters
\begin{equation}
Di = \dfrac{\alpha_o g_o d}{c_p},\ Co=\alpha_o T_o, \
\Gamma_o, \ \epsilon_s =
\dfrac{d}{c_p}\left|\dfrac{\mathrm{d}s}{\mathrm{d} r}\right|_{r_o}\,.
\end{equation}
According to the \textit{ab initio} calculations by \cite{French12},
the heat capacity $c_p$ exhibits little variation in most of Jupiter's interior
and is hence assumed to be constant in the above equations.
$Di$ denotes the dissipation number,
which characterises the ratio between the fluid layer thickness and the
temperature scale, $Di=d/d_T$ with $d_T = c_p/\alpha_o g_o$. In the so-called
\emph{thin-layer limit} of $d\ll d_T$, $Di$ vanishes and one recovers
the Boussinesq approximation of the Navier-Stokes equations
\cite[e.g.][]{Verhoeven15}. $Co$ is the compressibility number that is equal to
unity when the fluid is an ideal gas, and is $\mathcal{O}(10^{-2})$
in liquid iron cores of terrestrial planets \citep[see][]{Anufriev05}. In the
above
equations, $\Gamma_o$ corresponds to
the Gr\"uneisen parameter at the outer boundary, while
$\epsilon_s$ characterises the departure of the
background state from the adiabat. It has to satisfy $\epsilon_s \ll 1$ to
ensure the consistency of the anelastic approximation \citep{Glatz1}.
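The structure of the inward integration of Eqs.~(\ref{eq:temp}-\ref{eq:rho})
can be sketched as follows. Constant placeholder profiles
$\tilde{\alpha}=\tilde{g}=\tilde{\Gamma}=1$ and a strongly reduced $Di=2$ are
used purely for readability (the actual reference state uses $Di=28.417$ with
the strongly varying fitted profiles), so the sketch illustrates the
integration scheme only, not the paper's reference state:

```python
import math

# Inward Euler integration of Eqs. (temp) and (rho). Constant
# placeholder profiles (alpha = g = Gruneisen = 1) and a reduced
# Di = 2 are used purely for readability -- illustrative only.
eps_s, Di, Co, Gamma_o = 1.0e-4, 2.0, 0.73, 0.4
alpha = g = Gruneisen = 1.0   # placeholder background profiles
dsdr = -1.0                   # convective background entropy gradient

r_i, r_o, n = 0.25, 1.25, 4000
dr = (r_o - r_i) / n
lnT, lnrho = 0.0, 0.0         # normalisation T = rho = 1 at r = r_o
for _ in range(n):            # integrate inward from r_o to r_i
    T = math.exp(lnT)
    dlnT = eps_s * dsdr - Di * alpha * g
    dlnrho = (-Co * eps_s * alpha * T * dsdr
              - (Di / Gamma_o) * alpha * g / Gruneisen)
    lnT -= dlnT * dr          # inward step: r -> r - dr
    lnrho -= dlnrho * dr

# Both temperature and density increase with depth.
print(math.exp(lnT), math.exp(lnrho))
```

The temperature equation decouples from the density equation, which is why the
two profiles can be advanced in a single inward sweep once $\tilde{\alpha}$,
$\tilde{g}$ and $\tilde{\Gamma}$ are prescribed.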
In standard anelastic models such as the ones employed in the benchmarks
by \cite{Jones11},
the background state is assumed to be a perfectly
adiabatic ideal gas (i.e. $\epsilon_s=0$, $Co=1$).
The background state is in this case entirely specified by two parameters only,
$Di$ and $\Gamma_o$: $Di$ is directly related to the number of
density scale heights of the reference state \citep[see][their
Eq.~2.9]{Jones09}, while $\Gamma_o$ is the inverse of the polytropic index.
At this stage, given that $\tilde{\alpha}$ and $\tilde{\Gamma}$ directly depend on
$\tilde{\rho}$ and $\tilde{T}$, the equations (\ref{eq:temp}-\ref{eq:rho}) coupled with
the additional Poisson equation for gravity form a nonlinear problem that would
necessitate an iterative solver \citep[for an example, see e.g.][]{Brun11}.
For the sake of simplicity and to ensure the future reproducibility of our
results, we adopt here a cruder approach which consists of
approximating $\tilde{\alpha}$, $\tilde{g}$ and $\tilde{\Gamma}$ by analytical functions
which fit the interior model of \cite{French12}. Appendix~\ref{sec:ref_coeffs}
lists the numerical values of the fitted
profiles of $\tilde{g}$, $\tilde{\alpha}$ and $\tilde{\Gamma}$.
A comparable approach was followed by \cite{Jones14} to define the reference
state of his Jupiter dynamo models.
Figure~\ref{fig:profs} shows a comparison between the reference state
considered in this study using $Di =28.417$, $Co=0.73$ and $\Gamma_o=0.4$
(solid lines) with the \textit{ab initio}
models from \cite{French12} (dashed lines). Most of the density and temperature
contrasts are accommodated in the outermost $10\%$ of Jupiter's interior. Global
models of rotating convection in anelastic spherical shells indicate that
a steeply-decreasing background density goes along with smaller convective
flow lengthscales \citep[e.g.][their Fig.~5]{Gastine12}. Resolving the entire
density contrast up to the $1$~bar level would yield a lengthscale
range that would become numerically prohibitive. As shown in
Fig.~\ref{fig:profs}, we hence restrict the numerical fluid domain to an
interval that spans $0.196\,R_J$ to $0.98\,R_J$, with $r_i/r_o=0.2$. Unless
explicitly stated otherwise, the conversion between dimensionless and
dimensional units is done by simple multiplication with the reference values at
$r_o=0.98\,R_J$ given in Tab.~\ref{tab:ref}.
Though not fully thermodynamically consistent, the approximated reference
state hence provides background profiles in good agreement with the
interior models while keeping the reference state definition tractable.
\begin{table*}
\centering
\caption{Estimates of the physical properties of Jupiter's interior
at two different
depths. The material properties come from the \textit{ab initio} calculations
from \cite{French12}. The magnetic field amplitude at $0.98\,R_J$ comes from
\cite{Connerney18}, while the velocity and magnetic field estimates at depth
come from the anelastic scaling laws by \cite{Yadav13a} and \cite{Gastine14a}.}
\begin{tabular}{lrrr}
\toprule
Quantity & Notation & \multicolumn{2}{c}{Value} \\
\midrule
Radius & $R_J$ & \multicolumn{2}{c}{$6.989\times 10^7$~m}\\
Lengthscale & $d=0.8\times0.98 \times R_J$ & \multicolumn{2}{c}{$5.479\times
10^7$~m}\\
Rotation rate & $\Omega$ & \multicolumn{2}{c}{$1.75\times 10^{-4}$~s$^{-1}$}
\\
\midrule
& & Value at $0.196~R_J$ & Value at $0.98~R_J$ \\
\midrule
Density & $\rho$ & $3990$~kg$/$m$^{3}$ & $84.8$~kg$/$m$^{3}$\\
Temperature & $T$ & $18000$~K & $2500$~K\\
Gravity & $g$ & $18.1$~m$/$s$^{2}$ &$27.2$~m$/$s$^{2}$ \\
Heat capacity & $c_p$ & $1.36\times 10^4$~J$/$kg$/$K &
$1.29\times
10^4$~J$/$kg$/$K\\
Thermal expansion & $\alpha$ & $5.46\times 10^{-6}$~K$^{-1}$ &
$2.58\times 10^{-4}$~K$^{-1}$ \\
Viscosity & $\nu$ & $2.66\times 10^{-7}$~m$^2/$s & $3.92\times
10^{-7}$~m$^2/$s \\
Thermal diffusivity & $\kappa$ & $2.70\times 10^{-5}$~m$^2/$s &
$1.32\times 10^{-6}$~m$^2/$s\\
Electrical conductivity & $\sigma$ & $3.05\times 10^6$~S$/$m &
$3.5\times 10^{-4}$~S$/$m \\
Magnetic diffusivity & $\lambda$ & $0.261$~m$^2/$s &
$2.3\times 10^{9}$~m$^2/$s\\
Convective velocity & $u_c$ &$\mathcal{O}(10^{-2}-10^{-1})$~m$/$s &
$1$~m$/$s
\\
Zonal velocity & $u_Z$ &$\mathcal{O}(10^{-2}-10^{-1})$~m$/$s &
$10$~m$/$s \\
Magnetic field strength & $B$ &$\mathcal{O}(10^{-2})$~T & $10^{-3}$~T \\
\bottomrule
\end{tabular}
\label{tab:ref}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{ref}
\caption{Comparison of the reference state considered in this study (solid
lines in all panels) using $Di =28.417$, $Co=0.73$ and $\Gamma_o=0.4$ with the
\textit{ab initio} models from \cite{French12} (dashed lines in all panels).
(\textit{a}) Background density profile as a function of the normalised radius
$r/R_J$. (\textit{b}) Background temperature profile as a function of the
normalised radius. (\textit{c}) Gravity profile as a function of the normalised
radius. (\textit{d}) Thermal expansion coefficient as a function of the
normalised radius. (\textit{e}) Gr\"uneisen number as a function of the
normalised radius. (\textit{f}) Electrical conductivity as a function of the
normalised radius. The reference model employed in the numerical
simulations spans from $0.196~R_J$ to $0.98~R_J$. Those boundaries are
highlighted by gray shaded areas on each panel. The conversion between
dimensional and dimensionless units is done by simple multiplication by the
reference values expressed in Tab.~\ref{tab:ref}.}
\label{fig:profs}
\end{figure*}
\subsection{Transport properties}
The \textit{ab initio} calculations by \cite{French12} suggest that the
kinematic viscosity is almost homogeneous in Jupiter's interior with
values around $\nu\simeq 3\times 10^{-7}$~m$^2$/s (see
Tab.~\ref{tab:ref}). In the following, we
therefore simply adopt a constant kinematic viscosity. The thermal
diffusivity exhibits a more complex variation: it gradually decreases
outward up to $0.9\,R_J$, above which it increases again as
additional ionic transport becomes relevant.
The overall variations are, however, limited to a factor of roughly
$30$.
Following our previous models \citep{Gastine14a}, we neglect those
variations and assume a constant thermal diffusivity $\kappa$ for
simplicity.
The electrical conductivity exhibits much steeper variations: a very abrupt
inward increase of the conductivity in the molecular envelope gives way,
around $0.9~R_J$, to much shallower variations in the metallic core.
This profile is approximated in the numerical models by the continuous
functions introduced by \cite{Gomez10}
\begin{equation}
\tilde{\lambda}=\dfrac{1}{\tilde{\sigma}},\quad \tilde{\sigma} =\left\lbrace
\begin{aligned}
1+\left(\tilde{\sigma}_m-1\right)\left(\dfrac{r-r_i}{\mathcal{H}_m}\right)^{\xi_m},
\quad
r\leq r_m, \\
\tilde{\sigma}_m\exp\left(\xi_m\dfrac{r-r_m}{\mathcal{H}_m}\dfrac{\tilde{\sigma}_m-1}{
\tilde{\sigma}_m} \right), \quad r \geq r_m, \\
\end{aligned}
\right.
\label{eq:cond}
\end{equation}
where $r_m$ is the radius that separates the two functions,
$\tilde{\sigma}_m$ denotes the dimensionless conductivity at $r_m$, $\xi_m$ the
rate of the exponential decay and $\mathcal{H}_m=r_m-r_i$ is the thickness of the
metallic region. Given the abrupt decay of electrical conductivity
in the outer layer, we choose the value at the inner boundary $r_i$ for
defining the reference magnetic diffusivity, in contrast with the other
internal properties.
Figure~\ref{fig:profs}\textit{f} shows a comparison between the electrical
conductivity profile from \cite{French12} and Eq.~(\ref{eq:cond})
with the parameters $r_m=0.9\,r_o$, $\tilde{\sigma}_m=0.07$ and $\xi_m=11$
adopted in this study. The main difference between the two profiles arises in
the metallic interior where we assume a constant electrical
conductivity, while the \textit{ab initio} calculations suggest a
linear increase with depth. While $Rm$ is limited to a few thousands in global
models, it is expected to reach $\mathcal{O}(10^5-10^6)$ in Jupiter's interior
\citep[e.g.][]{Yadav13a}.
We hence anticipate that the linear
decrease of conductivity would have a much stronger dynamical impact at the
moderate values of $Rm$ accessible to numerical dynamos than in
Jupiter. Assuming a constant electrical conductivity in the lower layer at
least guarantees that $Rm$ stays at a high level in this region.
To ensure that no spurious currents develop when the
conductivity becomes too low at the external boundary, we assume that the
electrical currents actually vanish when $\tilde{\sigma} < 10^{-5}$, i.e. when
$r\geq 0.94\,r_o$ \citep[see][]{Elstner90,Dietrich18}.
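The profile of Eq.~(\ref{eq:cond}) can be checked numerically. With the adopted
parameters ($r_m=0.9\,r_o$, $\tilde{\sigma}_m=0.07$, $\xi_m=11$, radii in units
of $d$ with $r_i=0.25$ and $r_o=1.25$), the two branches match at $r_m$ and the
conductivity indeed drops below the $10^{-5}$ cut-off near $0.94\,r_o$:

```python
import math

# Electrical conductivity profile of Eq. (cond) with the parameters
# adopted in the text; radii are expressed in units of d.
r_i, r_o = 0.25, 1.25
r_m, sigma_m, xi_m = 0.9 * r_o, 0.07, 11.0
H_m = r_m - r_i

def sigma(r):
    if r <= r_m:  # weakly varying metallic interior
        return 1.0 + (sigma_m - 1.0) * ((r - r_i) / H_m) ** xi_m
    # abrupt exponential decay in the molecular envelope
    return sigma_m * math.exp(xi_m * (r - r_m) / H_m * (sigma_m - 1.0) / sigma_m)

# Bisection for the radius where sigma crosses the 1e-5 cut-off
# below which the electrical currents are switched off.
lo, hi = r_m, r_o
while hi - lo > 1e-10:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sigma(mid) > 1e-5 else (lo, mid)
r_cross = 0.5 * (lo + hi)
print(r_cross / r_o)
```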
\subsection{MHD equations}
Now that the spherically-symmetric and static background state and material
properties have been specified, we consider the set of equations that govern the
time evolution of
the velocity $\vec{u}$, the magnetic field $\vec{B}$ and the entropy
fluctuation $s'$. The equations are non-dimensionalised using
the viscous diffusion time $d^2/\nu$ as the reference time scale, $\nu/d$ as
the velocity unit and $\sqrt{\Omega \mu_0 \lambda_i \rho_o}$ as the reference
scale for the magnetic field. The entropy fluctuations $s'$ are
non-dimensionalised using the same unit as for $\tilde{s}$, i.e. $d | \mathrm{d}
s/\mathrm{d} r |_{r_o}$. This yields the following set of non-dimensional
equations
\begin{equation}
\vec{\nabla} \cdot (\tilde{\rho} \vec{u}) = 0, \quad \vec{\nabla} \cdot \vec{B} = 0,
\label{eq:soleno}
\end{equation}
\begin{equation}
\dfrac{D \vec{u}}{D
t}+\dfrac{2}{E}\vec{e_z}\times\vec{u} = -\vec{\nabla}
\left(\dfrac{p'}{\tilde{\rho}}\right)+\dfrac{1}{E
Pm\,\tilde{\rho}}\vec{j}\times\vec{B}-\dfrac{
Ra } { Pr } \tilde{\alpha}\tilde{T} \vec{g} s'+ \dfrac{1}{\tilde{\rho}}\vec{\nabla}\cdot\tens{S},
\label{eq:NS}
\end{equation}
\begin{equation}
\dfrac{\partial \vec{B}}{\partial t} = \vec{\nabla}\times
\left(\vec{u}\times\vec{B}-\dfrac{\tilde{\lambda}}{Pm}\vec{\nabla}\times
\vec{B} \right),
\label{eq:ind}
\end{equation}
and
\begin{equation}
\tilde{\rho}\tilde{T}\left(\dfrac{D s'}{D
t}+u_r\dfrac{\mathrm{d}\tilde{s}}{\mathrm{d}r}\right) =
\dfrac{1}{Pr}\vec{\nabla}\cdot\left(\tilde{\rho}\tilde{T}\vec{\nabla} s'\right)+\dfrac{Pr
Di}{Ra}\left(\mathcal{Q}_\nu+\mathcal{Q}_\lambda\right),
\label{eq:heat}
\end{equation}
where $D/Dt=\partial/\partial t+\vec{u}\cdot\vec{\nabla}$ corresponds to the
substantial time derivative, $p'$ is the pressure fluctuation,
$\vec{j}=\vec{\nabla}\times\vec{B}$ is the current and $\tens{S}$ is the
traceless rate-of-strain tensor expressed
by
\begin{equation}
\tens{S}_{ij} = 2\tilde{\rho}\left(\tens{e}_{ij} - \dfrac{1}{3}\,\delta_{ij}\,
\vec{\nabla}\cdot\vec{u}\right),\quad \tens{e}_{ij} = \dfrac{1}{2}\left(\dfrac{\partial
u_i}{\partial x_j}+
\dfrac{\partial u_j}{\partial x_i}\right)\,.
\end{equation}
In Eq.~(\ref{eq:heat}), $\mathcal{Q}_\nu$ and $\mathcal{Q}_\lambda$ correspond
to the viscous and Ohmic heating terms defined by
\begin{equation}
\mathcal{Q}_\nu =
2\tilde{\rho}\left[\sum_{i,j}\tens{e}_{ij}\tens{e}_{ji}-\dfrac{1}{3}\left(\vec{\nabla}
\cdot\vec { u } \right)^2\right],\quad
\mathcal{Q}_\lambda = \dfrac{\tilde{\lambda}}{E\,Pm^2}\vec{j}^2\,.
\end{equation}
Since global models cannot handle the small diffusivities of astrophysical
bodies, we adopt here entropy diffusion as a primitive sub-grid-scale model of
thermal conduction \citep[see][]{Jones11}. This is a common approach in
anelastic convective models \citep[see][]{Lantz99} which becomes more
questionable when modelling the transition to stably-stratified layers.
A comparison of numerical models with temperature and entropy diffusion by
\cite{Lecoanet14},
however, yields quantitatively similar results. We hence adopt entropy diffusion
throughout the entire fluid domain.
The set of equations (\ref{eq:soleno}-\ref{eq:heat}) is controlled by four
dimensionless numbers, namely the Rayleigh number $Ra$, the Ekman number $E$,
the Prandtl number $Pr$ and the magnetic Prandtl number $Pm$
\begin{equation}
Ra=\dfrac{\alpha_o T_o g_o d^4 }{c_p \nu
\kappa}\left|\dfrac{\mathrm{d} s}{\mathrm{d} r}\right|_o,\ E=\dfrac{\nu}{\Omega
d^2},\ Pr=\dfrac{\nu}{\kappa},\ Pm=\dfrac{\nu}{\lambda_i}\,.
\end{equation}
For rapidly-rotating fluids, a relevant measure of the degree of stratification
is the ratio of the Brunt-V\"ais\"al\"a frequency to the rotation rate
\citep{Takehiro01}. This is related to the control parameter
$\Gamma_s$ via
\begin{equation}
\dfrac{N_m}{\Omega} = \max_r
\sqrt{\tilde{\alpha}(r)\tilde{T}(r)\tilde{g}(r)\dfrac{Ra\,E^2}{Pr}\Gamma_s}\,.
\label{eq:N_m}
\end{equation}
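For the simulation parameters adopted in this study ($Ra=3.7\times 10^{10}$,
$E=10^{-6}$, $Pr=0.2$, $\Gamma_s=2000$), the dimensionless prefactor in
Eq.~(\ref{eq:N_m}) evaluates to about $19.2$; the quoted $N_m/\Omega=10.4$ then
implies $\max_r \tilde{\alpha}\tilde{T}\tilde{g} \approx 0.29$, which is
plausible since the background profiles fall below their outer-boundary
normalisation at depth:

```python
import math

# Prefactor linking Gamma_s to N_m/Omega in Eq. (N_m), evaluated for
# the run parameters adopted in this study.
Ra, E, Pr, Gamma_s = 3.7e10, 1.0e-6, 0.2, 2000.0
prefactor = math.sqrt(Ra * E**2 / Pr * Gamma_s)
# N_m/Omega = prefactor * max_r sqrt(alpha*T*g); the quoted
# N_m/Omega = 10.4 then implies max_r alpha*T*g ~ 0.29.
print(prefactor)
```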
\subsection{Boundary conditions}
We assume stress-free and impenetrable boundary conditions at both boundaries:
\begin{equation}
u_r = \dfrac{\partial}{\partial r}\left(\dfrac{u_\theta}{r}\right)=
\dfrac{\partial}{\partial r}\left(\dfrac{u_\phi}{r}\right)=0,\quad r=\lbrace
r_i,r_o\rbrace\,.
\label{eq:bc_flow}
\end{equation}
Entropy is assumed to be fixed at the outer boundary, while the entropy
gradient is imposed at the inner boundary:
\begin{equation}
\left.\dfrac{\partial s'}{\partial r}\right|_{r=r_i}=0,\quad
s'(r=r_o)=0\,.
\label{eq:bc_ent}
\end{equation}
Fixing $s'$ at the outer boundary crudely reflects the entropy mixing in
the neglected outer $2\%$ of Jupiter.
The material outside the simulated spherical shell is assumed to be
electrically insulating. Hence, the magnetic field matches a potential
field at both boundaries.
\subsection{Numerical methods}
The dynamo model presented in this study has been computed using the
open-source MHD code \texttt{MagIC} \citep[freely available at
\url{https://github.com/magic-sph/magic}, see][]{Wicht02}. \texttt{MagIC} has
been tested and validated against several anelastic benchmarks \citep{Jones11}.
The set of equations (\ref{eq:soleno}-\ref{eq:heat}) complemented by the
boundary conditions (\ref{eq:bc_flow}-\ref{eq:bc_ent}) is solved in spherical
coordinates by expanding the velocity and the magnetic fields into poloidal and
toroidal potentials:
\begin{equation}
\begin{aligned}
\tilde{\rho} \vec{u} & =\vec{\nabla}\times(\vec{\nabla}\times
W\,\vec{e_r})+\vec{\nabla} \times Z\,\vec{e_r}, \\
\vec{B} & =\vec{\nabla}\times(\vec{\nabla}\times G\,\vec{e_r})+\vec{\nabla}
\times H\,\vec{e_r}\,.
\end{aligned}
\end{equation}
The quantities $W$, $Z$, $G$, $H$, $s'$ and $p'$ are expanded in spherical
harmonics up to a degree $\ell_\text{max}$ in the angular directions and in
Chebyshev polynomials up to the degree $N_c$ in the radial direction. For the
latter, a Chebyshev collocation method is employed using the Gauss-Lobatto
interval with $N_r$ grid points defined by
\[
x_k = \cos\left[\dfrac{(k-1)\pi}{N_r-1}\right],\quad k\in[1,N_r]\,.
\]
This interval, which ranges between $-1$ and $1$, is usually directly remapped
onto $[r_i,r_o]$ by a simple affine mapping
\citep[e.g.][p.~468]{Glatz84}. However, because of the clustering of grid points
in the vicinity of the boundaries, the Gauss-Lobatto grid features a minimum
grid spacing that decays with $N_r^{-2}$.
The propagation of Alfv\'en waves close to the boundaries then imposes severe
restrictions on the time step size \citep{Christensen99}. To alleviate this
limitation, we rather employ the mapping by \cite{Kosloff93} defined by
\[
r_k = \dfrac{r_o-r_i}{2}\dfrac{\arcsin(\alpha_{\text{map}} x_k)}{\arcsin
\alpha_\text{map}}+\dfrac{r_o+r_i}{2}, \quad k\in[1,N_r]\,.
\]
To ensure the spectral convergence of the collocation method, the mapping
coefficient $\alpha_\text{map}$ has to be kept under a maximum value that
depends on $N_r$
\[
\alpha_\text{map} \leq \left[\cosh\left(\dfrac{|\ln
\epsilon_m|}{N_r-1}\right)\right]^{-1},
\]
where $\epsilon_m$ is the machine precision \citep{Kosloff93}.
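The effect of the mapping and its admissibility bound can be sketched as
follows. For $N_r=361$ and double precision, the bound evaluates to
$\alpha_\text{map} \lesssim 0.995$, so the value $\alpha_\text{map}=0.994$
adopted below remains admissible while substantially relaxing the minimum grid
spacing near the boundaries (shell radii in units of $d$):

```python
import math
import sys

# Gauss-Lobatto grid, the Kosloff & Tal-Ezer (1993) mapping, and the
# admissibility bound on alpha_map for spectral accuracy.
N_r = 361
x = [math.cos((k - 1) * math.pi / (N_r - 1)) for k in range(1, N_r + 1)]

r_i, r_o = 0.25, 1.25
alpha_map = 0.994

def mapped(xk):
    """Kosloff-Tal-Ezer mapping of [-1, 1] onto [r_i, r_o]."""
    return (0.5 * (r_o - r_i) * math.asin(alpha_map * xk)
            / math.asin(alpha_map) + 0.5 * (r_o + r_i))

r = [mapped(xk) for xk in x]
r_aff = [0.5 * (r_o - r_i) * xk + 0.5 * (r_o + r_i) for xk in x]

def min_spacing(grid):
    return min(abs(grid[k + 1] - grid[k]) for k in range(len(grid) - 1))

# Admissibility bound on alpha_map at this N_r and machine precision
eps_m = sys.float_info.epsilon
alpha_max = 1.0 / math.cosh(abs(math.log(eps_m)) / (N_r - 1))
print(min_spacing(r_aff), min_spacing(r), alpha_max)
```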
The equations are advanced in time using an implicit-explicit
Crank-Nicolson Adams-Bashforth second order scheme, which handles the nonlinear
terms and the Coriolis force explicitly and the remaining terms implicitly
\citep{Glatz84}. Because of the stable stratification, the advection of the
background entropy gradient, $u_r \mathrm{d}\tilde{s} /\mathrm{d} r$, that enters
Eq.~(\ref{eq:heat}) is also handled implicitly to avoid severe time step
restrictions when the Brunt-V\"ais\"al\"a frequency exceeds the rotation rate
\citep[see][]{Brown12}. \texttt{MagIC} uses the open-source library
\texttt{SHTns} \citep[freely available at
\url{https://bitbucket.org/nschaeff/shtns}, see][]{Schaeffer13} for the
spherical harmonic transforms. A more comprehensive description of the
numerical method can be found in \cite{Glatz84}, \cite{Tilgner99} or
\cite{Christensen15}.
\begin{table*}
\centering
\caption{Definitions and estimates of dimensionless parameters in Jupiter's
interior along with values adopted in the numerical model. Estimates
for Jupiter have been obtained using the dimensional values from
Tab.~\ref{tab:ref}. The deviation from
the adiabat $\epsilon_s$ has been obtained by using a simple thermal wind
balance $\epsilon_s \sim \Omega\, u / \alpha_o g_o T_o$ \citep[see][]{Jones15}.
The estimates of the degree of stratification and the location of a possible
SSL in Jupiter come from \cite{Militzer16} and \cite{Debras19}. The
mean density $\rho_m=1300$~kg$/$m$^3$ and the mean magnetic
diffusivity $\lambda_m = 1.15$~m$^2/$s come from \cite{French12}.}
\begin{tabular}{lllrr}
\toprule
Symbol & Name & Definition & Jupiter & This model \\
\midrule
$Di$ & Dissipation & $\alpha_o g_o d/c_p$ & $29.8$ & $28.42$ \\
$Co$ & Compressibility &$\alpha_o T_o$ &$0.645$ & $0.73$ \\
$\Gamma_o$& Gr\"uneisen && $0.470$ & $0.4$ \\
$\epsilon_s$ & Adiabaticity & $d\,|\mathrm{d}s/\mathrm{d}r|_{r_o}/c_p$ &
$\mathcal{O}(10^{-6})$ & $10^{-4}$ \\
$\mathcal{R}_i$ & SSL inner radius & &$0.8-0.9\,R_J$ & $0.82\,R_J$ \\
$\mathcal{R}_o$ & SSL outer radius & &$0.88-0.93\,R_J$ & $0.86\,R_J$ \\
$N_m/\Omega$ & Degree of stratification & & $1-3$ & $10.4$ \\
\midrule
$Ra$ & Rayleigh & $\alpha_o T_o g_o d^4 | \mathrm{d} s
/\mathrm{d} r|_{r_o}/\nu\kappa c_p$ & $10^{31}$ & $3.7\times 10^{10}$ \\
$E$ & Ekman & $\nu /\Omega\,d^2$ & $10^{-18}$ & $10^{-6}$ \\
$Pr$& Prandtl & $\nu / \kappa$ & $10^{-2}-1$ & $0.2$ \\
$Pm$ & Magnetic Prandtl &$\nu/\lambda_i$ & $10^{-6}$ & $0.4$ \\
\midrule
$Rm$ & Magnetic Reynolds &$ u\,d /\lambda_i$ & $\mathcal{O}(10^6)$ &
$4.11\times 10^2$ \\
$Re$ & Reynolds &$ u\,d /\nu$ & $\mathcal{O}(10^{12})$ & $6.23\times 10^3$
\\
$Ro$ & Rossby & $ u / \Omega\,d$ & $\mathcal{O}(10^{-6})$ & $6.23\times
10^{-3}$
\\
$Re_Z$ & Zonal Reynolds &$ u_z\,d/\nu$ & $\mathcal{O}(10^{12}-10^{15})$ &
$5.79\times 10^3$ \\
$Re_c$ & Convective Reynolds &$ u_c\,d/\nu$ & $\mathcal{O}(10^{12})$ & $2.33
\times 10^3$ \\
$\Lambda$ & Elsasser & $ B^2 / \rho_m \lambda_m \mu_0 \Omega$ &
$\mathcal{O}(10^{1}-10^{2})$ & $8.52$
\\
$\overline{E_M}/\overline{E_K}$ & Energy ratio & $B^2 / \mu_0 \rho_m u^2 $
& $\mathcal{O}(10^2-10^3)$ & $3.50$ \\
$f_{ohm}$ & Ohmic fraction
&$\overline{\mathcal{D}_\lambda}/(\overline{\mathcal{D}_\lambda}
+\overline{\mathcal{D}_\nu})$ & $1$ & $0.78$\\
$f_{dip}$ & Axial-dipole fraction & $B^2_{\ell=1,m=0} (R_J)/
B^2_{\ell,m\leq
12}(R_J)$ & $0.75$ & $0.95$ \\
\bottomrule
\end{tabular}
\label{tab:params}
\end{table*}
\subsection{Control parameters}
The formation of zonal flows in global spherical models requires
a combination of strong turbulent convective motions (i.e. large Reynolds
numbers) and rapid rotation (i.e. low Rossby numbers). This
regime, frequently referred to as the \emph{quasi-geostrophic turbulent regime}
of
convection \citep[e.g.][]{Julien12a}, can only be reached at low enough Ekman
numbers, where global numerical simulations become
extremely demanding. We therefore focus here on one single global
dynamo model with $E=10^{-6}$, $Ra=3.7\times 10^{10}$,
$Pm=0.4$, $Pr=0.2$. We adopted a spatial resolution
of $N_r=361$ (with $\alpha_\text{map}=0.994$) and $\ell_\text{max}=597$ for
most of the run. For the alias-free mapping used in the horizontal directions,
this corresponds to $N_\theta=896$ latitudinal grid points and
$N_\phi=1792$ longitudinal grid points.
Spatial convergence of the solution has been tested by
increasing the angular resolution to $\ell_\text{max}=1024$ ($N_\phi=3072$)
towards the end of the run, without any noticeable change in the average
properties.
To ease the transients, the numerical model was initiated from
another dynamo simulation computed at a larger Ekman number, and mild
hyper-diffusion of the velocity and entropy fields was used over the first half
of the computation time before its gradual removal \citep[e.g.][]{Kuang99}.
In rapidly-rotating convection \citep[e.g.][]{Takehiro01,Dietrich18a,Gastine20},
the distance of penetration $\delta$ of a convective eddy of size $d_c$ is
directly related to the ratio of the Brunt-V\"ais\"al\"a frequency to the
rotation rate via
\begin{equation}
\delta = \left(\dfrac{N_m}{\Omega}\right)^{-1} d_c\,.
\label{eq:penet}
\end{equation}
Ensuring that $\delta$ remains smaller than the thickness of the SSL
$\mathcal{H}_s$ requires $N_m/\Omega >
d_c/\mathcal{H}_s$. A thinner layer would thus require a stronger stratification or
a slower rotation to remain effective. Here we adopt $\mathcal{H}_s=0.05$ and
$N_m/\Omega\simeq 10.4$ ($\Gamma_s=2000$).
The strong degree of stratification should suffice to stop even very large
eddies of half the system size, $d_c\simeq 0.5\,d$.
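This criterion can be checked numerically; the following minimal Python sketch (not part of the simulation code) uses the nondimensional values quoted above:

```python
# Penetration depth of a convective eddy into the SSL, Eq. (eq:penet):
# delta = (N_m/Omega)^{-1} * d_c. Lengths are in units of the shell gap d.
def penetration_depth(d_c, N_over_Omega):
    """Radial distance an eddy of horizontal size d_c overshoots into the stable layer."""
    return d_c / N_over_Omega

H_s = 0.05  # SSL thickness adopted in the model
# Even the largest eddies, of half the system size, are stopped:
delta = penetration_depth(d_c=0.5, N_over_Omega=10.4)
assert delta < H_s  # ~0.048 < 0.05: the stratification suffices
```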
While thinner and shallower stable layers may be compatible with gravity
observations, they would also considerably increase the numerical costs.
Increasingly fine spatial grids are required to resolve the dynamics of
thinner
layers. Moreover, the tendency to form multiple jets increases with decreasing
Ekman number. Relevant here is the effective Ekman number of the
outer layer $E_o=E (d/d_o)^2$ with
$d_o=r_o-\mathcal{R}_o$. Multiple jets may start to form below $E_o\approx 10^{-4}$
\citep[e.g.][]{Jones09,Gastine14}, a value barely reached for $d_o=0.12$ and
$E=10^{-6}$.
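A quick evaluation of this effective Ekman number with the values just quoted (a one-line sketch, not from the paper's code) confirms that the model sits at the edge of the multiple-jet regime:

```python
# Effective Ekman number of the outer convective layer, E_o = E * (d/d_o)**2,
# with d_o = r_o - R_o the outer-layer depth in units of the gap d.
E, d_o = 1e-6, 0.12
E_o = E / d_o**2
# E_o ~ 6.9e-5, just below the ~1e-4 threshold where multiple jets start to form
assert E_o < 1e-4
```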
The upper parts of Tab.~\ref{tab:params} summarises our control parameters as
well as the corresponding values for Jupiter.
Because of its significant numerical cost, the dynamo model has been integrated
for a bit more than $0.13$ magnetic diffusion time (or $8400$ rotation periods),
which required roughly $3$ million core hours on Intel Haswell CPUs.
\subsection{Diagnostics}
We analyse the numerical solution by defining several diagnostic properties.
In the following, triangular brackets denote volume
averaging, square brackets azimuthal averaging and overlines
time averaging
\[
\langle f\rangle = \dfrac{1}{V}\int_V f\,\mathrm{d}V,\ [
f ] = \dfrac{1}{2\pi}\int_0^{2\pi} f\,\mathrm{d}\phi,\
\bar{f} = \dfrac{1}{\tau}\int_{t_o}^{t_o+\tau} f\,\mathrm{d}
t\,,
\]
where $\tau$ is the averaging interval and $V$ is the spherical shell volume.
Since the background state strongly varies with radius, it is also convenient
to explore averages over a spherical surface
\[
\| f \|(r,t)=\int_{0}^{2\pi}\int_{0}^{\pi} |f|
\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\phi\,.
\]
The typical convective flow amplitude is measured by the Reynolds number $Re$,
the Rossby number $Ro$ or the magnetic Reynolds number $Rm$ defined by
\begin{equation}
Re = \sqrt{\overline{\langle \vec{u}^2 \rangle}},\quad Ro = Re\,E,\quad Rm
=\overline{\dfrac{1}{V}\int_{r_i}^{r_o} \dfrac{\sqrt{\| \vec{u}^2
\|}}{\tilde{\lambda}} r^2 \mathrm{d} r}\,.
\label{eq:vel_measure}
\end{equation}
To better separate the different flow components, we define two
additional measures based on the zonal flow velocity, $Re_z$, and on the
convective flow velocity, $Re_c$:
\begin{equation}
Re_z = \sqrt{\overline{ \langle [u_\phi]^2 \rangle}},\quad Re_c =
\sqrt{Re^2-Re_z^2}\,.
\end{equation}
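The decomposition can be cross-checked against the tabulated flow amplitudes (a minimal sketch; the inputs are the rounded values from Tab.~\ref{tab:params}):

```python
import math

# Convective Reynolds number from the total and zonal ones: Re_c = sqrt(Re^2 - Re_z^2).
Re, Re_z = 6.23e3, 5.79e3   # rounded values from the table of parameters
Re_c = math.sqrt(Re**2 - Re_z**2)
# Re_c ~ 2.3e3, consistent with the tabulated 2.33e3 given the independent rounding
```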
The magnetic field amplitude is characterised by the Elsasser number
\begin{equation}
\Lambda = \left\langle \dfrac{B^2}{\tilde{\rho}\tilde{\lambda}}\right\rangle\,.
\end{equation}
The geometry of the surface magnetic field is expressed by its
axial dipolar
fraction $f_{dip}$, which is defined as the ratio of the energy of the
axisymmetric dipole component to the magnetic energy in the spherical harmonic
degrees $\ell \leq 12$ at $r_o$ \citep{Christensen06}.
The numerical solution is also examined in terms of its power budget.
Taking the inner product of the Navier-Stokes equation (\ref{eq:NS})
by $\vec{u}$ and the
induction equation (\ref{eq:ind}) by $\vec{B}$ yields
\begin{equation}
\dfrac{d}{dt}\left(E_K +E_M\right) =
\mathcal{P}-\mathcal{D}_\nu-\mathcal{D}_\lambda\,.
\label{eq:power_bal}
\end{equation}
In the above equation, $E_K$ and $E_M$ denote the mean kinetic and magnetic
energy densities
\[
E_K(t) = \dfrac{1}{2}\left \langle \tilde{\rho} \vec{u}^2 \right \rangle,
\quad
E_M(t) = \dfrac{1}{2}\dfrac{1}{E\,Pm}\left \langle \vec{B}^2 \right \rangle,
\]
$\mathcal{P}$ is the buoyancy power density
\[
\mathcal{P}(t)= \dfrac{Ra E}{Pr} \left \langle
\tilde{\alpha}\tilde{T} \tilde{g} s' u_r \right \rangle\,,
\]
and $\mathcal{D}_\nu$ and $\mathcal{D}_\lambda$ the power dissipated by viscous
and Ohmic effects
\[
\mathcal{D}_\nu(t) = \langle \tens{S}^2 \rangle,\quad
\mathcal{D}_\lambda(t) = \dfrac{1}{E Pm^2}\langle \tilde{\lambda} \vec{j}^2
\rangle\,.
\]
Once a statistically-steady state has been reached, time averaging
Eq.~(\ref{eq:power_bal}) yields a balance between buoyancy input
power and heat losses by Ohmic and viscous dissipations
\begin{equation}
\overline{\mathcal{P}}-\overline{\mathcal{D}_\nu}-\overline{\mathcal{D}_\lambda}
= \overline{\mathcal{P}}
-\dfrac{1}{f_{ohm}}\overline{\mathcal{D}_\lambda}= 0\,,
\end{equation}
where $f_{ohm}$ quantifies the fraction of heat dissipated ohmically.
The residual in the above equation
can serve as a good indicator of the time and spatial convergence of a
numerical solution \citep[e.g.][their Fig.~2]{King12}.
Here, this identity is obtained to a high degree of fidelity with
$|\overline{\mathcal{P}}-\overline{\mathcal{D}_\nu}-\overline{\mathcal{D}
_\lambda}| / \overline{\mathcal{P}} < 0.3\%$.
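The Ohmic fraction and the convergence diagnostic can be sketched as follows (illustrative numbers only, chosen to mimic the quoted $f_{ohm}\simeq 0.78$ and residual below $0.3\%$; they are not simulation output):

```python
# Time-averaged power budget: P = D_nu + D_lambda within the residual,
# with f_ohm = D_lambda / (D_nu + D_lambda).
def ohmic_fraction(D_lam, D_nu):
    return D_lam / (D_lam + D_nu)

def relative_residual(P, D_nu, D_lam):
    """|P - D_nu - D_lambda| / P, the convergence diagnostic used in the text."""
    return abs(P - D_nu - D_lam) / P

# Hypothetical time-averaged values mimicking the model:
P, D_nu, D_lam = 1.000, 0.221, 0.781
assert relative_residual(P, D_nu, D_lam) < 3e-3   # budget closed to < 0.3%
f_ohm = ohmic_fraction(D_lam, D_nu)               # ~0.78
```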
Table~\ref{tab:params} summarises the control parameters and the main
diagnostics of the dynamo model presented here along with the expected values
for Jupiter. For comparison we note that the Jovian dynamo model by
\cite{Gastine14a} was computed using $Pm=0.6$ and $E=10^{-5}$. It produced a
relatively weak-field solution with $\overline{E_M} /\overline{E_K} \simeq
0.1$ and $f_{ohm}\simeq 0.15$. In contrast, by employing much larger
magnetic Prandtl number ($Pm \geq 3$), several dynamo simulations by
\cite{Jones14} and \cite{Duarte18} yielded a stronger magnetic field with
$\overline{E_M}/\overline{E_K} \simeq 3$. Adopting a significantly lower Ekman
number enables us to reach a comparable energy fraction
$\overline{E_M}/\overline{E_K} \simeq 3.5$ while using a magnetic
Prandtl number almost one order of magnitude smaller. This yields an Ohmic
fraction $f_{ohm} \simeq 0.8$, much closer to the value expected for
Jupiter where Ohmic dissipation dominates by far because of the
small magnetic Prandtl number.
Using $E=10^{-6}$ also ensures that $Re \gg 1 $ and yet $Ro
\ll 1$, two prerequisites for developing turbulent quasi-geostrophic convection
conducive to sustaining strong zonal jets.
\section{Results}
\label{sec:results}
\subsection{Convective flow and magnetic field morphology}
\begin{figure*}
\centering
\includegraphics[width=.95\textwidth]{snap_light}
\caption{ 3-D renderings of the radial velocity $u_r$ (\textit{a}), of the
azimuthal velocity $u_\phi$ (\textit{b}), of the
radial component of the magnetic field $B_r$ (\textit{c}) and of the azimuthal
component of the magnetic field $B_\phi$ (\textit{d}). The inner sphere in
panels (\textit{a}) and (\textit{d}) is located very close to the inner boundary
at $r=r_i+0.01$, while in panel (\textit{b}) and (\textit{c}) it depicts the
lower boundary of the SSL.
The intermediate radial cut that spans $30^\circ$ in longitude
in panels (\textit{a})-(\textit{c}) and $90^\circ$ in panel (\textit{d}) is
located close to the upper boundary of the stably-stratified layer at
$r=0.904\,r_o$. The external radial cut corresponds to $r=0.992\,r_o$ in panel
(\textit{a}) and to the surface $r_o$ in the other panels.}
\label{fig:snap}
\end{figure*}
We start by examining the typical convective flow and magnetic field
produced by the numerical dynamo model. Figure~\ref{fig:snap} shows a selected
snapshot of the radial and azimuthal components of the velocity and magnetic
fields.
An immediate effect of the strong stratification $N_m/\Omega \simeq 10$ is to
significantly inhibit the convective motions between $\mathcal{R}_i$ and $\mathcal{R}_o$.
The equatorial and meridional cuts of the radial velocity
(Fig.~\ref{fig:snap}\textit{a}) clearly show that the SSL forms a
strong dynamical
barrier between two different convective regions. In the deep interior, the
convective pattern takes the form of radially-elongated quasi-geostrophic
sheets that
span most of the metallic core. This is a typical flow pattern commonly
observed in geodynamo models when the magnetic energy
exceeds the kinetic energy \citep[e.g.][their Fig.~2]{Yadav16}.
In contrast, the outer layer is dominated by small-scale turbulent features.
Because of the rapid decrease of density there, the convective flow becomes
smaller-scale and more turbulent towards the surface. The azimuthal
flows are dominated by a strong prograde equatorial jet which penetrates down
to
$\mathcal{R}_o$ (Fig.~\ref{fig:snap}\textit{b}) but are then effectively quenched in
the stable layer. Flanking weaker jets of alternating direction appear up to
about $\pm40^\circ$ in latitude. They become somewhat more pronounced with depth
and show more clearly at $r=0.9\,r_o$ (radial cut in Fig.~\ref{fig:snap}\textit{b}).
The magnetic field is predominantly produced in the metallic region below
$\mathcal{R}_i$ where both the
conductivity and the convective flow amplitude are sufficient to sustain dynamo
action (Fig.~\ref{fig:snap}\textit{c}-\textit{d}).
The magnetic Reynolds number (Eq.~\ref{eq:vel_measure}) reaches values
of more than $600$ in this region.
The magnetic field at the top of the inner convective region features a
dominant axisymmetric dipole accompanied by intense localised flux
patches (inner radial cut in Fig.~\ref{fig:snap}\textit{c}).
Because of the strong inhibition of the flow motions between $\mathcal{R}_i$ and
$\mathcal{R}_o$, there is little to no dynamo action happening in the SSL. Instead,
the SSL filters out the faster varying field components via a magnetic
skin effect as will be discussed in the next section
\citep[e.g.][]{Christensen06a,Gastine20}. Since smaller scale
contributions vary on shorter time scales, the remaining field at $\mathcal{R}_o$ is
of much larger scale than at $\mathcal{R}_i$.
Because of the abrupt drop of electrical conductivity in the molecular
envelope, dynamo action in the outer layer is very inefficient: there, the
magnetic Reynolds number $Rm$ mostly remains smaller than one in our
simulation.
Consequently, the locally-induced poloidal field remains practically
negligible and the
magnetic field decays like a potential field with radius \citep{Wicht19}.
The surface magnetic field is dominated by a strong axial dipole combined with
large scale non-axisymmetric flux patches.
In the fully-convective models by \cite{Gastine14a},
the prograde equatorial jet shears the upper layers of the metallic
region to produce strong azimuthal magnetic bands \citep{Wicht19}. Such
structures are not observed here (Fig.~\ref{fig:snap}\textit{d}), likely because
the zonal motions are hampered in the stable layer.
\subsection{Energetics}
\begin{figure*}
\centering
\includegraphics[width=16cm]{rad_profiles}
\caption{(\textit{a}) Time-averaged radial profiles of magnetic and kinetic
energies. (\textit{b}) Time-averaged radial profiles of Ohmic and viscous
dissipation and buoyancy power. The shaded areas correspond
to one standard deviation across the mean. The vertical lines mark the
location of the stably-stratified layer between $\mathcal{R}_i$ and $\mathcal{R}_o$
(see Fig.~\ref{fig:dsdr}).}
\label{fig:radprofs}
\end{figure*}
For a more quantitative assessment, we now examine the power balance.
Figure~\ref{fig:radprofs} shows the time-averaged radial profiles of magnetic
and kinetic energy as well as the different source and sinks which enter the
power balance (\ref{eq:power_bal}). In the metallic interior, the total
magnetic energy exceeds the
kinetic energy by one order of magnitude and zonal winds (toroidal axisymmetric)
contribute only about $10$\% of the kinetic energy
(Fig.~\ref{fig:radprofs}\textit{a}).
In the SSL, the total kinetic energy drops with radius by about a factor
of two.
Non-axisymmetric contributions drop more rapidly, but this is partly
compensated by an increase in the zonal kinetic energy due to
penetration from the upper convective region. In
the external convective layer, fast zonal winds clearly dominate and the
kinetic energy reaches its peak value at about $0.98\,r_o$.
Because of the skin-effect in the SSL and the decay
of conductivity, the magnetic field becomes more axisymmetric and poloidal
towards the surface.
While kinetic and magnetic energy reach a comparable level at the top of
the SSL, the former exceeds the latter by up to a factor $50$ in the outer
convective layer.
The sign changes of the buoyancy power (Fig.~\ref{fig:radprofs}\textit{b}) mark
the actual separation between the convective and the stably-stratified layers.
In the convective regions, the eddies which carry a positive entropy
fluctuation compared to their surroundings ($s'>0$) rise outwards, while the
ones with $s'<0$ sink inwards, yielding a positive correlation between $u_r$ and
$s'$ and hence a positive buoyancy power. The opposite happens when a
convective feature overshoots in an adjacent sub-adiabatic region. A rising
parcel of fluid with $u_r>0$ now carries a perturbation $s'<0$ (and hence
$\mathcal{P} < 0$) until it is homogenised with its surroundings by heat
conduction ($\mathcal P \simeq 0$). Because of the finite
stiffness of the background entropy gradient (Fig.~\ref{fig:dsdr}),
the actual thickness of the region with $\mathcal{P}<0$ exceeds the interval
$[\mathcal{R}_i, \mathcal{R}_o]$ delineated by vertical lines in Fig.~\ref{fig:radprofs}.
The measure of the vertical extent of the
regions with $\mathcal{P}<0$ actually provides a good estimate of the distance of
penetration of the convective eddies into a stably-stratified layer
\citep[e.g.][]{Browning04,Takehiro18a,Gastine20}.
In line with the partitioning between magnetic and kinetic energies, the heat
losses are dominated by Ohmic heating in the metallic region, while viscous
heating takes over when the electrical conductivity drops.
To sum up, Fig.~\ref{fig:radprofs} highlights the separation between two
different dynamical regions: an internal metallic region, which harbours the
production of a strong magnetic field, and an external envelope where most of
the kinetic energy is pumped into zonal motions.
\begin{figure*}
\centering
\includegraphics[width=16cm]{ekin_r}
\includegraphics[width=16cm]{emag_r}
\caption{Time-averaged 2-D spectra in $(r/r_o,\ell)$ plane for several kinetic
(upper panels) and magnetic (lower panels) contributions: (\textit{a})
non-axisymmetric kinetic energy, (\textit{b}) axisymmetric kinetic
energy, (\textit{c}) poloidal magnetic energy, and (\textit{d}) toroidal
magnetic energy. The thick dashed line in panel (\textit{a}) marks the location
of the maxima of non-axisymmetric kinetic energy. The solid lines mark the
location of the stably-stratified layer between $\mathcal{R}_i$ and $\mathcal{R}_o$
(see Fig.~\ref{fig:dsdr}). Because of the different dynamics, the colorbars are
different for each panel.}
\label{fig:spec_l_r}
\end{figure*}
To better characterise the dynamics in the different layers, we now examine the
spectral energy distributions in the $(r/r_o,\ell)$ plane.
Figure~\ref{fig:spec_l_r} illustrates 2-D spectra of kinetic (upper panels) and
magnetic (lower panels) energy contributions. For a
more insightful analysis, the kinetic energy has been split into
non-axisymmetric (Fig.~\ref{fig:spec_l_r}\textit{a}) and axisymmetric
(Fig.~\ref{fig:spec_l_r}\textit{b}) motions, while the magnetic spectra
have been separated into poloidal (Fig.~\ref{fig:spec_l_r}\textit{c}) and
toroidal (Fig.~\ref{fig:spec_l_r}\textit{d}) contributions.
We introduce the local peak of the
non-axisymmetric energy $\hat{\ell}$ and the corresponding convective flow
lengthscale $d_c$ defined by
\begin{equation}
\hat{\ell}(r) = \argmax_\ell E_K^{\text{nas}}, \quad
d_c(r)=\dfrac{\pi\,r}{\hat{\ell}},
\label{eq:hat}
\end{equation}
where $E_K^{\text{nas}}$ denotes the non-axisymmetric energy
\citep[e.g.][]{Schwaiger19}.
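The definition (\ref{eq:hat}) amounts to a simple argmax over the spectra; the following Python sketch illustrates it on a toy spectrum (the array layout of $E_K^{\text{nas}}$ is an assumption for illustration, not the code used for the simulation):

```python
import numpy as np

# Dominant convective scale from the non-axisymmetric kinetic-energy spectrum:
# lhat(r) = argmax_ell E_K^nas(r, ell), and d_c(r) = pi * r / lhat(r).
def convective_scale(E_nas, ell, radii):
    lhat = ell[np.argmax(E_nas, axis=1)]   # peak harmonic degree at each radius
    return lhat, np.pi * radii / lhat      # corresponding flow lengthscale

# Toy spectrum peaking at ell = 20 at every radius:
ell = np.arange(1, 129)
radii = np.linspace(0.2, 1.0, 4)
E_nas = radii[:, None] * np.exp(-0.5 * ((ell[None, :] - 20) / 5.0) ** 2)
lhat, d_c = convective_scale(E_nas, ell, radii)
```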
The kinetic energy spectra clearly differ for the three regions.
In the external layers ($r>\mathcal{R}_o$), the convective
lengthscale rapidly decreases outwards, reaching $\hat{\ell}\sim 100$, i.e.
$d_c\approx 0.03\,r_o$. The scale of the zonal flows remains roughly an order
of magnitude larger with $\ell\le20$.
In the metallic core
($r<\mathcal{R}_i$), $\hat{\ell}$ decreases only mildly from about $15$ at $\mathcal{R}_i$ to
about $4$ at $r_i$.
In the physical space this corresponds to the large
scale convective sheets visible in Fig.~\ref{fig:snap}\textit{a}. In between
those two regions,
the stably-stratified layer significantly reduces the amplitude of the
convective motions. The inhibition of the convective flow depends on
the size of the convective eddies: the smaller the lengthscale, the stronger
the attenuation of the kinetic energy. This phenomenon can be understood when
considering the distance of penetration $\delta$ of a turbulent feature of
horizontal size $d_c$ into a stably stratified layer (Eq.~\ref{eq:penet}).
Approximating the horizontal scale $d_c$ by $\pi \mathcal{R}_i/\ell$ then yields
\begin{equation}
\delta_\ell \sim \pi\,\mathcal{R}_i\left(\dfrac{N_m
\ell}{\Omega}\right)^{-1}\,.
\end{equation}
The penetration distance $\delta_\ell$ is hence inversely proportional to the
degree $\ell$, explaining the stronger damping of small convective scales
\citep{Dietrich18a}.
At the dominant lengthscale of convection $\hat{\ell}\simeq 20$ at the edges of
the SSL, the above scaling yields $\delta_{\hat{\ell}} \simeq 0.015\,d$, in
good agreement with the actual thickness of the overshoot regions characterised
by $\mathcal{P}<0$ (Fig.~\ref{fig:radprofs}\textit{b}).
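This estimate follows directly from Eq.~(\ref{eq:penet}) with $d_c = \pi\mathcal{R}_i/\hat{\ell}$ (a minimal numerical check; taking $\mathcal{R}_i$ of order the gap width $d$ is a rough assumption):

```python
import math

# Penetration distance at the dominant convective scale:
# delta = (N_m/Omega)^{-1} * d_c with d_c = pi * R_i / ell_hat.
R_i_over_d = 1.0        # assumption: R_i comparable to the shell gap d
ell_hat = 20            # dominant degree at the edges of the SSL
N_over_Omega = 10.4
delta = (math.pi * R_i_over_d / ell_hat) / N_over_Omega
# delta ~ 0.015 d, matching the overshoot thickness quoted in the text
```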
The poloidal magnetic energy is dominated by its dipolar component throughout
the entire volume. In the metallic interior, it features a secondary peak
around $\ell \simeq 10-20$ which roughly follows the variations of the peak of
the non-axisymmetric kinetic energy $\hat{\ell}(r)$ \citep{Aubert17}.
The toroidal field reaches its maximum amplitude in the upper half of the
metallic core ($0.6 \leq r \leq \mathcal{R}_i$) and also peaks at comparable scales.
Beyond $\mathcal{R}_i$, the magnetic energy decreases up to the surface $r_o$ and is
significantly more
attenuated at small scales. This phenomenon arises because of two distinct
scale-dependent physical processes:
\begin{enumerate}
\item Within the
SSL, the electrical conductivity is almost as large as in the metallic core but
the convective motions are significantly hampered. A first order approximation
assumes that the SSL behaves as a
stagnant layer of size $\mathcal{H}_s$ with a constant electrical conductivity. Such
a layer will attenuate the poloidal magnetic
energy by \emph{skin effect} \citep[e.g.][]{Christensen06a} by a factor
\begin{equation}
\ln\dfrac{E_{M}^{P,\ell} (\mathcal{R}_o)}{E_{M}^{P,\ell}(\mathcal{R}_i)} \sim
-\dfrac{\mathcal{H}_s}{\delta_\ell^{\text{SK}}},
\label{eq:skin}
\end{equation}
where $E_{M}^{P,\ell}$ is the poloidal magnetic energy at the
harmonic degree $\ell$ and
$\delta_\ell^{\text{SK}}$ is the skin depth associated with a
feature of scale $d_c$ expressed by \citep[see][]{Gastine20}
\begin{equation}
\delta_\ell^{\text{SK}} \sim \left(\dfrac{d_c}{Rm}\right)^{1/2} \sim
\left(\dfrac{\pi\,\mathcal{R}_i}{\ell\,Rm}\right)^{1/2}\,.
\end{equation}
The skin effect (\ref{eq:skin}) thus increases with $\ell$.
\item Beyond $\mathcal{R}_o$, the electrical conductivity decreases exponentially and
the local dynamo effect is rather inefficient. The magnetic field is
dominated by the field produced in the deeper dynamo region and approaches a
potential field \citep[e.g.][]{Wicht19}.
The characteristic radial dependence of a potential field in the outer
convective region predicts:
\begin{equation}
\dfrac{E^{P,\ell}_{M}(r_o)}{E^{P,\ell}_M(\mathcal{R}_o)} \simeq
\left(\dfrac{\mathcal{R}_o}{r_o}\right)^{2\ell+4}\,.
\label{eq:vacuum}
\end{equation}
\end{enumerate}
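Both attenuation factors can be evaluated together in a short Python sketch (illustrative only; $\mathcal{H}_s$ and $\mathcal{R}_i$ are rough values in units of the gap $d$, $\mathcal{R}_o/r_o$ and $Rm$ are taken from the text):

```python
import math

# (i) Skin-effect attenuation across the SSL, Eq. (eq:skin):
#     E(R_o)/E(R_i) ~ exp(-H_s / delta_SK), delta_SK ~ sqrt(pi * R_i / (ell * Rm));
# (ii) potential-field decay above R_o, Eq. (eq:vacuum):
#     E(r_o)/E(R_o) = (R_o/r_o)**(2*ell + 4).
def skin_attenuation(ell, H_s=0.05, R_i=1.0, Rm=400.0):
    delta_sk = math.sqrt(math.pi * R_i / (ell * Rm))
    return math.exp(-H_s / delta_sk)

def potential_attenuation(ell, R_o_over_ro=0.88):
    return R_o_over_ro ** (2 * ell + 4)

# Both factors decrease with ell: small scales are damped the most.
skin = [skin_attenuation(l) for l in range(1, 13)]
pot = [potential_attenuation(l) for l in range(1, 13)]
```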
The attenuation factors (\ref{eq:skin}) and (\ref{eq:vacuum})
should provide idealised upper bounds of the poloidal magnetic
energy damping since (\textit{i}) the convective flows can penetrate into the
SSL and
(\textit{ii}) the electrical conductivity beyond $\mathcal{R}_o$ still allows for
some local dynamo action. This local action is responsible for the rise in
magnetic energy around $\mathcal{R}_o$ at intermediate to small scales corresponding
to $\ell>40$ (Fig.~\ref{fig:spec_l_r}\textit{c}-\textit{d}).
\begin{figure}
\centering
\includegraphics[width=8.3cm]{epol_damping}
\caption{Time-averaged poloidal magnetic energy spectra at different
depths up to $\ell=30$. The solid (dash-dotted)
lines correspond to the poloidal magnetic spectra above (below)
the SSL, the circles to the downward
continuation of the surface field (Eq.~\ref{eq:vacuum}) and the dashed line to
the field at $\mathcal{R}_o$ upward-continued from $\mathcal{R}_i$ using the
skin-depth approximation
(Eq.~\ref{eq:skin}). The shaded regions correspond to one standard deviation
across the time-averaged values.}
\label{fig:specs}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.3cm]{spectra}
\caption{Normalised time-averaged magnetic spectra at the
surface of the numerical model as well as the potential field upward
continuation of the poloidal field at $\mathcal{R}_i$
along with the Jovian magnetic field model JRM09
by \cite{Connerney18} for the first $15$ harmonic degrees. The shaded area
corresponds to one standard deviation across the time-averaged values.}
\label{fig:compJuno}
\end{figure}
Figure~\ref{fig:specs} compares spectra of the poloidal
magnetic energy at different depths (solid lines)
to the predictions coming from Eq.~(\ref{eq:skin}) (dashed lines) and
Eq.~(\ref{eq:vacuum}) (circles).
Beyond $r=0.9\,r_o$, dynamo action is negligible and the poloidal
energy
spectrum closely follows the downward continuation of the surface field.
At the top of the stable layer ($\mathcal{R}_o=0.88\,r_o$), however, the energy
of the downward-continued field is noticeably smaller than the actual poloidal
energy. The reason is the dynamo action just above or in the top part of the
stable layer, which is also apparent in Fig.~\ref{fig:spec_l_r}\textit{c} and
\textit{d}. Here the zonal winds induce toroidal field which is then
converted to poloidal field by the non-axisymmetric flow components
\citep{Wicht19,Tsang20}.
Using the poloidal field spectrum at $\mathcal{R}_i$ combined
with the attenuation factor from the skin effect (\ref{eq:skin}) captures the
magnetic energy spectrum at $\mathcal{R}_o$ reasonably well.
Large scale contributions ($\ell < 15$) are overestimated, while smaller
scale contributions are slightly underestimated. The latter could be
explained by the local dynamo action around $\mathcal{R}_o$, which intensifies
the field and counteracts the skin effect.
The weaker large-scale field, on the other hand, indicates that the
locally-induced field opposes the field produced below
the stable layer. Dipole and octupole are less affected and therefore stick out
above the stable layer. Another reason for the discrepancy could be that
approximating the SSL by an electrically-conducting stagnant layer is
too simplistic despite the large degree
of stratification considered here ($N_m/\Omega \simeq 10$).
In the deep
interior ($r\leq \mathcal{R}_i$), the octupole is in line with other spherical
harmonics and the scales around $\ell\simeq 10$ nearly reach half the amplitude
of the dipole contributions (see also
Fig.~\ref{fig:spec_l_r}\textit{c}-\textit{d}).
\subsection{Comparison with JRM09}
\begin{figure}
\centering
\includegraphics[width=8.3cm]{Br_multi_depth}
\caption{Hammer projection of the radial component of the magnetic field at
the surface (\textit{a}), at the upper edge of the SSL $r=\mathcal{R}_o$
(\textit{b}) and at the lower edge of the SSL $r=\mathcal{R}_i$ (\textit{c}).}
\label{fig:br}
\end{figure}
Figure~\ref{fig:compJuno} compares the normalised surface magnetic
spectra in our simulation with the Jovian magnetic field model JRM09 by
\cite{Connerney18}. The relative energy contained in the non-dipolar components
is roughly one order of magnitude lower in the simulation than in the JRM09
model. The surface
magnetic field produced by our dynamo model is thus too dipolar, as is
illustrated by Fig.~\ref{fig:br}, which shows the radial component of the
magnetic field at different depths for a snapshot of our simulation. The strong
difference between northern and southern field in JRM09 (see
Fig.~\ref{fig:BrJup}) is not present in the simulation. There are some strong
localised flux patches in our model, but they are
more evenly distributed and do not stand out as clearly as in JRM09. From the
many small scale patches at the bottom of the SSL (panel \textit{c}), the
strongest can still be identified at the top of the SSL (panel \textit{b}) and
are the origin of the larger scale patches at the outer boundary (panel
\textit{a}).
Figure~\ref{fig:compJuno} also shows an upward continuation of the
poloidal magnetic field at the base of the stable layer
$\mathcal{R}_i$ using Eq.~(\ref{eq:vacuum}).
This potential-field approximation provides a theoretical estimate of the
end-member attenuation if both the skin effect and the dynamo action above the
stable layer were weak. The reasonable agreement of this approximation with the
JRM09 spectrum could indicate that both effects are too strong in our
simulation.
\subsection{Force balances}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{forces_jup}
\caption{Time-averaged force balance spectra as a function of the harmonic
degree integrated over the metallic core (\textit{a}) and over the
molecular envelope (\textit{b}). The shaded areas correspond
to one standard deviation across the mean. The vertical segments mark the
location of the so-called ``cross-over lengthscales'' where three forces are
in balance \citep[see][]{Aubert17,Schwaiger21}.}
\label{fig:forces}
\end{figure*}
We now turn to examining the forces that govern the numerical dynamo model. To
do so, we resort to the analysis of the spectral decomposition of forces
introduced by \cite{Aubert17} and \cite{Schwaiger19}. Each force vector
$\vec{f}$ is expanded in vector spherical harmonics
\begin{equation}
\vec{f}(r,\theta,\phi,t) = \sum_{\ell=0}^{\ell_{\text{max}}}
\mathcal{Q}_\ell^m
Y_\ell^m
\vec{e_r} + \mathcal{S}_\ell^m \,r \vec{\nabla} Y_\ell^m + \mathcal{T}_\ell^m
\vec{r}\times \vec{\nabla} Y_\ell^m,
\end{equation}
where $\vec{r}$ is the vector along the radial direction and
$Y_\ell^m(\theta,\phi)$ is the spherical harmonic of degree $\ell$ and order
$m$. The energy of the vector $\vec{f}$ is then retrieved by the following
identity
\[
\begin{aligned}
F^2 & = \int_{V} \vec{f}^2 \mathrm{d}V, \\
& = 2 \int_{r_i}^{r_o} \sum_{\ell=0}^{\ell_\text{max}}
\sideset{}{'}\sum_{m=0}^{\ell} |\mathcal{Q}_\ell^m|^2 +
\ell(\ell+1)\left(|\mathcal{S}_\ell^m|^2+|\mathcal{T}_\ell^m|^2\right)\,
r^2 \mathrm {d } r\,,
\end{aligned}
\]
where the prime on the summation over the order $m$ indicates that the $m=0$
coefficient is multiplied by one half. To examine the spectral
distribution of the forces, the above expression is rearranged as
follows:
\begin{equation}
F^2 = \sum_\ell \mathcal{F}_\ell^2(r_i,r_o),
\end{equation}
where
\begin{equation}
\mathcal{F}_\ell^2(r_b,r_t) = 2 \int_{r_b}^{r_t}
\sideset{}{'}\sum_{m=0}^\ell
|\mathcal{Q}_\ell^m|^2
+
\ell(\ell+1)\left(|\mathcal{S}_\ell^m|^2+|\mathcal{T}_\ell^m|^2\right)\, r^2
\mathrm{d}r\,.
\end{equation}
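A minimal numerical sketch of this decomposition is given below (the coefficient-array layout $[r,\ell,m]$ and the quadrature are assumptions for illustration; in practice entries with $m>\ell$ vanish):

```python
import numpy as np

# Energy spectrum of a force field from its vector-spherical-harmonic
# coefficients: F_ell^2 = 2 * int_r sum_m' [|Q|^2 + ell(ell+1)(|S|^2+|T|^2)] r^2 dr,
# where the primed sum multiplies the m = 0 term by one half.
def force_spectrum(Q, S, T, r):
    n_ell = Q.shape[1]
    ell = np.arange(n_ell)[None, :, None]
    w = np.ones(Q.shape[2])
    w[0] = 0.5                                # half weight for the m = 0 term
    e = (np.abs(Q)**2 + ell * (ell + 1) * (np.abs(S)**2 + np.abs(T)**2)) * w
    dr = np.gradient(r)                       # simple quadrature weights in radius
    return 2.0 * np.einsum('rlm,r->l', e, r**2 * dr)   # one entry per degree ell

# Toy coefficients (entries with m > ell would be zero in practice):
rng = np.random.default_rng(0)
r = np.linspace(0.2, 1.0, 8)
Q, S, T = (rng.standard_normal((8, 9, 9)) for _ in range(3))
F2_ell = force_spectrum(Q, S, T, r)
```

Summing `F2_ell` over all degrees recovers the total $F^2$, so the spectra in Fig.~\ref{fig:forces} partition the squared force amplitude by lengthscale.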
We adapt the bounds of the radial integration $r_b$ and $r_t$ to
either focus on the metallic core or on the convective envelope.
Figure~\ref{fig:forces} shows the time-averaged force balance spectra
$\overline{F_\ell}(r_i,\mathcal{R}_i)$ (left) and $\overline{F_\ell}(\mathcal{R}_o,r_o)$
(right).
We find a primary geostrophic balance (QG) between pressure gradient and
Coriolis force at large scales with $\ell<70$. At smaller scales, the pressure
gradient is superseded by Lorentz forces in a magnetostrophic balance
(MS) \citep{Aurnou17}. Beyond this
primary balance, the difference between pressure gradient and Coriolis force,
termed
\emph{ageostrophic Coriolis force}, is in balance with buoyancy at large scales
($\ell < 10$) and with Lorentz force at small scales. Inertia and viscosity are
respectively one and two orders of magnitude below this first-order
balance. This forms the so-called \emph{QG-MAC
balance} (Magneto, Archimedean, Coriolis) introduced theoretically by
\cite{Davidson13} and identified in reduced numerical models by \cite{Calkins18}
and full dynamo simulations by \cite{Schwaiger19}.
This hierarchy of forces is structurally similar to the ones obtained in the
geodynamo models of
\cite{Schwaiger19} when the magnetic energy exceeds the kinetic one. We note
that the separation between Lorentz force and inertia is of comparable amplitude
to the ratio of magnetic and kinetic energies (see
Fig.~\ref{fig:radprofs}\textit{a}).
In the molecular envelope, the leading-order quasi-geostrophic equilibrium is
accompanied by a secondary balance between ageostrophic Coriolis force and
buoyancy up to $\ell \simeq 70$ and between ageostrophic Coriolis force and
inertia beyond. Because of the decrease of electrical conductivity, Lorentz
forces play a much weaker role and have a comparable amplitude to
the viscous force. The convective flows in the outer convective layer therefore
obey the
so-called \emph{QG-IAC balance} (Inertia, Archimedean, Coriolis) derived by
\cite{Cardin94} in the context of quasi-geostrophic convection \cite[see
also][]{Aubert01,Gillet06,Gastine16}.
The spectral representations shown in Fig.~\ref{fig:forces} also reveal
the cross-over lengthscales \citep{Aubert17,Schwaiger21}
defined by the
harmonic degree at which at least two forces are of equal amplitude.
Of particular interest are the intersections between buoyancy and Lorentz
forces in the metallic core and between buoyancy and inertia in the molecular
envelope. The respective degrees $\ell_{\text{MA}}\approx 10$ and
$\ell_{\text{IA}} \approx
75$, marked by the two vertical segments in Fig.~\ref{fig:forces},
characterise the lengthscale of optimal QG-MAC and QG-IAC balances. As already
reported by \cite{Aubert17}, those cross-over
lengthscales are in good agreement with
the dominant lengthscale of convection $\hat{\ell}$ defined by the peak of
the non-axisymmetric kinetic energy (Fig.~\ref{fig:spec_l_r}\textit{a}).
This implies that the most energetic convective features are controlled by a
QG-MAC balance in the metallic core and a QG-IAC balance in the external
convective region, two force balance hierarchies expected to hold in the
interiors of gas giants.
\subsection{Zonal and meridional flows}
\begin{figure}
\centering
\includegraphics[width=8.3cm]{vp_psi}
\caption{(\textit{a}) Time-averaged zonal flows $\overline{[u_\phi]}$.
(\textit{b}) Time-averaged stream function of the
meridional circulation $\overline{\Psi}$. Solid (dashed) contour lines
correspond to clockwise (counter clockwise) meridional circulation. In both
panels, the dashed half circles mark the bounds of the SSL $\mathcal{R}_i$ and
$\mathcal{R}_o$.}
\label{fig:vp_psi}
\end{figure}
We now examine the structure of the axisymmetric flows produced in this
numerical dynamo model. Figure~\ref{fig:vp_psi} shows the time-averaged zonal
flow $\overline{[u_\phi]}$ and the stream function $\overline{\Psi}$
associated with the meridional circulation defined by
\[
\tilde{\rho}\,\vec{u_m} = \vec{\nabla} \times (\tilde{\rho} \,\Psi \vec{e_\phi})\,,
\]
where $\vec{u_m}=([u_r],[u_\theta])$ is the meridional circulation vector and
$\vec{e_\phi}$ is the unit vector in the $\phi$ direction.
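For the axisymmetric mass flux, the curl can be written out explicitly in spherical coordinates; this standard identity is restated here for completeness:

```latex
\begin{align*}
\tilde{\rho}\,[u_r] &= \frac{1}{r\sin\theta}\,
\frac{\partial}{\partial\theta}\!\left(\sin\theta\,\tilde{\rho}\,\Psi\right), \\
\tilde{\rho}\,[u_\theta] &= -\frac{1}{r}\,
\frac{\partial}{\partial r}\!\left(r\,\tilde{\rho}\,\Psi\right),
\end{align*}
```

so that the anelastic continuity constraint $\vec{\nabla}\cdot(\tilde{\rho}\,\vec{u_m})=0$ is satisfied by construction, since the divergence of a curl vanishes identically.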
In the molecular envelope, the zonal motions are dominated by a strong
prograde equatorial jet.
On each side of the equatorial jet we find two retrograde and two prograde
secondary jets. The innermost prograde jets, located at about $40^\circ$
latitude
north and south, are particularly faint.
This jet system persisted over our simulation time, which is equivalent to
about $8400$ rotations.
Zonal winds at high latitudes form broader structures, which are often
dominated by
thermal wind features and change over time. The deeper convective region
exhibits much weaker differential rotation \citep[see][]{Jones14}.
The meridional flows are one to two orders of magnitude weaker than the
typical non-axisymmetric convective flows. In the external convective region,
the circulation forms pairs of equatorially
anti-symmetric cells elongated along the rotation axis.
The cells are highly correlated with the zonal jets.
The stable stratification effectively prevents the meridional circulations from
penetrating the SSL.
Within the metallic interior, the meridional circulation forms more
intricate columnar cellular patterns, which also show some correlation with the
zonal winds.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{thWind}
\caption{Meridional cuts of the time-averaged terms that enter the thermal
wind balance (\ref{eq:thwind}). Because of its much weaker amplitude, the
viscous contribution $\mathcal{V}_\omega$ entering Eq.~(\ref{eq:thwind}) has
been omitted. The dashed half circles mark the bounds of the SSL $\mathcal{R}_i$ and
$\mathcal{R}_o$.}
\label{fig:thWind}
\includegraphics[width=0.99\textwidth]{azimuthal_forcebal}
\caption{Meridional cuts of the time-averaged terms that enter the angular
momentum transport equation (\ref{eq:zon}). The dashed half circles mark the
bounds of the SSL $\mathcal{R}_i$ and $\mathcal{R}_o$.}
\label{fig:vp_bal}
\end{figure*}
In order to understand the quenching of the jets and the correlation with the
meridional circulation, we consider two fundamental equations. The first one is
the thermal wind equation, which can be derived from the azimuthal component of
the curl of the Navier-Stokes equation (\ref{eq:NS}):
\begin{equation}
\begin{aligned}
\dfrac{D \omega_\phi}{D t}
=& \dfrac{2}{E}\dfrac{\partial
u_\phi}{\partial z} -\dfrac{Ra}{Pr}\dfrac{\tilde{\alpha}\tilde{T} \tilde{g}}{r}\dfrac{\partial
s'}{\partial\theta}+\varsigma\vec{\omega}\cdot\vec{\nabla}\left(\dfrac{
u_\phi } { \varsigma } \right)-\omega_\phi\vec{\nabla}\cdot\vec{u}\\
&+\vec{e_\phi}\cdot\vec{\nabla}\times
\left(\dfrac{\vec{j}\times\vec{B}}{E Pm\,\tilde{\rho}}\right)+
\vec{e_\phi}\cdot\vec{\nabla}\times\left(\dfrac{\vec{\nabla}\cdot\tens{S}}{\tilde{\rho}
} \right)\,.
\end{aligned}
\label{eq:vortzon}
\end{equation}
Here $\omega_\phi=\vec{e_\phi}\cdot\vec{\nabla}\times\vec{u}$ is the azimuthal vorticity and
$\varsigma=r\sin\theta$ denotes the cylindrical radius. When averaging over time and
azimuth, Eq.~(\ref{eq:vortzon}) yields
\begin{equation}
2 \dfrac{\partial \overline{[u_\phi]}}{\partial z} = \dfrac{Ra
E}{Pr}\dfrac{\tilde{\alpha} \tilde{T} \tilde{g}}{r}\dfrac{\partial \overline{[s']}}{\partial
\theta} + \mathcal{R}_\omega + \mathcal{M}_\omega +\mathcal{V}_\omega\,.
\label{eq:thwind}
\end{equation}
In the above equation, $\mathcal{R}_\omega$ is a nonlinear term defined by
\[
\mathcal{R}_\omega = E\left(
\overline{\left[\vec{u}\cdot\vec{\nabla}\omega_\phi\right]}
-\varsigma\overline{\left[\vec{\omega}\cdot\vec{\nabla}\dfrac{u_\phi}{\varsigma}\right]}
-\dfrac{\mathrm{d}\ln\tilde{\rho}}{\mathrm{d} r}\overline{[u_r\omega_\phi]}\right),
\]
where the three contributions entering the right-hand-side respectively
correspond to
advection, stretching and compressional sources of vorticity.
$\mathcal{M}_\omega$ and $\mathcal{V}_\omega$ denote the magnetic and viscous
stresses defined by
\[
\mathcal{M}_\omega =-\dfrac{1}{Pm}\overline{\left[\vec{e_\phi}\cdot
\vec{\nabla}\times\left(\dfrac{\vec{j}\times\vec{B}}{\tilde{\rho}}\right)\right]}\,,
\]
and
\[
\mathcal{V}_\omega = -E \overline{\left[\vec{e_\phi}
\cdot
\vec{\nabla}\times\left(\dfrac{\vec{\nabla}\cdot\tens{S}}{\tilde{\rho}}\right)\right]}
\,.
\]
Figure~\ref{fig:thWind} shows meridional cuts of the different terms in
Eq.~(\ref{eq:thwind}). The axial gradient of $\overline{[u_\phi]}$
almost perfectly balances the latitudinal gradient of entropy, with small
remaining contributions of magnetic winds $\mathcal{M}_\omega$ inside the
tangent cylinder and from
inertia close to the upper edge of the SSL around $45^\circ$ latitude. The
classical thermal wind balance
\begin{equation}
2 \dfrac{\partial \overline{[u_\phi]}}{\partial z} \approx \dfrac{Ra
E}{Pr}\dfrac{\tilde{\alpha} \tilde{T} \tilde{g}}{r}\dfrac{\partial \overline{[s']}}{\partial
\theta}\,,
\label{eq:thWindShort}
\end{equation}
is hence realised to a high degree of fidelity, indicating that
Lorentz forces have no direct impact on the $z$-variations of the zonal flows.
The strongest latitudinal entropy gradients are found at the upper edge of
the SSL between $20^\circ$ and $45^\circ$ latitude, where the alternating
zonal flows rapidly decay. The entropy gradients are much weaker in the middle
of the external convective region where the zonal winds remain nearly
geostrophic. The braking of $\overline{[u_\phi]}$ at
$\mathcal{R}_o$ is accommodated by intense localised entropy variations.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{blow_up}
\caption{Zoomed-in insets of Fig.~\ref{fig:vp_psi}, \ref{fig:thWind} and
\ref{fig:vp_bal} for $r\in[\mathcal{R}_i,0.92\,r_o]$ and
$\theta\in[50^\circ,90^\circ]$. (\textit{a}) Time-averaged zonal flows
$\overline{[u_\phi]}$. (\textit{b}) Time-averaged axial gradient of the zonal
flows $2\,\partial \overline{[u_\phi]}/\partial z$. (\textit{c}) Time-averaged
meridional gradient of temperature $(Ra E/Pr)(\tilde{\alpha}\tilde{T}\tilde{g}/r)\partial
\,\overline{[s']}/\partial \theta$. (\textit{d}) Time-averaged stream function
of the meridional circulation $\overline{\Psi}$. (\textit{e}) Time-averaged
axisymmetric component of Coriolis force $2\,\overline{[u_\varsigma]}/E$.
(\textit{f}) Time-averaged axisymmetric $\phi$-component of the Lorentz force
$-\overline{\vec{\nabla}\cdot\mathcal{F}_L}/\tilde{\rho}\varsigma$. (\textit{b}) and
(\textit{c}) correspond to the dominant terms of thermal wind balance
(\ref{eq:thWindShort}) shown in Fig.~\ref{fig:thWind}. (\textit{e}) and
(\textit{f}) correspond to the dominant terms of the angular momentum transport
equation (\ref{eq:zon}) shown in Fig.~\ref{fig:vp_bal}. In each panel, the
horizontal dashed line corresponds to $r=\mathcal{R}_o$.}
\label{fig:blow_up}
\end{figure*}
To examine the force balance that sustains the meridional circulation
pattern, we now consider the zonal component of the Navier-Stokes equation
(\ref{eq:NS}):
\begin{equation}
\tilde{\rho}\dfrac{\partial [u_\phi]}{\partial t}+\dfrac{2}{E}\tilde{\rho}[u_\varsigma] =
-\dfrac{1}{\varsigma} \vec{\nabla}\cdot \vec{\mathcal{F}},
\label{eq:zon}
\end{equation}
where $u_\varsigma$ corresponds to the cylindrically-radial component of the
velocity. The angular momentum flux $\vec{\mathcal{F}}$ can be decomposed
into three contributions,
\[
\vec{\mathcal{F}} = \vec{\mathcal{F}}_\text{R} +
\vec{\mathcal{F}}_\text{M} + \vec{\mathcal{F}}_\text{V},
\]
accounting for Reynolds, Maxwell and viscous stresses
\[
\vec{\mathcal{F}}_\text{R}=\tilde{\rho} \varsigma [\vec{u}u_\phi],\
\vec{\mathcal{F}}_\text{M}=-\dfrac{\varsigma [\vec{B}B_\phi]}{EPm},\
\vec{\mathcal{F}}_\text{V}=-\tilde{\rho}
\varsigma^2\vec{\nabla} \left(\dfrac{[u_\phi]}{\varsigma}\right)\,.
\]
On time-average,
the flow perpendicular to the rotation axis $\overline{[u_\varsigma]}$ responds to
the imbalance between those
different axial forces, a physical phenomenon termed ``gyroscopic pumping''
by \cite{McIntyre98}. Figure~\ref{fig:vp_bal} shows meridional
cuts of the different time-averaged contributions to Eq.~(\ref{eq:zon}).
In the metallic interior, the axisymmetric components of the Lorentz and
Coriolis
forces balance each other almost perfectly with secondary contributions of
inertia close to the inner boundary. This pattern is typical of
rapidly-rotating convection when Lorentz forces play a dominant role in
the force balance \citep[see, e.g.][his Fig.~7]{Aubert05}.
The situation in the external convective region (beyond $\mathcal{R}_o$) is more
intricate. The Reynolds stresses that maintain the observed alternating zonal
jet pattern mainly act in the upper parts of the external convective layer,
where the typical convective flows are more vigorous. This driving is
compensated partly by viscous stresses in the intense shear regions and
partly by Maxwell stresses at the bottom of the external convective region
($r\gtrsim \mathcal{R}_o$) where the electrical conductivity is still sizeable.
Maxwell stresses play a negligible role for the equatorial jet since
it penetrates less deeply. However, they are clearly important for braking the
flanking jets.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{radialHeat}
\caption{Zoomed-in insets for $r\in[0.83\,r_o,0.9\,r_o]$ and
$\theta\in[50^\circ,90^\circ]$. (\textit{a}) Time-averaged axisymmetric
advection of the entropy background by the meridional flow $\tilde{\rho} \tilde{T}\,
\overline{[u_r]} \mathrm{d} \tilde{s} /\mathrm{d} r$. (\textit{b}) Time-averaged
radial part of the entropy diffusion $1/Pr\,\vec{\nabla}\cdot(\tilde{\rho}
\tilde{T} \vec{\nabla} \overline{[s']} \cdot\vec{e_r})$. In both panels the
horizontal dashed lines correspond to $r=\mathcal{R}_i$ and $r=\mathcal{R}_o$.}
\label{fig:radialHeat}
\end{figure}
At the upper edge of the SSL, the delicate balance between Maxwell and Reynolds
stresses drives a meridional circulation pattern which slightly penetrates the
stable layer. This is the main player in establishing the latitudinal entropy
variation that explains the quenching of the zonal winds.
Figure~\ref{fig:blow_up} illustrates the interesting dynamics in the region
where the jets touch the upper edge of the stable layer.
The upper row highlights the importance of the thermal wind balance
(Eq.~\ref{eq:thWindShort}) for limiting the depth of the flanking jets. The
$z$-variation (panel \textit{b}) in the zonal flows (panel \textit{a}) are
nearly perfectly explained by the thermal wind term that depends on
axisymmetric latitudinal entropy variations (panel \textit{c}).
The lower row of Fig.~\ref{fig:blow_up} illustrates how the stable
stratification effectively prevents the meridional circulation (panels
\textit{d} and \textit{e}) from penetrating the SSL. Azimuthal Lorentz force
(panel \textit{f}) shapes the meridional circulation pattern (panel \textit{e})
according to Eq.~(\ref{eq:zon}). This force is a direct result of the electric
currents induced by the zonal winds \citep{Wicht19a}.
Figure~\ref{fig:radialHeat} shows that the time-averaged advection of the
entropy background $\mathrm{d}\tilde{s}/\mathrm{d}r$ by the meridional flow
$\overline{[u_r]}$ is balanced to a large degree by the radial
diffusion $1/Pr\,\vec{\nabla}\cdot(\tilde{\rho} \tilde{T} \vec{\nabla} \overline{[s']}
\cdot\vec{e_r})$. Other entropy transport contributions are of secondary
importance close to the SSL.
This implies that the meridional circulation cells which graze the
upper edge of the SSL build up the local latitudinal entropy gradients
visible in panel (\textit{c}) of Fig.~\ref{fig:blow_up}.
\begin{figure}
\centering
\includegraphics[width=8.3cm]{forces_cyl}
\caption{(\textit{a}) Time-averaged surface zonal flows in the Northern
hemisphere (dashed lines) and
geostrophic zonal flows (solid lines) as a function of the normalised
cylindrical radius. (\textit{b}) Time-averaged axial torques integrated over
cylinders as a function of $\varsigma/r_o$. The vertical lines correspond to the
upper edge of the SSL $\mathcal{R}_o$.}
\label{fig:forces_cyl}
\end{figure}
To study the roles played by the Lorentz force and viscosity
in controlling the amplitude of the zonal jets, we integrate
Eq.~(\ref{eq:zon}) over axial cylinders for the fluid regions above the
middle of the SSL:
\begin{equation}
\dfrac{2}{E}\varsigma \left\langle \tilde{\rho} \overline{ [u_\varsigma]} \right\rangle_h =
-\left\langle\overline{\vec{\nabla}\cdot\vec{\mathcal{F}_\text{R}}}
\right\rangle_h
-\left\langle\overline{\vec{\nabla}\cdot\vec{\mathcal{F}_\text{M}}}
\right\rangle_h
-\left\langle\overline{\vec{\nabla}\cdot\vec{\mathcal{F}_\text{V}}}
\right\rangle_h.
\label{eq:vpgeos}
\end{equation}
The operator $\langle f \rangle_h$ is defined by
\[
\left\langle f\right \rangle_h = \dfrac{1}{h^{+}-h^{-}}\int_{h^{-}}^{h^{+}}
f(\varsigma,z)\, \mathrm{d} z\,,
\]
where the bounds of integration $h^{+}$ and $h^{-}$ depend on the mid-SSL
radius $\mathcal{R}_c=\frac{1}{2}(\mathcal{R}_o+\mathcal{R}_i)$. For $\varsigma \geq \mathcal{R}_c$,
$h^{\pm} = \pm\sqrt{r_o^2 -\varsigma^2}$, while the integration bounds are
restricted to the Northern hemisphere for $\varsigma < \mathcal{R}_c$, i.e.
$h^{+}=\sqrt{r_o^2-\varsigma^2}$ and $h^{-} = \sqrt{\mathcal{R}_c^2-\varsigma^2}$.
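The piecewise integration bounds can be encoded directly. The following short Python sketch (a hypothetical helper written for illustration, not part of the simulation code) reproduces the geometry described above:

```python
import math

def axial_bounds(s, r_o, R_c):
    """Integration bounds (h_minus, h_plus) for the axial average <f>_h.

    s   : cylindrical radius of the integration cylinder
    r_o : outer spherical radius
    R_c : mid-SSL radius, R_c = (R_o + R_i) / 2
    """
    h_plus = math.sqrt(r_o**2 - s**2)
    if s >= R_c:
        # cylinder crosses the equator: integrate over the full column
        h_minus = -h_plus
    else:
        # cylinder intersects the mid-SSL sphere: Northern hemisphere only
        h_minus = math.sqrt(R_c**2 - s**2)
    return h_minus, h_plus
```

For instance, with $r_o=1$ and $\mathcal{R}_c=0.84$, a cylinder at $\varsigma=0.9$ spans the symmetric interval $[-\sqrt{0.19},\sqrt{0.19}]$, whereas one at $\varsigma=0.5$ is truncated from below at $h^{-}=\sqrt{0.84^2-0.25}$.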
Figure~\ref{fig:forces_cyl}\textit{a} shows the geostrophic component of the
zonal flows, $\langle\overline{[u_\phi]}\rangle_h$, along
with the surface profile $\overline{[u_\phi]}(r_o)$, while
Fig.~\ref{fig:forces_cyl}\textit{b} portrays the different
time-averaged axial torques which enter Eq.~(\ref{eq:vpgeos}).
As already observed in Fig.~\ref{fig:vp_psi}, the upper edge of the SSL marks a
clear separation of the zonal flow morphology. For $\varsigma > \mathcal{R}_o$, the
geostrophic component of the zonal flows closely follows the surface profile,
indicating the high degree of geostrophy of the main prograde equatorial jet.
Because of the decay of the zonal flows in the SSL, the secondary jets between
$0.7\,r_o < \varsigma < \mathcal{R}_o$ feature a much weaker geostrophic component.
This dynamical change in the vicinity of $\mathcal{R}_o$ is also recovered in the
spatial distribution of the axial torques. The strong prograde equatorial jet
is driven by positive Reynolds stresses equilibrated by viscosity, while the
geostrophic part of the secondary alternating jets is driven by undulating
Reynolds stresses balanced by a combination of Lorentz and viscous torques.
The cylindrical integration of the Coriolis term vanishes, indicating the
cancellation of the mass flux over the considered fluid domain $r \geq \mathcal{R}_c$.
\section{Discussion and conclusion}
\label{sec:disc}
Several recent models of Jupiter's interior suggest that helium demixing could
occur in a thin layer located close to the transition to metallic
hydrogen \citep[e.g.][]{Militzer16,Wahl17,Debras19}.
To examine the effects of such a layer, we have developed the
first global dynamo model of Jupiter that incorporates a stably-stratified layer
between $0.82\,R_J$ and $0.86\,R_J$. The chosen degree of
stratification characterised by the ratio of the Brunt-V\"ais\"al\"a frequency
to the rotation rate is rather strong with $N_m/\Omega\simeq 10$ to ensure that
convection would not penetrate through the stably-stratified layer (SSL).
Such an SSL effectively separates the dynamics of the regions below and above.
Previous simulations without such a layer suggest that only the equatorial jet
is compatible with Jupiter-like dynamo action \citep{Jones14,Gastine14a}.
Stronger flanking jets would always penetrate into the highly-conducting
interior and lead to magnetic fields far more complex than Jupiter's
\citep{Duarte13,Dietrich18}. For the first time, we show that the SSL allows
flanking jets to develop while maintaining dipole-dominated dynamo action.
The flanking jets only extend up to $\pm 40^\circ$ in latitude and are
weaker than observed on Jupiter.
The dynamics below and above the SSL obey different underlying force
balances. By directly measuring the spectral distribution
of forces, we have shown that the metallic region is controlled by a triple
force balance between the non-geostrophic part of Coriolis force, buoyancy and
Lorentz forces, with secondary contributions of inertia and viscosity. This
forms the so-called \emph{QG-MAC} balance which has been devised by
\cite{Davidson13}, and is expected to hold in the dynamo regions of gas giants.
The outer convective region where the electrical conductivity drops follows
a different force balance with dominant contributions of ageostrophic Coriolis
force, buoyancy and inertia. This corresponds to the so-called \emph{QG-IAC}
balance \citep[see][]{Cardin94,Aubert03,Gillet06,Gastine16},
a physical regime at work in convective regions of rapidly-rotating
astrophysical bodies when the magnetic effects are negligible.
Despite diffusivities orders of magnitude larger than in the gas giants, the
dynamo model presented here obeys the leading order force balances expected to
hold in Jupiter's interior.
The mechanism that prevents the jets from penetrating the SSL in our simulations
follows the scenario outlined by \cite{Christensen20}. Where the zonal
winds reach down into regions of high conductivity, their induction yields Lorentz forces that
in turn drive a complex meridional circulation pattern. Where this circulation
penetrates the SSL and encounters the strong background stratification, the
entropy pattern is significantly altered, resulting in a thermal wind balance
consistent with the quenching of the winds
\citep[e.g.][]{Showman06,Augustson12}.
Whether the magnetic effects are always required to confine the
meridional circulation remains unclear. Indeed, in non-magnetic simulations,
viscous and thermal diffusion would mediate the penetration of the zonal winds
into the SSL \citep{Spiegel92}.
Given the large diffusivities adopted in global dynamo models, the penetration
would likely be much more effective than realistic. In the context of
modelling solar-type stars, \cite{Brun17} developed several non-magnetic
numerical models in which the diffusivities are several orders of
magnitude smaller in the SSL than in the convective envelope. This yields
zonal flows that do not spread into the stably-stratified core (see their
Fig.~11), at least on timescales smaller than the thermal diffusion time of the
SSL.
The surface field in our simulation is too dipolar and shows too little
localised field concentration when compared with the Jupiter field model JRM09
by \cite{Connerney18}. The magnetic spectrum at the bottom of the stable
layer at $0.84\,R_J$ is roughly compatible with JRM09 when upward-continued as
a potential field. However, the skin effect and, to a large degree, also the
dynamo action just above the stable layer heavily modify the field, making it
less realistic.
The efficiency of the dynamo action above the stable layer depends on the
magnetic Reynolds number $Rm = U_z d_\sigma \sigma
\mu_0 $ that is based on the zonal flow amplitude $U_z$
and the electrical conductivity scale height $d_\sigma=|\partial \ln \sigma /
\partial r|^{-1}$ \citep{Liu08,Cao17,Wicht19a}.
Observations of the magnetic field variations suggest that this magnetic
Reynolds number, which increases with depth, reaches a value around unity
at $0.95\,R_J$ \citep{Moore19}. Gravity measurements indicate that this is also
about the depth where the zonal wind velocity decreases rapidly
\citep{Kaspi18,Galanti20}.
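The depth dependence of $Rm$ can be illustrated with a few lines of Python. The numerical values below are illustrative assumptions chosen for demonstration only, not the conductivity profile of our model:

```python
import math

MU_0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]

def magnetic_reynolds(U_z, d_sigma, sigma):
    """Zonal-wind magnetic Reynolds number Rm = U_z * d_sigma * sigma * mu_0.

    U_z     : zonal flow amplitude [m/s]
    d_sigma : conductivity scale height |d ln(sigma)/dr|^-1 [m]
    sigma   : electrical conductivity [S/m]
    """
    return U_z * d_sigma * sigma * MU_0

# Illustrative inputs: Rm of order unity is obtained for, e.g.,
# U_z ~ 10 m/s, d_sigma ~ 1000 km and sigma ~ 0.1 S/m.
print(magnetic_reynolds(10.0, 1.0e6, 0.1))
```

Since $Rm$ is linear in each input, the steep inward rise of $\sigma$ dominates the depth dependence, which is why $Rm$ crosses unity over a narrow radial range.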
Additional support for the upper boundary comes from the fact that the width
of the dominant equatorial jet on Jupiter ($\approx 30^\circ$) is only
reproduced in numerical models when $\mathcal{R}_o =0.95\,R_J$
\citep{Gastine14,Heimpel16}.
Recent interior models by \cite{Debras19} suggest
a stably-stratified layer starting around $0.1$~Mbar, which
would correspond to a somewhat deeper radius around $\mathcal{R}_o=0.93\,R_J$.
However, the observational constraints (gravity, He abundance) likely also allow
for a shallower layer.
Helium demixing, considered the best candidate to promote stable
stratification in Jupiter, is expected where
hydrogen becomes metallic and thus likely significantly deeper around
$0.9\,R_J$. The possible physical origin of a stable layer that would
start around $0.95\,R_J$ remains unclear.
The simulation presented here is the first to demonstrate that multiple zonal
jets and Jupiter-like dynamo action can be reconciled in a global simulation.
The necessary ingredient is a stably-stratified layer that allows zonal jets to
develop in the outer envelope without contributing to the dynamo action
in the deeper metallic region.
While the simulation presented here is an important step towards more
Jupiter-like models, there is certainly room for improvements. The simulation
was performed at an Ekman number of $E=10^{-6}$
with considerable numerical costs. We speculate that an even smaller Ekman
number, and possibly a larger Rayleigh number, is required to drive a stronger
jet system that extends to yet higher latitudes. The magnetic field in our
simulation could become more Jupiter-like for a stable layer that is
thinner and lies closer to the surface than in our simulations. However, this
would further increase the numerical costs. The magnetic field also lacks the
characteristic banded structure that \cite{Gastine14a} attributed to zonal wind
dynamo action. An increase of the conductivity, or rather of the magnetic
Reynolds number $Rm$, in the outer envelope of our simulation could help here.
These open questions pave the way for future global Jovian dynamo models.
\section*{Acknowledgements}
We thank Dave Stevenson and an anonymous reviewer for their useful
comments. Numerical computations have been carried out on the \texttt{S-CAPAD}
platform at IPGP, on the \texttt{occigen} cluster at
GENCI-CINES (Grant A0020410095) and on the \texttt{cobra} cluster in Garching.
All the figures have been generated using \texttt{matplotlib} \citep{Hunter07}
and \texttt{paraview} (\url{https://www.paraview.org}). The colormaps come from
the \texttt{cmocean} package by \cite{cmocean}.
\section{Introduction}
Current photometric and spectroscopic large scale structure surveys, such as DES \cite{Abbott:2017wau,Abbott:2017wcz} and BOSS \cite{Alam:2016hwk}, have contributed significantly to improving our understanding of the early- and late-time universe. This trend will continue in the future through upcoming surveys such as Euclid \cite{EuclidRedBook} and LSST \cite{LSSTScienceBook}, as they are expected to cover a larger volume and wider redshift range with an unprecedented precision.
The key question is how much cosmological information can be extracted from such high-fidelity data. In particular, we are interested in quantifying the information content of higher-order correlation functions, focusing on the bispectrum. An important ingredient required to answer this question is to correctly model the covariance matrix.
The covariance of the polyspectra can be generally classified into three parts: the Gaussian covariance due to the random phases of the Fourier modes, the non-Gaussian covariance due to the mode coupling between the modes inside the survey window (we sometimes call this the small-scale covariance), and the covariance due to the coupling of the modes outside the survey window with those inside. The small-scale covariance can be studied using the standard periodic boundary condition setup.
Ref.~\cite{Hamilton:2005dx} pointed out that because of the presence of the window function in a real survey, the long modes larger than the survey window size can modulate the small scale modes and lead to large covariance on small scales. The authors coined the term beat coupling to refer to the covariance due to the long mode outside the window. The wave vectors are sharp in simulations with periodic boundary conditions, and so this type of covariance cannot be studied in the standard periodic box setup; instead it can be studied by dividing a gigantic simulation box into multiple subboxes. Ref.~\cite{Takada:2013bfn} formulated this type of covariance using the response function formalism. In this work we follow \cite{Takada:2013bfn} in referring to the covariance due to the modes outside the survey window as the supersample covariance and the perturbative part of it as beat coupling since these terminologies are widely used now. The response function approach borrows the technique of the consistency relation, first derived in the context of inflation \cite{Maldacena:2002vr,Creminelli:2004yq}, and later applied in large scale structure context \cite{Peloso:2013zw, Kehagias:2013yd, Creminelli:2013mca, Horn:2014rta}. The response approach provides a powerful scheme to model the coupling of the long mode with the small scale modes.
In previous studies on the information content of the bispectrum, a Gaussian covariance was assumed, e.g.~\cite{Takada:2003ef,Sefusatti:2004xz}. In the context of weak lensing, Refs.~\cite{Kayo:2013aha,KayoTakadaJain_2013,Sato:2013mq} found that the Gaussian approximation overestimates the information content of the lensing bispectrum relative to the realistic non-Gaussian covariance. Recently, \cite{Chan:2016ehg} studied the bispectrum covariance matrix using a large suite of simulations, and found that the Gaussian covariance significantly overestimates the information content since the Gaussian covariance approximation is poor beyond the mildly nonlinear regime (see \cite{Sefusatti:2006pa,Byun:2017fkz} for constraints on the cosmological parameters). However, \cite{Chan:2016ehg} measured the covariance from periodic simulations, and so the supersample covariance was not present. It is the goal of this paper to address how important the supersample covariance is to the budget of the bispectrum covariance.
This paper is organized as follows. In Sec.~\ref{sec:bisp_ssc_derivation}, using the response function formalism, we derive the supersample bispectrum covariance and the cross covariance between the power spectrum and the bispectrum. The general bispectrum response to the long mode is studied in Sec.~\ref{sec:Bk_response_effect_general}. We compute the bispectrum response function using standard perturbation theory and the halo model in Sec.~\ref{sec:Bk_reponse_SPT_HM}. In Sec.~\ref{sec:predictions_measurements}, we quantify the contributions of the supersample covariance to the bispectrum covariance and the cross covariance by comparing the numerical measurements obtained from the periodic box and subbox setups. We also compare the predictions obtained with the halo model prescription with the numerical results. We conclude in Sec.~\ref{sec:conclusions}. In Appendix~\ref{sec:beat_coupling_derivation}, we compute the supersample covariance using a simple beat coupling approach and check it against the response formalism. We generalize the calculations to compute the effect of the tidal perturbations on the bispectrum supersample covariances in Appendix~\ref{sec:SSC_tides}.
\section{Derivation of the bispectrum supersample covariance}
\label{sec:bisp_ssc_derivation}
In this section, we derive the supersample covariance contributions to the bispectrum covariance and the cross covariance between the power spectrum and the bispectrum. We model the supersample covariance using the response function formalism. The derivation is a straightforward generalization of the computation of the supersample covariance for the power spectrum in Ref. \cite{Takada:2013bfn}.
\subsection{ The bispectrum estimator and window function }
Suppose we have measured the Fourier mode of the density contrast, $\hat{\delta}(\bm{k})$, from a survey or simulation. To estimate the bispectrum, the Fourier modes are binned into shells of width $\Delta k $. From the definition of the bispectrum
\begin{equation}
\langle \hat{\delta}(\bm{k}_1) \hat{\delta}(\bm{k}_2) \hat{\delta}(\bm{k}_3) \rangle = ( 2 \pi)^3 \delta_{\rm D} ( \bm{k}_{123} ) \hat{B}(k_1,k_2,k_3) ,
\end{equation}
(where $ \delta_{\rm D} $ is the Dirac delta function and $\bm{k}_{123}$ denotes $\bm k_1 + \bm k_2 + \bm k_3 $) we can construct an estimator as \cite{SCFFHM98,Scoccimarro:2003wn}
\begin{align}
\label{eq:Bisp_estimator}
\hat{B}(k_1,k_2,k_3)
& = \frac{ 1 }{ V V_{\triangle} }\int_{ k_1 }d^3 p \int_{ k_2 }d^3 q \int_{ k_3 }d^3 r \, \ \nonumber \\
& \times \delta_{\rm D} ( \bm{p} + \bm{q} + \bm{r} ) \, \hat{\delta}(\bm{p}) \hat{\delta}(\bm{q} ) \hat{\delta}(\bm{r} ) ,
\end{align}
where $k_i$ indicates that the integration is over a spherical shell $[ k_i - \Delta k /2 , k_i + \Delta k /2 ) $ of width $\Delta k$. $V$ is the volume of the survey/simulation, and $ V_{\triangle }$ counts the number of modes satisfying the triangle condition
\begin{equation}
\label{eq:Vtriangle}
V_{ \triangle }(k_1,k_2,k_3) = \int_{k_1} d^3 p \int_{k_2} d^3 q \int_{k_3} d^3 r \, \delta_{\rm D} ( \bm{p} + \bm{q} + \bm{r} ).
\end{equation}
$ V_{ \triangle } $ can be computed analytically (\cite{Scoccimarro:2003wn}, see \cite{Chan:2016ehg} for a review of the derivation)
\begin{equation}
\label{eq:Vtriangle_final}
V_{\triangle} = 8 \pi^2 k_1 k_2 k_3 ( \Delta k )^3 \beta(\mu ) ,
\end{equation}
where $\mu $ is defined as
\begin{equation}
\label{eq:mu_triangle}
\mu = \hat{\bm{k}}_1 \cdot \hat{\bm{k}}_2 = \frac{ k_3^2 - k_1^2 - k_2^2 }{ 2 k_1 k_2 },
\end{equation}
and $\beta (\mu) $ is given by
\begin{equation}
\label{eq:beta_mu}
\beta(\mu ) = \begin{cases} \frac{1}{2} \, & \mbox{if } \mu = \pm 1 \\
1 &\mbox{if } -1 < \mu < 1 \\
0 &\mbox{otherwise }
\end{cases} .
\end{equation}
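The mode counting of Eq.~\eqref{eq:Vtriangle_final} can be verified numerically by counting closed Fourier triangles on a grid with the shell-filtering FFT trick that also underlies practical bispectrum estimators \cite{Scoccimarro:2003wn}. The sketch below is illustrative only (the function name and grid parameters are our own choices, not production code); wavenumbers are in units of the fundamental mode $k_f$:

```python
import numpy as np

def count_triangles(N, n1, n2, n3, dn=1.0):
    """Count closed triangles k1+k2+k3=0 on an N^3 grid (k in units of k_f)."""
    n = np.fft.fftfreq(N) * N
    kx, ky, kz = np.meshgrid(n, n, n, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    shells = []
    for nc in (n1, n2, n3):
        mask = ((kmag >= nc - dn / 2) & (kmag < nc + dn / 2)).astype(float)
        shells.append(np.fft.ifftn(mask).real)  # shell-filtered unit field
    # sum over x of I1*I2*I3 picks out wavevector triplets summing to zero
    return N**6 * np.sum(shells[0] * shells[1] * shells[2])

# equilateral configuration k1 = k2 = k3 = 10 k_f (mu = -1/2, beta = 1)
numeric = count_triangles(64, 10, 10, 10)
analytic = 8 * np.pi**2 * 10**3  # V_triangle / k_f^6 from Eq. (4)
print(numeric / analytic)  # close to 1, up to shell-discreteness effects
```

The residual difference from unity reflects the discreteness of the lattice modes in each shell and shrinks as the shells move to higher $k$.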
In a realistic survey, there is a survey window function and the measured $\hat{ \delta}(\bm{k})$ is a convolution of the survey window with the underlying density field. Here we study the implications of the survey window on $\hat{B}$.
The survey volume $V$ can be expressed in terms of a general window function $W$ \footnote{ The survey window function considered here is dimensionless in real space, and its value in real space falls within the interval [0,1]. This is different from the one used to define Lagrangian halos (e.g., \cite{Chan:2015zjt}), which has the dimension of inverse volume. In that case, the window function convolves with the density field in real space, and the size of the window is close to the Lagrangian size of the halo. },
\begin{equation}
V = \int d^3 x W(\bm{x} ).
\end{equation}
The density contrast in real space $ \delta_{W} (\bm{x} ) $ reads
\begin{equation}
\delta_{W} (\bm{x} ) = W(\bm{x} ) \delta(\bm{x}),
\end{equation}
where $ \delta(\bm{x}) $ can be the density contrast of the dark matter or other tracers. In Fourier space, we have
\begin{equation}
\label{eq:deltaW}
\delta_W(\bm{k}) = \int \frac{d^3 p }{ (2 \pi)^3 } \delta(\bm{p}) W(\bm{k} - \bm{p} ).
\end{equation}
Thus the effect of the selection window is to smooth the density contrast in Fourier space. The width of the window $W$ is of the order $1/L$, where $L= V^{1/3}$. The wave vector is effectively broadened by $\sim 1/L$ in Fourier space. In contrast, for simulations with periodic boundary conditions, only wave vectors in units of the fundamental mode are supported and hence they are sharp. This is why the window function effect is not captured by the standard periodic simulation setup.
Plugging the smoothed density Eq.~\eqref{eq:deltaW} into the estimator $\hat{B}$, we get
\begin{align}
\label{eq:B_w_form1}
\hat{B}_W ( k_1, k_2,k_3)
&= \frac{ 1 }{V V_{\triangle} } \int_{k_1} d^3 p_1 \int_{k_2} d^3 p_2 \int_{k_3} d^3 p_3 \delta_{\rm D} (\bm{p}_{123} ) \nonumber \\
& \quad \times \prod_{i=1}^3 \int \frac{d^3 q_i }{(2 \pi)^3 } W(\bm{q}_i ) \delta( \bm{p}_i - \bm{q}_i ).
\end{align}
We are interested in how the broadening of the Fourier modes affects the estimator. Because $ q_i \lesssim 1/L $, to extract the effect of the long mode $ q_i$ we take the limit $ q_i \ll k_j$. In this limit, we can change variables without modifying the integration limits of the $\bm{p}$-integrals, and write Eq.~\eqref{eq:B_w_form1} as
\begin{align}
\label{eq:B_w_form2}
& \hat{B}_W ( k_1, k_2,k_3)
\approx \frac{ 1 }{V V_{\triangle} } \int_{k_1} d^3 p_1 \int_{k_2} d^3 p_2 \int_{k_3} d^3 p_3 \delta_{\rm D} (\bm{p}_{123} ) \nonumber \\
& \quad \quad \times \prod_{i=1}^3 \int \frac{d^3 q_i }{(2 \pi)^3 } W(\bm{q}_i ) \delta( \bm{p}_1 ) \delta( \bm{p}_2 ) \delta( \bm{p}_3 - \bm{q}_{123} ).
\end{align}
Eq.~\eqref{eq:B_w_form2} reveals that although we try to measure the bispectrum of $ \delta_W $ satisfying the closed triangle condition by imposing the Dirac delta function in Eq.~\eqref{eq:Bisp_estimator}, the presence of the window function opens the triangle slightly by an amount $\bm{q}_{123} $ for $ \delta $. Analogously, for the case of the power spectrum, although we try to measure the power spectrum of $ \delta_W $ using wave vectors that are equal in magnitude but opposite in direction, the long mode causes a slight misalignment of these two vectors for $ \delta $.
In Eq.~\eqref{eq:B_w_form2}, we have isolated the effect of the long mode in one of the Fourier modes to facilitate the analysis later on. This form appears to break the symmetry among $k_1$, $k_2$, and $k_3$; however, the breaking is of higher order in $q_i$, and our final results will be symmetric in $k_1$, $k_2$, and $k_3$. Our only approximation in Eq.~\eqref{eq:B_w_form2} is that the limits of the $\bm{p}$-integrals are unchanged. This effect is expected to be small, as it only slightly changes the total number of configurations satisfying the constraint, while the dominant effect comes from the fact that \textit{each} of the triangle configurations is opened by the long mode.
\begin{widetext}
\subsection{ Effect of the long mode on the covariance }
\label{sec:SSC_Bk_derivation}
The window function convolves the bispectrum in Fourier space, and hence it can bias the amplitude and imprint wiggles on the measured bispectrum. We will discuss this more later on. In this section we are interested in the effect of the long mode on the small scale measurements. To leading order, the effect of the long mode on the expectation value of $ \hat B_W $ vanishes.
We now examine how the window function affects the covariance of the estimator $\hat{B}_W$. The covariance of $\hat{B}_W$ is given by
\begin{equation}
\mathrm{cov} \big( \hat{B}_W(k_1, k_2,k_3), \hat{B}_W(k_1', k_2',k_3') \big) = \langle \hat{B}_W (k_1, k_2,k_3 ) \hat{B}_W(k_1',k_2', k_3') \rangle - \langle \hat{B}_W (k_1, k_2,k_3 ) \rangle \langle \hat{B}_W(k_1',k_2', k_3') \rangle.
\end{equation}
Our task is to compute the connected part of $\langle \hat{B}_W (k_1, k_2,k_3 ) \hat{B}_W(k_1',k_2', k_3') \rangle $ due to the long mode
\begin{align}
\label{eq:BB_expectation}
\langle \hat{B}_W (k_1, k_2,k_3 ) \hat{B}_W(k_1',k_2', k_3') \rangle
=& \frac{1 }{ V^2 V_{\triangle} V_{\triangle}' } \prod_{j=1}^3 \int_{k_j} d^3 p_j \delta_{\rm D} ( \bm{p}_{123}) \int_{k_j'} d^3 p_j' \delta_{\rm D} ( \bm{p}_{123}') \prod_{i=1}^3 \int \frac{d^3 q_i}{(2 \pi)^3 } W(q_i) \int \frac{d^3 q_i'}{(2 \pi)^3 } W(q_i') \nonumber \\
&\qquad \times \langle \delta(\bm{p}_1) \delta(\bm{p}_2) \delta(\bm{p}_3 - \bm{q}_{123} ) \delta(\bm{p}_1') \delta(\bm{p}_2') \delta(\bm{p}_3' - \bm{q}_{123}' ) \rangle.
\end{align}
The effect of the long modes $\bm{q}_{123}$ and $\bm{q}'_{123}$ on the small scale bispectrum can be computed similarly to \cite{Takada:2013bfn}, by employing the consistency-relation argument for a soft internal mode
\begin{align}
\label{eq:6point_longmode_expand1}
\Big\langle \delta(\bm{p}_1) \delta(\bm{p}_2) \delta(\bm{p}_3 - \bm{q}_{123} ) & \delta(\bm{p}_1') \delta(\bm{p}_2') \delta(\bm{p}_3' - \bm{q}_{123}' ) \Big\rangle
\approx \Big\langle \Big[ \langle \delta(\bm{p}_1) \delta(\bm{p}_2) \delta(\bm{p}_3) \rangle + \delta_{\rm l}( - \bm{q}_{123} ) \frac{ \partial }{ \partial \delta_{\rm l}( \bm{q} ) } \langle \delta(\bm{p}_1) \delta(\bm{p}_2) \delta(\bm{p}_3 + \bm{q}) \rangle \Big|_{\delta_{\rm l} =0 } \Big] \nonumber \\
& \qquad \times \Big[ \langle \delta(\bm{p}_1') \delta(\bm{p}_2') \delta(\bm{p}_3') \rangle + \delta_{\rm l} (- \bm{q}_{123}' ) \frac{ \partial }{ \partial \delta_{\rm l} (\bm{q}') } \langle \delta(\bm{p}_1') \delta(\bm{p}_2') \delta(\bm{p}_3' + \bm{q}' ) \rangle \Big|_{\delta_{\rm l} =0 } \Big] \Big\rangle_{\delta_{\rm l}} ,
\end{align}
where the expectation value sign $\langle \dots \rangle_{\delta_{\rm l} } $ denotes the average over the long mode ${\delta_{\rm l} } $, while inside the expectation value sign $\delta_{\rm l} $ is kept fixed. The long wavelength perturbation can be expressed as
\begin{equation}
\label{eq:delta_b_3Dform}
\delta_{\rm l} ( \bm{q} ) = (2 \pi)^3 \delta_{\rm D} ( \bm{ q} ) \delta_{\rm b} ,
\end{equation}
where $ \delta_{\rm b} $ is the \textit{dimensionless} amplitude of the perturbation. Then Eq.~\eqref{eq:6point_longmode_expand1} can be written as
\begin{align}
\label{eq:6point_longmode_expand2}
& \Big\langle \delta(\bm{p}_1) \delta(\bm{p}_2) \delta(\bm{p}_3 - \bm{q}_{123} ) \delta(\bm{p}_1') \delta(\bm{p}_2') \delta(\bm{p}_3' - \bm{q}_{123}' ) \Big\rangle \nonumber \\
= & \langle \delta(\bm{p}_1) \delta(\bm{p}_2) \delta(\bm{p}_3 ) \rangle \langle \delta(\bm{p}_1') \delta(\bm{p}_2') \delta(\bm{p}_3' ) \rangle + \langle \delta_{\rm l}( - \bm{q}_{123} ) \delta_{\rm l}( - \bm{q}_{123}' ) \rangle \frac{\partial}{\partial \delta_{\rm b} } B(p_1,p_2,p_3 | \delta_{\rm b} ) \Big|_{\delta_{\rm b} = 0 } \frac{\partial}{\partial \delta_{\rm b} } B(p_1',p_2',p_3'|\delta_{\rm b}) \Big|_{\delta_{\rm b} = 0 } \nonumber \\
=& \langle \delta(\bm{p}_1) \delta(\bm{p}_2) \delta(\bm{p}_3 ) \rangle \langle \delta(\bm{p}_1') \delta(\bm{p}_2') \delta(\bm{p}_3' ) \rangle + (2 \pi)^3 P_{\rm l}(q_{123} ) \delta_{\rm D} ( \bm{q}_{123} + \bm{q}'_{123} ) \frac{\partial}{\partial \delta_{\rm b} } B(p_1,p_2,p_3 | \delta_{\rm b} ) \Big|_{\delta_{\rm b} = 0 } \frac{\partial}{\partial \delta_{\rm b} } B(p_1',p_2',p_3' | \delta_{\rm b} ) \Big|_{\delta_{\rm b} = 0 } .
\end{align}
The first term in Eq.~\eqref{eq:6point_longmode_expand2} is canceled by $\langle B_W \rangle \langle B_W' \rangle $, and only the second one contributes to the bispectrum covariance. Note that in Eq.~\eqref{eq:6point_longmode_expand2}, $P_{\rm l } $ is the power spectrum of the long mode and it is assumed to be linear, while the bispectrum $B$ can be highly nonlinear.
For the power spectrum covariance, an analogous relation, called the trispectrum consistency relation in Ref.~\cite{Takada:2013bfn}, can be established. Accounting for the effects of the long mode on short scales is the key to constructing the large-scale structure consistency relations \cite{Peloso:2013zw, Kehagias:2013yd, Creminelli:2013mca, Horn:2014rta}. The position-dependent power spectrum, which is equivalent to the squeezed bispectrum, is constructed by isolating the effects of the long modes on the local power spectrum \cite{Chiang:2014oga} (see \cite{Adhikari:2016wpj} for a generalization to the position-dependent bispectrum).
Plugging the second term in the last line of Eq.~\eqref{eq:6point_longmode_expand2} into Eq.~\eqref{eq:BB_expectation}, we can perform the $\bm{q} $ and $\bm{q}'$ integrals as
\begin{align}
\label{eq:Wconvolution_intg}
& \frac{1}{V^2} \prod_{i=1}^3 \int \frac{d^3 q_i}{(2 \pi)^3 } W(q_i) \int \frac{d^3 q_i'}{(2 \pi)^3 } W(q_i') (2 \pi)^3 \delta_{\rm D} ( \bm{q}_{123} + \bm{q}'_{123} ) P_{\rm l} (q_{123} ) \nonumber \\
= & \frac{1}{V^2} \prod_{i=1}^3 \int \frac{d^3 q_i}{(2 \pi)^3 } W( \bm{q}_i ) W_3( \bm{q}_{123} ) P_{\rm l} (q_{123} ) \nonumber \\
=& \frac{1 }{ V^2 } \prod_{i=1}^3 \int \frac{ d^3 Q_i }{ (2\pi)^3 } P_{\rm l}(Q_3) W_3( Q_3) \int \frac{ d^3 Q_3' }{ (2\pi)^3 } (2 \pi)^3 \delta_{\rm D} ( \bm{Q}_3' - \bm{Q}_3 + \bm{ Q}_{12} ) W(Q_1) W(Q_2) W( Q_3' ) \nonumber \\
= & \int \frac{ d^3 Q_3 }{(2\pi)^3 } \bigg[ \frac{ W_3 ( Q_3) }{V} \bigg]^2P_{\rm l} (Q_3) \equiv \sigma_{W_3}^2 .
\end{align}
In the first equality, we have simply defined the notation $W_n$ (with $n=3$)
\begin{equation}
W_n ( \bm{k} ) \equiv \int \frac{d^3 k_1 }{ (2 \pi)^3 } \int \frac{d^3 k_2 }{ (2 \pi)^3 } \dots \int \frac{d^3 k_n } { (2 \pi)^3 } ( 2 \pi)^3 \delta_{\rm D} ( \bm{k} - \bm{k}_{12 \dots n } ) W( \bm{k}_1) \dots W( \bm{k}_n),
\end{equation}
and in real space we have $W_n ( \bm{x} ) = W^n ( \bm{x} )$. For the second equality we have changed variables to $\bm{Q}_1= \bm{q}_1 $, $\bm{Q}_2= \bm{q}_{2} $, and $\bm{Q}_3= \bm{q}_{123} $, and have explicitly introduced a Dirac delta function for $ \bm{Q}_3'$. Because the volume is included in the definition, $ \sigma_{W_3}^2 $ is the usual RMS variance of the long wavelength fluctuations across the survey window, computed using $W_3$.
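To build intuition for the magnitude of $\sigma_{W_3}^2$, the variance integral can be evaluated directly. The sketch below (our own illustrative code, not from any public pipeline) assumes a cubic box of side $L$, for which $W(\bm{q})/V$ factorizes into a product of $\mathrm{sinc}$ functions; for a constant long-mode power spectrum $P_{\rm l} = P_0$ the integral reduces to $P_0/V$, which the numerical result reproduces.

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def sigma_W2_const_P(P0, L, qmax=400.0, n=400000):
    # Cubic box of side L: W(q)/V = prod_i sinc(q_i L / 2), so for a constant
    # long-mode power spectrum P_l = P0 the 3D variance integral factorizes,
    #   sigma^2 = P0 * [ int dq/(2 pi) sinc^2(q L / 2) ]^3 = P0 / V.
    h = 2.0 * qmax / n
    total = 0.0
    for i in range(n + 1):                 # 1D trapezoid rule
        q = -qmax + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * sinc(0.5 * q * L)**2
    I = total * h / (2.0 * math.pi)        # -> 1/L as qmax -> infinity
    return P0 * I**3
```

With $P_0 = 1$ and $L = 1$ this returns $\approx 1 = P_0/V$: for a long-mode spectrum that is flat across the window, the variance simply scales as $1/V$.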
Finally we arrive at the supersample covariance for the bispectrum
\begin{equation}
\label{eq:CB_SSC}
C^B_{\rm SSC}(k_1,k_2,k_3, k_1',k_2',k_3') = \sigma_{W_3}^2 \frac{\partial }{ \partial \delta_{\rm b} } B( k_1,k_2,k_3 | \delta_{\rm b} ) \Big|_{\delta_{\rm b} = 0 } \frac{\partial }{ \partial \delta_{\rm b} } B( k_1',k_2',k_3' | \delta_{\rm b} ) \Big|_{\delta_{\rm b} = 0 }.
\end{equation}
We call $\partial B / \partial \delta_{\rm b} |_{ \delta_{\rm b} =0 } $ the bispectrum response function.
The supersample covariance for the power spectrum derived in \cite{Takada:2013bfn} is similar to Eq.~\eqref{eq:CB_SSC}, simply with $B$ replaced by $P$ and $ \sigma_{W_3}^2 $ replaced by $ \sigma_{W_2}^2 $, which is defined by substituting $W_3 $ with $W_2 $ in the definition of $ \sigma_{W_3}^2 $ [\cite{Takada:2013bfn} only explicitly considered the specific window Eq.~\eqref{eq:window_0_1}, so $W_2$ and $W_3$ are the same].
It is worth stressing that the perturbative expansion in Eq.~\eqref{eq:6point_longmode_expand2} is about the long wavelength mode that opens up the triangle, and it is distinctly different from the perturbative expansion about the small scales $\delta $ studied in \cite{Chan:2016ehg}. The supersample covariance arises from the coupling of the long mode with the small scale modes, while the non-Gaussianity investigated in \cite{Chan:2016ehg} is purely from small scale couplings.
In Appendix \ref{sec:SSC_tides}, we extend the computations to include the supersample covariance contributions due to the tidal perturbations. The final result, Eq.~\eqref{eq:SSC_tide_appendix} is analogous to the density one Eq.~\eqref{eq:CB_SSC}.
\end{widetext}
\subsection{Supersample cross covariance between the power spectrum and the bispectrum}
As a by-product, it is straightforward to compute the supersample cross covariance between the power spectrum and the bispectrum. The power spectrum can be estimated by (e.g.~\cite{FeldmanKaiserPeacock1994,Scoccimarro:1999kp})
\begin{equation}
\label{eq:Pk_estimator}
\hat{P}(k) = \frac{1}{V} \int_{k} \frac{ d^3 p }{ V_{\rm s}( k ) } \hat{\delta}( \bm{p} ) \hat{\delta}( - \bm{p} ),
\end{equation}
where the integration is over a spherical shell of width $[ k - \Delta k /2, k + \Delta k /2 ) $. $ V_{\rm s} $ is the volume of the spherical shell
\begin{equation}
V_{\rm s} (k) = \int_k d^3 p = 4 \pi k^2 \Delta k + \frac{\pi}{3} \Delta k^3.
\end{equation}
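Note that this expression for $V_{\rm s}$ is exact rather than a thin-shell approximation, as a quick numerical check (helper names ours) confirms:

```python
import math

def V_shell(k, dk):
    # shell volume for [k - dk/2, k + dk/2), as quoted in the text
    return 4.0 * math.pi * k**2 * dk + math.pi * dk**3 / 3.0

def V_shell_exact(k, dk):
    # direct difference of the enclosing and enclosed sphere volumes
    return 4.0 * math.pi / 3.0 * ((k + dk / 2)**3 - (k - dk / 2)**3)
```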
The cross covariance between the power spectrum and the bispectrum is then given by
\begin{equation}
\mathrm{cov}( \hat{P}, \hat{B} )= \langle \hat{P} \hat{B} \rangle - \langle \hat{P}\rangle \langle \hat{B} \rangle.
\end{equation}
Similar to the derivation in Sec.~\ref{sec:SSC_Bk_derivation}, it is easy to show that the supersample covariance contribution to cross covariance is given by
\begin{align}
\label{eq:SSC_PB_prediction}
& C_{\rm SSC}^{PB}(k, k_1,k_2,k_3) \nonumber \\
= & \sigma_{W_{2,3}}^2 \frac{ \partial }{ \partial \delta_{\rm b} } P(k| \delta_{\rm b} ) \bigg|_{ \delta_{\rm b} = 0 } \frac{ \partial }{ \partial \delta_{\rm b} } B( k_1, k_2, k_3 | \delta_{\rm b} ) \bigg|_{ \delta_{\rm b} = 0 } ,
\end{align}
where the mixed window variance $ \sigma_{W_{2,3}}^2 $ is defined as
\begin{equation}
\sigma_{W_{2,3}}^2 = \int \frac{ d^3 Q }{(2\pi)^3 } \frac{ W_2 ( Q ) }{V} \frac{ W_3 ( Q ) }{V} P_{\rm l} (Q) .
\end{equation}
The effects of a general window function are thus captured by the variances computed using $W_n $. Now, following \cite{Takada:2013bfn}, we specialize to the window function $W$
\begin{equation}
\label{eq:window_0_1}
W(\bm{x} ) = \begin{cases} 1 \, & \mbox{inside survey } \\
0 &\mbox{otherwise }
\end{cases} .
\end{equation}
This window function obeys the nice property that
\begin{equation}
\label{eq:Wn_identity_real}
W ( \bm{x} ) = W^n( \bm{x} ), \quad W ( \bm{k} ) = W_n( \bm{k} ).
\end{equation}
Hence, all of the variances are $\sigma_W^2 $ computed using $W$. For the rest of the paper, we will use the form Eq.~\eqref{eq:window_0_1} for $W$.
\section{ The response of the bispectrum to the long mode }
\label{sec:Bk_response_effect_general}
The long mode can affect the local measurement of the polyspectra in three ways: shifting the mean density used to define the density contrast, modifying the scale factor of the local patch, and changing the intrinsic growth \cite{Hamilton:2005dx,Baldauf:2011bh,Sherwin:2012nh,dePutter:2012, Kehagias:2013paa,Takada:2013bfn,Li:2014sga, Wagner:2015gva, Baldauf:2015vio}. These effects can be understood using the separate universe picture \cite{Sirko:2005uz,Baldauf:2011bh,Sherwin:2012nh,Li:2014sga,Wagner:2014aka,Baldauf:2015vio}, in which the long wavelength perturbation is absorbed into the background of a separate curved universe. We now describe each of them separately.
First, the density contrast is defined relative to the mean density, and we need to distinguish between the local and global mean densities \cite{dePutter:2012}. For galaxy surveys the density contrast is defined with respect to the local mean density, while for weak lensing the global one is used \cite{Takada:2013bfn}. The global and local mean densities, $\bar{\rho} $ and $\bar{\rho}_{W} $, are related by
\begin{equation}
\rho(\bm{x}) = \bar{\rho} ( 1 + \delta(\bm{x}) ) = \bar{\rho}_{W}( 1 + \delta_{W} (\bm{x}) ) ,
\end{equation}
where $\delta$ and $\delta_{W}$ are the global and local density contrasts. Because the global mean density and the local one are related by the background perturbation as
\begin{equation}
\bar{\rho}_{W} = ( 1 + \delta_{\rm b} ) \bar{\rho} ,
\end{equation}
we have
\begin{equation}
\delta_{W }(\bm{x} ) = \frac{\delta (\bm{x} ) - \delta_{\rm b}}{1 + \delta_{\rm b} }.
\end{equation}
Or in Fourier space, we get
\begin{equation}
\delta_{W}(\bm{k}) = \frac{\delta (\bm{k}) - (2 \pi)^3 \delta_{\rm b}\delta_{\rm D} (\bm{k})}{1 + \delta_{\rm b} }.
\end{equation}
As we consider finite \textit{external} wave numbers, the Dirac delta function will not contribute, and so if the bispectrum is defined with respect to the local density we make the replacement
\begin{equation}
\label{eq:global_local_mean_replacement}
B(k_1,k_2,k_3) \rightarrow \frac{ B(k_1,k_2,k_3) }{ ( 1 + \delta_{\rm b} )^3 }.
\end{equation}
Second, the long mode modifies the background expansion rate in the local patch. By absorbing the long mode into the background density, the scale factor of the local universe, $a_W$, is related to the global one, $a$, as \cite{Sirko:2005uz,Baldauf:2011bh,Sherwin:2012nh,Li:2014sga,Wagner:2014aka}
\begin{equation}
\label{eq:density_pert_a3scaling}
\frac{ 1+ \delta_{\rm b} }{ a^3 } = \frac{ 1}{ a_W^3 }.
\end{equation}
The separate universe and the global universe describe the same physical system in different ways, thus the physical quantities in these descriptions must agree. By matching the physical length scale in these two universes, we infer that the comoving wave number in the local universe $\bm{k}_W$ is related to the global one $\bm{k}$ as \cite{Sherwin:2012nh,Li:2014sga,Wagner:2015gva}
\begin{equation}
\label{eq:kW}
\bm{k}_W = ( 1 + \delta_{\rm b} )^{ - \frac{1}{3} } \bm{k}.
\end{equation}
In Ref.~\cite{Li:2014sga}, this rescaling of the wave number was referred to as the dilation effect. As we shall see in Sec.~\ref{sec:response_SPT_longshortcoupling}, the dilation effect is incorporated into the standard perturbation theory.
Taking into account the transformation of the Dirac delta function, the dilation effect on the bispectrum is given by
\begin{equation}
B(k_1,k_2,k_3) = \frac{ B( k_{W1}, k_{W2}, k_{W3} ) }{ (1 - \delta_{\rm b} ) } .
\end{equation}
The last effect is the modification of the intrinsic growth. If the background perturbation is positive, then gravity is stronger in the local universe, and so the intrinsic growth is enhanced. This effect can be studied with separate universe simulations \cite{Baldauf:2011bh,Li:2014sga,Wagner:2014aka}, perturbation theory \cite{Hamilton:2005dx}, or non-perturbative models such as hyperextended perturbation theory \cite{Hamilton:2005dx} or the halo model \cite{Takada:2013bfn}. We compute this effect for the dark matter bispectrum using perturbation theory in Sec.~\ref{sec:response_SPT_longshortcoupling} and the halo model in Sec.~\ref{sec:bisp_response_HM}.
We are now ready to check how the bispectrum response function depends on these effects. If the global mean density is used, the bispectrum response function is given by
\begin{align}
& \frac{\partial }{\partial \delta_{\rm b} } \frac{ B_W (k_{W1}, k_{W2}, k_{W3} ) }{ 1 - \delta_{\rm b} } \bigg|_{\delta_{\rm b}=0 } \nonumber \\
= & B( k_{1}, k_{2}, k_{3} ) + \frac{ \partial}{ \partial \delta_{\rm b} } B_W( k_{W1},k_{W2},k_{W3}) \bigg|_{\delta_{\rm b}=0 } .
\end{align}
We use $B_W$ to denote the bispectrum resulting from the modified intrinsic growth. As in \cite{Li:2014sga}, the second term can be analyzed by the chain rule
\begin{align}
\label{eq:dBw_ddeltab_chainrule}
& \frac{ \partial}{ \partial \delta_{\rm b} } B_W( k_{W1},k_{W2},k_{W3}) \bigg|_{\delta_{\rm b}=0 } \nonumber \\
=& \bigg[ \frac{\partial }{ \partial \delta_{\rm b} } B_W (k_{W1},k_{W2},k_{W3}) \bigg|_{ \substack{ k_W \\ \mathrm{fixed} } } + \nonumber \\
& \quad \sum_{i=1}^3 \frac{\partial }{ \partial k_{Wi} } B_W (k_{W1},k_{W2},k_{W3}) \bigg|_{ \substack{ B_W \\ \mathrm{fixed} } } \frac{\partial k_{Wi} }{ \partial \delta_{\rm b}} \bigg]_{\delta_{\rm b}=0} \nonumber \\
= & \frac{\partial }{ \partial \delta_{\rm b} } B_W (k_{1},k_{2},k_{3}) \bigg|_{\delta_{\rm b}=0} - \frac{1}{3} \sum_{i=1}^3 \frac{\partial }{ \partial \ln k_{i} } B( k_{1}, k_{2}, k_{3}).
\end{align}
The first term in the last line of Eq.~\eqref{eq:dBw_ddeltab_chainrule} encodes the modification of the intrinsic growth due to the long mode.
In summary, if the global mean is used, the full response function is given by
\begin{align}
\label{eq:response_derivative_full_gb}
& \frac{\partial }{\partial \delta_{\rm b} } B (k_1, k_2, k_3| \delta_{\rm b} ) \bigg|_{\delta_{\rm b}=0 } \nonumber \\
= & B( k_{1}, k_{2}, k_{3} ) - \frac{1}{3} \sum_{i=1}^3 \frac{\partial }{ \partial \ln k_{i} } B( k_{1}, k_{2}, k_{3}) \nonumber \\
& \quad + \frac{\partial }{ \partial \delta_{\rm b} } B_W (k_{1},k_{2},k_{3}) \bigg|_{\delta_{\rm b}=0} .
\end{align}
If the local mean is used, with the replacement Eq.~\eqref{eq:global_local_mean_replacement}, there is an additional term $-3 B $ in Eq.~\eqref{eq:response_derivative_full_gb}.
These results are similar to the analogous expressions for the power spectrum \cite{Li:2014sga}.
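Given any model for the bispectrum and for the growth term, Eq.~\eqref{eq:response_derivative_full_gb} can be evaluated with numerical logarithmic derivatives. The sketch below is our own illustrative helper (the growth term is supplied as a callback and switched off here); for a toy power-law bispectrum $B \propto (k_1 k_2 k_3)^n$ the remaining dilation piece reduces to $(1-n)B$.

```python
def full_response(B, dB_growth, k1, k2, k3, eps=1e-4):
    # Eq. (response_derivative_full_gb): B - (1/3) sum_i dB/dln k_i + growth term
    ks = (k1, k2, k3)
    def dlnk(i):
        # central difference of B with respect to ln k_i
        up = [k * (1.0 + eps) if j == i else k for j, k in enumerate(ks)]
        dn = [k * (1.0 - eps) if j == i else k for j, k in enumerate(ks)]
        return (B(*up) - B(*dn)) / (2.0 * eps)
    return B(k1, k2, k3) - sum(dlnk(i) for i in range(3)) / 3.0 + dB_growth(k1, k2, k3)

# toy check: B = (k1 k2 k3)**n with the growth term switched off gives (1 - n) B
n = 2.0
B_toy = lambda k1, k2, k3: (k1 * k2 * k3)**n
resp = full_response(B_toy, lambda k1, k2, k3: 0.0, 1.0, 2.0, 3.0)  # approx -36.0
```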
\section{ The bispectrum response function from theory }
\label{sec:Bk_reponse_SPT_HM}
In this section, we compute the dark matter bispectrum response function using standard perturbation theory (SPT) and then the halo model. The SPT response function is valid in the low $k$ regime, while the halo model will enable us to extend the results to the deeply nonlinear regime.
\subsection{ Coupling of the long and short modes in SPT }
\label{sec:response_SPT_longshortcoupling}
Here we compute the coupling between the long and short modes using SPT (see \cite{PTreview} for a review of SPT). We see below that the dilation effect and the modification of the growth discussed in Sec.~\ref{sec:Bk_response_effect_general} appear naturally in SPT.
To obtain the supersample covariance, we need to calculate the linear response function, i.e.~the first derivative of the bispectrum with respect to the long mode. Therefore, we only need to compute the modulated density up to first order in $ \delta_{\rm b} $. To evaluate the tree-level bispectrum, we need the density up to second order in the small scale modes.
Let us start with the second order density contrast $\delta^{(2)}$, and we will see shortly that the calculations for $\delta^{(3)}$ are similar. In SPT, $\delta^{(2)}$ can be expanded as
\begin{equation}
\label{eq:delta2_SPT}
\delta^{(2)}(\bm{k} ) = \int \frac{ d^3 q }{(2 \pi)^3} F_2(\bm{q}, \bm{k}- \bm{q} ) \delta^{(1)}( \bm{q}) \delta^{(1)} ( \bm{k}- \bm{q} ),
\end{equation}
where $F_2 $ is the coupling kernel
\begin{equation}
F_2( \bm{k}_1, \bm{k}_2 ) = \frac{5}{7} + \frac{1}{2} \mu \Big( \frac{k_1}{k_2} + \frac{k_2}{k_1} \Big) + \frac{2}{7} \mu^2,
\end{equation}
with $ \mu = \hat{\bm{k}}_1 \cdot \hat{\bm{k}}_2 $. The convolution integral in Eq.~\eqref{eq:delta2_SPT} couples $\delta^{(1)}$ of different scales. For example, if both $\delta^{(1)}$ are the small scale $\delta^{(1)}_{\rm s} $, then it gives the small-scale $\delta_{\rm s}^{(2)} $. We are particularly interested in the coupling between the long mode $\delta^{(1)}_{\rm l} $ and the short mode $\delta^{(1)}_{\rm s} $. Focusing on the long-short coupling, we have
\begin{align}
\label{eq:delta2_ls_coupling}
\delta^{(2)}_{\rm ls}(\bm{k} ) &= 2 \int \frac{ d^3 q }{(2 \pi)^3} F_2(\bm{q}, \bm{k}- \bm{q} ) \delta_{\rm l}^{(1)}( \bm{q}) \delta_{\rm s}^{(1)} ( \bm{k}- \bm{q} ),
\end{align}
where $\bm{q} $ and $\bm{k}$ represent the long and short modes respectively. As there are poles in $F_2( \bm{q}, \bm{k} - \bm{q} ) $, we consider the spherically symmetric long wavelength perturbation
\begin{align}
\label{eq:long_mode_form}
\delta_{\rm l} (\bm{q}) = \frac{ 2\pi^2 \delta_{\rm b} }{ q_{\rm b}^2 } \delta_{\rm D} (q -q_{\rm b}) ,
\end{align}
with $q_{\rm b} \ll k$. Eq.~\eqref{eq:long_mode_form} can be obtained by spherically averaging over the angle of $\bm{q}$ in Eq.~\eqref{eq:delta_b_3Dform}, assuming finite $q_{\rm b} $. Following \cite{Baldauf:2015vio}, we expand both $ F_2(\bm{q}, \bm{k}- \bm{q} ) $ and $ \delta^{(1)} ( \bm{k}- \bm{q} ) $ about the long mode $\bm{q}$. We note that $ \delta^{(1)} $ is a (Gaussian) random field, so normally it is not differentiable. Crucially, the small scale modes are separated by $k_{\rm F} $ and $q \ll k_{\rm F} $, so the Taylor expanded value does not interfere with the neighboring values and cause a contradiction.
Collecting terms up to order $q^0$, we have
\begin{align}
\label{eq:delta2_ls}
\delta^{(2)}_{\rm ls}(\bm{k} ) & \approx 2 \delta_{\rm b} \int d q \, \delta_{\rm D} ( q - q_{\rm b} ) \int \frac{ d \Omega_q }{4 \pi} \nonumber \\
& \times \Big\{ \frac{\bm{k} \cdot \bm{q} }{ 2 q^2 } + \Big[ \frac{3}{14} + \frac{ 2 ( \bm{k}\cdot \bm{q})^2 }{ 7k^2 q^2 } \Big] \Big\} \nonumber \\
& \times [ \delta_{\rm s}^{(1)} ( \bm{k}) - \bm{q} \cdot \partial_{\bm{k}} \delta_{\rm s}^{(1)} (\bm{k}) ] \nonumber \\
&= \Big[ \frac{ 13 }{21 } \delta^{(1)}(\bm{k}) - \frac{1}{3} \bm{k} \cdot \partial_{\bm{k}} \delta^{(1)}(\bm{k} ) \Big] \delta_{\rm b} .
\end{align}
Using Eq.~\eqref{eq:delta2_ls}, up to first order in the long and short modes, we have \cite{Baldauf:2015vio}
\begin{align}
\label{eq:density2_modulated_SphAve}
\delta_{\rm s}^{(1)}( \bm{k}| \delta_{\rm b}) & =
\delta^{(1)}_{\rm s} (\bm{k} ) + \delta^{(2)}_{\rm ls}(\bm{k} ) \nonumber \\
&\approx \delta^{(1)}_{\rm s} \Big( \bm{k}\big( 1 - \frac{1}{3} \delta_{\rm b} \big) \Big) + \frac{ 13 }{21 } \delta_{\rm b} \delta^{(1)}_{\rm s} (\bm{k}).
\end{align}
The first term is the dilation effect discussed in Sec.~\ref{sec:Bk_response_effect_general} while the second term is the modification of the small scale growth by the long mode. This shows that both the modification of the intrinsic growth and the dilation effects are incorporated into SPT automatically, while SPT is normalized with respect to the global mean.
To first order in $\delta_{\rm b} $ and the short mode, the modulated dark matter power spectrum is given by
\begin{equation}
P(k| \delta_{\rm b}) = P(k) + \delta_{\rm b} \Big[ P(k) - \frac{1}{3} \frac{d P(k)}{d \ln k} \Big] + \frac{26}{21} \delta_{\rm b} P(k),
\end{equation}
where we have split the contributions into the dilation (the term in the square brackets) and the modification of intrinsic growth (last term). We then get the power spectrum response
\begin{equation}
\label{eq:Pk_response_SPT_gb}
\frac{ \partial P(k| \delta_{\rm b}) }{ \partial \delta_{\rm b} } \bigg|_{\delta_{\rm b} = 0 } = \frac{47}{21} P(k) - \frac{1}{3} \frac{d P(k)}{d \ln k}.
\end{equation}
If the local mean is used instead, 47/21 is replaced by 5/21 in Eq.~\eqref{eq:Pk_response_SPT_gb}.
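For a power-law spectrum $P \propto k^n$, Eq.~\eqref{eq:Pk_response_SPT_gb} gives a response of $(47/21 - n/3)\,P(k)$. A minimal numerical sketch (the helper name is ours) reads:

```python
def P_response(P, k, eps=1e-5):
    # Eq. (Pk_response_SPT_gb): 47/21 P(k) - (1/3) dP/dln k (global-mean case)
    dPdlnk = (P(k * (1.0 + eps)) - P(k * (1.0 - eps))) / (2.0 * eps)
    return 47.0 / 21.0 * P(k) - dPdlnk / 3.0

# power-law example: P = k**n gives a response of (47/21 - n/3) P(k)
n = -1.5
resp = P_response(lambda k: k**n, 2.0)
```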
\begin{widetext}
For the tree-level bispectrum response, we also need the coupling between one long mode and two short modes through the $F_3$ kernel. This long-short-short coupling term is given by
\begin{align}
\label{eq:delta2_lss}
\delta_{\rm lss}^{(3)}( \bm{k} ) = & 3 \int \frac{ d^3 q }{ (2 \pi)^3 } \int \frac{ d^3 k_1 }{ (2 \pi)^3 } F_3( \bm{q},\bm{k}_1, \bm{k} - \bm{k}_{1} - \bm{q} )
\delta^{(1)}_{\rm l}(\bm{q} ) \delta^{(1)}_{\rm s}(\bm{k}_1 ) \delta^{(1)}_{\rm s}( \bm{k} - \bm{k}_{1} - \bm{q} ) \nonumber \\
= & 3 \delta_{\rm b} \int \frac{ d^3 k_1 }{ (2 \pi)^3 } \delta^{(1)}_{\rm s}(\bm{k}_1 ) \int d q \delta_{\rm D} ( q - q_{\rm b} ) \int \frac{ d \Omega_{q} }{ 4 \pi }
F_3( \bm{q}, \bm{k}_1, \bm{k} - \bm{k}_1 - \bm{q})
\delta^{(1)}_{\rm s}( \bm{k} - \bm{k}_1 - \bm{q} ) ,
\end{align}
where $\bm{k}$ and $\bm{k}_1$ denote the short modes while $\bm{q}$ is the long mode.
Analogous to the case of $\delta_{\rm ls}^{(2)} $, we expand $ F_3( \bm{q}, \bm{k}_1, \bm{k} - \bm{k}_1 - \bm{q}) $ to order $q^0 $ and
\begin{equation}
\delta^{(1)}_{\rm s}( \bm{k} - \bm{k}_1 - \bm{q} ) \approx \delta^{(1)}_{\rm s}( \bm{k} - \bm{k}_1 ) - \bm{q} \cdot \partial_{ \bm{k} } \delta^{(1)}_{\rm s}( \bm{k} - \bm{k}_1 ).
\end{equation}
Up to the order $q^0 $, the terms that survive the angular integration are
\begin{align}
\label{eq:delta_lss3}
\delta_{\rm lss}^{(3)}( \bm{k} )
= - \frac{1}{3} \delta_{\rm b} \int \frac{ d^3 k_1}{(2 \pi)^3 } F_2( \bm{k}_1, \bm{k} - \bm{k}_1 ) \delta_{\rm s}^{(1)} ( \bm{k}_1 ) \bm{k} \cdot \partial_{ \bm{k} } \delta_{\rm s }^{(1)}( \bm{k} - \bm{k}_1 )
+ \delta_{\rm b} \int \frac{ d^3 k_1}{(2 \pi)^3 } A_0(\bm{k}_1, \bm{k} - \bm{k}_1 ) \delta_{\rm s}^{(1)}(\bm{k}_1 ) \delta_{\rm s}^{(1)}( \bm{k} - \bm{k}_1 ),
\end{align}
where $A_0$ denotes
\begin{align}
\label{eq:A0_term}
A_0( \bm{k}_1, \bm{k} - \bm{k}_1 )& = \frac{89}{126} + \frac{11}{ 21 }\bm{k}_1 \cdot ( \bm{k} - \bm{k}_1 ) \Big( \frac{1}{ k_1^2 } + \frac{ 1}{ | \bm{k} - \bm{k}_1|^2 } \Big) + \frac{ 23 }{63 } \frac{ [ \bm{k}_1 \cdot ( \bm{k} - \bm{k}_1 ) ]^2 }{k_1^2 | \bm{k} - \bm{k}_1 |^2 } \nonumber \\
& -\frac{1}{12} \Big[ \frac{ k_1^2 }{| \bm{k} - \bm{k}_1 |^2} + \frac{| \bm{k} - \bm{k}_1 |^2}{ k_1^2 } \Big] + \frac{ [ \bm{k}_1 \cdot ( \bm{k} - \bm{k}_1 )]^2 }{ 6 } \Big[ \frac{1}{ k_1^4 }+ \frac{1}{|\bm{k} - \bm{k}_1 |^4 } \Big] \nonumber \\
& + \frac{2}{21} \frac{ [ \bm{k}_1 \cdot ( \bm{k} - \bm{k}_1 )]^3 }{ k_1^2 |\bm{k} - \bm{k}_1 |^2 } \Big( \frac{1}{ k_1^2 } + \frac{1}{|\bm{k} - \bm{k}_1 |^2} \Big).
\end{align}
We have symmetrized the kernel $A_0$. The first term in Eq.~\eqref{eq:delta_lss3} is due to the product of the $q^{-1}$-order term in $ F_3( \bm{q}, \bm{k}_1, \bm{k} - \bm{k}_1 - \bm{q}) $ with the gradient term $-\bm{q} \cdot \partial_{\bm{k}} \delta^{(1)}_{\rm s}( \bm{k} - \bm{k}_1 )$; and the second term results from the product of the $q^0$-order term in $ F_3( \bm{q}, \bm{k}_1, \bm{k} - \bm{k}_1 - \bm{q}) $ with $ \delta^{(1)}_{\rm s}( \bm{k} - \bm{k}_1 )$.
By simply adding and subtracting the term
\begin{equation}
- \frac{1}{3} \delta_{\rm b} \int \frac{ d^3 k_1}{(2 \pi)^3 } [ \bm{k} \cdot \partial_{\bm{k}} F_2( \bm{k}_1, \bm{k} - \bm{k}_1 ) ] \delta_{\rm s}^{(1)}( \bm{k}_1 ) \delta_{\rm s}^{(1)} ( \bm{k} - \bm{k}_1 ),
\end{equation}
we can write Eq.~\eqref{eq:delta_lss3} as
\begin{align}
\delta_{\rm lss}^{(3)}( \bm{k} ) = - \frac{1 }{3} \delta_{\rm b} \bm{k} \cdot \partial_{\bm{k}} \delta_{\rm s}^{(2)} (\bm{k} ) +
\delta_{\rm b} \int \frac{ d^3 k_1}{(2 \pi)^3 } A(\bm{k}_1, \bm{k} - \bm{k}_1 ) \delta_{\rm s}^{(1)}(\bm{k}_1 ) \delta_{\rm s}^{(1)}( \bm{k} - \bm{k}_1 ),
\end{align}
where $A$ reads
\begin{equation}
\label{eq:Akernel}
A ( \bm{k}_1, \bm{k} - \bm{k}_1 ) = \frac{ 151}{ 126 } F_2(\bm{k}_1, \bm{k} - \bm{k}_1) + \frac{5}{ 126 } G_2(\bm{k}_1, \bm{k} - \bm{k}_1) ,
\end{equation}
and $ \delta_{\rm s}^{(2)} $ is defined similarly to Eq.~\eqref{eq:delta2_SPT}, except with $\delta^{(1)}$ replaced by $\delta^{(1)}_{\rm s} $. $G_2$ is the velocity divergence kernel
\begin{equation}
G_2( \bm{k}_1, \bm{k}_2 ) = \frac{3}{7} + \frac{1}{2} \mu \Big( \frac{k_1}{k_2} + \frac{k_2}{k_1} \Big) + \frac{4}{7} \mu^2.
\end{equation}
Note that the three types of terms in the last two lines of Eq.~\eqref{eq:A0_term} cancel out in Eq.~\eqref{eq:Akernel}, so that $A$ can be written solely in terms of $F_2$ and $G_2$.
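As an illustrative check of Eq.~\eqref{eq:Akernel} (the helper functions below are ours): for an equilateral internal pair, $k_1 = k_2$ with $\mu = -1/2$, one has $F_2 = 2/7$ and $G_2 = 1/14$, hence $A = 29/84 \approx 0.345$.

```python
def F2(k1, k2, mu):
    # second-order density kernel; mu is the cosine between the two wave vectors
    return 5.0/7.0 + 0.5 * mu * (k1/k2 + k2/k1) + 2.0/7.0 * mu**2

def G2(k1, k2, mu):
    # velocity-divergence kernel
    return 3.0/7.0 + 0.5 * mu * (k1/k2 + k2/k1) + 4.0/7.0 * mu**2

def A(k1, k2, mu):
    # Eq. (Akernel): growth-response kernel as a combination of F2 and G2
    return 151.0/126.0 * F2(k1, k2, mu) + 5.0/126.0 * G2(k1, k2, mu)
```

For aligned equal modes ($\mu = 1$, $k_1 = k_2$) the code also reproduces the familiar limit $F_2 = 2$.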
Therefore up to first order in the long mode and second order in the short modes, the small scale mode reads
\begin{align}
\label{eq:delta2s_long}
\delta_{\rm s}^{(2)}(\bm{k} | \delta_{\rm b} ) &=
\delta_{\rm s}^{(1)}( \bm{k} ) + \delta_{\rm s}^{(2)}( \bm{k} ) - \frac{1}{3 }\delta_{\rm b} \bm{k} \cdot \partial_{\bm{k} } \delta_{\rm s}^{(1)} ( \bm{k} ) - \frac{1}{3 }\delta_{\rm b} \bm{k} \cdot \partial_{\bm{k} }\delta_{\rm s}^{(2)} ( \bm{k} ) + \frac{13}{21} \delta_{\rm b} \delta_{\rm s}^{(1)}(\bm{k}) \nonumber \\
& \qquad + \delta_{\rm b} \int \frac{ d^3k_1}{(2 \pi)^3 } A(\bm{k}_1,\bm{k} - \bm{k}_1) \delta_{\rm s}^{(1)}(\bm{k}_1) \delta_{\rm s}^{(1)}( \bm{k} - \bm{k}_1) \nonumber \\
&\approx \delta_{\rm s}^{(1)}\Big( \bm{k}\big(1 - \frac{1}{3} \delta_{\rm b}\big) \Big) + \delta_{\rm s}^{(2)}\Big( \bm{k}\big(1 - \frac{1}{3} \delta_{\rm b}\big) \Big) + \delta_{\rm b} \Big[\frac{13}{21} \delta_{\rm s}^{(1)}( \bm{k}) + \int \frac{ d^3k_1}{(2 \pi)^3 } A(\bm{k}_1,\bm{k} - \bm{k}_1) \delta_{\rm s}^{(1)}(\bm{k}_1) \delta_{\rm s}^{(1)}( \bm{k} - \bm{k}_1) \Big] .
\end{align}
In the last line, the first two terms are the dilation terms, while the last term is the modification of the intrinsic growth up to second order in the short mode. If the local mean is used, there is an additional overall factor $1/(1 + \delta_{\rm b}) $ in Eq.~\eqref{eq:delta2s_long}.
We are now in a position to compute the modulated bispectrum. The dilation part of the bispectrum can be obtained as
\begin{align}
& \Big\langle \delta_{\rm s}^{(1)}\Big( \bm{k}_1\big(1 - \frac{1}{3} \delta_{\rm b}\big) \Big) \delta_{\rm s}^{(1)}\Big( \bm{k}_2\big(1 - \frac{1}{3} \delta_{\rm b}\big) \Big) \delta_{\rm s}^{(2)}\Big( \bm{k}_3\big(1 - \frac{1}{3} \delta_{\rm b}\big) \Big) \Big\rangle + \,2 \, \mathrm{cyc.} \nonumber \\
= & 2 F_2\Big( \bm{k}_1( 1 - \frac{1}{3} \delta_{\rm b}), \bm{k}_2( 1 - \frac{1}{3} \delta_{\rm b}) \Big) P \Big( k_1( 1 - \frac{1}{3} \delta_{\rm b}) \Big) P \Big( k_2( 1 - \frac{1}{3} \delta_{\rm b}) \Big) \, (2 \pi)^3 \delta_{\rm D} \Big( (1 - \frac{1 }{ 3 }\delta_{\rm b}) ( \bm{k}_1 + \bm{k}_2 + \bm{k}_3 )\Big) + \,2 \, \mathrm{cyc.} \nonumber \\
\approx & (1 + \delta_{\rm b}) B_{\rm m} \Big( k_1( 1 - \frac{1}{3} \delta_{\rm b}), k_2( 1 - \frac{1}{3} \delta_{\rm b}), k_3( 1 - \frac{1}{3} \delta_{\rm b}) \Big) \, (2 \pi)^3\delta_{\rm D} ( \bm{k}_1 + \bm{k}_2 + \bm{k}_3 ) \nonumber \\
\approx & \Big[ (1 + \delta_{\rm b } ) B_{\rm m}(k_1,k_2,k_3) - \frac{1}{3} \delta_{\rm b} \sum_{j=1}^3 \frac{ d }{ d \ln k_{j } } B_{\rm m } ( k_1,k_2,k_3 )\Big] (2 \pi)^3\delta_{\rm D} ( \bm{k}_1 + \bm{k}_2 + \bm{k}_3 ) ,
\end{align}
where $ B_{\rm m }$ is the tree-level dark matter bispectrum
\begin{equation}
B_{\rm m}( k_1,k_2,k_3 ) = 2 F_2( \bm{k}_1, \bm{k}_2 ) P(k_1) P(k_2) + \,2 \, \mathrm{cyc.}
\end{equation}
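A minimal numerical sketch of the tree-level bispectrum follows. The closure condition $\bm{k}_1 + \bm{k}_2 + \bm{k}_3 = \bm{0}$ fixes the angles from the side lengths; the power-law spectrum in the test is an illustrative choice, not the spectrum used in the text.

```python
import numpy as np

def F2_mag(k1, k2, m):
    """Standard SPT kernel in terms of magnitudes and m = cos(angle)."""
    return 5.0/7.0 + 0.5 * m * (k1/k2 + k2/k1) + 2.0/7.0 * m**2

def mu12(k1, k2, k3):
    """Cosine between k1 and k2 in a closed triangle k1+k2+k3=0."""
    return (k3**2 - k1**2 - k2**2) / (2.0 * k1 * k2)

def B_tree(k1, k2, k3, P):
    """Tree-level matter bispectrum: 2 F2 P P + 2 cyclic terms."""
    return (2.0 * F2_mag(k1, k2, mu12(k1, k2, k3)) * P(k1) * P(k2)
          + 2.0 * F2_mag(k2, k3, mu12(k2, k3, k1)) * P(k2) * P(k3)
          + 2.0 * F2_mag(k3, k1, mu12(k3, k1, k2)) * P(k3) * P(k1))
```

For an equilateral triangle the inter-side cosine is $-1/2$, so $F_2 = 2/7$ and $B_{\rm m} = (12/7) P^2(k)$, a convenient sanity check.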
Including the growth enhancement part, the tree-level bispectrum up to first order in the long mode reads
\begin{align}
\label{eq:B_SPT_modulated}
B(k_1,k_2,k_3| \delta_{\rm b} ) = & \Big(1 + \frac{433}{ 126 } \delta_{\rm b } \Big) B_{\rm m}(k_1,k_2,k_3) + \frac{5}{126} \delta_{\rm b} B_{G_2} (k_1,k_2,k_3) - \frac{1}{3} \delta_{\rm b} \sum_{j=1}^3 \frac{ d }{ d \ln k_{j } } B_{\rm m } ( k_1,k_2,k_3 ) ,
\end{align}
where $B_{G_2}$ denotes
\begin{equation}
B_{G_2} (k_1,k_2,k_3) = 2 G_2(k_1,k_2)P(k_1)P(k_2) + \,2 \, \mathrm{cyc.}
\end{equation}
The bispectrum response function is then given by
\begin{align}
\label{eq:Bk_response_SPT_global}
\frac{\partial }{ \partial \delta_{\rm b} } B(k_1,k_2,k_3| \delta_{\rm b} ) \bigg|_{\delta_{\rm b}= 0 } = \frac{433}{126} B_{\rm m}(k_1,k_2,k_3) + \frac{5}{126} B_{G_2}(k_1,k_2,k_3) - \frac{1}{3} \sum_{j=1}^3 \frac{ d }{ d \ln k_{j } } B_{\rm m } ( k_1,k_2,k_3 ) .
\end{align}
If the local mean is used, there is an extra term $-3 \delta_{\rm b} B_{\rm m} $ in Eq.~\eqref{eq:B_SPT_modulated} and hence the factor $433/126$ in Eq.~\eqref{eq:Bk_response_SPT_global} is replaced by $55/126$.
\end{widetext}
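The response function in Eq.~\eqref{eq:Bk_response_SPT_global} can be checked numerically. The sketch below (an illustration, assuming a pure power-law spectrum $P(k) = k^{-2}$) evaluates the logarithmic-derivative term by central finite differences; for a power law the dilation term reduces to $-\tfrac{2n}{3} B_{\rm m}$, which fixes the expected value in closed form.

```python
import numpy as np

def mu12(k1, k2, k3):
    """Cosine between k1 and k2 in a closed triangle k1+k2+k3=0."""
    return (k3**2 - k1**2 - k2**2) / (2.0 * k1 * k2)

def kernel(k1, k2, m, a, b):
    """Generic second-order kernel a + (m/2)(k1/k2 + k2/k1) + b m^2."""
    return a + 0.5 * m * (k1/k2 + k2/k1) + b * m**2

def B_of(k1, k2, k3, P, a, b):
    """2 K P P + 2 cyc., with K = F2 (a=5/7, b=2/7) or G2 (a=3/7, b=4/7)."""
    return (2.0*kernel(k1, k2, mu12(k1, k2, k3), a, b)*P(k1)*P(k2)
          + 2.0*kernel(k2, k3, mu12(k2, k3, k1), a, b)*P(k2)*P(k3)
          + 2.0*kernel(k3, k1, mu12(k3, k1, k2), a, b)*P(k3)*P(k1))

def response(k1, k2, k3, P, h=1e-5):
    """SPT bispectrum response (global mean case)."""
    Bm  = lambda *k: B_of(*k, P, 5.0/7.0, 2.0/7.0)
    BG2 = lambda *k: B_of(*k, P, 3.0/7.0, 4.0/7.0)
    ks = [k1, k2, k3]
    dsum = 0.0
    for j in range(3):   # sum_j dB_m/dln k_j via central differences
        kp, km = list(ks), list(ks)
        kp[j] *= 1.0 + h
        km[j] *= 1.0 - h
        dsum += (Bm(*kp) - Bm(*km)) / (2.0 * h)
    return 433.0/126.0 * Bm(*ks) + 5.0/126.0 * BG2(*ks) - dsum / 3.0
```

For the equilateral configuration with $P(k)=k^{-2}$ one finds $B_{G_2}/B_{\rm m} = 1/4$ and $\sum_j dB_{\rm m}/d\ln k_j = 2n B_{\rm m}$ with $n=-2$, so the response is $(433/126 + 5/504 + 4/3)\, B_{\rm m}$.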
In Fig.~\ref{fig:Bresponse_component}, we plot the components of the dark matter bispectrum response function for the equilateral triangle configuration. Both the dilation effect and the modification of the growth add up in the response function. We also compare the cases of a global mean and a local mean. For galaxy surveys, a local mean is used, while for weak lensing a global mean is used. The response function is significantly reduced for the case of a local mean.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=\linewidth]{Bresponse_component.pdf}
\caption{ The components of the dark matter bispectrum response function normalized by the bispectrum for the equilateral triangle configuration. The beat coupling (dashed, blue) and the dilation effect (dashed, green) are plotted separately. The total response obtained using local mean (solid, cyan) is significantly reduced relative to the global mean case (solid, red).}
\label{fig:Bresponse_component}
\end{center}
\end{figure}
\vspace{-0.8cm}
Here spherical averaging over the angle of the long mode is used, so the effects of the large-scale tidal perturbations are averaged out. In Appendix \ref{sec:SSC_tides}, we generalize the computations to include the long-wavelength tidal contributions on the small-scale matter bispectrum using SPT; in this way, we are able to derive the bispectrum response function to the tides [Eq.~\eqref{eq:response_tide}]. A key difference of the tidal response function from the density one is that it is anisotropic. This will be important for the final tidal contribution to the supersample covariance.
\subsection{ Bispectrum response function from halo model }
\label{sec:bisp_response_HM}
The response function can also be computed using the halo model approach \cite{CooraySheth,Peacock:2000qk,Seljak:2000gq,Scoccimarroetal2001}. In the halo model, all the dark matter is assumed to reside in halos of different masses. The halo model provides a reasonably accurate phenomenological method to extend the polyspectrum to high $k$.
It is instructive to first review the computation of the halo model power spectrum response function \cite{Takada:2013bfn}. The halo model dark matter power spectrum reads
\begin{equation}
\label{eq:Pk_HM}
P_{\rm HM}(k) = [ I^1_1(k) ]^2 P(k) + I^0_2(k),
\end{equation}
where the first term is the 2-halo term, which describes the correlation of dark matter in two different halos, and the second term is the 1-halo term, which describes the correlation in the same halo. Following \cite{Cooray:2000ry} we use the general notation $I_\mu^\beta$
\begin{align}
I_\mu^\beta(k_1,k_2,...,k_\mu) &\equiv \int d M \bigg[ \left( \frac{M} {\bar \rho_m} \right)^\mu b_\beta(M) n(M) \nonumber \\
&\times u_M(k_1) u_M(k_2) \dots u_M(k_\mu) \bigg],
\end{align}
where $M$ is the halo mass, $n$ is the halo mass function, $b_\beta$ is the peak-background split bias of order $\beta$, and $ u_M(k_\mu)$ is the dimensionless Fourier transform of the halo density profile normalized such that $ u_M(0)=1$. We use the NFW halo profile \cite{Navarro:1996gj} with the concentration relation given in \cite{CooraySheth}. The Sheth-Tormen mass function \cite{ShethTormen} and the peak-background split bias derived from it \cite{Scoccimarroetal2001} are adopted. We assume only linear bias and hence $b_\beta = 0$ for $\beta\geq 2$.
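The $I_\mu^\beta$ integrals can be sketched as a simple quadrature over the mass function. The ingredients below are toy stand-ins chosen only for illustration (the text uses the Sheth-Tormen mass function, its peak-background split bias, and an NFW profile), but the structure of the integral is the same.

```python
import numpy as np

# Toy stand-ins (assumptions for illustration only):
rho_m = 1.0                                   # mean matter density
M = np.linspace(0.05, 20.0, 4000)             # mass grid (arbitrary units)
n_M = np.exp(-M) / M                          # toy halo mass function
b_M = {0: np.ones_like(M), 1: 0.5 + 0.5*M}    # toy b_0(M), b_1(M)
u = lambda k: np.exp(-0.5 * (k * M**(1.0/3.0))**2)  # toy profile, u(0)=1

def trapz(y, x):
    """Composite trapezoidal rule."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def I(mu, beta, *ks):
    """I^beta_mu(k_1,...,k_mu): bias- and mass-weighted profile moments."""
    integrand = (M / rho_m)**mu * b_M[beta] * n_M
    for k in ks:
        integrand = integrand * u(k)
    return trapz(integrand, M)
```

Since $u_M(0)=1$, the $k\to 0$ limit of $I_1^1$ is just the bias-weighted mass integral, and the profile suppresses the integrand monotonically at higher $k$.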
In the standard halo model formula Eq.~\eqref{eq:Pk_HM}, the long mode vanishes. To compute the response function, we imagine the long mode is turned on and it modulates $P$, and also the mass function and the bias, while we assume that the halo profile is not affected by the long mode. The response of the mass function and the bias to the long mode can be derived using the relation
\begin{align}
\label{eq:nb_response_PBS}
\frac{\partial I_\mu^\beta}{\partial \delta_{\rm b}} \bigg|_{\delta_{\rm b}=0} & = \int d M \bigg[ \left(\frac{M}{\bar \rho_m}\right)^\mu \frac{\partial }{\partial \delta_{\rm b}} [ b_\beta(M) n( M) ] \nonumber \\
&\times u_M(k_1) u_M(k_2) \dots u_M(k_\mu) \bigg]_{\delta_{\rm b} = 0} = I_\mu^{\beta+1} ,
\end{align}
where we have used the fact that the peak-background split bias $b_\beta$ is the response of mass function to $\delta_{\rm b}$ at order $\beta$ \cite{Mo:1995cs,Schmidt:2012ys}
\begin{equation}
b_\beta(M) = \left.\frac{1}{n(M)}\frac{\partial^\beta n(M)}{\partial \delta_{\rm b}^\beta}\right |_{\delta_{\rm b} = 0}.
\end{equation}
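This response relation can be verified with a finite-difference check on a toy Press-Schechter-like multiplicity function (an assumption for illustration; the text uses the Sheth-Tormen inputs). Here the long mode lowers the effective collapse threshold, $\delta_c \to \delta_c - \delta_{\rm b}$, and the derivative reproduces the familiar peak-background split (Lagrangian) expression $(\nu^2-1)/\delta_c$ for this multiplicity function.

```python
import numpy as np

delta_c = 1.686  # collapse threshold

def n_of(sigma, delta_b=0.0):
    """Multiplicity f(nu) = nu exp(-nu^2/2) with nu = (delta_c - delta_b)/sigma.
    M-dependent prefactors are independent of delta_b and cancel in the bias."""
    nu = (delta_c - delta_b) / sigma
    return nu * np.exp(-0.5 * nu**2)

def b1_fd(sigma, eps=1e-6):
    """b_1 = (1/n) dn/d(delta_b) via central finite differences."""
    return (n_of(sigma, eps) - n_of(sigma, -eps)) / (2.0 * eps * n_of(sigma))

def b1_analytic(sigma):
    """Peak-background split bias for this toy multiplicity function."""
    nu = delta_c / sigma
    return (nu**2 - 1.0) / delta_c
```

The finite-difference and analytic expressions agree to high precision across a range of $\sigma(M)$.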
With this setup, it is easy to see that the power spectrum response function is given by \cite{Takada:2013bfn}
\begin{equation}
\label{eq:PHM_response}
\frac{\partial P_{\rm HM}(k | \delta_{\rm b} ) }{\partial \delta_{\rm b} } \bigg|_{\delta_{\rm b}= 0} \approx [ I_1^1(k) ]^2 \frac{\partial P( k | \delta_{\rm b} ) }{\partial \delta_{\rm b} }\bigg|_{\delta_{\rm b}= 0} + I^1_2(k),
\end{equation}
where we have dropped a term proportional to $I^2_1 $ because it is small compared to the ones we keep. The perturbative power spectrum response function is given by Eq.~\eqref{eq:Pk_response_SPT_gb}. For the case of local mean, there is an additional term $-2 P_{\rm HM} $ in Eq.~\eqref{eq:PHM_response}.
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=\linewidth]{Bkresponse_HaloModel_111_simple_BkNorm.pdf}
\caption{ The bispectrum response function (solid, black) computed using the halo model. The individual terms are also shown: the term due to the 3-halo term (red, solid), that due to the 1-halo term (blue, solid), and the two dominant terms from the 2-halo term [the perturbative power spectrum response term (red, solid) and the term involving $b_2$ (brown, dashed)]. Results for the equilateral triangle configurations at $z=0 $ are shown.}
\label{fig:Bkresponse_HaloModel_111_simple_BkNorm}
\end{center}
\end{figure}
For the case of the power spectrum, Ref.~\cite{Takada:2013bfn} explicitly checked that the supersample covariance computed using the halo model trispectrum agrees with the supersample covariance formula results [the analog of Eq.~\eqref{eq:CB_SSC}] calculated using the halo model power spectrum response Eq.~\eqref{eq:PHM_response}. As it is formidable to check the results using the halo model 6-point function (for a glimpse of its complexity, see the full 6-point function in the Poisson model in \cite{Chan:2016ehg}), here we directly compute the bispectrum response function using the halo model.
In the language of the halo model, the dark matter bispectrum reads \cite{Scoccimarroetal2001}
\begin{align}
\label{eq:BHM}
B_{\rm HM}(k_1,k_2,k_3) & = B_{\rm 1h}(k_1,k_2,k_3) + B_{\rm 2h}(k_1,k_2,k_3) \nonumber \\
& + B_{\rm 3h}(k_1,k_2,k_3) ,
\end{align}
where
\begin{align}
B_{\rm 1h}(k_1,k_2,k_3) &= I_3^0(k_1,k_2,k_3), \\
B_{\rm 2h}(k_1,k_2,k_3) &= I_1^1(k_1)I_2^1(k_2,k_3)P(k_1) + 2 \, {\rm cyc.}, \\
B_{\rm 3h}(k_1,k_2,k_3) &= I_1^1(k_1) I_1^1(k_2) I_1^1(k_3) B_{\rm PT}(k_1,k_2,k_3),
\end{align}
are the 1-, 2-, and 3-halo terms, and $ B_{\rm PT}$ denotes the bispectrum from perturbation theory. The 1-, 2-, and 3-halo terms describe the situations in which all three points are in the same halo, only two of the points are in the same halo, and none of them are in the same halo, respectively. We follow the same prescription as for the case of the power spectrum. In particular, we again first assume only linear bias, i.e.~$b_\beta = 0$ for $\beta\geq 2$; in this case, $B_{\rm PT}$ is simply $B_{\rm m}$.
With the assumption that the presence of long mode modulates $P$ and $B_{\rm PT} $, and also the mass function and the bias while the halo profile is not affected, we can write down the halo model bispectrum response function
\begin{align}
\label{eq:BHM_response}
& \frac{\partial}{\partial \delta_{\rm b}} B_{\rm HM}(k_1,k_2,k_3| \delta_{\rm b}) \bigg|_{\delta_{\rm b} = 0 } \nonumber \\
\approx & I_1^1(k_1) I_1^1(k_2) I_1^1(k_3) \frac{\partial B_{\rm PT}(k_1,k_2,k_3| \delta_{\rm b}) }{\partial \delta_{\rm b} }\bigg|_{\delta_{\rm b} = 0 } \nonumber \\
& + \bigg[ I^1_1(k_1) I^2_2(k_2,k_3) P(k_1) \nonumber \\
& + I_1^1(k_1) I_2^1(k_2,k_3) \frac{\partial P(k_1| \delta_{\rm b} )}{\partial \delta_{\rm b} }\bigg|_{\delta_{\rm b} = 0 } \bigg] + 2 \, \mathrm{cyc}. \nonumber \\
& + I_3^1(k_1,k_2,k_3) .
\end{align}
The response function $[ \partial B_{\rm PT}(\delta_{\rm b}) / \partial \delta_{\rm b} ]_{\delta_{\rm b} = 0 } $ is given by Eq.~\eqref{eq:Bk_response_SPT_global}, and the perturbative power spectrum response function is given by Eq.~\eqref{eq:Pk_response_SPT_gb}. If the local mean is used instead, there is an additional term $-3 B_{\rm HM}$ in Eq.~\eqref{eq:BHM_response}.
In Fig.~\ref{fig:Bkresponse_HaloModel_111_simple_BkNorm}, we plot the bispectrum response function at $z=0 $ for the case of the global mean. At large scales, for $k \lesssim 0.4 \, \mathrm{Mpc}^{-1} \, h $, the 3-halo contribution dominates, while at small scales, for $k\gtrsim 0.6 \, \mathrm{Mpc}^{-1} \, h $, the 1-halo contribution, $I_3^1$, becomes dominant. On the scales shown, the 3-halo contribution is essentially the same as the bispectrum response function computed using perturbation theory. However, as we can see in Fig.~\ref{fig:Bkresponse_HaloModel_111_simple_BkNorm}, on large scales the halo model prediction still differs from the perturbation theory results; e.g.~at $k \sim 0.02 \, \mathrm{Mpc}^{-1} \, h $, the halo model result exceeds that of perturbation theory by 10\%. At large scales, it is well known that the standard halo model formalism predicts unphysical shot noise between matter and halos (see \cite{Ginzburg:2017mgf} for a recent attempt to resolve this issue). Our result indicates that there seems to be another artifact of the halo model at large scales. However, this effect is negligible when we compare the covariance predictions with the numerical results later on.
The terms in the square brackets in Eq.~\eqref{eq:BHM_response} are small in the low and high $k$ regimes, but they are not negligible at the transition scales. Besides the power spectrum response function term, we have also kept a term involving $b_2$. From Eq.~\eqref{eq:nb_response_PBS}, we see that although we have limited ourselves to $b_1$ only, higher order bias terms are generated by the response derivative. As we see in Fig.~\ref{fig:Bkresponse_HaloModel_111_simple_BkNorm}, this term is comparable to the perturbative power spectrum response function term, and thus we keep it as well. We have checked that all of the other terms generated by the response derivative are negligible except this one.
As we have included one of the $b_2$-terms generated, we need to check our starting assumption of including only the $b_1$-terms. In the 3-halo term, there is another possible term, $I^1_1(k_1) I^1_1(k_2) I^2_1(k_3) P(k_1)P(k_2) + 2 \, \mathrm{cyc.}$ We can estimate the importance of this term by differentiating it directly with respect to the long mode. This is not strictly correct, as this term is obtained by setting the long mode to zero; however, for the purpose of estimation, it is sufficient. We find that this term is indeed small compared to the ones that we included.
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=\linewidth]{CB_SSC_response_111.pdf}
\caption{ The diagonal elements of the supersample bispectrum covariances for a suite of values of box size $L$ are compared with the Gaussian covariance (black, dashed line for using the linear power spectrum, and black, solid line for using the halo model power spectrum). The cases for the global mean (left) and local mean (right) are shown. The colorful solid lines show the halo model prediction and the dashed lines (same color) show the perturbation theory prediction. Equilateral triangle configurations at $z=0$ are shown. }
\label{fig:CB_SSC_response_111}
\end{center}
\end{figure*}
\begin{figure}[!tb]
\begin{center}
\includegraphics[width=\linewidth]{PkCovDiag_ratio_subbox_small_SSC_z0.pdf}
\caption{ The ratio between the diagonal elements of the power spectrum covariance measured from the subbox and periodic box setup. The results for the subbox setup with the global mean (green, squares) and local mean (red, triangles) are compared. The predictions using the halo model response function are also shown (yellow stars for global mean and cyan stars for local mean). }
\label{fig:PkCovDiag_ratio_subbox_small_SSC_z0}
\end{center}
\end{figure}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.45\textwidth]{cov_diag_small_subbox_111_SSC_180.pdf}
\includegraphics[width=0.45\textwidth]{cov_diag_small_subbox_ratio_111_SSC_180.pdf}
\caption{ The left panel shows the diagonal element of the dark matter bispectrum covariance for the equilateral triangle configurations at $z=0$, normalized by the Gaussian covariance. The covariances measured from the periodic box (blue, circles), and the subbox setup with the global (green, squares) and local mean (red, triangles) are compared. The supersample covariance predictions for the global (yellow, stars) and local mean (cyan, stars) are shown. In the right panel, the ratios between various covariances and the one measured from the small set of simulations are plotted. }
\label{fig:diag_cov_mat_small_subbox_111_SSC}
\end{center}
\end{figure*}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.9\linewidth]{cov_diag_small_subbox_ratio_221_SSC_180.pdf}
\caption{ Similar to the right panel of Fig.~\ref{fig:diag_cov_mat_small_subbox_111_SSC}, except for the isosceles triangle of the shape $k_1: k_2 :k_3 = 2:2:1 $. }
\label{fig:diag_cov_mat_small_subbox_221_SSC}
\end{center}
\end{figure}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.9\linewidth]{corr_coef_small_subbox_111_SSC_180.pdf}
\caption{ The cross correlation coefficient $r(k_i,k_j)$ for the equilateral triangle configurations at $z=0$. In each plot, $k_i$ is fixed to be 0.077, 0.19, and 0.77 $ \, \mathrm{Mpc}^{-1} \, h $ respectively (from the top to bottom row), and it is plotted as a function of $k_j$. The left panels are for the global mean case, while the right ones are for the local mean. The numerical results from the small box (blue, circles) and the subbox setup (red, triangles) are compared with the supersample covariance prediction (cyan, stars). }
\label{fig:corr_coef_small_subbox_111_SSC}
\end{center}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.9\linewidth]{corr_coef_small_subbox_221_SSC_180.pdf}
\caption{ Similar to Fig.~\ref{fig:corr_coef_small_subbox_111_SSC} except for the isosceles triangle of the shape $k_1: k_2 :k_3 = 2:2:1 $. }
\label{fig:corr_coef_small_subbox_221_SSC}
\end{center}
\end{figure*}
\section{ The supersample covariances: predictions and measurements }
\label{sec:predictions_measurements}
In this section, we compute the supersample covariance for the bispectrum and the cross covariance between the power spectrum and the bispectrum. The predictions are then confronted with the measurements from simulation.
Before going to the numerical results, we first compare the supersample covariance contributions with the Gaussian covariance, which is valid in the low $k$ regime. The Gaussian bispectrum covariance reads \cite{Scoccimarro:2003wn}
\begin{equation}
\label{eq:BCov_Gaussian}
C^B_{\rm G} = \frac{( 2 \pi)^3 k_{\rm F}^3 }{V_{\triangle} } \delta_{k_1k_2k_3, k_1'k_2'k_3' } s_{123} P(k_1) P(k_2) P(k_3) ,
\end{equation}
where $\delta_{k_1k_2k_3, k_1'k_2'k_3' }$ is non-vanishing only if the shape of the triangle $ k_1k_2k_3 $ is the same as that of $ k_1'k_2'k_3' $. The symmetry factor $s_{123} $ is equal to 1, 2, and 6 for scalene, isosceles, and equilateral triangles, respectively. In Eq.~\eqref{eq:BCov_Gaussian}, $P(k) $ can be the linear power spectrum or the nonlinear one; in the latter case, it effectively resums part of the higher-order contributions. For the cross covariance between the power spectrum and the bispectrum, the Gaussian contribution vanishes as it is a 5-point correlator.
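The diagonal Gaussian covariance can be sketched schematically as follows; $k_{\rm F}$ and $V_\triangle$ enter only as prefactors here, and the exact float comparison in the symmetry factor is adequate for binned configuration labels.

```python
import numpy as np

def s123(k1, k2, k3):
    """Symmetry factor: 6 (equilateral), 2 (isosceles), 1 (scalene)."""
    return {1: 6, 2: 2, 3: 1}[len({k1, k2, k3})]

def C_B_gauss(k1, k2, k3, P, kF, V_tri):
    """Diagonal Gaussian bispectrum covariance for a single triangle bin."""
    return ((2.0 * np.pi)**3 * kF**3 / V_tri
            * s123(k1, k2, k3) * P(k1) * P(k2) * P(k3))
```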
To facilitate the comparison with the simulation results, we consider a cubic survey window
\begin{equation}
W_{\rm cubic}(\bm{x}) = \begin{cases}
1 & \mathrm{if} \, 0 \leq x_i \leq L, \\
0 & \mathrm{otherwise} .
\end{cases}
\end{equation}
Its Fourier transform reads
\begin{equation}
W_{\rm cubic}( \bm{k} ) = V e^{- i \bm{k} \cdot \bm{L} / 2 } \prod_{j=1}^3 \mathrm{sinc} \frac{ k_j L }{2} .
\end{equation}
In the supersample covariance formula, it is the RMS variance $\sigma_W^2$ that matters. If a spherical tophat window is used instead, we find that by matching the survey volume with the mapping $R_{\rm TH} = [3 /(4 \pi )]^{1/3} L $, the variances computed with the two windows agree to within 3\% for the box sizes we consider here. Thus, this provides a convenient way to map our results to the tophat window case.
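The cubic window transform and the volume-matched tophat radius are easy to check numerically. The sketch below verifies one Cartesian factor of $W_{\rm cubic}(\bm{k})$ against the defining integral $\int_0^L e^{-ikx}\, dx$ by brute-force quadrature (note that NumPy's sinc is normalized, $\mathrm{np.sinc}(x) = \sin(\pi x)/(\pi x)$).

```python
import numpy as np

def W_1d(k, L):
    """One Cartesian factor of W_cubic(k): L e^{-ikL/2} sinc(kL/2)."""
    return L * np.exp(-0.5j * k * L) * np.sinc(k * L / (2.0 * np.pi))

def W_1d_numeric(k, L, N=20000):
    """Brute-force trapezoidal evaluation of int_0^L e^{-ikx} dx."""
    x = np.linspace(0.0, L, N)
    f = np.exp(-1j * k * x)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

def R_tophat(L):
    """Volume-matched tophat radius, R_TH = [3/(4 pi)]^{1/3} L."""
    return (3.0 / (4.0 * np.pi))**(1.0 / 3.0) * L
```

For the $L = 656.25 \, \mathrm{Mpc} \, h^{-1}$ boxes used below, the matched tophat radius is roughly $407 \, \mathrm{Mpc} \, h^{-1}$.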
In Fig.~\ref{fig:CB_SSC_response_111} we compare the supersample covariance with the Gaussian one for a range of box sizes: 600, 1000, and 3000 $ \, \mathrm{Mpc} \, h^{-1} $ (at $z=0$, the corresponding $\sigma_W^2 $ are $9.8 \times 10^{-5}$, $1.7 \times 10^{-5}$, and $3.3 \times 10^{-7}$, respectively). We compare the supersample covariance predictions obtained using the perturbation theory and the halo model response functions. Current surveys such as DES have a survey volume close to the volume of the 1000 $ \, \mathrm{Mpc} \, h^{-1} $ box, and the future survey Euclid will have a survey volume near that of the 3000 $ \, \mathrm{Mpc} \, h^{-1} $ box. In this plot, we have kept the bin width fixed at $\Delta k = 2 k_{\rm F} $; thus, the number of modes in each configuration bin and the Gaussian covariance are constant across box sizes. Because the bispectrum Gaussian covariance is very subdominant relative to the small-scale non-Gaussian covariance beyond the mildly nonlinear regime \cite{Chan:2016ehg}, the supersample covariance is expected to give only a small contribution to the overall bispectrum covariance budget. We will quantify this using the simulation results below.
We now compare the covariances measured from the periodic box and subbox setups. The simulations used in this work are from the DEUS project \cite{DEUS_FUR_paper,Rasera:2013xfa,Blot:2014pga}. A flat $\Lambda$CDM model with the WMAP7 cosmological parameters \cite{2007ApJS..170..377S} is adopted for these simulations. In particular, $h=0.72$, $\Omega_{\rm m} = 0.257$, $n_{\rm s } = 0.963$, and $\sigma_{8 } = 0.801 $. The Zel'dovich approximation is used to generate the Gaussian initial conditions at $z_i = 105$. The transfer function is computed with {\tt CAMB} \cite{CAMB}. The simulations are evolved using the adaptive mesh refinement solver {\tt RAMSES} \cite{2002A&A...385..337T}. We will only consider dark matter simulation results at $z=0$.
The periodic simulations are the small set used in \cite{Chan:2016ehg}. In each periodic simulation, there are $256^3$ particles in a cubic box of size $L=656.25 \, \mathrm{Mpc} \, h^{-1} $. There are altogether 4096 realizations. For the subbox setup, we use a gigantic simulation of box size 21 Gpc$ \, h^{-1}$ with $8192^3 $ particles from the DEUS full universe run. The gigantic box is divided into cubic subboxes of size $656.25 \, \mathrm{Mpc} \, h^{-1} $. There are altogether 32768 subboxes and we use 4096 of them.
To facilitate the comparison with the bispectrum results later on, we show the power spectrum results here as well. Similar to \cite{Takahashi:2009ty,Li:2014sga}, we find that the power spectrum measurement is biased low due to the window function convolution in the range of scales we consider. The bias depends on the shape of the power spectrum and the size of the window function. For our case, it is most substantial in the range from $k \sim 0.01$ to 0.1 $ \, \mathrm{Mpc}^{-1} \, h $. Ref.~\cite{Li:2014sga} scaled the value of the subbox case to match the periodic box measurement; here we do not apply any correction, as we find that this helps little for the case of the bispectrum. We plot the diagonal elements of the power spectrum covariance ratio between the subbox results and the small box results in Fig.~\ref{fig:PkCovDiag_ratio_subbox_small_SSC_z0}. At $k=0.5 \, \mathrm{Mpc}^{-1} \, h $, the supersample covariance correction to the small box results is about 60\% for the local mean and 250\% for the global mean. We will see that the effect is substantially smaller for the bispectrum, and it is of similar order of magnitude for the cross covariance.
For the bispectrum, the expectation value of Eq.~\eqref{eq:B_w_form1} reads
\begin{align}
\langle & \hat{B}_W (k_1,k_2,k_3) \rangle = \frac{1}{V V_{\triangle}} \int_{k_1} d^3 p_1 \int_{k_2} d^3 p_2 \int_{k_3} d^3 p_3 \delta_{\rm D} (\bm{p}_{123} ) \nonumber \\
& \quad \times \prod_{i=1}^3 \int \frac{ d^3 q_i}{ (2 \pi)^3 } W(\bm{p}_i - \bm{q}_i ) (2 \pi)^3 \delta_{\rm D} ( \bm{q}_{123} ) B( q_1,q_2,q_3).
\end{align}
The first line simply averages the triangles within the configuration bin, and hence any bias is expected to arise from the smearing effect by the window in the second line. The window function satisfies
\begin{equation}
\label{eq:W3_convolution_intg}
\frac{1}{V} \prod_{i=1}^3 \int \frac{ d^3 q_i}{ (2 \pi)^3 } W(\bm{p}_i - \bm{q}_i ) (2 \pi)^3 \delta_{\rm D} ( \bm{q}_{123} ) = 1,
\end{equation}
where we have used $\bm{p}_{123} = \bm{0} $ and Eq.~\eqref{eq:Wn_identity_real}. Hence the window convolution can be interpreted as taking a weighted mean of the bispectrum. For a more extended window, the value in Eq.~\eqref{eq:W3_convolution_intg} is smaller; e.g.~for a Gaussian window, it is $1/ \sqrt{27} $ instead of 1, and so the windowed bispectrum is more biased relative to the underlying one. Except for the first bin, which is biased low, the subsequent bins are biased high (by roughly 2\%), and the bias decreases as $k$ increases. Apart from the bias in the amplitude, the subbox measurements also exhibit wiggles, which are strongest in the low $k$ regime and decrease as $k$ increases.
In Fig.~\ref{fig:diag_cov_mat_small_subbox_111_SSC}, we show the diagonal element of the bispectrum covariance for the equilateral triangle configuration obtained from the small box and the subbox setups. In the left panel, the covariance is normalized with respect to the Gaussian covariance. For both the global and local mean cases, the supersample covariance correction is small relative to the small scale covariance, which can be studied using a periodic setup. To see the differences more clearly, we show the ratio between the subbox covariance and the small box covariance in the right panel. We see clearly that there are wiggles in the subbox covariance due to the convolution with the cubic window.
Again the effect of the supersample covariance is small: up to $k \sim 0.5 \, \mathrm{Mpc}^{-1} \, h $, the enhancement of the covariance is about 30\% for the global mean and about 5\% for the local mean. The ratio is roughly an order of magnitude smaller than that for the power spectrum covariance (Fig.~\ref{fig:PkCovDiag_ratio_subbox_small_SSC_z0}). We show the ratio between the subbox setup and the periodic box results for the isosceles triangle configurations $ k_1:k_2:k_3=2:2:1 $ as a function of $k_1$ in Fig.~\ref{fig:diag_cov_mat_small_subbox_221_SSC}. Clearly they are qualitatively similar to the equilateral triangle case.
We also show the supersample covariance prediction given by
\begin{equation}
C_{\rm Small+SSC } = C_{\rm Small} + C_{\rm SSC},
\end{equation}
where $ C_{\rm Small} $ is the covariance measured from the small set and $ C_{\rm SSC} $ is obtained with Eq.~\eqref{eq:CB_SSC} using the halo model response function. We find that for both the equilateral triangle and isosceles triangle configurations, besides the convolution due to the survey window, the supersample covariance prediction agrees with the subbox results well up to $k\sim 0.5 \, \mathrm{Mpc}^{-1} \, h $. However, the halo model predictions overpredict the effect for larger $k$; e.g.~at $k=1 \, \mathrm{Mpc}^{-1} \, h $, it is overpredicted by 20\% for local mean and 170\% for the global mean.
The cross correlation coefficient $r$ is generally defined as
\begin{align}
& r(k_1,k_2,k_3, k_1',k_2',k_3') \nonumber \\
= & \frac{ C(k_1,k_2,k_3, k_1',k_2',k_3') }{\sqrt{ C(k_1,k_2,k_3, k_1,k_2,k_3) C( k_1',k_2',k_3', k_1',k_2',k_3') } }.
\end{align}
By definition it is equal to 1 for the diagonal term. In Fig.~\ref{fig:corr_coef_small_subbox_111_SSC}, we plot $r$ for the equilateral triangle configurations. In these plots, one of the equilateral triangles is fixed to be of size 0.077, 0.19, and 0.77 $ \, \mathrm{Mpc}^{-1} \, h $ respectively. Again, we find that the difference between the small box and the subbox setup is small for the global mean case, and it is negligible for the local mean scenario. Furthermore, the supersample covariance prediction gives decent agreement with the subbox results. In Fig.~\ref{fig:corr_coef_small_subbox_221_SSC}, similar results are shown, except for the isosceles triangle configuration $k_1:k_2:k_3 = 2:2:1 $. Here, for the isosceles triangle $k_1':k_2':k_3'=2:2:1$, $k_1'$ is set to 0.077, 0.19, and 0.77 $ \, \mathrm{Mpc}^{-1} \, h $ respectively, and $r$ is plotted as a function of $k_1$. Again the results are similar to the equilateral triangle case.
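The definition above can be implemented in a few lines; given any (positive semi-definite) covariance matrix, the resulting coefficients have unit diagonal and satisfy $|r| \leq 1$ by the Cauchy-Schwarz inequality.

```python
import numpy as np

def corr_coef(C):
    """Cross-correlation coefficient r_ij = C_ij / sqrt(C_ii C_jj)."""
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)
```

In practice $C$ would be the bispectrum covariance estimated from the realizations; here a sample covariance of random draws serves as a stand-in.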
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{CPB_Small_subbox_SSC_111.pdf}
\caption{ The cross covariance between the matter power spectrum and bispectrum at $z=0$. The equilateral triangle configurations are used for the bispectrum and the Fourier mode of the power spectrum $k$ are chosen to be 0.15, 0.40 and 0.80 $ \, \mathrm{Mpc}^{-1} \, h $ (from left to right). In the lower panels, the covariances are normalized with respect to the small box results. The results from the small box (blue, circles) and subbox results for the global mean (green, triangles) and the local mean cases (red, squares) together with the supersample covariance prediction for the global mean (yellow, stars) and local mean (cyan, stars) cases are compared. }
\label{fig:CPB_Small_subbox_SSC_111}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\textwidth]{CPB_Small_subbox_SSC_221.pdf}
\caption{ Similar to Fig.~\ref{fig:CPB_Small_subbox_SSC_111} except for the bispectrum shape $k_1 : k_2 : k_3 = 2:2:1 $. }
\label{fig:CPB_Small_subbox_SSC_221}
\end{center}
\end{figure*}
We now look at the cross covariance between the matter power spectrum and the bispectrum. In Figs.~\ref{fig:CPB_Small_subbox_SSC_111} and \ref{fig:CPB_Small_subbox_SSC_221} we show the cross covariance, $C^{PB}(k ;k_{1},k_{2},k_{3}) $, with the Fourier mode of the power spectrum $k$ fixed to be 0.15, 0.40 and 0.80 $ \, \mathrm{Mpc}^{-1} \, h $ respectively. In Fig.~\ref{fig:CPB_Small_subbox_SSC_111} the bispectrum is chosen to be the equilateral triangle configurations, while they are isosceles triangles $ k_1:k_2:k_3=2:2:1 $ in Fig.~\ref{fig:CPB_Small_subbox_SSC_221}. By comparing with Figs.~\ref{fig:diag_cov_mat_small_subbox_111_SSC} and \ref{fig:diag_cov_mat_small_subbox_221_SSC}, we find that the fractional difference between the small box and subbox setups is larger than for the bispectrum covariance alone. This is mainly because the small-scale non-Gaussian covariance of the bispectrum is bigger than that of the power spectrum, and thus the supersample covariance contribution is relatively small for the bispectrum. The order of magnitude is similar to that for the power spectrum. We also show the supersample covariance prediction Eq.~\eqref{eq:SSC_PB_prediction} with the response functions computed using the halo model prescriptions. Similar to the bispectrum covariance case, the prediction gives good agreement with the numerical results for both the global mean and local mean scenarios. However, we also note that for large $k$ (e.g.~$k=0.8 \, \mathrm{Mpc}^{-1} \, h $) the theory is clearly larger than the measurement for the global mean case.
In the comparison with the numerical results, we have only considered the supersample covariance contribution due to the long-wavelength density perturbations. In Appendix~\ref{sec:SSC_tides}, we work out the tidal perturbation contribution to the supersample covariance. Although the magnitude of the tidal response function and the corresponding variances [$S_{ijmn}$, defined in Eq.~\eqref{eq:Sijmn}] are comparable to their density counterparts, the net tidal supersample covariance contribution is smaller than the density one by a few orders of magnitude for the following reasons. In configuration space, the bispectrum depends only on the shape of the triangle, and not on its orientation. The tidal response function is anisotropic, and so after averaging over the orientations of the triangle its contribution is significantly reduced. See Appendix~\ref{sec:SSC_tides} for more details. However, in redshift space, the bispectrum is anisotropic, and so this contribution could be potentially larger. We leave the thorough investigation of this issue to future work.
Before closing this section, we would like to extrapolate the results here to estimate the relative importance of various covariance contributions for future surveys like Euclid. Here we take the survey volume of Euclid to be equivalent to a cubic box of size 4000 $ \, \mathrm{Mpc} \, h^{-1} $ and the mean redshift to be 1.2. In the perturbative regime, $ [ (\partial B / \partial \delta_{\rm b} )|_{ \delta_{\rm b }=0}]^2 \propto D^8 $ and $ (\partial P / \partial \delta_{\rm b} )|_{ \delta_{\rm b }=0} (\partial B / \partial \delta_{\rm b} )|_{ \delta_{\rm b }=0} \propto D^6 $. On the other hand, the leading perturbative non-Gaussian corrections for the covariance of $B$ and $P$-$B$ scale as $D^8$ and $D^6$ respectively \cite{Chan:2016ehg}. Because both contributions have the same perturbative time dependence, to translate the small box results at $z=0$ to the Euclid setting, we only need to compare $\sigma_W^2(z) $ with $V^{-1}$. Note that the small scale covariance scaling with volume has been checked in \cite{Chan:2016ehg} (see also \cite{Mohammed:2016sre}) using simulations of different sizes, and it was found to be in good agreement with the numerical results. Hence the supersample covariance for $B$ or $P$-$B$ is reduced by a factor of 2320 in the Euclid setting, while the small scale covariance is only suppressed by a factor of 227. The relative importance of the supersample covariance compared to the small scale one would be downgraded by a factor of 10 in Fig.~\ref{fig:diag_cov_mat_small_subbox_111_SSC}, \ref{fig:diag_cov_mat_small_subbox_221_SSC}, \ref{fig:corr_coef_small_subbox_111_SSC}, \ref{fig:corr_coef_small_subbox_221_SSC}, \ref{fig:CPB_Small_subbox_SSC_111}, and \ref{fig:CPB_Small_subbox_SSC_221}.
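The volume scaling quoted above is a quick back-of-the-envelope check. The sketch below reproduces the small-scale covariance suppression from the ratio of survey volumes (the factor of 227 in the text corresponds to this ratio up to rounding); the supersample factor of 2320 additionally requires $\sigma_W^2(z)$, i.e.~the linear power spectrum and growth factor, and is not reproduced here.

```python
# The small-scale (non-Gaussian) covariance of B and P-B scales as 1/V,
# so its suppression in a Euclid-like volume is just the volume ratio.
L_box = 656.25      # Mpc/h, side of the small periodic boxes
L_euclid = 4000.0   # Mpc/h, cubic box matched to the Euclid volume

volume_factor = (L_euclid / L_box)**3
print(f"small-scale covariance suppression: {volume_factor:.0f}")
# -> small-scale covariance suppression: 226
```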
In this section, we have quantified the magnitudes of the supersample covariance contributions by comparing the numerical results obtained with the periodic box and subbox setups. We have also tested the supersample covariance prediction derived in the previous sections and found that it agrees well with the numerical results. Although we have only explicitly shown two types of triangle configurations, the results are qualitatively similar for other configurations. The numerical results and the predictions are validated by their good agreement with each other; in particular, the transients induced by the Zel'dovich approximation initial condition for the bispectrum \cite{McCullagh:2015oga} do not seem to be an issue here.
\section{Conclusions }
\label{sec:conclusions}
In the current and future large-scale structure surveys, the quality of the data is expected to keep on increasing. At the same time, to extract cosmological information from such high-fidelity data, the precision of the theoretical modeling of various systematics also needs to increase. One of the potential systematics is the supersample covariance. In the presence of the window function, long modes with wavelengths larger than the survey window can modulate the small scales and cause a large covariance inside the survey window \cite{Hamilton:2005dx}. The supersample covariance cannot be studied using the standard periodic simulation setup. The window function effectively broadens the wave vectors, while in the standard periodic simulation setup the wave vectors are sharp. This broadening can be captured by dividing a gigantic simulation into many subboxes. In \cite{Takada:2013bfn}, the power spectrum supersample covariance was formulated using the response function approach. The power spectrum supersample covariance has been recognized as an important source of covariance on small scales.
In this paper we studied the supersample covariance contribution to the bispectrum covariance and cross covariance between the power spectrum and the bispectrum. In terms of the response function, we derived the supersample covariance for the bispectrum covariance [Eq.~\eqref{eq:CB_SSC}] and for the cross covariance [Eq.~\eqref{eq:SSC_PB_prediction}]. We also computed the bispectrum response function using the standard perturbation theory [Eq.~\eqref{eq:Bk_response_SPT_global}] and the halo model [Eq.~\eqref{eq:BHM_response}]. Besides the density, we also derived the bispectrum supersample covariance due to the tide [Eq.~\eqref{eq:SSC_tide_appendix}] and the bispectrum response function to the tide [Eq.~\eqref{eq:response_tide}] using SPT. However, we found that the tide contribution to the supersample covariance is a few orders of magnitude smaller than the density one because the bispectrum in configuration space is isotropic while the tide response function is anisotropic.
We quantified the magnitudes of the supersample covariance using numerical measurements with the periodic box and subbox setups. The effects are small for the bispectrum covariance with the global mean, and negligible for the local mean case (Figs.~\ref{fig:diag_cov_mat_small_subbox_111_SSC} -- \ref{fig:corr_coef_small_subbox_221_SSC}). Relative to the small scale covariance, the magnitude of the supersample covariance is roughly an order of magnitude smaller than in the power spectrum case. This is because the small scale covariance for the bispectrum is much more significant than in the power spectrum case, e.g.~by comparison with the Gaussian covariance \cite{Chan:2016ehg}, and thus the supersample covariance contribution is dwarfed. For the cross covariance, the effect is larger, and closer to that of the power spectrum covariance (Figs.~\ref{fig:CPB_Small_subbox_SSC_111} and \ref{fig:CPB_Small_subbox_SSC_221}). Thus in the combined analysis of the power spectrum and the bispectrum, the supersample covariance may not be negligible. However, in galaxy surveys, a local mean is used, and hence the supersample covariance is still a small correction to the total covariance budget. We can also directly use the halo model supersample covariance, because we find that it works reasonably well and the supersample covariance is a small correction anyway.
Ref.~\cite{Chan:2016ehg} found that the small scale non-Gaussian covariance is much more significant for the bispectrum than for the power spectrum, and speculated that the small scale covariance is even more serious for the higher order correlators. In a similar vein, we surmise that the supersample covariance is even more subdominant for the higher order correlators, and hence negligible.
Our work makes it clear that for the bispectrum covariance and the cross covariance, the small scale covariance is the dominant source, at least up to $k \sim 1 \, \mathrm{Mpc}^{-1} \, h $. For the bispectrum this is probably the highest scale we can hope to model. Thankfully, the small scale non-Gaussian covariance can be studied using the standard periodic setup with small box size, which is much more accessible in terms of computational resources. On the other hand, there have been few efforts so far to model the small scale covariance \cite{Sefusatti:2006pa,Chan:2016ehg}. The perturbative approach only improves over the Gaussian covariance in the mildly nonlinear regime \cite{Chan:2016ehg}. To extend the perturbative calculation to higher $k$, one possibility is to model the bispectrum covariance using the halo model. A useful way of organizing the computation is to expand the covariance in terms of the connected correlators \cite{Sefusatti:2006pa,KayoTakadaJain_2013,Chan:2016ehg}, which in turn are computed using the halo model.
\section*{Acknowledgments}
We thank Linda Blot for discussions on the DEUS simulations and the anonymous referee for constructive comments that improved the draft. We are grateful to the members of the DEUS Consortium\footnote{\url{http://www.deus-consortium.org}} for sharing the data with us. K.C.C. acknowledges the support from the Spanish Ministerio de Economia y Competitividad grant ESP2013-48274-C3-1-P and the Juan de la Cierva fellowship. J.N. is supported by Fondecyt Grant 1171466.
\section{Introduction}
This is the last of a series of three papers devoted to studying the differential geometry of surfaces immersed in three-dimensional real vector spaces endowed with a norm, which we call \emph{normed} (=\emph{Minkowski}) \emph{spaces}. In \cite{diffgeom}, the first paper of the series, the core of the theory was developed. Concepts of \emph{Minkowski Gaussian, mean} and \emph{principal curvatures} were introduced there by regarding a normal map based on \emph{Birkhoff orthogonality}. The second paper \cite{diffgeom2} was devoted to exploring the theory from the viewpoint of affine differential geometry. The aim of this third paper is to use the machinery developed previously to investigate some classical topics in our new framework. Now we briefly recall some definitions given in \cite{diffgeom} and \cite{diffgeom2}.\\
We will always work with a surface immersion $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$, where the norm $||\cdot||$ is \emph{admissible}, meaning that its \emph{unit sphere} $\partial B:=\{x\in\mathbb{R}^3:||x|| = 1\}$ has strictly positive Gaussian curvature in the usual Euclidean geometry of $\mathbb{R}^3$ (also, we denote the usual inner product in this space by $\langle\cdot,\cdot\rangle$). The norm $||\cdot||$ induces an orthogonality relation between directions and planes given as follows: we say that a non-zero vector $v \in \mathbb{R}^3$ is \emph{Birkhoff orthogonal} to a plane $P$ (denoted $v \dashv_B P$) if $||v + tw|| \geq ||v||$ for any $t \in \mathbb{R}$ and $w \in P$; see \cite{alonso}. In other words, $v$ is Birkhoff orthogonal to $P$ if $P$ supports the \emph{unit ball} $B:=\{x \in \mathbb{R}^3:||x|| \leq 1\}$ at $v/||v||$. It follows from the admissibility of the norm that this relation is unique both on the left (in the sense of directions) and on the right (in the sense of planes). \\
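For a smooth norm, the plane $P$ with $v \dashv_B P$ is the kernel of the derivative of the norm at $v$, i.e.\ the direction of the plane supporting the unit ball at $v/||v||$. The following sketch (a hypothetical illustration using the admissible $\ell^4$ norm) verifies the defining inequality $||v+tw|| \geq ||v||$ on a sample of directions $w \in P$:

```python
import numpy as np

def lp_norm(x, p=4.0):
    """A smooth admissible norm on R^3 (hypothetical example norm)."""
    return float((np.abs(x)**p).sum()**(1.0 / p))

def supporting_plane(v, p=4.0):
    """Two vectors spanning the plane P with v Birkhoff orthogonal to P.

    P is the kernel of the derivative of the norm at v, i.e. the plane
    supporting the unit ball at v/||v||.
    """
    g = np.sign(v) * np.abs(v)**(p - 1)      # gradient direction of the norm at v
    e = np.eye(3)[np.argmin(np.abs(g))]      # coordinate axis least aligned with g
    w1 = np.cross(g, e)
    w2 = np.cross(g, w1)
    return w1, w2

v = np.array([1.0, 2.0, 0.5])
w1, w2 = supporting_plane(v)
base = lp_norm(v)
# Birkhoff orthogonality: ||v + t*w|| >= ||v|| for all t and all w in P
for t in np.linspace(-1.0, 1.0, 41):
    for w in (w1, w2):
        assert lp_norm(v + t * w) >= base - 1e-12
```

The inequality holds by convexity: the plane through $v$ parallel to $P$ supports the ball of radius $||v||$.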
For a surface immersion $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$, we define the \emph{Birkhoff-Gauss map} $\eta:M\rightarrow\partial B$ as follows: we associate each $p \in M$ to a unit vector $\eta(p)$ such that $\eta(p) \dashv_B T_pM$. Notice that we have two possible choices for each point, and therefore such a map should be only locally defined. However, if the surface is orientable, then the Birkhoff-Gauss map can be defined globally, and hence we will assume this hypothesis throughout the text. As it is proved in \cite{diffgeom}, this defines an \emph{equiaffine transversal vector field} in $M$ (in the sense of \cite{nomizu}), and the associated \emph{Gauss formula} reads
\begin{align*} D_XY = \nabla_XY + h(X,Y)\eta,
\end{align*}
for any vector fields $X,Y \in C^{\infty}(TM)$, with $D$ denoting the standard connection on $\mathbb{R}^3$. The bilinear map $h$ is called the \emph{affine fundamental form}, and in some sense it plays the role of the classical \emph{second fundamental form}. Let $\xi$ denote the Euclidean Gauss map of $M$, and let $u^{-1}$ denote the Euclidean Gauss map of the unit sphere $\partial B$. Up to re-orientation, we clearly have $\eta = u \circ \xi$. Also, the following expression to $h$ is straightforward:
\begin{align}\label{exph} h(X,Y) = \frac{\langle D_XY,\xi\rangle}{\langle\eta,\xi\rangle} = -\frac{\langle Y,d\xi_pX\rangle}{\langle\eta,\xi\rangle} = -\frac{\langle du^{-1}_{\eta(p)}T,d\eta_pX\rangle}{\langle\eta,\xi\rangle},
\end{align}
for any $p \in M$ and $X,Y \in T_pM$. \\
The \emph{Minkowski Gaussian curvature} and the \emph{Minkowski mean curvature} of $M$ at $p$ are defined as $K:=\mathrm{det}(d\eta_p)$ and $H:=\frac{1}{2}\mathrm{tr}(d\eta_p)$, respectively. The \emph{principal curvatures} are the (real) eigenvalues of $d\eta_p$, and their existence is proved in \cite{diffgeom}. The associated eigenvectors are called \emph{principal directions}. The \emph{normal curvature} of $M$ at $p$ in a given direction $V \in T_pM$ was defined in \cite{diffgeom} in terms of planar sections, and can be equivalently defined as
\begin{align*} k_{M,p}(V) := \frac{\langle du^{-1}_{\eta(p)}V,d\eta_pV\rangle}{\langle du^{-1}_{\eta(p)}V,V\rangle}.
\end{align*}
As it is discussed in \cite{diffgeom2}, the normal curvature is closely associated to a Riemannian metric on $M$ called \emph{Dupin metric}. This is the metric whose unit circle, at each $p \in M$, is the (usual) Dupin indicatrix of $T_{\eta(p)}\partial B$. This is simply given by
\begin{align*} \langle X,Y\rangle_p := \langle du^{-1}_{\eta(p)}X,Y\rangle,
\end{align*}
for any $p \in M$ and $X,Y \in T_pM$. Dividing this metric by $\langle\eta(p),\xi(p)\rangle$, we obtain the \emph{weighted Dupin metric}, which will be important for our purposes. \\
We recall that all of the concepts above were defined and studied in the papers \cite{diffgeom} and \cite{diffgeom2}, and the reader is invited to consult them for full acquaintance with the subject. We now briefly describe the structure of the present paper. In Section \ref{minimal} we study \emph{minimal surfaces} in our context, proving that we can re-obtain several characterizations of such surfaces, all of them being analogues of results in the Euclidean subcase. In Section \ref{global} we obtain some global theorems, such as \emph{Hadamard-type theorems}, as immediate consequences of their Euclidean versions. In Section \ref{constantwidth} we prove a result concerning the curvatures of \emph{constant Minkowski width surfaces}, which is also an extension of a known result of classical differential geometry. Finally, Sections \ref{metric} and \ref{control} are devoted to understanding the behavior of the ambient induced metric on $M$. In particular, a version of \emph{Bonnet's classical theorem} is obtained, and we also give an estimate for the \emph{perimeter} of the normed space (in the sense of Sch\"{a}ffer, see \cite{schaffer}). \\
For general references in Minkowski geometry, we refer the reader to \cite{martini2}, \cite{martini1}, and \cite{thompson}. The differential geometry of curves in normed planes was studied in \cite{Ba-Ma-Sho}. Other approaches to the differential geometry of normed spaces can be found in \cite{Bus3}, \cite{Gug2} and \cite{cabezas}. Immersed surfaces with the induced ambient norm are, in particular, Finsler manifolds, and in this regard we refer the reader to \cite{Bus2} and \cite{Bus7}.
\section{Minimal surfaces}\label{minimal}
Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a surface immersed in an admissible Minkowski space. We say that $M$ is a \emph{minimal surface} if its Minkowski mean curvature vanishes everywhere. In the Euclidean subcase, minimal surfaces are characterized in terms of critical points of the area functions of their normal variations. There is also an analogous result when the considered transversal vector field is the affine normal field (see \cite[Chapter III, Section 11]{nomizu}). We will see that the general Minkowski case has a similar behavior when one endows the surface with the \emph{induced area element} $\omega$, namely the $2$-form defined on the tangent bundle $TM$ as
\begin{align*} \omega(X,Y) := \mathrm{det}[X,Y,\eta],
\end{align*}
where $\mathrm{det}$ is the usual determinant in $\mathbb{R}^3$. This $2$-form yields the standard area element if the considered norm is Euclidean. Hence we may define the \emph{area of} $M$ as
\begin{align*} A(M) := \int_M\omega.
\end{align*}
Let now $D \subseteq M$ be a \emph{domain} in $M$, which is an open, connected subset whose boundary is homeomorphic to a circle. Assume that $\bar{D}\subseteq M$, where $\bar{D}$ is the union of $D$ with its boundary. Let $g:\bar{D}\rightarrow\mathbb{R}$ be any smooth function. The \emph{Birkhoff normal variation} of $\bar{D}$ with respect to $g$ is the map $F:\bar{D}\times(-\varepsilon,\varepsilon)\rightarrow(\mathbb{R}^3,||\cdot||)$ given by
\begin{align*} F(p,t) = F_t(p) = p + tg(p)\eta(p),
\end{align*}
where we identify $M$ within $\mathbb{R}^3$ with its image under $f$, as usual. It is clear that this construction yields a family of immersed surfaces parametrized by $t$. We will denote each of these surfaces by $D_t$. Their respective Birkhoff normal vector fields and associated area elements will be denoted by $\eta_t$ and $\omega_t$. The function which associates each $t$ to the area of the surface $\bar{D}_t$ is then given by
\begin{align*} A(t) := \int_{\bar{D}}\omega_t.
\end{align*}
\begin{teo} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be an immersed surface whose Minkowski Gaussian curvature is negative. Then $M$ is a minimal surface if and only if for each domain $D \subseteq M$ and each Birkhoff normal variation of $D$ we have $A'(0) = 0$.
\end{teo}
\begin{proof} Assume that $(x,y)$ are coordinates in $D$ such that their coordinate vector fields $X:=\frac{\partial}{\partial x}$ and $Y:=\frac{\partial}{\partial y}$ are principal directions of $M$ in each point (this is possible since the Minkowski principal curvatures are different at each point). For each $p \in D$ and $t \in (-\varepsilon,\varepsilon)$, the vectors $X^t:=(F_{t})_{*}(X)$ and $Y^t:=(F_{t})_*(Y)$ span the tangent space $T_pF_t(D)$. If $\lambda_1$ and $\lambda_2$ denote the principal curvatures of $M$ at $p$, then we have
\begin{align*}X^t = (1+tg\lambda_1)X + tX(g)\eta \ \ \mathrm{and}\\
Y^t = (1+tg\lambda_2)Y + tY(g)\eta,
\end{align*}
for each $(p,t)\in D\times(-\varepsilon,\varepsilon)$. Therefore, the area function $A(t)$ writes
\begin{align*} A(t) = \int_D\omega_t(X^t,Y^t) \ dxdy = \int_D\mathrm{det}[X^t,Y^t,\eta_t] \ dxdy.
\end{align*}
Now we calculate
\begin{align*} \omega_t(X^t,Y^t) = \mathrm{det}[X^t,Y^t,\eta_t] = (1+tg\lambda_1)(1+tg\lambda_2)\mathrm{det}[X,Y,\eta_t] + tX(g)(1+tg\lambda_2)\mathrm{det}[\eta,Y,\eta_t] + \\ + tY(g)(1+tg\lambda_1)\mathrm{det}[\eta,X,\eta_t],
\end{align*}
where we assume that the basis $\{X,Y,\eta\}$ is positively oriented. For each fixed $p \in D$, the vector field $t \mapsto \eta_t(p)$ describes a curve on $\partial B$. Therefore
\begin{align*} \left.\frac{\partial}{\partial t}\eta_t(p)\right|_{t=0} \in \mathrm{span}\{X(p),Y(p)\} = T_pM\,,
\end{align*}
and hence
\begin{align*}\left. \frac{\partial}{\partial t}\omega_t(X^t,Y^t)\right|_{t=0} = g(\lambda_1+\lambda_2)\mathrm{det}[X,Y,\eta].
\end{align*}
It follows immediately that
\begin{align*} A'(0) = \int_Dg(\lambda_1+\lambda_2)\omega = \int_D2gH\omega,
\end{align*}
where $H$ denotes the Minkowski mean curvature of $M$. If $H = 0$, then we have clearly $A'(0) = 0$ for any domain $D \subseteq M$ and any Birkhoff normal variation of $D$. The converse follows from standard analysis arguments.
\end{proof}
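In the Euclidean subcase, where $\eta$ is the usual unit normal and $H$ the usual mean curvature, the first variation formula $A'(0) = \int_D 2gH\omega$ can be checked in closed form on a sphere. The sketch below compares a numerical derivative of the area under the constant variation $g \equiv 1$ with the integral formula:

```python
import numpy as np

# Euclidean sphere of radius r: for the Euclidean norm, eta is the unit
# normal and H = 1/r, so the first variation formula can be checked in
# closed form with the constant variation g = 1.
r = 2.0
area = lambda t: 4.0 * np.pi * (r + t)**2    # area of the varied sphere F_t(M)

eps = 1e-6
A_prime_0 = (area(eps) - area(-eps)) / (2.0 * eps)   # numerical A'(0)

H = 1.0 / r
formula = 2.0 * H * area(0.0)     # the integral of 2*g*H*omega with g = 1
assert abs(A_prime_0 - formula) < 1e-4
```

Both sides equal $8\pi r$, as expected.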
As in the Euclidean subcase, we can characterize minimal surfaces (at least the ones of negative Minkowski Gaussian curvature) by means of the affine fundamental form (which, as we remember, plays the role of the second fundamental form). This is our next statement.
\begin{prop} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be an immersed surface with Birkhoff-Gauss map $\eta$ and affine fundamental form $h$. Assume that $M$ has negative Minkowski Gaussian curvature $K$. Then $M$ is minimal (in the Minkowski sense) if and only if there exists a function $c:M\rightarrow\mathbb{R}$ such that
\begin{align}\label{hmin} h(d\eta_pX,d\eta_pY) = c(p)\cdot h(X,Y),
\end{align}
for any $p \in M$ and $X,Y \in T_pM$. In this case, $c(p) = -K(p)$ for each $p \in M$.
\end{prop}
\begin{proof} Let $p \in M$. Since $K(p) < 0$, we have that the principal curvatures $\lambda_1,\lambda_2 \in \mathbb{R}$ are different, and then we have associated principal directions $V_1,V_2 \in T_pM$ such that $h(V_1,V_2) = 0$. If $X,Y \in T_pM$, then we can decompose them as
\begin{align*} X = \alpha_1V_1 + \alpha_2V_2 \ \mathrm{and} \\ Y = \beta_1V_1 + \beta_2V_2.
\end{align*}
Therefore, rescaling $V_1$ and $V_2$ in order to have $h(V_1,V_1) = -h(V_2,V_2) = 1$, we have
\begin{align*} h(d\eta_pX,d\eta_pY) = h(\alpha_1\lambda_1V_1+\alpha_2\lambda_2V_2,\beta_1\lambda_1V_1+\beta_2\lambda_2V_2) = \alpha_1\beta_1\lambda_1^2 - \alpha_2\beta_2\lambda_2^2.
\end{align*}
On the other hand, $h(X,Y) = \alpha_1\beta_1 - \alpha_2\beta_2$. Hence we have (\ref{hmin}) for all $X,Y \in T_pM$ if and only if $\lambda_1^2 = \lambda_2^2$. This happens if and only if $\lambda_1 = -\lambda_2$, since the Minkowski Gaussian curvature $K = \lambda_1\lambda_2$ is negative.
\end{proof}
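The computation in the proof can be illustrated with a toy linear-algebra model: in a principal basis with $h = \mathrm{diag}(1,-1)$ and $d\eta_p = \mathrm{diag}(\lambda,-\lambda)$ (so $H = 0$ and $K = -\lambda^2 < 0$), the identity $h(d\eta_pX,d\eta_pY) = -K\cdot h(X,Y)$ holds for all $X,Y$. A minimal numerical sketch:

```python
import numpy as np

lam = 0.7
S = np.diag([lam, -lam])      # d(eta)_p in a principal basis; trace = 0, so H = 0
h = np.diag([1.0, -1.0])      # affine fundamental form, indefinite since K < 0
K = np.linalg.det(S)          # Minkowski Gaussian curvature, K = -lam**2

X = np.array([1.3, -0.4])
Y = np.array([0.2, 2.1])
lhs = (S @ X) @ h @ (S @ Y)   # h(d(eta)X, d(eta)Y)
rhs = -K * (X @ h @ Y)        # -K * h(X, Y)
assert np.isclose(lhs, rhs)
```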
Recall that the \emph{weighted Dupin metric} of an immersion $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ is the metric given by
\begin{align*} b(X,Y) := \frac{\langle du^{-1}_{\eta(p)}X,Y\rangle}{\langle\eta(p),\xi(p)\rangle},
\end{align*}
for each $p \in M$ and $X,Y \in T_pM$. It is an important fact in classical differential geometry that minimal surfaces can be characterized by their Gauss maps being conformal. Next, we prove something similar for Minkowski minimal surfaces, replacing the usual metric by the weighted Dupin metric.
\begin{teo} An immersed surface with negative Minkowski Gaussian curvature is a minimal surface if and only if its Birkhoff-Gauss map is conformal with respect to the weighted Dupin metric (and, clearly, also with respect to the Dupin metric).
\end{teo}
\begin{proof} First, notice that
\begin{align}\label{hb} h(X,Y) = -\frac{\langle du^{-1}_{\eta(p)}Y,d\eta_pX\rangle}{\langle\eta,\xi\rangle} = -b(Y,d\eta_pX).
\end{align}
Then, due to the symmetry of $h$, it follows that $d\eta_p$ is self-adjoint with respect to the weighted Dupin metric for each $p\in M$. Using the equality above and the previous proposition, we get that the equality
\begin{align*} -b(Y,d\eta_pX) = h(X,Y) = -\frac{1}{K(p)}\cdot h(d\eta_pX,d\eta_pY) = \frac{1}{K(p)}\cdot b(d\eta_pY,d\eta_p\circ d\eta_pX)
\end{align*}
holds if and only if $M$ is minimal. Setting $Z = d\eta_pX$, the above becomes
\begin{align*} -b(Y,Z) = \frac{1}{K(p)}\cdot b(d\eta_pY,d\eta_pZ).
\end{align*}
Since $K <0$, we see that $d\eta_p$ is an isomorphism for each $p \in M$. Hence the last equality holds for any $p \in M$ and for any $Y, Z \in T_pM$ if and only if $M$ is minimal.
\end{proof}
In classical differential geometry, minimal surfaces are also characterized as immersions for which the Laplacian of the coordinate functions vanishes. We will prove something similar here. We follow \cite[Section II.6]{nomizu} to define a concept which is analogous to that of the Laplacian for functions defined over $M$. We call it the \emph{b-Laplacian} and denote it by $\Delta_bf$. For this sake (following \cite{diffgeom2} and \cite{nomizu}, and if $\hat{\nabla}$ is the \emph{Levi-Civita connection} of the metric $b$), we define the \emph{b-Hessian} of a function $f \in C^{\infty}(M)$ to be the bilinear map
\begin{align*} \mathrm{hess}_bf(X,Y):= X(Yf) - (\hat{\nabla}_XY)f,
\end{align*}
for any $X,Y \in C^{\infty}(TM)$. Still following \cite{nomizu}, since $b$ is positive definite, the $b$-Laplacian can be defined simply by taking the trace of $\mathrm{hess}_bf$ with respect to $b$. Formally,
\begin{align*} \Delta_b f(p) := \mathrm{tr}_b\left(\mathrm{hess}_bf|_p\right), \ \ p \in M.
\end{align*}
Notice that this is the \emph{Laplace-Beltrami operator} for the Riemannian metric $b$ on $M$. We recall here that the trace with respect to the weighted Dupin metric $b$ is calculated by taking an orthonormal basis for $b$. In the next theorem, we show that (Minkowski) minimal surfaces can be characterized as immersions for which the $b$-Laplacian of the coordinate functions vanishes.
\begin{teo} Let $f=(f_1,f_2,f_3):M\rightarrow(\mathbb{R}^3,||\cdot||)$ be an immersed surface whose Minkowski mean curvature is denoted by $H$. Then $H(p) = 0$ if and only if $\Delta_b f_1(p) = \Delta_b f_2(p) = \Delta_b f_3(p) = 0$. In particular, $M$ is minimal if and only if the $b$-Laplacian of its coordinate functions vanishes at every point.
\end{teo}
\begin{proof} Let $p \in M$, and assume that $(x,y,z)$ are coordinates in $\mathbb{R}^3$ given by $(x,y,z)\mapsto p+ xV_1 + yV_2 + z\eta(p)$, where $V_1$ and $V_2$ are distinct principal directions of $M$ at $p$, normalized in the weighted Dupin metric (this is a \emph{Monge form parametrization}, see \cite{izumiya}). Therefore, $(x,y,g(q))$ is the position vector of $M$ in a neighborhood of $p$, where $q \in M$ is the intersection of the line $t \mapsto p+xV_1+yV_2 + t\eta(p)$ with $M$. Equality (3.4) in \cite{diffgeom2} gives that, at $p$, we have $\mathrm{hess}_bg(X,Y) = -h(X,Y)$ for any $X,Y \in T_pM$. From this and equality (\ref{hb}), and since $p$ is a critical point of $g$, we get
\begin{align*} \Delta_b g(p) = \mathrm{tr}_b(\mathrm{hess}_bg|_p) = \mathrm{tr}_b(-h) = \mathrm{tr}(d\eta_p) = 2H(p).
\end{align*}
Since we clearly have $\Delta_b x(p) = \Delta_b y(p) = 0$, the proof is complete. Notice that we can ``choose'' the coordinates in $\mathbb{R}^3$ because a zero Laplacian remains zero under an affine transformation.
\end{proof}
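For the Euclidean norm the weighted Dupin metric reduces to the induced first fundamental form, so the theorem specializes to the classical fact that the coordinate functions of a minimal surface are harmonic. A symbolic sketch checking this on the catenoid (an assumed example, using SymPy):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
# Catenoid: the classical Euclidean minimal surface.
X = sp.Matrix([sp.cosh(u) * sp.cos(v), sp.cosh(u) * sp.sin(v), u])
Xu, Xv = X.diff(u), X.diff(v)
E = sp.trigsimp(Xu.dot(Xu))
F = sp.trigsimp(Xu.dot(Xv))
G = sp.trigsimp(Xv.dot(Xv))
g = sp.Matrix([[E, F], [F, G]])
ginv, sq = g.inv(), sp.sqrt(g.det())

def laplace_beltrami(f):
    # (1/sqrt(g)) * d_i( sqrt(g) * g^{ij} * d_j f )
    grad = sp.Matrix([f.diff(u), f.diff(v)])
    flux = sq * (ginv * grad)
    return (flux[0].diff(u) + flux[1].diff(v)) / sq

# The three coordinate functions are harmonic: evaluate at sample points
for i in range(3):
    expr = laplace_beltrami(X[i])
    for (u0, v0) in [(0.3, 0.7), (1.1, -0.4)]:
        assert abs(float(expr.subs({u: u0, v: v0}))) < 1e-9
```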
\section{Global theorems}\label{global}
In this section we will prove versions of the Hadamard theorems under suitable hypotheses regarding the Minkowski Gaussian curvature. We also prove that, analogously to the Euclidean subcase, if the Minkowski Gaussian curvature of a (closed) surface vanishes at every point, then this surface must be a plane or a cylinder. As we will see, these theorems come as consequences of their Euclidean ``counterparts''. Throughout this section we assume that, as usual, all the norms involved are admissible. Also, we say that an immersed surface is \emph{topologically closed} if it is closed in the topology derived from the norm fixed in the space (which is, of course, the same as the topology induced by the Euclidean norm). Assuming that the surface is topologically closed is an independent-of-the-norm way to deal with surfaces which are \emph{geodesically complete} (or simply \emph{complete}) in Euclidean differential geometry (see \cite{manfredo} for the definition and for the proof of this implication). In such a geometry, the completeness of the surface is an essential hypothesis for the theorems we aim to extend next. The following proposition is the key for the results of this section (see \cite{diffgeom} for a proof).
\begin{prop}\label{gaussposit} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a surface immersed in an admissible normed space. The signs of the Minkowski and Euclidean Gaussian curvatures are the same at any point of $M$.
\end{prop}
\begin{teo} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a simply connected immersed surface, which is topologically closed. If the Minkowski Gaussian curvature of $M$ is non-positive, then $M$ is diffeomorphic to a plane.
\end{teo}
\begin{proof} The hypothesis on $M$ being closed implies that it is complete in the Euclidean geometry. From Proposition \ref{gaussposit} it follows that the Euclidean Gaussian curvature is non-positive. Hence the result comes as a consequence of the Hadamard theorem in Euclidean geometry (see \cite[Section 5.6 B, Theorem 1]{manfredo}).
\end{proof}
\begin{teo} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a compact, connected immersed surface. If the Minkowski Gaussian curvature of $M$ is positive, then the Birkhoff-Gauss map $\eta:M\rightarrow\partial B$ is a diffeomorphism.
\end{teo}
\begin{proof} Again it follows from Proposition \ref{gaussposit} that the Euclidean Gaussian curvature of $M$ is positive. Therefore, the Euclidean Gauss map $\xi:M\rightarrow \partial B_e$ is a diffeomorphism (see \cite[Section 5.6 B, Theorem 2]{manfredo}). Since the norm is admissible, the Minkowski unit sphere $\partial B$ is itself a compact, connected immersed surface with positive Euclidean Gaussian curvature. It follows that $u^{-1}:\partial B\rightarrow\partial B_e$ is a diffeomorphism. Hence also $\eta = u\circ\xi$ is a diffeomorphism.
\end{proof}
Recall that a \emph{cylinder} is an immersed surface $M$ such that for each point $p \in M$ there is a unique line $r(p) \subseteq M$ through $p$, and if $p \neq q$, then the lines $r(p)$ and $r(q)$ are parallel or coincident.
\begin{teo} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a topologically closed immersed surface whose Minkowski Gaussian curvature is null. Then $M$ is a cylinder or a plane.
\end{teo}
\begin{proof} The proof of this theorem is a consequence of the observation that a principal direction whose associated principal curvature vanishes is a direction along which covariant derivatives are \textbf{always} tangential. Consequently, the property that a principal curvature is zero at a certain point does not depend on the considered metric. Formally, let $X \in T_pM$ be a non-zero vector such that $d\eta_pX = 0$, where $\eta:M\rightarrow\partial B$ is the Birkhoff-Gauss map of $M$, as usual. The existence of such a vector is, in an admissible Minkowski space, equivalent to saying that the Minkowski Gaussian curvature of $M$ at $p$ is null. From (\ref{exph}), it follows that $h(X,Y) = 0$ for any $Y \in T_pM$, and this means that $D_XY$ is always tangential. It follows that $d\xi_pX =0$, and hence the Euclidean Gaussian curvature of $M$ at $p$ is also null. Therefore, the general case reduces to the Euclidean version of the theorem, which is proven in \cite[Section 5.8]{manfredo}.
\end{proof}
\section{Surfaces with constant Minkowski width}\label{constantwidth}
Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a compact, strictly convex immersed surface without boundary. We say that $M$ has \emph{constant Minkowski width} if the (Minkowski) distance between any two parallel supporting hyperplanes of $M$ is the same. This section is devoted to giving a result on the principal curvatures of a surface of constant Minkowski width which is similar to its Euclidean version. In what follows, we denote by $S_p:=\{x \in T_pM:||x|| = 1\}$ the unit circle of $T_pM$.
\begin{teo} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a surface of constant Minkowski width having positive Minkowski Gaussian curvature, and let $p, q \in M$ be any points with parallel tangent planes. Then
\begin{align*} \frac{1}{\max_{X\in S_p}(k_{M,p}(X))} + \frac{1}{\min_{Y\in S_q}(k_{M,q}(Y))} = c,
\end{align*}
where $c \in \mathbb{R}$ is the width of $M$.
\end{teo}
\begin{proof} Notice first that since $M$ is strictly convex, we can define a bijective mapping $g:M\rightarrow M$ which associates each $p \in M$ to the point $g(p) \in M$ such that $p$ and $g(p)$ have parallel tangent planes. Since $g\circ g = \mathrm{Id}|_M$, it is clear that $g$ is a diffeomorphism whose differential map is always an isomorphism. \\
Let $\eta:M\rightarrow\partial B$ be the outward pointing Birkhoff-Gauss map. By definition, we have that $\eta(p) = -\eta(g(p))$ for each $p \in M$. Our next step is to prove that the segment joining $p$ and $g(p)$ lies in the direction of $\eta(p)$. To do so, for each $p \in M$ let $h(p) \in g(p)\oplus T_{g(p)}M$ be such that $p - c\eta(p) = h(p)$, and let $w(p)$ be such that $g(p) + w(p) = h(p)$. Differentiating, we have
\begin{align*} X - cD_X\eta = D_Xg + D_Xw,
\end{align*}
for any $X \in T_pM$. It follows that $D_Xw$ is tangential for each $X \in T_pM$. Therefore, denoting the Euclidean Gauss map of $M$ by $\xi$, we have
\begin{align*} 0 = X\langle w,\xi\rangle = \langle D_Xw,\xi\rangle + \langle w,D_X\xi\rangle = \langle w,D_X\xi\rangle
\end{align*}
for each $X \in T_pM$. Since the Minkowski Gaussian curvature of $M$ is positive, it follows that the Euclidean Gaussian curvature of $M$ is also positive, and therefore $X\mapsto D_X\xi = d\xi_pX$ is an isomorphism. Then we have that $w = 0$, and we get the equality
\begin{align*} p - c\eta(p) = g(p), \ \ p \in M.
\end{align*}
Similarly, we have the equality $p + c\eta(g(p)) = g(p)$. Let $V_1,V_2 \in T_pM$ be principal directions of $M$ at $p$, associated to the principal curvatures $\lambda_1,\lambda_2 \in \mathbb{R}$, respectively. Differentiating the first equality with respect to $V_1$ and $V_2$, we have
\begin{align}\label{cw1} \begin{split} (1-c\lambda_1)V_1 = dg_pV_1 \ \ \mathrm{and}\\
(1-c\lambda_2)V_2 = dg_pV_2, \end{split}
\end{align}
respectively. Differentiating the second equality with respect to a vector $X \in T_pM$, we obtain $X + cd\eta_{g(p)}\circ dg_pX = dg_pX$. Let $W_1,W_2 \in T_{g(p)}M$ be the principal directions of $M$ at $g(p)$ associated to the principal curvatures $\mu_1,\mu_2 \in \mathbb{R}$, respectively. Substituting $dg^{-1}_{g(p)}W_1$ and $dg^{-1}_{g(p)}W_2$ for $X$, we get
\begin{align*} dg^{-1}_{g(p)}W_1 + c\mu_1W_1 = W_1 \ \ \mathrm{and}\\
dg^{-1}_{g(p)}W_2 + c\mu_2W_2 = W_2.
\end{align*}
Applying $dg_p$ on both sides, we have
\begin{align}\label{cw2} \begin{split} W_1 = (1-c\mu_1)dg_pW_1 \ \ \mathrm{and} \\
W_2 = (1-c\mu_2)dg_pW_2. \end{split}
\end{align}
Writing $W_1$ and $W_2$ in terms of $V_1$ and $V_2$ and using (\ref{cw1}) and (\ref{cw2}), we obtain immediately
\begin{align*} (1-c\mu_1)(1-c\lambda_1) = 1 \ \ \mathrm{or} \ \ (1-c\mu_1)(1-c\lambda_2) = 1 \ \ \mathrm{and} \\
(1-c\mu_2)(1-c\lambda_1) = 1 \ \ \mathrm{or} \ \ (1-c\mu_2)(1-c\lambda_2) = 1.
\end{align*}
Notice that in both lines at least one of the equalities is true, since $W_1$ and $W_2$ are non-zero vectors. Now one sees that if $\lambda_1 = \lambda_2$ or $\mu_1 = \mu_2$, then the desired relation follows straightforwardly (each equality implies the other). Thus, assume that $\lambda_1 > \lambda_2$ and $\mu_1 > \mu_2$. Then, if $(1-c\mu_1)(1-c\lambda_1) = 1$, we must also have $(1-c\mu_2)(1-c\lambda_2) = 1$, which is a contradiction. It follows that $(1-c\mu_1)(1-c\lambda_2) = 1$, but this equality reads
\begin{align*} \frac{1}{\mu_1} + \frac{1}{\lambda_2} = c,
\end{align*}
which is the desired relation. Observe that the argument is symmetric: we have the same equality changing $\mu_1$ and $\lambda_2$ by $\mu_2$ and $\lambda_1$, respectively.
\end{proof}
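The relation $1/\mu_1 + 1/\lambda_2 = c$ is the surface analogue of a classical planar fact: for a smooth Euclidean curve of constant width $c$, the radii of curvature at opposite points sum to $c$. A minimal numerical sketch using the standard support-function construction $h(t) = a + b\cos(3t)$ (an assumed example, with $a > 8b$ so that the curve is convex):

```python
import numpy as np

# Support function h(t) = a + b*cos(3t) of a smooth planar curve of
# constant width c = 2a (since h(t) + h(t + pi) = 2a).
a, b = 1.0, 0.1                    # a > 8b guarantees positive curvature
c = 2 * a

def rho(t):
    """Radius of curvature rho = h + h'' for h(t) = a + b*cos(3t)."""
    return (a + b * np.cos(3 * t)) + (-9 * b * np.cos(3 * t))

t = np.linspace(0.0, 2 * np.pi, 200)
assert rho(t).min() > 0                          # convexity
assert np.allclose(rho(t) + rho(t + np.pi), c)   # 1/k(t) + 1/k(t+pi) = c
```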
Recall that a point $p \in M$ is said to be \emph{umbilic} if the normal curvature $k_{M,p}$ is constant over all directions of $T_pM$. For a given strictly convex surface, we say that two points with parallel tangent planes are \emph{opposite points}. As an immediate consequence of the previous theorem, we have the following corollary.
\begin{coro} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a strictly convex surface of constant Minkowski width. If $p \in M$ is an umbilic point, then its opposite point is also umbilic. Moreover, if the global maximum value of the map $p \mapsto \mathrm{max}_{V\in S_p}\left(k_{M,p}(V)\right)$ is attained at an umbilic point, then $M$ is a Minkowski sphere. The same holds for the global minimum value of the map $p\mapsto \mathrm{min}_{V\in S_p}\left(k_{M,p}(V)\right)$.
\end{coro}
\begin{proof} We prove only the second claim, since the first one is immediate, and the third one is analogous. Suppose that
\begin{align*} \lambda:=\max_{p\in M}\left(\max_{V\in S_p}\Big(k_{M,p}(V)\Big)\right)
\end{align*}
is attained at an umbilic point $p \in M$. Thus, if $c$ is the width of $M$, we have $\lambda = 2/c$. Assume that there exists a point $q \in M$ which is not umbilic, and let $\lambda_1 > \lambda_2$ be its principal curvatures. Let $\bar{q}$ be the opposite point to $q$, and let $\mu_1 > \mu_2$ be its principal curvatures. Since $\lambda \geq \lambda_1$, we get
\begin{align*} c = \frac{1}{\lambda_1} + \frac{1}{\mu_2} \geq \frac{c}{2} + \frac{1}{\mu_2},
\end{align*}
and it follows that $\mu_2 \geq 2/c$. Hence $\mu_1 > 2/c = \lambda$, and this is a contradiction.
\end{proof}
\section{The induced metric}\label{metric}
We want to study how the ambient metric is inherited by a surface immersed in a Minkowski space. In classical differential geometry this is mainly done via the Hopf-Rinow theorem, but in that context the arguments depend heavily on the fact that \emph{geodesics} locally minimize lengths (see \cite{manfredo}). Since this cannot be directly ``translated'' into the language of normed spaces, we adopt the viewpoint presented in \cite{burago}, namely regarding the surface as a length space. The arguments in this section are somewhat standard in Finsler geometry, but some proofs are made easier in our context, since here we can use a topologically equivalent Euclidean structure. \\
Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be a connected immersed surface, and assume that $\sigma:[a,b]\rightarrow M$ is a piecewise smooth curve on $M$. The \emph{Minkowski length} $l(\sigma)$ of $\sigma$ is naturally defined as
\begin{align*} l\left(\sigma|_{[a,b]}\right):= l(\sigma) := \int_a^b||\sigma'(t)||dt.
\end{align*}
This definition endows $M$ with a \emph{length structure} (in the sense of \cite{burago}). As usual, we define a metric in $M$ as
\begin{align*} d(p,q) := \inf_{\sigma}l(\sigma),
\end{align*}
where $p,q \in M$ and the infimum is taken over all piecewise smooth curves $\sigma:[a,b]\rightarrow M$ connecting $p$ and $q$. It is easy to see that $d:M\times M\rightarrow\mathbb{R}$ defined this way is indeed a metric in the usual sense, and we call it the \emph{induced Minkowski metric} (or \emph{distance}). Now we will briefly explore the topology induced by this metric. Our main objective in this section is to determine whether any two points in $M$ can be joined by a piecewise smooth curve whose length equals the distance between them.
\begin{prop}\label{topM} Assume that $M$ is closed with respect to the topology induced by $\mathbb{R}^3$. Then $(M,d)$ is a complete metric space. Moreover, $(M,d)$ is locally compact.
\end{prop}
\begin{proof} In our context, this is slightly easier than in general Finsler manifolds. The reason is that we can just compare the Minkowski metric in $M$ with an auxiliary usual Euclidean metric. Let $||\cdot||_e:=\sqrt{\langle\cdot,\cdot\rangle}$ denote the Euclidean norm. Therefore, the Euclidean length $l_e(\sigma)$ of a curve $\sigma:[a,b]\rightarrow M$ is given by
\begin{align*} l_e(\sigma) = \int_a^b||\sigma'(t)||_edt.
\end{align*}
This length structure induces a metric $d_e$ defined in the same way as $d$. Since any two norms on a finite-dimensional vector space are equivalent, we may fix a constant $c > 0$ such that
\begin{align*} \frac{1}{c}||\cdot||_e \leq ||\cdot|| \leq c||\cdot||_e.
\end{align*}
Thus, the same inequality holds for the Minkowski and the Euclidean lengths on $M$. Consequently, we have that the metrics $d$ and $d_e$ are equivalent:
\begin{align*} \frac{1}{c}d_e(\cdot,\cdot) \leq d(\cdot,\cdot) \leq cd_e(\cdot,\cdot).
\end{align*}
It follows that the topology induced by $d$ is the same as the topology induced by $d_e$, and therefore we can use the known results for the Euclidean subcase. Our result follows from the fact that if $M$ is closed in the topology of $\mathbb{R}^3$, then it is \emph{geodesically complete}, and hence complete as a metric space (see \cite[Chapter 5]{manfredo} and \cite[Chapter VII]{manfredo2}). \\
The fact that $(M,d_e)$ is locally compact comes from the observation that, for each $p \in M$, the \emph{exponential map} $\exp_p:T_pM\rightarrow M$ is a diffeomorphism in a neighborhood of $0 \in T_pM$. Again we refer the reader to \cite{manfredo} for further details.
\end{proof}
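The norm-equivalence step of the proof can be illustrated concretely. Taking the maximum norm as a stand-in for the Minkowski norm (our illustrative choice, not the paper's), one has $\|x\|_\infty \le \|x\|_e \le \sqrt{3}\,\|x\|_\infty$ on $\mathbb{R}^3$, so $c=\sqrt{3}$ works:

```python
import math
import random

# Empirical check of the equivalence (1/c)*||x||_e <= ||x|| <= c*||x||_e
# for the maximum norm on R^3, where c = sqrt(3) is known to work.
def norm_e(x):
    return math.sqrt(sum(t * t for t in x))

def norm_inf(x):
    return max(abs(t) for t in x)

random.seed(0)
c = math.sqrt(3)
ok = True
for _ in range(10_000):
    x = [random.uniform(-1, 1) for _ in range(3)]
    ne = norm_e(x)
    if ne > 1e-9:
        r = norm_inf(x) / ne
        ok = ok and (1 / c - 1e-12 <= r <= c + 1e-12)
print(ok)  # True
```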
\begin{remark} Notice that the distance associated to the length structure induced by the Dupin metric determines on $M$ the same topology as the Euclidean and Minkowski distances. To verify this, one analogously bounds the Dupin norm in terms of the Euclidean norm, using the extremal values of the operator norm of $du^{-1}_q$ as $q$ varies over the (compact) unit circle $\partial B$. \\
\end{remark}
Combining Proposition \ref{topM} with Theorem 2.5.23 in \cite{burago}, we have immediately the main result of this section.
\begin{teo} Let $f:M\rightarrow(\mathbb{R}^3,||\cdot||)$ be an immersed surface which is closed in the topology induced by the ambient space. Then, for any $p,q \in M$, there exists a curve $\gamma:[a,b]\rightarrow M$ joining $p$ and $q$ such that $l(\gamma) = d(p,q)$.
\end{teo}
From now on, such minimizing curves will be called \emph{Minkowski geodesics}, or simply \emph{geodesics}. As a matter of fact, Minkowski geodesics are smooth curves. For a proof, we refer the reader to \cite[Chapter 5]{shen}. There, the minimizing curves (or \emph{shortest paths}) are obtained as the \emph{trajectories} of the \emph{Finsler spray} (see the mentioned reference for precise definitions). It seems to be difficult to find further ``good'' characterizations of the geodesics in our context. However, we can exhibit a family of Finsler metrics on the usual $2$-sphere $S^2$ for which we can guarantee the existence of closed geodesics (the problem of finding closed geodesics in Riemannian and Finsler manifolds is a very active topic of research, see, e.g., \cite{long}). \\
It is easy to see that the unit sphere $\partial B$ of a normed space $(\mathbb{R}^3,||\cdot||)$ has infinitely many closed geodesics (in the induced ambient norm). Namely, any geodesic connecting two antipodal points must close up, since the symmetry of the norm guarantees that the antipodal curve is also a geodesic. Therefore, intuitively, if we can isometrically deform $\partial B$ into $S^2$ endowed with a Finsler metric $F$, say, then $(S^2,F)$ has infinitely many closed geodesics. We formalize this idea as follows. We say that a Finsler metric $F$ on $S^2$ is \emph{of immersion type} if the following holds: there exists a smooth and strictly convex body $K\subseteq\mathbb{R}^3$ (in the sense that its Euclidean Gaussian curvature is strictly positive) and a diffeomorphism $u:\partial K\rightarrow S^2$ such that
\begin{align*} F(u(x),du_xv) = ||v||,
\end{align*}
for any $x \in \partial K$ and $v \in T_x\partial K$, where $||\cdot||$ is the norm in $\mathbb{R}^3$ inherited from $K$ by the \emph{Minkowski functional} (see \cite{thompson}). In other words, a Finsler metric $F$ on $S^2$ is of immersion type if $(S^2,F)$ is (globally) isometric to the unit sphere of some admissible norm $||\cdot||$ on $\mathbb{R}^3$.
\begin{teo} Let $F$ be a Finsler metric on $S^2$. If $F$ is of immersion type, then $(S^2,F)$ has infinitely many closed geodesics.
\end{teo}
\begin{proof} Let $\gamma:S^1\rightarrow\partial K$ be a closed geodesic of $\partial K$ (with respect to the induced metric $||\cdot||$). By definition, we have that the diffeomorphism $u:\partial K\rightarrow S^2$ is an isometry. Hence $u\circ\gamma:S^1\rightarrow S^2$ is a closed geodesic of $(S^2,F)$. Since there are infinitely many closed geodesics in $\partial K$, we have the result.
\end{proof}
\section{Estimates for perimeter and diameter}\label{control}
This section is devoted to finding bounds on the Minkowski Gaussian curvature in terms of the Euclidean Gaussian curvature, with the aim of estimating the \emph{diameter} of a surface under certain hypotheses. As a consequence, we give an upper bound for the \emph{perimeter} of $(\mathbb{R}^3,||\cdot||)$ (in the sense of Sch\"{a}ffer, see \cite{schaffer}). In what follows, $K(p)$ and $K_e(p)$ denote the Minkowski and Euclidean Gaussian curvatures of a surface $M$ at a point $p \in M$, respectively. Also, $K_{\partial B}(q)$ denotes the Euclidean Gaussian curvature of the unit sphere $\partial B$ at a point $q \in \partial B$. As usual, we define the \emph{diameter} of $M$ to be the number $\mathrm{diam}(M):=\sup_{p,q\in M}d(p,q)$, where $d$ is the Minkowski metric of $M$.
\begin{lemma}\label{boundgauss} For each $p \in M$, we have the bounds
\begin{align*} mK(p) \leq K_e(p) \leq \bar{m}K(p),
\end{align*}
where $m = \inf_{q\in\partial B}K_{\partial B}(q)$ and $\bar{m} = \sup_{q\in\partial B}K_{\partial B}(q)$.
\end{lemma}
\begin{proof} Recall that $\xi = u^{-1}\circ\eta$. For each $p \in M$, we have
\begin{align*} K_e(p) = \mathrm{det}\left(d\xi_p\right) = \mathrm{det}\left(du^{-1}_{\eta(p)}\right)\cdot\mathrm{det}\left(d\eta_p\right) = K_{\partial B}(\eta(p))\cdot K(p).
\end{align*}
The desired bounds follow immediately.
\end{proof}
\begin{remark} Notice that, since the norm is assumed to be admissible and $\partial B$ is compact, it follows that $0 < m,\bar{m} < \infty$. \\
\end{remark}
Now we use this to estimate the diameter of a surface whose Minkowski Gaussian curvature is bounded from below by a positive constant. The estimate is optimal in the sense that for the Euclidean case we just re-obtain Bonnet's classical theorem (cf. \cite{manfredo}).
\begin{teo}\label{bonnet} Let $M$ be a closed surface whose Minkowski Gaussian curvature satisfies $K \geq \varepsilon > 0$. Then the diameter of $M$ (in the induced ambient metric) has the upper bound
\begin{align*} \mathrm{diam}(M) \leq \frac{\pi}{c\sqrt{m\varepsilon}},
\end{align*}
where
\begin{align*}c = \inf_{v\in \partial B}\frac{||v||_e}{||v||} \ \ (> 0),
\end{align*}
and $m\in\mathbb{R}$ is defined as in Lemma \ref{boundgauss}. In particular, $M$ is compact.
\end{teo}
\begin{proof} If $K \geq \varepsilon > 0$, then we have $K_e \geq m\varepsilon > 0$. Therefore, by Bonnet's theorem from classical differential geometry, it follows that
\begin{align*} \mathrm{diam}_e(M) \leq \frac{\pi}{\sqrt{m\varepsilon}},
\end{align*}
where $\mathrm{diam}_e(M)$ denotes the diameter of $M$ in the Euclidean metric. From the definition of the constant $c$, we have $c||v|| \leq ||v||_e$ for any $v \in \mathbb{R}^3$. It follows immediately that
\begin{align*}\mathrm{diam}(M) \leq \frac{1}{c}\cdot\mathrm{diam}_e(M) \leq \frac{\pi}{c\sqrt{m\varepsilon}},
\end{align*}
and the desired bound follows.
\end{proof}
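For concreteness, the bound is easy to evaluate; in the Euclidean special case $c=1$, $m=1$ it reduces to Bonnet's bound $\pi/\sqrt{\varepsilon}$. A minimal numeric sketch (the values are illustrative):

```python
import math

# Diameter bound of the theorem: diam(M) <= pi / (c * sqrt(m * eps)).
def diameter_bound(c, m, eps):
    return math.pi / (c * math.sqrt(m * eps))

# Euclidean special case (c = 1, m = 1) with K >= 4: the bound is pi/2.
print(diameter_bound(1.0, 1.0, 4.0))  # 1.5707963267948966
```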
\begin{remark} The compactness of $M$ under the hypothesis of Theorem \ref{bonnet} was already proved in \cite{diffgeom}. The new result here is the bound for the diameter of the surface. \\
\end{remark}
The \emph{perimeter} of a normed space is defined to be twice the supremum of the induced Minkowski distances between antipodal points of its unit sphere. We can use Theorem \ref{bonnet} to provide an upper bound for the perimeter of a normed space which only depends on the Euclidean Gaussian curvature of $\partial B$. \\
Assume that the auxiliary Euclidean structure in $\mathbb{R}^3$ is re-scaled in such a way that the Euclidean unit sphere bounds the largest Euclidean (closed) ball contained in the Minkowski (closed) ball $B$. This way, the constant $c \in \mathbb{R}$ defined in Theorem \ref{bonnet} becomes $1$ (indeed, it is attained at the touching points of $\partial B$ and $\partial B_e$).
\begin{teo} Let $\rho(\partial B)$ denote the perimeter of a normed space $(\mathbb{R}^3,||\cdot||)$, which is assumed to be smooth and admissible. Then we have the inequality
\begin{align*} \rho(\partial B) \leq \frac{2\pi}{\sqrt{m}},
\end{align*}
where $m = \inf_{q\in\partial B}K_{\partial B}(q)$, as usual.
\end{teo}
\begin{proof} The Minkowski Gaussian curvature of $\partial B$ equals $1$ (cf. \cite{diffgeom}). Therefore, assuming that the auxiliary Euclidean structure is re-scaled as described above and applying Theorem \ref{bonnet}, we get
\begin{align*} \mathrm{diam}(\partial B) \leq \frac{\pi}{\sqrt{m}}.
\end{align*}
Since we obviously have $\rho(\partial B) \leq 2\mathrm{diam}(\partial B)$, the result follows.
\end{proof}
\section{Introduction}
Hardware security has become an important aspect of the modern Integrated Circuit (IC) design industry because of the global supply chain business model. Identifying and authenticating each fabricated component of a chip is a challenging task \cite{Hussain14}. A Physical Unclonable Function (PUF) is a promising security primitive whose behavior, or Challenge Response Pair (CRP) \cite{Roel2010}, is uniquely defined and hard to predict or replicate. A PUF can enable low-overhead hardware identification, tracing, and authentication throughout the global manufacturing chain.
Silicon delay based strong PUFs have been studied intensively since their first appearance in \cite{Blaise02} because of their low implementation cost and large CRP space compared with a weak PUF \cite{Herder2014}. However, there are still design challenges that keep a strong PUF from widespread practical use. One of the major design challenges for a silicon delay based PUF is the strict symmetric delay path layout requirement. The wire delays of the competing paths must be designed and matched carefully to avoid biased responses; otherwise low inter-chip uniqueness would make the PUF unusable \cite{Daihyun2005,Sahoo2015}. In addition to asymmetric routing, another source of biased responses for a silicon based PUF is systematic process variation, which can also degrade the quality of a PUF, such as its uniqueness or unpredictability. Finally, the metastability of the arbiter circuit in an Arbiter PUF can cause unstable PUF responses, making a portion of the CRPs unusable due to their instability \cite{PotkonjakM14}.
\subsection{Asymmetric Path Delay Routing} \label{section:r_works}
For a delay based PUF, the randomness should be contributed only by the subtle variations between devices, so biased delay differences due to asymmetric routing are detrimental to delay based PUFs, and their impact should be eliminated. However, precise control of the routing can be a difficult and time consuming task.
An implementation of an Arbiter PUF on a Field Programmable Gate Array (FPGA) is considered much more difficult than an RO PUF because the connections to the arbiter circuit must also be symmetric \cite{Prasad2016}, and performing completely symmetric routing is physically infeasible in most cases \cite{Sahoo2015}, resulting in a small inter-Fractional Hamming Distance (FHD) for an Arbiter PUF \cite{maiti2013}. One of the most common solutions to asymmetric routing is to use hard-macros in FPGA designs \cite{Maiti2009,Chongyan2015}, but it is not effective for an Arbiter PUF, and some less commonly used features of the FPGA would be required \cite{Morozov10}. Other approaches try to extract randomness by XORing the outputs of multiple Arbiter PUFs at the cost of large hardware overhead and lower stability \cite{Machida2015}. In \cite{Kodýtek15}, the authors proposed using `middle' bits instead of the Most Significant Bit (MSB) as the RO PUF response measurement. The measurement can effectively eliminate the biased responses, but an efficient way of predicting the inspection bit is not described, and the presented RO PUF is not a strong PUF. An RTL-based PUF bit generation unit was proposed in \cite{Anderson2010}, but to the best of our knowledge, a strong PUF that can be implemented efficiently without any layout constraints has not yet been proposed.
\subsection{Systematic Process Variation}
The existence of systematic process variation can degrade the quality of silicon based PUFs because local randomness should be the only entropy source of a delay based PUF \cite{Chi-En13}. The effect of systematic process variation is similar to that of a biased wire delay between the two delay paths, which can likewise damage the uniqueness of the PUF. Another possible vulnerability caused by systematic variation is the process side channel attack described in \cite{Wang16}. Due to intra-wafer systematic variation \cite{Lerong11}, PUFs fabricated at the same region on different wafers can have similar systematic behavior, which can be exploited as a process side channel.
To account for systematic variations, a compensation technique is proposed in \cite{Maiti2009}, which requires careful design decisions to compare RO pairs that are physically placed close to each other. In \cite{Chi-En13}, the systematic variation is modeled and subtracted from the PUF response to distill true randomness, at the cost of model calculation. Similarly, in \cite{Feiten15}, the averaged RO frequency is subtracted from the original frequency, where the multiple measurements of each RO can lead to large latency overhead. In \cite{Zhang15}, a method is proposed to extract local random process variation from total variation; however, a second order difference calculation is needed, and the hard-macro technique must be applied to construct symmetric delay paths.
\subsection{Metastability of the Arbiter Circuit}
The idea of an Arbiter PUF is to introduce a race condition between two paths; an arbiter circuit decides which of the two signals arrived first. The arbiter circuit is usually a D flip-flop or an SR latch. If the two signals arrive at the arbiter within a short time, the arbiter circuit may enter a metastable state due to a setup time violation \cite{Portmann95}. Once the arbiter circuit is in a metastable state, the response becomes unstable. To eliminate the inconsistency caused by metastability of the arbiter circuit, existing approaches use majority voting or choose paths whose delay difference is larger than the metastable window $\delta$, at the cost of CRP characterization and discarding the unstable CRPs \cite{PotkonjakM14}.
\subsection{Our Contributions}
In this paper, we propose the physical implementation bias agnostic (UNBIAS) PUF, which is immune to physical implementation bias. The contributions of this paper include:
\begin{itemize}
\item We propose the first strong UNBIAS PUF that can be implemented purely in RTL without imposing any physical routing constraints.
\item Efficient inspection bit selection strategies based on intra-/inter-FHD prediction models are proposed and verified on the strong UNBIAS PUF.
\end{itemize}
\section{Proposed Strong UNBIAS PUF} \label{sec:bias_puf}
The proposed strong UNBIAS PUF compares two delay paths to generate PUF responses. Similar to an Arbiter PUF, each bit of the challenge of the UNBIAS PUF specifies the path configuration of the delay path. As shown in Figure \ref{figure:biaspuf}, the challenge bits c1 and c2 specify the path configurations, and a one-bit response is extracted from the difference register, which can be several bits long. Once a challenge is given, a signal is applied at Trigger. Each Clock counter begins to count the number of clock cycles of the system clock (CLK) when the signal from Trigger propagates to the START input of the counter, and stops counting when the signal from Trigger propagates to the STOP input of the counter. For each challenge, the difference value of the two Clock counters is stored in the difference register for further response extraction, which is described in detail in Section \ref{sec:measurement}.
\begin{figure}[htb]
\centering
\includegraphics[width=3.3in]{./pics/RSAPUF.jpg}
\caption{The proposed strong UNBIAS PUF. The Clock counter starts counting clock cycles of the system clock (CLK) when START arrives and stops when STOP arrives. The difference of two Clock counters are stored in the difference register for further response extraction.}
\label{figure:biaspuf}
\end{figure}
The purpose of the ROs inserted between path configurations is to increase the path delay so that it takes multiple clock cycles for the signal to propagate and stop the clock counter. As shown in Figure \ref{figure:ro_delay}, each RO is associated with an RO counter that counts the number of oscillations of the RO. The RO counter starts counting when the signal from its previous path configuration arrives, and propagates the signal to the next path configuration only when the count reaches a certain threshold. All the ROs are composed of the same number of inverters, and no special configuration or layout constraints are needed.
\begin{figure}[htb]
\centering
\includegraphics[width=3in]{./pics/RO_delay.jpg}
\caption{ROs are inserted between path configurations to increase the path delay for better stability. The signal from the previous path configuration is propagated only when the count of the RO counter reaches a certain threshold.}
\label{figure:ro_delay}
\end{figure}
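The counting mechanism can be sketched with a simple behavioral model. This is our illustration, not the RTL: the RO period and the variation magnitudes are assumptions, while the 50 MHz clock, 10 stages, and the 50-thousand threshold follow the experimental setup described later.

```python
import random

# Behavioral sketch of the UNBIAS PUF counting mechanism. Each path is a
# chain of stages; every stage holds the signal for THRESHOLD oscillations
# of its RO, whose period deviates slightly due to process variation. The
# Clock counter value is the total path delay measured in CLK cycles.
CLK_PERIOD = 20e-9      # 50 MHz system clock
RO_PERIOD = 2e-9        # hypothetical nominal RO period
THRESHOLD = 50_000      # RO counter threshold per stage

def path_delay(stage_variations):
    return sum(THRESHOLD * (RO_PERIOD + dv) for dv in stage_variations)

def difference_value(rng, n_stages=10, sigma=1e-13):
    top = [rng.gauss(0.0, sigma) for _ in range(n_stages)]
    bottom = [rng.gauss(0.0, sigma) for _ in range(n_stages)]
    count = lambda d: int(d / CLK_PERIOD)
    return count(path_delay(top)) - count(path_delay(bottom))

rng = random.Random(1)
diffs = [difference_value(rng) for _ in range(5)]
print(diffs)  # small signed counts: unbiased paths differ by a few cycles
```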
Unlike the conventional Arbiter PUF, the strong UNBIAS PUF has no metastability issues caused by a D flip-flop or a latch. The delay difference of the two paths is transformed into counter values of the system clock. By judiciously extracting the response from the difference register, the physical implementation bias can be effectively mitigated; therefore the UNBIAS PUF can be implemented purely in RTL without any routing or layout constraints. Details of the response extraction are described in Section \ref{sec:measurement}.
\section{Bias-Immune Response Extraction} \label{sec:measurement}
\subsection{Inspection Bit on Unbiased/Biased Paths} \label{sec:ins}
In this section we describe how different selections of the inspection bit change the intra- and inter-FHD. Figure \ref{figure:ud} shows an example of a distribution of values from the difference registers of symmetrically routed UNBIAS PUFs. The length of the difference register is 22 bits, so the range of the register value is between $-2^{21}$ and $2^{21}-1$ in 2's-complement representation. The large inter-chip measurement curve gives the distribution of the values across all PUFs. Since the PUF is unbiased, roughly half of the difference values are greater than zero due to random local process variation, so the inter-FHD of the UNBIAS PUFs is close to 50\%. In this case, the inspection bit is simply the MSB, which divides the range of the 22-bit difference value into two groups, $bin\_1$ and $bin\_0$. All measurements that fall into $bin\_1$ on the left output a 1; the others output a 0. The small intra-chip measurement curve gives the distribution of multiple measurements of the PUF on the same chip. Due to noise, the difference values can differ, so the intra-FHD of the difference register may not be a perfect 0\%.
\begin{figure}[htb]
\centering
\includegraphics[width=2.8in]{./pics/unbiased_distribution.jpg}
\caption{For a symmetrically routed PUF, the inter-FHD would be close to 50\%. The intra-FHD may not be zero due to measurement noise.}
\label{figure:ud}
\end{figure}
Even though a symmetric UNBIAS PUF layout is preferable, it is difficult and costly to achieve such a requirement, as described in Section \ref{section:r_works}. In practice, if no layout constraints are imposed, the measurement distribution of the difference register can look as in Figure \ref{figure:bd}, where most of the difference values across chips are greater than zero. In this case, using the MSB as the inspection bit would cause a low inter-FHD of the PUFs because most MSBs are 0's.
\begin{figure}[htb]
\centering
\includegraphics[width=2.9in]{./pics/biased_distribution.jpg}
\caption{For a biased PUF, most of the difference values across all chips could be greater than zero, causing a low inter-FHD if the MSB is the inspection bit.}
\label{figure:bd}
\end{figure}
For the same biased distribution shown in Figure \ref{figure:bd}, if the $i^{th}$ bit is used as the inspection bit of the difference register as Figure \ref{figure:bd_bin} shows, the range of the 22-bit difference value is divided into multiple bins with width $2^{i}$, where the output of the measurement is decided by the bin in which it resides. Note that in this case the response is not an indicator of which delay is longer in the comparison. The smaller the width of the bin is, the closer the inter-FHD is to 50\% because roughly half of the outputs would reside in $bin\_1$ even with biased delay. On the other hand, the width of the bin should be large enough so that multiple measurements of a same PUF should always fall into the same bin. In other words, the width of the bin should be larger than the variation of the intra-chip measurement distribution. Therefore, the choice of inspection bit is a tradeoff between inter-FHD and intra-FHD for a PUF with asymmetric routing.
\begin{figure}[htb]
\centering
\includegraphics[width=2.9in]{./pics/biased_distribution_bin.jpg}
\caption{For an asymmetrically routed PUF with proper inspection bit, roughly half of the difference values across all chips would fall in $bin\_1$, therefore the inter-FHD would be close to 50\%.}
\label{figure:bd_bin}
\end{figure}
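The extraction itself is simple: interpret the difference register as a two's-complement value and read bit $i$, which assigns the value to an alternating bin of width $2^i$. A sketch, where the register width follows the 22-bit example above and the measured values are illustrative:

```python
def response_bit(diff_value, i, width=22):
    # Read bit i of the `width`-bit two's-complement difference value;
    # bins of width 2**i whose bit i equals 1 form bin_1, the rest bin_0.
    return ((diff_value & ((1 << width) - 1)) >> i) & 1

# A biased PUF: all difference values cluster around a large offset, so
# the MSB is constant, while a middle bit still splits the measurements.
biased = [100_000 + d for d in (-3, 7, 15, -12, 9)]
msb = [response_bit(d, 21) for d in biased]
mid = [response_bit(d, 3) for d in biased]
print(msb)  # [0, 0, 0, 0, 0] -- the MSB gives biased (constant) responses
print(mid)  # both 0s and 1s appear with a suitable middle inspection bit
```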
\section{Inspection Bit Identification} \label{sec:models}
\subsection{Intra-FHD Prediction Model} \label{sec:up}
The intra-FHD depends on the width of the bins $w=2^{i}$ when the inspection bit is $bit_{i}$. A straightforward way to determine the associated intra-FHD for each inspection location is to gather multiple measurements of the same challenge on a same PUF, and simply calculate the intra-FHD for each $bit_{i}$. A more efficient approach is to predict the intra-FHD without calculating it for each $bit_{i}$.
To predict the intra-FHD$_{k}$ of a challenge $C_{k}$ for an inspection bit, we first obtain $t$ measured difference register values of the challenge $C_{k}$ on the same PUF. Since the bin width and the range of the difference register are known, the $t$ difference values can be divided into two groups (responses) according to the bins in which they reside. Let the number of difference values that fall in $bin\_1$ be $n_{one}$, and the number that fall in $bin\_0$ be $n_{zero}$. $n_{one}$ and $n_{zero}$ represent the number of responses of the challenge $C_{k}$ that are one and zero during the $t$ measurements, respectively. Since the intra-FHD is essentially calculated from the response difference between any two measurements, the predicted intra-FHD$_{k}$ is calculated as:
\begin{equation} \label{eq:intra}
\begin{aligned}
\textit{intra-FHD}_{k} &= \frac{n_{one}\times n_{zero}}{\binom{t}{2}} \times 100\%,
\end{aligned}
\end{equation}
where the final predicted intra-FHD would be the averaged intra-FHD$_{k}$ of all challenges.
As shown in Figure \ref{figure:vw}, the expected intra-FHD$_{1}$ is 0\% because all measurements fall in the same bin and $n_{one} \times n_{zero} =0$. The expected intra-FHD$_{2}$ depends on the portion of measured values that fall in $bin\_1$. With a larger bin width $w$, it is more likely that all responses fall into the same bin.
\begin{figure}[htb]
\centering
\includegraphics[width=2.7in]{./pics/variation_window.jpg}
\caption{Magnified view of Figure \ref{figure:bd_bin} with three bins. $w$ is the bin width and the measurement ranges for challenges $C_{1}$ and $C_{2}$ are specified. The expected intra-FHD$_{1}$ is 0\% and the expected intra-FHD$_{2}$ depends on the portion of measured values that fall in $bin\_1$.}
\label{figure:vw}
\end{figure}
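The intra-FHD prediction above can be sketched directly; the repeated measurement values below are illustrative:

```python
def predicted_intra_fhd(diff_values, i, width=22):
    # Predicted intra-FHD (%) for one challenge: n_one * n_zero / C(t, 2),
    # with responses taken as bit i of the two's-complement values.
    t = len(diff_values)
    bits = [((d & ((1 << width) - 1)) >> i) & 1 for d in diff_values]
    n_one = sum(bits)
    return 100.0 * n_one * (t - n_one) / (t * (t - 1) // 2)

# Ten repeated measurements of one challenge (illustrative values):
meas = [41, 42, 42, 43, 41, 42, 44, 42, 41, 43]
print(predicted_intra_fhd(meas, 0))  # bit 0 flips: 100*5*5/45 = 55.55...
print(predicted_intra_fhd(meas, 6))  # wide bins, one bin for all: 0.0
```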
\subsection{Inter-FHD Lower Bound Prediction Model}
The inter-FHD depends on the bin width $w$ with a given inspection bit $bit_{i}$.
Assume the inter-chip difference value follows a Normal distribution $N(\mu,\sigma^2)$. Define $\epsilon$ to be the distance between the mean $\mu$ and the closest bin boundary on the left, as Figure \ref{figure:iwc} shows. We first prove that the worst-case inter-FHD happens when $\epsilon=0.5w$, and then derive the prediction model of the inter-FHD for the worst-case scenario.
\subsubsection{Worst-Case Inter-FHD Identification}
Given a fixed $w$, define $A_{1}(\epsilon)$ and $A_{0}(\epsilon)$ to be the total underlying area in $bin\_1$ and $bin\_0$ as functions of $\epsilon$, respectively. For any Normal distribution, $A_{1}(\epsilon)$ and $A_{0}(\epsilon)$ are calculated as:
\vspace{-5mm}
\begin{flalign}
A_{1}(\epsilon) &= \sum_{n=-\infty}^{\infty} F(-\epsilon+2nw+w)-F(-\epsilon+2nw) \label{eq:A1} \\
A_{0}(\epsilon) &= 1 - A_{1}(\epsilon) \label{eq:A0}
\end{flalign}
where $F(\cdot)$ is the Cumulative Distribution Function (CDF) of the Normal distribution, and $n$ is the index for bin area summation.
The ratio $Ratio(\epsilon)$ is defined as:
\begin{equation} \label{eq:ratio}
\begin{aligned}
Ratio(\epsilon) &= \frac{A_{1}(\epsilon)}{A_{0}(\epsilon)}, \quad 0<\epsilon<w\\
\end{aligned}
\end{equation}
where the range of $\epsilon$ is from 0 to $w$ because of its periodic structure.
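These quantities are straightforward to evaluate numerically: the CDF terms decay quickly, so the infinite sum can be truncated. A sketch with the bins anchored so that the boundary nearest the mean lies at $\mu-\epsilon$; the values of $\mu$ and $\sigma$ are illustrative:

```python
import math

# Numerical evaluation of A_1(eps), A_0(eps) and Ratio(eps) for a Normal
# inter-chip distribution N(mu, sigma^2); the sum over bins is truncated.
def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def A1(eps, w, mu, sigma, n_terms=200):
    # bin_1 intervals are [mu - eps + 2nw, mu - eps + 2nw + w]
    return sum(normal_cdf(mu - eps + 2 * n * w + w, mu, sigma)
               - normal_cdf(mu - eps + 2 * n * w, mu, sigma)
               for n in range(-n_terms, n_terms + 1))

def ratio(eps, w, mu, sigma):
    a1 = A1(eps, w, mu, sigma)
    return a1 / max(1.0 - a1, 1e-12)   # guard against a1 ~ 1.0

mu, sigma = 100_000.0, 100.0
print(ratio(4.0, 8.0, mu, sigma))      # w << sigma: close to 1
print(ratio(128.0, 256.0, mu, sigma))  # w > sigma: visibly unbalanced
```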
The closer the $Ratio(\epsilon)$ is to one, the closer the inter-FHD would be to 50\% because the two areas are closer to each other. We want to show that the largest (most unbalanced) ratio happens at $\epsilon=0.5w$ as Figure \ref{figure:iwc} shows.
To find the extreme value of $Ratio(\epsilon)$ for a fixed $w$, we take the derivative of Equation \ref{eq:ratio} with respect to $\epsilon$ and replace $A_{0}(\epsilon)$ by $1-A_{1}(\epsilon)$ from Equation \ref{eq:A0}:
\begin{equation} \label{eq:ratio_de}
\begin{aligned}
\frac{d}{d\epsilon}Ratio(\epsilon) &= \frac{A'_{1}(\epsilon)}{(1-A_{1}(\epsilon))^{2}}
\end{aligned}
\end{equation}
From Equation \ref{eq:ratio_de} we see that finding the extreme value of $Ratio(\epsilon)$ is equivalent to finding the zeros of $A'_{1}(\epsilon)$, which is given below:
\begin{equation} \label{eq:A1_de}
\begin{aligned}
\frac{d}{d\epsilon}A_{1}(\epsilon) &= \sum_{n=-\infty}^{\infty} f(-\epsilon+2nw)-f(-\epsilon+2nw+w) \\
\end{aligned}
\end{equation}
where $f(\cdot)$ is the Probability Density Function (PDF) of the Normal distribution. Equation \ref{eq:A1_de} shows that $A'_{1}(\epsilon)$ is a sum of differences between two PDF terms, one of which is a version of the other shifted by $w$. Therefore, applying $\epsilon=0.5w$ to Equation \ref{eq:A1_de}, we get zero. Figure \ref{figure:iwc} shows that when $\epsilon=0.5w$, each difference term in Equation \ref{eq:A1_de} has its counterpart at the mirrored location about the center, so that the summation vanishes.
\begin{figure}[htb]
\centering
\includegraphics[width=2.2in]{./pics/inter_worst_case.jpg}
\caption{Worst Inter-FHD happens when the mean is at the middle of a bin.}
\label{figure:iwc}
\end{figure}
To conclude our derivation: given the $w$ of an inspection bit, the extreme value of $Ratio(\epsilon)$ occurs at $\epsilon=0.5w$, and the inter-chip standard deviation $\sigma$ is needed for the $Ratio(\epsilon)$ calculation.
\subsubsection{Inter-FHD Lower Bound Prediction}
To predict the inter-FHD, we calculate the probability that any pair of chips produces different responses. The inter-FHD prediction given the width $w$ of the inspection bit is:
\begin{equation} \label{eq:iner}
\begin{aligned}
\textit{inter-FHD} &=\frac{2Ratio(\epsilon)}{(1+Ratio(\epsilon))^{2}} \\
\end{aligned}
\end{equation}
With $Ratio(\epsilon)=1$, the two areas are equal, resulting in a predicted 50\% inter-FHD. Given a selected $bit\_i$, plugging $\epsilon=0.5w$ into Equation \ref{eq:iner} gives the predicted inter-FHD lower bound.
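Putting the pieces together, the worst-case prediction for a given inspection bit can be sketched as follows. The block is self-contained; $\mu$ and $\sigma$ are illustrative, with $\sigma$ standing in for the pre-layout estimate:

```python
import math

# Predicted worst-case inter-FHD for inspection bit i: evaluate Ratio at
# eps = 0.5*w with w = 2**i, then apply inter-FHD = 2R/(1+R)^2 (in %).
def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def A1(eps, w, mu, sigma, n_terms=200):
    return sum(normal_cdf(mu - eps + 2 * n * w + w, mu, sigma)
               - normal_cdf(mu - eps + 2 * n * w, mu, sigma)
               for n in range(-n_terms, n_terms + 1))

def inter_fhd_lower_bound(i, mu, sigma):
    w = 2.0 ** i
    a1 = A1(0.5 * w, w, mu, sigma)
    r = a1 / max(1.0 - a1, 1e-12)
    return 100.0 * 2.0 * r / (1.0 + r) ** 2

mu, sigma = 100_000.0, 100.0   # illustrative inter-chip distribution
for i in (3, 6, 9):
    print(i, round(inter_fhd_lower_bound(i, mu, sigma), 2))
# narrow bins stay near 50%; wide bins (w >> sigma) collapse the inter-FHD
```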
Please note that to predict inter-FHD, the inter-chip standard deviation $\sigma$ is needed because the calculation involves the CDF. However, the mean $\mu$ does not affect the prediction because the extreme value is obtained by finding the worst-case $\epsilon$. Also, since changing the inspection bit results in at least a 2x change of $w$, the inter-chip $\sigma$ does not have to be calculated with high accuracy. It can be obtained by pre-layout simulation or by measuring a small number of chips.
\subsection{Inspection Bit Selection}
Given the Error Correction Code (ECC) specification corresponding to the PUF design, the intra-FHD threshold can be defined.
From the intra-FHD prediction model, choose a set of candidate bits that would satisfy the intra-FHD threshold requirement. From the candidate bits, a best inspection bit can be determined by applying the inter-FHD prediction model given the standard deviation $\sigma$ of the inter-chip delay distribution.
Please note that only one chip is needed for the inspection bit selection, since the measurement noise is similar for all chips in our experiment and $\sigma$ is obtained from pre-layout simulation. The location of the final inspection bit, which is public information, is passed to all PUFs for the secret response generation.
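The selection procedure can be sketched as follows; `select_inspection_bit` and its inputs are hypothetical names, assuming each candidate bit comes with its two predicted FHD values:

```python
def select_inspection_bit(candidates, intra_threshold):
    """candidates maps a bit index to a pair
    (predicted_intra_fhd, predicted_inter_fhd_lower_bound)."""
    ok = {b: inter for b, (intra, inter) in candidates.items()
          if intra <= intra_threshold}
    # among candidates meeting the reliability requirement,
    # pick the most unique one (inter-FHD closest to 50%)
    return min(ok, key=lambda b: abs(ok[b] - 0.5))
```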
\section{Experimental Results} \label{sec:exp}
\subsection{Strong UNBIAS PUF Implementation}
The strong UNBIAS PUF structure is implemented on 7 Altera DE2-115 FPGA boards. In our implementation, no physical constraints, additional XORs, tunable delay units, or any systematic variation compensation techniques are used. The design is purely an RTL design.
The ROs inserted between path configurations are composed of 19 inverters, and the signal is propagated to the next path configuration when the RO counter associated with the RO reaches a count of 50 thousand. The UNBIAS PUF has 10 path configurations; therefore the challenge is 10 bits long. The difference register is 19 bits long, and the final response for each challenge is one bit. For our experiment, 120 challenges are applied, and 120 bits of response are obtained for each PUF within a second. Please note that the RO structure and the count of the RO counter are selected given the 50 MHz system clock of the FPGA. The results are similar as long as no overflow occurs at the 19-bit difference register.
\subsection{Prediction Model Validation}
The inter-FHD is obtained from 7 FPGAs, and the intra-FHD is calculated by measuring each PUF 10 times. To show inter-chip variation and measurement noise of our experimental setup, we measure the frequency of a single RO across the chips 10 times, and the inter-chip variation is 6.1\% with 0.2\% measurement noise.
To validate the intra-FHD prediction model, we follow the procedure described in Section \ref{sec:up} with $t=10$ measurements. Figure \ref{figure:intra_predict} shows the results of the intra-FHD prediction of $bit_{5}$ and $bit_{10}$. The intra-FHD of $bit_{5}$ is much higher than $bit_{10}$ because its bin width is much smaller.
\begin{figure}[h]
\centering
\includegraphics[width=2.7in]{./pics/intra_predict.jpg}
\caption{Strong UNBIAS PUF intra-FHD predictions of $bit_{5}$ and $bit_{10}$ of 7 FPGAs. $bit_{5}$ has much larger intra-FHD because its bin width is smaller.}
\label{figure:intra_predict}
\end{figure}
To validate the inter-FHD prediction model, for each challenge, we obtain an inter-chip standard deviation $\sigma$ from 7 FPGAs, and the final $\sigma$ used in the prediction model is the median of the $\sigma$ from 120 challenges, which gives $\sigma=521$. The results shown in Figure \ref{figure:inter_predict} indicate that the inter-FHD lower bound prediction is well matched with the measured data.
To demonstrate that the inter-FHD prediction model does not require an accurate inter-chip $\sigma$ estimation, Figure \ref{figure:inter_predict} also shows the prediction range with $\sigma \pm 15\%$ variation. We can see that the differences between the predictions are limited, which indicates that $\sigma$ can either be obtained from pre-layout simulation or from measurements of a small number of chips. The prediction gap is relatively large when $w$ is much larger than $\sigma$. However, as $w$ becomes comparable to $\sigma$, where potential inspection bits begin to occur, the prediction curve rises quickly and matches the measured data well. Figure \ref{figure:inter_predict} also shows that $bit_{10}$ should be a proper inspection bit because the intra-FHD is low and the inter-FHD is close to 45\%.
\subsection{Uniqueness and Reliability Evaluation} \label{sec:measure}
The results of inter-FHD and intra-FHD with different inspection bit selections are shown in Figure \ref{figure:inter_predict}. As we can see from the figure, using bits closer to the MSB gives low intra-FHD but also low inter-FHD. This verifies the fact that the delay paths are biased if no physical implementation constraints are imposed. On the other hand, using bits closer to the LSB gives 50\% on both intra-FHD and inter-FHD because of the measurement noise. As predicted, the best inspection location appears at $bit_{10}$ with 45.1\% inter-FHD and 5.9\% intra-FHD. The results also indicate that the systematic variation is mitigated because no constraints are imposed at all.
\begin{figure}[htb]
\centering
\includegraphics[width=2.9in]{./pics/inter_predict.jpg}
\caption{Inter-, intra-FHD, and inter-FHD prediction using $\sigma=521$ with different inspection bit selections of the strong UNBIAS PUF.}
\label{figure:inter_predict}
\end{figure}
Table \ref{table:compare} shows comparison results with previous work. For the conventional Arbiter PUF (APUF) shown in the second column, the results from \cite{maiti2013} show that the circuit is essentially a constant number generator with very little inter-FHD. The third column shows the 3-1 double Arbiter PUF with XORs \cite{Machida2015}, where a symmetric layout is still required, and the hardware overhead is 2x or 3x from the duplicated circuits, depending on the uniqueness requirement of the application. The inter-FHD is close to 50\% but the intra-FHD is high due to the XORs. The fourth column shows the results from the Path Delay Line (PDL) PUF \cite{Sahoo2015}. A symmetric PDL and delay characterization for each CRP are required, which can cause scalability issues. Also, the ability to eliminate biased responses is limited because it depends on the number of tuning stages inserted. The last column shows the proposed strong UNBIAS PUF. Its behavior is unique and stable, and, most importantly, it requires no symmetric layout at all.
\begin{table}
\centering
\caption{Comparison between previous Arbiter PUFs and strong UNBIAS PUF}
\begin{tabular}{|c|c|c|c|c|}
\hline
& \makecell{APUF\\ \cite{maiti2013}} & \makecell{XOR\\ \cite{Machida2015}} & \makecell{PDL\\ \cite{Sahoo2015}} & \makecell{UNBIAS\\PUF} \\
\hline
inter-FHD & 7.2\% & 50.6\% & 45.25\% & 45.1\% \\
\hline
intra-FHD & 0.24\% & 11.8\% & 4.1\% & 5.9\% \\
\hline
Symm. Layout & No & Yes & Yes & No \\
\hline
Characterization & No & No & Yes & No \\
\hline
\end{tabular}
\label{table:compare}
\end{table}
\subsection{Temperature and Voltage Variations}
For temperature and voltage variations, the reference responses are measured at 20$^{\circ}$C with a standard voltage of 12V. The reference responses are then compared with responses measured at 20$^{\circ}$C and 75$^{\circ}$C with 10\% voltage variation. The results indicate the reliability of the PUF when it is enrolled under normal conditions but verified in a high-temperature environment with an unstable voltage source.
Figure \ref{figure:variations} shows the intra-FHD using $bit_{10}$ as the inspection bit. All intra-FHD values at 20$^{\circ}$C with 10\% voltage variation are below 8\%, and all intra-FHD values at 75$^{\circ}$C with 10\% voltage variation are below 14\%, which is still within the conventional ECC margin with error reduction techniques for PUFs \cite{Guajardo07,Yu2010}. Compared with the RO PUF presented in \cite{Kodýtek15}, one possible explanation of the smaller intra-FHD of our strong UNBIAS PUF is that, with multiple RO delay units, the overall delay variation is canceled out, whereas for the RO PUF, the variation of each RO is compared directly.
\begin{figure}[h]
\centering
\includegraphics[width=2.4in]{./pics/variations.jpg}
\caption{Strong UNBIAS PUF intra-FHD under temperature and voltage variations.}
\label{figure:variations}
\end{figure}
\vspace{-5mm}
\section{Conclusions} \label{sec:con}
We proposed the first strong UNBIAS PUF that can be implemented purely by RTL without complex post-layout analysis or hand-crafted physical design effort. The proposed measurement can effectively mitigate the impact of biased delay paths and metastability issues to extract local device randomness. The inspection bit can be determined efficiently from the intra-FHD and inter-FHD prediction models.
The strong UNBIAS PUF is implemented on 7 FPGAs without imposing any physical layout constraints. Experimental results show that the intra-FHD of the strong UNBIAS PUF is 5.9\% and the inter-FHD is 45.1\%, and the prediction models are closely fitted to the measured data. The averaged intra-FHD of the strong UNBIAS PUF at worst temperature and voltage variations is about 12\%, which is still within the margin of conventional ECC techniques. The fact that the proposed scheme is immune to physical implementation bias would allow the strong UNBIAS PUF to be designed and integrated with minimum effort in a high-level description of the design, such as during RTL design.
\footnotesize{
\bibliographystyle{unsrt}
\copyrightspace
Historically, stippling was primarily used in the printing industry to create dot patterns with various shade and size effects.
This technique has proven successful in conveying visual information, and was widely adopted by artists, who called this rendering art {\it pointillism}.\footnote{See~\url{http://www.randyglassstudio.com/} for some renderings by award-winning artist Randy Glass.}
Informally speaking, the main idea is that many dots carefully drawn on a paper can fairly approximate different tones perceived through local differences of density, as exemplified in Figure~\ref{frank}:
to the human eye, a high-density region looks darker than a low-density one.
The main difference from dithering and half-toning methods is that points are allowed to be placed anywhere, and not only on a fixed regular grid.
Thus the primary difficulty in stippling is to obtain a point distribution that adapts to a given density function, in the sense that the number of points in an area must be proportional to the underlying density.
\begin{figure}
\centering
\includegraphics[width=0.45\columnwidth]{frank.eps}
\includegraphics[width=0.45\columnwidth]{stipplingfrank600.eps}
\caption{Stippling a photo by rendering the bitmap (left) as a point set (right) distributed uniformly according to a prescribed underlying image density: the grey-level intensity (here, 600 points).\label{fig:frank}}
\label{frank}
\end{figure}
\medskip
In this work, we present a method to generate stippled videos. Generating videos with stipples or other marks is an appealing problem that has not yet been fully solved.
Our method produces high quality videos without point flickering artifacts.
Each point can easily be tracked during an entire shot sequence.
Our algorithm is able to render fading effects\footnote{Fading effects are so commonly met in practice in videos following a storytelling that they cannot be neglected.} with the adaptation of both point and color densities.
Our method allows us to deal robustly with an entire video without having to cut it down into pieces using shot detection algorithms.
To be able to work on the ``output video'' (time-varying point sets) and dynamically adjust some rendering parameters, we also developed an in-house video viewer that handles these vectorized stippled videos.
The viewer lets us choose the contrast to adjust point size, use color coding, or introduce patterns for drawing edges to improve the rendering quality.
Moreover, the viewer application allows one to interpolate unknown frames at any time position to match the frame rate of the display device.
Thus, we can easily increase the frame rate if required, or conversely slow down some portions of the stippled videos. To improve edge rendering, the viewer application allows one to replace points by patterns oriented along the local gradient of the current frame.
\medskip
Secord~\cite{DBLP:conf/npar/Secord02} designed a non-interactive technique to create quality stippling still images in 2002.
Based on the celebrated Lloyd's $k$-means method~\cite{DBLP:journals/tit/Lloyd82}, Secord creates stippling adapted to a density function and improves the result by considering various point sizes.We considered this approach of Stippling for our algorithm because it fits with the goal of our algorithm: being able to render an high quality output for every type of video without knowing any other data. There are other approaches like the use of multi-agent systems to distributes stipples has been explored by~\cite{DBLP:journals/cgf/SchlechtwegGS05}. This way of rendering is easily extendable to various kind of stroke-based render. There is too a possibility of generating stipple patterns on 3D-objects by triangulating the surface and by positioning the dots on the triangle vertices~\cite{DBLP:journals/cga/PastorFS03}. This is a very fast method but the resulting patterns are not optimal. This has been improved by~\cite{VBTS07a} but we always cannot use as input every type of data --- such as an AVI file taken on the internet.
\medskip
Like all computer-generated point distributions, we are also looking for stippled images that have the so-called {\it blue noise} characteristics, with large mutual inter-distances between points and no apparent regularity artifacts. An interesting approach to stippling with blue noise characteristics based on Penrose tiling is described by Ostromoukhov~\cite{DBLP:journals/tog/OstromoukhovDJ04}. More recently, a method based on Wang tiles, described by~\cite{KCODL06}, enabled the creation of real-time stippling with blue noise characteristics. Balzer et al.~\cite{DBLP:journals/tog/BalzerSD09} developed an alternative way to create Voronoi diagrams with blue noise characteristics. They introduced so-called capacity-constrained diagrams that converge towards distributions which exhibit no regularity artifacts and whose adaptation to a given heterogeneous density function behaves better.
This capacity-constrained method enhances the spectral properties of the point distribution while avoiding its drawbacks. Our algorithm can be based on either Secord's or Balzer et al.'s approach and inherits its blue noise characteristics: we choose Secord's algorithm for speed, or Balzer et al.'s algorithm for a high-quality point distribution.
\medskip
Another way to render a stipple drawing is to mix different types of points instead of using only contrasted and colored points. The work of S. Hiller~\cite{DBLP:journals/cgf/HillerHD03} explored this possibility by positioning small patterns in a visually pleasing form in the stipple image. We use patterns in our algorithm to improve the rendering of edges.
\medskip
We adapted this recent set of methods to video and solved the problems that appeared along the way.
A major drawback of these previous stippling methods is that, while they handle fine details and textures nicely and efficiently, they
fall short when dealing with objects with sharp edges.
In this paper, we overcome this drawback and propose a frequency-based approach in order to detect image edges and enhance their support and rendering.
This frequency approach is further improved by the use of patterns that replace some points and partially reconstruct the edges of the image: edge rendering is improved without increasing point size, by suggesting the local orientation of edges according to the surrounding shape elements~\cite{DBLP:conf/psivt/GomesSC07}.
\medskip
The roadmap of the paper is as follows:
The basic ingredient of our approach is the use of {\it centroidal Voronoi diagrams} (a fundamental structure of computational geometry) to produce good-looking distributions of points, as explained in Section~\ref{voronoi}.
This generative method will be further extended to compute a full video in Section~\ref{video}.
Then, to improve the rendering, one needs to change some parameters to obtain good fading effects, as explained in Section~\ref{diff}. An improved rendering must also consider both color information and point contrasts. Small adjustments described in Section~\ref{adjust} are required to get such a desirable output.
We implemented a tailored scheme to handle the stippling rendering of sharp edges.
Section~\ref{frequence} explains our solution to improve edge rendering by considering both high and low frequencies to place points accordingly.
Finally, we use oriented patterns to reconstruct sharp edges in the output.
Section~\ref{pattern} explains the method used and describes the obtained results.
\section{Voronoi diagrams}
\label{voronoi}
In order to create a stipple image, we adapted Lloyd's $k$-means method~\cite{OkabeBootsSugihara} to obtain a Voronoi diagram that adapts to an underlying image density. To do this, we need two sets of points:
\begin{description}
\item The first point set, which is called the {\it support set} of points, is chosen following the grey-scale intensity density of the image.
In fact, we choose in practice $N$ points in the source image, where $N$ can be larger than the total number of pixels in the image.
Those support points should be drawn such that if a pixel is darker than another one, it has a higher chance of containing one or more support points.
To get those support points, we choose one point coordinate $(x,y)$ at random, and then an integer falling in the grey intensity range $[0,255]$.
If this integer is greater than $255$ minus the color of the pixel at $(x,y)$, we decide to put a support point in $(x,y)$.
\item Then we generate the second set of points which will create the Voronoi diagram itself.
We call these points: \textit{Sites}.
This second set of points has the same characteristics as the support points. Only the number of points changes:
we choose only $N/\alpha$ points --- where $\alpha = 10^{3}$, for example.
\end{description}
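A minimal sketch of this rejection sampling, assuming pixel values encode darkness in $[0,255]$ (255 = darkest) so that the acceptance probability is proportional to darkness:

```python
import random

def sample_points(image, n_points, seed=0):
    """Rejection-sample n_points positions: draw (x, y) and a
    threshold m uniformly, and accept when m > 255 - darkness(x, y),
    i.e. with probability proportional to the pixel darkness."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    points = []
    while len(points) < n_points:
        x, y = rng.randrange(w), rng.randrange(h)
        m = rng.randrange(256)
        if m > 255 - image[y][x]:
            points.append((x, y))
    return points
```

Both the support set and the site set are drawn this way; only the requested count differs ($N$ versus $N/\alpha$).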
\medskip
Then every generator point is identified with the closest site according to a given metric.
The common choice is to use the Euclidean $L^{2}$ distance metric
\begin{equation*}
d(P_1,P_2)=\| P_{1}-P_{2} \| = \sqrt{(x_{1}-x_{2})^{2} + (y_{1}-y_{2})^{2}}
\end{equation*}
where $P_{i} = (x_{i},y_{i})$ are two given points.
Each set of points identified with a particular site forms a Voronoi region, and the set of Voronoi regions ``covers'' the entire image.
We can define each Voronoi region $V_{i}$ as follows:
\begin{multline*}
V_{i} = \Bigl\{ x\in \Omega \ | \ \| x-x_{i}\| \leq \| x-x_{j}\| \\
\mathrm{for} \ j=1,...,n \ \mathrm{and} \ j\neq i \Bigr\}
\end{multline*}
where $\|.\|$ denotes the Euclidean distance derived from the $L^{2}$ norm.
\medskip
The following update step is performed to obtain a good-looking random distribution of sites:
We move each site to the mass center of its associated generator points.
Given a density function $\rho (x) \geq 0$ defined on $\Omega$, for any region $V \subset \Omega$, the standard mass center $C$ of $V$ is given by:
\begin{equation*}
C = \frac{\displaystyle\int_{V} x \rho(x)dx}{\displaystyle\int_{V} \rho(x)dx}
\end{equation*}
Here, we consider $\rho(x) = 1$. Then we redo the identification for each generator point. Iterating these two steps lets us converge to a centroidal Voronoi diagram~\cite{CVT}, a quasi-uniform partition of the domain adapted to a given image density.
The CVT algorithm operations are summarized in Algorithm~1.
\begin{algorithm}
\caption{Centroidal Voronoi Tessellation (CVT).}
\begin{algorithmic}[1]
\STATE \underline{Generation of support points}:
\FOR{$i = 1$ to Number of Sites $*\ \alpha$}
\STATE $x = $ Random Integer $\in [0;\mathrm{Width}]$
\STATE $y = $ Random Integer $\in [0;\mathrm{Height}]$
\STATE $m = $ Random Integer $\in [0;255]$
\IF {$m > 255-$Color$(x,y)$}
\STATE Add support point at $(x,y)$
\STATE $i \leftarrow i + 1$
\ENDIF
\ENDFOR
\STATE \underline{Generation of sites}:
\FOR{$i = 1$ to Number of Sites}
\STATE $x = $ Random Integer $\in [0;\mathrm{Width}]$
\STATE $y = $ Random Integer $\in [0;\mathrm{Height}]$
\STATE $m = $ Random Integer $\in [0;255]$
\IF {$m > 255-$Color$(x,y)$}
\STATE Add Site at $(x,y)$
\STATE $i \leftarrow i + 1$
\ENDIF
\ENDFOR
\STATE \underline{Associate support points to closest sites}
\FOR{$i = 1$ to Number of Sites $*\ \alpha$}
\STATE Find the closest Site to the support point $i$
\STATE $d \leftarrow \infty$ (smallest distance so far)
\STATE $id \leftarrow 0$ (id of the nearest Site)
\FOR{$j = 1$ to Number of Sites}
\STATE Calculate the distance $l$ between Site $j$ and support point $i$
\IF {$l < d$}
\STATE $d = l$
\STATE $id = j$
\ENDIF
\ENDFOR
\STATE Associate support point $i$ with Site $id$
\ENDFOR
\WHILE {Convergence criteria $ < $ threshold}
\STATE Move sites to mass center
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\medskip
The convergence of Lloyd's algorithm to a centroidal Voronoi diagram on continuous domains has been proven for the one-dimensional case.
Although the higher dimensional cases seem to converge similarly in practice, no formal proof is yet reported.
Notice that here we fully discretize the space by considering support points.
The criterion we use to define convergence is a numeric comparison of the position of each site between two consecutive iterations:
if the sum, over all sites, of the distances traveled during the last iteration falls under a prescribed threshold, we consider the computation terminated.
This discretized version of CVT provides good and quick results as attested in~Figure~\ref{figVoronoi}.
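The discretized Lloyd iteration with this termination criterion can be sketched in plain Python (an illustration, not the paper's implementation):

```python
import math

def lloyd(support, sites, threshold=1e-3, max_iter=100):
    """Move each site to the mass center (rho = 1) of its associated
    support points; stop when the total displacement of the sites
    during one iteration falls under the threshold."""
    for _ in range(max_iter):
        clusters = [[] for _ in sites]
        for p in support:
            # associate the support point with its closest site (L2 metric)
            j = min(range(len(sites)),
                    key=lambda k: (p[0] - sites[k][0])**2 + (p[1] - sites[k][1])**2)
            clusters[j].append(p)
        moved, new_sites = 0.0, []
        for s, c in zip(sites, clusters):
            if c:
                cx = sum(p[0] for p in c) / len(c)
                cy = sum(p[1] for p in c) / len(c)
            else:          # empty region: leave the site in place
                cx, cy = s
            moved += math.hypot(cx - s[0], cy - s[1])
            new_sites.append((cx, cy))
        sites = new_sites
        if moved < threshold:
            break
    return sites
```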
\begin{figure}
\centering
\includegraphics[width=0.38\textwidth]{VoronoiToCentroidalVoronoi.eps}
\caption{Centroidal Voronoi Diagram (bottom) generated from successive Voronoi diagrams (top).
Each site is iteratively relocated to the centroid (the center of mass) of its Voronoi region.}
\label{figVoronoi}
\end{figure}
\medskip
Another common solution is to stop Lloyd's relocation scheme after a prescribed number of iterations.
Lagae~\cite{DBLP:journals/cgf/LagaeD08} suggested using the normalized Poisson disk radius $\alpha \in [0;1]$ as a quality measure for point distributions. If two points in the distribution coincide, $\alpha = 0$; for a hexagonal lattice, $\alpha = 1$. Lagae~\cite{DBLP:journals/cgf/LagaeD08} chose $\alpha = 0.75$ as optimal for a reference point set obtained via dart throwing (a common rejection sampling method). The convergence of $\alpha$ can be used as a termination criterion: we stop Lloyd's method as soon as $\alpha$ becomes stable.
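Under the assumption of $n$ points in a unit-area domain, with the hexagonal-packing bound $\sqrt{2/(\sqrt{3}\,n)}$ on the largest achievable minimal pairwise distance, the measure can be sketched as:

```python
import math
from itertools import combinations

def normalized_radius(points):
    """alpha = d_min / d_max, where d_min is the smallest pairwise
    distance of the point set and d_max the largest minimal distance
    achievable by n points in a unit-area domain (hexagonal lattice);
    alpha = 0 when two points coincide."""
    n = len(points)
    d_min = min(math.dist(p, q) for p, q in combinations(points, 2))
    d_max = math.sqrt(2.0 / (math.sqrt(3.0) * n))
    return d_min / d_max
```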
\section{CVT method for video stippling}
\subsection{From images to video}
\label{video}
The first step consists in computing the first image stippling as explained in Section~\ref{voronoi}.
We store the image and the position of each generator point and each site in a text file.
The information we need for the remainder of the algorithm is the current density, the current number of generator points and the current number of sites. We also need to be able to identify each site.
That is why we keep in the text file the identification number of all sites.
This way of storing data gives us the possibility to pause our algorithm if needed, and resume it later.
\medskip
To compute a full video stippling, we first extract from the video an image sequence in order to be able to use image stippling methods. We then need to keep the information about stipple points of the $N-1$ previous images before computing the $N$-th image. If we fail to do so, there will be no correlation between the images of the sequence, and undesirable flickering effects will appear in the synthesized stippled video.
We start the computation of the $N$-th image by relocating the generator points and sites found for the $(N-1)$-th image, read from the corresponding text file. Then we seek all the generator points and sites that are no longer needed in this new image.
To decide whether to keep points or not, we generate a {\it difference image} where each pixel intensity is set as follows:
\begin{equation*}
\left\{
\begin{aligned}
P_{\mathrm{diff}}(x,y) & = 0 \text{ if } P_{N-1}(x,y)-P_{N}(x,y) \leq 0 \\
P_{\mathrm{diff}}(x,y) & = P_{N-1}(x,y)-P_{N}(x,y) \text{, otherwise}\\
\end{aligned}
\right.
\end{equation*}
where $P_{N}(x,y)$ is the color of the pixel at the coordinates $(x,y)$ of the image frame number $N$.
\medskip
Each generator point or site placed on a pixel where the difference image is nonzero has a probability, proportional to that value, of being deleted.
If the value is $255$ the probability is $100\%$. We do the same to find where we have to {\it add} new points by calculating another difference image as follows:
\begin{equation*}
\left\{
\begin{aligned}
P_{\mathrm{diff}}(x,y) & = 0 \text{ if } P_{N}(x,y)-P_{N-1}(x,y) \leq 0 \\
P_{\mathrm{diff}}(x,y) & = P_{N}(x,y)-P_{N-1}(x,y) \text{, otherwise}\\
\end{aligned}
\right.
\end{equation*}
We obtain two difference images as shown in Figure~\ref{figImgdiff}.
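In code, both clipped difference images can be computed in one pass (a sketch with illustrative names; pixel values are grey levels in $[0,255]$):

```python
def difference_images(prev, curr):
    """Clipped per-pixel differences between consecutive frames.
    'removal' is nonzero where the previous frame had the larger value
    (points there become candidates for deletion, with probability
    value/255); 'addition' is nonzero where the new frame is larger."""
    h, w = len(prev), len(prev[0])
    removal  = [[max(prev[y][x] - curr[y][x], 0) for x in range(w)]
                for y in range(h)]
    addition = [[max(curr[y][x] - prev[y][x], 0) for x in range(w)]
                for y in range(h)]
    return removal, addition
```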
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{imgdiff.eps}
\caption{From two consecutive frames (left), we generate two difference images (right).
The evolution between the first two frames is shown on the central image.}
\label{figImgdiff}
\end{figure*}
\medskip
Then we need to link the global density of the image with the total number of points. Thus we calculate for each image its {\it global density} which is equal to
\begin{equation*}
\text{Density } d = \frac{\sum_{i=1}^n P(i)}{255 n}
\end{equation*}
where $P(i)$ is the grey color of pixel $i$ and $n$ is the total number of pixels. The user enters the initial number of points. This number of points is linked to the initial density and serves as a {\it reference} during the processing of the full video sequence.
Our algorithm preserves the same ratio between the number of points and the image density during all the operations.
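A sketch of this density-to-budget link (helper names are illustrative):

```python
def global_density(image):
    """d = (sum of grey values) / (255 * n), n = number of pixels."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / (255.0 * len(pixels))

def point_budget(image, n_ref, d_ref):
    """Preserve the reference ratio n_ref / d_ref for every frame."""
    return round(n_ref * global_density(image) / d_ref)
```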
\medskip
We repeat this step for all the images of the video sequence. When this is done, we need to produce a final file containing all relevant information. Our program reads all the text files produced during the process and stores the evolution of each site, identified by its own unique number. For each site we store, at each frame, its current position and color.
With this information we generate a text file where we also add other important information such as the total number of sites that will appear during the whole shot, the size of the video (eg., width and height dimensions), and the total number of frames.
\medskip
To read and ``play'' the output file, we have developed an in-house application in Java\texttrademark{} that renders the stippled video.
With the information provided in the text file, the viewer application lets us dynamically resize the video on demand, change the contrast of the sites (the difference of size between a black site and a white one), and activate or deactivate the color mode.
Moreover, our application allows one to generate intermediate time frames by interpolating the position, size, and color of each site between two consecutive frames. This yields a true spatio-temporal vectorization of the video media.
The workflow carried out by the viewer to visualize stippled videos is explained in Figure~\ref{viewer}.
\begin{figure}
\fcolorbox{grisclair}{grisclair}{
\begin{minipage}[c]{0.45\textwidth}
Read output file:
\begin{minipage}[c]{0.95\textwidth}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|}
\hline & \multicolumn{3}{c|}{Frame 1} & \multicolumn{2}{c|}{Frame 2} \\
\hline Site ID & x & y & RGB Color & x & ... \\
\hline 1 & & & & & \\
\hline 2 & & & & & \\
\hline ... & & & & & \\
\hline
\end{tabular}
\end{minipage}
\begin{minipage}[c]{0.95\textwidth}
\centering
\ifx\JPicScale\undefined\def\JPicScale{1}\fi
\psset{unit=\JPicScale mm}
\psset{linewidth=0.3,dotsep=1,hatchwidth=0.3,hatchsep=1.5,shadowsize=1,dimen=middle}
\psset{dotsize=0.7 2.5,dotscale=1 1,fillcolor=black}
\psset{arrowsize=1 2,arrowlength=1,arrowinset=0.25,tbarsize=0.7 5,bracketlength=0.15,rbracketlength=0.15}
\begin{pspicture}(0,0)(0,15)
\newrgbcolor{userLinecolor}{0 0.8 0.8}
\psline[linewidth=3,linecolor=userLinecolor]{->}(0,15)(0,0)
\end{pspicture}
\end{minipage}
\begin{minipage}[c]{0.95\textwidth}
Draw points with these characteristics:
\begin{equation*}
\footnotesize
\begin{aligned}
N & = \mathrm{Frame Number} \\
\mathrm{Weight} & = N - \lfloor N \rfloor \\
x & = x_{\lfloor N \rfloor}*(1-\mathrm{Weight}) + x_{\lceil N \rceil}*\mathrm{Weight} \\
y & = y_{\lfloor N \rfloor}*(1-\mathrm{Weight}) + y_{\lceil N \rceil}*\mathrm{Weight} \\
\mathrm{Color} & = \mathrm{Color}_{\lfloor N \rfloor}*(1-\mathrm{Weight}) + \mathrm{Color}_{\lceil N \rceil}*\mathrm{Weight} \\
\mathrm{Size} & = \frac{\mathrm{Color}}{255} * \mathrm{Contrast} \\
\end{aligned}
\end{equation*}
\end{minipage}
\end{minipage}
}
\caption{The Java viewer first reads the source file, then renders on-the-fly the required frame by interpolating the frame and its characteristics from two consecutive discrete time frames.}
\label{viewer}
\end{figure}
\subsection{Image differences and drift adjustment of sites}
\label{diff}
\subsubsection{Handling fading effects}
The method described before is particularly well adapted to shots where objects appear and disappear instantly. The problem is that there are often objects that disappear using a fading effect.
If we use the previous formula to calculate the image differences,
we do not obtain a proper rendering for those fading transition shots.
Indeed, if an object disappears progressively over three images (say, by $33\%$ each time), we suppress $33\%$ of the generator points and sites at each step. But $100\% \times 0.67 \times 0.67 \times 0.67 \approx 30\% > 0$, so generator points and sites remain in white zones of the image after the full removal of the object, which is of course not the expected result.
To correct this, we needed a difference formula such that the number of surviving points converges towards $0$ when the destination image becomes fully white.
We implemented in our program the following formula:
\medskip
If $P_{N-1}(i)<P_{N}(i)$ we store in the difference image:
\begin{equation*}
{\Bigl(P_{N}(i)-P_{N-1}(i)\Bigr)}\times\frac{255}{255-P_{N-1}(i)}
\end{equation*}
If $P_{N-1}(i)>P_{N}(i)$ we store in the other difference image:
\begin{equation*}
{\Bigl(P_{N-1}(i) - P_{N}(i)\Bigr)}\times\frac{255}{P_{N-1}(i)}
\end{equation*}
using the same notation as before.
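These normalized differences can be sketched as follows (grey levels as integers; the integer arithmetic is illustrative):

```python
def fading_differences(prev, curr):
    """Normalized frame differences for fading shots: when the current
    pixel reaches the extreme value, the stored difference reaches 255,
    i.e. a deletion (or insertion) probability of 1."""
    h, w = len(prev), len(prev[0])
    removal  = [[0] * w for _ in range(h)]
    addition = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a, b = prev[y][x], curr[y][x]
            if a < b:
                addition[y][x] = (b - a) * 255 // (255 - a)
            elif a > b:
                removal[y][x] = (a - b) * 255 // a
    return removal, addition
```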
\medskip
This yields a perfect image/vector transcoding of the fading effect during a shot sequence.
A worked-out example is shown in Figure~\ref{figImgfading}.
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{fading.eps}
\caption{The text ``Duffy'' (left) fades out progressively to let the small character appear (right). The point density of the Voronoi tessellations adapts automatically and progressively to this fading effect. (3000 points + 1500 frequency points)}
\label{figImgfading}
\end{figure*}
\subsubsection{Drift correction}
Another major drawback observed is the drift of the sites located on an edge of an object in the image. After moving the sites to the mass center of their associated generator points, some sites end up slightly on the wrong side of the boundary, often in a white zone.
To avoid this effect, we add another pass of removal and addition of sites and generator points to clean the image. Our algorithm is reported in Algorithm~2.
\begin{algorithm}
\caption{Next image computation algorithm}
\begin{algorithmic}[2]
\STATE Difference images calculation
\STATE New density calculation
\STATE Total number of points and Sites update
\FOR{$i = 1$ to 2}
\STATE Support points and Sites suppression
\WHILE {Number of Sites $< $ Total number of Sites}
\STATE Add Site
\ENDWHILE
\WHILE {Number of support points $< $ Total number of points}
\STATE Add support point
\ENDWHILE
\STATE Move Sites to mass center
\ENDFOR
\WHILE {Convergence criteria $ < $ threshold}
\STATE Move Sites to mass center
\ENDWHILE
\end{algorithmic}
\end{algorithm}
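The relocation step of Algorithm~2 (``Move Sites to mass center'') can be sketched as follows (a brute-force NumPy illustration: pixels are assigned to their nearest site instead of computing an actual Voronoi diagram, and the use of the image density as centroid weight is our assumption):

```python
import numpy as np

def move_sites_to_mass_center(sites, density):
    """One relocation pass: move each site to the density-weighted centroid
    of its region, computed here by brute-force nearest-site assignment
    over the pixel grid (O(n * pixels)).

    density: 2-D array where dark image areas have high values.
    sites:   (n, 2) array of (x, y) site positions.
    """
    h, w = density.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    # nearest site index for every pixel
    d2 = ((pix[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    wgt = density.ravel()
    new_sites = sites.astype(float).copy()
    for k in range(len(sites)):
        m = owner == k
        total = wgt[m].sum()
        if total > 0:  # a site in a fully white region is left in place
            new_sites[k] = (pix[m] * wgt[m, None]).sum(axis=0) / total
    return new_sites
```

The actual implementation relies on proper Voronoi diagram computation, which is far more efficient than this pixel-wise assignment.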
\medskip
To completely suppress this annoying artefact, we need to repeat the operation of suppression and the relocation of the sites several times until the number of deleted sites falls under a threshold.
In practice, we noticed that repeating this step only one more time is already very efficient.
The result is shown in~Figure~\ref{imgDev}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.48\textwidth]{deviationpoint.eps}
\caption{(Left) Some points drift from the original shape because of their identification with the centroid of their Voronoi region.
(Right) our method removed this undesirable effect. (2000 points)}
\label{imgDev}
\end{figure}
\subsection{Color and contrast}
\label{adjust}
It would seem that to faithfully represent a grey-scale image with 256 levels, our method would require 256 support points per pixel of the original image. This is fortunately not the case. When rendering a stippled image we do lose some information through the lack of support points, but the stippling method works on ``areas'' rather than on ``pixels''. The most important information is the tone of each area of the image: the exact support of a stipple image is closer to a segmented, borderless image with gradient-filled areas than to a bunch of pixels. To compensate for this loss of ``local'' information, the rendering can easily be improved with a few additional options. That is why we first consider the colour of the sites and the variation of their size.
\medskip
We can further attach some properties to each site. It is easy to pick up the color of the pixel of the reference image located at the position of the site. With this information we can display the site in its original color and adapt its size in proportion to its grey level: a white site will be very small and a black site will have the biggest size. This simple operation considerably improves the perceived rendering of the output stippled ``video'' and lets the user distinguish more details in videos with low contrast (see Figure~\ref{greytocolor}).
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{greytocolor.eps}
\caption{(Top) Stippling result without contrast and color. (Middle) stippling with contrast. (Bottom) stippling with color to enhance the rendering. (3000 points)}
\label{greytocolor}
\end{figure}
\subsection{Frequency consideration}
\label{frequence}
In order to detect edges and enhance their support and rendering, we considered a novel rendering approach.
This operation is carried out in parallel with the previous video processing.
We first compute the discrete gradient at every pixel of the image (using the Sobel operator) and filter the result with a threshold to set low frequencies to zero, as depicted in Figure~\ref{lenaSobel}. Then we apply the same algorithm with a number of sites that can vary and has to be adapted to the number of pixels with high frequencies. For instance, for a colour image without much contrast like {\tt Lena}, we have to add twice as many points as the original number to obtain a nice rendering, but this has to be tuned empirically by the user.
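This gradient-and-threshold step can be sketched as follows (a minimal NumPy illustration with wrap-around borders; the function name, the absence of kernel normalization and the threshold value are our assumptions):

```python
import numpy as np

def frequency_map(img, threshold):
    """Sobel gradient magnitude with low values zeroed.

    The resulting map is used as a second density to place edge-following
    sites; values below `threshold` (low frequencies) are set to zero.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # correlation with the two 3x3 Sobel kernels (wrap-around at borders)
    for dy in range(3):
        for dx in range(3):
            shifted = np.roll(np.roll(img, 1 - dy, axis=0), 1 - dx, axis=1)
            gx += kx[dy, dx] * shifted
            gy += ky[dy, dx] * shifted
    mag = np.hypot(gx, gy)
    mag[mag < threshold] = 0.0   # drop low frequencies / minor details
    return mag
```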
\begin{figure}
\centering
\includegraphics[width = 0.38\textwidth]{lenasobel.eps}
\caption{Discrete gradient Sobel operator on the image with a threshold that allows one to remove minor details.}
\label{lenaSobel}
\end{figure}
\medskip
We obtain a distribution of sites located on the edges of the image.
These edge points let us improve the overall rendering of the shapes in the image.
To get a good rendering, we merely add those sites to the sites previously calculated.
To get a better rendering we reduce the size of these points by 33\% in comparison to the other points.
The frequency-based stippling result is presented for {\tt Lena} in Figure~\ref{lena}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Workflow.eps}
\caption{We extract two density maps from the original source image: one is the color map and the other the frequency map. Then we apply our stippling algorithm and finally add both contrast and color information. We end the process by summing these two contributions --- the classical (3000 points) and frequency approaches (6000 points).}
\label{lena}
\end{figure*}
\subsection{Pattern placement}
\label{pattern}
Detecting edges is easy, but rendering them clearly by stippling is hardly possible with a small number of Sites. In order to keep a good rendering without generating many Sites, we considered replacing the points used to represent the Sites obtained by the frequency approach with patterns. The best solution is to use small segments that follow the edge: with a small quantity of segments we are able to recreate the edges.
\medskip
After placing the patterns, we had to orient them perpendicularly to the local gradient of the image. To do so we store, for each image, the local Sobel gradient along the $x$ axis and the local Sobel gradient along the $y$ axis. We then estimate the local angle of the gradient with the following formula:
\begin{equation*}
\theta(x,y) = \arctan\left(\frac{\Delta x}{\Delta y}\right)
\end{equation*}
Once this operation is done, we can associate each frequency Site with its orientation. On the {\tt Lena} image we obtain the result shown in Figure~\ref{patternlena}.
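In practice the angle can be computed with a quadrant-safe two-argument arctangent, which also avoids a division by zero when $\Delta y = 0$ (a one-line sketch; the function name is ours):

```python
import numpy as np

def pattern_angle(gx, gy):
    """The formula theta = arctan(dx / dy), written with arctan2 so the
    quadrant is preserved and gy = 0 does not cause a division by zero.

    gx, gy: local Sobel gradients along the x and y axes (scalars or arrays).
    """
    return np.arctan2(gx, gy)
```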
\begin{figure}
\centering
\includegraphics[width = 0.38\textwidth]{lenapattern.eps}
\caption{Use of patterns to describe sharp edges in the image --- 2000 segments.}
\label{patternlena}
\end{figure}
\medskip
Contrary to the previous frequency approach, we need fewer points to obtain a pleasant rendering. We thus speed up our algorithm and save some space on the hard drive. To get a good rendering, we need to take the scale of these shapes into account in order to avoid collisions between them.
This is possible by calculating an individual scaling factor for each of them.
\section{Experiments and results}
\subsection{Blue noise characteristics}
Figure~\ref{fft} shows a representative distribution of 1024 random points generated with Lloyd's method, extracted from the paper of Balzer et al.~\cite{DBLP:journals/tog/BalzerSD09}. The distribution clearly exhibits regularities, underlined by the FFT analysis on the right of the figure. Our method generates a random set of points (here 3700 points) with fewer regularities. The result is substantially better and is preserved during a whole video. Two sets were generated and are shown in Figure~\ref{fft}. The first was made on a single image, with a direct point placement by our algorithm, and presents good characteristics. The second was built inside a video sequence by successive point additions frame by frame until the final number of 3700 points. We notice that this distribution has the same characteristics as the previous one.
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{FFT.eps}
\caption{Lloyd's method generates point distributions with regular structures if it is not stopped manually. The example set of 1024 points was computed with Lloyd's method to full convergence and contains regularities. The two other examples have been generated by our algorithm, the first directly and the second after the computation of a whole video, as the last frame of this video. They exhibit fewer regularities.}
\label{fft}
\end{figure}
\medskip
These blue noise characteristics are interesting because generating a good random point set is a challenge, but we cannot yet explain why our algorithm produces them.
\subsection{Algorithm's complexity}
Our method and software allow one to convert a video into a stippling video, namely a time-varying point set with dynamic color and size attributes.
The complexity of our method only depends on the number of support/site points used to stipple the video.
This number of points depends in turn on the following user input entries:
\begin{itemize}
\item The number of sites;
\item The factor for calculating the number of support points --- denoted by $\alpha$ previously.
\end{itemize}
The complexity of the algorithm is quadratic, $O(n^2)$ where $n$ is the number of sites.
We have carried out several time measurements to confirm this overall complexity, and to identify some variations depending on other parameters. The timing measures were done on a video containing 10 frames.
The video represents a black disk whose size decreases progressively by $10\%$ each frame.
\medskip
The timing statistics of that experiment are listed in Table~\ref{table:timing}.
Each row stands for a frame (there are 10 frames), and each column denotes a number of points (from 1000 to 10000).
All timings are expressed in seconds.
The graph in Figure~\ref{graph1} plots the amount of time needed to stipple a video depending on the required number of sites.
\medskip
\begin{table}
\centering
{\footnotesize
\begin{tabular}{| >{\columncolor{grisclair}} c|c|c|c|c|c|}
\hline \rowcolor{grisclair} & 1000 pts & 2000 pts & 3000 pts & 4000 pts & 5000 pts\\
\hline 0 & 9.5 & 35.0 & 76.5 & 134.7 & 208.6\\
\hline 1 & 4.8 & 14.1 & 32.3 & 49.6 & 75.5\\
\hline 2 & 3.1 & 8.9 & 18.1 & 30.7 & 47.5\\
\hline 3 & 2.4 & 6.1 & 12.3 & 21.4 & 32.2\\
\hline 4 & 1.7 & 4.4 & 8.5 & 14.2 & 21.3\\
\hline 5 & 1.3 & 3.2 & 6.0 & 9.8 & 14.6\\
\hline 6 & 1.1 & 2.4 & 4.3 & 6.8 & 9.9\\
\hline 7 & 0.9 & 1.8 & 3.1 & 4.8 & 7.0\\
\hline 8 & 0.8 & 1.4 & 2.3 & 3.5 & 5.0\\
\hline 9 & 0.7 & 1.4 & 1.8 & 2.6 & 3.5 \\
\hline \rowcolor{grisclair} & 6000 pts & 7000 pts & 8000 pts & 9000 pts & 10000 pts \\
\hline 0 & 299.6 & 409.6 & 529.4 & 670.0 & 832.4\\
\hline 1 & 112.9 & 144.6 & 187.6 & 259.1 & 302.2\\
\hline 2 & 5.8 & 89.4 & 115.6 & 145.7 & 180.6\\
\hline 3 & 45.8 & 59.5 & 76.7 & 97.0 & 118.5\\
\hline 4 & 30.1 & 40.7 & 52.1 & 65.2 & 80.6\\
\hline 5 & 20.2 & 27.1 & 37.6 & 43.5 & 52.8\\
\hline 6 & 13.7 & 18.5 & 23.7 & 29.3 & 35.9\\
\hline 7 & 9.5 & 12.7 & 16.1 & 19.9 & 23.9\\
\hline 8 & 6.7 & 8.7 & 11.1 & 13.6 & 16.3\\
\hline 9 & 4.7 & 6.1 & 7.7 & 9.3 & 11.1 \\
\hline
\end{tabular}
\normalsize
}
\caption{Timing experiments for a toy sequence of $10$ frames --- in seconds.\label{table:timing}}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{graph1.eps}
\caption{Graph representing the time required to calculate a 10-frame video with varying number of sites.}
\label{graph1}
\end{figure}
\medskip
During the first set of measures, we used a video with a lot of point suppression.
We noticed that if we compute the same video backward (so that the disk grows each frame) with the same number of sites, the time needed is different!
In fact, we observed empirically that the addition of points requires more time than the operation of suppression.
For $3000$ sites required on the frame with the biggest density, we needed $165$ seconds to stipple the video and $602$ seconds to compute the same video backward (the one with a lot of addition of points).
Thus we emphasize that the stippling operation is not symmetric, although it visually leads to the same final result.
\medskip
To estimate the time required to compute the video we need to take into account the number of addition of points during the shot.
This operation increases the overall computation time.
Of course, one can accelerate our algorithm by porting it on graphics processor units (GPUs).
The calculation of Voronoi diagrams is known to adapt well on GPU, and is far quicker and more efficient~\cite{DBLP:conf/isvc/VasconcelosSCG08} than CPU-based algorithms.
\subsection{Size of the output file}
We measured, for 3 different video shots of 91 frames, the number of Sites as a function of the frame density. The result is summed up in Figure~\ref{pointstab}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{pointstab.eps}
\caption{Evolution of the number of Sites per frame as a function of the density.}
\label{pointstab}
\end{figure}
\medskip
We noticed that the algorithm maintains a perfect correlation between those two parameters, following the initial ratio asked by the user. This observation implies that we can estimate the average number of Points needed to compute each frame. However, in order to estimate the total number of Sites needed --- how many Sites are created during the computation --- we first have to know all the difference images and their densities. Then we can estimate the total number of Sites with the formula:
\begin{equation*}
\frac{N_{\textrm{initial}}}{d_{\textrm{initial}}} + \sum_{i=0}^{\textrm{Number of frame}} \left[d_{\textrm{diff}}(i)*\frac{N_{\textrm{initial}}}{d_{\textrm{initial}}}\right]
\end{equation*}
where $N$ represents the number of Sites and $d$ the density.
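The estimate is straightforward to evaluate (a direct transcription of the formula above; the function and argument names are ours):

```python
def estimate_total_sites(n_initial, d_initial, diff_densities):
    """Total number of Sites created over a shot:
    N_initial/d_initial + sum_i [ d_diff(i) * N_initial/d_initial ].

    diff_densities: densities d_diff(i) of the successive difference images.
    """
    ratio = n_initial / d_initial  # Sites created per unit of density
    return ratio + sum(d * ratio for d in diff_densities)
```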
\medskip
With this method we can quickly estimate the final size of the output video file. After a ZIP compression we observed that we need about 200 bytes per frame per 1000 Sites. Thus a stippled video at 25 frames per second that lasts 1 hour is encoded in about 20~MB.
\medskip
A possible way to improve the compression of the output file would be to store, for each frame, only the characteristics of the added points and of the suppressed points. For example, for a video with 2000 Sites per frame and 3000 different Sites used over the whole video, we would need to store the characteristics of 3000 Sites, whereas our current method needs $2000*N_{\textrm{Frame}}$ Sites. This improvement implies considering the Sites generated by our method as totally stable. In fact, they can move slightly during the shot to adapt to the variation of local intensity; but if we compute each frame with numerous iterations to reach a great stability, this movement is really small. Suppressing all point movements would thus only remove what can be seen as a flickering effect. Indeed, in our current video output the little jittering effect results from this movement and from a too large threshold used to judge the stability of the Sites during the computation. This choice has been made to speed up the computation of the video.
\subsection{Primary colours}
In order to produce a rendering closer to some paintings, we implemented an algorithm that produces stippling with only primary-coloured Sites --- red, green and blue Sites. This extension is quite simple for a single image: we just convert each colour to red, green or blue with a probability equal to its proportion of red, green or blue. An example on {\tt Lena} is shown in Figure~\ref{lenaprimary}. But it raises some problems for video:
\begin{itemize}
\item We have to maintain a high density of Sites. The eye averages the different colours and reconstructs the appropriate colour for each part of the image; that is why it is important to have a very high density of small points;
\item Having to maintain a high density of Sites increases the calculation time --- proportional to $n^2$ where $n$ is the number of Sites;
\item It increases the memory space needed to store the output video;
\item It increases the time needed to read each frame of the video and slows down the frame rate in the viewer.
\end{itemize}
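The per-site conversion rule is simple to state in code (a minimal sketch; the function name and the 0--255 colour range are our assumptions):

```python
import random

def to_primary(r, g, b, rng=random):
    """Replace an (r, g, b) site colour by a pure primary, chosen with
    probability proportional to the corresponding component. A grey pixel
    is thus equally likely to become red, green or blue, and the eye
    averages nearby sites back to the original tone."""
    total = r + g + b
    if total == 0:
        return (0, 0, 0)  # black stays black
    u = rng.random() * total
    if u < r:
        return (255, 0, 0)
    if u < r + g:
        return (0, 255, 0)
    return (0, 0, 255)
```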
\begin{figure}
\centering
\includegraphics[width=0.38\textwidth]{LenaPrimary.eps}
\caption{Stippling on {\tt Lena} with primary colours. (15000 points)}
\label{lenaprimary}
\end{figure}
\medskip
Even if this approach generates drawings with only three colours, we lose some of the advantages of the stippling process, such as few colour transitions, little storage space and little memory needed to read the video. To obtain a nice rendering we finally need a very high resolution of the stippled image, with a number of Sites of the same order as, or only a few times smaller than, the number of pixels of the original. For example, the image in Figure~\ref{lenaprimary} is drawn with 15000 Sites whereas the original image was composed of $256*256=65536$ pixels, which is only about 4 times more.
\section{Discussion and concluding remarks}
Stippling is an engaging art relating to non-photo-realistic depiction of images and videos by using point sets rendered with various color and size attributes.
Besides the pure artistic and aesthetic interests of producing such renderings, the stippling process finds also many other advantages in its own:
\begin{itemize}
\item
The output video is fully vectorized, both on the spatial and temporal axes.
Users can interactively rescale video to fit the device screen resolution and upscale the frame rate as wished for fluid animations.
Stippling could thus be useful for web designers who have storage capacity constraints and yet would like to provide video contents that yield the same appearance on various types of screen resolution devices (be it PDA, laptop or TV).
Furthermore, once the stippling process has been carried out, users can still personalize the rendering by tuning on-the-fly the size and other attributes of the points, without losing much of the original semantics of the media.
\item
Another characteristic of the stippling process is that it produces a video bearing only a few pixel color transitions compared to the original medium.
Indeed, a usual video has $\textrm{Width} \times \textrm{Height} \times N_{\textrm{Frame}}$ color transitions, while
our output stippled video only has roughly $N_{\textrm{Point}}\times N_{\textrm{Frame}}$ transitions.
This is advantageous in terms of energy savings for e-book readers for instance.
Stippling may potentially significantly increase battery life of such devices based on e-inks that consume energy only when flipping colors.
\item
To improve the stippling process and correct the intensity loss of the stippled images (due to the averaging by the human eye), we can consider the area of each Voronoi cell and use this measure to derive a multiplicative normalization factor for the site of the corresponding cell. With this normalization factor, we are able to nicely correct the loss of intensity of the stippled images:
the smaller a Voronoi cell, the bigger the normalization factor, and the darker the color setting.
\end{itemize}
Figure~\ref{screenshot} presents some extracted images from two stippled videos.
The accompanying video (in mp4 format, to be played with QuickTime) illustrates the various steps of our algorithm and describes results on various videos.
(An open source web applet allows one to play various stippled videos.)
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{screenshots.eps}
\caption{These snapshots were extracted from two different output videos. See accompanying video. \footnotesize{This second particular sequence does not take into account the frequency information of images.}}
\label{screenshot}
\end{figure*}
\bibliographystyle{acmsiggraph}
\nocite{*}
During solar flares, the Sun releases magnetic energy stored in non-potential structures into plasma heating, bulk motions, and particle acceleration. Energetic particles play a key role in the energy release of solar flares since a significant fraction of the released energy is thought to be going into such particles \citep[e.g.][]{Emslie:2012aa}. Energetic particles are also a major driver of solar-terrestrial physics and space weather applications. In particular, radio emission from electron beams produced in association with solar flares provides crucial information on the relationship and connections between energetic electrons in the corona and electrons measured in situ. The most direct and quantitative signature of energetic electrons interacting at the Sun is provided by hard X-ray (HXR) emissions that allow an estimation of electron energy spectra and number density. Although studied for many years, the connections between HXRs and radio type IIIs are not fully understood. We statistically investigated the connection between events that show type III bursts in the corona and X-ray flares to further understand their relationship. We also examined the occurrence of the interplanetary counterparts of the `coronal' type III bursts to explore what electron beam properties and coronal conditions are favourable for continued radio emission in the heliosphere.
Type III radio emissions can be observed over decades of frequency ($1$~GHz to $10$~kHz). They are characterized by fast frequency drifts (around 100 MHz/s in the metric range) and are believed to be produced by high-energy ($0.05$c-$0.3$c) electron beams streaming through the corona and potentially through the interplanetary space \citep[see e.g.][for reviews]{Suzuki:1985aa,Reid:2014ab}. Type III bursts are one of the most frequent forms of solar system radio emission. They are used to diagnose electron acceleration during flares and to get information on the magnetic field configuration along which the electron beams propagate. The bump-in-tail instability produced during the propagation of energetic electrons induces high levels of Langmuir waves in the background plasma \citep{Ginzburg:1958aa}. Non-linear wave-wave interaction then converts some of the energy contained in the Langmuir waves into electromagnetic emission near the local plasma frequency or at its harmonic \citep[e.g.][]{Melrose:1980aa,Li:2014aa,Ratcliffe:2014aa}, producing radio emission that drifts from high to low frequencies as the electrons propagate through the corona and into interplanetary space. In recent years, several numerical simulations have been performed to simulate the radio coronal type III emissions from energetic electrons and investigate the effects of beam and coronal parameters on the emission \citep[e.g.][]{Li:2008ab,Li:2009aa,Li:2011ab,Tsiklauri:2011aa,Li:2014aa,Ratcliffe:2014aa}
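Since the emission occurs near the local plasma frequency, the observed frequency directly tracks the ambient electron density along the beam path through the standard relation $f_p \simeq 8.98\times10^{-3}\sqrt{n_e}$~MHz with $n_e$ in cm$^{-3}$ (a small reference sketch, not taken from the data of this paper):

```python
import math

def plasma_frequency_mhz(n_e_cm3):
    """Electron plasma frequency in MHz for a density n_e in cm^-3.

    Type III emission is produced near f_p (fundamental) or 2 f_p
    (harmonic), so the drift from high to low frequency reflects the
    beam moving outward into lower-density plasma.
    """
    return 8.98e-3 * math.sqrt(n_e_cm3)
```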
Since the discovery of type III bursts and the advent of continuous and regular HXR observations, many studies have analysed the relationship between type III bursts and hard X-ray emissions. The first studies of the temporal correlations between metric (coronal) type III bursts and HXRs above 10 keV were achieved by \citet{Kane:1972aa,Kane:1981aa}. It was found that while about 20\% of the impulsive HXR bursts were correlated with type III radio bursts, only 3\% of the reported type III bursts were associated with HXR emissions above 10 keV. This showed that groups of metric type III bursts were more frequently detected than HXR emissions above 10 keV. For 70\% of the correlated bursts, it was also shown that the times of X-ray and radio maxima agree within $\pm9$~s. Moreover, the association rate was found to increase when the type III bursts are more intense or when the type III had a larger starting frequency. On the other hand, it was also found that HXR emissions are more often associated with type III bursts when their flux above 20 keV is larger and their spectra are harder. In a further study based on a larger number of events, \citet{Hamilton:1990aa} confirmed that the association of hard X-ray and type III bursts slightly increases for harder X-ray spectra. They also found that the intensity distribution of hard X-ray bursts associated with type III bursts is significantly different from the distribution of all hard X-ray bursts showing that both kinds of emissions are statistically dependent. They confirmed on a larger selection of events than \citet{Kane:1981aa} that higher flux radio bursts are more likely to be associated with hard X-ray bursts and vice versa. They also examined whether there is a correlation between the peak count rates of the hard X-ray burst and of the peak flux density at 237 MHz for the associated type III burst. 
They found no apparent correlation between these two quantities, and a large dispersion in the ratio of peak X-ray to radio intensities.
Other studies have confirmed that type III generating electrons can be part of the same population as the HXR generating electrons. Indeed, it was shown in different papers that there is a correlation between the characteristics of the HXR emitting electrons (non-thermal spectral index or electron temperature) and the starting frequencies of type III bursts \citep[see][for recent studies]{Benz:1983aa,Raoult:1985aa,Reid:2011aa,Reid:2014aa}. Correlations on sub-second timescales found between HXR pulses and type III radio bursts also strongly support attributing the causal relationship between HXR and radio type III emissions to a common acceleration mechanism \citep[see][]{Kane:1982aa,Dennis:1984aa,Aschwanden:1990aa,Aschwanden:1995aa}.
More statistical studies were performed recently using RHESSI X-ray observations and radio observations from Phoenix-2 in the $100$~MHz to $4$~GHz range. The association between X-ray emissions for flares larger than GOES C5.0 and radio emission (all types of events) was investigated for 201 events \citep{Benz:2005aa,Benz:2007aa}. At the peak phase of hard X-rays, different types of decimetric emissions (type IIIs but also pulsations, continua and narrowband spikes) were found in a large proportion of the events. For only 17\% of the HXR flares, no coherent emission in the decimetric/metric band was indeed found and all these flares had either radio emission at lower frequencies or were limb events. Classic meter wave type III bursts were found in 33\% of all X-ray flares, but they were the only radio emission in only 4\% of the events. The strong association but loose correlation between HXR and radio coherent emission could be explained in the context of multiple reconnection sites that are connected by common magnetic field lines. While reconnection sites at high altitudes could produce the energetic electrons responsible for type III bursts, the reconnection sites at lower altitude could be linked to the production of HXR emitting electrons and some high frequency radio emissions.
The correlation between hard X-ray and type III emissions can also be examined by combining spatially resolved observations. In some cases, such observations support that type III generating electrons are produced by the same acceleration mechanism as HXR emitting electrons. A close link has been found between the evolution of X-ray and radio sources on a timescale of a few seconds \citep[e.g.][]{Vilmer:2002aa}. However, in other cases the link between HXR emissions and radio decimetric/metric emissions is more complicated. The radio emissions can originate from several locations, one very close to the X-ray emitting site and the other further away from the active region and more linked to the radio burst at lower frequencies \citep[e.g.][]{Vilmer:2003aa}. Such observations are more consistent with the cartoon discussed in \citet{Benz:2005aa}, in which the electrons responsible for type III emissions at low frequencies are produced in a secondary reconnection site at higher altitudes than the main site responsible for X-rays. In a recent study, \citet{Reid:2014aa} investigated the proportion of events that are consistent with the simple scenario in which the type III generating electrons originate through the same acceleration mechanism as the HXR producing electrons, compared to a more complicated scenario of multiple acceleration regions. In order to accomplish this, they compared the evolution of the type III starting frequency and the HXR spectral index, signatures of the acceleration region characteristics. They found that, on a sample of 30 events, 50\% of the events showed a significant anti-correlation. This was interpreted as evidence that, for these events, there is a strong link between type III emitting electrons and hard X-ray emitting electrons. Such a close relationship was furthermore used to deduce the spatial characteristics of the electron acceleration sites.
Radio emissions from energetic electron beams propagating in the interplanetary medium have been observed at frequencies below 10 MHz with satellite based experiments since the 1970s \citep[e.g.][]{Fainberg:1972aa,Fainberg:1974aa}. The most detailed study between coronal (metric) and interplanetary bursts (IP) was performed by \citet{Poquerusse:1996aa} who studied the association between type III groups observed by the ARTEMIS spectrograph on the ground (100-500 MHz) and the URAP radio receiver on the Ulysses spacecraft (1-940 kHz). They found that when there is an association, one single type III burst at low frequencies usually comes from a group of 10 to 100 type III bursts at higher frequencies. Based on 200 events, they found that 50\% of the events produced both strong coronal and interplanetary type III emissions. They found however that not every coronal type III burst (even if strong) produces an IP type III burst.
In this paper, we address again, using ten years of data from 2002 to 2011, the long-lasting questions concerning the link between type III emitting electrons in the corona and HXR emitting electrons, and the probability that coronal type III bursts have interplanetary counterparts. Section \ref{sec:eventlist} presents the observations and the methodology used in the study. Section \ref{sec:T3XR} presents results on the link between coronal type III bursts and hard X-ray emissions and Section \ref{sec:T3I3T} discusses the link between coronal and interplanetary type III bursts. Section \ref{sec:discussion} discusses the results in the context of previous but also future observations. Comparisons with predictions of numerical simulations of type III emissions are also tentatively presented.
\section{Observations and methodology} \label{sec:eventlist}
The present study is based on ten years of observations (2002-2011), from the start of the X-ray observations from RHESSI \citep{Lin:2002aa} to the end of the list of radio bursts reported on observations from the Phoenix 2 spectrometer \citep{Messmer:1999aa} and from the Phoenix 3 radiospectrometer \citep{Benz:2009aa}. Phoenix 2 operated until 2007 and provided observations between 4 GHz and 100 MHz. After 2007, it was replaced by Phoenix 3, which operated between 900 and 200 MHz. The type III events used in our study were extracted from published lists of type III bursts\footnote{\url{http://soleil.i4ds.ch/solarradio/data/BurstLists/1998-2010_Benz/}}\footnote{\url{http://soleil.i4ds.ch/solarradio/data/BurstLists/2010-yyyy_Monstein/}}. In both catalogues, only events with `type III flags' were strictly selected. We thus excluded from our list solar events for which other kinds of radio activity were reported. It should be noted that the period indicated in the catalogue usually contains more than one isolated type III burst and consists typically of several groups of type III bursts that could be separated by quiet periods on timescales of several minutes \citep[][see also the discussion in the next section]{Poquerusse:1996aa}. In the following, we call this list of type III bursts observed above 100 or 200 MHz (depending on the period) coronal type III bursts.
Radio fluxes of the type III bursts at different frequencies between 450 and 150 MHz were also computed from the Nan\c{c}ay Radioheliograph (NRH) \citep{Kerdraon:1997aa} observations. The flux was calculated for all events using routines that automatically generated cleaned images of the Sun from the 10 second time integration data. At each frequency, the location of the maximum radio brightness was searched for and the radio flux was computed on a square window of size 440x440 arcsecs (roughly 7x7 arcmins) around this location. This area is larger than the typical area of a type III burst \citep{Saint-Hilaire:2013aa} at 450~MHz to 150~MHz. It was chosen because the type III source can move in time and can also include double sources due to the 10 second time integration. In doing so, the automatically generated flux includes the entire type III sources but does not include significant flux from the quiet sun or other radio sources (e.g. radio noise storms from another active region).
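The window extraction step can be sketched as follows (a simplified single-image illustration of the procedure described above; the function name, the pixel-scale argument, and the handling of image borders are our assumptions, and the real pipeline works on 10 second integrated, cleaned NRH images):

```python
import numpy as np

def window_flux(image, arcsec_per_pixel, window_arcsec=440.0):
    """Locate the brightest pixel of a cleaned radio image and sum the
    flux in a square window (440 x 440 arcsec by default) centred on it.

    The window is clipped at the image borders; it is chosen larger than
    a typical type III source so that moving or double sources within the
    integration time are fully included.
    """
    half = int(round(window_arcsec / arcsec_per_pixel / 2))
    y, x = np.unravel_index(np.argmax(image), image.shape)
    y0, y1 = max(0, y - half), min(image.shape[0], y + half + 1)
    x0, x1 = max(0, x - half), min(image.shape[1], x + half + 1)
    return image[y0:y1, x0:x1].sum()
```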
Radio observations at lower frequencies were also used to search for the counterparts of the coronal type III bursts. Data from the Nan\c{c}ay Decametre Array (NDA) \citep{Lecacheux:2000aa} provided spectra between 80 MHz and 15 MHz. The RAD2 instrument in the WAVES experiment on board the WIND spacecraft \citep{Bougeret:1995aa} provided the observations between 14 MHz and 1 MHz. In the following, we will call the type III bursts observed by WIND below 14 MHz interplanetary type III bursts (IP bursts).
The list of X-ray flares was obtained from the catalogue of RHESSI X-ray flares. The list is automatically generated and corresponds to an increase of the X-ray count rate in the 6-12~keV range\footnote{\url{http://hesperia.gsfc.nasa.gov/hessidata/dbase/hessi_flare_list.txt}}. Count rates in the 6-12, 12-25, 25-50 and 50-100~keV energy channels were considered in the following analysis. As RHESSI attenuators automatically reduce the X-ray count rate at low energies when it exceeds given thresholds, the more quantitative analysis of the paper is restricted to the use of the 25-50 keV X-ray count rate which is very weakly attenuated even in the strongest attenuator state (A3).
An automatic search for coronal type III bursts and X-ray flares was performed using the Phoenix 2 and Phoenix 3 catalogues for type III flags and the RHESSI X-ray flare list, respectively. The search was restricted to the time window 08:00 UT to 16:00 UT (in order to have NRH observations) and excluded (for the radio observations) periods during which the RHESSI satellite was either in the Earth's shadow (night time) or in the South Atlantic Anomaly (SAA). The automatic search provided a list of 1128 coronal type III bursts and 18,206 X-ray flares above 6 keV, i.e. one order of magnitude more X-ray flares than coronal type III bursts. The low number of type III bursts with respect to X-ray flares can be explained by two factors: the X-ray energy range used to build the X-ray catalogue, since emission in the 6-12 keV range is not necessarily produced by non-thermal electrons and may be of thermal origin, and the strict selection of type III flags, which excluded type III events associated with other types of radio emission.
\section{Coronal type III bursts and X-ray flares} \label{sec:T3XR}
\subsection{Do all coronal type III bursts have X-ray counterparts?} \label{sec:T3XR_1}
The automatic comparison of the time intervals of the 1128 selected coronal type III bursts and of the 18,206 X-ray flares showed that for only 581 events, X-ray emission above 6 keV was reported in the RHESSI flare list during the time interval of radio interest. \emph{The automatic detection of combined X-ray and radio emission thus yields a 52\% association rate between groups of coronal type III bursts above 100 MHz and X-ray emission above 6 keV}.
Automatic association, while convenient, is fallible. The true association between coronal type III bursts and X-ray flares was thus checked for all the events by combining the different observations. Figure \ref{fig:example} shows examples of two events for which the association between the reported coronal type III bursts and the RHESSI X-ray flare is excellent. The aim of the present study is not to search for detailed correlations between the two types of emission, but simply to establish a global association in time. Figure \ref{fig:spec_noass} shows an example of an event that was finally rejected: the comparison of the relative time profiles shows that the radio burst occurs at the end of the X-ray flare, at a time when essentially no X-ray emission is detected.
Plotting similar figures for the other events and checking the true association between coronal type III bursts and X-ray emission significantly reduced the number of associated events. There were three reasons for this: several type III events were sometimes reported in the catalogue during a single X-ray flare, bad data occurred in X-rays and/or radio, and, as in Figure \ref{fig:spec_noass}, some X-ray flares and type III bursts were not truly associated.
A close examination showed that around 100 events had duplicated type III events reported during one X-ray flare; these events were finally merged. A few events had bad X-ray or radio data. The most prominent cause of false association was when, upon visual inspection, the X-ray and radio events were not truly simultaneous, as shown in Figure \ref{fig:spec_noass}.
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth,trim=20 20 130 120,clip]{20020220_clear}
\includegraphics[width=0.45\textwidth,trim=20 20 110 100,clip]{200508021134_clear}
\caption{Example of two events with associated type III radio and X-ray emissions. The spectrograms from top to bottom are from WIND/WAVES RAD2 (14 MHz to 1 MHz), the Nan\c{c}ay Decametre Array (80 MHz to 15 MHz), and Phoenix 2 (4 GHz to 100 MHz). The light curves are from the Nan\c{c}ay Radioheliograph (432 MHz to 164 MHz) and RHESSI (6-12 to 50-100 keV). The blue line represents the duration of the coronal type III bursts as reported in the catalogues. The red line represents the X-ray flare duration at 6-12 keV as indicated in the catalogue. The light curves are plotted on a linear scale.}
\label{fig:example}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth,trim=15 20 110 100,clip]{201112121018_clear}
\caption{Example of an event where the reported coronal type III emission is not associated with the reported X-ray flare. See caption from Figure \ref{fig:example} for more details on the different plots. The only exception is that Phoenix 3 was used above 100 MHz instead of Phoenix 2.}
\label{fig:spec_noass}
\end{figure}
As a result of the complete check, 321 events were finally found to be associated. \emph{These 321 events give us a lower bound of 28\% on the association rate between coronal type III bursts and X-ray flares.} This is a lower bound because the sample did not include events without reported X-ray flares which could correspond to duplicated reported type III events. For more than half of the 321 associated events, the reported start and end times of the coronal type III bursts lie within the reported duration of the X-ray flare at energies above 6 keV. The generally shorter duration of coronal type III bursts can be understood from the fact that type III bursts should be produced by non-thermal electrons, whereas X-ray emission above 6 keV is generally dominated by thermal emission that usually lasts longer than the non-thermal emission (see e.g. the light curves in Figures \ref{fig:example} and \ref{fig:spec_noass}). Finally, the large majority of the type III-X-ray associated events in our sample are C and B class flares (48\% C and 41\% B). The rest of the events are associated with M class flares (10\%), four X class flares, and one A class flare. The very small proportion of large GOES class flares can be understood as an effect of the data selection, since we selected radio events with the type III `flag'. Indeed, as recalled in the introduction, GOES flares with classes higher than C5 are almost systematically associated with coherent radio emissions above 100 MHz, but in only 4\% of these events are classical type III bursts the only radio emission \citep{Benz:2005aa,Benz:2007aa}.
\subsection {What kind of correlation exists between radio and X-ray intensities?}
In this section, the correlation between radio and X-ray intensities is investigated using radio flux measurements from Nan\c{c}ay and RHESSI X-ray count rates in the 25-50 keV range. In the period 2002 to 2011, the NRH observing frequencies underwent some changes as French national protected frequencies changed. As a consequence, our work is based on data obtained continuously between 2002 and 2011 in the four frequency ranges defined as 164+173 MHz, 237+228 MHz, 327 MHz, and 408+410 MHz. The different frequencies within each range are close enough (within 9 MHz) to consider that the physical properties governing the emissions are similar.
Among the 321 events selected in the previous subsection, there were 103 events for which no NRH information was available. For the remaining 218 events, we searched for correlations between the 25-50 keV peak X-ray count rate and the peak radio flux in the different frequency ranges. The peak X-ray count rate and the peak radio fluxes for each event were estimated on a time interval bounded by the earliest start time and the latest end time of the radio and X-ray emissions. In both the X-ray and radio domains, background-subtracted peak radio fluxes and X-ray count rates were computed. To automatically determine the background, the median value of each time series was computed, as well as the median values of the first and last ten samples; the background was taken as the lowest of these three values. Such a procedure is especially important for RHESSI observations since the count rates at the start or end of the time series can be strongly affected by the satellite coming out of night time or the SAA. The background found with this automatic procedure was also checked by manual inspection. For the few cases in which automatic detection of the peak count rates was problematic (e.g. artefacts in the light curves due to changes of attenuators), the time period used to compute the peak count rate was defined manually. In the following analysis, we only kept events for which the peak values were at least 1.5 times the background level.
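The background determination and the 1.5-times-background cut described above can be sketched as follows (a simplified stand-in for the actual routines):

```python
import numpy as np

def background_level(series, n_edge=10):
    """Lowest of: the median of the whole time series, and the medians
    of its first and last n_edge samples (as described in the text)."""
    series = np.asarray(series, dtype=float)
    return min(np.median(series),
               np.median(series[:n_edge]),
               np.median(series[-n_edge:]))

def peak_above_background(series, threshold=1.5, n_edge=10):
    """Background-subtracted peak, or None when the peak does not reach
    threshold times the background (the selection cut in the text)."""
    bg = background_level(series, n_edge)
    peak = float(np.max(series))
    if peak < threshold * bg:
        return None
    return peak - bg
```

Taking the lowest of the three medians guards against time series whose start or end is elevated, e.g. RHESSI count rates just after spacecraft night or SAA passage.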
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{fc_t40_s15_x25-50_r168}
\includegraphics[width=0.49\textwidth]{fc_t40_s15_x25-50_r233}
\includegraphics[width=0.49\textwidth]{fc_t40_s15_x25-50_r327}
\includegraphics[width=0.49\textwidth]{fc_t40_s15_x25-50_r410}
\caption{Scatter plot of the peak radio flux in the four frequency ranges vs. the 25-50 keV peak X-ray count rate. The red stars indicate the events where the peak X-ray count rate and peak radio flux are within 40 seconds of each other. Log-log correlation coefficients of the red stars are 0.09, 0.34, 0.16, 0.46 from around 410 MHz to around 170~MHz respectively. The dashed line at $y=0.1x$ highlights the general absence of events with high X-ray count rates and low radio flux.}
\label{fig:corr1}
\end{figure*}
Figure \ref{fig:corr1} shows the resulting scatter plot of the peak radio flux in the different frequency ranges versus the 25-50 keV peak X-ray count rate. The red stars correspond to events where the peaks in X-rays and radio are within 40 seconds of each other. For these cases, there is increased confidence that the peaks in X-rays and radio are closely related; as shown in the figure, most of the data points correspond to such a situation. The number of events in each frequency range is indicated on each plot. No correlation (log-log correlation coefficient of 0.09 for the red stars) is found between the peak radio flux at 411 and 408 MHz and the peak of the 25-50 keV X-ray count rate. A larger correlation is however observed when comparing the 25-50 keV peak count rate with peak radio fluxes at increasingly lower frequencies. The scatter is still large, but the correlation coefficient in log space is 0.34, 0.16, and 0.46 when we use the subset of events where the peaks must be within 40 seconds of each other (with around 65 events for each frequency range). The correlation coefficient in log space furthermore increases to 0.45, 0.33, and 0.47 when only events with peaks within 15 seconds of each other are considered, but the number of events then reduces to around 45. The most noticeable feature is the clear trend that large X-ray events are associated with large radio fluxes, especially towards the lowest frequencies, i.e. around 230 and 170 MHz, as highlighted by the dashed lines in Figure \ref{fig:corr1}. However, large radio fluxes are not always associated with large X-ray events. This is discussed further in Section \ref{sec:discussion}.
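The quoted coefficients are Pearson correlations computed on the logarithms of the peak values, restricted to events whose peaks coincide in time. A minimal sketch, with the 40 s criterion as a parameter (the input arrays are placeholders, not the measured data):

```python
import numpy as np

def loglog_correlation(radio_peaks, xray_peaks, t_radio, t_xray, max_dt=40.0):
    """Pearson correlation of log10(peak radio flux) vs log10(peak X-ray
    count rate), keeping only events whose radio and X-ray peak times
    are within max_dt seconds of each other."""
    r = np.asarray(radio_peaks, float)
    x = np.asarray(xray_peaks, float)
    close = np.abs(np.asarray(t_radio, float) - np.asarray(t_xray, float)) <= max_dt
    # Only positive fluxes/count rates can be log-transformed.
    keep = close & (r > 0) & (x > 0)
    return np.corrcoef(np.log10(r[keep]), np.log10(x[keep]))[0, 1]
```

Tightening `max_dt` from 40 s to 15 s reproduces the kind of subset restriction used in the text, at the cost of fewer events.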
\subsection {Peak radio flux as a function of frequency}
\begin{figure*}
\centering
\includegraphics[width=0.51\textwidth]{nrh_flux}
\includegraphics[width=0.49\textwidth]{nrh_flux_hist_new}
\includegraphics[width=0.49\textwidth]{nrh_si_hist}
\caption{Top, a: log-mean peak flux from the 45 (red) type III events that were observed in all four frequency bands from 164 MHz to 410 MHz, where peaks in X-rays or radio are within 40 s of each other. Errors shown are the standard deviations on the log of the peak fluxes. Also shown are the subset (32, green) that had interplanetary bursts and the subset (26, blue) that had 25-50 keV X-ray emission. Bottom left, b: the distribution of the events as a function of peak flux for the 45 type III bursts. Note the different distributions for each frequency and the large spread in peak flux. Bottom right, c: the distribution of the spectral indices found by fitting a straight line to the peak fluxes for each event. All the events (45, red), the subset (32, green) with interplanetary type IIIs, and the subset (26, blue) with 25-50 keV X-rays are shown.}
\label{fig:nrh_peak_flux}
\end{figure*}
Figure \ref{fig:nrh_peak_flux}a represents the evolution with frequency of the log-mean radio flux computed over all the associated radio bursts that have signal in all four frequency bands (45 events). To be considered in the sample, the peak radio flux of the event must be 1.5 times the background in that band. As in the preceding section, only events where the peaks in X-rays and radio are within 40 s of each other are included. Log-means of the radio flux rather than means are used to prevent the strongest but rarest events from dominating the results; the radio flux varies over more than four orders of magnitude. The error bars in Figure \ref{fig:nrh_peak_flux}a represent the standard deviation of the log fluxes and illustrate the large range of observed flux values. Figure \ref{fig:nrh_peak_flux}a also shows the same plot for two subsets: events associated with 25-50 keV X-rays (from the points in Figure \ref{fig:corr1}) and events associated with interplanetary type III bursts (IT3).
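The log-mean used here (and for the distributions in the following sections) can be written compactly; a minimal sketch:

```python
import numpy as np

def log_mean(values):
    """Mean taken in log space, 10**mean(log10(values)); it prevents the
    strongest (rarest) events from dominating the average when the data
    span several orders of magnitude."""
    values = np.asarray(values, dtype=float)
    return 10.0 ** np.log10(values).mean()
```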
The log-mean peak radio flux increases as the frequency decreases. A linear fit using \texttt{mpfitexy} \citep{Markwardt:2009aa} gives a spectral index of -1.78. However, because of the large scatter in peak flux values, the error on the fit is 2.3. Figure \ref{fig:nrh_peak_flux}a shows that the log-mean peak flux around 168 MHz is lower than expected from the straight-line fit to the other three log-mean peak flux values. Indeed, the spectral index of the fit is -2.21 if we do not include this frequency band. The spectral index for the events with significant 25-50 keV X-ray emission is -1.57, while the spectral index for the events associated with interplanetary events is -1.97.
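The spectral-index fits quoted above amount to fitting a straight line to $\log_{10}$(flux) against $\log_{10}$(frequency); the routine used in this work (mpfitexy) additionally propagates errors on both axes. A plain least-squares sketch with illustrative (not measured) flux values:

```python
import numpy as np

# Illustrative log-mean peak fluxes (log10 SFU) at the four NRH
# frequency ranges -- placeholder numbers, not the measured values.
freqs_mhz = np.array([168.0, 233.0, 327.0, 410.0])
log_flux = np.array([1.9, 1.7, 1.45, 1.2])

# Spectral index alpha in S ~ nu**alpha is the slope of
# log10(S) versus log10(nu).
alpha, intercept = np.polyfit(np.log10(freqs_mhz), log_flux, 1)
```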
Figure \ref{fig:nrh_peak_flux}b shows the histograms of the peak radio flux for each frequency band. The form of each distribution is slightly different, with 168 MHz having a negative skewness (skewed towards the high flux values) and the higher frequencies having a positive skewness (skewed towards the lower flux values). Moreover, 168 MHz has a higher standard deviation than the higher frequencies. Both characteristics of the distributions could be the cause of the slightly lower value of the log-mean peak flux at 168 MHz.
Figure \ref{fig:nrh_peak_flux}c shows the histogram of the spectral indices derived from the evolution of the peak radio flux for all the individual events. The distribution does not look strongly different for events associated with 25-50 keV signal or with interplanetary type III emission below 14 MHz. Some events have a positive spectral index. All these results are discussed in more detail in Section \ref{sec:discussion}.
\section{Coronal type III bursts, X-rays, and interplanetary type III bursts} \label{sec:T3I3T}
Interplanetary type III bursts (IT3s) are defined in this study as the extension of the coronal bursts in the frequency range below 14 MHz, observed by the RAD2 instrument on board WIND/WAVES. Figures similar to Figures \ref{fig:example} and \ref{fig:spec_noass} have been built for all the events to search for significant counterparts of the coronal type III bursts below 14 MHz. As a result it was found that \emph{for 174 (54\%) of the 321 selected events, a significant signal, i.e. an IP type III burst, was observed below 14 MHz.}
It was furthermore examined whether events with significant X-ray emission above 25 keV had a higher association rate between coronal and interplanetary type III bursts and conversely whether a coronal burst associated with an interplanetary burst had a higher probability of producing significant X-ray emission above 25 keV. Table \ref{tab:HXRIT3} shows the proportion of events in our sample associated (or not) with IT3 bursts and simultaneously associated (or not) with X-rays above 25 keV. From the different numbers and ratios, it can be seen that if the coronal type III is associated with X-rays above 25 keV, then the association rate with an interplanetary type III bursts is slightly higher (57\%) than for events for which X-rays are only detected below 25 keV. Conversely, a coronal type III burst associated with an interplanetary type III burst has a slightly higher probability (59\%) of producing significant X-ray emission above 25 keV. These tendencies are discussed in the final section in the context of models relating electron beams and type III bursts.
\begin{center}
\begin{table*}
\centering
\caption{Proportions of the 321 events that had an interplanetary (IP) type III burst, $25-50$~keV X-rays, both or neither. The ratios for each row or column are also indicated. }
\begin{tabular}{ c c c c }
\hline\hline
321 events & $25-50$~keV X-rays & NO $25-50$~keV X-rays & Ratio\\ \hline
IP Type III & $103~(32\%)$ & $71~(22\%)$ & $59:41$ \\
NO IP Type III & $77~(24\%)$ & $70~(22\%)$ & $52:48$\\
Ratio & $57:43$ & $50:50$ \\
\hline
\end{tabular}
\vspace{20pt}
\label{tab:HXRIT3}
\end{table*}
\end{center}
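The row and column ratios in Table \ref{tab:HXRIT3} follow directly from the four event counts; a quick check:

```python
# Event counts from the table: (IP type III?, 25-50 keV X-rays?)
n_it3_hxr, n_it3_nohxr = 103, 71
n_noit3_hxr, n_noit3_nohxr = 77, 70

def pct(a, b):
    """Round a / (a + b) to the nearest percent."""
    return round(100 * a / (a + b))

row_it3 = (pct(n_it3_hxr, n_it3_nohxr), pct(n_it3_nohxr, n_it3_hxr))
row_noit3 = (pct(n_noit3_hxr, n_noit3_nohxr), pct(n_noit3_nohxr, n_noit3_hxr))
col_hxr = (pct(n_it3_hxr, n_noit3_hxr), pct(n_noit3_hxr, n_it3_hxr))
```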
\begin{figure}
\centering
\includegraphics[width=0.99\columnwidth]{hsi_countrate_hist_25_50_15}
\caption{Histogram of the background subtracted peak X-ray count rate for the 96 associated events that showed interplanetary type III bursts (IT3s, red fill) and the 73 events that did not show interplanetary bursts (blue line). The X-ray energy band is 25-50 keV and all events had at least 1.5 times the background count rate.}
\label{fig:xray_hist}
\end{figure}
Exploring the relationship further, Figure \ref{fig:xray_hist} illustrates in more detail the tendency of coronal type III bursts to be more often associated with interplanetary type III bursts (IT3) when they are associated with larger X-ray emission above 25 keV. The histogram of the background-subtracted 25-50 keV peak count rate is shown for events associated or not associated with IT3s. Only events for which the peak count rate above 25 keV was at least 1.5 times the background were considered. As expected from the above discussion, the distributions are relatively similar, especially at low count rates. The log-means of the two distributions, with interplanetary type III bursts (IT3) and without (no IT3), are 1.44 and 1.17 respectively, corresponding to 27 and 15 counts/s, with nearly identical standard deviations. However, above 250 counts/s the distributions differ. Of the 15 events with high count rates, only 2 do not have an associated IT3, compared to the 13 that do, although the statistics are poor with only 15 events. In conclusion, the probability of a coronal type III burst being associated with an interplanetary type III burst depends only weakly on the associated X-ray emission above 25 keV unless the RHESSI count rate is above 1000 counts/s. In that case, an interplanetary type III burst is likely to accompany the coronal type III burst (but the statistics are still poor).
\begin{figure*}
\centering
\includegraphics[width=0.49\textwidth]{nrh_flux_hist_168_15}
\includegraphics[width=0.49\textwidth]{nrh_flux_hist_233_15}
\includegraphics[width=0.49\textwidth]{nrh_flux_hist_327_15}
\includegraphics[width=0.49\textwidth]{nrh_flux_hist_410_15}
\caption{Histograms of the background subtracted radio flux for all of the associated events that showed interplanetary type III bursts (red) and did not show interplanetary bursts (blue). The radio frequencies shown are 164+173 MHz with 109 and 49 events for IT3s and no IT3s, respectively (top left), 237+228 MHz with 103 and 54 events (top right), 327 MHz with 73 and 46 events (bottom left) and 411+408 MHz with 49 and 36 events (bottom right). All the events had at least 1.5 times the background radio flux.}
\label{fig:radio_hist}
\end{figure*}
Figure \ref{fig:radio_hist} illustrates in more detail the increase of the probability of strong coronal type III bursts being associated with interplanetary type III bursts. The histogram of the background-subtracted radio flux in the different frequency bands is shown for events associated or not associated with IP type III bursts. Again, only events with fluxes at least 1.5 times the background are plotted. At the highest frequencies, around 410~MHz, the distributions for IT3 or no IT3 are very similar, with similar first and second order moments (log-mean peak flux and associated error, see Table \ref{tab:radio_IT3_noIT3}). We conclude that the probability of association of a type III burst observed at 410 MHz with an interplanetary type III burst does not depend on its flux at 410 MHz. A similar result is found for frequencies around 327 MHz and 230 MHz, where the log-mean flux is only slightly larger when interplanetary bursts are observed.
At the lowest frequencies around 170~MHz the distributions are however very different (see Figure \ref{fig:radio_hist}). The log-mean flux is over half an order of magnitude larger when we observe interplanetary type III bursts than the log-mean flux without interplanetary type III bursts. The events associated with IP type III bursts have a log-mean of 94 SFU compared to 27 SFU for those not associated with IP type III bursts. We conclude that the probability of detecting an IP type III burst together with a coronal type III burst is strongly enhanced in the case of strong coronal type III bursts around 170 MHz. Although based on a limited number of events, Figure \ref{fig:radio_hist} shows that contrary to what happens at higher frequencies, a high radio flux for type III bursts around 170 MHz is a very good predictor that an interplanetary burst will be observed. This is further discussed in the next section.
\begin{center}
\begin{table*}
\centering
\caption{Log-mean fluxes (in $\log_{10}$~SFU) for the distributions shown in Figure \ref{fig:radio_hist} of type III bursts that are and are not interplanetary (i.e. have significant emission below 14 MHz). Values and errors are calculated from the first and second moments of the distributions. The log-mean is used to avoid biasing the results towards the largest events.}
\begin{tabular}{ c c c c c}
\hline\hline
Log-mean flux & 164+173 MHz & 237+228 MHz & 327 MHz & 411+408 MHz \\ \hline
IT3 & $2.0\pm0.5$ & $1.7\pm0.4$ & $1.6\pm0.5$ & $1.6\pm0.6$ \\
No IT3 & $1.4\pm0.5$ & $1.5\pm0.6$ & $1.5\pm0.6$ & $1.5\pm0.7$ \\
\hline
\end{tabular}
\vspace{20pt}
\label{tab:radio_IT3_noIT3}
\end{table*}
\end{center}
\section{Discussion and conclusions} \label{sec:discussion}
In this paper, we performed a new statistical analysis of the link between coronal type III bursts and X-ray flares, and of the occurrence of the interplanetary counterparts of coronal type III bursts. The study was based on ten years of observations (2002 to 2011), using radio spectra in the 4000-100 MHz range for the coronal bursts, X-ray data above 6~keV, and RAD2 observations on WIND for the interplanetary counterparts. Starting from the RHESSI catalogue of X-ray flares above 6 keV and radio catalogues of pure coronal type III bursts (i.e. without other types of radio bursts), we investigated the connection between groups of type III bursts and X-ray flares and the link with interplanetary bursts. We summarize the important results of our study below:
\begin{itemize}
\item The automatic search provided one order of magnitude more X-ray flares at 6 keV than coronal type III bursts.
\item The large majority of the events in our sample are associated with C and B class flares.
\item Type III bursts above 100 MHz usually start after the X-ray flare is detected at 6 keV and end before the 6 keV emission ceases.
\item A lower bound of 28\% of radio events with only coronal type III bursts had associated X-ray emissions detected with RHESSI above 6 keV.
\item High 25-50 keV X-ray intensities were correlated with high radio peak fluxes below 327 MHz but the opposite was not true.
\item The peak radio flux tends to increase from 450~MHz to 150~MHz but the amount varies significantly from event to event.
\item Interplanetary type III bursts are observed with WIND/WAVES below 14 MHz for 54\% of the coronal type III bursts in our sample.
\item Events with $>250$~counts/s at 25-50~keV and/or $>1000$~SFU at 170 MHz had a high chance of the coronal type III burst becoming interplanetary.
\end{itemize}
The finding that one order of magnitude more X-ray flares were detected by RHESSI above 6 keV than pure coronal type III bursts in the same period can be understood from the emission mechanisms. X-ray emission in the 6-12 keV range is largely produced by electrons with a thermal distribution that is different from the non-thermal electrons that are believed to be the primary cause of type III bursts. The strict selection of `type III flags' in the catalogue also reduces the number of radio events since it excludes events in which coronal type III bursts would be observed in association with other kinds of coherent radio emissions at decimetric/metric wavelengths. The thermal origin of the 6 keV X-rays naturally explains the shorter duration of coronal type IIIs as thermal emission is usually observed during the rise and decay of flares in contrast to the non-thermal emission that is localized to the impulsive phase of the flare.
The very small proportion of large GOES class flares in our sample is an effect of the data selection of `pure' type III events. Indeed, as was shown by \citet{Benz:2005aa,Benz:2007aa}, large GOES class flares (i.e. higher than C5) are almost systematically associated with coherent radio emissions above 100~MHz, but classical type III bursts are the only radio signature in only 4\% of these events.
\subsection{Coronal type III and hard X-ray flare connection}
Our results strengthen the already established connection between the electrons that drive coronal type III emission and X-ray flare emission. We found an association rate of at least 28\% between coronal type III bursts and X-ray flares, which is larger than the 3\% of \citet{Kane:1981aa} and the 15\% of \citet{Hamilton:1990aa}. The larger association rate that we found could be related to our selection of type IIIs as the only radio emission present, to the use of 6~keV X-rays rather than 10~keV, and to instrument sensitivity, particularly in X-rays. The association rate of at least 28\% is a further indication that the non-thermal electrons responsible for the radio and X-ray emissions are part of a common acceleration process. However, not all coronal type IIIs have associated X-ray flares. The lack of simultaneous observations can be due to instrument sensitivity preventing the detection of weak X-ray emission associated with type III production, or to a lack of magnetic connectivity preventing electron transport either up or down in the solar corona from the acceleration site.
The weak log-log correlation between the peak flux of radio emissions at 327-164~MHz and the X-ray count rate at 25-50~keV arises from the notable absence of events with a high X-ray intensity and a low type III radio flux. This implies that when flares with high X-ray count rates above 25~keV produce coronal type III bursts, the radio fluxes are likely to be high. However, the correlation is only weak because the converse is not true. `Coronal' type III bursts with high radio fluxes are also often associated with low X-ray count rates above 25 keV. This results in a large scatter between X-ray and radio fluxes; this large scatter was already found by \citet{Kane:1981aa,Hamilton:1990aa}. The large scatter is explained by the increased efficiency of producing a type III radio burst even by a low density electron beam via the amplification of coherent waves. Conversely the incoherent nature of Bremsstrahlung for producing X-rays is less efficient and the number of high-energy electrons is the primary parameter for producing high rates of X-ray emissions \citep[e.g.][]{Holman:2011aa,Kontar:2011aa}. The dependency of hard X-rays on the number of high-energy electrons naturally explains the notable absence of events with high X-ray intensity and low type III radio flux.
Among the X-ray flares associated with coronal type IIIs ($>100$~MHz), 56\% had $>25$~keV X-rays, a much higher ratio than for flares in general \citep[e.g.][]{Hannah:2011aa}. A similar result was found by \citet{Kane:1981aa} who showed an increase in the X-ray to type III correlation with the peak flux of the X-ray emission. The `coronal' type III bursts observed in our study need to have a starting frequency well above 100 MHz. This result reinforces a previously observed property that the starting frequencies of type III bursts are linked to the spectral index of the accelerated electron beams \citep{Reid:2011aa,Reid:2013aa,Reid:2014aa}. These studies showed that for an electron beam with an injection function of the following form:
\begin{equation}\label{eqn:source}
S(v,r,t) = A v^{-\alpha}\exp\left(-\frac{|r|}{d}\right)\exp\left(-\frac{|t-t_{inj}|}{\tau}\right),
\end{equation}
with an initial characteristic size $d$, injection time $\tau$, and velocity spectral index $\alpha$, the initial height of the type III emission is given by
\begin{equation}
h_{typeIII}=(d+v_{av}\tau)\alpha +h_{acc},
\end{equation}
where $v_{av}$ is the average velocity of the significant part of the electron beam and $h_{acc}$ (at $r=0$) is the starting height. In the case of a harder electron spectrum (lower spectral index), the electron beam becomes unstable to Langmuir wave production after a shorter propagation distance. The starting frequency of the type III emission is thus found to be higher.
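For fundamental plasma emission, the starting height translates into a starting frequency through the local plasma frequency (using the standard relation; the coronal density model $n_e(h)$ itself is left unspecified here):
\begin{equation}
f_{start} \approx f_{pe}\left(h_{typeIII}\right), \qquad f_{pe}\,[\mathrm{kHz}] \simeq 8.98\,\sqrt{n_e\,[\mathrm{cm^{-3}}]},
\end{equation}
so a smaller $\alpha$ gives a lower $h_{typeIII}$, a larger local density $n_e$, and hence a higher starting frequency.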
`Coronal' type IIIs usually occurred during the impulsive phase of the flare, starting after the 6~keV X-ray rise and ending before the 6~keV X-ray decay. A similar result was found by \citet{Aschwanden:1985aa} for decimetric emission between 300-1000~MHz, much of which included type III bursts. Additionally, the peaks of the X-ray and type III emissions are usually closely related in time. For the events of our sample for which $>25$~keV emission was detected, around two-thirds have peak fluxes within 40 seconds of each other, decreasing to around half for peaks within 15 seconds, which is just above the time cadence used in our analysis.
\subsection{Peak radio flux versus frequency}
We investigated the evolution in frequency of the type III log-mean peak radio flux. Although there is a very large dispersion in the spectral indices found (see Figure \ref{fig:nrh_peak_flux}c), the peak radio flux is found, on average (in log space), to increase with decreasing frequency, with a mean slope of -1.78. The evolution of the log-mean radio flux with frequency furthermore depends on whether the event is associated with significant hard X-ray emission above 25 keV (slope of -1.57) or with an interplanetary burst (slope of -1.97). The large scatter we observed shows that the frequency evolution of type III bursts varies from event to event. The slopes are significantly different from the slope of -2.9 found in a survey of 10,000 type III bursts observed with the Nan\c{c}ay Radioheliograph \citep{Saint-Hilaire:2013aa}. The discrepancy could be related to the fact that we only consider bursts that emit at all NRH frequencies, as opposed to a statistical average over all bursts.
Our results can be taken as further evidence that, in general, type III bursts are more numerous and stronger at low frequencies than at high frequencies. Physical reasons for the increase include the onset and increase of the bump-in-tail instability with velocity dispersion, the lower background plasma density reducing collisional damping and increasing the Langmuir wave growth rate, and the lower frequency radio waves escaping the corona more easily. The increase in radio flux with decreasing frequency is consistent with what was observed by \citet{Dulk:1998aa} who found that the spectrum of a type III burst in the $3-50$~MHz range has a negative spectral index. More recently, the statistical study performed by \citet{Krupar:2014aa} on 152 type III bursts at long wavelengths observed by STEREO/SWAVES also showed that the mean type III radio flux increases significantly from 10 MHz to 1 MHz.
\subsection{Interplanetary bursts and $>25$ keV electrons}
As discussed in Section \ref{sec:T3I3T} and summarized in Table \ref{tab:HXRIT3}, the coronal type III bursts in our sample are more often associated with IP type III bursts when they are also associated with detectable 25-50 keV emission. Figure \ref{fig:xray_hist} furthermore shows that for the HXR events in this study that produce more than 250 counts/s in the 25-50 keV channel, there is a much higher chance of detecting an IP burst in connection with the coronal burst.
The production of high X-ray count rates at 25-50~keV requires a high number of injected electrons above 25 keV. Increasing the number of injected electrons above 25~keV can be achieved by increasing the number density of electrons at all energies or by hardening the energy distribution (smaller spectral index) of the accelerated electron beam. A larger number of high-energy electrons increases the Langmuir wave energy obtained from the unstable electron beam. From quasilinear theory \citep{Vedenov:1963aa,Drummond:1964aa}, the growth rate of Langmuir waves is
\begin{equation}\label{eqn:growthrate}
\gamma_{ql}=\frac{\pi\omega_{pe}v^2}{n_e}\frac{\partial f}{\partial{v}}.
\end{equation}
for an electron distribution $f(v)$. An increase in the density of beam electrons or an increase in the velocity of the electrons increases $\gamma_{ql}$. Increasing Langmuir wave energy density likely increases type III radio flux \citep[e.g.][]{Melrose:1986aa} and increases the probability of generating a detectable type III burst. Other factors can inhibit interplanetary type III production (e.g. magnetic connectivity), so we do not expect all beams with a large electron flux above 25~keV to produce interplanetary type III bursts.
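To illustrate how the sign of $\partial f/\partial v$ controls Equation~(\ref{eqn:growthrate}), the sketch below evaluates $\gamma_{ql}$ numerically for a Gaussian beam in normalised units; all parameter values are assumptions for illustration, not solar values:

```python
import math

def gaussian_beam(v, n_b, v_b, dv):
    """Beam distribution f(v): Gaussian of density n_b centred at v_b, width dv."""
    return n_b / (math.sqrt(2 * math.pi) * dv) * math.exp(-((v - v_b) ** 2) / (2 * dv ** 2))

def growth_rate(v, n_e, omega_pe, n_b, v_b, dv, h=1e-4):
    """gamma_ql = (pi * omega_pe / n_e) * v^2 * df/dv, with df/dv by central difference."""
    dfdv = (gaussian_beam(v + h, n_b, v_b, dv) - gaussian_beam(v - h, n_b, v_b, dv)) / (2 * h)
    return math.pi * omega_pe * v ** 2 / n_e * dfdv

# Illustrative (normalised) numbers:
n_e, omega_pe = 1.0, 1.0
n_b, v_b, dv = 1e-3, 10.0, 1.0

# Positive slope (v < v_b) drives Langmuir wave growth; negative slope damps.
assert growth_rate(9.0, n_e, omega_pe, n_b, v_b, dv) > 0
assert growth_rate(11.0, n_e, omega_pe, n_b, v_b, dv) < 0
```

Raising the beam density $n_b$ scales $\gamma_{ql}$ linearly, which is the mechanism invoked above for denser or harder beams.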
The increase in type III flux when the number of electrons above 25~keV is increased was demonstrated numerically by \citet{Li:2008ac,Li:2009aa,Li:2011ab} using a hot, propagating Maxwellian. They showed that increasing the temperature of the initial Maxwellian beam increases the type III radio flux and increases the bandwidth; the burst starts at higher frequencies and stops at lower frequencies. The simulations were restricted to frequencies above 150~MHz and the type III flux peaked above 200 MHz, which is inconsistent with the general trend of increasing flux with decreasing frequency reported in this and other studies. A further study by \citet{Li:2013aa} demonstrated via an initial power-law electron beam that decreasing the power-law spectral index increased the fundamental radio flux emitted at high and low radio frequencies. Both sets of simulations can explain why a coronal burst produced by an electron beam with a smaller (harder) power-law spectral index, i.e. associated with a larger 25-50 keV count rate, would have a higher probability of being associated with an interplanetary type III burst than if the electron beam had a larger (softer) initial power-law spectral index.
Electron beams can also be diluted as the cross-section of the guiding magnetic flux tube increases with height. A high density of electrons above 25 keV is more likely to still produce detectable type III radio flux at altitudes related to 14 MHz, even in the case of a diverging magnetic flux tube. The effect of flux tube radial expansion on type III stopping frequencies has been shown recently by \citet{Reid:2015aa}. Numerical simulations of beam electrons, and their resonant interaction with Langmuir waves in diverging magnetic flux tubes, are used to compute the type III stopping frequency, defined as the frequency at which the beam is no longer able to produce a sufficient level of Langmuir wave energy density compared to the thermal level. Denser electron beams and harder electron beams (low spectral index) are more likely to produce significant Langmuir wave energy densities further away from the injection site than sparse or softer electron beams, and therefore produce type III bursts with lower stopping frequencies. This result provides further understanding as to why coronal bursts associated with stronger HXR bursts are more likely associated with IP type III bursts.
\subsection{Interplanetary bursts and magnetic connectivity}
We found the detection of a strong radio flux at frequencies around 170 MHz (Figure \ref{fig:radio_hist}) to be a very good indication that the type III burst will become an interplanetary type III burst at lower frequencies. Previously it has not been clear whether the absence of radio emission below 14 MHz is due to a weak beam or to unfavourable magnetic connectivity. When electron beams are dense enough to produce high-flux type IIIs around 170 MHz, we often observe them below 14 MHz. We deduce that magnetic connectivity plays less of a role in the transport of electrons from the high corona into interplanetary space. A strong radio flux at frequencies around 170 MHz indicates the electrons \emph{do} have access to the high corona and subsequently are more likely to access the interplanetary medium.
We did not find the same trend that strong radio flux is a good indication of interplanetary type III bursts at frequencies at and above 237~MHz. The magnetic connectivity of bursts with a large radio flux at high frequencies, which are produced low in the solar atmosphere, is not necessarily favourable for the electron beam exciter to escape into the upper corona and interplanetary space (see e.g.\ the example in \citet{Vilmer:2003aa}), even if the radio flux is high. The access of particles to open field lines and then the possible association between coronal and interplanetary type III bursts may evolve during flares due to, for example, processes of interchange reconnection in which newly emerging flux tubes can reconnect with previously open field lines \citep[see e.g.][]{Masson:2012aa,Krucker:2011aa}.
The statistical results presented in this paper were based on radio spectra and flux time profiles but did not include spatially resolved observations. This latter aspect will be considered in a following study, which will examine in detail the combination of HXR images provided by RHESSI with the multi-frequency images of the radio bursts produced in the decimeter/meter wavelengths by the Nan\c{c}ay Radioheliograph. Tracing the magnetic connectivity between the solar surface, the corona and the interplanetary medium will be one of the key questions of the Solar Orbiter mission. As shown in the present paper, X-ray and radio emissions from energetic electron beams can be used to trace the electron acceleration and propagation sites from the solar surface to the interplanetary medium. The combination of ground-based radio spectrographs and imagers with the radio, X-ray, and in-situ electron measurements aboard Solar Orbiter will undoubtedly largely contribute in the next decade to a better understanding of the magnetic connectivity between the Sun and the interplanetary medium, and on the release and distribution in space and time of the energetic particles from the Sun.
\begin{acknowledgements}
Hamish Reid acknowledges the financial support from a SUPA Advanced Fellowship and from the STFC consolidated grant ST/L000741/1. Nicole Vilmer acknowledges support from the Centre National d'Etudes Spatiales (CNES) and from the French programme on Solar-Terrestrial Physics (PNST) of INSU/CNRS for the participation to the RHESSI project. The European Commission is acknowledged for funding from the HESPE Network (FP7-SPACE-2010-263086). Financial support by the Royal Society grant (RG130642) is gratefully acknowledged. Support from a Marie Curie International Research Staff Exchange Scheme RadioSun PEOPLE-2011-IRSES-295272 RadioSun project is greatly appreciated. Collaborative work was supported by a British council Franco-British alliance grant and funding from the Paris Observatory. The NRH is funded by the French Ministry of Education and the R\'{e}gion Centre. The Institute of Astronomy, ETH Zurich and FHNW Windisch, Switzerland is acknowledged for funding Phoenix spectrometers. The RHESSI team, the WIND/WAVES team and the DAM team are acknowledged for providing data access and analysis software.
\end{acknowledgements}
\bibliographystyle{aa}
\subsection{Sub-Theories of Arithmetic and Complexity}
\paragraph{Very Weak, Weak and Strong Fragments.}
Sub-theories of $\lc{PA}$ have been obtaining
increasing interest for their intimate connection with
complexity classes.
Buss divided them into three main categories:
strong, weak and very weak fragments~\cite{Buss98}.
\emph{Very weak} theories do not admit any induction axioms.
Among them is the well-known Robinson arithmetic
$\textsf{\textbf{Q}}$, introduced in the 1950s by Robinson, Tarski, and Mostowski~\cite{Robinson,TarskiMostowskiRobinson}.
\emph{Weak theories} are defined by a language which
extends that of $\lc{PA}$ by additional symbols of specific
growth rate (sometimes together with explicit bounded quantifiers) and
by limited induction schemas.
In \emph{strong theories} the language
is enlarged so to include symbols for all primitive
recursive functions.\footnote{Examples of strong theories
are $\textsf{\textbf{I}}\Sigma_n$, that is
$\textsf{\textbf{Q}}_\leq$ (the conservative extension
of $\textsf{\textbf{Q}}$ with $x \leq y \leftrightarrow
(\exists z)(x+z=y)$) plus $\Sigma_n$-IND, and
$\textsf{\textbf{I}}\Delta$ obtained adding $\Delta_0$-IND to $\textsf{\textbf{Q}}_\leq$.}
\paragraph{Bounded Theories.}
In particular, bounded theories of arithmetics
are weak fragments of $\lc{PA}$,
typically including bounded quantifiers
and in which induction is limited.
Buss defined a bounded arithmetic as a theory
axiomatized by $\Pi_1$-formulas~\cite{Buss98}.
The potential strength of such theories
and their ability to characterize complexity,
depends on the (sub-exponential)
growth rate of the function symbols in the language.
The study of $\lc{BA}$ was initiated by Parikh in 1971~\cite{Parikh},
who introduced $\textsf{\textbf{I}}\Delta_0$ to give an appropriate
proof theory to linear bounded automata, namely to predicates
computable by linear space-bounded TMs.
Then, other bounded theories were introduced by Buss~\cite{Buss86}
and
extensively studied.
\subsection{Buss' Bounded Arithmetic}
Buss' Ph.D. thesis~\cite{Buss86} provided a groundbreaking result
in the study of the arithmetical characterization of complexity classes.
He started by considering the \emph{definability}
of a function in a theory: an arithmetic theory $T$
defines a function $f$ when there is a formula $F$
in the language of $T$ such that $f$ satisfies $F(x,f(x))$
for any $x$ and $T\vdash (\forall x)(\exists !y) F (x,y)$.
The constructive proof of $(\forall x)(\exists y)F (x,y)$
also provides an algorithm to compute $f$.
So, the given procedure is effectively computable
but not necessarily \emph{feasible}, that is
\emph{computable in polynomial time}.
Due to his bounded theories, Buss was able to arithmetically
characterize functions computable with given resource bound,
thus to characterize interesting complexity classes.
\small
\begin{figure}[h!]\label{fig:schema-ferreira}
\begin{center}
\framebox{
\parbox[t][2.6cm]{5cm}{
\footnotesize{
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node at (0,2) (a) {$\mathcal{PTCA}$};
\node at (-1.6,0.7) (b) {$\Sigma^b_1$-NIA};
\node at (1.6,0.7) (c) {$\cc{FP}$};
\draw[<->,thick] (a) to [bend right=20] (b);
\draw[<->,thick] (b) to [bend right=20] (c);
\draw[<->,thick] (c) to [bend right=20] (a);
%
\end{tikzpicture}
\end{center}
}}}
\caption{Ferreira's Proof Schema~\cite{Ferreira88}}
\end{center}
\end{figure}
\normalsize
\paragraph{Language.}
The language of Buss' fragments extends that of $\lc{PA}$
with three special function symbols:
$\llcorner \frac{1}{2} \cdot \lrcorner$, which halves the argument
rounding down,
$|\cdot|$,
returning the length of the binary representation of the argument,
and Nelson's smash function $\#$, such that $x\#y=2^{|x|\cdot |y|}$.
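The growth of $\#$ can be checked concretely. The sketch below (our illustration, using the convention $|0|=0$) shows why $x\#y$ stays of polynomial length in the lengths of its arguments, which is what keeps the theory sub-exponential:

```python
def length(x: int) -> int:
    """|x|: number of bits in the binary representation of x (|0| = 0)."""
    return x.bit_length()

def smash(x: int, y: int) -> int:
    """The smash function: x # y = 2^(|x| * |y|)."""
    return 2 ** (length(x) * length(y))

# |5| = 3 ('101'), |3| = 2 ('11'), so 5 # 3 = 2^6 = 64.
assert smash(5, 3) == 64
# |x # y| = |x|*|y| + 1, i.e. the *length* of the output is
# polynomial (quadratic) in the input lengths.
assert length(smash(5, 3)) == length(5) * length(3) + 1
```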
In the language of $\lc{BA}$, standard quantifiers are called \emph{unbounded},
while quantifiers of the form
$(\forall x\leq t)$ or $(\exists x\leq t)$ are called \emph{bounded}
and are such that $(\forall x\leq t)F$ is a shorthand
for $(\forall x)(x\leq t\rightarrow F)$, and
$(\exists x\leq t)F$ abbreviates
$(\exists x)(x\leq t\wedge F)$.\footnote{Otherwise, the
syntax can be expanded so to directly include them.
In this case the calculus must be extended accordingly.}
A special kind of bounded quantifiers are \emph{sharply bounded}
ones, namely $(\forall x\leq |t|)$ and $(\exists x\leq |t|)$.
Bounded formulas (converted into PNF) are classified in a hierarchy of classes,
$\Sigma^b_k$ and $\Pi^b_k$,
by counting alternations of bounded quantifiers
(ignoring sharply bounded ones).
\longv{
\begin{defn}
The set $\Delta^b_0=\Sigma^b_0=\Pi^b_0$
is equal to the set of formulas in which all
quantifiers are sharply bounded.
For $i\ge 1$, the sets $\Sigma^b_i,\Pi^b_i$
are inductively defined by the following conditions:
\begin{itemize}
\itemsep0em
\item If $F,G$ are $\Sigma^b_i$-formulas,
then $F\vee G$ and $F \wedge G$
are $\Sigma^b_i$-formulas.
If $F$ is a $\Pi^b_i$-formula and $G$
a $\Sigma^b_i$-formula, then $F \rightarrow G$,
and $\neg F$ are $\Sigma^b_i$-formulas.
\item if $F$ is a $\Pi^b_{i-1}$-formula,
then $(\exists x \leq t) F$ is a $\Sigma^b_i$-formula.
\item if $F$ is a $\Sigma^b_i$-formula
and $t$ a term,
then $(\forall x\leq |t|)F$ is a $\Sigma^b_i$-formula
\item if $F$ is a $\Sigma^b_i$-formula
and $t$ a term, then $(\exists x\leq t)F$
is a $\Sigma^b_i$-formula.
\end{itemize}
The four inductive conditions defining $\Pi^b_i$
are dual to the given ones.
\end{defn}
}
\longv{
\begin{defn}
The following sets of formulas
are defined by induction on the complexity of formulas:
\begin{enumerate}
\itemsep0em
\item $\Pi^b_0=\Sigma^b_0=\Delta_0^b$ is
the set of formulas the quantifiers of which
are sharply bounded.
\item $\Sigma^b_{k+1}$ is defined inductively by
\begin{enumerate}
\itemsep0em
\item $\Pi^b_k\subseteq \Sigma^b_{k+1}$
\item if $F\in\Sigma^b_{k+1}$, then $(\exists x\leq t)F,
(\forall x\leq |t|) F \in \Sigma^b_{k+1}$
\item if $F,G\in \Sigma^b_{k+1}$, then
$F\wedge G, F \vee G\in \Sigma^b_{k+1}$
\item if $F \in \Sigma^b_{k+1}$
and $G\in \Pi^b_{k+1}$, then $\neg G,
G \rightarrow F \in \Sigma^b_{k+1}$.
\end{enumerate}
\item Similarly for $\Pi^b_{k+1}$.
\end{enumerate}
\end{defn}}
\paragraph{Axiomatization.}
Bounded theories are then defined by adding 32 basic
axioms~\cite[pp. 30-31]{Buss86} to the ones for
$\lc{PA}$ and restricting the induction schema.
Buss himself noticed that there is a certain amount of flexibility
in the choice of basic axioms and the ones he introduced
in his thesis are not optimal~\cite[p. 101]{Buss98}.
Alternative sets were proposed for example by Cook and Urquhart~\cite{CookUrquhart}
and Buss and Ignjatovi\'c~\cite{BussIgnjatovic}.\footnote{For further details, see~\cite{Buss98}.}
In particular, Buss introduced the class
$\textsf{\textbf{S}}^i_2$ as axiomatized by basic axioms plus $\Sigma^b_i$-PIND:
$$
F(\mathtt{0}) \wedge (\forall x)(F (\llcorner \frac{1}{2}x\lrcorner)
\rightarrow F(x)) \rightarrow (\forall x)F (x),
$$
where $F$ is a $\Sigma^b_i$-formula,
while
$\textsf{\textbf{T}}^i_2$ are defined by the same set of basic
axioms together with the induction schema $\Sigma^b_i$-IND,
$$
F(\mathtt{0}) \wedge (\forall x)(F(x) \rightarrow
F(\mathtt{S}(x))) \rightarrow (\forall x)F(x),
$$
where $F$ is a $\Sigma^b_i$-formula.
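The key difference is that PIND recurses on $\llcorner \frac{1}{2}x\lrcorner$ rather than on the predecessor, so an induction on $x$ unfolds in $|x|$ steps — polynomially many in the \emph{length} of the input rather than in its value. A quick standalone check of this depth count (our illustration):

```python
def pind_depth(x: int) -> int:
    """Number of times x can be halved (rounding down) before reaching 0.
    This is the recursion depth induced by PIND, and equals |x|."""
    d = 0
    while x > 0:
        x //= 2
        d += 1
    return d

# |1000| = 10 bits, whereas successor induction (IND) would unfold 1000 steps.
assert pind_depth(1000) == 10
assert pind_depth(1000) == (1000).bit_length()
assert pind_depth(2 ** 40) == 41
```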
\paragraph{Relating $\textsf{\textbf{S}}^1_2$ and poly-time computable functions.}
The main result of Buss' Thesis is the proof that these arithmetic theories actually provide a logical characterization of complexity classes
in $\hc{PH}$.
In particular, he proved that poly-time computable functions
are $\Sigma^b_1$-definable in $\textsf{\textbf{S}}^1_2$~\cite[Cor. 8, p. 99]{Buss86}.
On the one hand,
a function is poly-time computable when there is a TM $\mathscr{M}$
and a polynomial $p(n)$ such that $\mathscr{M}$ computes the given function
and runs in time at most $p(n)$, for any input of length
$n$.
In 1964, this notion was made precise by Cobham in the form of
a function algebra~\cite{Cobham}, on which Buss' proof relies.\footnote{Buss also noticed that a proof ``directly'' based on the machine definition is also possible.}
On the other,
a function is $\Sigma^b_1$-definable in $\textsf{\textbf{S}}^1_2$
when there is a $\Sigma^b_1$-formula $F$ such that the
conditions above hold.
The link between complexity of computing and
quantified formulas is then established in two main steps:
\footnotesize
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node at (7,0) (a) {$\Sigma^b_1$-representability in $\textsf{\textbf{S}}^1_2$~\cite{Buss86}};
\node at (2,0) (b) {Class $\mathcal{FP}$~\cite{Cobham}};
\node at (4.5,1) {\textcolor{gray}{\scriptsize{witness theorem}}};
\node at (4.5,-1) {\textcolor{gray}{\scriptsize{bootstrapping}}};
\draw[->,thick,gray] (a) to [bend right=25] (b);
\draw[->,thick,gray] (b) to [bend right=25] (a);
%
\end{tikzpicture}
\end{center}
\normalsize
\noindent
That every poly-time function is $\Sigma^b_1$-definable
in $\textsf{\textbf{S}}^1_2$ is proved via the so-called \emph{bootstrapping},
that is, by introducing a series of coding functions.
The proof of the converse direction is more difficult.
Buss presented a sequent calculus, extending
$\mathbf{LK}$ with rules
for limited induction and bounded quantifiers.
Then, due to cut elimination, he proved the ``witness'' theorem,
showing that proofs in this calculus contain explicit
algorithms to compute the output of the function from the input
\emph{in polynomial time}.
\subsection{Ferreira's Bounded Arithmetic}
In~\cite{Ferreira88,Ferreira90}, Ferreira introduced
a ``(supposedly) more natural''~\cite[p. 2]{Ferreira90}
bounded theory defined in a word language,
instead of the standard language of arithmetic $\mathcal{L}_{\mathbb N }$.
This arithmetic characterizes poly-time
computable functions – this time defined over
strings – and, indeed, can interpret Buss'
$\textsf{\textbf{S}}^1_2$~\cite{FerreiraOitavem}.
For clarity's sake, we sum up its salient aspects
following the notation and axiomatization
of~\cite{FerreiraOitavem}, which differs slightly from~\cite{Ferreira88,Ferreira90} on the surface
but is essentially equivalent.
\paragraph{The Function Algebra $\mathcal{PTCA}$.}
Ferreira defined an algebra of functions \emph{over strings}
$\mathcal{PTCA}$
(poly-time computable arithmetic) analogous
to Cobham's one but made of:
\begin{itemize}
\itemsep0em
\item initial functions:
\begin{itemize}
\itemsep0em
\item $E_{\mathcal{F}}(x)=\emptyset$
\item $P_{\mathcal{F}}^{n,i}(x_1,\dots, x_n)=x_i$, with $1\leq i\leq n$
\item $C^b_{\mathcal{F}}(x)=x\mathbb{b}$, where if $b=1$, then $\mathbb{b}=\mathbb{1}$ and
if $b=0$, then $\mathbb{b}=\mathbb{0}$
\item \begin{align*}
Q_{\mathcal{F}}(x,y) =\mathbb{1} &\leftrightarrow x\subseteq y \\
Q_{\mathcal{F}}(x,y) = \mathbb{0} &\vee Q_{\mathcal{F}}(x,y)=\mathbb{1}
\end{align*}
\end{itemize}
\item functions obtained by:
\begin{itemize}
\itemsep0em
\item composition, i.e. $f$ is obtained from $g,h_1,\dots,h_k$
as $f=(x_1,\dots, x_n)=g(h_1(x_1,\dots, x_n),\dots, h_k(x_1,\dots, x_n))$
\item bounded iteration, i.e. $f$ is obtained from $g,h_0,h_1$ as,
\begin{align*}
f(x_1,\dots, x_n,\emptyset) &= g(x_1,\dots, x_n) \\
f(x_1,\dots, x_n,y\mathbb{b}) &=
h_b(x_1,\dots, x_n,y,f(x_1,\dots, x_n,y))|_{t(x_1,\dots, x_n,y)}
\end{align*}
where $t$ is a term called \emph{bound} and $\cdot |_{\cdot}$
denote truncation.
\end{itemize}
\end{itemize}
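To make the truncation mechanism concrete, here is a small sketch (our illustration, with words as Python strings of \texttt{'0'}/\texttt{'1'} and $\cdot|_{\cdot}$ as prefix truncation) of bounded iteration, instantiated to a function producing $|y|$ ones:

```python
def trunc(x: str, t: str) -> str:
    """x|_t : x truncated to the length of t."""
    return x[:len(t)]

def bounded_iteration(g, h0, h1, bound):
    """Build f(xs, y) from g, h_0, h_1, with each step truncated by bound(xs, y)."""
    def f(xs, y):
        if y == "":
            return g(xs)
        prev, b = y[:-1], y[-1]
        h = h0 if b == "0" else h1
        return trunc(h(xs, prev, f(xs, prev)), bound(xs, prev))
    return f

# Example: ones(y) = string of |y| ones, with bound t(y) = y followed by one bit,
# so the output length can never exceed |y| + 1 at any stage.
ones = bounded_iteration(
    g=lambda xs: "",
    h0=lambda xs, y, r: r + "1",
    h1=lambda xs, y, r: r + "1",
    bound=lambda xs, y: y + "0",
)
assert ones((), "0110") == "1111"
```

The explicit bound $t$ is what guarantees poly-time: every intermediate value has length bounded by a term built from $\bm{\epsilon},\mathbb{0},\mathbb{1},\bm{\frown},\bm{\times}$.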
\paragraph{The Word Language $\mathcal{L}_{\mathbb{W}}$.}
As anticipated, Ferreira's theory is defined in a word
language $\mathcal{L}_{\mathbb{W}}$, which is basically
a first-order language with equality endowed with
three constants $\epsilon,\mathtt{0},\mathtt{1}$,
two function symbols $\frown, \times$
and a relation symbol $\subseteq$.\footnote{Observe that
Ferreira used different symbols for the empty
string and for concatenation.}
Interpretation is as predictable:
$\epsilon$ denotes the empty
word, $\mathtt{0}$ and $\mathtt{1}$
the bits $\mathbb{0}$
and $\mathbb{1}$ (resp.),
$\frown$ word concatenation,
$\times$ the binary product (i.e.~$x\times y=
x\frown\dots \frown x$, $|y|$-times),
and $\subseteq$ the \emph{initial} subword relation.
\emph{Bounded quantifiers} in $\mathcal{L}_{\mathbb{W}}$
are of the form $\forall x\preceq t$ and $\exists x\preceq t$
(with $t$ term),
where $x\preceq t$ intuitively means that the length
of $x$ is smaller or equal than that of $t$.
Bounded-quantified formulas $(\forall x\preceq t)F$
and $(\exists x\preceq t)F$
abbreviate respectively $(\forall x)(\mathbb{1}\times x \subseteq \mathbb{1} \times t
\rightarrow F)$
and $(\exists x)(\mathbb{1}\times x\subseteq \mathbb{1} \times t
\wedge F)$.
\emph{Subword quantifiers} are of the form
$\forall x\subseteq^* t$ and $\exists x\subseteq^* t$,
so defined that $(\forall x\subseteq^* t)F$
is a shorthand for $(\forall x)((\exists w\subseteq t)(wx\subseteq t)\rightarrow F)$
and $(\exists x\subseteq^* t)F$
for $(\exists x)((\exists w\subseteq t)(wx\subseteq t)\wedge F)$.
\paragraph{The Theory $\Sigma^b_1$-NIA.}
Then, $\Sigma^b_1$-NIA is a first-order theory
in $\mathcal{L}_{\mathbb{W}}$
defined by the following axioms:\footnote{Again,
the name $\Sigma^b_1$-NIA is not original to~\cite{Ferreira88},
but was introduced in~\cite{FerreiraOitavem}.}
\begin{itemize}
\itemsep0em
\item Basic axioms:
\begin{center}
$x \epsilon = x$ \ \ \ \ \ \
$x(y\mathtt{b}) = (xy)\mathtt{b}$ \ \ \ \ \ \ \
$x \times \epsilon = \epsilon$ \ \ \ \ \ \
$x\times y\mathtt{b} = (x\times y)x$ \\
$x\subseteq \epsilon \leftrightarrow x=\epsilon$
\ \ \ \ \ \
$x\subseteq y\mathtt{b} \leftrightarrow x\subseteq y \vee x=y\mathtt{b}$ \\
$x\mathtt{b} = y\mathtt{b} \rightarrow x=y$ \ \ \ \ \ \
$x\mathtt{0} \neq y \mathtt{1}$ \ \ \ \ \ \ $x\mathtt{b} \neq \epsilon.$
\end{center}
\item Axiom schema for induction on notation:
$$
F(\epsilon) \wedge (\forall x)(F(x) \rightarrow F(x\mathbb{0})
\wedge F(x\mathbb{1})) \rightarrow (\forall x)F (x),
$$
where $F$ is a $\Sigma^b_1$-formula in $\mathcal{L}_{\mathbb{W}}$.
\end{itemize}
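The intended word model makes these axioms easy to spot-check. In the sketch below (an illustration of the standard semantics, not part of the theory itself) $\frown$ is string concatenation, $\times$ self-concatenation $|y|$ times, and $\subseteq$ the prefix relation:

```python
def times(x: str, y: str) -> str:
    """x × y : x concatenated with itself |y| times."""
    return x * len(y)

def initial_subword(x: str, y: str) -> bool:
    """x ⊆ y : x is an initial (prefix) subword of y."""
    return y.startswith(x)

# Spot-check a few axiom instances on concrete words:
x, y = "101", "01"
assert x + "" == x                              # x ε = x
assert times(x, "") == ""                       # x × ε = ε
assert times(x, y + "0") == times(x, y) + x     # x × y b = (x × y) x
# x ⊆ y b  ↔  x ⊆ y  ∨  x = y b:
assert initial_subword(x, y + "1") == (initial_subword(x, y) or x == y + "1")
```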
\longv{What is meant by talking of an \emph{effective} computational procedure?
The core idea is that an effective computation involves (1) executing an \emph{algorithm} which (2) successfully \emph{terminates}.
If executing an algorithm is computing a total function, then the procedure must \emph{terminate} in a finite number of steps for every input, and produce the right sort of output.
Observe that it is not part of the very idea of an algorithm that its execution always terminates.
By Kleene's theorem, every partial recursive function $f$ can be written as $f(\vec{n})=l(\mu y[t_f(\vec{n},y)=0])$, where $l$ and $t_f$ are primitive recursive.
The function $t_f$ describes a particular algorithm computing $f$.
Termination of this algorithm can be expressed by a formula of the form
$$
(\forall \vec{x})(\exists y)t_f(\vec{x},y)=0.
$$
If this formula has a proof, one can say that the function is provably total.
Define Kleene's $T$ predicate:
$$
T(k,\vec{n},m) \ \ \ \text{ iff } \ \ \ \mathscr{M}_k \text{ halts on input } \vec{n}
\text{ in } r(m) \text{ steps with output } l(m).
$$
which is primitive recursive.
Let $f$ be a partial function computed by $\mathscr{M}_k$.
Then, the characteristic function of $T$ is
$$
t_f(\vec{n},m)=0 \ \ \ \text{ iff } \ \ \ \mathscr{M}_k \text{ halts on input }
\vec{n} \text{ in } r(m) \text{ steps with output } l(m).
$$
and is primitive recursive.
By Kleene's theorem of normal form, every partial recursive function $f$ can be written as $l(\mu y[t_f(\vec{n},y)=0])$, where $t_f$ and $l$ are primitive recursive functions.
A recursive function $f$ is \emph{provably total} in a theory $T$ when
$$
T \vdash (\forall \vec{x})(\exists y)\, t_f(\vec{x},y)=0.
$$
$$
\frac{\text{totality of functions}}{\text{termination}}
= \frac{x}{\text{probabilistic termination}}
$$}
\section{Overview}
Usual characterizations of poly-time (deterministic)
functions in bounded arithmetic are obtained by two ``macro''
results~\cite{Buss86,Ferreira90}.
Some Cobham-style algebra for poly-time functions is
introduced and shown equivalent to (1) that
of functions computed by TMs running in polynomial time,
and (2) that of functions which are $\Sigma^b_1$-representable
in the proper bounded theory.
The global structure of our proof follows a similar path,
with an algebra of oracle recursive function,
called $\mathcal{POR}$, playing the role of our Cobham-style
function algebra.
In our case, functions are poly-time computable by PTMs
and the theory is randomized $\textsf{\textbf{RS}}^1_2$.
After introducing these classes, we show
that the random functions which are
$\Sigma^b_1$-representable in $\textsf{\textbf{RS}}^1_2$ are precisely those in
$\mathcal{POR}$, and
that $\mathcal{POR}$ is equivalent
(in a very specific sense)
to the class of functions computed by PTMs running
in polynomial time.
While the first part is established due to standard
arguments~\cite{Ferreira90,CookUrquhart},
the presence of randomness introduces a
delicate ingredient to be dealt with in the second part.
Indeed, functions in $\mathcal{POR}$ access randomness
in a rather different way
with respect to PTMs, and relating these
models requires some effort,
that involves long chains of
intermediate simulations.
\small
\begin{figure}[h!]\label{fig:schema-proof}
\begin{center}
\framebox{
\parbox[t][2.6cm]{5cm}{
\footnotesize{
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node at (0,2) (a) {$\mathcal{POR}$};
\node at (-1.6,0.7) (b) {$\textsf{\textbf{RS}}^1_2$};
\node at (1.6,0.7) (c) {$\mathcal{RFP}$};
\draw[<->,thick] (a) to [bend right=20] (b);
\draw[<->,thick] (b) to [bend right=20] (c);
\draw[<->,thick] (c) to [bend right=20] (a);
%
\end{tikzpicture}
\end{center}
}}}
\caption{Proof Schema}
\end{center}
\end{figure}
\normalsize
Concretely,
we start by defining the class of oracle functions over strings,
the new theory $\textsf{\textbf{RS}}^1_2$, strongly inspired
by~\cite{FerreiraOitavem}, but over a ``probabilistic word language'',
and considering a slightly modified notion of
$\Sigma^b_i$-representability,
fitting the domain of our peculiar oracle functions.
Then, we prove that the class of
random functions computable in polynomial time, called $\mathcal{RFP}$,
is precisely the class of
functions which are $\Sigma^b_1$-representable in $\textsf{\textbf{RS}}^1_2$
in three steps:
\begin{enumerate}
\itemsep0em
\item We prove that functions in $\mathcal{POR}$ are $\Sigma^b_1$-representable
in $\textsf{\textbf{RS}}^1_2$ by induction on the structure of oracle functions
(and relying on the encoding machinery presented
in~\cite{Buss86,Ferreira90}).
\item We show that all functions which are $\Sigma^b_1$-representable
in $\textsf{\textbf{RS}}^1_2$ are in $\mathcal{POR}$ by realizability techniques
similar to Cook and Urquhart's one~\cite{CookUrquhart}.
\item We generalize Cobham's result to probabilistic
models,
showing that functions in $\mathcal{POR}$
are precisely those in $\mathcal{RFP}$.
\end{enumerate}
\small
\begin{figure}[h!]\label{fig:schema-nutshell}
\begin{center}
\framebox{
\parbox[t][3.2cm]{10cm}{
\footnotesize{
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node at (-4,0) (a) {$\textsf{\textbf{RS}}^1_2$};
\node at (0,0) (b) {$\mathcal{POR}$};
\node at (4,0) (c) {$\mathcal{RFP}$};
\node at (-2,1) {\textcolor{gray}{realizability~\cite{CookUrquhart}}};
\node at (-2,-1) {\textcolor{gray}{induction~\cite{Ferreira88}}};
\node at (2,0.3) {\textcolor{gray}{series of simulations}};
\draw[->,thick,dotted] (a) to [bend right=30] (b);
\draw[->,thick,dotted] (b) to [bend right=30] (a);
\draw[<->,thick,dotted] (c) to (b);
%
\end{tikzpicture}
\end{center}
}}}
\caption{Our Proof in a Nutshell}
\end{center}
\end{figure}
\normalsize
\section{Introducing $\mathcal{POR}$ and $\textsf{\textbf{RS}}^1_2$}\label{sec:introRS}
In this section, we introduce
a Cobham-style function algebra for poly-time oracle
recursive functions $\mathcal{POR}$,
and a randomized bounded arithmetic $\textsf{\textbf{RS}}^1_2$.
As Ferreira's ones~\cite{Ferreira88,Ferreira90},
these classes are both defined over binary strings
rather than natural numbers:
$\mathcal{POR}$ is a class of oracle functions over sequences of bits,
while $\textsf{\textbf{RS}}^1_2$ is defined in a probabilistic word language $\mathcal{RL}$.
Strings support a natural notion of term-size
and make it easier to deal with bounds and time-complexity.
%
Observe that working with strings is not crucial
and all results below could be spelled out in terms of natural
numbers.
%
Indeed, theories have been introduced
in both formulations – Ferreira's $\Sigma^b_1$-NIA
and Buss' $\textsf{\textbf{S}}^1_2$ – and proved equivalent~\cite{FerreiraOitavem}.
\subsection{The Function Algebra $\mathcal{POR}$}
We introduce a function algebra for
poly-time \emph{oracle} recursive functions inspired by Ferreira's
$\mathcal{PTCA}$~\cite{Ferreira88,Ferreira90},
and defined over strings.
%
\begin{notation}
Let $\mathbb B=\{\mathbb{0},\mathbb{1}\}$,
$\mathbb S=\mathbb B^*$ be the set of binary strings
of finite length, and $\mathbb O=\mathbb B^{\mathbb S}$ be the set of functions
from $\mathbb S$ to $\mathbb B$, thought of as infinite sequences of bits.
%
Metavariables $\eta',\eta'',\dots$ are used to
denote the elements of $\mathbb O$.
\end{notation}
\noindent
Let $|\cdot |$ denote the string-length map, so that for any
string $x$, $|x|$ indicates the length of $x$.
Given two binary strings $x,y$ we use $x\bm{\subseteq} y$
to express that $x$ is an \emph{initial}
or \emph{prefix substring} of $y$,
$x\bm{\frown} y$ (abbreviated as $xy$) for concatenation,
and $x\bm{\times} y$ obtained by self-concatenating
$x$ for $|y|$-times.
%
Given an infinite string of bits $\eta$,
and a finite string $x$, $\eta(x)$ denotes
\emph{one} specific bit of $\eta$,
the so-called $x$-th bit of $\eta$.
%
%
%
A fundamental difference between oracle functions
and those of $\mathcal{PTCA}$ is that the former are
of the form $f:\mathbb S^k \times \mathbb O \to \mathbb S$,
carrying an additional argument
to be interpreted as the underlying source of random bits.
Furthermore, $\mathcal{POR}$ includes the basic function \emph{query},
$Q(x,\eta)=\eta(x)$, which can be used to
observe any bit in $\eta$.
\begin{defn}[The Class $\mathcal{POR}$]
The \emph{class $\mathcal{POR}$} is the smallest class
of functions $f:\mathbb S^k \times \mathbb O\to \mathbb S$,
containing:
\begin{itemize}
\itemsep0em
\item The \emph{empty (string) function} $E(x,\eta)=\bm{\epsilon}$
\item The \emph{projection (string) functions} $P^{n}_i(x_1,\dots, x_n,\eta) =x_i$,
for $n\in\mathbb N $ and $1\leq i\leq n$
\item The \emph{word-successor} $S_{b}(x,\eta)=x\mathbb{b}$,
where $b=0$ if $\mathbb{b}=\mathbb{0}$ and $b=1$ if $\mathbb{b}=\mathbb{1}$
\item The \emph{conditional (string) function}
\begin{align*}
C(\bm{\epsilon}, y, z_0, z_1, \eta) &= y \\
C(x\mathbb{b}, y, z_0, z_1, \eta) &= z_{b},
\end{align*}
where $b=0$ if $\mathbb{b} = \mathbb{0}$ and $b=1$ if $\mathbb{b}=\mathbb{1}$
\item The \emph{query (string) function} $Q(x,\eta) = \eta(x)$
\end{itemize}
and closed under the following schemas:
\begin{itemize}
\itemsep0em
\item \emph{Composition}, where $f$ is defined from
$g,h_1,\dots, h_k$ as
$$
f(\vec{x},\eta) = g(h_1(\vec{x},\eta), \dots, h_k(\vec{x},\eta),
\eta)
$$
\item \emph{Bounded recursion on notation}, where $f$
is defined from $g,h_0,$ and $h_1$ as
\begin{align*}
f(\vec{x},\bm{\epsilon},\eta) &= g(\vec{x},\eta) \\
f(\vec{x}, y\mathbb{0}, \eta) &= h_0(\vec{x}, y, f(\vec{x}, y, \eta),
\eta) |_{t(\vec{x},y)} \\
f(\vec{x}, y\mathbb{1}, \eta) &=
h_1(\vec{x}, y, f(\vec{x},y,\eta), \eta)|_{t(\vec{x},y)}
\end{align*}
and $t$ is obtained from $\bm{\epsilon}, \mathbb{1}, \mathbb{0}, \bm{\frown},$
and $\bm{\times}$ by explicit definition,
that is, $t$ can be obtained by applying $\bm{\frown}$ and $\bm{\times}$
to the constants $\bm{\epsilon}, \mathbb{0},\mathbb{1}$, and
the variables $\vec{x}$ and $y$.\footnote{Notice that there is a
clear correspondence with the grammar for terms in $\mathcal{RL}$,~Definition~\ref{df:rl}.}
\end{itemize}
\end{defn}
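The closure schemas also admit an executable reading. The sketch below (hypothetical names, not part of the formal development; oracles are modelled as functions from strings to bits) renders bounded recursion on notation, with $z|_{t}$ implemented as \texttt{truncate}:

```python
def truncate(z, t):
    """The truncation of z at the length of t."""
    return z[:len(t)]

def bounded_recursion(g, h0, h1, t):
    """f defined by bounded recursion on notation from g, h0, h1,
    with every recursive step truncated at the bound t(xs, y)."""
    def f(xs, y, eta):
        if y == "":                        # f(xs, eps, eta) = g(xs, eta)
            return g(xs, eta)
        h = h0 if y[-1] == "0" else h1     # case on the last bit of y
        prev = f(xs, y[:-1], eta)
        return truncate(h(xs, y[:-1], prev, eta), t(xs, y[:-1]))
    return f

# A toy instance: rebuilding y bit by bit, bounded by a term of length |y|+1.
g  = lambda xs, eta: ""
h0 = lambda xs, y, r, eta: r + "0"
h1 = lambda xs, y, r, eta: r + "1"
t  = lambda xs, y: "1" * (len(y) + 1)
ident = bounded_recursion(g, h0, h1, t)
```

The truncation is what keeps every intermediate value polynomially bounded, mirroring the role of the term $t(\vec{x},y)$ in the schema.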
\noindent
Actually, the conditional function $C$
could be defined by bounded recursion.
\longv{
As follows, with recursion on the first argument (which is legitimate
up to a permutation of the arguments, via composition with projections):
\begin{align*}
C(\bm{\epsilon}, y, z_0, z_1,\eta) &= y \\
C(x\mathbb{0}, y, z_0,z_1,\eta) &= P^{5}_{3}(x,y,z_0,z_1,
C(x,y,z_0,z_1,\eta),\eta)|_{z_0 z_1} \\
C(x\mathbb{1}, y, z_0,z_1,\eta) &= P^{5}_{4}(x,y,z_0,z_1,
C(x,y,z_0,z_1,\eta),\eta)|_{z_0 z_1}.
\end{align*}
We take it as a primitive function of $\mathcal{POR}$
to help making the realizability of Section~\ref{sec:realizability}
straightforward.
}
\begin{remark}
Neither the query function nor the conditional function
appears in Ferreira's characterization~\cite{Ferreira90},
which instead contains the ``substring-conditional''
function:
$$
S(x,y,\eta) = \begin{cases}
\mathbb{1} \ \ &\text{if } x \bm{\subseteq} y \\
\mathbb{0} \ \ &\text{otherwise,}
\end{cases}
$$
which can be defined in $\mathcal{POR}$ by bounded recursion.
\longv{
First, let $Tail (x,\eta)$ be defined as follows:
\begin{align*}
Tail (\bm{\epsilon},\eta) &= \bm{\epsilon} \\
Tail (x\mathbb{b},\eta) &= x.
\end{align*}
Then, let $Pr(x,\eta) = Tail(x,\eta)$ be the \emph{predecessor} function,
erasing the last bit of $x$,
and $Eq(x,y,\eta)$ be:
\begin{align*}
Eq(x,\bm{\epsilon},\eta) &=
C(x,\mathbb{1},\bm{\epsilon},\bm{\epsilon},\eta) \\
Eq(x,y\mathbb{0},\eta) &=
C(x,\bm{\epsilon},Eq(Pr(x), y,\eta), \bm{\epsilon}, \eta) \\
Eq(x,y\mathbb{1},\eta) &=
C(x,\bm{\epsilon},\bm{\epsilon}, Eq(Pr(x),y,\eta),\eta).
\end{align*}
Finally, we can define $S(x,y,\eta)$ as:
\begin{align*}
S(x,\bm{\epsilon},\eta) &= C(x,\mathbb{1},\mathbb{0},\mathbb{0},\eta) \\
S(x,y\mathbb{0},\eta) &=
C(x,\mathbb{1}, C(Eq(x,y\mathbb{0},\eta),
S(x,y,\eta), \mathbb{1},\mathbb{1},\eta),
S(x,y,\eta),\eta) \\
S(x,y\mathbb{1},\eta) &=
C(x,\mathbb{1},S(x,y,\eta), C(Eq(x,y\mathbb{1},\eta),
S(x,y,\eta), \mathbb{1},\mathbb{1},\eta), \eta).
\end{align*}
}
\end{remark}
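An executable counterpart of the substring-conditional may clarify its behaviour (a hypothetical Python sketch, recursing on the notation of $y$ and exploiting the prefix recurrence: $x \bm{\subseteq} y\mathbb{b}$ iff $x \bm{\subseteq} y$ or $x = y\mathbb{b}$):

```python
def S(x, y, eta):
    """Substring-conditional: '1' iff x is an initial (prefix)
    substring of y, by recursion on the notation of y."""
    if y == "":
        return "1" if x == "" else "0"
    if x == y or S(x, y[:-1], eta) == "1":
        return "1"
    return "0"
```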
\subsection{Randomized Bounded Arithmetics}
First, we introduce a probabilistic word language for our bounded arithmetic,
together with its quantitative interpretation.
\paragraph{The Language $\mathcal{RL}$.}
Following~\cite{FerreiraOitavem},
we consider a first-order signature for natural numbers
in binary notation endowed with a special predicate symbol
$\mathtt{Flip}(\cdot)$.
Consequently, formulas are interpreted over $\mathbb S$ rather
than $\mathbb N $.
\begin{defn}[Terms and Formulas of $\mathcal{RL}$]\label{df:rl}
\emph{Terms} and \emph{formulas of $\mathcal{RL}$} are defined by the grammar below:
\begin{align*}
t &:= x \; \; \mbox{\Large{$\mid$}}\;\; \epsilon \; \; \mbox{\Large{$\mid$}}\;\; \mathtt{0} \; \; \mbox{\Large{$\mid$}}\;\; \mathtt{1} \; \; \mbox{\Large{$\mid$}}\;\; t\frown t \; \; \mbox{\Large{$\mid$}}\;\;
t \times t \\
F &:= \mathtt{Flip}(t) \; \; \mbox{\Large{$\mid$}}\;\; t=s \; \; \mbox{\Large{$\mid$}}\;\; \neg F \; \; \mbox{\Large{$\mid$}}\;\;
F \wedge F \; \; \mbox{\Large{$\mid$}}\;\; F \vee F \; \; \mbox{\Large{$\mid$}}\;\; (\exists x)F \; \; \mbox{\Large{$\mid$}}\;\;
(\forall x)F.
\end{align*}
\end{defn}
\noindent
\begin{notation}[Truncation]
For readability, we adopt the following abbreviations:
$ts$ for $t\frown s$,
$\mathtt{1}^t$ for $\mathtt{1} \times t$,
and $t\preceq s$ for $\mathtt{1}^t \subseteq \mathtt{1}^s$,
expressing that the length of $t$ is smaller than or equal to that of $s$.
Given three terms $t,r,$ and $s$, the abbreviation
$t|_r = s$ denotes the following formula,
$$
(\mathtt{1}^r \subseteq \mathtt{1}^t \wedge s \subseteq t \wedge
\mathtt{1}^r = \mathtt{1}^s) \vee
(\mathtt{1}^t \subseteq \mathtt{1}^r \wedge s=t),
$$
saying that $s$ is the \emph{truncation} of $t$
at the length of $r$.
\end{notation}
\noindent
Every string $\sigma\in \mathbb S$ can be seen as a term
$\overline{\sigma}$ of $\mathcal{RL}$, such that
$\overline{\bm{\epsilon}}=\epsilon$,
$\overline{\sigma\mathbb{b}} = \overline{\sigma}\mathtt{b}$,
where $\mathbb{b}\in\mathbb B$ and $\mathtt{b}\in\{\mathtt{0},\mathtt{1}\}$,
e.g.~$\overline{\mathbb{0}\zero\mathbb{1}} = \mathtt{0}\zzero\mathtt{1}$.
A central feature of bounded arithmetic is the presence of
bounded quantification.
\begin{notation}[Bounded Quantifiers]
In $\mathcal{RL}$, \emph{bounded quantified expressions}
are expressions of either the form
$(\forall x)(\mathtt{1}^x \subseteq \mathtt{1}^t \rightarrow F)$ or
$(\exists x)(\mathtt{1}^x \subseteq \mathtt{1}^t \wedge F)$,
usually abbreviated as $(\forall x\preceq t)F$ and
$(\exists x \preceq t)F$ respectively.
\end{notation}
\begin{notation}
We call \emph{subword quantifications}
the quantifications of the form
$(\forall x\subseteq^* t)F$ and
$(\exists x\subseteq^* t)F$,
abbreviating
$(\forall x)\big((\exists w \subseteq t)(wx\subseteq t)
\rightarrow F\big)$
and
$(\exists x)(\exists w\subseteq t)
(wx \subseteq t \wedge F)$.
Furthermore,
we abbreviate so-called \emph{initial subword
quantification}
$(\forall x)(x\subseteq t \rightarrow F)$
as $(\forall x\subseteq t)F$
and
$(\exists x)(x\subseteq t \wedge F)$
as $(\exists x\subseteq t)F$.
\end{notation}
\noindent
The distinction between bounded and subword
quantification is important for complexity reasons.
If $\sigma \in \mathbb S$ is a string of length $k$,
the witness of a subword existentially quantified
formula
$(\exists x \subseteq^* \overline{\sigma})F$
is to be looked for among all possible sub-strings of $\sigma$,
that is~within a space of size $\mathcal{O}(k)$.
On the contrary, the witness of a bounded formula
$(\exists x\preceq \overline{\sigma})F$
is to be looked for among all possible strings of
length $k$, namely~within a space of size $\mathcal{O}(2^k)$.
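The gap can be made concrete by brute-force enumeration (a hypothetical Python illustration): for a string of length $k$, an initial subword quantifier $(\exists x\subseteq \overline{\sigma})$ has only $k+1$ candidate witnesses, while a bounded quantifier $(\exists x\preceq \overline{\sigma})$ has $2^{k+1}-1$:

```python
from itertools import product

def initial_substrings(s):
    """All prefixes of s: the witness space of an initial subword quantifier."""
    return [s[:i] for i in range(len(s) + 1)]

def strings_up_to(k):
    """All binary strings of length at most k: the witness space of a
    bounded quantifier whose bound has length k."""
    return ["".join(p) for n in range(k + 1) for p in product("01", repeat=n)]
```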
\begin{remark}
In order to avoid misunderstanding let us briefly sum up
the different notions and symbols used for subword relations.
We use $\bm{\subseteq}$ to express a relation between strings,
that is $x\bm{\subseteq} y$ expresses that $x$ is an initial or prefix substring of $y$.
We use $\subseteq$ as a relation symbol in the language $\mathcal{RL}$.
We use $\preceq$ as an auxiliary symbol in the language $\mathcal{RL}$;
in particular, as seen, $t\preceq s$ is syntactic sugar for $\mathtt{1}^t\subseteq \mathtt{1}^s$.
We use $\subseteq^*$ as an auxiliary symbol in the language $\mathcal{RL}$
to denote subword quantification.
We also use $(\exists w\subseteq t)F$ as an abbreviation of
$(\exists w)(w\subseteq t\wedge F)$
and similarly for $(\forall w\subseteq t)$.
\end{remark}
\begin{defn}[$\Sigma^b_1$-Formulas]\label{df:Sigmab1}
A $\Sigma^b_0$-formula is a subword quantified
formula,
i.e.~a formula belonging to the smallest class
of $\mathcal{RL}$ containing atomic
formulas and closed under Boolean operations
and subword quantification.
A formula is said to be a
\emph{$\Sigma^b_1$-formula}, if it is of the form
$(\exists x_1\preceq t)\dots(\exists x_n\preceq t_n)
F$,
where the only quantifications in $F$ are
subword ones.
We call $\Sigma^b_1$ the class containing
all and only the $\Sigma^b_1$-formulas.
\end{defn}
\noindent
An \emph{extended $\Sigma^b_1$-formula}
is any formula of $\mathcal{RL}$ that can be constructed
in a finite number of steps starting from atomic formulas,
by means of conjunctions, disjunctions,
subword quantifications, and bounded existential quantifications.
\paragraph{The Bounded Theory $\textsf{\textbf{RS}}^1_2$.}
We introduce the bounded theory $\textsf{\textbf{RS}}^1_2$,
which can be seen as a
probabilistic version of Ferreira's
$\Sigma^b_1$-NIA~\cite{Ferreira88}.
It is expressed in the language $\mathcal{RL}$.
\begin{defn}[Theory $\textsf{\textbf{RS}}^1_2$]
The theory $\textsf{\textbf{RS}}^1_2$ is defined by axioms
belonging to two classes:
\begin{itemize}
\itemsep0em
\item \emph{Basic axioms}:
\begin{enumerate}
\itemsep0em
\item $x\epsilon=x$
\item $x(y\mathtt{b}) = (xy)\mathtt{b}$
\item $x\times \epsilon=\epsilon$
\item $x\times y\mathtt{b} = (x\times y)x$
\item $x\subseteq \epsilon \leftrightarrow x=\epsilon$
\item $x\subseteq y\mathtt{b} \leftrightarrow
x\subseteq y \vee x=y\mathtt{b}$
\item $x\mathtt{b} = y\mathtt{b} \rightarrow x=y$
\item $x\mathtt{0} \neq y\mathtt{1}$
\item $x\mathtt{b}\neq \epsilon$
\end{enumerate}
with $\mathtt{b}\in \{\mathtt{0},\mathtt{1}\}$
\item \emph{Axiom schema for induction on notation},
$$
B(\epsilon) \wedge (\forall x)\big(B(x) \rightarrow
B(x\mathtt{0}) \wedge B(x\mathtt{1})\big)
\rightarrow (\forall x)B(x),
$$
where $B$ is a $\Sigma^b_1$-formula in $\mathcal{RL}$.
\end{itemize}
\end{defn}
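The basic axioms can be sanity-checked against the intended string model (a small Python property test, assuming the intended reading $x\times y\mathtt{b} = (x\times y)\frown x$ of axiom 4):

```python
from itertools import product

def times(x, y):
    """Self-concatenation: x repeated |y| times."""
    return x * len(y)

# All binary strings of length < 4, as a finite test universe.
strings = ["".join(p) for n in range(4) for p in product("01", repeat=n)]
for x in strings:
    for y in strings:
        for b in "01":
            assert x + "" == x                          # axiom 1
            assert x + (y + b) == (x + y) + b           # axiom 2
            assert times(x, "") == ""                   # axiom 3
            assert times(x, y + b) == times(x, y) + x   # axiom 4
            # axiom 6: x prefix of yb iff x prefix of y or x = yb
            assert (y + b).startswith(x) == (y.startswith(x) or x == y + b)
```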
\noindent
Induction on notation
adapts the usual induction schema of $\lc{PA}$
to the binary representation.
Of course, as in Buss' and Ferreira's approach,
the restriction of this schema to $\Sigma^b_1$-formulas
is essential
to characterize algorithms computed \emph{with bounded resources}.
Indeed, more general instances of the schema
would extend representability to random functions which
are not poly-time (probabilistic) computable.
\begin{prop}[\cite{Ferreira88}]
In $\textsf{\textbf{RS}}^1_2$ any extended $\Sigma^b_1$-formula
is logically equivalent to a
$\Sigma^b_1$-formula.\footnote{Actually, Ferreira
proved this result for the theory $\Sigma^b_1$-NIA~\cite[pp. 148-149]{Ferreira88},
but it clearly holds for $\textsf{\textbf{RS}}^1_2$ as well.}
\end{prop}
\paragraph{Semantics for Formulas in $\mathcal{RL}$.}
We introduce a \emph{quantitative} semantics for
formulas of $\mathcal{RL}$,
which is strongly inspired by that for $\lc{MQPA}$.
In particular, function symbols of $\mathcal{RL}$ as well as the predicate
symbols ``='' and ``$\subseteq$'' have a standard interpretation
as relations over $\mathbb S$
in the canonical model $\mathscr{W}=(\mathbb S, \bm{\frown}, \bm{\times})$,
while, as we shall see,
$\mathtt{Flip}(t)$ can be interpreted either in a standard way
or following the quantitative semantics of~\cite{CiE}.
\begin{defn}[Semantics for Terms in $\mathcal{RL}$]\label{def:RLterms}
Given a set of term variables $\mathcal{G}$,
an environment $\xi: \mathcal{G} \to \mathbb S$
is a mapping that assigns to each variable a string.
Given a term $t$ in $\mathcal{RL}$ and an environment $\xi$,
the \emph{interpretation of $t$ in $\xi$} is the string
$\model{t}_\xi \in \mathbb S$ inductively defined as follows:
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.35\linewidth}
\begin{align*}
\model{\epsilon}_\xi &:= \bm{\epsilon} \\
\model{\mathtt{0}}_\xi &:= \mathbb{0} \\
\model{\mathtt{1}}_\xi &:= \mathbb{1} \\
\end{align*}
\end{minipage}
\hfill
\begin{minipage}[t]{0.55\linewidth}
\begin{align*}
\model{x}_\xi &:= \xi(x) \in \mathbb S \\
\model{t\frown s}_\xi &:= \model{t}_\xi \model{s}_\xi \\
\model{t \times s} _\xi &:= \model{t}_\xi \times \model{s}_\xi.
\end{align*}
\end{minipage}
\end{minipage}
\end{defn}
As in~\cite{CiE}, we extend the canonical model $\mathscr{W}$ with the probability space
$\mathscr{P}_{\mathscr{W}} = (\mathbb O, \sigma(\mathscr{C}), \mu_{\mathscr{C}})$,
where $\sigma(\mathscr{C})\subseteq \mathcal{P}(\mathbb O)$
is the Borel $\sigma$-algebra generated by cylinders
$\textsf{C}^\mathbb{b}_\sigma = \{\eta \ | \ \eta(\sigma)=\mathbb{b}\}$,
for $\mathbb{b} \in \mathbb B$, and such that
$\mu_{\mathscr{C}}(\textsf{C}^{\mathbb{b}}_{\sigma})=\frac{1}{2}$.
To formally define cylinders over $\mathbb B^\mathbb S$,
we slightly modify Billingsley's notion of \emph{cylinder
of rank $n$}.
\begin{defn}[Cylinder over $S$]
For any countable set $S$,
finite $K\subset S$ and $H\subseteq \mathbb B^K$,
$$
\textsf{C}(H) = \{\eta \in \mathbb B^S \ | \ \eta |_K \in H\},
$$
is a \emph{cylinder} over $S$.
\end{defn}
\noindent
Then, $\mathscr{C}$ and $\sigma(\mathscr{C})$
are defined in the standard way and
the probability measure over it is defined as follows:\footnote{For further
details, see~\cite{Davoli}.}
\begin{defn}[Cylinder Measure]
For any countable set $S$,
finite $K\subset S$ and $H\subseteq \mathbb B^K$ such
that $\textsf{C}(H)=\{\eta \in \mathbb B^S \ | \ \eta|_K \in H\}$,
$$
\mu_{\mathscr{C}}(\textsf{C}(H))= \frac{|H|}{2^{|K|}}.
$$
\end{defn}
\noindent
This is a measure over $\sigma(\mathscr{C})$.
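As a worked instance of this measure (a hypothetical finite computation: a restriction $\eta|_K$ is encoded as a tuple of bits indexed by the positions in $K$), taking $K=\{\bm{\epsilon},\mathbb{0}\}$ and letting $H$ collect the restrictions whose $\bm{\epsilon}$-th bit is $\mathbb{1}$ gives $\mu_{\mathscr{C}}(\textsf{C}(H))=\frac{2}{4}=\frac{1}{2}$:

```python
from itertools import product
from fractions import Fraction

def cylinder_measure(K, H):
    """mu_C(C(H)) = |H| / 2^{|K|}, for K a finite tuple of positions and
    H a set of restrictions, each a tuple of bits indexed by K."""
    return Fraction(len(H), 2 ** len(K))

K = ("", "0")  # the positions eps and 0
H = {h for h in product("01", repeat=len(K)) if h[0] == "1"}
```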
Then, formulas of $\mathcal{RL}$ are interpreted as \emph{measurable} sets.
\begin{defn}[Semantics for Formulas in $\mathcal{RL}$]\label{df:RLformulas}
Given a term $t$, a formula $F$,
and an environment $\xi: \mathcal{G} \to \mathbb S$,
where $\mathcal{G}$ is the set of term variables,
the \emph{interpretation of $F$ under $\xi$}
is the measurable set of sequences $\model{F}_\xi
\in \sigma(\mathscr{C})$ inductively defined as follows:
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.35\linewidth}
\begin{align*}
\\
\model{\mathtt{Flip}(t)}_\xi &:=
\{\eta \ | \ \eta(\model{t}_\xi) = \mathbb{1}\} \\
\model{t=s}_\xi &:=
\begin{cases}
\mathbb O \ &\text{if } \model{t}_\xi = \model{s}_\xi \\
\emptyset \ &\text{otherwise}
\end{cases} \\
\model{t\subseteq s}_\xi &:=
\begin{cases}
\mathbb O \ &\text{if } \model{t}_\xi \bm{\subseteq} \model{s}_\xi \\
\emptyset \ &\text{otherwise}
\end{cases}
\end{align*}
\end{minipage}
\hfill
\begin{minipage}[t]{0.55\linewidth}
\begin{align*}
\model{\neg G}_\xi &:= \mathbb O - \model{G}_\xi \\
\model{G \vee H}_\xi &:= \model{G}_\xi \cup \model{H}_\xi \\
\model{G \wedge H}_\xi &:= \model{G}_\xi \cap \model{H}_\xi \\
\model{G\rightarrow H}_\xi &:= (\mathbb O - \model{G}_\xi) \cup
\model{H}_\xi \\
\model{(\exists x)G}_\xi &:=
\bigcup_{i\in\mathbb S} \model{G}_{\xi\{x\leftarrow i\}} \\
\model{(\forall x)G}_\xi &:=
\bigcap_{i\in \mathbb S} \model{G}_{\xi \{x\leftarrow i\}}.
\end{align*}
\end{minipage}
\end{minipage}
\end{defn}
\noindent
As anticipated, this semantics is well-defined.
Indeed, the sets $\model{\mathtt{Flip}(t)}_\xi$,
$\model{t=s}_\xi$ and $\model{t\subseteq s}_\xi$
are measurable and measurability is preserved
by all the logical operators.
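Since the truth of a quantifier-free formula only depends on the bits of $\eta$ at the finitely many interpreted terms occurring in it, its measure can be computed by a finite enumeration. A hypothetical Python check that $\mu_{\mathscr{C}}(\model{\mathtt{Flip}(\epsilon) \wedge \neg\mathtt{Flip}(\mathtt{0})}) = \frac{1}{4}$:

```python
from itertools import product
from fractions import Fraction

coords = ("", "0")  # the positions queried by the formula

def F(eta):
    """Flip(eps) and not Flip(0), evaluated on a finite restriction of eta."""
    return eta[""] == "1" and eta["0"] == "0"

# Enumerate all restrictions of eta to the queried positions.
restrictions = [dict(zip(coords, bits))
                for bits in product("01", repeat=len(coords))]
measure = Fraction(sum(F(r) for r in restrictions), len(restrictions))
```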
An interpretation of the language
$\mathcal{RL}$ in the usual sense is determined by an environment $\xi$
together with the choice of an interpretation $\eta$ for $\mathtt{Flip}(\cdot)$.
\begin{defn}[Standard Semantics for Formulas in $\mathcal{RL}$]
Given a $\mathcal{RL}$-formula $F$, and an interpretation
$\rho=(\xi,\eta^{\mathtt{FLIP}})$,
where $\xi:\mathcal{G}\to \mathbb S$ and
$\eta^{\mathtt{FLIP}} \in \mathbb O$,
the \emph{interpretation of $F$ in $\rho$} $\model{F}_\rho$,
is inductively defined as follows:
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.35\linewidth}
\begin{align*}
\model{\mathtt{Flip}(t)}_\rho &:=
\begin{cases}
1 \ &\text{if } \eta^{\mathtt{FLIP}}(\model{t}_\rho) =
\mathbb{1} \\
0 \ &\text{otherwise}
\end{cases} \\
\model{t=s}_\rho &:=
\begin{cases}
1 \ &\text{if } \model{t}_\rho = \model{s}_\rho \\
0 \ &\text{otherwise.}
\end{cases} \\
\model{t \subseteq s}_\rho &:=
\begin{cases}
1 \ &\text{if } \model{t}_\rho \bm{\subseteq} \model{s}_\rho \\
0 \ &\text{otherwise}
\end{cases}
\end{align*}
\end{minipage}
\hfill
\begin{minipage}[t]{0.55\linewidth}
\begin{align*}
\model{\neg G}_\rho &:= 1 - \model{G}_\rho \\
\model{G \wedge H}_\rho &:= \min\{\model{G}_\rho,
\model{H}_\rho\} \\
\model{G \vee H}_\rho &:= \max\{\model{G}_\rho,
\model{H}_\rho\} \\
\model{G \rightarrow H}_\rho &:=
\max\{(1-\model{G}_\rho), \model{H}_\rho\} \\
\model{(\forall x)G}_\rho &:=
\min\{\model{G}_{\rho\{x \leftarrow \sigma\}} \ | \ \sigma \in \mathbb S\} \\
\model{(\exists x)G}_\rho &:= \max\{\model{G}_{\rho\{x\leftarrow \sigma\}}
\ | \ \sigma \in \mathbb S\}.
\end{align*}
\end{minipage}
\end{minipage}
\end{defn}
\begin{notation}
For readability's sake,
we abbreviate $\model{\cdot}_{\rho}$ simply as
$\model{\cdot}_\eta$,
and $\model{\cdot}_\xi$ as $\model{\cdot}$.
\end{notation}
\noindent
Observe that \emph{quantitative}
and \emph{qualitative}
semantics for $\mathcal{RL}$ are mutually related,
as can be proved by induction on the structure of formulas~\cite{Davoli}.
\begin{prop}
For any formula $F$ in $\mathcal{RL}$, environment $\xi$,
function $\eta \in \mathbb O$ and $\rho=(\xi,\eta)$,
$$
\model{F}_{\rho} = 1 \ \ \text{iff} \ \
\eta \in \model{F}_\xi.
$$
\end{prop}
\section{$\textsf{\textbf{RS}}^1_2$ characterizes $\mathcal{POR}$}\label{sec:PORandRS}
As said, our proof follows a by-now standard path~\cite{Buss86,Ferreira88}.
The first step consists in showing that functions in $\mathcal{POR}$
are precisely those which are $\Sigma^b_1$-representable in
$\textsf{\textbf{RS}}^1_2$.
To do so, we extend Buss' representability conditions by
adding a constraint to link the quantitative semantics
of formulas in $\textsf{\textbf{RS}}^1_2$ with the additional functional
parameter $\eta$ of oracle recursive functions.
\begin{defn}[$\Sigma^b_1$-Representability]\label{df:representability}
A function $f:\mathbb S^k \times \mathbb O \to \mathbb S$ is
\emph{$\Sigma^b_1$-representable
in $\textsf{\textbf{RS}}^1_2$} if there is a
$\Sigma^b_1$-formula $F(\vec{x},y)$ of $\mathcal{RL}$ such that:
\begin{enumerate}
\itemsep0em
\item $\textsf{\textbf{RS}}^1_2\vdash (\forall \vec{x})(\exists y)F(\vec{x},y)$
\item $\textsf{\textbf{RS}}^1_2 \vdash (\forall \vec{x})(\forall y)(\forall z)
\big(F(\vec{x},y) \wedge F(\vec{x},z) \rightarrow y=z\big)$
\item for all $\sigma_1,\dots, \sigma_j,\tau \in \mathbb S$ and $\eta\in \mathbb O$,
$$
f(\sigma_1,\dots, \sigma_j,\eta) = \tau \ \ \ \text{iff} \ \ \
\eta \in \model{F
(\overline{\sigma_1},\dots, \overline{\sigma_j},\overline{\tau})}.
$$
\end{enumerate}
\end{defn}
\noindent
We recall that the language $\mathcal{RL}$ allows us to associate
the formula $F$ with both a \emph{qualitative}
– namely, when dealing with 1. and 2. –
and a \emph{quantitative} interpretation – namely, in 3.
Then, in Section~\ref{sec:PORtoRS},
we prove the following theorem.
\begin{theorem}[$\mathcal{POR}$ and $\textsf{\textbf{RS}}^1_2$]\label{thm:PORandRS}
For any function $f:\mathbb S^k\times \mathbb O \to \mathbb S$,
$f$ is $\Sigma^b_1$-representable in $\textsf{\textbf{RS}}^1_2$
if and only if $f\in\mathcal{POR}$.
\end{theorem}
\noindent
In particular, that any function in $\mathcal{POR}$
is $\Sigma^b_1$-representable in
$\textsf{\textbf{RS}}^1_2$ is proved in Section~\ref{sec:PORtoRS}
by a straightforward induction on the structure
of probabilistic oracle functions.
The other direction is established in
Section~\ref{sec:RStoPOR} by a realizability argument
very close to the one offered in~\cite{CookUrquhart}.
\begin{figure}[h!]
\begin{center}
\framebox{
\parbox[t][3.5cm]{11cm}{
\footnotesize{
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node[draw] at (-3,0) (a) {Class $\mathcal{POR}$};
\node[draw] at (3,0) (b) {$\Sigma^b_1$-Representability in $\textsf{\textbf{RS}}^1_2$};
\node at (0,1.2) {\textcolor{gray}{\scriptsize{induction on $\mathcal{POR}$, Sec.~\ref{sec:PORtoRS}}}};
\node at (0,-1.2) {\textcolor{gray}{\scriptsize{realizability as in~\cite{CookUrquhart}, Sec.~\ref{sec:RStoPOR}}}};
\draw[->,dotted,thick] (-2.5,0.4) to [bend left=20] (2.5,0.4);
\draw[->,dotted,thick] (2.5,-0.4) to [bend left=20] (-2.5,-0.4);
%
\end{tikzpicture}
\end{center}
}}}
\caption{Relating $\mathcal{POR}$ and $\textsf{\textbf{RS}}^1_2$}
\end{center}
\end{figure}
\normalsize
\subsection{Functions in $\mathcal{POR}$ are $\Sigma^b_1$-Representable in $\textsf{\textbf{RS}}^1_2$}\label{sec:PORtoRS}
We prove that any function in $\mathcal{POR}$ is
$\Sigma^b_1$-representable in $\textsf{\textbf{RS}}^1_2$ by
constructing the desired formula by induction on the structure
of oracle functions.
Preliminarily notice that, for example,
the formula $(\forall \vec{x})(\exists y)G(\vec{x},y)$
occurring in condition 1 is \emph{not} in
$\Sigma^b_1$, since its existential
quantifier is not bounded.
Hence, in order to apply the inductive steps
– namely, composition
and bounded recursion on notation –
we need to adapt Parikh's theorem~\cite{Parikh}
to $\textsf{\textbf{RS}}^1_2$.\footnote{The
theorem is usually presented in the
context of Buss' bounded theories,
stating that given a bounded formula
$F$ in $\mathcal{L}_\mathbb N $ such that $\textsf{\textbf{S}}^1_2 \vdash
(\forall \vec{x})(\exists y)F$,
then there is a term $t(\vec{x})$
such that also $\textsf{\textbf{S}}^1_2
\vdash (\forall \vec{x})(\exists y\leq t(\vec{x}))
F(\vec{x},y)$~\cite{Buss86,Buss98}.
Furthermore, due to~\cite{FerreiraOitavem},
Buss' syntactic proof can be adapted
to $\Sigma^b_1$-NIA in a natural way.
The same result holds for $\textsf{\textbf{RS}}^1_2$,
which does not contain any specific rule
concerning $\mathtt{Flip}(\cdot)$.}
\begin{prop}[``Parikh''~\cite{Parikh}]\label{prop:Parikh}
Let $F(\vec{x},y)$ be a bounded formula in $\mathcal{RL}$
such that
$\textsf{\textbf{RS}}^1_2 \vdash (\forall \vec{x})(\exists y) F(\vec{x},y)$.
Then, there is a term $t$ such that,
$$
\textsf{\textbf{RS}}^1_2 \vdash (\forall \vec{x})(\exists y \preceq t(\vec{x}))F(\vec{x},y).
$$
\end{prop}
\begin{theorem}\label{thm:PORtoRS}
Every $f\in \mathcal{POR}$ is $\Sigma^b_1$-representable
in $\textsf{\textbf{RS}}^1_2$.
\end{theorem}
\begin{proof}[Proof Sketch]
The proof is by induction on the structure
of functions in $\mathcal{POR}$.
\emph{Base Case.} Each basic function is
$\Sigma^b_1$-representable in $\textsf{\textbf{RS}}^1_2$.
There are five possible sub-cases:
\begin{itemize}
\itemsep0em
\item \emph{Empty (String) Function.}
$f=E$ is $\Sigma^b_1$-represented in $\textsf{\textbf{RS}}^1_2$ by
the formula:
$$
F_{E}(x,y) : x=x \wedge y=\epsilon.
$$
\begin{enumerate}
\itemsep0em
\item
Existence is proved considering
$y=\epsilon$.
By the reflexivity of identity, both
$\textsf{\textbf{RS}}^1_2\vdash x=x$ and $\textsf{\textbf{RS}}^1_2 \vdash \epsilon=\epsilon$
hold.
So, by rules for conjunction,
we obtain $\textsf{\textbf{RS}}^1_2\vdash x=x \wedge \epsilon=\epsilon$.
We conclude
$$
\textsf{\textbf{RS}}^1_2\vdash (\forall x)(\exists y)(x=x \wedge y=\epsilon).
$$
\item Uniqueness is proved assuming $\textsf{\textbf{RS}}^1_2 \vdash
x= x \wedge z=\epsilon$.
By rules for conjunction,
in particular $\textsf{\textbf{RS}}^1_2 \vdash z=\epsilon$, and since
$\textsf{\textbf{RS}}^1_2 \vdash y=\epsilon$,
by the transitivity of identity, we conclude
$$
\textsf{\textbf{RS}}^1_2\vdash y=z.
$$
\item Assume $E(\sigma, \eta^*) = \tau$.
If $\tau=\bm{\epsilon}$, then
\begin{align*}
\model{\overline{\sigma} = \overline{\sigma}
\wedge \overline{\tau} = \epsilon}
&=
\model{\overline{\sigma} = \overline{\sigma}}
\cap
\model{\overline{\tau} = \epsilon} \\
&= \mathbb O \cap \mathbb O \\
&= \mathbb O.
\end{align*}
So, in this case, for any $\eta^*$,
$\eta^* \in \model{\overline{\sigma}=\overline{\sigma}
\wedge \overline{\tau} = \epsilon}$,
as clearly $\eta^* \in\mathbb O$.
If $\tau\neq \bm{\epsilon}$, then
\begin{align*}
\model{\overline{\sigma} = \overline{\sigma} \wedge
\overline{\tau} = \epsilon}
&=
\model{\overline{\sigma} = \overline{\sigma}}
\cap \model{\overline{\tau} =\epsilon} \\
&= \mathbb O \cap \emptyset \\
&= \emptyset.
\end{align*}
So, for any $\eta^*$,
$\eta^* \not\in \model{\overline{\sigma}= \overline{\sigma}
\wedge \overline{\tau} = \epsilon}$,
as clearly $\eta^* \not\in \emptyset$.
\end{enumerate}
\item \emph{Projection (String) Function.}
$f=P^n_i$, for $1\leq i\leq n$,
is $\Sigma^b_1$-represented in $\textsf{\textbf{RS}}^1_2$ by
the formula:
$$
F_{P^{n}_{i}}(x_1,\dots, x_n,y) : \bigwedge_{j\in J}(x_j=x_j)
\wedge y= x_i,
$$
where $J=\{1,\dots, n\}\setminus \{i\}$.
\item \emph{Word-Successor Function.}
$f=S_b$ is $\Sigma^b_1$-represented
in $\textsf{\textbf{RS}}^1_2$ by the formula:
$$
F_{S_b}(x,y) : y= x\mathtt{b}
$$
where $\mathtt{b}=\mathtt{0}$ if $b=\mathbb{0}$
and $\mathtt{b}=\mathtt{1}$ if $b=\mathbb{1}$.
\item \emph{Conditional (String) Function.}
$f=C$ is $\Sigma^b_1$-represented
in $\textsf{\textbf{RS}}^1_2$ by the formula:
\begin{align*}
F_{C} (x,v,z_0,z_1,y) :
(x=\epsilon \wedge y=v)
&\vee (\exists x' \preceq x)
(x=x'\mathtt{0} \wedge y=z_0) \\
&\vee
(\exists x'\preceq x)
(x=x'\mathtt{1} \wedge y=z_1).
\end{align*}
\item \emph{Query (String) Function.}
$f=Q$ is $\Sigma^b_1$-represented
in $\textsf{\textbf{RS}}^1_2$ by the formula:
$$
F_{Q}(x,y) :
(\mathtt{Flip}(x) \wedge y=\mathtt{1}) \vee
(\neg \mathtt{Flip}(x) \wedge y=\mathtt{0}).
$$
Notice that, in this case, the proof crucially
relies on the fact that oracle functions
invoke \emph{exactly one} oracle.
\begin{enumerate}
\itemsep0em
\item Existence is proved by cases.
If $\textsf{\textbf{RS}}^1_2 \vdash \mathtt{Flip}(x)$,
let $y=\mathtt{1}$.
By the reflexivity of identity
$\textsf{\textbf{RS}}^1_2 \vdash \mathtt{1} = \mathtt{1}$ holds,
so also $\textsf{\textbf{RS}}^1_2 \vdash \mathtt{Flip}(x) \wedge \mathtt{1}=\mathtt{1}$.
By rules for disjunction, we conclude
$\textsf{\textbf{RS}}^1_2\vdash (\mathtt{Flip}(x) \wedge \mathtt{1} = \mathtt{1})
\vee (\neg \mathtt{Flip}(x) \wedge \mathtt{1}= \mathtt{0})$
and so,
$$
\textsf{\textbf{RS}}^1_2 \vdash (\exists y)\big((\mathtt{Flip}(x) \wedge y=\mathtt{1})
\vee (\neg \mathtt{Flip}(x) \wedge y=\mathtt{0})\big).
$$
If $\textsf{\textbf{RS}}^1_2 \vdash \neg \mathtt{Flip}(x)$,
let $y=\mathtt{0}$.
By the reflexivity of identity
$\textsf{\textbf{RS}}^1_2 \vdash \mathtt{0}=\mathtt{0}$ holds.
Thus, by the rules for conjunction,
$\textsf{\textbf{RS}}^1_2 \vdash \neg \mathtt{Flip}(x) \wedge \mathtt{0}=\mathtt{0}$
and for disjunction,
we conclude $\textsf{\textbf{RS}}^1_2 \vdash(\mathtt{Flip}(x) \wedge \mathtt{0}=\mathtt{1})
\vee (\neg \mathtt{Flip}(x) \wedge \mathtt{0}=\mathtt{0})$ and so,
$$
\textsf{\textbf{RS}}^1_2 \vdash (\exists y) \big((\mathtt{Flip}(x) \wedge
y= \mathtt{1}) \vee
(\neg \mathtt{Flip}(x) \wedge y=\mathtt{0})\big).
$$
\item Uniqueness is established
relying on the transitivity of identity.
\item Finally, it is shown that for every
$\sigma,\tau\in \mathbb S$ and
$\eta^* \in \mathbb O$,
$Q(\sigma,\eta^*)=\tau$ if and only if
$\eta^*\in \model{F_{Q}
(\overline{\sigma},\overline{\tau})}$.
Assume $Q(\sigma,\eta^*) = \mathbb{1}$,
which is $\eta^*(\sigma)=\mathbb{1}$,
\begin{align*}
\model{F_Q(\overline{\sigma},\overline{\tau})}
&=
\model{\mathtt{Flip}(\overline{\sigma}) \wedge
\overline{\tau}=\mathtt{1}}
\cup
\model{\neg \mathtt{Flip}(\overline{\sigma})
\wedge \overline{\tau}=\mathtt{0}} \\
&= (\model{\mathtt{Flip}(\overline{\sigma})} \cap
\model{\mathtt{1}=\mathtt{1}}) \cup
(\model{\neg \mathtt{Flip}(\overline{\sigma})}
\cap \model{\mathtt{1}=\mathtt{0}}) \\
&= (\model{\mathtt{Flip}(\overline{\sigma})} \cap \mathbb O)
\cup
(\model{\neg \mathtt{Flip}(\overline{\sigma})} \cap
\emptyset) \\
&= \model{\mathtt{Flip}(\overline{\sigma})} \\
&= \{\eta \ | \ \eta(\sigma)=\mathbb{1}\}.
\end{align*}
\normalsize
Clearly, $\eta^* \in \model{(\mathtt{Flip}(\overline{\sigma}) \wedge \overline{\tau} = \mathtt{1}) \vee (\neg \mathtt{Flip}(\overline{\sigma}) \wedge \overline{\tau}=\mathtt{0})}$.
The case $Q(\sigma,\eta^*)=\mathbb{0}$
and the opposite direction are proved in a similar way.
\end{enumerate}
\end{itemize}
\emph{Inductive Case.}
If $f$ is defined by composition or bounded recursion
from $\Sigma^b_1$-representable functions,
then $f$ is $\Sigma^b_1$-representable in $\textsf{\textbf{RS}}^1_2$:
\begin{itemize}
\itemsep0em
\item \emph{Composition.}
Assume that $f$ is defined by composition from functions
$g,h_1,\dots, h_k$ so that
$$
f(\vec{x},\eta) = g(h_1(\vec{x},\eta), \dots, h_k(\vec{x},\eta),
\eta)
$$
and that $g,h_1,\dots, h_k$ are represented in $\textsf{\textbf{RS}}^1_2$
by the $\Sigma^b_1$-formulas
$F_g,F_{h_1},\dots, F_{h_k}$, respectively.
By Proposition~\ref{prop:Parikh},
there exist suitable terms $t_{g}, t_{h_1},\dots, t_{h_k}$
such that condition 1. of Definition~\ref{df:representability}
can be strengthened to $\textsf{\textbf{RS}}^1_2\vdash(\forall \vec{x})
(\exists y \preceq t_i)F_i(\vec{x},y)$ for each $i\in\{g,h_1,\dots, h_k\}$.
We conclude that $f(\vec{x},\eta)$ is $\Sigma^b_1$-represented
in $\textsf{\textbf{RS}}^1_2$ by the following formula:
\begin{align*}
F_f(\vec{x},y) : (\exists z_1\preceq t_{h_1}(\vec{x}))
\dots
(\exists z_k \preceq t_{h_k}(\vec{x}))
\big(F_{h_1}(\vec{x},z_1) &\wedge \dots \wedge
F_{h_k}(\vec{x},z_k) \\
&\wedge
F_g(z_1,\dots, z_k,y)\big).
\end{align*}
Indeed, by IH,
$F_g,F_{h_1},\dots, F_{h_k}$ are $\Sigma^b_1$-formulas.
Then, also $F_f$ is $\Sigma^b_1$.
Conditions 1.-3. are proved to hold by slightly modifying standard
proofs.
\item \emph{Bounded Recursion.} Assume that $f$
is defined by bounded recursion from
$g,h_0,$ and $h_1$ so that:
\begin{align*}
f(\vec{x},\bm{\epsilon},\eta) &= g(\vec{x},\eta) \\
f(\vec{x}, y\mathbb{b}, \eta) &= h_i(\vec{x},y,f(\vec{x},y,\eta),\eta)|_{t(\vec{x},y)},
\end{align*}
where $i \in\{0,1\}$,
$\mathbb{b}=\mathbb{0}$ when $i=0$, and
$\mathbb{b} = \mathbb{1}$ when $i=1$.
Let $g,h_0, h_1$ be represented in $\textsf{\textbf{RS}}^1_2$
by, respectively, the $\Sigma^b_1$-formulas
$F_g,F_{h_0},$ and $F_{h_1}$.
Moreover, by Proposition~\ref{prop:Parikh},
there exist suitable terms $t_g,t_{h_0},$ and $t_{h_1}$
such that condition 1. of Definition~\ref{df:representability}
can be strengthened to its ``bounded version''.
Then, it can be proved that $f(\vec{x},y)$
is $\Sigma^b_1$-represented in $\textsf{\textbf{RS}}^1_2$ by the formula below:
\small
\begin{align*}
F_f(\vec{x},y) : \ &(\exists v \preceq t_g(\vec{x})
t_f(\vec{x}) (y \times t(\vec{x},y) t(\vec{x},y) \mathtt{1}\oone))
\big(F_{lh}(v, \mathtt{1} \times y \mathtt{1}) \\
& \wedge (\exists z\preceq t_g(\vec{x}))
(F_{eval}(v,\epsilon,z) \wedge
F_g(\vec{x},z)) \\
& \wedge (\forall u \subset y)(\exists z)(\exists \tilde{z} \preceq t(\vec{x},y))
\big(F_{eval}(v,\mathtt{1} \times u, z) \wedge
F_{eval}(v,\mathtt{1} \times u\mathtt{1},\tilde{z}) \\
& \wedge (u\mathtt{0} \subseteq y \rightarrow
(\exists z_0 \preceq t_{h_0}(\vec{x},u,z))
(F_{h_0}(\vec{x},u,z,z_0)
\wedge z_0|_{t(\vec{x},u)} = \tilde{z})) \\
& \wedge (u\mathtt{1} \subseteq y \rightarrow
(\exists z_1 \preceq t_{h_1}(\vec{x},u,z))
(F_{h_1}(\vec{x},u,z,z_1) \wedge
z_1|_{t(\vec{x},u)} = \tilde{z}))\big)\big),
\end{align*}
\normalsize
where $F_{lh}$ and $F_{eval}$ are $\Sigma^b_1$-formulas
defined as in~\cite{Ferreira90}.
Intuitively, $F_{lh}(x,y)$ states that the number of $\mathtt{1}$s
in the encoding of $x$ is $yy$,
while $F_{eval}(x,y,z)$ is a ``decoding'' formula
(strongly resembling G\"odel's $\beta$-formula),
expressing that the ``bit'' encoded in $x$ as its $y$-th bit
is $z$.
Moreover $x\subset y$ is an abbreviation for $x\subseteq y
\wedge x \neq y$.
Then, this formula $F_f$ satisfies all the requirements to
$\Sigma^b_1$-represent in $\textsf{\textbf{RS}}^1_2$ the function $f$,
obtained by bounded recursion from $g,h_0,$ and $h_1$.
In particular, conditions 1. and 2. concerning existence and
uniqueness,
have already been proved to hold by Ferreira~\cite{Ferreira90}.
Furthermore, $F_f$ expresses that,
given the desired encoding sequence $v$:
(i.) the $\bm{\epsilon}$-th bit of $v$ is (the encoding
of) $z'$ such that $F_g(\vec{x},z')$ holds,
where (for IH) $F_g$ is the $\Sigma^b_1$-formula
representing the function $g$,
and (ii.) given that for each $u\subset y$,
$z$ denotes the ``bit'' encoded in $v$ at position
$\mathtt{1}\times u$ and, similarly, $\tilde{z}$ is the next ``bit'',
encoded in $v$ at position $\mathtt{1} \times u\mathtt{1}$,
then if $u\mathtt{b} \subseteq y$
(that is, if we are considering the initial substring of $y$
the last bit of which corresponds to $\mathtt{b}$),
then there is a $z_b$ such that
$F_{h_b}(\vec{x},u,z,z_b)$, where $F_{h_b}$ $\Sigma^b_1$-represents
the function $h_b$ and the truncation of $z_b$
at $t(\vec{x},u)$ is precisely
$\tilde{z}$, with $b=0$ when $\mathtt{b}=\mathtt{0}$ and
$b=1$ when $\mathtt{b}=\mathtt{1}$.\footnote{Otherwise said,
if $u\mathtt{0}\subseteq y$, there is a $z_0$ such that
the $\Sigma^b_1$-formula $F_{h_0}(\vec{x},
u,z,z_0)$ represents the function $h_0$
and, in this case, $\tilde{z}$ corresponds
to the truncation of $z_0$ at $t(\vec{x},u)$,
that is the ``bit'' encoded by $v$ at the position
$\mathtt{1} \times u\mathtt{1}$ (i.e.~corresponding to
$u\mathtt{0} \subseteq y$) is precisely such $\tilde{z}$.
Equally, if $u\mathtt{1}\subseteq y$, there is a
$z_1$ such that the $\Sigma^b_1$-formula
$F_{h_1}(\vec{x},u,z,z_1)$ now represents the function
$h_1$ and $\tilde{z}$ corresponds to the truncation
of $z_1$ at $t(\vec{x},u)$,
that is the ``bit'' encoded by $v$ at position
$\mathtt{1}\times u\mathtt{1}$ (i.e.~corresponding to $u\mathtt{1}
\subseteq y)$ is precisely such $\tilde{z}$.}
\end{itemize}
\end{proof}
\subsection{The functions which are $\Sigma^b_1$-Representable
in $\textsf{\textbf{RS}}^1_2$ are in $\mathcal{POR}$}\label{sec:RStoPOR}
\input{TaskB}
\section{Relating $\mathcal{POR}$ and Poly-Time PTMs}\label{sec:PORandPTM}
\input{taskC}
\subsubsection{The System $\POR^\lambda$}
We define an equational theory for a simply
typed $\lambda$-calculus augmented with
primitives for functions of $\mathcal{POR}$.
Actually, these do not
exactly correspond to the ones of $\mathcal{POR}$,
although the resulting function algebra is proved
equivalent.\footnote{Our choice follows the principle
that the defining equations for the functions
different from the recursion operator should not
depend on it.
}
\paragraph{The Syntax of $\POR^\lambda$.}
We start by considering the syntax of $\POR^\lambda$.
\begin{defn}[Types of $\POR^\lambda$]
\emph{Types of $\POR^\lambda$} are defined by the
grammar below:
$$
\sigma := s \; \; \mbox{\Large{$\mid$}}\;\; \sigma \Rightarrow
\sigma.
$$
\end{defn}
\begin{defn}[Terms of $\POR^\lambda$]\label{df:termsPORl}
\emph{Terms of $\POR^\lambda$} are standard,
simply typed $\lambda$-terms plus the
constants below:
\begin{align*}
\term{0}, \term{1}, \term{\epsilon} &: s \\
\circ &: s \Rightarrow s \Rightarrow s \\
\term{Tail} &: s \Rightarrow s \\
\term{Trunc} &: s \Rightarrow s \Rightarrow s \\
\term{Cond} &: s \Rightarrow s \Rightarrow s
\Rightarrow s \Rightarrow s \\
\term{Flipcoin} &:
s \Rightarrow s \\
\term{Rec} &:
s \Rightarrow (s \Rightarrow s \Rightarrow s)
\Rightarrow
(s \Rightarrow s \Rightarrow s)
\Rightarrow
(s \Rightarrow s) \Rightarrow s
\Rightarrow s.
\end{align*}
\end{defn}
\noindent
Intuitively, $\term{Tail}(x)$ computes the
string obtained by deleting the last digit
of $x$;
$\term{Trunc}(x,y)$ computes the string
obtained by truncating $x$ at the length
of $y$;
$\term{Cond}(x,y,z,w)$ computes
the function that yields $y$ when $x=\bm{\epsilon}$,
$z$ when $x=x'\mathbb{0}$,
and $w$ when $x=x'\mathbb{1}$;
$\term{Flipcoin}(x)$ indicates a random
$\mathbb{0}/\mathbb{1}$ generator;
$\term{Rec}$ is the operator for bounded recursion
on notation.
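To make these readings concrete, here is a small Python sketch (our own illustration, not part of the formal development; all names are ours). Strings grow to the right, so $\term{Tail}$ and $\term{Cond}$ inspect the rightmost digit, as prescribed by the defining axioms stated later:

```python
# Illustrative semantics for some POR^lambda constants over binary strings
# (Python str over {'0','1'}); function names are ours, not the paper's.

def tail(x: str) -> str:
    # Tail(epsilon) = epsilon; Tail(xb) = x: drop the rightmost digit.
    return x[:-1]

def trunc(x: str, y: str) -> str:
    # Trunc(x, epsilon) = Trunc(epsilon, x) = epsilon;
    # Trunc(xb, yc) = Trunc(x, y)b: truncate x to the length of y.
    if x == "" or y == "":
        return ""
    return trunc(x[:-1], y[:-1]) + x[-1]

def cond(x: str, y: str, z: str, w: str) -> str:
    # Cond dispatches on whether x is empty, or ends in '0' or '1'.
    if x == "":
        return y
    return z if x[-1] == "0" else w
```

Note that `trunc` keeps at most `len(y)` digits of `x`, which is all that matters for the length bound in bounded recursion.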
\begin{notation}
We usually denote $x\circ y$ simply as $xy$.
Moreover, to enhance readability,
for any constant $\term{T}$ among $\term{Tail},
\term{Trunc}, \term{Cond}, \term{Flipcoin},
\term{Rec}$ of arity $n$,
we write $\term{T}\,\term{u}_1 \dots \term{u}_n$
as $\term{T}(\term{u}_1,\dots, \term{u}_n)$.
\end{notation}
$\POR^\lambda$ is reminiscent of $\textsf{PV}^\omega$ by Cook and Urquhart~\cite{CookUrquhart}
without the induction rule (R5) that we do not need.
The main difference is the constant $\term{Flipcoin}$,
which, as said, intuitively denotes a function
that randomly generates either $\mathbb{0}$ or
$\mathbb{1}$ when it reads a string.\footnote{These interpretations will be made clear by
Definition~\ref{df:provRepr} below.}
\begin{remark}
In the following, we often define terms
implicitly using bounded recursion on notation.
Otherwise said, we define new terms, say
$\term{F} : s \Rightarrow \dots \Rightarrow s$, by equations
of the form:
\begin{align*}
\term{F} \vec{x} \term{\epsilon} &:=
\term{G} \vec{x} \\
\term{F} \vec{x}(y\term{0}) &:=
\term{H}_0 \vec{x} y(\term{F}\vec{x}y) \\
\term{F}\vec{x}(y\term{1}) &:=
\term{H}_1 \vec{x} y(\term{F}\vec{x}y),
\end{align*}
where $\term{G}, \term{H}_0, \term{H}_1$
are already-defined terms,
and the second and third equations
satisfy a length bound given by some term $\term{K}$
(which is usually $\lambda \vec{x} \lambda y.\term{0}$).
The term $\term{F}$ can be explicitly defined as
follows:
$$
\term{F} := \lambda \vec{x}. \lambda y.
\term{Rec}(\term{G}\vec{x}, \lambda yy'.
\term{H}_0 \vec{x}yy', \lambda y y'. \term{H}_1
\vec{x}yy', \term{K}\vec{x},y).
$$
\end{remark}
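The unfolding prescribed by the defining axioms of $\term{Rec}$ (given in the next definition) can be mimicked directly. The following Python sketch (ours, purely illustrative) renders bounded recursion on notation: each step function receives the predecessor string and the recursive value, and every step is truncated at the length of the bound:

```python
# A sketch of bounded recursion on notation, mirroring the Rec axioms:
#   Rec(g, h0, h1, k, eps) = g
#   Rec(g, h0, h1, k, yb)  = Trunc(h_b(y, Rec(g, h0, h1, k, y)), k(y))

def trunc(x: str, y: str) -> str:
    # Keep at most len(y) digits of x (as prescribed by the Trunc axioms).
    if x == "" or y == "":
        return ""
    return trunc(x[:-1], y[:-1]) + x[-1]

def rec(g, h0, h1, k, y: str) -> str:
    if y == "":
        return g
    h = h0 if y[-1] == "0" else h1
    prev = y[:-1]
    return trunc(h(prev, rec(g, h0, h1, k, prev)), k(prev))

# The identity on strings, rebuilt by bounded recursion: each step re-appends
# the scrutinized digit, with a bound long enough not to cut anything off.
identity = lambda y: rec("",
                         lambda u, r: r + "0",
                         lambda u, r: r + "1",
                         lambda u: "1" * (len(u) + 1),
                         y)
```

With a too-short bound (e.g.\ the constant $\term{0}$) the very same recursion is cut down at every step, which is exactly the role the bounding term plays.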
\noindent
We also introduce the following abbreviations for composed
functions:
\begin{itemize}
\itemsep0em
\item $\term{B}(x) := \term{Cond}(x,\epsilon,\term{0},
\term{1})$ denotes the function that computes the last
digit of $x$,
i.e.~coerces $x$ to a Boolean value
\item $\term{BNeg}(x) := \term{Cond}(x,\epsilon,\term{1},\term{0})$ denotes the function that
computes the Boolean negation of $\term{B}(x)$
\item $\term{BOr}(x,y) := \term{Cond}(\term{B}(x),
\term{B}(y), \term{B}(y), \term{1})$ denotes
the function that coerces $x$ and $y$ to Booleans
and then performs the OR operation
\item $\term{BAnd}(x,y) := \term{Cond}(\term{B}(x),
\epsilon, \term{0}, \term{B}(y))$ denotes
the function that coerces $x$ and $y$ to Booleans
and then performs the AND operation
\item $\term{Eps}(x) := \term{Cond}(x,\term{1},\term{0}, \term{0})$ denotes the characteristic
function of the predicate ``$x=\epsilon$''
\item $\term{Bool}(x) := \term{BAnd}(\term{Eps}(\term{Tail}(x)), \term{BNeg}(\term{Eps}(x)))$
denotes the characteristic function of the predicate
``$x=\mathtt{0} \vee x=\mathtt{1}$''
\item $\term{Zero}(x) := \term{Cond}(\term{Bool}(x),
\term{0}, \term{Cond}(x,\term{0},\term{0},\term{1}),
\term{0})$ denotes the characteristic function
of the predicate ``$x=\mathtt{0}$''
\item $\term{Conc}(x,y)$ denotes the concatenation
function defined by the equations below
\begin{align*}
\term{Conc}(x,\epsilon) := x \ \ \ \ \ \ \ \
\term{Conc}(x,y\term{b}) := \term{Conc}(x,y)\term{b},
\end{align*}
with $\term{b}\in\{\term{0},\term{1}\}$.
\item $\term{Eq}(x,y)$ denotes the characteristic
function of the predicate ``$x=y$'' and is defined
by double recursion via the equations below:
\begin{align*}
\term{Eq}(\epsilon,\epsilon) &:= \term{1} \ \ \ \ \ \ \ \
\term{Eq}(\epsilon, y\term{b}) := \term{0} \\
\term{Eq}(x\term{b}, \epsilon)
= \term{Eq}(x\term{0},y\term{1})
= \term{Eq}(x\term{1},y\term{0}) &:=
\term{0}
\ \ \ \ \ \
\term{Eq}(x\term{b}, y\term{b})
:= \term{Eq}(x,y),
\end{align*}
with $\term{b} \in \{\term{0}, \term{1}\}$
\item $\term{Times}(x,y)$ denotes the function
for self-concatenation, $x,y\mapsto x\bm{\times} y$,
and is defined by the equations below:
$$
\term{Times}(x,\epsilon) := \epsilon \ \ \ \ \ \ \ \
\term{Times}(x,y\term{b}) :=
\term{Conc}(\term{Times}(x,y), x),
$$
with $\term{b}\in\{\term{0},\term{1}\}$.
\item $\term{Sub}(x,y)$ denotes the initial-substring
function, $x,y \mapsto S(x,y)$,
and is defined by bounded recursion as follows:
$$
\term{Sub}(x,\epsilon) := \term{Eps}(x) \ \ \ \
\ \ \ \
\term{Sub}(x,y\term{b}) := \term{BOr}(
\term{Sub}(x,y), \term{Eq}(x,y\term{b})),
$$
with $\term{b}\in \{\term{0},\term{1}\}$.
\end{itemize}
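As a sanity check on these abbreviations, the following Python sketch (our own rendering, with lowercase names standing for the corresponding terms) transcribes the recursive equations for $\term{Eps}$, $\term{Eq}$, and $\term{Sub}$ and confirms their reading as characteristic functions:

```python
# Characteristic functions over binary strings, transcribing the equations
# for Eps, Eq, and Sub; '1' plays the role of "true" and '0' of "false".

def eps(x: str) -> str:
    # Characteristic function of "x = epsilon".
    return "1" if x == "" else "0"

def eq(x: str, y: str) -> str:
    # Eq(eps, eps) = 1; all mismatching cases yield 0; Eq(xb, yb) = Eq(x, y).
    if x == "" and y == "":
        return "1"
    if x == "" or y == "" or x[-1] != y[-1]:
        return "0"
    return eq(x[:-1], y[:-1])

def sub(x: str, y: str) -> str:
    # Sub(x, eps) = Eps(x); Sub(x, yb) = BOr(Sub(x, y), Eq(x, yb)).
    if y == "":
        return eps(x)
    return "1" if sub(x, y[:-1]) == "1" or eq(x, y) == "1" else "0"
```

For instance, `sub` recognizes exactly the initial substrings of its second argument, including the empty one.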
\begin{defn}[Formulas of $\POR^\lambda$]
\emph{Formulas of $\POR^\lambda$} are all equations
$\term{t} = \term{u}$, where $\term{t}$
and $\term{u}$ are terms of type $s$.
\end{defn}
\paragraph{The Theory $\POR^\lambda$.}
We now introduce the theory $\POR^\lambda$.
\begin{defn}[Theory $\POR^\lambda$]
Axioms of $\POR^\lambda$ are the following ones:
\begin{itemize}
\item Defining equations for the constants of $\POR^\lambda$:
\small
\begin{align*}
\epsilon x= x\epsilon = x \ \ \ \ \ \ & \ \ \ \ \ \
x(y\term{b}) = (xy)\term{b} \\
\\
\term{Tail}(\epsilon) = \epsilon \ \ \ \ \ \ & \ \ \ \ \ \
\term{Tail}(x\term{b}) = x \\
\\
\term{Trunc}(x,\epsilon) =
\term{Trunc}(\epsilon, x) &= \epsilon \\
\term{Trunc}(x\term{b}, y\term{0}) =
\term{Trunc}(x\term{b}, y\term{1})
&=
\term{Trunc}(x,y)\term{b} \\
\\
\term{Cond}(\epsilon, y,z,w) = y \ \ \ \ \ \ \
\term{Cond}(x\term{0}, y,z,w) &= z \ \ \ \ \ \ \
\term{Cond}(x\term{1},y,z,w) = w \\
\\
\term{Bool}(\term{Flipcoin}(x)) &= \term{1} \\
\\
\term{Rec}(x,h_0,h_1,k,\epsilon) &= x \\
\term{Rec}(x,h_0,h_1,k,y\term{b}) &=
\term{Trunc}(h_by(\term{Rec}(x,h_0,h_1,k,y)),ky), \\
\end{align*}
\normalsize
where $\term{b}\in\{\term{0},\term{1}\}$ and
$b\in\{0,1\}$.\footnote{If $\term{b}=\term{0}$,
then $b=0$; if $\term{b}=\term{1}$, then $b=1$.}
\item The $(\beta)$- and $(\nu)$-axioms:
\small
\begin{align*}
\textsf{C} \big[(\lambda x.\term{t})\term{u}\big]
&=
\textsf{C} \big[\term{t}\{\term{u}/x\}\big]
\tag{$\beta$} \\
\textsf{C}\big[\lambda x.\term{t} x\big] &=
\textsf{C}\big[\term{t}\big] \tag{$\nu$}.
\end{align*}
\normalsize
where $\textsf{C}\big[\cdot\big]$ indicates a context
with a unique occurrence of the hole $\big[ \ \big]$,
so that $\textsf{C}\big[\term{t}\big]$
denotes the variable capturing replacement of
$\big[ \ \big]$ by $\term{t}$ in $\textsf{C}\big[ \ \big]$.
\end{itemize}
The inference rules of $\POR^\lambda$ are the following
ones:
\small
\begin{align*}
\term{t} = \term{u} &\vdash \term{u} = \term{t}
\tag{R1} \\
\term{t} = \term{u}, \term{u} = \term{v} &\vdash
\term{t} = \term{v} \tag{R2} \\
\term{t} = \term{u} &\vdash \term{v}\{\term{t}/x\}
= \term{v}\{\term{u}/x\} \tag{R3} \\
\term{t} = \term{u} &\vdash
\term{t}\{\term{v}/x\} =
\term{u}\{\term{v}/x\}. \tag{R4}
\end{align*}
\normalsize
\end{defn}
\noindent
As expected, $\vdash_{\POR^\lambda} \term{t}=\term{u}$
expresses that the equation $\term{t}=\term{u}$
is deducible using instances of the axioms above plus
inference rules (R1)-(R4).
Similarly, given any set $\emph{T}$ of equations,
$\emph{T} \vdash_{\POR^\lambda} \term{t}=\term{u}$
expresses that the equation $\term{t}=\term{u}$
is deducible using instances of the quoted axioms and
rules together with equations from $\emph{T}$.
\paragraph{Relating $\mathcal{POR}$ and $\POR^\lambda$.}
For any string $\sigma \in\mathbb S $, let $\ooverline{\sigma}:s$
denote the term of $\POR^\lambda$ corresponding
to it, that is:
\begin{center}
$\ooverline{\bm{\epsilon}}=\epsilon \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \
\ooverline{\sigma\mathbb{0}} = \ooverline{\sigma}\term{0} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \
\ooverline{\sigma \mathbb{1}} = \ooverline{\sigma} \term{1}.$
\end{center}
For any $\eta\in \mathbb O$, let $\emph{T}_\eta$
be the set of all equations of the form
$\term{Flipcoin}(\ooverline{\sigma})= \ooverline{\eta(\sigma)}$.
\begin{defn}[Provable Representability]\label{df:provRepr}
Let $f:\mathbb O\times \mathbb S^j \to \mathbb S$.
A term $\term{t}: s \Rightarrow \dots \Rightarrow s$
of $\POR^\lambda$ \emph{provably represents f}
when for all strings $\sigma_1,\dots, \sigma_j, \sigma \in\mathbb S$,
and $\eta\in\mathbb O$,
$$
f(\sigma_1,\dots, \sigma_j,\eta) = \sigma
\ \ \Leftrightarrow \ \
\emph{T}_\eta \vdash_{\POR^\lambda} \term{t}
\ooverline{\sigma_1} \dots \ooverline{\sigma_j} = \ooverline{\sigma}.
$$
\end{defn}
\begin{ex}\label{ex:Flipcoin}
The term $\term{Flipcoin} : s \Rightarrow s$ provably
represents the query function $Q(x,\eta)=\eta(x)$ of $\mathcal{POR}$,
since for any $\sigma\in \mathbb S$ and $\eta \in \mathbb O$,
$$
\term{Flipcoin}(\ooverline{\sigma}) =
\ooverline{\eta(\sigma)} \vdash_{\POR^\lambda}
\term{Flipcoin}(\ooverline{\sigma}) = \ooverline{Q(\sigma,\eta)}.
$$
\end{ex}
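The role of the oracle can be pictured concretely. In the sketch below (ours, purely illustrative), an oracle $\eta\in\mathbb O$ is any function from binary strings to bits, the set $\emph{T}_\eta$ amounts to tabulating its graph, and $\term{Flipcoin}$ simply queries it:

```python
# An oracle eta maps binary strings to single bits; Flipcoin queries it,
# so Flipcoin(sigma) = Q(sigma, eta) = eta(sigma).

def make_flipcoin(eta):
    def flipcoin(x: str) -> str:
        # The query function Q of POR, relative to the fixed oracle eta.
        return eta(x)
    return flipcoin

# An arbitrary oracle, chosen only for illustration: parity of '1's in x.
eta = lambda x: "1" if x.count("1") % 2 == 1 else "0"
flipcoin = make_flipcoin(eta)
```

Randomness is thus modeled by fixing $\eta$ once and for all: each choice of $\eta$ determines one deterministic behavior of $\term{Flipcoin}$, and the axioms in $\emph{T}_\eta$ record exactly that behavior.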
\noindent
We now consider some of the terms described above
and show them to
provably represent the intended functions.
Let $Tail(\sigma,\eta)$ indicate the string obtained by
deleting the last digit of $\sigma$,
and
$Trunc(\sigma_1,\sigma_2,\eta) = \sigma_1 |_{\sigma_2}$.
\begin{lemma}
Terms $\term{Tail}, \term{Trunc}$ and $\term{Cond}$
provably represent the functions $Tail,$ $Trunc,$
and $C$, respectively.
\end{lemma}
\begin{proof}[Proof Sketch]
For $\term{Tail}$ and $\term{Cond}$, the claim follows immediately from the defining axioms
of the corresponding constants.
For example, if $\sigma_1 = \sigma_2\mathbb{0}$,
then
$Tail(\sigma_1,\eta)=\sigma_2$ (and
$\ooverline{\sigma_1}=\ooverline{\sigma_2\mathbb{0}}
= \ooverline{\sigma_2}\term{0}$).
Using the defining axioms of $\term{Tail}$,
$$
\vdash_{\POR^\lambda} \term{Tail}(\ooverline{\sigma_1}) =
\term{Tail}(\ooverline{\sigma_2 \mathbb{0}}) = \ooverline{\sigma_2}.
$$
For $Trunc$, by double induction on
$\sigma_1,\sigma_2\in \mathbb S$ we conclude:
$$
\vdash_{\POR^\lambda} \term{Trunc}(\ooverline{\sigma_1},\ooverline{\sigma_2}) =
\ooverline{\sigma_1 |_{\sigma_2}}.
$$
\end{proof}
\noindent
We generalize this result by Theorem~\ref{thm:PRepr} below.
\begin{theorem}\label{thm:PRepr}
\begin{enumerate}
\itemsep0em
\item Any function $f\in\mathcal{POR}$ is provably represented
by a term $\term{t} \in \POR^\lambda$.
\item For any term $\term{t}\in\POR^\lambda$,
there is a function $f\in\mathcal{POR}$
such that $f$ is provably represented by $\term{t}$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof Sketch] (1.)
The proof is by induction on the structure of
$f\in\mathcal{POR}$:
\emph{Base Case.}
Each base function is provably representable.
Let us consider two examples only:
\begin{itemize}
\itemsep0em
\item \emph{Empty Function.}
$f=E$ is provably represented by
the term $\lambda x.\epsilon$.
For any string $\sigma\in \mathbb S$,
$\ooverline{E(\sigma,\eta)}=\ooverline{\bm{\epsilon}}
= \epsilon$ holds.
Moreover,
$\vdash_{\POR^\lambda} (\lambda x.\epsilon)\ooverline{\sigma}
= \epsilon$
is an instance of the $(\beta)$-axiom.
So, we conclude:
$$
\vdash_{\POR^\lambda} (\lambda x.\epsilon) \ooverline{\sigma}
= \ooverline{E(\sigma,\eta)}.
$$
\item \emph{Query Function.} $f=Q$ is provably represented by
the term $\term{Flipcoin}$, as observed in Example~\ref{ex:Flipcoin}
above.
\end{itemize}
\emph{Inductive case.}
Each function defined by composition
or bounded recursion from provably represented functions,
is provably represented as well.
We consider the case of bounded recursion.
Let $f$ be defined as:
\begin{align*}
f(\sigma_1,\dots, \sigma_n,\bm{\epsilon},\eta) &=
g(\sigma_1,\dots, \sigma_n,\eta) \\
f(\sigma_1,\dots, \sigma_n, \sigma\mathbb{0},\eta)
&= h_0(\sigma_1,\dots, \sigma_n,\sigma, f(\sigma_1,\dots,
\sigma_n,\sigma,\eta),\eta) |_{k(\sigma_1,\dots, \sigma_n,\sigma)} \\
f(\sigma_1, \dots, \sigma_n, \sigma\mathbb{1}, \eta)
&=
h_1(\sigma_1,\dots, \sigma_n,\sigma, f(\sigma_1,\dots,
\sigma_n,\sigma,\eta),\eta) |_{k(\sigma_1,\dots, \sigma_n,\sigma)}.
\end{align*}
By IH, $g,h_0,h_1$, and $k$ are provably represented
by the corresponding terms $\term{t_{g}}, \term{t_{{h}_0}},
\term{t_{{h}_{1}}},\term{t_{k}}$, respectively.
So, for any $\sigma_1,\dots, \sigma_{n+2},\sigma\in\mathbb S$
and $\eta\in \mathbb O$,
we derive:
\begin{align*}
\emph{T}_\eta &\vdash_{\POR^\lambda}
\term{t_g} \ooverline{\sigma_1} \dots \ooverline{\sigma_n}
=
\ooverline{g(\sigma_1,\dots, \sigma_n,\eta)}
\tag{$\term{t_g}$} \\
\emph{T}_\eta &\vdash_{\POR^\lambda}
\term{t_{h_0}} \ooverline{\sigma_1} \dots
\ooverline{\sigma_{n+2}} =
\ooverline{h_0(\sigma_1,\dots, \sigma_{n+2},\eta)}
\tag{$\term{t_{h_0}}$} \\
\emph{T}_\eta &\vdash_{\POR^\lambda}
\term{t_{h_1}}\ooverline{\sigma_1} \dots
\ooverline{\sigma_{n+2}} = \ooverline{h_1(\sigma_1,\dots,
\sigma_{n+2},\eta)}
\tag{$\term{t_{h_1}}$} \\
\emph{T}_{\eta} &\vdash_{\POR^\lambda}
\term{t_k}\ooverline{\sigma_1} \dots \ooverline{\sigma_n}
= \ooverline{k(\sigma_1,\dots, \sigma_n,\eta)}.
\tag{$\term{t_k}$}
\end{align*}
We can prove by induction on $\sigma$ that,
$$
\emph{T}_\eta \vdash_{\POR^\lambda} \term{t_f}
\ooverline{\sigma_1} \dots \ooverline{\sigma_n}
\ooverline{\sigma} =
\ooverline{f(\sigma_1,\dots, \sigma_n,\sigma,\eta)},
$$
where
$$
\term{t_f} := \lambda x_1 \dots \lambda x_n.
\lambda x.\term{Rec}(\term{t_g} x_1 \dots
x_n, \term{t_{h_0}} x_1 \dots x_n,
\term{t_{h_1}} x_1\dots x_n,
\term{t_k}x_1\dots x_n,x).
$$
Then,
\begin{itemize}
\itemsep0em
\item if $\sigma=\bm{\epsilon}$, then
$f(\sigma_1,\dots, \sigma_n,\sigma,\eta)=
g(\sigma_1,\dots, \sigma_n,\eta)$.
Using the $(\beta)$-axiom, we deduce,
$$
\vdash_{\POR^\lambda} \term{t_f} \ooverline{\sigma_1}
...
\ooverline{\sigma_n} \ooverline{\sigma}
= \term{Rec}(\term{t_g} \ooverline{\sigma_1}
... \ooverline{\sigma_n},
\term{t_{h_0}}\ooverline{\sigma_1} ...
\ooverline{\sigma_n},
\term{t_{h_1}} \ooverline{\sigma_1} ...
\ooverline{\sigma_n},
\term{t_k} \ooverline{\sigma_1} ...
\ooverline{\sigma_n}, \ooverline{\sigma})
$$
and using the axiom
$\term{Rec}(\term{t_g}x_1\dots x_n,
\term{t_{h_0}} x_1\dots x_n,
\term{t_{h_1}} x_1\dots x_n,
\term{t_k} x_1\dots$
$x_n,\epsilon)
= \term{t_g}x_1\dots x_n$,
we obtain,
$$
\vdash_{\POR^\lambda} \term{t_f}\ooverline{\sigma_1}
\dots \ooverline{\sigma_n} \ooverline{\sigma}
= \term{t_g}\ooverline{\sigma_1}\dots
\ooverline{\sigma_n},
$$
by (R2) and (R3).
We conclude using ($\term{t_g}$) together
with (R2).
\item if $\sigma=\sigma_m\mathbb{0}$,
then $f(\sigma_1,\dots, \sigma_n,\sigma,\eta)=
h_0(\sigma_1,\dots, \sigma_n,\sigma_m, f(\sigma_1,
\dots,$ $\sigma_n,\sigma_m,\eta),$ $\eta)|_{k(\sigma_1,\dots, \sigma_n,\sigma_m)}$.
By IH,
we suppose,
$$
\emph{T}_\eta \vdash_{\POR^\lambda}
\term{t_f}\ooverline{\sigma_1}\dots
\ooverline{\sigma_n}\ooverline{\sigma_m}
= \ooverline{f(\sigma_1,\dots, \sigma_n,\sigma_m,\eta)}.
$$
Then, using the $(\beta)$-axiom
$\term{t_f} \ooverline{\sigma_1}\dots \ooverline{\sigma_n}
\ooverline{\sigma} = \term{Rec}(\term{t_g}$
$\ooverline{\sigma_1} \dots \ooverline{\sigma_n},
\term{t_{h_0}}\ooverline{\sigma_1} \dots \ooverline{\sigma_n},$
$\term{t_{h_1}} \ooverline{\sigma_1} \dots
\ooverline{\sigma_n}, \term{t_k}\ooverline{\sigma_1}
\dots \ooverline{\sigma_n}, \ooverline{\sigma})$,
the axiom $\term{Rec}(g,h_0,h_1,k,x\term{0})$
= $\term{Trunc}(h_0x (\term{Rec}$ $(g,h_0,h_1,k,x)), kx)$, and IH, we deduce,
$$
\vdash_{\POR^\lambda} \term{t_f}\ooverline{\sigma_1}
... \ooverline{\sigma_n} \ooverline{\sigma}
= \term{Trunc}(\term{t_{h_0}}\ooverline{\sigma_1}
... \ooverline{\sigma_n} \ooverline{\sigma_m}
\ooverline{f(\sigma_1, ..., \sigma_n,\sigma_m,\eta)},
\term{t_k} \ooverline{\sigma_1} ...
\ooverline{\sigma_n})
$$
by (R2) and (R3).
Using ($\term{t_{h_0}}$)
and ($\term{t_k}$)
we conclude using (R3) and (R2):
$$
\vdash_{\POR^\lambda} \term{t_f}
\ooverline{\sigma_1} ... \ooverline{\sigma_n}
\ooverline{\sigma} =
= \ooverline{h_0(\sigma_1,..., \sigma_n,\sigma_m,
f(\sigma_1,..., \sigma_n,\sigma_m,\eta),\eta)|_{k(\sigma_1,...,
\sigma_n,\sigma_m)}}.
$$
\item the case $\sigma=\sigma_m\mathbb{1}$ is proved in
a similar way.
\end{itemize}
(2.) It is a consequence of the normalization
property for the simply typed $\lambda$-calculus:
a $\beta$-normal term $\term{t} : s \Rightarrow \dots
\Rightarrow s$ cannot contain variables of higher
types.
By enumerating possible normal forms one can check
that these all represent functions in $\mathcal{POR}$.
\end{proof}
\begin{cor}\label{cor:provablyRepresented}
For any function $f:\mathbb S^j\times \mathbb O \to \mathbb S$,
$f\in\mathcal{POR}$ if and only if $f$ is provably represented
by some term $\term{t} : s \Rightarrow \dots \Rightarrow s$
of $\POR^\lambda$.
\end{cor}
\subsubsection{The Theory $\mathcal{I}\PORl$}
We introduce a first-order \emph{intuitionistic}
theory, called $\mathcal{I}\PORl$, which extends
$\POR^\lambda$ with basic predicate calculus
and a restricted induction principle.
We also define $\textsf{\textbf{I}}\RS$ as a variant of $\textsf{\textbf{RS}}^1_2$
having the intuitionistic (rather than classical) predicate
calculus as its logical basis.
All theorems of $\POR^\lambda$ and $\textsf{\textbf{I}}\RS$
are provable in $\mathcal{I}\PORl$.
In fact, $\mathcal{I}\PORl$ can be seen as an extension
of $\POR^\lambda$, and
provides a language
to associate derivations in $\textsf{\textbf{I}}\RS$ with
poly-time computable functions
(corresponding to terms of $\mathcal{I}\PORl$).
\paragraph{The Syntax of $\mathcal{I}\PORl$.}
The equational theory $\POR^\lambda$ is rather weak.
In particular, even simple equations,
such as $x=\term{Tail}(x)\term{B}(x)$,
cannot be proved in it.
Indeed, induction is needed:
\begin{align*}
&\vdash_{\POR^\lambda} \epsilon = \epsilon\epsilon
= \term{Tail}(\epsilon) \term{B}(\epsilon) \\
x= \term{Tail}(x) \term{B}(x) &\vdash_{\POR^\lambda}
x\term{0} = \term{Tail}(x\term{0})\term{B}(x\term{0}) \\
x=\term{Tail}(x) \term{B}(x) &\vdash_{\POR^\lambda}
x\term{1} = \term{Tail}(x\term{1})\term{B}(x\term{1}).
\end{align*}
From this we would like to deduce, by induction,
that $x=\term{Tail}(x)\term{B}(x)$.
Thus, we introduce $\mathcal{I}\PORl$, the language of which extends
that of $\POR^\lambda$ with
(a translation for) all expressions of $\textsf{\textbf{RS}}^1_2$.
In particular, the grammar for terms of $\mathcal{I}\PORl$
is precisely the same as that of Definition~\ref{df:termsPORl},
while that for formulas is defined below.
\begin{defn}[Formulas of $\mathcal{I}\PORl$]
\emph{Formulas of $\mathcal{I}\PORl$} are defined
as follows:
(i.) all equations $\term{t}=\term{u}$ of $\POR^\lambda$
are formulas of $\mathcal{I}\PORl$;
(ii.) for any (possibly open) terms $\term{t}, \term{u}$
of $\POR^\lambda$, $\term{t} \subseteq \term{u}$
and $\mathtt{Flip}(\term{t})$
are formulas of $\mathcal{I}\PORl$;
(iii.) formulas of $\mathcal{I}\PORl$ are closed under
$\wedge,\vee,\rightarrow, \forall, \exists$.
\end{defn}
\begin{notation}
We adopt the standard conventions:
$\bot:= \term{0} = \term{1}$
and
$\neg F := F \rightarrow \bot$.
\end{notation}
\noindent
The notions of $\Sigma^b_0$- and $\Sigma^b_1$-formula
of $\mathcal{I}\PORl$ are precisely those for $\textsf{\textbf{RS}}^1_2$,
as introduced in Definition~\ref{df:Sigmab1}.
\begin{remark}
Any formula of $\textsf{\textbf{RS}}^1_2$ can be seen as a formula
of $\mathcal{I}\PORl$, where each occurrence
of the symbol $\mathtt{0}$ is replaced by $\term{0}$,
of $\mathtt{1}$ by $\term{1}$,
of $\frown$ by $\circ$ (usually omitted),
of $\times$ by $\term{Times}$.
In the following, we assume that any formula
of $\textsf{\textbf{RS}}^1_2$ is a formula of $\mathcal{I}\PORl$, modulo the
substitutions defined above.
\end{remark}
\begin{defn}[The Theory $\mathcal{I}\PORl$]
The theory $\mathcal{I}\PORl$
includes standard rules of the intuitionistic first-order
predicate calculus,
usual rules for the equality symbol,
and the axioms below:
\begin{enumerate}
\itemsep0em
\item All axioms of $\POR^\lambda$
\item $x\subseteq y \leftrightarrow \term{Sub}(x,y)=\term{1}$
\item $x=\epsilon \vee x=\term{Tail}(x)\term{0}
\vee x=\term{Tail}(x)\term{1}$
\item $\term{0}=\term{1} \rightarrow x=\epsilon$
\item $\term{Cond}(x,y,z,w)=w' \leftrightarrow
(x=\epsilon \wedge w'= y) \vee (x=\term{Tail}(x)\term{0}
\wedge w' = z) \vee
(x=\term{Tail}(x) \term{1} \wedge w'=w)$
\item $\mathtt{Flip}(x) \leftrightarrow \term{Flipcoin}(x)=\term{1}$
\item Any formula of the form,
$$
\big(F(\epsilon) \wedge (\forall x)(F(x)
\rightarrow F(x\term{0})) \wedge
(\forall x)(F(x) \rightarrow F(x\term{1}))\big)
\rightarrow (\forall y)F(y),
$$
where $F$ is of the form $(\exists z\preceq \term{t})\term{u}=\term{v}$,
with $\term{t}$ containing only first-order
open variables.
\end{enumerate}
\end{defn}
\begin{notation}[$\cc{NP}$-Predicate]
We refer to a formula
of the form $(\exists z \preceq \term{t})\term{u}=\term{v}$,
with $\term{t}$ containing only first-order open variables,
as an \emph{$\cc{NP}$-predicate}.
\end{notation}
\noindent
\paragraph{Relating $\mathcal{I}\PORl$ with $\POR^\lambda$ and $\textsf{\textbf{I}}\RS$.}
Now that $\mathcal{I}\PORl$ has been introduced, we show that
theorems of both $\POR^\lambda$ and
the intuitionistic version of $\textsf{\textbf{RS}}^1_2$ are derivable
in it.
First, Proposition~\ref{prop:PORltoIPOR}
is easily established by inspecting
all rules of $\POR^\lambda$.
\begin{prop}\label{prop:PORltoIPOR}
Any theorem of $\POR^\lambda$ is a theorem of $\mathcal{I}\PORl$.
\end{prop}
Then,
we consider $\textsf{\textbf{I}}\RS$
and establish that every theorem in it
is derivable in $\mathcal{I}\PORl$.
To do so, we prove a
few properties concerning $\mathcal{I}\PORl$.
In particular, its induction schema
differs from that of $\textsf{\textbf{I}}\RS$,
as it deals with formulas of the form $(\exists y\preceq \term{t})\term{u} = \term{v}$
rather than with all $\Sigma^b_1$-formulas.
The two schemas are related by
Proposition~\ref{prop:Sigmab0}, proved by induction
on the structure of formulas.
\begin{prop}\label{prop:Sigmab0}
For any $\Sigma^b_0$-formula $F(x_1,\dots,x_n)$
in $\mathcal{RL}$,
there exists a term $\term{t_F}(x_1,\dots, x_n)$
of $\POR^\lambda$ such that:
\begin{enumerate}
\itemsep0em
\item $\vdash_{\mathcal{I}\PORl} F \leftrightarrow
\term{t_F} =\term{0}$
\item $\vdash_{\mathcal{I}\PORl} \term{t_F} = \term{0}
\vee \term{t_F} = \term{1}$.
\end{enumerate}
\end{prop}
\noindent
This leads us to the following corollary
and allows us to prove Theorem~\ref{thm:IRStoIPOR}
relating $\mathcal{I}\PORl$ and $\textsf{\textbf{I}}\RS$.
\begin{cor}
\begin{itemize}
\itemsep0em
\item[i.] For any $\Sigma^b_0$-formula
$F$, $\vdash_{\mathcal{I}\PORl} F \vee \neg F$.
\item[ii.] For any closed $\Sigma^b_0$-formula
$F$ of $\mathcal{RL}$, and $\eta \in \mathbb O$,
either $\emph{T}_\eta \vdash_{\mathcal{I}\PORl} F$
or $\emph{T}_\eta \vdash_{\mathcal{I}\PORl} \neg F$.
\end{itemize}
\end{cor}
\begin{theorem}\label{thm:IRStoIPOR}
Any theorem of $\textsf{\textbf{I}}\RS$ is a theorem of $\mathcal{I}\PORl$.
\end{theorem}
\begin{proof}
First, observe that, as a consequence of Proposition~\ref{prop:Sigmab0},
for any $\Sigma^b_1$-formula
$F=(\exists x_1\preceq t_1)\dots (\exists x_n
\preceq t_n)G$ in $\mathcal{RL}$,
$$
\vdash_{\mathcal{I}\PORl} F \leftrightarrow
(\exists x_1\preceq \term{t}_1) \dots
(\exists x_n \preceq \term{t}_n) \term{t_G}
= \term{0},
$$
\noindent
and that any instance of the $\Sigma^b_1$-induction
schema of $\textsf{\textbf{I}}\RS$ is derivable in $\mathcal{I}\PORl$
from the $\cc{NP}$-induction schema.
Then,
it suffices to check that all basic axioms
of $\textsf{\textbf{I}}\RS$ are provable in $\mathcal{I}\PORl$.
\end{proof}
\noindent
This result also leads to the following
straightforward consequences.
\begin{cor}\label{cor:Sigmab0}
For any closed $\Sigma^b_0$-formula $F$
and $\eta\in\mathbb O$,
either $\emph{T}_\eta \vdash_{\mathcal{I}\PORl} F$
or $\emph{T}_\eta \vdash_{\mathcal{I}\PORl} \neg F$.
\end{cor}
\noindent
Thanks to Corollary~\ref{cor:Sigmab0},
we can also establish the following Lemma~\ref{lemma:Sigmab0IRS}.
\begin{lemma}\label{lemma:Sigmab0IRS}
Let $F$ be a closed $\Sigma^b_0$-formula
of $\mathcal{RL}$ and $\eta\in\mathbb O$, then:
$$
\emph{T}_\eta \vdash_{\mathcal{I}\PORl} F \ \ \
\text{iff} \ \ \
\eta \in \model{F}.
$$
\end{lemma}
\begin{proof}
$(\Rightarrow)$ This soundness result is established
by induction on the structure of rules for $\mathcal{I}\PORl$.
$(\Leftarrow)$ By Corollary~\ref{cor:Sigmab0},
we know that either $\emph{T}_\eta \vdash_{\mathcal{I}\PORl}
F$ or $\emph{T}_\eta \vdash_{\mathcal{I}\PORl} \neg F$.
Hence, if $\eta\in\model{F}$, then it cannot
be $\emph{T}_\eta \vdash_{\mathcal{I}\PORl} \neg F$
(by soundness).
So, we conclude $\emph{T}_\eta \vdash_{\mathcal{I}\PORl}F$.
\end{proof}
\subsubsection{Realizability}
Here, we introduce realizability as internal
to $\mathcal{I}\PORl$.
As a corollary, we obtain that from any derivation in
$\textsf{\textbf{I}}\RS$ (actually, in $\mathcal{I}\PORl$)
of a formula of the form $(\forall x)(\exists y)F(x,y)$,
one can extract a functional term $\term{f} : s\Rightarrow s$
of $\POR^\lambda$
such that $\vdash_{\mathcal{I}\PORl}(\forall x)F(x,\term{f}x)$.
This allows us to conclude that if a function
$f$ is $\Sigma^b_1$-representable in $\textsf{\textbf{I}}\RS$,
then $f\in \mathcal{POR}$.
\begin{notation}
Let $\mathbf{x}, \mathbf{y}$ denote finite sequences
of term variables, (resp.) $x_1,\dots, x_n$ and
$y_1,\dots, y_k$, and let $\mathbf{y}(\mathbf{x})$ be an
abbreviation for $y_1(\mathbf{x}),\dots, y_k(\mathbf{x})$.
Let $\Lambda$ be a shorthand for the empty sequence
and $y(\Lambda) := y$.
\end{notation}
\begin{defn}\label{df:realizability}
Formulas $\mathbf{x} \ \circledR\ F$
are defined by induction as follows:
\begin{align*}
\Lambda \ \circledR\ F &:= F \\
\mathbf{x}, \mathbf{y} \ \circledR\ (G \wedge
H) &:=
(\mathbf{x} \ \circledR\ G) \wedge
(\mathbf{y} \ \circledR\ H) \\
z,\mathbf{x},\mathbf{y} \ \circledR\ (G \vee H)
&:=
(z=\term{0} \wedge \mathbf{x} \ \circledR\ G)
\vee
(z \neq \term{0} \wedge \mathbf{y} \ \circledR\
H) \\
\mathbf{y} \ \circledR\ (G \rightarrow H)
&:=
(\forall \mathbf{x})(\mathbf{x} \ \circledR\ G
\rightarrow \mathbf{y}(\mathbf{x}) \ \circledR\ H)
\wedge (G \rightarrow H) \\
z, \mathbf{x} \ \circledR\ (\exists y)G
&:=
\mathbf{x} \ \circledR\ G\{z/y\} \\
\mathbf{x} \ \circledR\ (\forall y)G
&:=
(\forall y)(\mathbf{x}(y) \ \circledR\ G),
\end{align*}
where no variable in $\mathbf{x}$
occurs free in $F$.
Given terms $\mathbf{t} = \term{t_1},
\dots, \term{t_n}$, we let:
$$
\mathbf{t} \ \circledR\ F :=
(\mathbf{x} \ \circledR\ F) \{\mathbf{t}/\mathbf{x}\}.
$$
\end{defn}
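To fix ideas, here is a small worked example (ours, purely illustrative) of how the clauses unwind on a $\forall\exists$-statement whose matrix is an equation, say $(\forall x)(\exists y)\, y = x\term{0}$. A realizer is a single term $t$, and Definition~\ref{df:realizability} gives:

```latex
\begin{align*}
t \ \circledR\ (\forall x)(\exists y)\, y = x\term{0}
  &\equiv (\forall x)\big(t(x) \ \circledR\ (\exists y)\, y = x\term{0}\big) \\
  &\equiv (\forall x)\big(\Lambda \ \circledR\ t(x) = x\term{0}\big) \\
  &\equiv (\forall x)\, t(x) = x\term{0},
\end{align*}
```

so $\lambda x. x\term{0}$ is a realizer, provably in $\mathcal{I}\PORl$ by the $(\beta)$-axiom.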
\noindent
We relate the derivability of these
new formulas with that of formulas
of $\mathcal{I}\PORl$.
Proofs below are by induction, respectively,
on the structure of $\mathcal{I}\PORl$-formulas
and on the height of derivations.
\begin{theorem}[Soundness]\label{thm:realSoundness}
If $\vdash_{\mathcal{I}\PORl} \mathbf{t} \ \circledR\ F$,
then $\vdash_{\mathcal{I}\PORl}F$.
\end{theorem}
\begin{notation}
Given $\Gamma=F_1,\dots, F_n$,
let $\mathbf{x} \ \circledR\ \Gamma$ be a
shorthand for $\mathbf{x}_1\ \circledR\ F_1,
\dots,$ $\mathbf{x}_n \ \circledR\ F_n$.
\end{notation}
\begin{theorem}[Completeness]\label{thm:realCompleteness}
If $\vdash_{\mathcal{I}\PORl} F$,
then there exist terms $\mathbf{t}$ such
that $\vdash_{\mathcal{I}\PORl} \mathbf{t} \ \circledR\
F$.
\end{theorem}
\begin{proof}[Proof Sketch]
We prove that if $\Gamma \vdash_{\mathcal{I}\PORl}
F$, then
there exist terms $\mathbf{t}$ such that
$\mathbf{x} \ \circledR\ \Gamma \vdash_{\mathcal{I}\PORl}
\mathbf{t} \mathbf{x}_1 \dots \mathbf{x}_n \ \circledR\
F$.
The proof is by induction on the derivation
of $\Gamma\vdash_{\mathcal{I}\PORl} F$.
Let us consider just the case of
rule $\vee R_1$ as an example:
\begin{prooftree}
\AxiomC{$\vdots$}
\noLine
\UnaryInfC{$\Gamma \vdash G$}
\RightLabel{$\vee R_1$}
\UnaryInfC{$\Gamma \vdash G\vee H$}
\end{prooftree}
By IH, there exist terms $\mathbf{u}$
such that $\mathbf{x} \ \circledR\ \Gamma
\vdash_{\mathcal{I}\PORl} \mathbf{u}\mathbf{x} \ \circledR\
G$.
Since $x,y\ \circledR\ G \vee H$
is defined as
$(x=\term{0} \wedge y \ \circledR\ G)
\vee (x\neq \term{0} \wedge y
\ \circledR\ H)$,
we can take $\mathbf{t}=\term{0},\mathbf{u}$.
\end{proof}
\begin{cor}\label{cor:IPORSigma}
Let $(\forall x)(\exists y)F(x,y)$
be a closed theorem of $\mathcal{I}\PORl$,
where $F$ is a $\Sigma^b_1$-formula.
Then, there exists a closed term
$\term{t}: s\Rightarrow s$ of $\POR^\lambda$
such that:
$$
\vdash_{\mathcal{I}\PORl} (\forall x)F(x,\term{t}x).
$$
\end{cor}
\begin{proof}
By Theorem~\ref{thm:realCompleteness},
there exist $\mathbf{t}=\term{t},w$ such that
$\vdash_{\mathcal{I}\PORl} \mathbf{t} \ \circledR\ (\forall x)(\exists y)F(x,y)$.
So, by Definition~\ref{df:realizability},
\begin{align*}
\mathbf{t} \ \circledR\ (\forall x)(\exists y)F(x,y)
&\equiv
(\forall x)(\mathbf{t} (x) \ \circledR\
(\exists y)F (x,y)) \\
&\equiv
(\forall x)(w(x) \ \circledR\ F(x,\term{t}x)).
\end{align*}
From this, by Theorem~\ref{thm:realSoundness},
we deduce,
$$
\vdash_{\mathcal{I}\PORl} (\forall x) F (x,\term{t}x).
$$
\end{proof}
\paragraph{Functions which are $\Sigma^b_1$-Representable in $\textsf{\textbf{I}}\RS$ are in $\mathcal{POR}$.}
Now, we have all the ingredients to prove
that if a function is $\Sigma^b_1$-representable
in $\textsf{\textbf{I}}\RS$, in the sense of Definition~\ref{df:representability},
then it is in $\mathcal{POR}$.
\begin{cor}\label{cor:Sigmab1toPOR}
For any function $f:\mathbb S \times \mathbb O \to \mathbb S$,
if there is a closed $\Sigma^b_1$-formula in $\mathcal{RL}$
$F(x,y)$, such that:
\begin{enumerate}
\itemsep0em
\item $\textsf{\textbf{I}}\RS \vdash (\forall x)(\exists !y) F(x,y)$
\item $\model{F(\ooverline{\sigma_1}, \ooverline{\sigma_2})} =
\{\eta \ | \ f(\sigma_1, \eta) = \sigma_2\}$,
\end{enumerate}
then $f\in\mathcal{POR}$.
\end{cor}
\begin{proof}
Since $\vdash_{\textsf{\textbf{I}}\RS} (\forall x)(\exists !y)F(x,y)$,
by Theorem~\ref{thm:IRStoIPOR}
$\vdash_{\mathcal{I}\PORl} (\forall x)(\exists !y) F(x,y)$.
Then, from $\vdash_{\mathcal{I}\PORl} (\forall x)(\exists y)F(x,y)$
we deduce $\vdash_{\mathcal{I}\PORl}(\forall x)F(x,\term{g}x)$
for some closed term $\term{g}\in\POR^\lambda$,
by Corollary~\ref{cor:IPORSigma}.
Furthermore, by Theorem~\ref{thm:PRepr}.2,
there is a $g\in \mathcal{POR}$ such that for any $\sigma_1,\sigma_2\in\mathbb S$
and $\eta\in \mathbb O$,
$g(\sigma_1,\eta)=\sigma_2$ if and only if
$T_\eta \vdash_{\POR^\lambda} \term{g} \ooverline{\sigma_1}
= \ooverline{\sigma_2}$.
So, by
Proposition~\ref{prop:PORltoIPOR},
for any $\sigma_1,\sigma_2\in\mathbb S$ and $\eta\in\mathbb O$
if $g(\sigma_1,\eta)=\sigma_2$, then
$T_\eta \vdash_{\mathcal{I}\PORl} \term{g}\ooverline{\sigma_1}=
\ooverline{\sigma_2}$
and so
$T_\eta \vdash_{\mathcal{I}\PORl}
F(\ooverline{\sigma_1},\ooverline{\sigma_2})$.
By Lemma~\ref{lemma:Sigmab0IRS},
$T_\eta \vdash_{\mathcal{I}\PORl} F(\ooverline{\sigma_1},
\ooverline{\sigma_2})$
when $\eta\in\model{F(\ooverline{\sigma_1},\ooverline{\sigma_2})}$,
that is, when $f(\sigma_1,\eta)=\sigma_2$.
But then $f=g$, so since $g\in\mathcal{POR}$ also $f \in \mathcal{POR}$.
\end{proof}
\footnotesize
\begin{center}
\begin{figure}[h!]
\begin{center}
\framebox{
\parbox[t][7.2cm]{11cm}{
\footnotesize{
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node at (-3,3) (a) {$\vdash_{\textsf{\textbf{I}}\RS} (\forall x)(\exists y)F(x,y)$};
\node at (-3,1.5) (b) {$\vdash_{\mathcal{I}\PORl} (\forall x)(\exists y) F(x,y)$};
\node at (-3.8,2.3) {\textcolor{gray}{T.~\ref{thm:IRStoIPOR}}};
\node at (-3.8,0.9) {\textcolor{gray}{C.~\ref{cor:IPORSigma}}};
\node at (-3,0.3) (c) {there is $\term{g}\in\POR^\lambda$ (closed)};
\node at (-3,0) (c1) {$\vdash_{\mathcal{I}\PORl}(\forall x)F(x,\term{g}x)$};
\node at (-3.8,-0.8) {\textcolor{gray}{T.~\ref{thm:PRepr}}};
\node at (-3,-1.7) (d) {there is a $g\in\mathcal{POR}$};
\node at (-3,-2) (d1) {$g(\sigma_1,\eta)
=\sigma_2 \Leftrightarrow T_\eta
\vdash_{\POR^\lambda}
\term{g} \ooverline{\sigma_1} = \ooverline{\sigma_2}$};
\node at (-1.8,-3) (d2) {$T_\eta \vdash_{\mathcal{I}\PORl} \term{g}
\ooverline{\sigma_1} =\ooverline{\sigma_2}$};
\node at (-2.5,-2.5) (d3) {\textcolor{gray}{P.~\ref{prop:PORltoIPOR}}};
\node at (3,0) (j) {$\vdash_{\mathcal{I}\PORl} (\forall x)F(x,\ooverline{\sigma_2})$};
\node at (3,-1) (j1) {$f(\sigma_1,\eta)=\sigma_2$};
\node at (3,-2) (h) {$f (=g)\in\mathcal{POR}$};
\node at (3.6,-0.45) (h1) {\textcolor{gray}{L~\ref{lemma:Sigmab0IRS}}};
\draw[->,thick] (a) to (b);
\draw[->,thick] (b) to (c);
\draw[->,thick] (c1) to (d);
\draw[->,thick] (-1.9,-2.3) to (-1.9,-2.8);
\draw[->,dotted] (j1) to (h);
\draw[->,dotted] (-1,0) to (1.2,0);
\draw[->,dotted] (d2) to (j);
\draw[->,dotted] (d) to [bend left=20] (h);
\draw[->,thick] (j) to (j1);
%
\end{tikzpicture}
\end{center}
}}}
\caption{Proof Schema of Corollary~\ref{cor:Sigmab1toPOR}}
\end{center}
\end{figure}
\end{center}
\normalsize
\subsubsection{$\forall \cc{NP}$-Conservativity of $\mathcal{I}\PORl$ + $\mathbf{EM}$ over $\mathcal{I}\PORl$}
Corollary~\ref{cor:Sigmab1toPOR} is already
very close to the result we are looking for.
The remaining step to conclude our proof is its extension
from intuitionistic $\textsf{\textbf{I}}\RS$ to classical $\textsf{\textbf{RS}}^1_2$,
showing that any function which is $\Sigma^b_1$-representable
in $\textsf{\textbf{RS}}^1_2$ is also in $\mathcal{POR}$.
The proof is obtained by adapting the method
from~\cite{CookUrquhart}.
We start by considering an extension
of $\mathcal{I}\PORl$ via $\mathbf{EM}$ and
show that the realizability interpretation
extends to it
so that for any of its closed theorems
$(\forall x)(\exists y\preceq \term{t})F(x,y)$,
being $F$ a $\Sigma^b_1$-formula,
there is a closed term $\term{t}:s\Rightarrow s$
of $\POR^\lambda$ such that $\vdash_{\mathcal{I}\PORl} (\forall x)F(x,\term{t}x)$.
\paragraph{From $\mathcal{I}\PORl$ to $\mathcal{I}\PORl + \mathbf{(Markov)}$.}
Let $\mathbf{EM}$ be the excluded-middle schema,
$F \vee \neg F$, and
\emph{Markov's principle} be defined as follows,
\begin{align*}
\neg \neg (\exists x)F \rightarrow (\exists x)F,
\tag{$\mathbf{Markov}$}
\end{align*}
where $F$ is a $\Sigma^b_1$-formula.
\begin{prop}\label{prop:EMtoMarkov}
For any $\Sigma^b_1$-formula $F$,
if $\vdash_{\mathcal{I}\PORl+\mathbf{EM}} F$, then
$\vdash_{\mathcal{I}\PORl+\mathbf{(Markov)}}F$.
\end{prop}
\begin{proof}[Proof Sketch]
The claim is proved by applying the double negation
translation,
with the following two remarks: (1)
for any $\Sigma^b_0$-formula $F$,
$\vdash_{\mathcal{I}\PORl} \neg\neg F \rightarrow F$;
(2) using $\mathbf{(Markov)}$, the
double negation of an instance of the $\cc{NP}$-induction
can be shown equivalent to an instance
of the $\cc{NP}$-induction schema.
\end{proof}
\noindent
We conclude showing that the realizability
interpretation defined above
extends to $\mathcal{I}\PORl+\mathbf{(Markov)}$, that is for any
closed theorem
$(\forall x)(\exists y\preceq \term{t})F(x,y)$
with $F$ $\Sigma^b_1$-formula, of $\mathcal{I}\PORl+\mathbf{(Markov)}$,
there
is a closed term of $\POR^\lambda$ $\term{t}:s \Rightarrow s$,
such that $\vdash_{\mathcal{I}\PORl} (\forall x)F(x,\term{t}x)$.
\paragraph{From $\mathcal{I}\PORl$ to $(\mathcal{I}\PORl)^*$.}
Let us assume given a surjective encoding
$\sharp:(s \Rightarrow s) \Rightarrow s$ in
$\mathcal{I}\PORl$ of first-order unary functions as strings,
together with a ``decoding'' function
$\term{app}: s \Rightarrow s \Rightarrow s$ satisfying:
$$
\vdash_{\mathcal{I}\PORl} \term{app}(\sharp \term{f},x)
= \term{f}x.
$$
Moreover, let
$$
x*y := \sharp(\lambda z.\term{BAnd}(\term{app}(x,z),
\term{app}(y,z)))
$$
and
$$
T(x) := (\exists y)(\term{B}(\term{app}(x,y))=\term{0}).
$$
There is a \emph{meet semi-lattice}
structure on the set of terms of type $s$
defined by $\term{t}\sqsubseteq \term{u}$
when $\vdash_{\mathcal{I}\PORl} T(\term{u})
\rightarrow T(\term{t})$ with top element
$\underline{\mathbf{1}} := \sharp(\lambda x.\term{1})$
and meet given by $x*y$.
Indeed, from $T(x*\underline{\mathbf{1}}) \leftrightarrow
T(x)$, $x\sqsubseteq \underline{\mathbf{1}}$ follows.
Moreover, from $\term{B}(\term{app}(x,\term{u}))=\term{0}$,
we obtain $\term{B}(\term{app}(x*y,\term{u}))=
\term{BAnd}(\term{app}(x,\term{u}), \term{app}(y,\term{u})) = \term{0}$,
whence $T(x) \rightarrow T(x*y)$,
i.e.~$x*y\sqsubseteq x$.
One can similarly prove $x*y\sqsubseteq y$.
Finally, from $T(x) \rightarrow T(v)$
and $T(y)\rightarrow T(v)$,
we deduce $T(x*y)\rightarrow T(v)$,
by observing that $\vdash_{\mathcal{I}\PORl}T(x*y)
\rightarrow T(x) \vee T(y)$.
Notice that the formula $T(x)$ is \emph{not}
a $\Sigma^b_1$-formula, as its existential quantifier
is not bounded.
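The semi-lattice laws above can be sanity-checked over a finite toy model, in which encoded functions are just Python functions on a three-element domain and $\term{B}$ is assumed to act as the identity on bits (both assumptions are mine, not part of the formal development). The check below confirms that $T(x*y)$ holds exactly when $T(x)$ or $T(y)$ does:

```python
from itertools import product

DOM = [0, 1, 2]           # finite stand-in for the type of strings

def T(f):
    # T(x) := exists y . B(app(x, y)) = 0, over the finite domain,
    # assuming B is the identity on bits.
    return any(f(y) == 0 for y in DOM)

def meet(f, g):
    # x * y := #(lambda z. BAnd(app(x, z), app(y, z)))
    return lambda z: f(z) & g(z)

# All 2^3 bit-valued functions on DOM, as a stand-in for encodings.
tables = [dict(zip(DOM, bits)) for bits in product((0, 1), repeat=len(DOM))]
funs = [lambda z, d=d: d[z] for d in tables]

for f, g in product(funs, repeat=2):
    # T(x * y) holds exactly when T(x) or T(y) does, so * is a
    # meet for the order  t ⊑ u  iff  T(u) -> T(t).
    assert T(meet(f, g)) == (T(f) or T(g))
```

This also makes the last step of the argument transparent: $T(x*y)$ can only hold because one of $T(x)$, $T(y)$ does.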
\begin{defn}
For any formula of $\mathcal{I}\PORl$ $F$,
and fresh variable $x$, we define
formulas $x\Vdash F$ inductively:
\begin{align*}
x \Vdash F &:= F \vee T(x) \ \ \ (F\ \text{atomic}) \\
x \Vdash G \wedge H
&:= x \Vdash G \wedge x
\Vdash H \\
x \Vdash G \vee H
&:=
x \Vdash G \vee
x \Vdash H \\
x \Vdash G \rightarrow H
&:=
(\forall y)(y\Vdash G \rightarrow
x*y \Vdash H) \\
x \Vdash (\exists y)G &:=
(\exists y)x \Vdash G \\
x \Vdash (\forall y)G &:=
(\forall y)x \Vdash G.
\end{align*}
\end{defn}
\noindent
The following Lemma~\ref{lemma:NPind}
is established by induction on the structure
of formulas in $\mathcal{I}\PORl$, as in~\cite{CoquandHofmann}.
\begin{lemma}\label{lemma:NPind}
If $F$ is provable in $\mathcal{I}\PORl$ without using
$\cc{NP}$-induction,
then $x\Vdash F$ is provable in $\mathcal{I}\PORl$.
\end{lemma}
\begin{lemma}\label{lemma:Sigmab0IPOR}
Let $F=(\exists x\preceq \term{t})G$,
where $G$ is a $\Sigma^b_0$-formula.
Then,
there exists a term $\term{u_F}:s$ with
$FV(\term{u_F})=FV(G)$ such that:
$$
\vdash_{\mathcal{I}\PORl} F \leftrightarrow T(\term{u_F}).
$$
\end{lemma}
\begin{proof}
Since $G(x)$ is a $\Sigma^b_0$-formula,
$\vdash_{\mathcal{I}\PORl} (x\preceq \term{t} \wedge G(x)) \leftrightarrow \term{t_{x\preceq t\wedge G}} (x) =\term{0}$,
where $\term{t_{x\preceq t\wedge G}}$ has
the free variables of $t$ and $G$.
Moreover, for any $\Sigma^b_0$-formula $H(x)$,
it is shown by induction on its structure that
for any term $\term{v}:s$,
$\term{t_{H(\term{v})}} = \term{t_H}(\term{v})$.
Then,
$$
\vdash_{\mathcal{I}\PORl} F \leftrightarrow
(\exists x)\term{t_{x\preceq t\wedge G}}(x) =
\term{0} \leftrightarrow T(\sharp(\lambda
x.\term{t_{x\preceq t\wedge G}}(x))).
$$
So, we let $\term{u_F}=\sharp(\lambda x.\term{t_{x\preceq t\wedge G}}(x))$.
\end{proof}
\noindent
From this lemma we also deduce the following three properties:
\begin{itemize}
\itemsep0em
\item[i.] $\vdash_{\mathcal{I}\PORl} (x\Vdash F)
\leftrightarrow (F \vee T(x))$
\item[ii.] $\vdash_{\mathcal{I}\PORl} (x\Vdash \neg F)
\leftrightarrow (F \rightarrow T(x))$
\item[iii.] $\vdash_{\mathcal{I}\PORl} (x\Vdash \neg\neg F)
\leftrightarrow (F \vee T(x))$.
\end{itemize}
where $F$ is a $\Sigma^b_1$-formula.
\begin{cor}[Markov's Principle]
If $F$ is a $\Sigma^b_1$-formula, then
$$
\vdash_{\mathcal{I}\PORl} x \Vdash \neg \neg F
\rightarrow F.
$$
\end{cor}
To define the extension $(\mathcal{I}\PORl)^*$ of $\mathcal{I}\PORl$,
we introduce $\text{PIND}$ in a formal way.
\begin{defn}[PIND]\label{df:PIND}
Let $\text{PIND}(F)$ indicate the formula:
$$
\big(F(\epsilon) \wedge
(\forall x)(F(x) \rightarrow F(x\term{0}))
\wedge (\forall x)(F(x) \rightarrow F(x\term{1}))\big)
\rightarrow (\forall x)F(x).
$$
\end{defn}
\noindent
Observe that if $F(x)$ is a formula
of the form $(\exists y\preceq \term{t})\term{u}=\term{v}$,
then $z\Vdash \text{PIND}(F)$
is of the form $\text{PIND}(F(x)\vee T(z))$,
which is \emph{not} an instance of the
$\cc{NP}$-induction schema (as the formula
$T(z)=
(\exists x)\term{B}(\term{app}(z,x))=\term{0}$
is not bounded).
\begin{defn}[The Theory $(\mathcal{I}\PORl)^*$]
Let $(\mathcal{I}\PORl)^*$ indicate the theory
extending $\mathcal{I}\PORl$ with all instances
of the induction schema $\text{PIND}(F(x)
\vee G)$, where $F(x)$
is of the form $(\exists y\preceq \term{t})
\term{u}=\term{v}$,
and $G$ is an arbitrary formula with
$x\not\in FV(G)$.
\end{defn}
\noindent
We then deduce the following Proposition relating
derivability in $\mathcal{I}\PORl$ and in $(\mathcal{I}\PORl)^*$.
\begin{prop}
For any $\Sigma^b_1$-formula
$F$,
if $\vdash_{\mathcal{I}\PORl+\mathbf{(Markov)}} F$,
then $\vdash_{(\mathcal{I}\PORl)^*} x\Vdash F$.
\end{prop}
Finally, we extend realizability
to $(\mathcal{I}\PORl)^*$ by constructing a realizer for
$\text{PIND}(F(x) \vee G)$.
\begin{lemma}\label{lemma:real}
Let $F(x)$ be $(\exists y\preceq \term{t})
\term{u}=\term{0}$ and let $G$
be any formula not containing free occurrences
of $x$.
Then, there exist terms $\mathbf{t}$
such that:
$$
\vdash_{\mathcal{I}\PORl} \mathbf{t} \ \circledR\ \text{PIND}(F(x)
\vee G).
$$
So, by Theorem~\ref{thm:realSoundness},
we obtain that for any
$\Sigma^b_1$-formula $F$
and formula $G$, with $x\not\in FV(G)$,
$$
\vdash_{\mathcal{I}\PORl} \text{PIND}(F(x) \vee G).
$$
\end{lemma}
\begin{prop}
For any $\Sigma^b_1$-formula $F$ and formula $G$
with $x\not\in FV(G)$, $\vdash_{\mathcal{I}\PORl} \text{PIND}(F(x)\vee G)$.
\end{prop}
\begin{cor}[$\forall \cc{NP}$-Conservativity of
$\mathcal{I}\PORl+\mathbf{EM}$ over $\mathcal{I}\PORl$]
Let $F$ be a $\Sigma^b_1$-formula,
if $\vdash_{\mathcal{I}\PORl+\mathbf{EM}}(\forall x)(\exists y\preceq \term{t}) F(x,y)$,
then $\vdash_{\mathcal{I}\PORl}(\forall x)
(\exists y\preceq \term{t})F(x,y)$.
\end{cor}
\paragraph{Concluding the Proof.}
We conclude our proof establishing Proposition~\ref{prop:IPORMarkov}.
\begin{prop}\label{prop:IPORMarkov}
Let $(\forall x)(\exists y\preceq \term{t}) F(x,y)$
be a closed theorem of $\mathcal{I}\PORl+\mathbf{(Markov)}$,
where $F$ is a $\Sigma^b_1$-formula.
Then, there exists a closed term of $\POR^\lambda$
$\term{t}:s \Rightarrow s$, such that:
$$
\vdash_{\mathcal{I}\PORl} (\forall x)F (x,\term{t} x).
$$
\end{prop}
\begin{proof}
If $\vdash_{\mathcal{I}\PORl+\mathbf{(Markov)}} (\forall x)(\exists y)F(x,y)$, then by Parikh's Proposition~\ref{prop:Parikh},
also
$\vdash_{\mathcal{I}\PORl+\mathbf{(Markov)}} (\exists y\preceq \term{t})F(x,y)$.
Moreover,
$\vdash_{(\mathcal{I}\PORl)^*}z\Vdash (\exists y \preceq \term{t}) F(x,y)$.
Then, let us consider $G = (\exists y\preceq \term{t})
F(x,y)$.
By taking $\term{v}=\term{u}_G$,
using Lemma~\ref{lemma:Sigmab0IPOR},
we deduce $\vdash_{(\mathcal{I}\PORl)^*} G$
and, thus, by Lemma~\ref{lemma:NPind} and~\ref{lemma:real},
we conclude that there exist $\mathbf{t},\mathbf{u}$
such that $\vdash_{\mathcal{I}\PORl} \mathbf{t},\mathbf{u}\ \circledR\ G$,
which implies $\vdash_{\mathcal{I}\PORl} F(x,\mathbf{t} x)$, and thus
$\vdash_{\mathcal{I}\PORl}(\forall x)F(x,\mathbf{t} x)$.
\end{proof}
\noindent
So, by Proposition~\ref{prop:EMtoMarkov},
if $\vdash_{\mathcal{I}\PORl+\mathbf{EM}} (\forall x)(\exists y\preceq \term{t}) F(x,y)$,
being $F$ a closed $\Sigma^b_1$-formula,
then there is a closed term
$\term{t} : s \Rightarrow s$ of $\POR^\lambda$,
such that $\vdash_{\mathcal{I}\PORl} (\forall x)F(x,\term{t} x)$.
Finally, we conclude the desired
Corollary~\ref{cor:RStoPOR} for classical $\textsf{\textbf{RS}}^1_2$,
arguing as for Corollary~\ref{cor:Sigmab1toPOR} above.
\begin{cor}\label{cor:RStoPOR}
Let $\textsf{\textbf{RS}}^1_2 \vdash (\forall x)(\exists y\preceq t) F (x,y)$, where $F$
is a $\Sigma^b_1$-formula with only
$x$ and $y$ free.
For any function $f:\mathbb S \times \mathbb O \to \mathbb S$,
if $(\forall x)(\exists y\preceq t)F (x,y)$
represents $f$ so that:
\begin{enumerate}
\itemsep0em
\item $\textsf{\textbf{RS}}^1_2 \vdash (\forall x)(\exists !y)F(x,y)$
\item $\model{F(\ooverline{\sigma_1},\ooverline{\sigma_2})} = \{\eta \ | \ f(\sigma_1,\eta)=
\sigma_2\}$,
\end{enumerate}
then $f\in\mathcal{POR}$.
\end{cor}
\noindent
Now, putting Theorem~\ref{thm:PORtoRS} and Corollary~\ref{cor:RStoPOR} together,
we conclude that Theorem~\ref{thm:PORandRS} holds
and that $\textsf{\textbf{RS}}^1_2$ provides an \emph{arithmetical} characterization of functions
in $\mathcal{POR}$.
\subsection{Preliminaries}
We start by defining (or re-defining)
the classes of functions (over strings) computed by poly-time
PTMs and by poly-time stream machines,
that is, TMs with an extra oracle tape.
\subsubsection{The Class $\mathcal{RFP}$}
We start by (re-)defining the class of functions computed
by poly-time PTMs.\footnote{Clearly, there is
a strong affinity with the standard definition of random functions~\cite{Santos69}.
In this case, pseudo-distributions and functions are over strings rather than numbers.
Furthermore, we are now considering machines
explicitly associated with a (polynomial-)time resource bound.}
\begin{defn}[Class $\mathcal{RFP}$]
Let $\mathbb{D}(\mathbb S)$ denote the set of
functions $f:\mathbb S \to [0,1]$
such that $\sum_{\sigma\in \mathbb S}f (\sigma)=1$.
The \emph{class $\mathcal{RFP}$} is made of all functions
$f:\mathbb S^k \to \mathbb{D}(\mathbb S)$ such that,
for some PTM $\mathscr{M_P}$ running in polynomial time,
and every $\sigma_1,\dots, \sigma_k,\tau\in \mathbb S$,
$f(\sigma_1,\dots, \sigma_k)(\tau)$ coincides with the
probability that $\mathscr{M_P}(\sigma_1\sharp \dots \sharp \sigma_k)\Downarrow
\tau$.
\end{defn}
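The definition can be made concrete on a toy example: for a machine using a fixed number of fair coin flips, the induced distribution over outputs is obtained by enumerating all flip sequences. The Python sketch below is purely illustrative (the machine `toy_machine` is a made-up example, not the paper's formal machinery):

```python
import itertools
from collections import defaultdict

def output_distribution(run, sigma, n_flips):
    # Enumerate all length-n_flips coin sequences, each of
    # probability 2^-n_flips, and accumulate the output measure.
    dist = defaultdict(float)
    p = 2.0 ** -n_flips
    for flips in itertools.product("01", repeat=n_flips):
        dist[run(sigma, "".join(flips))] += p
    return dict(dist)

# Hypothetical toy machine: copies its input and appends the
# majority bit of its three coin flips.
def toy_machine(sigma, flips):
    return sigma + ("1" if flips.count("1") >= 2 else "0")

d = output_distribution(toy_machine, "01", 3)
# By symmetry, the majority of three fair flips is 1 with probability 1/2.
assert abs(d["011"] - 0.5) < 1e-12
assert abs(d["010"] - 0.5) < 1e-12
```

Here `output_distribution(toy_machine, sigma, 3)` plays the role of $f(\sigma) \in \mathbb{D}(\mathbb S)$ for this toy machine.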
\noindent
So, similarly to $\langle \mathscr{M_P}\rangle$ in~\cite{Santos69,Gill77},
the function computed by such a machine
associates each possible output with the
probability that a run of the machine
actually produces that output, and
we need to adapt the notion of
$\Sigma^b_1$-representability accordingly.
\begin{defn}
A function $f:\mathbb S^k \to \mathbb{D}(\mathbb S)$ is
\emph{$\Sigma^b_1$-representable in $\textsf{\textbf{RS}}^1_2$}
if there is a $\Sigma^b_1$-formula of
$\mathcal{RL}$ $F(x_1,\dots, x_k,y)$,
such that:
\begin{enumerate}
\itemsep0em
\item $\textsf{\textbf{RS}}^1_2 \vdash (\forall \vec{x})(\exists !y)F(\vec{x},y)$,
\item for all $\sigma_1,\dots, \sigma_k,\tau\in \mathbb S$,
$f(\sigma_1,\dots, \sigma_k)(\tau)=\mu(\model{F(\overline{\sigma_1},
\dots,\overline{\sigma_k},\overline{\tau})})$.
\end{enumerate}
\end{defn}
\noindent
Our main result can be re-stated as follows:
\begin{restatable}{theorem}{thm:RSandRFP}\label{thm:RSandRFP}
For any function $f:\mathbb S^k \to \mathbb{D}(\mathbb S)$, $f$
is $\Sigma^b_1$-representable in $\textsf{\textbf{RS}}^1_2$ if and only if $f\in\mathcal{RFP}$.
\end{restatable}
\noindent
The proof of Theorem~\ref{thm:RSandRFP}
relies on Theorem~\ref{thm:PORandRS},
once we relate the function
algebra $\mathcal{POR}$ with the class $\mathcal{RFP}$
by the Lemma~\ref{lemma:PORandRFP} below.
\begin{restatable}{lemma}{lemma:PORandRFP}\label{lemma:PORandRFP}
For any function $f:\mathbb S^k\times \mathbb O \to \mathbb S$ in $\mathcal{POR}$,
there exists $g:\mathbb S^k \to \mathbb{D}(\mathbb S)$ in $\mathcal{RFP}$ such that
for all $\sigma_1,\dots, \sigma_k,\tau \in \mathbb S$,
$$
\mu(\{\eta \ | \ f(\sigma_1,\dots, \sigma_k,\eta)=\tau \})=
g(\sigma_1,\dots,\sigma_k)(\tau),
$$
and vice versa.
\end{restatable}
\noindent
However, the proof of Lemma~\ref{lemma:PORandRFP}
is convoluted,
as it is based on a chain of language simulations.
\subsubsection{Introducing the Class $\cc{SFP}$}
The core idea to relate $\mathcal{POR}$ and $\mathcal{RFP}$ is to introduce
an intermediate class, called $\cc{SFP}$.
This is the class of functions computed
by poly-time \emph{stream Turing machines} (STMs, for short),
where
an STM is a deterministic TM with one extra
(read-only) tape intuitively accounting for
probabilistic choices:
at the beginning
the extra-tape
is sampled from $\mathbb B^\mathbb N $;
then, at each computation step,
the machine reads one new bit from this tape,
always moving to the right.
\begin{defn}[Class $\cc{SFP}$]\label{df:SFP}
The \emph{class $\cc{SFP}$} is made of functions
$f: \mathbb S^k \times \mathbb B^\mathbb N \to\mathbb S$,
such that there is an STM $\mathscr{M_S}$ running in polynomial time
such that
for any $\sigma_1,\dots, \sigma_k$ and
$\omega \in \mathbb B^\mathbb N $,
$f(\sigma_1,\dots,\sigma_k,\omega)=\tau$
when,
for inputs $\sigma_1\sharp \dots \sharp \sigma_k$ and tape
$\omega$, the machine $\mathscr{M_S}$ outputs $\tau$.
\end{defn}
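Since a poly-time STM reads at most one stream bit per step, the measure of $\{\omega \mid f(\sigma,\omega)=\tau\}$ is determined by finitely many prefixes, and can be computed by enumeration. A minimal Python sketch, with a hypothetical `xor_machine` standing in for an STM (my example, not from the formal development):

```python
import itertools

def stm_measure(machine, sigma, tau, steps):
    # A machine running for `steps` steps reads at most `steps`
    # stream bits, so mu({omega | M(sigma, omega) = tau}) equals the
    # fraction of length-`steps` prefixes on which the run outputs tau.
    hits = sum(machine(sigma, prefix) == tau
               for prefix in itertools.product((0, 1), repeat=steps))
    return hits / 2 ** steps

# Hypothetical STM: flips each input bit where the stream bit is 1,
# consuming exactly one stream bit per input position.
def xor_machine(sigma, stream):
    return tuple(b ^ s for b, s in zip(sigma, stream))

# Each output is reached by exactly one prefix, hence measure 2^-n.
assert stm_measure(xor_machine, (0, 1, 1), (1, 1, 0), 3) == 1 / 8
```

The enumeration over finite prefixes is exactly the observation exploited later, in the passage from $\cc{SFP}$ to $\mathcal{POR}$.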
\subsection{Relating $\mathcal{RFP}$ and $\cc{SFP}$}
The global behavior of STMs and PTMs is similar,
but the former access randomness in an explicit way:
instead of flipping a coin at each step,
the machine samples a stream of bits once, and
then reads one new bit at each step.
So, to prove the equivalence of the two models,
we pass through the following Proposition~\ref{prop:PTMandSTM}.
\begin{prop}[Equivalence of PTMs and STMs]\label{prop:PTMandSTM}
For any poly-time STM $\mathscr{M_S}$, there is a poly-time PTM
$\mathscr{M_S}^*$ such that for all string $\sigma,\tau \in \mathbb S$,
$$
\mu(\{\omega \ | \ \mathscr{M_S}(\sigma,\omega)=\tau\}) = \Pr[\mathscr{M_S}^*(\sigma)=\tau],
$$
and vice versa.
\end{prop}
\begin{cor}[Equivalence of $\mathcal{RFP}$ and $\cc{SFP}$]\label{cor:RFPandSFP}
For any $f:\mathbb S^k\to \mathbb{D}(\mathbb S)$ in
$\mathcal{RFP}$, there is a $g:\mathbb S^k\times \mathbb B^\mathbb N
\to \mathbb S$ in $\cc{SFP}$,
such that for all $\sigma_1,\dots, \sigma_k,\tau\in \mathbb S$,
$$
f(\sigma_1,\dots, \sigma_k,\tau) = \mu(\{\omega \ | \
g(\sigma_1,\dots,\sigma_k,\omega)=\tau\}),
$$
and vice versa.
\end{cor}
\subsection{Relating $\cc{SFP}$ and $\mathcal{POR}$}
Finally, we need to
prove the equivalence between $\mathcal{POR}$ and $\cc{SFP}$:
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node at (-3,0) (a) {$\mathcal{RFP}$};
\node at (-1.5,0.8) (b1) {\footnotesize{\textcolor{gray}{Cor.~\ref{cor:RFPandSFP}}}};
\node at (0,0) (b) {$\cc{SFP}$};
\node at (0,-0.5) (b2) {\footnotesize{\textcolor{gray}{Def.~\ref{df:SFP}}}};
\node at (3,0) (c) {$\mathcal{POR}$};
\draw[<->,thick] (a) to [bend left=20] (b);
\draw[<->,thick,dotted,gray] (b) to [bend left=20] (c);
%
\end{tikzpicture}
\end{center}
Moving from PTMs to STMs,
we obtain a machine model
which accesses randomness in a way
which is similar to that of functions in $\mathcal{POR}$:
as seen, at the beginning of the computation an oracle
is sampled,
and computation proceeds querying it.
Yet, there are still relevant differences in the way
in which these families of machines treat randomness.
While functions of $\mathcal{POR}$ access an oracle
in the form of a function
$\eta \in \mathbb B^\mathbb S$,
the oracle for an STM is a stream of bits
$\omega\in \mathbb B^\mathbb N $.
Otherwise said, a function in $\mathcal{POR}$
is of the form $f_{\mathcal{POR}}:\mathbb S^k \times \mathbb O \to
\mathbb S$, whereas one in $\cc{SFP}$ is
$f_{\cc{SFP}}:\mathbb S^k\times \mathbb B^\mathbb N \to \mathbb S$.
Hence, we cannot compare them \emph{directly};
instead, we provide an indirect comparison
in two main steps.
\subsubsection{From $\cc{SFP}$ to $\mathcal{POR}$}
First, we show that any function computable by a poly-time
STM is in $\mathcal{POR}$.
\begin{prop}[From $\cc{SFP}$ to $\mathcal{POR}$]\label{prop:SFPtoPOR}
For any $f:\mathbb S^k\times \mathbb B^\mathbb N \to\mathbb S$
in $\cc{SFP}$, there is a function
$f^{\star} : \mathbb S^k \times \mathbb O \to \mathbb S$ in $\mathcal{POR}$
such that for all $\sigma_1,\dots,\sigma_k,\tau \in\mathbb S$,
$$
\mu(\{\omega \in \mathbb B^\mathbb N \ | \ f(\sigma_1,\dots, \sigma_k,
\omega) = \tau\}) =
\mu(\{\eta \in \mathbb O \ | \
f^{\star} (\sigma_1,\dots, \sigma_k,\eta)=\tau\}).
$$
\end{prop}
\noindent
The fundamental observation is that,
given an input $\sigma \in\mathbb S$ and the extra tape
$\omega \in \mathbb B^\mathbb N $,
an STM running in polynomial time can access a \emph{finite}
portion of $\omega$ only,
the length of which can be bounded by
some polynomial $p(|\sigma|)$.
Using this fact, we construct $f^{\star}$
as follows:
\begin{enumerate}
\itemsep0em
\item We introduce the new class $\cc{PTF}$,
made of functions
$f:\mathbb S^k\times \mathbb S \to \mathbb S$ computed by
a \emph{finite stream Turing machine}
(FSTM, for short),
the extra tape of which is a finite string.
\item
We define a function $h \in \cc{PTF}$
such that for any $f:\mathbb S\times \mathbb B^\mathbb N \to \mathbb S$
with polynomial bound $p(x)$,
$$
f(x,\omega)=h(x,\omega_{p(|x|)}).
$$
\item We define $h':\mathbb S \times \mathbb S \times \mathbb O
\to \mathbb S$ such that
$$
h'(x,y,\eta)=
h(x,y).
$$
By an encoding of FSTMs we show that
$h'\in\mathcal{POR}$.
Moreover $h'$ can be defined \emph{without}
using the query function,
since the computation of $h'$ never looks at $\eta$.
\item Finally, we define an \emph{extractor function}
$e:\mathbb S\times \mathbb O \to \mathbb S$ in $\mathcal{POR}$,
which mimics the prefix extractor
$\omega_{p(|x|)}$: its outputs have
\emph{the same distribution} as all possible
prefixes of $\omega$,
even though they live in a different space.\footnote{Recall that $\omega\in \mathbb B^\mathbb N $,
while the second argument of $e$ is in $\mathbb O$.}
This is obtained by exploiting a bijection
$dyad:\mathbb S\to\mathbb N $,
ensuring that for each $\omega \in \mathbb B^\mathbb N $,
there is an $\eta\in\mathbb B^\mathbb S$
such that any prefix of $\omega$ is an output of
$e(y,\eta)$, for some $y$.
Since $\mathcal{POR}$ is closed under composition,
we finally define
$$
f^\star (x,\eta) := h'(x,e(x,\eta),\eta).
$$
\end{enumerate}
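The composition $f^\star(x,\eta) := h'(x,e(x,\eta),\eta)$ can be sketched concretely. Everything below is a toy stand-in of my own: `dyad` enumerates strings by naturals (the inverse direction of the bijection in the text, chosen for convenience), `p` is an assumed polynomial bound, and `h_prime` is a placeholder for the encoded FSTM:

```python
def dyad(n):
    # Hypothetical inverse of the dyad bijection: the n-th string
    # in dyadic enumeration (0 -> "", 1 -> "0", 2 -> "1", 3 -> "00", ...).
    s = ""
    n += 1
    while n > 1:
        s = str(n % 2) + s
        n //= 2
    return s

def p(n):              # assumed polynomial bound on bits read
    return 2 * n + 1

def e(x, eta):
    # Extractor: the i-th bit of the rebuilt prefix is eta queried
    # at the i-th string of the enumeration.
    return "".join(str(eta(dyad(i))) for i in range(p(len(x))))

def h_prime(x, y, eta):
    # Toy h': xors x against the extracted prefix; never queries eta.
    return "".join(str(int(a) ^ int(b)) for a, b in zip(x, y))

def f_star(x, eta):
    return h_prime(x, e(x, eta), eta)

assert f_star("101", lambda s: 0) == "101"   # all-zero oracle: identity
```

As each oracle $\eta$ determines one stream prefix via `e`, and distinct prefixes come from disjoint sets of oracles of equal measure, the distribution of `e`'s outputs matches that of stream prefixes, which is the point of the construction.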
\subsubsection{From $\mathcal{POR}$ to $\cc{SFP}$}
In order to simulate functions
of $\mathcal{POR}$ via STMs we observe not only that
these two models
invoke oracles of different shape,
but also that the former can manipulate such oracles
in a more liberal way:
\begin{itemize}
\item STMs query the oracle at each computation
step.
By contrast, functions of $\mathcal{POR}$ may invoke the query function
$Q(x,\eta)$ freely during the computation.
We call this access policy \emph{on demand}.
\item STMs read a new bit of the oracle at
each step of computation, and
cannot access previously observed bits.
We call this access policy \emph{linear}.
By contrast, functions of $\mathcal{POR}$ can query
the same bits as many times as needed.
\end{itemize}
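The two access policies can be contrasted with two small oracle wrappers (illustrative only; the class and method names are mine, not from the formal development):

```python
class RandomAccessOracle:
    # POR-style: eta is a fixed function from strings to bits;
    # Flip(e) may query any string, any number of times.
    def __init__(self, eta):
        self.eta = eta
    def flip(self, s):
        return self.eta(s)

class LinearAccessOracle:
    # STM-style: one fresh bit per call; past bits are gone.
    def __init__(self, stream):
        self.stream = iter(stream)
    def rand_bit(self):
        return next(self.stream)

eta = lambda s: len(s) % 2
ra = RandomAccessOracle(eta)
assert ra.flip("01") == ra.flip("01")      # re-querying is consistent

la = LinearAccessOracle([1, 0, 1])
assert [la.rand_bit() for _ in range(3)] == [1, 0, 1]
```

The whole difficulty of the simulation below lies in recovering the consistency of `flip` using only `rand_bit`.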
Consequently, a direct simulation
of $\mathcal{POR}$ via STMs is challenging
even for a basic function like $Q(x,\eta)$.
So, again, we exploit an indirect
path:
we pass through a chain of simulations,
dealing with each of these differences separately.
\begin{enumerate}
\item First, we translate $\mathcal{POR}$ into
an imperative language $\text{SIFP}_{\text{RA}}$
inspired by Winskel's IMP~\cite{Winskel},
with the \emph{same} access policy as $\mathcal{POR}$.
$\text{SIFP}_{\text{RA}}$ is endowed with assignments,
a $\mathtt{while}$ construct,
and a command $\mathtt{Flip}(e)$, which first evaluates
$e$ to a string $\sigma$ and then stores the value
$\eta(\sigma)$ in a register.
The encoding of oracle functions
in $\text{SIFP}_{\text{RA}}$ is easily obtained by induction
on the function algebra.
\item Then, we translate $\text{SIFP}_{\text{RA}}$ into another
imperative language, called $\text{SIFP}_{\text{LA}}$,
associated with a \emph{linear} policy
of access.
$\text{SIFP}_{\text{LA}}$ is defined like $\text{SIFP}_{\text{RA}}$ except for the
$\mathtt{Flip}(e)$ command, which is replaced by
the new command $\mathtt{RandBit}()$,
generating a random bit and storing it in a register.
A weak simulation from $\text{SIFP}_{\text{RA}}$
into $\text{SIFP}_{\text{LA}}$ is defined by progressively constructing
an \emph{associative table}
containing pairs in the form (string, bit) of past
observations.
Each time $\mathtt{Flip}(e)$ is invoked,
the simulation checks whether a pair $(e,b)$ has
already been recorded;
otherwise, it extends the table with
a new pair $(e,\mathtt{RandBit}())$.
This is by far the most complex step of the whole
simulation.
\item The language $\text{SIFP}_{\text{LA}}$ can be translated
into STMs.
Observe that the access policy of $\text{SIFP}_{\text{LA}}$ is
still on-demand:
$\mathtt{RandBit}()$ may be invoked
or not before executing the instruction.
So, we first consider a translation from $\text{SIFP}_{\text{LA}}$
into a variant of STMs admitting an on-demand
access policy
– that is, a computation step may or may not access
a bit from the extra-tape.
Then, the resulting program is encoded into a regular
STM.
Observe that
we cannot expect the machine $\mathscr{M_S}^\dagger$
simulating an on-demand machine $\mathscr{M_S}$
to produce \emph{the same} output on each
oracle.
Rather, as in many other cases, we show that
$\mathscr{M_S}^\dagger$ can be defined so that,
for any $\sigma,\tau\in\mathbb S$,
the sets
$\{\omega \ | \ \mathscr{M_S}^\dagger (\sigma,\omega)=\tau\}$
and $\{\omega \ | \ \mathscr{M_S}(\sigma,\omega)=\tau\}$
have the same measure.
\end{enumerate}
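The associative-table step can be sketched as a memoising wrapper: each distinct queried string draws one fresh bit from the linear source, while repeated queries are answered from the table. The sketch below (my own toy version) uses a fixed stream in place of a genuine random source so the behavior is reproducible:

```python
def make_flip(rand_bit):
    # Simulate the random-access Flip(e) with the linear RandBit():
    # memoise one fresh bit per distinct queried string.
    table = {}
    def flip(s):
        if s not in table:
            table[s] = rand_bit()   # fresh query: consume a new bit
        return table[s]             # repeated query: reuse the table
    return flip

# Deterministic stand-in for RandBit(): consume a fixed stream.
stream = iter([1, 0, 1, 1])
flip = make_flip(lambda: next(stream))

assert flip("a") == 1      # fresh query: draws the first stream bit
assert flip("b") == 0      # fresh query: draws the next bit
assert flip("a") == 1      # repeated query: answered from the table
```

Since a fresh fair bit is drawn exactly once per distinct string, the joint distribution of answers matches that of a random-access oracle restricted to the queried strings, which is the invariant the weak simulation maintains.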
\begin{figure}[h!]\label{fig:PORandSFP}
\begin{center}
\framebox{
\parbox[t][4cm]{10.6cm}{
\footnotesize{
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node(por) at (-8,0) {$\mathcal{POR}$};
\node(por1) at (-8,-0.5) {$\mathbb S\times \mathbb O\longrightarrow \mathbb S$};
\node(sfp) at (0,0) {$\cc{SFP}$};
\node(sfp1) at (0,-0.5) {$\mathbb S\times \mathbb B^{\mathbb N }\longrightarrow \mathbb S$};
\node[black](fsfp) at (-4,0) {finite $\cc{SFP}$};
\node[black](fsfp1) at (-4,-0.5) {$\mathbb S\times \mathbb S\longrightarrow \mathbb S$};
\draw[->,thick] (sfp) to node[above]{\tiny$ (x, \omega_{p(|x|)})\mapsfrom (x,\omega)$} (fsfp);
\draw[->,thick] (fsfp) to node[above]{\tiny$(x,\eta)\to (x, e(x,\eta))$} (por);
\node(sifpra) at (-8,-2.5) {\begin{tabular}{c}$\text{SIFP}_{\text{RA}}$\\ \small imperative \\ \small random access\end{tabular}};
\node(sifpla) at (-3,-2.5) {\begin{tabular}{c}$\text{SIFP}_{\text{LA}}$\\ \small imperative \\ \small linear access\end{tabular}};
\node(sfpod) at (0,-2.5) {\begin{tabular}{c}$\cc{SFP}$\\ \small on-demand\end{tabular}};
\draw[->, thick] (por1) to node[right]{\tiny\begin{tabular}{c}\tiny inductive\\\tiny encoding\end{tabular}} (sifpra);
\draw[->, thick] (sifpra) to node[above]{\tiny \begin{tabular}{c}\tiny associative\\ \tiny table\end{tabular}} (sifpla);
\draw[->, thick] (sifpla) to (sfpod);
\draw[->, thick] (sfpod) to (sfp1);
\end{tikzpicture}
\end{center}
}}}
\caption{Equivalence between $\mathcal{POR}$ and $\cc{SFP}$}
\end{center}
\end{figure}
\subsubsection{Concluding the Proof.}
These ingredients are enough to conclude the proof
as outlined in Figure~\ref{fig:sketchRBA},
and to relate poly-time random functions
and $\Sigma^b_1$-formulas of $\textsf{\textbf{RS}}^1_2$.
\small
\begin{figure}[h!]\label{fig:sketchRBA}
\begin{center}
\framebox{
\parbox[t][2cm]{9cm}{
\footnotesize{
\begin{center}
\begin{tikzpicture}[node distance=2cm]
\node at (-3,0.4) (a1) {\textcolor{gray}{Thm.~\ref{thm:PORandRS}}};
\node at (-4,0) (a) {$\textsf{\textbf{RS}}^1_2$};
\node at (-2,0) (b) {$\mathcal{POR}$};
\node at (2,0.4) (b1) {\textcolor{gray}{Cor.~\ref{cor:RFPandSFP}}};
\node at (1,0) (d) {$\cc{SFP}$};
\node at (3,0) (c) {$\mathcal{RFP}$};
\node at (-0.5,0) (g) {\textcolor{gray}{Fig.~\ref{fig:PORandSFP}}};
\draw[<->,thick] (a) to(b);
\draw[->,thick] (b) to [bend right=30] (d);
\draw[->,thick] (d) to [bend right=30] (b);
\draw[<->,thick] (c) to (d);
%
\end{tikzpicture}
\end{center}
}}}
\caption{Proof Sketch of Theorem~\ref{thm:RSandRFP}}
\end{center}
\end{figure}
\normalsize
\longv{
\subsection{Relating $\mathcal{RFP}$ and $\cc{SFP}$}
To present the proof outlined above in a formal way
we need to fix a few definitions.
We start with STMs and their configurations.
\begin{defn}[Stream Turing Machine]
A \emph{stream Turing machine} (STM, for short)
is a quadruple $\mathscr{M_S}= \langle \mathbf{Q}, q_0,\Sigma,
\delta\rangle$,
where:
\begin{itemize}
\itemsep0em
\item $\mathbf{Q}$ is a finite set of states
ranged over by $q_i$ and similar meta-variables
\item $q_0\in\mathbf{Q}$ is an initial state
\item $\Sigma$ is a finite set of characters ranged over
by $c_1,c_2,\dots$
\item $\delta: \mathbf{Q} \times \hat{\Sigma} \times
\mathbb B \to \mathbf{Q} \times \hat{\Sigma}
\times \{L,R\}$ is a transition function describing
the new configuration reached by the machine.
\end{itemize}
$L$ and $R$ are two fixed and distinct symbols
(e.g. $\mathbb{1}$ and $\mathbb{0}$),
$\hat{\Sigma}=\Sigma \cup\{\circledast\}$
and $\circledast\not\in\Sigma$ is the \emph{blank
character}.
\end{defn}
\begin{defn}[Configuration of STM]
The \emph{configuration of an STM}
is a quadruple $\langle \sigma, q, \tau,\eta\rangle$,
where:
\begin{itemize}
\itemsep0em
\item $\sigma\in\{\mathbb{0},\mathbb{1},\circledast\}^*$ is the portion
of the work tape
on the left of the head
\item $q\in\mathbf{Q}$ is the current state of the machine
\item $\tau \in\{\mathbb{0},\mathbb{1},\circledast\}^*$ is the portion
of the work tape on the right of the head
\item $\eta\in \mathbb B^\mathbb N $ is the portion of the oracle
tape that has not been read yet.
\end{itemize}
\end{defn}
\noindent
Next, we define the family of reachability
relations for an STM.
\begin{defn}[Reachability function for STM]
Given an STM $\mathscr{M_S}$ with transition function $\delta$,
we use $\vdash_\delta$ to denote its standard step function
and $\{\triangleright^n_{\mathscr{M_S}}\}_n$
the smallest family of relations such that
\tiny
\begin{align*}
\langle \sigma,q,\tau,\eta\rangle &\triangleright^0_{\mathscr{M_S}}
\langle \sigma,q,\tau,\eta\rangle \\
\Big(\langle \sigma,q,\tau,\eta\rangle
\triangleright^n_{\mathscr{M_S}} \langle \sigma',q',\tau',\eta'\rangle\Big)
\wedge
\Big(\langle \sigma',q',\tau',\eta'\rangle
&\vdash_\delta
\langle \sigma'',q',\tau'', \eta''\rangle\Big)
\rightarrow \Big(
\langle \sigma, q,\tau,\eta\rangle
\triangleright^{n+1}_{\mathscr{M_S}}
\langle \sigma'',q',\tau'', \eta''\rangle \Big)
\end{align*}
\end{defn}
\begin{defn}[STM computation]
Given an STM $\mathscr{M_S}=\langle \mathbf{Q},
q_0,\Sigma,\delta\rangle$
and a function
$g:\mathbb S \times \mathbb B^\mathbb N \to \mathbb S$,
we say that \emph{$\mathscr{M_S}$ computes $g$},
written $f_{\mathscr{M_S}}=g$, when for every string $\sigma\in\mathbb S$
and oracle tape $\eta\in\mathbb B^\mathbb N $,
there are $n\in\mathbb N , \gamma,\tau\in\mathbb S,q'\in\mathbf{Q}$,
and a function $\psi: \mathbb N \to \mathbb B$ such that
$$
\langle \epsilon, q_0, \sigma, \eta\rangle
\triangleright^n_{\mathscr{M_S}}
\langle \gamma, q', \tau, \psi\rangle,
$$
and $\langle \gamma, q', \tau, \psi\rangle$
is a final configuration for $\mathscr{M_S}$
with $f_{\mathscr{M_S}}(\sigma,\eta)$ being the longest
suffix of $\gamma$ not including $\circledast$.
\end{defn}
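As a sanity check, the one-step transition and the $n$-step reachability relation admit a direct operational reading. The following is a minimal Python sketch, under the assumptions that the transition function maps a state, the scanned work-tape character, and the next oracle bit to a new state, a written character, and a head direction; that `*` plays the role of the blank character $\circledast$; and that the oracle tape is represented by a finite prefix.

```python
BLANK = "*"  # stands for the blank character (circled asterisk in the text)

def step(delta, config):
    """One transition: config = (sigma, q, tau, eta), head on the first
    character of tau; one oracle bit is consumed per step."""
    sigma, q, tau, eta = config
    scanned = tau[0] if tau else BLANK
    bit, eta = eta[0], eta[1:]
    q2, write, move = delta[(q, scanned, bit)]
    tau = write + (tau[1:] if tau else "")
    if move == "R":
        return (sigma + tau[0], q2, tau[1:], eta)
    # move == "L"; if the head falls off the left end we pad with a blank
    # (a convention of this sketch)
    return ((sigma[:-1] if sigma else ""), q2,
            (sigma[-1] if sigma else BLANK) + tau, eta)

def run(delta, q0, inp, eta, n):
    """n-step reachability from <eps, q0, inp, eta>; the output is the
    longest blank-free suffix of the tape left of the head."""
    config = ("", q0, inp, eta)
    for _ in range(n):
        config = step(delta, config)
    return config[0].rsplit(BLANK, 1)[-1]
```

For instance, a one-state machine copying oracle bits over the input reaches, after two steps, a left tape holding the first two oracle bits.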
We need to extend this definition to
probability distributions over $\mathbb S$,
so as to deal with PTMs.
\begin{defn}
Given a PTM $\mathscr{M_P}$,
a configuration $\langle \sigma, q,\tau\rangle$,
we define the following \emph{sequence of random
variables}
\footnotesize
\begin{align*}
X^{\langle \sigma,q,\tau\rangle}_{\mathscr{M_P},0} &:=
\eta \to \langle \sigma,q,\tau\rangle \\
X^{\langle \sigma,q,\tau\rangle}_{\mathscr{M_P},n+1}
&:= \eta \to
\begin{cases}
\delta_{\mathbb{b}}\big(X^{\langle \sigma,q,\tau\rangle}_{\mathscr{M_P},n}(\eta)\big)
&\text{if } \eta(n) = \mathbb{b} \text{ and for some }
\langle \sigma',q',\tau'\rangle,
\delta_\mathbb{b}(X^{\langle \sigma,q,\tau\rangle}_{\mathscr{M_P},n}
(\eta))= \langle \sigma',q',\tau'\rangle \\
X^{\langle \sigma,q,\tau\rangle}_{\mathscr{M_P},n}(\eta)
&\text{if } \eta(n) = \mathbb{b} \text{ and for no }
\langle \sigma',q',\tau'\rangle ,
\delta_\mathbb{b}(X^{\langle \sigma,q,\tau\rangle}_{\mathscr{M_P},n}
(\eta)) = \langle \sigma',q',\tau'\rangle.
\end{cases}
\end{align*}
\normalsize
for any $\eta \in \mathbb B^\mathbb N $.
\end{defn}
\noindent
Intuitively, the variable $X^{\langle \sigma,q,\tau\rangle}_{\mathscr{M_P},n}$
describes the configuration reached by the machine
after exactly $n$ transitions.
We say that a PTM $\mathscr{M_P}$ computes $Y_{\mathscr{M_P},\sigma}$
when there is an $m\in \mathbb N $ such that
for any $\sigma\in\mathbb S$, $X^{\langle \epsilon,q_0,\sigma\rangle}_{\mathscr{M_P},m}$
is final.
In this case $Y_{\mathscr{M_P},\sigma}$ is the longest suffix of
$\pi_1\Big(X^{\langle \epsilon, q_0,\sigma\rangle}_{\mathscr{M_P},m}\Big)$,
which does not contain $\circledast$.
We now prove Proposition~\ref{prop:PTMandSTM},
which establishes the equivalence between STMs and PTMs.
\longv{
\begin{prop}[Equivalence between PTMs
and STMs]\label{prop:PTMandSTM}
For any poly-time STM $\mathscr{M_S}$, there is a poly-time PTM
$\mathscr{M_P}^*$ such that for any strings $\sigma,\tau\in \mathbb S$,
$$
\mu(\{\eta \ | \ \mathscr{M_S}(\sigma,\eta)=\tau\}) = \Pr[\mathscr{M_P}^*(\sigma)=
\tau],
$$
and vice versa.
\end{prop}
}
\begin{proof}[Proof Sketch of Proposition~\ref{prop:PTMandSTM}]
We show that for any $\sigma, \tau \in \mathbb S$,
\begin{align*}
\mu\big(\{\eta \in \mathbb B^\mathbb N \ | \ \mathscr{M_S}(\sigma,\eta)=\tau\}\big) &=
\mu\big(\mathscr{M_P}(\sigma)^{-1} (\tau)\big) \\
\mu\big(\{\eta \in \mathbb B^\mathbb N \ | \ \mathscr{M_S}(\sigma,\eta)=\tau\}\big) &=
\mu\big(\{\eta \in \mathbb B^\mathbb N \ | \ Y_{\mathscr{M_S},\sigma}(\eta)=\tau\}\big).
\end{align*}
We actually show a stronger result,
namely that there is a bijection $I$ : STMs $\to$
PTM such that for any $n\in\mathbb N $
\begin{align}
\{\eta \in \mathbb B^\mathbb N \ | \ \langle \epsilon, q_0,\sigma,\eta\rangle
\triangleright^n_{\mathscr{M_S}} \langle
\tau, q,\psi,\eta'\rangle \text{ for some } \eta'\} =
\{\eta \in \mathbb B^\mathbb N \ | \ X^{\langle \epsilon,q_0,\sigma\rangle}_{I(\mathscr{M_S}),n}
(\eta) = \langle \tau,q,\psi\rangle\}
\tag{I.}
\end{align}
This entails,
\begin{align}
\{\eta \in \mathbb B^\mathbb N \ | \ \mathscr{M_S}(\sigma,\eta)=\tau\} = \{ \eta \in \mathbb B^\mathbb N
\ | \ Y_{I(\mathscr{M_S}),\sigma}(\eta)=\tau\}.
\tag{II.}
\end{align}
The bijection $I$ splits the function $\delta$ of $\mathscr{M_S}$
so that if the corresponding character on the oracle tape is $\mathbb{0}$,
then the transition is defined by $\delta_{\mathbb{0}}$;
if the oracle bit is $\mathbb{1}$, it is defined by $\delta_{\mathbb{1}}$.
We prove (I.) by induction on the number of steps required by
$\mathscr{M_S}$ to compute its input value.
\end{proof}
\subsection{From $\cc{SFP}$ to $\mathcal{POR}$}
To prove Proposition~\ref{prop:SFPtoPOR}
we show the correspondence between functions
computable by an STM and those computable by
a \emph{finite-stream} STM.
\begin{lemma}\label{lemma:STACS22}
For each $f\in \cc{SFP}$ with time-bound $p\in POLY$,
there is an $h\in\cc{PTF}$ such that,
for any $\eta \in \mathbb B^\mathbb N $ and $\sigma\in\mathbb S$,
$$
f(\sigma,\eta)= h(\sigma,\eta_{p(|\sigma|)}).
$$
\end{lemma}
\begin{proof}
Since $f\in\cc{SFP}$,
there is a poly-time STM $\mathscr{M_S}=\langle \mathbf{Q},q_0,\Sigma,\delta\rangle$
such that $f=f_{\mathscr{M_S}}$.
Let us consider the FSTM $\mathscr{M_S}'$,
which is defined as $\mathscr{M_S}$ but works on a finite oracle string.
It then holds that, for any $k\in\mathbb N $
and $\sigma', \tau,\tau'',\tau''' \in \mathbb S$,
$\langle \epsilon,q_0,\sigma,\sigma'\eta\rangle
\triangleright^k_{\mathscr{M_S}} \langle \tau,q,\tau'',\tau'''\eta\rangle$
if and only if $\langle\epsilon,q_0,\sigma,\sigma'\rangle
\triangleright^k_{\mathscr{M_S}'} \langle \tau,q,\tau'',\tau'''\rangle$.
Furthermore, $\mathscr{M_S}'$ requires the same number
of steps as those required by $\mathscr{M_S}$, so
$f_{\mathscr{M_S}'}$ is in $\cc{PTF}$ too.
We conclude the proof defining $h=f_{\mathscr{M_S}'}$.
\end{proof}
\noindent
The next step consists in showing that each function
$f\in\cc{PTF}$ corresponds to a function
which can be defined without resorting to $Q$.
\begin{lemma}\label{lemma:STACS23}
For any $f\in\cc{PTF}$,
there is a $g\in\mathcal{POR}$ such that for any
$x,y\in \mathbb S$ and $\omega\in\mathbb O$,
$f(x,y)=g(x,y,\omega)$.
Furthermore, if $f$ is defined without resorting to
$Q$,
then $g$ does not include $Q$ either.
\end{lemma}
\begin{proof}[Proof Sketch]
Let us outline the main steps of the proof:
\begin{enumerate}
\itemsep0em
\item We define string encodings of configurations
and transition functions of FSTMs,
called $e_c$ and $e_t$, respectively.
Moreover, there is a function $step\in\mathcal{POR}$,
which satisfies the simulation schema in Figure~\ref{fig:step}.
Observe that $e_c,e_t$ and $step$ are correct
with respect to the given simulation.
\begin{center}
\begin{figure}[h!]
\begin{center}
\framebox{
\parbox[t][3.2cm]{9.5cm}{
\footnotesize{
\centering
\begin{tikzpicture}[node distance = 8 cm]
\node at (-3.2,2) (c) {$c = \langle \sigma, q, \tau, y\rangle$};
\node at (-3.2,0) (sc) {$\sigma_c\in \mathbb S$};
\node at (3.2,2) (d) {$d = \langle \sigma', q', \tau', y'\rangle$};
\node at (3.2,0) (sd) {$\sigma_d\in \mathbb S$};
\node at (0,2.3) {\textcolor{gray}{$\vdash_{\delta}$}};
\node at (-3.6,1) {\textcolor{gray}{$e_c$}};
\node at (3.6,1) {\textcolor{gray}{$e_c$}};
\node at (0,-0.3) {\footnotesize{\textcolor{gray}{for any $\omega\in\mathbb O$, $step(\sigma_c, e_t(\delta), \omega)=\sigma_d$}}};
\draw[->] (c) edge (d);
\draw[->] (sc) edge (sd);
\draw[->] (c) edge (sc);
\draw[->] (d) edge (sd);
\end{tikzpicture}
}}}
\caption{Behavior of $step$.}\label{fig:step}
\end{center}
\end{figure}
\end{center}
\item For any $f\in\mathcal{POR}$,
if there is a term $t(x)$ of $\mathcal{RL}$, which bounds the size
of $f(\sigma,\omega)$ for any possible input,
then $f_m=\lambda z,x,\omega.f^{|z|}(x,\omega)\in\mathcal{POR}$.\footnote{Observe
that if
$f$ is defined without resorting to $Q$,
then $f_m$ can also be defined without using it.}
\item Given a machine $\mathscr{M_S}$, if $\sigma\in\mathbb S$ is a correct
encoding of a configuration of $\mathscr{M_S}$,
then for any $\omega\in\mathbb O$, $|step(\sigma,e_t(\delta),\omega)|$
is $\mathcal{O}(|\sigma|)$.
\item If $c=e_c(\sigma,q,\tau,y,\omega)$ for some $\omega$,
there is a function $dectape$ such that
for any $\omega\in\mathbb O$, $dectape(c,\omega)$
is the longest suffix of $\sigma$ without occurrences of
$\circledast$.
\end{enumerate}
\end{proof}
Then, as a consequence of Lemma~\ref{lemma:STACS23},
each function $f\in\cc{SFP}$ can be simulated
by a function $g\in\mathcal{POR}$ taking as an additional
input a polynomially long prefix of the oracle of $f$.
\begin{cor}\label{cor:STACS8}
For any $f\in\cc{SFP}$ and polynomial $p\in POLY$,
there is a function $g\in\mathcal{POR}$, such that
for any $\eta: \mathbb N \to\mathbb B$, $\omega\in\mathbb O$
and $x\in\mathbb S$,
$
f(x,\eta)=g(x,\eta_{p(|x|)},\omega).
$
\end{cor}
\begin{proof}
Assume $f\in\cc{SFP}$ and $y=\eta_{p(|x|)}$.
By Lemma~\ref{lemma:STACS22},
there is a function $h\in\cc{PTF}$ such that
for any $\eta:\mathbb N \to \mathbb B$ and $x\in\mathbb S$,
$$
f(x,\eta)=h(x,\eta_{p(|x|)}).
$$
Moreover, due to Lemma~\ref{lemma:STACS23},
there is a $g\in\mathcal{POR}$ such that for any $x,y\in\mathbb S,\omega
\in \mathbb O$,
$$
g(x,y,\omega)=h(x,y).
$$
The desired function is $g$.
\end{proof}
We now establish that there is a function $e\in\mathcal{POR}$,
which produces strings with the same distribution
as the prefixes of the functions in $\mathbb B^\mathbb N $.
Intuitively, this function extracts $|x|+1$ bits
from $\omega$ and concatenates them in the output.
The definition of $e$ passes through a dyadic representation of a number,
$dyad : \mathbb N \to \mathbb S$.
Concretely, the function $e$ creates the
strings $\mathbb{1}^0,\mathbb{1}^1,\dots, \mathbb{1}^k$,
samples the function $\omega$ at the coordinates
$dy(\mathbb{1}^0),dy(\mathbb{1}^1),\dots, dy(\mathbb{1}^k)$,
and concatenates the results in a string.
\begin{defn}[Function $dyad$]
The function $dyad : \mathbb N \to \mathbb S$ associates
each $n\in \mathbb N $ to the string obtained
by stripping the left-most bit from the binary representation
of $n+1$.
\end{defn}
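A one-line functional sketch may clarify the definition (Python; `format(n, "b")` yields the binary expansion of a number):

```python
def dyad(n):
    """Dyadic representation: strip the left-most (always-1) bit from the
    binary expansion of n + 1.  This enumerates all binary strings:
    0 -> "", 1 -> "0", 2 -> "1", 3 -> "00", 4 -> "01", ..."""
    return format(n + 1, "b")[1:]
```

On any initial segment of the naturals, the function is injective, consistently with the bijectivity claim below.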
\noindent
In order to define $dy(\cdot,\cdot)$,
some auxiliary functions are introduced,
namely $binsucc:\mathbb S\times \mathbb O\to \mathbb S$,
\begin{align*}
binsucc(\epsilon,\omega) &:= \mathbb{1} \\
binsucc(x\mathbb{0},\omega) &:= x\mathbb{1}|_{x\mathbb{0}\mathbb{0}} \\
binsucc(x\mathbb{1},\omega) &:=
binsucc(x,\omega)\mathbb{0}|_{x\mathbb{0}\mathbb{0}}.
\end{align*}
and $bin:\mathbb S\times \mathbb O \to \mathbb S$,
\begin{align*}
bin(\epsilon,\omega) &:= \mathbb{0} \\
bin(x\mathbb{b},\omega) &:= binsucc(bin(x,\omega),\omega)|_{x\mathbb{b}}.
\end{align*}
\begin{defn}[Function $dy$]
The function $dy:\mathbb S\times \mathbb O \to\mathbb S$ is defined as follows:
$$
dy(x,\omega) := lrs(bin(x,\omega),\omega),
$$
where $lrs$ is a string manipulator, which removes
the left-most bit of a string, if any, and returns
$\epsilon$ otherwise.\footnote{Full
details can be found in~\cite{Davoli}.}
\end{defn}
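The three functions above can be rendered in Python as follows. This is a sketch: the length bounds $|_{\dots}$ and the oracle argument $\omega$ are dropped, since the defining equations never actually consult $\omega$ and the bounds are only there to keep the definitions inside $\mathcal{POR}$.

```python
def binsucc(x):
    """Binary successor on strings, following the binsucc equations."""
    if x == "":
        return "1"
    if x.endswith("0"):
        return x[:-1] + "1"
    return binsucc(x[:-1]) + "0"

def bin_(x):
    """bin applies one successor step per character of x, starting from
    "0"; hence bin(x) is the binary representation of |x|."""
    out = "0"
    for _ in x:
        out = binsucc(out)
    return out

def lrs(x):
    """Remove the left-most bit of a string, if any."""
    return x[1:] if x else ""

def dy(x):
    return lrs(bin_(x))
```

In particular, on a string of length $n+1$, `dy` yields the binary expansion of $n+1$ without its leading bit, i.e. the dyadic representation of $n$.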
\noindent
The function $dyad$ is easily shown to be bijective
and the following proposition is proved by induction.
\longv{
\begin{lemma}
The function $dyad$ is bijective.
\end{lemma}
\begin{proof}
\begin{enumerate}
\itemsep0em
\item
The function is an injection: different numbers
have different binary encodings and, for any
$n\in\mathbb N ^+$ and $\omega\in\mathbb O$,
$bin(n,\omega)$ has $\mathbb{1}$ as its left-most bit.\footnote{This is
proved by induction on $n$, leveraging the definition of $bin$.}
Given the binary encodings of two distinct numbers $n\neq m\in\mathbb N $,
say $\mathbb{1}\sigma$ and $\mathbb{1}\tau$,
then $\sigma\neq \tau$.\footnote{Otherwise $n=m$.
Indeed, the function associating numbers to binary representations
is itself a bijection.}
\item
The function is surjective:
it is computed by removing a bit, which is
always $\mathbb{1}$.
So, each string $\sigma\in\mathbb S$ is the image of
the number $n\in\mathbb N $ such that the binary encoding
of $n+1$ is $\mathbb{1}\sigma$.
Such a number always exists.
\end{enumerate}
\end{proof}
}
\begin{prop}\label{prop:STACS26}
For any $n\in\mathbb N ,\sigma\in\mathbb S$ and $\omega\in\mathbb O$,
if
$|\sigma|=n+1$, then $dy(\sigma,\omega)=dyad(n)$.
\end{prop}
We also introduce the function $e$ and prove its correctness
(by induction on the structure of strings).
\begin{defn}[Function $e$]
Let $e:\mathbb S \times \mathbb O \to \mathbb S$ be defined
as follows:
\begin{align*}
e(\epsilon,\omega) &:= \epsilon \\
e(x\mathbb{b},\omega) &:= e(x,\omega)Q(dy(x,\omega),
\omega)|_{x\mathbb{b}}.
\end{align*}
\end{defn}
\begin{lemma}[Correctness of $e$]\label{lemma:eCor}
For any $\sigma\in \mathbb S$ and $i\in\mathbb N $,
if $|\sigma|=i+1$,
then for each $j\in \mathbb N $ with $j\leq i$ and $\omega\in\mathbb O$:
(i.)
$e(\sigma,\omega)(j)=\omega(dy(\mathbb{1}^j,\omega))$,
(ii.)
the length of $e(\sigma,\omega)$ is exactly
$i+1$.
\end{lemma}
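A Python sketch of $e$, with a numerical check of the two correctness properties, may be useful. Here $\omega$ is any function from strings to bits; since $dy$ depends only on the length of its argument and never on the oracle, it is rendered without the $\omega$ parameter.

```python
def dy(x):
    """Dyadic index of a prefix: depends only on |x| (the binary
    expansion of |x| without its leading bit)."""
    return format(len(x), "b")[1:]

def e(sigma, omega):
    """One oracle bit per proper prefix of sigma:
    e(eps, w) = eps,   e(x b, w) = e(x, w) . w(dy(x))."""
    out = ""
    for k in range(len(sigma)):
        out += omega(dy(sigma[:k]))
    return out
```

With any deterministic stand-in for $\omega$, the output has the length of $\sigma$ and its $j$-th bit is $\omega(dy(\mathbb{1}^j))$, as the lemma states.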
We also introduce a relation $\sim_{dy}$,
which is again a bijection.
\begin{defn}
We define $\sim_{dy}$ as the smallest relation
in $\mathbb O \times \mathbb B^\mathbb N $ such that:
$$
\eta \sim_{dy} \omega \ \ \text{iff} \ \
\text{ for any } n\in\mathbb N , \eta(n)=\omega(dy(\mathbb{1}^{n+1},
\omega)).
$$
\end{defn}
\longv{
\begin{lemma}
The relation $\sim_{dy}$ is a bijection.
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:STACS26}
and Lemma~\ref{lemma:eCor},
there is an $\omega\in\mathbb O$
in relation with $\eta$.
Assume there is an $\omega'\neq \omega\in\mathbb O$,
again in relation with $\eta$.
Then there is a $\sigma\in\mathbb S$, such that
$\omega(\sigma)\neq \omega'(\sigma)$.
By Proposition~\ref{prop:STACS26},
the value of $\omega$ does not affect the output of $dy$.
Furthermore $dy$ is a bijection,
so there is an $n\in\mathbb N $ such that $dy(\mathbb{1}^{n+1},
\omega)=\sigma$ and
$\eta(n)=\omega(\sigma) \neq \omega'(\sigma)=
\eta(n)$, which is a contradiction.
The argument for the converse direction, starting from
$\omega\in\mathbb O$, is analogous.
\end{proof}
}
\begin{lemma}\label{lemma:STACS29}
If $\eta \sim_{dy} \omega$,
then, for every $n\in\mathbb N $,
$$
\eta_n = e(\underline{n}_\mathbb N ,\omega).
$$
\end{lemma}
\begin{proof}
The proof is by contraposition.
Suppose $\eta_n\neq e(\underline{n}_\mathbb N ,\omega)$
for some $n\in\mathbb N $.
As a consequence of Lemma~\ref{lemma:eCor},
there is an $m\in\mathbb N $ such that
$\eta(m)\neq \omega(dy(\underline{m}_\mathbb N ,\omega))$,
so $\eta \not\sim_{dy} \omega$.
\end{proof}
We can finally prove Proposition~\ref{prop:SFPtoPOR}.
\begin{proof}[Proof of Proposition~\ref{prop:SFPtoPOR}]
From Corollary~\ref{cor:STACS8},
there is a function $f'\in\mathcal{POR}$ and a polynomial
$p\in$ POLY such that, for any
$\sigma,y\in\mathbb S$, $\eta \in \mathbb B^\mathbb N $
and $\omega \in \mathbb O$,
\begin{align}
y= \eta_{p(|\sigma|)} \rightarrow
f(\sigma,\eta)=f'(\sigma,y,\omega).
\tag{$*$}
\end{align}
Let us fix $\overline{\eta}\in\{\eta\in\mathbb B^\mathbb N \ | \
f(\sigma,\eta)=\tau\}$;
its image with respect to $\sim_{dy}$ is in
$\{\omega \in \mathbb O \ | \ f'(\sigma,e(p'(size(\sigma,\omega),\omega),
\omega),\omega)= \tau\}$,
where $size$ is the function of $\mathcal{POR}$ computing
$\mathbb{1}^{|\sigma|+1}$,
and $p'\in \mathcal{POR}$
computes the polynomial $p$,
defined without resorting to $Q$.
Indeed, by Lemma~\ref{lemma:STACS29},
$\overline{\eta}_{p(|\sigma|)} = e(p'(size(\sigma,\omega),\omega),\omega)$.
Conversely, given a fixed
$\overline{\omega} \in\{\omega \in \mathbb O \ | \
f'(\sigma,e(p'(size(\sigma,\omega),\omega), \omega),\omega)=
\tau\}$,
its pre-image with respect to $\sim_{dy}$ is in
$\{\eta\in\mathbb B^\mathbb N \ | \ f(\sigma,\eta)=\tau\}$,
by an analogous argument.
Since $\sim_{dy}$ is a measure-preserving bijection between
these two sets,
$\mu(\{\eta\in\mathbb B^\mathbb N \ | \ f(\sigma,\eta)=\tau\})
=
\mu(\{\omega \in \mathbb O \ | \ f'(\sigma,e(p'(size(\sigma,\omega),
\omega),\omega),\omega) = \tau\})$,
which concludes the proof.
\end{proof}
\subsection{From $\mathcal{POR}$ to $\cc{SFP}$}
First, we define the imperative language
$\text{SIFP}_{\text{RA}}$ and prove that its poly-time programs are
equivalent to $\mathcal{POR}$.
To this end, we introduce $\text{SIFP}_{\text{RA}}$
and its big-step semantics.
\begin{defn}[Programs of $\text{SIFP}_{\text{RA}}$]
The language of programs of $\text{SIFP}_{\text{RA}}$
$\mathcal{L}(\prog{Stm}_{\text{RA}})$, i.e.~the set of strings
produced by non-terminal symbol
$\prog{Stm}_{\text{RA}}$, is defined
as follows:
\small
\begin{align*}
\prog{Id} &:= X_i \; \; \mbox{\Large{$\mid$}}\;\; Y_i \; \; \mbox{\Large{$\mid$}}\;\; S_i \; \; \mbox{\Large{$\mid$}}\;\;
R \; \; \mbox{\Large{$\mid$}}\;\; Q \; \; \mbox{\Large{$\mid$}}\;\; Z \; \; \mbox{\Large{$\mid$}}\;\; T \\
\prog{Exp} &:= \epsilon \; \; \mbox{\Large{$\mid$}}\;\;
\prog{Exp.0} \; \; \mbox{\Large{$\mid$}}\;\; \prog{Exp.1} \; \; \mbox{\Large{$\mid$}}\;\;
\prog{Id} \; \; \mbox{\Large{$\mid$}}\;\; \prog{Exp} \sqsubseteq \prog{Id} \; \; \mbox{\Large{$\mid$}}\;\;
\prog{Exp} \wedge \prog{Id} \; \; \mbox{\Large{$\mid$}}\;\;
\neg \prog{Exp} \\
\prog{Stm}_{\text{RA}} &:=
\prog{Id} \leftarrow \prog{Exp} \; \; \mbox{\Large{$\mid$}}\;\;
\prog{Stm}_{\text{RA}}; \prog{Stm}_{\text{RA}} \; \; \mbox{\Large{$\mid$}}\;\;
\mathtt{while} (\prog{Exp}) \{\prog{Stm}_{\text{RA}}\} \; \; \mbox{\Large{$\mid$}}\;\;
\mathtt{Flip}(\prog{Exp}),
\end{align*}
\normalsize
with $i\in\mathbb N $.
\end{defn}
\noindent
The big-step semantics associated to the language of $\text{SIFP}_{\text{RA}}$
programs relies on the notion of \emph{store}.
\begin{defn}[Store]
A \emph{store} is a function
$\Sigma:\prog{Id} \rightharpoonup
\mathbb B^*$;
an \emph{empty} store is a store which is total
and constantly equal to $\epsilon$.
We represent such an object as $[ \ ]$.
We define the updating of a store $\Sigma$
with a mapping from $y\in\prog{Id}$ to $\sigma\in \mathbb B^*$
as:
$$
\Sigma[y\leftarrow \sigma](x) := \begin{cases}
\sigma \ &\text{if } x=y \\
\Sigma(x) \ &\text{otherwise.}
\end{cases}
$$
\end{defn}
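The store and its update operation admit a direct functional reading. The following Python sketch makes the update functional, returning a new store and leaving the old one untouched, matching the definition of $\Sigma[y\leftarrow\sigma]$.

```python
class Store:
    """A store maps identifiers to binary strings; unassigned
    identifiers read as the empty string, so Store() is [ ]."""
    def __init__(self, mapping=None):
        self.mapping = dict(mapping or {})

    def __call__(self, ident):
        return self.mapping.get(ident, "")

    def update(self, ident, value):
        """Sigma[y <- sigma]: return a new store agreeing with this one
        everywhere except on ident."""
        new = dict(self.mapping)
        new[ident] = value
        return Store(new)
```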
\begin{defn}[Semantics of Expressions in $\text{SIFP}_{\text{RA}}$]
The semantics of an expression $E\in\mathcal{L}(\prog{Exp})$
is the smallest relation
$\rightharpoonup \subseteq \mathcal{L}(\prog{Exp})
\times (\prog{Id} \to \mathbb B^*) \times
\mathbb B^*$ closed under the following rules:
\small
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.4\linewidth}
\bigskip
\begin{prooftree}
\AxiomC{}
\UnaryInfC{$\langle \epsilon,\Sigma\rangle
\rightharpoonup \bm{\epsilon}$}
\end{prooftree}
\end{minipage}
\hfill
\begin{minipage}[t]{0.6\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle \rightharpoonup
\sigma$}
\UnaryInfC{$\langle e.\prog{b},\Sigma\rangle
\rightharpoonup \sigma \frown \mathbb{b}$}
\end{prooftree}
\end{minipage}
\end{minipage}
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.4\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle
\rightharpoonup \sigma$}
\AxiomC{$\Sigma(\prog{Id})=\tau$}
\AxiomC{$\sigma\subseteq \tau$}
\TrinaryInfC{$\langle e\sqsubseteq \prog{Id},
\Sigma\rangle \rightharpoonup \mathbb{1}$}
\end{prooftree}
\end{minipage}
\hfill
\begin{minipage}[t]{0.6\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle
\rightharpoonup \sigma$}
\AxiomC{$\Sigma(\prog{Id})=\tau$}
\AxiomC{$\sigma \not\subseteq \tau$}
\TrinaryInfC{$\langle e\sqsubseteq \prog{Id},
\Sigma\rangle \rightharpoonup \mathbb{0}$}
\end{prooftree}
\end{minipage}
\end{minipage}
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.4\linewidth}
\begin{prooftree}
\AxiomC{$\Sigma(\prog{Id})=\sigma$}
\UnaryInfC{$\langle \prog{Id},\Sigma\rangle
\rightharpoonup \sigma$}
\end{prooftree}
\end{minipage}
\hfill
\begin{minipage}[t]{0.6\linewidth}
\begin{prooftree}
\AxiomC{$\prog{Id} \not \in dom(\Sigma)$}
\UnaryInfC{$\langle \prog{Id},\Sigma\rangle
\rightharpoonup \bm{\epsilon}$}
\end{prooftree}
\end{minipage}
\end{minipage}
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.4\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle \rightharpoonup
\mathbb{0}$}
\UnaryInfC{$\langle \neg e,\Sigma\rangle
\rightharpoonup \mathbb{1}$}
\end{prooftree}
\end{minipage}
\hfill
\begin{minipage}[t]{0.6\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle \rightharpoonup\sigma$}
\AxiomC{$\sigma \neq\mathbb{0}$}
\BinaryInfC{$\langle \neg e,\Sigma\rangle
\rightharpoonup \mathbb{0}$}
\end{prooftree}
\end{minipage}
\end{minipage}
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.4\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle \rightharpoonup
\mathbb{1}$}
\AxiomC{$\Sigma(\prog{Id})=\mathbb{1}$}
\BinaryInfC{$\langle e\wedge \prog{Id},\Sigma\rangle
\rightharpoonup \mathbb{1}$}
\end{prooftree}
\end{minipage}
\hfill
\begin{minipage}[t]{0.6\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma \rangle \rightharpoonup
\sigma$}
\AxiomC{$\Sigma(\prog{Id})=\tau$}
\AxiomC{$\sigma\neq \mathbb{1} \vee
\tau \neq \mathbb{1}$}
\TrinaryInfC{$\langle e \wedge \prog{Id},\Sigma\rangle
\rightharpoonup \mathbb{0}$}
\end{prooftree}
\end{minipage}
\end{minipage}
\normalsize
where $\prog{b}\in \{0,1\}$.\footnote{We assume that
if $\prog{b}=\prog{1}$, then $\mathbb{b} =\mathbb{1}$,
and if $\prog{b}=\prog{0}$, then $\mathbb{b}=\mathbb{0}$.}
\end{defn}
\begin{defn}[Big-Step Operational Semantics]
The semantics of a program $P\in\mathcal{L}(\prog{Stm}_{\text{RA}})$ is the smallest
relation $\triangleright \subseteq \mathcal{L}(\prog{Stm}_{\text{RA}}) \times
(\prog{Id} \to \mathbb B^*)
\times \mathbb O \times (\prog{Id} \to \mathbb B^*)$
closed under the following rules:
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.4\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle \rightharpoonup
\sigma$}
\UnaryInfC{$\langle \prog{Id} \leftarrow
e,\Sigma,\omega\rangle \triangleright
\Sigma[\prog{Id} \leftarrow \sigma]$}
\end{prooftree}
\end{minipage}
\hfill
\begin{minipage}[t]{0.6\linewidth}
\begin{prooftree}
\AxiomC{$\langle s,\Sigma,\omega\rangle
\triangleright \Sigma'$}
\AxiomC{$\langle t,\Sigma',\omega\rangle
\triangleright \Sigma''$}
\BinaryInfC{$\langle s;t,\Sigma,\omega\rangle
\triangleright \Sigma''$}
\end{prooftree}
\end{minipage}
\end{minipage}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma \rangle \rightharpoonup
\mathbb{1}$}
\AxiomC{$\langle s,\Sigma,\omega\rangle
\triangleright \Sigma'$}
\AxiomC{$\langle \mathtt{while}(e)\{s\},
\Sigma',\omega\rangle \triangleright \Sigma''$}
\TrinaryInfC{$\langle \mathtt{while}(e)\{s\},
\Sigma, \omega\rangle \triangleright \Sigma''$}
\end{prooftree}
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.4\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma \rangle \rightharpoonup
\sigma $}
\AxiomC{$\sigma \neq \mathbb{1}$}
\BinaryInfC{$\langle \mathtt{while}(e)\{s\},
\Sigma,\omega\rangle \triangleright \Sigma$}
\end{prooftree}
\end{minipage}
\hfill
\begin{minipage}[t]{0.6\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle \rightharpoonup
\sigma$}
\AxiomC{$\omega(\sigma)=\mathbb{b}$}
\BinaryInfC{$\langle \mathtt{Flip}(e),\Sigma,\omega\rangle
\triangleright \Sigma[R\leftarrow \mathbb{b}]$}
\end{prooftree}
\end{minipage}
\end{minipage}
\end{defn}
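The two semantics above can be condensed into a small evaluator. The following is a Python sketch: expressions and statements are tagged tuples, stores are plain dictionaries defaulting to the empty string, $\omega$ maps query strings to bits, and reading $\sqsubseteq$ as a prefix test is an assumption of this sketch.

```python
def eval_expr(e, store):
    """Evaluate an expression against a store (dict from identifiers to
    binary strings; unset identifiers read as the empty string)."""
    tag = e[0]
    if tag == "eps":                       # epsilon
        return ""
    if tag == "app":                       # Exp.b
        return eval_expr(e[1], store) + e[2]
    if tag == "id":                        # Id
        return store.get(e[1], "")
    if tag == "sub":                       # Exp [= Id  (prefix test: an assumption)
        return "1" if store.get(e[2], "").startswith(eval_expr(e[1], store)) else "0"
    if tag == "and":                       # Exp /\ Id
        return "1" if eval_expr(e[1], store) == "1" == store.get(e[2], "") else "0"
    if tag == "not":                       # not Exp
        return "1" if eval_expr(e[1], store) == "0" else "0"
    raise ValueError(tag)

def exec_stmt(s, store, omega):
    """Big-step execution: given a statement, a store, and an oracle
    omega : strings -> bits, return the final store."""
    tag = s[0]
    if tag == "asgn":                      # Id <- Exp
        val = eval_expr(s[2], store)
        new = dict(store); new[s[1]] = val
        return new
    if tag == "seq":                       # s ; t
        return exec_stmt(s[2], exec_stmt(s[1], store, omega), omega)
    if tag == "while":                     # while (e) { s }
        while eval_expr(s[1], store) == "1":
            store = exec_stmt(s[2], store, omega)
        return store
    if tag == "flip":                      # Flip(e): R <- omega(value of e)
        new = dict(store); new["R"] = omega(eval_expr(s[1], store))
        return new
    raise ValueError(tag)
```

For instance, assigning $\epsilon.\prog{1}$ to $X_1$ and then executing $\mathtt{Flip}(X_1)$ stores $\omega(\mathbb{1})$ in $R$, as the Flip rule prescribes.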
\noindent
The semantics allows us to associate each
program of $\text{SIFP}_{\text{RA}}$ to the function
it evaluates:
\begin{defn}
The \emph{function evaluated
by an $\text{SIFP}_{\text{RA}}$ program} $\prog{P}$
is given by $\model{\cdot} : \mathcal{L}(\prog{Stm}_{\text{RA}}) \to (\mathbb S^n \times \mathbb O
\to \mathbb S)$, defined as below:\footnote{Instead
of the infixed notation for $\triangleright$,
we use its prefixed notation.
So, the notation expresses the store associated
to $P$,
$\Sigma$ and $\omega$ by $\triangleright$.
Moreover, we employ the same function symbol
$\triangleright$ to denote two distinct functions:
the big-step operational semantics of $\text{SIFP}_{\text{RA}}$
and the big-step operational semantics
of programs in $\text{SIFP}_{\text{LA}}$.}
$$
\model{\prog{P}} := \lambda x_1,\dots, x_n,\omega.
\triangleright (\langle \prog{P}, [ \ ] [X_1 \leftarrow x_1]
\dots [X_n \leftarrow x_n],\omega\rangle )
(R).
$$
\end{defn}
Observe that, among the different registers,
the register $R$ is meant to contain
the value computed by the program at the
end of its execution.
Similarly, the $\{X_i\}_{i\in\mathbb N }$ registers
are used to store the inputs of the function.
The correspondence between
$\mathcal{POR}$ and $\text{SIFP}_{\text{RA}}$ can be stated as
follows:
\begin{lemma}\label{lemma:PORtoSIFPra}
For any function $f\in\mathcal{POR}$,
there is a poly-time $\text{SIFP}_{\text{RA}}$ program
$\prog{P}$ such that
for all $x_1,\dots, x_n\in\mathbb S$ and $\omega\in\mathbb O$,
$\model{\prog{P}}(x_1,\dots, x_n,\omega)=f(x_1,\dots,
x_n,\omega)$.
Moreover, if $f$ is defined without resorting to $Q$, then
$\prog{P}$ does not contain any $\mathtt{Flip}(e)$
statement.
\end{lemma}
\begin{proof}[Proof Sketch]
The proof is quite simple from a technical
viewpoint,
relying on the fact that it is possible
to associate to any function of $\mathcal{POR}$
an equivalent poly-time program,
and on the possibility of composing such programs
and of implementing bounded recursion on notation
in $\text{SIFP}_{\text{RA}}$ with a polynomial overhead.
Concretely, for any function $f\in\mathcal{POR}$,
we define a program $\mathscr{L}_f$
such that $\model{\mathscr{L}_f}(x_1,\dots, x_n)
= f(x_1,\dots, x_n)$.
The correctness of $\mathscr{L}_f$ is given
by the following invariant properties:
\begin{itemize}
\itemsep0em
\item the result of the computation is stored in
$R$;
\item inputs are stored in the registers of the group
$X$;
\item the program $\mathscr{L}_f$ does not
change
the values it accesses as input.
\end{itemize}
We define $\mathscr{L}_f$
as follows:
\begin{align*}
\mathscr{L}_E &= R \leftarrow \epsilon \\
\mathscr{L}_{S_0} &= R \leftarrow X_0.\prog{0} \\
\mathscr{L}_{S_1} &= R \leftarrow X_0.\prog{1} \\
\mathscr{L}_{P^n_i} &= R \leftarrow X_i \\
\mathscr{L}_C &= R \leftarrow X_1 \sqsubseteq X_2 \\
\mathscr{L}_{Q} &= \mathtt{Flip}(X_1).
\end{align*}
The correctness of the base cases is trivial to prove,
and the only translation containing
$\mathtt{Flip}(e)$ for some
$e\in \mathcal{L}(\prog{Exp})$ is that of $Q$.
The encodings of composition and bounded
recursion are more convoluted.\footnote{The
proof of their correctness requires a conspicuous amount of
low-level definitions and technical results.
For an extensive presentation, see~\cite{Davoli}.}
\end{proof}
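For reference, the base cases of the translation mirror the base functions of $\mathcal{POR}$, which can be sketched as follows. This rendering, including the reading of $\sqsubseteq$ as a prefix test and this exact choice of base functions, is an assumption of the sketch.

```python
# x, x1, x2 range over binary strings; omega maps query strings to bits.
E  = lambda omega: ""                          # constant empty string
S0 = lambda x, omega: x + "0"                  # append 0
S1 = lambda x, omega: x + "1"                  # append 1

def P(n, i):
    """i-th projection out of n arguments (plus the oracle)."""
    return lambda *args: args[i - 1]

C  = lambda x1, x2, omega: "1" if x2.startswith(x1) else "0"  # x1 [= x2
Q  = lambda x, omega: omega(x)                 # one oracle query
```

Each of these corresponds one-to-one to a clause of the translation $\mathscr{L}_f$ above, with $Q$ being the only one that touches the oracle.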
\subsubsection{From $\text{SIFP}_{\text{RA}}$ to $\text{SIFP}_{\text{LA}}$}
We now show that every program in $\text{SIFP}_{\text{RA}}$
is equivalent to one in $\text{SIFP}_{\text{LA}}$.
First, we also need to define the language and semantics
of $\text{SIFP}_{\text{LA}}$.
\begin{defn}[Language of $\text{SIFP}_{\text{LA}}$]
The language of $\text{SIFP}_{\text{LA}}$ $\mathcal{L}(\prog{Stm}_{\text{LA}})$,
that is the set of strings produced by non-terminal
symbols $\prog{Stm}_{\text{LA}}$, is defined as follows:
$$
\prog{Stm}_{\text{LA}} := \prog{Id} \leftarrow \prog{Exp} \; \; \mbox{\Large{$\mid$}}\;\;
\prog{Stm}_{\text{LA}} ; \prog{Stm}_{\text{LA}} \; \; \mbox{\Large{$\mid$}}\;\;
\mathtt{while}(\prog{Exp})\{\prog{Stm}_{\text{LA}}\} \; \; \mbox{\Large{$\mid$}}\;\;
\mathtt{RandBit}().
$$
\end{defn}
\begin{defn}[Big-Step Semantics of $\text{SIFP}_{\text{LA}}$]
The semantics of a program $\prog{P}\in\mathcal{L}(\prog{Stm}_{\text{LA}})$ is the smallest relation
$\triangleright \subseteq \mathcal{L}(\prog{Stm}_{\text{LA}}) \times ((\prog{Id} \to
\mathbb B^*) \times \mathbb B^\mathbb N ) \times
((\prog{Id} \to \mathbb B^*) \times \mathbb B^\mathbb N )$
closed under the following rules:
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.4\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle \rightharpoonup \sigma$}
\UnaryInfC{$\langle \prog{Id} \leftarrow e,\Sigma,\eta\rangle
\triangleright \langle \Sigma[\prog{Id}\leftarrow \sigma],\eta\rangle$}
\end{prooftree}
\end{minipage}
\hfill
\begin{minipage}[t]{0.6\linewidth}
\begin{prooftree}
\AxiomC{$\langle s,\Sigma,\eta\rangle \triangleright
\langle \Sigma',\eta'\rangle$}
\AxiomC{$\langle t,\Sigma',\eta'\rangle \triangleright
\langle \Sigma'',\eta''\rangle$}
\BinaryInfC{$\langle s;t,\Sigma,\eta\rangle \triangleright
\langle \Sigma'',\eta''\rangle$}
\end{prooftree}
\end{minipage}
\end{minipage}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle \rightharpoonup
\mathbb{1}$}
\AxiomC{$\langle s,\Sigma,\eta\rangle \triangleright
\langle \Sigma',\eta'\rangle$}
\AxiomC{$\langle \mathtt{while}(e) \{s\}, \Sigma',\eta'\rangle
\triangleright \langle \Sigma'',\eta''\rangle$}
\TrinaryInfC{$\langle \mathtt{while}(e)\{s\},
\Sigma,\eta\rangle \triangleright \langle \Sigma'',\eta''\rangle$}
\end{prooftree}
\begin{minipage}{\linewidth}
\begin{minipage}[t]{0.4\linewidth}
\begin{prooftree}
\AxiomC{$\langle e,\Sigma\rangle \rightharpoonup \sigma$}
\AxiomC{$\sigma\neq \mathbb{1}$}
\BinaryInfC{$\langle \mathtt{while}(e)\{s\},\Sigma,\eta\rangle
\triangleright \langle \Sigma, \eta\rangle$}
\end{prooftree}
\end{minipage}
\hfill
\begin{minipage}[t]{0.6\linewidth}
%
\bigskip
\begin{prooftree}
\AxiomC{}
\UnaryInfC{$\langle \mathtt{RandBit}(), \Sigma,
\mathbb{b} \eta\rangle \triangleright
\langle\Sigma[R\leftarrow \mathbb{b}],\eta\rangle$}
\end{prooftree}
\end{minipage}
\end{minipage}
\end{defn}
\begin{lemma}\label{lemma:SIFPratoSIFPla}
For each total program $\prog{P}\in \text{SIFP}_{\text{RA}}$,
there is a $Q\in \text{SIFP}_{\text{LA}}$ such that,
for any $x,y\in \mathbb S$,
$$
\mu(\{\omega \in \mathbb O \ | \ \model{\prog{P}}(x,\omega)=y\})
= \mu(\{\eta \in \mathbb B^\mathbb N \ | \
\model{Q}(x,\eta)=y\}).
$$
Moreover, if $\prog{P}$ is poly-time, then so is $Q$.
\end{lemma}
\begin{proof}[Proof Sketch]
We prove Lemma~\ref{lemma:SIFPratoSIFPla}
showing that $\text{SIFP}_{\text{RA}}$ can be simulated in
$\text{SIFP}_{\text{LA}}$ given two novel \emph{small-step} semantic relations
($\leadsto_{\text{LA}},\leadsto_{\text{RA}}$) obtained by splitting the corresponding
big-step semantics into smallest transitions,
one for each $\cdot \ ; \ \cdot$ instruction.
The intuitive idea behind this novel semantics
is to enrich the big-step operational semantics with some pieces
of information, which are needed to build
an inductive proof of the reduction from
$\text{SIFP}_{\text{RA}}$ to $\text{SIFP}_{\text{LA}}$.
In particular, we employ a list $\Psi$ containing
pairs $(x,\prog{b})$,
to keep track of the previous calls to the primitive
$\mathtt{Flip}(x)$ for $\text{SIFP}_{\text{RA}}$,
and of the result of the $x$-th call of the
primitive $\mathtt{RandBit}()$ for $\text{SIFP}_{\text{LA}}$.
The main issue
consists in the simulation of the access to the
random tape.
Then, we define the translation so that
it stores in a specific and fresh register
an associative table recording all queries
$\sigma$ within a $\mathtt{Flip}(\sigma)$ instruction and
the result \prog{b}
picked from $\eta$ and returned.
The addition of the map $\Psi$ allows us
to replicate the content of the associative table
explicitly in the semantics of the program.
This simulation requires a translation of
$\mathtt{Flip}(e)$ into an equivalent procedure:
\begin{itemize}
\itemsep0em
\item at each simulated query $\mathtt{Flip}(e)$,
the destination program looks up the associative
table;
\item if it finds the queried coordinate $e$ within
a pair $(e,\prog{b})$, it returns $\prog{b}$.
Otherwise, (i)
it reduces $\mathtt{Flip}(e)$ to a call of
$\mathtt{RandBit}()$ which outputs either $\mathbb{b}=\mathbb{0}$
or $\mathbb{b}=\mathbb{1}$,
(ii) it records the couple $\langle e,\mathbb{b}\rangle$
in the associative table and returns $\mathbb{b}$.
\end{itemize}
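The table-based translation of $\mathtt{Flip}(e)$ can be sketched as follows (Python; `randbit` stands for a call to $\mathtt{RandBit}()$, rendered here as a zero-argument function consuming one bit of $\eta$).

```python
def make_flip(randbit):
    """Simulate SIFP_RA's Flip(e) with SIFP_LA's RandBit(): keep an
    associative table of queried coordinates and draw a fresh bit only
    for coordinates not seen before."""
    table = {}
    def flip(coord):
        if coord not in table:          # first query at this coordinate:
            table[coord] = randbit()    # one RandBit() call, recorded
        return table[coord]             # repeated queries reuse the bit
    return flip
```

In particular, querying the same coordinate twice consumes only one bit of the stream, which is exactly what makes the simulation distribution-preserving.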
The construction is convoluted, but
this kind of simulation preserves the distributions
of strings computed by the original program:
\begin{enumerate}
\itemsep0em
\item we show that the big-step and small-step
semantics for $\text{SIFP}_{\text{RA}}$ and $\text{SIFP}_{\text{LA}}$ are
equally expressive.
\item we define a relation $\Theta$ between configurations of
the small-step semantics of $\text{SIFP}_{\text{RA}}$
and $\text{SIFP}_{\text{LA}}$:
\begin{itemize}
\itemsep0em
\item[$*$] first, we define a function
$\alpha:\mathcal{L}(\prog{Stm}_{\text{RA}}) \to \mathcal{L}(\prog{Stm}_{\text{LA}})$ mapping the program
$\prog{P}_{\text{RA}} \in \mathcal{L}(\prog{Stm}_{\text{RA}})$ into the corresponding
$\prog{P}_{\text{LA}}\in\mathcal{L}(\prog{Stm}_{\text{LA}})$, translating
$\mathtt{Flip}(e)$ as described above.
\item[$*$] we define a relation $\beta$ between a triple
$\langle \prog{P}_{\text{RA}},\Sigma,\Psi\rangle$ and a single
store $\Gamma$,
which is meant to capture the configuration-to-store
dependencies between the configuration
of $\prog{P}_{\text{RA}}$, running with store $\Sigma$
and computed associative table $\Psi$,
and the store $\Gamma$ of the simulating $\prog{P}_{\text{LA}}$.
In particular, $\Gamma$ stores a representation of $\Psi$
in a dedicated register.
\item[$*$] we define a function $\gamma$ which transforms the constraints
on the oracle gathered by $\leadsto_{\text{RA}}$ into the information
collected by $\leadsto_{\text{LA}}$.
Then, $\Theta$ is defined so that $
\Theta(\langle \prog{P},\Sigma_1,\Psi\rangle, \langle Q,
\Sigma_2,\Phi\rangle)$ holds when:
(i) $\alpha(\prog{P})=Q$, (ii) $\beta(\langle
\prog{P},\Sigma_1,\Psi\rangle,
\Sigma_2)$,
(iii) $\gamma(\Psi)=\Phi$,
(iv) $\mu(\Psi)=\mu(\Phi)$.
\end{itemize}
\item we show that $\Theta$ associates to
each triple $\langle \prog{P}_{\text{RA}},\Sigma, \Psi\rangle$
triples
$\langle \prog{P}_{\text{LA}},\Gamma,\Phi\rangle$
which weakly simulate the relation $\leadsto_{\text{RA}}$ with respect
to $\leadsto_{\text{LA}}$.
This is depicted in Figure~\ref{fig:SIFPratoSIFPla}.
\end{enumerate}
\begin{figure}[]
\centering
\begin{tikzpicture}[node distance=5cm]
\node(PRA) at (-3,2) {$\langle P_{\text{RA}};Q_{\text{RA}}, \Sigma, \Psi\rangle$};
\node(PLA) at (-3,-0) {$\langle P_{\text{LA}};Q_{\text{LA}}, \Gamma, \Phi\rangle$};
\node(P1RA) at (3,2) {$\langle Q_{\text{RA}}, \Sigma', \Psi'\rangle$};
\node(P1LA) at (3,0) {$^*\langle Q_{\text{LA}}, \Gamma', \Phi'\rangle$};
\node at (-3.3,1) {$\Theta$};
\node at (3.3,1) {$\Theta$};
\draw[->,densely dashed,thick] (PRA) to (P1RA);
\draw[->,densely dashed,thick] (PLA) to (P1LA);
\draw[->] (PRA) to (PLA);
\draw[->] (P1RA) to (P1LA);
\end{tikzpicture}
\caption{Commutation schema between $\text{SIFP}_{\text{RA}}$ and $\text{SIFP}_{\text{LA}}$}
\label{fig:SIFPratoSIFPla}
\end{figure}
%
\end{proof}
\subsubsection{From $\text{SIFP}_{\text{LA}}$ to $\cc{SFP}_{\text{OD}}$}
We introduce $\cc{SFP}_{\text{OD}}$,
the class corresponding to $\cc{SFP}$ as defined
on a variation of STMs which can read characters from the oracle
\emph{on-demand},
and show that $\text{SIFP}_{\text{LA}}$ can be reduced to it.
For readability's sake, we avoid cumbersome details,
focussing on an informal (but exhaustive) description
of how to build the on-demand STM.
%
\begin{prop}\label{prop:SIFPlatoSFPod}
For every $\prog{P}\in\mathcal{L}(\prog{Stm}_{\text{LA}})$, there is a $\mathscr{M_S}^{\prog{P}} \in \cc{SFP}_{\text{OD}}$
such that for any $x\in\mathbb S$ and
$\eta \in\mathbb B^\mathbb N $,
$\prog{P}(x,\eta)=\mathscr{M_S}^{\prog{P}}(x,\eta)$.
Moreover, if $\prog{P}$ is poly-time, then $\mathscr{M_S}^\prog{P}$ is also
poly-time.
\end{prop}
\begin{proof}[Proof Sketch]
The construction relies on the implementation of a
program in $\text{SIFP}_{\text{LA}}$ by a multi-tape on-demand STM
which uses one tape to store the value of each register,
plus an additional tape containing the partial results
obtained during the evaluation
of expressions and another tape containing $\eta$.
%
We denote by $e$ the tape used for storing the
result of expression evaluation.
%
The following invariant properties hold:
\begin{itemize}
\itemsep0em
\item on each tape, values are stored to the immediate
right of the head,
\item the result of the last expression evaluated is stored
on tape $e$ to the immediate right of the head.
\end{itemize}
The value of expressions of $\text{SIFP}$ can be computed
using the tape $e$.\footnote{In particular, we prove this by induction
on the syntax of expressions. (1)
Each access to the value stored in a register
consists in a copy of the content
of the corresponding tape to $e$, which is a simple operation
(due to the invariant properties above).
(2)
Concatenations – namely $f.\prog{0}$ and
$f.\prog{1}$ – are implemented by the addition of a character
at the end of $e$, which contains the value of $f$.
(3) The binary expressions are non-trivial.
Since one of the two operands is a register identifier,
the machine can directly compare $e$ with the tape
corresponding to the identifier, and replace the content
of $e$ with the result of the comparison,
which in all cases is $\mathbb{0}$ or $\mathbb{1}$.
Further details can be found in~\cite{Davoli}.}
All operations can be implemented without consuming any
character on the oracle tape
and in time linear in the
size of the value of the expression.
We assign a sequence of machine states,
$q^I_{s_i},q_{s_i}^1,\dots, q_{s_i}^F$,
to each statement $s_i$.
\longv{
\begin{itemize}
\itemsep0em
\item assignments consist in copying the value in $e$
to the tape corresponding to the destination register
and deleting the value on $e$ by replacing it with $\circledast$.
This is implemented without consuming any character on
the oracle tape.
\item the sequencing operation $s;t$ is implemented
by inserting a composed transition from $q_s^F$ to $q_t^I$
in $\delta$.
This does not consume the oracle tape.
\item a $\mathtt{while}()$ statement $s=\mathtt{while}(f)\{t\}$
\end{itemize}}
In particular, the statement $\mathtt{RandBit}()$ is implemented by consuming
a character on the oracle tape and copying its value onto the tape
which corresponds to the register $R$.
Furthermore, if we assume $\prog{P}$ to be poly-time,
after the simulation of each statement,
it holds that:
\begin{itemize}
\itemsep0em
\item the length of the non-blank portion of the tapes
corresponding to the registers is polynomially bounded,
as their contents are precisely the contents of
$\prog{P}$'s registers, which are polynomially bounded as a consequence
of the poly-time hypothesis,
\item the heads of all the tapes corresponding to the registers
point to the left-most symbol of the string thereby contained.
\end{itemize}
It is well known that reducing the number of tapes
of a poly-time TM comes with a polynomial overhead in time.
For this reason, we conclude that the poly-time multi-tape
on-demand STM can be
reduced to a poly-time canonical on-demand STM.
This concludes our proof.
\end{proof}
%
%
%
%
It remains to show that each on-demand STM can be reduced
to an equivalent STM.
\begin{lemma}\label{lemma:STACS32}
For every $\mathscr{M_S} =\langle \mathbf{Q}, \Sigma, \delta,q_0\rangle
\in \cc{SFP}_{\text{OD}}$,
the machine $\mathscr{M_S}'=\langle \mathbf{Q}, \Sigma, H(\delta), q_0\rangle
\in \cc{SFP}$ is such that for every $n\in\mathbb N $
and every configuration $\langle \sigma,q,\tau,\eta\rangle$ of $\mathscr{M_S}$,
and for every $\sigma',\tau'\in\mathbb S$ and $q'\in\mathbf{Q}$,
\scriptsize
$$
\mu(\{\eta \in \mathbb B^\mathbb N \ | \ (\exists \eta')\langle\sigma,q,\tau,\eta\rangle
\triangleright^n_\delta \langle \sigma',q',\tau',\eta'\rangle\})
= \mu(\{\xi \in \mathbb B^\mathbb N \ | \ (\exists \xi')
\langle \sigma,q,\tau,\xi\rangle \triangleright^n_{H(\delta)}
\langle \sigma',q',\tau',\xi'\rangle \}).
$$
\normalsize
\end{lemma}
\begin{proof}[Proof Sketch]
Also in this case, the proof relies on a reduction.
In particular, we show that, given an on-demand STM,
it is possible to build an STM which is equivalent
to the former.
%
Intuitively, the encoding from an on-demand STM to an ordinary
STM takes the transition function $\delta$ of the on-demand STM
and substitutes each transition not causing the oracle tape to shift
– i.e. tagged with $\natural$ – with two distinct transitions,
in which $\natural$ is substituted by $\mathbb{0}$ and $\mathbb{1}$, respectively.
%
This causes the resulting machine to produce an identical transition while
shifting the head on the oracle tape to the right.
%
%
%
%
Formally, the encoding $H$ from an on-demand STM
to a canonical STM is defined as follows:
$$
H\big(\langle \mathbf{Q}, \Sigma, \delta, q_0\rangle\big) :=
\big\langle \mathbf{Q}, \Sigma, \bigcup_{t\in\delta} \Delta_H(t), q_0\big\rangle,
$$
where $\Delta_H$ is defined by:
\begin{align*}
\Delta_H\big(\langle p, c_r,\mathbb{0},q,c_w,d\rangle\big) &:=
\{\langle p,c_r, \mathbb{0}, q,c_w,d\rangle\} \\
\Delta_H\big(\langle p,c_r,\mathbb{1},q,c_w,d\rangle\big) &:=
\{\langle p,c_r,\mathbb{1},q,c_w,d\rangle\} \\
\Delta_H \big(\langle p,c_r,\natural, q,c_w,d\rangle\big) &:=
\{\langle p, c_r,\mathbb{0}, q,c_w,d\rangle,
\langle p,c_r,\mathbb{1},q,c_w,d\rangle\}.
\end{align*}
\end{proof}
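The clauses defining $\Delta_H$ amount to a simple set transformation on transition tables; a minimal Python sketch, assuming transitions are encoded as $6$-tuples $(p, c_r, o, q, c_w, d)$ with \texttt{None} standing for $\natural$ (an illustrative encoding, not the paper's):

```python
def expand(delta):
    """Canonicalize an on-demand transition table: each transition tagged
    with the natural symbol (None, i.e. no oracle shift) is replaced by two
    transitions reading oracle bits 0 and 1; the others are kept as they are."""
    out = set()
    for (p, c_r, o, q, c_w, d) in delta:
        if o is None:                       # the natural-tagged case
            out.add((p, c_r, 0, q, c_w, d))
            out.add((p, c_r, 1, q, c_w, d))
        else:                               # oracle bit already 0 or 1
            out.add((p, c_r, o, q, c_w, d))
    return out
```

Since both expanded transitions perform the same action apart from the oracle head shift, the measure of oracle strings leading to a given configuration is preserved, matching the identity stated in the lemma.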
\subsubsection{From $\mathcal{POR}$ to $\cc{SFP}$}
We then conclude the proof relating $\mathcal{POR}$ and $\cc{SFP}$.
\begin{prop}[From $\mathcal{POR}$ to $\cc{SFP}$]\label{prop:PORtoSFP}
For any $f:\mathbb S^k\times \mathbb O \to \mathbb S \in\mathcal{POR}$,
there is a function $f^\star : \mathbb S^k\times \mathbb B^\mathbb N \to \mathbb S$
such that for all $\sigma_1,\dots, \sigma_k,\tau\in \mathbb S$,
$$
\mu\big(\{\eta \in \mathbb B^\mathbb N \ | \ f(\sigma_1,\dots, \sigma_k,\eta)=\tau\}\big) =
\mu\big(\{\omega \in \mathbb O \ | \ f^\star(\sigma_1,\dots,\sigma_k,\omega)=\tau\}\big).
$$
\end{prop}
\begin{proof}
It is a straightforward consequence of Lemma~\ref{lemma:PORtoSIFPra}, Lemma~\ref{lemma:SIFPratoSIFPla}, Proposition~\ref{prop:SIFPlatoSFPod},
and Lemma~\ref{lemma:STACS32}.
\end{proof}
}
\section{Introduction}
It has been of long-standing interest to study the ability of analog computing systems to
solve computationally difficult problems \cite{R93,ERN08}. It is recently of growing interest to
investigate the power of quantum adiabatic time evolution in this direction \cite{F01}.
Nevertheless, it has been commonly believed, with strong theoretical and numerical evidence,
that a desired solution should not be obtained with a sufficiently large probability within
polynomial time owing to the exponential decrease in the energy gap between desired and undesired
eigenstates during an adiabatic change of Hamiltonians \cite{D01,Z05,Z06,A08,HY11,F12}.
Recently, Yamamoto {\em et al.} wrote a series of papers \cite{UTY11,TUY12,YTU12}
on their model\textemdash the so-called coherent computing model\textemdash of an injection-locked slave
laser network, which uses quantum states to some extent, in contrast to conventional classical optical
computing models \cite{SMDR07,OSC2008}.
It was claimed to be promising in solving the Ising spin configuration problem \cite{B82}, and those
problems polynomial-time reducible to it, faster than known conventional models.
The Ising spin configuration problem has been well-known as a typical NP-hard problem described by
an Ising-type Hamiltonian \cite{B82}. A typical description is as follows.
\begin{quote}
{\em Ising spin configuration problem:}
Given a graph $G=(V,E)$ with set $V$ of vertices and set $E$ of edges, and
weighting functions $J:E\rightarrow\{0,\pm 1\}$ and $B:V\rightarrow\{0,\pm 1\}$, find
the minimum eigenvalue $\lambda_{\rm g}$ of the Hamiltonian
$H=\sum_{(ij)\in E}J_{ij}\sigma_{z,i}\sigma_{z,j}+\sum_{i\in V}B_i\sigma_{z,i}$. Here,
$\sigma_{z,i}$ is the Pauli Z operator acting on the space of the $i$th spin (there are $n=|V|$
spin-1/2's).
\end{quote}
From an intuitive point of view, the problem is difficult in the sense that the number of given
parameters grows quadratically while the number of eigenvalues including multiplicity grows
exponentially. Although the Hamiltonian is diagonal in the Z basis, writing it in the matrix
form itself takes exponential time. Hereafter, we employ $n$ for representing the input
length of an instance although, precisely speaking, the bit length of an encoded instance is
$O(n^2)$. We do not go into the controversy on the definition of the input length \cite{ONeil09}.
As for known results on the complexity of the problem, it becomes P in case the graph is a
planar graph and $B_i=0~~\forall i$ (see Ref.~\cite{Is00}); for nonplanar graphs, it is in
general NP-hard, and it is so under many different conditions \cite{Is00}. In addition, a planar graph
together with nonzero $B_i$'s also makes the problem NP-hard \cite{B82}. It is also worthwhile to mention
that the typical value of $\lambda_{\rm g}$ is $c_{\rm g}n$ with coefficient $c_{\rm g}$ (the so-called ground-state energy
density) typically between $-2$ and $-1/2$ when the values of $J_{ij}$ are chosen in a certain random
manner and $B_i$ are set to zero \cite{VT77,Kirkpatrick77,MB80,Derrida80,Derrida81,Simone95,Andreanov04,Boettcher10}
($c_{\rm g}$ is between $-1.5$ and $-1$ when the graph is a ladder and $J_{ij}$ and $B_i$ are randomly
chosen from $\{\pm1\}$ \cite{Kadowaki95}). Furthermore, it should be mentioned that the distribution
of eigenenergies of $H$ (namely, the envelope of the multiplicity of eigenenergies with a normalization) is a
normal distribution with mean zero and standard deviation proportional to $\sqrt{n}$ in the random energy
model \cite{Derrida80,Derrida85,Andreanov04}. Here, the important observation is that the standard deviation
increases with $n$ in spite of the exponentially increasing number of spin configurations.
Let us also introduce the NP-complete variant of the Ising spin configuration problem as follows.
\begin{quote}
{\em NPC Ising spin configuration problem:}\\
{\em Instance:}
Positive integer $n$, integer $K$, and parameters $J_{ij}\in\{0,\pm 1\}$ ($i < j$)
and $B_i\in\{0,\pm 1\}$ for integers $0\le i,j \le n-1$.\\
{\em Question:}
Is there an eigenvalue $\lambda$ of the Hamiltonian $H=\sum_{i,j=0; i < j}^{n-1} J_{ij}\sigma_{z,i}\sigma_{z,j}
+\sum_{i=0}^{n-1}B_i\sigma_{z,i}$ such that $\lambda < K$ ?
\end{quote}
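For concreteness, the decision problem admits a (deliberately exponential-time) brute-force procedure, since $H$ is diagonal in the Z basis and its eigenvalues are just the energies of the $2^n$ spin configurations; a minimal Python sketch, with spins encoded as $\pm 1$ and the coefficients as an upper-triangular matrix (an illustrative encoding):

```python
from itertools import product

def has_eigenvalue_below(n, J, B, K):
    """Naive decider for the NPC Ising spin configuration problem:
    enumerate all 2^n spin configurations and test whether some
    eigenvalue lambda < K exists."""
    for spins in product((+1, -1), repeat=n):
        lam = sum(J[i][j] * spins[i] * spins[j]
                  for i in range(n) for j in range(i + 1, n))
        lam += sum(B[i] * spins[i] for i in range(n))
        if lam < K:
            return True
    return False

# Two antiferromagnetically coupled spins, no field: eigenvalues are -1 and 1.
J = [[0, 1], [0, 0]]
assert has_eigenvalue_below(2, J, [0, 0], 0)       # lambda = -1 < 0
assert not has_eigenvalue_below(2, J, [0, 0], -1)  # no lambda < -1
```

The point of the coherent computing model discussed below is precisely the hope of avoiding this $2^n$ enumeration.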
This is the problem we are going to investigate in this contribution as for its computational difficulty
under the coherent computing model.
Let us now briefly look into Yamamoto {\em et al.}'s coherent computing model \cite{UTY11,TUY12,YTU12} which
is schematically depicted as Fig.~\ref{figlasernetwork}.
\begin{figure}[ptb]
\begin{center}
\resizebox{0.82\textwidth}{!}{\includegraphics{lasernetwork}}
\caption{\label{figlasernetwork}
Schematic description of the coherent computing model. See the text for how $J_{ij}$ and $B_i$
are realized by optical instruments.}
\end{center}
\end{figure}
It has one master laser whose output is split into $n$ paths and injected to $n$ slave lasers.
Each slave laser is initially locked to the superposed state $(|R\rangle_i+|L\rangle_i)$
where $|R\rangle$ and $|L\rangle$ are the right and left circular polarized states
(see, {\em e.g.}, Refs.~\cite{H67,HY84} for physics of the injection-locked laser system).
The initial state of the $n$ slave lasers is therefore $\bigotimes_{i=0}^{n-1}(|R\rangle_i+|L\rangle_i)$.
The laser network is a macroscopic system; thus initially it holds many photons in this same state.
The computational basis is set to $\{|R\rangle,|L\rangle\}^n$ and $\sigma_z$ is written as
$|R\rangle\langle R|-|L\rangle\langle L|$.
The $i$th slave laser and the $j$th slave laser are connected for nonzero $J_{ij}$. At time
$t=0$, they mutually inject a small amount of horizontally polarized signal via an attenuator,
a phase shifter, and a horizontal linear polarizer, which determine the amplitude attenuation
coefficient that is regarded as $J_{ij}$. Among the three instruments, the attenuator's transmission
coefficient controls $|J_{ij}|$ and the other instruments control ${\rm sgn}\, J_{ij}$.
In addition, a small amount of injection of horizontally polarized signal is also made from the
master laser to each slave laser at $t=0$. This amount corresponds to $B_i$ for the $i$th slave laser.
It is controlled by the combination of a half-wave plate and a quarter wave plate.
For more details of implementation of the coefficients, see section 7 of Utsunomiya {\em et al.}~\cite{UTY11}.
Then one waits for a small time duration $t_{\rm st}$ to let the system evolve. Laser modes satisfying
the matching condition with the above-mentioned setting grow rapidly and other modes are suppressed.
For $t>t_{\rm st}$, the system is thought to be in a steady state. Then for each slave laser its output
is guided to a polarization beam splitter and the right and the left polarization components are separately
detected by photodetectors. By a majority vote of photon number counting, the computational result of each
slave laser, $|a\rangle_i\in\{|R\rangle,|L\rangle\}$, is retrieved. The steady state
$|a\rangle_0\cdots|a\rangle_{n-1}$ is thus determined. Once this is determined,
it takes only polynomial time to calculate the corresponding eigenvalue since there are
only $O(n^2)$ terms in the Hamiltonian (here, we do not use its matrix form).
Thus, in short, the state starts from $(|R\rangle+|L\rangle)^{\otimes n}$ and eventually reaches a
steady state representing a configuration that corresponds to the minimum energy of the given Hamiltonian.
Yamamoto {\em et al.} \cite{UTY11,TUY12,YTU12} employed rate equations involving several factors
characterizing each oscillator and connections with other oscillators to analyze photon numbers of the right
and left polarization components for each slave laser; they concluded that the system reaches a steady state
within 10 nanoseconds without obvious dependence on $n$.
It has been unknown so far if the coherent computing model is a valid computer model in view of
a rigid and fair description of computational costs. Conventional analog computing models do not
solve NP-hard problems within a polynomial cost; they require either exponentially long convergence time
or exponentially fine accuracy \cite{Aa05}. Thus it should be natural to be skeptical against the power
of the coherent computing model. In this contribution, we investigate the signal-to-noise ratio in the
output of the coherent computer when the NPC Ising spin configuration problem is handled.
We will reach the fact that for certain hard instances, the relative signal intensity corresponding to solutions
is bounded above by a function decreasing exponentially in $n$. This is because the number of modes that
are possibly locked in the laser network increases rapidly in $n$ owing to the fact that the locking range of
the laser network does not shrink as $n$ grows considering imperfectness of optical instruments.
The analysis of computational difficulty is described in Sec.~\ref{secanalysis}. The result is
discussed in Sec.~\ref{secdiscussion} and summarized in Sec.~\ref{secconclusion}.
\section{Computational difficulty in the coherent computing model}\label{secanalysis}
The coherent computing model illustrated in Fig.~\ref{figlasernetwork} was so far analyzed
by Utsunomiya {\em et al.} \cite{UTY11,TUY12,YTU12} on the basis of the assumption that given
coefficients $J_{ij}$ and $B_{i}$ are exactly implemented by optical instruments, although fluctuations
and quantum noise in the system were considered in their analyses of time evolutions using rate equations, which led
to a quite ideal convergence taking only 10 nanoseconds.
\setcounter{footnote}{1}
Here, we assume that individual optical instruments are imperfect\footnotemark[1] so that there are errors in
$J_{ij}$ and $B_{i}$, which are due to calibration errors and/or thermal fluctuations.
\footnotetext[1]{It is a common case that each optical instrument has a few permil uncertainty
in the calibration of each property (see Ref.~\cite{sp250}).
In addition, there is a quantum limit in any classical instrument \cite{Cl10,La04}
so that a manufacturing error and a manipulation error cannot be made arbitrarily small.}
Then the following proposition is achieved.
\begin{proposition}\label{prop1}
Consider the NPC Ising spin configuration problem.
Suppose calibration errors and/or thermal fluctuations of optical instruments cause
nonzero physical deviations,\footnotemark[1]
$\;\varepsilon_{ij}\in{\bf R}$ for nonzero $J_{ij}$ and $\kappa_i\in{\bf R}$ for nonzero $B_{i}$.
We assume that $\varepsilon_{ij}$ are {\em i.i.d.} random variables with mean zero and a certain standard deviation
$\sigma_\varepsilon$ and $\kappa_i$ are {\em i.i.d.} random variables with mean zero and a certain standard deviation
$\sigma_\kappa$. Then, for large $n$, there exist YES instances such that the probability to obtain a spin
configuration corresponding to one of $\lambda{\textrm 's}<K$ using the coherent computer is
$\le {\rm poly}(n)2^{-n}$.
\end{proposition}
The proof is given as below.\\
~\\
{\bf Proof of Proposition \ref{prop1}}\\
Here we consider instances generated in the way that $J_{ij}$'s and $B_i$'s are independent uniformly distributed
random variables with values in $\{0, \pm 1\}$. Since a problem instance is a given data set, the standard deviation
for $J_{ij}$ and that for $B_i$ intrinsic to the problem instance itself are not of our concern. We only consider
physical deviations as errors.
As the model is a sort of a bulk model (there are many photons), it is convenient to consider
populations of individual configurations. Let $P_{\lambda,l_\lambda}(t)$ be the population of each eigenstate
$|\varphi_{\lambda,l_\lambda}\rangle$ ($l_\lambda\in\{0, \ldots, d_\lambda-1\}$) corresponding to eigenenergy $\lambda$
of the Hamiltonian (the Hamiltonian is specified by the problem instance), where $t$ stands for time and $d_\lambda$ is
the multiplicity of $\lambda$. We also introduce $P_\lambda(t)=\sum_{l_\lambda=0}^{d_\lambda-1} P_{\lambda,l_\lambda}(t)$.
It should be kept in mind that we do not start from the thermal distribution; for the initial state,
we have identical copies of $\sum_\lambda\sum_{l_\lambda}|\varphi_{\lambda,l_\lambda}\rangle=(|R\rangle+|L\rangle)^{\otimes n}$.
In the present setting, the random-energy model \cite{Derrida80,Derrida85} is valid\footnote{
Let us pick up a certain configuration $|\varphi\rangle$. Suppose, by applying $m$ bit flips,
its energy changes by $\Delta E(\varphi\overset{m}{\mapsto}\varphi')$ with $|\varphi'\rangle$ a resultant configuration.
This process should obey the random energy change and hence for large $m$,
$\Delta E(\varphi\overset{m}{\mapsto}\varphi')$ should obey the normal distribution with mean zero and a standard
deviation proportional to $\sqrt{m}$ by the central limit theorem (with regard to a sum of random variables).
In addition, the most typical number of bit flips is $n/2$ when we generate all other configurations
from $|\varphi\rangle$. Typical bit flips generate a dominant number of configurations.
Thus the distribution of energies is approximated by the normal distribution with mean zero and a standard
deviation proportional to $\sqrt{n}$. In this way, we have just obtained the distribution of energies
in the random-energy model.
} and hence, for large $n$, with an appropriate scaling factor $M$, one can write
$P_\lambda(0)=M\mathcal{N}(0, \sigma_\lambda^2)$ with $\sigma_\lambda=\Theta(\sqrt{n})$ where $\mathcal{N}(\mu, \sigma^2)$ is
the density function of the normal distribution with mean $\mu$ and standard deviation $\sigma$.
Here, we have $M = 2^nP_{\lambda_{\rm g},0}(0)$ with $\lambda_{\rm g}$ the ground state energy
because the initial population is the same for all the configurations.
Let us denote the set of solution states (spin configurations corresponding to
$\lambda{\textrm 's}<K$) as $Y$.
The total population of solution states at $t$ is given by $P_{Y}(t)=\sum_{\lambda < K}P_\lambda(t)$.
Similarly, the total population of nonsolution states is given by $P_{X}(t)=\sum_{\lambda\ge K}P_\lambda(t)$; here,
$X=\{|\varphi_{\lambda,l_\lambda}\rangle\;|\;\lambda\ge K\}$. Ideally, only $|\varphi_{\lambda,l_\lambda}\rangle{\textrm 's}\in Y$
will enjoy population enhancement by mode selections. However, there exists $v \ge K$ such that
$P_\lambda(t>t_{\rm st}) \gg 0$ for $\lambda \le v$. This is because the matching condition is imperfect in reality;
the locking range is broader than the ideal range considering errors in optical
instruments.\footnote{See, {\em e.g.}, Ref.~\cite{KK81} for an experimental gain curve.}
Let us write $P_{Z}(t)=\sum_{K \le \lambda \le v}P_\lambda(t)$; here, $Z=\{|\varphi_{\lambda,l_\lambda}\rangle\;|\;K \le \lambda \le v\}$.
By assumption, we are considering physical deviations (including calibration errors and thermal fluctuations),
$\varepsilon_{ij}$ for nonzero $J_{ij}$ and $\kappa_i$ for nonzero $B_i$.
The Hamiltonian implemented on the laser network is written as
$\widetilde{H}=\sum_{i<j|J_{ij}\not = 0}(J_{ij}+\varepsilon_{ij})\sigma_{z,i}\sigma_{z,j}
+\sum_{i|B_i\not = 0}(B_i+\kappa_i)\sigma_{z,i}$.
This suggests that $v=K+K'(n)$
with $K'(n)\simeq\sigma_\varepsilon \sqrt{n(n-1)/3}+\sigma_\kappa\sqrt{2n/3}$
by the central limit theorem with regard to a sum of random variables (see, {\em e.g.}, Refs.~\cite{Shiryaev,Klenke}),
considering the expected number of nonzero $J_{ij}$'s, namely $n(n-1)/3$, and that of nonzero $B_i$'s, namely $2n/3$.
Therefore, $P_{Z}(0)=M\int_K^{K+K'(n)}\mathcal{N}(0, \sigma_\lambda^2){\rm d}\lambda$.
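As a numerical sanity check (not part of the argument), the scale of the total deviation can be estimated by Monte Carlo; a sketch assuming Gaussian deviations and coefficients uniform on $\{0,\pm1\}$, so that each coupling and each field is nonzero with probability $2/3$. Note that the additive form of $K'(n)$ above upper-bounds the combined central-limit standard deviation $\sqrt{\sigma_\varepsilon^2\, n(n-1)/3+\sigma_\kappa^2\, 2n/3}$; both are $\Theta(n)$.

```python
import math
import random

def deviation_std_mc(n, sigma_eps, sigma_kap, trials=5000, seed=0):
    """Monte Carlo std of the total deviation sum eps_ij + sum kap_i over
    the nonzero couplings/fields: each of the n(n-1)/2 couplings and each
    of the n fields is nonzero with probability 2/3."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        total = 0.0
        for _ in range(n * (n - 1) // 2):
            if rng.random() < 2 / 3:
                total += rng.gauss(0.0, sigma_eps)
        for _ in range(n):
            if rng.random() < 2 / 3:
                total += rng.gauss(0.0, sigma_kap)
        samples.append(total)
    m = sum(samples) / trials
    return math.sqrt(sum((s - m) ** 2 for s in samples) / trials)

def combined_std(n, sigma_eps, sigma_kap):
    # central-limit value: the variances of the two sums add
    return math.sqrt(sigma_eps**2 * n * (n - 1) / 3 + sigma_kap**2 * 2 * n / 3)

def k_prime(n, sigma_eps, sigma_kap):
    # the additive estimate used in the text (an upper bound on combined_std)
    return sigma_eps * math.sqrt(n * (n - 1) / 3) + sigma_kap * math.sqrt(2 * n / 3)
```

For example, with $n=30$ and $\sigma_\varepsilon=\sigma_\kappa=0.01$, the Monte Carlo estimate agrees with \texttt{combined\_std} to within sampling error, and both lie below \texttt{k\_prime}.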
Let us write $H=H_J+H_B$ with $H_J=\sum_{i<j}J_{ij}\sigma_{z,i}\sigma_{z,j}$ and $H_B=\sum_iB_i\sigma_{z,i}$.
As we have mentioned, it is known \cite{VT77,Kirkpatrick77,MB80,Derrida80,Derrida81,Simone95,Andreanov04,Boettcher10}
that the ground state energy of $H_J$ is typically $c_{\rm g} n$ with $-2<c_{\rm g}<-1/2$.
Therefore, for any normalized vector $|v\rangle$ in the Hilbert space of the system of our concern,
$\langle v|H|v\rangle$ is typically bounded below by $-3n$. Thus, for typical instances we can choose
$K=K(n)$ with $-K(n)=O(n)$. Recall that $K'(n)=\Theta(n)$ and $\sigma_\lambda=\Theta(\sqrt{n})$.
We find that $\int_K^{K+K'(n)}\mathcal{N}(0, \sigma_\lambda^2){\rm d}\lambda
=\left[\frac{1}{2}{\rm erf}(\frac{\lambda}{\sqrt{2}\sigma_\lambda})\right]_{K}^{K+K'(n)}$ is a monotonically increasing
function of $n$.
Hence, for a certain constant $b>0$, $P_{Z}(0)\ge b2^nP_{\lambda_{\rm g},0}(0)$.
Let us assume that locked modes have equally enhanced intensities for $t>t_{\rm st}$.
This leads to the signal-to-noise ratio for $t>t_{\rm st}$:
$P_{Y}(t>t_{\rm st})/P_{Z}(t>t_{\rm st}) = P_{Y}(0)/P_{Z}(0)$.
(In case one can assume that only one of $|\varphi_{\lambda,l_\lambda}\rangle$'s in $Y\cup Z$ survives, the ratio of the
probability of finding $|\varphi_{\lambda,l_\lambda}\rangle$ originated from $Y$ and that of finding
$|\varphi_{\lambda,l_\lambda}\rangle$ originated from $Z$ at $t>t_{\rm st}$ is given by the same equation.)
Consider some typical instances for which $d_{\rm g}$ is small and is not clearly dependent on $n$
($d_{\rm g}$ is the multiplicity in the ground level). This is a typical situation because the multiplicity
of $\lambda$ obeys the distribution $\mathcal{N}(0,\sigma_\lambda^2)$ with $\sigma_\lambda=\Theta(\sqrt{n})$ in the
present setting, as we have explained.
It is always possible to choose\footnote{Recall that we are proving the {\em existence} of hard instances.}
the value of $K$ such that all $|\varphi_{\lambda,l_\lambda}\rangle\in Y$ are configurations with at most a
constant number of bits different from one of the ground states.
In this case, $P_{Y}(0)={\rm poly}(n)P_{\lambda_{\rm g},0}(0)$ and thus, for large $n$,
$P_{Y}(t>t_{\rm st})/P_{Z}(t>t_{\rm st})\le {\rm poly}(n) 2^{-n}$.
$\Box$
\begin{remark}\label{rem1}
It is trivial to find a similar proof for the existence of hard instances of the Ising spin
configuration problem for finding a ground level in the coherent computing model.
\end{remark}
By Proposition~\ref{prop1}, it is now easy to prove the following theorem.
\begin{theorem}\label{theo1}
There exists an instance of the NPC Ising spin configuration problem such that a decision
takes $\Omega(2^{n}/{\rm poly}(n))$ time in the coherent computing model when nonzero physical
deviations,\footnotemark[1] $\;\varepsilon_{ij}\in{\bf R}$ for nonzero $J_{ij}$ and
$\kappa_{i}\in{\bf R}$ for nonzero $B_{i}$, are considered. Here, $\varepsilon_{ij}$
($\kappa_{i}$) are assumed to be {\em i.i.d.} random variables with zero mean and a certain standard
deviation $\sigma_\varepsilon$ ($\sigma_\kappa$).
\end{theorem}
{\bf Proof of Theorem~\ref{theo1}}\\
By Proposition~\ref{prop1}, there exists a YES instance such that the probability $p_s$
for a single trial of coherent computing to find $\lambda<K$ is $\le{\rm poly}(n)2^{-n}$.
The success probability after $\tau$ trials is given by $1-(1-p_s)^\tau$. In order to make
this probability larger than a certain constant $c>0$, we need $\tau>\log(1-c)/\log(1-p_s)
= (\log\frac{1}{1-c})/[p_s+\mathcal{O}(p_s^2)]=\Omega(2^{n}/{\rm poly}(n))$.
$\Box$
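The repetition count in the proof can be made concrete; a minimal Python sketch, where the per-trial success probabilities plugged in below, of the form $n^2\,2^{-n}$, are purely illustrative:

```python
import math

def required_trials(p_s, c=0.99):
    """Number of independent trials tau needed so that the overall success
    probability 1 - (1 - p_s)^tau exceeds c, i.e.
    tau > log(1 - c) / log(1 - p_s) ~ log(1/(1-c)) / p_s for small p_s."""
    return math.ceil(math.log(1 - c) / math.log(1 - p_s))

# With p_s <= poly(n) 2^{-n}, the count grows as Omega(2^n / poly(n)).
for n in (20, 30, 40):
    p_s = n**2 * 2.0 ** (-n)   # illustrative poly(n) 2^{-n} success rate
    print(n, required_trials(p_s))
```

Even for these moderate $n$, the required number of trials grows by roughly three orders of magnitude for each additional ten spins.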
\section{Discussion}\label{secdiscussion}
We have theoretically shown a weakness of the coherent computing model for the problem to examine the
existence of a suitably small (large negative) eigenvalue of an Ising spin glass Hamiltonian. As the
number $n$ of spins grows, the desired signal decreases exponentially for certain hard instances because
exponentially many undesired configurations obtain gains in a realistic setting.
Indeed, Yamamoto {\em et al.} made numerical simulations \cite{UTY11,TUY12,YTU12} to examine their
prospect that a desired configuration would be found efficiently in the coherent computing model.
But, in general, the following points should be taken into account whenever a computer simulation of
a physical system is performed.
First, in classical computing, exponentially fine accuracy is achievable by linearly increasing the
register size of a variable or an array size of combined variables. Nevertheless, in physical systems,
noise decreases as $\propto 1/\sqrt{T}$ with $T$ the number of trials or the number of identical systems
according to the well-known central limit theorem. In the field of quantum computing, this has been
well-studied in the context of NMR bulk-ensemble computation at room temperature which suffers from
exponential decrease of signal intensity corresponding to the computation result as the input size grows
(see, {\em e.g.}, \cite{KCL98,SK06}). In the coherent computing model, the ratio of the population of correct
configurations and that of wrong configurations at the steady state should not decrease in a super-polynomial
manner if the model were physically feasible for solving the problem efficiently.
So far, Yamamoto {\em et al.} reported \cite{UTY11,TUY12,YTU12} that each slave laser maintains a sufficiently large
discrepancy between the populations of $|R\rangle$ and $|L\rangle$ at the steady state for some instances with a small
number of spins ($n\le 10$), using a simulation based on rate equations. They also showed their simulation results for
$n=1000$ for a very restricted type of instances such that $J_{ij}$'s take the same value and $B_i$'s for odd
$i$ take the same value and so do those for even $i$. Nevertheless, the populations (in other words, the joint probabilities)
of correct and wrong configurations and how they scale for large $n$ were not reported.
Recently, Wen \cite{Wen12} showed his simulation results for the case where the graph was a two-layer lattice
for $n$ up to $800$. Although it was reported that his simulations of the coherent computer found eigenvalues
lower than those found by a certain semidefinite programming method, the populations of correct and wrong configurations
were not shown. Thus, it is difficult to discuss the power of the coherent computing model on the basis of presently
known simulation results.
Second, the coefficients of a problem Hamiltonian cannot be implemented as they are, in reality. Seemingly
negligible errors in the coefficients might be crucial in complexity analyses for a large input size.
This point has not been considered in conventional simulation studies \cite{TUY12,YTU12,Wen12} of the coherent
computing model.
In the coherent computing model, nonzero $J_{ij}$'s and nonzero $B_i$'s in the Ising spin glass Hamiltonian should
accompany calibration errors and/or thermal fluctuations. In particular, optical instruments usually have
nonnegligible calibration errors \cite{sp250}. As we have written in the proof of Proposition~\ref{prop1},
a well-known application of the central limit theorem for the sum of random variables \cite{Shiryaev,Klenke}
indicates the important observation that the sum of such physical deviations is an increasing function of the number
of spins. This fact has led to our conclusion that the relative population of desired configurations decreases
exponentially in $n$ for certain hard instances.
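The scaling behind this observation can be illustrated with a small simulation. The following sketch is only a toy model, assuming independent Gaussian calibration errors of a fixed scale $\sigma$ on each of the $n(n-1)/2$ couplings; the error model and all numerical values are illustrative assumptions, not taken from the proof itself:

```python
import numpy as np

# Toy illustration (not the model used in the proof above): suppose each of
# the n(n-1)/2 couplings J_ij carries an independent Gaussian calibration
# error of scale sigma.  The spread of the summed deviation then grows like
# sigma * sqrt(number of couplings), as the central limit theorem suggests.
rng = np.random.default_rng(0)
sigma = 0.01                       # assumed per-coupling error scale
spreads = {}
for n in (10, 30, 100):
    m = n * (n - 1) // 2           # number of pairwise couplings
    total = rng.normal(0.0, sigma, size=(1000, m)).sum(axis=1)
    spreads[n] = total.std()       # empirical spread of the total deviation
print(spreads[100] / spreads[10])  # close to sqrt(4950/45), about 10.5
```

The empirical spread grows like $\sigma\sqrt{m}$, in line with the central-limit-theorem argument invoked above.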
The second point is also usually overlooked in computer simulations \cite{F01} of adiabatic quantum computing.
Discussions on the complexity of adiabatic time evolution usually address how much time should be
spent, in light of the minimum energy gap between the ground state and the nearest excited state while
adiabatically changing the Hamiltonian toward its final form. The coefficients in the starting and
the final Hamiltonians are quite often assumed to be given as accurate numbers \cite{F12}. Nevertheless, they
should have certain errors due to imperfect calibrations \cite{sp250} and/or fluctuations in reality,
as we have discussed.
The target state will not appear as a stable state if a nontarget state of the final Hamiltonian becomes a ground
state of the Hamiltonian owing to the errors. A real physical setup for adiabatic quantum computing thus suffers
from the demand for considerably fine tuning of the individual apparatus to implement the desired couplings for large $n$.
So far, $n$ has not been very large in physical implementations \cite{S03,P08,J11} so that this problem has not been
significant. (In addition, even in the setting without errors in the Hamiltonian coefficients, adiabatic quantum
computing tends to suffer from an exponentially decreasing energy gap when random instances of certain NP-hard problems
are tried, according to the numerical analysis by Farhi {\em et al.}~\cite{F12}.)
A possible way to avoid very fine tuning is to use error correction schemes similar to those
for standard circuit-model quantum computing. There have been several studies on error correction
codes \cite{J06} and dynamical decoupling \cite{L08,Qu12} in the context of adiabatic quantum computing.
It is of interest whether similar schemes apply to the coherent computing model.
As for error correction codes, each Pauli operator in an original Hamiltonian should be
encoded to a certain multi-partite coupling term in an encoded Hamiltonian. Thus one needs to find
a scheme to implement such a term in the coherent computing model. It is highly nontrivial to
introduce, {\em e.g.}, a four-partite coupling among slave lasers. Further investigation is needed for
the usability of error correction codes.
Another scheme is dynamical decoupling. This scheme looks effective for suppressing thermal fluctuations
at a glance. Consider the minimum gap between two distinct eigenvalues of a problem Hamiltonian and normalize it with
the maximum gap. This decreases only polynomially in $n$ for any instance of the Ising spin configuration problem by
the definition of the problem. Thus the minimum operation interval of dynamical decoupling required for an effective
noise suppression decreases only polynomially in $n$ according to Eq. (52) of Ref.~\cite{Ng11}. One problem is how
to use this scheme for cancelling calibration errors. In addition, we need to find an implementation of the scheme
such that the scheme itself does not introduce uncontrollable noise. This will be difficult for large $n$ because
imperfections in the decoupling operations probably lead to an argument similar to that of Proposition~\ref{prop1}.
As we have proved, there are hard instances of the NPC Ising spin configuration problem for which one cannot efficiently
achieve a correct decision in the coherent computing model (Theorem~\ref{theo1}). This is a reasonable result in
light of the fact that no known conventional computer model could solve an NP-complete problem within a polynomial
cost. It is still an open problem if an unreasonable computational power is achievable by combining error protection
schemes with the coherent computing model.
\section{Conclusion}\label{secconclusion}
The model of coherent computing has been theoretically investigated from the viewpoint of computational
cost under a realistic setting. It has been proved that there exist hard instances of the NPC Ising spin
configuration problem, which require exponential time for a correct decision in the model.
\subparagraph*{Acknowledgements}
The author would like to thank William J. Munro, Kae Nemoto, and Yoshihisa Yamamoto for helpful
discussions. This work is supported by the Grant-in-Aid for Scientific Research from JSPS
(Grant No. 25871052).
The formation of elementary atoms in particle collisions and decays can give unique information on
the dynamics of the strong interaction. The determination of the pionium atom lifetime~\cite{dirac} allows one to
extract the $\pi\pi$ scattering lengths, whose knowledge is crucial for the verification of Chiral
Perturbation Theory predictions~\cite{colangelo01}. The accuracy of the scattering lengths determined from the nonleptonic decays~\cite{batley09} $K^\pm\to \pi^\pm\pi^0\pi^0$ also depends on effects caused by the possibility of $\pi\pi$ bound state formation~\cite{gevorkyan07,gevorkyan08}. The creation of positronium atoms in $\pi^0$ Dalitz decay~\cite{afanasyev90} or in its photoproduction on an extended target~\cite{nemenov81,gevorkyan02} can give information on the dependence of the interaction on the spin state of the system and on the mechanism of bound state formation.\\
Following the pioneering works of Nemenov~\cite{nemenov73}, which stimulated the search for elementary atoms,
the $\pi\mu$ atom was discovered~\cite{coombes76,aronson86} in the decays of neutral kaons $K_L\to \pi^+\mu^-\nu$.\\
In the present work we point out the importance of investigating $\pi\mu$ atom formation in the decay
\begin{eqnarray}
K^+\to \pi^++\pi^-+\mu^++\nu
\end{eqnarray}
($K_{\mu 4}$ decay). The reason is as follows.
In recent years great efforts have been devoted~\cite{ckm} to the experimental study of the rare decay $K^+\to\pi^+\nu\bar{\nu}$, with the major goal of determining the value of $V_{td}$, which is unambiguously predicted by the theory~\cite{buchalla99, isidori03, buras08}. At present the experiment NA62~\cite{na62} at the CERN SPS has been approved, which plans to collect $\approx 80$ events of this rare decay\footnote{At the moment six events have been reported by the CKM collaboration~\cite{ckm}.}.
Below we calculate the probability of $\pi\mu$ atom formation in the $K_{\mu4}$ decay and show that the branching ratio of atom formation is not much smaller than that of the fundamental process $K^+\to\pi^+\nu\bar{\nu}$. As a result, $\pi\mu$ atom formation can contribute as a background to the basic decay $K^+\to\pi^+\nu\bar{\nu}$ in the relevant kinematical regions of the NA62 experiment.
\section{The decay rate of the $\pi\mu$ atom formation}
To obtain the decay rate of the $\pi\mu$ atom formation in $K_{\mu 4}$ decay
\begin{eqnarray}
K^+\to \pi^++A_{\pi\mu}+\nu
\end{eqnarray}
we start from the well-known~\cite{cabibbo65, pais68} matrix element of the decay (1), written as the product of the lepton and hadron currents
\begin{eqnarray}
M=\frac{G_F}{\sqrt{2}}V_{us}^* j_\lambda J^\lambda=
\frac{G_F}{\sqrt{2}}V_{us}^*\bar{u}(k_1)\gamma_\lambda(1-\gamma_5)v(k_2)(V^\lambda-A^\lambda)
\end{eqnarray}
where the axial $A^\lambda$ and vector $V^\lambda$ hadronic currents are:
\begin{eqnarray}
A^\lambda&=&-\frac{i}{m_K}\left((p_1+p_2)^\lambda F+(p_1-p_2)^\lambda G+(k_1+k_2)^\lambda R\right); \nonumber\\
V^\lambda&=&-\frac{H}{m_K^3}\epsilon^{\lambda\nu\rho\sigma}(k_1+k_2)_\nu (p_1+p_2)_\rho (p_1-p_2)_\sigma
\end{eqnarray}
Here and below $k, p_1, p_2, k_1, k_2$ are the four-momenta of the kaon, the pions, the muon and the neutrino, and
$m_K, m_\pi, m_\mu$ are the relevant masses.\\
Restricting ourselves, as usual, to s and p waves, and assuming the same p-wave phase $\delta_p$ for the different form factors, one has
\begin{eqnarray}
F=F_se^{i\delta_s}+F_pe^{i\delta_p};
\qquad
G=G_pe^{i\delta_p};
\qquad
H=H_pe^{i\delta_p};
\qquad
R=R_pe^{i\delta_p}
\end{eqnarray}
The main goal of the experimental investigations~\cite{rosselet77, pislak03, batley08} is the measurement of the
quantities $F_s, F_p, G_p, H_p, R_p$ and $\delta=\delta_s-\delta_p$ as functions of three
invariant combinations of the pion and lepton momenta, $s_\pi=(p_1+p_2)^2$, $s_l=(k_1+k_2)^2$
and $\Delta=-k(p_1+p_2)$~\cite{pais68}.\\
On the other hand, to form the $\pi\mu$ atom in the decay (1) the negative pion and the muon must have equal velocities. For such kinematics only two independent variables remain, which we choose as $s_\pi, s_l$.\\
Since the binding energy of the ground state of the $\pi\mu$ atom is small~\cite{staffin77}, $\varepsilon= 1.6\,\mathrm{keV}$, the atom is a nonrelativistic system. According to the general rules of quantum mechanics, the amplitude of the decay (2) can be written as the product of the matrix element of the decay (1), taken at equal velocities of the muon and the negative pion, and the value of the Coulomb wave function at the origin
\begin{eqnarray}
M(K^+\to \pi^+A_{\pi\mu}\nu)= \frac{\Psi (r=0)}{\sqrt{2 \mu}}
M( K^+\to \pi^+\pi^-\mu^+\nu)_{v_\pi=v_\mu}
\end{eqnarray}
The square of the Coulomb wave function at the origin, summed over the principal quantum number, is~\cite{aronson86}
\begin{eqnarray}
\mid\Psi(r=0)\mid^2=\sum_{n=1}\mid\Psi_n(r=0)\mid^2=\frac{1.2}{\pi}(\alpha \mu)^3
\end{eqnarray}
with $\alpha=\frac{1}{137}$ the fine structure constant and $\mu=\frac{m_{\pi}m_{\mu}}{m_{\pi}+m_{\mu}}$ the reduced mass.\\
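Both numerical constants quoted here can be checked in a few lines. The sketch below assumes approximate PDG mass values (they are not part of the derivation); it verifies that the factor 1.2 in (7) is the sum $\sum_n 1/n^3 = \zeta(3)$, which arises because $\mid\Psi_n(0)\mid^2$ scales as $1/n^3$, and that the binding energy quoted above follows from $\varepsilon=\mu\alpha^2/2$:

```python
import math

m_pi, m_mu = 139.570, 105.658          # MeV, assumed PDG values
alpha = 1 / 137.036
mu = m_pi * m_mu / (m_pi + m_mu)       # reduced mass, about 60.1 MeV

# The factor 1.2 in (7): sum over principal quantum numbers, sum_n 1/n^3 = zeta(3).
zeta3 = sum(1.0 / n**3 for n in range(1, 100000))
print(round(zeta3, 3))                 # -> 1.202

# Ground-state binding energy quoted in the text: eps = mu * alpha^2 / 2.
eps_keV = mu * 1e3 * alpha**2 / 2
print(round(eps_keV, 1))               # -> 1.6
```

Both printed values reproduce the constants quoted in the text.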
Using the well known rules, we obtain for the decay rate of (2)
\begin{eqnarray}
\Gamma=\frac{1}{(4\pi)^3m_\pi m_K}\mid\Psi(r=0)\mid^2
\int\mid M( K^+\to \pi^+\pi^-\mu^+\nu)_{v_\pi=v_\mu}\mid^2dE_\nu dE_\pi
\end{eqnarray}
The integrations in this expression run over the neutrino energy $E_\nu$ and the positive pion energy $E_\pi$.\\
To calculate the square of the matrix element in (8) we use the fact that the bilinear form of the lepton current,
$t_{\alpha\beta}=j_{\alpha}j_{\beta}^+$, can be written in the well-known form (see e.g.~\cite{okun65})
\begin{eqnarray}
t^{\alpha\beta}=8\left(k_1^\alpha k_2^\beta + k_2^\alpha k_1^\beta-(k_1 k_2)g^{\alpha\beta}
+ i\epsilon^{\alpha\beta\rho\sigma} k_{1\rho} k_{2\sigma}\right)
\end{eqnarray}
This expression has to be contracted with the corresponding bilinear form of the hadronic current, $T_{\alpha\beta}$.
As an example, let us consider the contraction of the lepton tensor (9) with the square of the first term of the axial
hadronic current in (4):
\begin{eqnarray}
\sum t^{\alpha\beta} T_{\alpha\beta}=\frac{8}{m_K^2}\left(2(p_1k_1+p_2k_1)(p_1k_2+p_2k_2)-
(p_1+p_2)^2(k_1k_2) \right)\mid F\mid^2
\end{eqnarray}
Since the muon and the negative pion composing the atom have equal
velocities, we express their momenta through the atom momentum $p_a$ and mass $m_a$, $p_2=\frac{m_\pi}{m_a}p_a$, $k_1=\frac{m_\mu}{m_a}p_a$, and introduce the following Lorentz invariant combinations
\begin{eqnarray}
q_1&=&2p_1k_2=m_K^2+m_a^2-m_\pi^2-2m_KE_a\nonumber\\
q_2&=&2p_1p_a=m_K^2-m_a^2-m_\pi^2-2m_KE_\nu \nonumber\\
q_3&=&2p_ak_2=m_K^2-m_a^2+m_\pi^2-2m_KE_\pi
\end{eqnarray}
As the atom energy in the kaon rest frame is $E_a=m_K-E_\pi-E_\nu$, the decay (2) is described by two independent variables, which we take to be the positive pion energy $E_\pi$ and the neutrino energy $E_\nu$.\\
Expression (10) can be rewritten in terms of the above invariants:
\begin{eqnarray}
\sum t^{\alpha\beta} T_{\alpha\beta}=\frac{4m_{\mu}}{m_am_K^2}q_1\left(q_2+2m_\pi m_a\right)\mid F\mid^2
\end{eqnarray}
Calculating in the same way all the terms in the contraction of the leptonic tensor with the squares of the axial and vector hadronic currents, we obtain for the atom formation decay rate
\begin{eqnarray}
\Gamma(K^+\to \pi^+A_{\pi\mu}\nu)&=&\frac{G_F^2V_{us}^2}{m_\pi (4\pi m_K)^3} \frac{1.2\alpha^3 \mu^3}{\pi}\int \Phi(E_\pi,E_\nu)dE_\pi dE_\nu\nonumber\\
\Phi(E_\pi,E_\nu)&=&q_1(q_2 + 2m_\pi m_a) \mid F \mid^2+q_1(q_2 - 2m_\pi m_a)\mid G \mid^2 +
m_\mu^2q_3\mid R\mid^2\nonumber\\ &+&2 (q_1q_2 - 2m_\pi^2 q_3)Re( FG^*) +
2m_\mu (m_aq_1 + m_\pi q_3)Re( FR^*) \nonumber\\
&+& 2m_\mu(m_aq_1 - m_\pi q_3) Re( RG^*) +\frac{m_\pi^2}{m_a^2}\left(4E_\nu E_\pi q_1-q_1^2-4m_\pi^2E_\nu^2\right)\nonumber\\
&\times&\left(\frac{q_3}{m_\pi^2}\mid H\mid^2-2\frac{m_a}{m_\pi}Re(GH^*+FH^*)\right)
\end{eqnarray}
The integration in this expression runs within the limits
\begin{eqnarray}
\frac{m_K^2+m_{\pi}^2-m_a^2-2m_KE_\pi}{2(m_K-E_\pi+\sqrt{E_\pi^2-m_\pi^2})}\leq &E_{\nu}&\leq
\frac{m_K^2+m_{\pi}^2-m_a^2-2m_KE_\pi}{2(m_K-E_\pi-\sqrt{E_\pi^2-m_{\pi}^2})}\nonumber\\
m_{\pi}\leq &E_\pi &\leq \frac{m_K^2+m_\pi^2-m_a^2}{2m_K}
\end{eqnarray}
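The limits (14) can be checked numerically with a short sketch. The mass values below are assumed (approximately the PDG ones) and the keV-scale binding energy is neglected in $m_a$:

```python
import math

m_K, m_pi, m_mu = 493.677, 139.570, 105.658   # MeV, assumed PDG values
m_a = m_pi + m_mu                             # atom mass, binding energy neglected

E_pi_max = (m_K**2 + m_pi**2 - m_a**2) / (2 * m_K)
print(round(E_pi_max, 1))                     # upper end of the E_pi range, ~205.7 MeV

def E_nu_limits(E_pi):
    num = m_K**2 + m_pi**2 - m_a**2 - 2 * m_K * E_pi
    p_pi = math.sqrt(E_pi**2 - m_pi**2)
    return (num / (2 * (m_K - E_pi + p_pi)),
            num / (2 * (m_K - E_pi - p_pi)))

# Inside the physical region the lower limit never exceeds the upper one.
for k in range(1, 100):
    E_pi = m_pi + (E_pi_max - m_pi) * k / 100.0
    lo, hi = E_nu_limits(E_pi)
    assert 0.0 <= lo <= hi
```

The two limits coincide at $E_\pi=m_\pi$ (where the pion momentum vanishes) and both vanish at the upper end of the $E_\pi$ range.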
Expression (13) is the main result of the present work. It allows one to calculate not only the total rate of
atom formation in the $K_{\mu4}$ decay, but also the differential decay rate $\frac{d\Gamma}{dE_\pi}$,
whose knowledge is important for the estimation of the background to the basic decay $K^+\to \pi^+\nu\bar{\nu}$.
\section{Numerical analysis}
To calculate the atom formation decay rate from expression (13) one has to know the hadronic form factors.
Since the hadronic form factors in the $K_{\mu4}$ and $K_{e4}$ decays are the same, we take
for the three form factors F, G and H the standard parametrization~\cite{rosselet77, pislak03, batley08} with
parameters\footnote{The precision of the experimental data~\cite{batley08} is better than that of ~\cite{pislak03}, but unfortunately only relative parameters determining the form factors are cited.} from~\cite{pislak03}.\\
The axial hadronic form factor R cannot be extracted from the experimental data on the $K_{e4}$ decay\footnote{The term with R in the $K_{e4}$ decay rate is proportional to the square of the electron mass and can be
neglected.}. For this quantity we use the theoretical prediction~\cite{knecht93} $R=\frac{2}{3}F$.
Substituting this parametrization in (13) and using the value of the $K_{\mu4}$ decay rate from~\cite{pdg}, we obtain for the atom formation probability in the $K_{\mu4}$ decay
$\Gamma(K^+\to \pi^++A_{\pi\mu}+\nu)/\Gamma(K^+\to \pi^++\pi^-+\mu^++\nu)\approx 3.7\times 10^{-6}$.
This probability should be compared with the probability of $\pi\mu$ atom creation in the $K_{\mu3}$ decay~\cite{aronson86}, $\sim 4\times 10^{-7}$, and of $\pi\pi$ atom formation in the nonleptonic decay~\cite{silagadze94} $K^+\to \pi^+\pi^+\pi^-$, $\sim 8\times 10^{-6}$.\\
As mentioned above, atom formation in the $K_{\mu4}$ decay can serve as a background to the rare decay $K^+\to \pi^+\nu\bar{\nu}$ in the relevant kinematical region. The Standard Model predicts the branching ratio (see e.g.~\cite{brod08}) $Br( K^+\to \pi^+\nu\bar{\nu})\approx (0.85\pm 0.07)\times 10^{-10}$, whereas the branching ratio for $\pi\mu$ atom formation considered in the present work turns out to be $Br( K^+\to \pi^+A_{\pi\mu}\nu)\approx 0.5\times 10^{-10}$.
Thus the branching ratio of the decay (2) is comparable with that of the basic decay $K^+\to \pi^+\nu\bar{\nu}$, and it therefore has to be considered as a possible background to this decay.\footnote{The corresponding analysis will be presented elsewhere.}\\
The authors are grateful to V. Kekelidze, D. Madigozhin and Yu. Potrebenikov for their constant support and useful
discussions.
\label{sec:introduction}
Following this brief introduction,
section \ref{sec:methods} reviews in general terms
the techniques used for echo mapping of active galaxies.
Sections \ref{sec:geometry}, \ref{sec:kinematics}, and \ref{sec:conditions}
then discuss the use of echo mapping experiments to probe
the geometry, the kinematics, and the physical conditions
in Seyfert 1 galaxies.
Section \ref{sec:h0} considers two direct methods based on
echo mapping to determine redshift-independent distances,
and hence cosmological parameters $H_0$ and $q_0$.
\subsection{The Black Hole Accretion Disc Model for AGNs}
The standard model of an Active Galactic Nucleus (AGN)
envisions a supermassive ($10^{6-9}M_\odot$) black hole
in the core of a galaxy.
The black hole is fed gas from an accretion disc.
Around the disc is a geometrically thick torus of
dust and molecular gas.
The disc and torus receive gas from the galaxy's interstellar medium,
and from various processes (tides, bow shocks, winds, irradiation)
that strip material from the stars that pass through this region.
The ultra-violet and optical spectra of quasars,
and their lower-luminosity cousins in the nuclei of Seyfert galaxies,
have two types of emission lines:
narrow forbidden and permitted emission lines ($v \raisebox{-.5ex}{$\;\stackrel{<}{\sim}\;$} 1000~$km~s$^{-1}$),
and broad permitted emission lines ($v \raisebox{-.5ex}{$\;\stackrel{>}{\sim}\;$} 10,000~$km~s$^{-1}$).
The narrow lines are constant; the continuum and the broad lines vary.
The narrow lines arise from lower-density gas at larger
distances from the nucleus.
Seyfert nuclei come in two types thought to represent
different viewing angles.
Seyfert 1 spectra exhibit both broad and narrow emission lines,
while in Seyfert 2 spectra the broad lines are absent
or visible only in a faint linearly polarized component of the spectrum.
The current consensus is that the thick dusty torus blocks
our view of the BLR when viewed at $i\gtsim60^\circ$, but
some BLR light scatters toward us after rising far enough above
the plane to clear the torus.
The Narrow Line Region (NLR) in nearby Seyfert galaxies
is resolved by {\it HST} imaging studies
with resolutions of $\sim 0.1$~arcseconds.
The NLR typically has a clumpy bi-conical morphology,
aligned with the axis of radio jets, and suggesting
broadly-collimated ionization cones emerging from the nucleus,
perhaps collimated by the dusty torus.
The BLR and continuum production regions are unresolved by {\it HST}.
Echo mapping experiments use light travel time delays
revealed by variability on 1-100 day timescales
to probe these regions on $\sim 10^{-6}$ arcsecond scales.
\subsection{Photo-Ionization Models}
Just how large is the BLR?
If the emission lines are powered by photo-ionization,
then we may employ a model to estimate the size of the
photo-ionized region.
Photo-ionization models such as CLOUDY
(Ferland, et al.~1998)
consider the energy and ionization balance inside a 1-dimensional
gas cloud parameterized by hydrogen number density $n_{\sc H}$,
column density $N_{\sc H}$, and distance $R$ from the source of
ionizing radiation $L_\lambda$.
The calculated emission-line spectrum emerging from such a
gas cloud depends primarily on the ionization parameter,
\begin{equation}
U = \frac{\rm ionizing\ photons}{\rm target\ atoms}
= \frac{Q}{4\pi R^2 c n_{\sc H}} ,
\end{equation}
where the luminosity of hydrogen-ionizing photons is
\begin{equation}
Q = \int_{0}^{\lambda_0} \frac{\lambda L_\lambda d\lambda}{h c} ,
\end{equation}
with $L_\lambda$ the luminosity spectrum emitted by the nucleus.
Re-arranging this equation yields
\begin{equation}
R = \left( \frac{Q}{4\pi U c n_{\sc H} }\right)^{1/2} .
\end{equation}
By comparing the flux ratios observed for the broad emission lines
in the spectra of AGNs
against those predicted by the single-cloud photo-ionization models,
typical parameters in the photo-ionized gas are found to be
$U \sim 10^{-2}$ and $n_{\sc H} \sim 10^{8-10}$cm$^{-3}$.
With these conditions, the light travel time
across the radius of the ionized zone is
\begin{equation}
\label{eqn:blrsize}
\frac{R}{c} \sim 200 {\rm d}
\left( \frac{Q}{10^{54}{\rm s}^{-1} } \right)^{1/2}
\left( \frac{U}{10^{-2} } \right)^{-1/2}
\left( \frac{n_{\sc H}}{10^{9}{\rm cm}^{-3} } \right)^{-1/2} ,
\end{equation}
where $Q\sim 10^{54}$s$^{-1}(H_0/100)^{-2}$ is appropriate for
the Seyfert 1 nucleus of NGC~5548.
On the basis of such predictions, astronomers searched
for reverberation effects on timescales of months
before eventually realizing that reverberation was occurring
on much shorter timescales.
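The scaling in equation~\ref{eqn:blrsize} is easy to evaluate directly; the parameter values below are simply the nominal ones quoted in the text:

```python
import math

# Light-crossing time of the photo-ionized zone, eq. (4), for the nominal
# NGC 5548-like parameters quoted above.
Q = 1e54          # ionizing photons per second
U = 1e-2          # ionization parameter
n_H = 1e9         # hydrogen number density, cm^-3
c = 2.998e10      # speed of light, cm/s

R = math.sqrt(Q / (4 * math.pi * U * c * n_H))    # cm
tau_days = R / c / 86400.0
print(round(tau_days))                            # -> 199
```

The printed value, about 199 days, reproduces the $\sim 200$~d scale of equation~\ref{eqn:blrsize}.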
\section{Echo Mapping Methods}
\label{sec:methods}
\begin{figure}
\plotfiddle{horne_fig1.eps}{5cm}{-90}{45}{45}{-200}{240}
\caption{
Ionizing photons from a compact source are reprocessed
by gas clouds in a thin spherical shell.
A distant observer sees the reprocessed photons arrive
with a time delay ranging from 0, for the near edge of the shell,
to $2R/c$, for the far edge.
The iso-delay paraboloids slice the shell into zones with
areas proportional to the range of delays.
}
\label{fig:shell}
\end{figure}
\subsection{Reverberation}
A compact variable source of ionizing radiation
launches spherical waves of heating or cooling that
expand at the speed of light, triggering
changes in the radiation emitted by surrounding gas
clouds.
The light travel time from nucleus to reprocessing site to observer
is longer than that for the direct path from nucleus to observer.
Thus a distant observer sees the reprocessed radiation arrive
with a time delay reflecting its position within the source.
The time delay for a gas cloud located at a distance $R$
from the nucleus is
\begin{equation}
\tau = \frac{R}{c} ( 1 + \cos{\theta} )\ ,
\end{equation}
where the angle $\theta$ is measured from 0 for a cloud on the
far side of the nucleus to $180^\circ$
on the line of sight between the nucleus and the observer.
The delay is 0 for gas on the line of sight between us and
the nucleus, and $2R/c$ for gas directly behind the nucleus
(Fig.~\ref{fig:shell}).
The iso-delay contours are concentric paraboloids wrapped
around the line of sight.
Such time delays are the basis of echo mapping techniques.
The validity of this basic picture is well supported by the results of
intensive campaigns designed to monitor the variable optical,
ultra-violet, and X-ray spectra of Seyfert 1 galaxies.
Notable among these are the AGN~Watch campaigns
(\verb+http://www.astronomy.ohio-state.edu/~agnwatch/+).
In Seyfert 1 galaxies,
the variations observed in continuum light are practically simultaneous
at all wavelengths throughout the ultra-violet and optical,
but the corresponding emission line variations
exhibit time delays of 1 to 100 days,
probing the photo-ionized gas 1 to 100 light days from the nucleus.
This corresponds to micro-arcsecond scales in nearby Seyfert galaxies.
\subsection{Linear and Non-Linear Reprocessing Models}
The ionizing radiation that drives the line emission
includes X-ray and EUV light that cannot be directly observed.
However, quite similar variations are seen in continuum
light curves at ultra-violet and optical wavelengths.
These observable continuum light curves provide suitable surrogates
for the unobservable ionizing radiation.
The reprocessing time (hours) is small compared
with light travel time (days) in AGNs.
In the simplest reprocessing model,
the line emission $L(t)$ is taken to be proportional to
the continuum emission $C(t-\tau)$, with a time delay
$\tau$ arising from light travel time and from local
reprocessing time, thus
\begin{equation}
L(t) = \Psi C(t-\tau)\ .
\end{equation}
Since the reprocessing sites span a range of time delays,
the line light curve $L(t)$ is a sum of many time-delayed copies
of the continuum light curve $C(t)$,
\begin{equation}
L(t) = \int_0^\infty \Psi(\tau)\ C(t-\tau)\ d\tau .
\end{equation}
This convolution integral introduces $\Psi(\tau)$,
the ``transfer function'' or ``convolution kernel'' or ``delay map'',
to describe the strength of the reprocessed light
that arrives with various time delays.
Since light travels at a constant speed, the surfaces of constant
time delay are ellipsoids with one focus at
the nucleus and the other at the observer.
Near the nucleus, these ellipsoids are effectively paraboloids
(Fig.~\ref{fig:shell}).
$\Psi(\tau)$ is in effect a 1-dimensional map
that slices up the emission-line gas,
revealing how much gas is present between each of the
iso-delay paraboloids.
The aim of echo mapping is to recover $\Psi(\tau)$ from
measurements of $L(t)$ and $C(t)$ made at specific times $t_i$.
To fit such observations,
the linear model above is too simple in at least two respects.
The first problem is additional sources of light contributing
to the observed continuum and emission-line fluxes.
Examples are background starlight, and narrow emission lines.
When these sources do not vary on human timescales,
they simply add constants to $L(t)$ and $C(t)$,
\begin{equation}
\label{eqn:linearized}
L(t) = \bar{L} + \int_0^\infty
\Psi(\tau)\
\left[ C(t-\tau) - \bar{C} \right]
d\tau .
\end{equation}
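Numerically, this linearized model is just a discrete convolution. The sketch below is illustrative only: the sampling, the random-walk continuum, and the Gaussian delay map are all invented for the example, not derived from any data set:

```python
import numpy as np

dt = 1.0                                           # days per sample
t = np.arange(0, 200, dt)
rng = np.random.default_rng(1)
C = 10.0 + np.cumsum(rng.normal(0, 0.3, t.size))   # toy random-walk continuum

tau = np.arange(0, 30, dt)
Psi = np.exp(-0.5 * ((tau - 10.0) / 3.0) ** 2)     # toy delay map peaking at 10 d
Psi /= Psi.sum() * dt                              # normalise to unit response

L_bar = 5.0
# L(t) = L_bar + integral of Psi(tau) [C(t - tau) - C_bar] d tau, discretised:
L = L_bar + np.convolve(C - C.mean(), Psi, mode="full")[: t.size] * dt
print(L.size)                                      # -> 200, one line flux per epoch
```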
\begin{figure}
\plotfiddle{horne_fig2.eps}{5cm}{-90}{45}{45}{-180}{230}
\caption{
The emission-line response from gas clouds
exposed to different levels of photo-ionizing continuum radiation.
Left: The initially linear response saturates
and may even decrease as the cloud becomes more fully ionized.
Right: When gas clouds are present at many radii,
the ionization zone can simply expand, resulting in
an ensemble response that increases monotonically
and is less non-linear than that from a single cloud.
The tangent line is then a useful approximation.
}
\label{fig:nonlin}
\end{figure}
A second problem with the linear reprocessing model is that
the reprocessed emission is more generally a non-linear function
of the ionizing radiation.
For example, when a cloud is only partially ionized, the line
emission increases with the ionizing radiation,
but as the cloud becomes fully ionized
the line emission may saturate or even decrease
with further increases in ionizing radiation.
For single clouds we therefore expect the response function $L(C)$
to be highly non-linear (Fig.~\ref{fig:nonlin}).
The marginal response $\partial L / \partial C$
is generally less than the mean response $L/C$,
and may become negative when an increase in ionizing radiation
reduces the line emission.
The total response is of course a sum of responses
from many different gas clouds, those closer to the nucleus
being more fully ionized than those farther away.
If clouds are present at many radii, so that zones of constant
ionization parameter can simply expand, then the
effect of averaging over many clouds is to produce a total response
that is monotonically increasing and less strongly non-linear.
Such a response may be adequately approximated by a tangent line,
\begin{equation}
L(C) = \bar{L} + \frac{\partial L}{\partial C}
\left( C - \bar{C} \right)\ ,
\end{equation}
for ionizing radiation changes in some range above and below a mean level
(Fig.~\ref{fig:nonlin}).
Observations support the use of this approximation --
plots of observed line fluxes against continuum fluxes
show roughly linear relationships,
with $L/C > \partial L / \partial C$ so that
extrapolation to zero continuum flux leaves a positive
residual line flux.
For these reasons, it is usually appropriate in echo mapping
to adopt the linearized reprocessing model,
equation~\ref{eqn:linearized}, with
the ``background'' fluxes, $\bar{L}$ for the line
and $\bar{C}$ for the continuum, set somewhere in the
range of values spanned by the observations.
In this model, the delay map $\Psi(\tau)$ senses each gas cloud
in proportion to its marginal response to a change in ionizing radiation.
The roughly linear responses from partially-ionized clouds
are fully registered,
while the saturated or diminished responses of more fully ionized clouds
have a reduced effect.
\subsection{Cross-Correlation Analyses}
\begin{figure}
\plotfiddle{horne_fig3.eps}{15cm}{0}{60}{60}{-200}{-20}
\caption{
The left-hand columns show light curves of NGC~7469 obtained
with {\it IUE} during an intensive AGN~Watch monitoring campaign
during the summer of 1996.
The right-hand column shows the result of cross-correlating
the light curve immediately to the left with the
1315~\AA\ light curve at the top of the left column;
the panel at the top of the right column thus shows
the 1315~\AA\ continuum auto-correlation function.
Data from Wanders, et~al.~(1997).
}
\label{fig:ccf}
\end{figure}
With reasonably complete light curves, it is usually obvious that
the highs and lows in the line light curves occur later
than those in the continuum.
To quantify this time delay, or lag, a common practice is
to cross-correlate the line and continuum light curves.
The cross-correlation function (CCF) is
\begin{equation}
L \star C (\tau) = \int L(t)\ C(t-\tau)\ dt .
\end{equation}
Several methods have been developed and refined
to compute CCFs from noisy measurements available only
at discrete unevenly-spaced times,
either by interpolating the data
(Gaskell \& Sparke~1986, White \& Peterson~1994)
or binning the CCF (Edelson \& Krolik 1988, Alexander 1997).
The resulting CCFs generally have a peak
shifted away from zero in a direction indicating that changes in
the emission lines lag behind those in the continuum
(e.g.\ Fig.~\ref{fig:ccf}).
The CCF lag -- $\tau_{\sc CCF}$ --
at which $L \star C$ has its maximum value, serves to
quantify roughly the size of the emission-line region.
Since a range of time delays is present, described by $\Psi(\tau)$,
what delay is measured by the cross-correlation peak?
Since $L(t)$ is itself a convolution between $C(t)$ and $\Psi(\tau)$,
and since convolution is a linear operation,
we have
\begin{equation}
L \star C = ( \Psi \star C ) \star C = \Psi \star ( C \star C ).
\end{equation}
The CCF is therefore a convolution
of the delay map with the continuum auto-correlation function (ACF),
\begin{equation}
C \star C(\tau) = \int C(t)\ C(t-\tau)\ dt .
\end{equation}
The ACF is symmetric in $\tau$, and thus always
has a peak at $\tau=0$.
With rapid continuum variations, the ACF is sharp,
and the CCF peak should be close to
the strongest peak of $\Psi(\tau)$.
This tends to favour short delays from the inner regions of the BLR
(Perez, et~al. 1992a).
When continuum variations are slow, however,
the ACF is broad, smearing out
sharp peaks in $\Psi(\tau)$, and shifting $\tau_{\sc CCF}$
toward the centroid of $\Psi(\tau)$.
Thus the cross-correlation peak depends not only on the
delay structure in $\Psi(\tau)$, but also on the
character of the continuum variations.
Different observing campaigns may therefore yield different lags
even when the underlying delay map is the same.
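An interpolation cross-correlation estimate in the spirit of Gaskell \& Sparke (1986) can be sketched in a few lines. This is a simplified illustration, not their exact implementation, and the light curves are synthetic:

```python
import numpy as np

def iccf_lag(t_c, C, t_l, L, lags):
    # For each trial lag, evaluate the continuum (by linear interpolation)
    # at the line epochs shifted back by the lag, and correlate with the line.
    r = [np.corrcoef(np.interp(t_l - lag, t_c, C), L)[0, 1] for lag in lags]
    return lags[int(np.argmax(r))]

# Synthetic check: a line light curve that is the continuum delayed by 8 days
# should yield a CCF peak at a lag of 8 days.
t = np.linspace(0, 100, 201)
C = np.sin(t / 7.0) + 0.3 * np.sin(t / 2.3)
L = np.interp(t - 8.0, t, C)
lag = iccf_lag(t, C, t, L, np.arange(0.0, 20.0, 0.5))
print(lag)                                   # -> 8.0
```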
\subsection{Three Echo Mapping Methods}
More refined echo mapping analyses aim to recover
the delay map $\Psi(\tau)$ rather than just a characteristic time lag.
These echo mapping methods generally require more complete data
than the cross-correlation analyses.
Three practical methods have been developed.
The {\bf Regularized Linear Inversion} method (RLI)
(Vio, et~al.~1994, Krolik \& Done~1995)
notes that the convolution integral,
\begin{equation}
L(t) = \int C(t-\tau)\ \Psi(\tau)\ d\tau\ ,
\end{equation}
becomes a matrix equation,
\begin{equation}
L(t_i) = \sum_j C(t_i-\tau_j)\ \Psi(\tau_j)\ d\tau\ ,
\end{equation}
when the times $t_i$ are evenly spaced.
If the times are not evenly spaced, interpolate.
If the matrix $C_{ij} = C(t_i-\tau_j)$ can be inverted,
solving the matrix equation yields
\begin{equation}
\Psi(\tau_k) = \sum_i C^{-1}_{ki} L(t_i) / d\tau .
\end{equation}
If the continuum variations are unsuitable,
the matrix has small eigenvalues and the inversion is unstable,
strongly amplifying noise in the measurements $L(t_i)$ and $C(t_i)$.
This problem is treated by altering the matrix to reduce the
influence of small eigenvalues,
thereby reducing noise but blurring the delay map.
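The procedure can be sketched numerically as follows; the continuum, the delay map, and the ridge (Tikhonov) penalty standing in for the eigenvalue treatment described above are all illustrative choices, not the published implementations:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 120)
C = np.cumsum(rng.normal(0, 1, t.size))        # toy random-walk continuum
tau = np.arange(0, 25)
Psi_true = np.exp(-0.5 * ((tau - 8.0) / 2.5) ** 2)

# Matrix form of the convolution: M[i, j] = C(t_i - tau_j), zero before t=0.
M = np.zeros((t.size, tau.size))
for j in range(tau.size):
    M[j:, j] = C[: t.size - j]

L = M @ Psi_true + rng.normal(0, 0.1, t.size)  # noisy line light curve

lam = 1.0                                      # regularization strength (assumed)
Psi_hat = np.linalg.solve(M.T @ M + lam * np.eye(tau.size), M.T @ L)
print(tau[np.argmax(Psi_hat)])                 # peak delay close to the true 8 d
```

Increasing the penalty `lam` further suppresses noise at the cost of blurring the recovered map, as described above.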
The related method of {\bf Subtractive Optimally-Localized Averages} (SOLA)
(Pijpers \& Wanders 1994) aims to estimate the delay map
as weighted averages of the emission line measurements,
\begin{equation}
\hat{\Psi}(\tau) = \int K(\tau,t)\ L(t)\ dt.
\end{equation}
Since $L= C\star \Psi$, we may write this as
\begin{equation}
\hat{\Psi}(\tau) = \int K(\tau,t) \int C(t-s)\ \Psi(s)\ ds\ dt\ .
\end{equation}
The estimate $\hat{\Psi}$ is therefore a blurred version of
the true delay map $\Psi$.
With suitable continuum variations,
the weights $K(\tau,t)$ can be chosen to make the blur kernel
\begin{equation}
b(\tau,s) = \int K(\tau,t)\ C(t-s)\ dt
\end{equation}
resemble a narrow Gaussian of width $\Delta$ centred
at $s = \tau$.
The parameter $\Delta$ then controls the trade-off between noise
and resolution in reconstructing $\Psi(\tau)$.
These direct inversion methods work best when the observed
light curves detect continuum variations on a wide range of timescales.
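In a discretized sketch, one simple way to obtain such weights (a hypothetical least-squares choice, not necessarily the construction of Pijpers \& Wanders) is to ask for $K C \approx T$, where the rows of $T$ are the target Gaussians, and solve with a pseudoinverse:

```python
import numpy as np

def sola_weights(C, taus, width):
    """Find SOLA weights K so that the blur kernel b = K @ C is close
    (in least squares, via the pseudoinverse) to a narrow Gaussian of
    the given width centred at s = tau, for each delay tau."""
    T = np.exp(-0.5 * ((taus[:, None] - taus[None, :]) / width) ** 2)
    T /= T.sum(axis=1, keepdims=True)      # target blur-kernel rows
    return T @ np.linalg.pinv(C)           # K(tau, t)

# synthetic, noiseless demonstration (hypothetical values)
rng = np.random.default_rng(0)
c = np.cumsum(rng.normal(size=200))
i = np.arange(100)[:, None] - np.arange(20)[None, :]
C = c[100 + i]                             # C_ts = C(t - s)
taus = np.arange(20.0)
K = sola_weights(C, taus, width=1.0)
b = K @ C                                  # achieved blur kernel
```

When the continuum matrix has full column rank the achieved blur kernel matches the Gaussian target; with poorer continuum variations only a wider $\Delta$ is attainable.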
The {\bf Maximum Entropy Method} (MEM)
(Horne, Welsh \& Peterson 1991, Horne~1994)
is a very general fitting method
allowing the use of any linear
or non-linear reverberation model.
As an extension of maximum likelihood techniques,
MEM employs a ``badness of fit'' statistic, $\chi^2$,
to judge whether the echo model being considered
achieves a satisfactory fit to the data.
The requirement $\chi^2/N \sim 1 \pm \sqrt{2/N}$,
where $N$ is the number of continuum and line measurements,
ensures that the model fits as well as is
warranted by the error bars on the data points,
without over-fitting to noise.
MEM fits are required also to maximize the
`entropy', which is designed to measure the `simplicity' of the map.
For positive maps $p_i>0$, the entropy
\begin{equation}
S = \sum_i \left[\, p_i - q_i - p_i \ln{(p_i/q_i)} \,\right]
\end{equation}
is maximized when $p_i = q_i$.
MEM thus steers map $p_i$ toward default values $q_i$.
With the default map set to
\begin{equation}
q_i = \left( p_{i-1} p_{i+1} \right)^{1/2},
\end{equation}
the entropy steers each pixel toward its neighbors,
and MEM then finds the `smoothest' positive map that fits the data.
For further technical details see Horne (1994).
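The entropy and the geometric-mean default are simple to write down explicitly; the sketch below (illustrative only, with arbitrary end-pixel handling) just demonstrates their defining properties:

```python
import numpy as np

def entropy(p, q):
    """MEM entropy S = sum_i [ p_i - q_i - p_i ln(p_i/q_i) ];
    S = 0 when p = q and S < 0 otherwise, so maximizing S steers
    the map p toward the default q."""
    return np.sum(p - q - p * np.log(p / q))

def geometric_default(p):
    """Default map q_i = sqrt(p_{i-1} p_{i+1}), which makes the
    entropy favour maps that vary smoothly from pixel to pixel."""
    q = np.sqrt(np.roll(p, 1) * np.roll(p, -1))
    q[0], q[-1] = p[0], p[-1]    # leave the end pixels unconstrained
    return q

p = np.array([1.0, 2.0, 4.0, 8.0])    # a geometric sequence
q = geometric_default(p)              # equals p: maximal smoothness
```

A geometric sequence is its own geometric default, so it attains the maximal entropy $S=0$; any departure from the default makes $S$ negative.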
\begin{figure}
\plotfiddle{horne_fig4.eps}{6cm}{-90}{45}{45}{-180}{252}
\caption{ Simulation test of the maximum entropy method
showing the recovery of a delay map (top left) from
data points sampling an erratically varying continuum light
curve (bottom) and the corresponding delay-smeared
emission-line light curve (top right).
The reconstructed delay map closely resembles
the true map $\Psi(\tau) \propto \tau e^{-\tau}$.
}
\label{fig:faketest}
\end{figure}
Fig.~\ref{fig:faketest} illustrates recovery of a delay map
from a MEM fit to fake light curves.
The lower panel shows an erratically varying continuum light curve.
The line light curve in the upper panel is smoother and has
time-delayed peaks.
A smooth continuum light curve $C(t)$ threads through the data points,
and extrapolates to earlier times.
Convolving this light curve with the delay map $\Psi(\tau)$, shown in the
upper left panel, gives the line light curve $L(t)$,
fitting the data points in the upper right panel.
Dashed lines give the backgrounds $\bar{L}$ and $\bar{C}$.
The MEM fit with $\chi^2/N=1$ adjusts $C(t)$, $\Psi(\tau)$, and $\bar{L}$
to fit the data points while keeping $C(t)$ and $\Psi(\tau)$ as
smooth as possible.
In this simulation test, the true transfer function,
$\Psi(\tau) \propto \tau e^{-\tau}$, is accurately recovered
from the data.
The fake dataset represents the type of data that could be obtained
with daily sampling over a baseline of 1 year.
While this is rather better than has been achieved so far,
future experiments (Kronos, Robonet) specifically designed for
long-term monitoring will make this simulation more relevant.
\section{Mapping the Geometry of Emission-Line Regions}
\label{sec:geometry}
\subsection{ Spherical Shells }
\begin{figure}
\plotfiddle{horne_fig5.eps}{6cm}{-90}{45}{45}{-180}{230}
\caption{
Delay maps for spherical geometry.
Left: When the line emission is isotropic,
the contribution from each spherical shell is
constant from 0 to $2R/c$.
The total delay map, summing many spherical shells, must
decrease monotonically.
Right: When the line emission is directed inward, toward the nucleus,
the response at small delays is reduced, so that each shell's
contribution increases with $\tau$ between 0 and $2R/c$.
The total delay map then rises to a peak away from zero.
}
\label{fig:psi}
\end{figure}
Having shown that we can recover the delay map $\Psi(\tau)$,
given suitable data, how may we interpret it?
This is relatively straightforward if the geometry is
spherically symmetric.
Consider a thin spherical shell that is irradiated by a brief flash
of ionizing radiation from a source located at the shell's centre.
The flash reaches every point on the shell after a time $R/c$.
Since recombination times are short,
each point responds by emitting a brief flash of recombination radiation.
A distant observer sees first the flash of ionizing radiation,
and then the response of reprocessed light from the shell
arriving with a range of time delays.
The time delay is 0 for the near edge of the shell,
and $2R/c$ on the far edge of the shell.
The iso-delay paraboloids
slice up the spherical shell
into zones with areas proportional to the range of time delay
(Fig.~\ref{fig:shell}).
The response from the spherical shell is therefore
a boxcar function,
constant between the delay limits 0 and $2R/c$
(Fig.~\ref{fig:psi}).
Any spherically symmetric geometry is just
a nested set of concentric spherical shells.
The delay map for a spherical geometry is therefore
a sum of boxcar functions (Fig.~\ref{fig:psi}).
Since all the boxcars begin at delay 0,
the delay map must peak at delay 0,
and decrease monotonically thereafter.
The contribution from the shell of radius $R$
may be identified from the slope $\partial \Psi / \partial \tau $
evaluated at the appropriate time delay $\tau = 2R/c$.
(The case of anisotropic emission is discussed
in Section~\ref{sec:aniso} below.)
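The construction of the delay map from nested shells is easily sketched numerically (an illustrative sketch with $c=1$, so radii are in light units, and arbitrary shell weights):

```python
import numpy as np

def delay_map_spherical(radii, weights, taus):
    """Delay map for nested thin shells with isotropic line emission.
    Each shell of radius R contributes a unit-area boxcar over delays
    0 <= tau <= 2R (c = 1), so the summed map peaks at tau = 0 and
    declines monotonically, dropping at tau = 2R for each shell."""
    psi = np.zeros_like(taus)
    for R, w in zip(radii, weights):
        psi += w * (taus <= 2.0 * R) / (2.0 * R)
    return psi

taus = np.linspace(0.0, 30.0, 301)
psi = delay_map_spherical(radii=[5.0, 10.0], weights=[1.0, 1.0],
                          taus=taus)
```

The summed map is largest at zero delay and never increases, and each shell's weight can be read off from the drop in $\Psi$ at $\tau = 2R/c$.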
\subsection{The BLR is Smaller than Expected}
Fig.~\ref{fig:hbmap} shows a maximum entropy fit
of the linearized echo model of Eqn.~\ref{eqn:linearized}
to \ifmmode {\rm H}\beta \else H$\beta$\fi\ and optical continuum
light curves of NGC~5548 (Horne, Welsh \& Peterson 1991).
Subtracting the continuum background $\bar{C}$,
convolving with the delay map $\Psi(\tau)$,
and adding the line background $\bar{L}$,
gives the \ifmmode {\rm H}\beta \else H$\beta$\fi\ light curve.
The three fits shown, all with $\chi^2/N=1$,
indicate the likely range of uncertainty
due to the trade-off between the `stiffness' of
$\Psi(\tau)$ and $C(t)$.
\begin{figure}
\plotfiddle{horne_fig6.eps}{7cm}{-90}{45}{45}{-180}{252}
\caption{
Echo maps of \ifmmode {\rm H}\beta \else H$\beta$\fi\ emission in NGC~5548 found by a maximum entropy
fit to data points from a 9-month AGN~Watch monitoring campaign
during 1989.
The optical continuum light curve (lower panel) is convolved with
the delay map (top left) to produce the \ifmmode {\rm H}\beta \else H$\beta$\fi\ emission line
light curve (top right).
Horizontal dashed lines give the mean line and continuum fluxes.
Three fits are shown to indicate likely uncertainties.
Data from Horne, Welsh \& Peterson~(1991).
}
\label{fig:hbmap}
\end{figure}
The \ifmmode {\rm H}\beta \else H$\beta$\fi\ map has a single peak at a delay of 20 days,
and declines to low values by 40 days.
This suggests that the size of the \ifmmode {\rm H}\beta \else H$\beta$\fi\ emission-line
region is 10--20 light days.
This is 10 to 20 times smaller than the 200 light day size
estimated in Eqn.~\ref{eqn:blrsize}
on the basis of single-cloud photo-ionization models.
With clouds this close to the nucleus, the
ionization parameter $U$ would be higher unless
gas densities are increased by a factor of 100,
to $n_{\sc H} \sim 10^{11}$cm$^{-3}$.
\subsection{Anisotropic Emission}
\label{sec:aniso}
The \ifmmode {\rm H}\beta \else H$\beta$\fi\ response in NGC~5548 is smaller at delay 0 than at 20 days.
This lack of a prompt response conflicts with the
monotonically decreasing delay map we expect for a
spherical geometry.
One interpretation is that there is a deficit of gas
on the line of sight to the nucleus.
However, more likely this is a signature of anisotropic
emission of the \ifmmode {\rm H}\beta \else H$\beta$\fi\ photons arising from optically
thick gas clouds that are photo-ionized only on their
inward faces (Ferland, et~al. 1992, O'Brien, et~al. 1994).
The preference for inward emission of \ifmmode {\rm H}\beta \else H$\beta$\fi\ photons
reduces the prompt response by making
the reprocessed light from clouds on the far side
stronger than that from clouds on the near side of the nucleus.
With inward anisotropy, the impulse response of a spherical shell
is a wedge, increasing with $\tau$,
rather than a boxcar (Fig.~\ref{fig:psi}).
The sum of wedges has a peak away from zero.
\subsection{Stratified Temperature and Ionization}
Ultra-violet spectra from {\it IUE} and {\it HST} provide shorter-wavelength
continuum light curves, and light curves for a variety of
high and low ionization emission lines.
These are useful probes of the temperature and
ionization structure of the reprocessing gas.
The continuum light curves display very similar structure
at all optical and ultraviolet wavelengths.
The continuum lightcurve at the shortest ultra-violet wavelength
observed is normally used as the light curve against which to
study time delays at other continuum wavelengths and in
the emission lines.
In most cases studied to date the optical and ultra-violet continuum
light curves exhibit practically simultaneous variations
(e.g. Peterson, et~al. 1998), implying
that the continuum production regions are unresolved,
smaller than a few light days.
In one case, NGC~7469, delays of 0.1 to 1.7 days are detected
between the 1350\AA\ and 7500\AA\ continuum light curves
based on 40 days of continuous {\it IUE} and optical monitoring
(Wanders et~al. 1997, Collier et~al. 1998).
The delay increases with wavelength as $\tau \propto \lambda^{4/3}$,
suggesting that the temperature decreases as $T \propto R^{-3/4}$.
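This scaling follows in one line from the light-travel time, the assumed temperature profile, and the Wien scaling $\lambda \propto 1/T$:

```latex
\tau \propto \frac{R}{c}\ , \qquad
T \propto R^{-3/4}\ , \qquad
\lambda \propto T^{-1}
\quad \Longrightarrow \quad
R \propto T^{-4/3} \propto \lambda^{4/3}
\quad \Longrightarrow \quad
\tau \propto \lambda^{4/3}\ .
```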
A systematic pattern is generally seen in the emission-line time delays.
High-ionization lines (\ifmmode {\rm N}\,{\sc v} \else N\,{\sc v}\fi, \ifmmode {\rm He}\,{\sc ii} \else He\,{\sc ii}\fi) exhibit the shortest delays,
while lower-ionization lines (\ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi,\ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi,\ifmmode {\rm H}\beta \else H$\beta$\fi) have longer delays.
This was established in the first AGN~Watch campaign,
by cross-correlation analysis (Clavel et~al. 1991)
and by maximum entropy fits (Krolik, et~al. 1991),
using {\it IUE} and optical light curves of NGC~5548
sampled at 4 day intervals for 240 days.
The effect is now seen in many objects (Table \ref{tab:lags}).
This pattern is consistent with photo-ionization models,
in which higher ionization zones occur closer to the nucleus.
\begin{table}
\begin{center}
\caption{\bf AGN~Watch Monitoring Campaigns
\label{tab:lags}
}
\begin{tabular}{crccccl}
\hline
target & year & \multicolumn{4}{c}{$\tau_{\sc CCF}$ (days)}& references
\\ & & \ifmmode {\rm He}\,{\sc ii} \else He\,{\sc ii}\fi\ & \ifmmode {\rm Ly}\alpha \else Ly$\alpha$\fi\ & \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ & \ifmmode {\rm H}\beta \else H$\beta$\fi\ &
\\ \hline \hline
NGC~5548 & 1989 & 7 & 12 & 12 & 20 & 1, 2, 3
\\ NGC~3783 & 1992 & 0 & 4 & 4 & 8 & 4, 5
\\ NGC~5548 & 1993 & 2 & 8 & 8 & 14 & 6
\\ Fairall~9 & 1994 & 4 & 17 & -- & 23 & 7, 8
\\ 3C~390.3 & 1995 & 10 & 50 & 50 & 20 & 11, 12
\\ NGC~7469 & 1996 & 1 & 3 & 3 & 6 & 9, 10
\\ \hline
\end{tabular}
\end{center}
\noindent References:
1. Clavel, et~al.~1991;
2. Peterson, et~al.~1991;
3. Dietrich, et~al.~1993;
4. Reichert, et~al.~1994;
5. Stirpe, et~al.~1994;
6. Korista, et~al.~1995;
7. Rodriguez-Pascual, et~al.~1997;
8. Santos-Lleo, et~al.~1997;
9. O'Brien, et~al.~1998;
10. Dietrich, et~al.~1998;
11. Wanders, et~al.~1997;
12. Collier, et~al.~1998.
\end{table}
Another interesting result is that
the time delay for \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ emission
is generally smaller than that for \ifmmode {\rm C}\,{\sc iii]} \else C\,{\sc iii]}\fi\ emission.
In early work, this line ratio was the basis for
estimating the gas density in the BLR to be $n_{\sc H} \sim 10^{8-10}$cm$^{-3}$.
However, this practice is now considered dubious
because their different time delays indicate that
these lines arise in different regions.
The \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ line arises from gas clouds closer to the nucleus,
and a higher density, $n_{\sc H} \sim 10^{11}$cm$^{-3}$,
is therefore needed to maintain the ionization parameter.
\subsection{Ionized Zones Expanding with Luminosity}
\begin{figure}
\plotfiddle{horne_fig7.eps}{6cm}{00}{45}{45}{-130}{-20}
\caption{
The radii of ionized zones emitting broad \ifmmode {\rm H}\beta \else H$\beta$\fi\ emission lines
for AGNs with different ionizing continuum luminosities.
The radii are estimated from time delays that are
found by cross-correlating \ifmmode {\rm H}\beta \else H$\beta$\fi\ and optical continuum light curves.
The diagonal line with $R \propto L^{1/2}$
corresponds to a constant ionization parameter.
Figure from Kaspi, et~al. (1996).
}
\label{fig:kaspi}
\end{figure}
The ionization zones should be larger in higher luminosity objects.
This prediction has been tested using
echo mapping studies of a dozen active galaxies,
including two low-luminosity quasars, spanning three decades in luminosity
(Fig.~\ref{fig:kaspi}, Kaspi, et~al. 1996).
The lag for each object is determined by cross-correlating
\ifmmode {\rm H}\beta \else H$\beta$\fi\ and optical continuum light curves.
A log-log plot of the time delay vs luminosity
shows a standard deviation of about 0.3 dex about a correlation
of the form
\begin{equation}
\tau_{\sc \ifmmode {\rm H}\beta \else H$\beta$\fi} \sim 17{\rm d}\ L_{44}^{1/2} ,
\end{equation}
where $L_{44}$ is the hydrogen-ionizing luminosity
in units of $10^{44}$erg~s$^{-1}$.
For comparison,
if gas clouds with similar densities are present
over a wide range of radius,
then zones of constant ionization parameter
should grow in size as $R \propto L^{1/2}$.
In the above correlation, derived by comparing different objects,
the observed scatter may be due in part to differences
in inclination, black hole mass, and accretion rate.
The ionized zone in each object is expected to change size
if the ionizing luminosity changes by a substantial factor.
There is some evidence for this, e.g. from 8 years of monitoring
NGC~5548 (Peterson, et~al. 1999).
To map this effect, the echo model may be generalized to allow the
delay map to change with the driving continuum light curve,
\begin{equation}
L(t) = \int_0^\infty C(t-\tau)\ \Psi(\tau,C(t-\tau))\ d\tau\ .
\end{equation}
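A direct discretization of this generalized model is straightforward. The sketch below is illustrative only, and assumes the continuum sampling interval equals the delay bin $d\tau$:

```python
import numpy as np

def echo(c, psi_of_c, n_tau, dtau=1.0):
    """Discrete form of
    L(t) = sum_j C(t - tau_j) Psi(tau_j, C(t - tau_j)) dtau,
    where psi_of_c(tau, c) lets the delay map respond to the driving
    continuum level.  Lag index j corresponds to tau_j = j * dtau."""
    l = np.zeros(len(c))
    for i in range(len(c)):
        for j in range(min(n_tau, i + 1)):
            cj = c[i - j]
            l[i] += cj * psi_of_c(j * dtau, cj) * dtau
    return l

# check against the luminosity-independent case: a boxcar delay map
box = lambda tau, cc: 1.0 if tau < 3.0 else 0.0
l = echo(np.array([1.0, 2.0, 3.0, 4.0]), box, n_tau=5)
```

With a $\Psi$ that ignores its second argument this reduces to the ordinary convolution; a luminosity-dependent $\Psi$ simply evaluates a different map at each reprocessing epoch.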
\section{Mapping the Kinematics of Emission-Line Regions}
\label{sec:kinematics}
Combining time delays from variability with Doppler shifts
from emission-line profiles
yields a velocity-delay map $\Psi(v,\tau)$ to
probe the kinematics of the flow.
Fig.~\ref{fig:vtmaps} illustrates this capability by sketching
the formation of velocity-delay maps for
spherical free-fall into a point-mass potential,
and for a Keplerian disc
(Welsh \& Horne 1991, Perez, et~al. 1992b).
The dramatically different appearance of the two flows
highlights the power of velocity-delay maps for kinematic
diagnosis.
\begin{figure}
\plotfiddle{horne_fig8.eps}{7cm}{-90}{45}{45}{-180}{252}
\caption{
Velocity--delay maps for spherical freefall
and Keplerian disc kinematics.
Spherical shells map to diagonal lines,
and disc annuli map to ellipses.
The central mass is $10^{7}$M$_\odot$ in both cases.
The disc
inclination angle is $60^\circ$.
}
\label{fig:vtmaps}
\end{figure}
A radial flow, inward or outward, produces a strong red--blue asymmetry
in the velocity-delay map, while an azimuthal flow does not.
Line emission from gas flowing in toward the nucleus
is redshifted ($v>0$) on the near side ($\tau$ small)
and blueshifted ($v<0$) on the far side ($\tau$ large).
The signature of inflowing gas is therefore small time delays
on the red side and large time delays on the blue side.
Outflowing gas produces just the opposite red--blue asymmetry.
Gas circulating around the nucleus has the same time delay
on the red and blue side, producing a symmetric velocity-delay map.
Any spherically-symmetric flow is
a nested set of thin spherical shells.
The time delay $\tau$ and Doppler shift $v$ are
\begin{equation}
\begin{array}{ccc}
\tau = \frac{R}{c} \left( 1 + \cos{\theta} \right) ,
& ~~~~ &
v = v_R \cos{\theta} ,
\end{array}
\end{equation}
with $R$ the shell radius,
$\theta$ the angle from the back edge,
and $v_R$ the outflow velocity.
The linear dependence of both $\tau$ and $v$ on $\cos{\theta}$
implies that each shell maps into a diagonal line in the velocity-delay
plane, as shown in Fig.~\ref{fig:vtmaps}.
The circulating disc
flow is a set of
concentric co-planar cylindrical annuli.
The time delay and Doppler shift are
\begin{equation}
\begin{array}{ccc}
\tau = \frac{R}{c} \left( 1 + \sin{i} \sin{\phi} \right) ,
& ~~~~ &
v = v_\phi \sin{i} \cos{\phi} ,
\end{array}
\end{equation}
with $R$ and $i$ the radius and inclination of the annulus,
$\phi$ the azimuth,
and $v_\phi$ the azimuthal velocity.
Each annulus maps to an ellipse on the velocity-delay plane,
as in Fig.~\ref{fig:vtmaps}.
Inner annuli map to ``squashed'' ellipses
(large $\pm v$, small $\tau$),
while outer annuli map to ``stretched'' ellipses
(small $\pm v$, large $\tau$).
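These two parametrizations translate directly into code. The sketch below (with $c=1$, and with velocities and radii that are arbitrary illustrative values) returns the $(v,\tau)$ loci:

```python
import numpy as np

def shell_locus(R, v_R, theta):
    """(v, tau) locus of a spherical shell in radial outflow:
    tau = (R/c)(1 + cos(theta)), v = v_R cos(theta); with c = 1 this
    traces a straight (diagonal) line in the velocity-delay plane."""
    mu = np.cos(theta)
    return v_R * mu, R * (1.0 + mu)

def annulus_locus(R, v_phi, inc, phi):
    """(v, tau) locus of a circular disc annulus:
    tau = (R/c)(1 + sin(i) sin(phi)), v = v_phi sin(i) cos(phi);
    an ellipse centred at (0, R/c) in the velocity-delay plane."""
    return (v_phi * np.sin(inc) * np.cos(phi),
            R * (1.0 + np.sin(inc) * np.sin(phi)))

theta = np.linspace(0.0, np.pi, 50)
phi = np.linspace(0.0, 2.0 * np.pi, 50)
v_s, tau_s = shell_locus(R=10.0, v_R=3000.0, theta=theta)
v_a, tau_a = annulus_locus(R=10.0, v_phi=3000.0,
                           inc=np.radians(60.0), phi=phi)
```

Eliminating the angle confirms the geometry: the shell points satisfy $\tau = (R/c)(1 + v/v_R)$, a straight line, while the annulus points satisfy the equation of an ellipse.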
While $\Psi(v,\tau)$ is a distorted 2-dimensional
projection of the 6-dimensional phase space,
it can reveal important aspects of the flow, particularly if
the velocity field is ordered and has some degree of symmetry.
The envelope of the velocity-delay map may reveal
the presence of virial motions $v^2 \propto G M / c \tau $,
with $M$ the mass of the central object.
A red/blue asymmetry, or lack thereof, gauges the relative
importance of radial and azimuthal motions.
Disordered velocity fields with a range of velocities
at each position smear the map in the $v$ direction.
The far side (large $\tau$) may be enhanced by
anisotropic emission from optically thick clouds
radiating their lines inward toward the nucleus
(Ferland, et~al. 1992, O'Brien, et~al. 1994).
To derive Doppler-delay maps from the data,
we simply slice the observed line profile into wavelength bins,
and recover the delay map at each wavelength.
This is a simple extension of the echo model
used to fit continuum and emission-line light curves.
At each wavelength $\lambda$ and time $t$ the emission-line flux
\begin{equation}
L(\lambda,t) = \bar{L}(\lambda) +
\int_0^\infty \left[ C(t-\tau) - \bar{C} \right]\
\Psi(\lambda,\tau)\ d\tau
\end{equation}
is obtained by summing time-delayed responses
to the continuum light curve and adding
a time-independent background spectrum $\bar{L}(\lambda)$.
We observe $C(t)$ and $L(\lambda,t)$ at some set of times $t_i$.
We fix $\bar{C}$, e.g. close to the mean of the observed values.
We then adjust the continuum light curve $C(t)$,
the background spectrum $\bar{L}(\lambda)$,
and the Doppler-delay map $\Psi(\lambda,\tau)$,
in order to fit the observations.
The fitting procedure maximizes the entropy
to find the ``simplest'' map(s) that fit the
observations with $\chi^2/N = 1$.
\begin{figure}
\plotfiddle{horne_fig9.eps}{6cm}{-90}{40}{40}{-180}{240}
\caption{ {\sc memecho } velocity-delay map of \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ 1550 emission
(and superimposed \ifmmode {\rm He}\,{\sc ii} \else He\,{\sc ii}\fi\ 1640 emission) in NGC~4151.
Dashed curves give escape velocity for masses
0.5, 1.0, and $2.0\times10^7$M$_\odot$.
}
\label{fig:4151}
\end{figure}
Fig.~\ref{fig:4151} exhibits the velocity-delay map for \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ 1550
and \ifmmode {\rm He}\,{\sc ii} \else He\,{\sc ii}\fi\ 1640 emission as reconstructed from
44 {\it IUE} spectra of NGC~4151
covering 22 epochs during 1991 Nov 9 -- Dec 15
(Ulrich \& Horne 1996).
While spanning only 36 days, this campaign recorded
favorable continuum variations, including a bumpy exponential decline
followed by a rapid rise, which were sufficient to support echo mapping on
delays from 0 to 20 days.
Disregard the strong \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ absorption feature, which
obliterates the delay structure at small velocities.
The wide range of velocities at small delays
and smaller range at larger delays suggests virial motions.
The dashed curves in Fig.~\ref{fig:4151} give escape
velocity envelopes
$v = \sqrt{2GM/c\tau}$ for masses 0.5, 1.0, and $2.0 \times
10^{7}$~M$_\odot$.
A mass of order $10^{7}$~M$_\odot$
may be concentrated within 1~light-day of the nucleus.
The approximate red--blue symmetry of the \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ map
rules out purely radial inflow or outflow kinematics.
The somewhat stronger \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ response
on the red side at small delays
and on the blue side at larger delays
suggests a gas component with freefall kinematics.
However, if the \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ emission arises from the irradiated faces of
optically-thick clouds, those on the far side of the nucleus
will be brighter, and the redward asymmetry can then be interpreted
as an outflow combined with the inward \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ anisotropy.
Velocity-delay maps have so far been constructed
only for \ifmmode {\rm C}\,{\sc iv} \else C\,{\sc iv}\fi\ emission in
NGC~5548 (Wanders, et~al. 1995, Done \& Krolik 1996)
and
NGC~4151 (Ulrich \& Horne 1996).
Both systems show the same trend of velocity dispersion
decreasing with delay,
and red response stronger at small delays.
As more maps are constructed, we will learn whether these
are general characteristics of Seyfert broad-line regions.
\section{Mapping Physical Conditions in Emission-Line Regions}
\label{sec:conditions}
\subsection{Quasar Tomography}
The rich information coded in high quality time-resolved spectrophotometry
of a reverberating emission-line spectrum
can be extracted only by fitting observations in far
greater detail than has previously been attempted.
Here we extend previous echo mapping methods to map simultaneously
the geometry, kinematics, and physical conditions
characterizing the population of photo-ionized gas clouds.
To do this we fit simultaneously the complete
reverberating spectrum, including the reverberations observed
in the fluxes and profiles of numerous emission lines.
These fits incorporate explicitly the predictions of
a photo-ionization model such as CLOUDY,
thus accounting for the non-linear and anisotropic responses
of emission-line clouds to changes in the ionizing radiation.
We characterize a gas cloud by its density $n_{\sc H}$,
column density $N_{\sc H}$, distance from the nucleus $R$,
azimuth $\theta$, and Doppler shift $v$.
The entire population of gas clouds is then
described by a 5-dimensional map
$\Psi( R, \theta, n_{\sc H}, N_{\sc H}, v )$.
We omit the cloud's position angle $\phi$ around the line of sight,
and the perpendicular velocity components,
because the data provide no useful constraint on these.
Assuming that the shape of the ionizing spectrum is known,
and that the time-dependent ionizing photon luminosity is $Q(t)$,
then the ionization parameter obtaining at time $t$ is
\begin{equation}
U(t) = \frac {Q(t-\tau)} {4 \pi R^2 n_{\sc H} c} .
\end{equation}
The reprocessing efficiency $\epsilon_L(U,n_{\sc H},N_{\sc H},\theta)$
for emission line $L$ depends in a unique way on $U$,
$n_{\sc H}$, and $N_{\sc H}$.
The reprocessing efficiencies
are evaluated with a photo-ionization code, e.g.\ CLOUDY,
for chosen element abundances.
To save computer time, these are pre-calculated
on a suitable grid of cloud parameters,
e.g. equally spaced in $\log{U}$, $\log{n_{\sc H}}$, and $\log{N_{\sc H}}$
(Korista, et~al. 1997).
Results required for any cloud parameters are
subsequently found rapidly by interpolation in the grid.
Because clouds are irradiated on their inward faces,
the reprocessed line emission is generally anisotropic.
We allow for this by letting $\epsilon_L$ depend on $\theta$.
CLOUDY computes the emission emerging inward, $\theta=0$,
and outward, $\theta=90^\circ$.
For intermediate angles we interpolate linearly in $\cos{\theta}$.
The observed spectrum is a sum of three components:
direct light from the nucleus,
reprocessed light from the surrounding gas clouds,
and background light (e.g.\ from stars),
\begin{equation}
f_\nu(\lambda,t) =
f^D_\nu(\lambda,t) + f^R_\nu(\lambda,t) + f^B_\nu(\lambda)\ .
\end{equation}
The direct light is
\begin{equation}
f^D_\nu(\lambda,t) =
\frac { Q(t) S_\nu(\lambda) } { 4 \pi D^2 }
\end{equation}
where $D$ is the distance and the spectral shape is
\begin{equation}
S_\nu(\lambda) = \frac{ L_\nu(\lambda,t) }{ Q(t) } .
\end{equation}
The reprocessed light is a sum over numerous emission lines,
where $\lambda_L$ and $\epsilon_L$ are the
rest wavelength and the reprocessing efficiency for line $L$.
The gas clouds are distributed over a 3-dimensional volume.
At each location the cloud population has a distribution
over density $n_{\sc H}$, column density $N_{\sc H}$, and Doppler shift $v$.
The reprocessed light arising from such a configuration is
\begin{equation}
\begin{array}{rl}
f^R_\nu(\lambda,t)
= & \int 2\pi R dR\ d\mu\ dn_{\sc H}\ dN_{\sc H}\ dv\
\Psi(R,\theta,n_{\sc H},N_{\sc H},v)\
\\ & \frac{Q(t-\tau)}{4 \pi D^2}\
\sum_L \epsilon_L(U,n_{\sc H},N_{\sc H},\theta)\
G_\nu(\lambda-(1+v/c)\lambda_L)\
\\ & \delta\left( R - \left( \frac{ Q(t-\tau) } { 4 \pi U n_{\sc H} }
\right)^{1/2} \right)\
\delta\left( \mu - \frac{\tau c}{R} + 1 \right)\ .
\end{array}
\end{equation}
Here $\mu = \cos{\theta}$,
$G_\nu$ is a Gaussian profile to apply the appropriate Doppler shift,
and the Dirac $\delta$ functions ensure that the
appropriate time delay and ionization parameter are used
at each reprocessing site.
To fit the above model to observations of $f_\nu(\lambda,t)$,
we adjust $D$, $Q(t)$, $f^B_\nu(\lambda)$,
and $\Psi(R,\theta,n_{\sc H},N_{\sc H},v)$.
The 5-dimensional cloud map $\Psi$ can have a very large number of pixels,
$\sim 10^{5-6}$.
The constraints available from reverberating emission-line spectra
will not be sufficient to fully determine the cloud map.
However, once again MEM may be employed
to locate the ``smoothest'' and ``most symmetric''
cloud maps that fit the data.
Computers are now fast enough to support
this type of detailed modelling and mapping of AGN emission regions.
\subsection{Mapping a Spherical Shell from Simulated Data}
\begin{figure}
\plotfiddle{horne_fig10a.eps}{6.5cm}{0}{35}{35}{-205}{-28}
\plotfiddle{horne_fig10b.eps}{0cm}{0}{35}{35}{-25}{-5}
\caption{
Reconstructed map $\Psi(R,\theta,n_{\sc H})$
of a thin spherical shell (left) from a maximum entropy
fit to 7 ultra-violet emission-line light curves (right).
The light curve fits achieve $\chi^2/N=1$.
The shell radius $R/c = 12$d and density $\log{n_{\sc H}}=11$
are correctly recovered.
For further details, see text.
}
\label{fig:shellfit}
\end{figure}
Fig.~\ref{fig:shellfit} shows a simulation test designed to determine
how well the geometry and physical conditions may be recoverable
from emission-line reverberations.
The adopted BLR model places clouds with density
$n_{\sc H}=10^{11}$cm$^{-3}$ and column density $N_{\sc H}=10^{23}$cm$^{-2}$
on a thin spherical shell of radius $R/c=12$d.
Synthetic light curves are computed showing reverberations in
7 ultra-violet emission lines, using CLOUDY to calculate the
appropriate anisotropic and non-linear emissivities.
The synthetic light curves are sampled at 121 epochs spaced by 2 days,
and noise is added to simulate observational errors.
The MEM fit does not assume a spherical shell geometry,
but rather it considers every possible cloud map $\Psi(R,\theta,n_{\sc H})$,
and tries to find the simplest such map that fits the
emission line light curves.
The MEM fit adjusts 4147 pixels in the cloud
map $\Psi(R,\theta,n_{\sc H})$,
143 points in the continuum light curve $C(t)$,
7 emission-line background fluxes $\bar{L}(\lambda)$, and
1 continuum flux $\bar{C}$.
Note that the fit assumes the correct column density and distance.
The fit to $N=968$ data points achieves $\chi^2/N=1$.
The entropy steers each pixel in the map
toward its nearest neighbors, thus encouraging smooth maps,
and toward the pixel with the opposite sign of $\cos{\theta}$,
to encourage front-back symmetry.
On the left-hand side of Fig.~\ref{fig:shellfit}, we see that the
recovered geometry displays the appearance of a hollow shell
with the correct radius, and that the density-radius projection
of the map has a peak at the correct density.
The shell spreads in radius by a few light days,
with lower densities at larger radii to
maintain the same ionization parameter.
On the right-hand side of Fig.~\ref{fig:shellfit}, we see that
the 8 light curves are well reproduced by the fit.
The highs and lows are a bit more extreme in the data --
a common characteristic of regularized fits.
To the left of each emission-line light curve, three
delay maps are shown corresponding to the maximum brightness
(solid curve), minimum brightness (dashed),
and the difference (dotted).
All the lines exhibit an inward anisotropy except \ifmmode {\rm C}\,{\sc iii]} \else C\,{\sc iii]}\fi.
All the lines have positive linear responses except \ifmmode {\rm Mg}\,{\sc ii} \else Mg\,{\sc ii}\fi,
which has a positive response on the near side of the shell
and a negative response on the far side.
This simulation test has used reverberation effects in
emission-line light curves to map the geometry and density structure
in the photo-ionized emission line zone.
The 2-dimensional map clearly reveals the correct hollow shell geometry,
and moreover the correct density is also recovered.
These promising results suggest that there are good prospects for probing
the geometry and physical conditions in real AGNs.
\section{Two New Direct Routes to $H_0$}
\label{sec:h0}
Because the primary aim of this meeting is to discuss ways in which
quasars may be useful as standard candles for cosmology,
I will conclude with a brief discussion of two methods,
both relatively new, which may allow the determination of
redshift-independent distances to active galaxies.
Both methods employ echo mapping time delays to measure the physical
size of something.
Emission line time delays measure the sizes of
regions of ionized gas surrounding the nucleus.
Continuum time delays measure the sizes of reprocessing
zones on the reverberating surface of an irradiated accretion
disc.
\subsection{$H_0$ from Sizes of Photo-Ionization Zones}
\label{sec:h0_lines}
The ionizing photon luminosity $Q$ can be estimated
from an observed flux $F$ as
\begin{equation}
Q = 4\pi D^2 S F\ ,
\end{equation}
where $D$ is the distance and $S$ is a ``spectral shape'' factor
that converts the observed flux $F$ to an ionizing photon flux
\begin{equation}
S F = \int_0^{\lambda_0}
\frac{ \lambda f_\lambda d\lambda }{ h c }\ .
\end{equation}
If the ionizing radiation is not isotropic, the flux $F$
in our direction differs from that seen by the photo-ionized gas clouds.
That uncertainty may be absorbed into the factor $S$.
From the definition of the ionization parameter $U$,
the radius of the photo-ionized zone is given by
\begin{equation}
R = \left( \frac{Q}{4\pi U n_{\sc H} c } \right)^{1/2}
= D \left( \frac{ S F }{ U n_{\sc H} c } \right)^{1/2}\ .
\end{equation}
Reverberation results allow us to measure $R = \tau c$
from a time delay $\tau$.
We therefore find that the distance is
\begin{equation}
D = \tau \left( \frac { U n_{\sc H} c^3 } { S F } \right)^{1/2} .
\end{equation}
Since $\tau$ is measured by reverberation, while
$U$, $n_{\sc H}$, $S$, and $F$ are found from the emission-line spectrum,
we have a means of estimating the distance
without using the redshift.
For redshift $z << 1 $, the Hubble constant is then
\begin{equation}
H_0 = \frac{cz}{D}
= \frac{z}{\tau} \left( \frac {S F} { U n_{\sc H} c } \right)^{1/2} .
\end{equation}
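The arithmetic is illustrated below with entirely hypothetical input values, chosen only to give numbers of roughly the right order of magnitude; none of them are measurements:

```python
# Illustrative sketch of the distance and H0 estimates above.
# Every input value is hypothetical, not a measurement.
c = 2.998e10                 # speed of light, cm/s
tau = 20.0 * 86400.0         # 20-day emission-line lag, in seconds
U = 0.1                      # ionization parameter (assumed)
n_H = 1e11                   # gas density, cm^-3 (assumed)
SF = 16.0                    # ionizing photon flux S*F, cm^-2 s^-1 (assumed)
z = 0.017                    # redshift of a hypothetical Seyfert

D = tau * (U * n_H * c**3 / SF) ** 0.5     # distance, cm
H0 = c * z / D                             # Hubble constant, s^-1
H0_kms_Mpc = H0 * 3.086e24 / 1.0e5         # convert to km/s/Mpc
```

The point is only that every quantity on the right-hand side is observable without the redshift, so $D$ and hence $H_0$ follow directly once $\tau$, $U$, $n_{\sc H}$, $S$, and $F$ are pinned down.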
The above results apply only to the over-simplified case
of photo-ionized gas characterized by single values of
the ionization parameter $U$ and density $n_{\rm H}$.
This is intended only as an illustrative sketch of the method.
In a more realistic model, the gas clouds are described not by single values but by
a cloud distribution function $\Psi(R,\theta,n_{\rm H},N_{\rm H})$.
We discussed this type of fitting in Section~\ref{sec:conditions}.
\subsection{$H_0$ from Reverberating Accretion Discs}
\label{sec:h0_continuum}
\noindent{\bf Steady-State Accretion Discs.}
The effective temperature on the surface of a steady-state accretion
disc decreases with radius as
\begin{equation}
T = \left( \frac{3 G M \dot{M}}{ 8 \pi \sigma R^3 }
\right)^{1/4} ,
\end{equation}
where $M$ is the black hole mass and $\dot{M}$ is the accretion rate.
If the disc surface radiates as a blackbody,
the spectrum arising from the disc is
obtained by summing blackbody spectra
weighted by the projected areas of the annuli,
\begin{equation}
f_\nu = \int B_\nu(T) \frac{ 2\pi R dR \cos{i} }{ D^2 }
= \left( \frac{ 1200 G^2 h }{ \pi^9 c^2} \right)^{1/3}
I\
\left( \frac { \cos{i} } {D^2} \right)
\left( M \dot{M} \right)^{2/3}
\nu^{1/3} .
\end{equation}
Here $D$ is the distance, $i$ is the inclination of the disc
axis relative to the line of sight, and
$I = \int_0^\infty x^{5/3}/(e^x-1)\,dx \sim 1.932$.
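The dimensionless constant $I$ (equal to $\Gamma(8/3)\,\zeta(8/3)$) can be checked by direct numerical quadrature; a minimal pure-Python evaluation:

```python
from math import exp

def integrand(x):
    # x^{5/3} / (e^x - 1); the limit as x -> 0 is 0
    return 0.0 if x == 0.0 else x**(5.0/3.0) / (exp(x) - 1.0)

# Trapezoidal rule on [0, 50]; the integrand falls off as x^{5/3} e^{-x},
# so the truncated tail beyond x = 50 is negligible.
N, a, b = 200_000, 0.0, 50.0
h = (b - a) / N
I = h * (0.5 * (integrand(a) + integrand(b))
         + sum(integrand(a + i * h) for i in range(1, N)))

print(round(I, 3))   # 1.932
```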
The optical/ultra-violet spectra of AGNs have ``Big Blue Bump''
components that are attributed to thermal emission from
accretion discs.
Observed spectra are generally redder than the
characteristic $f_\nu \propto \nu^{1/3}$ spectrum predicted by disc
theory (e.g.\ Francis et~al. 1991).
The spectral signature of an accretion disc is therefore not
very convincingly demonstrated.
However, observed spectra are contaminated e.g.\
by starlight from the host galaxy (e.g.\ Welsh et~al. 1999).
Taking difference spectra cancels out the contamination and
measures the variable component of the light.
The difference spectra are usually bluer and can be in satisfactory
agreement with the predicted spectrum for the change
in brightness of an irradiated accretion disc
(Collier et~al. 1999).
\noindent{\bf Echo Mapping Accretion Disc Temperature Profiles.}
In order to map the temperature profiles of accretion discs,
we need some way to measure the temperature, and some way
to measure the radius at which that temperature applies.
We use a time delay to measure the radius,
and a wavelength to measure the temperature.
The disc surface is irradiated by
the erratically variable source located near its centre,
launching heating and cooling waves that propagate
at the speed of light outward from the centre of the disc.
This effectively reprocesses the hard X-ray and EUV
photons that irradiate the disc
into softer ultra-violet, optical, and infra-red photons.
A distant observer will note that the reprocessed light
arrives with a time delay
\begin{equation}
\tau = \frac{R}{c} \left( 1 + \sin{i} \cos{\theta} \right)\ .
\end{equation}
The heating wave requires a time $R/c$ to reach radius $R$,
and the reprocessing site at radius $R$ and azimuth $\theta$
is at a distance from the observer that is larger
by $R\sin{i}\cos{\theta}$
relative to the centre of the disc.
Note that for the annulus at radius $R$,
the mean time delay, averaged around the annulus,
is always $R/c$ regardless of the inclination,
but the range of time delays depends on the inclination.
If the disc is face on ($i=0$) then all azimuths have the
same time delay.
If the disc is edge on ($i=90^\circ$)
then the far edge of the disc at $\theta=0$
has the maximum time delay $\tau = 2R/c$,
and the near edge at $\theta=180^\circ$
has a time delay of zero.
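These geometric claims -- azimuthal mean delay $R/c$ at any inclination, and extremes $0$ and $2R/c$ for an edge-on disc -- are easy to verify numerically; a short sketch:

```python
from math import pi, sin, cos, radians, isclose

def delay(R, c, i, theta):
    """Echo time delay for radius R, inclination i, azimuth theta."""
    return (R / c) * (1.0 + sin(i) * cos(theta))

R, c = 1.0, 1.0                      # work in units where R/c = 1
for i_deg in (0, 30, 60, 90):
    i = radians(i_deg)
    thetas = [2 * pi * k / 10_000 for k in range(10_000)]
    mean_tau = sum(delay(R, c, i, t) for t in thetas) / len(thetas)
    assert isclose(mean_tau, R / c, rel_tol=1e-9)   # mean is R/c at any i

assert isclose(delay(R, c, radians(90), 0.0), 2.0)  # far edge, edge-on
assert delay(R, c, radians(90), pi) < 1e-12         # near edge, edge-on
```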
We use the blackbody spectrum to associate each
wavelength with a temperature.
The Planck spectrum for a blackbody temperature $T$,
\begin{equation}
B_\nu(T) = \frac{2 h c}
{\lambda^3 \left( e^X - 1 \right) }\ ,
\end{equation}
peaks at the dimensionless frequency $X = hc / \lambda k T \sim 2.8$.
When irradiation increases $T$ slightly,
the change in the spectrum, proportional to
$\partial B_\nu/\partial T$, peaks near $X \sim 3.8$.
To map the temperature profile of an irradiated accretion disc,
we measure the time delay at different wavelengths.
When we observe a change in the spectrum at wavelength $\lambda$,
we are measuring the reprocessed light from disc annuli
where temperatures are
\begin{equation}
T \sim \frac{hc}{\lambda k X}
\end{equation}
with $X \sim 4$.
Ultra-violet, optical, and near infra-red light curves give
time delays for hot gas with $T\sim 10^5$K,
warm gas with $T\sim 10^4$K,
and cold gas with $T\sim 10^3$K, respectively.
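As an illustration of this wavelength-to-temperature mapping (the specific wavelengths below are chosen for illustration, not taken from the text), evaluating $T \sim hc/\lambda k X$ with $X = 4$:

```python
H  = 6.62607015e-34    # Planck constant [J s]
C  = 2.99792458e8      # speed of light [m/s]
KB = 1.380649e-23      # Boltzmann constant [J/K]
X  = 4.0               # near the peak of dB_nu/dT

def probed_temperature(lam_m):
    """Disc temperature probed at observing wavelength lam_m [m]."""
    return H * C / (lam_m * KB * X)

for name, lam in [("far-UV 100 nm ", 100e-9),
                  ("optical 500 nm", 500e-9),
                  ("near-IR 2.2 um", 2.2e-6)]:
    print(name, round(probed_temperature(lam)), "K")
```

This reproduces the ordering quoted above: shorter wavelengths probe hotter annuli, spanning a few $\times 10^{3}$\,K to a few $\times 10^{4}$\,K for these choices.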
If the temperature decreases with radius, the time delay
should increase with wavelength.
For the temperature profile of the steady-state disc,
the time delay is
\begin{equation}
\tau = \left( \frac{ 45 G }{ 16 \pi^6 c^5 h } \right)^{1/3}\
\left( M\dot{M} \right)^{1/3}\
\left( X \lambda \right)^{4/3}\ .
\end{equation}
This $\tau \propto \lambda^{4/3}$ prediction has recently been
verified in the case of the Seyfert 1 galaxy NGC~7469
(Collier, et~al. 1998).
The observed $\tau(\lambda)$ therefore allows a measurement
of $M\dot{M}$,
\begin{equation}
M\dot{M} =
\left( \frac { 16 \pi^6 h c^5 } { 45 G } \right)\
\left( X \lambda \right)^{-4}\ \tau^{3}\ .
\end{equation}
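The inversion above can be checked by a round trip: computing $\tau$ from the previous equation and feeding it back must return the original $M\dot{M}$, and doubling the wavelength must scale the delay by $2^{4/3}$. A sketch with an invented, purely illustrative value of $M\dot{M}$:

```python
from math import pi, isclose

G, H, C = 6.674e-11, 6.626e-34, 2.998e8   # SI constants

def tau_of(MMdot, Xlam):
    # tau = (45 G / (16 pi^6 c^5 h))^{1/3} (M Mdot)^{1/3} (X lambda)^{4/3}
    return (45 * G / (16 * pi**6 * C**5 * H))**(1/3) \
        * MMdot**(1/3) * Xlam**(4/3)

def MMdot_of(tau, Xlam):
    # inversion quoted in the text
    return (16 * pi**6 * H * C**5 / (45 * G)) * Xlam**(-4) * tau**3

MMdot = 1e38                  # hypothetical M*Mdot [SI units]
Xlam  = 4.0 * 500e-9          # X * lambda [m]

tau = tau_of(MMdot, Xlam)
assert isclose(MMdot_of(tau, Xlam), MMdot, rel_tol=1e-9)   # round trip
assert isclose(tau_of(MMdot, 2 * Xlam) / tau, 2**(4/3), rel_tol=1e-9)
```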
Substituting for $M\dot{M}$ in the expression for $f_\nu$,
we find the distance,
\begin{equation}
D = \left( \frac{16 \pi h c^3 }{3} \right)^{1/2}\
I^{1/3}\
\left(\frac{\cos{i}}{f_\nu}\right)^{1/2}\
\left( X \lambda \right)^{-4/3}\
\tau\ .
\end{equation}
Note that this distance is determined independently of the
redshift of the AGN.
Finally, for redshift $z \ll 1$, the Hubble constant is
\begin{equation}
H_0 = \frac{cz}{D} = \left( \frac {3} {16 \pi h c} \right)^{1/2}\
I^{-1/3}
\left( \frac {f_\nu} {\cos{i}} \right)^{1/2}\
\left( X \lambda \right)^{4/3}\
\left( \frac{z}{ \tau } \right)\ .
\end{equation}
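The expressions for $D$ and $H_0$ above should satisfy $H_0 = cz/D$ identically; evaluating both independently with invented observables confirms the algebra:

```python
from math import pi, sqrt, isclose

H, C = 6.626e-34, 2.998e8    # Planck constant, speed of light (SI)
I = 1.932

# Hypothetical observables (illustrative only):
fnu  = 1e-26                 # flux density [W m^-2 Hz^-1]
cosi = 0.49                  # cos(i)
Xlam = 4.0 * 500e-9          # X * lambda [m]
tau  = 86400.0               # continuum delay: 1 day [s]
z    = 0.02

D  = sqrt(16 * pi * H * C**3 / 3) * I**(1/3) \
    * sqrt(cosi / fnu) * Xlam**(-4/3) * tau
H0 = sqrt(3 / (16 * pi * H * C)) * I**(-1/3) \
    * sqrt(fnu / cosi) * Xlam**(4/3) * (z / tau)

assert isclose(H0, C * z / D, rel_tol=1e-9)   # the two forms agree
```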
The above is of course only an outline description of the method.
In practice one fits a model of reverberations in
an irradiated disc to observed light curves.
This new method is based on fairly straightforward physics --
light travel time delays to measure radius,
and blackbody spectra to associate a temperature with each wavelength.
The results for NGC~7469 give
$H_0 \sqrt{\cos{i}/0.7} = 42 \pm 9 $~km~s$^{-1}$Mpc$^{-1}$
(Collier, et~al. 1999).
The unknown disc inclination may not be too serious a problem
because at high inclinations the dusty torus should obscure the
broad emission line region.
Seyfert 1 galaxies are therefore expected to
have $0.7 \raisebox{-.5ex}{$\;\stackrel{<}{\sim}\;$} \sqrt{\cos{i}} < 1$.
The $\sqrt{\cos{i}}$ uncertainty may be reduced further
by applying the method to a sample of Seyfert 1 galaxies
with random inclinations in the above range.
The use of blackbody spectra may be questioned, but can be checked
by seeing if the same distance is found at different wavelengths.
This test holds up for NGC~7469 (Collier, et~al. 1999).
If the method can be shown to work for a larger sample of
objects, it may serve to calibrate AGNs as standard candles
for cosmology.
\acknowledgments
Thanks to Brad Peterson and Shai Kaspi for providing figures
\ref{fig:ccf} and \ref{fig:kaspi} respectively.
\section{Introduction}
The science of examining and qualifying gemstones has used optics for centuries. In a lecture delivered at the Imperial Institute of Great Britain in 1895, Sir Henry Miers, a mineralogist, discussed the clear advantages of the use of optics for the analysis of gemstones compared to existing traditional methods\cite{miers_precious_1895}. Non-contact and nondestructive measuring methods are vital with such valuable samples. However, the common tests of the era included scratching samples to check their hardness, weighing the samples in different media to find their specific gravity, and chemical analyses, which necessitated destroying part of a sample. By contrast, Sir Miers argued for using conventional and polarized light microscopy, and measuring refractive index and absorption as more accurate, less destructive methods. Modern versions of many of his proposed methods are still commonly used by geologists today as ready tools for qualifying and identifying stones\cite{pirard_particle_2007,turner_reflectance_2017}.
In recent years, a number of more advanced imaging and analysis techniques have been used to examine gemstones. Raman spectroscopy can identify minerals via their characteristic vibrational modes, which has been widely reported and used in the literature\cite{bersani_applications_2010,hope_raman_2001,reiche_situ_????,barone_red_????,kiefert_identification_2000,giarola_raman_????,culka_gem_2016}. This approach lacks the ability to resolve structural details, however, since each Raman spectrum must be taken in a point-by-point manner. Terahertz spectroscopy has been used to analyze gems, but without any high resolution or structural information\cite{yu_identification_2010,han_lattice_2015}. Electron microscopy can resolve high resolution structural details down to the atomic scale, but is not practical for examining a large scale portion of a sample, and cannot probe deep beneath a sample surface\cite{krivanek_atom-by-atom_2010}. X-ray micro-CT can resolve structural details in 3D, but has a long acquisition time and requires expensive equipment\cite{sahoo_surface_2016}. Proton Induced X-ray Emission (PIXE) can perform an elemental analysis of a gemstone, but cannot provide any structural information\cite{venkateswara_rao_trace_2013}. The Atomic Force Microscope can resolve incredibly fine details on the surface of a gemstone, and even examine the pyroelectric nature of certain stones, but cannot obtain any information from below the surface\cite{afm}.
The first introduction of the multiphoton microscope focused primarily on its usefulness in the biological sciences\cite{denk_two-photon_1990}. Recently, the multiphoton microscope has been shown to be very useful in the characterization of materials as well. We have designed and used several multiphoton microscopes for a variety of applications\cite{Kieu:13,armpm}. We have used them to examine the layers and grain boundaries of Molybdenum Disulphide\cite{mos2_2,mos2}, rapidly characterize large sections of graphene\cite{graphene}, and utilize second and third harmonic generation to investigate the layers of Gallium Selenide\cite{GalSel}. In this publication, we demonstrate the utility of the multiphoton microscope (MPM) in the study of gems and minerals. The MPM can capture the beautiful structural details of these stones non-destructively with three-dimensional sub-micron resolution through depths on the millimeter scale. We hope that this will be a useful new tool that will enable mineralogists to determine structural details beneath the surface, as well as crystalline axis orientation by adjusting the laser polarization. This new information obtained using the MPM may provide useful insights into the formation of these gemstones. Lastly, gems and minerals have long been valued due to their beauty and rarity. We feel that this tool is a new way to appreciate them, by revealing their hidden details beneath the surface.
\section{Methodology}
\subsection{Multiphoton Microscope Description}
The multiphoton microscope used was designed and built in-house, and is controlled by LabVIEW software created by our research group. Since its design is described elsewhere in the literature\cite{Kieu:13}, only a brief description will be given here. The multiphoton microscope uses nonlinear optical interactions to create contrast in a sample. A pulsed laser source with very high peak power, but relatively low average power, is raster scanned by a pair of galvanometric scan mirrors, and focused onto the sample with a microscope objective, creating an image in a point-by-point fashion. The high peak power (kilowatt level) of the laser pulses is ideal for creating strong nonlinear signals from the samples. However, the low average power (milliwatt level) prevents the samples from being damaged. Our system uses a dichroic mirror to split the signal to two photomultiplier tubes (PMTs), enabling the simultaneous detection of two different signals. In this project, the microscope was configured to image with Second Harmonic Generation (SHG) and Third Harmonic Generation (THG).
Two different lasers were used to image the samples, both of which were femtosecond pulsed fiber lasers, one at a wavelength of 1040 nm and the other at 1560 nm. Both lasers have an 8 MHz rep rate, pulse widths on the 100 femtosecond scale, and average powers around 70 mW. The samples were imaged with both lasers in order to examine the different information and different depths that could be probed by each source. To split the signal light from the 1040 nm and 1560 nm lasers into the two detection channels, 414 nm and 560 nm dichroic mirrors were used, respectively. For capturing images, a Nikon 20x .75NA objective was used. This objective was chosen as the best balance of resolution, field of view, and working distance. The resolution for this objective was calculated according to equations published in the literature\cite{Zipfel2003}. For the 1040 nm laser, the diffraction-limited lateral resolution was 520 nm for SHG, and 425 nm for THG. For the 1560 nm laser, the lateral resolution was 770 nm for SHG, and 630 nm for THG. Since the small working distance associated with higher NA objectives was not an issue for most images of these samples, the Nikon 20x .75NA objective was used for every image captured unless otherwise noted. For the deeper images where a longer working distance was needed, the objective was changed to a 20x .5 NA aspheric lens, with the trade-off of a decrease in resolution (about a factor of 1.5 times lower resolution). The THG and SHG signals were collected by the same microscope objective in an epi-detection imaging scheme.
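The quoted SHG/THG resolution pairs are consistent with the usual multiphoton scaling in which the effective point-spread function of an $n$-photon process narrows as $1/\sqrt{n}$, so the THG/SHG resolution ratio at fixed excitation wavelength should be $\sqrt{2/3} \approx 0.816$ (the $1/\sqrt{n}$ scaling is an assumption used here for the check, not a quote from the cited equations):

```python
from math import sqrt

# Lateral resolutions quoted in the text [nm]:
res = {("1040", "SHG"): 520, ("1040", "THG"): 425,
       ("1560", "SHG"): 770, ("1560", "THG"): 630}

# If the n-photon PSF narrows as 1/sqrt(n), the THG/SHG ratio
# at a fixed wavelength should be sqrt(2/3) ~ 0.816.
expected = sqrt(2.0 / 3.0)
for lam in ("1040", "1560"):
    ratio = res[(lam, "THG")] / res[(lam, "SHG")]
    assert abs(ratio - expected) < 0.01
```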
\subsection{Sample Description and Imaging Methodology}
A collection of 36 gem stones (10 mm tall by 15 mm wide, 2 mm thick), seen in Fig. \ref{gems_collection} on the left, were purchased at the Tucson Gem and Mineral Show, a world-famous event in the gem community\cite{GemandMineralShow}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=4.5in]{Fig1_v2.png}
\caption{Left: Gemstone Sample Collection. The individual gemstones were cut out of the large cardboard backing to fit underneath the microscope objective. This can be seen in the bottom row of the figure. The names and locations of each mineral were labeled by the vendor. Right: The Blue Lace Agate sample underneath the microscope. This sample had particularly strong SHG, as can be seen from the green glow from the sample (excitation laser was at 1040 nm).}
\label{gems_collection}
\end{center}
\end{figure}
Each gem was cut out of the large cardboard piece so the samples could easily fit under the objective, shown on the right of Fig. \ref{gems_collection}. Each gem was first imaged using the 1040 nm laser, and regions of interest were examined with SHG and THG. Some of these signals were strong enough to be seen by the naked eye, as was the case for the Blue Lace Agate sample. The second harmonic of 1040 nm is 520 nm, a green wavelength, which can be clearly seen on the right of Fig. \ref{gems_collection}. These signals were isolated using bandpass filters, specifically the 340/22 filter for THG and a 517/20 filter for SHG (Semrock). The two signals were split to the two detection channels by a 414 nm dichroic (Semrock). The depth limit to which a sample could be imaged through focus was also examined for each sample. This was done with a set of through-focus images, commonly called a Zstack. Each Zstack was set to take an image every few microns until the signal became too weak to produce a useful image. The 1040 nm laser was then swapped for the 1560 nm laser, and the 414 nm dichroic swapped for a 560 nm dichroic. The THG was isolated by a 517/20 filter, and the SHG by a 780/12 filter (Semrock).
\section{Experiment Results}
\label{Results}
A selected set of images from varying depths are displayed in Fig. \ref{results}. The images were captured by our lab created software, and processed using the Fiji release of ImageJ\cite{Fiji}. In the four column figure, the image on the left is from a standard white light microscope (Zeiss Axioplan 2), followed by an image that shows regions of SHG, then regions of THG. The last column combines the two signals together with false color, with SHG colored red and THG colored green. Only small amounts of image adjustments were made, including adjusting the brightness of the two channels. All multiphoton images shown in Fig. \ref{results} were captured using the 1040 nm laser and the 20x .75 NA objective.
\begin{figure}[H]
\begin{center}
\includegraphics[width=4.25in]{Fig2.png}
\caption{A collection of image results from some of the more interesting looking samples. Left: An optical microscope image. Center left: SHG information. Center Right: THG information. Right: Composite image with SHG red and THG green. Samples were imaged with the 1040 nm laser, and MPM images were taken $\approx$100 $\mu m$ below the surface (See Visualization 1 for through focus video of the first sample, Blue Lace Agate). Each MPM image has the same scale of 250 $\mu m$ across, with a scale bar shown on the bottom right.}
\label{results}
\end{center}
\end{figure}
In these images, all of which were taken below the surface of the minerals, we can begin to see how THG and SHG highlight very different features in these stones. While the optical microscope images look relatively similar, the nonlinear signals from each stone vary widely in both SHG and THG. SHG is generated by crystal structures with broken inversion symmetry, i.e.\ structures without centro-symmetry. THG is generated from boundaries between regions of differing Third Order Nonlinear response, or $\chi_3$. THG is not generated in a bulk uniform medium due to the Gouy phase shift through the focal spot\cite{Boyd,SHGandTHGtheory}. In the Blue Lace Agate, SHG shows fine structural variations within the stone, while THG highlights the boundaries between crystalline regions (See Visualization 1 for a through-focus video). In the Mahogany Obsidian, very different structures are highlighted between SHG and THG. In the Tiger's eye, some bulk SHG is seen, while the THG highlights the fine line structure that gives Tiger's eye its characteristic appearance. In the Rainbow Jasper, both SHG and THG contain fine structural information, enough that the composite image is difficult to read. The information gathered by the microscope represents sub-micron-resolved three-dimensional information that could be a crucial analysis tool for a mineralogist looking to examine the components of a composite stone. The signals from many of these samples were very strong. Compared to imaging a piece of pure Quartz, which has very well-known nonlinear coefficients \cite{quartz1,quartz2}, many of the SHG signals from the gemstones were measured to be hundreds of times stronger. Since there is insufficient room to report images from all 36 samples in this paper, images from every sample can be found at \url{wp.optics.arizona.edu/kkieu/gemstone-images/}.
An Ocean Optics QE65000 spectrometer was used to confirm the signals coming off of these samples when imaged with the 1560 laser. The sample spectra can be seen in Fig. \ref{Spectra} below. THG appears around 517 nm, and SHG around 780 nm. Because of the wide spectrum of the mode locked laser, shown in the inset of Fig. \ref{Spectra}, there are multiple peaks to the SHG and THG spectra.
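The filter choices follow directly from the harmonic wavelengths, $\lambda_{\rm pump}/n$; a quick check that each harmonic falls inside the quoted center/width passbands:

```python
def inside(center_nm, width_nm, wavelength_nm):
    """True if a wavelength lies within a center/width bandpass filter."""
    return abs(wavelength_nm - center_nm) <= width_nm / 2

assert inside(517, 20, 1040 / 2)   # SHG of 1040 nm -> 520 nm
assert inside(340, 22, 1040 / 3)   # THG of 1040 nm -> ~346.7 nm
assert inside(780, 12, 1560 / 2)   # SHG of 1560 nm -> 780 nm
assert inside(517, 20, 1560 / 3)   # THG of 1560 nm -> 520 nm
```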
\begin{figure}[H]
\begin{center}
\includegraphics[width=4.5in]{fig3.png}
\caption{Signal spectra from selected samples are displayed. Most samples predominantly showed strong SHG and relatively strong THG. For comparison, the inset shows the spectra from the laser to create this data.}
\label{Spectra}
\end{center}
\end{figure}
Interestingly, we do not see any fluorescence from these stones, even from the fluorite sample in our collection. It is possible that the excitation wavelengths of 1560 nm and 1040 nm are too long for efficient excitation of any fluorophores which may be present within the stones; a shorter-wavelength femtosecond source such as a Ti:Sa laser might be able to excite them.
\subsection{3D Imaging Results}
The sample that could be imaged the deepest was the Amethyst sample at the 1560 nm wavelength. An objective with a longer working distance was required, so the 20x .5 NA objective was used. The sample was 2 mm thick, and the microscope was able to image all the way through it, as seen in Fig. \ref{Amethyst_zstack}. Several highlighted depths are shown on the right side of the figure.
\begin{figure}[H]
\begin{center}
\includegraphics[width=4.25in]{Fig4.png}
\caption{A Zstack of images from the Amethyst sample, taken with a 20x .5 NA objective and the 1560 nm laser. On the left, all of the images taken in the stack are displayed as a function of depth. On the right, three slices are displayed separately, highlighting different depths. The depths can be seen in the figure on the left as highlighted in Yellow, Cyan, and Purple respectively. As in the other figures, green shows information from THG, and red shows information from SHG. All numbered distances are in microns. See Visualization 2 for a through focus video.}
\label{Amethyst_zstack}
\end{center}
\end{figure}
This sample appeared to be the clearest to the eye, so it was not surprising that it allowed the deepest penetration. Because of this clarity, it is difficult to discern information at depth with visual techniques like light microscopy, since there is insufficient contrast. However, the MPM can see interesting structural information throughout the sample, predominantly in SHG, but smaller dot regions, possibly impurities, can be detected clearly with THG through the entire depth as well.
\subsection{Polarization Dependence of Signal}
\label{polar}
The second-order nonlinear response $\chi_2$ is generally polarization dependent\cite{Boyd}. The fiber used in the 1040 nm laser is not polarization maintaining (PM), so the polarization of the femtosecond pulses was elliptical. However, the 1560 nm laser uses PM fiber with a linearly polarized output, and was used to examine the different information gathered from adjusting the polarization angle hitting the sample. To test SHG signal intensity at different polarization angles, a half-wave plate (HWP) was rotated from 0 to 180 degrees while capturing an image every 10 degrees. The intensity vs HWP angle was extracted from the images and plotted, shown in Fig. \ref{shgangle}, alongside several images. The HWP angles are labeled in each portion of the figure. A short movie was then created showing the changes in intensity for the Dumortierite sample as the polarization angle changed (Visualization 3).
\begin{figure}[H]
\begin{center}
\includegraphics[width=4.5in]{Fig5_new.png}
\caption{Top left: SHG signal with a HWP angle of $0^\circ$. Top Center: SHG signal with a HWP angle of $40^\circ$. Top Right: The first two images are subtracted, and the difference is displayed, highlighting regions where the signal changed significantly. A scale bar is displayed for the intensity of each pixel. The dimensions on each picture are in microns. Bottom: the signal intensity at three regions of interest (ROI) are plotted as a function of angle. See Visualization 3 for a video showing the change in signal as a function of polarization angle.}
\label{shgangle}
\end{center}
\end{figure}
Two different images at different polarization angles, top left and top center in Fig. \ref{shgangle}, were subtracted, creating the image in the top right. This highlights regions where the SHG has changed between the two polarization states. The signal was plotted for every picture taken at three highlighted regions, seen in Red, Green, and Blue in the figures and plotted in those colors, respectively. Each plot was fit to a sinusoidal function. It can be seen that some portions of the sample have signal that moves in roughly the same way as a function of angle, while other regions move separately. These are regions of differing crystal axes. This angularly dependent SHG signal strength information can be used to determine the local crystalline axes\cite{crystalSHG}. We can see that the red region has its maximum signal around a HWP angle of ~10 degrees, with another peak 90 degrees later, which makes sense because of the angle doubling nature of a HWP. We can see similar information for the Blue and Green points, which share approximately the same crystalline axis direction.
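A toy model captures the observed behaviour: a HWP at angle $\theta$ rotates a linear polarization by $2\theta$, and in the simplest case SHG from a fixed crystal axis varies as $\cos^2$ of the angle between the polarization and the axis, giving a $90^\circ$ period in HWP angle. In the sketch below the $\cos^2$ form, the $10^\circ$ axis angle, and the background level are modelling assumptions, not fitted values:

```python
from math import cos, radians, isclose

def shg_intensity(hwp_deg, phi_deg=10.0, I0=1.0, bg=0.1):
    """Toy cos^2 model of SHG vs half-wave-plate angle (assumed form)."""
    return bg + I0 * cos(radians(2 * (hwp_deg - phi_deg)))**2

# 90-degree periodicity in HWP angle (angle doubling by the HWP):
for t in range(0, 180, 10):
    assert isclose(shg_intensity(t), shg_intensity(t + 90), abs_tol=1e-9)

# Maximum where the polarization lines up with the assumed axis:
peak = max(range(0, 180, 10), key=shg_intensity)
assert peak == 10
```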
\section{Discussion}
The current acquisition time for these images on our microscopes can be relatively long. For example, the Zstack shown in Fig. \ref{Amethyst_zstack} took a little over four hours to capture, since one picture was captured every micron increment in z, leading to two thousand images that were created for both SHG and THG. However, this is not a limit of this technological approach. Our multiphoton microscope is optimized for compactness and cost, not speed. Resonant galvanometric mirrors or acousto-optic deflectors can decrease the rastering time significantly, allowing a much shorter acquisition time\cite{otsu_optical_2008}. Alternatively, a high-sensitivity CCD can be used to remove the need to raster altogether, allowing much higher frame rates\cite{Bewersdorf:98}.
The depth to which samples could be imaged varied from a few hundred microns to the 2 mm depth reported in Fig. \ref{Amethyst_zstack}. Not surprisingly, the depth also depended on the wavelength used. Typically, the longer 1560 nm wavelength allowed for approximately twice as much imaging depth, which makes sense if Rayleigh scattering is the dominant process. The different minerals certainly have different absorption characteristics, which played a role in limiting the imaging depth as well. In general, the samples allowed for imaging between 100--300 microns of depth for the 1040 nm laser, and 200--600 microns for the 1560 nm laser. Future work could focus on measuring the absorption and scattering coefficients of these samples from the zstacks that were captured, as well as determining the nonlinear coefficients of the refractive index.
Many of the samples exhibited interesting variations in signal as a function of polarization angle for the 1550 laser. Our approach will allow for the determination of the crystalline axis at any depth. The images and approach demonstrated in Section \ref{polar} do this for a 2D projection of the crystalline axis, but this could be extended to a 3D measurement with multiple depths and multiple images at different polarizations.
As discussed, many of the samples had very strong nonlinearities. The strong signals observed came from naturally created structures grown with no thought to phase matching or any other efficiency concern. Some of these natural stones could be used for harmonic generation very inexpensively.
\section{Conclusion}
This work has demonstrated the utility of the three dimensional imaging capability of the multiphoton microscope. Throughout the 36 samples examined, the multiphoton was able to discern crystalline boundaries, sub-micron features, and other structural information. SHG enabled the determination of crystalline axes, a useful structural and analysis feature. We believe that the multiphoton microscope will yield valuable information to mineralogists and geologists, and with our instrument have shown that it can be done affordably and compactly.
\section*{Funding}
National Science Foundation Graduate Research Fellowship under DGE-1143953, NSF ECCS under Grant Number \#1610048.
\section*{Disclosures}
The authors declare that there are no conflicts of interest related to this article.
\end{document}
\section{The computational challenge}
\label{sec:ComputationalChallange}
The compatibility assessment is CPU-intensive. In the compatibility
analyses each requirement must be run against all the others, for six
different types of analysis (d2dUHF, d2dVHF, d2oUHF, d2oVHF, o2dUHF,
o2dVHF). In this paper we use the term {\em atomic calculations} to
refer to individual, indivisible calculations defined in compatibility
analysis datasets. The term {\em task} refers a unit of work which
corresponds to a set of atomic calculations. The term {\em job} is
used in the context of Grid job submission only.
For the first planning exercise the atomic calculations were clustered
in tasks of 100 for all types of analyses. With the limited resources
available at that time, that exercise took about one week (elapsed
time), for an integrated 90 CPU days.
\begin{figure}
\centering
\includegraphics[width=13cm]{itu-FirstExe-NoReqhoursVsAdmin}
\caption{Distribution of the number of processed requirements per hour for the d2dUHF analysis as a function of the Member State. Data for the first planning exercise.}
\label{FirstPE_stats_NoReqHours}
\end{figure}
The detailed study revealed an exponential distribution of the
requirement processing time which spans almost three orders of
magnitude (Fig. \ref{FirstPE_stats_NoReqHours}). The huge variation in
running time depends, among other parameters, on the number of acceptable channels specified in
the digital broadcasting requirement, the requirement type (assignment versus
allotment), the network topology and signal propagation zones specific to
the geographical area of the Member State.
Further investigation showed that a complete static optimization of
the load \footnote{The static optimization of the load is an ability
to a priori cluster the requirements, so that the execution time of
each cluster is equal.} was not possible due to the unpredictable
nature of the data as the Member States could change their
requirements before each RRC06 iteration. On the other hand, there was
clearly a need to create smaller clusters for the most CPU demanding
type of analysis d2dUHF and d2dVHF, minimizing the spread between the
shortest and longest tasks. Table \ref{itu-grouping-table} shows the
granularity chosen for the different types of analysis in the RRC06
iterations for the Grid and ITU systems. The granularity was adjusted
manually in between the iterations. The load balancing was handled
dynamically at runtime.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
iteration & d2dUHF & d2dVHF & d2oUHF & d2oVHF & o2dUHF & o2dVHF\\ \hline
1 & 3(3) & 5(5) & 100(100) & 100(100) & 100(100) & 100(100) \\
2 & 4(3) & 4(10) & 50(100) & 50(100) & 100(100) & 100(100) \\
3 & 2(3) & 2(5) & 50(100) & 50(100) & 50(100) & 50(100) \\
4 & 2(3) & 2(10) & 50(100) & 50(100) & 50(100) & 50(100) \\ \hline
\end{tabular}
\caption{Compatibility analysis granularity for the RRC06 iterations for Grid and ITU (in parenthesis) system.}
\label{itu-grouping-table}
\end{table}
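The effect of task granularity on dynamic load balancing can be illustrated with a toy simulation (all parameters invented): atomic calculation times drawn from an exponential distribution are clustered into tasks, which free workers pull from a queue. Finer tasks bring the makespan closer to the ideal $\sum_i t_i / N_{\rm CPU}$:

```python
import heapq
import random

def makespan(task_times, n_workers):
    """Pull-based scheduling: each free worker takes the next queued task."""
    workers = [0.0] * n_workers
    heapq.heapify(workers)
    for t in task_times:
        heapq.heappush(workers, heapq.heappop(workers) + t)
    return max(workers)

def cluster(times, size):
    """Group consecutive atomic calculations into tasks of `size` atoms."""
    return [sum(times[i:i + size]) for i in range(0, len(times), size)]

random.seed(1)
atoms = [random.expovariate(1.0) for _ in range(10_000)]  # exponential runtimes

ideal  = sum(atoms) / 50                     # perfect balance on 50 CPUs
coarse = makespan(cluster(atoms, 100), 50)   # 100 atoms per task
fine   = makespan(cluster(atoms, 2), 50)     # 2 atoms per task

assert ideal <= fine < coarse                # finer granularity wins
```

The spread between the fastest and slowest worker is set by the scatter of the task sums, which shrinks with the task size; this mirrors why the most CPU-demanding d2d analyses were given the smallest task sizes in Table \ref{itu-grouping-table}.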
The workload for each compatibility analysis run at the RRC06
corresponded to some several hundred CPU hours. Additionally the
workload was to be completed within a deadline of a few hours. The time
constraints were critical: a hypothetical problem with timely
delivery of analysis results could have resulted in a failure of
international negotiations.
The total CPU demand decreased with each RRC06 iteration.
Member States reduced the number of
requirements and the number of acceptable channels per
requirement, thereby reducing the total workload at each analysis
iteration. Finally, as the frequency plan was refined during successful
negotiations between the Member States, the number of conflicting
requirements also decreased. The CPU demands for the ITU and Grid
systems are presented in the next sections.
\section{Conclusions and Outlook}
The dual system presented in this paper contributed to the success of
the RRC06 Conference which resulted in a new international treaty.
Seamless access to resources from Grid and corporate infrastructures
demonstrated in this paper may be beneficial for other user
communities. A typical use-case could include dedicated in-situ
resources for fast response and Grid resources when facing peak
demand. In such a scenario the Grid could provide a competitive
alternative to traditional procurement of resources. At RRC06 the Grid
delivered dependable peak capacity to an organization which normally
does not require a large permanent computing infrastructure. The Grid
was successfully used in a new area to provide a dependable
just-in-time service. ITU personnel needed limited support and training to adopt the Grid
technology for RRC06. This demonstrates the maturity of Grid
technology for usage in new scientific communities and technical
activities.
The outcome of the RRC06 was the GE06 frequency plan, which is part of an
international agreement. Modifications to the GE06 Plan may require a
coordination examination to determine the Member States potentially
affected. To bring a new broadcasting station into use, a
conformity examination is required to verify that the proposed
implementation does not cause more interference than foreseen by the
GE06 Plan. Both examinations may require intensive calculations. In
addition, some Member States have already expressed a possible need
for re-planning parts of the GE06 planned bands, a process which
would imply a similar (smaller-scale) approach to the one adopted at
the RRC06.
In order to prepare for future events which may require even more
computing capability than the RRC06, paradigms such as Cloud
computing could be investigated, where dynamically scalable resources
are provided as a service over the Internet. A system integrating
local, Grid and Cloud resources would allow Member States to submit
time-consuming calculation requests via an existing ITU web portal
and, at the back end, to schedule and execute jobs transparently on
the integrated infrastructure. Such a pilot project could be a
continuation of the system built for the RRC06 and a potential area
of future collaboration between ITU and CERN.
\section{Acknowledgments}
The authors would first of all like to thank the sites pledging resources for this activity:
CNAF (Bologna, Italy), complemented by a few other sites belonging to the Italian infrastructure (Grid IT), CYFRONET (Krakow, Poland),
DESY (Hamburg and Zeuthen, Germany), MSU (Moscow, Russia) and PIC (Barcelona, Spain).
Their willingness to share their resources was an essential condition for the success of this activity.
We would like to thank the MonALISA team (CERN/Caltech) and in
particular Iosif Legrand for support. We would also like to thank the
ITU-IS for the ITU farm set-up and continuous support, and all the
ITU-R staff involved in the data preparation for the RRC06. We are
grateful to M. Cosic and D. Botha for providing the compilation of the
compatibility analysis executable for the Linux and Windows platforms, and
to J. Boursy for encouraging the development of an ITU distributed
computing system several years before it was needed at the RRC06.
Special thanks to P.N. Hai (ITU), who coordinated the overall
processing at the RRC06, for encouragement, vision and useful
feedback. Finally, special thanks go to the management of the ITU-R
and of the CERN IT department, in particular to T. Gavrilov and
W.~von~R\"uden (CERN IT Dept head), for encouragement and useful
discussions.
\section{Future Plans}
Amongst its statutory obligations, the daily processing of space and
terrestrial radio service notices and the planning of limited radio
spectrum and orbit resources for treaty-related conferences are the
most CPU-demanding ITU-R activities. The outcome of the RRC06 has been
the GE06 frequency plan and the related agreement. The provisions of
the GE06 Agreement have been applicable since 17 June 2006. Like
many other agreements, the GE06 Agreement contains procedures for the
modification of the plan and procedures for the notification of
assignments into the Master International Frequency Register (MIFR),
which contains all the stations actually in use as notified to the
ITU-R. When a Member State proposes to bring into use an assignment
to a broadcasting station contained in the GE06 Plan, it needs to
notify the ITU-R, in accordance with the provisions of the Radio
Regulations, of the characteristics of this assignment, as specified in
the agreement, for inclusion in the MIFR. The plan modification
procedure, in the case of the addition of new assignments/allotments or the
modification of existing ones, comprises the so-called Coordination
examination, where a coordination contour is calculated around the
centre of gravity of the proposed modification according to the
procedure outlined in the agreement. MIFR notification (as well as a
subset of plan modifications) requires the so-called Conformity
examination, which requires the determination of test points and the
comparison, for each of those points, of the field strength from the
notified assignment with the field strength from the plan entry, to
verify that the proposed implementation does not cause more interference
than the recorded plan entry. The details of those procedures and
calculations are outside the scope of this paper; the interested
reader may refer to \cite{RRC06FinalActs}.
Both the plan modification procedures and the MIFR notifications
require the ITU-R to run complex, time-consuming calculations, which
the ITU-R has implemented partly by adapting software modules developed
by the EBU and partly by writing new ones. The running time depends on the
topology of the assignments/allotments notified. A recent submission
involving 187 related assignments in a Single Frequency Network
configuration took about 40 minutes to process.
In addition, some Member States have already expressed a possible
need for re-planning parts or all of the GE06 planned bands, a process
which would imply a similar (smaller-scale) approach to the one
adopted at the RRC06.
After the positive experience with the usage of Grid resources at the
RRC06, the ITU-R, in its effort to stay at the forefront of
technology, to be prepared for future events which may
require even more computing capability than the RRC06 and to face
present challenges, is interested in continuing to experiment with Grid
resources and in investigating additional emerging computing paradigms,
such as Cloud computing \cite{cloud}, where dynamically scalable
resources are provided as a service over the Internet.
For this purpose the ITU-R has planned a pilot project, to be
performed in close collaboration with CERN and the ITU Information
Technology division, which would involve the seamless usage of a local
distributed infrastructure, Grid and Cloud resources. The system would
allow Member States to submit time-consuming calculation requests via
an existing ITU web portal and, at the back end, to schedule
and execute jobs transparently on the integrated infrastructure.
Concerning the local infrastructure at our disposal for this project,
the ITU-R no longer has the PC farm available at the
RRC06, but it has bought three two-quad-core Xeon 2.33~GHz servers,
each of which can efficiently run eight parallel processes. The distributed
system design has evolved from the push model (where the server
distributes jobs to the clients via the UDP/IP protocol) towards the more
efficient pull model (where the clients directly retrieve the jobs to
be run from the database). The implementation still makes use of
Windows services, but instead of Perl scripts we are now
developing C\# Windows services. Multiple instances of those services
are installed at the server according to the pattern described in
\cite{MultiInstanceWS}, and concurrency is handled using the properties
of the DataSet. This system, not yet released for external usage, is
already operational inside the ITU-R.
\section{Grid system}
\label{sec:ImplementationGrid}
Enabling Grids for E-sciencE (EGEE) is a globally distributed system
for large-scale batch job processing. At present it consists of around 300 sites
in 50 countries and offers more than 80 thousand CPU cores and 20 PB
of storage to 10 thousand users around the globe. EGEE is a
multidisciplinary Grid, supporting users in both academia and
business, in many areas of physics, biomedical applications,
theoretical fundamental research and earth sciences. The largest user
communities come from High-Energy Physics, in particular the
experiments active at the CERN Large Hadron Collider (LHC).
The EGEE Grid has been designed and operated for non-interactive
processing of very long jobs. A set of complex middleware services
integrate computing farms and the batch queues into a single, globally
distributed system. The access to the distributed resources is
typically controlled by the fair-share mechanisms, ensuring usage of
resources by groups of users according to predefined policies. In
typical configurations a
large number of users share individual computing resources across multiple Virtual Organizations
(VOs)\footnote{Virtual Organization is a group of users sharing the
same resources. Members of one Virtual Organization may belong to
different institutions.} This architecture is suitable for
high-throughput computing but is not efficient for the high-performance,
short-deadline, dependable computing required by the RRC06
compatibility analysis application.
In the EGEE Grid environment, and on a short time-scale, these
requirements can only be met if high-level tools are used to
control the job workload and the Grid infrastructure is appropriately
customized.
\subsection{The tools}
To run the RRC06 compatibility analysis application, the Ganga and DIANE tools were used.
Ganga provides a uniform and flexible interface to submit, track and
manipulate jobs \cite{Ganga}. DIANE is an agent-based job scheduler which provides
fault-tolerant execution of jobs, dynamic workload-balancing and
reduced overhead in accessing the computational resources \cite{DIANE}.
\begin{figure}
\centering
\includegraphics[width=13cm]{itu-diane-architecture}
\caption{Overview of the Grid system based on Ganga/DIANE.}
\label{itu-ganga-diane-grid-overview}
\end{figure}
The outline of the architecture is presented in
Fig.~\ref{itu-ganga-diane-grid-overview}. Worker agents are submitted to
the Grid and pull the tasks from the Master server
which controls the distribution of the workload. The system is
fault-tolerant and may run autonomously: a Worker agent which fails
to complete the assigned calculations is replaced by another Worker
agent. The overhead of scheduling the calculations is negligible in
comparison with the overhead of classic Grid job submission. The
system dynamically reacts to changing workload and provides dynamic
load-balancing. The results of the compatibility analysis of the
requirements are directly uploaded to the Master server. The
implementation of the RRC06 system on the EGEE Grid was based on DIANE
1.5.0 and Ganga 4.1.
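The pull model underlying the Worker agents can be illustrated with a minimal thread-based sketch; the names and structure below are ours and do not reflect DIANE's actual API:

```python
import queue
import threading

def run_pull_scheduler(tasks, n_workers, execute):
    """Minimal pull-model scheduler: each worker repeatedly pulls
    the next task from a shared queue until the queue is empty,
    so fast workers automatically process more tasks."""
    todo = queue.Queue()
    for task in tasks:
        todo.put(task)
    done = queue.Queue()

    def worker():
        while True:
            try:
                task = todo.get_nowait()
            except queue.Empty:
                return  # no work left: the worker agent exits
            done.put(execute(task))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return [done.get() for _ in range(done.qsize())]

# Ten toy tasks processed by four concurrent workers
out = run_pull_scheduler(range(10), 4, lambda t: t * t)
```

Because each worker pulls its next task only when idle, fast workers automatically process more tasks; this is the dynamic load balancing exploited at the RRC06.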
The input data, including the specification of the digital broadcasting requirements
and the tuned compatibility analysis application, were distributed to
the collaborating Grid sites shortly before the analysis was
launched. The 100MB installation package was deployed into the
directory mounted on a shared file system accessible by all worker
nodes of a collaborating Grid site (so called ``software areas''). The
installation was managed by separate grid jobs running with the credentials of the VO manager and using MD5 check-sums to ensure the consistency of the
installation tarballs. The installation was automated: the
installation jobs periodically checked for and downloaded the installation
packages available in a central repository at CERN. This allowed
new installation packages to be distributed automatically within 15 minutes
of the ITU-R making them available.
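The MD5-based consistency check can be sketched with standard-library tools (illustrative code, not the actual installation jobs); the streaming read keeps memory usage bounded even for large installation tarballs:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file in streaming fashion,
    so large installation tarballs never fully reside in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_hex):
    """Return True if the file matches its published check-sum."""
    return md5sum(path) == expected_hex
```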
The ITU personnel updated the software packages with two hours' notice. In this time window the Grid system had
to be up and ready to start the computation at full speed as soon as
the update was available.
\subsection{The infrastructure}
The access to the computing resources on the Grid for the RRC06 use
was implemented using the GEAR Virtual Organization (vo.gear.cern.ch).
The CPU demand for RRC06 was much smaller than typical Grid
applications which require huge throughput over very long periods of
time. However, conversely to many other Grid applications,
availability of resources within well-defined and strict time
constraints was critical. Therefore a number of high-availability centres
in the EGEE Grid\footnote{CERN, CNAF and a few other sites (Italy),
PIC (Spain), DESY (Germany), MSU (Russia), CYFRONET (Poland).} were involved. The resources at
these centres were not dedicated to the RRC06 activity; however, the
job priority parameters were adjusted during short periods of
intensive processing of the RRC06 compatibility analysis (the weekends
between the major conference iterations). On average 300 CPUs were
observed to be available at all times, with occasional peaks of
about 600 CPUs.
Redundant deployment of key services, such as the Master servers, Grid
User Interfaces and Resource Brokers \cite{LCG} allowed for fail-over in case
of problems. For storing the application output the AFS and local filesystem
were used simultaneously.
\subsection{Analysis of the system}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
iteration & $N_{calc}$ & $N_{task}$ & $t_{total}$ & $t_{worker}$ & $N_{worker}$ & $r_{fail}$\\ \hline
1 & 243K & 26K & 6h40m & 425h & 190 & \textless $3\e{-4}$ \\
2 & 237K & 23K & 6h30m & 332h & 125 & $4\e{-5}$ \\
3 & 224K & 40K & 1h35m & 192h & 210 & 0 \\
4 & 218K & 39K & 1h5m & 151h & 320 & 0 \\ \hline
\end{tabular}
\caption{Summary of RRC06 compatibility analysis iterations.}
\label{itu-summary-table}
\end{table}
The summary of RRC06 iterations is presented in Table
\ref{itu-summary-table}. For each analysis iteration the total workload
consisted of $N_{calc}$ atomic calculations. The calculations were
executed in bunches according to previously defined static clustering
(section \ref{sec:ComputationalChallange}). The $N_{task}$ tasks were
distributed dynamically to the $N_{worker}$ Worker agents. The Worker
agents were submitted as {\em jobs} and executed on the Grid worker
nodes. $t_{total}$ is the makespan or the total time to complete the
compatibility analysis. $t_{worker}$ is the integrated elapsed time on
the worker nodes. $r_{fail}$ measures the reliability of the system and
corresponds to the fraction of failed tasks which could not be
recovered automatically. With fewer than 10 lost tasks in run 1 and one
lost task in run 2, the reliability of the system exceeded that of the
Grid infrastructure by a few orders of magnitude.
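From $t_{total}$ and $t_{worker}$ an effective speedup and a mean worker-pool utilization can be derived as a rough indication of efficiency; the helper below is illustrative, with the run 3 figures taken from Table~\ref{itu-summary-table}:

```python
def speedup_and_utilization(t_total_h, t_worker_h, n_workers):
    """Effective speedup = integrated worker time / makespan;
    utilization = speedup / size of the worker pool."""
    speedup = t_worker_h / t_total_h
    return speedup, speedup / n_workers

# Run 3: makespan 1h35m, 192 h integrated worker time, 210 workers
s, u = speedup_and_utilization(1 + 35 / 60, 192.0, 210)
```

The resulting speedup of roughly 120 on a pool of 210 workers reflects the submission-latency and load-balancing overheads analysed in the text.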
\begin{figure}
\centering
\includegraphics[width=10cm]{itu-run3-60s}
\caption{Run 3 workload. Resolution=60s. }
\label{itu-run3-history-plot}
\end{figure}
Contrary to the ITU system, which used a fixed set of resources, in the
Grid the resources are dynamic: a different set of worker nodes is used at
each iteration. The worker node characteristics, such as CPU and
memory, also show large variations. Therefore a direct comparison of
the $t_{total}$ and $t_{worker}$ parameters between the ITU and Grid runs is
not possible.
\begin{figure}
\centering
\includegraphics[width=10cm]{itu-run41-60s}
\caption{Run 4 workload. Resolution=60s. The point $t_1$ was selected arbitrarily. In run 4 two parallel master servers were used and this figure corresponds to one of the masters and half of the total workload.}
\label{itu-run41-history-plot}
\end{figure}
The efficiency of the system depends on the Grid job submission
latency, efficiency of task scheduling and workload balancing.
Figs.~\ref{itu-run3-history-plot} and \ref{itu-run41-history-plot} show the
workload distribution for selected runs. $N_w$ worker agents are submitted at $t_0=0$. In the submission
phase, $t<t_1$, the throughput of the system is limited by the
submission latency. As the pool of worker nodes increases, the target of
$N_w$ workers is reached at time $t_1$. In the main processing phase,
$t_1<t<t_2$, the pool of worker nodes remains stable and the system
throughput mainly depends on the efficiency of scheduling. At time
$t_2$ the number of remaining tasks becomes smaller than the number of
processors in the pool. In this phase the execution time is dominated
by workload-balancing effects from the few slowest tasks.
The number of available worker nodes may vary significantly in the
Grid from one run to another. The contribution of the job submission
latency to the total execution time may be approximated by the area
between the target line and the worker pool size curve. In run 3 the
latency of job submission corresponded to 12\% of the total execution
time, whereas in run 4 it corresponded to 48\%: 33\% in the submission
phase and 15\% in the main processing phase.
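The area-based latency estimate can be computed numerically by sampling the worker-pool size and integrating its gap to the target line with the trapezoid rule; the data points below are illustrative, not the measured run profiles:

```python
def latency_fraction(times, pool_sizes, target, t_end):
    """Approximate the job-submission latency contribution as the
    area between the target worker count and the observed pool
    size, normalized by target * makespan (trapezoid rule)."""
    gap_area = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        g0 = max(target - pool_sizes[i - 1], 0)
        g1 = max(target - pool_sizes[i], 0)
        gap_area += 0.5 * (g0 + g1) * dt
    return gap_area / (target * t_end)

# Illustrative ramp-up: a target of 200 workers reached at t=30
# out of a 100-minute run
times = [0, 10, 20, 30, 100]
pool = [0, 100, 180, 200, 200]
frac = latency_fraction(times, pool, 200, 100)
```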
\begin{figure}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=6cm]{itu-run3-scatter}
\caption{Run 3 profile.}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=6cm]{itu-run41-scatter}
\caption{Run 4 profile.}
\end{minipage}
\label{itu-run3and41-scatter-plot}
\end{figure}
The integrated difference between the worker pool size and the number
of busy workers corresponds to the scheduling overhead. This overhead
includes the network latency and throughput as well as the task
handling efficiency of the master server. In run 3 the scheduling
overhead in the submission and processing phases corresponded to
2-3\%. In run 4 a 30\% scheduling overhead was observed in the
submission phase and 10\% in the processing phase.
The unbalanced execution of the slowest tasks in the last phase
contributed 26\% of the total execution time in run 3 and 5\% in
run 4. In this phase the utilization of the available resources was very
low, 5\% in run 3 and 20\% in run 4. The majority of the workers in
the pool remained idle while the few remaining tasks were being finished.
The striking difference in scheduling and workload-balancing
efficiency between runs 3 and 4 may be explained by the task
scheduling order, which reflects the internal structure of the input data.
The run profile plots are shown in Figs.~10 and~11.
Point (t,w) in the run profile
represents a task completed by worker w at time t. In run 4 the tasks
are drawn directly from the input data in the natural order and
clusters of very short tasks created a very high load on the server. The
long tasks were processed in the middle of the run and did not affect
the overall load-balancing. In run 3 the tasks were selected in a random
order by the scheduler. The momentary load on the server was
reduced. The tasks were scheduled more uniformly across the entire
run. There were a few long tasks at the end of the run that resulted in poor
load-balancing.
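The effect of the task scheduling order on load balancing can be reproduced with a small greedy list-scheduling simulation (illustrative task durations, not the RRC06 data): placing the long tasks at the end of the run inflates the makespan, as observed in run 3.

```python
import heapq

def makespan(durations, n_workers):
    """Greedy list scheduling: each task goes to the worker that
    frees up first; returns the completion time of the last task."""
    free_at = [0.0] * n_workers
    heapq.heapify(free_at)
    for d in durations:
        t = heapq.heappop(free_at)
        heapq.heappush(free_at, t + d)
    return max(free_at)

short_tasks, long_tasks = [1.0] * 20, [10.0] * 2
# Long tasks scheduled last (as in run 3) vs first, on 4 workers
late = makespan(short_tasks + long_tasks, 4)
early = makespan(long_tasks + short_tasks, 4)
```

With the long tasks last, most workers sit idle while the stragglers finish, so the makespan grows even though the total work is identical.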
The intrinsic job submission latency in the Grid prevents the running
of a large number of short jobs in a short time, unless user-level tools
such as DIANE are used. For the RRC06, using DIANE reduced the
Grid overheads and provided efficient management of a large number of
tasks. Additionally, runtime workload balancing allowed the workload
to be distributed evenly without precise a priori knowledge of the
task execution times in the dataset. The overhead reduction and
workload balancing were the crucial factors in the successful usage of the Grid
for the RRC06.
\section{ITU system}
\label{sec:ImplementationITU}
\begin{figure}
\centering
\includegraphics[width=10cm]{itu-br-dedicated-system-architecture}
\caption{Architecture of the ITU dedicated system.}
\label{itu-br-dedicated-system-architecture}
\end{figure}
The ITU system consisted of a client-server distributed system running
on a dedicated PC farm. The farm resources evolved over time. Initially the farm consisted of six high-end
dedicated PCs, complemented by some tens of ITU staff desktop PCs available only overnight and during weekends.
Using this configuration, the calculations for the first planning exercise
required about one week, showing that the running time was an outstanding
issue in the preparation for the RRC06. The ITU-R therefore decided to buy a PC farm, which was deployed within
the ITU headquarters by the ITU Infrastructure Services department (ITU-IS).
In its final configuration at the RRC06, the farm was composed of 84 high-end dedicated 3.6~GHz hyper-threading PCs.
Accurate measurements showed that hyper-threading permits a gain of about 30\% in
computing time by running two tasks in parallel on one PC, with respect to running the same tasks sequentially.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Iteration & $N_{calc}$ & $N_{task}$ & $t_{total}$ & $t_{clients}$ \\
\hline
1 & 173K & 26K & 5.9h & 621h \\
2 & 168K & 23K & 4.1h & 463h \\
3 & 154K & 23K & 3.4h & 300h \\
4 & 155K & 21K & 2.6h & 205h \\ \hline
\end{tabular}
\caption{Performance of the ITU system ($84\times2$ simultaneous processes) during compatibility analysis calculations.}
\label{itu-system-performance-table}
\end{table}
To cope with redundancy and logistics issues (available space, power and cooling considerations), the ITU-IS decided
to deploy the farm as two separate clusters.
The first cluster consisted of 47 PCs and was equipped with optical fibres and a 1~Gb/s network switch,
while the second cluster consisted of 37 PCs with a slower 200~Mb/s network switch.
This configuration did not significantly impact the performance of the system.
The architecture layout is presented in
Fig.~\ref{itu-br-dedicated-system-architecture}. The system was
implemented with Perl scripts installed as Windows services and a
custom communication protocol based on UDP/IP. The UDP packets
carried information on the executable to be run and on the relevant
input parameters. In the reliable internal network of the ITU farm,
packet loss was not a problem. The server implemented two Windows
services, a Listener and a Dispatcher, responsible for task
submission, task management and workload balancing. To cope with
high load, a TaskQueue file ensured asynchronous operation of the
system and prevented packet loss. The system automatically managed the
task status and resubmitted the tasks which were not completed.
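The custom UDP/IP protocol can be sketched with a minimal datagram exchange; the JSON message format below is our illustration, not the actual ITU protocol:

```python
import json
import socket

def send_task(sock, addr, executable, params):
    """Dispatcher side: pack one task description (executable name
    and input parameters) into a single UDP datagram. UDP gives no
    delivery guarantee, so a task-status table with resubmission,
    as in the ITU system, must cover lost packets."""
    payload = json.dumps({"exe": executable, "params": params}).encode()
    sock.sendto(payload, addr)

def recv_task(sock):
    """TaskManager side: receive and decode one task datagram."""
    data, _ = sock.recvfrom(65535)
    msg = json.loads(data.decode())
    return msg["exe"], msg["params"]

# Loopback demonstration of one dispatch
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_task(client, server.getsockname(), "d2dUHF", {"cluster": 7})
exe, params = recv_task(server)
client.close()
server.close()
```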
\begin{figure}
\centering
\includegraphics[width=10cm]{itu-cluster-job-execution-histogram-iteration-2}
\caption{Distribution of the elapsed time for the ITU system during RRC06 iteration 2.}
\label{itu-cluster-job-execution-histogram-iteration-2}
\end{figure}
The clients implemented two Windows services, the TaskManager
responsible for running tasks according to Dispatcher requests and the
TaskController responsible for monitoring and control operations. A web
application (implemented with ASP.NET and C\#) running on a
dedicated machine (WebInterface), provided monitoring and control
interfaces to operate the system.
In the first phase, the client installation on non-dedicated resources
(desktop PCs) was implemented using an MSI-compatible installation
procedure managed by the Windows Systems Management Server (SMS). In the
dedicated farm, the software and data were deployed on a shared folder
and copied directly to the client PCs. MD5 checksums were used
to ensure data consistency. At system start-up the server automatically triggered the software and data installation at the clients.
The system supported $2\times84$ simultaneous tasks most of the time with negligible job loss. Software
and data installation involved about 350~MB to be deployed in $2\times84$ folders
and took on average 15 minutes for the entire farm.
\begin{figure}
\centering
\includegraphics[width=10cm]{itu_iter2_evolution}
\caption{Number of running processes as a function of time during RRC06 iteration 2.}
\label{itu_iter2_evolution}
\end{figure}
The performance of the ITU system is reported in Table
\ref{itu-system-performance-table}, where the total workload of atomic calculations $N_{calc}$, the number of tasks $N_{task}$,
the total time to complete the iteration $t_{total}$ and the integrated elapsed time on the
clients $t_{clients}$ are shown for each iteration.
The distribution of the task processing times for the ITU system during iteration 2 of the RRC06
is shown in Fig.~5.
The evolution of the number of running processes as a function of time during an RRC06 iteration is shown in Fig.~6.
This last figure illustrates interesting features of the ITU system: the dynamic load balancing (about 96\% of the
clients complete their tasks practically at the same time) and the limited submission latency (about 15 minutes, the time necessary for the clients
to download the latest version of the software and data at server start-up).
Taking into consideration also the four runs of complementary analysis and the partial runs during the multilateral negotiations,
the ITU system at the RRC06 ran more than 180 thousand tasks for an overall integrated elapsed time of 4500 CPU hours,
i.e. more than half a CPU year.
\section{Introduction}
The RRC06 was the second session of the Regional Radiocommunication Conference (RRC)
for the planning of the digital terrestrial broadcasting service (in Bands III and IV/V) in European,
African, Arab and CIS countries and Iran (Fig.~\ref{itu-geographical-extent}).
Delegations from 104 Member States of the International Telecommunication Union (ITU~\cite{ITU}) gathered in Geneva to negotiate the frequency plan from 15 May to 15 June 2006.
The preparation and the organization of this planning conference was
managed by the ITU-R, the Radiocommunication Sector of the ITU. The
RRC06 Final Acts \cite{RRC06FinalActs} signed by the RRC06
participants constitute a new international agreement, which comprises
the new frequency plan and the procedures for its modification.
Analogue broadcasting has been regulated since 1961
by the Stockholm Agreement in Europe (ST61) and since 1989 by the Geneva Agreement for Africa (GE89).
The introduction of digital technologies called for a re-planning process in order to optimize
the usage of those frequency bands. The new GE06 plan was designed for
DVB-T (television) and T-DAB (radio) standards, but is flexible enough
to accommodate future developments in digital broadcasting
technologies.
The technical basis for this planning conference, such as the planning criteria and parameters,
was established in the first session of the RRC (RRC04~\cite{RRC04}), which was held in Geneva
in May 2004. During the RRC06 preparatory activities \cite{RRC06PrepAct} it became evident that
one component of the planning process, the compatibility analysis, was
very CPU intensive. The goal of the compatibility analysis is to evaluate the
interference between broadcasting requirements in order to identify those that can share the same channel.
The analysis includes several parameters of the broadcasting requirements, such as the geographic location,
the signal strength and other technical characteristics.
The total capacity required for the compatibility analysis
corresponds to several hundred CPU-days on a high-end 2006 PC. The
compatibility analysis was performed in several iterations. For each iteration
the RRC06 required the output of the compatibility analysis to be
delivered within 12 hours. To support this requirement the
compatibility analysis was split in a large number of parallel
calculations. The ITU-R implemented a distributed client-server
infrastructure and deployed at its headquarters a dedicated farm
consisting of 84 high-end PCs. A distributed system based on the
EGEE Grid (Enabling Grids for E-sciencE, \cite{EGEE}) and supported by the IT
department of the European Organization for Nuclear Research (CERN)
was deployed, which extended the computing capacity and improved dependability.
The nature of the problem required dynamic workload balancing and
low-latency access to the computing resources. This fundamental requirement was satisfied both by the ITU system,
with its dedicated resources, and by the Grid system, through high-level tools and appropriate customization
of its infrastructure.
In this paper, we describe in section \ref{sec:RRC06} the RRC06
planning process and in section \ref{sec:ComputationalChallange} the
computational aspects of the compatibility analysis. The implementation of the
ITU system is presented in section \ref{sec:ImplementationITU}. The Grid-based system
is analyzed in section \ref{sec:ImplementationGrid} and the integration of the two systems
is discussed in section \ref{sec:Integration}.
\begin{figure}
\begin{center}
\includegraphics[width=100mm]{itu-geographical-extent.png}
\caption{The extent of the geographical area regulated by the GE06 Agreement.}
\label{itu-geographical-extent}
\end{center}
\end{figure}
\section{The RRC06 planning process}
\label{sec:RRC06}
The ITU Constitution\footnote{The ITU Constitution,
the ITU Convention and the Radio Regulations are the international
treaties which define the rights and obligations of ITU Member
States in the domain of the international management of the
frequency spectrum. } states that ``the radio-frequency spectrum is a
limited natural resource that must be used rationally, efficiently and
economically, in conformity with the provisions of the Radio
Regulations, so that countries or groups of countries may have
equitable access to it''\cite{ITUConstitution}.
The Radio Regulations stipulate that ``Member States
undertake that in assigning frequencies to stations which are capable
of causing harmful interference to the services rendered by the
stations of another country, such assignments are to be made in
accordance with the Table of Frequency Allocations (where the
frequency blocks are allocated to different radiocommunication services
and to different countries) and other provisions of these
Regulations''\cite{RR4p2}.
\subsection{Frequency Planning}
A frequency plan represents a key mechanism for preserving the rights
of all Member States in the context of equitable access to this
limited resource. Regional Radiocommunication Conferences (RRC) establish
agreements concerning a particular radiocommunication service in specified
frequency bands amongst participating countries.
The last RRC, the RRC06, established the frequency plans (digital and analogue) for the terrestrial
broadcasting service (in Bands III and IV/V) in European, African, Arab
and CIS countries and Iran. The analogue broadcasting Plan will apply only during the
transition period from analogue to digital broadcasting (until 17 June 2015 for most Member States).
After this period, broadcasting in these bands will be regulated only by the digital broadcasting Plan.
Some parts of the frequency bands to be planned at the RRC06 are shared between
broadcasting and other primary services (like fixed and mobile services). The planning process therefore had to
take into account all services which share those bands with equal
rights to operate in an interference-free environment.
\subsection{The Input Data}
Member States submitted the input data to the ITU-R in the form of the so-called digital broadcasting requirements.
The digital broadcasting requirements were
notified as electronic files containing a set of administrative and technical parameters representing the broadcasting
requirements. In addition to the digital broadcasting requirements (about 70K), the
planning process had to take into account assignments to analogue television stations (about 95K) and assignments to other stations (about 10K).
A fourth type of data, the so-called
administrative declarations (a few million), declared that
incompatibilities between digital broadcasting requirements, analogue television and other services stations may be ignored in the
frequency synthesis procedure that followed the compatibility analysis.
Radiocommunication services are described by administrative and
technical parameters. For example, administrative parameters include
the notifying administration, site name, geographic location and site
altitude. Technical parameters include the power levels, assigned
frequency, network topology, etc.
The digital broadcasting requirements could be submitted at the RRC06 in the T-DAB
(radio) or DVB-T (television) standard. Suitable
data elements were provided to accommodate expected developments in digital broadcasting technologies.
Reference Planning Configurations served as simplified models to represent the many system variants (which differ for example in data capacity
and reception modes) of the requirements. Requirements were submitted as assignments (known
location and transmitter features) or as allotments (only service area
known). Allotments were modeled using
Reference Networks (with different numbers, locations and powers of
transmitters) to approximate real networks.
The RRC06 planning approach was based on the protection of service areas for
assignments and allotments and used the statistical model outlined in
the ITU-R Recommendation P1546-1\cite{P1546} to model the
signal propagation.
\subsection{The Planning Process}
The ITU-R performed two planning exercises after the RRC04 and
prior to the RRC06. The first planning exercise was run in June
2005 and the second in February 2006. The second planning exercise
established a draft plan which served as input to the RRC06.
The ITU-R and the European Broadcasting Union (EBU)\cite{EBU} developed the RRC06-related software.
The ITU-R developed the software
for data-capture, data-validation and for the display of the input
data and calculation results, while the EBU developed the planning
software (compatibility analysis, plan synthesis and complementary
analysis). The ITU-R was also responsible for running the planning
software (partly on a distributed infrastructure), producing and
delivering results in due time.
At the RRC06 the frequency plan was established in an iterative way,
as outlined in Fig.~\ref{itu-negotiation-workflow}.
The delegations
engaged in bilateral and multilateral coordination and negotiation
efforts which resulted in a new set of refined digital broadcasting requirements at
the end of every week. Over the weekends the ITU-R
performed the validation of the data and the compatibility analysis and
synthesis calculations. The output of these calculations and the refined frequency
plan were the input for the negotiations in the subsequent week, with the last (fourth)
iteration constituting the basis for the final frequency
plan. In order to assist groups of negotiating Member
States, partial calculations were performed for parts of the planning
area in between two global iterations.
The compatibility analysis consisted of the calculation of the
interference between digital broadcasting requirements and other primary services
stations. For each requirement the compatibility assessment produced
a list of incompatible requirements and a list of available
channels. Three types of compatibility analyses were needed, for both
UHF and VHF frequency bands: digital versus digital (d2dUHF and
d2dVHF), digital versus other services (d2oUHF and d2oVHF) and other
services versus digital (o2dUHF and o2dVHF).
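As an illustration, the per-requirement output of such an analysis can be sketched in a few lines of Python. This is a toy model: the requirement structure and the \texttt{interferes} predicate are assumptions, standing in for the field-strength calculations of the actual RRC06 software.

```python
def compatibility_analysis(requirements, interferes):
    """Toy sketch of the per-requirement compatibility assessment.

    requirements: dict mapping a requirement name to its set of
    candidate channels.
    interferes(a, b, ch): True if requirement a on channel ch would
    suffer harmful interference from requirement b (a stand-in for
    the real propagation/field-strength computation).
    Returns, for each requirement, the set of incompatible
    requirements and the set of channels still available.
    """
    report = {}
    for a, channels in requirements.items():
        incompatible = set()
        available = set(channels)
        for b in requirements:
            if a == b:
                continue
            # only shared channels can clash
            for ch in channels & requirements[b]:
                if interferes(a, b, ch):
                    incompatible.add(b)
                    available.discard(ch)
        report[a] = (incompatible, available)
    return report
```

The real analysis additionally distinguished the d2d, d2o and o2d cases per band; the sketch only shows the shape of the output consumed by the synthesis step.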
These lists were the input to the plan synthesis process, which
determined a suitable frequency for each requirement in order to avoid
harmful interference and to maximize the number of requirements
satisfied. The RRC06 decided to protect analogue broadcasting services
during the implementation of the digital broadcasting requirements, rather than
during the establishment of the plan, in order to maximize the number of requirements satisfied. For this reason each iteration included
a complementary analysis, which determined which analogue television assignments might suffer
interference from the implementation of a given digital broadcasting assignment or
allotment.
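A toy sketch of such a synthesis step is a greedy graph-coloring pass over the lists produced by the compatibility analysis. The data structures and the greedy order here are assumptions; the actual RRC06 synthesis used far more sophisticated optimization.

```python
def synthesize_plan(available, incompatible):
    """Greedy frequency synthesis sketch.

    available: dict requirement -> list of usable channels (from the
    compatibility analysis).
    incompatible: dict requirement -> set of requirements it clashes
    with on a shared channel.
    Assigns each requirement the first channel not already taken by an
    assigned incompatible requirement; requirements with the fewest
    options are treated first. Returns (plan, unsatisfied).
    """
    plan, unsatisfied = {}, []
    for req in sorted(available, key=lambda r: len(available[r])):
        taken = {plan[o] for o in incompatible.get(req, set()) if o in plan}
        free = [ch for ch in available[req] if ch not in taken]
        if free:
            plan[req] = free[0]
        else:
            unsatisfied.append(req)
    return plan, unsatisfied
```

Even on tiny inputs the greedy pass may leave requirements unsatisfied, which is why the satisfaction percentages below were the key progress metric of the conference.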
During pre-conference preparatory planning activities only 34\% of
requirements were satisfied. For the first iteration of the RRC06 the
percentage increased to 64\% (UHF) and 74\% (VHF), to reach a
satisfactory 93\% (UHF) and 98\% (VHF) for the final plan.
\begin{figure}
\centering
\includegraphics[width=10cm]{itu-negotiation-workflow}
\caption{ITU negotiation workflow.}
\label{itu-negotiation-workflow}
\end{figure}
\section{System Integration}
\label{sec:Integration}
The Grid and ITU systems were integrated at the monitoring level using
the MonALISA framework (Monitoring Agents in A Large Integrated
Services Architecture, developed at Caltech \cite{Monalisa}).
MonALISA provides a set of pluggable distributed services for
monitoring, control, management and global optimization for large
scale distributed systems.
\begin{figure}[h]
\centering
\includegraphics[width=6cm]{itu-monalisa-integration}
\caption{System integration via MonALISA monitoring.}
\label{blah-itu-monalisa-integration-diagram}
\end{figure}
To collect and combine monitoring information from both ITU
and Grid systems, the following software components were
deployed: instances of the MonALISA collector service, a web-enabled data
visualization repository and custom ApMon monitoring sensors on worker
nodes (Fig.~\ref{blah-itu-monalisa-integration-diagram}).
ApMon, the monitoring API,
allows applications to send fine-grained custom monitoring parameters to the
MonALISA collector service. ApMon uses UDP datagrams to transport
the XDR-encoded information \cite{XDR} and includes a sequence number to
verify the integrity of all monitoring reports. In addition, ApMon
provides out-of-the-box system monitoring of the host, including usage
of system resources such as memory or CPU. Monitoring parameters of
ApMon, such as monitoring frequency and collector destination, may be
dynamically configured by remote services. ApMon implementations are
provided for different programming languages, including C, C++, Java,
Perl and Python. The cross-language support has proven to be useful in
the case of RRC06 as the ITU system was built in Perl while the Grid
used Python.
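To make the encoding concrete, here is a minimal Python sketch of building such a datagram payload. The XDR primitives (big-endian 4-byte integers, length-prefixed strings padded to 4-byte boundaries, 8-byte doubles) are standard \cite{XDR}, but the exact field layout of a real ApMon packet is an assumption made for illustration.

```python
import struct

def xdr_int(i):
    """XDR signed integer: 4 bytes, big-endian."""
    return struct.pack(">i", i)

def xdr_string(s):
    """XDR string: 4-byte length, then bytes padded to a 4-byte boundary."""
    data = s.encode()
    pad = (4 - len(data) % 4) % 4
    return struct.pack(">I", len(data)) + data + b"\x00" * pad

def xdr_double(x):
    """XDR double: 8 bytes, big-endian IEEE 754."""
    return struct.pack(">d", x)

def monitoring_datagram(seq, cluster, node, params):
    """Hypothetical payload: a sequence number (used by the collector to
    verify the integrity of the report stream), cluster and node
    identifiers, then (name, value) parameter pairs."""
    out = xdr_int(seq) + xdr_string(cluster) + xdr_string(node)
    out += xdr_int(len(params))
    for name, value in params.items():
        out += xdr_string(name) + xdr_double(value)
    return out
```

The resulting bytes would then be sent with \texttt{socket.sendto} to the collector's UDP port; since every XDR field is a multiple of 4 bytes, the whole payload stays 4-byte aligned.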
Using pluggable modules, the MonALISA collector has been customized to
aggregate fine-grained data from Grid worker nodes and ITU farm nodes
to produce, in real time, higher-level reports and
charts. Fig.~\ref{blah-itu-combined-run-barplot} shows the total workload executed
by ITU clusters and the EGEE sites. The ITU clusters
are reported as {\tt RRC06-1.itu.org} and {\tt RRC06-2.itu.org}.
The complementary usage of Grid Unix-based and Windows-based resources
for numerical computations required compilation of the application
software on both platforms and verification of the outputs in terms of
numerical accuracy.
\begin{figure}
\centering
\includegraphics[width=12cm]{itu-combined-run}
\caption{Total workload executed in Grid and ITU clusters.}
\label{blah-itu-combined-run-barplot}
\end{figure}
\section{Additional Experiment Results}
In this section, we present a table of dataset information and plots of test error curves for each algorithm under each policy and dataset.
We remark that the high error bars in the test error curves are largely due to the inherent randomness of the training sets, since in practice active learning is sensitive to the order of training examples. A similar phenomenon can be observed in previous work \cite{HAHLS15}.
\begin{table}
\centering
\caption{Dataset information.}\label{tab:dataset-info}
\begin{tabular}{lll}
\toprule
Dataset & \# of examples & \# of features \\
\midrule
synthetic & 6000 & 30 \\
letter (U vs P) & 1616 & 16 \\
skin & 245057 & 3 \\
magic & 19020 & 10 \\
covtype & 581012 & 54 \\
mushrooms & 8124 & 112 \\
phishing & 11055 & 68 \\
splice & 3175 & 60 \\
svmguide1 & 4000 & 4 \\
a5a & 6414 & 123 \\
cod-rna & 59535 & 8 \\
german & 1000 & 24 \\
\bottomrule
\end{tabular}
\end{table}
\include{add_fig}
\section{Algorithm}
\subsection{Main Ideas}\label{subsec:main-ideas}
Our algorithm employs the disagreement-based active learning (DBAL) framework, but modifies the main DBAL algorithm in three key ways.
\subsubsection*{Key Idea 1: Warm-Start}
Our algorithm applies a straightforward way of making use of the logged data $T_0$ inside the DBAL framework: to set the initial candidate set $V_0$ to be the set of classifiers that have a low empirical error on $T_0$.
\subsubsection*{Key Idea 2: Multiple Importance Sampling}
Our algorithm uses multiple importance sampling estimators instead of standard importance sampling estimators. As noted in the previous section, in our setting, multiple importance sampling estimators are unbiased and have lower variance, which results in a better performance guarantee.
We remark that the main purpose of using multiple importance sampling estimators here is to control the variance due to the predetermined logging policy. In the classical active learning setting without logged data, standard importance sampling can give satisfactory performance guarantees \cite{BDL09,BHLZ10,HAHLS15}.
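The following Python sketch illustrates the estimator numerically. The toy data distribution and the two policies are assumptions made for illustration; the point is the balance-heuristic weighting $1/(mQ_0(x)+nQ_1(x))$, which matches the estimator used in the text and keeps it unbiased regardless of which phase an example came from.

```python
import random

def mis_error_estimate(logged, online, Q0, Q1, h):
    """Multiple importance sampling estimate of l(h) = Pr[h(X) != Y].

    logged / online: lists of triples (x, y, z), where z = 1 iff the
    label was queried. Every queried misclassified example is weighted
    by 1 / (m*Q0(x) + n*Q1(x)); this is the deterministic-mixture
    (balance heuristic) denominator.
    """
    m, n = len(logged), len(online)
    return sum(1.0 / (m * Q0(x) + n * Q1(x))
               for x, y, z in logged + online
               if z == 1 and h(x) != y)

def draw(num, Q):
    """Toy sampler (an assumption for illustration): X uniform on [0,1),
    Y = 1{X > 0.3}, and the label is revealed with probability Q(X)."""
    out = []
    for _ in range(num):
        x = random.random()
        y = int(x > 0.3)
        z = int(random.random() < Q(x))
        out.append((x, y, z))
    return out
```

On this toy problem, with $h(x)=\mathds{1}\{x>0.5\}$ the true error is $0.2$, and the estimate concentrates around it even though the logging policy under-samples part of the space.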
\subsubsection*{Key Idea 3: A Debiasing Query Strategy}
The logging policy $Q_0$ introduces bias into the logged data: some examples may be underrepresented since $Q_0$ chooses to reveal their labels with lower probability. Our algorithm employs a debiasing query strategy to neutralize this effect. For any instance $x$ in the online data, the algorithm would query for its label with a lower probability if $Q_{0}(x)$ is relatively large.
It is clear that a lower query probability leads to fewer label queries. Moreover, we claim that our debiasing strategy, though it queries for fewer labels, does not deteriorate our theoretical guarantee on the error rate of the final output classifier. To see this, we note that we can establish a concentration bound for multiple importance sampling estimators: with probability at least $1-\delta$, for all $h\in \mathcal{H}$,
\begin{align}
l(h)-l(h^{\star}) \leq & 2(l(h,S)-l(h^{\star},S))\nonumber\\
+ & \gamma_{1}\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\log\frac{|\mathcal{H}|}{\delta}}{mQ_{0}(x)+nQ_1(x)}\nonumber\\
+ & \gamma_{1}\sqrt{\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\log\frac{|\mathcal{H}|}{\delta}}{mQ_{0}(x)+nQ_1(x)}l(h^{\star})}\label{eq:main-concentration}
\end{align}
where $m,n$ are the sizes of the logged data and online data respectively, $Q_0$ and $Q_1$ are the query policies during the logging phase and the online phase respectively, and $\gamma_1$ is an absolute constant (see Corollary~\ref{cor:gen} in Appendix for proof).
This concentration bound implies that for any $x\in\mathcal{X}$, if $Q_0(x)$ is large, we can set $Q_1(x)$ to be relatively small (as long as $mQ_0(x)+nQ_1(x) \geq \inf_{x'}mQ_0(x')+nQ_1(x')$) while achieving the same concentration bound. Consequently, the upper bound on the final error rate that we can establish from this concentration bound is not impacted by the debiasing query strategy.
One technical difficulty of applying both multiple importance sampling and the debiasing strategy to the DBAL framework is adaptivity. Applying both methods requires that the query policy and consequently the importance weights in the error estimator are updated with observed examples in each iteration. In this case, the summands of the error estimator are not independent, and the estimator becomes an adaptive multiple importance sampling estimator whose convergence property is still an open problem \cite{CMMR12}.
To circumvent this convergence issue and establish rigorous theoretical guarantees, in each iteration, we compute the error estimator from a fresh sample set. In particular, we partition the logged data and the online data stream into disjoint subsets, and we use one logged subset and one online subset for each iteration.
\subsection{Details of the Algorithm}
\begin{algorithm}
\begin{algorithmic}[1]
\STATE{Input: confidence $\delta$, size of online data $n$, logging policy $Q_0$, logged data $T_0$.}
\STATE{$K \gets \lceil\log{n}\rceil$.}
\STATE{$\tilde{S}_0 \gets T_0^{(0)}$; $V_0 \gets \mathcal{H}$; $D_0 \gets \mathcal{X}$; $\xi_0 \gets \inf_{x\in \mathcal{X}}Q_0(x)$.}
\FOR{$k=0, \dots, K-1$}
\STATE{Define $\delta_k \gets \frac{\delta}{(k+1)(k+2)}$; $\sigma(k, \delta) \gets \frac{\log|\mathcal{H}|/\delta}{m_k\xi_k+n_k}$;
$\Delta_k(h,h')\gets\gamma_0(\sigma(k, \frac{\delta_k}{2}) + \sqrt{\sigma(k, \frac{\delta_k}{2}) \rho_{\tilde{S}_{k}}(h,h')})$.}
\STATE \(\triangleright\) {$\gamma_0$ is an absolute constant defined in Lemma~\ref{lem:h_star_in}.}
\STATE{$\hat{h}_k \gets \arg\min_{h\in V_k} l(h, \tilde{S}_k)$.}
\STATE{Define the candidate set $$V_{k+1} \gets \{ h\in V_k \mid l(h,\tilde{S}_k) \leq l(\hat{h}_k,\tilde{S}_k)+\Delta_k(h,\hat{h}_k)\}$$ and its disagreement region $D_{k+1} \gets \text{DIS}(V_{k+1})$.}
\STATE{Define $\xi_{k+1}\gets\inf_{x\in D_{k+1}}Q_0(x)$, and $Q_{k+1}(x)\gets\mathds{1}\{Q_0(x)\leq \xi_{k+1} + 1/\alpha\}$.}
\STATE{Draw $n_{k+1}$ samples $\{(X_t,Y_t)\}_{t=m+n_1\cdots+n_k+1}^{m+n_1+\cdots+n_{k+1}}$, and present $\{X_t\}_{t=m+n_1+\cdots+n_k+1}^{m+n_1+\cdots+n_{k+1}}$ to the algorithm.}
\FOR{$t=m+n_1+\cdots+n_k+1 \text{ to } m+n_1+\cdots+n_{k+1}$}
\STATE{$Z_t \gets Q_{k+1}(X_t)$.}
\IF{$Z_t=1$}
\STATE{If $X_t \in D_{k+1}$, query for label: $\tilde{Y}_t\gets Y_t$; otherwise infer $\tilde{Y}_t \gets \hat{h}_k(X_t)$.}
\ENDIF
\ENDFOR
\STATE{$\tilde{T}_{k+1} \gets \{X_t, \tilde{Y}_t, Z_t\}_{t=m+n_1+\cdots+n_k+1}^{m+n_1+\cdots+n_{k+1}}$.}
\STATE{$\tilde{S}_{k+1} \gets T_0^{(k+1)}\cup \tilde{T}_{k+1}$.}
\ENDFOR
\STATE{Output $\hat{h}=\arg\min_{h\in V_{K}} l(h, \tilde{S}_{K})$.}
\end{algorithmic}
\caption{\label{alg:main}Active learning with logged data}
\end{algorithm}
The algorithm is shown as Algorithm~\ref{alg:main}. It runs in $K$ iterations, where $K=\lceil\log{n}\rceil$ (recall that $n$ is the size of the online data stream). For simplicity, we assume $n=2^K-1$.
As noted in the previous subsection, we require the algorithm to use a disjoint sample set in each iteration. Thus, we partition the data as follows. The online data stream is partitioned into $K$ parts $T_1,\cdots, T_K$ of sizes $n_1=2^0, \cdots, n_K=2^{K-1}$. We define $n_0=0$ for completeness. The logged data $T_0$ is partitioned into $K+1$ parts $T_0^{(0)},\cdots, T_0^{(K)}$ of sizes $m_0=m/3, m_1=\alpha n_1, m_2=\alpha n_2, \cdots, m_K=\alpha n_K$ (where $\alpha=2m/3n$; we assume for simplicity that $\alpha\geq1$ is an integer; $m_0$ can take other values as long as it is a constant fraction of $m$). The algorithm uses $T_0^{(0)}$ to construct an initial candidate set, and uses $S_k := T_0^{(k)} \cup T_k$ in iteration $k$.
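The bookkeeping above can be checked with a few lines of Python (assuming, as in the text, that $n=2^K-1$ and that $\alpha=2m/3n$ is a positive integer):

```python
import math

def partition_sizes(m, n):
    """Data partition used by the main algorithm.

    Returns (online sizes n_1..n_K, logged sizes m_0..m_K); the online
    parts sum to n, and the logged parts sum to m/3 + alpha*n = m.
    """
    K = math.ceil(math.log2(n + 1))
    alpha = 2 * m // (3 * n)
    n_sizes = [2 ** (k - 1) for k in range(1, K + 1)]
    m_sizes = [m // 3] + [alpha * nk for nk in n_sizes]
    return n_sizes, m_sizes
```

For instance, with $m=189$ and $n=63$ we get $\alpha=2$, online parts $1,2,\dots,32$ summing to $63$, and logged parts $63,2,4,\dots,64$ summing to $189$.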
Algorithm~\ref{alg:main} uses the disagreement-based active learning framework. At iteration $k$ ($k=0,\cdots,K-1$), it first constructs a candidate set $V_{k+1}$, the set of classifiers whose training error (using the multiple importance sampling estimator) on $T_0^{(k)} \cup \tilde{T}_k$ is small, and its disagreement region $D_{k+1}$. At the end of the $k$-th iteration, it receives the $(k+1)$-th part of the online data stream $\{X_i\}_{i=m+n_1+\cdots+n_k+1}^{m+n_1+\cdots+n_{k+1}}$ from which it can query for labels. It only queries for labels inside the disagreement region $D_{k+1}$. For any example $X$ outside the disagreement region, Algorithm~\ref{alg:main} infers its label $\tilde{Y}=\hat{h}_k(X)$. Throughout this paper, we denote by $T_k$, $S_k$ the sets of examples with original labels, and by $\tilde{T}_k$, $\tilde{S}_k$ the sets of examples with inferred labels. The algorithm only observes $\tilde{T}_k$ and $\tilde{S}_k$.
Algorithm~\ref{alg:main} uses the aforementioned debiasing query strategy, which leads to fewer label queries than standard disagreement-based algorithms. To simplify our analysis, we round the query probability $Q_k(x)$ to 0 or 1.
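A minimal Python sketch of the rounded rule used in the algorithm follows; the particular policy $Q_0$ in the usage below is an assumed example.

```python
def debiased_query(x, Q0, xi, alpha):
    """Rounded debiasing rule: query the label of x (probability 1) iff
    Q0(x) <= xi + 1/alpha, where xi is the infimum of Q0 over the
    current disagreement region. Points the logging policy already
    labeled with high probability are skipped, since m*Q0(x) alone
    keeps the denominator m*Q0(x) + n*Q1(x) large in the
    concentration bound."""
    return int(Q0(x) <= xi + 1.0 / alpha)
```

For example, with $Q_0(x)=x$, $\xi=0.1$ and $\alpha=2$, the rule queries at $x=0.5$ (under-logged) but skips $x=0.7$ (heavily logged).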
\section{Analysis}
\subsection{Consistency}
We first introduce some additional quantities.
Define $h^\star:=\arg\min_{h\in\mathcal{H}}l(h)$ to be the best classifier in $\mathcal{H}$, and $\nu:=l(h^\star)$ to be its error rate. Let $\gamma_2$ be an absolute constant to be specified in Lemma~\ref{lem:dis-radius} in Appendix.
We introduce some definitions that will be used to upper-bound the size of the disagreement sets in our algorithm. Let $\text{DIS}_0:=\mathcal{X}$. Recall $K=\lceil\log n\rceil$. For $k=1,\dots,K$, let $\zeta_k := \sup_{x\in\text{DIS}_{k-1}}\frac{\log(2|\mathcal{H}|/\delta_k)}{m_{k-1}Q_0(x)+n_{k-1}}$, $\epsilon_{k}:=\gamma_{2}\zeta_k+\gamma_{2}\sqrt{\zeta_kl(h^{\star})}$, $\text{DIS}_{k}:=\text{DIS}(B(h^{\star},2\nu+\epsilon_{k}))$. Let $\zeta := \sup_{x\in\text{DIS}_1}\frac1{\alpha Q_0(x)+1}$.
The following theorem gives statistical consistency of our algorithm.
\begin{thm}
\label{thm:Convergence} There is an absolute constant
$c_{0}$ such that for any $\delta>0$, with probability at least
$1-\delta$,
\begin{align*}
l(\hat{h})\leq & l(h^{\star})+c_{0}\sup_{x\in\text{DIS}_{K}}\frac{\log\frac{K|\mathcal{H}|}{\delta}}{m Q_{0}(x)+n} \\
& +c_{0}\sqrt{\sup_{x\in\text{DIS}_{K}}\frac{\log\frac{K|\mathcal{H}|}{\delta}}{m Q_{0}(x)+n}l(h^{\star})}.
\end{align*}
\end{thm}
\subsection{Label Complexity}
We first introduce the adjusted disagreement coefficient, which characterizes the rate of decrease of the query region as the candidate set shrinks.
\begin{defn}
For any measurable set $A\subseteq \mathcal{X}$, define $S(A, \alpha)$ to be
\[
\bigcup_{A'\subseteq A} \left(A'\cap\left\{ x:Q_{0}(x)\leq\inf_{x\in A'}Q_{0}(x)+\frac{1}{\alpha}\right\} \right).
\]
For any $r_{0}\geq2\nu$, $\alpha\geq1$, define the adjusted disagreement coefficient $\tilde{\theta}(r_{0},\alpha)$ to be
\[
\sup_{r>r_{0}} \frac{1}{r}\Pr(S(\text{DIS}(B(h^{\star},r)), \alpha )).
\]
\end{defn}
The adjusted disagreement coefficient is a generalization of the standard disagreement coefficient \citep{H07} which has been widely used for analyzing active learning algorithms. The standard disagreement coefficient $\theta(r)$ can be written as $\theta(r) = \tilde{\theta}(r,1)$, and clearly $\theta(r)\geq \tilde{\theta}(r,\alpha)$ for all $\alpha\geq1$.
We can upper-bound the number of labels queried by our algorithm using the adjusted disagreement coefficient. (Recall that we only count labels queried during the online phase, and that $\alpha=2m/3n\geq1$.)
\begin{thm}
\label{thm:Label-Complexity}There is an absolute
constant $c_{1}$ such that for any $\delta>0$, with probability
at least $1-\delta$, the number of labels queried by Algorithm~\ref{alg:main}
is at most:
\begin{align*}
c_1\tilde{\theta}(2\nu+\epsilon_K,\alpha)( & n\nu+\zeta\log n\log\frac{|\mathcal{H}|\log n}{\delta}\\
& +\log n\sqrt{n\nu\zeta\log\frac{|\mathcal{H}|\log n}{\delta}}).
\end{align*}
\end{thm}
\subsection{Remarks}
As a sanity check, note that when $Q_{0}(x)\equiv1$ (i.e., all labels in the logged data are shown), our results reduce to the classical bounds for disagreement-based active learning with a warm-start.
Next, we compare the theoretical guarantees of our algorithm with some alternatives. We fix the target error rate to be $\nu+\epsilon$, assume we are given $m$ logged data, and compare upper bounds on the number of labels required in the online phase to achieve the target error rate. Recall $\xi_0=\inf_{x\in\mathcal{X}}Q_0(x)$. Define $\tilde{\xi}_K:=\inf_{x\in\text{DIS}_K}Q_0(x)$, $\tilde{\theta}:=\tilde{\theta}(2\nu,\alpha)$, $\theta:=\theta(2\nu)$.
From Theorems~\ref{thm:Convergence} and \ref{thm:Label-Complexity} and some algebra, our algorithm requires $\tilde{O}\left(\nu\tilde{\theta}\cdot(\frac{\nu+\epsilon}{\epsilon^2}\log\frac{|\mathcal{H}|}{\delta}-m\tilde{\xi}_K)\right)$ labels.
The first alternative is passive learning that requests all labels for $\{X_t\}_{t=m+1}^{m+n}$ and finds an empirical risk minimizer using both logged data and online data. If standard importance sampling is used, the upper bound is $\tilde{O}\left(\frac{1}{\xi_0}(\frac{\nu+\epsilon}{\epsilon^2}\log\frac{|\mathcal{H}|}{\delta}-m\xi_0)\right)$. If multiple importance sampling is used, the upper bound is $\tilde{O}\left(\frac{\nu+\epsilon}{\epsilon^2}\log\frac{|\mathcal{H}|}{\delta}-m\tilde{\xi}_K\right)$. Both bounds are worse than ours since $\nu\tilde{\theta}\leq1$ and $\xi_0\leq\tilde{\xi}_K\leq1$.
A second alternative is standard disagreement-based active learning with naive warm-start, where the logged data is only used to construct an initial candidate set. For standard importance sampling, the upper bound is $\tilde{O}\left(\frac{\nu\theta}{\xi_0}(\frac{\nu+\epsilon}{\epsilon^2}\log\frac{|\mathcal{H}|}{\delta}-m\xi_0)\right)$. For multiple importance sampling (i.e., our algorithm without the debiasing step), the upper bound is $\tilde{O}\left(\nu\theta\cdot(\frac{\nu+\epsilon}{\epsilon^2}\log\frac{|\mathcal{H}|}{\delta}-m\tilde{\xi}_K)\right)$. Both bounds are worse than ours since $\nu\tilde{\theta}\leq\nu\theta$ and $\xi_0\leq\tilde{\xi}_K\leq1$.
A third alternative is to simply reuse the past policy to label data, that is, to query on $x$ with probability $Q_0(x)$ in the online phase. The upper bound here is $\tilde{O}\left(\frac{\mathbb{E}[Q_0(X)]}{\xi_0}(\frac{\nu+\epsilon}{\epsilon^2}\log\frac{|\mathcal{H}|}{\delta}-m\xi_0)\right)$. This is worse than ours since $\xi_0\leq\mathbb{E}[Q_0(X)]$ and $\xi_0\leq\tilde{\xi}_K\leq1$.
\section{Preliminaries}
\subsection{Summary of Key Notations}
\paragraph{Data Partitions} $T_k=\{(X_t,Y_t,Z_t)\}_{t=m+n_1+\cdots+n_{k-1}+1}^{m+n_1+\cdots+n_k}$ ($1\leq k\leq K$) is the online data collected in the $k$-th iteration, of size $n_k=2^{k-1}$. $n=n_1+\cdots+n_K$, $\alpha=2m/3n$. We define $n_0=0$. $T_0=\{(X_t,Y_t,Z_t)\}_{t=1}^{m}$ is the logged data and is partitioned into $K+1$ parts $T_0^{(0)}, \cdots, T_0^{(K)}$ of sizes $m_0=m/3, m_1=\alpha n_1, m_2=\alpha n_2, \cdots, m_K = \alpha n_K$. $S_k=T_0^{(k)}\cup T_k$.
Recall that $\tilde{S}_k$ and $\tilde{T}_k$ contain inferred labels while $S_k$ and $T_k$ are sets of examples with original labels. The algorithm only observes $\tilde{S}_k$ and $\tilde{T}_k$.
For $(X,Y,Z)\in T_k$ ($0\leq k\leq K$), $Q_k(X) = \Pr(Z=1\mid X)$.
\paragraph{Disagreement Regions} The candidate set $V_{k}$ and its disagreement region $D_{k}$ are defined in Algorithm~\ref{alg:main}. $\hat{h}_{k}=\arg\min_{h\in V_{k}}l(h,\tilde{S}_{k})$. $\nu=l(h^\star)$.
$B(h,r):=\{h'\in\mathcal{H}\mid\rho(h,h')\leq r\}$, $\text{DIS}(V):=\{x\in\mathcal{X}\mid\exists h_{1}\neq h_{2}\in V\text{ s.t. }h_{1}(x)\neq h_{2}(x)\}$. $S(A, \alpha)=\bigcup_{A'\subseteq A} \left(A'\cap\left\{ x:Q_{0}(x)\leq\inf_{x\in A'}Q_{0}(x)+\frac{1}{\alpha}\right\} \right)$. $\tilde{\theta}(r_{0},\alpha)=\sup_{r>r_{0}} \frac{1}{r}\Pr(S(\text{DIS}(B(h^{\star},r)), \alpha ))$.
$\text{DIS}_0=\mathcal{X}$. For $k=1,\dots,K$, $\epsilon_{k}=\gamma_{2}\sup_{x\in\text{DIS}_{k-1}}\frac{\log(2|\mathcal{H}|/\delta_k)}{m_{k-1}Q_0(x)+n_{k-1}}+\gamma_{2}\sqrt{\sup_{x\in\text{DIS}_{k-1}}\frac{\log(2|\mathcal{H}|/\delta_k)}{m_{k-1}Q_0(x)+n_{k-1}}l(h^{\star})}$, $\text{DIS}_{k}=\text{DIS}(B(h^{\star},2\nu+\epsilon_{k}))$.
\paragraph{Other Notations} $\rho(h_1,h_2)=\Pr(h_1(X)\neq h_2(X))$, $\rho_S(h_1,h_2)=\frac{1}{|S|}\sum_{X\in S}\mathds{1}\{h_1(X)\neq h_2(X)\}$.
For $k\geq0$, $\sigma(k,\delta)=\sup_{x\in D_{k}}\frac{\log(|\mathcal{H}|/\delta)}{m_k Q_{0}(x)+ n_k}$, $\delta_k=\frac{\delta}{(k+1)(k+2)}$. $\xi_k = \inf_{x\in D_k}Q_0(x)$. $\zeta = \sup_{x\in\text{DIS}_1}\frac1{\alpha Q_0(x)+1}$.
\subsection{Elementary Facts}
\begin{prop}
\label{prop:quad-ineq}Suppose $a,c\geq0$,$b\in\mathbb{R}$. If $a\leq b+\sqrt{ca}$,
then $a\leq2b+c$.
\end{prop}
\begin{proof}
Since $a\leq b+\sqrt{ca}$, $\sqrt{a}\leq\frac{\sqrt{c}+\sqrt{c+4b}}{2}\leq\sqrt{\frac{c+c+4b}{2}}=\sqrt{c+2b}$,
where the second inequality follows from the root mean square-arithmetic
mean inequality. Thus, $a\leq2b+c$.
\end{proof}
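As a quick numerical sanity check of the proposition (a randomized test on assumed input ranges, not a proof):

```python
import random

# Check: for a, c >= 0 and real b, a <= b + sqrt(c*a) implies a <= 2b + c.
random.seed(1)
for _ in range(10000):
    a, c = random.uniform(0, 10), random.uniform(0, 10)
    b = random.uniform(-10, 10)
    if a <= b + (c * a) ** 0.5:
        # small tolerance absorbs floating-point rounding
        assert a <= 2 * b + c + 1e-9
```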
\subsection{Facts on Disagreement Regions and Candidate Sets}
\begin{lem}
\label{lem:Q}For any $k=0,\dots,K$, any $x\in\mathcal{X}$, any
$h_{1},h_{2}\in V_{k}$, $\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}\leq\sup_{x'}\frac{\mathds{1}\{x'\in D_{k}\}}{m_k Q_{0}(x')+n_k}$.
\end{lem}
\begin{proof}
The $k=0$ case is obvious since $D_0=\mathcal{X}$ and $n_0=0$.
For $k>0$, since $\text{DIS}(V_k)=D_k$, $\mathds{1}\{h_{1}(x)\neq h_{2}(x)\} \leq \mathds{1}\{x\in D_{k}\}$, and thus $\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}}{m_k Q_{0}(x)+ n_k Q_{k}(x)} \leq \frac{\mathds{1}\{x\in D_{k}\}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}$.
For any $x$, if $Q_0(x)\leq \xi_k+1/\alpha$, then $Q_k(x)=1$, so $\frac{\mathds{1}\{x\in D_{k}\}}{m_k Q_{0}(x)+ n_k Q_{k}(x)} = \frac{\mathds{1}\{x\in D_{k}\}}{m_k Q_{0}(x)+n_k} \leq \sup_{x'}\frac{\mathds{1}\{x'\in D_{k}\}}{m_k Q_{0}(x')+n_k}$.
If $Q_0(x)> \xi_k+1/\alpha$, then $Q_k(x)=0$, so $\frac{\mathds{1}\{x\in D_{k}\}}{m_k Q_{0}(x)+ n_k Q_{k}(x)} = \frac{\mathds{1}\{x\in D_{k}\}}{m_{k} Q_{0}(x)} \leq \frac{\mathds{1}\{x\in D_{k}\}}{m_k \xi_k+n_k} \leq \sup_{x'}\frac{\mathds{1}\{x'\in D_{k}\}}{m_k Q_{0}(x')+n_k}$, where the first inequality follows from the fact that $Q_0(x)> \xi_k+1/\alpha$ implies $m_k Q_0(x)> m_k\xi_k+n_k$ (since $m_k=\alpha n_k$).
\end{proof}
\begin{lem}
\label{lem:l-diff-S-S_tilde}For any $k=0,\dots,K$, if $h_{1},h_{2}\in V_{k}$,
then $l(h_{1},S_{k})-l(h_{2},S_{k})=l(h_{1},\tilde{S}_{k})-l(h_{2},\tilde{S}_{k})$.
\end{lem}
\begin{proof}
For any $(X_t, Y_t, Z_t) \in S_k$ with $Z_t=1$: if $X_t\in\text{DIS}(V_k)$, then $Y_t=\tilde{Y}_t$, so $\mathds{1}\{h_1(X_t)\neq Y_t\} - \mathds{1}\{h_2(X_t)\neq Y_t\} = \mathds{1}\{h_1(X_t)\neq \tilde{Y}_t\} - \mathds{1}\{h_2(X_t)\neq \tilde{Y}_t\}$. If $X_t\notin\text{DIS}(V_k)$, then $h_1(X_t)=h_2(X_t)$, so $\mathds{1}\{h_1(X_t)\neq Y_t\} - \mathds{1}\{h_2(X_t)\neq Y_t\} = \mathds{1}\{h_1(X_t)\neq \tilde{Y}_t\} - \mathds{1}\{h_2(X_t)\neq \tilde{Y}_t\} = 0$.
\end{proof}
The following lemma is immediate from definition.
\begin{lem}
\label{lem:dis-coefficient}For any $r\geq2\nu$, any $\alpha\geq1$, $\Pr(S(\text{DIS}(B(h^{\star},r)),\alpha))\leq r\tilde{\theta}(r,\alpha)$.
\end{lem}
\subsection{Facts on Multiple Importance Sampling Estimators}
We recall that $\{(X_t,Y_t)\}_{t=1}^{n_0+n}$ is an i.i.d. sequence. Moreover, the following fact is immediate by our construction that $S_0,\cdots,S_K$ are disjoint and that $Q_k$ is determined by $S_0,\cdots,S_{k-1}$.
\begin{fact}
\label{fact:independence}For any $0\leq k \leq K$, conditioned on $Q_k$, examples in $S_k$ are independent, and examples in $T_k$ are i.i.d. Moreover, for any $0<k\leq K$, $Q_k$, $T_0^{(k)},\dots,T_0^{(K)}$ are independent.
\end{fact}
Unless otherwise specified, all probabilities and expectations are over the random draw of all random variables (including $S_0, \cdots, S_K$, $Q_1, \cdots, Q_K$).
The following lemma shows multiple importance estimators are unbiased.
\begin{lem}
\label{lem:mis-unbiased}For any $h\in\mathcal{H}$, any $0\leq k\leq K$, $\mathbb{E}[l(h,S_{k})]=l(h)$.
\end{lem}
The above lemma is immediate from the following lemma.
\begin{lem}
\label{lem:cond-mis-unbiased}For any $h\in\mathcal{H}$, any $0\leq k\leq K$, $\mathbb{E}[l(h,S_{k})\mid Q_k]=l(h)$.
\end{lem}
\begin{proof}
The $k=0$ case is obvious since $S_0=T_0^{(0)}$ is an i.i.d. sequence and $l(h,S_{k})$ reduces to a standard importance sampling estimator. We only give the proof for $k>0$.
Recall that $S_{k}=T_0^{(k)}\cup T_k$, and that $T_0^{(k)}$ and $T_k$ are two i.i.d. sequences conditioned on $Q_k$. We denote the conditional distributions of $T_0^{(k)}$ and $T_k$ given $Q_k$ by $P_0$ and $P_k$ respectively. We have
\begin{eqnarray*}
\mathbb{E}[l(h,S_{k})\mid Q_k] & = & \mathbb{E}\left[\sum_{(X,Y,Z)\in T_0^{(k)}}\frac{\mathds{1}\{h(X)\neq Y\}Z}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right] + \mathbb{E}\left[\sum_{(X,Y,Z)\in T_k}\frac{\mathds{1}\{h(X)\neq Y\}Z}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right]\\
& = & m_k\mathbb{E}_{P_{0}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Z}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right]+n_{k}\mathbb{E}_{P_{k}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Z}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right]
\end{eqnarray*}
where the second equality follows since $T_0^{(k)}$ and $T_k$ are two i.i.d. sequences given $Q_k$ with sizes $m_k$ and $n_k$ respectively.
Now,
\begin{eqnarray*}
\mathbb{E}_{P_{0}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Z}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right] & = & \mathbb{E}_{P_0}\left[\mathbb{E}_{P_{0}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Z}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid X, Q_k\right]\mid Q_k\right] \\
& = & \mathbb{E}_{P_{0}}\left[\mathbb{E}_{P_{0}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Q_{0}(X)}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid X, Q_k\right]\mid Q_k\right] \\
& = & \mathbb{E}_{P_{0}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Q_{0}(X)}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right]
\end{eqnarray*}
where the second equality uses the definition $\Pr_{P_0}(Z=1\mid X)=Q_0(X)$ and the fact that $T_0^{(k)}$ and $Q_k$ are independent.
Similarly, we have $\mathbb{E}_{P_{k}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Z}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right] = \mathbb{E}_{P_{k}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Q_{k}(X)}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right]$.
Therefore,
\begin{eqnarray*}
\lefteqn{m_{k}\mathbb{E}_{P_{0}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Z}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right]+n_{k}\mathbb{E}_{P_{k}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Z}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right]} \\
& = & m_k\mathbb{E}_{P_{0}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Q_{0}(X)}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right] + n_k\mathbb{E}_{P_{k}}\left[\frac{\mathds{1}\{h(X)\neq Y\}Q_{k}(X)}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right] \\
& = & \mathbb{E}_{P_{0}}\left[\mathds{1}\{h(X)\neq Y\}\frac{m_{k}Q_{0}(X)+n_{k}Q_{k}(X)}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right]\\
& = & \mathbb{E}_D\left[\mathds{1}\{h(X)\neq Y\}\right]=l(h)
\end{eqnarray*}
where the second equality uses the fact that the distribution of $(X,Y)$ under $P_0$ is the same as under $P_k$, and the third equality follows by algebra and Fact~\ref{fact:independence}, namely that $Q_k$ is independent of $T_0^{(k)}$.
\end{proof}
The following lemma will be used to upper-bound the variance of the multiple importance sampling estimator.
\begin{lem}
\label{lem:var-mis}For any $h_1,h_2\in\mathcal{H}$, any $0\leq k\leq K$,
\[
\mathbb{E}\left[\sum_{(X,Y,Z)\in S_k}\left(\frac{\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}Z}{m_k Q_{0}(X)+n_kQ_{k}(X)}\right)^2\mid Q_k\right] \leq \rho(h_1,h_2)\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}}{m_k Q_0(x)+n_kQ_k(x)}.
\]
\end{lem}
\begin{proof}
We only give the proof for $k>0$; the $k=0$ case can be proved similarly.
We denote the conditional distributions of $T_0^{(k)}$ and $T_k$ given $Q_k$ by $P_0$ and $P_k$ respectively. Now, similar to the proof of Lemma~\ref{lem:cond-mis-unbiased}, we have
\begin{align*}
\lefteqn{\mathbb{E}\left[\sum_{(X,Y,Z)\in S_k}\left(\frac{\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}Z}{m_k Q_{0}(X)+n_kQ_{k}(X)}\right)^2\mid Q_k\right]}\\
= & \sum_{(X,Y,Z)\in S_k}\mathbb{E}\left[\frac{\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}Z}{\left(m_k Q_{0}(X)+ n_k Q_{k}(X)\right)^{2}}\mid Q_k\right]\\
= & m_{k}\mathbb{E}_{P_{0}}\left[\frac{\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}Z}{\left(m_k Q_{0}(X)+ n_k Q_{k}(X)\right)^{2}}\mid Q_k\right]+n_{k}\mathbb{E}_{P_{k}}\left[\frac{\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}Z}{\left(m_k Q_{0}(X)+ n_k Q_{k}(X)\right)^{2}}\mid Q_k\right]\\
= & m_{k}\mathbb{E}_{P_{0}}\left[\frac{\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}Q_0(X)}{\left(m_k Q_{0}(X)+ n_k Q_{k}(X)\right)^{2}}\mid Q_k\right]+n_{k}\mathbb{E}_{P_{k}}\left[\frac{\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}Q_k(X)}{\left(m_k Q_{0}(X)+ n_k Q_{k}(X)\right)^{2}}\mid Q_k\right]\\
= & \mathbb{E}_{P_0}\left[\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}\frac{m_k Q_0(X) + n_k Q_k(X)}{(m_k Q_{0}(X)+ n_k Q_{k}(X))^2}\mid Q_k\right]\\
= & \mathbb{E}_{P_0}\left[\frac{\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}}{m_k Q_{0}(X)+ n_k Q_{k}(X)}\mid Q_k\right]\\
\leq & \mathbb{E}_{P_0}\left[\mathds{1}\{h_{1}(X)\neq h_{2}(X)\}\mid Q_k\right]\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}}{m_k Q_0(x)+ n_k Q_k(x)} \\
= & \rho(h_1,h_2)\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}}{m_k Q_0(x)+ n_k Q_k(x)}.
\end{align*}
\end{proof}
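Lemma~\ref{lem:var-mis} can likewise be sanity-checked by Monte Carlo simulation; again, the finite instance space and all probabilities below are our own illustrative choices:

```python
import random

random.seed(2)

# Illustrative setup: h1 and h2 disagree exactly on x in {0, 1}.
xs = [0, 1, 2]
px = [0.4, 0.3, 0.3]
dis = [1, 1, 0]            # 1{h1(x) != h2(x)}
Q0 = [0.8, 0.3, 0.5]
Qk = [0.4, 1.0, 0.2]
m_k, n_k = 30, 50

def second_moment_once():
    """One draw of sum over S_k of (1{h1 != h2} Z / (m_k Q0 + n_k Qk))^2."""
    total = 0.0
    for count, policy in ((m_k, Q0), (n_k, Qk)):
        for _ in range(count):
            x = random.choices(xs, weights=px)[0]
            if dis[x] and random.random() < policy[x]:
                total += (1.0 / (m_k * Q0[x] + n_k * Qk[x])) ** 2
    return total

rho = sum(p * d for p, d in zip(px, dis))    # rho(h1, h2) = Pr(h1(X) != h2(X))
sup_term = max(1.0 / (m_k * q0 + n_k * qk)
               for d, q0, qk in zip(dis, Q0, Qk) if d)
avg = sum(second_moment_once() for _ in range(20000)) / 20000
print(avg, rho * sup_term)   # empirical second moment vs. the bound of the lemma
```

The empirical second moment stays below $\rho(h_1,h_2)\sup_x \mathds{1}\{h_1(x)\neq h_2(x)\}/(m_k Q_0(x)+n_k Q_k(x))$, as the lemma guarantees.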
\section{Deviation Bounds}
In this section, we establish deviation bounds for our error estimators on $S_k$. Again, unless otherwise specified, all probabilities and expectations in this section are over the random draw of all random variables, that is, $S_0, \cdots, S_K$, $Q_1, \cdots, Q_K$.
We use the following Bernstein-style concentration bound:
\begin{fact}
\label{fact:bernstein}Suppose $X_{1},\dots,X_{n}$ are independent
random variables such that for all $i=1,\dots,n$, $|X_{i}|\leq1$, $\mathbb{E} X_{i}=0$,
and $\mathbb{E} X_{i}^{2}\leq\sigma_{i}^{2}$. Then with probability at
least $1-\delta$,
\[
\left|\sum_{i=1}^{n}X_{i}\right|\leq\frac{2}{3}\log\frac{2}{\delta}+\sqrt{2\sum_{i=1}^{n}\sigma_{i}^{2}\log\frac{2}{\delta}}.
\]
\end{fact}
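The constants in Fact~\ref{fact:bernstein} can be illustrated with a quick simulation (a sketch with arbitrary variance scales of our choosing); the empirical violation frequency should stay below $\delta$:

```python
import math
import random

random.seed(1)

# Zero-mean bounded variables: X_i uniform on {-s_i, +s_i}, so |X_i| <= 1
# and E X_i^2 = s_i^2.
n, delta, trials = 200, 0.05, 2000
s = [0.6 if i % 3 == 0 else 0.1 for i in range(n)]
bound = (2.0 / 3.0) * math.log(2 / delta) + math.sqrt(
    2 * sum(si * si for si in s) * math.log(2 / delta))

violations = sum(
    abs(sum(random.choice((-si, si)) for si in s)) > bound
    for _ in range(trials))
print(violations / trials)   # should be well below delta
```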
\begin{thm}
\label{thm:gen}For any $k=0,\dots,K$, any $\delta>0$, with probability
at least $1-\delta$, for all $h_{1},h_{2}\in\mathcal{H},$
the following statement holds:
\begin{align}
\left|\left(l(h_{1},S_{k})-l(h_{2},S_{k})\right)-\left(l(h_{1})-l(h_{2})\right)\right| & \leq2\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}\frac{2\log\frac{4|\mathcal{H}|}{\delta}}{3}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}+\sqrt{2\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}\log\frac{4|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}\rho(h_{1},h_{2})}.\label{eq:thm-gen-diff-rho}
\end{align}
\end{thm}
\begin{proof}
We show the proof for $k>0$; the $k=0$ case can be proved similarly. For $k>0$, it suffices to show that for any $k=1,\dots,K$ and $\delta>0$, conditioned on $Q_k$, (\ref{eq:thm-gen-diff-rho}) holds for all $h_1,h_2\in\mathcal{H}$ with probability at least $1-\delta$.
For any $k=1,\dots,K$, for any fixed $h_{1},h_{2}\in\mathcal{H}$, define $A:=\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}$. Let $N:=|S_k|$,
$U_{t}:=\frac{\mathds{1}\{h_{1}(X_{t})\neq Y_{t}\}Z_{t}}{m_k Q_{0}(X_{t})+n_k Q_{k}(X_{t})}-\frac{\mathds{1}\{h_{2}(X_{t})\neq Y_{t}\}Z_{t}}{m_k Q_{0}(X_{t})+n_k Q_{k}(X_{t})}$,
$V_{t}:=(U_{t}-\mathbb{E}[U_{t}|Q_k])/2A$.
Now, conditioned on $Q_k$, $\{V_{t}\}_{t=1}^{N}$ is an independent sequence by Fact~\ref{fact:independence}, with $|V_{t}|\leq1$ and $\mathbb{E}[V_{t}\mid Q_k]=0$. Besides, we have
\begin{eqnarray*}
\sum_{t=1}^{N}\mathbb{E}[V_{t}^2|Q_k] & \leq & \frac{1}{4A^{2}}\sum_{t=1}^{N}\mathbb{E}[U_{t}^{2}|Q_k]\\
& \leq & \frac{1}{4A^{2}}\sum_{t=1}^{N}\mathbb{E}\left(\frac{\mathds{1}\{h_{1}(X_{t})\neq h_{2}(X_{t})\}Z_{t}}{m_k Q_{0}(X_{t})+n_k Q_{k}(X_{t})}\right)^{2}\\
& \leq & \frac{\rho(h_1,h_2)}{4A}
\end{eqnarray*}
where the second inequality follows from $|U_{t}|\leq\frac{\mathds{1}\{h_{1}(X_{t})\neq h_{2}(X_{t})\}Z_{t}}{m_k Q_{0}(X_{t})+n_k Q_{k}(X_{t})}$, and the third inequality follows from Lemma~\ref{lem:var-mis}.
Applying Bernstein's inequality (Fact~\ref{fact:bernstein}) to $\{V_t\}$, conditioned on $Q_k$, we have with probability at least
$1-\delta$,
\[
\left|\sum_{t=1}^{N}V_{t}\right|\leq\frac{2}{3}\log\frac{2}{\delta}+\sqrt{\frac{\rho(h_{1},h_{2})}{2A}\log\frac{2}{\delta}}.
\]
Note that $\sum_{t=1}^{N}U_{t}=l(h_{1}, S_k)-l(h_{2}, S_k)$, and $\sum_{t=1}^{N}\mathbb{E}[U_{t}\mid Q_k]=l(h_{1})-l(h_{2})$ by Lemma~\ref{lem:cond-mis-unbiased}, so $\sum_{t=1}^{N}V_{t}=\frac{1}{2A}\left(l(h_{1},S_{k})-l(h_{2},S_{k})-l(h_{1})+l(h_{2})\right)$.
(\ref{eq:thm-gen-diff-rho}) follows by algebra and a union bound
over $\mathcal{H}$.
\end{proof}
\begin{thm}
\label{thm:gen-rho}For any $k=0,\dots,K$, any $\delta>0$, with probability
at least $1-\delta$, for all
$h_{1},h_{2}\in\mathcal{H},$ the following statements hold simultaneously:
\begin{align}
\rho_{S_{k}}(h_{1},h_{2}) & \leq2\rho(h_{1},h_{2})+\frac{10}{3}\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}\log\frac{4|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)};\label{eq:thm-gen-rho_S}\\
\rho(h_{1},h_{2}) & \leq2\rho_{S_{k}}(h_{1},h_{2})+\frac{7}{6}\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}\log\frac{4|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}.\label{eq:thm-gen-rho}
\end{align}
\end{thm}
\begin{proof}
Let $N=|S_{k}|$. Note that for any $h_{1},h_{2}\in\mathcal{H}$,
$\rho_{S_{k}}(h_{1},h_{2})=\frac{1}{N}\sum_{t}\mathds{1}\{h_{1}(X_{t})\neq h_{2}(X_{t})\}$,
which is the empirical average of an i.i.d. sequence. By Fact~\ref{fact:bernstein}
and a union bound over $\mathcal{H}$, with probability at least $1-\delta$,
\[
\left|\rho(h_{1},h_{2})-\rho_{S_{k}}(h_{1},h_{2})\right|\leq\frac{2}{3N}\log\frac{4|\mathcal{H}|}{\delta}+\sqrt{\frac{2\rho(h_{1},h_{2})}{N}\log\frac{4|\mathcal{H}|}{\delta}}.
\]
On this event, by Proposition~\ref{prop:quad-ineq}, $\rho(h_{1},h_{2})\leq2\rho_{S_{k}}(h_{1},h_{2})+\frac{4}{3N}\log\frac{4|\mathcal{H}|}{\delta}+\frac{2}{N}\log\frac{4|\mathcal{H}|}{\delta}\leq2\rho_{S_{k}}(h_{1},h_{2})+\frac{10}{3N}\log\frac{4|\mathcal{H}|}{\delta}$.
Moreover,
\begin{eqnarray*}
\rho_{S_{k}}(h_{1},h_{2}) & \leq & \rho(h_{1},h_{2})+\frac{2}{3N}\log\frac{4|\mathcal{H}|}{\delta}+\sqrt{\frac{2\rho(h_{1},h_{2})}{N}\log\frac{4|\mathcal{H}|}{\delta}}\\
& \leq & \rho(h_{1},h_{2})+\frac{2}{3N}\log\frac{4|\mathcal{H}|}{\delta}+\frac{1}{2}(2\rho(h_{1},h_{2})+\frac{1}{N}\log\frac{4|\mathcal{H}|}{\delta})\\
& \leq & 2\rho(h_{1},h_{2})+\frac{7}{6N}\log\frac{4|\mathcal{H}|}{\delta}
\end{eqnarray*}
where the second inequality uses the fact that $\forall a,b>0,\sqrt{ab}\leq\frac{a+b}{2}$.
The result follows by noting that $\forall x\in\mathcal{X}$, $N=|S_{k}|=m_k+n_k\geq m_k Q_{0}(x)+ n_k Q_{k}(x)$.
\end{proof}
\begin{cor}
\label{cor:gen}There are universal constants $\gamma_{0},\gamma_{1}>0$
such that for any $k=0,\dots,K$, any $\delta>0$, with probability at
least $1-\delta$, for all $h,h_{1},h_{2}\in\mathcal{H},$
the following statements hold simultaneously:
\begin{equation}
\left|\left(l(h_{1},S_{k})-l(h_{2},S_{k})\right)-\left(l(h_{1})-l(h_{2})\right)\right|\leq\gamma_{0}\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}\log\frac{|\mathcal{H}|}{2\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}+\gamma_{0}\sqrt{\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h_{1}(x)\neq h_{2}(x)\}\log\frac{|\mathcal{H}|}{2\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}\rho_{S}(h_{1},h_{2})};\label{eq:cor-gen-diff-rho_S}
\end{equation}
\begin{equation}
l(h)-l(h^{\star})\leq2(l(h,S_{k})-l(h^{\star},S_{k}))+\gamma_{1}\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\log\frac{|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}+\gamma_{1}\sqrt{\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\log\frac{|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}l(h^{\star})}.\label{eq:cor-gen-diff-h_star}
\end{equation}
\end{cor}
\begin{proof}
Let $E$ be the event that (\ref{eq:thm-gen-diff-rho}) and
(\ref{eq:thm-gen-rho}) each hold for all $h_{1},h_{2}\in\mathcal{H}$
with confidence $1-\frac{\delta}{2}$. Assume $E$ happens,
which has probability at least $1-\delta$ by a union bound.
(\ref{eq:cor-gen-diff-rho_S}) is immediate from (\ref{eq:thm-gen-diff-rho})
and (\ref{eq:thm-gen-rho}).
To prove (\ref{eq:cor-gen-diff-h_star}), we apply (\ref{eq:thm-gen-diff-rho})
to $h$ and $h^{\star}$ to get
\[
l(h)-l(h^{\star})\leq l(h,S_{k})-l(h^{\star},S_{k})+2\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\frac{2\log\frac{4|\mathcal{H}|}{\delta}}{3}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}+\sqrt{2\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\log\frac{4|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}\rho(h,h^{\star})}.
\]
By the triangle inequality, $\rho(h,h^{\star})=\Pr_D(h(X)\neq h^{\star}(X))\leq\Pr_D(h(X)\neq Y)+\Pr_D(h^{\star}(X)\neq Y)=l(h)-l(h^{\star})+2l(h^{\star})$.
Therefore, we get
\begin{eqnarray*}
l(h)-l(h^{\star}) & \leq & l(h,S_{k})-l(h^{\star},S_{k})+2\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\frac{2\log\frac{4|\mathcal{H}|}{\delta}}{3}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}\\
& & +\sqrt{2\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x))\}\log\frac{4|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}(l(h)-l(h^{\star})+2l(h^{\star}))}\\
& \leq & l(h,S_{k})-l(h^{\star},S_{k})+\sqrt{2\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\log\frac{4|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}(l(h)-l(h^{\star}))}\\
& & +2\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\frac{2\log\frac{4|\mathcal{H}|}{\delta}}{3}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}+\sqrt{4\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{h(x)\neq h^{\star}(x)\}\log\frac{4|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}l(h^{\star})}
\end{eqnarray*}
where the second inequality uses $\sqrt{a+b}\leq \sqrt{a}+\sqrt{b}$ for $a, b\geq 0$.
(\ref{eq:cor-gen-diff-h_star}) follows by applying Proposition~\ref{prop:quad-ineq}
to $l(h)-l(h^{\star})$.
\end{proof}
\section{Technical Lemmas}
For any $0\leq k\leq K$ and $\delta>0$, define event $\mathcal{E}_{k,\delta}$ to be the event that the conclusions of Theorem~\ref{thm:gen} and Theorem~\ref{thm:gen-rho} hold for $k$ with confidence $1-\delta/2$ respectively. We have $\Pr(\mathcal{E}_{k,\delta})\geq 1-\delta$, and that $\mathcal{E}_{k,\delta}$ implies inequalities (\ref{eq:thm-gen-diff-rho}) to (\ref{eq:cor-gen-diff-h_star}).
We first present a lemma which, combined with an induction argument, guarantees that $h^\star$ stays in the candidate sets with high probability.
\begin{lem}
\label{lem:h_star_in}For any $k=0,\dots,K$ and any $\delta>0$, on event $\mathcal{E}_{k,\delta}$, if $h^{\star}\in V_{k}$, then
\[
l(h^{\star},\tilde{S}_{k})\leq l(\hat{h}_{k},\tilde{S}_{k})+\gamma_{0}\sigma(k,\delta)+\gamma_{0}\sqrt{\sigma(k,\delta)\rho_{\tilde{S}_{k}}(\hat{h}_{k},h^\star)}.
\]
\end{lem}
\begin{proof}
\begin{align*}
& l(h^{\star},\tilde{S}_{k})-l(\hat{h}_{k},\tilde{S}_{k})\\
= & l(h^{\star},S_{k})-l(\hat{h}_{k},S_{k})\\
\leq & \gamma_{0}\sup_{x}\frac{\mathds{1}\{h^{\star}(x)\neq\hat{h}_{k}(x)\}\log\frac{|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}+\gamma_{0}\sqrt{\sup_{x}\frac{\mathds{1}\{h^{\star}(x)\neq\hat{h}_{k}(x)\}\log\frac{|\mathcal{H}|}{\delta}}{m_k Q_{0}(x)+ n_k Q_{k}(x)}\rho_{S_{k}}(\hat{h}_{k},h^\star)}\\
\leq & \gamma_{0}\sigma(k,\delta)+\gamma_{0}\sqrt{\sigma(k,\delta)\rho_{\tilde{S}_{k}}(\hat{h}_{k},h^\star)}.
\end{align*}
The equality follows from Lemma~\ref{lem:l-diff-S-S_tilde}. The
first inequality follows from (\ref{eq:cor-gen-diff-rho_S}) of Corollary~\ref{cor:gen}
and that $l(h^{\star})\leq l(\hat{h}_{k})$. The last inequality follows
from Lemma~\ref{lem:Q} and that $\rho_{\tilde{S}_{k}}(\hat{h}_{k},h^\star) = \rho_{S_{k}}(\hat{h}_{k},h^\star)$.
\end{proof}
Next, we present two lemmas to bound the probability mass of the disagreement region of candidate sets.
\begin{lem}
\label{lem:dis-radius} For any $k=0,\dots,K$, any $\delta>0$, let $V_{k+1}(\delta):=\{h\in V_{k}\mid l(h,\tilde{S}_{k})\leq l(\hat{h}_{k},\tilde{S}_{k})+\gamma_{0}\sigma(k,\delta)+\gamma_{0}\sqrt{\sigma(k,\delta)\rho_{\tilde{S}_{k}}(\hat{h}_{k},h)}\}$. Then there is an absolute constant $\gamma_{2}>1$ such that for any $k=0,\dots,K$ and any $\delta>0$, on event $\mathcal{E}_{k,\delta}$, if $h^{\star}\in V_{k}$, then for all $h\in V_{k+1}(\delta)$,
\[
l(h)-l(h^{\star})\leq\gamma_{2}\sigma(k,\delta)+\gamma_{2}\sqrt{\sigma(k,\delta)l(h^{\star})}.
\]
\end{lem}
\begin{proof}
For any $h\in V_{k+1}(\delta)$, we have
\begin{align}
\MoveEqLeft l(h)-l(h^{\star})\nonumber \\
\leq & 2(l(h,S_{k})-l(h^{\star},S_{k}))+\gamma_{1}\sigma(k,\frac{\delta}{2})+\gamma_{1}\sqrt{\sigma(k,\frac{\delta}{2})l(h^{\star})}\nonumber \\
= & 2(l(h,\tilde{S}_{k})-l(h^{\star},\tilde{S}_{k}))+\gamma_{1}\sigma(k,\frac{\delta}{2})+\gamma_{1}\sqrt{\sigma(k,\frac{\delta}{2})l(h^{\star})}\nonumber \\
= & 2(l(h,\tilde{S}_{k})-l(\hat{h}_{k},\tilde{S}_{k})+l(\hat{h}_{k},\tilde{S}_{k})-l(h^{\star},\tilde{S}_{k}))+\gamma_{1}\sigma(k,\frac{\delta}{2})+\gamma_{1}\sqrt{\sigma(k,\frac{\delta}{2})l(h^{\star})}\nonumber \\
\leq & 2(l(h,\tilde{S}_{k})-l(\hat{h}_{k},\tilde{S}_{k}))+\gamma_{1}\sigma(k,\frac{\delta}{2})+\gamma_{1}\sqrt{\sigma(k,\frac{\delta}{2})l(h^{\star})}\nonumber \\
\leq & (2\gamma_{0}+\gamma_{1})\sigma(k,\frac{\delta}{2})+2\gamma_{0}\sqrt{\sigma(k,\frac{\delta}{2})\rho_{\tilde{S}_{k}}(h,\hat{h}_{k})}+\gamma_{1}\sqrt{\sigma(k,\frac{\delta}{2})l(h^{\star})}\nonumber \\
\leq & (2\gamma_{0}+\gamma_{1})\sigma(k,\frac{\delta}{2})+2\gamma_{0}\sqrt{\sigma(k,\frac{\delta}{2})(\rho_{S_{k}}(h,h^{\star})+\rho_{S_{k}}(\hat{h_{k}},h^{\star}))}+\gamma_{1}\sqrt{\sigma(k,\frac{\delta}{2})l(h^{\star})} \label{eq:lem-dis-radius-1}
\end{align}
where the first inequality follows from (\ref{eq:cor-gen-diff-h_star})
of Corollary~\ref{cor:gen} and Lemma~\ref{lem:Q}, the first equality
follows from Lemma~\ref{lem:l-diff-S-S_tilde}, the third inequality
follows from the definition of $V_{k+1}(\delta)$, and the last inequality follows from $\rho_{\tilde{S}_{k}}(h,\hat{h}_{k})=\rho_{S_{k}}(h,\hat{h}_{k})\leq\rho_{S_{k}}(h,h^{\star})+\rho_{S_{k}}(\hat{h_{k}},h^{\star})$.
As for $\rho_{S_{k}}(h,h^{\star})$, we have $\rho_{S_{k}}(h,h^{\star})\leq2\rho(h,h^{\star})+\frac{16}{3}\sigma(k,\frac{\delta}{8})\leq2(l(h)-l(h^{\star}))+4l(h^{\star})+\frac{16}{3}\sigma(k,\frac{\delta}{8})$
where the first inequality follows from (\ref{eq:thm-gen-rho_S}) of
Theorem~\ref{thm:gen-rho} and Lemma~\ref{lem:Q}, and the second
inequality follows from the triangle inequality.
For $\rho_{S_{k}}(\hat{h}_{k},h^{\star})$, we have
\begin{eqnarray*}
\rho_{S_{k}}(\hat{h}_{k},h^{\star}) & \leq & 2\rho(\hat{h}_{k},h^{\star})+\frac{16}{3}\sigma(k,\frac{\delta}{8})\\
& \leq & 2(l(\hat{h}_{k})-l(h^{\star})+2l(h^{\star}))+\frac{16}{3}\sigma(k,\frac{\delta}{8})\\
& \leq & 2(2(l(\hat{h}_{k},S_{k})-l(h^{\star},S_{k}))+\gamma_{1}\sigma(k,\frac{\delta}{2})+\gamma_{1}\sqrt{\sigma(k,\frac{\delta}{2})l(h^{\star})}+2l(h^{\star}))+\frac{16}{3}\sigma(k,\frac{\delta}{8})\\
& \leq & (2\gamma_{1}+\frac{16}{3})\sigma(k,\frac{\delta}{8})+2\gamma_{1}\sqrt{\sigma(k,\frac{\delta}{2})l(h^{\star})}+4l(h^{\star})\\
& \leq & (4+\gamma_{1})l(h^{\star})+(3\gamma_{1}+\frac{16}{3})\sigma(k,\frac{\delta}{8})
\end{eqnarray*}
where the first inequality follows from (\ref{eq:thm-gen-rho_S}) of
Theorem~\ref{thm:gen-rho} and Lemma~\ref{lem:Q}, the second follows from the triangle inequality,
the third follows from (\ref{eq:cor-gen-diff-h_star}) of Corollary~\ref{cor:gen}
and Lemma~\ref{lem:Q}, the fourth follows from the definition of $\hat{h}_{k}$,
the last follows from the fact that $2\sqrt{ab}\leq a+b$ for $a,b\geq0$.
Continuing (\ref{eq:lem-dis-radius-1}) and using the fact that $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$
for $a,b\geq0$, we have:
\[
l(h)-l(h^{\star})\leq(2\gamma_{0}+\gamma_{1}+2\gamma_{0}\sqrt{3\gamma_{1}+\frac{32}{3}})\sigma(k,\frac{\delta}{8})+(2\gamma_{0}\sqrt{8+\gamma_{1}}+\gamma_{1})\sqrt{\sigma(k,\frac{\delta}{8})l(h^{\star})}+2\sqrt{2}\gamma_{0}\sqrt{\sigma(k,\frac{\delta}{8})(l(h)-l(h^{\star}))}.
\]
The result follows by applying Proposition~\ref{prop:quad-ineq}
to $l(h)-l(h^{\star})$.
\end{proof}
\begin{lem}
\label{lem:Dk-DISk}On event $\bigcap_{k=0}^{K-1}\mathcal{E}_{k,\delta_k/2}$, for any $k=0,\dots,K$, $D_{k}\subseteq\text{DIS}_{k}$.
\end{lem}
\begin{proof}
Recall that $\delta_{k}=\frac{\delta}{(k+1)(k+2)}$. On event $\bigcap_{k=0}^{K-1}\mathcal{E}_{k,\delta_k/2}$, $h^\star \in V_k$ for all $0\leq k\leq K$ by Lemma~\ref{lem:h_star_in} and induction.
The $k=0$ case is obvious since $D_0=\text{DIS}_0=\mathcal{X}$. Now, suppose $0\leq k<K$, and $D_k \subseteq \text{DIS}_k$. We have
\begin{eqnarray*}
D_{k+1} & \subseteq & \text{DIS}\left(\left\{h:l(h)\leq\nu+\gamma_{2}\left(\sigma(k,\delta_{k}/2)+\sqrt{\sigma(k,\delta_{k}/2)\nu}\right)\right\}\right)\\
& \subseteq & \text{DIS}\left(B\left(h^{\star},2\nu+\gamma_{2}\left(\sigma(k,\delta_{k}/2)+\sqrt{\sigma(k,\delta_{k}/2)\nu}\right)\right)\right)
\end{eqnarray*}
where the first line follows from Lemma~\ref{lem:dis-radius} and the
definition of $D_{k+1}$, and the second line follows from the triangle inequality
$\Pr(h(X)\neq h^{\star}(X))\leq l(h)+l(h^{\star})$ (recall that $\nu=l(h^{\star})$).
To prove $D_{k+1}\subseteq \text{DIS}_{k+1}$ it suffices to show $\gamma_{2}\left(\sigma(k,\delta_{k}/2)+\sqrt{\sigma(k,\delta_{k}/2)\nu}\right)\leq\epsilon_{k+1}$.
Note that $\sigma(k,\delta_{k}/2)=\sup_{x\in D_{k}}\frac{\log(2|\mathcal{H}|/\delta_{k})}{m_k Q_{0}(x)+n_k}\leq\sup_{x\in\text{DIS}_{k}}\frac{\log(2|\mathcal{H}|/\delta_{k})}{m_k Q_{0}(x)+n_k}$ since $D_{k}\subseteq \text{DIS}_{k}$. Consequently, $\gamma_2\left(\sigma(k,\delta_{k}/2)+\sqrt{\sigma(k,\delta_{k}/2)\nu}\right)\leq\epsilon_{k+1}$.
\end{proof}
\section{Proof of Consistency}
\begin{proof}
(of Theorem~\ref{thm:Convergence}) Define event
$\mathcal{E}^{(0)}:=\bigcap_{k=0}^{K}\mathcal{E}_{k,\delta_k/2}$. By a union bound, $\Pr(\mathcal{E}^{(0)})\geq 1-\delta$. On event $\mathcal{E}^{(0)}$, by induction and Lemma~\ref{lem:h_star_in}, for all $k=0,\dots,K$, $h^{\star}\in V_{k}$. Observe that $\hat{h}=\hat{h}_K\in V_{K+1}(\delta_{K}/2)$. Applying Lemma~\ref{lem:dis-radius} to $\hat{h}$, we have
\[
l(\hat{h})\leq l(h^{\star})+\gamma_2\left(\sup_{x\in D_K}\frac{\log(2|\mathcal{H}|/\delta_K)}{m_K Q_{0}(x)+ n_K}+\sqrt{\sup_{x\in D_K}\frac{\log(2|\mathcal{H}|/\delta_K)}{m_K Q_{0}(x)+ n_K}l(h^{\star})}\right).
\]
The result follows by noting
that $\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{x\in D_{K}\}}{m_K Q_{0}(x)+n_K}\leq\sup_{x\in\mathcal{X}}\frac{\mathds{1}\{x\in\text{DIS}_{K}\}}{m_K Q_{0}(x)+n_K}$ by Lemma~\ref{lem:Dk-DISk}.
\end{proof}
\section{Proof of Label Complexity}
\begin{proof}
(of Theorem~\ref{thm:Label-Complexity}) Recall that $\xi_{k}=\inf_{x\in D_{k}}Q_{0}(x)$ and $\zeta = \sup_{x\in\text{DIS}_1}\frac1{\alpha Q_0(x)+1}$.
Define event $\mathcal{E}^{(0)}:=\bigcap_{k=0}^{K}\mathcal{E}_{k,\delta_k/2}$. On this event, by induction and Lemma~\ref{lem:h_star_in}, for all $k=0,\dots,K$, $h^{\star}\in V_{k}$, and consequently by Lemma~\ref{lem:Dk-DISk}, $D_k\subseteq\text{DIS}_k$.
For any $k=0,\dots,K-1$, let the number of label queries at iteration
$k$ be $U_k:=\sum_{t=n_{0}+\cdots+n_{k}+1}^{n_{0}+\cdots+n_{k+1}}Z_{t}\mathds{1}\{X_{t}\in D_{k+1}\}$.
\begin{eqnarray*}
Z_{t}\mathds{1}\{X_{t}\in D_{k+1}\} & = & \mathds{1}\{X_{t}\in D_{k+1}\land Q_0(X_t)\leq \inf_{x\in D_{k+1}}Q_0(x)+\frac{1}{\alpha}\} \\
& \leq & \mathds{1}\{X_t \in S(D_{k+1},\alpha)\} \\
& \leq & \mathds{1}\{X_t \in S(\text{DIS}_{k+1},\alpha)\}.
\end{eqnarray*}
Thus, $U_k\leq \sum_{t=n_{0}+\cdots+n_{k}+1}^{n_{0}+\cdots+n_{k+1}}\mathds{1}\{X_t \in S(\text{DIS}_{k+1},\alpha)\}$, where the RHS is a sum of i.i.d. Bernoulli($\Pr(S(\text{DIS}_{k+1},\alpha))$) random variables, so a Bernstein
inequality implies that on an event $\mathcal{E}^{(1,k)}$ of probability
at least $1-\delta_{k}/2$, $\sum_{t=n_{0}+\cdots+n_{k}+1}^{n_{0}+\cdots+n_{k+1}}\mathds{1}\{X_t \in S(\text{DIS}_{k+1},\alpha)\}\leq2n_{k+1}\Pr(S(\text{DIS}_{k+1},\alpha))+2\log\frac{4}{\delta_{k}}$.
Therefore, it suffices to show that on event $\mathcal{E}^{(2)} := \cap_{k=0}^{K}(\mathcal{E}^{(1,k)}\cap \mathcal{E}_{k,\delta_k/2})$, for some absolute constant $c_1$,
\[
\sum_{k=0}^{K-1}n_{k+1}\Pr(S(\text{DIS}_{k+1},\alpha)) \leq c_1\tilde{\theta}(2\nu+\epsilon_K,\alpha)(n\nu+\zeta\log n\log\frac{|\mathcal{H}|\log n}{\delta}+\log n\sqrt{n\nu\zeta\log\frac{|\mathcal{H}|\log n}{\delta}}).
\]
Now, on event $\mathcal{E}^{(2)}$, for any $k<K$, $\Pr(S(\text{DIS}_{k+1},\alpha))= \Pr(S(\text{DIS}(B(h^\star, 2\nu+\epsilon_{k+1})), \alpha))\leq (2\nu+\epsilon_{k+1})\tilde{\theta}(2\nu+\epsilon_{k+1},\alpha)$ where the last inequality follows from Lemma~\ref{lem:dis-coefficient}.
Therefore,
\begin{align*}
\MoveEqLeft \sum_{k=0}^{K-1}n_{k+1}\Pr(S(\text{DIS}_{k+1},\alpha))\\
\leq & n_1+\sum_{k=1}^{K-1}n_{k+1}(2\nu+\epsilon_{k+1})\tilde{\theta}(2\nu+\epsilon_{k+1},\alpha)\\
\leq & 1+\tilde{\theta}(2\nu+\epsilon_K,\alpha)(2n\nu+\sum_{k=1}^{K-1}n_{k+1}\epsilon_{k+1})\\
\leq & 1+\tilde{\theta}(2\nu+\epsilon_K,\alpha)\left(2n\nu+2\gamma_{2}\sum_{k=1}^{K-1}(\sup_{x\in\text{DIS}_{1}}\frac{\log\frac{|\mathcal{H}|}{\delta_k/2}}{(\alpha Q_{0}(x)+1)}+\sqrt{n_k\nu\sup_{x\in\text{DIS}_{1}}\frac{\log\frac{|\mathcal{H}|}{\delta_k/2}}{(\alpha Q_{0}(x)+1)}})\right)\\
\leq & 1+\tilde{\theta}(2\nu+\epsilon_K,\alpha)(2n\nu+2\gamma_2\zeta\log n\log\frac{|\mathcal{H}|(\log n)^2}{\delta}+2\gamma_2\log n\sqrt{n\nu\zeta\log\frac{|\mathcal{H}|(\log n)^2}{\delta}}).
\end{align*}
\end{proof}
\section{Experiment Details}
\subsection{Implementation}\label{subsec:appendix-implement}
All algorithms considered require empirical risk minimization. Since optimizing the 0-1 loss directly is known to be computationally hard, we approximate it by optimizing a squared loss. We use the online gradient descent method of \cite{KL11} to optimize the resulting importance-weighted loss functions.
For IDBAL\xspace, recall that in Algorithm~\ref{alg:main}, we need to find the empirical risk minimizer $\hat{h}_k \gets \arg\min_{h\in V_k} l(h, \tilde{S}_k)$, update the candidate set $V_{k+1} \gets \{ h\in V_k \mid l(h,\tilde{S}_k) \leq l(\hat{h}_k,\tilde{S}_k)+\Delta_k(h,\hat{h}_k)\}$, and check whether $x\in \text{DIS}(V_{k+1})$.
In our experiments, we implement this approximately, following Vowpal Wabbit \cite{vw}. More specifically:
\begin{enumerate}
\item Instead of optimizing the 0-1 loss, which is known to be computationally hard, we use the surrogate loss $l(y,y')=(y-y')^2$.
\item We do not explicitly maintain the candidate set $V_{k+1}$.
\item \label{item:eta} To solve the optimization problem $\min_{h\in V_k} l(h, \tilde{S}_k)=\min_{h\in V_k}\sum_{(X,\tilde{Y},Z)\in\tilde{S}_k}\frac{\mathds{1}\{h(X)\neq \tilde{Y}\}Z}{m_k Q_0(X)+n_k Q_k(X)}$, we ignore the constraint $h\in V_k$ and use online gradient descent with stepsize $\sqrt{\frac{\eta}{t+\eta}}$, where $\eta$ is a parameter. The starting point for gradient descent is set to $\hat{h}_{k-1}$, the ERM from the previous iteration, and the step index $t$ is shared across all iterations (i.e., we do not reset $t$ to 1 in each iteration).
\item \label{item:C} To approximately check whether $x\in \text{DIS}(V_{k+1})$ when the hypothesis space $\mathcal{H}$ consists of linear classifiers, let $w_k$ be the normal vector of the current ERM $\hat{h}_k$, and let $a$ be the current stepsize. We claim $x\in \text{DIS}(V_{k+1})$ if $\frac{|2w_k^\top x|}{a x^\top x} \leq \sqrt{\frac{C\cdot l(\hat{h}_k,\tilde{S}_k)}{m_k\xi_k+n_k}} + \frac{C\log(m_k+n_k)}{m_k\xi_k+n_k}$ (recall $|\tilde{S}_k|=m_k+n_k$ and $\xi_k = \inf_{x\in\text{DIS}(V_k)}Q_0(x)$), where $C$ is a parameter that captures the model capacity. See \citep{KL11} for the rationale of this approximate disagreement test.
\item $\xi_k = \inf_{x\in\text{DIS}(V_k)}Q_0(x)$ can be estimated approximately with a set of unlabeled samples. This estimate is always an upper bound on the true value of $\xi_k$.
\end{enumerate}
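For concreteness, the approximate disagreement test in item~\ref{item:C} can be sketched as follows for linear classifiers; the function and argument names are ours, and the threshold simply transcribes the displayed rule:

```python
import math

def in_disagreement_region(x, w_k, stepsize, emp_loss, m_k, n_k, xi_k, C):
    """Approximate test for x in DIS(V_{k+1}) with linear classifiers.

    w_k is the normal vector of the current ERM, stepsize is the current
    gradient-descent stepsize, and emp_loss is the empirical loss of the
    current ERM on the weighted sample.
    """
    margin = abs(2.0 * sum(wi * xi for wi, xi in zip(w_k, x)))
    norm_sq = sum(xi * xi for xi in x)
    denom = m_k * xi_k + n_k
    threshold = math.sqrt(C * emp_loss / denom) + C * math.log(m_k + n_k) / denom
    return margin / (stepsize * norm_sq) <= threshold

# A point close to the decision boundary of w_k passes the test; a point
# far from it does not.
print(in_disagreement_region([0.001, 1.0], [1.0, 0.0], 0.1, 0.2, 100, 100, 0.5, 1.0))
print(in_disagreement_region([1.0, 0.0], [1.0, 0.0], 0.1, 0.2, 100, 100, 0.5, 1.0))
```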
DBALw\xspace and DBALwm\xspace can be implemented similarly.
\section{Conclusion and Future Work}
We consider active learning with logged data. The logged data are collected by a predetermined logging policy while the learner's goal is to learn a classifier over the entire population. We propose a new disagreement-based active learning algorithm that makes use of warm start, multiple importance sampling, and a debiasing query strategy. We show that theoretically our algorithm achieves better label complexity than alternative methods. Our theoretical results are further validated by empirical experiments on different datasets and logging policies.
This work can be extended in several ways. First, the derivation and analysis of the debiasing strategy are based on a variant of the concentration inequality (\ref{eq:main-concentration}) in subsection~\ref{subsec:main-ideas}. The inequality relates the generalization error to the best error rate $l(h^\star)$, but has a looser variance term than some existing bounds (for example, \citep{CMM10}).
A more refined analysis of the concentration of weighted estimators could better characterize the performance of the proposed algorithm, and might also improve the debiasing strategy. Second, due to the dependency introduced by multiple importance sampling, in Algorithm~\ref{alg:main} the candidate set $V_{k+1}$ is constructed with only the $k$-th segment of data $\tilde{S}_k$ instead of all data collected so far, $\cup_{i=0}^{k}\tilde{S}_i$. One future direction is to investigate how to utilize all collected data while provably controlling the variance of the weighted estimator. Finally, it would be interesting to investigate how to perform active learning from logged observational data without knowing the logging policy.
\section{Experiments}
We now empirically validate our theoretical results by comparing our algorithm with a few alternatives on several datasets and logging policies. In particular, we confirm that the test error of our classifier drops faster than several alternatives as the expected number of label queries increases. Furthermore, we investigate the effectiveness of two key components of our algorithm: multiple importance sampling and the debiasing query strategy.
\subsection{Methodology}
\subsubsection{Algorithms and Implementations}
To the best of our knowledge, no algorithms with theoretical guarantees for this setting have been proposed in the literature. We consider the overall performance of our algorithm against two natural baselines: standard passive learning (\textsc{Passive}) and the disagreement-based active learning algorithm with warm start (\textsc{DBALw\xspace}). To understand the contribution of multiple importance sampling and the debiasing query strategy, we also compare the results with disagreement-based active learning with warm start that uses multiple importance sampling (\textsc{DBALwm\xspace}). We do not compare with standard disagreement-based active learning that ignores the logged data, since the contribution of warm start is clear: it always results in a smaller initial candidate set, and thus leads to fewer label queries.
Precisely, the algorithms we implement are:
\begin{itemize}
\item \textsc{Passive}: A passive learning algorithm that queries labels for all examples in the online sequence and uses the standard importance sampling estimator to combine logged data and online data.
\item \textsc{DBALw\xspace}: A disagreement-based active learning algorithm that uses the standard importance sampling estimator, and constructs the initial candidate set with logged data. This algorithm uses only our first key idea -- warm start.
\item \textsc{DBALwm\xspace}: A disagreement-based active learning algorithm that uses the multiple importance sampling estimator, and constructs the initial candidate set with logged data. This algorithm uses our first and second key ideas, but not the debiasing query strategy. In other words, this method sets $Q_k\equiv 1$ in Algorithm~\ref{alg:main}.
\item \textsc{IDBAL\xspace}: The method proposed in this paper: improved disagreement-based active learning algorithm with warm start that uses the multiple importance sampling estimator and the debiasing query strategy.
\end{itemize}
Our implementation of the above algorithms follows Vowpal Wabbit \cite{vw}. Details can be found in the Appendix.
\subsubsection{Data}
Due to the lack of public datasets for learning with logged data, we convert datasets for standard binary classification into our setting. Specifically, we first randomly select 80\% of the whole dataset as training data; the remaining 20\% is test data. We then randomly select 50\% of the training set as logged data; the remaining 50\% is online data. Finally, we run an artificial logging policy (to be specified later) on the logged data to determine whether each label should be revealed to the learning algorithm or not.
Experiments are conducted on synthetic data and 11 datasets from UCI datasets \cite{L13} and LIBSVM datasets \cite{CL11}. The synthetic data is generated as follows: we generate 6000 30-dimensional points uniformly from hypercube $[-1,1]^{30}$, and labels are assigned by a random linear classifier and then flipped with probability 0.1 independently.
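The synthetic data generation described above can be sketched as follows (the Gaussian draw for the random linear classifier is our own choice; the description only requires some random linear classifier):

```python
import random

random.seed(4)

d, n, noise = 30, 6000, 0.1
w = [random.gauss(0.0, 1.0) for _ in range(d)]   # random linear classifier

data = []
for _ in range(n):
    x = [random.uniform(-1.0, 1.0) for _ in range(d)]    # uniform in [-1, 1]^30
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
    if random.random() < noise:                          # independent label flip
        y = -y
    data.append((x, y))
print(len(data))
```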
We use the following four logging policies:
\begin{itemize}
\item \textsc{Identical:} Each label is revealed with probability 0.005.
\item \textsc{Uniform:} We first assign each instance in the instance space to three groups with (approximately) equal probability. Then the labels in each group are revealed with probability 0.005, 0.05, and 0.5 respectively.
\item \textsc{Uncertainty:} We first train a coarse linear classifier using 10\% of the data. Then, for an instance at distance $r$ to the decision boundary, we reveal its label with probability $\exp(-cr^2)$ where $c$ is some constant. This policy is intended to simulate uncertainty sampling used in active learning.
\item \textsc{Certainty:} We first train a coarse linear classifier using 10\% of the data. Then, for an instance at distance $r$ to the decision boundary, we reveal its label with probability $cr^2$ where $c$ is some constant. This policy is intended to simulate a scenario where an action (i.e. querying for labels in our setting) is taken only if the current model is certain about its consequence.
\end{itemize}
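The four policies can be summarized as reveal probabilities (a sketch; the group assignment, the distance to the coarse classifier's boundary, and the constant $c$ are treated as inputs here rather than computed from data):

```python
import math

def identical(x):
    return 0.005                      # same reveal probability everywhere

def uniform(group):
    return (0.005, 0.05, 0.5)[group]  # three (approximately) equally likely groups

def uncertainty(boundary_dist, c=1.0):
    return math.exp(-c * boundary_dist ** 2)   # high near the boundary

def certainty(boundary_dist, c=1.0):
    return min(1.0, c * boundary_dist ** 2)    # high far from the boundary

print(identical(None), uniform(2), uncertainty(0.0), certainty(0.0))
```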
\subsubsection{Metrics and Parameter Tuning}
The experiments are conducted as follows. For a fixed policy and each dataset $d$, we repeat the following process 10 times. In repetition $k$, we first randomly generate a simulated logged dataset, an online dataset, and a test dataset as stated above. Then for $i=1, 2, \cdots$, we set the horizon of the online data stream to $a_i = 10\times 2^i$ (in other words, we only allow the algorithm to use the first $a_i$ examples in the online dataset), and run algorithm $A$ with parameter set $p$ (to be specified later) using the logged dataset and the first $a_i$ examples in the online dataset. We record $n(d,k,i,A,p)$, the number of label queries, and $e(d,k,i,A,p)$, the test error of the learned linear classifier.
Let $\bar{n}(d,i,A,p)=\frac{1}{10}\sum_{k}n(d,k,i,A,p)$, $\bar{e}(d,i,A,p)=\frac{1}{10}\sum_{k}e(d,k,i,A,p)$. To evaluate the overall performance of algorithm $A$ with parameter set $p$, we use the following area under the curve metric (see also \cite{HAHLS15}):
\begin{align*}
\text{AUC}(d,A,p) = \sum_i & \frac{\bar{e}(d,i,A,p)+\bar{e}(d,i+1,A,p)}{2} \\
& \cdot (\bar{n}(d,i+1,A,p)-\bar{n}(d,i,A,p)).
\end{align*}
A small value of AUC means that the test error decays fast as the number of label queries increases.
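In code, the metric is a trapezoidal area under the (mean queries, mean test error) curve; this sketch assumes the per-horizon means $\bar{n}$ and $\bar{e}$ have already been computed:

```python
def auc(n_bar, e_bar):
    """Area under the test-error-vs-label-queries curve.

    n_bar[i] and e_bar[i] are the mean number of label queries and the
    mean test error at horizon i, averaged over the 10 repetitions.
    """
    return sum((e_bar[i] + e_bar[i + 1]) / 2.0 * (n_bar[i + 1] - n_bar[i])
               for i in range(len(n_bar) - 1))

# Constant test error 0.1 over 100 label queries has area 0.1 * 100 = 10.
print(auc([0.0, 50.0, 100.0], [0.1, 0.1, 0.1]))
```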
The parameter set $p$ consists of two parameters:
\begin{itemize}
\item Model capacity $C$ (see also item~\ref{item:C} in Appendix~\ref{subsec:appendix-implement}). Our theoretical analysis contains a term $C:=O(\log \frac{|\mathcal{H}|}{\delta})$ in the bounds, which is known to be loose in practice \cite{H10}. Therefore, in experiments, we treat $C$ as a parameter to tune. We try $C$ in $\{0.01 \times 2^k\mid k=0, 2, 4, \dots, 18\}$.
\item Learning rate $\eta$ (see also item~\ref{item:eta} in Appendix~\ref{subsec:appendix-implement}). We use online gradient descent with stepsize $\sqrt{\frac{\eta}{t+\eta}}$ . We try $\eta$ in $\{0.0001\times 2^k\mid k=0, 2, 4, \dots, 18\}$.
\end{itemize}
For each policy, we report $\text{AUC}(d,A)=\min_p \text{AUC}(d,A,p)$, the AUC under the parameter set that minimizes AUC for dataset $d$ and algorithm $A$.
\subsection{Results and Discussion}
\begin{table*}[tb]
\begin{minipage}{.5\linewidth}
\centering
\caption{AUC under Identical policy}\label{tab:auc-identical}
\begin{tabular}{lllll}
\toprule
Dataset & Passive & DBALw\xspace & DBALwm\xspace & IDBAL\xspace \\
\midrule
synthetic & 121.77 & 123.61 & 111.16 & \textbf{106.66} \\
letter & 4.40 & 3.65 & 3.82 & \textbf{3.48} \\
skin & 27.53 & 27.29 & 21.48 & \textbf{21.44} \\
magic & 109.46 & 101.77 & 89.95 & \textbf{83.82} \\
covtype & 228.04 & 209.56 & \textbf{208.82} & 220.27 \\
mushrooms & 19.22 & 25.29 & \textbf{18.54} & 23.67 \\
phishing & 78.49 & 73.40 & \textbf{70.54} & 71.68 \\
splice & 65.97 & 67.54 & 65.73 & \textbf{65.66} \\
svmguide1 & 59.36 & 55.78 & \textbf{46.79} & 48.04 \\
a5a & 53.34 & \textbf{50.8} & 51.10 & 51.21 \\
cod-rna & 175.88 & 176.42 & 167.42 & \textbf{164.96} \\
german & 65.76 & 68.68 & \textbf{59.31} & 61.54 \\
\bottomrule
\end{tabular}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\caption{AUC under Uniform policy}\label{tab:auc-uniform}
\begin{tabular}{lllll}
\toprule
Dataset & Passive & DBALw\xspace & DBALwm\xspace & IDBAL\xspace \\
\midrule
synthetic & 113.49 & 106.24 & 92.67 & \textbf{88.38} \\
letter & 1.68 & \textbf{1.29} & 1.45 & 1.59 \\
skin & 23.76 & 21.42 & 20.67 & \textbf{19.58} \\
magic & 53.63 & 51.43 & 51.78 & \textbf{50.19} \\
covtype & \textbf{262.34} & 287.40 & 274.81 & 263.82 \\
mushrooms & 7.31 & 6.81 & \textbf{6.51} & 6.90 \\
phishing & 42.53 & 39.56 & 39.19 & \textbf{37.02} \\
splice & 88.61 & 89.61 & 90.98 & \textbf{87.75} \\
svmguide1 & 110.06 & 105.63 & 98.41 & \textbf{96.46} \\
a5a & \textbf{46.96} & 48.79 & 49.50 & 47.60 \\
cod-rna & 63.39 & 63.30 & 66.32 & \textbf{58.48} \\
german & 63.60 & 55.87 & 56.22 & \textbf{55.79} \\
\bottomrule
\end{tabular}
\end{minipage}
\end{table*}
\begin{table*}[tb]
\begin{minipage}{.5\linewidth}
\centering
\caption{AUC under Uncertainty policy}\label{tab:auc-uncertainty}
\begin{tabular}{lllll}
\toprule
Dataset & Passive & DBALw\xspace & DBALwm\xspace & IDBAL\xspace \\
\midrule
synthetic & 117.86 & 113.34 & 100.82 & \textbf{99.1} \\
letter & \textbf{0.65} & 0.70 & 0.71 & 1.07 \\
skin & 20.19 & 21.91 & \textbf{18.89} & 19.10 \\
magic & 106.48 & 101.90 & 99.44 & \textbf{90.05} \\
covtype & 272.48 & 274.53 & 271.37 & \textbf{251.56} \\
mushrooms & 4.93 & 4.64 & 3.77 & \textbf{2.87} \\
phishing & 52.96 & 48.62 & \textbf{46.55} & 46.59 \\
splice & 62.94 & 63.49 & 60.00 & \textbf{58.56} \\
svmguide1 & 117.59 & 111.58 & \textbf{98.88} & 100.44 \\
a5a & 70.97 & 72.15 & \textbf{65.37} & 69.54 \\
cod-rna & 60.12 & 61.66 & 64.48 & \textbf{53.38} \\
german & 62.64 & 58.87 & 56.91 & \textbf{56.67} \\
\bottomrule
\end{tabular}
\end{minipage}
\begin{minipage}{.5\linewidth}
\centering
\caption{AUC under Certainty policy}\label{tab:auc-certainty}
\begin{tabular}{lllll}
\toprule
Dataset & Passive & DBALw\xspace & DBALwm\xspace & IDBAL\xspace \\
\midrule
synthetic & 114.86 & 111.02 & 92.39 & \textbf{88.82} \\
letter & 2.02 & \textbf{1.43} & 2.46 & 1.87 \\
skin & 22.89 & \textbf{17.92} & 18.17 & 18.11 \\
magic & 231.64 & 225.59 & 205.95 & \textbf{202.29} \\
covtype & 235.68 & 240.86 & 228.94 & \textbf{216.57} \\
mushrooms & 16.53 & 14.62 & 17.97 & \textbf{11.65} \\
phishing & 34.70 & 37.83 & 35.28 & \textbf{33.73} \\
splice & 125.32 & 129.46 & 122.74 & \textbf{122.26} \\
svmguide1 & 94.77 & 91.99 & 92.57 & \textbf{84.86} \\
a5a & \textbf{119.51} & 132.27 & 138.48 & 125.53 \\
cod-rna & 98.39 & 98.87 & 90.76 & \textbf{90.2} \\
german & 63.47 & \textbf{58.05} & 61.16 & 59.12 \\
\bottomrule
\end{tabular}
\end{minipage}
\end{table*}
We report the AUCs for each algorithm under each policy and each dataset in Tables~\ref{tab:auc-identical} to \ref{tab:auc-certainty}. The test error curves can be found in Appendix.
\paragraph{Overall Performance} The results confirm that the test error of the classifier output by our algorithm (\textsc{IDBAL\xspace}) drops faster than the baselines \textsc{passive} and \textsc{DBALw\xspace}: as demonstrated in Tables~\ref{tab:auc-identical} to \ref{tab:auc-certainty}, \textsc{IDBAL\xspace} achieves lower AUC than both \textsc{Passive} and \textsc{DBALw\xspace} for a majority of datasets under all policies. We also see that \textsc{IDBAL\xspace} performs better than or close to \textsc{DBALwm\xspace} for all policies other than Identical. This confirms that among our two key novel ideas, using multiple importance sampling consistently results in a performance gain. Using the debiasing query strategy over multiple importance sampling also leads to performance gains, but these are less consistent.
\paragraph{The Effectiveness of Multiple Importance Sampling} As noted in Section~\ref{sec:estimator}, multiple importance sampling estimators have lower variance than standard importance sampling estimators, and thus can lead to a lower label complexity. This is verified in our experiments that \textsc{DBALwm\xspace} (DBAL with multiple importance sampling estimators) has a lower AUC than \textsc{DBALw\xspace} (DBAL with standard importance sampling estimator) on a majority of datasets under all policies.
\paragraph{The Effectiveness of the Debiasing Query Strategy} Under Identical policy, all labels in the logged data are revealed with equal probability. In this case, our algorithm \textsc{IDBAL\xspace} queries all examples in the disagreement region as \textsc{DBALwm\xspace} does. As shown in Table~\ref{tab:auc-identical}, \textsc{IDBAL\xspace} and \textsc{DBALwm\xspace} achieves the best AUC on similar number of datasets, and both methods outperform \textsc{DBALw\xspace} over most datasets.
Under Uniform, Uncertainty, and Certainty policies, labels in the logged data are revealed with different probabilities. In this case, \textsc{IDBAL\xspace}'s debiasing query strategy takes effect: it queries less frequently the instances that are well-represented in the logged data, and we show that this could lead to a lower label complexity theoretically. In our experiments, as shown in Tables~\ref{tab:auc-uniform} to \ref{tab:auc-certainty}, \textsc{IDBAL\xspace} does indeed outperform \textsc{DBALwm\xspace} on these policies empirically.
\section{Introduction}
We consider learning a classifier from logged data. Here, the learner has access to a logged labeled dataset that has been collected according to a known pre-determined policy, and his goal is to learn a classifier that predicts the labels accurately over the entire population, not just conditioned on the logging policy.
This problem arises frequently in many natural settings. An example is predicting the efficacy of a treatment as a function of patient characteristics based on observed data. Doctors may assign the treatment to patients based on some predetermined rule; recording these patient outcomes produces a logged dataset where outcomes are observed conditioned on the doctors' assignment. A second example is recidivism prediction, where the goal is to predict whether a convict will re-offend. Judges use their own predefined policy to grant parole, and if parole is granted, then an outcome (reoffense or not) is observed. Thus the observed data records outcomes conditioned on the judges' parole policy, while the learner's goal is to learn a predictor over the entire population.
A major challenge in learning from logged data is that the logging policy may leave large areas of the data distribution under-explored. Consequently, empirical risk minimization (ERM) on the logged data leads to classifiers that may be highly suboptimal on the population. When the logging policy is known, a second option is to use a {\em{weighted}} ERM, that reweighs each observed labeled data point to ensure that it reflects the underlying population. However, this may lead to sample inefficiency if the logging policy does not adequately explore essential regions of the population. A final approach, typically used in clinical trials, is controlled random experimentation -- essentially, ignore the logged data, and record outcomes for fresh examples drawn from the population. This approach is expensive due to the high cost of trials, and wasteful since it ignores the observed data.
Motivated by these challenges, we propose active learning to combine logged data with a small amount of strategically chosen labeled data that can be used to correct the bias in the logging policy. This solution has the potential to achieve the best of both worlds by limiting experimentation to achieve higher sample efficiency, and by making the most of the logged data. Specifically, we assume that in addition to the logged data, the learner has some additional unlabeled data that he can selectively ask an annotator to label. The learner's goal is to learn a highly accurate classifier over the entire population by using a combination of the logged data and with as few label queries to the annotator as possible.
How can we utilize logged data for better active learning? This problem has not been studied to the best of our knowledge. A naive approach is to use the logged data to come up with a {\em{warm start}} and then do standard active learning. In this work, we show that we can do even better. In addition to the warm start, we show how to use multiple importance sampling estimators to utilize the logged data more efficiently. Additionally, we introduce a novel debiasing policy that selectively avoids label queries for those examples that are highly represented in the logged data.
Combining these three approaches, we provide a new algorithm. We prove that our algorithm is statistically consistent, and has a lower label requirement than simple active learning that uses the logged data as a warm start. Finally, we evaluate our algorithm experimentally on various datasets and logging policies. Our experiments show that the performance of our method is either the best or close to the best for a variety of datasets and logging policies. This confirms that active learning to combine logged data with carefully chosen labeled data may indeed yield performance gains.
\section{Preliminaries}
\subsection{Problem Setup}
Instances are drawn from an instance space $\mathcal{X}$ and a label space
$\mathcal{Y}=\{0,1\}$. There is an underlying data distribution $D$ over $\mathcal{X}\times\mathcal{Y}$ that describes the population. There is a hypothesis space $\mathcal{H}\subset\mathcal{Y}^{\mathcal{X}}$. For simplicity, we assume $\mathcal{H}$ is a finite set, but our results can be generalized to VC-classes by standard arguments \cite{VC71}.
The learning algorithm has access to two sources of data: logged data, and online data. The logged data are generated from $m$ examples $\{(X_{t},Y_{t})\}_{t=1}^{m}$ drawn i.i.d. from $D$, and a logging policy $Q_0: \mathcal{X} \rightarrow [0,1]$ that determines the probability of observing the label. For each example $(X_{t},Y_{t})$ ($1\leq t \leq m$), an independent Bernoulli random variable $Z_t$ is drawn with expectation $Q_0(X_t)$, and then the label $Y_t$ is revealed to the learning algorithm if $Z_t=1$\footnote{Note that this generating process implies the standard unconfoundedness assumption in the counterfactual inference literature: $\Pr(Y_t,Z_t\mid X_t)=\Pr(Y_t\mid X_t)\Pr(Z_t\mid X_t)$, that is, given the instance $X_t$, its label $Y_t$ is conditionally independent with the action $Z_t$ (whether the label is observed).}. We call $T_{0}=\{(X_{t},Y_{t},Z_{t})\}_{t=1}^{m}$ the logged dataset. From the algorithm's perspective, we assume it knows the logging policy $Q_0$, and only observes instances $\{X_t\}_{t=1}^{m}$, decisions of the policy $\{Z_t\}_{t=1}^{m}$, and revealed labels $\{Y_{t}\mid Z_t=1\}_{t=1}^{m}$.
The online data are generated as follows. Suppose there is a stream of another $n$ examples $\{(X_{t},Y_{t})\}_{t=m+1}^{m+n}$ drawn i.i.d. from distribution $D$. At time $t$ ($m<t\leq m+n$), the algorithm uses its query policy to compute a bit $Z_t \in \{0,1\}$, and then the label $Y_t$ is revealed to the algorithm if $Z_t=1$. The computation of $Z_t$ may in general be randomized, and is based on the observed logged data $T_0$, observed instances $\{X_i\}_{i=m+1}^{t}$, previous decisions$\{Z_i\}_{i=m+1}^{t-1}$, and observed labels $\{Y_i\mid Z_i=1\}_{i=m+1}^{t-1}$.
The goal of the algorithm is to learn a classifier $h\in\mathcal{H}$ from observed logged data and online data. Fixing $D$, $Q_0$, $m$, $n$, the performance measures are: (1) the error rate $l(h):=\Pr_D(h(X)\neq Y)$ of the output classifier, and (2) the number of label queries on the online data. Note that the error rate is over the entire population $D$ instead of conditioned on the logging policy, and that we assume the logged data $T_0$ come at no cost.
In this work, we are interested in the situation where $n$ is about the same as or less than $m$.
\subsection{Background on Disagreement-Based Active Learning}
Our algorithm is based on Disagreement-Based Active Learning (DBAL) which has rigorous theoretical guarantees and can be implemented practically (see \cite{H14} for a survey, and \cite{HY15,HAHLS15} for some recent developments). DBAL iteratively maintains a candidate set of classifiers that contains the optimal classifier $h^{\star}:=\arg\min_{h\in\mathcal{H}}l(h)$ with high probability. At the $k$-th iteration, the candidate set $V_k$ is constructed as all classifiers which have low estimated error on examples observed up to round $k$. Based on $V_k$, the algorithm constructs a disagreement set $D_k$ to be a set of instances on which there are at least two classifiers in $V_k$ that predict different labels. Then the algorithm draws a set $T_k$ of unlabeled examples, where the size of $T_k$ is a parameter of the algorithm. For each instance $X\in T_k$, if it falls into the disagreement region $D_k$, then the algorithm queries for its label; otherwise, observing that all classifiers in $V_k$ have the same prediction on $X$, its label is not queried. The queried labels are then used to update future candidate sets.
\subsection{Background on Error Estimators}\label{sec:estimator}
Most learning algorithms, including DBAL, require estimating the error rate of a classifier. A good error estimator should be unbiased and of low variance. When instances are observed with different probabilities, a commonly used error estimator is the standard importance sampling estimator that reweighs each observed labeled example according to the inverse probability of observing it.
Consider a simplified setting where the logged dataset $T_0 = (X_i, Y_i, Z_i)_{i=1}^{m}$ and $\Pr(Z_i=1\mid X_i)=Q_0(X_i)$. On the online dataset $T_1 = (X_i, Y_i, Z_i)_{i=m+1}^{m+n}$, the algorithm uses a fixed query policy $Q_1$ to determine whether to query for labels, that is, $\Pr(Z_i=1\mid X_i)=Q_1(X_i)$ for $m<i\leq m+n$. Let $S = T_0 \cup T_1$.
In this setting, the standard importance sampling (IS) error estimator for a classifier $h$ is:
\begin{align}
l_{\text{IS}}(h,S) := & \frac{1}{m+n}\sum_{i=1}^{m}\frac{\mathds{1}\{h(X_i)\neq Y_i\}Z_i}{Q_0(X_i)}\nonumber\\
& +\frac{1}{m+n}\sum_{i=m+1}^{m+n}\frac{\mathds{1}\{h(X_i)\neq Y_i\}Z_i}{Q_1(X_i)}.\label{eq:IS}
\end{align}
$l_{\text{IS}}$ is unbiased, and its variance is proportional to $\sup_{i=0,1;x\in\mathcal{X}}\frac{1}{Q_i(x)}$. Although the learning algorithm can choose its query policy $Q_1$ to avoid $Q_1(X_i)$ to be too small for $i>m$, $Q_0$ is the logging policy that cannot be changed. When $Q_0(X_i)$ is small for some $i\leq m$, the estimator in (\ref{eq:IS}) have a high variance such that it may be even better to just ignore the logged dataset $T_0$.
An alternative is the multiple importance sampling (MIS) estimator with balanced heuristic \cite{VG95}:
\begin{equation} \label{eq:MIS}
l_{\text{MIS}}(h,S):=\sum_{i=1}^{m+n}\frac{\mathds{1}\{h(X_i)\neq Y_i\}Z_i}{mQ_0(X_i)+nQ_1(X_i)}.
\end{equation}
It can be proved that $l_{\text{MIS}}(h,S)$ is indeed an unbiased estimator for $l(h)$. Moreover, as proved in \cite{OZ00,ABSJ17}, (\ref{eq:MIS}) always has a lower variance than both (\ref{eq:IS}) and the standard importance sampling estimator that ignores the logged data.
In this paper, we use multiple importance sampling estimators, and write $l_{\text{MIS}}(h,S)$ as $l(h,S)$.
\paragraph{Additional Notations}
In this paper, unless otherwise specified, all probabilities and expectations are over the distribution $D$, and we drop $D$ from subscripts henceforth.
Let $\rho(h_{1},h_{2}):=\Pr(h_{1}(X)\neq h_{2}(X))$ be the disagreement mass between $h_1$ and $h_2$, and $\rho_{S}(h_{1},h_{2}):=\frac{1}{N}\sum_{i=1}^{N}\mathds{1}\{h_{1}(x_{i})\neq h_{2}(x_{i})\}$ for $S=\{x_{1},x_{2},\dots,x_{N}\}\subset\mathcal{X}$ be the empirical disagreement mass between $h_1$ and $h_2$ on $S$.
For any $h\in\mathcal{H}$, $r>0$, define $B(h,r):=\{h'\in\mathcal{H}\mid\rho(h,h')\leq r\}$ to be $r$-ball around $h$.
For any $V\subseteq\mathcal{H}$, define the disagreement region $\text{DIS}(V):=\{x\in\mathcal{X}\mid\exists h_{1}\neq h_{2}\in V\text{ s.t. }h_{1}(x)\neq h_{2}(x)\}$.
\section{Related Work}
Learning from logged observational data is a fundamental problem in machine learning with applications to causal inference \cite{SJS17}, information retrieval \cite{SLLK10, LCKG15, HLR16}, recommender systems \cite{LCLS10, SSSCJ16}, online learning \cite{AHKL+14, WA17}, and reinforcement learning \cite{Thomas2015, TTG15, Mandel16}. This problem is also closely related to covariate shift \cite{Z04, SKM07, BBCKPV10}. Two variants are widely studied -- first, when the logging policy is known, a problem known as learning from logged data \cite{LCKG15, TTG15,SJ15CRM,SJ15self}, and second, when this policy is unknown \cite{JSS16, AI16,Kallus17partitioning, SJS17}, a problem known as learning from observational data. Our work addresses the first problem.
When the logging policy is {\em{unknown}}, the direct method \cite{DLL11} finds a classifier using observed data. This method, however, is vulnerable to selection bias \cite{HLR16,JSS16}. Existing de-biasing procedures include~\cite{AI16,Kallus17partitioning}, which proposes a tree-based method to partition the data space, and \cite{JSS16,SJS17}, which proposes to use deep neural networks to learn a good representation for both the logged and population data.
When the logging policy is {\em{known}}, we can learn a classifier by optimizing a loss function that is an unbiased estimator of the expected error rate. Even in this case, however, estimating the expected error rate of a classifier is not completely straightforward and has been one of the central problems in contextual bandit \citep{WA17}, off-policy evaluation \citep{JL16}, and other related fields. The most common solution is to use importance sampling according to the inverse propensity scores \cite{RR83}. This method is unbiased when propensity scores are accurate, but may have high variance when some propensity scores are close to zero. To resolve this, \cite{BPQC+13, SLLK10,SJ15CRM} propose to truncate the inverse propensity score, \cite{SJ15self} proposes to use normalized importance sampling, and \cite{JL16, DLL11,TB16, WA17} propose doubly robust estimators. Recently, \cite{TTG15} and \cite{ABSJ17} suggest adjusting the importance weights according to data to further reduce the variance. We use the multiple importance sampling estimator (which have also been recently studied in \cite{ABSJ17} for policy evaluation), and we prove this estimator concentrates around the true expected loss tightly.
Most existing work on learning with logged data falls into the passive learning paradigm, that is, they first collect the observational data and then train a classifier. In this work, we allow for active learning, that is, the algorithm could adaptively collect some labeled data. It has been shown in the active learning literature that adaptively selecting data to label can achieve high accuracy at low labeling cost \cite{BBL09,BHLZ10,H14,ZC14,HAHLS15}. \citet{KAHHL17} study active learning with bandit feedback and give a disagreement-based learning algorithm.
To the best of our knowledge, there is no prior work with theoretical guarantees that combines passive and active learning with a logged observational dataset. \citet{BDL09} consider active learning with warm-start where the algorithm is presented with a labeled dataset prior to active learning, but the labeled dataset is not observational: it is assumed to be drawn from the same distribution for the entire population, while in our work, we assume the logged dataset is in general drawn from a different distribution by a logging policy.
|
1,108,101,565,005 | arxiv | \section{Introduction}
The multiple-access channel (MAC) is the fundamental information theory problem that addresses coordination among independent parties. In this problem, multiple transmitters\footnote{Throughout this paper, we will focus on the case with two transmitters.} independently send signals into a noisy channel, and a receiver attempts to recover a message from each transmitter. The MAC was alluded to by Shannon in \cite{Shannon1961}; the discrete-memoryless version was formally stated and its capacity region determined in \cite{Ahlswede1971,Liao1972,Slepian1973}. The capacity region for the Gaussian case was found in \cite{Wyner1974,Cover1975}.
These results were first-order asymptotic, meaning they considered the channel coding rates in the regime where the probability of error goes to zero and the blocklength goes to infinity. One may consider refinements to these results. For example, a strong converse states that, if the probability of error is fixed above zero and the blocklength goes to infinity, then the set of achievable rates is identical to the standard capacity region. The strong converse for the discrete-memoryless MAC was first proved by Dueck in \cite{Dueck1981}; this argument made use of the blowing-up lemma and a so-called wringing step. An alternative strong converse proof was presented by Ahlswede in \cite{Ahlswede1982}; this proof used Augustin's converse argument \cite{Augustin1966} in place of the blowing-up lemma, followed by a more refined wringing step. A strong converse for the Gaussian MAC was proved in \cite{Fong2016}, using an argument based on that of \cite{Ahlswede1982}.
One may refine the strong converse even further by fixing the probability of error, and asking how quickly the coding rates at blocklength $n$ approach the capacity region. This work dates back to Strassen \cite{Strassen}, who showed that for the point-to-point channel coding problem, the backoff from capacity at blocklength $n$ is $O(1/\sqrt{n})$, and also characterized the coefficient on this term. Recently, there has been renewed interest in this second-order (also known as \emph{dispersion}) regime following \new{\cite{Hayashi2009}, which refined Strassen's asymptotic analysis via the information spectrum, and \cite{Polyanskiy2010a}, which also focused} on non-asymptotic information theoretic bounds.
However, in the fixed-error second-order regime, the MAC has turned out to be significantly more difficult than the point-to-point channel. Achievable bounds are proved in \cite{Tan2014,Haim2012,Huang2012,MolavianJazi2012,Scarlett2015a,MolavianJazi2015}, each of which gives lower bounds of order $O(1/\sqrt{n})$ on the back-off term in the coding rate. Second-order results for the related problem of the MAC with degraded message sets were presented in \cite{Scarlett2015,Scarlett2015b}, including matching second-order converse bounds. For the standard MAC under the maximal probability of error criterion, a second-order converse bound is presented in \cite{Moulin2013}. \new{Recently, a bound for the maximal probability of error version, based on the technique of the present paper, was presented in \cite{Wei2021}, which was published after the preprint of this paper. (See Sec.~\ref{sec:maximal} for a brief discussion of the maximal-error case.)} Herein we focus on the average probability of error case. Second-order results for a random-access model, wherein an unknown number of transmitters send messages to a receiver, were derived in \cite{Effros2018}.
Despite this progress, the best converse bound for the second-order rate of the standard MAC with average probability of error has remained \cite{Ahlswede1982}. While \cite{Ahlswede1982} is primarily interested in proving a strong converse, rather than characterizing the asymptotic behavior of the coding rate, the converse bound presented there shows that
\begin{equation}
\mathcal{R}(n,\epsilon)\subseteq\mathcal{C}+O\left(\frac{\log n}{\sqrt{n}}\right)
\end{equation}
where $\mathcal{R}(n,\epsilon)$ is the set of achievable rate pairs at blocklength $n$ and average probability of error $\epsilon$, and $\mathcal{C}$ is the capacity region. In this paper, we improve upon the converse bound from \cite{Ahlswede1982} to show that for most MACs of interest---including discrete-memoryless MACs and the Gaussian MAC---the achievable rate region is bounded by
\begin{equation}\label{eq:second_order}
\mathcal{R}(n,\epsilon)\subseteq\mathcal{C}+O\left(\frac{1}{\sqrt{n}}\right).
\end{equation}
This result asserts that achievable second-order bounds of \cite{Tan2014,Haim2012,Huang2012,MolavianJazi2012,MolavianJazi2015,Scarlett2015a} are order-optimal; that is, the gap between the capacity region and the blocklength-$n$ achievable region, in either direction, is at most $O(1/\sqrt{n})$. We provide a specific upper bound on the coefficient in the $O(1/\sqrt{n})$ term, although for most channels it does not match the achievability bounds.
The main difficulty in proving a second-order converse for the MAC is to properly deal with the independence between the transmitters. The problem variant with degraded message sets, as studied in \cite{Scarlett2015,Scarlett2015b}, seems to be easier precisely because the transmitted signals are \emph{not} independent. The independence that is inherent to the standard MAC prohibits many of the methods to prove second-order converses for the point-to-point channel; for example, one cannot restrict the inputs to a fixed type (empirical distribution), which is one of the steps in the point-to-point converse in \cite{Polyanskiy2010a}, since imposing a fixed joint type on the two input signals creates dependence. An alternative approach adopted in \cite{Liu2019} to prove second-order converses uses the notion of \emph{reverse hypercontractivity}. This technique provides a strengthening of Fano's inequality, wherein the coding rate is upper bounded by the mutual information plus an $O(1/\sqrt{n})$ error term. However, this technique relies on the geometric average error criterion, which is stronger than the usual average error criterion (but weaker than the maximal error criterion). The method of \cite{Liu2019} can be applied to the average error criterion by first expurgating the code---i.e., removing some of the codewords with the largest probability of error. However, with the MAC, we cannot just expurgate codewords, we must expurgate codeword \emph{pairs}, which again introduces some dependence between inputs. For this reason, reverse hypercontractivity can be viewed as a replacement for the blowing-up lemma or Augustin's converse, but does not remove the need for wringing. Interestingly, the technique that we use here seems to be related to \emph{hypercontractivity}; see Sec.~\ref{sec:other_measures} for more details.
To handle the independence between transmitters, the strong converse of \cite{Ahlswede1982} adopted the following approach: given any MAC code, first expurgate it by restricting to those channel inputs with limited maximal probability of error. Of course, this expurgation introduces some dependence between the transmissions. Second, this dependence is ``wrung out'' by further restricting the channel inputs so as to restore some measure of independence between them. Our bound follows the same basic outline, but we use a different technique for wringing. Namely, we introduce a new dependence measure called \emph{wringing dependence}. In the wringing step, we restrict the channel inputs so that the wringing dependence between them is small. This method of wringing proves to be more efficient than that of \cite{Ahlswede1982}. In addition to being critical to our converse proof, the wringing dependence measure is interesting in its own right: it satisfies many natural properties of any dependence measure, including the data processing inequality, and all 7 of the axioms for dependence measures that R\'enyi proposed in \cite{Renyi1959}. Using this tool, we show that a bound of the form \eqref{eq:second_order} holds for any MAC that satisfies two regularity conditions. All discrete-memoryless MACs, and the Gaussian MAC, are shown to satisfy these conditions.
The remainder of the paper is organized as follows. Sec.~\ref{sec:preliminaries} gives notational conventions and describes the setup for the MAC problem. Sec.~\ref{sec:wringing_dependence} is devoted to the wringing dependence: it is defined, some simple examples are presented, and its main properties are proved. Sec.~\ref{sec:fbl} gives a finite blocklength converse bound for the MAC; this bound includes the core steps of our converse argument based on the wringing dependence. In Sec.~\ref{sec:asymptotics}, second-order asymptotic bounds are proved, applying the finite blocklength bound from Sec.~\ref{sec:fbl} to prove \eqref{eq:second_order} under certain regularity conditions. Specifically, two second-order bounds are proved: one that applies to any channel that satisfies two regularity conditions, and a tighter bound that holds for discrete-memoryless channels. Sec.~\ref{sec:examples} illustrates the results with some specific example channels, including the Gaussian MAC. We conclude in Sec.~\ref{sec:conclusion}. Several of the more technical proofs are contained in appendices.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Notation}
Throughout, all logs and exponential have base $e$ unless otherwise specified; log base 2 is denoted $\log_2$. For a random variable, we use the corresponding calligraphic letter to indicate its alphabet; e.g. $X$ has alphabet $\mathcal{X}$. While most results in the paper hold for arbitrary probability spaces, to simplify notation we do not typically specify the event space. For an alphabet $\mathcal{X}$, the set of all distributions on that alphabet is denoted $\mathcal{P}(\mathcal{X})$. Given two alphabets $\mathcal{X},\mathcal{Y}$, the channel $W$ from $\mathcal{X}$ to $\mathcal{Y}$ is a collection $(W_x)_{x\in\mathcal{X}}$ where $W_x\in\mathcal{P}(\mathcal{Y})$ for each $x\in\mathcal{X}$. The set of all channels from $\mathcal{X}$ to $\mathcal{Y}$ is denoted $\mathcal{P}(\mathcal{X}\to\mathcal{Y})$. We will also sometimes use the notation $P_{Y|X}$ for a channel from $\mathcal{X}$ to $\mathcal{Y}$ where $P_{Y|X=x}\in\mathcal{P}(\mathcal{Y})$ is the conditional distribution given $X=x$. We use $\mathbb{E} [X]$ for expectation of a real-valued random variable $X$; usually the underlying distribution will be clear from context, but if not we write $\mathbb{E}_P [X]$ to mean $\int X dP$. For variance, $\var (X)$ or $\var_P (X)$ are used in the same way. The probability of an event is denoted with $\mathbb{P}$ in a similar manner. For a set $\mathcal{A}\subset\mathcal{X}$, we write the indicator function for $\mathcal{A}$ as $1(x\in \mathcal{A})$. For an integer $n$, we denote $[n]=\{1,\ldots,n\}$. A sequence $x^n\in\mathcal{X}^n$ means $x^n=(x_1,\ldots,x_n)$. We adopt the standard $O(\cdot)$ and $o(\cdot)$ notations. Specifically, for functions $f(n),g(n)$, we write $g(n)=O(f(n))$ to indicate
\begin{equation}
\limsup_{n\to\infty}\left|\frac{g(n)}{f(n)}\right|<\infty.
\end{equation}
Similarly, $g(n)=o(f(n))$ means $\lim_{n\to\infty} g(n)/f(n)=0$. We also use this notation when the limit goes to $0$ instead of infinity; for example $g(\delta)=O(f(\delta))$ means $\limsup_{\delta\to 0} |g(\delta)/f(\delta)|<\infty$. We write $|x|^+=\max\{0,x\}$ for the positive part.
We also adopt the following standard definitions. Given two distributions $P,Q\in\mathcal{P}(\mathcal{X})$, the Kullback-Leibler divergence is denoted
\begin{equation}
D(P\|Q)=\mathbb{E}_P \left[\log \frac{dP}{dQ}\right]
\end{equation}
where $\frac{dP}{dQ}$ is the Radon-Nikodym derivative. We will also need the R\'enyi divergence of order $\infty$, given by
\begin{equation}
D_\infty(P\|Q)=\sup_{\mathcal{A}\subset\mathcal{X}} \log \frac{P(\mathcal{A})}{Q(\mathcal{A})}
\end{equation}
where the supremum is over all events $\mathcal{A}$ in the probability space. The total variational distance is
\begin{equation}
d_{TV}(P,Q)=\sup_{\mathcal{A}\subset\mathcal{X}} |P(\mathcal{A})-Q(\mathcal{A})|.
\end{equation}
The hypothesis testing fundamental limit is given by
\begin{equation}
\beta_\alpha(P,Q)=\inf_{\substack{T:\mathcal{X}\to[0,1],\\
\mathbb{E}_P [T(X)]\ge \alpha}} \mathbb{E}_Q [T(X)].
\end{equation}
Here, $T(x)$ represents the probability that a hypothesis test outputs hypothesis $1$ when $X=x$. The divergence variance is denoted
\begin{equation}
V(P\|Q)=\var_P \left(\log \frac{dP}{dQ}\right).
\end{equation}
\new{The third absolute moment of the log-likelihood ratio is given by
\begin{equation}
T(P\|Q)=\mathbb{E}_P\left[\left|\log\frac{dP}{dQ}-D(P\|Q)\right|^3\right].
\end{equation}}%
For distributions $P_X\in\mathcal{P}(\mathcal{X}),Q_Y\in\mathcal{P}(\mathcal{Y})$ and a channel $W\in\mathcal{P}(\mathcal{X}\to\mathcal{Y})$, the conditional divergence and conditional divergence variance are denoted
\begin{align}
D(W\|Q_Y|P_X)&=\int dP_X(x) D(W_x\|Q_Y),\\
V(W\|Q_Y|P_X)&=\int dP_X(x) V(W_x\|Q_Y).
\end{align}
Given joint distribution $P_{XY}\in\mathcal{P}(\mathcal{X}\times\mathcal{Y})$, the mutual information is given by
\begin{equation}
I(X;Y)=D(P_{Y|X}\|P_Y|P_X)
\end{equation}
where $P_{X},P_{Y},P_{Y|X}$ are the induced marginal and conditional distributions. The conditional mutual information is given by
\begin{equation}
I(X;Y|Z)=D(P_{Y|XZ}\|P_{Y|Z}|P_{\new{X}Z}).
\end{equation}
For a discrete distribution $P_X$, the entropy is
\begin{equation}
H(X)=\sum_{x\in\mathcal{X}} -P_X(x)\log P_X(x).
\end{equation}
We also use $H_b(p)$ to denote the binary entropy; i.e. $H_b(p)=H(X)$ where $X\sim\text{Ber}(p)$.
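For finite alphabets, all of the quantities above are directly computable. The following Python sketch (illustrative only; the helper names are ours, not standard) evaluates $D(P\|Q)$, $D_\infty(P\|Q)$, $d_{TV}(P,Q)$, and $\beta_\alpha(P,Q)$ for pmfs stored as arrays. For $D_\infty$, on a finite alphabet the supremum over events is attained on a single atom by the mediant inequality, so it reduces to the maximum log-likelihood ratio; for $\beta_\alpha$, the Neyman--Pearson lemma says the optimal test accepts atoms in decreasing order of likelihood ratio, randomizing on the last atom to meet the constraint $\mathbb{E}_P[T(X)]\ge\alpha$ with equality.

```python
import numpy as np

def kl_divergence(P, Q):
    """D(P||Q) for pmfs with Q > 0 wherever P > 0."""
    m = P > 0
    return float(np.sum(P[m] * np.log(P[m] / Q[m])))

def renyi_inf(P, Q):
    """D_inf(P||Q): maximum log-likelihood ratio over the support of P."""
    m = P > 0
    return float(np.max(np.log(P[m] / Q[m])))

def total_variation(P, Q):
    """d_TV(P,Q) = (1/2) sum |P - Q| on a finite alphabet."""
    return 0.5 * float(np.sum(np.abs(P - Q)))

def beta_alpha(alpha, P, Q):
    """Optimal type-II error via the Neyman-Pearson lemma: accept atoms
    in decreasing order of P/Q, splitting the last atom fractionally."""
    order = np.argsort(-P / np.maximum(Q, 1e-300))
    pp = qq = 0.0
    for i in order:
        if pp + P[i] >= alpha:
            frac = (alpha - pp) / P[i] if P[i] > 0 else 0.0
            return qq + frac * Q[i]
        pp += P[i]
        qq += Q[i]
    return qq
```

For example, with $P=(1/2,1/2)$ and $Q=(1/4,3/4)$, these return $D_\infty(P\|Q)=\log 2$ and $\beta_{1/2}(P,Q)=1/4$.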
\subsection{Multiple-Access Channel Problem Setup}
A one-shot multiple-access channel (MAC) with two users is given by a channel $W\in\mathcal{P}(\mathcal{X}\times\mathcal{Y}\to \mathcal{Z})$ where $\mathcal{X}$ and $\mathcal{Y}$ are the input alphabets, and $\mathcal{Z}$ is the output alphabet. A (stochastic) code is given by
\begin{enumerate}
\item a user 1 encoder $P_{X|I_1}\in\mathcal{P}([M_1]\to\mathcal{X})$,
\item a user 2 encoder $P_{Y|I_2}\in\mathcal{P}([M_2]\to\mathcal{Y})$,
\item a decoder $P_{\hat{I}_1,\hat{I}_2|Z}\in\mathcal{P}(\mathcal{Z}\to [M_1]\times [M_2])$.
\end{enumerate}
The average probability of error is given by $\mathbb{P}((\hat{I}_1,\hat{I}_2)\ne (I_1,I_2))$
where $(I_1,I_2)$ represent the messages, which are uniformly distributed over $[M_1]\times[M_2]$, and
\begin{equation}
(X,Y,Z,\hat{I}_1,\hat{I}_2)\new{|(I_1,I_2)=(i_1,i_2)\sim P_{X|I_1=i_1}(x)P_{Y|I_2=i_2}(y)W_{xy}(z) P_{\hat{I}_1,\hat{I}_2|Z=z}(\hat{i}_1,\hat{i}_2)}.
\end{equation}
\new{Here, recall that $W$ is the channel distribution from $(X,Y)$ to $Z$.} A code with message counts $M_1,M_2$ and average probability of error at most $\epsilon$ is called an $(M_1,M_2,\epsilon)$ code.
Given a one-shot channel $W$, the $n$-length product channel is given by
\begin{equation}
W_{x^ny^n}=\prod_{t=1}^n W_{x_ty_t}.
\end{equation}
For $n$-length channels, we also impose cost-constraints on the channel inputs. Specifically, there are functions $b_1:\mathcal{X}\to\mathbb{R}$, $b_2:\mathcal{Y}\to\mathbb{R}$, and constants $B_1,B_2\in\mathbb{R}$; we assume that the encoders $P_{X^n|I_1},P_{Y^n|I_2}$ are such that the channel inputs $X^n,Y^n$ satisfy \new{the following almost surely:}
\begin{equation}\label{eq:cost_constraints}
\frac{1}{n}\sum_{t=1}^n b_1(X_t)\le B_1,\qquad
\frac{1}{n}\sum_{t=1}^n b_2(Y_t)\le B_2.
\end{equation}
Of course, a lack of cost constraint is included in this model simply by taking $b_1(x)=b_2(y)=0$ for all $x,y$. We consider $(W,b_1,b_2,B_1,B_2)$ to constitute the channel specification. We say an $(n,M_1,M_2,\epsilon)$ code is a code for the $n$-length channel with average probability of error at most $\epsilon$. For any blocklength $n$ and probability of error $\epsilon\in(0,1)$, the set of achievable rate pairs is
\begin{equation}\label{eq:Rstar_def}
\mathcal{R}(n,\epsilon)=\left\{\left(\frac{\log M_1}{n},\frac{\log M_2}{n}\right): \exists \text{ an }(n,M_1,M_2,\epsilon)\text{ code}\right\}.
\end{equation}
The operational definition for the capacity region is given by\footnote{Recall that the lim-inf of a sequence of sets $\mathcal{A}_n$ is $\bigcup_{n\ge 1} \bigcap_{k\ge n} \mathcal{A}_k$.}
\begin{equation}
\mathcal{C}=\bigcap_{\epsilon>0}\, \liminf_{n\to\infty}\, \mathcal{R}(n,\epsilon).
\end{equation}
The first-order asymptotic result, proved in \cite{Ahlswede1971,Liao1972,Slepian1973,Wyner1974,Cover1975}, is that the capacity region is
\begin{equation}\label{eq:capacity_region}
\mathcal{C}=\bigcup_{\substack{P_{UXY}:X\perp Y|U,\\ \mathbb{E} [b_1(X)]\le B_1,\\ \mathbb{E} [b_2(Y)]\le B_2}}\left\{(R_1,R_2): R_1+R_2\le I(X,Y;Z|U),\ R_1\le I(X;Z|Y,U),\ R_2\le I(Y;Z|X,U)\right\}
\end{equation}
where $X\perp Y|U$ indicates that $X$ and $Y$ are independent given $U$. Here, $U$ is the time-sharing random variable.\footnote{We have chosen to use $U$ rather than the more standard $Q$, since the letter $Q$ is primarily used for other concepts in this paper.} Using Carath\'eodory's theorem, we can restrict the alphabet cardinality of $U$ in the union to $|\mathcal{U}|\le 6$.
Because of the multi-dimensional nature of achievable rate regions for network information theory problems such as the MAC, articulating second-order results can be a bit complicated. There are at least three equivalent methods for describing these results: (i) characterize the region of second-order coding rate pairs around a specific point on the boundary of the capacity region, (ii) fix an angle of approach to a point on the capacity region boundary, or (iii) bound the maximum achievable weighted sum-rate. See \cite[Chapter 6]{Tan2014a} for a discussion of these issues for network information theory problems. We have chosen to focus on the weighted sum-rate approach, which has the advantage that we can work with scalar quantities, and we do not need to specify a point on the capacity region boundary. Specifically, for non-negative constants $\alpha_1,\alpha_2$, we define the largest achievable weighted-sum rate as
\begin{equation}
R^\star_{\alpha_1,\alpha_2}(n,\epsilon)=\sup\left\{\frac{\alpha_1\log M_1+\alpha_2\log M_2}{n}:\exists \text{ an } (n,M_1,M_2,\epsilon)\text{ code}\right\}.
\end{equation}
In particular, $R^\star_{1,1}(n,\epsilon)$ is the largest achievable standard sum rate. Note that for any constant $c$,
\begin{equation}
R^\star_{c\,\alpha_1,c\,\alpha_2}(n,\epsilon)=c\,R^\star_{\alpha_1,\alpha_2}(n,\epsilon).
\end{equation}
Thus, it is enough to consider only pairs $(\alpha_1,\alpha_2)$ where $\max\{\alpha_1,\alpha_2\}=1$. We also define the weighted-sum capacity as
\begin{equation}
C_{\alpha_1,\alpha_2}=\sup\{\alpha_1 R_1+\alpha_2 R_2: (R_1,R_2)\in \mathcal{C}\}.
\end{equation}
Since the capacity region $\mathcal{C}$ is convex, it is equivalently characterized by $C_{\alpha_1,\alpha_2}$. From the result in \eqref{eq:capacity_region}, it is easy to see that
\begin{align}
C_{\alpha_1,\alpha_2}
&=\sup_{\substack{P_{UXY}:X\perp Y|U,\\ \mathbb{E}[ b_1(X)]\le B_1,\\ \mathbb{E}[ b_2(Y)]\le B_2}}
\big[\min\{\alpha_1,\alpha_2\} I(X,Y;Z|U)+|\alpha_1-\alpha_2|^+ I(X;Z|Y,U)+|\alpha_2-\alpha_1|^+ I(Y;Z|X,U)\big].\label{eq:Ca1a2}
\end{align}
Our goal is to prove bounds of the form
\begin{equation}
R^\star_{\alpha_1,\alpha_2}(n,\epsilon)\le C_{\alpha_1,\alpha_2}+O\left(\frac{1}{\sqrt{n}}\right).
\end{equation}
Note that if such a bound can be proved in which the implied constant in the $O(1/\sqrt{n})$ term is uniformly bounded over all $\alpha_1,\alpha_2$ where $\max\{\alpha_1,\alpha_2\}=1$, then
\begin{equation}
\mathcal{R}(n,\epsilon)\subseteq \mathcal{C}+O\left(\frac{1}{\sqrt{n}}\right).
\end{equation}
\section{Wringing Dependence}\label{sec:wringing_dependence}
This section is devoted to defining and characterizing the \emph{wringing dependence}, a new dependence measure that will be critical in our converse proof for the MAC. In Sec.~\ref{sec:motivation}, we first outline Ahlswede's proof of the MAC strong converse from \cite{Ahlswede1982} as motivation for the wringing dependence, and then we define it. The basic properties of wringing dependence are described in Sec.~\ref{sec:props}. The wringing lemma, which is the primary use of wringing dependence in our MAC converse proof, is given in Sec.~\ref{sec:wringing_lemma}. We present some relationships between wringing dependence and other dependence measures---specifically hypercontractivity and maximal correlation---in Sec.~\ref{sec:other_measures}.
\subsection{Motivation and Definition}\label{sec:motivation}
Consider a one-shot MAC given by $W\in\mathcal{P}(\mathcal{X}\times\mathcal{Y}\to \mathcal{Z})$. Ahlswede's converse proof from \cite{Ahlswede1982}, and ours, involves these basic steps:
\begin{enumerate}
\item given any MAC code, expurgate it by restricting to the subset $\Gamma\subset\mathcal{X}\times\mathcal{Y}$ of input pairs with limited maximal probability of error,
\item choose sets $\bar\mathcal{X}\subset\mathcal{X},\bar\mathcal{Y}\subset\mathcal{Y}$ so that when the code is restricted to input pairs $(X,Y)\in\Gamma\cap (\bar\mathcal{X}\times \bar\mathcal{Y})$, the inputs are close to independent,
\item prove a converse bound on the code restricted to $\Gamma\cap (\bar\mathcal{X}\times \bar\mathcal{Y})$,
\item relate this converse bound back to the original code.
\end{enumerate}
Step 2 is called ``wringing,'' as the dependence between $X$ and $Y$ introduced by restricting the code to $\Gamma$ is ``wrung out'' in the choice of $\bar\mathcal{X},\bar\mathcal{Y}$. This step is also where our proof deviates most significantly from Ahlswede's. In the wringing step, choosing the sets $\bar\mathcal{X},\bar\mathcal{Y}$ requires trading off between two objectives: (i) maximizing the probability of the set $\bar\mathcal{X}\times\bar\mathcal{Y}$, so that in Step~4, there is limited difference between the subset and the original code; and (ii) minimizing the dependence between the inputs when restricted to $\bar\mathcal{X}\times\bar\mathcal{Y}$, so that the converse bound proved in Step~3 captures the independence between transmissions that is inherent to the MAC. The key result addressing this trade-off in Ahlswede's proof is \cite[Lemma~4]{Ahlswede1982}; the following is a slight modification of this lemma.\footnote{The main difference is that Ahlswede's lemma has only one sequence $X^n$, even though when the lemma is applied in the converse proof, it is done with two sequences $X^n,Y^n$. Here, we have stated the lemma with two sequences to make the connection to our technique clearer.}
\begin{lemma}\label{lemma:ahlswede}
Let $P_{X^nY^n}\in\mathcal{P}(\mathcal{X}^n\times\mathcal{Y}^n)$, $Q_{X^n}\in\mathcal{P}(\mathcal{X}^n)$, and $Q_{Y^n}\in\mathcal{P}(\mathcal{Y}^n)$ be distributions such that
\begin{equation}\label{eq:ahlswede_renyi}
D_\infty(P_{X^nY^n}\|Q_{X^n}Q_{Y^n})\le \log(1+c).
\end{equation}
For any $0<\gamma<c$, $0<\epsilon<1$, there exist sets $\bar\mathcal{X}\subset\mathcal{X}^n,\bar\mathcal{Y}\subset\mathcal{Y}^n$ such that
\begin{equation}\label{eq:ahlswede_prob}
P_{X^nY^n}(\bar\mathcal{X},\bar\mathcal{Y})\ge \epsilon^{c/\gamma}
\end{equation}
and for all $t\in[n]$, $x\in\mathcal{X},y\in\mathcal{Y}$
\begin{equation}\label{eq:ahlswede_indep}
P_{X_tY_t|X^n\in\bar\mathcal{X},Y^n\in\bar\mathcal{Y}}(x,y)\le \max\{\epsilon,(1+\gamma)Q_{X_t|X^n\in\bar\mathcal{X}}(x)Q_{Y_t|Y^n\in\bar\mathcal{Y}}(y)\}.
\end{equation}
\end{lemma}
In this lemma, one can see the two objectives at play: \eqref{eq:ahlswede_prob} is a bound on the probability of $\bar\mathcal{X}\times\bar\mathcal{Y}$, and \eqref{eq:ahlswede_indep} is a guarantee on the dependence of the channel inputs. The two parameters $\gamma$ and $\epsilon$ allow one to trade off between these two objectives; as $\gamma,\epsilon\to 0$, the guarantee on the probability becomes weaker, while the guarantee on the dependence becomes stronger. In the extreme case that $\gamma=\epsilon=0$, \eqref{eq:ahlswede_indep} states that $X_t$ and $Y_t$ are independent, whereas \eqref{eq:ahlswede_prob} becomes trivial.
Ahlswede's lemma is proved iteratively. The process is initialized with $\bar\mathcal{X}=\mathcal{X}^n,\bar\mathcal{Y}=\mathcal{Y}^n$. At each step, if \eqref{eq:ahlswede_indep} is violated for some $t\in[n]$, $\bar{x}_t\in\mathcal{X},\bar{y}_t\in\mathcal{Y}$, then the sets $\bar\mathcal{X},\bar\mathcal{Y}$ are revised to
\begin{equation}\label{eq:bar_sets_reduction}
\bar\mathcal{X}'=\bar\mathcal{X}\cap \{x^n:x_t=\bar{x}_t\},
\qquad
\bar\mathcal{Y}'=\bar\mathcal{Y}\cap \{y^n:y_t=\bar{y}_t\}.
\end{equation}
Because each step involves a violation of \eqref{eq:ahlswede_indep}, at that point
\begin{align}
P_{X_tY_t|X^n\in\bar\mathcal{X},Y^n\in\bar\mathcal{Y}}(\bar{x}_t,\bar{y}_t)&>\epsilon,\label{eq:barxy_prob}\\
\frac{P_{X_tY_t|X^n\in\bar\mathcal{X},Y^n\in\bar\mathcal{Y}}(\bar{x}_t,\bar{y}_t)}{Q_{X_t|X^n\in\bar\mathcal{X}}(\bar{x}_t)Q_{Y_t|Y^n\in\bar\mathcal{Y}}(\bar{y}_t)}
&>1+\gamma.\label{eq:barxy_ratio}
\end{align}
Here, \eqref{eq:barxy_prob} ensures that the probability of the pair $(\bar{x}_t,\bar{y}_t)$ is not too small, while \eqref{eq:barxy_ratio} ensures that each step ``eats into'' the R\'enyi divergence between $P$ and $Q$ from \eqref{eq:ahlswede_renyi} by at least $\log(1+\gamma)$. The latter implies that the number of steps cannot exceed $\frac{\log(1+c)}{\log (1+\gamma)}\le c/\gamma$, which leads to the guarantee on the probability in \eqref{eq:ahlswede_prob}.
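For discrete alphabets, the iterative procedure just described is easy to sketch in code. The following Python sketch (structure and helper names are ours; the step cap is only a safeguard, since the argument above bounds the number of iterations by $c/\gamma$) represents $P_{X^nY^n}$ as a dictionary over pairs of length-$n$ tuples and repeatedly applies the reduction \eqref{eq:bar_sets_reduction} until \eqref{eq:ahlswede_indep} holds.

```python
def cond_pair(P, barX, barY, t):
    """Conditional pmf of (X_t, Y_t) given X^n in barX, Y^n in barY."""
    tot = sum(p for (x, y), p in P.items() if x in barX and y in barY)
    pmf = {}
    for (x, y), p in P.items():
        if x in barX and y in barY:
            key = (x[t], y[t])
            pmf[key] = pmf.get(key, 0.0) + p / tot
    return pmf

def cond_marg(Q, bar, t):
    """Conditional pmf of the t-th coordinate under Q given the bar set."""
    tot = sum(q for s, q in Q.items() if s in bar)
    m = {}
    for s, q in Q.items():
        if s in bar:
            m[s[t]] = m.get(s[t], 0.0) + q / tot
    return m

def wring(P, QX, QY, gamma, eps, max_steps=10_000):
    """Iteratively shrink (barX, barY) until the independence guarantee
    holds for every t, x, y; each shrink fixes one coordinate pair."""
    n = len(next(iter(P))[0])
    barX, barY = set(QX), set(QY)
    for _ in range(max_steps):
        found = None
        for t in range(n):
            pxy = cond_pair(P, barX, barY, t)
            qx, qy = cond_marg(QX, barX, t), cond_marg(QY, barY, t)
            for (a, b), p in pxy.items():
                if p > max(eps, (1 + gamma) * qx.get(a, 0.0) * qy.get(b, 0.0)):
                    found = (t, a, b)
                    break
            if found:
                break
        if found is None:
            return barX, barY
        t, a, b = found
        barX = {x for x in barX if x[t] == a}
        barY = {y for y in barY if y[t] == b}
    raise RuntimeError("exceeded step budget")
```

For instance, if $X^n=Y^n$ with $n=2$ and both uniform on $\{0,1\}^2$ (so $c=3$), running `wring` with $\gamma=0.5$, $\epsilon=0.1$ returns singleton sets whose probability still exceeds $\epsilon^{c/\gamma}$.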
To improve on Ahlswede's lemma, we make three principal observations:
\begin{enumerate}
\item Wringing can be done in the one-shot setting.
\item The set reduction steps in \eqref{eq:bar_sets_reduction} need not be limited to individual pairs $(\bar{x}_t,\bar{y}_t)$; we may instead use arbitrary sets $\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}$, and revise the sets as $\bar\mathcal{X}'=\bar\mathcal{X}\cap \mathcal{A}$, $\bar\mathcal{Y}'=\bar\mathcal{Y}\cap \mathcal{B}$.
\item The trade-off between the probability as in \eqref{eq:barxy_prob} and the likelihood ratio as in \eqref{eq:barxy_ratio} is most efficient by maximizing
\begin{equation}\label{eq:ratio_ratio}
\frac{\log \frac{P_{XY}(\mathcal{A},\mathcal{B})}{Q_X(\mathcal{A})Q_Y(\mathcal{B})}}{-\log P_{XY}(\mathcal{A},\mathcal{B})}=\frac{\log Q_X(\mathcal{A}) Q_Y(\mathcal{B})}{\log P_{XY}(\mathcal{A},\mathcal{B})}-1.
\end{equation}
Note that if the quantity in \eqref{eq:ratio_ratio} is maximized, then neither the likelihood ratio nor the probability of $(\mathcal{A},\mathcal{B})$ will be too small. Moreover, maximizing this quantity ensures that if a pair $(\mathcal{A},\mathcal{B})$ has low probability, then the likelihood ratio is larger, ensuring that this step ``eats into'' the R\'enyi divergence by a greater amount.
\end{enumerate}
We are now ready to give the definition for wringing dependence, in which the quantity in \eqref{eq:ratio_ratio} plays a key role.
\begin{definition}
Given random variables $X,Y$ with joint distribution $P_{XY}$, the wringing dependence between $X$ and $Y$ is given by\footnote{\new{While technically, the wringing dependence is a function of the joint distribution $P_{XY}$ rather than a function of the random variables $X,Y$ themselves, we have chosen to use the notation $\Delta(X;Y)$ wherein the dependence measure is an operator on the random variables. This notational choice is made consistently for all dependence measures in the paper: for example mutual information is $I(X;Y)$, maximal correlation is $\rho_m(X;Y)$, etc. In all cases, the underlying distribution will be clear from context, or specified in a subscript such as $\Delta_P(X;Y)$.}}
\begin{equation}\label{eq:Delta_def0}
\Delta(X;Y)=\inf_{Q_X,Q_Y}\,\sup_{\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}}\,\inf\left\{\delta\ge 0:P_{XY}(\mathcal{A},\mathcal{B})^{1+\delta}\le Q_X(\mathcal{A})Q_Y(\mathcal{B})\right\}.
\end{equation}
\end{definition}
Note that for any $p,q\in(0,1)$, $\inf\{\delta\ge 0:p^{1+\delta}\le q\}=\left|\frac{\log q}{\log p}-1\right|^+$.
Therefore an alternative definition is
\begin{equation}\label{eq:Delta_def}
\Delta(X;Y)=\inf_{Q_X,Q_Y}\ \sup_{\substack{\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}}}
\left|\frac{\log Q_X(\mathcal{A})Q_Y(\mathcal{B})}{\log P_{XY}(\mathcal{A},\mathcal{B})}-1\right|^+
\end{equation}
where $\frac{\log q}{\log p}$ really means $\inf\{\theta:p^\theta\le q\}$, so by convention
\begin{equation}\label{eq:log_conventions}
\frac{\log q}{\log p}=0\text{ if }p=0\text{ or }q=1,p<1,\quad
\frac{\log q}{\log p}=\infty\text{ if }p=1,q<1,\quad
\frac{\log 1}{\log 1}=-\infty.
\end{equation}
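As a numerical sanity check of the identity $\inf\{\delta\ge 0:p^{1+\delta}\le q\}=\big|\frac{\log q}{\log p}-1\big|^+$ for $p,q\in(0,1)$ (illustrative only; function names are ours), one can bisect on $\delta$ directly:

```python
import math

def delta_star(p, q, tol=1e-12):
    """Smallest delta >= 0 with p**(1 + delta) <= q, for p, q in (0, 1)."""
    if p <= q:  # delta = 0 already works
        return 0.0
    lo, hi = 0.0, 1.0
    while p ** (1 + hi) > q:  # grow the bracket until feasible
        hi *= 2.0
    while hi - lo > tol:      # bisect: p**(1+delta) is decreasing in delta
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if p ** (1 + mid) <= q else (mid, hi)
    return hi

def closed_form(p, q):
    """The claimed closed form |log q / log p - 1|^+."""
    return max(0.0, math.log(q) / math.log(p) - 1.0)
```

The two agree to within the bisection tolerance for any $p,q\in(0,1)$.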
\new{To compute the wringing dependence given a joint distribution $P_{XY}$ requires optimizing over $Q_X$ and $Q_Y$. In fact, this optimization is convex, as shown as follows. We may write the quantity inside the positive part in \eqref{eq:Delta_def} as
\begin{equation}
\frac{\log Q_X(\mathcal{A})Q_Y(\mathcal{B})}{\log P_{XY}(\mathcal{A},\mathcal{B})}-1
=\frac{\log Q_X(\mathcal{A})}{\log P_{XY}(\mathcal{A},\mathcal{B})}+\frac{\log Q_Y(\mathcal{B})}{\log P_{XY}(\mathcal{A},\mathcal{B})}-1.\label{eq:Delta_convex}
\end{equation}
For fixed sets $\mathcal{A},\mathcal{B}$, $\log P_{XY}(\mathcal{A},\mathcal{B})\le 0$, which means each of the terms on the RHS of \eqref{eq:Delta_convex} is jointly convex in $(Q_X,Q_Y)$. Using the fact that the supremum (or maximum) of convex functions is also convex, this implies that
\begin{equation}\label{eq:Delta_convex2}
\sup_{\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}}
\left|\frac{\log Q_X(\mathcal{A})Q_Y(\mathcal{B})}{\log P_{XY}(\mathcal{A},\mathcal{B})}-1\right|^+
\end{equation}
is jointly convex in $(Q_X,Q_Y)$. Thus,}
the wringing dependence can in principle be computed via convex optimization if $\mathcal{X}$ and $\mathcal{Y}$ are finite sets.
However, this computation quickly becomes impractical as the alphabet sizes grow, since the number of sets $\mathcal{A},\mathcal{B}$ is exponential in the alphabet cardinality.
The following is one example of a simple distribution for which it \emph{can} be computed in closed form.
\begin{example}\label{example:DSBS}
Consider a doubly symmetric binary source (DSBS) $(X,Y)$, wherein $X,Y$ are each uniform on $\{0,1\}$, and $P_{XY}(1,1)=P_{XY}(0,0)=\frac{p}{2}$. Since this distribution is symmetric between $X$ and $1-X$, and between $Y$ and $1-Y$, the convexity of \new{\eqref{eq:Delta_convex2}} in $(Q_X,Q_Y)$ means that the optimal $Q_X,Q_Y$ are each uniform on $\{0,1\}$. Thus, if $p\le 1/2$, then $\Delta(X;Y)$ is given by
\begin{align}
\Delta(X;Y)&=\max\left\{0,\frac{\log 1/4}{\log p/2}-1,\frac{\log 1/4}{\log (1-p)/2}-1\right\}
\\&=\frac{\log 4}{\log 2-\log(1-p)}-1
\\&=\frac{1+\log_2(1-p)}{1-\log_2(1-p)}.
\end{align}
Therefore, for any $p$,
\begin{equation}
\Delta(X;Y)=\frac{1+\log_2\max\{p,1-p\}}{1-\log_2\max\{p,1-p\}}.
\end{equation}
The wringing dependence for a DSBS as a function of $p$ is shown in Fig.~\ref{fig:DSBS}.
\end{example}
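For binary alphabets the closed form above can be verified by brute force, since there are only nine nonempty subset pairs $(\mathcal{A},\mathcal{B})$ and the optimization over $(Q_X,Q_Y)$ reduces to a two-dimensional search. The sketch below (helper names are ours; the grid includes the uniform point, which the symmetry argument above shows is optimal for the DSBS) recovers the formula:

```python
import math

def ratio(p, q):
    # inf{theta : p**theta <= q}, following the stated log conventions
    if p == 0.0 or (q == 1.0 and p < 1.0):
        return 0.0
    if p == 1.0:
        return -math.inf if q == 1.0 else math.inf
    return math.log(q) / math.log(p)

def wringing_dependence_2x2(P, grid):
    """Brute-force Delta(X;Y) for a 2x2 joint pmf P (nested lists):
    grid search over product distributions (Q_X, Q_Y), supremum over
    all nonempty subset pairs (A, B)."""
    subsets = [(0,), (1,), (0, 1)]
    best = math.inf
    for qx in grid:
        QX = {(0,): qx, (1,): 1.0 - qx, (0, 1): 1.0}
        for qy in grid:
            QY = {(0,): qy, (1,): 1.0 - qy, (0, 1): 1.0}
            val = 0.0
            for A in subsets:
                for B in subsets:
                    pAB = sum(P[a][b] for a in A for b in B)
                    val = max(val, max(0.0, ratio(pAB, QX[A] * QY[B]) - 1.0))
            best = min(best, val)
    return best
```

With $p=1/4$ and any grid containing $q=1/2$, this returns $\Delta(X;Y)\approx 0.4134$, matching the closed form.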
\begin{figure}
\begin{center}
\includegraphics[width=4in]{DSBS.eps}
\end{center}
\caption{The wringing dependence for a doubly symmetric binary source, as a function of the crossover probability $p$.}
\label{fig:DSBS}
\end{figure}
\subsection{Properties}\label{sec:props}
The most important property of the wringing dependence is a counterpart of Ahlswede's lemma, which is presented in Sec.~\ref{sec:wringing_lemma}. But before stating this result, we prove some basic properties of the dependence measure. In particular, the following result states that wringing dependence satisfies many properties that one would expect of any dependence measure: it is non-negative, is zero iff $X$ and $Y$ are independent, and satisfies the data processing inequality. Indeed, this result shows that wringing dependence satisfies 6 out of the 7 axioms for dependence measures proposed in \cite{Renyi1959}. (It also satisfies the 7th, which is that for bivariate Gaussians, the wringing dependence equals the correlation coefficient; this fact is established in Sec.~\ref{sec:other_measures}.) The theorem also includes some other properties that will be useful throughout the paper.
\begin{theorem}\label{thm:props}
The wringing dependence $\Delta(X;Y)$ satisfies the following:
\begin{enumerate}
\item $\Delta(X;Y)=\Delta(Y;X)$.
\item $0\le \Delta(X;Y)\le 1$.
\item If $\Delta(X;Y)\le\delta$, then for all $\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}$,
\begin{align}
P_{XY}(\mathcal{A},\mathcal{B})&\le (1+2\delta) \left(P_X(\mathcal{A})P_Y(\mathcal{B})\right)^{1/(1+\delta)},\label{eq:dep_pp_bd}\\
|P_{XY}(\mathcal{A},\mathcal{B})-P_X(\mathcal{A})P_Y(\mathcal{B})|&\le 2\delta.\label{eq:gdep_abs_bd}
\end{align}
\item $\Delta(X;Y)=0$ if and only if $X$ and $Y$ are independent.
\item $\Delta(X;Y)=1$ if $X$ and $Y$ are \emph{decomposable}, meaning there exist sets $\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}$ where $0<P_X(\mathcal{A})<1$ and $1(X\in \mathcal{A})=1(Y\in \mathcal{B})$ almost surely\footnote{Decomposability is equivalent to the G\'acs-K\"orner common information being positive \cite{Gacs1973}.}. Moreover, if $\mathcal{X},\mathcal{Y}$ are finite sets and $\Delta(X;Y)=1$, then $X$ and $Y$ are decomposable.
\item For any Markov chain $W-X-Y-Z$, $\Delta(W;Z)\le\Delta(X;Y)$.
\end{enumerate}
\end{theorem}
\begin{IEEEproof}
(1) Symmetry between $X$ and $Y$ follows trivially from the definition.
(2) The fact that $\Delta(X;Y)\ge 0$ follows immediately from the definition. To upper bound $\Delta(X;Y)$, we may take $Q_X=P_X$, $Q_Y=P_Y$, so
\begin{equation}\label{eq:wringing_simple_upper_bound}
\Delta(X;Y)\le \inf\{\delta\ge 0:P_{XY}(\mathcal{A},\mathcal{B})^{1+\delta}\le P_X(\mathcal{A})P_Y(\mathcal{B})\text{ for all }\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}\}.
\end{equation}
Since $P_{XY}(\mathcal{A},\mathcal{B})\le P_X(\mathcal{A})$ and $P_{XY}(\mathcal{A},\mathcal{B})\le P_Y(\mathcal{B})$, we have $P_{XY}(\mathcal{A},\mathcal{B})^2\le P_X(\mathcal{A})P_Y(\mathcal{B})$ for all $\mathcal{A},\mathcal{B}$. That is, $\delta=1$ is feasible in \eqref{eq:wringing_simple_upper_bound}, so $\Delta(X;Y)\le 1$.
(3) Suppose $\Delta(X;Y)\le \delta$. Thus, for any $\delta'>\delta$, there exist $Q_X,Q_Y$ such that
\begin{equation}\label{eq:wringing_rearrangement}
P_{XY}(\mathcal{A},\mathcal{B})^{1+\delta'}\le Q_X(\mathcal{A})Q_Y(\mathcal{B})\text{ for all }\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}.
\end{equation}
\new{Consider the function $f(p)=p^{1+\delta'}$ for $p\ge 0$. Since $\delta'>0$, $f$ is convex, so it can be lower bounded by any tangent line. In particular, forming the tangent line around $p=1$ gives
\begin{equation}\label{eq:tangent_bound}
p^{1+\delta'}=f(p)\ge f(1)+f'(1)(p-1)=1+(1+\delta')(p-1)
=(1+\delta')p-\delta'.
\end{equation}
Using this bound to lower bound the LHS of \eqref{eq:wringing_rearrangement} gives
\begin{equation}\label{eq:QAB_PAB}
Q_X(\mathcal{A})Q_Y(\mathcal{B})\ge (1+\delta')P_{XY}(\mathcal{A},\mathcal{B})-\delta'.
\end{equation}}%
Taking $\mathcal{B}=\mathcal{Y}$ gives
\begin{equation}
Q_X(\mathcal{A})\ge (1+\delta') P_X(\mathcal{A})-\delta'.
\end{equation}
Since this may hold for $\mathcal{A}^c$ in place of $\mathcal{A}$, we may write
\begin{align}
Q_X(\mathcal{A})&=1-Q_X(\mathcal{A}^c)
\\&\le 1-(1+\delta')P_X(\mathcal{A}^c)+\delta'
\\&=(1+\delta')P_X(\mathcal{A}).\label{eq:gdep_marginalX_upperbd}
\end{align}
By the same argument, for any $\mathcal{B}\subset\mathcal{Y}$, $Q_Y(\mathcal{B})\le (1+\delta')P_Y(\mathcal{B})$. Thus
\begin{align}
P_{XY}(\mathcal{A},\mathcal{B})^{1+\delta'}&\le Q_X(\mathcal{A})Q_Y(\mathcal{B})
\\&\le (1+\delta')^2 P_X(\mathcal{A})P_Y(\mathcal{B}).
\end{align}
As this holds for all $\delta'>\delta$, we have
\begin{equation}\label{eq:PXY_delta2}
P_{XY}(\mathcal{A},\mathcal{B})^{1+\delta}\le (1+\delta)^2 P_X(\mathcal{A})P_Y(\mathcal{B}).
\end{equation}
Thus
\begin{align}
P_{XY}(\mathcal{A},\mathcal{B})&\le \left[(1+\delta)^2 P_X(\mathcal{A})P_Y(\mathcal{B})\right]^{1/(1+\delta)}.
\end{align}
Noting that $(1+\delta)^{2/(1+\delta)}\le 1+2\delta$ proves \eqref{eq:dep_pp_bd}. \new{Using again the tangent line bound from \eqref{eq:tangent_bound} to lower bound the LHS of \eqref{eq:PXY_delta2} gives}
\begin{equation}
(1+\delta)P_{XY}(\mathcal{A},\mathcal{B})-\delta\le (1+\delta)^2 P_X(\mathcal{A})P_Y(\mathcal{B}).
\end{equation}
Thus
\begin{align}
P_{XY}(\mathcal{A},\mathcal{B})&\le (1+\delta)P_X(\mathcal{A})P_Y(\mathcal{B})+\frac{\delta}{1+\delta}\label{eq:gdep_derive2}
\\&\le P_X(\mathcal{A})P_Y(\mathcal{B})+\delta+\frac{\delta}{1+\delta}
\\&\le P_X(\mathcal{A})P_Y(\mathcal{B})+2\delta.\label{eq:gdep_derive3}
\end{align}
We prove the corresponding lower bound as follows:
\begin{align}
P_{XY}(\mathcal{A},\mathcal{B})&=P_X(\mathcal{A})-P_{XY}(\mathcal{A},\mathcal{B}^c)
\\&\ge P_X(\mathcal{A})-P_X(\mathcal{A})P_Y(\mathcal{B}^c)-2\delta\label{eq:gdep_derive4}
\\&=P_X(\mathcal{A})P_Y(\mathcal{B})-2\delta\label{eq:gdep_derive5}
\end{align}
where \eqref{eq:gdep_derive4} is simply an application of \eqref{eq:gdep_derive3} with $\mathcal{B}^c$ swapped with $\mathcal{B}$. Combining \eqref{eq:gdep_derive3} and \eqref{eq:gdep_derive5} proves \eqref{eq:gdep_abs_bd}.
(4) If $\Delta(X;Y)=0$, then \eqref{eq:gdep_abs_bd} immediately gives $P_{XY}(\mathcal{A},\mathcal{B})=P_X(\mathcal{A})P_Y(\mathcal{B})$ for all $\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}$; i.e., $X$ and $Y$ are independent. Conversely, suppose $X$ and $Y$ are independent. Thus, if we take $Q_X=P_X,Q_Y=P_Y$, then
\begin{equation}
P_{XY}(\mathcal{A},\mathcal{B})\le Q_X(\mathcal{A})Q_Y(\mathcal{B}).
\end{equation}
This proves that $\Delta(X;Y)= 0$ by the definition in \eqref{eq:Delta_def0}.
(5) Assume there exist sets $\mathcal{A},\mathcal{B}$ as stated. Since $1(X\in \mathcal{A})=1(Y\in \mathcal{B})$ almost surely, $P_{XY}(\mathcal{A},\mathcal{B})=P_X(\mathcal{A})=P_Y(\mathcal{B})$, and $P_{XY}(\mathcal{A}^c,\mathcal{B}^c)=P_X(\mathcal{A}^c)=P_Y(\mathcal{B}^c)$, and also by assumption each of these probabilities is strictly between $0$ and $1$. For convenience let $p=P_{XY}(\mathcal{A},\mathcal{B})$. Using the definition in \eqref{eq:Delta_def}, we may lower bound the wringing dependence by
\begin{align}
\Delta(X;Y)&\ge \inf_{Q_X,Q_Y}\,\max\left\{\frac{\log Q_X(\mathcal{A})Q_Y(\mathcal{B})}{\log p},\,\frac{\log Q_X(\mathcal{A}^c)Q_Y(\mathcal{B}^c)}{\log (1-p)}\right\}-1\label{eq:full_dependence1}
\\&=\inf_{q\in[0,1]}\, \max\left\{\frac{\log q^2}{\log p},\, \frac{\log (1-q)^2}{\log(1-p)}\right\}-1\label{eq:full_dependence2}
\\&=\max\left\{\frac{\log p^2}{\log p},\, \frac{\log (1-p)^2}{\log(1-p)}\right\}-1\label{eq:full_dependence3}
\\&=1\label{eq:full_dependence4}
\end{align}
where \eqref{eq:full_dependence2} holds since the RHS of \eqref{eq:full_dependence1} is concave in $(Q_X,Q_Y)$ and symmetric between $Q_X(\mathcal{A})$ and $Q_Y(\mathcal{B})$, so the optimal choice is $Q_X(\mathcal{A})=Q_Y(\mathcal{B})=q$ for some $q\in[0,1]$; \eqref{eq:full_dependence3} holds since the first term in the max in \eqref{eq:full_dependence2} is decreasing in $q$ while the second term is increasing, so the infimum is achieved when the two terms in the max are equal, which occurs at $q=p$; and \eqref{eq:full_dependence4} holds by the fact that $0<p<1$. Since we know that in general $\Delta(X;Y)\le 1$, this proves $\Delta(X;Y)=1$. For the partial converse, assume $\mathcal{X},\mathcal{Y}$ are finite sets, and that $\Delta(X;Y)=1$. This implies that
\begin{equation}\label{eq:decomposable_bound}
\sup_{\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}} \frac{\log P_X(\mathcal{A})P_Y(\mathcal{B})}{\new{\log}\, P_{XY}(\mathcal{A},\mathcal{B})}=2.
\end{equation}
Since $\mathcal{X},\mathcal{Y}$ are finite, the supremum is attained, so there exist sets $\mathcal{A},\mathcal{B}$ where $0<P_{XY}(\mathcal{A},\mathcal{B})<1$ and
\begin{equation}
P_X(\mathcal{A})P_Y(\mathcal{B})=P_{XY}(\mathcal{A},\mathcal{B})^2.
\end{equation}
This only holds if $P_{XY}(\mathcal{A},\mathcal{B})=P_X(\mathcal{A})=P_Y(\mathcal{B})$, which implies that $1(X\in \mathcal{A})=1(Y\in \mathcal{B})$ almost surely.
(6) The symmetry of the wringing dependence means that it is enough to show $\Delta(X;Z)\le \Delta(X;Y)$. We have
\begin{align}
\Delta(X;Z)&=\inf_{Q_X,Q_Z}\,\sup_{\mathcal{A}\subset\mathcal{X},\mathcal{B}'\subset\mathcal{Z}} \left|\frac{\log Q_X(\mathcal{A})Q_Z(\mathcal{B}')}{\log P_{XZ}(\mathcal{A},\mathcal{B}')}-1\right|^+
\\&\le \inf_{Q_X,Q_Y}\,\sup_{\mathcal{A}\subset\mathcal{X},\mathcal{B}'\subset\mathcal{Z}} \left|\frac{\log Q_X(\mathcal{A})\int dQ_Y(y)P_{Z|Y=y}(\mathcal{B}')}{\log P_{XZ}(\mathcal{A},\mathcal{B}')}-1\right|^+\label{eq:DPI2}
\\&=\inf_{Q_X,Q_Y}\,\sup_{\substack{\mathcal{A}\subset\mathcal{X},\mathcal{B}'\subset\mathcal{Z}}} \left|\frac{\log Q_X(\mathcal{A})\int dQ_Y(y)P_{Z|Y=y}(\mathcal{B}')}{\log \int dP_{XY}(x,y) 1(x\in \mathcal{A}) P_{Z|Y=y}(\mathcal{B}')}-1\right|^+\label{eq:DPI3}
\\&\le \inf_{Q_X,Q_Y}\, \sup_{\mathcal{A}\subset\mathcal{X}}\, \sup_{g:\mathcal{Y}\to[0,1]} \left| \frac{\log Q_X(\mathcal{A})\,\mathbb{E}_Q [g(Y)]}{\log \mathbb{E}_P [1(X\in \mathcal{A}) g(Y)]}-1\right|^+\label{eq:DPI4}
\end{align}
where \eqref{eq:DPI2} holds because for any $Q_Y$, $Q_Z=\int dQ_Y(y) P_{Z|Y=y}$ is a valid distribution on $\mathcal{Z}$, in the denominator of \eqref{eq:DPI3} we have used the fact that $X-Y-Z$ is a Markov chain, and \eqref{eq:DPI4} holds because in \eqref{eq:DPI3} we may take $g(y)=P_{Z|Y=y}(\mathcal{B}')$ which is feasible for the supremum over $g$ in \eqref{eq:DPI4}.
For fixed $Q_X$, $Q_Y$, and $\mathcal{A}$, define
\begin{equation}
G=\sup_{g:\mathcal{Y}\to[0,1]}\left|\frac{\log Q_X(\mathcal{A})\,\mathbb{E}_Q [g(Y)]}{\log \mathbb{E}_P [1(X\in \mathcal{A}) g(Y)]}-1\right|^+.
\end{equation}
We may also define
\begin{equation}\label{eq:Gprime_def}
\new{G'=\sup_{\mathcal{B}\subset\mathcal{Y}} \left|\frac{\log Q_X(\mathcal{A})\,Q_Y(\mathcal{B})}{\log P_{XY}(\mathcal{A},\mathcal{B})}-1\right|^+.}
\end{equation}
\new{To complete the proof, it is enough to show that $G\le G'$.}
Rearranging \eqref{eq:Gprime_def}, for any \new{$\mathcal{B}\subset\mathcal{Y}$},
\begin{equation}\label{eq:G_condition}
\new{P_{X,Y}(\mathcal{A},\mathcal{B})^{1+G'}\le Q_X(\mathcal{A})Q_Y(\mathcal{B}).}
\end{equation}
\new{For any function $g:\mathcal{Y}\to[0,1]$, define the sets $\mathcal{B}_t=\{y:g(y)\ge t\}$. Thus
\begin{equation}
g(y)=\int_{0}^1 1(y\in \mathcal{B}_t) dt.
\end{equation}
Since $G'\ge 0$, $f(z)=z^{1+G'}$ is a convex function, which allows us to write
\begin{align}
(\mathbb{E}_P[1(X\in \mathcal{A}) g(Y)])^{1+G'}
&=\left(\mathbb{E}_P\left[1(X\in \mathcal{A}) \int_0^1 1(Y\in \mathcal{B}_t)dt\right]\right)^{1+G'}
\\&\le \int_0^1 dt (\mathbb{E}_P[1(X\in \mathcal{A})1(Y\in \mathcal{B}_t)])^{1+G'}\label{eq:DPI_convexity1}
\\&=\int _0^1 P_{XY}(\mathcal{A},\mathcal{B}_t)^{1+G'} dt
\\&\le \int_0^1 Q_X(\mathcal{A})Q_Y(\mathcal{B}_t)dt\label{eq:DPI_convexity}
\\&=Q_X(\mathcal{A}) \int_0^1 \mathbb{E}_Q[1(Y\in \mathcal{B}_t)]dt
\\&=Q_X(\mathcal{A}) \mathbb{E}_Q [g(Y)]\label{eq:DPI_convexity_end}
\end{align}
where \eqref{eq:DPI_convexity1} follows from Jensen's inequality and the fact that $\int_0^1 dt=1$, and \eqref{eq:DPI_convexity} follows from \eqref{eq:G_condition}. Since \eqref{eq:DPI_convexity_end} holds for all functions $g$, this implies $G\le G'$, which completes the proof.}
\end{IEEEproof}
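The layer-cake decomposition used in the proof, $g(y)=\int_0^1 1(g(y)\ge t)\,dt$ for $g$ taking values in $[0,1]$, is easy to sanity-check numerically. The following Python sketch (the particular test function $g$ is an arbitrary choice of ours) verifies the identity via a midpoint Riemann sum over the threshold $t$:

```python
import numpy as np

def layer_cake(g_vals, n_levels=100_000):
    """Approximate g(y) = int_0^1 1(g(y) >= t) dt by a midpoint
    Riemann sum over the threshold t, for g taking values in [0, 1]."""
    t = (np.arange(n_levels) + 0.5) / n_levels            # midpoints of [0, 1]
    return (g_vals[:, None] >= t[None, :]).mean(axis=1)   # fraction of levels <= g

y = np.linspace(0.0, 1.0, 50)
g = 0.5 * (1.0 + np.sin(3.0 * y))   # an arbitrary [0, 1]-valued test function
approx = layer_cake(g)
assert np.max(np.abs(approx - g)) < 1e-4
```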
\subsection{The Wringing Lemma}\label{sec:wringing_lemma}
The following result is our counterpart of Ahlswede's Lemma 4 from \cite{Ahlswede1982}.
\begin{lemma}\label{lemma:wringing}
Let $P_{XY}\in\mathcal{P}(\mathcal{X}\times\mathcal{Y})$, $Q_X\in\mathcal{P}(\mathcal{X})$, and $Q_Y\in\mathcal{P}(\mathcal{Y})$ be distributions such that
\begin{equation}
D_\infty(P_{XY}\|Q_XQ_Y)\le \sigma
\end{equation}
where $\sigma$ is finite. For any $\delta>0$, there exist sets $\bar\mathcal{X}\subset\mathcal{X},\bar\mathcal{Y}\subset\mathcal{Y}$ such that
\begin{equation}\label{eq:bar_sets_prob_bd}
P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})\ge \exp\left\{-\frac{\sigma}{\delta}\right\}
\end{equation}
and
\begin{equation}\label{eq:near_independence}
\Delta(\bar{X};\bar{Y})\le \delta
\end{equation}
where $(\bar{X},\bar{Y})$ are distributed according to $P_{XY|X\in\bar\mathcal{X},Y\in\bar\mathcal{Y}}$.
\end{lemma}
As we outlined in Sec.~\ref{sec:motivation}, Ahlswede's proof of \cite[Lemma~4]{Ahlswede1982} involved iteratively restricting the wringing sets until the desired property is achieved. While a proof of Lemma~\ref{lemma:wringing} along these lines would work for discrete variables, it does not directly generalize to arbitrary variables. Instead, we present a slightly different proof that does work in general.
\begin{IEEEproof}[Proof of Lemma~\ref{lemma:wringing}]
Let $\mathscr{A}$ be the collection of pairs of sets $(\mathcal{A},\mathcal{B})$ where $\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}$ such that $P_{XY}(\mathcal{A},\mathcal{B})>0$ and
\begin{equation}\label{eq:scS_def}
P_{XY}(\mathcal{A},\mathcal{B})^{1+\delta}\ge Q_X(\mathcal{A})Q_Y(\mathcal{B}).
\end{equation}
This set $\mathscr{A}$ is always non-empty, since it includes $(\mathcal{A},\mathcal{B})=(\mathcal{X},\mathcal{Y})$. For any $(\mathcal{A},\mathcal{B})\in\mathscr{A}$, using the assumption that $P_{XY}(\mathcal{A},\mathcal{B})>0$, we may rearrange \eqref{eq:scS_def} to write
\begin{align}
P_{XY}(\mathcal{A},\mathcal{B})&\ge \left(\frac{Q_X(\mathcal{A})Q_Y(\mathcal{B})}{P_{XY}(\mathcal{A},\mathcal{B})}\right)^{1/\delta}
\\&\ge\exp\left\{-\frac{\sigma}{\delta}\right\}\label{eq:scS_prob_bd}
\end{align}
where the second inequality follows from the assumption that $D_{\infty}(P_{XY}\|Q_XQ_Y)\le\sigma$.
We proceed to construct a pair of sets $(\bar\mathcal{X},\bar\mathcal{Y})\in\mathscr{A}$ that satisfy the following property:
\begin{equation}\label{eq:XbarYbar_prop}
\text{for all }\mathcal{A}\subset\bar\mathcal{X},\mathcal{B}\subset\bar\mathcal{Y},\text{ if }P_{XY}(\mathcal{A},\mathcal{B})<P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})\text{ then }(\mathcal{A},\mathcal{B})\notin\mathscr{A}.
\end{equation}
These sets can be easily found if the infimum is attained in
\begin{equation}\label{eq:inf_attained}
\inf_{(\mathcal{A},\mathcal{B})\in\mathscr{A}} P_{XY}(\mathcal{A},\mathcal{B}).
\end{equation}
That is, if there exist $(\bar\mathcal{X},\bar\mathcal{Y})\in\mathscr{A}$ such that $P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})\le P_{XY}(\mathcal{A},\mathcal{B})$ for all $(\mathcal{A},\mathcal{B})\in\mathscr{A}$, then \eqref{eq:XbarYbar_prop} follows easily. Note that the infimum in \eqref{eq:inf_attained} is always attained if $\mathcal{X},\mathcal{Y}$ are finite sets. However, if this infimum is not attained we need a different argument.
We create a sequence of pairs of sets $(\mathcal{A}_k,\mathcal{B}_k)\in\mathscr{A}$ for each non-negative integer $k$, as follows. First let $(\mathcal{A}_0,\mathcal{B}_0)=(\mathcal{X},\mathcal{Y})$. For any $k\ge 1$, given $(\mathcal{A}_{k-1},\mathcal{B}_{k-1})$, define $(\mathcal{A}_k,\mathcal{B}_k)$ as follows. Let
\begin{equation}\label{eq:pk_def}
p_k=\inf_{\mathcal{A}\subset \mathcal{A}_{k-1},\mathcal{B}\subset \mathcal{B}_{k-1}:(\mathcal{A},\mathcal{B})\in\mathscr{A}} P_{XY}(\mathcal{A},\mathcal{B}).
\end{equation}
Let $\mathcal{A}_k\subset \mathcal{A}_{k-1},\mathcal{B}_k\subset \mathcal{B}_{k-1}$ be such that $(\mathcal{A}_k,\mathcal{B}_k)\in\mathscr{A}$ and
\begin{equation}\label{eq:AkBk_def}
P_{XY}(\mathcal{A}_k,\mathcal{B}_k)\le p_k+\frac{1}{k}.
\end{equation}
This iteratively defines the sets $\mathcal{A}_k,\mathcal{B}_k$ for all $k$. We now define
\begin{equation}
\bar\mathcal{X}=\bigcap_{k\ge 0} \mathcal{A}_k,\quad \bar\mathcal{Y}=\bigcap_{k\ge 0} \mathcal{B}_k.
\end{equation}
We need to prove that $(\bar\mathcal{X},\bar\mathcal{Y})\in\mathscr{A}$ and that \eqref{eq:XbarYbar_prop} is satisfied. By the dominated convergence theorem,
\begin{equation}
P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})=\lim_{k\to\infty} P_{XY}(\mathcal{A}_k,\mathcal{B}_k),
\quad
Q_X(\bar\mathcal{X})=\lim_{k\to\infty} Q_X(\mathcal{A}_k),
\quad
Q_Y(\bar\mathcal{Y})=\lim_{k\to\infty} Q_Y(\mathcal{B}_k).
\end{equation}
These limits imply that $\bar\mathcal{X},\bar\mathcal{Y}$ satisfy \eqref{eq:scS_def}. Moreover, since $(\mathcal{A}_k,\mathcal{B}_k)\in\mathscr{A}$ for each $k$, the lower bound in \eqref{eq:scS_prob_bd} implies that $P_{XY}(\mathcal{A}_k,\mathcal{B}_k)\ge \exp\{-\frac{\sigma}{\delta}\}$, so $P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})$ is bounded away from $0$. Thus $(\bar\mathcal{X},\bar\mathcal{Y})\in\mathscr{A}$. To prove \eqref{eq:XbarYbar_prop}, consider any $\mathcal{A}\subset\bar\mathcal{X},\mathcal{B}\subset\bar\mathcal{Y}$ where $P_{XY}(\mathcal{A},\mathcal{B})<P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})$. Note that
\begin{equation}\label{eq:k_limit}
\lim_{k\to\infty} \left[P_{XY}(\mathcal{A}_k,\mathcal{B}_k)-\frac{1}{k}\right]=P_{XY}(\bar\mathcal{X},\bar\mathcal{Y}).
\end{equation}
Thus, there exists a finite $k$ such that $P_{XY}(\mathcal{A},\mathcal{B})<P_{XY}(\mathcal{A}_k,\mathcal{B}_k)-\frac{1}{k}$. By \eqref{eq:AkBk_def}, this implies that $P_{XY}(\mathcal{A},\mathcal{B})<p_k$, which means $\mathcal{A},\mathcal{B}$ cannot be feasible for the infimum defining $p_k$ in \eqref{eq:pk_def}. In particular, since $\mathcal{A}\subset\bar\mathcal{X}\subset \mathcal{A}_{k-1}$ and $\mathcal{B}\subset\bar\mathcal{Y}\subset \mathcal{B}_{k-1}$, it must be that $(\mathcal{A},\mathcal{B})\notin\mathscr{A}$. This proves the desired property of $(\bar\mathcal{X},\bar\mathcal{Y})$ in \eqref{eq:XbarYbar_prop}.
Given \eqref{eq:XbarYbar_prop}, we now complete the proof. Since $(\bar\mathcal{X},\bar\mathcal{Y})\in\mathscr{A}$, we immediately have the probability bound in \eqref{eq:bar_sets_prob_bd}. We now need to prove the bound on the wringing dependence in \eqref{eq:near_independence}. To show that $\Delta(\bar{X};\bar{Y})\le\delta$, it is enough to show that for all $\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}$,
\begin{equation}\label{eq:PQ_XbarYbar}
P_{XY|X\in\bar\mathcal{X},Y\in\bar\mathcal{Y}}(\mathcal{A},\mathcal{B})^{1+\delta}\le Q_{X|X\in\bar\mathcal{X}}(\mathcal{A})\,Q_{Y|Y\in\bar\mathcal{Y}}(\mathcal{B}).
\end{equation}
Letting $\mathcal{A}'=\mathcal{A}\cap\bar\mathcal{X},\mathcal{B}'=\mathcal{B}\cap\bar\mathcal{Y}$, we have
\begin{equation}
P_{XY|X\in\bar\mathcal{X},Y\in\bar\mathcal{Y}}(\mathcal{A},\mathcal{B})=\frac{P_{XY}(\mathcal{A}',\mathcal{B}')}{P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})},
\qquad Q_{X|X\in\bar\mathcal{X}}(\mathcal{A})=\frac{Q_X(\mathcal{A}')}{Q_X(\bar\mathcal{X})},
\qquad Q_{Y|Y\in\bar\mathcal{Y}}(\mathcal{B})=\frac{Q_Y(\mathcal{B}')}{Q_Y(\bar\mathcal{Y})}.
\end{equation}
Consider the case that $P_{XY}(\mathcal{A}',\mathcal{B}')=P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})$. Since $\mathcal{A}'\subset\bar\mathcal{X},\mathcal{B}'\subset\bar\mathcal{Y}$, we must have $P_{XY}((\bar\mathcal{X}\times\bar\mathcal{Y})\setminus(\mathcal{A}'\times \mathcal{B}'))=0$. By the assumption that $\sigma$ is finite, $P_{XY}\ll Q_XQ_Y$, so in particular $Q_XQ_Y((\bar\mathcal{X}\times\bar\mathcal{Y})\setminus(\mathcal{A}'\times \mathcal{B}'))=0$, and thus $Q_X(\mathcal{A}')Q_Y(\mathcal{B}')=Q_X(\bar\mathcal{X})Q_Y(\bar\mathcal{Y})$. Thus, each side of \eqref{eq:PQ_XbarYbar} equals $1$, so the inequality holds. Now consider the case that $P_{XY}(\mathcal{A}',\mathcal{B}')=0$. This implies that the LHS of \eqref{eq:PQ_XbarYbar} is $0$, so it holds trivially.
The remaining case is when $0<P_{XY}(\mathcal{A}',\mathcal{B}')<P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})$. By the key property of $(\bar\mathcal{X},\bar\mathcal{Y})$ in \eqref{eq:XbarYbar_prop}, we must have $(\mathcal{A}',\mathcal{B}')\notin\mathscr{A}$. Thus
\begin{align}
P_{XY|X\in\bar\mathcal{X},Y\in\bar\mathcal{Y}}(\mathcal{A},\mathcal{B})^{1+\delta}
&=\frac{P_{XY}(\mathcal{A}',\mathcal{B}')^{1+\delta}}{P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})^{1+\delta}}
\\&<\frac{Q_X(\mathcal{A}')Q_Y(\mathcal{B}')}{P_{XY}(\bar\mathcal{X},\bar\mathcal{Y})^{1+\delta}}\label{eq:AB_derive1}
\\&\le \frac{Q_X(\mathcal{A}')Q_Y(\mathcal{B}')}{Q_X(\bar\mathcal{X})Q_Y(\bar\mathcal{Y})}\label{eq:AB_derive2}
\\&=Q_{X|X\in\bar\mathcal{X}}(\mathcal{A})\,Q_{Y|Y\in\bar\mathcal{Y}}(\mathcal{B})
\end{align}
where \eqref{eq:AB_derive1} follows because $(\mathcal{A}',\mathcal{B}')\notin\mathscr{A}$ and $P_{XY}(\mathcal{A}',\mathcal{B}')>0$, which imply that \eqref{eq:scS_def} must be violated; and \eqref{eq:AB_derive2} follows because $(\bar\mathcal{X},\bar\mathcal{Y})\in\mathscr{A}$. This proves \eqref{eq:PQ_XbarYbar} for all $\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}$.
\end{IEEEproof}
\subsection{Relationship to Other Dependence Measures}\label{sec:other_measures}
\subsubsection{Hypercontractivity}\label{sec:hypercontractivity}
One of the first uses of hypercontractivity in information theory was \cite{Ahlswede1976a}, wherein Ahlswede and G\'acs were interested in establishing conditions under which random variables $X,Y$ satisfy
\begin{equation}\label{eq:sigma_tau}
P_{XY}(\mathcal{A},\mathcal{B})\le P_X(\mathcal{A})^{\sigma} P_Y(\mathcal{B})^{\tau}\text{ for all }\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}.
\end{equation}
To establish this inequality, they actually proved something stronger, namely
\begin{equation}\label{eq:hyper0}
\mathbb{E} [f(X)g(Y)]\le \|f(X)\|_{1/\sigma} \|g(Y)\|_{1/\tau}\text{ for all }f:\mathcal{X}\to\mathbb{R},g:\mathcal{Y}\to\mathbb{R}
\end{equation}
where for a real-valued \new{random} variable $Z$, $\|Z\|_r=(\mathbb{E} [|Z|^r])^{1/r}$. By optimizing over $f$, one finds that \eqref{eq:hyper0} is equivalent to
\begin{equation}\label{eq:hyper1}
\|\mathbb{E}[g(Y)|X]\|_{1/(1-\sigma)}\le \|g(Y)\|_{1/\tau}\text{ for all }g:\mathcal{Y}\to\mathbb{R}.
\end{equation}
Such an inequality is known as \emph{hypercontractivity}. If the inequality is reversed, it is known as \emph{reverse hypercontractivity} \cite{Mossel2013}. The advantage of working with hypercontractivity rather than the more operationally meaningful inequality \eqref{eq:sigma_tau} is that hypercontractivity tensorizes: that is, if \eqref{eq:hyper1} holds for $X,Y$, then it also holds for $X^n,Y^n$ where $(X_t,Y_t)$ are i.i.d. with the same distribution as $X,Y$.
The relationship between hypercontractivity and wringing dependence is apparent from \eqref{eq:sigma_tau}; namely, this inequality is identical to the inequality defining the wringing dependence in \eqref{eq:Delta_def0}, but with $Q_X=P_X,Q_Y=P_Y$, and $\sigma=\tau=1/(1+\delta)$. We make this relationship precise as follows.
For a pair of random variables $X,Y$, \cite{Kamath2012} defined the \emph{hypercontractivity ribbon} $\mathcal{R}_{X;Y}$ as the set of pairs $(r,s)$ where one of the following hold:
\begin{itemize}
\item $1\le s\le r$, and for all $g:\mathcal{Y}\to\mathbb{R}$,
\begin{equation}\label{eq:hyper}
\|\mathbb{E}[g(Y)|X]\|_r\le \|g(Y)\|_s,
\end{equation}
\item $1\ge s\ge r$, and for all $g:\mathcal{Y}\to\mathbb{R}_+$,
\begin{equation}\label{eq:reverse_hyper}
\|\mathbb{E}[g(Y)|X]\|_r\ge \|g(Y)\|_s.
\end{equation}
\end{itemize}
The second condition concerns reverse hypercontractivity, which does not appear to be related to the wringing dependence, but we have included it for completeness. The following proposition, which is proved in Appendix~\ref{appendix:hyper}, connects the wringing dependence to the hypercontractivity ribbon.
\begin{proposition}\label{prop:hypercontractivity}
Given random variables $X,Y$, let
\begin{equation}\label{eq:Delta_hyp_def}
\Delta_{\text{hyp}}(X;Y)=\inf\{\delta\in[0,1]: (1+1/\delta,1+\delta)\in\mathcal{R}_{X;Y}\}.
\end{equation}
Then
\begin{equation}\label{eq:hyp_upper_bd}
\Delta(X;Y)\le\Delta_{\text{hyp}}(X;Y).
\end{equation}
Moreover, if we let $X^n,Y^n$ be jointly i.i.d.\ where $P_{X_tY_t}=P_{XY}$ for each $t\in[n]$, then $\Delta(X^n;Y^n)$ is a non-decreasing sequence such that
\begin{equation}\label{eq:hyp_limit}
\lim_{n\to\infty} \Delta(X^n;Y^n)=\Delta_{\text{hyp}}(X;Y).
\end{equation}
\end{proposition}
Note that the quantity $\Delta_{\text{hyp}}(X;Y)$ defined in \eqref{eq:Delta_hyp_def} involves checking whether $(r,s)\in\mathcal{R}_{X;Y}$ where $r=1+1/\delta$ and $s=1+\delta$ for some $\delta\in[0,1]$; this is the regime where $1\le s\le r$, which corresponds to hypercontractivity rather than reverse hypercontractivity. The proof of the upper bound on wringing dependence in \eqref{eq:hyp_upper_bd} follows from essentially the same argument as the one \cite{Ahlswede1976a} used to establish inequalities of the form \eqref{eq:sigma_tau} via hypercontractivity. The limiting behavior of the wringing dependence in \eqref{eq:hyp_limit} is proved by an argument very similar to that of \cite{Nair2014}, which gives several equivalent characterizations of the hypercontractivity ribbon.
We illustrate Prop.~\ref{prop:hypercontractivity} with two examples: the doubly-symmetric binary source, and bivariate Gaussians. For the DSBS, $\Delta_{\text{hyp}}(X;Y)$ is shown to be strictly larger than the wringing dependence, and so \eqref{eq:hyp_upper_bd} is a loose bound. For bivariate Gaussians, \eqref{eq:hyp_upper_bd} gives a tight bound. In fact, the wringing dependence for bivariate Gaussians is quite difficult to compute directly from the definition, but Prop.~\ref{prop:hypercontractivity} allows us to find it exactly: for bivariate Gaussians with correlation coefficient $\rho$, $\Delta(X;Y)=|\rho|$. This establishes that the last of R\'enyi's axioms from \cite{Renyi1959} holds for wringing dependence.
\begin{example}[DSBS]
Let $(X,Y)$ be a DSBS with parameter $p$ as in Example~\ref{example:DSBS}. In \cite{Kamath2012}, it was established that the hypercontractivity ribbon consists of the pairs $(r,s)$ where either $(1-2p)^2 (r-1)+1\le s\le r$ or $r\le s\le (1-2p)^2(r-1)+1$. In particular, $(1+1/\delta,1+\delta)\in\mathcal{R}_{X;Y}$ iff
\begin{equation}
(1-2p)^2 \frac{1}{\delta}+1\le 1+\delta
\end{equation}
which holds if $\delta\ge |1-2p|$. Therefore, $\Delta_{\text{hyp}}(X;Y)=|1-2p|$. Note that this quantity is strictly larger than the wringing dependence as calculated in Example~\ref{example:DSBS}, except for the trivial cases where $p\in\{0,1/2,1\}$.
\end{example}
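As a numerical sanity check (not part of the formal development), the wringing dependence of the DSBS can be upper bounded directly from its definition by restricting the infimum to symmetric product measures $Q_X=Q_Y=\mathrm{Ber}(q)$ and searching over a grid of $q$; since the wringing dependence is an infimum over $Q$, any such restriction yields an upper bound. The sketch below (grid size and $p=0.1$ are our choices) confirms that this upper bound already falls strictly below $\Delta_{\text{hyp}}(X;Y)=|1-2p|$:

```python
import itertools
import math

import numpy as np

def dsbs_wringing_ub(p, n_grid=199):
    """Upper bound on the wringing dependence of a DSBS(p), restricting
    the infimum over (Q_X, Q_Y) to symmetric products Q_X = Q_Y = Ber(q)."""
    # joint PMF on {0,1}^2: P(0,0) = P(1,1) = (1-p)/2, P(0,1) = P(1,0) = p/2
    P = {(0, 0): (1 - p) / 2, (1, 1): (1 - p) / 2,
         (0, 1): p / 2, (1, 0): p / 2}
    subsets = [(0,), (1,), (0, 1)]
    best = math.inf
    for q in np.linspace(0.005, 0.995, n_grid):
        Q = {(0,): q, (1,): 1 - q, (0, 1): 1.0}
        worst = 0.0
        for A, B in itertools.product(subsets, subsets):
            pAB = sum(P[(x, y)] for x in A for y in B)
            if pAB <= 0 or pAB >= 1:       # skip degenerate rectangles
                continue
            ratio = math.log(Q[A] * Q[B]) / math.log(pAB)
            worst = max(worst, ratio - 1)  # |.|^+ applied to (ratio - 1)
        best = min(best, worst)
    return best

p = 0.1
delta_ub = dsbs_wringing_ub(p)   # an upper bound on Delta(X;Y)
delta_hyp = abs(1 - 2 * p)       # = 0.8, from the example above
assert delta_ub < delta_hyp      # so Delta(X;Y) < Delta_hyp(X;Y): the bound is loose
```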
\begin{example}[Bivariate Gaussians] Let $(X,Y)$ have a bivariate Gaussian distribution with correlation coefficient $\rho$. We claim that $\Delta(X;Y)=|\rho|$. Without loss of generality, we may assume that $X,Y$ each have zero mean, and covariance matrix
\begin{equation}
\left[\begin{array}{cc} 1 & \rho \\ \rho & 1\end{array}\right].
\end{equation}
We may assume that $\rho \ge 0$, since if not we may simply replace $Y$ with $-Y$. We upper bound $\Delta(X;Y)$ via Prop.~\ref{prop:hypercontractivity}. A result originally by Nelson \cite{Nelson1973}, which is also a consequence of the Gaussian log-Sobolev inequality \cite{Gross1975}, is that for any function $g:\mathbb{R}\to\mathbb{R}$, \eqref{eq:hyper} holds for $r\ge s\ge 1$ if $\rho\le \sqrt{(s-1)/(r-1)}$. (See \cite[Sec.~3.2]{Raginsky2013} for an information-theoretic treatment of this inequality.) Thus, with $r=1+1/\delta$ and $s=1+\delta$, $(r,s)\in\mathcal{R}_{X;Y}$ if $\rho\le \delta$. Therefore $\Delta_{\text{hyp}}(X;Y)\le\rho$, and so $\Delta(X;Y)\le \rho$ by Prop.~\ref{prop:hypercontractivity}.
We now show that $\Delta(X;Y)\ge\rho$. If $\rho=1$, then $X=Y$, so $\Delta(X;Y)=1$. Now suppose that $\rho<1$. Let $\delta=\Delta(X;Y)$. Applying \eqref{eq:dep_pp_bd} from Thm.~\ref{thm:props}, for any $\mathcal{A},\mathcal{B}\subset\mathbb{R}$
\begin{equation}\label{eq:gaussian_pp_bd}
P_{XY}(\mathcal{A},\mathcal{B})\le (1+2\delta)(P_X(\mathcal{A})P_Y(\mathcal{B}))^{1/(1+\delta)}.
\end{equation}
In particular, for a parameter $a\ge 0$ (we will eventually take the limit $a\to\infty$), we may choose $\mathcal{A}=\mathcal{B}=[a,a+1]$. Let $\phi(x)$ be the standard Gaussian PDF. Since $\phi(x)$ is decreasing for $x\in[a,a+1]$, we have
\begin{equation}
P_X(\mathcal{A})=P_Y(\mathcal{B})=\int_{a}^{a+1} \phi(x)dx\le \phi(a).
\end{equation}
The joint PDF of $(X,Y)$ is
\begin{equation}
f_{XY}(x,y)=\frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{-\frac{x^2+y^2-2\rho xy}{2(1-\rho^2)}\right\}.
\end{equation}
In particular, $f_{XY}(x,y)$ is decreasing in $x$ and $y$ if $x\ge \rho y$ and $y\ge \rho x$. From the assumption that $\rho<1$, these conditions hold for all $x,y\in[a,a+1]$ for sufficiently large $a$. Thus
\begin{equation}
P_{XY}(\mathcal{A},\mathcal{B})=\int_{a}^{a+1}dx\int_{a}^{a+1}dy\, f_{XY}(x,y)\ge f_{XY}(a+1,a+1).
\end{equation}
Plugging into \eqref{eq:gaussian_pp_bd} gives
\begin{equation}
\frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left\{-\frac{(a+1)^2(1-\rho)}{1-\rho^2}\right\}
\le (1+2\delta)\exp\left\{-\frac{a^2}{1+\delta}\right\}.
\end{equation}
Thus
\begin{equation}
-\frac{(a+1)^2}{1+\rho}-\log(2\pi\sqrt{1-\rho^2})\le -\frac{a^2}{1+\delta}+\log(1+2\delta).
\end{equation}
Dividing by $a^2$ and taking a limit as $a\to\infty$ gives $\rho\le\delta$. That is, $\Delta(X;Y)\ge\rho$.
\end{example}
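The two-sided nature of the argument above can be checked numerically. The sketch below evaluates the lower bound $f_{XY}(a+1,a+1)$ on $P_{XY}(\mathcal{A},\mathcal{B})$ against the upper bound $(1+2\delta)e^{-a^2/(1+\delta)}$ on the right side of \eqref{eq:gaussian_pp_bd} (the factor $(2\pi)^{-1/(1+\delta)}\le 1$ is dropped, as in the text; $\rho=0.5$ and the grid of $a$ values are our choices). With $\delta=\rho$ the inequality holds along the whole grid, while $\delta<\rho$ eventually fails, which is exactly what forces $\Delta(X;Y)\ge\rho$ in the limit:

```python
import math

def lhs(a, rho):
    """Lower bound on P_XY([a, a+1]^2): the joint density at (a+1, a+1)."""
    return math.exp(-((a + 1) ** 2) * (1 - rho) / (1 - rho ** 2)) \
        / (2 * math.pi * math.sqrt(1 - rho ** 2))

def rhs(a, delta):
    """Upper bound (1 + 2*delta) * exp(-a^2 / (1 + delta)) on the RHS of
    the rectangle inequality, after dropping a constant factor <= 1."""
    return (1 + 2 * delta) * math.exp(-a ** 2 / (1 + delta))

rho = 0.5
# with delta = rho, the inequality holds along the whole grid ...
assert all(lhs(a, rho) <= rhs(a, rho) for a in range(50))
# ... but with delta < rho it eventually fails, so Delta(X;Y) >= rho
assert lhs(20.0, rho) > rhs(20.0, 0.25)
```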
\subsubsection{Maximal Correlation}
The maximal correlation, which was introduced in \cite{Hirschfeld1935,Gebelein1941} and further studied in \cite{Renyi1959}, is given by
\begin{equation}\label{eq:max_correlation_def}
\rho_m(X;Y)=\sup_{f,g} \rho(f(X);g(Y))
\end{equation}
where the supremum is over all real-valued functions $f:\mathcal{X}\to\mathbb{R}$ and $g:\mathcal{Y}\to\mathbb{R}$ such that $f(X)$ and $g(Y)$ have finite, non-zero variances, and $\rho(\cdot;\cdot)$ is the correlation coefficient. The maximal correlation shares much in common with the wringing dependence: in particular, both satisfy all 7 axioms from \cite{Renyi1959}. Moreover, the maximal correlation provides a simple bound on the hypercontractivity ribbon (see \cite{Kamath2012}); this implies that $\Delta_\text{hyp}(X;Y)\ge \rho_m(X;Y)$, where $\Delta_{\text{hyp}}$ is defined in \eqref{eq:Delta_hyp_def}. The following result, proved in Appendix~\ref{appendix:max_corr}, shows that if the wringing dependence is small, then the maximal correlation is also small.
\begin{lemma}\label{lemma:max_corr}
If $\Delta(X;Y)\le\delta$, then the maximal correlation is bounded by
\begin{equation}
\rho_m(X;Y)\le O(\delta\log \delta^{-1}).
\end{equation}
\end{lemma}
This result will be particularly useful when addressing the Gaussian MAC; see Sec.~\ref{sec:gaussian}. Unfortunately, the bound in Lemma~\ref{lemma:max_corr} is not linear; in fact, no universal bound of the form $\rho_m(X;Y)\le K\,\Delta(X;Y)$ is possible.\footnote{If there were such a bound, analyzing the Gaussian MAC would dramatically simplify.}
This is illustrated in the following example. This example also shows that Lemma~\ref{lemma:max_corr} is order-optimal; in fact, for any $0<c<1$ and any $\delta>0$, there exists a distribution $P_{XY}$ where $\Delta(X;Y)\le\delta$ and
\begin{equation}\label{eq:binary_example_bd}
\rho_m(X;Y)\ge c\, \delta\log \delta^{-1}.
\end{equation}
\begin{example}\label{example:max_corr}
For any $a\in[0,1/2]$, let $X,Y$ be binary variables with joint PMF given by
\vspace{1ex}
\begin{center}
\begin{tabular}{c|c c}
\diagbox{$Y$}{$X$} & $0$ & $1$\\ \hline
$0$ & $1-2a$ & $a$\\
$1$ & $a$ & $0$
\end{tabular}
\end{center}
\vspace{1ex}
Note that $P_X=P_Y=\text{Ber}(a)$. We first calculate the maximal correlation. Since $X,Y$ are both binary, the only nontrivial functions of them are the identity function and its complement, so
\begin{equation}
\rho_m(X;Y)=|\rho(X;Y)|=\frac{|\mathbb{E}[ XY]-\mathbb{E} [X]\,\mathbb{E} [Y]|}{\sqrt{\var(X)\var(Y)}}=\frac{a^2}{a(1-a)}=\frac{a}{1-a}.
\end{equation}
To compute the wringing dependence, recall that the function of $(Q_X,Q_Y)$ in the definition in \eqref{eq:Delta_def} is concave. Since $X$ and $Y$ have the same distribution, the optimal choice has $Q_X=Q_Y$. If we let $Q_X=Q_Y=\text{Ber}(q)$, then we see that wringing dependence between $X$ and $Y$ is
\begin{equation}
\Delta(X;Y)=\inf_{q\in[0,1]} \max\left\{\frac{\log q(1-q)}{\log a},\,\frac{\log(1-q)^2}{\log(1-2a)}\right\}-1.
\end{equation}
While there is no simpler closed-form expression, this quantity can be easily computed. Fig.~\ref{fig:max_corr_example} shows the relationship between maximal correlation and wringing dependence across the range of $a$. To analytically establish that this example satisfies the claim \eqref{eq:binary_example_bd}, we may upper bound the wringing dependence by plugging in $q=a$, to find
\begin{align}
\Delta(X;Y)&\le \max\left\{\frac{\log a(1-a)}{\log a},\frac{\log(1-a)^2}{\log(1-2a)}\right\}-1
\\&=\frac{\log a(1-a)}{\log a}-1
\\&=\frac{\log (1-a)}{\log a}.
\end{align}
Thus
\begin{align}
\lim_{a\to 0} \frac{\Delta(X;Y)\log \Delta(X;Y)^{-1}}{\rho_m(X;Y)}
&\le \lim_{a\to 0} \frac{1-a}{a}\, \frac{\log(1-a)}{\log a} \log\left( \frac{\log a}{\log(1-a)}\right)
\\&=\lim_{a\to 0} (1-a)\cdot \frac{-\log(1-a)}{a}\cdot \frac{\log(-\log a)-\log(-\log(1-a))}{-\log a}.\label{eq:binary_example_limit}
\end{align}
\new{We proceed to show that the limit as $a\to 0$ of each of the three multiplied terms in \eqref{eq:binary_example_limit} is $1$. The limit of the first term is certainly $1$; the limit of the second term can be seen to be $1$ by an application of L'Hopital's rule. For the third term, we have
\begin{align}
\lim_{a\to 0}\frac{\log(-\log a)-\log(-\log(1-a))}{-\log a}
&=\lim_{a\to 0} \frac{\frac{1}{a\log a}+\frac{1}{(1-a)\log(1-a)}}{-1/a}\label{eq:lhopital1}
\\&=\lim_{a\to 0} \left[\frac{-1}{\log a}-\frac{a}{(1-a)\log(1-a)}\right]\label{eq:lhopital2}
\\&=\lim_{a\to 0}\frac{-a}{(1-a)\log(1-a)}\label{eq:lhopital3}
\\&=\lim_{a\to 0}\frac{-1}{-\log(1-a)-1}\label{eq:lhopital4}
\\&=1\label{eq:lhopital5}
\end{align}
where \eqref{eq:lhopital1} and \eqref{eq:lhopital4} follow from L'Hopital's rule, and \eqref{eq:lhopital3} holds since $\log a\to -\infty$.}
Therefore, for any $0<c<1$, there exists a sufficiently small $a$ such that \eqref{eq:binary_example_bd} holds.
\end{example}
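The quantities in this example are straightforward to evaluate numerically. The following sketch computes the wringing dependence by a grid search over $q$ in the one-dimensional reduction above (the optimum is interior, near $q=a$, so a grid on a neighborhood of $a$ suffices) and illustrates that the ratio $\rho_m(X;Y)/\Delta(X;Y)$ is already large at the moderate value $a=0.01$, our choice of parameter:

```python
import math

import numpy as np

a = 0.01
rho_m = a / (1 - a)   # maximal correlation of the example

# wringing dependence via the reduction to Q_X = Q_Y = Ber(q):
# Delta = inf_q max{log(q(1-q))/log(a), log((1-q)^2)/log(1-2a)} - 1;
# the optimum is interior (near q = a), so a grid on (0, 0.1] suffices
q = np.linspace(1e-4, 0.1, 100001)
f1 = np.log(q * (1 - q)) / math.log(a)
f2 = np.log((1 - q) ** 2) / math.log(1 - 2 * a)
delta = float(np.min(np.maximum(f1, f2))) - 1

assert delta <= math.log(1 - a) / math.log(a)  # the q = a upper bound
assert rho_m > 5 * delta                       # rho_m / Delta is already large
```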
\begin{figure}
\begin{center}
\includegraphics[width=4in]{max_corr_example.eps}
\caption{The relationship between wringing dependence and maximal correlation for Example~\ref{example:max_corr}, plotted across the range of $a\in[0,1/2]$. Of particular note about this example is that, in the vicinity of the point $(0,0)$, the slope of the curve is infinite.}
\label{fig:max_corr_example}
\end{center}
\end{figure}
Another interesting fact is that while Lemma~\ref{lemma:max_corr} upper bounds the maximal correlation by a function of the wringing dependence, no lower bound is possible. The following example illustrates that the maximal correlation can be arbitrarily close to $0$ while the wringing dependence is arbitrarily close to $1$.
\begin{example}
Given parameter $a$, let $X,Y$ be binary variables with joint PMF given by
\vspace{1ex}
\begin{center}
\begin{tabular}{c|c c}
\diagbox{$Y$}{$X$} & $0$ & $1$\\ \hline
$0$ & $a$ & $a\log a^{-1}$\\
$1$ & $a\log a^{-1}$ & $1-a-2a\log a^{-1}$
\end{tabular}
\end{center}
\vspace{1ex}
We claim that as $a\to 0$, $\rho_m(X;Y)\to 0$ while $\Delta(X;Y)\to 1$. The maximal correlation can be computed as
\begin{equation}
\rho_m(X;Y)
=\frac{a-(a+a\log a^{-1})^2}{(a+a\log a^{-1})(1-a-a\log a^{-1})}
=\frac{a-o(a)}{a\log a^{-1}+o(a\log a^{-1})}=\frac{1-o(1)}{\log a^{-1}}
\end{equation}
which vanishes as $a\to 0$. We may lower bound the wringing dependence by
\begin{align}
\Delta(X;Y)&\ge \inf_q \max\left\{\frac{\log q^2}{\log a},\, \frac{\log (1-q)^2}{\log (1-a-2a\log a^{-1})}\right\}-1\label{eq:ex_min_max0}
\\&=\sup_q \min\left\{\frac{\log q^2}{\log a},\, \frac{\log (1-q)^2}{\log (1-a-2a\log a^{-1})}\right\}-1\label{eq:ex_min_max}
\end{align}
where \eqref{eq:ex_min_max} holds since the first function inside the maximum in \eqref{eq:ex_min_max0} is decreasing in $q$ while the second function is increasing. We may now lower bound \eqref{eq:ex_min_max} by choosing $q=2a\log a^{-1}$, which gives
\begin{equation}
\frac{\log q^2}{\log a}
=\frac{2\log (2a\log a^{-1})}{\log a}
=\frac{2\log a+2\log (2\log a^{-1})}{\log a}
=2-O\left(\frac{\log \log a^{-1}}{\log a^{-1}}\right)
\end{equation}
and
\begin{equation}
\frac{\log (1-q)^2}{\log (1-a-2a\log a^{-1})}
=\frac{2\log (1-2a\log a^{-1})}{\log (1-a-2a\log a^{-1})}
=\frac{4a\log a^{-1}+O(a^2\log^2 a^{-1})}{2a\log a^{-1}+O(a)}
= 2-O\left(\frac{1}{\log a^{-1}}\right).
\end{equation}
Therefore, in the limit as $a\to 0$, \eqref{eq:ex_min_max} approaches $1$.
\end{example}
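A quick numerical check of this example (the parameter value $a=10^{-6}$ and the grid are our choices): the maximal correlation is already below $0.1$, while the lower bound \eqref{eq:ex_min_max} on the wringing dependence, evaluated by a grid search over $q$, exceeds $0.5$.

```python
import math

import numpy as np

a = 1e-6
L = math.log(1 / a)        # log a^{-1}
p0 = a + a * L             # P(X = 0) = a + a log a^{-1}
rho_m = (a - p0 ** 2) / (p0 * (1 - p0))   # maximal correlation

# lower bound on Delta(X;Y): sup_q min{...} - 1, evaluated on a log grid of q
q = np.geomspace(1e-8, 1e-2, 4001)
f1 = 2 * np.log(q) / math.log(a)
f2 = 2 * np.log(1 - q) / math.log(1 - a - 2 * a * L)
delta_lb = float(np.max(np.minimum(f1, f2))) - 1

assert rho_m < 0.1        # the maximal correlation is already small ...
assert delta_lb > 0.5     # ... while the wringing dependence stays large
```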
\section{Finite Blocklength Converse Bound}\label{sec:fbl}
Before stating our main finite blocklength bound, we need the following definition. Given distributions $P,Q_1,\ldots,Q_k$ on alphabet $\mathcal{X}$, we define the achievable region for a hypothesis test between a simple hypothesis $P$ and the composite hypothesis $\{Q_1,\ldots,Q_k\}$ by the set
\begin{equation}\label{eq:composite_hypothesis_testing}
{\boldsymbol\beta}_\alpha(P,Q_1,\ldots,Q_k)=\bigcup_{\substack{T:\mathcal{X}\to[0,1],\\ \mathbb{E}_P [T(X)]\ge\alpha}}\{(\beta_1,\ldots,\beta_k)\in[0,1]^k:\mathbb{E}_{Q_i}[T(X)]\le\beta_i\text{ for }i=1,\ldots,k\}.
\end{equation}
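For orientation, when $k=1$ and the alphabet is finite, the lower boundary of \eqref{eq:composite_hypothesis_testing} is the classical Neyman--Pearson tradeoff: the optimal test accepts symbols in decreasing order of the likelihood ratio and randomizes on the boundary symbol. A minimal sketch (the function name and interface are ours, assuming $Q>0$ pointwise):

```python
import numpy as np

def beta_min(alpha, P, Q):
    """Smallest beta = E_Q[T] over tests T: X -> [0, 1] with E_P[T] >= alpha
    (Neyman-Pearson), for PMFs P, Q on a finite alphabet with Q > 0."""
    order = np.argsort(-P / Q)            # decreasing likelihood ratio
    P_s, Q_s = P[order], Q[order]
    cumP, cumQ = np.cumsum(P_s), np.cumsum(Q_s)
    k = min(int(np.searchsorted(cumP, alpha)), len(P_s) - 1)  # boundary symbol
    prev_p = float(cumP[k - 1]) if k > 0 else 0.0
    beta = float(cumQ[k - 1]) if k > 0 else 0.0
    # randomize on the boundary symbol so that E_P[T] = alpha exactly
    return beta + (alpha - prev_p) / float(P_s[k]) * float(Q_s[k])

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.2, 0.3, 0.5])
b = beta_min(0.8, P, Q)   # accepts the two highest-ratio symbols: b ~ 0.5
```

For the composite case with $k>1$ alternatives, the same test $T$ is simply evaluated against each $Q_i$ to produce the vector $(\beta_1,\ldots,\beta_k)$.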
The following is our finite blocklength converse bound for the MAC. It follows the same core steps as Ahlswede's proof from \cite{Ahlswede1982}, while using wringing dependence in the wringing step, and is also written in a one-shot manner.
\begin{theorem}\label{thm:FBL}
Suppose there exists an $(M_1,M_2,\epsilon)$ code for the one-shot MAC $W\in\mathcal{P}(\mathcal{X}\times\mathcal{Y}\to\mathcal{Z})$. For any $\lambda>\epsilon$, $\delta>0$, there exists a distribution $P_{XY}\in\mathcal{P}(\mathcal{X}\times\mathcal{Y})$ where $\Delta(X;Y)\le\delta$, and for any $Q_Z\in\mathcal{P}(\mathcal{Z}),Q_{Z|Y}\in\mathcal{P}(\mathcal{Z}|\mathcal{Y}),Q_{Z|X}\in\mathcal{P}(\mathcal{Z}|\mathcal{X})$,
\begin{align}
\frac{1}{M_1M_2}&\ge \left(1-\frac{\epsilon}{\lambda}\right)^{1+1/\delta}
\mathbb{E} [\beta_{12}(X,Y)],\label{eq:FBL_MN}\\
\frac{1}{M_1}&\ge \left(1-\frac{\epsilon}{\lambda}\right)^{1+1/\delta}
\mathbb{E}[ \beta_{1}(X,Y)],\label{eq:FBL_M}\\
\frac{1}{M_2}&\ge \left(1-\frac{\epsilon}{\lambda}\right)^{1+1/\delta}
\mathbb{E}[ \beta_{2}(X,Y)]\label{eq:FBL_N}
\end{align}
where the expectations are with respect to $P_{XY}$, and for each $x,y$,
\begin{equation}\label{eq:beta_condition}
(\beta_{12}(x,y),\beta_1(x,y),\beta_2(x,y))\in{\boldsymbol\beta}_{1-\lambda}(W_{xy},Q_Z,Q_{Z|Y=y},Q_{Z|X=x}).
\end{equation}
\end{theorem}
\begin{IEEEproof}
Consider a (stochastic) code given by encoders $P_{X|I_1}\in\mathcal{P}([M_1]\to\mathcal{X})$ and $P_{Y|I_2}\in\mathcal{P}([M_2]\to\mathcal{Y})$, and decoder $P_{\hat{I}_1,\hat{I}_2|Z}\in\mathcal{P}(\mathcal{Z}\to [M_1]\times[M_2])$ with average probability of error at most $\epsilon$. Let $Q_X$ be the distribution induced on $X$ assuming $I_1$ is uniform on $[M_1]$; \new{i.e.,
\begin{equation}\label{eq:QX_written_out}
Q_X(\mathcal{A})=\frac{1}{M_1}\sum_{i_1=1}^{M_1} P_{X|I_1=i_1}(\mathcal{A}).
\end{equation}
Let} $Q_Y$ be the \new{corresponding} distribution induced on $Y$ assuming $I_2$ is uniform on $[M_2]$. Also let $Q_{XY}=Q_XQ_Y$ be the product distribution. Let $\mathcal{E}$ be the error event, that is
\begin{equation}
\mathcal{E}=\{(\hat{I}_1,\hat{I}_2)\ne (I_1,I_2)\}.
\end{equation}
Given any $\lambda>\epsilon$, we may define the expurgation set by
\begin{equation}
\Gamma=\{(x,y)\in\mathcal{X}\times\mathcal{Y}:\mathbb{P}(\mathcal{E}|X=x,Y=y)\le\lambda\}.
\end{equation}
That is, $\Gamma$ is the set of transmitted pairs $(x,y)$ that give probability of error at most $\lambda$. From the assumption that the probability of error is at most $\epsilon$,
\begin{align}
\epsilon&\ge \mathbb{P}(\mathcal{E})
\\&\ge \mathbb{P}(\mathcal{E},(X,Y)\notin\Gamma)
\\&\ge(1-Q_{XY}(\Gamma))\lambda
\end{align}
so
\begin{equation}\label{eq:Q_Gamma_bd}
Q_{XY}(\Gamma)\ge1-\frac{\epsilon}{\lambda}.
\end{equation}
Let $P_{X'Y'}=Q_{XY|(X,Y)\in\Gamma}.$ We may bound the R\'enyi divergence between these two distributions by
\begin{align}
D_\infty(P_{X'Y'}\|Q_{XY})&=\sup_{F\subset\mathcal{X}\times\mathcal{Y}} \log \frac{P_{X'Y'}(F)}{Q_{XY}(F)}
\\&=\sup_{F\subset\mathcal{X}\times\mathcal{Y}} \log \frac{Q_{XY}(F\cap\Gamma)}{Q_{XY}(\Gamma)Q_{XY}(F)}
\\&\le -\log Q_{XY}(\Gamma)
\\&\le -\log\left(1-\frac{\epsilon}{\lambda}\right).
\end{align}
We may now apply Lemma~\ref{lemma:wringing} with $\sigma=-\log(1-\epsilon/\lambda)$ and any fixed $\delta>0$, to find sets $\bar\mathcal{X}\subset\mathcal{X},\bar\mathcal{Y}\subset\mathcal{Y}$. Let $P_{XY}=P_{X'Y'|X'\in\bar\mathcal{X},Y'\in\bar\mathcal{Y}}$. From the lemma,
\begin{align}
\Delta(X;Y)&\le\delta,\\
P_{X'Y'}(\bar\mathcal{X},\bar\mathcal{Y})&\ge \exp\{-\sigma/\delta\}.
\end{align}
Using an identical calculation to the earlier bound on R\'enyi divergence,
\begin{align}
D_\infty(P_{XY}\|Q_{XY})&\le -\log Q_{XY}(\Gamma\cap \bar\mathcal{X}\times\bar\mathcal{Y})
\\&=-\log Q_{XY}(\Gamma) P_{X'Y'}(\bar\mathcal{X},\bar\mathcal{Y})
\\&\le \sigma+ \frac{\sigma}{\delta}
\\&=-\left(1+\frac{1}{\delta}\right)\log \left(1-\frac{\epsilon}{\lambda}\right).
\end{align}
Thus
\begin{equation}\label{eq:radon_nikodym_bd}
\frac{dP_{XY}}{dQ_{XY}}(x,y)\le \exp\{D_\infty(P_{XY}\|Q_{XY})\}\le \left(1-\frac{\epsilon}{\lambda}\right)^{-1-1/\delta}.
\end{equation}
We now define a hypothesis testing function $T:\mathcal{X}\times\mathcal{Y}\times\mathcal{Z}\to[0,1]$ given by
\begin{equation}
T(x,y,z)=\mathbb{P}(\mathcal{E}^c|(X,Y,Z)=(x,y,z)).
\end{equation}
From the definition of $\Gamma$, for any $(x,y)\in\Gamma$,
\begin{equation}
\int dW_{xy}(z) T(x,y,z)=\mathbb{P}(\mathcal{E}^c|(X,Y)=(x,y))\ge 1-\lambda.
\end{equation}
Thus, by the definition of the hypothesis testing quantity in \eqref{eq:composite_hypothesis_testing}, for any $Q_Z,Q_{Z|Y},Q_{Z|X}$, \eqref{eq:beta_condition} holds with
\begin{align}
\beta_{12}(x,y)&=\int dQ_Z(z)T(x,y,z),\\
\beta_1(x,y)&=\int dQ_{Z|Y=y}(z)T(x,y,z),\\
\beta_2(x,y)&=\int dQ_{Z|X=x}(z)T(x,y,z).
\end{align}
Thus
\begin{align}
\mathbb{E}[ \beta_{12}(X,Y)]
&=\int dP_{XY}(x,y) dQ_Z(z) T(x,y,z)
\\&\le \int dP_{XY}(x,y) dQ_Z(z) \mathbb{P}(\mathcal{E}^c|(X,Y,Z)=(x,y,z))
\\&\le \left(1-\frac{\epsilon}{\lambda}\right)^{-1-1/\delta} \int dQ_X(x)dQ_Y(y) dQ_Z(z) \mathbb{P}(\mathcal{E}^c|(X,Y,Z)=(x,y,z))\label{eq:MN_indep0}
\\&\le \left(1-\frac{\epsilon}{\lambda}\right)^{-1-1/\delta} \frac{1}{M_1M_2}\label{eq:MN_indep}
\end{align}
where \eqref{eq:MN_indep0} holds by the bound on the R\'enyi divergence from \eqref{eq:radon_nikodym_bd}, and \eqref{eq:MN_indep} holds because if $(X,Y,Z)\sim Q_XQ_YQ_Z$, then $(I_1,I_2)$ are uniformly random on $[M_1]\times [M_2]$ and $(\hat{I}_1,\hat{I}_2)$ are independent from them, so the probability of correct decoding is at most $\frac{1}{M_1M_2}$. Rearranging \eqref{eq:MN_indep} yields \eqref{eq:FBL_MN}.
By a nearly identical argument,
\begin{align}
\mathbb{E}[\beta_1(X,Y)]
&=\int dP_{XY}(x,y)dQ_{Z|Y=y}(z) T(x,y,z)
\\&\le\left(1-\frac{\epsilon}{\lambda}\right)^{-1-1/\delta} \int dQ_X(x) dQ_{Y}(y) dQ_{Z|Y=y}(z) \mathbb{P}(\mathcal{E}^c|(X,Y,Z)=(x,y,z))
\\&\le \left(1-\frac{\epsilon}{\lambda}\right)^{-1-1/\delta} \frac{1}{M_1}\label{eq:M_indep}
\end{align}
where \eqref{eq:M_indep} holds because if $(X,Y,Z)\sim Q_X Q_Y Q_{Z|Y}$, then $I_1$ and $\hat{I}_1$ are independent. Rearranging yields \eqref{eq:FBL_M}. The same calculation for $\mathbb{E}[\beta_2(X,Y)]$ yields \eqref{eq:FBL_N}.
\end{IEEEproof}
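As an aside, the counting step behind \eqref{eq:MN_indep} — that an estimate independent of a uniform message pair is correct with probability exactly $\frac{1}{M_1M_2}$, whatever the decoder's output distribution — can be checked exactly on a toy example. The alphabet sizes and decoder weights below are arbitrary illustrative choices.

```python
from fractions import Fraction
from itertools import product

# If (I1, I2) is uniform on [M1] x [M2] and the decoded pair (I1_hat, I2_hat)
# is independent of it, then P(correct) = 1/(M1*M2) regardless of the
# decoder's output distribution q. Verified exactly in rational arithmetic.
M1, M2 = 3, 4
pairs = list(product(range(M1), range(M2)))

# an arbitrary (non-uniform) decoder output distribution q on [M1] x [M2]
weights = {p: Fraction(1 + p[0] + 2 * p[1]) for p in pairs}
total = sum(weights.values())
q = {p: weights[p] / total for p in pairs}

# P(correct) = sum_{i,j} P(I = (i,j)) * q(i,j) = (1/(M1*M2)) * sum_{i,j} q(i,j)
p_correct = sum(Fraction(1, M1 * M2) * q[p] for p in pairs)
```

Since $\sum q=1$, the computation returns exactly $\frac{1}{M_1M_2}$, independent of the chosen weights.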
\section{Asymptotic Results}\label{sec:asymptotics}
We present two asymptotic results, each characterizing the second-order rate as $O(1/\sqrt{n})$ under certain assumptions on the channel. The first result \new{(Thm.~\ref{thm:most_general}) aims to bound the second-order rate with minimal assumptions on the channel, while giving the simplest possible proof of the result. In particular, Thm.~\ref{thm:most_general} avoids an assumption on the third-moment of the information density. The second result (Thm.~\ref{thm:DMC_asymptotic}) applies only to MACs with finite alphabets, but it gives a substantially tighter bound on the second-order rate for these channels. Thm.~\ref{thm:DMC_asymptotic} is intended to give the tightest possible bound on the second-order rate, at the cost of a more complicated proof.}
We state both results first, and then prove them in Secs.~\ref{sec:general_proof} and~\ref{sec:dmc_proof}. \new{Sec.~\ref{sec:maximal} provides some discussion of the maximal probability of error case.}
For $\alpha_1\ge \alpha_2\ge 0$, and any $\delta\ge 0$, define
\begin{align}
C_{\alpha_1,\alpha_2}(\delta)
&=\sup_{\substack{P_{UXY}:\Delta(X;Y|U=u)\le\delta\text{ for all }u,\\ \mathbb{E}[ b_1(X)]\le B_1,\\ \mathbb{E}[ b_2(Y)]\le B_2}}
\big[\alpha_2 I(X,Y;Z|U)
+(\alpha_1-\alpha_2) I(X;Z|Y,U)\big].\label{eq:Cdelta_def}
\end{align}
For $\alpha_2\ge \alpha_1\ge 0$, we define $C_{\alpha_1,\alpha_2}(\delta)$ similarly, except there is a term with $I(Y;Z|X,U)$ in place of the $I(X;Z|Y,U)$ term. Note that $C_{\alpha_1,\alpha_2}(0)=C_{\alpha_1,\alpha_2}$. Also let $C'_{\alpha_1,\alpha_2}(\delta)$ be the derivative of $C_{\alpha_1,\alpha_2}(\delta)$ with respect to $\delta$. Since $C_{\alpha_1,\alpha_2}(\delta)$ is non-decreasing in $\delta$, $C'_{\alpha_1,\alpha_2}(\delta)$ is well-defined, although it may be infinite. Let
\begin{equation}\label{eq:Vmax_def}
V_{\max}=\sup_{\substack{P_{UXY}:\\ \mathbb{E}[ b_1(X)]\le B_1, \\ \mathbb{E}[ b_2(Y)]\le B_2}}
\max\big\{V(W\|P_{Z|U}|P_{UXY}),\,V(W\|P_{Z|YU}|P_{UXY}),\,V(W\|P_{Z|XU}|P_{UXY})\big\}
\end{equation}
where $P_{Z|U},P_{Z|YU},P_{Z|XU}$ are the induced distributions from $P_{UXY}$.
Note that in this definition, there is no independence constraint on $P_{UXY}$.
\begin{theorem}\label{thm:most_general}
For any $\alpha_1,\alpha_2$ where $\max\{\alpha_1,\alpha_2\}=1$, and any $\epsilon\in(0,1)$,
\begin{equation}\label{eq:thm:most_general}
R_{\alpha_1,\alpha_2}^\star(n,\epsilon)\le C_{\alpha_1,\alpha_2}+\min_{\lambda\in(\epsilon,1)}\left[ 2\sqrt{C_{\alpha_1,\alpha_2}'(0)\log\frac{\lambda}{\lambda-\epsilon}}+\sqrt{\frac{V_{\max}}{1-\lambda}}\right]\frac{1}{\sqrt{n}}+o\left(\frac{1}{\sqrt{n}}\right).
\end{equation}
\end{theorem}
The proof of this result, found in Sec.~\ref{sec:general_proof}, applies an Augustin-type argument (cf. \cite{Augustin1966}), wherein Chebyshev's inequality is used to bound the hypothesis testing fundamental limit. Thus, the bound is only meaningful if the second moment statistic $V_{\max}$ is finite, but there is no requirement on the third moment, which allows Thm.~\ref{thm:most_general} to hold in a great deal of generality, although it can typically be improved with more careful analysis. The following corollary comes by plugging in, for example, $\lambda=\frac{\epsilon+1}{2}$ into \eqref{eq:thm:most_general}.
\begin{corollary}\label{cor:regularity}
If (i) $V_{\max}<\infty$, and (ii) $C_{\alpha_1,\alpha_2}'(0)$ is uniformly bounded for all $\alpha_1,\alpha_2$ where $\max\{\alpha_1,\alpha_2\}=1$, then for any $\epsilon\in(0,1)$,
\begin{equation}
\mathcal{R}(n,\epsilon)\subseteq \mathcal{C}+O\left(\frac{1}{\sqrt{n}}\right).
\end{equation}
\end{corollary}
As seen from Corollary~\ref{cor:regularity}, the second-order coding rate is $O(1/\sqrt{n})$ as long as two regularity conditions hold. The condition on $V_{\max}$ is not surprising, as any result of this form requires that the information density has a finite second moment. One slight complication arises from the fact that, in the definition of $V_{\max}$ in \eqref{eq:Vmax_def}, one cannot choose the output distribution $P_{Z|U}$ separately from the input distribution. That is, even though in Thm.~\ref{thm:FBL} the distribution $Q_Z$ (and $Q_{Z|Y},Q_{Z|X}$) is a free choice, we select only the induced output distribution. This complicates the analysis for some channels; for example, for the Gaussian point-to-point channel, in the second-order converse bound one typically chooses an i.i.d. Gaussian for the output distribution, as in \cite[Sec.~III-J]{Polyanskiy2010a}. By contrast, here that choice is not available. This difficulty is addressed for the Gaussian MAC in Appendix~\ref{appendix:gaussian}.
The second regularity condition, on the boundedness of $C_{\alpha_1,\alpha_2}'(0)$, wherein the wringing dependence appears, is more particular to our method. Verifying this condition requires analyzing the effect of the wringing dependence between the two inputs on the maximum achievable weighted-sum-rate. In the sequel, we establish that this condition holds in two cases: for any discrete-memoryless channel, as shown in Thm.~\ref{thm:DMC_asymptotic}, and for the Gaussian MAC, as discussed in Sec.~\ref{sec:gaussian} with the proof in Appendix~\ref{appendix:gaussian}.
We now state a more precise result for discrete-memoryless channels, which will require a few new definitions. Let $\mathcal{P}_{\alpha_1,\alpha_2}^{\text{in}}$ be the set of distributions $P_{UXY}$ satisfying the supremum in the characterization of $C_{\alpha_1,\alpha_2}$ in \eqref{eq:Ca1a2}. For any $\alpha\in[0,1]$, let
\begin{equation}\label{eq:Vplus_def}
V_{1,\alpha}^+=\sup_{P_{UXY}\in\mathcal{P}_{1,\alpha}^{\text{in}}}
\left[\alpha \sqrt{V(W\|P_{Z|U}|P_{UXY})}+(1-\alpha)\sqrt{V(W\|P_{Z|YU}|P_{UXY})}\right]^2
\end{equation}
where $P_{Z|U}$ and $P_{Z|YU}$ are the induced distributions from $P_{UXY}$. Also let
\begin{equation}\label{eq:Vminus_def}
V_{1,\alpha}^{-}=\inf_{P_{UXY},P_{X'Y'|U}}
\left[\alpha \sqrt{V(W\|P_{Z|U}|P_{UX'Y'})}+(1-\alpha)\sqrt{V(W\|P_{Z|YU}|P_{UX'Y'})}\right]^2
\end{equation}
where the infimum is over all $P_{UXY}\in\mathcal{P}_{1,\alpha}^{\text{in}}$ and $P_{X'Y'|U}$ satisfying
\begin{equation}\label{eq:Vminus_feasibility}
\alpha D(W\|P_{Z|U}|P_{UX'Y'})+(1-\alpha) D(W\|P_{Z|YU}|P_{UX'Y'})=C_{1,\alpha}.
\end{equation}
Define $V_{\alpha,1}^-$ and $V_{\alpha,1}^+$ analogously. For any $\alpha_1,\alpha_2$ where $\max\{\alpha_1,\alpha_2\}=1$ and any $\lambda\in(0,1)$, let
\begin{equation}
V_{\alpha_1,\alpha_2}^{\lambda}=\begin{cases} V_{\alpha_1,\alpha_2}^{-}, & \lambda<1/2 \\ V_{\alpha_1,\alpha_2}^{+}, & \lambda\ge 1/2.\end{cases}
\end{equation}
\begin{theorem}\label{thm:DMC_asymptotic}
If $\mathcal{X},\mathcal{Y},\mathcal{Z}$ are finite sets, then both regularity conditions in Corollary~\ref{cor:regularity} are satisfied. In addition, for any $\alpha_1,\alpha_2$ where $\max\{\alpha_1,\alpha_2\}=1$, and any $\epsilon\in(0,1)$,
\begin{equation}\label{eq:DMC_asymptotic}
R_{\alpha_1,\alpha_2}^\star(n,\epsilon)\le \left(C_{\alpha_1,\alpha_2}+\min_{\lambda\in(\epsilon,1)}\left[ 2\sqrt{C'_{\alpha_1,\alpha_2}(0)\log\frac{\lambda}{\lambda-\epsilon}} -\sqrt{V_{\alpha_1,\alpha_2}^{\lambda}}\,\mathsf{Q}^{-1}(\lambda)\right]\frac{1}{\sqrt{n}}\right)^{**}+o\left(\frac{1}{\sqrt{n}}\right)
\end{equation}
where $\mathsf{Q}$ is the Gaussian complementary CDF and $\mathsf{Q}^{-1}$ is its inverse function, and $(\cdot)^{**}$ represents the lower convex envelope as a function of $(\alpha_1,\alpha_2)$.
\end{theorem}
Note that $V_{\alpha_1,\alpha_2}^+$ and $V_{\alpha_1,\alpha_2}^-$ are not quite complementary. In particular, $V_{\alpha_1,\alpha_2}^-$ is in general smaller than the quantity obtained by simply replacing the supremum with an infimum in \eqref{eq:Vplus_def}. However, for at least some channels of interest, such as the binary additive erasure channel (see Sec.~\ref{sec:bamac}), all of these divergence variance quantities are equal.
Thm.~\ref{thm:DMC_asymptotic} settles the question, at least for some discrete channels, of whether the maximum achievable rates approach the capacity region from below or above for sufficiently small probability of error. We state this precisely in the following corollary.
\begin{corollary}
Let $\mathcal{X},\mathcal{Y},\mathcal{Z}$ be finite sets. If $V_{\alpha_1,\alpha_2}^{-}>0$, then for sufficiently small $\epsilon$ and sufficiently large $n$,
\begin{equation}
R^\star_{\alpha_1,\alpha_2}(n,\epsilon)<C_{\alpha_1,\alpha_2}.
\end{equation}
\end{corollary}
This corollary is proved by choosing, for example, $\lambda=2\epsilon$ in \eqref{eq:DMC_asymptotic} and taking $\epsilon$ to be sufficiently small.
\subsection{Proof of Thm.~\ref{thm:most_general}}\label{sec:general_proof}
Consider any $(n,M_1,M_2,\epsilon)$ code for the $n$-length product channel. We consider $(\alpha_1,\alpha_2)=(1,\alpha)$ where $\alpha\in[0,1]$. The alternative case is proved identically. We apply Thm.~\ref{thm:FBL} wherein the one-shot input alphabets $\mathcal{X},\mathcal{Y}$ are replaced by the cost-constrained input sets
\begin{equation}\label{eq:cost_constrained_inputs}
\left\{x^n\in\mathcal{X}^n:\sum_{t=1}^n b_1(x_t)\le nB_1\right\},\quad
\left\{y^n\in\mathcal{Y}^n:\sum_{t=1}^n b_2(y_t)\le nB_2\right\}.
\end{equation}
Thus, for any $\lambda\in(\epsilon,1)$ and $\delta>0$, there exists a distribution $P_{X^nY^n}$ such that $X^n$ and $Y^n$ fall into the sets in \eqref{eq:cost_constrained_inputs} almost surely, $\Delta(X^n;Y^n)\le\delta$, and
\begin{align}
\log (M_1M_2)&\le -\log \mathbb{E}\big[\beta_{1-\lambda}(W_{X^nY^n},
{\textstyle\prod_{t=1}^n P_{Z_t}})\big]+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon},\label{eq:nlength_FBL_bd}\\
\log M_1&\le -\log \mathbb{E}\big[\beta_{1-\lambda}(W_{X^nY^n},
{\textstyle\prod_{t=1}^n P_{Z_t|Y_t=Y_t}})\big]+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon},\label{eq:nlength_FBL_bd1}\\
\log M_2&\le -\log \mathbb{E}\big[\beta_{1-\lambda}(W_{X^nY^n},
{\textstyle\prod_{t=1}^n P_{Z_t|X_t=X_t}})\big]+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon}.\label{eq:nlength_FBL_bd2}
\end{align}
Here, we have relaxed Thm.~\ref{thm:FBL} by noting that if $(\beta_{1},\ldots,\beta_k)\in{\boldsymbol\beta}_{1-\lambda}(P,Q_1,\ldots,Q_k)$, then $\beta_{i}\ge \beta_{1-\lambda}(P,Q_{i})$ for each $i\in[k]$. We have also chosen the induced product distributions for $Q_Z,Q_{Z|Y},Q_{Z|X}$. Since by Thm.~\ref{thm:props}, wringing dependence satisfies the data processing inequality, $\Delta(X_t;Y_t)\le\delta$ for any $t\in[n]$. We will make use of the $\epsilon$-information spectrum divergence (cf. \cite{Han2003,Tan2014a}), which is given by
\begin{equation}
D_s^{\epsilon}(P\|Q)=\sup\left\{R\in\mathbb{R}: P\left(\log \frac{dP}{dQ}(Z)\le R\right)\le\epsilon\right\}.
\end{equation}
The hypothesis testing quantity can be related to the information spectrum divergence as
\begin{equation}
-\log \beta_{1-\lambda}(P,Q)\le \inf_{0<\eta<1-\lambda} \left[D_s^{\lambda+\eta}(P\|Q)-\log\eta\right].
\end{equation}
Using Chebyshev's inequality, the information spectrum divergence may in turn be bounded by (see e.g., \cite[Prop.~2.2]{Tan2014a})
\begin{equation}
D_s^{\epsilon}(P\|Q)\le D(P\|Q)+\sqrt{\frac{V(P\|Q)}{1-\epsilon}}
\end{equation}
and so
\begin{equation}\label{eq:beta_chebyshev}
-\log \beta_{1-\lambda}(P,Q)\le D(P\|Q)+\inf_{0<\eta<1-\lambda} \left(\sqrt{\frac{V(P\|Q)}{1-\lambda-\eta}}-\log\eta\right).
\end{equation}
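As a numerical sanity check (not part of the proof), \eqref{eq:beta_chebyshev} can be verified on a small alphabet by computing $\beta_{1-\lambda}(P,Q)$ exactly via the Neyman--Pearson lemma; the distributions $P,Q$ and the value of $\lambda$ below are arbitrary.

```python
import math

def beta(alpha, P, Q):
    # Neyman-Pearson: minimal Q-mass of a randomized test T with E_P[T] >= alpha,
    # obtained by taking outcomes in decreasing order of the ratio P(i)/Q(i),
    # randomizing on the boundary outcome.
    order = sorted(range(len(P)), key=lambda i: -P[i] / Q[i])
    need, b = alpha, 0.0
    for i in order:
        frac = min(1.0, need / P[i])
        b += frac * Q[i]
        need -= frac * P[i]
        if need <= 1e-12:
            break
    return b

P = [0.5, 0.3, 0.2]
Q = [0.2, 0.3, 0.5]
lam = 0.3
llr = [math.log(p / q) for p, q in zip(P, Q)]
D = sum(p * l for p, l in zip(P, llr))             # relative entropy D(P||Q)
V = sum(p * (l - D) ** 2 for p, l in zip(P, llr))  # divergence variance V(P||Q)
lhs = -math.log(beta(1 - lam, P, Q))
# right-hand side of the Chebyshev relaxation, with eta ranging over (0, 1-lam)
rhs = D + min(math.sqrt(V / (1 - lam - eta)) - math.log(eta)
              for eta in (k / 1000 for k in range(1, 700)))
```

For these values $\beta_{0.7}(P,Q)=0.4$ exactly (take the first outcome fully and two-thirds of the second), and `lhs <= rhs` as the bound requires.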
Applying \eqref{eq:beta_chebyshev} to the bound in \eqref{eq:nlength_FBL_bd} gives, for any $0<\eta<1-\lambda$,
\begin{align}
&\log(M_1M_2)-\left(\frac{1}{\delta}+1\right)\log\frac{\lambda}{\lambda-\epsilon}
\\&\le -\log \int dP_{X^nY^n}(x^n,y^n) \exp\Bigg\{-\sum_{t=1}^n D(W_{x_ty_t}\|P_{Z_t})
-\sqrt{\frac{1}{1-\lambda-\eta} \sum_{t=1}^n V(W_{x_ty_t}\|P_{Z_t})}+\log \eta\Bigg\}
\\&\le \sum_{t=1}^n D(W\|P_{Z_t}|P_{X_tY_t})+\sqrt{\frac{1}{1-\lambda-\eta}\sum_{t=1}^n V(W\|P_{Z_t}|P_{X_tY_t})}-\log\eta\label{eq:beta_bound2}
\\&= n D(W\|P_{Z|U}|P_{XYU})+\sqrt{\frac{n}{1-\lambda-\eta}V(W\|P_{Z|U}|P_{XYU})}-\log\eta\label{eq:beta_bound3}
\\&\le nI(XY;Z|U)+\sqrt{\frac{nV_{\max}}{1-\lambda-\eta}}-\log\eta\label{eq:beta_bound4}
\end{align}
where \eqref{eq:beta_bound2} holds by convexity of the exponential and concavity of the square root; in \eqref{eq:beta_bound3} we have let $U\sim\text{Unif}[n]$, $X=X_U,Y=Y_U,Z=Z_U$; and \eqref{eq:beta_bound4} follows from the definition of $V_{\max}$ in \eqref{eq:Vmax_def}. Applying the same derivation to \eqref{eq:nlength_FBL_bd1} gives
\begin{equation}\label{eq:beta_bound5}
\log M_1-\left(\frac{1}{\delta}+1\right)\log\frac{\lambda}{\lambda-\epsilon}
\le nI(X;Z|YU)+\sqrt{\frac{nV_{\max}}{1-\lambda-\eta}}-\log\eta.
\end{equation}
Recall that for each $t\in[n]$, $\Delta(X_t;Y_t)\le\delta$, which means that for each $u$, $\Delta(X;Y|U=u)\le\delta$. Moreover, by the fact that $X^n,Y^n$ fall into the cost-constrained sets in \eqref{eq:cost_constrained_inputs},
\begin{align}
\mathbb{E}[ b_1(X)]&=\frac{1}{n}\sum_{t=1}^n \mathbb{E} [b_1(X_t)]\le B_1,\\
\mathbb{E}[ b_2(Y)]&=\frac{1}{n}\sum_{t=1}^n \mathbb{E} [b_2(Y_t)]\le B_2.
\end{align}
Thus, from the definition of $C_{1,\alpha}(\delta)$ in \eqref{eq:Cdelta_def},
\begin{equation}\label{eq:beta_bound6}
\alpha I(XY;Z|U)+(1-\alpha)I(X;Z|Y,U)\le C_{1,\alpha}(\delta)= C_{1,\alpha}+C_{1,\alpha}'(0)\,\delta+o(\delta)
\end{equation}
where the equality follows from the definition of the derivative. We may combine \eqref{eq:beta_bound4} and \eqref{eq:beta_bound5}, then plug in \eqref{eq:beta_bound6} to find
\begin{equation}
\log M_1+\alpha \log M_2\le nC_{1,\alpha}+nC_{1,\alpha}'(0)\delta+o(n\delta)+\sqrt{\frac{nV_{\max}}{1-\lambda-\eta}}-\log\eta+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon}.\label{eq:pre_lambda_choice}
\end{equation}
Recall that $\delta$ is a free parameter. The optimal choice (ignoring the $o(n\delta)$ term) is $\delta=\sqrt{\frac{\log\frac{\lambda}{\lambda-\epsilon}}{nC'_{1,\alpha}(0)}}$ which gives
\begin{equation}\label{eq:post_delta_choice}
\log M_1+\alpha \log M_2
\le nC_{1,\alpha}+2\sqrt{nC'_{1,\alpha}(0)\log \frac{\lambda}{\lambda-\epsilon}}+\sqrt{\frac{nV_{\max}}{1-\lambda-\eta}}-\log\eta+\log \frac{\lambda}{\lambda-\epsilon}+o(\sqrt{n}).
\end{equation}
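The choice of $\delta$ can be double-checked numerically: ignoring the $o(n\delta)$ and $\delta$-free terms, \eqref{eq:pre_lambda_choice} contains $nC'_{1,\alpha}(0)\delta+\frac{1}{\delta}\log\frac{\lambda}{\lambda-\epsilon}$, which is minimized at the stated $\delta$, with minimum value $2\sqrt{nC'_{1,\alpha}(0)\log\frac{\lambda}{\lambda-\epsilon}}$ (this is the AM--GM balancing of the two terms). The parameter values below are arbitrary.

```python
import math

# f(delta) = n*Cp*delta + L/delta, with L = log(lam/(lam-eps)), is minimized
# at delta* = sqrt(L/(n*Cp)), where the two terms balance: f(delta*) = 2*sqrt(n*Cp*L).
n, Cp, lam, eps = 10_000, 0.4, 0.3, 0.1   # illustrative values only
L = math.log(lam / (lam - eps))

def f(d):
    return n * Cp * d + L / d

d_star = math.sqrt(L / (n * Cp))
val_star = f(d_star)

# d_star beats every point of a multiplicative grid around it
grid_min = min(f(d_star * (0.5 + 0.02 * k)) for k in range(1, 101))
```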
We now distinguish two cases. If $V_{\max}>0$, then the optimal value of $\lambda$ in the minimization in \eqref{eq:thm:most_general} is bounded away from $1$. Let $\lambda$ take on this optimal value, and we choose $\eta=1/\sqrt{n}$ to give
\begin{align}
\log M_1+\alpha \log M_2
&\le
nC_{1,\alpha}+2\sqrt{nC'_{1,\alpha}(0)\log\frac{\lambda}{\lambda-\epsilon}}+\sqrt{\frac{nV_{\max}}{1-\lambda}}+o(\sqrt{n}).
\end{align}
If alternatively $V_{\max}=0$, then the optimal value of $\lambda$ in the minimization in \eqref{eq:thm:most_general} is $\lambda=1$, but plugging $\lambda=1$ into \eqref{eq:post_delta_choice} does not quite work, because of the requirement that $\eta<1-\lambda$. Instead we may choose $\lambda=1-2/n$ and $\eta = 1/n$ to give
\begin{align}
\log M_1+\alpha \log M_2
&\le nC_{1,\alpha}+2\sqrt{nC'_{1,\alpha}(0)\log(1-\epsilon)^{-1}}+o(\sqrt{n}).
\end{align}
\subsection{Proof of Thm.~\ref{thm:DMC_asymptotic}} \label{sec:dmc_proof}
We will need the following lemma, which is proved in Appendix~\ref{appendix:dmc}.
\begin{lemma}\label{lemma:DMCs}
Consider a MAC where $\mathcal{X},\mathcal{Y},\mathcal{Z}$ are finite sets. Let $W_{\min}$ be the smallest non-zero value of $W_{xy}(z)$. Consider any random variables $X,Y$ with distribution $P_{XY}$ where $\Delta(X;Y)\le\delta$. Let $(\tilde{X},\tilde{Y},\tilde{Z})\sim P_X P_Y W$. Then
\begin{align}
I(X,Y;Z)&\le I(\tilde{X},\tilde{Y};\tilde{Z})
+\bigg[8\min\{|\mathcal{X}|,|\mathcal{Y}|\}
+|\mathcal{Z}|\left((1-\log W_{\min})e^{-1}+4e^{-2}\right)
+2\min\{|\mathcal{X}|,|\mathcal{Y}|\}\log|\mathcal{Z}|\bigg]\delta+O(\delta^2),\label{eq:MI_DMC_bound}
\\I(X;Z|Y)&\le I(\tilde{X};\tilde{Z}|\tilde{Y})
+\bigg[8\min\{|\mathcal{X}|,|\mathcal{Y}|\}
+|\mathcal{Y}|\cdot|\mathcal{Z}|\left((1-\log W_{\min})e^{-1}+4e^{-2}\right)
+2\min\{|\mathcal{X}|,|\mathcal{Y}|\}\log|\mathcal{Z}|\bigg]\delta+O(\delta^2),\label{eq:MIX_DMC_bound}
\\I(Y;Z|X)&\le I(\tilde{Y};\tilde{Z}|\tilde{X})
+\bigg[8\min\{|\mathcal{X}|,|\mathcal{Y}|\}
+|\mathcal{X}|\cdot|\mathcal{Z}|\left((1-\log W_{\min})e^{-1}+4e^{-2}\right)
+2\min\{|\mathcal{X}|,|\mathcal{Y}|\}\log|\mathcal{Z}|\bigg]\delta+O(\delta^2).\label{eq:MIY_DMC_bound}
\end{align}
\end{lemma}
Lemma~\ref{lemma:DMCs} immediately gives that $C'_{\alpha_1,\alpha_2}(0)$ is uniformly bounded for any $\alpha_1,\alpha_2$ with $\max\{\alpha_1,\alpha_2\}=1$. To prove that $V_{\max}<\infty$, we
note that for any distribution $P_{XY}$ and its induced distribution $P_Z$
\begin{align}
V(W\|P_Z|P_{XY})&\le \mathbb{E} \left[\log^2 \frac{W_{XY}(Z)}{P_Z(Z)}\right]
\\&\le \left(\sqrt{\mathbb{E} \left[\log^2 W_{XY}(Z)\right]}+\sqrt{\mathbb{E} [\log^2 P_Z(Z)]}\right)^2
\\&\le \left(2\sqrt{4e^{-2} |\mathcal{Z}|}\right)^2
\\&=16 e^{-2}|\mathcal{Z}|
\end{align}
where we have used the fact that $p\log^2 p\le 4e^{-2}$ for $p\in[0,1]$. By the same argument, $V(W\|P_{Z|Y}|P_{XY})$ and $V(W\|P_{Z|X}|P_{XY})$ are also bounded by $16 e^{-2}|\mathcal{Z}|$.
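The elementary fact used above — that $p\log^2 p\le 4e^{-2}$ on $(0,1]$, with equality at $p=e^{-2}$ — is easy to confirm numerically; this is only a sanity check of the constant, not part of the argument.

```python
import math

# p * (ln p)^2 on (0, 1] is maximized at p = e^{-2} (set the derivative
# ln^2(p) + 2 ln(p) = 0), where it equals 4 * e^{-2}.
step = 10 ** -5
peak = max(p * math.log(p) ** 2
           for p in (k * step for k in range(1, 10 ** 5 + 1)))
bound = 4 * math.exp(-2)
```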
Recall that $R^\star_{\alpha_1,\alpha_2}(n,\epsilon)$, as defined in \eqref{eq:Rstar_def}, is the supremum of linear functions in $(\alpha_1,\alpha_2)$, so it is convex in $(\alpha_1,\alpha_2)$. Thus, to prove the theorem it is enough to show \eqref{eq:DMC_asymptotic} but without the lower convex envelope. We assume that $(\alpha_1,\alpha_2)=(1,\alpha)$ for $\alpha\in[0,1]$. We proceed with the first step as in the proof of Thm.~\ref{thm:most_general}; namely from Thm.~\ref{thm:FBL} we derive \eqref{eq:nlength_FBL_bd}--\eqref{eq:nlength_FBL_bd2}. Combining \eqref{eq:nlength_FBL_bd} and \eqref{eq:nlength_FBL_bd1}, and using the fact that $p^\alpha q^{1-\alpha}$ is concave in $(p,q)$, gives
\begin{equation}\label{eq:M1M2_combined_beta}
\log M_1+\alpha \log M_2\le
-\log \mathbb{E} \left[\left(\beta_{1-\lambda}(W_{X^nY^n},
{\textstyle\prod_{t=1}^n P_{Z_t}})\right)^\alpha \left(\beta_{1-\lambda}(W_{X^nY^n},
{\textstyle\prod_{t=1}^n P_{Z_t|Y_t=Y_t}})\right)^{1-\alpha}\right]+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon}.
\end{equation}
\new{Since we will apply a Berry-Esseen bound to the hypothesis testing quantities, rather than a Chebyshev bound as in Thm.~\ref{thm:most_general}, we need to avoid some potentially badly-behaving $(x^n,y^n)$ sequences. In particular, define the set
\begin{equation}\label{eq:Omega_0_def}
\Omega_0=\left\{(x^n,y^n):P_{X_tY_t}(x_t,y_t)\le \frac{1}{n^2}\text{ for some }t\in[n]\right\}.
\end{equation}
Let $p_0=P_{X^nY^n}(\Omega_0)$. By the union bound,
\begin{align}
p_0&\le \sum_{t=1}^n \mathbb{P}\left(P_{X_tY_t}(X_t,Y_t)\le \frac{1}{n^2}\right)
\\&= \sum_{t=1}^n \sum_{x,y} P_{X_tY_t}(x,y)\,1\left(P_{X_tY_t}(x,y)\le \frac{1}{n^2}\right)
\\&\le \frac{|\mathcal{X}|\,|\mathcal{Y}|}{n}.\label{eq:p0_bound}
\end{align}
From the fact that the $\beta$ quantities are non-negative, we may further bound \eqref{eq:M1M2_combined_beta} by
\begin{align}
\log M_1+\alpha\log M_2
&\le -\log \mathbb{E} \left[1((X^n,Y^n)\in\Omega_0^c)\big(\beta_{1-\lambda}(W_{X^nY^n},
{\textstyle\prod_{t=1}^n P_{Z_t}})\big)^\alpha \big(\beta_{1-\lambda}(W_{X^nY^n},
{\textstyle\prod_{t=1}^n P_{Z_t|Y_t=Y_t}})\big)^{1-\alpha}\right]\nonumber
\\&\qquad+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon}.\label{eq:M1M2_combined_beta2}
\end{align}}
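The per-letter step behind \eqref{eq:p0_bound} — that the total mass of atoms of size at most $1/n^2$ is at most $|\mathcal{X}|\,|\mathcal{Y}|/n^2$, since each of the at most $|\mathcal{X}|\,|\mathcal{Y}|$ atoms contributes at most $1/n^2$ — can be sanity-checked on a random joint pmf. The alphabet sizes and seed below are arbitrary.

```python
import random

random.seed(0)
n, nx, ny = 50, 4, 6

# a random joint pmf P on a 4 x 6 alphabet
raw = [[random.random() for _ in range(ny)] for _ in range(nx)]
total = sum(map(sum, raw))
P = [[v / total for v in row] for row in raw]

# mass of atoms no larger than 1/n^2: each such atom contributes at most
# 1/n^2, and there are at most nx*ny atoms in total
small_mass = sum(P[x][y] for x in range(nx) for y in range(ny)
                 if P[x][y] <= 1 / n ** 2)
```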
We now use the Berry-Esseen theorem via \cite[Prop.~2.1]{Tan2014a} to bound each of the hypothesis testing quantities in \eqref{eq:M1M2_combined_beta2}. Specifically, for any $(x^n,y^n)$,
\new{\begin{equation}\label{eq:beta_berry_esseen}
-\log \beta_{1-\lambda}(W_{x^ny^n},{\textstyle\prod_{t=1}^n P_{Z_t}})
\le \inf_{0<\eta\le 1-\lambda}\left[ nD_n-\sqrt{nV_n}\,\mathsf{Q}^{-1}\left(\lambda+\eta+\frac{6T_n}{\sqrt{nV_n^3}}\right)-\log\eta\right]
\end{equation}
where
\begin{align}
D_n&=\frac{1}{n}\sum_{t=1}^n D(W_{x_ty_t}\|P_{Z_t}),\\
V_n&=\frac{1}{n}\sum_{t=1}^n V(W_{x_ty_t}\|P_{Z_t}),\\
T_n&=\frac{1}{n}\sum_{t=1}^n T(W_{x_ty_t}\|P_{Z_t}).
\end{align}
For any $(x^n,y^n)\in\Omega_0^c$, any $t\in[n]$, and any $z\in\mathcal{Z}$,
\begin{align}
\log \frac{W_{x_ty_t}(z)}{P_{Z_t}(z)}
&=\log \frac{W_{x_ty_t}(z)}{\sum_{x,y} P_{X_tY_t}(x,y) W_{xy}(z)}
\\&\le \log \frac{1}{P_{X_tY_t}(x_t,y_t)}
\\&\le 2\log n
\end{align}
where the last inequality follows from the definition of $\Omega_0$ in \eqref{eq:Omega_0_def}. (In fact, this is the purpose of the set $\Omega_0$ in the first place.)
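The inequality above, $\log\frac{W_{x_ty_t}(z)}{P_{Z_t}(z)}\le -\log P_{X_tY_t}(x_t,y_t)$, rests only on $P_{Z_t}(z)\ge P_{X_tY_t}(x_t,y_t)W_{x_ty_t}(z)$; a quick randomized check on an arbitrary channel and input pmf (the alphabet sizes and seed are illustrative):

```python
import math
import random

random.seed(1)
nx, ny, nz = 3, 3, 4

def simplex(k):
    # a random point of the probability simplex on k outcomes
    v = [random.random() for _ in range(k)]
    s = sum(v)
    return [u / s for u in v]

# random channel W_{xy}(z) and random joint input pmf P_{XY}
W = {(x, y): simplex(nz) for x in range(nx) for y in range(ny)}
pmf = simplex(nx * ny)
P = {xy: pmf[i] for i, xy in enumerate(W)}

# induced output distribution P_Z(z) = sum_{x,y} P(x,y) W_{xy}(z)
PZ = [sum(P[xy] * W[xy][z] for xy in W) for z in range(nz)]

# log-ratio never exceeds -log P(x,y), since P_Z(z) >= P(x,y) * W_{xy}(z)
ok = all(math.log(W[xy][z] / PZ[z]) <= -math.log(P[xy]) + 1e-12
         for xy in W for z in range(nz))
```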
We may prove a simple lower bound: for any $z$ with $W_{x_ty_t}(z)>0$,
\begin{equation}
\log \frac{W_{x_ty_t}(z)}{P_{Z_t}(z)}\ge \log W_{x_ty_t}(z)
\ge \log W_{\min}
\end{equation}
where $W_{\min}=\min_{x,y,z:W_{xy}(z)>0} W_{xy}(z)$. For any fixed channel with finite alphabets, $W_{\min}>0$. Thus, for sufficiently large $n$,
\begin{equation}
\left|\log \frac{W_{x_ty_t}(z)}{P_{Z_t}(z)}\right|\le 2\log n.
\end{equation}
This implies that $0\le D(W_{x_ty_t}\|P_{Z_t})\le 2\log n$, so we have
\begin{equation}
\left|\log \frac{W_{x_ty_t}(z)}{P_{Z_t}(z)}-D(W_{x_ty_t}\|P_{Z_t})\right|
\le 2\log n-\log W_{\min}\le 3\log n
\end{equation}
where the last inequality holds for sufficiently large $n$. Thus, for any $(x^n,y^n)\in\Omega_0^c$,
\begin{equation}
T_n\le \max_{t\in [n]} T(W_{x_ty_t}\|P_{Z_t})
\le (3\log n)^3.\label{eq:Tn_bound}
\end{equation}
By the same argument, $V_n\le (3\log n)^3$.
Applying the upper bound on $T_n$ in \eqref{eq:Tn_bound} to the bound on the hypothesis testing quantity from \eqref{eq:beta_berry_esseen} and selecting $\eta=\min\{1/\sqrt{n},1-\lambda\}$, for any $(x^n,y^n)\in\Omega_0^c$ we have
\begin{equation}
-\log \beta_{1-\lambda}(W_{x^ny^n},{\textstyle\prod_{t=1}^n P_{Z_t}})
\le nD_n-\sqrt{nV_n}\,\mathsf{Q}^{-1}\left(\lambda+\frac{1}{\sqrt{n}}+\frac{6(3\log n)^3}{\sqrt{nV_n^3}}\right)+\frac{1}{2}\log n
\end{equation}
where we adopt the convention that $\mathsf{Q}^{-1}(p)=-\infty$ if $p\ge 1$. We now consider two cases. Consider first the case that $V_n\ge n^{-1/4}$. This implies $\sqrt{nV_n^3}\ge n^{1/8}$, so in particular $\sqrt{nV_n^3}\to\infty$. Thus, applying a Taylor expansion to the $\mathsf{Q}^{-1}$ function, there exists a constant $c_0$ depending only on $\lambda$ such that, for sufficiently large $n$,
\begin{align}
\sqrt{nV_n}\,\mathsf{Q}^{-1}\left(\lambda+\frac{1}{\sqrt{n}}+\frac{6(3\log n)^3}{\sqrt{nV_n^3}}\right)
&\ge\sqrt{nV_n}\left[\mathsf{Q}^{-1}(\lambda)-c_0\left(\frac{1}{\sqrt{n}}+\frac{6(3\log n)^3}{\sqrt{nV_n^3}}\right)\right]
\\&=\sqrt{nV_n}\,\mathsf{Q}^{-1}(\lambda)-c_0\left(\sqrt{V_n}+\frac{6(3\log n)^3}{V_n}\right)
\\&\ge \sqrt{nV_n}\,\mathsf{Q}^{-1}(\lambda)-c_0\left((3\log n)^{3/2}+162\,n^{1/4}\log^3 n\right).
\end{align}
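The Taylor-expansion step for $\mathsf{Q}^{-1}$ can be illustrated numerically. Below, $\mathsf{Q}^{-1}$ is computed by bisection, and the local-Lipschitz inequality $\mathsf{Q}^{-1}(\lambda+x)\ge\mathsf{Q}^{-1}(\lambda)-c_0x$ is checked for small $x>0$, with the deliberately slack choice $c_0=2/\phi(\mathsf{Q}^{-1}(\lambda))$, where $\phi$ is the standard normal density ($|({\mathsf{Q}^{-1}})'|=1/\phi$, so the factor $2$ absorbs the higher-order terms); the value $\lambda=0.3$ is arbitrary.

```python
import math

def Q(t):
    # Gaussian complementary CDF
    return 0.5 * math.erfc(t / math.sqrt(2))

def Qinv(p):
    # Q is strictly decreasing, so invert it by bisection on [-10, 10]
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam = 0.3
t0 = Qinv(lam)
phi = math.exp(-t0 ** 2 / 2) / math.sqrt(2 * math.pi)  # standard normal pdf
c0 = 2 / phi   # twice |d Qinv / dp| at lam, so the linear bound has slack

ok = all(Qinv(lam + x) >= t0 - c0 * x for x in (10 ** -k for k in range(2, 7)))
```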
Now consider the case that $V_n\le n^{-1/4}$. Then we apply the simpler Chebyshev bound of \cite[Prop.~2.2]{Tan2014a} on the hypothesis testing quantity to write
\begin{align}
-\log \beta_{1-\lambda}(W_{x^ny^n},{\textstyle\prod_{t=1}^n P_{Z_t}})
&\le \inf_{0<\eta\le 1-\lambda}\left[ nD_n+\frac{\sqrt{nV_n}}{1-\lambda-\eta}-\log\eta\right]
\\&\le nD_n+\frac{2\sqrt{nV_n}}{1-\lambda}-\log \frac{1-\lambda}{2}\label{eq:beta_cheb2}
\\&= nD_n-\sqrt{nV_n}\,\mathsf{Q}^{-1}(\lambda)+\sqrt{nV_n}\left(\mathsf{Q}^{-1}(\lambda)+\frac{2}{1-\lambda}\right)-\log \frac{1-\lambda}{2}\label{eq:beta_cheb3}
\\&\le nD_n-\sqrt{nV_n}\,\mathsf{Q}^{-1}(\lambda)+n^{3/8}\left(\left|\mathsf{Q}^{-1}(\lambda)\right|^++\frac{2}{1-\lambda}\right)-\log \frac{1-\lambda}{2}\label{eq:beta_cheb4}
\end{align}
where in \eqref{eq:beta_cheb2} we have selected $\eta=\frac{1-\lambda}{2}$.
Thus, in all cases, if $(x^n,y^n)\in\Omega_0^c$, then for sufficiently large $n$,
\begin{equation}
-\log \beta_{1-\lambda}(W_{x^ny^n},{\textstyle\prod_{t=1}^n P_{Z_t}})
\le nD_n-\sqrt{nV_n}\,\mathsf{Q}^{-1}(\lambda)+a_n
\end{equation}
where
\begin{equation}
a_n=\max\left\{c_0\left((3\log n)^{3/2}+162\,n^{1/4}\log^3 n\right),\,n^{3/8}\left(\left|\mathsf{Q}^{-1}(\lambda)\right|^++\frac{2}{1-\lambda}\right)-\log \frac{1-\lambda}{2}\right\}.
\end{equation}
Note that the constants in the definition of $a_n$ depend only on $\lambda$, and that for any $\lambda>0$, $a_n=o(\sqrt{n})$.
By a similar argument, if $(x^n,y^n)\in\Omega_0^c$, then for sufficiently large $n$
\begin{equation}
-\log \beta_{1-\lambda}(W_{x^ny^n},
{\textstyle\prod_{t=1}^n P_{Z_t|Y_t=y_t}})
\le \sum_{t=1}^n D(W_{x_ty_t}\|P_{Z_t|Y_t=y_t})
-\sqrt{\sum_{t=1}^n V(W_{x_ty_t}\|P_{Z_t|Y_t=y_t})}\,\mathsf{Q}^{-1}(\lambda)+a_n.
\end{equation}
Applying both bounds to \eqref{eq:M1M2_combined_beta2} gives}
\begin{multline}\label{eq:dmc_stat_bound}
\log M_1+\alpha \log M_2\le
-\log \mathbb{E}\left[\new{1((X^n,Y^n)\in\Omega_0^c)}\exp\left\{-nD(X^n,Y^n)+\sqrt{nV(X^n,Y^n)}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\}\right]
\\+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon}
\end{multline}
where we have defined the statistics
\begin{align}
D(x^n,y^n)&=\frac{1}{n}\sum_{t=1}^n \big[\alpha D(W_{x_ty_t}\|P_{Z_t})+(1-\alpha) D(W_{x_ty_t}\|P_{Z_t|Y_t=y_t})\big],\\
V(x^n,y^n)&=\left(\alpha \sqrt{\frac{1}{n}\sum_{t=1}^n V(W_{x_ty_t}\|P_{Z_t})}+(1-\alpha)\sqrt{\frac{1}{n}\sum_{t=1}^n V(W_{x_ty_t}\|P_{Z_t|Y_t=y_t})}\right)^2.
\end{align}
Consider any $\lambda\ge 1/2$. From \eqref{eq:dmc_stat_bound}, by the convexity of the exponential, we have
\new{
\begin{align}
\log M_1+\alpha \log M_2
&\le \frac{1}{1-p_0}\mathbb{E}\left[1((X^n,Y^n)\in\Omega_0^c)\left(nD(X^n,Y^n)-\sqrt{nV(X^n,Y^n)}\,\mathsf{Q}^{-1}(\lambda)\right)\right]+a_n-\log(1-p_0)\nonumber
\\&\qquad+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon}
\\&\le \frac{1}{1-p_0} \mathbb{E}\left[nD(X^n,Y^n)-\sqrt{nV(X^n,Y^n)}\,\mathsf{Q}^{-1}(\lambda)\right]+a_n-\log(1-p_0)+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon}
\end{align}
where we have used the facts that $D(x^n,y^n)$ and $V(x^n,y^n)$ are non-negative, and since $\lambda\ge 1/2$, $\mathsf{Q}^{-1}(\lambda)\le 0$.}
Note that
\begin{align}
\mathbb{E}[ D(X^n,Y^n)]&=\frac{1}{n}\sum_{t=1}^n \left[\alpha D(W\|P_{Z_t}|P_{X_tY_t})+(1-\alpha) D(W\|P_{Z_t|Y_t}|P_{X_tY_t})\right]
\\&=\frac{1}{n}\sum_{t=1}^n \left[\alpha I(X_t,Y_t;Z_t)+(1-\alpha)I(X_t;Z_t|Y_t)\right]
\\&=\alpha I(X,Y;Z|U)+(1-\alpha) I(X;Z|Y,U)
\end{align}
where in the last equality we have defined $U\sim\text{Unif}[n]$ and $X=X_U,Y=Y_U,Z=Z_U$.
Moreover, by concavity of the square root,
\begin{align}
\mathbb{E}\left[ \sqrt{V(X^n,Y^n)}\right]
&\le \alpha \sqrt{\frac{1}{n}\sum_{t=1}^n V(W\|P_{Z_t}|P_{X_tY_t})}+(1-\alpha)\sqrt{\frac{1}{n}\sum_{t=1}^n V(W\|P_{Z_t|Y_t}|P_{X_tY_t})}
\\&=\alpha \sqrt{V(W\|P_{Z|U}|P_{UXY})}+(1-\alpha)\sqrt{V(W\|P_{Z|YU}|P_{UXY})}.
\end{align}
Thus, since $\mathsf{Q}^{-1}(\lambda)\le 0$,
\new{
\begin{align}
&\log M_1+\alpha\log M_2\nonumber
\\&\le \frac{1}{1-p_0}\bigg[n(\alpha I(X,Y;Z|U)+(1-\alpha) I(X;Z|Y,U))\nonumber
\\&\qquad-\sqrt{n}\left(\alpha \sqrt{V(W\|P_{Z|U}|P_{UXY})}+(1-\alpha)\sqrt{V(W\|P_{Z|YU}|P_{UXY})}\right)\mathsf{Q}^{-1}(\lambda)\bigg]\nonumber
\\&\qquad+a_n-\log(1-p_0)+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon}\label{eq:multi_term_bound}
\\&\le n(\alpha I(X,Y;Z|U)+(1-\alpha) I(X;Z|Y,U))\nonumber
\\&\qquad-\sqrt{n}\left(\alpha \sqrt{V(W\|P_{Z|U}|P_{UXY})}+(1-\alpha)\sqrt{V(W\|P_{Z|YU}|P_{UXY})}\right)\mathsf{Q}^{-1}(\lambda)
+\left(\frac{1}{\delta}+1\right)\log \frac{\lambda}{\lambda-\epsilon}+o(\sqrt{n})\label{eq:dmc_lambda_big}
\end{align}
where we have used the facts that $a_n=o(\sqrt{n})$, $p_0= O(1/n)$, and that the quantity inside the square brackets in \eqref{eq:multi_term_bound} is at most $n\log|\mathcal{Z}|-\sqrt{nV_{\max}}\,\mathsf{Q}^{-1}(\lambda)$.}
From the cost-constraint assumptions on the code, we also have $\mathbb{E}[b_1(X)]\le B_1$ and $\mathbb{E}[b_2(Y)]\le B_2$. By Carath\'eodory's theorem, we may reduce the cardinality of $\mathcal{U}$ to $|\mathcal{U}|\le 6$ while preserving the following values:
\begin{equation}
\alpha I(X,Y;Z|U)+(1-\alpha) I(X;Z|Y,U),\,
V(W\|P_{Z|U}|P_{UXY}),\,
V(W\|P_{Z|YU}|P_{UXY}),\,
\mathbb{E}[b_1(X)],\, \mathbb{E}[ b_2(Y)].
\end{equation}
Choosing $\delta=O(n^{-1/2})$ allows us to derive the crude bound
\begin{equation}
\log M_1+\alpha \log M_2\le n(\alpha I(X,Y;Z|U)+(1-\alpha) I(X;Z|Y,U))+O(\sqrt{n}).
\end{equation}
Define $\tilde{X},\tilde{Y},\tilde{Z}$ where
\begin{equation}
P_{\tilde{X}\tilde{Y}\tilde{Z}|U=u}(x,y,z)=P_{X|U=u}(x)P_{Y|U=u}(y)W_{xy}(z).
\end{equation}
By Lemma~\ref{lemma:DMCs},
\begin{equation}
\log M_1+\alpha \log M_2
\le n(\alpha I(\tilde{X},\tilde{Y};\tilde{Z}|U)+(1-\alpha)I(\tilde{X};\tilde{Z}|\tilde{Y},U))+O(\sqrt{n}).
\end{equation}
Our goal is to prove that
\begin{equation}\label{eq:Vplus_goal}
\log M_1+\alpha \log M_2\le nC_{1,\alpha}
+2\sqrt{n C'_{1,\alpha}(0)\log\frac{\lambda}{\lambda-\epsilon}}-\sqrt{nV_{1,\alpha}^+}\mathsf{Q}^{-1}(\lambda)+o(\sqrt{n}).
\end{equation}
Since $\mathsf{Q}^{-1}(\lambda)\le 0$, we may assume that
\begin{equation}
\log M_1+\alpha \log M_2\ge nC_{1,\alpha}
\end{equation}
or else there is nothing to prove. Thus
\begin{equation}\label{eq:tilde_constraint1}
\alpha I(\tilde{X},\tilde{Y};\tilde{Z}|U)+(1-\alpha)I(\tilde{X};\tilde{Z}|\tilde{Y},U)\ge C_{1,\alpha}-O\left(\frac{1}{\sqrt{n}}\right).
\end{equation}
Noting that the mutual information is continuous over distributions with finite alphabets, by the definition of $C_{1,\alpha}$, \eqref{eq:tilde_constraint1} implies that there exists a distribution $P^\star_{UXY}\in\mathcal{P}_{1,\alpha}^{\text{in}}$ where $d_{TV}(P_{U\tilde{X}\tilde{Y}},P^\star_{UXY})\le o(1)$. Since $\Delta(X;Y|U=u)\le\delta$, from Thm.~\ref{thm:props} we have
\begin{equation}
|P_{XY|U=u}(x,y)-P_{\tilde{X}\tilde{Y}|U=u}(x,y)|\le 2\delta.
\end{equation}
As we have taken $\delta=O(1/\sqrt{n})$, then $d_{TV}(P_{UXY},P_{U\tilde{X}\tilde{Y}})\le o(1)$. Thus by the triangle inequality, $d_{TV}(P_{UXY},P^\star_{UXY})\le o(1)$. Since the dispersion variance is also a continuous function of $P_{UXY}$ (again for finite alphabets), we must have
\begin{align}
&\alpha \sqrt{V(W\|P_{Z|U}|P_{UXY})}+(1-\alpha)\sqrt{V(W\|P_{Z|YU}|P_{UXY})}
\\&\le \alpha \sqrt{V(W\|P^\star_{Z|U}|P^\star_{UXY})}+(1-\alpha)\sqrt{V(W\|P^\star_{Z|YU}|P^\star_{UXY})}+o(1)
\\&\le \sqrt{V_{1,\alpha}^+}+o(1)
\end{align}
where the second inequality holds since $P^\star_{UXY}\in\mathcal{P}_{1,\alpha}^{\text{in}}$ and by the definition of $V_{1,\alpha}^+$ in \eqref{eq:Vplus_def}. Now returning to the bound in \eqref{eq:dmc_lambda_big},
\begin{align}
\log M_1+\alpha \log M_2
&\le
n(\alpha I(X,Y;Z|U)+(1-\alpha) I(X;Z|Y,U))-\sqrt{nV_{1,\alpha}^+}\mathsf{Q}^{-1}(\lambda)+\left(\frac{1}{\delta}+1\right)\log\frac{\lambda}{\lambda-\epsilon}+o(\sqrt{n})
\\&\le n C_{1,\alpha}(\delta)-\sqrt{n V_{1,\alpha}^{+}}\, \mathsf{Q}^{-1}(\lambda)+\left(\frac{1}{\delta}+1\right)\log\frac{\lambda}{\lambda-\epsilon}+o(\sqrt{n})\label{eq:Vplus3}
\\&=nC_{1,\alpha}+nC'_{1,\alpha}(0)\delta+o(n\delta)-\sqrt{n V_{1,\alpha}^{+}}\, \mathsf{Q}^{-1}(\lambda)+\left(\frac{1}{\delta}+1\right)\log\frac{\lambda}{\lambda-\epsilon}+o(\sqrt{n})\label{eq:Vplus4}
\end{align}
where \eqref{eq:Vplus3} holds by the definition of $C_{1,\alpha}(\delta)$; and \eqref{eq:Vplus4} follows by the definition of the derivative. Selecting $\delta=\sqrt{\frac{\log\frac{\lambda}{\lambda-\epsilon}}{nC'_{1,\alpha}(0)}}$, we derive the desired bound in \eqref{eq:Vplus_goal}.
Now consider any $\lambda<1/2$. Our goal is to show that
\begin{equation}\label{eq:V_minus_goal}
\log M_1+\alpha \log M_2\le nC_{1,\alpha}(\delta)-\sqrt{nV_{1,\alpha}^-}\,\mathsf{Q}^{-1}(\lambda)+\left(\frac{1}{\delta}+1\right)\log\frac{\lambda}{\lambda-\epsilon}+o(\sqrt{n})
\end{equation}
where eventually we will choose $\delta=O(n^{-1/2})$. Thus, we may assume
\begin{equation}\label{eq:Omega_assumption}
\log M_1+\alpha \log M_2\ge nC_{1,\alpha}-\sqrt{nV_{1,\alpha}^-}\,\mathsf{Q}^{-1}(\lambda) +\left(\frac{1}{\delta}+1\right)\log\frac{\lambda}{\lambda-\epsilon}
\end{equation}
or else we are done. Now let
\begin{align}
\Omega_1&=\left\{(x^n,y^n):nD(x^n,y^n)\le nC_{1,\alpha}-\sqrt{nV_{1,\alpha}^{-}}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n-\log n}\right\},\\
\Omega_2&=\left\{(x^n,y^n):nD(x^n,y^n)\ge nC_{1,\alpha}(\delta)+\log n\right\}
\end{align}
and let \new{$p_i=P_{X^nY^n}(\Omega_i\cap\Omega_0^c)$ } for $i=1,2$. To upper bound $p_1$, beginning from the bound in \eqref{eq:dmc_stat_bound} we may write
\begin{align}
&\log M_1+\alpha \log M_2\new{+\log(1-p_0)}-\left(\frac{1}{\delta}+1\right)\log\frac{\lambda}{\lambda-\epsilon}
\\&\le -\log \sum_{(x^n,y^n)\in\Omega_1\new{\cap\Omega_0^c}} P_{X^nY^n}(x^n,y^n)
\exp\left\{-nD(x^n,y^n)+\sqrt{nV(x^n,y^n)}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\}
\\&\le -\log \sum_{(x^n,y^n)\in\Omega_1\new{\cap\Omega_0^c}} P_{X^nY^n}(x^n,y^n)
\exp\left\{-nC_{1,\alpha}+\sqrt{nV_{1,\alpha}^{-}}\,\mathsf{Q}^{-1}(\lambda)+\log n\right\}\label{eq:Omega1_bd1}
\\&=-\log p_1 +nC_{1,\alpha}-\sqrt{nV_{1,\alpha}^{-}}\,\mathsf{Q}^{-1}(\lambda)-\log n\label{eq:Omega1_bd2}
\end{align}
where in \eqref{eq:Omega1_bd1} we have used the definition of $\Omega_1$, and the fact that $\mathsf{Q}^{-1}(\lambda)\ge 0$ since $\lambda<1/2$; and \eqref{eq:Omega1_bd2} holds by the definition of $p_1$. Thus by the assumption in \eqref{eq:Omega_assumption}
\new{\begin{equation}
p_1\le \frac{1}{(1-p_0)n}=O\left(\frac{1}{n}\right)
\end{equation}
since $p_0=O(1/n)$.
}
Let
\begin{equation}\label{eq:O1O2}
V'=\min\{V(x^n,y^n):(x^n,y^n)\in(\Omega_1\cup\Omega_2)^c\}.
\end{equation}
We will prove that $V'\ge V_{1,\alpha}^--o(1)$. Fix $(x^n,y^n)\in (\Omega_1\cup\Omega_2)^c$. By the definitions of $\Omega_1,\Omega_2$, \new{since $a_n=o(\sqrt{n})$ we have
\begin{equation}
C_{1,\alpha}-O(n^{-1/2})\le D(x^n,y^n)\le C_{1,\alpha}(\delta)+\frac{\log n}{n}.
\end{equation}
Since $\delta=O(n^{-1/2})$, by Taylor's theorem and the fact from Lemma~\ref{lemma:DMCs} that $C'_{1,\alpha}(0)$ is bounded, $C_{1,\alpha}(\delta)=C_{1,\alpha}+O(n^{-1/2})$. Thus }
$|D(x^n,y^n)-C_{1,\alpha}|\le O(n^{-1/2})$.
If we again let $U\sim\text{Unif}[n]$, and
\begin{equation}\label{eq:XprimeYprime_def}
P_{X'Y'|U=t}(x,y)=1(x=x_t,y=y_t)
\end{equation}
then we may write
\begin{align}
D(x^n,y^n)&=\alpha D(W\|P_{Z|U}|P_{UX'Y'})+(1-\alpha) D(W\|P_{Z|YU}|P_{UX'Y'}),\\
\sqrt{V(x^n,y^n)}&=\alpha \sqrt{V(W\|P_{Z|U}|P_{UX'Y'})}+(1-\alpha)\sqrt{V(W\|P_{Z|YU}|P_{UX'Y'})}.
\end{align}
Also note that $\mathbb{E} [b_1(X')]=\frac{1}{n}\sum_{t=1}^n b_1(x_t)\le B_1$, and similarly $\mathbb{E} [b_2(Y')]\le B_2$. We
may perform a dimensionality reduction on $\mathcal{U}$, reducing to $|\mathcal{U}|\le 9$ while preserving the following values:
\begin{gather}
\alpha I(X,Y;Z|U)+(1-\alpha) I(X;Z|Y,U),\\
\alpha D(W\|P_{Z|U}|P_{UX'Y'})+(1-\alpha) D(W\|P_{Z|YU}|P_{UX'Y'}),\\
V(W\|P_{Z|U}|P_{UX'Y'}),\,
V(W\|P_{Z|YU}|P_{UX'Y'}),\\
\mathbb{E} [b_1(X)],\,\mathbb{E}[ b_2(Y)],\,\mathbb{E}[ b_1(X')],\,\mathbb{E}[ b_2(Y')].
\end{gather}
Note that this is not the same dimensionality reduction as above; in particular, this one depends on $x^n,y^n$. Since $\delta=O(n^{-1/2})$, by the same argument as above, there exists $P^\star_{UXY}\in\mathcal{P}_{1,\alpha}^{\text{in}}$ where $d_{TV}(P_{UXY},P^\star_{UXY})\le o(1)$. Since $|D(x^n,y^n)-C_{1,\alpha}|\le o(1)$, by continuity of the relative entropy (for finite alphabets) there exists a distribution $P^\star_{X'Y'|U}$ such that $d_{TV}(P_{UX'Y'},P^\star_{UX'Y'})\le o(1)$ and
\begin{equation}
\alpha D(W\|P^\star_{Z|U}|P^\star_{UX'Y'})+(1-\alpha) D(W\|P^\star_{Z|YU}|P^\star_{UX'Y'})=C_{1,\alpha}.
\end{equation}
That is, $(P^\star_{UXY},P^\star_{X'Y'|U})$ satisfy the feasibility condition for the definition of $V_{1,\alpha}^{-}$ in \eqref{eq:Vminus_feasibility}. By continuity of the divergence variance, this implies that $V(x^n,y^n)\ge V_{1,\alpha}^{-}-o(1)$. This proves that $V'\ge V_{1,\alpha}^{-}-o(1)$. Now we \new{may lower bound the expectation in \eqref{eq:dmc_stat_bound} by}
\begin{align}
&\mathbb{E}\left[\new{1((X^n,Y^n)\in\Omega_0^c)}
\exp\left\{-n D(X^n,Y^n)+\sqrt{n V(X^n,Y^n)}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\}\right]
\\&\ge \sum_{(x^n,y^n)\in (\new{\Omega_0\cup}\Omega_1\cup\Omega_2)^c} P_{X^nY^n}(x^n,y^n) \exp\left\{-n D(x^n,y^n)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\} \label{eq:dmc_final1}
\\&\ge\sum_{(x^n,y^n)\in \new{(\Omega_0\cup\Omega_1)}^c} P_{X^nY^n}(x^n,y^n) \exp\left\{-n D(x^n,y^n)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\} \nonumber
\\&\qquad-\sum_{(x^n,y^n)\in \new{\Omega_0^c\cap}\Omega_2} P_{X^nY^n}(x^n,y^n) \exp\left\{-n D(x^n,y^n)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\} \label{eq:dmc_final2}
\\&\ge (1\new{-p_0}-p_1) \exp\left\{-\frac{1}{1\new{-p_0}-p_1} \sum_{(x^n,y^n)\in\new{(\Omega_0\cup\Omega_1)}^c}P_{X^nY^n}(x^n,y^n) nD(x^n,y^n)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\} \nonumber
\\&\qquad-p_2 \exp\left\{-nC_{1,\alpha}(\delta)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{\log n-a_n}\right\}\label{eq:dmc_final3}
\\&\ge (1\new{-p_0}-p_1) \exp\left\{-\frac{1}{1\new{-p_0}-p_1} n\mathbb{E} [D(X^n,Y^n)]+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\} \nonumber
\\&\qquad- \exp\left\{-nC_{1,\alpha}(\delta)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{\log n-a_n}\right\}\label{eq:dmc_final4}
\\&\ge (1\new{-p_0}-p_1) \exp\left\{-\frac{1}{1\new{-p_0}-p_1} nC_{1,\alpha}(\delta)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\} \nonumber
\\&\qquad- \exp\left\{-nC_{1,\alpha}(\delta)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{\log n-a_n}\right\}\label{eq:dmc_final5}
\\&=\exp\left\{-nC_{1,\alpha}(\delta)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\}
\left(\exp\left\{\log(1\new{-p_0}-p_1)-\frac{\new{p_0+}p_1}{1\new{-p_0}-p_1}nC_{1,\alpha}(\delta)\right\}-\new{\frac{1}{n}}\right)\label{eq:dmc_final6}
\\&= \exp\left\{-nC_{1,\alpha}(\delta)+\sqrt{nV'}\,\mathsf{Q}^{-1}(\lambda)-\new{a_n}\right\}O(1)\label{eq:dmc_final7}
\\&\ge\exp\left\{-nC_{1,\alpha}(\delta)+\sqrt{nV_{1,\alpha}^{-}}\,\mathsf{Q}^{-1}(\lambda)-o(\sqrt{n})\right\}\label{eq:dmc_final8}
\end{align}
where \eqref{eq:dmc_final1} holds by the definition of $V'$, \eqref{eq:dmc_final3} holds by the definition of $\Omega_2$ and by convexity of the exponential, \eqref{eq:dmc_final4} holds by extending the sum over all $(x^n,y^n)$, \eqref{eq:dmc_final5} holds since $\mathbb{E} [D(X^n,Y^n)]=\alpha I(X,Y;Z|U)+(1-\alpha) I(X;Z|Y,U)\le C_{1,\alpha}(\delta)$; \eqref{eq:dmc_final7} holds \new{since $p_0+p_1=O(1/n)$, which implies that $\log(1-p_0-p_1)=-O(1/n)$ and $\frac{(p_0+p_1)n}{1-p_0-p_1}=O(1)$, and we also use the fact that $C_{1,\alpha}(\delta)\le\log|\mathcal{Z}|$;}%
and \eqref{eq:dmc_final8} holds since $V'\ge V_{1,\alpha}^{-}-o(1)$ \new{and $a_n=o(\sqrt{n})$}. This proves \eqref{eq:V_minus_goal}. Again using the definition of the derivative, and choosing $\delta$ optimally (this involves $\delta=O(n^{-1/2})$ as promised) completes the proof.
\new{
\subsection{Discussion of the Maximal Error Case}
\label{sec:maximal}
While the results in this paper focus on the average error probability criterion, an important variant of the problem is the one using maximal error probability. In a sense, the maximal error variant is an easier problem, because it imposes a stronger condition on each message pair. Unfortunately, as originally shown in \cite{Dueck1978}, the capacity regions for the two problem variants can differ, and in general the capacity region of the maximal error case (with deterministic encoders) is not even known.
A second-order converse bound for the maximal-error case was presented in \cite{Moulin2013}; however, the proof of the main result of \cite{Moulin2013} appears to have a gap (namely, the derivation of equation (28)). The recent work \cite{Wei2021} used a wringing-based proof (following an approach similar to that of this paper) to derive a bound similar to the one claimed in \cite{Moulin2013}. The result derived in \cite{Wei2021} is as follows. Let $R^{\star,\max}_{\alpha_1,\alpha_2}(n,\epsilon)$ be the largest achievable weighted-sum rate for a length-$n$ code with maximal probability of error $\epsilon$. Consider a discrete-memoryless MAC such that there is a unique optimal input distribution for the standard sum-rate; i.e. $\mathcal{P}_{1,1}^{\text{in}}$ contains a single distribution $P_{X}^\star P_Y^\star$. Then \cite{Wei2021} shows that
\begin{equation}\label{eq:max_err_bound}
R^{\star,\max}_{1,1}(n,\epsilon)\le C_{1,1}-\sqrt{\frac{V^\star}{n}}\,\mathsf{Q}^{-1}(\epsilon)+o\left(\frac{1}{\sqrt{n}}\right)
\end{equation}
where $V^\star=V(W\|P_Z^\star|P_X^\star P_Y^\star)$, with $P_Z^\star$ denoting the output distribution induced by $P_X^\star P_Y^\star$. This constitutes a tighter bound on the sum-rate than Thm.~\ref{thm:DMC_asymptotic}. However, note that in \eqref{eq:max_err_bound}, $C_{1,1}$ is the average-case sum-capacity, which may not be the same as the maximal-error sum-capacity, and indeed the maximal-error sum-capacity may not even be known. Thus, for many channels the gap between the best-known achievability and converse bounds for the maximal-error case is $O(1)$, as opposed to $O(1/\sqrt{n})$ for the average-error case.
}
\section{Example Multiple-Access Channels}\label{sec:examples}
\subsection{Binary Additive Erasure Channel}\label{sec:bamac}
Let $\mathcal{X}=\{0,1\}$, $\mathcal{Y}=\{0,1\}$, $\mathcal{Z}=\{0,1,2,\mathsf{e}\}$. Given $(X,Y)=(x,y)$, $Z=\mathsf{e}$ with probability $\gamma$, and $Z=x+y$ with probability $\bar\gamma=1-\gamma$. The capacity region for this channel is the pentagonal region
\begin{equation}
\mathcal{C}=\left\{(R_1,R_2):R_1+R_2\le \frac{3}{2}\bar\gamma\log 2,\, R_1\le \bar\gamma\log 2,\, R_2\le \bar\gamma\log 2\right\}.
\end{equation}
Thus the weighted-sum-capacity is
\begin{equation}
C_{\alpha_1,\alpha_2}=\left(\max\{\alpha_1,\alpha_2\}+\frac{1}{2}\min\{\alpha_1,\alpha_2\}\right)\bar\gamma\log 2.
\end{equation}
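As a sanity check on this formula, one can brute-force the weighted objective over independent inputs (which suffices in the zero-dependence case); the following sketch, a rough grid search working in natural logarithms, recovers $(1+\alpha/2)\bar\gamma\log 2$ for $(\alpha_1,\alpha_2)=(1,\alpha)$:

```python
import math

def H(probs):
    # Shannon entropy in nats
    return -sum(p * math.log(p) for p in probs if p > 0)

gamma = 0.25
bg = 1 - gamma  # \bar{gamma}

def weighted_rate(p, q, alpha):
    # X ~ Bern(p), Y ~ Bern(q) independent; Z erased w.p. gamma, else X+Y.
    # Then I(X,Y;Z) = bg*H(X+Y) and I(X;Z|Y) = bg*H(X).
    pz = [(1 - p) * (1 - q), p * (1 - q) + (1 - p) * q, p * q]
    return bg * (alpha * H(pz) + (1 - alpha) * H([p, 1 - p]))

for alpha in [0.0, 0.5, 1.0]:
    grid = [i / 200 for i in range(201)]
    best = max(weighted_rate(p, q, alpha) for p in grid for q in grid)
    formula = (1 + alpha / 2) * bg * math.log(2)
    print(alpha, best, formula)
```

The maximizing inputs are uniform ($p=q=1/2$), which the grid hits exactly.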
In order to apply Thm.~\ref{thm:DMC_asymptotic}, we need to find $C'_{\alpha_1,\alpha_2}(0)$, $V_{\alpha_1,\alpha_2}^+$, and $V_{\alpha_1,\alpha_2}^-$. First we compute $C_{\alpha_1,\alpha_2}(\delta)$. Since the channel is symmetric between the two inputs, $C_{\alpha_1,\alpha_2}(\delta)=C_{\alpha_2,\alpha_1}(\delta)$.
Let $(\alpha_1,\alpha_2)=(1,\alpha)$ for $\alpha\in[0,1]$. Since this channel has no cost constraints, the time sharing variable $U$ can be eliminated in the definition of $C_{\alpha_1,\alpha_2}(\delta)$ in \eqref{eq:Cdelta_def}. Thus
\begin{align}
C_{1,\alpha}(\delta)&=\max_{P_{XY}:\Delta(X;Y)\le\delta} \big[\alpha I(X,Y;Z)+(1-\alpha) I(X;Z|Y)\big]
\\&=\max_{P_{XY}:\Delta(X;Y)\le\delta} \bar\gamma\left[\alpha H(X+Y)+(1-\alpha) H(X|Y)\right].
\end{align}
To lower bound $C_{1,\alpha}(\delta)$, we may take $P_{XY}$ to be a DSBS with parameter $p\le 1/2$. Recalling the calculation from Example~\ref{example:DSBS}, $\Delta(X;Y)=\frac{1+\log_2(1-p)}{1-\log_2(1-p)}$, so
\begin{align}
C_{1,\alpha}(\delta)&\ge \max_{p\le 1/2: \frac{1+\log_2(1-p)}{1-\log_2(1-p)}\le\delta} \bar\gamma\left[\alpha(H_b(p)+(1-p)\log 2)+(1-\alpha) H_b(p)\right]
\\&=\begin{cases} \bar\gamma \left[ H_b(2^{1-2/(1+\delta)})+\alpha 2^{1-2/(1+\delta)}\log 2\right],
& \delta< \frac{1-\log_2(1+2^{-\alpha})}{1+\log_2(1+2^{-\alpha})},
\\ \bar\gamma [\log (1+2^{-\alpha})+\alpha\log 2],
& \delta\ge \frac{1-\log_2(1+2^{-\alpha})}{1+\log_2(1+2^{-\alpha})}\end{cases}\label{eq:BAMAC_lb}
\end{align}
where \eqref{eq:BAMAC_lb} follows from a straightforward entropy calculation. In fact, this lower bound is tight, although the proof is a little more difficult. The following proposition is proved in Appendix~\ref{appendix:bamac}.
\begin{proposition}\label{prop:bamac}
For any $\alpha\in[0,1]$ and $\delta\in[0,1]$, $C_{1,\alpha}(\delta)$ is equal to the expression in \eqref{eq:BAMAC_lb}.
\end{proposition}
Given the expression for $C_{1,\alpha}(\delta)$ in \eqref{eq:BAMAC_lb}, the first-order Taylor expansion is given by
\begin{equation}
C_{1,\alpha}(\delta)=\bar\gamma \left(1+\frac{\alpha}{2}\right)\log 2+\bar\gamma \alpha (\log^2 2)\delta+O(\delta^2).
\end{equation}
In particular, $C'_{1,\alpha}(0)=\bar\gamma\alpha \log^2 2$.
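This derivative can be confirmed by finite differences applied to the closed-form small-$\delta$ branch of \eqref{eq:BAMAC_lb}; the sketch below uses the illustrative values $\gamma=0.25$, $\alpha=0.6$ (natural logarithms throughout):

```python
import math

def Hb(p):
    # binary entropy in nats
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

gamma, alpha = 0.25, 0.6   # illustrative parameters
bg = 1 - gamma

def C(delta):
    # small-delta branch of the closed-form expression
    p = 2 ** (1 - 2 / (1 + delta))
    return bg * (Hb(p) + alpha * p * math.log(2))

h = 1e-5
deriv = (C(h) - C(-h)) / (2 * h)   # central difference at delta = 0
print(deriv, bg * alpha * math.log(2) ** 2)
```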
We now calculate the dispersion variance quantities $V_{\alpha_1,\alpha_2}^+,V_{\alpha_1,\alpha_2}^-$. For any\footnote{The $\alpha=0$ case allows other optimal input distributions, although this case is somewhat trivial, as it reduces to a point-to-point binary erasure channel.} $\alpha\in(0,1]$, $\mathcal{P}_{1,\alpha}^{\text{in}}$ is the set of distributions $P_{UXY}$ where $P_{XY|U=u}$ is uniform on $\{0,1\}^2$. That is, $(X,Y)$ are independent of $U$, so we may ignore $U$. Taking $P_Z,P_{Z|Y}$ to be the induced distributions from the unique optimal input distribution, we may calculate
\begin{align}
D(W_{xy}\|P_Z)&=(1+1(x=y))\bar\gamma\log 2,\\
D(W_{xy}\|P_{Z|Y=y})&=\bar\gamma\log 2.
\end{align}
Note that $\alpha D(W\|P_Z|P_{X'Y'})+(1-\alpha) D(W\|P_{Z|Y}|P_{X'Y'})=C_{1,\alpha}$ iff $P_{X'Y'}(0,0)+P_{X'Y'}(1,1)=1/2$. Moreover,
\begin{align}
V(W_{xy}\|P_Z)&=\gamma\bar\gamma (1+3\cdot 1(x=y))\log^2 2,\\
V(W_{xy}\|P_{Z|Y=y})&=\gamma\bar\gamma \log^2 2.
\end{align}
Thus
\begin{equation}
V_{1,\alpha}^-=\gamma\bar\gamma\left(\alpha\sqrt{\frac{5}{2}}+1-\alpha\right)^2\log^2 2.
\end{equation}
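This value can be verified numerically. The sketch below assumes the conditional divergence variance $V(W\|Q|P)$ is the $P$-average of the per-letter divergence variances, and takes the feasible $P_{X'Y'}$ uniform on $\{0,1\}^2$ (so that $P(X'=Y')=1/2$); it recovers the $\sqrt{5/2}$ coefficient:

```python
import math

gamma = 0.3               # illustrative erasure probability
bg = 1 - gamma
ln2 = math.log(2)

def W(x, y):
    # channel W_{xy}(z) for z in {0, 1, 2, 'e'}
    d = {0: 0.0, 1: 0.0, 2: 0.0, 'e': gamma}
    d[x + y] = bg
    return d

# Output distributions induced by the uniform optimal input
PZ = {0: bg / 4, 1: bg / 2, 2: bg / 4, 'e': gamma}
def PZgY(y):
    d = {0: 0.0, 1: 0.0, 2: 0.0, 'e': gamma}
    d[y] = bg / 2
    d[y + 1] = bg / 2
    return d

def div_var(Wxy, Q):
    # divergence variance of W_{xy} with respect to Q
    terms = [(p, math.log(p / Q[z])) for z, p in Wxy.items() if p > 0]
    D = sum(p, l = 0), 0  # placeholder removed below
```

Hold on.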
Moreover, $V_{1,\alpha}^+$ is the same quantity. Thm.~\ref{thm:DMC_asymptotic} now gives
\begin{equation}\label{eq:unconvex_baec}
R^\star_{1,\alpha}(n,\epsilon)
\le \bar\gamma\left(1+\frac{\alpha}{2}\right)\log 2
+\left(\min_{\lambda\in(\epsilon,1)} 2\sqrt{\bar\gamma\alpha \log\frac{\lambda}{\lambda-\epsilon}}-\sqrt{\gamma\bar\gamma}\left(\alpha\sqrt{\frac{5}{2}}+1-\alpha\right) \mathsf{Q}^{-1}(\lambda)\right)^{**}\frac{\log 2}{\sqrt{n}}+o\left(\frac{1}{\sqrt{n}}\right).
\end{equation}
In fact, the quantity inside the $(\cdot)^{**}$ is concave as a function of $\alpha$ (see Fig.~\ref{fig:baec_bounds}), so its lower convex envelope is simply the line segment connecting its values at $\alpha=0$ and $\alpha=1$. At $\alpha=0$ one can see that it is optimal to choose $\lambda=\epsilon$. Thus
\begin{equation}\label{eq:convexified_baec}
R^\star_{1,\alpha}(n,\epsilon)
\le \bar\gamma\left(1+\frac{\alpha}{2}\right)\log 2
+\left[(1-\alpha)\sqrt{\gamma\bar\gamma}\,\mathsf{Q}^{-1}(\epsilon)
+\min_{\lambda\in(\epsilon,1)} \alpha \left( 2\sqrt{\bar\gamma\log \frac{\lambda}{\lambda-\epsilon}}-\sqrt{\gamma\bar\gamma\frac{5}{2}}\,\mathsf{Q}^{-1}(\lambda)\right)\right]\frac{\log 2}{\sqrt{n}}
+o\left(\frac{1}{\sqrt{n}}\right).
\end{equation}
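The minimization over $\lambda$ is easy to carry out numerically; the sketch below evaluates the bracketed second-order coefficient for the $\alpha=1$ slice with $\gamma=0.25$ and $\epsilon=10^{-3}$ (the parameters of Fig.~\ref{fig:baec_bounds}); the printed minimizer and minimum are illustrative only:

```python
import numpy as np
from scipy.stats import norm

gamma, eps = 0.25, 1e-3
bg = 1 - gamma

# Bracketed quantity for alpha = 1, in natural-log units
# (the final log-2 scaling is applied outside the bracket).
lams = np.linspace(eps * 1.001, 0.999, 20000)
vals = (2 * np.sqrt(bg * np.log(lams / (lams - eps)))
        - np.sqrt(gamma * bg * 5 / 2) * norm.isf(lams))  # Q^{-1} = isf

i = int(np.argmin(vals))
print(lams[i], vals[i])
```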
The corresponding achievability bound from any of \cite{Tan2014,Haim2012,Huang2012,MolavianJazi2012,Scarlett2015a}\footnote{The achievable bound from \cite{Scarlett2015a} is in general the strongest, but for this channel these all produce the same bound.} is
\begin{equation}
R_{1,\alpha}^{\star}(n,\epsilon)
\ge \bar\gamma\left(1+\frac{\alpha}{2}\right)\log 2
+L(\alpha,\epsilon)\log 2
-o\left(\frac{1}{\sqrt{n}}\right)
\end{equation}
where
\begin{align}
L(\alpha,\epsilon)&=\sup\{\alpha s_1+(1-\alpha) s_2:\mathbb{P}(S_1\ge s_1,S_2\ge s_2)\ge 1-\epsilon\}
\end{align}
and $(S_1,S_2)$ are jointly Gaussian with zero mean and covariance matrix
\begin{equation}
\gamma\bar\gamma\left[\begin{array}{cc} 5/2 & 3/2 \\ 3/2 & 1\end{array}\right].
\end{equation}
Fig.~\ref{fig:baec_bounds} illustrates the upper and lower bounds on the coefficient in the $O(1/\sqrt{n})$ term. The figure shows bounds on the second-order coefficient for $R^\star_{1,\alpha}(n,\epsilon)$ for $\gamma=0.25,\epsilon=10^{-3}$, and also bounds on $R^\star_{1,1}(n,\epsilon)$---i.e., the standard sum-rate---for all $\gamma\in[0,1]$ and $\epsilon=10^{-3}$. Unfortunately, the upper and lower bounds match only in essentially trivial cases: when $\alpha=0$, wherein the problem reduces to the point-to-point binary erasure channel, and when $\gamma=1$, wherein the output is independent of the inputs so no communication is possible.
\begin{figure}
\begin{minipage}{.49\textwidth}
\centering
\includegraphics[width=3.5in]{baec.eps}\\
{\small (a)}
\end{minipage}
\hfill
\begin{minipage}{.49\textwidth}
\centering
\includegraphics[width=3.5in]{baec_sum_rate.eps}\\
{\small (b)}
\end{minipage}
\caption{Upper and lower bounds on the second-order coefficient for the binary additive erasure channel. Subfigure (a) shows the second-order bounds for the maximum achievable weighted-sum-rate $R^\star_{1,\alpha}(n,\epsilon)$ as a function of $\alpha\in[0,1]$ for erasure probability $\gamma=0.25$ and probability of error $\epsilon=10^{-3}$. Subfigure (b) shows second-order bounds for the standard sum-rate $R^\star_{1,1}(n,\epsilon)$ as a function of $\gamma\in[0,1]$ for $\epsilon=10^{-3}$. The lower bound is from prior work \cite{Tan2014,Haim2012,Huang2012,MolavianJazi2012,Scarlett2015a}, while the upper bound is our contribution. In subfigure (a), along with the upper bound from \eqref{eq:convexified_baec}, we also show the weaker upper bound found by not taking the lower convex envelope in \eqref{eq:unconvex_baec}. Note that the stronger bound is simply the lower convex envelope of the weaker bound.}
\label{fig:baec_bounds}
\end{figure}
\subsection{Gaussian MAC}\label{sec:gaussian}
In the Gaussian MAC, $X,Y,Z$ are all real-valued, the output is $Z=X+Y+N$, where $N\sim\mathcal{N}(0,1)$, and the input sequences $X^n,Y^n$ are subject to power constraints $\sum_{t=1}^n X_t^2\le nS_1$ and $\sum_{t=1}^n Y_t^2\le nS_2$. The following result, proved in Appendix~\ref{appendix:gaussian}, states that the Gaussian MAC satisfies the conditions of Corollary~\ref{cor:regularity}, and so its second-order rate is $O(1/\sqrt{n})$.
\begin{theorem}\label{thm:gaussian}
For the Gaussian MAC, $C_{\alpha_1,\alpha_2}'(0)$ is uniformly bounded for all $\alpha_1,\alpha_2$ where $\max\{\alpha_1,\alpha_2\}=1$, and $V_{\max}<\infty$.
\end{theorem}
In the statement of this theorem, we have omitted any specific bound on $C'_{\alpha_1,\alpha_2}(0)$ or $V_{\max}$. While such bounds can be extracted from the proof, we have sought clarity of the proof over optimality of the bounds\footnote{The length and complexity of the proof in Appendix~\ref{appendix:gaussian} may make the reader skeptical of this claim, but it is true!}, and so we have elected to highlight the order of the bound on the second-order rate, rather than the coefficient.
\section{Conclusion}\label{sec:conclusion}
The main result of this paper is that, for most multiple-access channels of interest, under the average probability of error constraint the second-order coding rate is $O(1/\sqrt{n})$ bits per channel use. Along the way, we introduced and characterized the wringing dependence, which was a critical element in the proof of the main results.
Possible future work includes extensions to more than two transmitters, or applying similar techniques to other network information theory problems (the interference channel with strong interference should be a straightforward extension). Moreover, there are a number of ways that our results could potentially be improved even for the two-user MAC. First, the regularity conditions given in Corollary~\ref{cor:regularity}, under which we are able to prove the second-order bound of $O(1/\sqrt{n})$, are quite difficult to verify for non-discrete channels. The only continuous channel for which we have successfully verified the conditions is the Gaussian MAC; the proof of this in Appendix~\ref{appendix:gaussian} is quite technical, as well as being very specific to the Gaussian channel. It would be advantageous to find conditions that are easier to verify under which the second-order bound holds.
A second potential improvement has to do with the quantity $V^{\lambda}_{\alpha_1,\alpha_2}$ in Thm.~\ref{thm:DMC_asymptotic}. Specifically, the form of $V_{\alpha_1,\alpha_2}^{-}$ in \eqref{eq:Vminus_def} is not especially natural; it may be possible to improve the result so that this quantity is complementary to $V_{\alpha_1,\alpha_2}^{+}$; that is, \eqref{eq:Vplus_def} with an infimum instead of a supremum. In addition, Thm.~\ref{thm:DMC_asymptotic} could be strengthened using dispersion quantities extracted from multi-dimensional Gaussian CDFs, along the lines of the achievable bounds in \cite{Tan2014,Haim2012,Huang2012,MolavianJazi2012,Scarlett2015a,MolavianJazi2015}. One may also wish to prove something similar to Thm.~\ref{thm:DMC_asymptotic} for non-discrete channels.
Of course, the ultimate goal would be to determine the second-order coefficient exactly. Even if the above improvements could be made, there would remain a gap between achievability and converse bounds for almost all channels, \new{including} such simple examples as the deterministic binary additive channel. It appears that new ideas are required in order to close the gap completely. One possible direction of improvement, which the method used here fails to address, is the following. Consider the distribution of the error probability conditioned on the message pair. That is, let $\epsilon(i_1,i_2)$ be the error probability given message pair $(i_1,i_2)$. Taking $(I_1,I_2)$ to be uniformly random over the message sets, it is critical to characterize the distribution of the random variable $\epsilon(I_1,I_2)$ in any MAC converse proof. In our proof, we do not use anything about the distribution of $\epsilon(I_1,I_2)$ beyond that its expected value is the overall error probability. In particular, the proof would allow $\epsilon(I_1,I_2)$ to take values only $\{0,\lambda\}$ for some $\lambda$. Intuitively, no good code could give rise to such a distribution on $\epsilon(I_1,I_2)$. Indeed, existing achievable bounds produce distributions on $\epsilon(I_1,I_2)$ that are close to Gaussian---very different from a distribution taking only two values. The independence of the messages would seem to impose certain restrictions on the distribution of this variable, but the precise nature of these restrictions remains elusive.
Another intriguing area of inquiry relates to hypercontractivity. As discussed in Sec.~\ref{sec:other_measures}, the wringing dependence can be upper bounded by a quantity related to hypercontractivity. However, this upper bound did not actually help in the converse proof. A \emph{lower} bound on wringing dependence could help establish that the regularity conditions of Corollary~\ref{cor:regularity} are satisfied, as one must show that the information capacity region does not grow too much by allowing a small wringing dependence between the channel inputs. It is unclear whether there is some alternative method of wringing that uses hypercontractivity more directly. Another question along these lines is whether there is any connection between the technique used here and that of \cite{Liu2019}, which proves second-order converses for a variety of problems via \emph{reverse} hypercontractivity.
\appendices
\section{Proof of Proposition~\ref{prop:hypercontractivity}}\label{appendix:hyper}
To prove \eqref{eq:hyp_upper_bd}, we take $\delta\in[0,1]$ to be such that $(1+1/\delta,1+\delta)\in\mathcal{R}_{X;Y}$, and we will show $\Delta(X;Y)\le\delta$. Let $r=1+1/\delta$ and $s=1+\delta$. It was found in \cite{Kamath2012} that an equivalent condition for $(r,s)\in\mathcal{R}_{X;Y}$ is that, for all $f:\mathcal{X}\to\mathbb{R}$, $g:\mathcal{Y}\to\mathbb{R}$,
\begin{equation}
\mathbb{E} [f(X)g(Y)]\le \|f(X)\|_{r'}\|g(Y)\|_s,
\end{equation}
where $r'$ is the H\"older conjugate of $r$, defined by $\frac{1}{r}+\frac{1}{r'}=1$. In this case, since $r=1+1/\delta$, $r'=1+\delta$. Thus, for all real-valued functions $f$ and $g$,
\begin{equation}\label{eq:ribbon_condition}
\mathbb{E} [f(X)g(Y)]\le \|f(X)\|_{1+\delta}\|g(Y)\|_{1+\delta}.
\end{equation}
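As an illustration, for a DSBS with crossover probability $p$ it is known that the condition \eqref{eq:ribbon_condition} holds exactly when $\delta\ge\rho=1-2p$ (two-function hypercontractivity for $\rho$-correlated bits; this characterization is quoted here purely for illustration and is not used in the proof). The sketch below spot-checks \eqref{eq:ribbon_condition} at $\delta=\rho$ on random nonnegative $f,g$, and exhibits a violation for $\delta$ slightly below $\rho$:

```python
import random

p = 0.1                      # DSBS crossover probability
rho = 1 - 2 * p              # correlation coefficient
J = {(0, 0): (1 - p) / 2, (0, 1): p / 2,
     (1, 0): p / 2, (1, 1): (1 - p) / 2}

def lhs(f, g):
    # E[f(X)g(Y)] under the DSBS joint distribution
    return sum(J[x, y] * f[x] * g[y] for x in (0, 1) for y in (0, 1))

def norm_(f, d):
    # (1+d)-norm under the uniform marginal
    return (0.5 * f[0] ** (1 + d) + 0.5 * f[1] ** (1 + d)) ** (1 / (1 + d))

random.seed(0)
ok = all(lhs(f, g) <= norm_(f, rho) * norm_(g, rho) + 1e-12
         for f, g in (([random.random(), random.random()],
                       [random.random(), random.random()])
                      for _ in range(2000)))
print(ok)

# Indicator functions violate the condition for delta slightly below rho:
f = g = [1.0, 0.0]
print(lhs(f, g), norm_(f, 0.7) * norm_(g, 0.7))
```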
Given any $\mathcal{A}\subset\mathcal{X},\mathcal{B}\subset\mathcal{Y}$, let $f(x)=1(x\in \mathcal{A})$ and $g(y)=1(y\in \mathcal{B})$. Thus
\begin{align}
P_{XY}(\mathcal{A},\mathcal{B})&=\mathbb{E} [f(X)g(Y)]
\\&\le \|f(X)\|_{1+\delta}\|g(Y)\|_{1+\delta}\label{eq:ribbon2}
\\&= \left(\mathbb{E} \left[f(X)^{1+\delta}\right]\,\mathbb{E} \left[g(Y)^{1+\delta}\right]\right)^{1/(1+\delta)}
\\&= \left(P_X(\mathcal{A})P_Y(\mathcal{B})\right)^{1/(1+\delta)}.\label{eq:ribbon4}
\end{align}
Therefore, $\delta$ satisfies the feasibility condition in \eqref{eq:Delta_def0} with $Q_X=P_X,Q_Y=P_Y$, so $\Delta(X;Y)\le\delta$.
It follows from the data processing inequality for wringing dependence that $\Delta(X^n;Y^n)$ is non-decreasing in $n$. We now prove the limiting behavior in \eqref{eq:hyp_limit}. Due to the tensorization property of hypercontractivity (cf.~\cite{Mossel2013}), $\mathcal{R}_{X^n;Y^n}=\mathcal{R}_{X;Y}$, and so $\Delta_{\text{hyp}}(X^n;Y^n)=\Delta_{\text{hyp}}(X;Y)$. From the upper bound we have already proved, $\Delta(X^n;Y^n)\le \Delta_{\text{hyp}}(X;Y)$ for any $n$.
Now it is enough to show
\begin{equation}\label{eq:n_hyp_lower_bd}
\lim_{n\to\infty} \Delta(X^n;Y^n)\ge \Delta_{\text{hyp}}(X;Y).
\end{equation}
To prove this lower bound, suppose first that $\mathcal{X},\mathcal{Y}$ are finite sets; we will later relax this assumption. We will need some results from the method of types. In particular, let $\mathcal{P}_n(\mathcal{X})$ be the set of $n$-length types on alphabet $\mathcal{X}$; that is, distributions $P\in\mathcal{P}(\mathcal{X})$ where $P(x)$ is a multiple of $1/n$ for each $x\in\mathcal{X}$. For a sequence $x^n$, let $P_{x^n}\in\mathcal{P}_n(\mathcal{X})$ be its type:
\begin{equation}
P_{x^n}(x)=\frac{|\{t:x_t=x\}|}{n}.
\end{equation}
Fix a finite alphabet $\mathcal{U}$, and a conditional distribution $P_{U|XY}$. Let $P_{UXY}=P_{XY}P_{U|XY}$. For each integer $n$, let $P_{UXY}^{(n)}$ be the element of $\mathcal{P}_n(\mathcal{U}\times\mathcal{X}\times\mathcal{Y})$ closest in total variational distance to $P_{UXY}$. Note that $d_{TV}(P^{(n)}_{UXY},P_{UXY})\to 0$ as $n\to\infty$. Define the type class
\begin{equation}
T(X)=\{x^n:P_{x^n}=P_X^{(n)}\};
\end{equation}
$T(U),T(XY)$, etc. are defined similarly. Given a sequence $u^n\in T(U)$, define the conditional type class
\begin{align}
T(X|u^n)&=\{x^n:P_{u^nx^n}=P_{UX}^{(n)}\};
\end{align}
again $T(Y|u^n),T(XY|u^n)$ are defined similarly. A basic result from the method of types (see e.g. \cite[Chap.~11]{Cover1991}) is that
\begin{equation}
\frac{1}{(n+1)^{|\mathcal{X}|\cdot|\mathcal{U}|}} \exp\{nH(X|U)\}\le|T(X|u^n)|\le \exp\{nH(X|U)\}
\end{equation}
where the conditional entropy is with respect to $P^{(n)}_{UXY}$. Moreover, for any $x^n\in T(X|u^n)$,
\begin{equation}
P_{X^n}(x^n)=\exp\{-n(H(X)+D(P_X^{(n)}\|P_X))\}.
\end{equation}
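The counting bound on $|T(X|u^n)|$ is easy to check exactly on a small example; the sketch below takes binary $\mathcal{U},\mathcal{X}$, $n=12$, and an arbitrarily chosen joint type:

```python
import math
from math import comb

# Small example: U, X binary, n = 12, joint type counts n_{u,x}
n_ux = {(0, 0): 3, (0, 1): 3, (1, 0): 4, (1, 1): 2}
n = sum(n_ux.values())
n_u = {u: n_ux[u, 0] + n_ux[u, 1] for u in (0, 1)}

# |T(X|u^n)|: choose the positions of X=1 within each block of u-values
T_size = 1
for u in (0, 1):
    T_size *= comb(n_u[u], n_ux[u, 1])

# Conditional entropy H(X|U) under the joint type (nats)
HXgU = 0.0
for (u, x), c in n_ux.items():
    if c > 0:
        HXgU -= (c / n) * math.log((c / n) / (n_u[u] / n))

lower = (n + 1) ** (-4) * math.exp(n * HXgU)   # |X| * |U| = 4
upper = math.exp(n * HXgU)
print(lower, T_size, upper)
```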
Similar facts hold for $T(Y|u^n),T(XY|u^n)$. We may now lower bound $\Delta(X^n;Y^n)$ by restricting $\mathcal{A}$ and $\mathcal{B}$ to the sets $T(X|u^n)$ and $T(Y|u^n)$ respectively, for some $u^n\in T(U)$. Thus
\begin{align}\label{eq:XnYn_lower_bound}
\Delta(X^n;Y^n)
&\ge \inf_{Q_{X^n},Q_{Y^n}}\ \max_{u^n\in T(U)}
\frac{\log Q_{X^n}(T(X|u^n))Q_{Y^n}(T(Y|u^n))}{\log P_{X^nY^n}(T(X|u^n),T(Y|u^n))}-1.
\end{align}
In this expression, $Q_{X^n}$ is only evaluated on sequences $x^n\in T(X)$. Moreover, the objective function is symmetric among the sequences $x^n$ in this type class. Similar facts hold for $Q_{Y^n}$. Thus, by the convexity of the expression in \eqref{eq:XnYn_lower_bound} in $(Q_{X^n},Q_{Y^n})$, the optimal choices of $Q_{X^n}$ and $Q_{Y^n}$ are uniform over $T(X)$ and $T(Y)$ respectively. Thus, for any $u^n\in T(U)$,
\begin{equation}
Q_{X^n}(T(X|u^n))=\frac{|T(X|u^n)|}{|T(X)|}
\le (n+1)^{|\mathcal{X}|} \exp\{-n I(U;X)\}.
\end{equation}
Similarly
\begin{equation}
Q_{Y^n}(T(Y|u^n))\le (n+1)^{|\mathcal{Y}|} \exp\{-n I(U;Y)\}.
\end{equation}
We may also write
\begin{align}
P_{X^nY^n}(T(X|u^n),T(Y|u^n))
&\ge P_{X^nY^n}(T(XY|u^n))
\\&= |T(XY|u^n)| \exp\{-n(H(XY)+D(P_{XY}^{(n)}\|P_{XY}))\}
\\&\ge \frac{1}{(n+1)^{|\mathcal{X}|\cdot|\mathcal{Y}|\cdot|\mathcal{U}|}} \exp\{-n (I(U;XY)+D(P_{XY}^{(n)}\|P_{XY}))\}.
\end{align}
Thus
\begin{align}
\Delta(X^n;Y^n)&\ge
\frac{-n(I(U;X)+I(U;Y))+(|\mathcal{X}|+|\mathcal{Y}|)\log (n+1)}{-n(I(U;XY)+D(P_{XY}^{(n)}\|P_{XY}))-(|\mathcal{X}|\cdot|\mathcal{Y}|\cdot|\mathcal{U}|)\log(n+1)}-1
\end{align}
By the continuity of Kullback-Leibler divergence for finite alphabets, $D(P_{XY}^{(n)}\|P_{XY})\to 0$ as $n\to\infty$. Thus, if we take a limit as $n\to\infty$, we find
\begin{equation}\label{eq:hyp_lower_bound_MI_form}
\lim_{n\to\infty}\Delta(X^n;Y^n)
\ge \sup_{U} \frac{I(U;X)+I(U;Y)}{I(U;XY)}-1
\end{equation}
where we have taken a supremum over all finite alphabets $\mathcal{U}$ and all conditional distributions $P_{U|XY}$, and now the mutual informations are with respect to $P_{UXY}$.
We now show that the RHS of \eqref{eq:hyp_lower_bound_MI_form} is lower bounded by $\Delta_{\text{hyp}}(X;Y)$. As shown in \cite{Nair2014}, for any $r\ge s\ge 1$, $(r,s)\in\mathcal{R}_{X;Y}$ if and only if
\begin{equation}\label{eq:ribbon_MI_form}
s\ge \sup_{U} \frac{r I(U;Y)}{rI(U;XY)-(r-1)I(U;X)}
\end{equation}
where the supremum is over variables $U$ with finite alphabets. (In fact, an alphabet of size $2$ is enough.) Consider any $\delta<\Delta_{\text{hyp}}(X;Y)$. By the definition of $\Delta_{\text{hyp}}$ in \eqref{eq:Delta_hyp_def}, it must be that $(1+1/\delta,1+\delta)\notin\mathcal{R}_{X;Y}$. By the equivalent characterization of $\mathcal{R}_{X;Y}$ in \eqref{eq:ribbon_MI_form}, this implies there exists a variable $U$ such that
\begin{equation}
1+\delta<\frac{(1+\frac{1}{\delta})I(U;Y)}{(1+\frac{1}{\delta})I(U;XY)-\frac{1}{\delta}I(U;X)}.
\end{equation}
Rearranging gives
\begin{equation}
\delta < \frac{I(U;Y)+I(U;X)}{I(U;XY)}-1.
\end{equation}
As this holds for any $\delta<\Delta_{\text{hyp}}(X;Y)$, the RHS of \eqref{eq:hyp_lower_bound_MI_form} is indeed lower bounded by $\Delta_{\text{hyp}}(X;Y)$.
While the above argument only applies for finite alphabets, for infinite alphabets we may apply a quantization argument as follows. Let $[X],[Y]$ be finite quantizations of $X,Y$. We write $[X]^n=([X_1],\ldots,[X_n])$ where each $[X_t]$ is the quantization of $X_t$ using the same quantization. By the data processing inequality and the fact that we have already proved the lower bound in \eqref{eq:n_hyp_lower_bd} for finite alphabets,
\begin{equation}
\lim_{n\to\infty}\Delta(X^n;Y^n)
\ge \lim_{n\to\infty} \Delta([X]^n;[Y]^n)
\ge \Delta_{\text{hyp}}([X];[Y]).
\end{equation}
We may take a supremum on the RHS over all finite quantizations, so it is enough to show that this supremum equals $\Delta_{\text{hyp}}(X;Y)$. Some equivalent forms for $\Delta_{\text{hyp}}$ are as follows:
\begin{align}
\Delta_{\text{hyp}}(X;Y)&=\inf\{\delta\ge 0: \mathbb{E} [f(X)g(Y)]\le \|f(X)\|_{1+\delta}\|g(Y)\|_{1+\delta}\text{ for all }f,g\}\\
&=\sup\{\delta\ge 0: \mathbb{E} [f(X)g(Y)]> \|f(X)\|_{1+\delta}\|g(Y)\|_{1+\delta}\text{ for some }f,g\}.
\end{align}
Recalling the definition of a \emph{simple} function as one that takes on only finitely many values, we may write
\begin{align}\label{eq:finite_quantizations}
\sup_{\text{finite quantizations }[X],[Y]}
\Delta_{\text{hyp}}([X];[Y])
=\sup\{\delta\ge 0:\mathbb{E} [f(X)g(Y)]> \|f(X)\|_{1+\delta}\|g(Y)\|_{1+\delta}\text{ for some simple }f,g\}.
\end{align}
By the usual definition of the Lebesgue integral, if there exist functions $f,g$ such that $\mathbb{E}[ f(X)g(Y)]> \|f(X)\|_{1+\delta}\|g(Y)\|_{1+\delta}$, then there also exist simple functions satisfying the same inequality. This proves that the quantity in \eqref{eq:finite_quantizations} equals $\Delta_{\text{hyp}}(X;Y)$.
\section{Proof of Lemma~\ref{lemma:max_corr}}\label{appendix:max_corr}
Assume $\Delta(X;Y)\le\delta$. One way to express the maximal correlation is
\begin{equation}
\rho_m(X;Y)=\sup_{\substack{f,g:\\ \mathbb{E}[ f(X)]=\mathbb{E} [g(Y)]=0,\\ \var (f(X))=\var (g(Y))=1}} \mathbb{E} [f(X)g(Y)].
\end{equation}
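For finite alphabets, a standard characterization (due to Witsenhausen, quoted here for illustration only) computes $\rho_m(X;Y)$ as the second-largest singular value of the matrix with entries $P_{XY}(x,y)/\sqrt{P_X(x)P_Y(y)}$. The sketch below recovers $\rho_m=1-2p$ for a DSBS:

```python
import numpy as np

p = 0.1
J = np.array([[(1 - p) / 2, p / 2],
              [p / 2, (1 - p) / 2]])       # DSBS joint distribution
Px = J.sum(axis=1)
Py = J.sum(axis=0)
M = J / np.sqrt(np.outer(Px, Py))
s = np.linalg.svd(M, compute_uv=False)     # singular values, descending
print(s)  # the largest is always 1; the second is rho_m
```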
Take any $f,g$ such that $f(X),g(Y)$ have zero mean and unit variance. We wish to show that $\mathbb{E} [f(X)g(Y)]\le O(\delta\log\delta^{-1})$. We may define $X'=f(X)$ and $Y'=g(Y)$. By the fact that $\Delta$ satisfies the data processing inequality, $\Delta(X';Y')\le\delta$. To simplify notation, we drop the primes, and assume that $X$ and $Y$ are themselves real-valued random variables with zero mean and unit variance. Now it is enough to show that $\mathbb{E} [XY]\le O(\delta\log\delta^{-1})$.
We upper bound $\mathbb{E} [XY]$ by breaking into pieces as follows:
\begin{multline}\label{eq:XY_four_parts}
\mathbb{E} [XY]=\mathbb{E} [XY1(X>0,Y>0)]+\mathbb{E} [XY1(X>0,Y<0)]
+\mathbb{E} [XY1(X<0,Y>0)]+\mathbb{E} [XY1(X<0,Y<0)].
\end{multline}
We will proceed to show that
\begin{equation}\label{eq:XY_bound_goal}
\left|\mathbb{E} [XY1(X>0,Y>0)]-\mathbb{E} [X1(X>0)]\,\mathbb{E} [Y1(Y>0)]\right|\le O(\delta\log \delta^{-1}).
\end{equation}
This is enough to prove the lemma, since each term in \eqref{eq:XY_four_parts} can be bounded using \eqref{eq:XY_bound_goal} by swapping $X$ with $-X$ and/or $Y$ with $-Y$. The primary tool we use to prove \eqref{eq:XY_bound_goal} is the consequence of $\Delta(X;Y)\le\delta$ in \eqref{eq:dep_pp_bd}, which upper bounds a joint probability over $P_{XY}$ in terms of the marginal probabilities raised to the power $1/(1+\delta)$. To apply this fact to bound the expectation requires writing the expectation in terms of probabilities, which can be done as follows:
\begin{equation}
\mathbb{E} [XY1(X>0,Y>0)]=\int_0^\infty dx \int_0^\infty dy \mathbb{P}(X>x,Y>y)\label{eq:integral_expansion}.
\end{equation}
We may now apply \eqref{eq:dep_pp_bd} to the probability $\mathbb{P}(X>x,Y>y)$ to derive the upper bound
\begin{align}
\mathbb{E} [XY1(X>0,Y>0)]
&\le (1+2\delta) \int_0^\infty \mathbb{P}(X>x)^{1/(1+\delta)} dx \int_0^\infty \mathbb{P}(Y>y)^{1/(1+\delta)} dy.\label{eq:two_integrals}
\end{align}
We may now bound one of the integrals in \eqref{eq:two_integrals} by writing
\begin{align}
\int_0^\infty \mathbb{P}(X>x)^{1/(1+\delta)}dx-\mathbb{E} [X1(X>0)]
&=\int_0^\infty \left[\mathbb{P}(X>x)^{1/(1+\delta)}-\mathbb{P}(X>x)\right]dx
\\&\le \int_0^\infty \left[\mathbb{P}(X>x)^{1/(1+\delta)}-\frac{1}{1+\delta}\mathbb{P}(X>x)\right]dx\label{eq:chebyshev0}
\\&\le \int_0^1 \frac{\delta}{1+\delta}dx+\int_1^\infty \left[\left(\frac{1}{x^2}\right)^{1/(1+\delta)}-\frac{1}{(1+\delta)x^2}\right]dx\label{eq:chebyshev1}
\\&=\frac{4\delta}{1-\delta^2}\label{eq:chebyshev2}
\\&=O(\delta)\label{eq:chebyshev4}
\end{align}
where \eqref{eq:chebyshev1} holds because the function $p\mapsto p^{1/(1+\delta)}-\frac{p}{1+\delta}$ is increasing on $[0,1]$ for any $\delta\ge 0$, with maximum value $\frac{\delta}{1+\delta}$ at $p=1$, and because $\mathbb{P}(X>x)\le 1/x^2$ by Chebyshev's inequality and the assumption that $\mathbb{E} [X^2]=1$.
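The two facts used in \eqref{eq:chebyshev1}--\eqref{eq:chebyshev2} can be spot-checked numerically. The following is only a sanity check, not part of the proof, and the helper names (\texttt{phi}, \texttt{two\_piece\_total}) are ours:

```python
import math

def phi(p, d):
    # the function p -> p^{1/(1+d)} - p/(1+d) bounded in (chebyshev0)
    return p ** (1.0 / (1 + d)) - p / (1 + d)

def two_piece_total(d):
    # closed form of the two integrals in (chebyshev1):
    # d/(1+d) on [0,1], plus int_1^inf [x^{-2/(1+d)} - x^{-2}/(1+d)] dx
    return d / (1 + d) + (1 + d) / (1 - d) - 1.0 / (1 + d)

results = []
for d in (0.01, 0.1, 0.5):
    ps = [i / 1000.0 for i in range(1, 1001)]
    vals = [phi(p, d) for p in ps]
    increasing = all(vals[i] <= vals[i + 1] + 1e-12 for i in range(len(vals) - 1))
    max_at_one = abs(vals[-1] - d / (1 + d)) < 1e-12
    closed_form = abs(two_piece_total(d) - 4 * d / (1 - d ** 2)) < 1e-12
    results.append(increasing and max_at_one and closed_form)
print(all(results))
```

The closed-form check confirms that the sum of the two pieces indeed collapses to $\frac{4\delta}{1-\delta^2}$ as stated in \eqref{eq:chebyshev2}.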
Since the same argument holds for the integral over $y$ in \eqref{eq:two_integrals}, we have
\begin{align}
\mathbb{E} [XY1(X>0,Y>0)]&\le (1+2\delta)\left(\mathbb{E} [X1(X>0)]+O(\delta)\right)\left(\mathbb{E} [Y1(Y>0)]+O(\delta)\right)
\\&\le \mathbb{E} [X1(X>0)]\,\mathbb{E} [Y1(Y>0)]+O(\delta)\label{eq:EXY_upper_bd}
\end{align}
where we have used the fact that
\begin{equation}
\mathbb{E} [X1(X>0)]\le \sqrt{\mathbb{E} [X^2 1(X>0)]}\le \sqrt{\mathbb{E} [X^2]}\le 1
\end{equation}
and the same holds for $Y$.
We now lower bound $\mathbb{E} [XY1(X>0,Y>0)]$. Again using the integral expansion in \eqref{eq:integral_expansion}, we may do so by lower bounding $\mathbb{P}(X>x,Y>y)$. It will be convenient to define the function
\begin{equation}\label{eq:k_delta_def}
k_\delta(p)=\begin{cases}(1+2\delta) p^{1/(1+\delta)}-p, & p\le 1\\ 2\delta, & p>1\end{cases}.
\end{equation}
For $p\ge 0$, $k_\delta(p)$ is non-decreasing, concave, and $0\le k_\delta(p)\le 2\delta$. For any $x\ge 0,y\ge 0$,
\begin{align}
\mathbb{P}(X>x,Y>y)&=\mathbb{P}(X>x)-\mathbb{P}(X>x,Y\le y)\label{eq:PXY_lower_bd0}
\\&\ge \mathbb{P}(X>x)-(1+2\delta)\left[\mathbb{P}(X>x)\mathbb{P}(Y\le y)\right]^{1/(1+\delta)}\label{eq:PXY_lower_bd1}
\\&= \mathbb{P}(X>x)\mathbb{P}(Y>y)+\mathbb{P}(X>x)\mathbb{P}(Y\le y)-(1+2\delta)\left[\mathbb{P}(X>x)\mathbb{P}(Y\le y)\right]^{1/(1+\delta)}
\\&=\mathbb{P}(X>x)\mathbb{P}(Y>y)-k_\delta(\mathbb{P}(X>x)\mathbb{P}(Y\le y))\label{eq:PXY_lower_bd2}
\\&\ge \mathbb{P}(X>x)\mathbb{P}(Y>y)-k_\delta(\mathbb{P}(X>x))\label{eq:PXY_lower_bd}
\end{align}
where in \eqref{eq:PXY_lower_bd1} we have again applied \eqref{eq:dep_pp_bd}, in \eqref{eq:PXY_lower_bd2} we have used the definition of $k_\delta$, and in \eqref{eq:PXY_lower_bd} we have used the fact that $k_\delta$ is non-decreasing. We may now bound
\begin{align}
&\mathbb{E} [X1(X>0)]\,\mathbb{E} [Y1(Y>0)]-\mathbb{E} [XY1(X>0,Y>0)]
\\&=\int_0^\infty dx \int_0^\infty dy \left[ \mathbb{P}(X>x)\mathbb{P}(Y>y)-\mathbb{P}(X>x,Y>y)\right]
\\&\le \int_0^\infty dx \int_0^\infty dy \min\{\mathbb{P}(X>x)\mathbb{P}(Y>y),\, k_\delta(\mathbb{P}(X>x)),\, k_\delta(\mathbb{P}(Y>y))\}\label{eq:3min}
\end{align}
where \eqref{eq:3min} holds by three upper bounds on $\mathbb{P}(X>x)\mathbb{P}(Y>y)-\mathbb{P}(X>x,Y>y)$: the fact that $\mathbb{P}(X>x,Y>y)\ge 0$, the bound in \eqref{eq:PXY_lower_bd}, and the bound in \eqref{eq:PXY_lower_bd} with $X$ and $Y$ swapped. To further upper bound \eqref{eq:3min}, we separate the integral over $x$ and $y$ into three regions: when $x,y\ge \delta^{-1/2}$, we upper bound the integrand by $\mathbb{P}(X>x)\mathbb{P}(Y>y)$; when $y\le x$ and $y \le \delta^{-1/2}$, we upper bound the integrand by $k_\delta(\mathbb{P}(X>x))$; when $x\le y$ and $x\le\delta^{-1/2}$, we upper bound the integrand by $k_\delta(\mathbb{P}(Y>y))$. Thus \eqref{eq:3min} is at most
\begin{multline}\label{eq:three_terms}
\int_{\delta^{-1/2}}^\infty \mathbb{P}(X>x) dx \int_{\delta^{-1/2}}^\infty \mathbb{P}(Y>y) dy
+\int_0^\infty dx \int_0^{\min\{x,\delta^{-1/2}\}}dy k_\delta(\mathbb{P}(X>x))
\\ +\int_0^\infty dy \int_0^{\min\{y,\delta^{-1/2}\}}dx k_\delta(\mathbb{P}(Y>y)).
\end{multline}
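The properties of $k_\delta$ claimed after \eqref{eq:k_delta_def} (non-decreasing, concave, and $0\le k_\delta\le 2\delta$) can be confirmed numerically on a grid; this is a sanity check of the definition only, with our own helper names:

```python
def k_delta(p, d):
    # definition (k_delta_def): (1+2d) p^{1/(1+d)} - p for p <= 1, else 2d
    return (1 + 2 * d) * p ** (1.0 / (1 + d)) - p if p <= 1 else 2 * d

ok = True
for d in (0.05, 0.2, 0.8):
    ps = [i / 500.0 for i in range(1001)]  # uniform grid on [0, 2]
    vals = [k_delta(p, d) for p in ps]
    ok &= all(vals[i] <= vals[i + 1] + 1e-12 for i in range(len(vals) - 1))  # non-decreasing
    ok &= all(-1e-12 <= v <= 2 * d + 1e-12 for v in vals)                    # 0 <= k <= 2d
    ok &= all(vals[i] >= (vals[i - 1] + vals[i + 1]) / 2 - 1e-12             # midpoint concavity
              for i in range(1, len(vals) - 1))
print(ok)
```

The downward kink at $p=1$ (positive left derivative, zero right derivative) preserves global concavity, which is what the Jensen step in \eqref{eq:integral1a} relies on.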
We now bound each term in \eqref{eq:three_terms} in turn. In the first term in \eqref{eq:three_terms}, Chebyshev's inequality gives
\begin{align}
\int_{\delta^{-1/2}}^\infty \mathbb{P}(X>x)dx
\le \int_{\delta^{-1/2}}^\infty \frac{1}{x^2}dx
=\sqrt{\delta}.
\end{align}
The same calculation holds for $Y$, so the first term in \eqref{eq:three_terms} is at most $\delta$. The second term in \eqref{eq:three_terms} may be bounded by
\begin{align}
&\int_0^\infty \min\{x,\delta^{-1/2}\} k_\delta(\mathbb{P}(X>x))dx
\\&=\int_0^{\delta^{-1/2}} x\, k_\delta(\mathbb{P}(X>x))dx+\delta^{-1/2}\int_{\delta^{-1/2}}^\infty k_\delta(\mathbb{P}(X>x))dx
\\&\le \frac{1}{2\delta} \int_0^{\delta^{-1/2}} 2\delta x\, k_\delta(\mathbb{P}(X>x))dx+\delta^{-1/2}\int_{\delta^{-1/2}}^\infty k_\delta(1/x^2)dx
\label{eq:integral1z}
\\&\le \frac{1}{2\delta} k_\delta\left(\int_0^{\delta^{-1/2}} 2\delta x \mathbb{P}(X>x)\,dx\right)+\delta^{-1/2}\left(\frac{(1+2\delta)(1+\delta)}{1-\delta} \delta^{\frac{1-\delta}{2(1+\delta)}}-\delta^{1/2}\right)\label{eq:integral1a}
\\&\le\frac{1}{2\delta}k_\delta(\delta)+\frac{(1+2\delta)(1+\delta)}{1-\delta} \delta^{-\delta/(1+\delta)}-1\label{eq:integral1b}
\\&=\frac{1}{2}\left((1+2\delta)\delta^{-\delta/(1+\delta)}-1\right)+\frac{(1+2\delta)(1+\delta)}{1-\delta} \delta^{-\delta/(1+\delta)}-1\label{eq:integral1c}
\\&=O(-\delta\log\delta)\label{eq:integral1d}
\end{align}
where \eqref{eq:integral1z} holds by Chebyshev's inequality and the fact that $k_\delta$ is increasing; \eqref{eq:integral1a} holds since $k_\delta$ is concave and $\int_0^{\delta^{-1/2}} 2\delta x\,dx=1$; \eqref{eq:integral1b} holds since
\begin{equation}
\int_0^{\delta^{-1/2}} 2 x \mathbb{P}(X>x)\,dx\le \int_0^\infty 2x \mathbb{P}(X>x)\,dx= \mathbb{E} [X^2]=1
\end{equation}
and \eqref{eq:integral1d} holds since $\delta^{-\delta/(1+\delta)}=1-\delta\log\delta+O(\delta^2\log^2\delta)$. The third term in \eqref{eq:three_terms} may be bounded by an identical calculation. This completes the proof of \eqref{eq:XY_bound_goal}, which therefore proves the lemma.
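The expansion $\delta^{-\delta/(1+\delta)}=1-\delta\log\delta+O(\delta^2\log^2\delta)$ used in the last step can be spot-checked numerically (a sketch, not part of the proof; the function name is ours):

```python
import math

def expansion_error(d):
    # | d^{-d/(1+d)} - (1 - d log d) |, which should be O(d^2 log^2 d)
    return abs(d ** (-d / (1 + d)) - (1 - d * math.log(d)))

ratios = [expansion_error(d) / (d ** 2 * math.log(d) ** 2)
          for d in (1e-2, 1e-3, 1e-4)]
print(all(r < 10 for r in ratios))  # the ratio stays bounded as d -> 0
```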
\section{Proof of Lemma~\ref{lemma:DMCs}}\label{appendix:dmc}
Given that $\Delta(X;Y)\le\delta$,
\begin{align}
d_{TV}(P_{XY},P_XP_Y)
&=\sum_{x,y} |P_{XY}(x,y)-P_X(x)P_Y(y)|^+
\\&=\sum_x \sum_{y:P_{XY}(x,y)>P_X(x)P_Y(y)} (P_{XY}(x,y)-P_X(x)P_Y(y))
\\&\le \sum_x 2\delta\label{eq:dtv_independence_bd}
\\&=2\delta|\mathcal{X}|
\end{align}
where in \eqref{eq:dtv_independence_bd} we have applied \eqref{eq:gdep_abs_bd} from Thm.~\ref{thm:props} with the particularizations $\mathcal{A}=\{x\}$ and $\mathcal{B}=\{y:P_{XY}(x,y)>P_X(x)P_Y(y)\}$. Applying the same argument swapping $X$ and $Y$ gives
\begin{equation}
d_{TV}(P_{XY},P_XP_Y)\le 2\delta\min\{|\mathcal{X}|,|\mathcal{Y}|\}.
\end{equation}
Since $Z$ is the output of the channel with $X,Y$ as the inputs, while $\tilde{Z}$ is the output of the channel with $\tilde{X},\tilde{Y}$ as the inputs, this also means that $d_{TV}(P_{XYZ},P_{\tilde{X}\tilde{Y}\tilde{Z}})\le 2\delta\min\{|\mathcal{X}|,|\mathcal{Y}|\}$.
We may relate the conditional entropies as follows:
\begin{align}
H(Z|X,Y)
&=\sum_{x,y} P_{XY}(x,y) H(Z|X=x,Y=y)
\\&\ge \sum_{x,y} P_X(x)P_Y(y)H(Z|X=x,Y=y)-\sum_{x,y} |P_{XY}(x,y)-P_X(x)P_Y(y)|^+ H(Z|X=x,Y=y)
\\&\ge H(\tilde{Z}|\tilde{X},\tilde{Y})-2\delta\min\{|\mathcal{X}|,|\mathcal{Y}|\}\log|\mathcal{Z}|.\label{eq:conditional_bound}
\end{align}
To complete the proof of the lemma, we must bound $H(Z)$, $H(Z|X)$, and $H(Z|Y)$. The main difficulty is that the entropy is not Lipschitz continuous, so the fact that the total variational distance is $O(\delta)$ does not immediately imply that the entropies differ by $O(\delta)$. We circumvent this problem using the stronger consequence of $\Delta(X;Y)\le\delta$ in \eqref{eq:dep_pp_bd} from Thm.~\ref{thm:props}. We first bound $H(Z)$. Let $z\in\mathcal{Z}$ be such that $P_{\tilde{Z}}(z)\ge 1/4$. Then by the total variational bound,
\begin{equation}
P_Z(z)\ge P_{\tilde{Z}}(z)-2\delta \min\{|\mathcal{X}|,|\mathcal{Y}|\} \ge e^{-2}
\end{equation}
where the second inequality holds for sufficiently small $\delta$, and since $e^{-2}<1/4$. Consider the function $f(p)=-p\log p$. Since $f'(p)=-\log p-1$, if $p\ge e^{-2}$ then
\begin{equation}
|f'(p)|\le 1.
\end{equation}
Since we have established that $P_Z(z),P_{\tilde{Z}}(z)\ge e^{-2}$, and $|P_Z(z)-P_{\tilde{Z}}(z)|\le 2 \delta\min\{|\mathcal{X}|,|\mathcal{Y}|\}$, we have
\begin{equation}
-P_Z(z)\log P_Z(z)\le -P_{\tilde{Z}}(z)\log P_{\tilde{Z}}(z)+2\min\{|\mathcal{X}|,|\mathcal{Y}|\}\delta.
\end{equation}
Note there are at most $4$ values of $z$ where $P_{\tilde{Z}}(z)\ge 1/4$, so
\begin{equation}
\sum_{z:P_{\tilde{Z}}(z)\ge 1/4}\left[-P_Z(z)\log P_Z(z)+P_{\tilde{Z}}(z)\log P_{\tilde{Z}}(z)\right]\le 8\min\{|\mathcal{X}|,|\mathcal{Y}|\}\delta.
\end{equation}
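The Lipschitz property just used, namely that $|f'(p)|\le 1$ makes $f(p)=-p\log p$ $1$-Lipschitz on $[e^{-2},1]$, can be verified on a grid (a sanity check only):

```python
import math

f = lambda p: -p * math.log(p)
lo = math.exp(-2)
ps = [lo + i * (1 - lo) / 2000 for i in range(2001)]
# |f'(p)| <= 1 on [e^{-2}, 1], so f is 1-Lipschitz there
lipschitz = all(abs(f(ps[i + 1]) - f(ps[i])) <= ps[i + 1] - ps[i] + 1e-12
                for i in range(2000))
print(lipschitz)
```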
Now suppose $z\in\mathcal{Z}$ is such that $P_{\tilde{Z}}(z)<1/4$. Let $r_z=\sum_{x,y} W(z|x,y)$. Assume without loss of generality that all letters in $\mathcal{Z}$ are reachable (i.e. $W(z|x,y)>0$ for some $x,y$). Thus $r_z\ge W_{\min}$. We may now bound
\begin{align}
P_Z(z)&=\sum_{x,y} P_{XY}(x,y) W(z|x,y)
\\&\le \sum_{x,y} (1+2\delta)(P_X(x)P_Y(y))^{1/(1+\delta)} W(z|x,y)\label{eq:PtilZ0b}
\\&=(1+2\delta)r_z \sum_{x,y} \frac{W(z|x,y)}{r_z} (P_X(x)P_Y(y))^{1/(1+\delta)}
\\&\le (1+2\delta) r_z \left(\sum_{x,y} \frac{W(z|x,y)}{r_z} P_X(x)P_Y(y)\right)^{1/(1+\delta)}\label{eq:PtilZ0}
\\&=(1+2\delta) r_z^{-\delta/(1+\delta)} P_{\tilde{Z}}(z)^{1/(1+\delta)}
\\&\le (1+2\delta) W_{\min}^{-\delta/(1+\delta)} P_{\tilde{Z}}(z)^{1/(1+\delta)}
\\&\le (1+2\delta)(1-\delta\log W_{\min}+O(\delta^2)) P_{\tilde{Z}}(z)^{1/(1+\delta)}\label{eq:PtilZ0a}
\end{align}
where \eqref{eq:PtilZ0b} follows from \eqref{eq:dep_pp_bd}, and \eqref{eq:PtilZ0} holds by the definition of $r_z$ and by the concavity of the function $p^{1/(1+\delta)}$.
By the assumption that $P_{\tilde{Z}}(z)<1/4$, for sufficiently small $\delta$, \eqref{eq:PtilZ0a} is less than $e^{-1}$. Thus, we are in the increasing regime of the function $-p\log p$. In particular
\begin{align}
-P_Z(z)\log P_Z(z)&\le- \left[(1+2\delta) (1-\delta\log W_{\min}+O(\delta^2)) P_{\tilde{Z}}(z)^{1/(1+\delta)}\right]\nonumber
\\&\qquad\cdot \log\left[(1+2\delta) (1-\delta\log W_{\min}+O(\delta^2)) P_{\tilde{Z}}(z)^{1/(1+\delta)}\right]
\\&\le-\frac{1+2\delta}{1+\delta}(1-\delta\log W_{\min}+O(\delta^2)) P_{\tilde{Z}}(z)^{1/(1+\delta)}\log P_{\tilde{Z}}(z)\label{eq:PtilZ1}
\end{align}
where in \eqref{eq:PtilZ1} we have simply dropped terms greater than $1$ inside the log. Here we need a technical result. For any $p\in[0,1]$, let $g_p(\delta)=-p^{1/(1+\delta)}\log p$. We claim that for all $\delta\ge 0$,
\begin{equation}\label{eq:gq_claim}
g_p(\delta)\le -p\log p+4e^{-2}\delta.
\end{equation}
Since $g_p(0)=-p\log p$, it is enough to show that $g'_p(\delta)\le 4e^{-2}$ for all $\delta$. The first and second derivatives of $g_p$ are
\begin{align}
g'_p(\delta)&=\frac{p^{1/(1+\delta)}\log^2 p}{(1+\delta)^2},
\\g''_p(\delta)&=p^{1/(1+\delta)}\log^2 p\left(\frac{-2}{(1+\delta)^3}-\frac{\log p}{(1+\delta)^4}\right).
\end{align}
Note that $g''_p(\delta)\le 0$ iff
\begin{equation}
-2(1+\delta)-\log p\le 0.
\end{equation}
That is, $g'_p(\delta)$ is maximized at $\delta=\frac{-\log p}{2}-1$. Thus
\begin{equation}
g'_p(\delta)\le \frac{p^{\frac{2}{-\log p}}\log^2 p}{\left(\frac{-\log p}{2}\right)^2}
=4p^{\frac{2}{-\log p}}=4 \exp\left\{\log p \frac{2}{-\log p}\right\}=4e^{-2}.
\end{equation}
This proves the claim in \eqref{eq:gq_claim}. Applying this result to \eqref{eq:PtilZ1} gives
\begin{align}
-P_Z(z)\log P_Z(z)
&\le \frac{1+2\delta}{1+\delta}(1-\delta\log W_{\min}+O(\delta^2)) \left[-P_{\tilde{Z}}(z)\log P_{\tilde{Z}}(z)+ 4e^{-2}\delta\right]
\\&\le -P_{\tilde{Z}}(z)\log P_{\tilde{Z}}(z)+\left[(1-\log W_{\min})e^{-1}+4e^{-2}\right]\delta+O(\delta^2)\label{eq:PtilZ2}
\end{align}
where in \eqref{eq:PtilZ2} we have used the fact that $-p\log p\le e^{-1}$. Therefore
\begin{align}
H(Z)-H(\tilde{Z})
&\le 8\min\{|\mathcal{X}|,|\mathcal{Y}|\}\delta
+\sum_{z:P_{\tilde{Z}}(z)<1/4}
\left(\left[(1-\log W_{\min})e^{-1}+4e^{-2}\right]\delta+O(\delta^2)\right)
\\&\le
\left[8\min\{|\mathcal{X}|,|\mathcal{Y}|\}
+|\mathcal{Z}|\left((1-\log W_{\min})e^{-1}+4e^{-2}\right)\right]\delta+O(\delta^2).\label{eq:HZ_bound}
\end{align}
Combining \eqref{eq:HZ_bound} with the bound on conditional entropy in \eqref{eq:conditional_bound} proves \eqref{eq:MI_DMC_bound}.
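The technical claim \eqref{eq:gq_claim}, that $g_p(\delta)\le -p\log p+4e^{-2}\delta$, can be sanity-checked numerically over a grid of $(p,\delta)$ pairs (not part of the proof; helper names are ours):

```python
import math

def g(p, d):
    # g_p(delta) = -p^{1/(1+delta)} log p, from the claim (gq_claim)
    return -(p ** (1.0 / (1 + d))) * math.log(p)

BOUND = 4 * math.exp(-2)
ok = True
for i in range(1, 200):
    p = i / 200.0
    base = -p * math.log(p)  # g_p(0)
    for j in range(201):
        d = j / 100.0        # delta grid on [0, 2]
        ok &= g(p, d) <= base + BOUND * d + 1e-9
print(ok)
```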
To prove the bound on $I(X;Z|Y)$ in \eqref{eq:MIX_DMC_bound}, we need to bound $H(Z|Y)$, or equivalently $H(Y,Z)$, since $H(Y)=H(\tilde{Y})$. We may apply almost the same argument as above, but with the joint distribution $P_{YZ}$ in place of $P_Z$. In particular, if $P_{\tilde{Y}\tilde{Z}}(y,z)\ge 1/4$, then
\begin{equation}
-P_{YZ}(y,z)\log P_{YZ}(y,z)\le -P_{\tilde{Y}\tilde{Z}}(y,z)\log P_{\tilde{Y}\tilde{Z}}(y,z)+2\min\{|\mathcal{X}|,|\mathcal{Y}|\}\delta.
\end{equation}
To deal with $P_{\tilde{Y}\tilde{Z}}(y,z)< 1/4$, let $r_{z|y}=\sum_x W(z|x,y)$. If $r_{z|y}=0$, then $P_{YZ}(y,z)=P_{\tilde{Y}\tilde{Z}}(y,z)=0$, so this letter pair can be discarded. Otherwise, $r_{z|y}\ge W_{\min}$, so
\begin{align}
P_{YZ}(y,z)&=\sum_x P_{XY}(x,y)W(z|x,y)
\\&\le \sum_x (1+2\delta) (P_X(x)P_Y(y))^{1/(1+\delta)}W(z|x,y)
\\&\le (1+2\delta)r_{z|y}^{-\delta/(1+\delta)} P_{\tilde{Y}\tilde{Z}}(y,z)^{1/(1+\delta)}
\\&\le (1+2\delta)W_{\min}^{-\delta/(1+\delta)}P_{\tilde{Y}\tilde{Z}}(y,z)^{1/(1+\delta)}.
\end{align}
The remainder of the proof is essentially identical, and so we find
\begin{equation}
H(Z|Y)\le H(\tilde{Z}|\tilde{Y})+\left[8\min\{|\mathcal{X}|,|\mathcal{Y}|\}
+|\mathcal{Y}|\cdot|\mathcal{Z}|\left((1-\log W_{\min})e^{-1}+4e^{-2}\right)\right]\delta+O(\delta^2).
\end{equation}
Combining with the bound on the entropy conditioned on $X,Y$ in \eqref{eq:conditional_bound} proves \eqref{eq:MIX_DMC_bound}. The bound on $I(Y;Z|X)$ in \eqref{eq:MIY_DMC_bound} is proved by the same argument.
\section{Proof of Prop.~\ref{prop:bamac}}\label{appendix:bamac}
If $\delta\ge \frac{1-\log_2(1+2^{-\alpha})}{1+\log_2(1+2^{-\alpha})}$, then we may simply ignore the constraint on the wringing dependence, so
\begin{equation}
C_{1,\alpha}(\delta)\le \max_{P_{XY}} \bar\gamma\left[\alpha H(X+Y)+(1-\alpha)H(X|Y)\right]
=\bar\gamma\left[\log(1+2^{-\alpha})+\alpha\log 2\right].
\end{equation}
Now consider $\delta< \frac{1-\log_2(1+2^{-\alpha})}{1+\log_2(1+2^{-\alpha})}$. We define for convenience $r_{z}=\mathbb{P}(X+Y=z)$ for $z=0,1,2$. Note that
\begin{align}
\alpha H(X+Y)+(1-\alpha)H(X|Y)
&\le \alpha H(X+Y)+(1-\alpha)H(X\oplus Y)
\\&=\alpha H(r_0,r_1,r_2)+(1-\alpha)H_b(r_0+r_2)
\end{align}
where $\oplus$ is modulo 2 addition, and we have used the fact that $X\oplus Y=0$ iff $X+Y\in\{0,2\}$. Since $\Delta(X;Y)\le\delta$, using the properties of the wringing dependence in Thm.~\ref{thm:props}, there exist $Q_X,Q_Y\in\mathcal{P}(\{0,1\})$ such that
\begin{align}
r_0&=P_{XY}(0,0)
\le (Q_X(0)Q_Y(0))^{1/(1+\delta)}.
\end{align}
Similarly $r_2\le (Q_X(1)Q_Y(1))^{1/(1+\delta)}$. Thus
\begin{align}
\sqrt{r_{0}}+\sqrt{r_2}
&\le (Q_X(0)Q_Y(0))^{1/(2(1+\delta))}+(Q_X(1)Q_Y(1))^{1/(2(1+\delta))}\label{eq:r0r2_constraint0}
\\&\le 2^{1-1/(1+\delta)}\label{eq:r0r2_constraint}
\end{align}
where \eqref{eq:r0r2_constraint} holds because $(pq)^{\rho}$ is concave in $(p,q)$ for $0\le\rho\le 1$, and so the quantity in \eqref{eq:r0r2_constraint0} is maximized with $Q_X(0)=Q_Y(0)=1/2$. Squaring both sides of \eqref{eq:r0r2_constraint}, isolating $2\sqrt{r_0r_2}$, and squaring again, we may rewrite the constraint as
\begin{equation}
4r_0r_2\le (2^{2-2/(1+\delta)}-r_0-r_2)^2.\label{eq:quadratic_constraint}
\end{equation}
Thus
\begin{align}
&\alpha H(r_0,r_1,r_2)+(1-\alpha) H_b(r_0+r_2)
\\&\le \max_{\substack{r_0,r_2\in[0,1]:\\ r_0+r_2\le 1,\\4r_0r_2\le (2^{2-2/(1+\delta)}-r_0-r_2)^2}}
\big[-(1-r_0-r_2)\log(1-r_0-r_2)+\alpha(-r_0\log r_0-r_2\log r_2)-(1-\alpha)(r_0+r_2)\log(r_0+r_2)\big]
\\&\le \min_{\lambda\ge 0}\ \max_{\substack{r_0,r_2\in[0,1]:\\ r_0+r_2\le 1}}\
\big[-(1-r_0-r_2)\log(1-r_0-r_2)+\alpha(-r_0\log r_0-r_2\log r_2)-(1-\alpha)(r_0+r_2)\log(r_0+r_2)\nonumber
\\&\qquad+\lambda((2^{2-2/(1+\delta)}-r_0-r_2)^2-4r_0r_2)\big].\label{eq:r0r2_function}
\end{align}
Let $f(r_0,r_2;\lambda)$ be the function in \eqref{eq:r0r2_function}. We claim that for any $\lambda\le \alpha$, $f(r_0,r_2;\lambda)$ is concave in $(r_0,r_2)$. The Hessian with respect to $(r_0,r_2)$ is given by
\begin{equation}
\nabla^2 f(r_0,r_2;\lambda)=\left[\begin{array}{cc}
-\frac{r_0+r_2(1-r_0-r_2)\alpha}{r_0(1-r_0-r_2)(r_0+r_2)}+\lambda
& -\frac{1-(1-r_0-r_2)\alpha}{(1-r_0-r_2)(r_0+r_2)}-\lambda\\
-\frac{1-(1-r_0-r_2)\alpha}{(1-r_0-r_2)(r_0+r_2)}-\lambda
& -\frac{r_2+r_0(1-r_0-r_2)\alpha}{r_2(1-r_0-r_2)(r_0+r_2)}+\lambda
\end{array}\right].
\end{equation}
We need to show that $\nabla^2 f(r_0,r_2;\lambda)$ is negative semi-definite; this requires that the upper left element is non-positive, and the determinant is non-negative. The upper left element is given by
\begin{align}
-\frac{r_0+r_2(1-r_0-r_2)\alpha}{r_0(1-r_0-r_2)(r_0+r_2)}+\lambda
&\le-\frac{1}{(1-r_0-r_2)(r_0+r_2)}+\lambda\label{eq:upper_left1}
\\&\le -4+\lambda\label{eq:upper_left2}
\\&\le -3\label{eq:upper_left3}
\end{align}
where \eqref{eq:upper_left1} holds because $\alpha\ge 0$, \eqref{eq:upper_left2} holds because $x(1-x)\le 1/4$, and \eqref{eq:upper_left3} holds by the assumption that $\lambda\le\alpha\le 1$. The determinant of the Hessian is given by
\begin{align}
|\nabla^2 f(r_0,r_2;\lambda)|&=\frac{(r_0+r_2)\alpha-(4r_0r_2+(1-r_0-r_2)(r_0-r_2)^2\alpha)\lambda}{r_0r_2(1-r_0-r_2)(r_0+r_2)}
\\&\ge \frac{\alpha\left[r_0+r_2-4r_0r_2-(1-r_0-r_2)(r_0-r_2)^2\alpha\right]}{r_0r_2(1-r_0-r_2)(r_0+r_2)}\label{eq:hessian0}
\\&\ge \frac{\alpha\left[r_0+r_2-4r_0r_2-(1-r_0-r_2)(r_0-r_2)^2\right]}{r_0r_2(1-r_0-r_2)(r_0+r_2)}\label{eq:hessian1}
\\&=\frac{\alpha\left[1-r_0(1-r_0)-r_2(1-r_2)-2r_0r_2\right]}{r_0r_2(1-r_0-r_2)}\label{eq:hessian5}
\\&\ge 0\label{eq:hessian6}
\end{align}
where \eqref{eq:hessian0} holds by the assumption that $\lambda\le\alpha$, \eqref{eq:hessian1} holds since $\alpha\le 1$, and \eqref{eq:hessian6} holds again since $x(1-x)\le 1/4$ and since $r_0+r_2\le 1$.
We may upper bound \eqref{eq:r0r2_function} by choosing any $\lambda\ge 0$. With some hindsight, we choose
\begin{equation}
\lambda=2^{-3+2/(1+\delta)} \left[\log\left(2^{-1+2/(1+\delta)}-1\right)+\alpha\log 2\right].
\end{equation}
Note that $\lambda\ge 0$ if
\begin{equation}
1\le 2^{\alpha}\left(2^{-1+2/(1+\delta)}-1\right).
\end{equation}
This indeed holds by the assumption that $\delta< \frac{1-\log_2(1+2^{-\alpha})}{1+\log_2(1+2^{-\alpha})}$. In addition, noting that $\lambda$ is decreasing in $\delta$,
\begin{equation}
\lambda\le 2^{-1} \left[\log (2^{1}-1)+\alpha\log 2\right]=\frac{\alpha\log 2}{2}<\alpha.
\end{equation}
Thus, by the above claim, for this value of $\lambda$, $f(r_0,r_2;\lambda)$ is concave. Since the function is also symmetric between $r_0$ and $r_2$, it is maximized at $r_0=r_2=r$. Differentiating this function, the maximizing value of $r$ is found at
\begin{equation}
0=\frac{d}{dr} f(r,r;\lambda)=
2\log(1-2r)-2\log r-(1-\alpha)2\log 2-4\cdot 2^{2-2/(1+\delta)}\lambda.
\end{equation}
This is solved at $r=2^{-2/(1+\delta)}$. At this value, the constraint in \eqref{eq:quadratic_constraint} holds with equality. Thus the upper bound from \eqref{eq:r0r2_function} becomes
\begin{align}
\alpha H(r_0,r_1,r_2)+(1-\alpha) H_b(r_0+r_2)
&\le H_b(2^{1-2/(1+\delta)})+\alpha 2^{1-2/(1+\delta)}\log 2.
\end{align}
This gives an upper bound on $C_{1,\alpha}(\delta)$ that exactly matches the lower bound in \eqref{eq:BAMAC_lb}.
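The claimed maximizer can be spot-checked numerically: at $r_0=r_2=2^{-2/(1+\delta)}$ the constraint \eqref{eq:r0r2_constraint} is met with equality, and the objective evaluates to the stated bound. This is a sanity check only, with our own helper names:

```python
import math

def H_b(p):
    # binary entropy in nats
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

ok = True
for d in (0.0, 0.05, 0.2):
    r = 2 ** (-2.0 / (1 + d))  # claimed maximizer r_0 = r_2 = r
    # sqrt(r_0) + sqrt(r_2) = 2^{1 - 1/(1+d)}: constraint (r0r2_constraint) is tight
    ok &= abs(2 * math.sqrt(r) - 2 ** (1 - 1.0 / (1 + d))) < 1e-12
    for alpha in (0.0, 0.5, 1.0):
        # alpha H(r,1-2r,r) + (1-alpha) H_b(2r), using H(r,1-2r,r) = H_b(2r) + 2r log 2
        obj = H_b(2 * r) + alpha * 2 * r * math.log(2)
        stated = (H_b(2 ** (1 - 2.0 / (1 + d)))
                  + alpha * 2 ** (1 - 2.0 / (1 + d)) * math.log(2))
        ok &= abs(obj - stated) < 1e-12
print(ok)
```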
\section{Proof of Thm.~\ref{thm:gaussian}}\label{appendix:gaussian}
\subsection{Bounding \texorpdfstring{$C'_{\alpha_1,\alpha_2}(0)$}{Calpha1alpha2'(0)}}
Let $(\alpha_1,\alpha_2)=(1,\alpha)$ for $\alpha\in[0,1]$. Recall that
\begin{equation}\label{eq:gaussian_Cdelta}
C_{1,\alpha}(\delta)=\sup_{\substack{X,Y,U:\Delta(X;Y|U=u)\le\delta\ \forall u,\\ \mathbb{E} [X^2]\le S_1,\\ \mathbb{E} [Y^2]\le S_2}} \big[\alpha I(X,Y;Z|U)+(1-\alpha) I(X;Z|Y,U)\big].
\end{equation}
Note that
\begin{equation}
C_{1,\alpha}(0)=\alpha \frac{1}{2}\log (1+S_1+S_2)+(1-\alpha)\frac{1}{2}\log (1+S_1).
\end{equation}
Since $C_{1,\alpha}(\delta)$ is convex in $\alpha$,
\begin{equation}
C_{1,\alpha}(\delta)\le \alpha C_{1,1}(\delta)+(1-\alpha)C_{1,0}(\delta).
\end{equation}
We may easily bound the second term:
\begin{align}
C_{1,0}(\delta)&=\sup_{\substack{X,Y,U:\Delta(X;Y|U=u)\le\delta\ \forall u,\\ \mathbb{E} [X^2]\le S_1,\\ \mathbb{E} [Y^2]\le S_2}} I(X;Z|Y,U)
\\&\le \sup_{X,Y:\mathbb{E} [X^2]\le S_1,\mathbb{E} [Y^2]\le S_2} h(X+N)-h(N)
\\&\le \frac{1}{2}\log(1+S_1)
\\&=C_{1,0}(0)
\end{align}
where $h(\cdot)$ denotes the differential entropy. This implies that $C'_{1,0}(0)=0$. Thus, to uniformly bound $C'_{1,\alpha}(\delta)$ for all $\alpha$, it is enough to prove that $C'_{1,1}(0)<\infty$. Let $X,Y,U$ be any variables satisfying the constraints in the supremum in \eqref{eq:gaussian_Cdelta}. Note that
\begin{align}
I(X,Y;Z|U)
&\le h(Z|U)-h(N)
\\&= h(Z|U)-\frac{1}{2}\log 2\pi e.
\end{align}
Now it is enough to show $h(Z|U)\le \frac{1}{2}\log 2\pi e (1+S_1+S_2)+O(\delta)$. For each $u$, let $S_{1u}=\mathbb{E} [X^2|U=u],S_{2u}=\mathbb{E}[Y^2|U=u]$. Thus $\sum_u P_U(u) S_{1u}\le S_1$, $\sum_u P_U(u) S_{2u}\le S_2$. Our goal is to show that, for each $u$
\begin{equation}\label{eq:gaussian_goal}
h(Z|U=u)\le \frac{1}{2}\log 2\pi e(1+S_{1u}+S_{2u})+O(\delta)
\end{equation}
which implies
\begin{equation}
h(Z|U)=\sum_u P_U(u)h(Z|U=u)\le \frac{1}{2}\log 2\pi e(1+S_1+S_2)+O(\delta)
\end{equation}
where we have used the concavity of the log. For convenience, for the remainder of the proof we drop the conditioning on $u$. Throughout this proof, we are careful to use $O(\cdot)$ notation only when the implied constant is universal, and in particular does not depend on $S_1,S_2$.
We may assume without loss of generality that $X$ and $Y$ have zero mean, since if they do not, shifting their means to zero does not change $h(Z)$, and only reduces $\mathbb{E} [X^2],\mathbb{E} [Y^2]$. For convenience define $S=1+S_1+S_2$. Since our goal is to prove \eqref{eq:gaussian_goal}, we may assume
\begin{equation}
h(Z)\ge \frac{1}{2}\log(2\pi eS)\label{eq:hz_simple_bd}
\end{equation}
because otherwise we have nothing to prove. Let $\sigma_Z^2=\mathbb{E} [Z^2]$. Since $\Delta(X;Y)\le\delta$, from Lemma~\ref{lemma:max_corr}, $\rho_m(X;Y)\le O(\delta\log\delta^{-1})$. This implies that $\mathbb{E} [XY]\le \sqrt{S_1S_2}\, O(\delta\log \delta^{-1})$. Thus,
\begin{align}
\sigma_Z^2&=\mathbb{E} [(X + Y+N)^2]
\\&=S+2\,\mathbb{E} [XY]\label{eq:sigmaZ_bd1}
\\&\le S+2\sqrt{S_1S_2}\,O(\delta\log\delta^{-1})
\\&\le S+S\,O(\delta\log\delta^{-1})\label{eq:sigmaZ_bd}
\end{align}
where in \eqref{eq:sigmaZ_bd1} we have used the fact that $N$ is independent from $(X,Y)$, and \eqref{eq:sigmaZ_bd} follows because $2\sqrt{S_1S_2}\le S_1+S_2\le S$. Let $\tilde{Z}\sim\mathcal{N}(0,S)$, so
\begin{align}
h(Z)&=\frac{1}{2}\log 2\pi S+\frac{\sigma_Z^2}{2S} -D(P_Z\|P_{\tilde{Z}})
\\&\le \frac{1}{2}\log 2\pi S+\frac{1}{2}+O(\delta\log\delta^{-1})-2 d_{TV}(P_Z,P_{\tilde{Z}})^2\label{eq:pinsker1}
\\&=\frac{1}{2}\log 2\pi e S+O(\delta\log\delta^{-1})-2 d_{TV}(P_Z,P_{\tilde{Z}})^2
\end{align}
where \eqref{eq:pinsker1} follows from the bound on $\sigma_Z^2$ in \eqref{eq:sigmaZ_bd} and from Pinsker's inequality. Applying the lower bound on $h(Z)$ from \eqref{eq:hz_simple_bd} gives
\begin{equation}\label{eq:tv_bound}
d_{TV}(P_Z,P_{\tilde{Z}})\le O(\sqrt{\delta \log\delta^{-1}}).
\end{equation}
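The entropy identity used in the first line above holds for any $P_Z$ with second moment $\sigma_Z^2$; it can be spot-checked in the Gaussian special case $P_Z=\mathcal{N}(0,\sigma^2)$, where both sides have closed forms (a sanity check only; helper names are ours):

```python
import math

def h_gauss(var):
    # differential entropy (nats) of N(0, var)
    return 0.5 * math.log(2 * math.pi * math.e * var)

def kl_gauss(var, S):
    # D( N(0, var) || N(0, S) )
    return 0.5 * (var / S - 1 - math.log(var / S))

ok = True
for S in (1.0, 4.0, 10.0):
    for var in (0.5, 1.0, 3.0, 9.0):
        lhs = h_gauss(var)
        rhs = 0.5 * math.log(2 * math.pi * S) + var / (2 * S) - kl_gauss(var, S)
        ok &= abs(lhs - rhs) < 1e-12
print(ok)
```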
For any function $f:\mathbb{R}\to[0,f_{\max}]$,
\begin{align}
\left|\mathbb{E} [f(Z)]-\mathbb{E} [f(\tilde{Z})]\right|
&=\left|\int_0^{f_{\max}} [\mathbb{P}(f(Z)>a)-\mathbb{P}(f(\tilde{Z})>a)]da\right|
\\&\le \int_0^{f_{\max}} \left|\mathbb{P}(f(Z)>a)-\mathbb{P}(f(\tilde{Z})>a)\right|da
\\&\le f_{\max} d_{TV}(P_Z\|P_{\tilde{Z}})\label{eq:f_tv_bound3}
\\&\le f_{\max}\, O(\sqrt{\delta \log\delta^{-1}}).\label{eq:f_tv_bound}
\end{align}
where \eqref{eq:f_tv_bound3} follows from the fact that for any $\mathcal{A}\subset\mathbb{R}$, $|P_Z(\mathcal{A})-P_{\tilde{Z}}(\mathcal{A})|\le d_{TV}(P_Z,P_{\tilde{Z}})$.
The following definitions will be key to the remainder of the proof:
\begin{align}
\tau_X&=\frac{S_1}{\sqrt{S}}-\frac{\sqrt{S}}{8}\log\delta,\\
\tau_Y&=\frac{S_2}{\sqrt{S}}-\frac{\sqrt{S}}{8}\log\delta,\\
\tau_N&=\frac{1}{\sqrt{S}},\\
\tau_Z&=\tau_X+\tau_Y+\tau_N=
\sqrt{S}\left(1-\frac{1}{4}\log\delta\right),\\
m_X&=\mathbb{E} \left[e^{X/\sqrt{S}}1(X<\tau_X)\right],\\
m_Y&=\mathbb{E} \left[e^{Y/\sqrt{S}}1(Y<\tau_Y)\right].
\end{align}
Similarly to the proof of Lemma~\ref{lemma:max_corr}, the core of the proof involves upper and lower bounding
\begin{equation}\label{eq:EXY_difference}
\mathbb{E} [XY1(X>0,Y>0)]-\mathbb{E} [X1(X>0)]\, \mathbb{E} [Y1(Y>0)].
\end{equation}
Since $\Delta(X;Y)\le\delta$, the same argument as in \eqref{eq:integral_expansion}--\eqref{eq:EXY_upper_bd} shows that the quantity \eqref{eq:EXY_difference} is upper bounded by
\begin{equation}
\sqrt{S_1S_2}\,O(\delta)\le S\,O(\delta).
\end{equation}
To lower bound \eqref{eq:EXY_difference}, we cannot use precisely the same argument as in Lemma~\ref{lemma:max_corr}, since we need a bound that eliminates the $\log\delta^{-1}$ term. We first divide \eqref{eq:EXY_difference} into four terms:
\begin{align}
&\mathbb{E} [XY1(X>0,Y>0)]-\mathbb{E} [X1(X>0)]\, \mathbb{E} [Y1(Y>0)]\nonumber
\\&=\big(\mathbb{E} [XY1(0<X<\tau_X,0<Y<\tau_Y)]-\mathbb{E} [X1(0<X<\tau_X)]\, \mathbb{E} [Y1(0<Y<\tau_Y)]\big)\nonumber
\\&\qquad+\big(\mathbb{E} [XY1(X\ge\tau_X,0<Y<\tau_Y)]-\mathbb{E} [X1(X\ge \tau_X)]\, \mathbb{E} [Y1(0<Y<\tau_Y)]\big)\nonumber
\\&\qquad+\big(\mathbb{E} [XY1(0<X<\tau_X,Y\ge\tau_Y)]-\mathbb{E} [X1(0<X<\tau_X)]\, \mathbb{E} [Y1(Y\ge\tau_Y)]\big)\nonumber
\\&\qquad+\big(\mathbb{E} [XY1(X\ge \tau_X,Y\ge\tau_Y)]-\mathbb{E} [X1(X\ge\tau_X)]\, \mathbb{E} [Y1(Y\ge\tau_Y)]\big).\label{eq:four_terms}
\end{align}
In order to bound the first term in the RHS of \eqref{eq:four_terms}, we tighten the proof technique of Lemma~\ref{lemma:max_corr} by bounding $m_X,m_Y$. Since $m_X,m_Y$ are essentially values of the moment generating functions for $X$ and $Y$, bounding $m_X,m_Y$ allows us to apply Chernoff bounds to probabilities involving $X$ and $Y$. We exploit the fact that Chernoff bounds are stronger than the Chebyshev bounds used in the proof of Lemma~\ref{lemma:max_corr} to prove a tighter bound in this context. We first relate $m_X,m_Y$ to a moment generating function for $Z$, by writing
\begin{align}
&\mathbb{E} \left[e^{Z/\sqrt{S}}1(Z<\tau_Z)\right]\label{eq:Z_moment0}
\\&=\mathbb{E} \left[e^{(X+ Y+N)/\sqrt{S}}1(X+Y+N< \tau_X+ \tau_Y+\tau_N)\right]\label{eq:Z_moment1}
\\&\ge \mathbb{E} \left[e^{(X+Y+N)/\sqrt{S}}1(X<\tau_X,Y<\tau_Y,N<\tau_N)\right]\label{eq:Z_moment2}
\\&=\mathbb{E} \left[e^{(X+Y)/\sqrt{S}}1(X<\tau_X,Y<\tau_Y)\right] \frac{1}{2}e^{1/(2S)}\label{eq:Z_moment3}
\\&\ge \frac{1}{2} \bigg(\mathbb{E} \left[e^{X/\sqrt{S}}1(X<\tau_X)\right]\, \mathbb{E} \left[e^{Y/\sqrt{S}}1(Y<\tau_Y)\right]\nonumber
\\&\qquad-O(\delta\log\delta^{-1})\sqrt{
\var \left(e^{X/\sqrt{S}}1(X<\tau_X)\right)\,\var \left(e^{Y/\sqrt{S}}1(Y<\tau_Y)\right)
}\bigg)\label{eq:Z_moment4}
\\&\ge \frac{1}{2}\bigg(\mathbb{E} \left[e^{X/\sqrt{S}}1(X<\tau_X)\right]\, \mathbb{E} \left[e^{Y/\sqrt{S}}1(Y<\tau_Y)\right]\nonumber
\\&\qquad-O(\delta\log\delta^{-1})\sqrt{
\mathbb{E} \left[e^{2X/\sqrt{S}}1(X<\tau_X)\right]\,\mathbb{E} \left[e^{2Y/\sqrt{S}}1(Y<\tau_Y)\right]
}\bigg)
\\&\ge \frac{1}{2} \left[m_Xm_Y-O(\delta\log \delta^{-1})\exp\left\{\frac{\tau_X+\tau_Y}{\sqrt{S}}\right\}\right]\label{eq:Z_moment5}
\\&=\frac{1}{2} \left[m_Xm_Y-O(\delta\log \delta^{-1})\exp\left\{\frac{S_1+S_2}{S}-\frac{1}{4}\log\delta\right\}\right]
\\&\ge \frac{1}{2} \left[m_Xm_Y-O(\delta^{3/4}\log \delta^{-1})\right]\label{eq:Z_moment7}
\end{align}
where \eqref{eq:Z_moment2} holds because the random quantity in \eqref{eq:Z_moment1} is non-negative and since $X<\tau_X,Y<\tau_Y,N<\tau_N$ implies $Z<\tau_Z$, \eqref{eq:Z_moment3} holds since $N$ is a standard Gaussian independent of $(X,Y)$, \eqref{eq:Z_moment4} holds by the bound on $\rho_m(X;Y)$ from Lemma~\ref{lemma:max_corr}, \eqref{eq:Z_moment5} holds from the simple upper bound on $\mathbb{E} \left[e^{2X/\sqrt{S}}1(X<\tau_X)\right]$ found by plugging in $X=\tau_X$, and \eqref{eq:Z_moment7} holds since $S_1+S_2\le S$. We now apply the total variational bound in \eqref{eq:f_tv_bound} to upper bound the quantity in \eqref{eq:Z_moment0}. Specifically, since $e^{z/\sqrt{S}}1(z<\tau_Z)\le e^{\tau_Z/\sqrt{S}}$,
\begin{align}
\mathbb{E} \left[e^{Z/\sqrt{S}}1(Z<\tau_Z)\right]
&\le \mathbb{E} \left[e^{\tilde{Z}/\sqrt{S}}1(\tilde{Z}<\tau_Z)\right]+e^{\tau_Z/\sqrt{S}}O(\sqrt{\delta\log\delta^{-1}})
\\&\le e^{1/2}+e\, \delta^{-1/4}O(\sqrt{\delta\log\delta^{-1}})\label{eq:Z_moment8a}
\\&=e^{1/2}+O(\delta^{1/4}\sqrt{\log\delta^{-1}})\label{eq:Z_moment8}
\end{align}
where in \eqref{eq:Z_moment8a} we have used the fact that $\tilde{Z}\sim\mathcal{N}(0,S)$. Combining the bounds in \eqref{eq:Z_moment7} and \eqref{eq:Z_moment8} yields
\begin{equation}\label{eq:mxmy_bound}
m_Xm_Y\le 2e^{1/2}+O(\delta^{1/4}\sqrt{\log \delta^{-1}}).
\end{equation}
Since $2e^{1/2}<4$, and recalling that the implied constant in the $O(\cdot)$ term in \eqref{eq:mxmy_bound} is universal, we may assume that $\delta$ is sufficiently small that $m_Xm_Y\le 4$.
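The Gaussian computation behind \eqref{eq:Z_moment3}, namely $\mathbb{E}[e^{tN}1(N<c)]=e^{t^2/2}\Phi(c-t)$ for $N\sim\mathcal{N}(0,1)$, which yields $\frac{1}{2}e^{1/(2S)}$ at $t=c=1/\sqrt{S}$, can be spot-checked numerically (not part of the proof; helper names are ours):

```python
import math

def Phi(x):
    # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def truncated_mgf(t, c, n=200000, lo=-12.0):
    # midpoint-rule estimate of E[ e^{tN} 1(N < c) ] for N ~ N(0,1)
    dx = (c - lo) / n
    s = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        s += math.exp(t * x - x * x / 2) / math.sqrt(2 * math.pi) * dx
    return s

ok = True
for S in (1.0, 5.0):
    t = c = 1 / math.sqrt(S)
    closed = math.exp(t * t / 2) * Phi(c - t)  # = 0.5 * e^{1/(2S)} since c = t
    ok &= abs(closed - 0.5 * math.exp(1 / (2 * S))) < 1e-12
    ok &= abs(truncated_mgf(t, c) - closed) < 1e-6
print(ok)
```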
We now lower bound the first term in \eqref{eq:four_terms}, or equivalently upper bound the negative of this term. As in the proof of Lemma~\ref{lemma:max_corr}, we will use the function $k_\delta$, defined in \eqref{eq:k_delta_def}. By an identical argument as in \eqref{eq:PXY_lower_bd0}--\eqref{eq:PXY_lower_bd},
\begin{multline}
\mathbb{P}(x<X<\tau_X)\mathbb{P}(y<Y<\tau_Y)-\mathbb{P}(x<X<\tau_X,y<Y<\tau_Y)
\le k_\delta\left(\min\{\mathbb{P}(x<X<\tau_X),\,\mathbb{P}(y<Y<\tau_Y)\}\right).
\end{multline}
Thus
\begin{align}
&\mathbb{E}[ X1(0<X<\tau_X)]\,\mathbb{E} [Y1(0<Y<\tau_Y)]-\mathbb{E} [XY1(0<X<\tau_X,0<Y<\tau_Y)]\label{eq:first_of_four}
\\&=\int_0^{\tau_X}dx \int_0^{\tau_Y} dy \left[\mathbb{P}(x<X<\tau_X)\mathbb{P}(y<Y<\tau_Y)-\mathbb{P}(x<X<\tau_X,y<Y<\tau_Y)\right]
\\&\le \int_0^{\tau_X}dx \int_0^{\tau_Y} dy\, k_\delta\left(\min\{\mathbb{P}(x<X<\tau_X),\, \mathbb{P}(y<Y<\tau_Y)\}\right).
\end{align}
For any $x\le \tau_X$, a Chernoff-type bound gives
\begin{equation}
\mathbb{P}(x<X<\tau_X)\le e^{-x/\sqrt{S}} \mathbb{E} \left[e^{X/\sqrt{S}}1(X<\tau_X)\right]=e^{-x/\sqrt{S}}m_X
\end{equation}
and similarly $\mathbb{P}(y<Y<\tau_Y)\le e^{-y/\sqrt{S}}m_Y$, so the difference in \eqref{eq:first_of_four} is at most
\begin{align}
&\int_0^{\tau_X}dx \int_0^{\tau_Y} dy\, k_\delta\left(\min\{e^{-x/\sqrt{S}}m_X,\, e^{-y/\sqrt{S}}m_Y\}\right)
\\&\le \int_0^\infty dx \int_0^\infty dy\, k_\delta\left(e^{-(x+y)/(2\sqrt{S})}\sqrt{m_Xm_Y}\right)\label{eq:chernoff2}
\\&\le \int_0^\infty dx \int_0^\infty dy\, k_\delta\left(2e^{-(x+y)/(2\sqrt{S})}\right)\label{eq:chernoff3}
\\&= 4S \int_0^\infty z\,k_\delta\left(2 e^{-z}\right)dz\label{eq:chernoff4}
\\&= 4S\left[\int_0^{\log 2} 2\delta z dz+\int_{\log 2}^\infty z\left((1+2\delta)(2e^{-z})^{1/(1+\delta)}-2e^{-z}\right)dz\right]\label{eq:chernoff5}
\\&=4S\left[(\log^2 2)\delta+(1+2\delta)(1+\delta)(1+\delta+\log 2)-(1+\log 2)\right]
\\&=S\, O(\delta)
\end{align}
where \eqref{eq:chernoff2} follows since the integrand is non-negative, so the upper limits of the integral may be extended to $\infty$, and because $\min\{a,b\}\le \sqrt{ab}$ and $k_\delta$ is non-decreasing; \eqref{eq:chernoff3} holds by the above conclusion that $m_Xm_Y\le 4$; \eqref{eq:chernoff4} holds by the change of variables $z=\frac{x+y}{2\sqrt{S}}$; and \eqref{eq:chernoff5} follows from the definition of $k_\delta$. This proves that the first term in \eqref{eq:four_terms} is lower bounded by $-S\,O(\delta)$.
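The closed-form evaluation in \eqref{eq:chernoff5} can also be checked numerically. The sketch below (a side check, not part of the proof) uses only the piecewise values of $z\,k_\delta(2e^{-z})$ that appear in the display, since the general definition of $k_\delta$ from \eqref{eq:k_delta_def} is not reproduced here:

```python
import math

LOG2 = math.log(2)

def integrand(z, d):
    # z * k_delta(2 e^{-z}), using the piecewise values from the display:
    # 2*d*z on [0, log 2], and z*((1+2d)(2e^{-z})^{1/(1+d)} - 2e^{-z}) beyond.
    if z <= LOG2:
        return 2 * d * z
    return z * ((1 + 2 * d) * (2 * math.exp(-z)) ** (1 / (1 + d))
                - 2 * math.exp(-z))

def closed_form(d):
    # (log^2 2) d + (1+2d)(1+d)(1+d+log 2) - (1+log 2)
    return LOG2 ** 2 * d + (1 + 2 * d) * (1 + d) * (1 + d + LOG2) - (1 + LOG2)

def numeric(d, upper=60.0, n=300_000):
    # composite trapezoid rule on [0, upper]; the truncated tail is negligible
    h = upper / n
    s = 0.5 * (integrand(0.0, d) + integrand(upper, d))
    for i in range(1, n):
        s += integrand(i * h, d)
    return s * h
```

The bracket vanishes at $\delta=0$ and grows linearly in $\delta$, consistent with the stated $S\,O(\delta)$ bound.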
We now consider the second term in \eqref{eq:four_terms}. Applying again the bound on $\rho_m(X;Y)$ from Lemma~\ref{lemma:max_corr} gives
\begin{align}
&\mathbb{E} [XY1(X\ge \tau_X,0<Y< \tau_Y)]-\mathbb{E} [X1(X\ge \tau_X)]\,\mathbb{E} [Y1(0<Y<\tau_Y)]
\\&\ge -O(\delta\log\delta^{-1})\sqrt{\mathbb{E} [X^21(X\ge \tau_X)]\,\mathbb{E} [Y^21(0<Y< \tau_Y)]}
\\&\ge -O(\delta\log\delta^{-1})\sqrt{\mathbb{E} [X^21(X\ge \tau_X)]\,S}\label{eq:ax_goal}
\end{align}
where the second inequality holds since $\mathbb{E} [Y^21(0<Y< \tau_Y)]\le \mathbb{E} [Y^2]\le S_2\le S$. We now need to upper bound $\mathbb{E} [X^21(X\ge \tau_X)]$. Define
\begin{align}
p_X&=\mathbb{P}(X\ge \tau_X),\\
a_X&=\mathbb{E} [X^21(X\ge \tau_X)].
\end{align}
Intuitively, if $X\ge \tau_X$, then we expect $Z$ also to be large, and so we expect $p_X$ to be small. This intuition can be formalized by writing
\begin{align}
\mathbb{P}(Z\ge \tau_X-2\sqrt{S_2})
&=\mathbb{P}(X+Y+N\ge \tau_X-2\sqrt{S_2})
\\&\ge \mathbb{P}(X\ge \tau_X,Y\ge -2\sqrt{S_2},N\ge 0)
\\&=\frac{1}{2} \mathbb{P}(X\ge \tau_X,Y\ge -2\sqrt{S_2})\label{eq:pX1}
\\&\ge \frac{1}{2} \mathbb{P}(X\ge \tau_X)\mathbb{P}(Y\ge-2\sqrt{S_2})-\delta\label{eq:pX2}
\\&\ge \frac{3}{8} p_X-\delta\label{eq:pX3}
\end{align}
where \eqref{eq:pX1} holds because $N$ is Gaussian and independent of $X,Y$, \eqref{eq:pX2} holds by the consequence of $\Delta(X;Y)\le\delta$ in \eqref{eq:gdep_abs_bd}, and \eqref{eq:pX3} holds by Chebyshev's inequality on $Y$. Thus
\begin{align}
p_X&\le \frac{8}{3}\mathbb{P}(Z\ge \tau_X-2\sqrt{S_2})+O(\delta)
\\&\le \frac{8}{3}\mathbb{P}(\tilde{Z}\ge \tau_X-2\sqrt{S_2})+O(\sqrt{\delta\log\delta^{-1}})\label{eq:px_bd2}
\\&=\frac{8}{3}\mathbb{P}\left(\tilde{Z}\ge \frac{S_1}{\sqrt{S}}-\frac{\sqrt{S}}{8}\log\delta-2\sqrt{S_2} \right)+O(\sqrt{\delta\log\delta^{-1}})
\\&\le \frac{8}{3} \mathbb{P}\left(\tilde{Z}\ge \sqrt{S}\left(-\frac{1}{8}\log \delta-2\right)\right)+O(\sqrt{\delta\log\delta^{-1}})\label{eq:px_bd4}
\\&\le \frac{8}{3} \exp\left\{-\frac{1}{2}\left(-\frac{1}{8}\log \delta-2\right)^2\right\}+O(\sqrt{\delta\log\delta^{-1}})\label{eq:px_bd5}
\\&=O(\sqrt{\delta\log\delta^{-1}})\label{eq:px_bd6}
\end{align}
where \eqref{eq:px_bd2} holds by the bound on total variation distance in \eqref{eq:tv_bound}, \eqref{eq:px_bd4} holds since $S_2\le S$, \eqref{eq:px_bd5} holds since $\tilde{Z}\sim\mathcal{N}(0,S)$ and by the Chernoff bound on the Gaussian CDF, and \eqref{eq:px_bd6} holds since $\exp\{-O(\log^2\delta)\}$ vanishes faster than $O(\sqrt{\delta\log\delta^{-1}})$. In order to bound $a_X$, we bound the mean square of $Z$ conditioned on either $X<\tau_X$ or $X\ge\tau_X$. In particular,
\begin{align}
\mathbb{E} [Z^2 1(X<\tau_X)]&=\mathbb{E}[(X+Y+N)^21(X<\tau_X)]
\\&=1+\mathbb{E} [X^21(X<\tau_X)]+\mathbb{E} [Y^2 1(X<\tau_X)]+2\,\mathbb{E} [XY1(X<\tau_X)]
\\&\le 1+S_1-a_X+S_2+O(\delta\log\delta^{-1})\,\sqrt{\mathbb{E} [X^2 1(X<\tau_X)]\,\mathbb{E} [Y^2]}\label{eq:z2_bd1}
\\&\le S-a_X+S\, O(\delta\log\delta^{-1})\label{eq:z2_bd2}
\end{align}
where \eqref{eq:z2_bd1} again uses the maximal correlation bound from Lemma~\ref{lemma:max_corr}, and \eqref{eq:z2_bd2} follows from the mean squared bounds on $X$ and $Y$. Thus
\begin{equation}\label{eq:z2_conditional1}
\mathbb{E}[Z^2|X<\tau_X]\le \frac{S-a_X+S\,O(\delta\log\delta^{-1})}{1-p_X}.
\end{equation}
Moreover
\begin{equation}\label{eq:z2_conditional2}
\mathbb{E} [Z^2|X\ge \tau_X]\le \frac{\sigma_Z^2}{p_X}\le \frac{S+S\,O(\delta\log\delta^{-1})}{p_X}.
\end{equation}
We now apply these two bounds to upper bound the differential entropy of $Z$. In particular, if we let $F=1(X\ge \tau_X)$, then
\begin{align}
h(Z)&\le H(F)+h(Z|F)
\\&=H_b(p_X)+(1-p_X)h(Z|X<\tau_X)+p_X h(Z|X\ge \tau_X)
\\&\le H_b(p_X)+(1-p_X)\frac{1}{2}\log 2\pi e\frac{S-a_X+S\,O(\delta\log\delta^{-1})}{1-p_X}
+p_X \frac{1}{2}\log 2\pi e \frac{S+S\,O(\delta\log\delta^{-1})}{p_X}\label{eq:z2_bd3}
\\&=\frac{3}{2} H_b(p_X)+(1-p_X)\frac{1}{2}\log 2\pi e(S-a_X+S\,O(\delta\log\delta^{-1}))
+p_X\frac{1}{2}\log 2\pi e(S+S\,O(\delta\log\delta^{-1}))
\end{align}
where \eqref{eq:z2_bd3} follows from the fact that differential entropy is upper bounded by that of a Gaussian with the same variance and the bounds in \eqref{eq:z2_conditional1}--\eqref{eq:z2_conditional2}. Recalling the assumption that $h(Z)\ge \frac{1}{2}\log 2\pi eS$, we have
\begin{align}
0&\le \frac{3}{2} H_b(p_X)+(1-p_X)\frac{1}{2}\log\left(1+\frac{-a_X+S\,O(\delta\log\delta^{-1})}{S}\right)
+p_X \frac{1}{2} \log\left(1+ O(\delta\log\delta^{-1})\right)
\\&\le \frac{3}{2} H_b(p_X)+(1-p_X)\frac{-a_X+S\,O(\delta\log\delta^{-1})}{2S}+p_X O(\delta\log\delta^{-1})
\\&=\frac{3}{2} H_b(p_X)-\frac{(1-p_X)a_X}{2S}+O(\delta\log\delta^{-1}).
\end{align}
Rearranging gives
\begin{align}
a_X&\le \frac{S}{1-p_X}\left[3H_b(p_X)+O(\delta\log\delta^{-1})\right]
\\&\le S (1+O(\sqrt{\delta\log\delta^{-1}}))\left[O(\delta^{1/2}(\log\delta^{-1})^{3/2})+O(\delta\log\delta^{-1})\right]\label{eq:ax_bd1}
\\&=S\, O(\delta^{1/2}(\log\delta^{-1})^{3/2})
\end{align}
where in \eqref{eq:ax_bd1} we have applied the bound on $p_X$ from \eqref{eq:px_bd6}, as well as the fact that for small $p$, $H_b(p)=O(p\log p^{-1})$. Plugging this bound back into \eqref{eq:ax_goal}, we find
\begin{equation}\label{eq:term2_bd}
\mathbb{E} [XY1(X\ge \tau_X,0<Y< \tau_Y)]-\mathbb{E} [X1(X\ge \tau_X)]\,\mathbb{E} [Y1(0<Y<\tau_Y)]
\ge -S\,O(\delta^{5/4}(\log \delta^{-1})^{7/4}).
\end{equation}
By the same argument as the above bound on $a_X$, we may similarly find
\begin{equation}
\mathbb{E} [Y^2 1(Y\ge \tau_Y)]\le S\,O(\delta^{1/2}(\log\delta^{-1})^{3/2}).
\end{equation}
This implies that the third term in \eqref{eq:four_terms} is lower bounded by
\begin{equation}\label{eq:term3_bd}
\mathbb{E} [XY1(X<\tau_X,Y\ge \tau_Y)]-\mathbb{E} [X1(X<\tau_X)]\mathbb{E} [Y1(Y\ge \tau_Y)]
\ge -S\,O(\delta^{5/4}(\log \delta^{-1})^{7/4})
\end{equation}
and the fourth term in \eqref{eq:four_terms} is lower bounded by
\begin{equation}\label{eq:term4_bd}
\mathbb{E} [XY1(X\ge \tau_X,Y\ge \tau_Y)]-\mathbb{E} [X1(X\ge \tau_X)]\mathbb{E} [Y1(Y\ge \tau_Y)]
\ge -S\,O(\delta^{3/2}(\log\delta^{-1})^{5/2}).
\end{equation}
Note that for each of the bounds in \eqref{eq:term2_bd}, \eqref{eq:term3_bd}, and \eqref{eq:term4_bd}, the function of $\delta$ vanishes faster than $\delta$. Putting everything together, we now have
\begin{equation}
|\mathbb{E} [XY1(X>0,Y>0)]-\mathbb{E} [X1(X>0)]\,\mathbb{E} [Y1(Y>0)]|\le S\, O(\delta).
\end{equation}
Applying this bound by swapping $X$ with $-X$ and/or $Y$ with $-Y$ gives
\begin{equation}
\mathbb{E} [XY]\le S\,O(\delta).
\end{equation}
Therefore
\begin{equation}
h(Z)\le \frac{1}{2}\log 2\pi eS(1+O(\delta))=\frac{1}{2}\log 2\pi e S+O(\delta).
\end{equation}
This proves \eqref{eq:gaussian_goal}.
\subsection{Bounding \texorpdfstring{$V_{\max}$}{Vmax}}
Recall that
\begin{equation}
V_{\max}=\sup_{P_{UXY}:\mathbb{E} [X^2]\le S_1,\mathbb{E} [Y^2]\le S_2}
\max\{V(W\|P_{Z|U}|P_{UXY}),\,
V(W\|P_{Z|YU}|P_{UXY}),\,
V(W\|P_{Z|XU}|P_{UXY})\}.
\end{equation}
Each of the terms in the maximum can be shown to be finite by showing that the equivalent point-to-point quantity is finite:
\begin{equation}
\sup_{P_{UX}:\mathbb{E} [X^2]\le S} V(W'\|P_{Z|U}|P_{UX})
\end{equation}
where $W'\in\mathcal{P}(\mathbb{R}\to \mathbb{R})$ is the point-to-point channel where $Z=X+N$, $N\sim\mathcal{N}(0,1)$. Consider any $P_{UX}$ where $\mathbb{E} [X^2]\le S$. Fix $u$, and let $S_u=\mathbb{E}[X^2|U=u]$. To simplify notation, we again drop the conditioning on $U=u$. Define the information density
\begin{equation}
\imath(x;z)=\log \frac{dW'_{x}}{dP_Z}(z).
\end{equation}
Note that
\begin{align}
V(W'\|P_Z|P_{X})&=\mathbb{E}\left[\var(\imath(X;Z)|X)\right]
\\&\le \mathbb{E}[ \imath(X;Z)^2]
\\&=\mathbb{E}[\imath(X;Z)^2 1(\imath(X;Z)\le 0)]+\mathbb{E}[\imath(X;Z)^2 1(\imath(X;Z)\ge 0)]\label{eq:two_density_terms}
\end{align}
where $(X,Z)$ are distributed according to $P_{X}W'$. To lower bound the information density, we may upper bound the Radon-Nikodym derivative
\begin{align}
\frac{dP_Z}{dW'_{x}}(z)&=\int dP_{X}(x') \frac{dW'_{x'}}{dW'_{x}}(z)
\\&=\int dP_{X}(x') \exp\left\{-\frac{(z-x')^2}{2}+\frac{(z-x)^2}{2}\right\}
\\&\le \exp\left\{\frac{(z-x)^2}{2}\right\}.
\end{align}
Thus
\begin{equation}
\imath(x;z)\ge -\frac{(z-x)^2}{2}.
\end{equation}
The first term in \eqref{eq:two_density_terms} may now be upper bounded by
\begin{align}
\mathbb{E}[\imath(X;Z)^2 1(\imath(X;Z)\le 0)]
&\le \mathbb{E} \left[\left(\frac{(Z-X)^2}{2}\right)^2 1(\imath(X;Z)\le 0)\right]
\\&\le \mathbb{E} \left[\frac{(Z-X)^4}{4}\right]
\\&=\frac{3}{4}
\end{align}
where we have used the fact that $Z-X=N$ is a standard Gaussian.
We now upper bound the second term in \eqref{eq:two_density_terms}. For any integer $k$, let $\mathcal{A}_k=[k,k+1)$. Let $p_k=\mathbb{P}(X\in \mathcal{A}_k)$. Also let $\mu_k=\mathbb{E}[X|X\in \mathcal{A}_k]$ and $\sigma_k^2=\var(X|X\in \mathcal{A}_k)$. Since $\mathcal{A}_k$ is an interval of length 1, $\sigma_k^2\le 1/4$. Then for any integer $k$, the PDF of $P_Z$ is lower bounded by
\begin{align}
f_Z(z)&=\int dP_{X}(x) \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{(z-x)^2}{2}\right\}
\\&\ge \int_{x\in \mathcal{A}_k} dP_{X}(x) \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{(z-x)^2}{2}\right\}
\\&\ge p_k \frac{1}{\sqrt{2\pi}} \exp\left\{\mathbb{E}\left[-\frac{(z-X)^2}{2}\bigg|X\in \mathcal{A}_k\right]\right\}\label{eq:PDF_bd3}
\\&=p_k \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{(z-\mu_k)^2}{2}-\frac{\sigma_k^2}{2}\right\}\label{eq:PDF_bd4}
\\&\ge p_k \frac{1}{\sqrt{2\pi}} \exp\left\{-\frac{(z-\mu_k)^2}{2}-\frac{1}{8}\right\}\label{eq:PDF_bd5}
\end{align}
where \eqref{eq:PDF_bd3} holds by the convexity of the exponential, \eqref{eq:PDF_bd4} holds by the definitions of $\mu_k$ and $\sigma_k$, and \eqref{eq:PDF_bd5} holds since $\sigma_k^2\le 1/4$. Thus, for any $k$ the information density can be upper bounded by
\begin{equation}
\imath(x;z)\le \frac{-(z-x)^2+(z-\mu_k)^2}{2}+\frac{1}{8}-\log p_k.
\end{equation}
Applying this bound to the second term in \eqref{eq:two_density_terms} gives
\begin{align}
&\mathbb{E}[\imath(X;Z)^2 1(\imath(X;Z)\ge 0)]
\\&\le \sum_{k=-\infty}^\infty \int_{x\in \mathcal{A}_k} dP_{X}(x) \mathbb{E}\left[\left(\frac{-(Z-x)^2+(Z-\mu_k)^2}{2}+\frac{1}{8}-\log p_k\right)^2\bigg|X=x\right]
\\&=\sum_k \int_{x\in \mathcal{A}_k} dP_{X}(x) \left[(x-\mu_k)^2+\left(\frac{(x-\mu_k)^2}{2}+\frac{1}{8}-\log p_k\right)^2\right]
\\&\le \sum_k p_k \left[1+\left(\frac{5}{8}-\log p_k\right)^2\right]\label{eq:density_bd3}
\\&\le 2+\sum_k \left[-2 p_k\log p_k+p_k\log^2 p_k\right]\label{eq:density_bd4}
\end{align}
where \eqref{eq:density_bd3} holds since $|x-\mu_k|\le 1$ for $x\in \mathcal{A}_k$, because $\mu_k\in \mathcal{A}_k$ and $\mathcal{A}_k$ has length $1$, and in \eqref{eq:density_bd4} we have upper bounded $5/8$ by $1$ to simplify the expression. By Chebyshev's inequality, for $k>0$
\begin{equation}
p_k=\mathbb{P}(X\in \mathcal{A}_k)\le \mathbb{P}(X\ge k)\le \frac{S_u}{k^2}.
\end{equation}
Note that for $p\in[0,1]$, $-p\log p\le 1/e$, and this function is increasing for $p\le 1/e$. Thus, if we consider the sum of $-p_k\log p_k$ for $k\ge 0$, we have
\begin{align}
\sum_{k=0}^\infty -p_k\log p_k
&\le\sum_{k=0}^{\ceil{\sqrt{eS_u}}} \frac{1}{e}+\sum_{k=\ceil{\sqrt{e S_u}}+1}^\infty -\frac{S_u}{k^2}\log \frac{S_u}{k^2}
\\&\le \frac{1}{e}(\sqrt{eS_u}+2)+\int_{\sqrt{eS_u}}^\infty -\frac{S_u}{r^2}\log \frac{S_u}{r^2} dr
\\&=\frac{\sqrt{S_u}}{\sqrt{e}}+\frac{2}{e}+\frac{3\sqrt{S_u}}{\sqrt{e}}
\\&=\frac{4\sqrt{S_u}}{\sqrt{e}}+\frac{2}{e}.\label{eq:pklogpk_bd}
\end{align}
By an identical calculation, $\sum_{k=-\infty}^{-1}-p_k\log p_k\le \frac{4\sqrt{S_u}}{\sqrt{e}}+\frac{2}{e}$. Similarly, note that $p\log^2 p\le 4/e^2$, and this function is increasing for $p\le 1/e^2$. Thus
\begin{align}
\sum_{k=0}^\infty p_k\log^2 p_k
&\le \sum_{k=0}^{\ceil{e\sqrt{S_u}}} \frac{4}{e^2}+\sum_{\ceil{e\sqrt{S_u}}+1}^\infty \frac{S_u}{k^2}\log^2 \frac{S_u}{k^2}
\\&\le \frac{4}{e^2}(e\sqrt{S_u}+2)+\int_{e\sqrt{S_u}}^\infty \frac{S_u}{r^2}\log^2 \frac{S_u}{r^2}dr
\\&=\frac{4}{e^2}(e\sqrt{S_u}+2)+\frac{20\sqrt{S_u}}{e}
\\&=\frac{24\sqrt{S_u}}{e}+\frac{8}{e^2}.\label{eq:pklog2pk_bd}
\end{align}
Again the same holds for the summation over $k<0$. Applying the bounds in \eqref{eq:pklogpk_bd} and \eqref{eq:pklog2pk_bd} to \eqref{eq:density_bd4} gives
\begin{equation}
\mathbb{E}[\imath(X;Z)^2 1(\imath(X;Z)\ge 0)]
\le 2+\frac{16\sqrt{S_u}}{\sqrt{e}}+\frac{8}{e}+\frac{48\sqrt{S_u}}{e}+\frac{16}{e^2}.
\end{equation}
Now combining the bounds on each of the terms in \eqref{eq:two_density_terms} gives
\begin{align}
V(W'\|P_{Z|U}|P_{UX})&\le \sum_u P_U(u)\left[
\frac{11}{4}+\frac{8}{e}+\frac{16}{e^2}+\left(\frac{16}{\sqrt{e}}+\frac{48}{e}\right)\sqrt{S_u}\right]
\\&\le \frac{11}{4}+\frac{8}{e}+\frac{16}{e^2}+\left(\frac{16}{\sqrt{e}}+\frac{48}{e}\right)\sqrt{S}.
\end{align}
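As a closing sanity check on the tail-integral arithmetic used above, the two closed forms $\int_{\sqrt{eS_u}}^\infty -\frac{S_u}{r^2}\log\frac{S_u}{r^2}\,dr=\frac{3\sqrt{S_u}}{\sqrt{e}}$ and $\int_{e\sqrt{S_u}}^\infty \frac{S_u}{r^2}\log^2\frac{S_u}{r^2}\,dr=\frac{20\sqrt{S_u}}{e}$ can be verified numerically. The sketch below substitutes $r=\sqrt{S_u}\,e^t$, which turns both tails into rapidly converging Gamma-type integrals:

```python
import math

def trap(f, a, b, n=100_000):
    # composite trapezoid rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

def tail_integral_1(S):
    # int_{sqrt(eS)}^inf -(S/r^2) log(S/r^2) dr; the substitution
    # r = sqrt(S) e^t maps the integrand to sqrt(S) * 2 t e^{-t}, t >= 1/2
    return math.sqrt(S) * trap(lambda t: 2 * t * math.exp(-t), 0.5, 60.0)

def tail_integral_2(S):
    # int_{e sqrt(S)}^inf (S/r^2) log^2(S/r^2) dr; the same substitution
    # gives sqrt(S) * 4 t^2 e^{-t}, t >= 1
    return math.sqrt(S) * trap(lambda t: 4 * t * t * math.exp(-t), 1.0, 60.0)
```

Both integrals scale as $\sqrt{S_u}$, which is what allows the final bound to depend on the power constraint only through $\sqrt{S}$.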
\bibliographystyle{IEEEtran}
\section{Introduction}
In financial theory, assets are priced according to the expected discounted payoff\cite{nobel2013}:
\begin{equation}\label{p}
P=E(Mx).
\end{equation}
The discount factor $M$ describes both how future money is less valuable due to the interest rate and, more importantly, how people value various outcomes differently. Take stocks and insurance as examples: in the United States, the stock market has an annual return of 8\% in the long term, which is significantly higher than the US treasury bond rate of at most 2--3\% a year. The interpretation of this fact is that people are worried about potentially losing money; therefore, when they price assets such as stocks that may lose value, they discount more when they have a profit, and less when they have a loss. As a result, the price of the stock is lower than the discounted expected payoff. (Notice the opposite order of words; price is the expected discounted payoff, where one takes the expectation last, whereas the naive valuation process we usually have in mind is to take the expectation at the future time where the payoff occurs, then discount it back to the present.) In contrast, when one buys insurance, one is paying extra so that, in case the unfortunate happens, one has enough payoff to deal with it. In this case one puts more weight on the unfortunate scenario; the price thus ends up higher than the discounted expected payoff. In theory, prices are therefore not necessarily lower or higher than the discounted expectation; the difference depends on how people value the various outcomes. The difference between the price and the discounted expected payoff, divided by the standard deviation of the instrument, is usually called the risk premium. So we say that stocks usually have a positive risk premium, and insurance has a negative risk premium.
If the asset in question is freely tradable, the price is then determined by the market. In particular, the supply, i.e., the number of people willing to sell the asset at the market price, should meet the demand, the number of people willing to buy at the market price. The discount factor which prices the asset in reality thus describes the people who are ambivalent toward buying or selling the asset at the market price. In this sense, when there is a market price, we can always derive an effective discount factor, given the forecast and the current market price.
What is the idea of an efficient market then? The usual statement of the efficient market hypothesis (EMH) is that stocks are always traded at fair value, and their prices already reflect all available information. A detailed discussion can be found in Ref. \cite{nobel2013} and the references therein. In our words, it means that the price should not deviate significantly from how a \textit{reasonable} discount factor would price it. There is no clear-cut definition of reasonable, unfortunately\footnote{This corresponds to the ``joint hypothesis'' problem mentioned in Ref. \cite{nobel2013}.}, but for example, similar instruments should have similar risk premiums; the risk premium should not be too high for any instrument; and for one instrument, the risk premium should not vary significantly over time without any new information. (An important side note is that not all instruments need to have similar risk premiums for the market to be efficient. Investors can always benefit from diversifying their portfolio with more independent assets, so instruments with lower risk premiums are still valuable; it is just that a reasonable investor will require less quantity of them.)
The EMH in the weak sense, however, is more concrete. It states that one can never predict returns by analyzing past data. In other words, there are no exploitable opportunities to trade for a larger profit. A lot of work (see for example \cite{cl,ito} and references therein) has been done to address whether such hypothesis is true on various assets, but the results are mixed. There are also efforts made\cite{zunino,zunino2,apen} to develop different statistics to measure market inefficiencies. On the other front, there are also countless time series models that have been proposed\cite{taylor,tim,philip} to fit the observed financial time series, whether satisfying EMH or not.
In this paper, we take a different approach. Instead of proposing time series models with a number of parameters that can fit the data, such as ARIMA or GARCH, we start from the financial theory side. We think about how market efficiency can be broken in a minimal way. When the market is not efficient, we propose a model to replace the geometric Brownian motion as the next simplest description. In the following, in Sec. 2 we start from simple financial theory considerations and motivate the model. In Sec. 3 we first solve the model analytically, and find its current risk premium based on past history. Then we discuss a maximum likelihood procedure to estimate the model parameters from data. In Sec. 4, we take a look at real data and see if such modeled inefficiencies are indeed present.
\section{A Simple Theory to Model Market Inefficiencies}
In the literature, the simplest model of the price of a stock follows the geometric Brownian motion:
\begin{equation}
\frac{dS}{S}=\mu dt+\sigma dz.
\end{equation}
If we assume that the stock is always priced fairly, then the stock price in the immediate future is the payoff. We can then use the pricing equation relating the immediate price in the future and the current price, via the discount factor:
\begin{equation}
S=E\big((1+\frac{d\Lambda}{\Lambda})(S+dS)\big).
\end{equation}
For simplicity, if we further assume that the risk-free interest rate is zero, i.e., $E(d\Lambda/\Lambda)=0$, this will imply that the stochastic discount factor is
\begin{equation}
\frac{d\Lambda}{\Lambda}=-\frac{\mu}{\sigma}dz.
\end{equation}
This discount factor can then be used to price derivatives such as options.
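As a quick numerical illustration (not part of the original derivation), one can check in a one-step discretization that this discount factor exactly cancels the drift: with $\Delta z\sim\mathcal{N}(0,\Delta t)$, $E\big[(1-\tfrac{\mu}{\sigma}\Delta z)(S+\mu S\Delta t+\sigma S\Delta z)\big]=S$, so the discounted price is a martingale. The parameter values below are illustrative:

```python
import math
import random

def discounted_price_estimate(mu=0.07, sigma=0.2, S=1.0, dt=0.01,
                              n=400_000, seed=0):
    """Monte Carlo estimate of E[(1 + dLambda/Lambda)(S + dS)] over one
    Euler step, with dLambda/Lambda = -(mu/sigma) dz and zero interest rate."""
    rng = random.Random(seed)
    sqdt = math.sqrt(dt)
    total = 0.0
    for _ in range(n):
        dz = rng.gauss(0.0, sqdt)
        m = 1.0 - (mu / sigma) * dz          # one-step discount factor
        s_next = S * (1.0 + mu * dt + sigma * dz)
        total += m * s_next
    return total / n
```

The cancellation happens because the covariance term $-\frac{\mu}{\sigma}\,\sigma S\,E[\Delta z^2]=-\mu S\Delta t$ exactly offsets the drift $\mu S\Delta t$.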
A few observations:
1. The discount factor relates the current price to the price in the future. This is because when the pricing is fair, the future price is just the discounted payoff at that time.
2. We derived the discount factor from the random process that we assume the price to follow. This is always possible for a given random process, but there is no guarantee that the resulting discount factor is reasonable. Conversely, we can also presume a discount factor, then deduce how the stock price should behave. The discount factor does not uniquely determine the stock price however; it only determines the ratio between the drift and the standard deviation.
To model an inefficient market, the simplest way is to keep the pricing equation intact, but forgo the constraint that the discount factor should be reasonable. However, the random process that governs the prices then depends entirely on how we want to choose this unreasonable discount factor. Therefore, the real question is always how to reasonably define an unreasonable discount factor.
\subsection{The Fundamental Problem with Market Inefficiency}
One way of choosing this effective unreasonable discount factor is to assume a reasonable discount factor, but then allow the price to deviate from the pricing equation, Eq. \ref{p}. That is, we imagine there is a reasonable price following the pricing equation of the reasonable discount factor, but the real price does not always match the reasonable price. In terms of equations, it roughly looks like the following:
\begin{equation}\label{p0}
P_0=E(Mx);
\end{equation}
\begin{equation}\label{rp}
\frac{dP}{P}=f(P,P_0)dt+\sigma dz'.
\end{equation}
$P_0$ is the reasonable price, $M$ is the reasonable discount factor we have chosen, $x$ is the payoff, and $P$ is the real price. The second equation is necessary; since we have decoupled the real price from the pricing equation, we need to give additional information on how it is going to evolve. $f$ would be some function that makes the price $P$ track the reasonable price $P_0$ over longer time scales. $dz'$ is a zero mean random noise, different from the random noise in the reasonable price.
This set of equations is not self-consistent, however. The central problem is that if the instrument in question is freely tradable at price $P(t=0)$ at $t=0$, that price constitutes an immediate payoff $x(t=0)$, which the seller can choose to take if it is higher than his estimate of the long term discounted payoff, and which the buyer can choose to pay if it is lower than his estimate. Therefore, the reasonable price $P_0$, being the expected discounted payoff, has to be higher than $P$ for the seller, and lower than $P$ for the buyer, if it were to follow Eq. \ref{p0}. For a reasonable price definition independent of the market position, the only possible take is that $P_0=P$, where supply meets demand.
To be clear, let us consider a simple example. Suppose that the reasonable discount factor $M=1$, and imagine we currently have an ``underpriced'' stock, whose price/payoff $x$ at a later time is expected to be higher than the current price $P$. How do we define $P_0$? From Eq. \ref{p0}, the maximal payoff for a potential buyer would be $P_0=x$, higher than $P$. A potential seller, on the other hand, can only get his best payoff by buying back immediately, and his reasonable price would be $P_0=P$. This reasonable price, defined by the expected discounted payoff, is different for the two sides, because one always has the option to trade whenever he wants.
Conceptually, the reasonable price we want to define in this example is apparently the buyer's, but as we have seen from above, we cannot define it as the immediate expected discounted payoff. The ability to trade at the market price at any time prohibits the expected discounted payoff from deviating from the real price.
\subsection{Remedy to the Problem}
In order to work around this pitfall, when we define the reasonable price via Eq. \ref{p0}, we have to imagine that the payoff is more about the long term, and the current market price should not have a direct influence. In other words, when defining the payoff $x$ in our theory we have to let it deviate freely from the market price $P$, at least in the short term.
How do we capture the long term payoff using variables at present? The answer is that it is captured by $P_0$ at the very next moment. Indeed, if $P_0$ is the expected discounted payoff at any given time, every $P_0$ captures the long term payoff after its time, and the pricing equation relates $P_0$ at this and the very next moment. Given
\begin{equation}
\frac{d\Lambda }{\Lambda}=-rdt-mdz
\end{equation}
for example, and if we assume the fluctuation of the reasonable price is proportional to its current value, we can then get
\begin{equation}
\frac{dP_0}{P_0}=(r+m\sigma_0)dt+\sigma_0 dz.
\end{equation}
Here $r$ is the risk-free interest rate, $m$ is the market price of risk, and $\sigma_0$ is the standard deviation of the instrument's percentage price change. $dz$ describes the random incoming shocks that affect the instrument; for example, this may include the profitability of a company stock, overall macroeconomic conditions for a treasury bond, or international relations for energy futures.
The real price still evolves according to Eq. \ref{rp}. Here we give further specification to the random noise: both real incoming news and current market force fluctuations should affect the market price. We therefore write the following instead:
\begin{equation}
\frac{dP}{P}=f(P,P_0)dt+\sigma_1 dz'+\sigma_2 dz.
\end{equation}
$dz'$ represents market force fluctuations, and $dz$ is the same shock as in the pricing equation.
In this approach, the reasonable price $P_0$ is now seemingly defined entirely independently of the real market price $P$, whereas the real market price is trying to regress to $P_0$ with some additional noise. However, the current market price can affect the reasonable pricing in some way. The simplest way is to add a term $\sigma_3 dz'$ to the stochastic equation of $P_0$. With this term $P_0$ still satisfies the same pricing equation, since the additional noise $dz'$ does not carry a risk premium. We end up with the following set of equations:
\begin{equation}\label{p02}
\frac{dP_0}{P_0}=(r+m\sigma_0)dt+\sigma_0 dz+\sigma_3 dz';
\end{equation}
\begin{equation}\label{rp2}
\frac{dP}{P}=f(P,P_0)dt+\sigma_1 dz'+\sigma_2 dz.
\end{equation}
Interestingly though, we cannot really distinguish between $dz$ and $dz'$ with just the observation of the two prices. All we can say is how large the fluctuation of each price is, and how they are correlated. If we can deduce $\sigma_0$ from our prior knowledge of the risk-free rate $r$ and the risk premium $m$, then we can determine all the $\sigma$'s. However, if not, then all possible values of the $\sigma$'s which give the same variances of the two prices, as well as the same correlation between them, are equivalent. Therefore, without any prior knowledge of the parameters of the model, the model is equivalent to the following:
\begin{equation}\label{p03}
\frac{dP_0}{P_0}=adt+\sigma dz;
\end{equation}
\begin{equation}\label{rp3}
\frac{dP}{P}=f(P,P_0)dt+\sigma' dz';
\end{equation}
\begin{equation}
\mathrm{Cov}(dz,dz')=\rho dt.
\end{equation}
That is, adding this $\sigma_3 dz'$ term into the evolution of $P_0$ actually does not change the dynamics of the model. It just maps the original model to a different set of parameter values. In the remainder of the paper, we will study the dynamics of this set of equations.
Incidentally, the mathematics governing this set of equations is identical to quantum mechanics in Euclidean time. Specifically, if we choose $f=-k\ln(P/P_0)$, the theory becomes non-interacting and exactly solvable. We shall stick with this choice in the remainder of the paper.
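To make the dynamics concrete, the following is a minimal Euler--Maruyama sketch of Eqs. \ref{p03} and \ref{rp3} in log space, with the mean-reverting choice $f=-k\ln(P/P_0)$; all parameter values are illustrative. The correlated noises are built as $dz'=\rho\,dz+\sqrt{1-\rho^2}\,dw$ with $dw$ independent of $dz$:

```python
import math
import random

def simulate(a=0.05, k=2.0, sigma=0.2, sigma_p=0.2, rho=0.5,
             P0_init=100.0, P_init=90.0, T=20.0, dt=1.0 / 252, seed=1):
    """Euler-Maruyama simulation of the reasonable price P0 and the market
    price P, in log space (X0 = ln P0, X = ln P), with f = -k ln(P/P0)."""
    rng = random.Random(seed)
    sqdt = math.sqrt(dt)
    X0, X = math.log(P0_init), math.log(P_init)
    x0s, xs = [X0], [X]
    for _ in range(int(T / dt)):
        dz = rng.gauss(0.0, sqdt)
        dw = rng.gauss(0.0, sqdt)
        dzp = rho * dz + math.sqrt(1.0 - rho ** 2) * dw  # Cov(dz, dz') = rho dt
        dX0 = (a - sigma ** 2 / 2) * dt + sigma * dz
        dX = (-k * (X - X0) - sigma_p ** 2 / 2) * dt + sigma_p * dzp
        X0, X = X0 + dX0, X + dX
        x0s.append(X0)
        xs.append(X)
    return xs, x0s
```

In this simulation $\ln P$ fluctuates around $\ln P_0$ with a stationary spread of order $\sqrt{(\sigma^2+\sigma'^2-2\rho\sigma\sigma')/(2k)}$, instead of drifting away as two independent random walks would.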
\section{Analytical Solution of the Model}
Usually when one speaks about the solution of such a set of stochastic differential equations, it refers to a probability distribution as a function of time, given the initial values of the prices. In our case, there are a few important differences: (i) we do not observe the reasonable price $P_0$; (ii) we are fine with just knowing about the probabilities in the immediate future.
While the time evolution of $P_0$ is simply geometric Brownian motion, to plug that distribution into the evolution of $P$ and integrate for a finite amount of time, with $P$ itself also on the right-hand side of the equation, is no simple task. Fortunately this prediction over a finite time horizon is not that important to us, as any newly observed price $P$ will change it. It suffices for us to know how $P$ is going to behave only for the next time step. The important question for us is to find the unobserved $P_0$ at the moment, based on the evolution of $P(t)$ in the past. In addition, we would like to infer the model parameters from the observed price as well.
In the first subsection, we shall assume knowledge of the model parameters, and find the most probable values of $P_0$ as a function of time. In the second subsection, we highlight the result we get for $\ln P_0$ and discuss the resulting dynamics of the market price $P$, in various parameter ranges. In the last subsection, we shall find the maximum likelihood estimate (MLE) of the model parameters, by integrating over all possible values of $P_0$ in Fourier space.
\subsection{Maximum likelihood estimate of $P_0$}
Let $X\equiv\ln P$, $X_0\equiv\ln P_0$, by Ito's Lemma, we then have
\begin{equation}
d X_0=(a-\frac{\sigma^2}{2})dt+\sigma dz;
\end{equation}
\begin{equation}
d X=(-k(X-X_0)-\frac{\sigma'^2}{2})dt+\sigma' dz';
\end{equation}
We can write down the log likelihood function $\ln L(a,k,\sigma,\sigma',\rho;X_0(t),X(t))$:
\begin{equation}
\ln L=-\frac{1}{2(1-\rho^2)\sigma^2\sigma'^2}\int dt\bigg(\sigma'^2\Delta X_0^2+\sigma^2\Delta X^2-2\rho\sigma\sigma'\Delta X_0\Delta X\bigg)+C,
\end{equation}
with
\begin{equation}
\Delta X_0\equiv \dot X_0-(a-\sigma^2/2);
\end{equation}
\begin{equation}
\Delta X\equiv \dot X+k(X-X_0)+\sigma'^2/2.
\end{equation}
$C$ is some function of the parameters that normalizes the likelihood function, and is not a functional of $X$ or $X_0$. Our first step is to find the most probable $X_0$, given the parameters and some history of $X(t)$. The variation with respect to $X_0$ gives the equation that $X_0$ should satisfy to maximize $\ln L$, which is just the equation of motion of $X_0$ in Euclidean time:
\begin{equation}
-2\sigma'^2\ddot X_0-2\sigma^2 k\big(\dot X+k(X-X_0)+\sigma'^2/2\big)
+2\rho\sigma\sigma'\big(-k(a-\sigma^2/2)+\ddot X+k\dot X\big)=0;
\end{equation}
Collecting terms, we get
\begin{equation}
X_0-\frac{\sigma'^2}{k^2\sigma^2}\ddot X_0=X+\frac{1}{k}(1-\frac{\rho\sigma'}{\sigma})\dot X-\frac{\rho\sigma'}{k^2\sigma}\ddot X+\frac{\sigma'^2}{2k}+\frac{\rho\sigma'}{k\sigma}(a-\frac{\sigma^2}{2})\equiv g(t).
\end{equation}
The right-hand side is a known function of time. Let us denote that as $g(t)$, then the most probable $X_0(t)$ shall be
\begin{equation}\label{x0sol}
X_0(t)=\frac{k\sigma}{2\sigma'}\int^\infty_{-\infty} dt'g(t')\exp\big(-\frac{k\sigma}{\sigma'}|t-t'|\big).
\end{equation}
We can have some intuition about the solution of $X_0$. First, $\sigma'/(k\sigma)$
defines a time scale at which we can recover $X_0$ from $X$. It is inversely proportional to $k$, because $k^{-1}$ is the characteristic time scale for $X$ to respond to $X_0$. The $\sigma$'s are there because they tell us how ``noisy'' the prices are; for example, if $\sigma$ is small, then we know that $X_0$ is not fluctuating around much, so we can afford to average longer to get a more accurate estimate. If $\sigma'$ is small, then $X$ carries less noise, so it makes sense to average at a smaller time scale to more timely reflect the change of $X_0$.
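As a numerical sanity check of Eq. \ref{x0sol} (a sketch, not part of the derivation): for a test input $g(t)=\sin t$, the double-sided exponential kernel with rate $\lambda=k\sigma/\sigma'$ acts as a low-pass filter with transfer function $\lambda^2/(\lambda^2+\omega^2)$, so the output should be $\tfrac{\lambda^2}{\lambda^2+1}\sin t$ away from the boundaries:

```python
import numpy as np

def x0_estimate(g, t, lam):
    """Discretized version of Eq. (x0sol): convolve g with the normalized
    double-sided exponential kernel (lam/2) exp(-lam |t|)."""
    dt = t[1] - t[0]
    kernel = 0.5 * lam * np.exp(-lam * np.abs(t))
    # with an odd-length grid the kernel is centered, so mode="same"
    # aligns kernel(t_i - t_m) with the output sample at t_i
    return np.convolve(g, kernel, mode="same") * dt
```

Away from the boundaries the truncated kernel tails are exponentially small, so the discrete convolution matches the closed-form filter response.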
There is one important thing we have overlooked up to this point. Notice that the kernel $\frac{k\sigma}{2\sigma'}\exp\big(-\frac{k\sigma}{\sigma'}|t-t'|\big)$ is nonzero even when $t'>t$; that is, the estimate of $X_0$ receives contributions from the prices both before and after it. In physics terminology, we are using the Feynman prescription for the Green's function. This naturally occurs in a maximum likelihood estimate, since the current reasonable price evolves gradually from the past reasonable prices (so it is influenced by the previous prices) and it affects the future prices (so we need to draw inferences from the future prices to make the estimate). Furthermore, this is mathematically the only convergent Green's function. Specifically, if we treat the problem as an initial value problem (in other words, if we look for a retarded Green's function), the solution for a generic $g(t)$ always runs away to $\pm\infty$, because $X_0$ and $\ddot X_0$ enter the equation with the same sign.
The maximum likelihood estimate of $X_0$ in the middle of a time series is therefore not suitable for understanding the dynamics of $X$, as it already contains information from the future. What we are interested in instead is something like a maximum likelihood estimate without knowledge of the later prices. In our formulation, the natural object to consider is the estimate of $X_0$ at the end of the time series. The particular solution, Eq. \ref{x0sol}, is unfortunately not up to the task, because we have not specified what boundary condition it should satisfy at the end of the time series.
The general solution of $X_0$ is in the following form:
\begin{equation}\label{gensol}
X_0=\frac{k\sigma}{2\sigma'}\int^T_{0} dt'g(t')\exp\big(-\frac{k\sigma}{\sigma'}|t-t'|\big)+a\exp(-\frac{k\sigma}{\sigma'}t)+b\exp(\frac{k\sigma}{\sigma'}(t-T)),
\end{equation}
where $a$ and $b$ are integration constants determined by the boundary conditions (this $a$ is not to be confused with the risk premium parameter). The $a$ term can be thought of as prior knowledge about $X_0$ at the start of the time series; for our purposes, as long as $(k\sigma /\sigma')T\gg 1$, it does not affect $X_0(T)$. The question is $b$: what boundary condition should we impose at the end of the time series in order to determine it?
It turns out the answer is simple. When we do the variation to maximize the log likelihood, we integrate by parts to convert terms proportional to $\delta \dot X_0$ into terms proportional to $\delta X_0$. This produces a total derivative, which we discarded when deriving the equation of motion in the bulk. This total derivative determines the boundary condition that the solution $X_0(t)$ needs to satisfy, as we now show.
In the variation, we have
\begin{align}
\delta \ln L&=-\int^T_0 dt\;\Big(\frac{\partial \mathcal L}{\partial X_0}\delta X_0+\frac{\partial \mathcal L}{\partial \dot X_0}\delta \dot X_0\Big)\nonumber\\
&=-\int^T_0 dt\;\Big(\frac{\partial \mathcal L}{\partial X_0}-\frac{\rm d}{{\rm d}t}\frac{\partial \mathcal L}{\partial \dot X_0}\Big)\delta X_0-\Big(\frac{\partial \mathcal L}{\partial \dot X_0}\delta X_0\Big)\Big|^T_0;
\end{align}
here we borrow the Lagrangian notation $\mathcal L\equiv \big(\sigma'^2\Delta X_0^2+\sigma^2\Delta X^2-2\rho\sigma\sigma'\Delta X_0\Delta X\big)/\big(2\sigma^2\sigma'^2(1-\rho^2)\big)$ for the integrand of the log likelihood function. In addition to the bulk terms proportional to the equation of motion, we are thus left with a total derivative that integrates to the boundary,
\begin{align}
\delta \ln L&({\rm boundary})=-\big(\frac{\partial \mathcal L}{\partial \dot X_0}\delta X_0\big)\big|^T_0\nonumber\\
&=-\frac{\delta X_0}{(1-\rho^2)\sigma^2}\bigg(\dot X_0-(a-\sigma^2/2)-\frac{\rho\sigma}{\sigma'}\big(\dot X+k(X-X_0)+\sigma'^2/2\big)\bigg)\bigg|^T_0.
\end{align}
Setting it to zero, we can get an equation at the boundary which relates $X_0$ to $\dot X_0$:
\begin{equation}\label{bc}
\dot X_0(T)-(a-\sigma^2/2)-\frac{\rho\sigma}{\sigma'}\big(\dot X(T)+k(X(T)-X_0(T))+\sigma'^2/2\big)=0.
\end{equation}
This is the boundary condition we are looking for.
There is a neat trick to find the solution of $X_0(t)$. First we extend the range of time to $2T$ and define
\begin{equation}
g(t)=g(2T-t); \;T<t<2T.
\end{equation}
That is, in the extra range $g(t)$ is a mirror image of the original $g(t)$. It is important to notice that the induced definition of $X(t)$ in general is not symmetric around $t=T$. Now we write our solution as
\begin{equation}\label{gensol2}
X_0=\frac{k\sigma}{2\sigma'}\int^{2T}_{0} dt'g(t')\exp\big(-\frac{k\sigma}{\sigma'}|t-t'|\big)+b'\exp(\frac{k\sigma}{\sigma'}(t-T)).
\end{equation}
This is just a way of rewriting the same general solution: if we carry out the integral over the range $T<t'<2T$, then for $0<t<T$ we recover Eq. \ref{gensol}, with
\begin{equation}
b=b'+\frac{k\sigma}{2\sigma'}\int^{T}_{0} dt'\;g(t')\exp\big(-\frac{k\sigma}{\sigma'}(T-t')\big).
\end{equation}
Because the first term in Eq. \ref{gensol2} is an even function under $t\rightarrow 2T-t$, its time derivative vanishes at $t=T$. We can now write $X_0$ and $\dot X_0$ as
\begin{align}\label{sol}
X_0(T)&=\frac{k\sigma}{\sigma'}\int^{T}_{0} dt'g(t')\exp\big(-\frac{k\sigma}{\sigma'}(T-t')\big)+b'\nonumber\\
&\equiv \bar {X_0}+b';\\
\dot X_0(T)&=\frac{k\sigma}{\sigma'}b'.
\end{align}
Plugging into Eq. \ref{bc}, we get
\begin{equation}
b'=\frac{\sigma'}{k\sigma(1+\rho)}\big(a-\sigma^2/2+\frac{\rho\sigma}{\sigma'}(\dot X+k(X-\bar X_0)+\sigma'^2/2)\big).
\end{equation}
The answer looks complicated, but it has one surprising feature: $X_0(T)$ is actually independent of $\rho$. To see this, let us consider the differential equation that $X_0(T)$ needs to follow (that is, as we gradually accumulate more data and lengthen our time series, how does the most probable $X_0(T)$ at the end of the current time series change?).
First notice that $\bar X_0$ is the only integral that appears in $X_0(T)$, and it satisfies
\begin{equation}\label{barx0eq}
\bar X_0(T)+\frac{\sigma'}{k\sigma}\frac{{ d}\bar X_0(T)}{{ d}T}=g(T).
\end{equation}
The remaining part
\begin{align}
X_0(T)-\frac{\bar X_0(T)}{1+\rho}&=\frac{\sigma'}{k\sigma(1+\rho)}\big(a-\sigma^2/2+\frac{\rho\sigma}{\sigma'}(\dot X+kX+\sigma'^2/2)\big)\nonumber\\&\equiv C(T)
\end{align}
depends only on the local information at $T$. Plugging $\bar X_0(T)=(1+\rho)(X_0(T)-C(T))$ into Eq. \ref{barx0eq}, we then get
\begin{equation}
X_0(T)+\frac{\sigma'}{k\sigma}\dot X_0(T)=\frac{1}{1+\rho}g(T)+C+\frac{\sigma'}{k\sigma}\dot C.
\end{equation}
Combining the terms on the right-hand side, we find that the $\rho$ dependence cancels completely, and the equation becomes
\begin{equation}
X_0(T)+\frac{\sigma'}{k\sigma}\dot X_0(T)=\frac{\sigma'}{k\sigma}(a-\sigma^2/2)+\frac{\sigma'^2}{2k}+X(T)+\frac{1}{k}\dot X(T)\equiv h(T).
\end{equation}
We can also explicitly write the solution as
\begin{equation}\label{x0ind}
X_0(T)=\frac{k\sigma}{\sigma'}\int^{T}_{0} dt'h(t')\exp\big(-\frac{k\sigma}{\sigma'}(T-t')\big).
\end{equation}
An alternative way of deriving the same result is to integrate by parts all the terms proportional to $\rho$ in $g(t)$ in Eq. \ref{sol}. By explicit calculation one can check that all $\rho$ dependence cancels.
What does this mean? Naively it sounds paradoxical: how can the estimate of $X_0$ be independent of $\rho$, the correlation between the fluctuations of $X_0$ and the observed $X$? The resolution is that this equation describes strictly the estimate of $X_0$ at the very end of the time series. Once new data comes in, the best estimate at that time needs to be updated, and the updated estimate does depend on $\rho$. In other words, for a given time series, the estimates of $X_0(t)$ will be different for different $\rho$, but they will all end up at the same point.
Interestingly, this necessarily means that we can never learn the true $\rho$ from data if we only observe $X(t)$. This is because the evolution of $X(t)$ depends only on the difference between $X_0$ and $X$. Since our estimate of $X_0$ is independent of $\rho$, the likelihood of observing the data $X(t)$ is the same for any $\rho$.
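In practice, Eq. \ref{x0ind} says the latest estimate is an exponential moving average of $h(t)$ with decay rate $k\sigma/\sigma'$, so it can be computed causally with a one-line recursion. The sketch below is an illustrative discretization (the backward-difference choice for $\dot X$ and the function name are ours); note that $\rho$ never enters, in line with the discussion above.

```python
import numpy as np

def running_x0(X, dt, k, sigma, sigma_p, a):
    """Causal estimate X_0(T), Eq. (x0ind): an EMA of
    h(t) = (sigma'/(k*sigma))*(a - sigma^2/2) + sigma'^2/(2k) + X + Xdot/k,
    obtained by forward-Euler integration of
    X_0 + (sigma'/(k*sigma)) * dX_0/dT = h(T)."""
    Xdot = np.diff(X, prepend=X[0]) / dt       # simple backward difference
    h = (sigma_p / (k * sigma)) * (a - sigma**2 / 2) \
        + sigma_p**2 / (2 * k) + X + Xdot / k
    w = k * sigma / sigma_p * dt               # EMA weight per step
    x0 = np.empty_like(h)
    x0[0] = h[0]
    for i in range(1, len(h)):
        x0[i] = (1 - w) * x0[i - 1] + w * h[i]
    return x0
```

For a constant price with $a=\sigma^2/2$, the estimate settles at $X+\sigma'^2/2k$, the lag derived earlier.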
\subsection{Predicted Dynamics of Log Price}
We now look into how this estimate $X_0(T)$ affects $X$. Integrating by parts the term proportional to $\dot X$, we can write
\begin{equation}
X_0(T)=(1-\frac{\sigma}{\sigma'})\frac{k\sigma}{\sigma'}\int^T_0dt'\exp\big(-\frac{k\sigma}{\sigma'}(T-t')\big)X(t')+\frac{\sigma}{\sigma'}X(T)+\frac{\sigma'}{k\sigma}(a-\sigma^2/2)+\frac{\sigma'^2}{2k}.
\end{equation}
In other words, $X_0(T)$ is a weighted average of the exponential moving average (EMA) of past prices and the current price, with a constant shift proportional to the risk premium of the reasonable price. We can also write the predicted drift of $X$ as
\begin{equation}\label{sol_risk_premium}
\mu_X=k(X_0-X)-\sigma'^2/2=k(1-\frac{\sigma}{\sigma'})({\rm EMA}-X)+\frac{\sigma'}{\sigma}(a-\sigma^2/2).
\end{equation}
This is the central result of the paper. We can now easily see what the model predicts. First, the combination $k\sigma/\sigma'$ sets the scale of the average; the model is trend-following when $\sigma>\sigma'$, and mean-reversing when $\sigma<\sigma'$. When $\sigma=\sigma'$, $X_0$ always differs from $X$ by a constant, and the model is equivalent to a pure geometric Brownian process (i.e., the real price is the reasonable price). This makes intuitive sense: when $\sigma$ is large, the market price is often playing catch-up, so when a trend starts one can expect it to be just the beginning of a larger movement. When $\sigma'$ is large, the market price wiggles around freely in the short term, but in the longer term has to regress back to where the reasonable price is.
For a fixed set of parameters, this model thus has a limitation: it is always either trend-following or mean-reversing, and not flexible enough to dynamically generate regimes of different characters. Still, the virtue of the model is to connect the contrasting behaviors of trend following and mean reversion to the ratio of the volatilities of the market price and the reasonable price. It is interesting to see in this model that, when we allow $\sigma$ and $\sigma'$ to vary slowly with time, heteroskedasticity naturally leads to trend-following and mean-reversing regimes.
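The first equality in Eq. \ref{sol_risk_premium}, $\mu_X=k(X_0-X)-\sigma'^2/2$, reduces to a simple EMA signal. A minimal sketch of the state-dependent part of the drift (dropping the constant risk-premium offset; names are illustrative):

```python
import numpy as np

def drift_signal(X, dt, k, sigma, sigma_p):
    """State-dependent part of mu_X: k*(1 - sigma/sigma')*(EMA - X),
    with the EMA decaying at rate k*sigma/sigma'. A positive coefficient
    (sigma < sigma') gives mean reversion; negative, trend following."""
    w = k * sigma / sigma_p * dt
    ema = np.empty_like(X)
    ema[0] = X[0]
    for i in range(1, len(X)):
        ema[i] = (1 - w) * ema[i - 1] + w * X[i]
    return k * (1 - sigma / sigma_p) * (ema - X)
```

For instance, right after an upward jump the signal is negative in the mean-reversing regime ($\sigma<\sigma'$) and positive in the trend-following one ($\sigma>\sigma'$).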
In Eq. \ref{sol_risk_premium}, we have actually only derived the mean of the risk premium $\mu_X$. As $X_0(T)$ is still an unobserved random variable, it has a variance that contributes to the variance of $\mu_X$. Fortunately, in the continuum limit the variance of $\mu_X$ does not come into play in the evolution of $X$, because at every time step $X$ is observed and has no variance. With
\begin{equation}\label{sol_x}
d X=\mu_X dt+\sigma' dz',
\end{equation}
the variance of the change of $X$ (i.e., the log return) is
\begin{equation}
{\rm Var}(dX)={\rm Var}(\mu_X) dt^2+\sigma'^2 dt.
\end{equation}
As long as the variance of $\mu_X$ is finite as $dt\rightarrow 0$ (it is in this model), its contribution is negligible compared to the contribution from $dz'$. Therefore, if the model assumptions are valid (that is, $\sigma$, $\sigma'$, $k$, and $a$ are really constant in time and both $dz$ and $dz'$ are white noise), the optimal strategy for investing in the instrument is to hold an amount proportional to $\mu_X(t)/\sigma'$ at any given time.
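A sketch of the corresponding position rule (the lag convention and the absence of an overall scale are illustrative choices, not specified in the text):

```python
import numpy as np

def strategy_pnl(log_returns, mu_X, sigma_p):
    """Hold an amount proportional to mu_X/sigma', set one step before
    each return realizes; returns the per-step P&L of that holding."""
    position = mu_X[:-1] / sigma_p     # signal known before the return
    return position * log_returns[1:]
```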
To conclude this section, we note that this model also gives a theoretical explanation of the concept of ``Bollinger bands''\cite{bollinger} in technical analysis. According to our theory, whether one should buy or sell when the price touches the band edge depends on the ratio of the two volatilities.
\subsection{Fitting the Model Parameters}
In machine learning, it is common to use either the expectation-maximization technique\cite{EM} or the Viterbi algorithm\cite{viterbi} for models with hidden variables. In our case, however, since the model is integrable, it is more straightforward to simply integrate out $X_0$ and find the maximum likelihood estimates of the model parameters. In fact, because the model is Gaussian, integrating out $X_0$ is the same as plugging the most probable value we have found (this is effectively the Viterbi algorithm in the continuum) into the likelihood function.
Nevertheless, we shall not use Eq. \ref{gensol2} directly. The log likelihood function is more easily diagonalized in Fourier space, where it almost becomes a direct sum over frequencies; the only obstruction is that the price movement is not periodic. We therefore decompose $X$ and $X_0$ into periodic functions plus a constant drift on $[0,T]$. The drift is needed because the derivative term in the log likelihood, when transformed into Fourier space, picks up the contribution from the difference between the initial and final values. If we used only periodic components (which in principle can faithfully represent any function on $[0,T]$), we would in effect be saying that there is a very large jump from the final value back to the initial value just before the time series ends. Such a jump is extremely unlikely under the correct model; not including a constant drift would thus distort our fit.
First, to simplify the equations a little, we change variables to $X_0'=X_0+\sigma'^2t/2$ and $X'=X+\sigma'^2t/2$. Then we write
\begin{equation}
X_0'=v_0t+\sum_n\mathcal{X}_{0n} e^{i\omega_n t}
\end{equation}
\begin{equation}
X'=vt+\sum_n\mathcal{X}_ne^{i\omega_nt}
\end{equation}
with
\begin{equation}
\omega_n=\frac{2\pi n}{T};\; n\in \mathbf{Z}.
\end{equation}
We then have
\begin{equation}
\int_0^T dt\; \Delta X_0^2= (v_0-b)^2T+T\sum_n\omega_n^2|\mathcal{X}_{0n}|^2
\end{equation}
where $b\equiv a-\sigma^2/2+\sigma'^2/2$ (not to be confused with the integration constant $b$ in Eq. \ref{gensol}).
It is a bit messier for $\Delta X^2$ and $\Delta X\Delta X_0$, which contain cross terms between the constant drift and the Fourier components. We will keep these terms in the following calculation, but we shall see that when the series we analyze is reasonably well described by the model, they are small. We have
\begin{multline}\label{dx2}
\int^T_0 dt\; \Delta X^2\sim \int^T_0 dt\;\big(v+k(v-v_0)t+k(\mathcal X_0-\mathcal X_{00})\big)^2\\
+2k(v-v_0)T\sum_{n\neq 0}\big(\mathcal X_n+\frac{k}{i\omega_n}(\mathcal X_n-\mathcal X_{0n})\big)\\
+T\sum_{n> 0}\big(2\omega_n^2|\mathcal X_n|^2+2k^2|\mathcal X_n-\mathcal X_{0n}|^2-2i\omega_n k(\mathcal X_n \mathcal X_{0n}^*-c.c.)\big);
\end{multline}
\begin{multline}
\int^T_0 dt\; \Delta X\Delta X_0\sim\int^T_0 dt\; (v_0-b)\big(v+k(v-v_0)t+k(\mathcal X_0-\mathcal X_{00})\big)\\
+k(v-v_0)T\sum_{n\neq 0}\mathcal X_{0n}\\
+T\sum_{n > 0}\big(\omega^2_n(\mathcal X_{0n} \mathcal X_n^*+ c.c)+i\omega_nk(\mathcal X_{0n}\mathcal X_{n}^*-c.c.)\big).
\end{multline}
First we look at the terms that couple to the zero-frequency variables. Integrating out $v_0$, $\mathcal X_{00}$, and $b$, we get
\begin{equation}\label{nasty}
(\ln L)_0\propto \frac{6\Big(\sigma \big(\sum_{n\neq0}\frac{k}{i\omega_n}(\mathcal X_{n}-\mathcal X_{0n}) + \sum_{n\neq0}\mathcal X_{n}\big) - \sigma' \rho \sum_{n\neq0}\mathcal X_{0n}\Big)^2}{\sigma^2\sigma'^2(1-\rho^2)T}.
\end{equation}
Comparing this term to the remaining finite-frequency terms, which are proportional to $T$, we see that it is typically very small when $T$ is large. Specifically, the latter two terms in the numerator are negligible as long as the Fourier components do not scale with $T$. This holds as long as the drift in the series does not vary significantly as a function of time. (If it does, as in the case of a stock passing through the 2008 financial crisis, the model itself is not going to be a good fit anyway.) When we look at the finite-frequency solution of $\mathcal X_{0n}$ below, we will see that $(\mathcal X_n-\mathcal X_{0n})$ is proportional to $\omega_n$ at small $n$; this means that the first term is also small under the same condition.
The argument above indicates that Eq. \ref{nasty} is small compared to the finite-frequency terms. However, it does not say whether the absolute value of this term is much less than one. In a maximum likelihood estimate, the standard deviation of an estimated parameter is usually taken to be the range over which the log likelihood drops by $\frac12$. It is therefore necessary to keep terms of order $1$ in the exponential, even if they are small compared to the rest of the terms. Nevertheless, the argument does imply that the change to the normalization when we integrate over all possible $\mathcal X$ will be much less than one when we take the log. It is therefore safe to ignore Eq. \ref{nasty} when calculating the normalization.
If we go back one step, before integrating over $b$, and ignore the terms coupling to the finite frequencies, we get
\begin{equation}
(\ln L)_0\propto -\frac{(b-v)^2 T}{2\sigma^2}
\end{equation}
This implies that our maximum likelihood estimate of $b$ is $b=v$, with a standard deviation of $\sigma/\sqrt{T}$. The terms we ignore cause a systematic error that scales as $1/T$, which is much smaller than the standard deviation when $T$ is large.
Now we turn to the finite-frequency part of the log likelihood function. First, let us ignore the terms in Eq. \ref{nasty}; the log likelihood is then a direct sum over frequencies. Maximizing with respect to $\mathcal X_{0n}$, we find that it satisfies
\begin{equation}
(\sigma'^2\omega_n^2+\sigma^2k^2)\mathcal X_{0n}=\big(\sigma^2k^2+i\omega_nk(\sigma^2-\rho\sigma\sigma')+\rho\sigma\sigma'\omega_n^2\big)\mathcal X_n.
\end{equation}
Notice that it agrees with Eq. \ref{x0sol}, as it should. One can also see that $(\mathcal X_{n}-\mathcal X_{0n})$ is small and proportional to $\omega_n$ at small $n$, as long as $k$ is finite. Now, integrating out $\mathcal X_{0n}$, we get
\begin{equation}
L_n\propto \exp\bigg(-\frac{T|\mathcal X_n|^2(\omega_n^2+k^2)\omega_n^2}{\sigma'^2\omega_n^2+\sigma^2k^2}\bigg);
\end{equation}
the $\rho$-dependence is completely cancelled out! Also notice that if $\sigma=\sigma'$, the $k$-dependence is gone, and the likelihood function is the same as that of a pure random walk with standard deviation $\sigma$.
To get the full likelihood function, one only needs to compute the normalization constant in front by integrating out $\mathcal X_n$. When there is no interaction between different frequencies, this is straightforward:
\begin{equation}\label{L0}
\ln L=\sum_n \bigg(\log\frac{T(\omega_n^2+k^2)\omega_n^2}{2\pi(\sigma'^2\omega_n^2+\sigma^2k^2)}-\frac{T|\mathcal X_n|^2(\omega_n^2+k^2)\omega_n^2}{\sigma'^2\omega_n^2+\sigma^2k^2}\bigg).
\end{equation}
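The finite-frequency likelihood can be evaluated with an FFT. The sketch below is illustrative: it removes the endpoint drift, keeps only positive frequencies, and drops the zero-mode and Eq. \ref{nasty} cross terms, so the normalization conventions should not be taken as exact.

```python
import numpy as np

def finite_freq_loglik(X, dt, k, sigma, sigma_p):
    """Finite-frequency part of ln L, Eq. (L0)."""
    n = len(X)
    T = n * dt
    t = np.arange(n) * dt
    v = (X[-1] - X[0]) / ((n - 1) * dt)        # constant drift from endpoints
    coeffs = np.fft.rfft(X - v * t) / n        # Fourier components of X'
    omega = 2 * np.pi * np.arange(1, len(coeffs)) / T
    num = (omega**2 + k**2) * omega**2
    den = sigma_p**2 * omega**2 + sigma**2 * k**2
    amp2 = np.abs(coeffs[1:])**2
    return np.sum(np.log(T * num / (2 * np.pi * den)) - T * amp2 * num / den)
```

As a check of the remark above, when $\sigma=\sigma'$ the factor $(\omega_n^2+k^2)$ cancels between numerator and denominator, so the result is exactly independent of $k$.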
Now we include Eq. \ref{nasty}. The log likelihood is no longer a direct sum over frequencies; in fact, $\mathcal X_{0n}$ is now a function of all the $\mathcal X_n$. We can still solve for $\mathcal X_{0n}$ numerically and plug it into the exponential. Calculating the precise normalization with the inclusion of Eq. \ref{nasty}, however, is quite costly when the time series is long (it requires the log determinant of a $T\times T$ matrix), so we take advantage of the argument above and approximate it by the original normalization in Eq. \ref{L0}. It is then straightforward to compute the log likelihood numerically and maximize it with respect to the parameters $\sigma$, $\sigma'$, $k$, $a$, and $\rho$.
\\\\
In this section, we have thus ``solved'' the model: given the model parameters and the observed market prices, we can find the best estimate of the theoretical prices in the past, which depends on the market prices both before and after their time, and the latest theoretical price, which can only depend on the market prices before it. We found that the correlation between the prices does not affect the estimate of the latest theoretical price, and therefore does not affect the forecast of the future market price. From the observed market prices, we can also deduce the most probable values of the parameters $\sigma$, $\sigma'$, $k$, and $a$.
\section{Numerical Test of the Model}
In this section, first we verify our analytic solutions of the model, by applying them on time series that are generated by the model. We then look at real world data and see how this model applies.
\subsection{Verification of Analytic Solutions}
First we generate a time series using the model with some parameters; then we find $X_0$ and $X_0(T)$ according to Eqs. \ref{gensol2} and \ref{x0ind}. There are slight complications due to the fact that our time series is discrete: there is a small difference in the normalization and the exponent of the Green's function, and the precise location of the derivatives on the right-hand side needs to be specified. The correct discrete formulas can be derived by writing the model explicitly in terms of the discrete variables and their differences.
One interesting difference in the derivation, which we mention in passing, is that the ``variation'' in discrete time does not require integration by parts. The derivative of the price becomes a difference between consecutive prices and can easily be differentiated with respect to the two prices. Eq. \ref{bc}, instead of arising as a total derivative integrated to the boundary, comes directly from differentiating with respect to the price on the boundary. In the end, the discrete result approaches the continuum result when $k\Delta t\ll 1$.
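For the tests below, a minimal generator for the model can be sketched as follows (the Euler discretization choices, such as using the previous step in the drift, are illustrative):

```python
import numpy as np

def simulate(n, dt, k, sigma, sigma_p, a, rho, seed=0):
    """Sample path of the model: dX0 = (a - sigma^2/2)dt + sigma dz,
    dX = (k(X0 - X) - sigma'^2/2)dt + sigma' dz', with corr(dz, dz') = rho."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, 2))
    dz = z[:, 0]
    dzp = rho * z[:, 0] + np.sqrt(1.0 - rho**2) * z[:, 1]  # correlated noise
    X0 = np.zeros(n)
    X = np.zeros(n)
    for i in range(1, n):
        X0[i] = X0[i-1] + (a - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * dz[i]
        X[i] = X[i-1] + (k * (X0[i-1] - X[i-1]) - sigma_p**2 / 2) * dt \
               + sigma_p * np.sqrt(dt) * dzp[i]
    return X0, X
```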
\begin{figure}[h]
\includegraphics[width=15cm]{f1.png}
\caption{Randomly generated market price $X$, with model parameters $\sigma=0.5$, $\sigma'=1$, $k=0.2$, $a=0.125$, and $\rho=0$. $X_0$ in dark green is the underlying reasonable price offset by $-\sigma'^2/2k$, such that the market price is expected to follow it. $X_0(T)$ in red is the most probable reasonable price offset by the same amount if we only know the prices up to that time; $X_0(\rho=0)$ is the most probable reasonable price knowing the whole time series, again offset by $-\sigma'^2/2k$.}
\label{fig1}
\end{figure}
In Figure \ref{fig1}, we plot a randomly generated log price, with $\sigma\sqrt{\Delta t}=0.5$, $\sigma'\sqrt{\Delta t}=1$, $k\Delta t=0.2$, $a\Delta t=0.125$, and $\rho=0$. In the plot we have chosen the unit of time such that $\Delta t=1$; we use this unit from now on. As we can see, the blue line does have a tendency to move towards the red line. In this parameter regime, the red line fluctuates less than the blue line, and the price is mean-reversing. Notice that we cannot really recover the actual reasonable price, but the most probable reasonable price $X_0(\rho=0)$ at least gets close towards the end.
In Figure \ref{fig2}, we plot several maximum likelihood estimates of $X_0$ at different $\rho$. One can see that at the end of the series they all converge to roughly the same value (in discrete time there is actually some weak residual $\rho$ dependence).
\begin{figure}[h]
\includegraphics[width=15cm]{f2.png}
\caption{The underlying reasonable price $X_0$ and a few maximum likelihood estimates of $X_0$ at various $\rho$. Every series is shifted by the same amount $-\sigma'^2/2k$.}
\label{fig2}
\end{figure}
Now we proceed to fit the model parameters. We compare two fits, one that includes Eq. \ref{nasty} and one that does not. To start, we generate $20$ independent time series from the model with $500$ time steps each, using the parameter set $(\sigma,\sigma',k,a,\rho)=(0.05,0.1,0.2,0.002,0.5)$. We then find the maximum likelihood estimates of the parameters for either fit, and record their mean and standard deviation in the following two tables:
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
&ave&std1&std2\\
\hline
$\sigma$&0.0457&0.0144&0.005\\
\hline
$\sigma'$&0.0965&0.0079&0.0019\\
\hline
$k$&0.1919&0.0864&0.0158\\
\hline
$a$&0.0016&0.0027&0.0022\\
\hline
$\rho$&0.2827&0.4967&0.0238\\
\hline
\end{tabular}\;\;\;\;
\begin{tabular}{|c|c|c|c|}
\hline
&ave&std1&std2\\
\hline
$\sigma$&0.0434&0.0103&0.0041\\
\hline
$\sigma'$&0.0967&0.0046&0.0039\\
\hline
$k$&0.1879&0.0569&0.0122\\
\hline
$a$&0.0015&0.0026&0.0022\\
\hline
$\rho$&0.0241&0.0319&0.0272\\
\hline
\end{tabular}
\caption{The result of maximum likelihood estimates. The estimate on the left takes Eq. \ref{nasty} into consideration whereas the estimate on the right does not.}
\end{table}
In the tables, ``ave'' denotes the average of the parameter estimates over the $20$ runs, ``std1'' is the sample standard deviation of the $20$ runs, and ``std2'' is the sample average of the standard deviation estimate from the $\chi^2$ analysis where the maximum log likelihood falls by $1/2$. The first table comes from the MLE analysis in which we include the contribution of Eq. \ref{nasty}, whereas the second table is from when we ignore it.
In general, we see that the two analyses give fairly close results, except for the estimate of $\rho$. Excluding Eq. \ref{nasty} wrongly prefers $\rho=0$ for all runs, as one can see from the mean not being significantly different from zero and from the smallness of both standard deviations. Including it gives a more erratic behavior of $\rho$: the estimator seems to prefer some arbitrary $\rho$, different for each run, as shown by the large std1 and the small std2. In any case, our continuum conclusion that $\rho$ does not affect the dynamics of $X$ seems a reasonable approximation to make, as in either fit the estimated value of $\rho$ is far from the value set in the model, yet the other parameters are estimated correctly.
Overall, including the contribution of Eq. \ref{nasty} slightly improves the fit, as seen from the averages, though at the same time its estimates are more erratic, as seen from std1. The MLEs from both methods are biased in the same way and underestimate the two $\sigma$'s. In the following subsection, we will set $\rho=0$ and use the MLE without Eq. \ref{nasty} to extract the model parameters; we will report if the two MLEs return significantly different model parameters.
\subsection{Application on S\&P500}
Below is a plot of the S\&P500 index for the past 10 years:\footnote{ The data is downloaded from \href{https://research.stlouisfed.org/fred2/series/SP500/downloaddata}{https://research.stlouisfed.org/fred2/series/SP500/downloaddata}}
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{sp500.png}
\caption{S\&P500 index from 2005-08-24 to 2015-08-25.}
\label{fig3}
\end{figure}
Due to the financial crisis around 2008, it is not a good idea to fit the whole series with uniform volatilities $\sigma$, $\sigma'$ and risk premium $a$. Let us instead start from June 2009, when the market had somewhat recovered, and end just before the recent crash. The MLE procedure results in the optimal parameters $(\sigma,\sigma',k,a)=(0.0066, 0.0094, 0.0965, 0.0006)$. The fit is an improvement over the geometric Brownian hypothesis if we compare the two using the Akaike information criterion (AIC):
\begin{equation}
\delta {\rm AIC}=\delta (2k-2\ln L)=4-15.73=-11.73.
\end{equation}
In the equation, $k=2$ is the difference in the number of fitting parameters between the two models (not to be confused with the model parameter $k$), and we have calculated the log likelihood for each. (Conveniently, the geometric Brownian motion $dX/X=adt+\sigma dz$ with parameters $(\sigma,a)$ is given by the parameters $(\sigma,\sigma,0,a)$ in our model.) We can also check that the model is working by regressing its predicted risk premium against the actual log return:
\begin{equation}
r_i\equiv (\ln P_{i+1}-\ln P_i)\sim\mu_{Xi}.
\end{equation}
We get
\begin{verbatim}
Estimated Coefficients:
Estimate SE tStat pValue
(Intercept) -0.00014227 0.00034876 -0.40794 0.68337
x1 1.2212 0.43898 2.7818 0.0054706
Number of observations: 1560, Error degrees of freedom: 1558
Root Mean Squared Error: 0.00972
R-squared: 0.00494, Adjusted R-Squared 0.0043
F-statistic vs. constant model: 7.74, p-value = 0.00547
\end{verbatim}
This is to be compared with our theoretical prediction from the model
\begin{equation}
dX=\mu_X dt+\sigma' dz
\end{equation}
such that the intercept is zero, the slope ${\rm x1}=1$, and $R^2={\rm Var}(\mu_X)/({\rm Var}(\mu_X)+\sigma'^2)=0.0036$. (Notice that this sample variance of $\mu_X$ is different from the variance of the unobserved random variable $X_0(T)$ discussed in section 3.2.) We plot the obtained risk premium $\mu_X$ as a function of time in Fig. \ref{fig4}a.
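The regression check can be reproduced with plain least squares; this sketch is illustrative and assumes `returns` and `mu_X` are aligned arrays.

```python
import numpy as np

def regress_premium(returns, mu_X):
    """OLS of realized log returns on the predicted premium; the model
    predicts intercept 0 and slope 1."""
    A = np.column_stack([np.ones_like(mu_X), mu_X])
    (intercept, slope), *_ = np.linalg.lstsq(A, returns, rcond=None)
    resid = returns - A @ np.array([intercept, slope])
    r2 = 1.0 - resid.var() / returns.var()
    return intercept, slope, r2
```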
\begin{figure}[h]
\includegraphics[width=8cm]{muX.png}
\includegraphics[width=8cm]{earnings.png}
\caption{(a) The predicted risk premium $\mu_X$ as a function of time. The $y$-axis is in units of the average risk premium $a$. (b) The accumulated return curve. The green curve is holding a constant amount (so it is the same curve as in Fig. \ref{fig3}), and the blue curve is holding an amount proportional to the predicted risk premium. The holding amount is normalized such that the standard deviation of the return is the same as the original return.}
\label{fig4}
\end{figure}
If the model assumptions are correct, the optimal strategy is to hold an amount proportional to the risk premium. However, doing so results in the earnings curve shown in Fig. \ref{fig4}b: with the standard deviation normalized to be the same, we see that instead of the Sharpe ratio increasing by a factor of $\sqrt{E[\mu_X^2/a^2]}\sim\sqrt 2$ as theory predicts, it actually decreases by 10\%.
The reason this strategy is ineffective, given that the predicted risk premium does predict the return within the model specification, is the following. First, the volatility of the residual return, as is evident from the curve, is not constant. Even if the risk premium predicted by our model ends up being somewhat accurate, the holding amount then needs to be divided by the time-varying standard deviation. Second, one can check that the residual returns are not completely independent of one another, contrary to what the model assumes. With such correlation, the holding amounts need to be multiplied by the inverse of the correlation matrix.
When the homoskedasticity and independence conditions are satisfied, the strategy performs more satisfactorily. In fact, if we focus on the second half of the curve, from 2012 to 2015, the overall Sharpe ratio for the strategy is 75.9, whereas the Sharpe ratio for holding a constant amount is 62.1.
We conclude this section by considering using the strategy starting from 2005. The accumulated return is shown in Fig. \ref{fig5}. There is no theoretical foundation behind this, but the strategy seems to work pretty well even during the market collapse!
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{earnings2005.png}
\caption{Accumulated return from 2005 up to now, using the strategy. The green curve is holding a constant amount. Again the two curves are scaled such that the returns have identical sample standard deviations.}
\label{fig5}
\end{figure}
\section{Discussion}
In this paper, we have proposed a model in which there is an unobserved reasonable price following geometric Brownian motion with the reasonable risk premium, and the market price tries to follow it. Our main result is that the market price is trend-following if the standard deviation of the market price is smaller than that of the reasonable price, and mean-reversing if it is the other way around. We have also developed the MLE to estimate the model parameters from a given time series. We have shown that from 2009 up to now, the S\&P500 index is described better by the model than by a pure geometric Brownian motion, and that it is in the mean-reversing regime.
There are a few technical issues regarding the difference between the continuous-time and discrete-time formulations, which I deliberately avoided mentioning above.
One problem that is central to our result is whether $\rho$ really is not measurable and has no implications. In the continuum it certainly is, but from my numerical results this does not seem to be entirely true in discrete time. The MLE results all prefer some $\rho$, but the preference seems unrelated to the value set when the time series was generated. The most likely explanation is that this preference is a result of the approximations we have made, either when we ignore part or all of Eq. \ref{nasty} or when we ignore its contribution to the overall normalization. If everything were done exactly, there could still be a preference for some $\rho$, but it should not scale with the length of the time series.
Then there is the question of how the discrete time series approaches the continuous time. The answer is that it depends solely on the value of the parameter $k$ in the discrete formulation. If we view the discrete time series as an approximation to the continuous time, the parameter $k$ corresponds to $k\,dt$ of the continuous-time formulation. The analytical results in this paper are thus only valid when $k\ll 1$. One notable example is that, in addition to the mean given by Eq. \ref{sol_risk_premium}, the variance of $\mu_X$ will also need to be considered when $k$ is not small.
One question arises when we think about predicting in-sample returns using the predicted risk premium. The question is, does our MLE procedure minimize the error of this prediction, so that it selects the model parameters that produce the minimal error of the predicted return? It is not at all obvious from the calculation in section 3.3, especially since the likelihood function contains two errors, one from $X$ and one from $X_0$. The integration over $X_0$ does not eliminate the contributions from $\Delta X_0$, and plugging the most probable value of $X_0(t)$ into $\Delta X$ does not give the correct prediction error, since $X_0(t)$ is in general different from $X_0(T)$, which is what we plug in when we calculate the risk premium.
However, our conjecture is that the MLE does minimize the error of the predicted return in the continuum limit. This stems from the fact that in the continuum the evolution of $X$ is given by Eq. \ref{sol_x}, where we can just replace $\mu_X$ by its mean value. This necessarily means that the likelihood function of the original model integrated over $X_0$ should be the same as that calculated from Eq. \ref{sol_x}, which is proportional to $\exp(-\int dt\, \epsilon^2/2\sigma'^2)$.
When the discrete nature of the series becomes apparent, the randomness of $X_0(T)$ can no longer be ignored in Eq. \ref{sol_x}. The error of the predicted return becomes a sum of two parts: the first is the difference between the realized return and $X_0(T)$, and the second is the difference between the mean of $X_0(T)$ and the actual realization. The two parts are optimized with different coefficients, and it is unclear whether such optimization will result in the sum of their squares being minimized.
One interesting observation is that in the discrete form the model can be rearranged to look very similar to an ARMA(2,1) model with some fixed relations among its parameters; one big difference is that there are two random sources in the model, and we are not sure whether it can somehow be transformed into one.
Finally, we sketch some ideas on how this model can be used realistically. The immediate improvement is to include the possibility that the standard deviations can change with time. In the minimal extension it might be enough to add a few discrete variables, each describing a state of the model with a different set of parameters. Switching between the states can follow some additional Markovian dynamics, or the states can be made hidden, with the current state determined by some statistic. In practice we can also treat the predicted risk premium as a general indicator and use it along with past returns to form generalized AR(n) models, to eliminate the correlations among the errors of predicted returns. It would also be interesting to see whether this model can characterize dynamics at a much lower time scale, such as intraday movements in minutes or seconds.
It would also be interesting to extend this model to include multiple instruments (whether each instrument deviates from the covariance with the pricing portfolio on the mean-variance front, or the pricing portfolio itself deviates in its composition or price, or something in between) or to consider derivative pricing, such as options, when the underlying instrument follows the process prescribed by this model.
\section{Introduction}
\label{intro}
Nonabelian gauge symmetries have long been known to describe
fundamental interactions~\cite{Weinberg-book}. More recently, it has
been pointed out that they may also characterize emerging phenomena in
condensed-matter physics, see, e.g.,
Refs.~\cite{WNMXS-17,GASVW-18,SSST-19,GSF-19,Sachdev-19} and
references therein. As a consequence, the large-scale properties of
gauge models are also of interest in two or three dimensions.
We consider a lattice model of interacting scalar fields in the
presence of nonabelian gauge symmetries, which may be named scalar
chromodynamics or nonabelian Higgs model. In four space-time
dimensions it represents a paradigmatic example to discuss the
nonabelian Higgs mechanism, which is at the basis of the Standard
Model of fundamental interactions. The three-dimensional model may
also be relevant in condensed-matter physics, for systems with
emerging nonabelian gauge symmetries. Its phase diagram and its
behavior at the finite-temperature phase transitions has been
investigated in Refs.~\cite{BPV-19,BPV-20}. In this paper we extend
such a study to two-dimensional (2D) systems.
We consider a 2D lattice nonabelian gauge theory with multicomponent
scalar fields. It is defined starting from a maximally
O($M$)-symmetric multicomponent scalar model. The global symmetry is
partially gauged, obtaining a nonabelian gauge model, in which the
fields belong to the coset $S^M$/SU($N_c$), where $M=2 N_f N_c$,
$N_f$
is the number of flavors, and $S^M=\hbox{SO}(M)/\hbox{SO}(M-1)$ is the
$M$-dimensional sphere. According to the Mermin-Wagner
theorem~\cite{MW-66}, the model is always disordered for finite values
of the temperature. However, a critical behavior develops in the
zero-temperature limit. We investigate its universal features for
generic values of $N_c$ and $N_f\ge 2$, by means of finite-size
scaling (FSS) analyses of Monte Carlo (MC) simulations.
The results provide numerical evidence that the asymptotic
low-temperature behavior of these lattice nonabelian gauge models
belongs to the universality class of the 2D CP$^{N_f-1}$ field theory
when $N_c\ge 3$, and to that of the 2D Sp($N_f$) field theory for
$N_c=2$. This suggests that the renormalization-group (RG) flow of the
2D multiflavor lattice scalar chromodynamics associated with the coset
$S^M$/SU($N_c$) is asymptotically controlled by the 2D statistical
field theories associated with the symmetric
spaces~\cite{BHZ-80,ZJ-book} that have the same global symmetry, i.e.,
SU($N_f$) for $N_c\ge 3$ and Sp($N_f$) for $N_c=2$.
The paper is organized as follows. In Sec.~\ref{SQCD} we introduce the
lattice nonabelian gauge models that we consider. In Sec.~\ref{fsssec}
we discuss the general strategy we use to investigate the nature of
the low-temperature critical behavior. Then, in
Secs.~\ref{resultsncl3} and \ref{resnc2} we report the numerical
results for lattice models with $N_c\ge 3$ and $N_c=2$, respectively.
Finally, in Sec.~\ref{conclu} we summarize and draw our conclusions.
In App.~\ref{largebeta} we report some results on the minimum-energy
configurations of the models considered.
\section{Multiflavor lattice scalar chromodynamics}
\label{SQCD}
We consider a 2D lattice scalar nonabelian gauge theory obtained by
partially gauging a maximally symmetric model of complex matrix
variables $\varphi_{\bm x}^{af}$, where the indices $a=1,..,N_c$ and
$f=1,...,N_f$ are associated with the color and flavor degrees of
freedom, respectively.
We start from the maximally symmetric action
\begin{eqnarray}
S_s = - t \sum_{{\bm x},\mu} {\rm Re} \,
{\rm Tr}\,\varphi_{\bm x}^\dagger \varphi_{{\bm x}+\hat\mu} \,,
\qquad
{\rm Tr}\,\varphi_{\bm x}^\dagger \varphi_{\bm x} = 1\,,
\label{ullimit}
\end{eqnarray}
where the sum is over all sites and links of a square lattice and
$\hat{\mu}=\hat{1},\hat{2}$ denote the unit vectors along the lattice
directions. Model (\ref{ullimit}) with the unit-length constraint for
the $\varphi_{\bm x}$ variables is a particular limit of a model with
a quartic potential $\sum_{\bm x} V( {\rm Tr}\, \varphi^\dagger_{\bm
x}\,\varphi_{\bm x})$ of the form $V(X) = r X + {1\over 2} u\,
X^2$. Indeed, it can be obtained by simply setting $r+u=0$ and taking
the limit $u\to\infty$. In the following we set $t=1$ for simplicity,
which amounts to an appropriate choice of the temperature unit. It is
simple to see that the action $S_s$ has a global O($M$) symmetry, with
$M=2N_f N_c$. Indeed, it can also be written in terms of $M$-component
real vectors ${\bm s}_{\bm x}$ (which are the real and imaginary parts
of $\varphi_{\bm x}^{af})$ as
\begin{eqnarray}
S_s = - \sum_{{\bm x},\mu} {\bm s}_{\bm x}\cdot
{\bm s}_{{\bm x}+\hat{\mu}}\,,
\qquad
{\bm s}_{\bm x} \cdot {\bm s}_{\bm x} = 1\,.
\label{ullimits}
\end{eqnarray}
This is the standard nearest-neighbor $M$-vector lattice model.
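As a quick consistency check, the equivalence between the forms (\ref{ullimit}) and (\ref{ullimits}) can be verified numerically: stacking the real and imaginary parts of $\varphi_{\bm x}$ into a real $M$-vector turns ${\rm Re}\,{\rm Tr}\,\varphi_{\bm x}^\dagger \varphi_{{\bm x}+\hat\mu}$ into an ordinary dot product. A minimal Python/NumPy sketch (the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_phi(nc, nf):
    """Random complex Nc x Nf matrix with Tr(phi^dag phi) = 1."""
    phi = rng.normal(size=(nc, nf)) + 1j * rng.normal(size=(nc, nf))
    return phi / np.sqrt(np.trace(phi.conj().T @ phi).real)

def to_real_vector(phi):
    """Stack real and imaginary parts into a unit M-vector, M = 2*Nc*Nf."""
    return np.concatenate([phi.real.ravel(), phi.imag.ravel()])

nc, nf = 3, 2
phi1, phi2 = random_phi(nc, nf), random_phi(nc, nf)
s1, s2 = to_real_vector(phi1), to_real_vector(phi2)

# Re Tr(phi1^dag phi2) equals the O(M) dot product s1 . s2
lhs = np.trace(phi1.conj().T @ phi2).real
rhs = s1 @ s2
```

The unit-trace constraint on $\varphi_{\bm x}$ maps to the unit-length constraint on ${\bm s}_{\bm x}$ in the same way.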
We proceed by gauging some of the degrees of freedom using the Wilson
approach~\cite{Wilson-74}. We associate an SU($N_c$) matrix $U_{{\bm
x},\mu}$ with each lattice link [$({\bm x},\mu)$ denotes the link
that starts at site ${\bm x}$ in the $\hat\mu$ direction] and add a
Wilson kinetic term for the gauge fields. We obtain the action of the
2D lattice scalar chromodynamics defined by
\begin{eqnarray}
S_g = - N_f \sum_{{\bm x},\mu}
{\rm Re}\, {\rm Tr} \,\varphi_{\bm x}^\dagger \, U_{{\bm x},\mu}
\, \varphi_{{\bm x}+\hat{\mu}}
- {\gamma\over N_c} \sum_{{\bm x}} {\rm Re} \, {\rm Tr}\,
\Pi_{\bm x},\;
\label{hgauge}
\end{eqnarray}
where $\Pi_{\bm x}$ is the plaquette operator
\begin{equation}
\Pi_{\bm x}=
U_{{\bm x},\hat{1}} \,U_{{\bm x}+\hat{1},2}
\,U_{{\bm x}+\hat{2},1}^\dagger
\,U_{{\bm x},2}^\dagger
\,.
\label{plaquette}
\end{equation}
The plaquette parameter $\gamma$ plays the role of inverse gauge
coupling, and the $N_f$ and $N_c$ factors in Eq.~\eqref{hgauge} are
conventional. The partition function reads
\begin{eqnarray}
Z = \sum_{\{\varphi,U\}} e^{-\beta \,S_g}\,,\qquad \beta\equiv 1/T\,.
\label{partfun}
\end{eqnarray}
The lattice model (\ref{hgauge}) is invariant under
SU($N_c$) gauge transformations:
\begin{equation}
\varphi_{\bm x}\to W_{\bm x} \varphi_{\bm x}\,,\qquad
U_{{\bm x},\mu} \to W_{\bm x} U_{{\bm x},\mu} W_{{\bm x} +
\hat{\mu}}^\dagger\,,
\label{gautra}
\end{equation}
with $W_{\bm x}\in {\rm SU}(N_c)$. For $\gamma\to\infty$, the link
variables $U_{\bm x}$ become equal to the identity (modulo gauge
transformations), thus one recovers the ungauged model
(\ref{ullimit}), or equivalently the O($M$) vector model
(\ref{ullimits}).
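The gauge invariance of the action (\ref{hgauge}) under Eq.~(\ref{gautra}) can also be checked directly on a small periodic lattice. The following Python/NumPy sketch (the function names and the random-SU($N_c$) construction are ours) evaluates the action before and after a random local gauge transformation:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su(n):
    """Random SU(n) matrix: QR-orthonormalize a complex Gaussian, then fix det = 1."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))  # remove residual phases
    return q * np.linalg.det(q) ** (-1.0 / n)

def random_phi(nc, nf):
    phi = rng.normal(size=(nc, nf)) + 1j * rng.normal(size=(nc, nf))
    return phi / np.sqrt(np.trace(phi.conj().T @ phi).real)

def action(phi, U, gamma, nf, nc, L):
    """Action (hgauge): hopping term plus Wilson plaquette term, periodic b.c."""
    S = 0.0
    for x in range(L):
        for y in range(L):
            xp, yp = (x + 1) % L, (y + 1) % L
            S -= nf * np.trace(phi[x][y].conj().T @ U[x][y][0] @ phi[xp][y]).real
            S -= nf * np.trace(phi[x][y].conj().T @ U[x][y][1] @ phi[x][yp]).real
            plaq = U[x][y][0] @ U[xp][y][1] @ U[x][yp][0].conj().T @ U[x][y][1].conj().T
            S -= (gamma / nc) * np.trace(plaq).real
    return S

nc, nf, L, gamma = 3, 2, 3, 0.7
phi = [[random_phi(nc, nf) for _ in range(L)] for _ in range(L)]
U = [[[random_su(nc), random_su(nc)] for _ in range(L)] for _ in range(L)]
S0 = action(phi, U, gamma, nf, nc, L)

# gauge transformation (gautra): phi -> W phi, U_mu(x) -> W(x) U_mu(x) W(x+mu)^dag
W = [[random_su(nc) for _ in range(L)] for _ in range(L)]
phi_g = [[W[x][y] @ phi[x][y] for y in range(L)] for x in range(L)]
U_g = [[[W[x][y] @ U[x][y][0] @ W[(x + 1) % L][y].conj().T,
         W[x][y] @ U[x][y][1] @ W[x][(y + 1) % L].conj().T]
        for y in range(L)] for x in range(L)]
S1 = action(phi_g, U_g, gamma, nf, nc, L)
```

Both the hopping term and the plaquette trace are invariant, so $S_0=S_1$ up to rounding.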
For $N_f=1$ the model is trivial. Because of the unit-length
condition, using gauge transformations we can fix $\varphi_{\bm x}$ to
any given unit-length vector on the whole lattice: there is no
dynamics associated with the scalar field. As we shall see,
multiflavor models with $N_f \ge 2$ show instead a nontrivial
behavior.
After gauging, the residual global symmetry depends on the number of
flavors $N_f$ and of colors $N_c$. For $N_c\ge 3$, model
(\ref{hgauge}) is invariant under the transformation
\begin{equation}
\varphi_{\bm x} \to
\varphi_{\bm x} \, V\,,\qquad V\in {\rm U}(N_f)\,,
\label{varphitra}
\end{equation}
thus it has a global U($N_f$)$/\mathbb{Z}_{N_c}$ symmetry,
$\mathbb{Z}_{N_c}$ being the center of SU($N_c$). As discussed in
Ref.~\cite{BPV-20}, when $N_f<N_c$
\begin{equation}
\varphi_{\bm x} \to e^{i\theta} \varphi_{\bm x}
\label{u1varphi}
\end{equation}
can be realized by an appropriate SU($N_c$) local
transformation. Thus, the actual global symmetry group reduces to
SU($N_f$).
For $N_c=2$, model (\ref{hgauge}) is invariant under the larger group
Sp($N_f$)$/\mathbb{Z}_2$, where Sp$(N_f) \supset \hbox{U}(N_f)$ is the
compact complex symplectic group, see also
Refs.~\cite{WNMXS-17,Georgi-book,DP-14,BPV-19, BPV-20}. Indeed, if one
defines the 2$\times 2N_f$ matrix field
\begin{equation}
\Gamma_{\bm x}^{af} = \varphi_{\bm x}^{af}\,, \qquad \Gamma_{\bm
x}^{a(N_f + f)} = \sum_{b} \epsilon^{ab} \bar{\varphi}_{\bm
x}^{bf}\,,
\label{Gammadef}
\end{equation}
where $f=1,...,N_f$, $\epsilon^{ab}=-\epsilon^{ba}$,
$\epsilon^{12}=1$, the action (\ref{hgauge}) is invariant under the
global transformation
\begin{equation}
{\Gamma}_{\bm x}^{al} \to \sum_{m=1}^{2N_f} \Gamma_{\bm x}^{am} Y^{ml}\,,
\qquad Y \in {\rm Sp}(N_f)\,.
\label{invtraspn}
\end{equation}
We recall that the compact complex symplectic group Sp$(N_f)$ is the
group of the $2N_f\times 2N_f$ unitary matrices $U_{\rm sp}$
satisfying the condition
\begin{equation}
U_{\rm sp} \, J \, U_{\rm sp}^T = J\,,\qquad
J=
\left(\begin{array}{cc} \phantom{}0 & -I \\ I &
\phantom{-}0\end{array}\right)\,,
\label{mscond}
\end{equation}
where $I$ is the $N_f\times N_f$ identity matrix.
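Two facts behind the enlarged symmetry can be verified numerically: the hopping term depends on $\varphi$ only through $\Gamma$, since ${\rm Tr}\,\Gamma_1^\dagger\Gamma_2 = 2\,{\rm Re}\,{\rm Tr}\,\varphi_1^\dagger\varphi_2$, and U($N_f$) embeds into Sp($N_f$) as the block matrix ${\rm diag}(V,\bar V)$, which satisfies the condition (\ref{mscond}). A Python/NumPy sketch (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
nf = 3
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])  # epsilon^{ab}, epsilon^{12} = 1

def random_phi(nf):
    phi = rng.normal(size=(2, nf)) + 1j * rng.normal(size=(2, nf))
    return phi / np.sqrt(np.trace(phi.conj().T @ phi).real)

def gamma_of(phi):
    """Gamma^{af} = phi^{af},  Gamma^{a,Nf+f} = sum_b eps^{ab} conj(phi)^{bf}."""
    return np.hstack([phi, eps @ phi.conj()])

phi1, phi2 = random_phi(nf), random_phi(nf)
g1, g2 = gamma_of(phi1), gamma_of(phi2)

lhs = np.trace(g1.conj().T @ g2)
rhs = 2.0 * np.trace(phi1.conj().T @ phi2).real

# U(Nf) embeds into Sp(Nf) as Y = diag(V, conj(V)); check Y J Y^T = J
J = np.block([[np.zeros((nf, nf)), -np.eye(nf)], [np.eye(nf), np.zeros((nf, nf))]])
V, _ = np.linalg.qr(rng.normal(size=(nf, nf)) + 1j * rng.normal(size=(nf, nf)))
Y = np.block([[V, np.zeros((nf, nf))], [np.zeros((nf, nf)), V.conj()]])
```

The full Sp($N_f$) invariance then follows because right multiplication $\Gamma\to\Gamma Y$ with $Y$ symplectic preserves the constrained form of $\Gamma$.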
\section{Universal finite-size scaling}
\label{fsssec}
We exploit FSS techniques~\cite{FB-72,Barber-83,Privman-90,PV-02} to
study the nature of the asymptotic critical behavior of the model for
$T\to 0$. For this purpose we consider models defined on square
lattices of linear size $L$ with periodic boundary conditions.
We mostly focus on the correlations of the gauge-invariant variable
$Q_{\bm x}$ defined by
\begin{equation}
Q_{{\bm x}}^{fg} = P_{\bm x}^{fg}
-{1\over N_f} \delta^{fg}\,,
\qquad
P_{\bm x}^{fg} = \sum_a \bar{\varphi}_{\bm x}^{af}
\varphi_{\bm x}^{ag} \,,
\label{qdef}
\end{equation}
which is a hermitian and traceless $N_f\times N_f$ matrix. The
corresponding two-point correlation function is defined as
\begin{equation}
G({\bm x}-{\bm y}) = \langle {\rm Tr}\, Q_{\bm x} Q_{\bm y} \rangle\,,
\label{gxyp}
\end{equation}
where the translation invariance of the system has been taken into
account. We define the susceptibility $\chi=\sum_{\bm x} G({\bm x})$
and the correlation length
\begin{eqnarray}
\xi^2 = {1\over 4 \sin^2 (\pi/L)}
{\widetilde{G}({\bm 0}) - \widetilde{G}({\bm p}_m)\over
\widetilde{G}({\bm p}_m)}\,,
\label{xidefpb}
\end{eqnarray}
where $\widetilde{G}({\bm p})=\sum_{{\bm x}} e^{i{\bm p}\cdot {\bm x}}
G({\bm x})$ is the Fourier transform of $G({\bm x})$, and ${\bm p}_m =
(2\pi/L,0)$. We also consider the quartic cumulant (Binder) parameter
defined as
\begin{equation}
U = {\langle \mu_2^2\rangle \over \langle \mu_2 \rangle^2} \,, \qquad
\mu_2 = {1\over V^2}
\sum_{{\bm x},{\bm y}} {\rm Tr}\,Q_{\bm x} Q_{\bm y}\,,
\label{binderdef}
\end{equation}
where $V=L^2$.
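The observables above can be estimated from a single configuration by exploiting translation invariance, using $\widetilde G({\bm p}) = {\rm Tr}[\widetilde Q({\bm p})\widetilde Q({\bm p})^\dagger]/V$ with $\widetilde Q({\bm p})=\sum_{\bm x} e^{i{\bm p}\cdot{\bm x}} Q_{\bm x}$. A Python/NumPy sketch of these single-configuration estimators (helper names are ours; in an actual simulation one of course averages over configurations):

```python
import numpy as np

rng = np.random.default_rng(3)

def q_field(nc, nf, L):
    """Random Q_x on an L x L lattice: Q = P - 1/Nf, P^{fg} = sum_a conj(phi)^{af} phi^{ag}."""
    Q = np.empty((L, L, nf, nf), dtype=complex)
    for x in range(L):
        for y in range(L):
            phi = rng.normal(size=(nc, nf)) + 1j * rng.normal(size=(nc, nf))
            phi /= np.sqrt(np.trace(phi.conj().T @ phi).real)
            Q[x, y] = phi.conj().T @ phi - np.eye(nf) / nf
    return Q

def estimators(Q):
    """chi, xi^2, mu2 from one configuration, via Gtilde(p) = Tr[Qt(p) Qt(p)^dag]/V."""
    L = Q.shape[0]
    V = L * L
    Qt0 = Q.sum(axis=(0, 1))                               # Qtilde(0)
    phase = np.exp(2j * np.pi * np.arange(L) / L)
    Qtm = np.tensordot(phase, Q.sum(axis=1), axes=(0, 0))  # Qtilde(p_m), p_m = (2 pi/L, 0)
    G0 = np.trace(Qt0 @ Qt0.conj().T).real / V
    Gm = np.trace(Qtm @ Qtm.conj().T).real / V
    chi = G0
    xi2 = (G0 - Gm) / (4.0 * np.sin(np.pi / L) ** 2 * Gm)
    mu2 = G0 / V
    return chi, xi2, mu2

Q = q_field(nc=3, nf=2, L=4)
chi, xi2, mu2 = estimators(Q)

# cross-check mu2 against the direct double sum (1/V^2) sum_{x,y} Tr Q_x Q_y
L = Q.shape[0]
direct = sum(np.trace(Q[x1, y1] @ Q[x2, y2]).real
             for x1 in range(L) for y1 in range(L)
             for x2 in range(L) for y2 in range(L)) / (L * L) ** 2
```

The Binder parameter $U$ is then the MC average $\langle\mu_2^2\rangle/\langle\mu_2\rangle^2$ over such estimates.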
To identify the universality class of the asymptotic zero-temperature
behavior, we consider the Binder parameter $U$ as a function of the
ratio
\begin{equation}
R_\xi\equiv \xi/L\,.
\label{rxidef}
\end{equation}
Indeed, in the FSS limit we have (see, e.g., Ref.~\cite{BPV-19-2})
\begin{equation}
U(\beta,L) \approx F(R_\xi)\,,
\label{r12sca}
\end{equation}
where $F(x)$ is a universal scaling function that completely
characterizes the universality class of the transition.
Eq.~(\ref{r12sca}) is particularly convenient, as it allows us to
check the universality of the asymptotic zero-temperature behavior
without the need of tuning any parameter. Corrections to
Eq.~(\ref{r12sca}) decay as a power of $L$. In the case of
asymptotically free models, such as the 2D CP$^{N-1}$ and O($N$)
vector models, corrections decrease as $L^{-2}$, multiplied by powers
of $\ln L$ ~\cite{BPV-19-2,CP-98}.
Because of the universality of relation (\ref{r12sca}), we can use the
plots of $U$ versus $R_\xi$ to identify the models that belong to the
same universality class. If the data of $U$ for two different models
follow the same curve when plotted versus $R_\xi$, their critical
behavior is described by the same continuum quantum field theory.
This implies that any other dimensionless RG invariant quantity has
the same critical behavior in the two models, both in the
thermodynamic and in the FSS limit. An analogous strategy was
employed in Ref.~\cite{BPV-19-2} to study the critical behavior of the
2D Abelian-Higgs lattice model in the zero-temperature limit.
The asymptotic values of $F(R_{\xi})$ for $R_{\xi}\to 0$ and
$R_{\xi}\to\infty$ correspond to the values that $U$ takes in the
small-$\beta$ and large-$\beta$ limits. For $R_\xi\to 0$ we have
\begin{eqnarray}
\lim_{R_\xi\to 0} \;U = {N_f^2+1\over N_f^2-1}\,,
\label{uextr}
\end{eqnarray}
independently of the value of $N_c$. The large-$\beta$ limit is
discussed in App.~\ref{largebeta}. For $N_c \ge 3$ we have simply $U =
1$.
In the following we study the large-$\beta$ critical behavior of
lattice scalar chromodynamics for several values of $N_f$ and
$N_c$. We perform numerical simulations using the same upgrading
algorithm employed in three dimensions \cite{BPV-19, BPV-20}. The
analysis of the data of $U$ versus $R_\xi$ outlined above allows us to
conclude that the critical behavior only depends on the global
symmetry group of the model. For any $N_c \ge 3$, the critical
behavior belongs to the universality class of the 2D CP$^{N_f-1}$
field theory. Indeed, the FSS curves (\ref{r12sca}) for the model
(\ref{hgauge}) agree with those computed in the CP$^{N-1}$ model (we
use the results reported in Ref.~\cite{BPV-19-2}). For $N_c = 2$,
instead, the critical behavior is associated with that of the 2D
Sp($N_f$) field theory. Note that the parameter $\gamma$ appears to be
irrelevant in the RG sense (at least for $|\gamma|$ not too
large). Indeed, for all the values of $N_c$ and $N_f$, and for all the
positive and negative values of $\gamma$ we investigated, the
universal critical behavior does not depend on $\gamma$.
\section{SU($N_c$) gauge models with $N_c\ge 3$}
\label{resultsncl3}
\subsection{Numerical results}
\label{resnf2ncl2}
In this section we study the critical behavior of scalar
chromodynamics for some values of $N_f$ and of $N_c\ge 3$. We compute
the scaling curve (\ref{r12sca}) and compare it with the corresponding
one computed in the CP$^{N_f-1}$ model. Such a comparison provides
evidence that the asymptotic zero-temperature behavior for finite
values of $\gamma$ in a wide interval around $\gamma=0$ is described
by the 2D CP$^{N_f-1}$ field theory.
In Figs.~\ref{u-beta-nf2nc3} and \ref{u-rxi-nf2nc3} we show MC data
for the two-flavor model (\ref{hgauge}) with SU(3) gauge symmetry,
i.e., for $N_f=2$ and $N_c=3$, and $\gamma=0$. In
Fig.~\ref{u-beta-nf2nc3} the results for the Binder parameter are
shown as a function of $\beta$ for several lattice sizes. The curves
corresponding to different lattice sizes do not intersect, confirming
the absence of a phase transition at finite $\beta$, as expected from
the Mermin-Wagner theorem. The ratio $R_\xi$ behaves analogously. For
each lattice size $R_{\xi}$ is an increasing function of $\beta$,
seemingly divergent for $\beta\to\infty$, but no crossing is present
between curves corresponding to different $L$ values. In
Fig.~\ref{u-rxi-nf2nc3} the data of $U$ appear to approach a FSS curve
in the large-$L$ limit when plotted versus $R_\xi$, in agreement with
the FSS prediction (\ref{r12sca}). This asymptotic FSS curve is
consistent with that of the 2D CP$^1$ universality class (equivalent
to that of the O(3) vector model, see, e.g., Ref.~\cite{ZJ-book})
determined in Ref.~\cite{BPV-19-2}. Moreover, scaling corrections are
consistent with the expected $O(L^{-2})$ behavior.
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-beta-f2c3.eps}
\caption{Plot of $U$ versus $\beta$ for $N_f=2$, $N_c=3$, and
$\gamma=0$. The horizontal dashed line corresponds to $U=5/3$, the
asymptotic value for $\beta \to 0$. }
\label{u-beta-nf2nc3}
\end{figure}
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-f2c3.eps}
\caption{Plot of $U$ versus $R_\xi$ for $N_f=2$, $N_c=3$, and
$\gamma=0$. Data approach the universal FSS curve of the 2D CP$^1$
or O(3) universality class (full line, taken from
Ref.~\cite{BPV-19-2}). The horizontal dashed line corresponds to
$U=5/3$, the asymptotic value for $R_\xi\to 0$. }
\label{u-rxi-nf2nc3}
\end{figure}
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-f2c3-gamma-3.eps}
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-f2c3-gamma3.eps}
\caption{Plot of $U$ versus $R_\xi$ for $N_f=2$, $N_c=3$, and
$\gamma=-3$ (upper panel) and $\gamma=3$ (lower panel). Data
approach the universal FSS curve of the 2D CP$^1$ or O(3)
universality class (full line, taken from Ref.~\cite{BPV-19-2}).
The horizontal dashed line corresponds to $U=5/3$, the asymptotic
value for $R_\xi\to 0$. }
\label{u-rxi-nf2nc3-gamma}
\end{figure}
The behavior of the data for different values of the inverse gauge
coupling $\gamma$ shows that the FSS curve is independent of $\gamma$,
at least in a wide interval around $\gamma=0$, as can be seen from
Fig.~\ref{u-rxi-nf2nc3-gamma}, where data for $\gamma=\pm 3$ are
reported. Analogous results are obtained for $N_f=2$ and $N_c=4$, see
Fig.~\ref{u-rxi-nf2nc4}.
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-f2c4.eps}
\caption{Plot of $U$ versus $R_\xi$ for $N_f=2$, $N_c=4$, and
$\gamma=0$. Data approach the universal FSS curve of the 2D CP$^1$
or O(3) universality class (full line, taken from
Ref.~\cite{BPV-19-2}). The horizontal dashed line corresponds to
$U=5/3$, the asymptotic value for $R_\xi\to 0$. }
\label{u-rxi-nf2nc4}
\end{figure}
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-f3c3.eps}
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-f3c3-gamma2.eps}
\caption{Plot of $U$ versus $R_\xi$ for $N_f=3$, $N_c=3$. Results for
$\gamma=0$ (top) and $\gamma = 2$ (bottom). Data (empty symbols)
approach the universal FSS curve of the 2D CP$^2$ universality class
(CP$^2$ results, taken from Ref.~\cite{BPV-19-2}, are reported with
full symbols). The horizontal dashed line corresponds to $U=5/4$,
the asymptotic value for $R_\xi\to 0$. }
\label{u-rxi-nf3nc3}
\end{figure}
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-f4c3.eps}
\caption{Plot of $U$ versus $R_\xi$ for $N_f=4$, $N_c=3$, and
  $\gamma=0$. Data (empty symbols) approach the universal FSS curve of
  the 2D CP$^3$ universality class (CP$^3$ results, taken from
  Ref.~\cite{BPV-19-2}, are reported with full symbols). The
  horizontal dashed line corresponds to $U=17/15$, the asymptotic
  value for $R_\xi\to 0$. }
\label{u-rxi-nf4nc3}
\end{figure}
These results should be considered as a robust evidence that the
asymptotic low-temperature behavior of two-flavor
chromodynamics with SU(3) and SU(4) gauge symmetry belongs to the
universality class of the 2D CP$^1$ [equivalently, O(3)] field theory.
In Fig~\ref{u-rxi-nf3nc3} we report results for the three-flavor
lattice theory with SU(3) gauge symmetry. In this case, for both
$\gamma=0$ and $\gamma=2$, data appear to approach the FSS curve of
the 2D CP$^2$ model, obtained in Ref.~\cite{BPV-19-2} by numerical
simulations. This excellent agreement provides a robust indication
that the three-flavor lattice theory with SU(3) gauge theory has the
same asymptotic critical behavior as the 2D CP$^2$ model. Analogous
results are obtained for $N_f=4$, see Fig.~\ref{u-rxi-nf4nc3} for
results for $\gamma=0$. The FSS curve appears to approach that of the
2D CP${^3}$ model. We note that for $N_f=4$ larger scaling
corrections are present. However, they appear to be consistent with
an $O(L^{-2})$ behavior.
Up to now we have discussed the critical behavior of $Q$ correlations.
However, note that the model has the additional U(1) global
invariance, Eq.~(\ref{u1varphi}). As we have already discussed, for
$N_f < N_c$ such an invariance is only apparent, but in principle it
may be relevant for $N_f \ge N_c$. To understand its role, we have
studied the behavior of an appropriate order parameter. As discussed
in Ref.~\cite{BPV-20}, for $N_f=N_c$ an order parameter is provided by
the composite operator
\begin{equation}
Y_{\bm x} = {\rm det} \,\varphi_{\bm x}\,,
\label{detof}
\end{equation}
which is invariant under both the SU($N_c$) gauge transformations
(\ref{gautra}) and the global transformations $\varphi_{\bm x} \to
\varphi_{\bm x} \, V$ with $V\in {\rm SU}(N_f)$. Starting from $Y_{\bm
x}$, one can define a correlation function
\begin{equation}
G_Y({\bm x} - {\bm y}) = \langle \bar{Y}_{\bm x} {Y}_{\bm y} \rangle,
\end{equation}
and a correlation length $\xi_Y$, using Eq.~(\ref{xidefpb}). Results
for $\xi_Y$ for $N_f=N_c=3$ and $\gamma=0$ are presented in
Fig.~\ref{u1-xi-nf3nc3}. Apparently, $\xi_Y$ remains finite and very
small ($\xi_Y \approx 0.5$) as $\beta$ increases. The U(1) flavor
modes are clearly not relevant for the critical behavior, which is
completely controlled by the U(1)-invariant modes encoded in $Q_{\bm
x}$. The possibility of a U(1) critical behavior, which would imply
the presence of a finite-temperature Berezinskii-Kosterlitz-Thouless
transition~\cite{KT-73,Berezinskii-70,Kosterlitz-74,JKKN-77}, is
excluded by the MC data. The behavior we observe is completely
analogous to what occurs in three dimensions at finite
temperature~\cite{BPV-20}.
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u1-xi-f3c3.eps}
\caption{Plot of the correlation length $\xi_Y$ associated with the
correlation function $\langle \bar{Y}_{\bm x} Y_{\bm y}\rangle$
versus $\beta$. The quantity $Y_{\bm x}$ is defined in
Eq.~\eqref{detof}. Results for $N_f=N_c=3$ and $\gamma=0$. }
\label{u1-xi-nf3nc3}
\end{figure}
\subsection{Universality class of the asymptotic
low-temperature behavior}
\label{univclassesncl2}
The numerical FSS analyses reported above suggest that, for $N_c \ge
3$, the low-temperature asymptotic behavior of scalar chromodynamics
with $N_f$ flavors depends only on $N_f$. Irrespective of the values
of $N_c$ and of $\gamma$, the critical behavior is the same as that of
the 2D CP$^{N_f-1}$ model.
Before presenting further arguments to support such a conclusion, we
recall some features of the 2D CP$^{N-1}$
model~\cite{Witten-79,ZJ-book}. This is a 2D quantum field theory
defined on a complex projective space, isomorphic to the symmetric
space U($N$)$/[\mathrm{U(1)}\times \mathrm{U}(N-1)]$. Its Lagrangian
reads
\begin{eqnarray}
&&{\cal L} = {1\over 2 g}
\overline{D_{\mu}{\bm z}}\cdot D_\mu {\bm z}\,, \qquad
\bar{\bm z}\cdot {\bm z}=1\,,\qquad
\label{contham}\\
&&
D_\mu =
\partial_\mu + i A_\mu\,, \qquad
A_\mu = i\bar{{\bm z}}\cdot \partial_\mu {\bm z}\,,
\nonumber
\end{eqnarray}
where ${\bm z}$ is an $N$-component complex field, and $A_\mu$ is a
composite gauge field. The Lagrangian is invariant under the local
U(1) gauge transformations ${\bm z}({\bm x})\to e^{i\theta({\bm x})}
{\bm z}({\bm x})$, and the global transformations ${\bm z}({\bm x})\to
W {\bm z}({\bm x})$ with $W\in {\rm SU}(N)$. The global invariance
group is SU($N$)$/\mathbb{Z}_N$ (again global transformations
differing by a $\mathbb{Z}_N$ factor are gauge equivalent).
For $N=2$ the CP$^1$ field theory is locally isomorphic to the O(3)
non-linear $\sigma$ model with the identification of the
three-component real vector $s_{\bm x}^a=\sum_{ij} \bar{z}_{\bm x}^i
\sigma_{ij}^{a} z_{\bm x}^j$, where $a=1,2,3$ and $\sigma^{a}$ are the
Pauli matrices. Various lattice formulations of CP$^{N-1}$ models
have been considered, see, e.g., Refs.~\cite{CR-93,CRV-92}. The
simplest formulation is
\begin{equation}
S_{CP} = - J \sum_{{\bm x}\,\mu} | \bar{\bm{z}}_{\bm x} \cdot {\bm
z}_{{\bm x}+\hat\mu} |^2 = - J \sum_{{\bm x}\mu} {\rm Tr} \,{\cal
P}_{\bm x} {\cal P}_{{\bm x}+\hat{\mu}} \,,
\label{hcpn}
\end{equation}
where
\begin{equation}
{\cal P}_{\bm x}^{ab} = \bar{z}_{\bm x}^a z_{\bm x}^b\,
\label{pxdef}
\end{equation}
is a projector, i.e., it satisfies ${\cal P}_{\bm x} = {\cal P}_{\bm
x}^2$. This explicitly shows that 2D CP$^{N-1}$ theories describe
the dynamics of projectors on $N$-dimensional complex spaces.
CP$^{N-1}$ models can also be obtained by considering the action
(\ref{hgauge}) with $\gamma = 0$, $z$ replacing the field $\varphi$
and using U(1) gauge fields $U_{{\bm x},\mu}$.
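The projector property of ${\cal P}_{\bm x}$, the equality of the two forms of the link term in Eq.~(\ref{hcpn}), and the insensitivity of the action to local U(1) phases can all be checked in a few lines (Python/NumPy sketch, names ours):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3

def random_z(N):
    z = rng.normal(size=N) + 1j * rng.normal(size=N)
    return z / np.linalg.norm(z)

z1, z2 = random_z(N), random_z(N)
P1 = np.outer(z1.conj(), z1)  # P^{ab} = conj(z)^a z^b
P2 = np.outer(z2.conj(), z2)

# the two forms of the link term in (hcpn) agree
t1 = abs(np.vdot(z1, z2)) ** 2
t2 = np.trace(P1 @ P2).real

# the link term is blind to local U(1) phases z -> e^{i theta} z
z1p = np.exp(1j * 0.37) * z1
t1p = abs(np.vdot(z1p, z2)) ** 2
```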
In 2D CP$^{N-1}$ models correlations are always short-ranged at finite
$\beta$ \cite{MW-66}. A critical behavior is only observed for $T\to
0$. In this limit the correlation length increases exponentially
as~\cite{ZJ-book,CR-93}
\begin{equation}
\xi \sim T^p e^{c/T}\,.
\label{xibeta}
\end{equation}
This behavior is related to the asymptotic-freedom property of these
models, which is shared with quantum chromodynamics, the four
dimensional theory of strong interactions. An analogous exponential
behavior is expected to characterize all statistical lattice theories
belonging to the same universality class, therefore also the 2D
$N$-component Abelian-Higgs lattice model~\cite{BPV-19-2} (which is a
lattice version of scalar electrodynamics), and, as we shall argue,
the 2D scalar chromodynamics with $N_f$ flavors and $N_c\ge 3$.
The numerical results reported in Sec.~\ref{resnf2ncl2} show that the
asymptotic low-temperature behavior of the model with $N_c\ge 3$ is
the same as that of the 2D CP$^{N_f-1}$ models or, equivalently, of
the 2D $N_f$-component Abelian-Higgs model with a U(1) gauge
symmetry. This is certainly quite surprising. We shall now argue that
the correspondence is strictly related to the identical nature of the
minimum-energy configurations, which represent the background for the
spin waves that are responsible for the zero-temperature critical
behavior.
The nature of the minimum-energy configurations is discussed in
App.~\ref{largebeta}. For $\gamma \ge 0$, such configurations are
those for which
\begin{equation}
{\rm Re}\,{\rm Tr} \,\varphi_{\bm x}^\dagger
U_{{\bm x},\mu} \varphi_{{\bm x}+\hat\mu} = 1\,
\label{minene}
\end{equation}
on each lattice link. In the appendix, by combining exact and
numerical results, we show that, for $\beta\to\infty$ and $N_c\ge 3$,
by appropriately fixing the gauge, the configurations that dominate
the statistical average have the form
\begin{equation}
\Pi_{\bm x} =
\begin{pmatrix} V & 0 \\
0 & 1
\end{pmatrix}\,,
\label{Pi-largebeta-testo}
\end{equation}
where $V$ is an SU($N_c-1$) matrix, and
\begin{equation}
\begin{array}{ll}
\varphi^{af} = 0 \qquad & {a < N_c}\, , \\
\varphi^{af} = z^f \qquad & {a = N_c}\, ,
\end{array}
\label{phi-largebeta-testo}
\end{equation}
where $z^f$ is a unit-length $N_f$-dimensional vector. In other words,
the analysis shows that gauge and $\varphi$ fields completely
decouple. Moreover, the $\varphi$ field becomes equivalent to a single
unit-length $N_f$-dimensional vector, which is the fundamental field
of the CP$^{N_f-1}$ model. Stated differently, the operator $P_{\bm
x}$ becomes a projector, i.e., satisfies $P_{\bm x}^2 = P_{\bm x}$,
for $T\to 0$. At this point, however, we cannot yet argue that the
large-$\beta$ behavior of scalar chromodynamics and of the
CP$^{N_f-1}$ model is the same, because in our factorization there is
no U(1) gauge symmetry. On the other hand, our numerical data also show that the
critical behavior is only associated with the order parameter $Q_{\bm
x}$: the U(1) modes do not order in the large-$\beta$ limit. This is
also confirmed by the detailed analysis of the low-temperature
configurations presented in Ref.~\cite{BPV-20}. Therefore, in the
effective theory we can quotient out the U(1) degrees of freedom,
which are irrelevant for the behavior of the order parameter $Q_{\bm
x}$, i.e., we can reintroduce the U(1) gauge symmetry. Once this is
done, scalar chromodynamics and the CP$^{N_f-1}$ model are expected to
have the same critical large-$\beta$ behavior.
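For configurations of the form (\ref{phi-largebeta-testo}), the statement that $P_{\bm x}$ becomes a rank-one projector, i.e., a CP$^{N_f-1}$ degree of freedom, is immediate to verify numerically (Python/NumPy sketch, names ours):

```python
import numpy as np

rng = np.random.default_rng(5)
nc, nf = 3, 2

# configuration of the form (phi-largebeta-testo): only the color-Nc row is nonzero
z = rng.normal(size=nf) + 1j * rng.normal(size=nf)
z /= np.linalg.norm(z)
phi = np.zeros((nc, nf), dtype=complex)
phi[nc - 1] = z

P = phi.conj().T @ phi  # P^{fg} = sum_a conj(phi)^{af} phi^{ag}
```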
It is interesting to observe that CP$^{N_f-1}$ behavior has also been
observed for several negative values of $\gamma$. This is not an
obvious result, as the system is frustrated. Also in this case, the
result is explained by the nature of the low-temperature
configurations. As discussed in App.~\ref{largebeta} for a specific
value of $\gamma$, $\gamma=-1$, the relevant configurations can again
be parametrized as in Eq.~(\ref{phi-largebeta-testo}), modulo gauge
transformations.
This phenomenological argument explains the numerical evidence that
the asymptotic zero-temperature behavior for $N_c\ge 3$ is the same
as that of the CP$^{N_f-1}$ continuum theory. Note that this scenario
does not apply to nonabelian gauge theories with $N_c=2$. As discussed
in App.~\ref{largebeta}, the typical low-temperature configurations
cannot be parametrized as in Eq.~(\ref{phi-largebeta-testo}), implying
a different critical behavior. We shall argue that it corresponds to
that of the 2D Sp($N_f$) field theories.
\section{SU(2) gauge models}
\label{resnc2}
We now discuss the behavior of models with SU(2) gauge symmetry. In
this case the global symmetry group ~\cite{BPV-19,BPV-20} is
Sp($N_f$)/$\mathbb{Z}_2$. In the two-flavor case, because of the
isomorphism Sp(2)$/\mathbb{Z}_2=$SO(5), an O(5) symmetry emerges.
Because of the symmetry enlargement, the order-parameter field is the
$2N_f\times 2N_f$ matrix
\begin{equation}
{\cal T}_{\bm x}^{lm} =
\sum_a \overline{\Gamma}_{\bm x}^{al} \Gamma_{\bm x}^{am} -
{\delta^{lm} \over N_f} \,,
\label{Tdef}
\end{equation}
where the matrix $\Gamma_{\bm x}$ is defined in Eq.~(\ref{Gammadef}).
For $f,g=1,\ldots,N_f$, ${\cal T}_{\bm x}^{lm}$ can be written in the
block form
\begin{equation} \label{tdrel}
\begin{aligned}
&{\cal T}_{\bm x}^{f,g}=Q_{\bm x}^{fg}\,,
& &{\cal T}_{\bm x}^{f,g+N_f} = \bar{D}_{\bm x}^{fg}\,, \\
&{\cal T}_{\bm x}^{f+N_f,g} = -D_{\bm x}^{fg}\,,
& &{\cal T}_{\bm x}^{f+N_f,g+N_f}=Q_{\bm x}^{gf}\,,
\end{aligned}
\end{equation}
where
\begin{equation}
D_{\bm x}^{fg} = \sum_{ab} \epsilon^{ab} \varphi_{\bm x}^{af}
\varphi_{\bm x}^{bg}\, .
\end{equation}
The order parameter ${\cal T}_{\bm x}$ is hermitian and satisfies
\begin{equation}
J \overline{\cal T}_{\bm x} J + {\cal T}_{\bm x} = 0\,,
\label{tcostr}
\end{equation}
where the matrix $J$ is defined in Eq.~(\ref{mscond}).
For $N_f=2$ the matrix ${\cal T}_{\bm x}$ can be parametrized by a
five-dimensional real vector ${\bm \Phi}_{\bm x}$. The first three
components are given by
\begin{equation}
\Phi_{\bm x}^k \equiv \sum_{fg} \sigma^k_{fg} Q_{\bm x}^{fg}\,,
\quad k=1,2,3\,,
\label{phidef}
\end{equation}
while the fourth and fifth components are the real and imaginary parts
of
\begin{equation}
{1\over2} \sum_{fg}\epsilon_{fg} D^{fg}_{\bm x}
\equiv
\Phi_{\bm x}^4 + i\Phi_{\bm x}^5 \,.
\label{psidef}
\end{equation}
The parametrization of ${\cal T}_{\bm x}$ in terms of $\Phi_{\bm x}$
effectively implements the isomorphism between the Sp(2)/${\mathbb
Z}_2$ and the SO(5) groups, since an Sp(2) transformation of ${\cal
T}_{\bm x}$ maps to an SO(5) rotation of ${\bm\Phi}_{\bm x}$.
Moreover, the unit-length condition for $\varphi$ implies
\begin{equation}
{\bm \Phi}_{\bm x}\cdot {\bm \Phi}_{\bm x} = 1\,.
\label{normphipsi}
\end{equation}
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-f2c2.eps}
\caption{Plot of $U_r$ versus $R_\xi$ for $N_f=2$, $N_c=2$, and
$\gamma=0$. The data approach a universal FSS curve, which
corresponds to that of the standard O(5) nearest-neighbor vector
model (full line, see Fig.~\ref{u-rxi-o5}). The horizontal dashed
line corresponds to the asymptotic value $U_r=7/5$ for $R_\xi\to
0$.}
\label{u-rxi-nf2nc2}
\end{figure}
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-o5.eps}
\caption{Plot of $U$ versus $R_\xi$ for the O(5) vector universality
class, as obtained by MC simulations of the nearest-neighbor O(5)
    vector lattice model. The full line~\cite{footnote-interpolation}
    is an interpolation of the MC data up to $R_\xi\lesssim 0.8$. It
    provides an approximation of the universal FSS curve, with an
    accuracy better than 0.5\% (we include the uncertainty arising from
scaling corrections). The horizontal dashed line corresponds to the
asymptotic value $U=7/5$ for $R_\xi\to 0$; $U\to 1$ for
$R_\xi\to\infty$. }
\label{u-rxi-o5}
\end{figure}
\begin{figure}[tbp]
\includegraphics*[scale=\twocolumn@sw{0.3}{0.4}]{u-rxi-f2c2-gamma2_-2.eps}
\caption{Plot of $U_r$ versus $R_\xi$ for $N_f=2$, $N_c=2$, and
$\gamma=\pm 2$. The data approach the O(5) FSS curve in the large
$L$ limit (full line~\cite{footnote-interpolation}). The horizontal
dashed line corresponds to the asymptotic value $U_r=7/5$ for
$R_\xi\to 0$.}
\label{u-rxi-nf2nc2-gamma}
\end{figure}
The discussion of the previous section leads us to conjecture that the
global Sp($N_f$)$/\mathbb{Z}_2$ symmetry uniquely determines the
asymptotic zero-temperature critical behavior. For $N_f=2$ this would
imply that the SU(2) gauge theory has the same zero-temperature
behavior of the O(5) vector model. An analogous conjecture proved to
be true in the three-dimensional case~\cite{BPV-19,BPV-20}. To
perform the correct universality check for $N_f=2$, as discussed in
detail in Ref.~\cite{BPV-20}, it is important to consider a Binder
parameter in the SU(2) gauge theory that maps onto the usual vector
O(5) Binder parameter under the isomorphism Sp(2)$/\mathbb{Z}_2
\to $ SO(5). The Binder parameter $U$ defined in
Eq.~(\ref{binderdef}) is not the appropriate one since it only
involves three components of $\Phi_{\bm x}^k$, see
Eq.~(\ref{phidef}). A straightforward group-theory computation shows
that the correct correspondence is achieved by defining the related
quantity~\cite{BPV-19,BPV-20}
\begin{equation}
U_r = {21\over 25}\, U\,.
\label{urdef}
\end{equation}
As for $R_\xi$, the quantity computed using Eq.~(\ref{xidefpb}) corresponds
exactly to the analogous quantity computed in the O(5) vector model.
The results shown in Fig.~\ref{u-rxi-nf2nc2} clearly support the
conjecture. Indeed, the MC data of $U_r$ collapse (without appreciable
scaling violations) on a unique curve when plotted versus $R_\xi$,
which is consistent with that of the Binder parameter $U$ versus
$R_\xi$ for the 2D O(5) vector model (with $U$ and $R_\xi$ defined
analogously in terms of $\Phi$ correlations~\cite{BPV-19-2}). The
O(5) FSS curve is obtained by MC simulations (using the cluster
algorithm) of the nearest-neighbor O(5) vector model (\ref{ullimits}),
see Fig.~\ref{u-rxi-o5}. Again the role of the inverse gauge coupling
is irrelevant. It does not change the universal features of the
low-temperature asymptotic behavior, as shown in
Fig.~\ref{u-rxi-nf2nc2-gamma}, where we report results for $\gamma=2$
and $-2$.
The numerical analysis reported above leads us to conjecture that the
low-temperature asymptotic behavior of the scalar SU(2) chromodynamics
with $N_f$ flavors belongs to the universality class associated with
the 2D Sp($N_f$) field theory. The fundamental field is a complex
$2N_f\times 2N_f$ order-parameter field $\Psi_{\bm x}$, which formally
represents a coarse-grained version of ${\cal T}_{\bm x}$, defined in
Eq.~(\ref{Tdef}). It is hermitian, traceless, and satisfies
Eq.~(\ref{tcostr}). If we write
\begin{equation}
\Psi =
\left(\begin{array}{cc} A_1 & A_2 \\ A_3 &
A_4\end{array}\right)\,,
\label{psia}
\end{equation}
where $A_i$ are $N_f\times N_f$ matrix fields, the conditions required
are that $A_1$ is hermitian and traceless, $A_3$ is antisymmetric,
$A_4 = \bar{A}_1$, and $A_3=-\bar{A}_2$. The corresponding 2D field
theory is defined by the Lagrangian
\begin{eqnarray}
{\cal L}_{\rm Sp} = {1\over g} \,{\rm Tr}\left[
\partial_{\mu} \Psi^\dagger\, \partial_{\mu} \Psi\right]\,,\qquad
{\rm Tr} \,\Psi^\dagger \Psi = 1\,.
\label{conthamsp}
\end{eqnarray}
For $N_f=2$, using the correspondence
\begin{eqnarray}
&&A_1 ={1\over 2}
\left(\begin{array}{cc} \Phi^3 & \Phi^1-i\Phi^2 \\ \Phi^1+i\Phi^2 &
-\Phi^3\end{array}\right)\,, \label{a1nf2}\\
&&A_2 = {1\over 2}
\left(\begin{array}{cc} 0 & \Phi^4+i\Phi^5 \\ -\Phi^4-i\Phi^5 &
0\end{array}\right)\,, \nonumber
\end{eqnarray}
one can easily show that the Sp(2) field theory is equivalent
to the O(5) $\sigma$-model with Lagrangian
\begin{eqnarray}
{\cal L}_{\rm O} = {1\over g} \,
\partial_{\mu} \Phi \cdot \partial_{\mu} \Phi \,,\qquad
\Phi \cdot \Phi = 1\,.
\label{conthamspOM}
\end{eqnarray}
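As a sanity check of the correspondence (\ref{a1nf2}), the following sketch (not part of the original analysis) builds $\Psi$ from a random unit vector $\Phi$ and verifies numerically that $\Psi$ is hermitian and traceless, that it satisfies Eq.~(\ref{tcostr}), and that ${\rm Tr}\,\Psi^\dagger\Psi=\Phi\cdot\Phi=1$. The explicit block form used for $J$ is an assumption here, since Eq.~(\ref{mscond}) is not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.normal(size=5)
Phi /= np.linalg.norm(Phi)            # enforce Phi . Phi = 1

# Blocks of Psi for N_f = 2, following Eq. (a1nf2); Phi[k] stands for Phi^{k+1}
A1 = 0.5 * np.array([[Phi[2], Phi[0] - 1j * Phi[1]],
                     [Phi[0] + 1j * Phi[1], -Phi[2]]])
A2 = 0.5 * np.array([[0.0, Phi[3] + 1j * Phi[4]],
                     [-(Phi[3] + 1j * Phi[4]), 0.0]])
A3 = -A2.conj()                       # constraint A3 = -conj(A2)
A4 = A1.conj()                        # constraint A4 = conj(A1)
Psi = np.block([[A1, A2], [A3, A4]])

# Assumed standard symplectic form for the matrix J of Eq. (mscond)
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
Jmat = np.block([[Z2, I2], [-I2, Z2]])

assert np.allclose(Psi, Psi.conj().T)                      # hermitian
assert abs(np.trace(Psi)) < 1e-12                          # traceless
assert np.allclose(Jmat @ Psi.conj() @ Jmat + Psi, 0.0)    # Eq. (tcostr)
assert np.isclose(np.trace(Psi.conj().T @ Psi).real, 1.0)  # unit norm
```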
\section{Conclusions}
\label{conclu}
We have studied a 2D lattice nonabelian gauge model with
multicomponent scalar fields, focusing on the role that global and
local nonabelian gauge symmetries play in determining the universal
features of the asymptotic low-temperature behavior. The lattice
model we consider is obtained by partially gauging a maximally
O($M$)-symmetric multicomponent scalar model, using the Wilson lattice
approach. The resulting theory is locally invariant under SU($N_c$)
gauge transformations ($N_c$ is the number of colors) and globally
invariant under SU($N_f$) transformations ($N_f$ is the number of
flavors). The fields belong to the coset $S^M$/SU($N_c$), where $M=2
N_f N_c$ and $S^M$ is the $M$-dimensional sphere. The model is always
disordered at finite temperature, in agreement with the Mermin-Wagner
theorem \cite{MW-66}. However, it develops a critical behavior in the
zero-temperature limit. The corresponding universal features are
determined by means of numerical analyses of the FSS behavior in the
zero-temperature limit.
We observe universality with respect to the inverse gauge coupling
$\gamma$ that parametrizes the strength of the gauge kinetic term, see
Eq.~(\ref{hgauge}). The RG flow is always controlled by the infinite
gauge-coupling fixed point, corresponding to $\gamma=0$, as it also
occurs in three dimensions~\cite{BPV-19,BPV-20}, and in 2D and 3D
models characterized by an abelian U(1) gauge
symmetry~\cite{BPV-19-2,PV-19}. Indeed, models corresponding to
different values of $\gamma$ have the same universal behavior for
$T\to 0$, at least in a large interval around $\gamma=0$. We
conjecture that the same critical behavior is obtained for all
positive finite values of $\gamma$, since, by increasing $\gamma$, we
do not expect any qualitative change in the structure of the
minimum-energy configurations that control the statistical average.
On the other hand, the behavior for negative values of $\gamma$, i.e.,
when the system is frustrated, is not completely understood.
Therefore, we cannot exclude that the behavior changes for large
negative values of $\gamma$. This issue remains an open problem. It is
important to note that by considering a positive value of $\gamma$, we
are effectively investigating the behavior close to the multicritical
point $\beta = \infty$ and $\beta_g = \beta\gamma = \infty$. Our
results show that approaching the point along the lines $\beta_g/\beta
= \gamma$ does not change the universal features of the asymptotic
behavior. However, we expect that, by increasing $\beta_g$ faster than
$\beta$ (in a well-specified way), one can observe a radical change in
the critical behavior. For instance, if we take first the limit
$\beta_g\to\infty$ at fixed finite $\beta$ and then the limit
$\beta\to \infty$, the model becomes equivalent to the standard O($M$)
vector model, characterized by a different asymptotic low-temperature
behavior.
The numerical results and theoretical arguments presented in this
paper suggest the existence of a wide universality class
characterizing 2D lattice abelian and nonabelian gauge models, which
only depends on the global symmetry of the model. The gauge group
does not apparently play any particular role. Indeed, we report
numerical evidence that, for any $N_c$, the asymptotic low-temperature
behavior of the multiflavor scalar gauge theory (\ref{hgauge}) belongs
to the universality class of the 2D CP$^{N_f-1}$ model. This also
implies that it has the same universal features of the $N_f$-component
lattice scalar electrodynamics (abelian Higgs model)
\cite{BPV-19-2}. It is important to note that the global symmetry
group of model (\ref{hgauge}) is U($N_f$), while the global symmetry
group of the CP$^{N_f-1}$ model is SU($N_f$) (we disregard here
discrete subgroups), so that the global symmetry group of the two
models differs by a U(1) flavor group. As we have discussed in
Ref.~\cite{BPV-20}, the U(1) symmetry is only apparent for $N_f <
N_c$, and therefore the symmetry groups of scalar chromodynamics and
of the CP$^{N_f-1}$ model are the same for $N_f < N_c$. This U(1) symmetry
is instead present for $N_f \ge N_c$. However, our numerical results
indicate that the U(1) flavor symmetry does not play any role in model
(\ref{hgauge}). The universal critical behavior is only associated
with the U(1)-invariant modes that are encoded in the local bilinear
operator $Q_{\bm x}$, so that the global symmetry group that
determines the asymptotic behavior is SU($N_f$). Note, however, that
the decoupling of the U(1) flavor modes may not be true in other
models with the same global and local symmetries. If the U(1) modes
become critical, a different critical behavior might be observed. This
issue deserves further investigations.
For $N_c = 2$, the global symmetry group changes: The action is
invariant under Sp($N_f$) transformations. In this case the
asymptotic low-temperature behavior is expected to be described by the
Sp($N_f$) continuum theory. We have numerically checked it for the
two-flavor model, for which the global symmetry group is Sp(2)$/{\mathbb
Z}_2\simeq$ SO(5).
Our results lead us to conjecture that the RG flow of the 2D
multiflavor lattice scalar chromodynamics in which the fields belong
to the coset $S^M$/SU($N_c$), where $M=2N_cN_f$, is asymptotically
controlled by the 2D statistical field theories associated with the
symmetric spaces~\cite{BHZ-80,ZJ-book} that are invariant under
SU($N_f$) (for $N_c\ge 3$) or Sp($N_f$) (for $N_c=2$) global
transformations. These symmetry groups are the same invariance groups
of scalar chromodynamics, apart from a U(1) flavor symmetry that is
present for $N_f \ge N_c > 2$, which does not play any role in
determining the asymptotic behavior of the model.
This conjecture may be further extended to models with different
global and local symmetry groups, for instance, to those considered in
Refs.~\cite{BHZ-80,PRV-01}. It would be interesting to verify whether
generic nonabelian models have an asymptotic critical behavior which
is the same as that of the model defined on a symmetric space that has
the same global symmetry group. This issue deserves further
investigations.
\bigskip
\emph{Acknowledgement}.
Numerical simulations have been performed on the CSN4 cluster of the
Scientific Computing Center at INFN-PISA.
\section{Shifted chain}
The model described in the main text [Equation~\eqref{Eq.couplings}] is not translationally invariant. This requires us to treat separately the case in which the first atom is not exactly at the origin of the chain. Suppose that the $m$-th atom is now in position $\mathbf{r}_m = a (m-L + s) \hat{\mathbf{n}}$, with $s\in [0,1)$ a real parameter representing a global shift $s a$ of the chain. Then the couplings are
\begin{equation}
J_{ij}(s) = \cos\left[2 \pi \alpha (i-L+s) (j-L+s) \right] \,.
\end{equation}
To find the periodicity of this matrix, we can search for a $t$ such that $J_{i+t,j}(s)=J_{i,j}(s)$. The result is that $t = u q$, with $u \in \mathbb{N}$, \emph{assuming that} $s$ is also rational, with $s = v/u$. Thus, for a fixed shift $s = v/u$, the periodicity is $uq$. The choice of scaling $L= q u \ell$ makes it possible to rewrite the interaction matrix as
\begin{equation}
J_{ij}(s) = \cos\left[2 \pi \alpha (i+s) (j+s)\right]\,,
\end{equation}
and the analysis reduces to the case without shift.
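This periodicity is easy to check numerically; in the following sketch the values of $p$, $q$, $u$, $v$ and of the chain length are arbitrary illustrative choices.

```python
import numpy as np

p, q = 3, 7        # alpha = p/q (illustrative)
u, v = 4, 1        # rational shift s = v/u (illustrative)
s = v / u
L = 2 * q * u      # any chain half-length works for this check

def J(i, j):
    """Couplings J_ij(s) = cos[2 pi (p/q) (i - L + s)(j - L + s)]."""
    return np.cos(2 * np.pi * (p / q) * (i - L + s) * (j - L + s))

t = u * q          # claimed period in the site index
i = np.arange(2 * L - t)
for j in range(2 * L):
    assert np.allclose(J(i + t, j), J(i, j))
```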
\section{Free energy at \texorpdfstring{$\alpha=\frac{a^2}{\omega_0^2}=\frac{p}{q}$}{alpha=p/q}}
In this Section we derive the expression for the free energy of the model.
The Hamiltonian of the model reads (with $N = 2L$)
\begin{equation}
\begin{split}
E &= \frac{1}{4L} \sum_{i,j=0}^{2L-1} \cos\left(2\pi\frac{\mathbf{r}_i\cdot \mathbf{r}_j}{\omega_0^2}\right) \sigma_i \sigma_j
= \frac{1}{4L} \sum_{i,j=0}^{2L-1} \cos\left(2 \pi \alpha (i-L) (j-L) \right) \sigma_{i} \sigma_{j} \, ,
\end{split}
\end{equation}
where we introduced $\alpha=a^2/\omega_0^2$.
We focus on the case of rational $\alpha$, i.e. $\alpha=p/q$ with $p$ and $q$ coprime positive integers.
Without loss of generality, we can restrict to the case $p<q$.
In fact, if $p>q$, then $p/q = n + p'/q$ for some $n \in \mathbb{Z}$, and $n$ can be dropped thanks to the periodicity of the cosine and to the fact that $i$ and $j$ are integers.
We set $L=q\ell$ with integer $\ell$ for simplicity. We will see that, when $L \neq q \ell$, the thermodynamics of the model is the same up to corrections of order $\mathcal{O}(\ell^{-1})$.
\subsection{The internal energy}
If $\alpha=p/q$, the interaction matrix $J_{i,j}$ is periodic of period $q$, i.e. $J_{i+q,j} = J_{i,j}$ and $J_{i,j+q} = J_{i,j}$.
We can use this symmetry to greatly simplify the energy.
Let us group the spin variables into $q$ magnetizations
\begin{equation}\label{eq:E2}
\begin{split}
E &= \frac{1}{4q\ell} \sum_{i,j=0}^{2L-1} \cos\left(\frac{2\pi p}{q} (i-q\ell)(j-q\ell) \right) \sigma_{i} \sigma_{j} \\
&= \frac{1}{4q\ell} \sum_{m,n=0}^{2 \ell-1} \sum_{r,s=0}^{q-1} \cos\left(\frac{2\pi p}{q} (m q + r) (n q + s) \right) \sigma_{mq+r} \sigma_{nq+s} \\
&= \frac{\ell}{q} \sum_{r,s=0}^{q-1} \cos\left(\frac{2 \pi p}{q} r s \right) m_r m_s \, ,
\end{split}
\end{equation}
where $m_r = 1/(2\ell) \,\sum_{m=0}^{2 \ell-1} \sigma_{mq+r}$ and $r=0, \dots, q-1$.
Notice that $|m_r| \leq 1$.
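The rewriting above can be checked numerically against the original double sum over spins; in the sketch below the values of $p$, $q$, $\ell$ and the random spin configuration are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, ell = 2, 5, 6
L = q * ell
sigma = rng.choice([-1, 1], size=2 * L)

# Direct evaluation of E from the spins
i = np.arange(2 * L)
I, Jx = np.meshgrid(i, i, indexing="ij")
E_direct = np.sum(np.cos(2 * np.pi * p / q * (I - L) * (Jx - L))
                  * np.outer(sigma, sigma)) / (4 * L)

# Evaluation through the q block magnetizations m_r
m = np.array([sigma[r::q].mean() for r in range(q)])
r, s = np.meshgrid(np.arange(q), np.arange(q), indexing="ij")
E_blocks = (ell / q) * np.sum(np.cos(2 * np.pi * p / q * r * s) * np.outer(m, m))

assert np.isclose(E_direct, E_blocks)
```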
The reduced interaction matrix of the magnetizations $m_r$, i.e.
\begin{equation}
\begin{split}
J'_{r,s} = \cos\left( \frac{2 \pi p}{q} r s \right) \, , \qquad 0 \leq r,s \leq q-1 \, ,
\end{split}
\end{equation}
has another symmetry, namely $J'_{i,j} = J'_{q-i,j}$ and $J'_{i,j} = J'_{i,q-j}$, due to the parity of the cosine.
This allows us to further simplify the energy by introducing the reduced magnetizations
$\tilde{m}_0 = m_0$, $\tilde{m}_s = (m_s + m_{q-s})/2$ for $s = 1, \dots, \lfloor (q-1)/2 \rfloor$ and, if $q$ is even, $\tilde{m}_{q/2} = m_{q/2}$.
Easy computations show that the internal energy can be rewritten as
\begin{equation}\label{eq:E3}
\begin{split}
E = \frac{\ell}{q} &\left[ \tilde{m}_0^2 + 2 \tilde{m}_0 \tilde{m}_{\frac{q}{2}} +4 \tilde{m}_0 \sum_{r=1}^{\lfloor (q-1)/2\rfloor} \tilde{m}_r
+ \cos\left( \frac{\pi p q}{2} \right) \tilde{m}_{\frac{q}{2}}^2 \right. \\
&\left.\quad+ 4 \tilde{m}_{\frac{q}{2}} \sum_{r=1}^{\lfloor (q-1)/2 \rfloor} \cos\left( \pi p r \right) \tilde{m}_r
+ 4 \sum_{r=1}^{\lfloor (q-1)/2 \rfloor}\sum_{s=1}^{\lfloor (q-1)/2 \rfloor}\cos\left( \frac{2 \pi p}{q} r s \right) \tilde{m}_r \tilde{m}_s
\right]
\, ,
\end{split}
\end{equation}
where $\tilde{m}_{q/2} = 0$ if $q$ is odd.
Thus, the internal energy is again a quadratic form in the new magnetizations with interaction matrix $J''_{r,s}$ of the form
\begin{equation}
\begin{split}
J'' =
\begin{bmatrix}
1 & 2 & \dots & 2 & 1 \\
2 & J'_{1,1} & \dots & J'_{1,q'} & 2 (-1)^p \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
2 & J'_{q',1} & \dots & J'_{q',q'} & 2 (-1)^{p q'} \\
1 & 2 (-1)^p & \dots & 2 (-1)^{p q'} & \cos\left( \pi p q /2 \right) \\
\end{bmatrix}
\,
\end{split}
\end{equation}
where $q' = \lfloor (q-1)/2 \rfloor$, and the last row/column are there only if $q$ is even.
In the main text we present Equation~\eqref{eq:E2} since it is more compact and does not depend on the parity of $q$.
In the numerical simulations we used Equation~\eqref{eq:E3} as it has roughly half the degrees of freedom with respect to Equation~\eqref{eq:E2}.
We conclude this section by pointing out that, for $L\neq q\ell$, the only change in our computations is that some of the $\{\tilde{m}_s\}$ involve $2\ell+1$ microscopic degrees of freedom instead of just $2 \ell$.
Thus, all equations hold at leading order in a large $\ell$ expansion, with corrections of order $\mathcal{O}(\ell^{-1})$.
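As a consistency check of Equations~\eqref{eq:E2} and \eqref{eq:E3}, the following sketch compares the two expressions numerically for random magnetizations, covering both odd and even $q$ (the specific pairs $(p,q)$ are illustrative).

```python
import numpy as np

def E2(m, p, q, ell=1.0):
    """Energy of Eq. (E2): (ell/q) sum_{r,s} cos(2 pi p r s / q) m_r m_s."""
    r, s = np.meshgrid(np.arange(q), np.arange(q), indexing="ij")
    return (ell / q) * np.sum(np.cos(2 * np.pi * p / q * r * s) * np.outer(m, m))

def E3(m, p, q, ell=1.0):
    """Energy of Eq. (E3) in terms of the reduced magnetizations."""
    qp = (q - 1) // 2
    mt0 = m[0]
    mt = np.array([(m[s] + m[q - s]) / 2 for s in range(1, qp + 1)])
    mth = m[q // 2] if q % 2 == 0 else 0.0     # m_{q/2} only for even q
    r = np.arange(1, qp + 1)
    rs_mat = np.cos(2 * np.pi * p / q * np.outer(r, r))
    val = (mt0**2 + 2 * mt0 * mth + 4 * mt0 * mt.sum()
           + np.cos(np.pi * p * q / 2) * mth**2
           + 4 * mth * np.sum(np.cos(np.pi * p * r) * mt)
           + 4 * mt @ rs_mat @ mt)
    return (ell / q) * val

rng = np.random.default_rng(1)
for p, q in [(2, 5), (1, 6), (3, 8)]:
    m = rng.uniform(-1, 1, size=q)
    assert np.isclose(E2(m, p, q), E3(m, p, q))
```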
\subsection{The entropy}
The entropy of our model depends on which representation we use, either Equation~\eqref{eq:E2} or Equation~\eqref{eq:E3}.
In both cases, the total entropy $S$ is given by the sum of the individual entropies of the magnetizations $\{m_s\}$ or $\{\tilde{m}_s\}$.
The individual entropy of a magnetization $m$ composed by $n$ microscopic degrees of freedom (spins) is given by
\begin{equation}
\begin{split}
S_n(m) = \log \binom{n}{\frac{n}{2}(m+1)} = n \log 2 - \frac{n}{2} \left[ (1+m)\log(1+m) + (1-m)\log(1-m) \right] + \mathcal{O}(\log(n)) \,
\end{split}
\end{equation}
that is, the logarithm of the number of configurations with $n (m+1)/2$ spins up, i.e., in which the magnetization equals $m$.
The big-O notation refers to the thermodynamic limit $n\rightarrow\infty$.
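The asymptotic expansion of $S_n(m)$ can be checked against the exact log-binomial, e.g. via the log-gamma function (the values of $n$ and $m$ below are illustrative):

```python
import math

def log_binom(n, k):
    """Exact log of the binomial coefficient via log-gamma."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def S_asym(n, m):
    """Leading-order entropy of a magnetization m built from n spins."""
    return n * math.log(2) - 0.5 * n * ((1 + m) * math.log(1 + m)
                                        + (1 - m) * math.log(1 - m))

n, m = 10**6, 0.4
exact = log_binom(n, round(n * (m + 1) / 2))
# the two expressions agree up to the O(log n) Stirling correction
assert abs(exact - S_asym(n, m)) < 10 * math.log(n)
```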
Thus, the entropy associated with the internal energy given in Equation~\eqref{eq:E2} equals (in the large $\ell$ limit)
\begin{equation}
\begin{split}
S(\{m_s\})
= \sum_{s=0}^{q-1} S_{2\ell}(m_s)
= 2\ell \left[ q \log 2 - \sum_{s=0}^{q-1} \left( \frac{1+m_s}{2}\log(1+m_s) +\frac{1-m_s}{2}\log(1-m_s) \right) + \mathcal{O}\left( \frac{\log(\ell)}{\ell} \right) \right]
\, .
\end{split}
\end{equation}
On the other hand, the entropy associated with the internal energy given in Equation~\eqref{eq:E3} equals (in the large $\ell$ limit)
\begin{equation}
\begin{split}
S(\{\tilde{m}_s\})
&= S_{2\ell}(\tilde{m}_0) + S_{2\ell}(\tilde{m}_{\frac{q}{2}}) + \sum_{s=1}^{\lfloor\frac{q-1}{2}\rfloor} S_{4\ell}(\tilde{m}_s) \\
&= 2\ell \left[ q \log 2
- \frac{1-\tilde{m}_0}{2}\log(1-\tilde{m}_0) - \frac{1+\tilde{m}_0}{2}\log(1+\tilde{m}_0) \right.\\
&\quad- \frac{1-\tilde{m}_{\frac{q}{2}}}{2}\log(1-\tilde{m}_{\frac{q}{2}}) - \frac{1+\tilde{m}_{\frac{q}{2}}}{2}\log(1+\tilde{m}_{\frac{q}{2}})
\\
&\quad\left.
- \sum_{s=1}^{\lfloor\frac{q-1}{2}\rfloor} \Big( (1+\tilde{m}_s)\log(1+\tilde{m}_s)+(1-\tilde{m}_s)\log(1-\tilde{m}_s) \Big) + \mathcal{O}\left( \frac{\log(\ell)}{\ell} \right) \right]
\, ,
\end{split}
\end{equation}
where the terms depending on $\tilde{m}_{\frac{q}{2}}$ must be discarded if $q$ is odd.
Again, in the case $L\neq q\ell$, the corrections to the entropies are of order $\mathcal{O}(\ell^{-1})$.
\section{The spectrum of the reduced interaction matrix}
The reduced interaction matrix is defined as
\begin{equation}
\begin{split}
J'_{r,s} = \cos\left( \frac{2 \pi p}{q} r s \right) \, , \qquad 0 \leq r,s \leq q-1 \, .
\end{split}
\end{equation}
As we have already seen, this matrix has additional symmetries left.
Nonetheless, it is more tractable than the fully reduced one, i.e. the interaction matrix $J''$ for the magnetizations $\{\tilde{m}_s\}$ in Equation~\ref{eq:E3}.
Here we characterize the spectrum of $J'$.
First of all, we notice that $(J')^2$ is given by:
\begin{equation}
\begin{split}
(J')^2_{i,j}
&= \sum_{k=0}^{q-1} J'_{i,k} J'_{k,j}
= \sum_{k=0}^{q-1} \cos\left( \frac{2\pi p}{q} i k \right) \cos\left( \frac{2\pi p}{q} k j \right)
= \frac{1}{2} \sum_{k=0}^{q-1} \left[ \cos\left( \frac{2\pi p}{q} (i+j) k \right) + \cos\left( \frac{2\pi p}{q} (i-j) k \right) \right]
\, .
\end{split}
\end{equation}
Using the following identity
\begin{equation}
\begin{split}
\sum_{s=0}^{q-1} \cos\left( \frac{2\pi p}{q} r s \right)
= \Re \left[ \sum_{s=0}^{q-1} e^{i \frac{2\pi p r}{q} s} \right]
= \Re \left[ \frac{1-e^{i 2 \pi p r}}{1-e^{i \frac{2 \pi p r}{q}}} \right]
= q \delta_{r,0}
\, ,
\end{split}
\end{equation}
we obtain
\begin{equation}
\begin{split}
(J')^2_{i,j} = \frac{q}{2} \left( \delta_{i,j} + \delta_{i,-j} \right)
\, .
\end{split}
\end{equation}
As such, the spectrum of $(J')^2$ is given by:
\begin{itemize}
\item the eigenvector $(1,0,\dots,0)^T$, with eigenvalue $q$;
\item the $\lceil (q-1)/2 \rceil$ eigenvectors $(0,1,0,0,\dots,0,0,1)^T$, $(0,0,1,0,\dots,0,1,0)^T$ and so on, with eigenvalue $q$;
\item the $\lfloor (q-1)/2 \rfloor$ eigenvectors $(0,1,0,0,\dots,0,0,-1)^T$, $(0,0,1,0,\dots,0,-1,0)^T$ and so on, with eigenvalue $0$.
\end{itemize}
Thus, $J'$ has $\lfloor (q-1)/2 \rfloor$ null eigenvalues, and $\lceil (q-1)/2 \rceil +1$ eigenvalues $\pm \sqrt{q}$.
We notice that the null eigenvalues correspond to the additional symmetries of $J'$, and will not be part of the spectrum of $J''$.
Moreover, $J'$ has always at least one negative eigenvalue.
In fact, it is straightforward to check that the vector $v = (1-\sqrt{q},1, \dots, 1)^T$ is an eigenvector of $J'$ with eigenvalue $-\sqrt{q}$:
\begin{equation}
\begin{split}
\left(J'v\right)_r
= (1-\sqrt{q}) + \sum_{s=1}^{q-1} \cos\left( \frac{2\pi p}{q} r s \right)
= -\sqrt{q} + \sum_{s=0}^{q-1} \cos\left( \frac{2\pi p}{q} r s \right)
= -\sqrt{q} + q \delta_{r,0}
= - \sqrt{q} v_r
\, .
\end{split}
\end{equation}
Notice that due to the symmetry in the components $v_r$ with $r = 1, \dots, q-1$ this remark translates to $J''$ as well.
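These spectral properties are easily confirmed numerically (the pair $(p,q)$ is illustrative):

```python
import numpy as np

p, q = 3, 7
r = np.arange(q)
Jp = np.cos(2 * np.pi * p / q * np.outer(r, r))
ev = np.linalg.eigvalsh(Jp)

zero = np.isclose(ev, 0.0, atol=1e-8)
assert zero.sum() == (q - 1) // 2                   # floor((q-1)/2) null modes
assert np.allclose(np.abs(ev[~zero]), np.sqrt(q))   # remaining modes: +-sqrt(q)
assert np.isclose(ev.min(), -np.sqrt(q))            # at least one -sqrt(q)
```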
\section{High-temperature phase}
In the high-temperature limit $T\rightarrow\infty$, the free energy landscape is dominated by the entropic contribution. The free energy factorizes over the magnetizations, and its minimum can be computed by minimizing each of the entropies. This gives a paramagnetic high-temperature phase characterized by the condition $m\equiv0$ or, equivalently, $\tilde{m}\equiv0$.
Around the paramagnetic minimum, the free energy at finite temperature $T$ can be expanded in Taylor series as
\begin{equation}
\begin{split}
\frac{1}{\ell}(E - T S )
= \frac{1}{q} \sum_{r,s=0}^{q-1} J'_{r,s} m_r m_s
- 2 q T \log2
+ T \sum_{s=0}^{q-1} m_s^2 + \mathcal{O}\left(m^4,\frac{\log(\ell)}{\ell}\right)
\, .
\end{split}
\end{equation}
Thus, the stability of the paramagnetic minimum is determined by the sign of the eigenvalues of the matrix $\frac{1}{q} J' + T \mathbb{1}_q$, where $\mathbb{1}_q$ is the $q\times q$ identity matrix.
The minimum eigenvalue is given by $-\frac{1}{\sqrt{q}} + T$, giving a critical temperature for the paramagnetic stability $T_{\rm param} = \beta_{\rm param}^{-1} = \frac{1}{\sqrt{q}}$.
Preliminary numerical investigations (to be reported elsewhere) suggest that this is a second-order phase transition.
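The location of the stability threshold can be verified numerically by checking that the minimum eigenvalue of $\frac{1}{q}J'$ equals $-1/\sqrt{q}$ (the pairs $(p,q)$ below are illustrative):

```python
import numpy as np

for p, q in [(2, 5), (4, 9), (5, 12)]:
    r = np.arange(q)
    Jp = np.cos(2 * np.pi * p / q * np.outer(r, r))
    lam_min = np.linalg.eigvalsh(Jp / q).min()
    # the Hessian (1/q) J' + T * Id loses positivity exactly at T = 1/sqrt(q)
    assert np.isclose(lam_min, -1.0 / np.sqrt(q))
```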
\section{Details of numerical simulations}
We performed extensive numerical simulations of the model to characterize its zero-temperature energy landscape.
All simulations were performed using the fully reduced model given in Equation~\ref{eq:E3}.
The code to reproduce our results is available on Github at the link \url{https://github.com/vittorioerba/FreeEnergyMultimodeQEDpaper}.
\subsection{Metastable states}
To sample the minima of the energy landscape described by Equation~\ref{eq:E3} we used a simple steepest descent algorithm with update rule
\begin{equation}
\begin{split}
m(t+1) = m(t) - \eta \nabla E\left( m(t) \right) \,
\end{split}
\end{equation}
and step-size $\eta = 10^{-3}$.
We implemented manually the box constraint by:
\begin{itemize}
\item setting to $1$ or to $-1$ each magnetization such that $m_i(t) \in [-1,1]$ and $m_{i}(t+1) \notin [-1,1]$;
\item setting to 0 all the components $i$ of the gradient that point out from the hypercube when $m_i = \pm 1$.
\end{itemize}
Finally, we set the stopping condition to
\begin{equation}
\begin{split}
|| \nabla E\left( m(t) \right) ||_2 < \tau \,
\end{split}
\end{equation}
with threshold $\tau = 10^{-3}$.
All optimization runs that failed to reach this condition before $t_{\rm max} = 10^6$ steps were discarded, as were all other simulations that failed for any other reason.
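A minimal sketch of this projected descent could look as follows; for brevity it uses the representation of Equation~\eqref{eq:E2} (rather than the fully reduced Equation~\eqref{eq:E3} used in our simulations), and the step size, threshold, iteration cap and initialization are illustrative.

```python
import numpy as np

def energy(m, Jp, q, ell=1.0):
    return (ell / q) * m @ Jp @ m

def projected_descent(p, q, ell=1.0, eta=1e-3, tau=1e-3, t_max=50_000, seed=0):
    rng = np.random.default_rng(seed)
    r = np.arange(q)
    Jp = np.cos(2 * np.pi * p / q * np.outer(r, r))
    m = rng.uniform(-1, 1, size=q)
    for _ in range(t_max):
        grad = (2 * ell / q) * Jp @ m             # gradient of (ell/q) m.J'.m
        grad[(m >= 1.0) & (grad < 0)] = 0.0       # drop gradient components
        grad[(m <= -1.0) & (grad > 0)] = 0.0      # pointing out of the box
        if np.linalg.norm(grad) < tau:            # stopping condition
            break
        m = np.clip(m - eta * grad, -1.0, 1.0)    # descent step + projection
    return m, Jp

m_star, Jp = projected_descent(p=2, q=5)
m_init = np.random.default_rng(0).uniform(-1, 1, size=5)
assert np.all(np.abs(m_star) <= 1.0)              # box constraint respected
assert energy(m_star, Jp, 5) <= energy(m_init, Jp, 5) + 1e-12
```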
For each value of $p$ and $q$, we performed $N_{\rm run}(p,q)$ gradient descents mainly based on the value of $q$:
\begin{itemize}
\item for $11 \leq q \leq 20$, we performed $N_{\rm run} \geq 1 \times 10^3$;
\item for $21 \leq q \leq 29$, we performed $N_{\rm run} \geq 4 \times 10^3$;
\item for $30 \leq q \leq 40$, we performed $N_{\rm run} \geq 15 \times 10^3$;
\item for $41 \leq q \leq 50$, we performed $N_{\rm run} \geq 25 \times 10^3$;
\item for $51 \leq q \leq 60$, we performed $N_{\rm run} \geq 40 \times 10^3$.
\end{itemize}
For these values of $q$ we studied all the values of $p<q$ coprime with $q$.
Finally, for $q \geq 61$, we performed $N_{\rm run} \geq 150 \times 10^3$ gradient descents for a very restricted set of values of $p$.
The precise number $N_{\rm run}(p,q)$ simulated for each pair $(p,q)$ is reported in the repository hosting the code.
To assess whether for a given value of $p/q$ all local minima had been enumerated, we studied the number of distinct local minima found $N_{p,q}(n)$ as a function of the number of gradient descents performed $n$.
As the model has an obvious $\mathbb{Z}_2$ symmetry left, we counted each minimum $m^*$ and his opposite $-m^*$ as the same minimum.
When $n$ is small, under the hypothesis that the basins of attraction of the local minima are somewhat comparable in size, we expect that $N_{p,q}(n) \sim n$, as each new descent finds an unknown minimum.
When $n$ is large, we expect that $N_{p,q}(n)$ saturates to a constant value $N_{p,q}(\infty)$, as all local minima were already visited by previous descents.
Figure~\ref{fig:SM1} shows the behaviour of $N_{p,q}(n)$ extracted from the simulations for a selection of values of $q$.
Firstly we observe that, at fixed $q$, the curves $N_{p,q}(n)$ may or may not collapse onto a single curve.
In the latter case, one may expect that each value of $p$ has its own $N_{p,q}(n)$ curve.
Surprisingly, there exists only a limited number of possible curves over which the experimental $N_{p,q}(n)$ are distributed.
Another important observation is that there are values of $q$, for example $q=36$, where the curves $N_{p,q}(n)$ are still far from saturation, while for larger values of $q$, for example $q=37$, saturation is reached in the same number of descent runs $n$.
We notice that for those values of $q$ where the curves $N_{p,q}$ do not collapse onto a single curve, see for example $q=40$, for particular values of $p$ the curve $N_{p,q}(n)$ saturates, while for others it does not saturate in the same number of descent runs $n$.
Finally, we notice that, even if $N_{p,q}(n_{\rm run})$ is not a good descriptor of $N_{p,q}(\infty)$, it does provide a lower bound for it.
\begin{figure}[t]
\includegraphics{SM_fig1.pdf}
\caption{\textbf{Behaviour of $N_{p,q}(n)$ for a selection of values of $p$ and $q$.}
}
\label{fig:SM1}
\end{figure}
\subsection{Ground states}
To compute the ground states of the model we used the CPLEX \cite{cplex2009v12} optimization library, which provides a solver for non-positive-definite quadratic programming problems.
To interface with the solver, we used the Julia API \cite{DunningHuchetteLubin2017}.
See the Github repo for the details of the implementation.
\end{document}
\section{Introduction}
The Internet of Things (IoT) is the next evolution of the Internet \cite{khan2012future} where devices, of any kind and size, will exchange and share data autonomously among themselves.
By exchanging data, each device can improve its decision-making processes. IoT devices are ubiquitous in our daily lives and critical infrastructure. For example, air conditioners, irrigation systems, refrigerators, and railway sensors \cite{rail} have been connected to the Internet in order to provide services and share information with the relevant controllers. Due to the benefits of connecting devices to the Internet, massive quantities of IoT devices have been developed and deployed. This has led leading experts to believe that by 2020 there will be more than 20 billion devices connected to the Internet \cite{eddy2015gartner}.
While the potential for IoT devices is vast, their success depends on how well we can secure these devices.
However, IoT devices are diverse and have limited resources. Therefore, securing them is a difficult challenge which has taken center stage in both industry and academia.
One significant security concern with IoT devices is that many manufacturers do not invest in the security of these devices during their development. Furthermore, discovered vulnerabilities are seldom patched by the manufacturer \cite{notPatchingIOTs}. These vulnerabilities enable attackers to exploit the IoT devices for nefarious purposes \cite{schneier2014internet} which endanger the users' security and privacy.
There are various security tools for detecting attacks on embedded devices. One such tool is an intrusion detection system (IDS). Anomaly-based IDSs learn the normal behavior of a network or host, and detect when the behavior deviates from the norm. In this way, these systems have the potential to detect new threats without being explicitly programmed to do so (e.g., via remote updates). Aside from being able to detect novel `zero-day' attacks, this approach is desirable because there is virtually no maintenance required.
In order to prepare an anomaly-based IDS (or any anomaly detection model), the system must collect and learn from \textit{normal} observations acquired during a time-limited ``training phase''. A fundamental assumption is that the observations obtained during the training phase are both benign and capture all of the device's possible behaviors.
This assumption might hold true in some systems. However, when considering the IoT environment, this assumption is challenging for the following reasons:
\begin{enumerate}
\item \textbf{Model Generality} It is possible to train the anomaly detection model safely in a lab environment. However, it is difficult to simulate all the possible deployments and interactions with the device. This is because some logic may be dependent on one or more environmental sensors, human interaction, and event based triggers. This approach is also costly and requires additional resources. Alternatively, the model can be trained on-site during the deployment itself. However, the model will not be available for execution (detection of threats) until the training phase is complete. Furthermore, it is questionable whether the trained model will capture benign yet rare behaviors. For example, the behavior of the motion detection logic of a smart camera or the response generated by a smoke detector while sensing a fire. These rare but legitimate behaviors will generate false alarms during regular execution.
\item \textbf{Adversarial Attacks}
Although training on-site is a more natural approach to learning the normal behavior of an IoT device, the model must assume that all observations during the training phase are benign. This approach exposes the model to malicious observations, thus enabling an attacker to exploit the device to evade detection or cause some other adverse effect.
\end{enumerate}
To overcome these challenges, the IoT devices can collaborate and train an anomaly detection model together. Consider the following scenario:
Assume that all IoT devices of the same type simultaneously begin training an anomaly detection model, based on their own locally observed behaviors.
The devices then share their models with other devices of the same type. Each device merges the received models into a single model by filtering out potentially malicious behaviors, and then uses the combined model as its own local anomaly detection model. As a result, the devices (1) collectively obtain an anomaly detection model which captures a much wider scope of all possible benign behaviors, and (2) are able to significantly limit adversarial attacks during the training phase. The latter point is because the \textit{initial} training phase is much shorter (scaled according to the number of devices), and rare behaviors unseen by the majority are filtered out.
Using this concept, we present a lightweight, scalable framework which utilizes the blockchain concept to perform distributed and collaborative anomaly detection on devices with limited resources.
A blockchain is an innovative protocol for a distributed database, which is implemented as a chain of blocks and managed by the majority of participants in the network \cite{swan2015blockchain}.
Each block contains a list of records and a hash value of the previous block, and is accepted into the chain if it satisfies a specific criterion (e.g., bitcoin's proof-of-work criterion \cite{nakamoto2008bitcoin}).
The framework uses the blockchain's concept to define a collaboration protocol which enables devices to autonomously train a trusted anomaly detection model incrementally. The protocol uses self-attestation and consensus among the IoT devices to protect the integrity of the trained model. In our blockchain, a record in a block is a model trained on a specific device, and a block in the chain represents a potential anomaly detection model which has been verified by a significant mass/majority of devices in the system. By using the blockchain as a secured distributed ledger, we ensure that the devices (1) are using the latest validated anomaly detection model, and (2) can continuously contribute to each other's model with newly observed benign behaviors.
Furthermore, in this paper we also propose a novel approach for performing anomaly detection on a local device using an Extensible Markov Model (EMM) \cite{bhat2008extended}. The EMM tracks a program's jump sequences between regions on the application's memory space. The EMM can be incrementally updated and merged with other models, and therefore can be trained with real-world observations across multiple devices in parallel. Although there are many other methods for modeling sequences, we chose the EMM model because:
\begin{enumerate}
\item The update and prediction procedures have a complexity of $O(1)$. This is critical considering that many IoT devices have weak processors.
\item Our collaborative framework requires a model which can be merged with other models efficiently. Moreover, to filter out malicious transitions during the combine step, we needed an efficient and clear algorithm for comparing learned behaviors between different models. The process of comparing and combining other discrete transitional anomaly detection models can be complex or has simply not been defined.
\item In our evaluations, we found that the EMM performs better than other algorithms in our anomaly detection task.
\end{enumerate}
We evaluate both the framework and the anomaly detection model on our own IoT emulation platform, involving 48 Raspberry Pis.
We simulate several different IoT devices to assert that the evaluation results do not depend on the IoT device's functionality.
Moreover, we exploit real vulnerabilities in order to evaluate our method's capability in detecting actual attacks.
From our evaluations, we found that our method is capable of creating strong anomaly detection models in a short period of time, which are resistant to adversarial attacks.
To encourage further research and development, the reader may download our data sets and source code from GitHub.\footnote{\texttt{https://git.io/vAIvd}.} We have also \href{https://drive.google.com/drive/folders/15gLytEJyQyYCmhB-EZSkES77KsuCW0hw?usp=sharing}{published a blockchain simulator for our protocol} to help the reader understand and implement the work in this paper.\footnote{\texttt{https://github.com/ymirsky/CIoTA-Sim}}
In summary, this paper's contributions are:
\begin{itemize}
\item \textbf{A method for detecting code execution attacks by modeling memory jump sequences} - We define and evaluate a novel approach to efficiently detect abnormal control-flows at a set granularity. The approach is efficient because we track the program counter's flow between regions of memory, and not actual memory addresses or system calls. As a result, the model is compact (has relatively few states) and is suitable for devices with limited resources (IoT devices).
\item \textbf{A method for enabling safe distributed and collaborative model training on IoTs} - We outline a novel framework and protocol which uses the concept of blockchain to collaboratively train an anomaly detection model. The method is decentralized, reduces train time, false positives, and is robust against potential adversarial attacks during the initial training phase.
\end{itemize}
The rest of the paper is organized as follows.
In Section \ref{sec:relworks}, we review related work, and discuss how the proposed method overcomes their limitations.
In Sections \ref{sec:anom} and \ref{sec:ciota}, we introduce our novel host-based anomaly detection algorithm and the framework for applying the algorithm in a collaborative distributed setting using the blockchain.
In Section \ref{sec:eval}, we evaluate the proposed method on several different applications and use-cases, and discuss our insights. In Section \ref{sec:security}, we analyze the framework's security.
In Section \ref{sec:discussion}, we provide a discussion on the security and challenges of implementing the proposed framework. Finally, in Section \ref{sec:conclusion} we present a summary and conclusion.
\section{Related Works}\label{sec:relworks}
The primary aspects of this work relate to both Intrusion Detection and IoT Security. Therefore, in this section we will discuss recent works from both fields, and the limitations of these approaches.
\subsection{Discrete Sequence Anomaly Detection for Intrusion Detection}
Software inevitably contains flaws which pose security vulnerabilities if exploited by an attacker. Many of these vulnerabilities remain unknown until they are discovered and exploited in the wild (referred to as zero-days). An effective way to detect these exploits is to analyze a program's behavior in real-time.
A program's behavior can be observed during runtime by monitoring its system calls, or by tracking the program in memory \cite{maske2016advanced,yoon2017learning,kim2016lstm,khreich2017anomaly}. In both cases, the behavior is observed as an ordered sequence of events on which anomaly detection can be performed \cite{ahmed2016survey,chandola2010anomaly}. To detect attacks in these sequences, many works utilize discrete sequence anomaly detection algorithms. We will now summarize these works in chronological order.
In \cite{forrest1996sense} the authors create a database of normal sequences by windowing over system calls, and flag sequences as anomalous if they do not appear in the database. In \cite{kosoresow1997intrusion} the authors extended the windowing approach to longer sequences via partitioning. In \cite{lee1997learning} the authors use RIPPER to extract concise rule sets from system calls to classify malicious sequences. The authors then expanded their work in \cite{lee1998data} by using the frequent episodes algorithm, computing inter/intra-audit record patterns, and by proposing a general agent architecture. In \cite{hofmeyr1998intrusion} the authors proposed a system based on the defenses of natural immune systems. First, a database of short normal sequences is created. Then, new sequences are scored according to the number of substring matches they have in common with the database.
In \cite{warrender1999detecting} the authors performed a comparative evaluation involving Hidden Markov Models (HMM), RIPPER, and threshold-based sequence time delay embedding (t-STIDE). An HMM is similar to an MC, except that it models transitions via output symbols emitted at each state. We did not use an HMM since the framework needs a lightweight model that can be trained efficiently and can be merged with other models. t-STIDE works by looking up the frequency of new sequences (windows) in a normal dictionary (hash table). Infrequent sequences below a given threshold are considered anomalous. The authors found that the HMM provided the best performance, but t-STIDE had similar performance and was significantly faster to train.
In \cite{michael2000two} the authors propose modeling a finite-state machine over system calls such that novel sequences are labeled anomalous. In \cite{gao2002hmms} the authors apply an HMM over a sliding window of system calls. In \cite{tandon2003learning} the authors revisit the use of association rule mining by considering the system call's arguments. Based on a mining algorithm called LERAD, the authors propose three variants which outperformed t-STIDE for certain attacks. In \cite{hoang2003multi} the authors propose a multi-layer approach which first uses a normal database and then passes suspicious sequences to an HMM for further analysis. In \cite{yeung2003host} the authors use a Markov Chain to model normal shell-command sequences, and then detect abnormal (malicious) sequences as an indication of misuse in the system.
In \cite{eskin2001modeling,mazeroff2003probabilistic,mazeroff2008probabilistic} the authors use probability suffix trees (PST) to model normal system call sequences. A PST is a variable length Markovian model which forms a tree-like data structure.
In \cite{hu2009simple} the authors propose a method for speeding up HMM training on system calls by 50\%. They accomplish this by preprocessing the training sequences and by performing incremental training. In \cite{xie2013evaluating} the authors propose a kernel trick to map sequences to Euclidean space in order to perform kNN lookups. In \cite{kim2016lstm} the authors propose the use of long short-term memory (LSTM) neural networks to detect abnormal system call sequences. In \cite{chawla2018host} the authors extend the work of \cite{kim2016lstm} by stacking convolutional neural networks (CNN) followed by a recurrent neural network (RNN) with Gated Recurrent Units (GRU). Although the use of GRU reduced the training time, the authors needed to use powerful GPUs to train their network.
Our proposed framework uses an EMM, a type of Markov Chain, as the anomaly detection model for detecting abnormal control flows in the memory of applications. In contrast to our approach, the limitations of the above works are:
\begin{description}
\item[Attack Vector Coverage] Many exploits do not use the shell or require the invocation of abnormal system-calls, so the Markov model would not observe any malicious activity during their exploitation processes. For example, exploitation of a buffer overflow vulnerability can be accomplished without making explicit system calls.
\item[Modeling the True Behavior] An application's system and shell calls only capture an application's high-level behavior. As a result, some exploits can be designed so that the executed code will generate seemingly benign sequences (obfuscation) and evade detection. Furthermore, some malware may only need to make benign call sequences to accomplish its objective. For example, a ransomware will read and write files via system-calls (benign) but encrypt the files internally (malicious).
\item[System Overhead] In order to intercept the system calls of a specific application, one must intercept the system-calls of all applications. As a result, these approaches are suitable for devices with strong computational power such as personal computers, but not IoTs. Moreover, models such as HMMs and neural networks cannot be trained (and sometimes not even executed) on IoTs.
\end{description}
By modeling a Markov Chain on a target application's general jumps through its memory space, our approach is not restricted by the above limitations. Namely, our approach can (1) capture the internal behavior of the application, regardless of the system-calls or shell-code, (2) detect exploitation of vulnerabilities occurring within an application's memory space, and (3) be applied to specific applications, as opposed to the whole system, thus minimizing overhead, making our approach appropriate for IoTs.
Similar to our approach, in \cite{7167219} the authors detect anomalous activities by maintaining a heatmap of the \textit{kernel's} memory space. An anomaly is detected when the probability of a region of the kernel's memory being accessed is below a threshold. By doing so, the authors were able to detect abnormal application activities reflected by interactions with the kernel. This work differs from ours in the following ways:
\begin{enumerate}
\item The kernel-heatmap method cannot detect all of the attacks which our method can. For example, code reuse attacks are ignored because the kernel interactions seem normal. Moreover, abnormal interactions with the kernel can be considered benign because \textit{other} applications may be performing similar interactions. For example, when privilege escalation is obtained and abused, restricted system calls will not seem abnormal because the context of the requesting app is not considered.
\item When an anomaly is detected, there is no indication of which application has been compromised. This makes it harder to mitigate the threat.
\item The method in \cite{7167219} suffers from significantly higher false alarm rates than an EMM. This is because the probability of accessing a memory region is normalized over all accesses. Therefore, rare benign memory interactions are considered anomalous. In contrast, by using an EMM over memory regions, we consider the transition across the memory space which provides an implicit context for each interaction. Later in section \ref{sec:eval} we provide a comparative evaluation.
\item Like all other anomaly detection algorithms (including EMMs), the method in \cite{7167219} is subject to adversarial attacks (poisoning) during training and false positives due to rare benign behaviors (due to human interactions and other stimuli). In our paper we propose a framework which provides accelerated on-site model training in a hostile environment via collaboration, filtration, and self-attestation.
\end{enumerate}
Another approach to deploying an IDS is to distribute the detection across multiple devices \cite{abraham2007d, zhang2011distributed, snapp1991dids}. In these approaches, the devices share information with one another regarding malicious traffic and the network's state. Similar to our method, a distributed IDS relies on collaboration between devices. However, the proposed methods are limited to analyzing network traffic. In many cases, network traffic from a device cannot indicate the exploitation of an application running on the device (e.g., encrypted payloads). When considering the IoT topology and the vision of allowing them to autonomously exchange data, a network based IDS might be problematic, since network traffic near each IoT device may differ significantly. In contrast, our anomaly detection approach, which models an application's memory jumps, is not affected by the diversity of network traffic near each device. Furthermore, distributed IDS solutions are designed to work collectively as a single intrusion detection system. However, should one node be compromised by an attacker, the security of the entire system may fail. In contrast, our method allows for safe collaboration via self attestation and model anomaly filtration, which makes compromising the whole system much more difficult.
\subsection{IoT Specific Solutions}
IoT device security solutions have been researched extensively over the last few years.
However, the proposed solutions typically do not address all of the IoT's characteristics: (1) mass quantity, (2) limited resources, (3) global deployment, and (4) dependence on external sensors/triggers.
In \cite{huuck2015iot,oh2014malicious} the authors propose deploying static analysis tools on the IoT devices. However, these approaches require that (1) the device maintain a database of virus signatures and (2) that experts continuously update this database. Furthermore, these approaches are not sufficient when facing viruses which can only be detected during runtime (e.g., execution of a malicious encrypted payload). Our method is anomaly-based and therefore can detect threats automatically without human intervention, and performs continuous dynamic analysis of an application's behavior.
Several studies try to secure IoT devices by deploying an anomaly detection model on the device itself \cite{raza2013svelte,arrington2016behavioral,o2014anomaly, taneja2013analytics}. Some of them suggest simply applying traditional solutions (meant for stronger devices), while others suggest novel approaches which are more lightweight. A common denominator for all these approaches is that they neglect the fact that (1) an anomaly detection model is sensitive to adversarial attacks during the training phase, and (2) rare benign activities (which did not appear in the initial training data) can generate false positives (e.g., an IoT smoke detector being triggered).
Our method, on the other hand, has a very short initial training-phase and learns from the experiences (events) of millions of IoTs.
Other studies propose that a centralized server should be deployed \cite{abera2016c,jager2017rolling,ott2015trust}. However, the centralized approach does not scale well with the number of IoT devices. Our method is distributed and autonomous.
Another direction in the literature is to deploy a network-based IDS at the gateway of IoT distributions \cite{taneja2013analytics,chen2011novel}. Although this is a suitable solution for smart homes and offices, it does not scale to industrial deployments (e.g., smart railways), or cases where the IoT devices are connected directly to the Internet (e.g., some surveillance cameras). Our method does not depend on the IoT devices' deployment or topology.
Other studies have tried to avoid the issue of training altogether, by using a trust anchor, such as an IoT device's functional relationship to detect anomalies. In \cite{moon2015functional}, the authors propose executing every distributed computation twice across different IoT devices and then compare the results to detect deviations (infected devices). However, this method was only designed to protect specific types of IoT devices, from specific types of attacks. Our method is generic to the type of device, and the type of attack.
Other trust anchors solutions include the Trusted Platform Module (TPM) \cite{morris2011trusted} and Trusted Execution Environment (TEE) \cite{yiu2015armv8}. ARM's TrustZone \cite{su2011multi} is a TEE implemented in the hardware, providing a one-way separation between two worlds: ``unsecured'' and ``secured''. In \cite{abera2016c} the authors proposed C-FLAT which utilizes the Trust Zone for attesting the IoT device's control-flow behavior against a simulation run in parallel on a central server. Although an application's control-flow can be used to detect a vast range of code execution attacks, C-FLAT is limited to specific IoT devices which (1) do not execute code continuously or (2) devices whose behavior is not affected by external sensory events (e.g., smart cameras). Our method analyzes control-flow behavior to detect abnormalities dynamically on-site, and therefore does not have these limitations.
\section{The Anomaly Detection Model}\label{sec:anom}
In this section, we present a novel method for efficiently modeling an application's control-flow, and then detecting abnormal patterns with the trained model. The method is applied locally and continuously on a single IoT device. Later, in Section \ref{sec:ciota}, we will present the proposed framework for enabling the decentralized collaborative training of the anomaly detection model.
\subsection{Motivation}
When an application is executed, the kernel designates a region of memory for the program to operate in. The region contains the program's code (machine instructions) and room for data (e.g., variables) to be manipulated by the program. As a program runs, a program counter (PC) tracks the current location (in memory) of the current instruction being executed. The PC will jump to different locations when functions, if statements, and loops are performed. By following the location of the PC over the application's region in memory, a pattern emerges. This pattern captures the behavior (control-flow) of the application. The objective is to model the normal behavior of an application's control flow, and then later detect when the behavior changes.
When an attacker does not have the victim's credentials, the attacker may attempt to exploit a software vulnerability in order to obtain access to restricted assets, or to perform some other undesirable task (e.g., install a bitcoin mining bot). When an exploit is executed on an application, the control-flow of the app will deviate from the behavior intended by the app's developers. By detecting this abnormality, we can identify the threat and then take the proper steps to alert the user and mitigate it.
Buffer overflow and code-reuse are examples of attacks which abnormally affect the PC's location in memory. Another example is the ``Zimperlich'' \cite{Zimperlich} vulnerability in Android which gives the attacker privilege escalation. When exploited, the ``Zimperlich'' causes the \textit{setuid} operation to fail.
However, by monitoring the control-flow of the application, we can detect that the app was attempting to access the region of memory where \textit{setuid} is located, at an unusual time. As a result, we can raise an alert which will reveal the attack to the user.
With this approach, it is challenging for the attacker to evade detection. This is because most systems cannot change the code loaded into memory. Therefore, in order for the attacker to execute code which will hide the malicious activities, the attacker must either (1) add code of his own (which will make the PC jump), or (2) override existing code with his own (which will change the behavioral flow of the PC). This places the attacker in a \textit{catch-22}, where his exploit will ultimately be detected as an anomaly (Fig. \ref{ExecutionFlow}).
\begin{figure}[!t]
\centering
\includegraphics[width=.7\columnwidth]{Figure/ExecutionFlow.pdf}
\caption{A visualization of a smart-light's program control-flow over the memory space, and the effect caused when a vulnerability is exploited to run malicious code.}
\vspace{-0.3cm}
\label{ExecutionFlow}
\end{figure}
\subsection{Markov Chains}
In order to efficiently model sequences, we use a probabilistic model called a Markov chain (MC). An MC is a {\em memory-less process}, i.e., a process where the probability of transition at time $t$ only depends on the state at time $t$ and not on any of the states leading up to that state.
Typically, an MC is represented as an adjacency matrix $M$, such that $M_{ij}$ stores the probability of transitioning from state $i$ to state $j$ at any given time $t$. Formally, if $X_t$ is the random variable representing the state at time $t$, then
\begin{equation} \label{eq:3}
M_{ij}=Pr(X_{t+1}=j|X_{t}=i)
\end{equation}
An EMM \cite{bhat2008extended} is the incremental version of the MC. Let $N=[n_{ij}]$ be the frequency matrix, such that $n_{ij}$ is the number of transitions which have occurred from state $i$ to state $j$. From here, the MC can be obtained by
\begin{equation}\label{eq:emm}
M=[M_{ij}]=\left[\frac{n_{ij}}{n_i}\right]
\end{equation}
where $n_i=\sum_j n_{ij}$ is the total number of outgoing transitions observed from state $i$. By maintaining $N$, we can update $M$ incrementally by simply adding the value '1' to $N_{ij}$ whenever a transition from $i$ to $j$ is observed. In most cases, $N$ is a sparse matrix (most of the entries are zero). When implementing an EMM model, one can use efficient data structures (e.g., compressed row storage or hash maps) to track large numbers of states with a minimal amount of storage space.
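To make this concrete, the following is a minimal Python sketch of the frequency matrix $N$ and the derivation of $M$ in (\ref{eq:emm}); the class name and state labels are our own illustrative choices, not those of our released implementation:

```python
from collections import defaultdict

class EMM:
    """An Extensible Markov Model: a sparse frequency matrix N over states."""

    def __init__(self):
        # N[i][j] = number of observed transitions from state i to state j
        self.N = defaultdict(lambda: defaultdict(int))

    def update(self, i, j):
        """Record one transition i -> j (an O(1) operation)."""
        self.N[i][j] += 1

    def prob(self, i, j):
        """Derive the MC entry M_ij = n_ij / n_i from the frequency counts."""
        row = self.N.get(i, {})
        n_i = sum(row.values())
        return row.get(j, 0) / n_i if n_i > 0 else 0.0

emm = EMM()
for i, j in [(0, 1), (0, 1), (0, 2), (1, 0)]:
    emm.update(i, j)
assert emm.prob(0, 1) == 2 / 3  # two of the three transitions out of state 0
```

Because $N$ stores raw counts rather than probabilities, an update is a single increment, and the hash-map representation keeps the memory footprint proportional to the number of transitions actually observed.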
If $N$ was generated using normal data only, then the resulting MC can be used for the purpose of anomaly detection \cite{Patcha20073448}. Let $Q_{k}$ be the trajectory over the last $k$ observed transitions in the MC, where $k\in\{1,2,3,\ldots\}$ is a user defined parameter.
The simplest anomaly score metric is the probability of the observed trajectory $Q_{k}=(s_0,\ldots,s_k)$ w.r.t. the given MC. This is given by
\begin{equation}
Pr(Q_{k})=Pr(\bigwedge_{i=0}^k (X_i=s_i))=\prod_{i=0}^{k-1} M_{s_i,s_{i+1}}
\label{eq:trajectory-prob}
\end{equation}
When a new transition from state $i$ to state $j$ is observed, we assert that the transition was anomalous if $Pr(Q_k)<p_{thr}$, where $p_{thr}$ is a user defined cut-off probability. However, for large $k$, or in the case of noisy data, $Pr(Q_k)$ can generate many false positives. In this case, the average probability of the sequence can be used
\begin{equation}
\overline{Pr}(Q_{k})=\frac{1}{k}\sum_{i=0}^{k-1} M_{s_i,s_{i+1}}
\label{eq:trajectory-avprob}
\end{equation}
Lastly, to avoid corrupting the model, one should not update $N_{ij}$ with a transition deemed abnormal (part of an attack).
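The scoring procedure in (\ref{eq:trajectory-avprob}) can be sketched as follows; the toy transition matrix and the threshold value below are hypothetical, chosen only for illustration:

```python
from collections import deque

def avg_score(M, states):
    """Average transition probability of a trajectory (s_0, ..., s_k)."""
    probs = [M.get((states[t], states[t + 1]), 0.0) for t in range(len(states) - 1)]
    return sum(probs) / len(probs)

# Toy MC: M[(i, j)] = Pr(X_{t+1}=j | X_t=i), assumed trained on normal data.
M = {(0, 1): 0.9, (1, 0): 0.8, (1, 2): 0.2}
k, p_thr = 3, 0.6
Q = deque([0, 1, 0, 1], maxlen=k + 1)  # k transitions span k+1 states
assert avg_score(M, list(Q)) > p_thr   # normal trajectory
Q.append(5)                            # jump into an unseen memory region
assert avg_score(M, list(Q)) < p_thr   # flagged as anomalous
```

Note that a transition which raises an alert is not fed back into $N$; this is exactly the conditional update in Algorithm \ref{alg:anomDetect}.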
\subsection{Detecting Abnormal Control-Flows in Memory}
To track the logical address of the PC in real-time, we can use a kernel debugger such as the Linux performance monitoring API (\texttt{linux/perf\_event.h})\footnote{The API can be found here: \texttt{http://man7.org/linux/man-pages/\\man2/perf\_event\_open.2.html}, and sample code can be found here: \texttt{https://git.io/vAIvd}}. The debugger runs in parallel to the target application and tracks addresses of the PC. The debugger periodically reports the addresses observed since the last report. The sequence of observed addresses can then be modeled in the MC.
Modeling a memory space as states in an MC is a challenging task. Due to memory limitations, it is not practical to store every address in memory as a state. Doing so would also require us to track the location of the PC after every operation. This would incur a significant overhead. Therefore, we propose that a state in the MC should be a region of memory (Fig. \ref{fig:memflow}). We also configure the debugger to report right after \texttt{branch} and \texttt{jump} instructions. To accomplish this, we used a kernel feature via the debugger.\footnote{The feature is called \textit{coresight} in ARM CPUs, and \textit{lastjump} in Intel CPUs.}
Let $PC_{addr}$ be the current logical address of the application's program counter, where \texttt{0x0} is the start of the app's logical memory-space. Let $B$ be the partition size in Bytes, defined by the user. The MC state $i$, in which the program is currently located, is obtained by
\begin{equation}
i = \floor*{\frac{PC_{addr}}{B}}
\end{equation}
When selecting the partition size $B$, there is a trade-off between the true positive rate and memory requirements. For an Apache web-server, we found that a partition size of 256 Bytes was enough to detect our evaluated attacks, while the memory consumption of the model $N$ was only 20KB.
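The address-to-state mapping itself is a single integer division; as a small sketch (the addresses below are hypothetical):

```python
B = 256  # partition size in bytes (the value we used for the Apache web-server)

def state(pc_addr: int) -> int:
    """Map a program counter address to its MC state (memory region)."""
    return pc_addr // B

# Addresses in the same 256-byte region share a state;
# a jump across regions moves the chain to a new state.
assert state(0x1A04) == state(0x1A7F)
assert state(0x1A04) != state(0x2B00)
```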
In Algorithm \ref{alg:anomDetect}, we present the complete process for modeling and monitoring an application's control-flow in memory. There, $T_{grace}$ is the initial learning time given to the MC, before we start searching for anomalies.
\begin{figure}[!t]
\centering
\includegraphics[width=.7\columnwidth]{Figure/EMM.png}
\caption{A visualization of partitioning a smart-light's memory-space into states in a Markov Chain.}
\label{fig:memflow}
\end{figure}
\begin{algorithm}\label{alg:monitor}
\caption{The algorithm for training an MC, and detecting anomalies in an application's control flow via the memory.}
\label{alg:anomDetect}
\begin{algorithmic}[1]
\Function{Monitor}{app\_name, $k$, $B$, $p_{thr}$, $T_{grace}$}
\State $N \leftarrow DynamicSparseMatrix()$ \Comment{init MC}
\State $Q_k \leftarrow FIFO(k)$ \Comment{init state trace ring-buffer}
\State $i \leftarrow 0$ \Comment{initial state}
\State fd $\leftarrow$ register(app\_name) \Comment{track app with debugger}
\While{buffer $\leftarrow$ read(fd)}
\For{$addr \in$ buffer}
\State $j = \floor*{\frac{addr}{B}}$ \Comment{determine current state}
\State $Q_k$.push($j$) \Comment{record the state in the trace}
\If{notInGrace($T_{grace}$) \textbf{and} $\overline{Pr}(Q_{k})<p_{thr}$}
\State raise alert
\Else
\State $N_{ij}++$\Comment{update MC}
\EndIf
\State $i = j$
\EndFor
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{Collaborative Training of EMMs}
Multiple devices can collaborate in parallel to train $N$. The benefits of parallel collaboration are that (1) we arrive at a converged/stable model much faster, at a rate which scales with the number of collaborators, and (2) we increase the likelihood of observing rare yet benign occurrences, which reduces our false positive rate. To collaborate across multiple devices, we assume that the devices' hardware, kernel, and target application (being monitored) are of the same version. For example, all of the Samsung smart-fridges of the same model.
Let $\mathbf{N}$ be a set of EMM models, where $\mathbf{N}^{(k)}_{ij}$ is the element $N_{ij}$ in the $k$-th model of $\mathbf{N}$.
Since we assume that multiple devices statistically observe the same state sequences, EMMs ($\mathbf{N}$) trained at separate locations can be merged into a single EMM. Since an EMM is a frequency matrix, we can combine the models by simply adding the matrices together:
\begin{equation}\label{eq:simplecombine}
N^*=[N_{ij}^*]=[\sum_k \mathbf{N}^{(k)}_{ij}]
\end{equation}
It is critical that the combined model $N^*$ be trained on normal behaviors only. However, we cannot assume that all models have not been negatively affected. We propose two security mechanisms to protect $N^*$: model-attestation and abnormality-filtration.
\begin{description}
\item[Abnormality-filtration] is used to combine a set of models $\mathbf{N}$ into a single model $N^*$, in a manner which is more robust to noise and adversarial attacks than (\ref{eq:simplecombine}). The approach is to filter out transitions found in $N^*$ if the majority of models in $\mathbf{N}$ have not observed the same transition. To produce $N^*$ from $\mathbf{N}$ in our framework, Algorithm \ref{alg:combine} is performed, where $p_a$ is the minimum percentage of devices which must observe a transition in order for it to be included in the combined model. After forming the MC model $M^*$ from $N^*$ using (\ref{eq:emm}), an agent can attest that $M^*$ is a verified model via model-attestation.
\item[Model-attestation] is used to determine whether a trusted model $N^{(i)}$ is similar to a given model $N^{(j)}$. If $N^{(j)}$ is similar, then it is considered to be a verified model with respect to $N^{(i)}$. To determine the similarity, we measure the linear distance between the EMMs, defined as
\begin{equation}\label{eq:attest}
d(N^{(i)},N^{(j)}) = \frac{\sum_{k=1}^{\text{dim}(M)}\sum_{l=1}^{\text{dim}(M)} |M^{(i)}_{kl} - M^{(j)}_{kl}| }{\text{dim}(M)^2}
\end{equation}
where dim$(M)$ is the length of $M$'s dimensions, and $M$ is the Markov chain obtained from $N$. A local device $i$ can attest that model $N^{(j)}$ is a self-similar model, if $d(N^{(i)},N^{(j)})<\alpha$, where $\alpha$ is a parameter of our framework (see Section \ref{sec:ciota}).
\end{description}
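The attestation distance in (\ref{eq:attest}) can be computed directly from two frequency matrices. The following is a plain-Python sketch (function names are our own, and \texttt{to\_markov} applies the row normalization used to derive $M$ from $N$); it is illustrative, not the framework's code:

```python
def to_markov(N):
    """Row-normalize an EMM frequency matrix N into a Markov chain M.
    All-zero rows (unvisited states) are left as zero distributions."""
    M = []
    for row in N:
        s = sum(row)
        M.append([x / s for x in row] if s else [0.0] * len(row))
    return M

def attest_distance(N_i, N_j):
    """Model-attestation distance: mean absolute difference between the two
    derived Markov chains, normalized by dim(M)^2 as in eq. (attest)."""
    M_i, M_j = to_markov(N_i), to_markov(N_j)
    dim = len(M_i)
    total = sum(abs(a - b) for r1, r2 in zip(M_i, M_j) for a, b in zip(r1, r2))
    return total / dim ** 2
```

A device would then accept a received model whenever \texttt{attest\_distance(N\_local, N\_received)} falls below $\alpha$.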
\begin{algorithm}
\caption{The algorithm for combining a set of EMMs.}
\label{alg:combine}
\begin{algorithmic}[1]
\Function{Combine}{$\mathbf{N}$, $p_a$}
\State $N \leftarrow$ empty\_EMM$()$ \Comment{initialize empty freq. matrix}
\For{$n_{ij} \in N$}
\State $C \gets 0$ \Comment{init the counter}
\For{$k \in 1:|\mathbf{N}|$}
\State $n_{ij} \gets n_{ij} + \mathbf{N}^{(k)}_{ij}$
\If{$\mathbf{N}^{(k)}_{ij} > 0$}
\State $C++$
\EndIf
\EndFor
\If{$\frac{C}{|\mathbf{N}|} \leq p_a$} \Comment{not enough devices have observed $ij$}
\State $n_{ij} \gets 0$
\EndIf
\EndFor
\State{\Return $N$}
\EndFunction
\end{algorithmic}
\end{algorithm}
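Algorithm \ref{alg:combine} amounts to an element-wise sum with a majority filter. A minimal Python sketch (names assumed; $p_a$ treated as a fraction) might look like:

```python
def combine(models, p_a):
    """Sketch of the Combine algorithm: sum the EMM frequency matrices
    element-wise, zeroing any transition observed by a fraction <= p_a of
    the devices (abnormality-filtration). `models` is a list of square
    matrices of equal dimensions."""
    dim, K = len(models[0]), len(models)
    N = [[0] * dim for _ in range(dim)]
    for i in range(dim):
        for j in range(dim):
            seen = sum(1 for m in models if m[i][j] > 0)  # devices observing i->j
            if seen / K > p_a:                            # majority filter
                N[i][j] = sum(m[i][j] for m in models)
    return N
```

A transition that only one of many devices reports (e.g., an attacker's tainted model) is thereby dropped from the combined model.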
\section{The Framework}\label{sec:ciota}
In this section, we present the proposed framework and protocol. The framework enables distributed devices to safely and autonomously train anomaly detection models (Section \ref{sec:anom}), by utilizing concepts from the blockchain protocol.
First we will provide an overview and intuition of the framework (\ref{subsec:overview}). Then we will present the terminology which we use to describe the blockchain protocol (\ref{subsec:terms}). Finally, we will present the protocol and discuss its operation (\ref{subsec:protocol}). Later in Section \ref{sec:discussion}, we will discuss the various challenges and design considerations.
\subsection{Overview}\label{subsec:overview}
The purpose of the framework is to provide a means for IoTs to perform anomaly detection on themselves, and to autonomously collaborate to find the anomaly detection model.
For example, a company may want to gradually deploy thousands or millions of IoT devices. Each of the devices has an application, such as a web server (so that the user can interface with and configure the device). The application may have known or unknown vulnerabilities which can be exploited by an attacker to accomplish some nefarious task. To detect threats affecting the devices, the company installs an agent on each device, and has the agent monitor the application.\footnote{An agent can cover multiple applications on a single device by maintaining separate models and blockchains. For simplicity, we will focus on protecting a single application.}
The job of an agent is to (1) learn the normal behavior of the application, (2) report abnormal activity in the application (by running Algorithm \ref{alg:anomDetect} on the local model $M^{(\ell)}$), and (3) report agents which appear to be compromised or infected with malware.
Each agent then collaborates with the other agents by trying to figure out how to safely combine everybody's local models into a single global model $M^{(g_1)}$. Once the agents agree upon a global model, each device will replace their $M^{(\ell)}$ with $M^{(\mathit{g}_1)}$. The agents continue to update their $M^{(\ell)}$ and collaborate on creating $M^{(\mathit{g}_2)}$. This collaboration cycle repeats indefinitely.
The benefit of collaboration is:
\begin{enumerate}
\item An agent who has accidentally trained his $M^{(\ell)}$ on malicious behaviors will now detect them as malicious.
\item The agents will benefit from the vast experience of all the devices together, and accurately classify rare benign events.
\item The agents will be able to identify rogue agents by detecting corrupt \textit{partial-blocks} which fail model-attestation.
\end{enumerate}
A critical part of the collaboration process is filtering out rare benign behaviors from possible malicious behaviors.
The difference between the two is that we expect to see rare benign behaviors among more devices than malicious behaviors, especially at the outset of an attack (e.g., the propagation of a worm). This is relative to the parameter $p_a$: we expect at least $p_a\%$ of the agents to experience the rare benign events, and less than $p_a\%$ to be infected. Note that after $M^{(g)}$ converges, the malicious behaviors are detected, and $M^{(\ell)}$ is not updated with detected malicious behaviors (detailed in the protocol later on).
Since agents do not update their $M^{(\ell)}$ when an anomaly is detected, we expect each of the local models to remain pure.
However, there are cases where an $M^{(\ell)}$ can be corrupted. For example, when an agent launches after a malicious behavior begins, but before $M^{(\mathit{g}_1)}$ has been created.
To protect the integrity of the next global model, when an agent receives a set of local models (under collaboration to become the next global model), it will$\ldots$
\begin{enumerate}
\item \textbf{[\textit{trust}]} $\ldots$consider only sets which contain authenticated models from different agents.
\item \textbf{[\textit{filter}]} $\ldots$combine the set into a potential $M^{(\mathit{g})}$, and remove behaviors from $M^{(\mathit{g})}$ which have not been reported by the majority of agents (\textit{abnormality-filtration}).
\item \textbf{[\textit{attest}]} $\ldots$accept the set of models as a potential $M^{(\mathit{g})}$, if it does not conflict with the agent's current local model (\textit{model-attestation}).
\item \textbf{[\textit{inform}]} $\ldots$share the accepted set of models (including his own) with other agents, while reporting abnormal application behaviors and problematic agents (rejected \textit{partial-blocks}).
\end{enumerate}
The following analogy provides some intuition for how the agents create $M^{(\mathit{g})}$:
\begin{tcolorbox}[breakable,title=\textit{Analogy}]
A group of painters (agents) are looking at the same colored object (target application), and they are working together to select a single colored paint ($M^{(\mathit{g})}$) to describe it. Each painter produces a bucket of paint ($M^{(\ell)}$) based on their perception of the object's color. The painters then share their paint buckets with their neighbors, who mix the received paints together, while filtering out imperfections (\textit{abnormality-filtration}), but only if they feel the resulting color will still resemble the colored object (\textit{model-attestation}). The painters continue to adjust the paints, and after a set number of iterations of sharing, each painter pours some of the paint onto his/her palette, and uses it to paint (perform anomaly detection). Then, the cycle repeats as the painters continue to adjust, filter, and share the paints in hopes of perfecting the color.
\end{tcolorbox}
To enable this autonomous trusted distributed collaboration, we use a \textit{blockchain}. In the following sections, we will detail how blockchain is used for this purpose.
\subsection{Terminology \& Notation}\label{subsec:terms}
In the framework, a blockchain is a linked list of sequential blocks, where each block contains a set of records (EMM models) acquired from different IoT devices of the same type (see Fig. \ref{fig:block}). Each device maintains a copy of the latest chain, and collaborates on the next block.
We will now list the terminology and notations necessary to explain the framework in detail:
\begin{description}
\item[Model] A Markov chain anomaly detection model denoted $M$, where $N$ denotes the model in its EMM frequency matrix form. The model supports (1) the calculation of a distance between models, and (2) combining (merging) several models of the same type together. In this version, we use an EMM. We denote a model which is currently deployed on the local device as $N^{(\ell)}$.
\item[Combined Model] A model created by merging a set of models together. The combined model only contains elements (transitions) which are present in at least $p_a$ percent of the models (see \textit{abnormality-filtration} in Algorithm \ref{alg:combine}).
\item[Verified Model] Let $d(M^{(i)},M^{(j)})$ be the distance between models $M^{(i)}$ and $M^{(j)}$. A model $M^{(i)}$ is said to be verified by a device if $d(M^{(i)},M^{(\ell)}) < \alpha$, where $\alpha$ is a parameter given by the user (see \textit{model-attestation} in (\ref{eq:attest})).
\item[Record] A record is an entry in a block. A record consists of the model $N^{(i)}$ from device $i$, and a digital signature $S_{k_i}(\texttt{m}, n, \textbf{N})$, where $k_i$ is device $i$'s private key, \texttt{m} is the blockchain's meta-data (hash of previous block, target application, version$\ldots$), $\textbf{N}$ is the set of models from the start of the current block up to and including $N^{(i)}$, and $n$ is a counter which is incremented with each new block. The purpose of $n$ is to track the length of the chain and to prevent replay attacks. A record is \textit{valid} if the format is correct and the signature can be verified using the corresponding device's public key.
\item[Block] A list of exactly $L$ records from different devices and some metadata. Each record is verified by the agent's digital signatures, where each agent's signature covers its model, all preceding models, the current block number ($n$), and its metadata (e.g., the agents' IP addresses). We denote the $i$-th record in a block as $r_i$. The models in a block, when combined, represent a collaborative model $M^{(\mathit{g})}$ which can be used to replace a local model $M^{(\ell)}$. A block is \textit{valid} if the format is correct and contains valid records.
\item[Partial Block] The same as a block, but less than $L$ entries long. The combined models in a \textit{partial-block} represent a proposed collaborative model $M^{(g)}$ in progress. An agent contributes (adds its own model) to a \textit{partial-block} only if (1) the \textit{partial-block} is valid, (2) the agent does not already have a record there, and (3) the combined model, formed from the enclosed models, is a verified model (with respect to the agent's local model $N^{(\ell)}$). The length of a \textit{partial-block}, from the perspective of agent $i$, is the number of records in the \textit{partial-block}, excluding $i$'s own record if present.
\item[Chain] A series of blocks (blockchain), where each block contains a hash of the previous block in \texttt{m}, and where the counter $n$ is the index of the block in the chain ($n=1$ for the first block, etc.). A chain contains the current collaboration for the next $M^{(\mathit{g})}$ (\textit{partial-block}), the current model with consensus (the last block in the chain), and an optional history of collaboration used for analytical purposes (all other blocks). The length of a chain is defined as the total number of full blocks in that chain. Finally, a chain may have at most one \textit{partial block} appended to the end of the chain. We denote the $i$-th block in a chain as $B_i$.
\item[Agent] A program that runs on an IoT device which is responsible for (1) training and executing the local model $N^{(\ell)}$, (2) downloading more advanced broadcasted chains to replace $N^{(\ell)}$ and the locally stored chain, (3) periodically broadcasting the locally stored chain, with the agent's latest $N^{(\ell)}$ as a record in the \textit{partial block}, and (4) reporting any anomalous behaviors/blocks.
\end{description}
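The record structure above can be sketched as follows. Since this is only an illustration, an HMAC stands in for the digital signature $S_{k_i}(\texttt{m}, n, \mathbf{N})$ (so verification here uses a shared key table); a real deployment would use asymmetric signatures with per-device key pairs. All function and field names are assumptions:

```python
import hashlib, hmac, json

def sign_record(key: bytes, meta: dict, n: int, models: list) -> str:
    """Stand-in for S_k(m, n, N): a MAC over the chain metadata, the block
    counter n (replay protection), and all models up to and including this
    record."""
    payload = json.dumps({"m": meta, "n": n, "N": models}, sort_keys=True)
    return hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

def make_record(device_id, key, meta, n, prior_models, model):
    return {"id": device_id, "model": model,
            "sig": sign_record(key, meta, n, prior_models + [model])}

def record_is_valid(record, key_table, meta, n, prior_models):
    expected = sign_record(key_table[record["id"]], meta, n,
                           prior_models + [record["model"]])
    return hmac.compare_digest(expected, record["sig"])
```

Because each signature covers all preceding models in the block, reordering or tampering with any earlier record invalidates every later signature.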
\begin{figure}[!t]
\centering
\includegraphics[width=.84\columnwidth]{blockfig.pdf}
\caption{An example of a chain with two blocks and a partial block, where the device IDs are $\{\mathbf{a},\mathbf{b},\mathbf{c}\ldots\}$.}
\label{fig:block}
\end{figure}
\subsection{The Blockchain Protocol}\label{subsec:protocol}
By using a \textit{block-chain}, agents are able to collaborate autonomously in a manner which is robust to adversarial attacks. Every agent maintains a local copy of the `best' chain.
Closed blocks in the chain represent past completed global models, where the last completed block in the chain contains the most recently accepted model $M^{(\mathit{g})}$. The next global model is collaborated via a \textit{partial-block} appended to the chain. A \textit{partial block} only grows if agents can verify that it contains a safe model that captures the training distribution (the target app's behaviors). This is accomplished through trust propagation: agents (1) broadcast their \textit{partial block} to other agents, (2) replace their local \textit{partial-block} with received ones if they are both longer and similar to $N^{(\ell)}$ (same distribution check via \textit{model attestation}), and (3) reject and report \textit{partial-blocks} that are significantly different than $N^{(\ell)}$.
The blockchain protocol is as follows (illustrated in the flow-chart of Fig. \ref{LLD}):
\begin{tcolorbox}[breakable,title=\textit{Blockchain Protocol}]
\singlespacing \vspace{-1.5em}
\begin{enumerate}[leftmargin=*,label=\Alph*.]
\item \textbf{Initialize.} An agent starts with an empty chain (an empty \textit{partial-block} with no preceding blocks) stored locally on its device, and initializes an empty local model $N^{(\ell)}$.
\item \textbf{Gather Intelligence (Monitor).} The agent (1) monitors the target application, (2) updates $N^{(\ell)}$ incrementally, and (3) reports anomalies if $T_{grace}$ has passed (Algorithm \ref{alg:monitor}).
\item \textbf{Share Intelligence.} Every $T$ seconds:
\begin{enumerate}[label*=\arabic*.]
\item \label{step:add_self} The agent adds its own local model $N^{(\ell)}$ to the \textit{partial-block} as a record, if $M^{(\ell)}$ is stable (passed $T_{grace}$), and does not yet exist in the \textit{partial-block}.
\item \label{step:broadcast} The agent shares its \textit{block-chain} (\textit{partial-block} and all preceding blocks) with $b$ other agents in a random order.\footnote{The agent only needs to broadcast the chain to a few `neighboring' agents, similar to how Ethereum and Bitcoin work.}
\end{enumerate}
\item \textbf{Receive Intelligence.} When an agent receives a \textit{block-chain}:\footnote{To avoid DoS attacks, an agent will at most process $b$ chains once every $T$ seconds.}
\begin{enumerate}[label*=\arabic*.]
\item \textbf{If} the chain is shorter than the local chain: \textit{then} the agent discards the received chain.
\item \textbf{If} the chain is longer than the local chain: \textit{then} the agent checks...
\begin{enumerate}[label*=\arabic*.]
\item \textbf{If} the last block is a valid block: \textit{then} the received chain replaces the local chain, and the models $\mathbf{N}$ in the last block are combined (\textit{abnormality-filtration}) to form $N^{(\mathit{g})}$ which replaces $N^{(\ell)}$.\footnote{The agent does not perform \textit{model-attestation} on a valid block.}\footnote{Option: Agents update $T$ to be a factor of the number of closed blocks in the local chain. Since $M^{(g)}$ converges over time, it is safer to prolong changes to the next version, increasing the response time when an attack on the blockchain is detected. See Section \ref{subec:adversarial} for details.}
\item \textbf{Else}: the agent discards the received chain.
\end{enumerate}
\item \textbf{If} the chain has the same length as the local chain: \textit{then} the agent checks...
\begin{enumerate}[label*=\arabic*.]
\item \label{step:pb_accept}\textbf{If} (1) the received chain's \textit{partial-block} is longer than the local chain's \textit{partial-block} (excluding his own record from both), (2) the received \textit{partial-block} is valid, and (3) the models $\mathbf{N}$ in the \textit{partial-block} form a combined model (\textit{abnormality-filtration}) which the agent can attest is a verified model (\textit{model-attestation}): \textit{then} the received chain replaces the local chain.
\item \label{step:pb_message} \textbf{Else If} (1) the chain's \textit{partial-block} has the same length as the local chain's \textit{partial-block} (excluding his own record), (2) the two \textit{partial-blocks} have different agent IDs, (3) the \textit{partial-block} is valid, and (4) this is the $k$-th received chain of equal length whose \textit{partial-block} was not used: \textit{then} send the local chain to the agent(s) in the received \textit{partial-block} who do not appear in the local \textit{partial-block}.\footnote{The received \textit{partial-block} has the IP addresses of the target agents.}
\item \label{step:pb_reject} \textbf{Else}: (1) the agent discards the received chain, and (2) \textbf{If} in steps \ref{step:pb_accept} or \ref{step:pb_message} the \textit{partial-block} failed the validity check or failed the \textit{model-attestation} to a significant degree, \textit{then} report the block and sending agent.\footnote{Alternative version: If the last block is valid yet different than the local chain's, then merge that block's combined model into $N^{(\ell)}$. This helps form a more general $N^{(g)}$ without communities. A limitation must be placed on the number of merges per $T$ seconds.}
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{tcolorbox}
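Step D of the protocol is essentially a three-way comparison on chain length, followed by a partial-block comparison. A compressed Python sketch of that decision logic follows; the chain representation and the caller-supplied stand-ins for record validation, abnormality-filtration, and model-attestation are all illustrative assumptions:

```python
def pb_len(pb, my_id):
    """Length of a partial-block excluding the agent's own record."""
    return sum(1 for r in pb if r["id"] != my_id)

def on_receive(my_id, local, received, is_valid, attests):
    """Sketch of protocol step D ("Receive Intelligence"). Chains are
    {'blocks': [...], 'partial': [records]}; `is_valid` checks block/record
    validity and `attests` performs abnormality-filtration followed by
    model-attestation against the local model."""
    if len(received["blocks"]) < len(local["blocks"]):
        return local, "discard"                          # D.1: shorter chain
    if len(received["blocks"]) > len(local["blocks"]):   # D.2: longer chain
        if is_valid(received["blocks"][-1]):
            return received, "adopt-global"              # also replaces N^(l)
        return local, "discard"
    pb_l, pb_r = local["partial"], received["partial"]   # D.3: equal length
    if (pb_len(pb_r, my_id) > pb_len(pb_l, my_id)
            and is_valid(pb_r) and attests(pb_r)):
        return received, "adopt-partial"                 # D.3.1
    return local, "message-or-reject"                    # D.3.2 / D.3.3
```

The final branch collapses steps D.3.2 and D.3.3: a full implementation would additionally message the missing agents or report the failing block.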
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{Figure/flow_chart_ABC.pdf}
\includegraphics[width=\textwidth]{Figure/flow_chart_D.pdf}
\caption{A flow-chart of the blockchain protocol.}
\label{LLD}
\end{figure}
\subsection{Proof of Cumulative Majority}
In blockchains, there is often some form of effort which deters an attacker from making false records. In systems like Ethereum, it is the effort of solving a crypto challenge. This type of challenge is necessary in systems like Ethereum, because there is a base assumption that all participants are untrusted from the start. In contrast, our system assumes that the majority of participants (agents) are on uncompromised devices at the start, because they are deployed by the manufacturer. This is a common assumption for IDS systems.
Therefore, this blockchain uses ``proof of cumulative majority'' to deter attacks. The cumulative majority refers to the distributed consensus, or significant mass, achieved by accumulating $L$ participants' signatures on a set of models to be combined as the next global model.
Concretely, an agent only replaces its local \textit{partial-block} with a received \textit{partial-block} if it is similar to the behaviors it has seen locally (\textit{model-attestation}). Therefore, a \textit{partial-block} of length $L$ can only exist if $L$ agents can attest that the model is similar to their own model/observations (i.e., there are $L$ compromised agents within $T$ seconds). Since $L$ is very large in practice (10k-100k), and \textit{partial-blocks} are indiscriminately shared and propagated: (1) a closed block has majority trust on it, and (2) is unlikely to be malicious due to the attacker's significant challenge.
The attacker's challenge/effort in this blockchain is to compromise a significant number of devices before $T$ seconds pass. Otherwise, the attack is reported in step \ref{step:pb_reject} of the protocol. At which point, the attack is discovered and (1) the affected devices' keys can be invalidated, and (2) the devices can be cleaned and patched.
With this in mind, we can see how the proposed blockchain system achieves its objective as an IDS. When a device is compromised, either (1) the agent will detect an abnormal behavior and report it to the Security Operations Center (SOC), or (2) the model will be corrupted/tainted by the latent behavior. In the latter case, if the compromised device publishes its model, it will be rejected by the other devices, because the tainted partial block (PB) will no longer be self-similar to the other devices' models in the \textit{model-attestation} step. The other agents will then report the rejected blocks to the SOC, and it will be clear which device is infected (identified by the problematic model's key in the reported PBs). The SOC can then invalidate that device's key and investigate the intrusion. Therefore, if the agent is compromised, then the tainted model will be detected by the community; and if the model is corrupt (contains abnormal behaviors), then the agent will detect the intrusion when it replaces the local model with the next global model.
\subsection{Model Conflicts in Partial Blocks}\label{subsec:conflicts}
A concern might be that the agents will disagree on the models in the \textit{partial-block} and not reach a consensus. However, all agents monitor the same application running on the same type of hardware. Therefore, their models are very similar to one another. This is intuitive because each agent's training data follows the same distribution, and the Markov chain captures the probabilities of PC transitions.
Since the models are trained on the same distribution, any model formed by combining a subset of all agents' models will also be similar to all agents' models. More formally, we observe that
\begin{equation}
d\left(\mathrm{combine}\left(\mathbf{N}_{i}\right),N_{j}^{(\ell)}\right)<\alpha \hspace{1em} \forall i,j : N_{j}^{(\ell)}\in \mathbf{N}^{(\ell)},\ \mathbf{N}_{i}\subseteq \mathbf{N}^{(\ell)}
\end{equation}
where $\mathbf{N}^{(\ell)}$ is the set of all agents' models, and $d$ is the average parameter distance defined in (\ref{eq:attest}). This holds true since all agents are sampling from the same distribution (hardware and software). In our experiments, we were able to set $\alpha$ to a low value because the agents' benign models were consistently very similar (Section \ref{sec:eval}).
Therefore, it is highly unlikely that the \textit{partial-block} will be in conflict given a reasonable $\alpha$.
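A quick numerical sanity check of this claim can be run with toy frequency matrices sampled from the same underlying chain; all values and helper names here are illustrative assumptions:

```python
def markov(N):
    """Row-normalize a frequency matrix into a Markov chain."""
    return [[x / sum(r) if sum(r) else 0.0 for x in r] for r in N]

def dist(A, B):
    """Average parameter distance between two Markov chains, as in eq. (attest)."""
    d = len(A)
    return sum(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb)) / d**2

# Three agents sampling the same transition distribution (different counts).
agents = [[[8, 2], [1, 9]], [[80, 20], [10, 90]], [[79, 21], [12, 88]]]
combined = [[sum(m[i][j] for m in agents) for j in range(2)] for i in range(2)]

alpha = 0.05
assert all(dist(markov(combined), markov(m)) < alpha for m in agents)
```

Even with very different sample counts per agent, the combined model stays well within a small $\alpha$ of every individual model, so the partial block is unlikely to stall on attestation conflicts.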
\subsection{Deadlock Prevention}\label{subsec:deadlocks}
As mentioned in the protocol, agents should only message a few other agents in step \ref{step:broadcast} to minimize traffic overhead. However, a deadlock can occur if (1) connectivity between agents is incomplete (some agents cannot directly message other agents), (2) all agents have their neighbors' records in their \textit{partial-block}, and (3) all \textit{partial-blocks} have the same length. Although it is very rare for this to occur (one in a million depending on the connectivity), step \ref{step:pb_message} prevents any deadlocks that may happen.
The following is the formal proof that our revised system will not have any deadlocks in reaching a \textit{partial-block} of length $L$.
Let the undirected graph $G=(A,E)$ represent the agents' connectivity, where $A$ is the set of agent IDs and $E$ is the set of edges. Let $pb_i$ be the \textit{partial-block} of agent $i$ such that $pb_i \subseteq A$. We denote the set of neighbors which are directly connected to agent $i$ as $\Gamma_i$. Finally, we refer to an epoch as an iteration in which all agents have broadcast their $pb$ to their neighbors (every $T$ seconds).
In our proof, we assume that $G$ forms a single connected component. We also assume that $L=|A|$ because if a $pb$ reaches length $|A|$ then it will reach all possible $L$, where $L\leq |A|$. We also assume that all agents are drawing observations from the same distribution to train their models, and therefore will not have any issue during the $pb$ validation checks (Section \ref{subsec:conflicts}).
A deadlock occurs if $\forall i\in A:D(i)$ where the predicate $D$ is defined as $D(i) : pb_{i}^{(t)}=pb_{i}^{(t+1)} \land |pb_{i}^{(t)}|<L$.
\begin{lemma}\label{lemma:have_own}
If there was no update after an epoch (deadlock), then all agents have their own ID in their partial block. Formally,
$\forall i\in A:D(i) \rightarrow \forall i \in A : pb_i \cap \{ i \}= \{ i \}$.
\end{lemma}
\begin{proof}
Let's assume that $\forall i \in A : D(i)$ and that $\exists i \in A : pb_i \cap \{i\}=\emptyset$. This cannot be because of step \ref{step:add_self} of the protocol: every agent $i$ adds record `$i$' to $pb_i$ if $pb_i \cap \{i\}=\emptyset$. Therefore, $\forall i \in A : D(i) \rightarrow \forall i \in A : pb_i \cap \{i\}=\{i\}$.
\end{proof}
\begin{lemma}\label{lemma:have_eachother}
If there was no update after an epoch (deadlock), and agents $i$ and $j$ are neighbors, then both agents have IDs $i$ and $j$ in their partial blocks. Formally, $\forall i \in A : D(i)\rightarrow \forall (i,j) \in E : pb_i \cap \{i,j\}=\{i,j\}$.
\end{lemma}
\begin{proof}
If we prove $\forall i \in A : D(i) \rightarrow \forall (i,j) \in E : pb_i \cap \{i\} \cap pb_j = \{i\}$, then we have proven Lemma \ref{lemma:have_eachother} by symmetry: Let's assume $\forall i \in A : D(i)$ but $\exists (i,j) \in E : pb_j \cap \{i\} = \emptyset$. This could not be true because (1) agent $i$ shared its $pb$ with agent $j$ and vice versa (step \ref{step:broadcast} of the protocol), (2) $pb_i \cap \{i\}=\{i\}$ and $pb_j \cap \{j\}=\{j\}$ (Lemma \ref{lemma:have_own}), and (3) in all cases, agent $i$ would have replaced its $pb$ with $j$'s (or vice versa):
\textit{Case 1}: $|pb_i|=|pb_j|$ and $pb_i$ either has $j$ or not. If $pb_i$ has $j$ then agent $i$ should have replaced $pb_i$ with $pb_j$ because $|pb_i \oplus \{i\}|<|pb_j \oplus \{i\}|$ (step \ref{step:pb_accept} of the protocol). Similarly, if $pb_i$ doesn't have $j$ then agent $j$ would have replaced $pb_j$ with $pb_i$ because $|pb_j\oplus \{j\}|<|pb_i \oplus \{j\}|$.
\textit{Case 2}: If $|pb_i|<|pb_j|$ then agent $i$ would have replaced $pb_i$ with $pb_j$ because $|pb_i \oplus\{i\}|<|pb_j \oplus \{i\}|$ since $pb_j \cap \{i\}=\emptyset$.
\textit{Case 3}: If $|pb_i|>|pb_j|$ then there is only one case where $|pb_i \oplus \{i\}|=|pb_j \oplus \{i\}|$ resulting in neither agent performing an update: $pb_i \cap \{j\}=\{j\}$ and $|pb_i|=|pb_j|-1$. However, because $pb_j \cap \{j\}=\{j\}$ (Lemma \ref{lemma:have_own}), $|pb_j \oplus \{j\}|<|pb_i \oplus \{j\}|$, so agent $j$ would have replaced $pb_j$ with $pb_i$.
Therefore, we conclude that $\forall i \in A : D(i) \rightarrow \forall (i,j) \in E : pb_i \cap \{i\} \cap pb_j=\{i\}$, so Lemma \ref{lemma:have_eachother} holds true.
\end{proof}
\begin{lemma}\label{lemma:same_len}
If there was no update (deadlock), then all partial blocks have the same length. Formally, $\forall i \in A : D(i) \rightarrow \forall i,j \in A : |pb_i|=|pb_j|$.
\end{lemma}
\begin{proof}
Let's assume that $\forall i \in A : D(i)$ but $\exists(i,j) \in E : |pb_i|<|pb_j|$. However, $pb_i \cap \{i,j\} \cap pb_j=\{i,j\}$ (Lemma \ref{lemma:have_eachother}). This means that $|pb_i \oplus \{i\}|<|pb_j \oplus \{i\}|$ so agent $i$ would have set $pb_i=pb_j$, and $\forall i \in A : D(i)$ would not hold true. Therefore, it must be that $|pb_i|=|pb_j|$.
\end{proof}
\begin{lemma}\label{lemma:bad_neighbors}
If there was no update (deadlock), then there exist two neighbors with different partial blocks of the same length. Formally, $\forall i \in A : D(i) \rightarrow \exists(i,j) \in E : pb_i \neq pb_j$.
\end{lemma}
\begin{proof}
Let's assume $\forall i \in A : D(i)$ but $\forall (i,j) \in E : pb_i=pb_j$. According to Lemma \ref{lemma:have_own}, all agents have their own ID in their partial block. However, if all agents have the same partial block, then that means that $|pb_i|=L$ and $\forall i \in A : D(i)$ does not hold true. Therefore, it must be that $\exists (i,j) \in E : pb_i \neq pb_j$.
\end{proof}
\begin{theorem}\label{theorem:nodeadlock_eqL}
Given a set of agents $A$, the connectivity network $G$, and $L=|A|$, there will never be a deadlock. Formally, $L=|A|\rightarrow \nexists i \in A: D(i)$.
\end{theorem}
\begin{proof}
Let's assume that $L=|A|$ but $\forall i \in A : D(i)$ (there is a deadlock). This would mean that $\exists (i,j) \in E : pb_i \neq pb_j$ (Lemma \ref{lemma:bad_neighbors}). If so, it must be that there is at least one ID in $pb_j$ that is not in $pb_i$ (via Lemmas \ref{lemma:have_eachother} and \ref{lemma:same_len}). Let's say that one of these IDs is that of agent $k$. When agent $j$ shared $pb_j$ with agent $i$ (step \ref{step:broadcast} of the protocol), agent $i$ would have sent $pb_i$ directly to its non-neighbor $k$ (step \ref{step:pb_message} of the protocol). Since $|pb_k|=|pb_i|$ (Lemma \ref{lemma:same_len}), and $|pb_k \oplus \{k\}|<|pb_i \oplus \{k\}|$ because $pb_i$ does not have $k$, agent $k$ must have replaced $pb_k$ with $pb_i$. Therefore, it is impossible for $\forall i \in A : D(i)$ to hold true since in the next epoch, agent $k$ would have added itself to its partial block making $|pb_k|>|pb_i|$.
\end{proof}
The continuation can be seen through Lemma \ref{lemma:same_len}: it must be that all other agents will grow their partial blocks to the same length as $pb_k$. Then, if there is another deadlock, the above process repeats until $\exists i \in A : |pb_i|=|A|$ and the block is closed.
\begin{corollary}\label{corr:deadlock_degree}
Given a set of agents $A$, the connectivity network $G$, and $L \leq |A|$, there will never be a deadlock. Formally, $L \leq|A| \rightarrow \nexists i \in A : D(i)$.
\end{corollary}
\begin{proof}
The proof is trivial via Theorem \ref{theorem:nodeadlock_eqL}: some agent's partial block will reach length $|A|$, and since step \ref{step:add_self} of the protocol ensures that partial blocks grow in length by one at a time, every length $L \leq |A|$ is reached along the way.
\end{proof}
As a side note, if step \ref{step:pb_message} (direct messaging) is removed from the protocol, the system will reach a partial block length of the maximum degree plus one without any deadlocks:
\begin{theorem}\label{theorem:nodeadlock_leqL}
Given a set of agents $A$, the connectivity network $G$, and $L=\Delta(G)+1$, there will never be a deadlock. Formally, $L=\Delta(G)+1 \rightarrow \nexists i \in A : D(i)$.
\end{theorem}
\begin{proof}
Let's assume that agent $i$ has the maximum degree. According to Lemma \ref{lemma:have_eachother}, $pb_i$ must contain all of its neighbors' IDs and $i$ itself before a deadlock can occur. Therefore, there cannot be a deadlock, because $|pb_i|=\Delta(G)+1=L$.
\end{proof}
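A toy simulation of the gossip dynamics illustrates these results. The sketch below models only steps C.1, C.2, and D.3.1 (self-insertion, broadcast, and adoption of strictly longer partial blocks), without direct messaging; all names and simplifications are our own assumptions:

```python
def simulate(edges, n_agents, L, max_epochs=100):
    """Toy gossip simulation: each epoch, every agent adds its own ID to its
    partial block (step C.1) and then adopts any neighbor's partial block
    that is strictly longer once its own record is excluded (step D.3.1).
    Returns the first epoch at which some partial block reaches length L,
    or None on deadlock/timeout."""
    nbrs = {i: set() for i in range(n_agents)}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    pb = {i: set() for i in range(n_agents)}
    for epoch in range(1, max_epochs + 1):
        for i in pb:
            pb[i].add(i)                                 # step C.1
        snap = {i: frozenset(s) for i, s in pb.items()}  # broadcast (C.2)
        for i in pb:
            for j in nbrs[i]:
                if len(snap[j] - {i}) > len(pb[i] - {i}):
                    pb[i] = set(snap[j])                 # step D.3.1
        if any(len(s) >= L for s in pb.values()):
            return epoch
    return None
```

On a star or ring graph, a partial block of length $\Delta(G)+1$ is reached within a few epochs, consistent with Theorem \ref{theorem:nodeadlock_leqL}.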
\subsection{Peer Discovery}\label{subsec:peerdisc}
To broadcast the latest chain, an agent must know the IP addresses of the receiving agents. It is important to note that an agent does not need to broadcast to all other agents. Instead, an agent broadcasts to $b$ other agents, where $b$ is much smaller than the population size. In practice, $b$ can be on the order of tens or hundreds, where there is a trade-off between the rate at which information is shared across the network (iterations of $T$) and the amount of work that is put into each broadcast. Regarding the discovery and selection of peers, we suggest that Ethereum's p2p discovery protocol \cite{Discover45:online} be used, and that an agent should periodically draw new peers at random.
\subsection{Maintaining Software Versions}\label{subsec:branching}
As time goes on, the target application may receive software updates during its software life-cycle. Although the app's new behavior will be accepted as normal (due to the majority consensus), there may be other devices which have not yet been updated or may never be updated. To ensure that these outdated devices aren't `forced' to use an incompatible model, we suggest that the blockchain support branching. In this approach, the chain forms a version tree where devices with newer versions can `fork' off. To enable this, the following additions are made to the protocol: (1) the respective software version must be stored in the metadata of each block, (2) multiple partial blocks of different versions can be stored at the end of a chain, (3) if a partial block is completed but has a different version than the current branch, then a separate chain is `forked' from that point, and (4) agents always follow the longest chain matching their version.
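The version-tree rule above can be sketched with a toy data model (our own rendering, not the paper's implementation): blocks carry a version in their metadata, and an agent follows the longest chain whose tip matches its own version.

```python
from dataclasses import dataclass, field

# Illustrative block with a software version in its metadata.
@dataclass
class Block:
    version: str
    models: list = field(default_factory=list)

def best_chain(chains, my_version):
    """Follow the longest chain whose latest block matches our version."""
    compatible = [c for c in chains if c and c[-1].version == my_version]
    return max(compatible, key=len) if compatible else []

# A chain of v1.0 blocks, and a fork created by devices updated to v2.0.
v1 = [Block("1.0"), Block("1.0")]
v2 = v1[:1] + [Block("2.0")]
```

An outdated device keeps following the `1.0` branch, while updated devices fork off onto the `2.0` branch.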
\section{System Evaluation}\label{sec:eval}
In this section, we evaluate the proposed collaboration framework: the experiment testbed, parameters, results, and observations. A video demo of the framework is available online.\footnote{\textit{The short demo of the framework protecting 48 Pis running web servers can be found at \texttt{\url{https://youtu.be/T4t_SnTJV3w}}}}
\subsection{Experiment Setup}
Our experiments were composed of four aspects: the (1) test environment, (2) implementation, (3) target applications, and (4) attack scenarios. We will now discuss each of these aspects in detail.
\subsubsection{Test Environment}
We built a LAN which served as a simulation platform for emulating a distributed IoT environment (Fig. \ref{PiBoard}).
This network involves 48 Raspberry Pis connected together through a single large switch.
In our environment, each Raspberry Pi was equipped with additional boards (shields) and sensors. For example, the PiCamera and the Pibrella Board\footnote{\textit{Pibrella module can be found at \texttt{\url{www.pibrella.com}}}} which provides programmatic access to three LED lamps and a simple 8-bit PC speaker. For each experiment, a target application (IoT software) was loaded and executed on all of the devices, along with an agent.
The source code for the agent can be found on GitHub.\footnote{The agent's code from the experiment can be found at \texttt{https://git.io/vAIvd}}
\subsubsection{Agent Implementation}
To implement Algorithm \ref{alg:monitor} (monitor), we built the agent using OS and CPU features. Specifically, we used the performance counters API along with CoreSight (on ARM) and the Last Branch Record (on Intel). By using these libraries and features, we were able to track the application's control-flow in an asynchronous manner.
In our implementation, the kernel fills a large ring-buffer with observed jump and branch addresses.
When the OS scheduler switches to the agent, the agent iterates over the new entries in the buffer and updates $M^{(\ell)}$ accordingly. To improve performance further, the agent was written entirely in C++. However, the code was not optimized to its full potential.
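The monitoring loop can be sketched as follows (our own simplification of the C++ agent, assuming the kernel's ring buffer yields raw jump/branch addresses): each address is mapped to a fixed-size memory region, and region-to-region transitions are folded into the model $M^{(\ell)}$.

```python
from collections import defaultdict

# Region size used in our experiments (in bytes).
REGION_SIZE = 256

def update_model(M, jump_addresses, region_size=REGION_SIZE):
    """Fold a batch of observed jump addresses into the transition counts M."""
    # Map each raw address to its memory region (the EMM's state).
    regions = [addr // region_size for addr in jump_addresses]
    # Count consecutive region-to-region transitions.
    for src, dst in zip(regions, regions[1:]):
        M[(src, dst)] += 1
    return M

# Example: four observed jumps produce three region transitions.
M = update_model(defaultdict(int), [0x100, 0x1F0, 0x300, 0x110])
```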
The underlying network protocol we used in our experiments was UDP Multicast, though in practice, the Bitcoin or Ethereum P2P neighbor discovery algorithm should be used. The following lists the parameters used in all experiments, unless noted otherwise:
\begin{itemize}
\item \textbf{\boldmath$T$ (Processing interval)}: one minute
\item \textbf{\boldmath$L$ (Block size)}: $20$
\item \textbf{\boldmath$p_a$ (Percent of reporting devices required to include a transition)}: $25\%$
\item \textbf{\boldmath$\alpha$ (Verification distance)}: $0.05$
\item \textbf{\boldmath$p_{thr}$ (Anomaly score threshold)}: $0.012$
\item \textbf{\boldmath$k$ (Probability averaging window)}: $10,000$
\item \textbf{Region size}: $256$ Bytes
\end{itemize}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figure/PiBoard.png}
\caption{IoT simulation testbed consisting of 48 Raspberry Pis}
\label{PiBoard}
\end{figure*}
\subsubsection{Target Applications}
Every application has a different control-flow, and reacts differently to environmental stimuli. Therefore, we evaluated the framework using several different target applications:
\begin{description}
\item[Smart Light] Smart lights can perform custom functionalities programmed by the user. By evaluating the framework on a smart light, we are able to determine whether each agent is able to learn its functionality, and how the propagation of these behaviors affects other agents.
To implement the smart light's software, we combined several open-source projects \cite{mongoose, pibrellaGitHub, WiringPi}. The final application contained a vulnerable web-based interface for controlling the light's features.
\item[Smart Camera] Smart cameras often consume a significant amount of resources to perform real-time image processing. By monitoring such an application, we are able to evaluate how well the framework performs in resource heavy applications. The application which we used monitors a video feed and sends an alert when it detects movement. The alert is sent to a control server and is accompanied by a short video or image of the event. A user interfaces with the camera via the server, and can either (1) change its configuration or (2) view the camera's current frame. We included a null dereference vulnerability in the final application's communication process with the control server.
\item[Router] Routers are widespread and provide Internet facing IPs (i.e., are not hidden behind a NAT). They are a good example of vulnerable IoTs which have been the target of many recent attacks (e.g., Mirai and the VPNFilter malware\footnote{\texttt{\url{https://www.symantec.com/blogs/threat-intelligence/vpnfilter-iot-malware}}}). By evaluating the framework on a router's software, we are able to consider how well our agent handles complex control-flows. Routers typically have a Linux kernel, and provide their functionality via several different applications. In our evaluation, we chose to target the Hostapd (host access point daemon) application. Hostapd is a user-space software access point capable of turning normal network interface cards into access points and authentication servers. We used version 2.6 of Hostapd, which is vulnerable to a known replay attack.\footnote{The code is available at \texttt{\url{https://github.com/vanhoefm/krackattacks-scripts}}}
\end{description}
\subsubsection{Attack Scenarios}
To understand the framework's detection capabilities, we evaluated how well the agents can detect the exploitation of different vulnerabilities and the execution of malicious code:
\begin{description}
\item[Buffer Overflow] When writing information into a buffer, without proper boundary checks, it is possible to write more data than the buffer's size.
When this occurs, the data overflows and overwrites the code and variables in memory.
If executed correctly, a buffer overflow can be used to alter a program's code and control-flow.
This situation is dangerous because crafted input data can contain machine instructions, thus causing the program to execute arbitrary code in the software's context \cite{deckard2005buffer}. In this scenario, we (1) exploit a buffer overflow vulnerability in the application, (2) covertly have the app behave like a bot, and (3) preserve the application's original behavior. The bot attempted to connect with a C\&C server once every minute.
\item[Code-Reuse] Instead of injecting new code into the program's memory layout, a code-reuse attack \cite{prandini2012return,elreturn,bletsch2011jump} uses the existing code of the program to create a new logic, mostly by performing jumps to unusual places in the code. For example, jumping to the middle of functions or jumping multiple times to different instructions which perform the desired logic.
These attacks have been proven to be, in many cases, Turing complete \cite{tran2011expressiveness}. This means that an attacker can potentially cause a typical program to execute any desired logic.
A common approach is called ``return-to-libc'' \cite{elreturn} which reuses code in the libc library to execute the desired code. More advanced approaches are to use the ROP (Return-oriented programming \cite{prandini2012return}) and JOP (Jump-oriented programming \cite{bletsch2011jump}) techniques.
In this scenario, we perform a code-reuse attack on the target application in order to get the app to send sensitive data to a remote server.
\item[Replay Attack (Key Reinstallation Attack)] The Key Reinstallation Attack is a type of replay attack in which one or more of the protocol's messages are sent again at a different, unexpected point of the protocol. The Key Reinstallation Attack tries to leak information about encrypted traffic by changing the application state in the middle of the encryption process. Unlike the previous two attacks, this attack does not execute new arbitrary logic within the application's memory space, but rather abuses the control-flow to reveal encryption secrets which can be used to decrypt a user's traffic off-site.
\end{description}
\subsubsection{The Experiments}
To evaluate the framework's anomaly detection capabilities with different applications and attacks, we used several different experiment setups summarized in Table \ref{ExperimentsSummery}. We will refer to these experiments using their short-form notation from the table.
Unless stated otherwise, for every experiment, the target application and a local agent were launched on 48 Raspberry Pis simultaneously. After two hours, we paused the training and began to record the performance for another two hours. Finally, at the start of the fifth hour, the specified attack was executed. Although it was not part of the protocol, we paused the training in order to observe the performance of a collaborative model which had been trained for exactly two hours. It is critical that the target application not remain dormant, but rather be exposed to normal interactions like an IoT device. Therefore, to successfully simulate a real environment, during all of our experiments we legitimately interacted with the target application manually, using random fuzzing and previously recorded data on the application's input channels. For example, we used a prerecorded video stream in the experiments involving the smart camera.
\begin{table}[!t]
\begin{center}
\caption{Summary of Experiment Setups}
\begin{tabular}{c}
\includegraphics[width=.6\columnwidth]{tab_case.pdf}
\end{tabular}
\label{ExperimentsSummery}
\end{center}
\end{table}
\subsection{Experiment Results}
The contributions of this paper are (1) a method for detecting abnormal control-flows, (2) the efficiency of this method, (3) a method for performing collaborative training, and (4) its robustness in the presence of an adversary. We will now present our results accordingly.
\subsubsection{Anomaly Detection}\label{subsubsec:anom}
We will now evaluate the use of EMMs over regions of an application's memory space as a method for anomaly detection, on a \textit{single} device.
The code injection attacks (buffer-overflow and code-reuse) were detected entirely with no false positives. Fig. \ref{EMM_ON_REPLAY:a} plots the EMM probability scores $\overline{Pr}(Q_{k})$ for Exp2.
The Key Reinstallation Attack (Exp5) was more difficult to detect (Fig. \ref{EMM_ON_REPLAY:b}). This is because the attack does not inject its own code, and its impact on the control-flow is very brief (a single step in the protocol). However, the attack still influences the probability scores, and we are able to detect it when $k$ is increased. Furthermore, when the train time is increased, the performance increases as well. This is evident in the collaborative training setting, where two hours of training on 48 devices is equivalent to four days of training. In this case, the EMM model yields perfect detection with no false positives.
In summary, given enough train time, our proposed anomaly detection method is capable of detecting arbitrary code injection attacks and other kinds of exploits (such as protocol exploits).
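The scoring described above (an EMM probability score averaged over a window of $k$ observations, alarmed against the threshold $p_{thr}$) can be sketched as follows; the function name and streaming interface are ours:

```python
# Sketch of anomaly scoring: average the model's transition probabilities
# over the last k observations and alert when the average drops below p_thr.
def score_stream(probs, k, p_thr):
    """Return a list of (avg_prob, is_alert) for each step of a probability stream."""
    out = []
    for i in range(len(probs)):
        window = probs[max(0, i - k + 1): i + 1]
        avg = sum(window) / len(window)
        out.append((avg, avg < p_thr))
    return out

# Two normal observations followed by two highly improbable transitions.
scores = score_stream([0.9, 0.8, 0.001, 0.002], k=2, p_thr=0.012)
```

With $k=2$, the single low-probability transition at step three is smoothed out, but the sustained drop at step four triggers an alert.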
\begin{figure}[p]
\centering
\includegraphics[width=.8\columnwidth]{Figure/EMM_Eval_Camera.pdf}
\caption{The probability scores of $M^{(\ell)}$ from Exp2 after two hours of training, where the red area marks the attack period.}
\label{EMM_ON_REPLAY:a}
\vspace{.3cm}
\centering
\includegraphics[width=.8\columnwidth]{Figure/EMM_Replay.pdf}
\caption{The probability scores of $M^{(\ell)}$ from Exp5 after two hours of training, where the red area marks the attack period.}
\label{EMM_ON_REPLAY:b}
\end{figure}
\subsubsection{Collaboration Training}
In Section \ref{subsubsec:anom}, we showed how EMMs can detect a variety of attacks on IoT devices, given enough train time. However, an anomaly detection model is vulnerable during its \textit{initial} train time ($T_{grace}$). Furthermore, a single device may not experience all possible behaviors in the allotted time. In contrast, collaborative training, using multiple IoT devices, can produce a better-performing model in a shorter period of time.
\begin{description}
\item[Model Performance] By performing collaborative learning, the final model contains the collective experiences from many different devices. As a result, each device can better differentiate between rare-benign behaviors and malicious behaviors. Fig. \ref{Collaborative_Training_Exp:a} shows that the same amount of train time distributed over 48 devices produces a model which can detect an attack sooner than when simply performing all of the train time on a single device. The reason for this is that the distributed model captures a more diverse set of behaviors, which helps it differentiate better between malicious and benign behavior.
\item[Model Train Time] Fig. \ref{Collaborative_Training_Exp:b} shows that several models trained in parallel can produce a stronger model than a single model (Fig. \ref{Collaborative_Training_Exp:a}) in the same amount of time. Thus, we see that $M^{(g)}$ converges at a rate which is inverse to the size of the network. As a result, a large IoT deployment will obtain a strong model quickly and is much less likely to fall victim to an adversarial attack.
\end{description}
\begin{table}[h]
\begin{center}
\caption{False Positive Rates with Collaborative Learning: All Attack Scenarios}
\begin{tabular}{c}
\includegraphics[width=.6\columnwidth]{tab_fpr2.pdf}
\end{tabular}
\label{Collaborative_Training_ExpSummery}
\end{center}
\end{table}
\begin{figure}[p]
\centering
\includegraphics[width=.8\columnwidth]{Figure/CombinedModels1a.pdf}
\caption{The probability scores of $M^{(\ell)}$ with 48 minutes of training, and $M^{(g)}$ with one minute of training across 48 devices (Exp1).}
\label{Collaborative_Training_Exp:a}
\vspace{.3cm}
\centering
\includegraphics[width=.8\columnwidth]{Figure/CombinedModels2a.pdf}
\caption{The probability scores of $M^{(g)}$ with increasingly larger sets of models (devices) in the case of Exp1.}
\label{Collaborative_Training_Exp:b}
\end{figure}
In Table \ref{Collaborative_Training_ExpSummery}, we present the false positive rates (false alarm rates) of the framework for various numbers of devices and amounts of train time. The table shows that just 48 devices training for two hours (four days of collective experience) is enough to mitigate the false alarms. For the code-reuse and buffer-overflow attacks, there were no false negatives. In the replay attack (Key Reinstallation) there were a few false negatives; however, since the attacker sends a malformed packet multiple times, we ultimately detect the attack.
\subsubsection{Resilience Against Adversarial Attacks}
Since agents are constantly learning (even after $T_{grace}$), it is important that the framework be resilient against accidentally learning malicious behaviors as benign (i.e., poisoning). The acceptance criteria of a partial block ensures that these behaviors are not incorporated into the global models.
If some of the IoT devices are infected after the publication of the first block, we expect the collaborative $M^{(g)}$ to detect the malware and not learn from it by accident. However, suppose that some of the IoT devices were infected prior to the publication of the first block and the elapse of $T_{grace}$. When the infected agents add their poisoned model to the \textit{partial-block}, other agents will reject their \textit{partial-blocks} because the \textit{model-attestation} step will reveal that the potential new $M^{(g)}$ is very different from their own local models $M^{(\ell)}$. Fig. \ref{Linear_Distance} visualizes this concept as heat maps, where the intensity of index $(i,j)$ represents the linear distance between the probabilities of transition $M^{(\ell)}_{ij}$ and $M^{*}_{ij}$, where $M^{*}$ is a combined model from a \textit{partial-block}. In \ref{Linear_Distance:a}, the \textit{partial-block} has $10$ clean models, and in \ref{Linear_Distance:b}, the \textit{partial-block} has $10$ poisoned models. When an agent performs \textit{model-attestation}, the agent will find that $d(M^{(\ell)},M^*)>\alpha$, and reject the \textit{partial-block}. Assuming $L$ is large enough (e.g., $L=10,000$), and that only a minority of agents are infected, we expect that a poisoned \textit{partial-block} will never be closed before a clean one achieves consensus.
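The attestation check can be sketched as follows, assuming models are dictionaries mapping transitions to probabilities and taking the mean absolute difference as the linear distance (our own rendering; names are illustrative):

```python
# Hedged sketch of model-attestation: compare the local model to the
# combined model from a partial block and reject when the mean linear
# distance exceeds the verification distance alpha.
def linear_distance(local, combined):
    """Mean absolute difference over the union of observed transitions."""
    keys = set(local) | set(combined)
    return sum(abs(local.get(t, 0.0) - combined.get(t, 0.0)) for t in keys) / len(keys)

def attest(local, combined, alpha=0.05):
    """Accept the partial block only if the combined model is close enough."""
    return linear_distance(local, combined) <= alpha

clean    = {("a", "b"): 0.5, ("b", "a"): 0.5}
poisoned = {("a", "b"): 0.1, ("b", "c"): 0.9}
```

A clean combined model passes attestation, while the poisoned one is far beyond $\alpha=0.05$ and is rejected.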
Let's say that $\alpha$ was set too high, or that the malicious jump sequences were very similar to the legitimate ones. In this case, the \textit{model-attestation} step will accept the \textit{partial-block}, but the \textit{abnormality-filtration} step will remove the malicious behaviors, assuming that less than $p_a$ percent of the models in the \textit{partial-block} contain the malicious transitions. Fig. \ref{Adversarial_evaluation} shows that with $L=20$ and $p_a=75\%$, an attacker must poison $15/20$ models (during $T_{grace}$) in order to evade the detection of the next $M^{(g)}$. This is very difficult for the attacker to achieve because (1) he must infect the IoT devices without detection, (2) there is a chance that not all infected models will appear together in a \textit{partial-block} (e.g., with 48 or 1,000 devices), and (3) if he does not succeed before the first block is published, then it is likely that the new $M^{(g)}$, accepted among \textit{all} agents, will detect the malware.
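The abnormality-filtration step can be sketched as a simple majority filter (an illustrative simplification; here models are represented as sets of observed transitions and `p_a` is a fraction):

```python
# Illustrative sketch of abnormality-filtration: keep a transition in the
# combined model only if at least a p_a fraction of the models in the
# partial block observed it.
def filter_transitions(models, p_a):
    """Keep transitions reported by >= p_a fraction of the models."""
    counts = {}
    for m in models:
        for t in m:
            counts[t] = counts.get(t, 0) + 1
    needed = p_a * len(models)
    return {t for t, c in counts.items() if c >= needed}

# Four models; "xx" is a malicious transition seen by only one of them.
models = [{"ab", "bc"}, {"ab", "bc"}, {"ab", "xx"}, {"ab"}]
kept = filter_transitions(models, p_a=0.5)
```

With $p_a=50\%$, the benign transitions `ab` and `bc` survive, while the malicious `xx` (seen by only one model) is filtered out.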
Another possibility is that the attacker may try to sabotage the agent via the target application. However, accessing the agent's memory from the monitored application would require additional exploits from the malware. Ultimately, the agent will detect either the initial intrusion, or the exploits used to gain access to the agent's memory space.
Another insight is that when a minority of models are infected, yet the agent's \textit{model-attestation} accepts the \textit{partial-block}, the \textit{abnormality-filtration} removes the malicious transitions but keeps the benign ones (those observed by $p_a$ percent of the models). As a result, healthy information is retained from the poisoned models, while the abnormalities are filtered out.
\begin{figure}[p]
\centering{
\subfloat[Linear distance between a benign model and a clean combined model]
{\includegraphics[width=.8\columnwidth]{Figure/Exp2a_1.pdf}
\label{Linear_Distance:a}
}\quad
\subfloat[Linear distance between a benign model and a poisoned combined model.]
{\includegraphics[width=.8\columnwidth]{Figure/Exp2b.pdf}
\label{Linear_Distance:b}
}
}
\caption{Heat maps of the linear distance between models in Exp4.}
\label{Linear_Distance}
\end{figure}
\begin{figure}[h]
\centering{
\includegraphics[width=.8\columnwidth]{Figure/Exp3_1.pdf}
}
\vspace{-0.5cm}
\caption{The combined model normalized probability generated from the latest block $B$, where various numbers of the models in $B$ have been infected (attacked).}
\label{Adversarial_evaluation}
\end{figure}
\subsubsection{Baseline Comparisons}
To understand the capabilities of the proposed collaborative framework, we compare the selected anomaly detection method (EMM over memory regions) and the entire host-based intrusion detection system (the blockchain framework) to their respective baselines.
To validate the use of the EMM, we compare its performance to two well-known sequence-based anomaly detection algorithms: t-STIDE and PST (see Section \ref{sec:relworks}). For the PST, we used a sequence length of 10. We also compare the EMM to the heatmap method proposed in \cite{7167219}. In these experiments, we performed the buffer overflow attack on the Smart Light (Exp1), the code reuse attack on the Smart Camera (Exp4), and the replay attack on the router (Exp5). All of the algorithms were given the same 30 min of normal training data and then were tested on 20 min of normal data followed by 10 min of attacks.
To measure the performance we compute the area under the curve (AUC). The AUC is computed by plotting the true positive and false positive rates (TPR and FPR) for every possible threshold, and then by computing the area under the resulting curve. Intuitively, it provides a single measure for how well a classifier performs. A value of `1' indicates a perfect predictor and a value of `0.5' indicates that the predictor is guessing labels at random. Since the AUC measure ignores precision it is slightly misleading in the case of anomaly detection. Therefore, we also compute the average precision-recall curve (avPRC) which is computed in a similar manner.
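The AUC described above can be computed without plotting, via its equivalence to the Mann-Whitney statistic: the AUC equals the probability that a randomly chosen attack sample scores higher than a randomly chosen benign one. A minimal, dependency-free sketch (our own helper, not the evaluation code):

```python
# Dependency-free AUC via the Mann-Whitney statistic: count how often an
# attack score beats a benign score (ties count as half).
def auc(benign_scores, attack_scores):
    wins = ties = 0
    for a in attack_scores:
        for b in benign_scores:
            if a > b:
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(attack_scores) * len(benign_scores))

# A perfect separator scores 1.0; indistinguishable scores give 0.5.
perfect = auc(benign_scores=[0.1, 0.2], attack_scores=[0.8, 0.9])
chance = auc(benign_scores=[0.5, 0.5], attack_scores=[0.5, 0.5])
```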
In Fig. \ref{fig:aucprc} we present the results of this baseline test. We found that although t-STIDE sometimes outperformed the MC, the MC consistently provided the best performance across all target applications. This justifies our use of the EMM in our system. We also note that the PST took several hours to train on a strong PC, and is therefore not practical to train on an IoT device.
\begin{figure}[h]
\centering
\includegraphics[height=.25\textheight]{Figure/auc3.pdf} \includegraphics[height=.25\textheight]{Figure/prc3.pdf}
\caption{The AUC (left) and the PRC (right) of each algorithms for each attack/target app.}
\label{fig:aucprc}
\end{figure}
In Fig. \ref{fig:score_time} we plot the anomaly scores (predicted probabilities) of the algorithms over time during the attack phase. From the figure, it is clear why the MC consistently had a high avPRC since there is a clear separation between the anomalous scores and benign scores. This is important when deciding on a threshold. In practice, the threshold is determined based on a statistical measure given the benign data distribution.
\begin{sidewaysfigure}[p]
\centering
\includegraphics[width=\textwidth]{Figure/baselinesa.png}
\caption{The anomaly scores (predicted probabilities) of the algorithms over time during the attack phase. The actual attack periods are marked in red.}
\label{fig:score_time}
\end{sidewaysfigure}
To validate the use of the entire framework, we evaluated our host-based intrusion detection system (H-IDS) in comparison to others. Since we are targeting IoT devices, we selected H-IDSs which are well-known, operate on Linux, and can be compiled to run on an ARM processor: OSSEC, SAGAN, Samhain, and ClamAV. OSSEC is an open-source system which performs integrity checking, log analysis, rootkit detection, time-based alerting, and active response. We loaded OSSEC with all its default detection rules. SAGAN is an open-source multi-threaded system which performs real-time log analysis with a correlation engine. Sagan's structure and rules work similarly to the Sourcefire Snort IDS/IPS, and we loaded it with all available community rules.
Table \ref{tab:hids} compares these H-IDSs to ours in terms of the content being monitored and the intrusion detection mechanism used. Samhain is an integrity checker and host intrusion detection system. Finally, ClamAV is a free, open-source antivirus which we loaded with all current virus signatures.
Once we loaded all four H-IDSs onto a Raspberry Pi, we launched each target application and performed the same attacks described above. We found that none of the four H-IDSs reported any alerts. This makes sense because these systems do not perform dynamic analysis on the target application's control flow. Therefore, the buffer overflow, code reuse, and replay attacks evaded detection.
\begin{table}[!t]
\begin{center}
\caption{The Host-based Intrusion Detection Systems compared to Ours}
\vspace{1em}
\begin{tabular}{c}
\includegraphics[width=.5\columnwidth]{Figure/hids.pdf}
\end{tabular}
\label{tab:hids}
\end{center}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=.9\textwidth]{Figure/cpu_mem_time.pdf}
\includegraphics[width=0.48\textwidth]{Figure/cpu.pdf}
\includegraphics[width=0.48\textwidth]{Figure/mem.pdf}
\caption{The resource utilization of the agent on a 500 MHz CPU. Top: Resource utilization over the first 15 minutes. Bottom: Resource utilization expressed as density plots.}
\vspace{-0.3cm}
\label{fig:benchmark}
\end{figure}
\subsection{Complexity Analysis \& Benchmark}\label{sec:complexity}
The time complexity of an agent can be broken down according to the three parallel processes in Fig. \ref{LLD}. The Gather Intelligence process periodically receives a ring buffer from the kernel with the last $n$ jump operations, checks for anomalies, and updates the EMM. Therefore, its complexity is $O(n)$. If an averaging window is used over the anomaly scores, the complexity is $O(n+wn)$ where $w$ is the window size. However, $w \ll n$ in practice, so we can consider the complexity to remain $O(n)$.
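The $O(n+wn)$ term can in fact be reduced to $O(n)$ by maintaining a running sum over a deque instead of rescanning the window (an optimization note of ours, not necessarily how the agent's C++ code is written):

```python
from collections import deque

# Rolling mean with O(1) updates: a deque plus a running sum replaces
# rescanning the w entries of the averaging window on every step.
class RollingMean:
    def __init__(self, w):
        self.buf, self.w, self.total = deque(), w, 0.0

    def push(self, x):
        self.buf.append(x)
        self.total += x
        if len(self.buf) > self.w:
            self.total -= self.buf.popleft()  # evict the oldest score
        return self.total / len(self.buf)

rm = RollingMean(3)
means = [rm.push(x) for x in [1.0, 2.0, 3.0, 4.0]]
```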
When the Retrieve Intelligence process receives a longer chain than the local one, it will check the legitimacy of the last block by validating its signatures, and then possibly validate the signatures in the partial block as well. Therefore, in the worst case, the agent will perform $2L-1$ signature checks. Although this may take a second to process, it will not affect the system since at most $b$ broadcasts will be accepted by the agent during each interval $T$, where $T$ is on the order of minutes or greater.
Finally, the Share Intelligence process wakes up and sends the local chain to $b$ other agents at random. Although the p2p discovery protocol and network transfer may take some time, they have a negligible effect on the CPU.
We benchmarked the agent's CPU and memory utilization. The benchmark was performed on a Linux embedded device with a single ARM Cortex CPU clocked at 500 MHz, since this CPU configuration is common among IoT devices \cite{ARM}. The test was run for one hour in the presence of 48 other agents having the same protocol configurations used in the evaluations. The target app was a web facing log server with a known CVE. The results can be found in Fig. \ref{fig:benchmark}. The results show that the resource consumption of the agent is negligible, using only 1\% of the CPU on average and 4MB of RAM.
\subsection{Blockchain Simulator}\label{subsec:simulator}
To help others reproduce our work and understand how the blockchain protocol works, we have \href{https://drive.google.com/drive/folders/15gLytEJyQyYCmhB-EZSkES77KsuCW0hw?usp=sharing}{published a discrete event simulator (DES)} of the protocol written in Python.\footnote{\texttt{https://github.com/ymirsky/CIoTA-Sim}} The DES is object oriented and creates an instance of each agent to help users follow the protocol logic and the propagation of the chains. The DES only simulates the high-level protocol logic (e.g., Section \ref{subsec:deadlocks}), and not the model training, \textit{model-attestation}, or combining. For code on model management, please see our other repository.\footnote{\texttt{https://git.io/vAIvd}}
The user selects the number of agents, $L$, $T$, the number of blocks to close, and the connectivity between the agents. The connectivity can be set to fully connected or random: the Barabasi-Albert algorithm (preferential attachment) or Watts-Strogatz (small world attachment). The DES queue then manages the agents' information sharing (when $T$ elapses), where a small amount of noise is added to the event times.
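The simulator's event loop can be sketched as follows (a toy version with identifiers of our choosing; the published DES is more elaborate): each agent schedules its next broadcast roughly $T$ seconds ahead, with a little noise, and a heap-ordered queue dispatches events in time order.

```python
import heapq
import random

def simulate(num_agents, T, steps, seed=0):
    """Run a toy discrete event simulation of agent broadcasts."""
    rng = random.Random(seed)
    # Seed each agent with a random first broadcast time in [0, T).
    events = [(rng.uniform(0, T), i) for i in range(num_agents)]
    heapq.heapify(events)
    order = []
    for _ in range(steps):
        t, agent = heapq.heappop(events)
        order.append((t, agent))  # the agent broadcasts its chain now
        # Reschedule T seconds later, plus a small amount of noise.
        heapq.heappush(events, (t + T + rng.uniform(0, 0.1), agent))
    return order

log = simulate(num_agents=5, T=10.0, steps=20)
```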
For each type of graph, we ran the simulator 100k times with 1,000 agents and set $L=800$. For the Barabasi-Albert generator we set attachment to 1, and for the Watts-Strogatz generator we set the neighbors to 5 with a probability of 0.1. For each trial we generated a new random network.
Table \ref{tab:sim} presents the agent connectivity (node degree) of the agents in the simulations and the number of times (epochs) an agent executed step \ref{step:broadcast} of the protocol until a block was closed. Fig. \ref{fig:sim} plots the distribution of the epoch counts over 100k trials.
The results show that blocks are closed faster with better connectivity between agents (larger node degrees). A fully connected network closes a block in 1 epoch and a sparsely connected network (Barabasi-Albert) can take up to 800 epochs (2.2 hours with $T=10$ seconds).
\begin{table}[!t]
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}lccccc|cl@{}}
& \multicolumn{5}{c|}{Node Degree} & \multicolumn{2}{c}{\#Epochs} \\
Graph Generator & Min & Max & Median & \textbf{Mean} & Std. & Mean & Std. \\ \midrule\midrule
Complete & 999 & 999 & 999 & \textbf{999} & 0.00 & 1 & 0 \\
\textit{agents connected to all agents} &&&&&&&\\ \midrule
Watts-Strogatz & 6 & 10 & 6 &\textbf{6.59} & 0.74 & 142.80 & 3.14 \\
\textit{small world attachment} &&&&&&&\\ \midrule
Barabasi-Albert & 1 & 99 & 1 & \textbf{1.99} & 3.56 & 800.46 & 3.16 \\
\textit{preferential attachment} &&&&&&&\\
\bottomrule\bottomrule
\end{tabular}
}
\caption{The network generators' node degrees (number of neighbors per agent), and the number of epochs ($T$) elapsed until a block was completed.}
\label{tab:sim}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{Figure/smallworld_1000.pdf}
\hspace{-.7em}
\includegraphics[width=0.5\textwidth]{Figure/barabasi_1000.pdf}
\caption{The number of epochs (times $T$ elapsed) until a block was completed, sampled 100k times from each type of random network (containing 1,000 agents).}\label{fig:sim}
\end{figure}
\vspace{1em}
\section{Security Analysis}\label{sec:security}
In this section we will discuss the security coverage of the agents and potential attacks against the framework.
\subsection{Agent Coverage}
It is not necessary to have an agent monitor every application on an IoT device because the most common attack vector on the IoT is via the devices' Internet-facing applications. Therefore, to provide maximal coverage, we recommend that an agent protect all such applications (e.g., web servers and telnet daemons).
The agent can detect the exploitation of software vulnerabilities, misuse, and denial of service attacks if the attacks result in an abnormal control-flow, or an irregular read/write operation in memory:
\begin{description}
\item[Confidentiality] There are many ways in which an attacker can violate the confidentiality of the device. Our agent can only detect these attacks if they cause an irregular control flow. For example, if the attacker performs an algorithm downgrade attack on an encrypted channel, as in the cases of BEAST, POODLE, and KRACK \cite{vanhoef2017key,moller2014poodle,sarkar2013attacks}, then the protocol behaves differently than usual (as shown in our evaluation). Another example is where the attacker pulls data from a database via an SQL injection. In this case, the interpreter will perform two queries back-to-back, or it may even perform a query that is never normally performed (e.g., drop table). Lastly, the attacker may perform a buffer overflow to reveal data in the server, as was done in Heartbleed to obtain private encryption keys \cite{durumeric2014matter}. When a buffer overflow occurs, the program counter moves to a region of memory via a transition that it never normally performs.
\item[Integrity] In some cases, an attacker may want to compromise the integrity of an IoT device by executing custom code or by abusing existing code (code reuse). In doing so, the attacker could install malware to recruit the device into a botnet, perform an act of ransomware, or commit some other malicious act. Many of these code executions are achieved by altering the memory control flow, for example, via buffer-overflow attacks through web inputs and irregular interactions with services (e.g., using ssh/telnet to download a payload \cite{antonakakis2017understanding}). Moreover, in some cases the agent can discover when an attacker is performing reconnaissance to reveal potential vulnerabilities. One approach is to brute-force well-known credentials, and another is to execute a variety of crafted malformed inputs until one succeeds (e.g., directory traversal attacks). In both cases, the operations will generate irregular loops in memory.
\item[Availability] The goal of an attacker may be to disable the device's web server in a denial of service attack (e.g., if the device is a surveillance camera). Our agent can detect and alert for some of these attacks. For example, the agent can detect when a malformed packet causes a server to halt, and when a SYN flood or SSL renegotiation attack occurs (due to the looping). However, the agent cannot detect a loss in connectivity if the channel is jammed or overloaded, or if a nearby router has been compromised.
\end{description}
Please refer to Fig. \ref{fig:cwe} in the appendix for a list of high-level Common Weakness Enumerations (CWE) which the agent covers. We note that the agent's performance is not perfect, and the detection of some of these attacks may require a smaller region (state) size to provide the right granularity. In future work we plan to address this issue by letting the agent choose the state size based on the amount of code loaded into memory.
\subsection{Adversarial Attacks}\label{subec:adversarial}
Although the framework aims to protect IoT devices from attackers, to ensure reliability, we must consider how an attacker may target our framework. Once again, we will refer to the `CIA' of security:
\begin{description}
\item[Confidentiality] One potential attack against the system is to intercept and extract $M^{(\ell)}$ for a target device. In doing so, the attacker may be able to violate the user's privacy by inferring the user's \textit{high-level} interactions with the application. This threat only applies during the creation of the first block, since the shared models have not yet been generalized (combined) with others. To mitigate this threat, the manufacturer can initiate the blockchain with an initial model, similar to how stream ciphers use IVs (see \textit{cold start} later in section \ref{sec:discussion}). It is also a good idea to choose a relatively large state size for better obscurity. Another concern is if the user installs 3rd-party apps and the manufacturer automatically assigns an agent to each new app. In this case the attacker can infer which apps the user is using, and when, by monitoring the broadcasted chains. To mitigate this threat and the other cases, all p2p communications should be encrypted using SSL.
\item[Integrity] If an attacker corrupts (poisons) the model in training, he can intentionally cause high false alarm rates or evade detection. There are two ways in which an attacker can poison the model. The first way is to install malware on the majority of the population before the first block is closed (a supply chain attack or a regular infection). Infecting the majority of devices is very hard to accomplish because (1) it involves a short time window, (2) it requires infection without detection,\footnote{The detection phase begins after $T_{grace}$ and not after completing the first block.} and (3) a large number ($p_a\%$) of devices need to be infected. While it is not impossible, it is considerably more difficult than exploiting a single device. To minimize the attack window and bandwidth of the system, one may change the protocol so that $T$ is monotonically increasing with the length of the local chain. For example, a linear function can be used, or the exponential decay function $T(m)=(1-2^{-\lambda m})\cdot(t_{\text{max}}-t_{\text{min}})+t_{\text{min}}$, where $m$ is the length of the local chain, $\lambda$ is the half-life rate, $t_{\text{min}}$ is the shortest interval, and $t_{\text{max}}$ is the longest.
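To make the schedule concrete, a minimal sketch of this decay function follows; the interval bounds (60 seconds and one hour) are illustrative assumptions rather than values prescribed by the protocol:

```python
def broadcast_interval(m, half_life=1.0, t_min=60.0, t_max=3600.0):
    """Exponential-decay schedule T(m) = (1 - 2^(-lambda*m)) * (t_max - t_min) + t_min.

    The interval starts at t_min for an empty local chain (m = 0), so blocks
    close quickly early on, and saturates toward t_max as the chain grows,
    shrinking the attacker's initial window while limiting later bandwidth.
    """
    return (1 - 2 ** (-half_life * m)) * (t_max - t_min) + t_min
```

With `half_life=1.0`, each new block covers half of the remaining range; for the assumed bounds, the interval starts at 60 s for an empty chain and grows toward one hour.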
Another way to corrupt the model is to (1) evade detection by compromising the device via an application which is not monitored, or via physical access to the device, then (2) achieve root privileges to compromise the agent, then (3) repeat this process until the majority of agents are under the attacker's control, and then (4) broadcast a compromised model in unison to all other devices. To mitigate this attack, users should consider which apps should be monitored and the physical security of their devices. Another option is to place the agent in the device's TrustZone (e.g., \cite{TrustZon14:online}).
We note that an attacker may attempt to avoid detection by crafting an exploit to follow a common flow through the application's memory. However, such an attack is very limited and difficult to design, since the operations are restricted to a normal jump sequence. Moreover, to initiate this flow, the attacker will need to initially override some instructions (e.g., via a buffer overflow), and as a result will likely trigger an alert.
\item[Availability] An attacker may attempt a denial of service (DoS) attack to overload the agent and thus disable the device, or to block an agent's access to new models by disrupting the agent's connectivity. In our protocol (section \ref{sec:ciota}), we took steps to ensure that an agent cannot be overloaded with broadcasted chains by limiting the processing rate to $b/T$ chains per second (section \ref{subsec:peerdisc}). However, care should be taken regarding the software used in the agent's server implementation to avoid flooding attacks, buffer overflows, and other attack vectors.
Moreover, the system is robust to network and hardware failures. This is because the system is distributed and agents only need to be connected to $b$ other random agents. Furthermore, collaboration is only necessary to accelerate the EMM model's initial training, and to handle future concept drifts (e.g., software updates). Therefore, during an outage, each agent will still continuously (1) execute the latest model to detect attacks on the target application, and (2) update the local model $N^{(\ell)}$ on sequences which are considered safe (above the probability threshold). In this case, each agent will still act as an efficient standalone host-based anomaly detection system, and continue to improve its model until it converges.
\end{description}
\section{Discussion}\label{sec:discussion}
In this section, we discuss the assumptions and design considerations of the framework.
\subsection{Assumptions}
The framework's primary goal is to autonomously learn more in less time.
To achieve this goal, we make the following assumptions.
\begin{description}
\item[Population size] \textit{There are enough participants with the same hardware model to support the system.} As noted earlier, a separate blockchain is maintained for each device model version. If the homogeneous population is not large enough, then consensus will never be reached. However, IoT products are often mass produced, and it is likely that there will be tens of thousands of identical devices deployed around the world at a given time. Once a block has been closed, future generations can benefit from it even if the population has decreased below the consensus threshold.
\item[More is better] \textit{Learning from more data will produce a better anomaly detection model.}
This is because more data captures a more complete view of the behaviors, and therefore, the trained model will have a lower false positive rate (FPR).
\item[Achieving consensus]\textit{Given an appropriate $\alpha$, the majority of participants will reach a consensus for $M^{(g)}$ among themselves.}
With a very large $\alpha$, the agents will surely achieve consensus. Although smaller values of $\alpha$ will improve the quality of the consensus, they will also increase the likelihood that \textit{partial-blocks} will be rejected, increasing the time it takes to complete a block (achieve consensus). If $\alpha$ is too low, consensus may never converge, and a collaboration will never occur. In our evaluation and proof of concept, we used empirical observations to select a constant value for $\alpha$.
However, as future research, a dynamic algorithm for selecting the appropriate value of $\alpha$ should be developed to optimize the quality-convergence trade-off.
\item[Benign majority] \textit{The majority of local models distributed among the agents are not poisoned.}
The blockchain is a peer-to-peer (P2P) protocol for a distributed (server-less) database that is managed by consensus of the network.
Therefore, the blockchain architecture can only work under the assumption that most of the participants are clean at the outset. In the case of our framework, this means that the majority of agents are not accidentally updating their local models with malicious behaviors.
\item[Benign start] \textit{An agent on a device monitors all applications (on separate chains) which are potential infection vectors from the Internet.} If the IoT device gets infected via an application which is not being monitored, then the agent cannot detect the threat. Therefore, in order to protect the device, all applications which can be exploited via the Internet should be monitored.
\end{description}
\subsection{Implementation}
The following are some discussion points which relate to the implementation of the framework.
\begin{description}
\item[Remediation Policies] While detection is a powerful tool for security, without acting on detected threats, detection is meaningless. When an agent detects an anomaly, the agent can (1) send an alert to a control server, (2) suspend the infected application via the kernel, (3) restart the infected application, or (4) perform a combination of these options. We note that the scope of this paper is detection, whereas remediation is a task-specific problem. For example, restarting the infected app may be acceptable for a smart air conditioner, but not for a surveillance camera (an implicit DoS attack). Therefore, the remediation policy should be chosen accordingly.
\item[Address Space Layout Randomization] Today, many operating systems use Address Space Layout Randomization (ASLR) \cite{shacham2004effectiveness} to prevent outside entities from knowing the memory layout of applications in execution. ASLR ensures that an identical application will have a different memory layout on each device, making it very hard for attackers to exploit memory corruption vulnerabilities. However, since ASLR is an internal state, each agent can map the memory layout of other agents' models back to its natural state.
Concretely, for each memory region captured by $N^{(\ell)}$, an agent will include a library identifier and the region's offset from the library's initial address for other agents to rearrange the model.
\item[Authentication and Identification] In the paper, the framework uses PKI (public-key infrastructure \cite{adams2005internet}) in order to prevent an attacker from creating or replaying fake chains or records. Conventional PKI uses certificate authority (CA) servers to sign, manage, verify and revoke public key certificates. The use of a CA may incur some delay in processing blocks. However, there is no immediate rush to process these blocks since $T$ is in minutes or hours, and the agent continues to perform real-time intrusion detection in the meantime. Regardless, using CAs introduces single points of failure since they are centralized. To overcome this, several researchers have developed PKI for IoT networks \cite{8537812,8611563,JIANG2019185,shetty2019blockchain}. For example, in \cite{8537812} the authors propose three different methods for distributing CAs over a blockchain using Ethereum smart contracts and even the Emercoin infrastructure.
Aside from PKI, another option is to use a shared secret among the agents and perform symmetric encryption. By doing so, no additional infrastructure is needed.
The risk of using symmetric encryption is that an attacker can obtain a device, extract the shared key, and compromise all agents. Therefore, to implement this approach, we recommend using an IoT device's TrustZone. A TrustZone is a secure enclave inside the device that has access to the untrusted territory within the device. In this setup, the agent and its symmetric encryption key (provided by the administrator) are located in the TrustZone. By doing so, the agent and its secrets are secured while avoiding the issues of PKI. However, if the TrustZone does not implement tamper protection, an attacker can physically interact with the device to extract the key.
\item[Memory Region Size (state size)] The memory region size is a parameter configured by the user. This parameter incurs a trade-off: a large region size yields a low false-positive rate but a high false-negative rate, while a small region size yields a high true-positive rate but a high false-alarm rate. Although we found that 256 bytes is a sufficient size, one should consider finding the smallest region size possible for their application. Another option is to use a small region size but increase $T_{grace}$.
\item[Cold-start] In this paper we presented how agents distributed across a set of IoT devices can build a detection mechanism with no prior knowledge. Although we expect the collaboration process to help agents learn rare yet benign behaviors, some benign behaviors may never cross the $p_a\%$ threshold. For example, the function which is executed by a smoke detector when smoke is detected. To ensure that these behaviors make it into $M^{(g)}$, a manufacturer can post $M^{(g)}$ as the starting point for the blockchain. In this case, $M^{(g)}$ is a model from a single device in a lab, which has been exposed to the rare functionalities. By bootstrapping the blockchain with an initial model, it is possible to maintain a secure population with very few devices since the initial window of exploitation is diminished.
We note that this can be implemented by reserving a special certificate for the manufacturer, who can sign this block. Doing so would also benefit the manufacturer, who can force model updates when software updates significantly affect the applications' behavior.
In the cases where a cold-start is necessary, we stress that the majority of the training should still occur on-site and not in the lab. The reasons are that (1) it costs less, (2) it is very difficult to simulate natural dynamic human interactions with the devices, (3) it is challenging to stimulate the sensors realistically, and (4) simulating every single possible control-flow (fuzzing) is not practical.
\item[Deployment] There are two ways the framework can be deployed: open and closed. In an open deployment, anyone can register an agent to the network. In this mode of operation, a central entity should be entrusted with registering new users in order to prevent an attacker from registering many accounts and overtaking the consensus. In a closed deployment, only invited agents can participate in the blockchain consensus. This deployment prevents unwanted entities from corrupting or eavesdropping on the blockchain.
\end{description}
\section{Conclusion}\label{sec:conclusion}
The number of IoT devices is steadily increasing. However, manufacturers seldom patch older models and unintentionally write vulnerable code. As a result, large numbers of IoT devices are being exploited on a daily basis. Due to the scale of the problem, a generic stand-alone method for monitoring and protecting these devices is necessary.
In this paper, we introduced a blockchain-based solution for autonomous collaborative anomaly detection among a large number of IoT devices.
To detect the exploitation of software on an IoT device, an agent is deployed on the device and efficiently models the software's normal control-flow for anomaly detection. However, the model training is vulnerable to adversarial attacks, and it is unlikely that a single IoT device will observe all normal behaviors (sensor readings, triggers, interactions, etc.). Therefore, the agent uses a blockchain protocol to incrementally update the anomaly detection model via self-attestation and consensus among other agents running on similar IoT devices. By collaborating with other agents, the training phase (convergence) is significantly shorter, and the false-alarm rate is reduced due to the shared experience.
To evaluate the proposed framework, we used 48 Raspberry Pis running a wide variety of IoT applications. We also made a discrete event simulator of the agents with different connectivity to simulate larger systems and help the reader follow the protocol. Our evaluations show that the proposed method can efficiently detect different types of attacks with no false alarms (given enough devices and a sufficient training period).
The proposed framework does not require any manual process of creating virus signatures or pushing updates. Furthermore, IoT devices are able to detect exploits without prior knowledge of the exploits. In terms of practicality, the framework is platform generic, completely autonomous, and scales with the number of IoT devices. Therefore, the proposed framework has the potential to provide IoT manufacturers with a cheap and effective solution.
We hope that this framework, and its variants, will assist researchers and the IoT industry in securing the future of the Internet of Things.
\section{Introduction}
Due to the rapid development of deep convolutional neural networks (CNNs)~\cite{simonyan2014very,he2016deep}, substantial progress has been achieved in image recognition.
However, they usually assume that the training and testing data is balanced.
In practice, training or testing data appears to be long-tailed, e.g., there exist few samples for rare diseases in medical diagnosis~\cite{zhang2020exploring,zhang2021cross,zhao2021deep,zhao2021contralaterally,pan2022computer,zhang2021automatic} or endangered animals in species classification~\cite{zhang2022generalized,zhang2022onfocus,cheng2022compound}.
As mentioned by~\cite{zhang2021weakly}, the case becomes even worse in weakly and semi-supervised learning scenarios~\cite{zhang2018spftn,zhang2019leveraging,zhang2020weakly,zhao2021weakly,huang2021scribble,pan2022learning,zhao2022cross,wang2022double,zhang2022generalized}.
The conventional training process of CNNs is dominated by frequent classes while rare classes are neglected.
A large number of methods focus on rebalancing the training data by biasing the training process towards rare classes~\cite{cao2019learning,hong2021disentangling}.
However, they usually assume the testing data is balanced, while the distribution of real testing data is unknown and arbitrary. The performance of existing algorithms for learning with long-tailed distribution (LLTD) remains to be validated under such circumstances.
This paper revisits existing evaluation strategies for long-tailed image classification and provides a new strategy to evaluate LLTD algorithms on testing data with unknown distribution.
\label{sec:intro}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{pics/adapt.pdf}
\caption{In real-world tasks for learning with long-tailed distribution, the distribution of testing data is unknown and may be different from that of training data.}
\label{fig:motivation}
\end{figure}
\begin{table*}[t]
\fontsize{7.5}{9} \selectfont
\centering
\caption{Evaluation strategies for long-tailed image classification, including: 1) testing algorithms on a balanced subset~\cite{zhou2020bbn,park2021influence}; 2) testing algorithms on subsets having uniform distribution and forward/backward-trend distribution~\cite{zhang2021test}; 3) testing algorithms on a series of subsets with dynamic evolving distributions.}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{r|l|l|r}
\toprule
\toprule
\multicolumn{1}{l|}{\textbf{Test Data}} & \multicolumn{1}{l}{\textbf{Metrics}} & \textbf{Descriptions} & \multicolumn{1}{l}{\textbf{Evaluation Properties}} \\
\midrule
\midrule
& \cellcolor[rgb]{ .949, .949, .949}ACC$_{all}$ & \cellcolor[rgb]{ .949, .949, .949}Accuracy of all classes & \multicolumn{1}{l}{\cellcolor[rgb]{ .949, .949, .949}1. Only reflect the classification performance under a} \\
\multicolumn{1}{l|}{Balanced} & ACC$_{many}$ & Accuracy of many-shot classes & \multicolumn{1}{l}{specific testing distribution.} \\
\multicolumn{1}{l|}{Distribution} & \cellcolor[rgb]{ .949, .949, .949}ACC$_{mid}$ & \cellcolor[rgb]{ .949, .949, .949}Accuracy of medium-shot classes & \multicolumn{1}{l}{\cellcolor[rgb]{ .949, .949, .949}2. Not able to reflect the stability and upper/lower} \\
& ACC$_{few}$ & Accuracy of few-shot classes & \multicolumn{1}{l}{bound of algorithms on real-world testing data.} \\
\midrule
\midrule
\multicolumn{1}{l|}{Multiple} & \cellcolor[rgb]{ .949, .949, .949}ACC$_{forw}$ & \cellcolor[rgb]{ .949, .949, .949}Accuracy under forward-trend test distribution & \multicolumn{1}{l}{\cellcolor[rgb]{ .949, .949, .949}1. Reflect the classification performance under several} \\
\multicolumn{1}{l|}{Independent} & ACC$_{uni}$ & Accuracy under uniform test distribution & \multicolumn{1}{l}{dependent imbalanced testing distributions.} \\
\multicolumn{1}{l|}{Distributions} & \cellcolor[rgb]{ .949, .949, .949}ACC$_{back}$ & \cellcolor[rgb]{ .949, .949, .949}Accuracy under backward-trend test distribution & \multicolumn{1}{l}{\cellcolor[rgb]{ .949, .949, .949}2. Coarsely reflect stability and upper/lower bound.} \\
\midrule
\midrule
& AUC & Area under accuracy curve & \\
\multicolumn{1}{l|}{Dynamic} & \cellcolor[rgb]{ .949, .949, .949}ACC$_{avg}$ & \cellcolor[rgb]{ .949, .949, .949}Average accuracy & \multicolumn{1}{l}{\cellcolor[rgb]{ .949, .949, .949}1. Reflect the classification performance under real} \\
\multicolumn{1}{l|}{Evolving} & ACC$_{std}$ & Standard deviation of accuracy & \multicolumn{1}{l}{testing data more comprehensively.} \\
\multicolumn{1}{l|}{Distributions} & \cellcolor[rgb]{ .949, .949, .949}ACC$_{max}$ & \cellcolor[rgb]{ .949, .949, .949}Maximum accuracy & \multicolumn{1}{l}{\cellcolor[rgb]{ .949, .949, .949}2. Evaluate stability and upper/lower bound more finely.} \\
& ACC$_{min}$ & Minimum accuracy & \\
& DR & Drop ratio of accuracy & \\
\bottomrule
\bottomrule
\end{tabular}%
}
\label{tab:comp-eval}
\end{table*}
Recently, learning models with long-tailed training data has attracted considerable research interest.
Existing methods concentrate on different procedures including data preparation, feature representation learning, objective function design, and class prediction to tackle this task.
According to the focused procedures, they can be categorized into four types, i.e., data balancing~\cite{chawla2002smote,drummond2003c4}, feature balancing~\cite{cui2021parametric}, loss balancing~\cite{lin2017focal,park2021influence,ren2020balanced}, and prediction balancing~\cite{wang2020long,zhang2021test,li2022trustworthy}.
Most of them train models with imbalanced subsets of existing datasets, such as CIFAR-10/100~\cite{krizhevsky2009learning}, ImageNetLT~\cite{liu2019large}, and iNaturalist~\cite{van2018inaturalist}. Then, they evaluate the classification performance on a balanced testing set with accuracy values of all classes and partial classes (e.g., many/middle/few-shot classes).
This evaluation process differs from the real-world situation where the testing data distribution is unknown.
Hence, it may not reflect the actual classification performance objectively.
Moreover, those metrics cannot indicate the stability and performance bounds of algorithms on testing data which may have arbitrary distributions.
\cite{zhang2021test} attempts to evaluate LLTD algorithms on testing data with multiple distributions: 1) Testing data has an imbalanced distribution sharing the forward trend of the training data; 2) Testing data has uniform sample sizes across classes, namely the distribution is balanced; 3) Testing data has imbalanced distributions with the backward trend of the training data. However, this strategy is still limited in comprehensively evaluating LLTD algorithms on unknown testing data, and can only coarsely reflect algorithmic stability and bounds.
To address the above issues, we devise new evaluation metrics based on testing data with dynamic evolving distributions. To cover possible test distributions as fully as possible, we simulate a series of testing sets by shifting the frequent classes according to the class index. Then, each case of testing data is used to calculate the accuracy of LLTD algorithms. A corpus of evaluation metrics is estimated from the accuracy on all testing sets: 1) The area under the curve formed by those accuracy values and the average accuracy can be used for evaluating the performance on universal testing data; 2) The standard deviation and the accuracy drop ratio reflect the stability against the variation of test data distributions; 3) The maximum and minimum accuracy values approximate the upper and lower bound, respectively. The comparison between our evaluation strategy and existing strategies is provided in Table~\ref{tab:comp-eval}.
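As a concrete sketch, the snippet below derives all six quantities from the accuracy curve measured over the simulated testing sets; the trapezoidal normalization of the AUC and the definition of the drop ratio DR relative to the maximum accuracy are our illustrative assumptions, since several conventions are possible:

```python
import numpy as np

def evolving_distribution_metrics(accuracies):
    """Summarize an accuracy curve measured over a series of testing
    sets whose class distributions evolve dynamically."""
    acc = np.asarray(accuracies, dtype=float)
    # Trapezoidal area under the accuracy curve, normalized so that a
    # constant curve at accuracy a yields AUC = a.
    auc = ((acc[:-1] + acc[1:]) / 2).sum() / (len(acc) - 1)
    return {
        "AUC": auc,                       # area under accuracy curve
        "ACC_avg": acc.mean(),            # average accuracy
        "ACC_std": acc.std(),             # stability under distribution shift
        "ACC_max": acc.max(),             # approximate upper bound
        "ACC_min": acc.min(),             # approximate lower bound
        "DR": (acc.max() - acc.min()) / acc.max(),  # drop ratio of accuracy
    }
```

A flat curve gives zero standard deviation and drop ratio, while a method that collapses on backward-trend distributions shows a low ACC$_{min}$ and a large DR.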
Based on the new evaluation metrics, we provide an elaborate analysis of the classification accuracy, robustness, and bounds of existing LLTD algorithms on two benchmarks in which the testing data has the same or a different imbalance ratio from the training data.
This can guide the selection of data rebalancing techniques.
We observe that a few methods such as the contrastive representation learning algorithm in~\cite{cui2021parametric}, ensembling models learned with different distributions~\cite{zhang2021test}, logit rebalancing strategies~\cite{ren2020balanced,li2022long}, and data distribution disentangling~\cite{hong2021disentangling} have distinguished performance on addressing the long-tailed learning problem.
However, there is still large room for improving the classification of tail classes under large distribution shifts or a large number of classes. Meanwhile, middle classes also need attention during training.
The main contributions of this paper are as follows.
\begin{itemize}
\item[1)] We provide a brief survey of existing long-tailed learning algorithms and classify them into four types according to the key procedures in the pipeline for tackling long-tailed learning, i.e., data rebalancing, loss design, feature representation learning, and category prediction.
\item[2)] We design a corpus of new evaluation metrics for analyzing the classification accuracy, stability, and bounds of LLTD algorithms on testing data with dynamic evolving distributions more comprehensively.
\item[3)] Based on the new evaluation metrics, two benchmarks where training data and testing data have the same or different imbalance ratios are set up to evaluate existing LLTD algorithms.
\end{itemize}
\begin{figure*}[thbp]
\centering
\includegraphics[width=1\linewidth]{pics/long-tail-categorization-v4.pdf}
\put(-482,278){\cite{chawla2002smote}}
\put(-491,256){\cite{drummond2003c4}}
\put(-479,233){\cite{zhang2017mixup}}
\put(-310,278){\cite{wang2020long}}
\put(-232,256){\cite{tang2022invariant}}
\put(-307,233){\cite{zhang2021test}}
\put(-207,278){\cite{cai2021ace}}
\put(-200,233){\cite{li2022trustworthy}}
\put(-473,156){\cite{kang2019decoupling}}
\put(-452,133){\cite{zhou2020bbn}}
\put(-492,111){\cite{tang2020long}}
\put(-457,78){\cite{cui2021parametric}}
\put(-480,55){\cite{li2022targeted}}
\put(-475,32){\cite{alshammari2022long}}
\put(-51,172){\cite{lin2017focal}}
\put(-34,146){\cite{park2021influence}}
\put(-30,119){\cite{cao2019learning}}
\put(-68,97){\cite{ren2020balanced}}
\put(-34,77){\cite{hong2021disentangling}}
\put(-72,55){\cite{menon2020long}}
\put(-38,34){\cite{li2022long}}
\put(-38,11){\cite{zhao2022adaptive}}
\vspace{-5pt}
\caption{Categorization of existing long-tailed image classification works. Based on four primary procedures, existing works can be categorized into four groups: balanced data, balanced feature representation, balanced loss, and balanced prediction.}
\label{figCate}
\end{figure*}
\section{Survey of Long-tailed Learning}
\label{sec:related}
\subsection{Problem Definition}
\label{sec:prob_def}
Real-world data often has a long-tailed distribution.
Let the training data be $\mathbb D^{trn}=\{(x_n, y_n)\}_{n=1}^{N^{trn}}$. $x_n$ and $y_n$ denote the $n$-th training image and its class label, respectively; $N^{trn}$ represents the number of training samples.
Let the number of training samples belonging to the $c$-th class be $N^{trn}_c$. The imbalance ratio of the training data is denoted as $\rho^{trn}=\max_c N^{trn}_c / \min_c N^{trn}_c$.
Suppose there exist $C$ target classes, i.e., $y_n\in[1,C]$.
We define the testing dataset as $\mathbb D^{tst}=\{(x_n, y_n)\}_{n=1}^{N^{tst}}$.
$N^{tst}$ denotes the number of testing samples. The number of testing samples belonging to the $c$-th class is $N^{tst}_c$. The imbalance ratio of the testing data is denoted as $\rho^{tst}=\max_c N^{tst}_c/\min_c N^{tst}_c$.
We define the distribution shift between training and testing data with the symmetrized KL (Jeffreys) divergence,
\begin{equation}
\delta(\mathbb D^{trn}, \mathbb D^{tst})=\sum_{c=1}^C [q_c^{trn} \ln(\frac{q_c^{trn}}{q_c^{tst}})+q_c^{tst} \ln(\frac{q_c^{tst}}{q_c^{trn}})],
\end{equation}
where $q_c^{trn}=\frac{N_c^{trn}}{N^{trn}}$ and $q_c^{tst}=\frac{N_c^{tst}}{N^{tst}}$.
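For illustration, the imbalance ratio $\rho$ and the distribution shift $\delta$ can be computed directly from per-class sample counts; here the two KL terms are summed as a non-negative quantity:

```python
import math

def imbalance_ratio(counts):
    """rho = max_c N_c / min_c N_c over per-class sample counts."""
    return max(counts) / min(counts)

def distribution_shift(train_counts, test_counts):
    """Symmetrized-KL shift between the empirical class frequencies
    q^trn and q^tst of the training and testing sets."""
    n_trn, n_tst = sum(train_counts), sum(test_counts)
    shift = 0.0
    for a, b in zip(train_counts, test_counts):
        q_trn, q_tst = a / n_trn, b / n_tst
        shift += q_trn * math.log(q_trn / q_tst) + q_tst * math.log(q_tst / q_trn)
    return shift
```

The shift is zero when the two label distributions coincide and grows as they diverge, e.g., when the testing data follows the backward trend of a long-tailed training set.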
Given the training data $\mathbb D^{trn}$, the goal is to learn a CNN model which can adapt well to the testing data $\mathbb D^{tst}$. We define the inference process of the CNN model as $f(\cdot)$. Namely, given an image $x$, the CNN model can generate a classification probability vector $\mathbf p\in \mathbb R^C$. The inferred label is denoted as $\hat{y}=\arg\max_c p[c]$. Here, $p[c]$ indicates the $c$-th class's probability value.
\subsection{A Survey of Prior Works}
Under long-tailed distribution, head classes are prone to dominate the learning process, thus impairing the accuracy of tail classes. The core factor to address this problem is balancing the learning process. As shown in Figure~\ref{figCate}, we categorize existing works into four groups: balanced data, balanced feature representation, balanced loss, and balanced prediction.
\subsubsection{Balanced Data}
A straightforward method to address the data imbalance problem is to construct balanced data distribution, i.e., re-sampling the training data. However, excessively sampling tail classes~\cite{chawla2002smote} induces the over-fitting issue, while under-sampling head classes~\cite{drummond2003c4} hampers the representation learning and weakens the accuracy of head classes. Based on the effectiveness of mixup-based methods \cite{zhang2017mixup, yun2019cutmix}, combining samples from different classes can alleviate the long-tailed challenge. However, a naive implementation is prone to generate more head-head pairs. To this end, Xu \etal \cite{Xu2021TowardsCM} propose a balance-oriented mixup algorithm by biasing the mixing factor towards tail classes and increasing the occurrence of head-tail pairs.
\subsubsection{Balanced Feature Representation}
Designing algorithms to learn balanced feature representations is the other promising direction for addressing the long-tailed learning problem. cRT~\cite{kang2019decoupling} finds that data imbalancement does not impair the representation ability. Hence, cRT~\cite{kang2019decoupling} learns the feature extraction backbone with the conventional training strategy and then employ the data re-balancing algorithms to train the classifier. Similarly, BBN~\cite{zhou2020bbn} unifies the traditional and re-balanced data sampling strategies, and gradually shifts focus from the former to the latter. Aiming to jointly acquire representative features and discriminative classification scores, Tang \etal \cite{tang2020long} use the moving average of momentum to measure the misleading effect of head classes during training, and build a causal inference algorithm to remove the misleading effect during inference.
Another feature balancing approach is to enhance the representation ability for each class, \eg by contrastive learning.
Cui \etal \cite{cui2021parametric} propose to explicitly learn a feature center for each class, which is used to increase the inter-class separability.
Li \etal \cite{li2022targeted} reveal that the imbalanced sample distribution leads to close feature centers for tail classes. Thus, they propose to constrain feature centers to be uniformly distributed.
Alshammari \etal \cite{alshammari2022long} tackle the long-tailed challenge from the perspective of weight balancing, and apply the weight decay strategy to penalize large weights.
\subsubsection{Balanced Loss}
Another reasonable approach to the imbalance challenge is to assign relatively higher attention to tail classes during network optimization. For example, Focal loss~\cite{lin2017focal} assigns larger weights to samples with lower prediction confidence, \ie, the so-called hard samples. However, if the dataset exhibits severe imbalance, excessively emphasizing tail classes leads to over-fitting. To alleviate this, Park \etal~\cite{park2021influence} propose to re-weight samples according to the inverse of their influence on decision boundaries. LDAM~\cite{cao2019learning} tackles the imbalance challenge by increasing the margins of tail classes' decision boundaries, since the decision boundaries of head classes are more reliable than those of tail classes.
Another line of work focuses on balancing the gradients of head and tail classes by adjusting the \textit{Softmax} function. Ren \etal~\cite{ren2020balanced} make an early attempt to balance the \textit{Softmax} function and develop a meta-sampling strategy to dynamically adjust the data distribution during training. LADE~\cite{hong2021disentangling} proposes a post-compensated \textit{Softmax} strategy to disentangle the source data distribution from network predictions. In addition, Menon \etal~\cite{menon2020long} introduce the logit adjustment strategy. If the testing samples are independent and identically distributed with the training samples, logit adjustment generates accurate predictions. However, this assumption is hard to guarantee in practice, and the underlying distribution of the testing samples is usually unknown. To cope with this problem, GCL~\cite{li2022long} introduces Gaussian clouds into the logit adjustment process, and adaptively sets the cloud size according to the sample size of each class. Recently, Zhao \etal~\cite{zhao2022adaptive} reveal that previous logit adjustment techniques focus primarily on the sample quantity of each class while ignoring sample difficulty. They therefore propose to prevent over-optimization on easy samples of tail classes while emphasizing training on difficult samples of head classes.
\subsubsection{Balanced Prediction}
Another class of methods tries to tackle the long-tailed challenge by improving the inference process.
Aiming to balance the accuracy of head and tail classes, Wang \etal~\cite{wang2020long} learn multiple experts simultaneously and ensemble their predictions to reduce the bias of single models. A dynamic routing module is developed to control the computational costs.
ACE~\cite{cai2021ace} attempts to learn multiple complementary expert models. Specifically, each expert model is responsible for distinguishing a specific set of classes while its responses to non-assigned classes are suppressed.
Considering that real-world testing data may exhibit a distribution distinct from the training data, Zhang \etal \cite{zhang2021test} learn multiple models under different distributions and combine them with weights generated via test-time adaptation.
To decrease the computational cost, Li \etal~\cite{li2022trustworthy} propose to measure the uncertainty of each expert and assign experts to each sample dynamically.
Tang \etal~\cite{tang2022invariant} utilize uniform intra-class data sampling and confidence-aware data sampling strategies to construct different training environments for learning features invariant to diversified attributes.
\section{The New Evaluation Metrics}
\label{sec:data}
\input{tables/cmp-cifar-same-01}
In real-world applications, the limited training data cannot fully reflect the actual data distribution. To evaluate LLTD algorithms more comprehensively, we set up testing datasets with dynamically evolving distributions.
The proportion of samples belonging to the $c$-th class, denoted $q_c^{tst}$, is determined as follows:
\begin{equation}
q_c^{tst}(\alpha) = \frac{({\rho^{tst}})^{\frac{|c-\alpha|}{C-1}}}{\sum_{c'=1}^C ({\rho^{tst}})^{\frac{|c'-\alpha|}{C-1}}},
\end{equation}
where $\alpha$ is a variable controlling the peak of the testing data distribution, and $\rho^{tst}<1$ is the decay coefficient. Varying $\alpha$ yields testing data with diversified distributions.
Let the maximum class sample size be $N_{max}^{tst}$. We choose the total number of samples $N^{tst}$ as
\begin{equation}
N^{tst} = \sum_{c=1}^C N_{max}^{tst} ({\rho^{tst}})^{\frac{c-1}{C-1}}.
\end{equation}
Then, the sample size of the $c$-th class in the testing data is $N_c^{tst}=N^{tst} q_c^{tst}(\alpha)$. Finally, the testing samples are randomly drawn from the original dataset.
\input{tables/cmp-cifar-same-05}
Accuracy is the basic metric for evaluating the performance of image classification models.
It records the fraction of correctly predicted testing samples:
\begin{equation}
V^{acc} = \frac{\sum_{n=1}^{N^{tst}} [\hat y_n = y_n] }{N^{tst}},
\end{equation}
where $[\cdot]$ is the Iverson bracket, equal to 1 if the prediction is correct and 0 otherwise.
We synthesize a series of testing sets by varying $\alpha$ in $\{\frac{(t-1)C}{T}+1\}_{t=1}^T$, where $T$ denotes the number of synthesized testing distributions.
For each synthesized distribution, we perform five random samplings with replacement.
The performance on the $t$-th synthesized testing set, denoted $V^{acc}_{t}$, is obtained by averaging the accuracy over the five samplings.
The distribution shift between the training set and the $t$-th synthesized testing set is estimated as,
\begin{equation}
\delta_t = \sum_{c=1}^C \left[q_c^{trn} \ln\frac{q_c^{trn}}{q_c^{tst}(\alpha_t) }+q_c^{tst}(\alpha_t) \ln\frac{q_c^{tst}(\alpha_t)}{q_c^{trn}}\right],
\end{equation}
where $\alpha_t=\frac{(t-1)C}{T}+1$.
One simple way to unify the results of the $T$ synthesized testing sets is to average them directly:
\begin{equation}
V^{avg} = \frac{1}{T}\sum_{t=1}^T V^{acc}_{t},
\end{equation}
where $V^{avg}$ denotes the averaged accuracy. We can also evaluate the sensitivity to distribution variation with the accuracy drop ratio $V^{dr}$ and the standard deviation $V^{std}$:
\begin{align}
V^{dr} & = \frac{\max_{t} V^{acc}_{t} - \min_{t} V^{acc}_{t}}{\max_{t} V^{acc}_{t}}, \\
V^{std} & = \sqrt{\sum_{t=1}^T (V^{acc}_{t} - V^{avg})^2/T}.
\end{align}
\input{tables/cmp-cifar-same-1}
Another way to combine the accuracy values is to calculate the area under the curve of the $T$ synthesized testing sets' results. All testing sets are ranked in ascending order of the distribution shift between the training and testing sets. Denote the ranked set indices by $\{t_k\}_{k=1}^T$. This metric is calculated as:
\begin{equation}
V^{auc} = \frac{\sum_{k=1}^{T-1} (V^{acc}_{t_k} + V^{acc}_{t_{k+1}})(\delta_{t_{k+1}} - \delta_{t_{k}})/2}{ \delta_{t_{T}} - \delta_{t_{1}} }.
\end{equation}
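Given the per-set accuracies $V^{acc}_t$ and shifts $\delta_t$, all four summary metrics reduce to a few array operations. The following sketch (helper name ours) computes the AUC with the same trapezoidal sum as above:

```python
import numpy as np

def summarize_metrics(acc, delta):
    """Aggregate per-testing-set accuracies acc[t] and distribution shifts delta[t]."""
    acc, delta = np.asarray(acc, float), np.asarray(delta, float)
    v_avg = acc.mean()
    v_dr = (acc.max() - acc.min()) / acc.max()
    v_std = acc.std()                        # population std: divides by T
    order = np.argsort(delta)                # rank testing sets by shift, ascending
    a, d = acc[order], delta[order]
    area = (0.5 * (a[1:] + a[:-1]) * np.diff(d)).sum()   # trapezoidal sum
    v_auc = area / (d[-1] - d[0])
    return v_avg, v_dr, v_std, v_auc
```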
\section{Benchmarks}
\label{sec:exper}
\subsection{Experimental settings.}
We use two datasets for training and testing, including CIFAR10 and CIFAR100~\cite{krizhevsky2009learning}.
Both CIFAR10 and CIFAR100 contain 60,000 images of size 32$\times$32.
The number of classes of CIFAR10 and CIFAR100 is 10 and 100, respectively.
The original datasets are split into 50,000 images for training and 10,000 images for testing.
We follow~\cite{kang2019decoupling,zhou2020bbn} to synthesize imbalanced training sets with the imbalance ratio $\rho^{trn}\in\{0.01,0.05,0.1\}$.
$N_{max}^{trn}$ is set to 5,000 and 500 for CIFAR10 and CIFAR100, respectively.
During training, 10\% of the images of each class are used for validation.
When generating testing data, we choose $\rho^{tst}$ from $\{ 0.01, 0.05, 0.1\}$ and set $N_{max}^{tst}$ to 1,000 and 100 for CIFAR10 and CIFAR100, respectively.
\subsection{Benchmark 1: Testing Data having Same Imbalance Ratio as Training Data} \label{sec:exper-same}
In this subsection, we re-evaluate the performance of existing methods, including Focal~\cite{lin2017focal}, LDAM~\cite{cao2019learning}, cRT~\cite{kang2019decoupling}, BBN~\cite{zhou2020bbn}, MetaS~\cite{ren2020balanced}, DecTDE~\cite{tang2020long}, RIDE~\cite{wang2020long}, IBLoss~\cite{park2021influence}, TADE~\cite{zhang2021test}, LADE~\cite{hong2021disentangling}, Prior-LT~\cite{Xu2021TowardsCM}, PCL~\cite{cui2021parametric}, and GCL~\cite{li2022long}.
The baseline method (CE) is implemented using the conventional uniform data sampling and standard cross-entropy loss function.
For all methods, we use ResNet32~\cite{he2016deep} as the classification backbone model.
Here, we use the same imbalance ratio for training and testing data.
\begin{figure*}[t]
\centering
\begin{subfigure}{0.328\linewidth}
\centering
\includegraphics[width=1\linewidth, trim=10 10 5 5, clip]{pics/acc_curves/CIFAR10_0.01_0.01.pdf}
\caption{$\rho^{trn}=\rho^{tst}=0.01$}
\end{subfigure}
\begin{subfigure}{0.328\linewidth}
\centering
\includegraphics[width=1\linewidth,trim=10 10 5 5, clip]{pics/acc_curves/CIFAR10_0.05_0.05.pdf}
\caption{$\rho^{trn}=\rho^{tst}=0.05$}
\end{subfigure}
\begin{subfigure}{0.328\linewidth}
\centering
\includegraphics[width=1\linewidth, trim=10 10 5 5, clip]{pics/acc_curves/CIFAR10_0.1_0.1.pdf}
\caption{$\rho^{trn}=\rho^{tst}=0.1$}
\end{subfigure}
\caption{Accuracy curves of existing methods on CIFAR10 against the JS divergence. (a) The decay coefficients $\rho^{trn}$ and $\rho^{tst}$ are set to 0.01; (b) $\rho^{trn}$ and $\rho^{tst}$ are set to 0.05; (c) $\rho^{trn}$ and $\rho^{tst}$ are set to 0.1.
\label{fig:acc-curve-cifar10}}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.328\linewidth}
\centering
\includegraphics[width=1\linewidth, trim=10 10 5 5, clip]{pics/acc_curves/CIFAR100_0.01_0.01.pdf}
\caption{$\rho^{trn}=\rho^{tst}=0.01$}
\end{subfigure}
\begin{subfigure}{0.328\linewidth}
\centering
\includegraphics[width=1\linewidth,trim=10 10 5 5, clip]{pics/acc_curves/CIFAR100_0.05_0.05.pdf}
\caption{$\rho^{trn}=\rho^{tst}=0.05$}
\end{subfigure}
\begin{subfigure}{0.328\linewidth}
\centering
\includegraphics[width=1\linewidth, trim=10 10 5 5, clip]{pics/acc_curves/CIFAR100_0.1_0.1.pdf}
\caption{$\rho^{trn}=\rho^{tst}=0.1$}
\end{subfigure}
\caption{Accuracy curves of existing methods on CIFAR100 against the JS divergence.
\label{fig:acc-curve-cifar100}}
\end{figure*}
\input{tables/cmp-cifar-smaller}
We report the experimental results on re-distributed versions of CIFAR10 and CIFAR100 which have the same imbalance ratios for training and testing data in Table~\ref{tab:CIFAR-01} ($\rho^{trn}=\rho^{tst}=0.01$), Table~\ref{tab:CIFAR-05} ($\rho^{trn}=\rho^{tst}=0.05$), and Table~\ref{tab:CIFAR-1} ($\rho^{trn}=\rho^{tst}=0.1$). The accuracy curves with respect to the JS divergence are illustrated in Fig.~\ref{fig:acc-curve-cifar10} and Fig.~\ref{fig:acc-curve-cifar100}.
\input{tables/cmp-cifar-larger}
We can observe that parametric contrastive learning (PCL)~\cite{cui2021parametric} copes very well with the data imbalance problem.
It achieves the highest classification accuracy overall, and shows high stability across different testing distributions.
This suggests that learning good representations is a very effective strategy for relieving the influence of data imbalance, as it improves feature generalization and prevents over-fitting on tail classes.
TADE~\cite{zhang2021test} also shows a very promising ability to tackle the data imbalance problem, owing to its combination of multiple expert models learned with different simulated training distributions.
Another effective direction is adjusting the classification logits according to the classes' sample sizes, as in GCL~\cite{li2022long} and MetaS~\cite{ren2020balanced}, which helps enlarge the embedding space of tail classes.
LADE~\cite{hong2021disentangling} is targeted at disentangling the label distribution of the training data from the model prediction. It also adapts well to arbitrary testing distributions; for example, it produces the second best AUC and average ACC metrics on CIFAR10 with $\rho^{trn}=\rho^{tst}=0.01$.
Judging from the standard deviation (STD) and drop ratio (DR) metrics, most methods with relatively better classification performance, such as PCL, TADE, and GCL, are robust against variations of the testing data distribution. Prior-LT~\cite{Xu2021TowardsCM}, which uses class-balanced mixup to construct training samples and a prior-compensated Softmax for probability prediction, performs more stably than other methods on CIFAR100.
The upper bound of LLTD algorithms on testing data with unknown distribution can be approximated by the maximum accuracy (ACC-MAX) metric. Most algorithms are unable to improve this metric. On CIFAR100, a few algorithms such as cRT, BBN, IBLoss, and Prior-LT even show a severely decreased ACC-MAX value compared to the baseline. The reason may be that these methods excessively bias the training process towards tail classes. Several methods are capable of improving the ACC-MAX metric, such as MetaS, PCL, LADE, and Focal loss.
The minimum accuracy (ACC-MIN) approximates the lower bound of LLTD algorithms. This metric is usually attained when there is a large or moderate shift between the training and testing distributions. We can see that most methods are capable of improving this metric, since they have specific designs for enhancing the significance of tail classes during training. The three most effective methods in boosting ACC-MIN are PCL, TADE, and LADE.
\subsection{Benchmark 2: Testing Data with Imbalance Ratios Different to Training Data} \label{sec:larger}
We report the experimental results on re-distributed versions of CIFAR10 and CIFAR100 where the imbalance ratio of the training data differs from that of the testing data in Table~\ref{tab:CIFAR-05-01} ($\rho^{trn}=0.05$, $\rho^{tst}=0.01$) and Table~\ref{tab:CIFAR-05-1} ($\rho^{trn}=0.05$, $\rho^{tst}=0.1$).
We can see that existing methods exhibit performance rankings similar to the setting where the imbalance ratios of the training and testing distributions are the same.
\section{Discussions and Conclusions}
\label{sec:conclu}
\textbf{Discussions}. Based on the experimental results, we recommend the following directions to improve algorithms for learning with long-tailed distribution.
\begin{itemize}
\item The minimum accuracy of most methods is not high under severe data imbalance or a large number of classes (e.g., CIFAR100). This indicates that there is still large room for improving the performance on tail classes.
\item From Fig.~\ref{fig:acc-curve-cifar10}, on CIFAR10, a few methods, such as PCL, GCL, and LADE, achieve high accuracy at small and large JS divergences. However, the accuracy at intermediate JS divergences is relatively low. This means that they perform well on head and tail classes, but the middle classes lack attention during training.
\end{itemize}
\textbf{Conclusions}. In this paper, we set up new benchmarks to analyze the performance of methods for learning with long-tailed distributions. Based on a series of testing sets with evolving data distributions, we devise new metrics to comprehensively analyze the accuracy, stability, and upper/lower bounds of existing methods. Extensive experiments on CIFAR10 and CIFAR100 are conducted to evaluate existing methods. We also categorize existing methods into data, feature, loss, and prediction balancing types according to the stage of the working pipeline they focus on.
{\small
\input{main.bbl}
}
\end{document}
\section{Introduction}
The geometric mean of positive numbers $a$ and $b$ is the number $\sqrt{ab}$, and it satisfies the equations
\begin{equation}
\sqrt{ab}=e^{\frac{1}{2}\left(\log a+\log b\right)}=\lim_{p\rightarrow 0} \left(\frac{a^p+b^p}{2}\right)^{1/p}.
\end{equation}
The quantity
\begin{equation}
f(p)=\left(\frac{a^p+b^p}{2}\right)^{1/p}, \quad -\infty<p< \infty,
\end{equation}
is called the \emph{binomial mean}, or the \emph{power mean}, and is an increasing function of $p$ on $\left(-\infty,\infty\right)$.
Replacing $a$ and $b$ by positive definite matrices $A$ and $B$, let
\begin{equation}
F(p)=\left(\frac{A^p+B^p}{2}\right)^{1/p}.\label{defnofF}
\end{equation}
In \cite{bhagwat} Bhagwat and Subramanian showed that
\begin{equation}
\lim_{p\rightarrow 0} F(p)=e^{\frac{1}{2}\left(\log A+\log B\right)}.\label{limitofF}
\end{equation}
They also showed that the matrix function $F(p)$ is monotone with respect to $p$, on the intervals $(-\infty,-1]$ and $[1,\infty)$ but not on $\left(-1,1\right)$. (The order $X\leq Y$ on the space $\p$ of $n\times n$ positive definite matrices is defined to mean $Y-X$ is a positive semidefinite matrix.)
The entity in \eqref{limitofF} is called the ``log Euclidean mean'' of $A$ and $B$. However it has some drawbacks, and the accepted definition of the geometric mean of $A$ and $B$ is
\begin{equation}
A\#_{1/2} B=A^{1/2} \left(A^{-1/2} B A^{-1/2}\right)^{1/2} A^{1/2}.\label{defnofgm}
\end{equation}
It is of interest to have various comparisons between the quantities in \eqref{defnofF}, \eqref{limitofF} and \eqref{defnofgm}, and that is the question discussed in this note.
Generalising \eqref{defnofgm} various authors have considered for $0\leq t\leq 1$
\begin{equation}
A\#_{t}B=A^{1/2}\left(A^{-1/2} B A^{-1/2}\right)^t A^{1/2},\label{geodesic}
\end{equation}
and called it \emph{$t$-geometric mean}, or \emph{$t$-power mean}. In recent years there has been added interest in this object because of its connections with Riemannian geometry \cite{bhatia2}. The space $\p$ has a natural Riemannian metric, with respect to which there is a unique geodesic joining any two points $A,B$ of $\p$. This geodesic can be parametrised as \eqref{geodesic}.
The linear path
\begin{equation}
(1-t)A+tB, \quad 0\leq t\leq 1,\label{line}
\end{equation}
is another path in $\p$ joining $A$ and $B$. It is well known \cite[Exercise 6.5.6]{bhatia2} that
\begin{equation}
A\#_{t}B \leq (1-t)A+tB \text{ for all }0\leq t\leq 1.\label{pathcomparison}
\end{equation}
The special case $t=1/2$ of this is the matrix arithmetic-geometric mean inequality, first proved by Ando \cite{ando1}.
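As a quick numerical illustration, \eqref{geodesic} can be computed via an eigendecomposition of positive definite matrices (helper names ours), and the inequality \eqref{pathcomparison} checked by verifying that $(1-t)A+tB - A\#_t B$ has nonnegative eigenvalues:

```python
import numpy as np

def spd_power(M, p):
    """M**p for a symmetric positive definite matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * w**p) @ V.T

def geo_mean(A, B, t):
    """t-geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}."""
    Ah, Ahi = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Ahi @ B @ Ahi, t) @ Ah
```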
For $0\leq t\leq 1$ let
\begin{equation}
F_t(p)=\left((1-t)A^p +t B^p\right)^{1/p}.\label{defnofFt}
\end{equation}
For $t=1/2$ this is the $F$ defined in \eqref{defnofF}.
It follows from the work in \cite{bhagwat} that
\begin{equation}
\lim_{p\rightarrow 0} F_t(p)=e^{(1-t) \log A+t \log B},\label{limitofFt}
\end{equation}
and that $F_t(p)$ is monotone with respect to $p$ on $(-\infty,-1]$ and $[1,\infty)$ but not on $\left(-1,1\right)$. We denote by $\lambda_j(X),\ 1\leq j\leq n$, the decreasingly ordered eigenvalues of a Hermitian matrix $X$, and by $|||\cdot|||$ any unitarily invariant norm on the space $\M$ of $n\times n$ matrices. Our first observation is that while the matrix function $F_t(p)$ defined in \eqref{defnofFt} is not monotone on the whole line $\left(-\infty,\infty\right)$, the real functions $\lambda_j(F_t(p))$ are:
\begin{theorem}\label{1}
Given positive definite matrices $A$ and $B$, let $F_t(p)$ be as defined in \eqref{defnofFt}. Then for $1\leq j\leq n$ the function $\lambda_j(F_t(p))$ is an increasing function of $p$ on $\left(-\infty,\infty\right)$.
\end{theorem}
As a corollary $|||F_t(p)|||$ is an increasing function of $p$ on $\left(-\infty,\infty\right)$. In contrast to this, Hiai and Zhan \cite{zhan} have shown that the function $|||\left(A^p+B^p\right)^{1/p}|||$ is \emph{decreasing} on $(0,1]$ (but not necessarily so on $(1,\infty)$). A several variable version of both our Theorem \ref{1} and this result of Hiai and Zhan can be established (see Remark 1).
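Theorem \ref{1} is easy to probe numerically. The sketch below (helper names ours) evaluates $F_t(p)$, using the limit \eqref{limitofFt} at $p=0$; one can then check that the sorted eigenvalues are nondecreasing along any grid of exponents:

```python
import numpy as np

def spd_fun(M, f):
    """Apply a scalar function to a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.T

def F_t(A, B, t, p):
    """((1-t) A^p + t B^p)^{1/p}, with the log-Euclidean limit at p = 0."""
    if p == 0:
        H = (1 - t) * spd_fun(A, np.log) + t * spd_fun(B, np.log)
        return spd_fun(H, np.exp)
    Mp = (1 - t) * spd_fun(A, lambda w: w**p) + t * spd_fun(B, lambda w: w**p)
    return spd_fun(Mp, lambda w: w**(1.0 / p))
```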
Combining Theorem \ref{1} with a result of Ando and Hiai \cite{hiai} we obtain a comparison of norms of the means \eqref{defnofF}, \eqref{limitofF}, \eqref{defnofgm}, and their $t$-generalisations:
\begin{corollary}\label{2}
Let $A$ and $B$ be two positive definite matrices. Then for $p>0$
\begin{equation}
|||A\#_{t}B|||\leq |||e^{(1-t)\log A+t\log B}|||\leq |||\left((1-t)A^{p}+t B^p\right)^{1/p}|||.\label{pinequality}
\end{equation}
\end{corollary}
The first inequality in \eqref{pinequality} is proved in \cite{hiai} as a complement to the famous Golden-Thompson inequality: for Hermitian matrices $H, K$ we have $|||e^{H+K}|||\leq |||e^H e^K|||$. Stronger versions of this inequality due to Araki \cite{araki} and Ando-Hiai \cite{hiai} can be used to obtain a refinement of \eqref{pinequality}. We have for $0\leq t\leq 1$
\begin{eqnarray}
|||A\#_{t}B|||&\leq& |||e^{(1-t)\log{A}+t \log{B}}|||\nonumber\\
&\leq& |||(B^{\frac{tp}{2}} A^{(1-t)p} B^{\frac{tp}{2}})^{1/p}|||\nonumber\\
&\leq& |||((1-t)A^p+tB^p)^{1/p}|||.\label{pinequalities}
\end{eqnarray}
We draw special attention to the case $p=1$ for which further refinements are possible.
\begin{theorem}\label{3}
Let $A$ and $B$ be positive definite matrices. Then
\begin{eqnarray}
|||A\#_{t}B|||&\leq& |||e^{(1-t)\log{A}+t \log{B}}|||\nonumber\\
&\leq& |||B^{\frac{t}{2}} A^{1-t} B^{\frac{t}{2}}|||\nonumber\\
&\leq& \left|\left|\left|\frac{1}{2}\left(A^{1-t} B^{t}+ B^{t} A^{1-t}\right)\right|\right|\right|\nonumber\\
&\leq& |||A^{1-t} B^t|||\nonumber\\
&\leq& |||(1-t)A+tB|||.\label{inequalities}
\end{eqnarray}
\end{theorem}
For convenience we have stated these results as inequalities for unitarily invariant norms. Many of these inequalities have stronger versions (with log majorisations instead of weak majorisations). This is explained along with the proofs in Section 2. For the special case $t=1/2$ we provide an alternative special proof for a part of Theorem \ref{3}, and supplement it with other inequalities. Section 3 contains remarks and comparisons with known results, some of which are very recent.
\section{Proofs} \emph{Proof of Theorem \ref{1}}\hspace{0.4cm}
Let $0<p<p'$. Then the map $f(t)= t^{p/p'}$ on $[0,\infty)$ is matrix concave; see \cite[Chapter V]{bhatia1}. Hence
\begin{equation*}
(1-t)A^p+t B^p\leq \left((1-t)A^{p'}+t B^{p'}\right)^{p/p'}.
\end{equation*}
This implies that
\begin{equation*}
\lambda_j\left((1-t)A^p+t B^p\right)\leq \lambda_j\left((1-t)A^{p'}+t B^{p'}\right)^{p/p'}.
\end{equation*}
Taking $p$th roots of both sides, we obtain
\begin{equation}
\lambda_j\left((1-t)A^p+t B^p\right)^{1/p}\leq \lambda_j\left((1-t)A^{p'}+t B^{p'}\right)^{1/p'}.\label{pthroot}
\end{equation}
Next consider the case $p<p'<0$. Then $0<p'/p<1$.
Arguing as above we obtain
\begin{equation*}
\lambda_j\left((1-t)A^{p'}+t B^{p'}\right)\leq \lambda_j\left((1-t)A^{p}+t B^{p}\right)^{p'/p}.
\end{equation*}
Take $p'$th roots of both sides. Since $p'<0$, the inequality is reversed and we get the inequality \eqref{pthroot} in this case too. Now let $p$ be any positive real number. Using the matrix convexity of the function $f(t)=t^{-1}$ we see that
\begin{equation*}
\left((1-t)A^{-p}+t B^{-p}\right)^{-1}\leq (1-t)A^p+tB^p.
\end{equation*}
From this we get an inequality for the $j$th eigenvalues, and then for their $p$th roots; i.e.,
\begin{equation*}
\lambda_j\left((1-t)A^{-p}+t B^{-p}\right)^{-1/p}\leq \lambda_j\left((1-t)A^p+tB^p\right)^{1/p}.
\end{equation*}
It follows from the above cases that for any $p<0<p'$
\begin{equation}
\lambda_j\left((1-t)A^p+tB^p\right)^{1/p} \leq \lambda_j\left((1-t)A^{p'}+tB^{p'}\right)^{1/p'}.\label{pandp'}
\end{equation}
Taking the limit as $p'\rightarrow 0$ and using \eqref{limitofFt} we get
$$\lambda_j\left((1-t)A^p+tB^p\right)^{1/p}\leq \lambda_j\left(e^{(1-t)\log A+t \log B}\right),$$
i.e., for any $p<0$ we have $\lambda_j\left(F_t(p)\right)\leq \lambda_j\left(F_t(0)\right).$
For the case $p>0$ a similar argument shows that $\lambda_j\left(F_t(p)\right)\geq \lambda_j\left(F_t(0)\right).$ \qed
\bigskip
\emph{Proof of Theorem \ref{3}}\hspace{0.4cm}
The first inequality in \eqref{inequalities} follows from a more general result of Ando and Hiai \cite{hiai}. They showed that for Hermitian matrices $H$ and $K$,
$|||\left(e^{pH}\#_t\ e^{pK}\right)^{1/p}|||$ increases to $|||e^{(1-t)H+tK}|||$ as $p\downarrow 0$. Choosing $H=\log A, \ K=\log B$, and $p=1$, we obtain the first inequality in \eqref{inequalities}. The Golden-Thompson inequality generalised to all unitarily invariant norms (see \cite[p.~261]{bhatia1}) says that $|||e^{H+K}|||\leq |||e^{K/2} e^H e^{K/2}|||$. Using this we obtain the second inequality in \eqref{inequalities}. (We remark here that it was shown in \cite{bhatiaemi} that the generalised Golden-Thompson inequality follows from a generalised exponential metric increasing property. The latter is related to the metric geometry of the manifold $\p$. So its use in the present context seems natural.) Given a matrix $X$ we denote by $\h X$ the matrix $\frac{1}{2}(X+X^*)$. By Proposition IX.1.2 in \cite{bhatia1}, if a product $XY$ is Hermitian, then $|||XY|||\leq |||\h (YX)|||$. Using this we obtain the third inequality in \eqref{inequalities}. The fourth inequality follows from the general fact $|||\h X|||\leq |||X|||$ for all $X$. The last inequality in \eqref{inequalities} is a consequence of the matrix Young inequality proved by T. Ando \cite{ando}.\qed
For Hermitian matrices $H,K$ let $\lambda_1(H)\geq \cdots\geq \lambda_n(H)$ and $\lambda_1(K)\geq \cdots\geq \lambda_n(K)$ be the eigenvalues of $H$ and $K$ respectively. Then the \emph{weak majorisation} $\lambda(H)\prec_w \lambda(K)$ means that
$$\sum_{i=1}^k \lambda_i(H)\leq \sum_{i=1}^k \lambda_i(K),\quad k=1,2,\ldots,n.$$
If in addition for $k=n$ there is equality here, then we say $\lambda(H)\prec \lambda(K)$.
For $A,B\geq 0$ we write $\lambda(A)\prec_{\log}\lambda(B)$ if
\begin{equation}
\prod_{i=1}^k \lambda_i(A)\leq \prod_{i=1}^k \lambda_i(B), \quad k=1,\ldots,n, \label{logmajorisation}
\end{equation}
with equality for $k=n$, that is, $\det A=\det B$.
We refer to it as \emph{log majorisation}. We say $A$ is \emph{weakly log majorised} by $B$, in symbols $\lambda(A)\prec_{\weaklog} \lambda(B)$, if \eqref{logmajorisation} is fulfilled. It is known that
$$\lambda(A)\prec_{\weaklog}\lambda(B) \text{ implies } \lambda(A)\prec_w \lambda(B),$$
so that $|||A|||\leq |||B|||$ for any unitarily invariant norm. (See \cite{bhatia1} for facts on majorisation used here.)
There are stronger versions of some of the inequalities in \eqref{pinequalities}. We have for $p>0$
\begin{align}
\lambda(A\#_t B) &\prec_{\log} \lambda(e^{(1-t)\log A+t\log B})\nonumber\\
&\prec_{\log} \lambda\left(B^{tp/2} A^{(1-t)p} B^{tp/2}\right)^{1/p}\nonumber\\
&= \lambda\left(A^{(1-t)p} B^{tp}\right)^{1/p}\nonumber\\
&\prec_{\weaklog} \lambda\left((1-t) A^p+t B^p\right)^{1/p}.\label{stronginequalities}
\end{align}
The first inequality is a result by Ando and Hiai \cite{hiai}. The second inequality follows from a result by Araki \cite{araki}. The last inequality above follows from the matrix version of Young's inequality by Ando \cite{ando}.
A further strengthening of the first inequality in \eqref{stronginequalities} replacing log majorisation by pointwise domination is not possible. For $t=1/2$ this would have said
$$\lambda_j(A\#_{1/2} B) \leq \lambda_j\left(e^{\frac{\log A+\log B}{2}}\right).$$
This is refuted by the example $A=\left[\begin{array}{cc}
2 & \ 0\\
\ 0 & 1
\end{array}\right]$, $B=\left[\begin{array}{cc}
3 & 3 \\
3 & 9/2
\end{array}\right]$. A calculation shows that $\lambda_2(A\#_{1/2} B)=1$ and $\lambda_2(e^{\frac{\log A+\log B}{2}})\approx 0.9806$.
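This counterexample is easily verified numerically (matrix functions computed via eigendecomposition; helper name ours):

```python
import numpy as np

def spd_fun(M, f):
    """Apply a scalar function to a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.T

A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[3.0, 3.0], [3.0, 4.5]])

Ah = spd_fun(A, np.sqrt)
Ahi = spd_fun(A, lambda w: 1.0 / np.sqrt(w))
G = Ah @ spd_fun(Ahi @ B @ Ahi, np.sqrt) @ Ah        # A #_{1/2} B
L = spd_fun(0.5 * (spd_fun(A, np.log) + spd_fun(B, np.log)), np.exp)

lam2_G = np.linalg.eigvalsh(G)[0]   # smallest eigenvalue, i.e. lambda_2
lam2_L = np.linalg.eigvalsh(L)[0]
```

One finds $\lambda_2(A\#_{1/2}B)=1$ while $\lambda_2(e^{(\log A+\log B)/2})\approx 0.9806$, so pointwise domination fails.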
The case $t=1/2, \ p=1$ is special. Following an idea of Lee \cite{lee} we present a different proof of the majorisation
\begin{equation}
\lambda\left(A\#_{1/2}B\right)\prec_{\log} \lambda\left(B^{1/4}A^{1/2}B^{1/4}\right).\label{majorized}
\end{equation}
The geometric mean $A\#_{1/2} B$ satisfies the equation $A\#_{1/2}B=A^{1/2} U B^{1/2}$ for some unitary U. See \cite[p.109]{bhatia1}. Therefore for the operator norm $\|\cdot\|$ we have
\begin{eqnarray}
\|A\#_{1/2} B\|&=& \|A^{1/2} U B^{1/2}\|\nonumber\\
&=& \|A^{1/4} A^{1/4} U B^{1/4} B^{1/4}\|\nonumber\\
&\leq& \|A^{1/4} U B^{1/4} B^{1/4} A^{1/4}\|\nonumber\\
&\leq& \|A^{1/4} U B^{1/4}\| \|B^{1/4} A^{1/4}\|.\label{inequalities2}
\end{eqnarray}
Here the first inequality is a consequence of the fact that if $XY$ is Hermitian, then $\|XY\|\leq\|YX\|$. Next note that
\begin{eqnarray}
\|A^{1/4} U B^{1/4}\|^2&=& \|A^{1/4} U B^{1/2} U^* A^{1/4}\|\nonumber\\
&\leq& \|A^{1/2} U B^{1/2} U^*\|\nonumber\\
&=& \|A^{1/2} U B^{1/2}\|=\|A\#_{1/2}B\|.\label{inequalities3}
\end{eqnarray}
Again, to derive the inequality above we have used the fact that $\|XY\|\leq \|YX\|$ if $XY$ is Hermitian. From \eqref{inequalities2} and \eqref{inequalities3} we see that
\begin{equation*}
\|A\#_{1/2}B\|^{1/2} \leq \|B^{1/4} A^{1/4}\|,
\end{equation*}
and hence
\begin{equation}
\|A\#_{1/2} B\|\leq \|B^{1/4} A^{1/4}\|^2=\|B^{1/4} A^{1/2} B^{1/4}\|.\label{strongaujla}
\end{equation}
This is the same as saying that
\begin{equation}
\lambda_1\left(A\#_{1/2} B\right)\leq \lambda_1\left(B^{1/4} A^{1/2} B^{1/4}\right).\label{lambda1}
\end{equation}
If $\wedge^k (X),\ 1\leq k\leq n$, denotes the $k$th antisymmetric tensor power of $X$, then
\begin{equation*}
\wedge^k\left(A\#_{1/2} B\right)=\wedge^k (A) \#_{1/2} \wedge^k(B).
\end{equation*}
So from \eqref{lambda1} we obtain
\begin{equation*}
\lambda_1\left(\wedge^k\left(A\#_{1/2} B\right)\right)\leq \lambda_1\left(\wedge^k(B)^{1/4} \wedge^k(A)^{1/2} \wedge^k(B)^{1/4}\right).
\end{equation*}
This is the same as saying
\begin{equation}
\prod_{j=1}^k \lambda_j\left(A\#_{1/2} B\right)\leq \prod_{j=1}^k \lambda_j\left(B^{1/4}A^{1/2}B^{1/4}\right),\quad 1\leq k\leq n.\label{productlambda}
\end{equation}
For $k=n$ there is equality here because
\begin{equation*}
\det\left(A\#_{1/2} B\right)=\det\left(A^{1/2} B^{1/2}\right).
\end{equation*}
From \eqref{productlambda} we have the corollary
\begin{equation}
\lambda\left(A\#_{1/2} B\right)\prec_w \lambda\left(B^{1/4} A^{1/2} B^{1/4}\right).\label{lambda}
\end{equation}
Included in this is the trace inequality
\begin{equation*}
\tr\left(A\#_{1/2} B\right)\leq \tr A^{1/2} B^{1/2}.
\end{equation*}
This has been noted in \cite{lee}.
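The weak majorisation \eqref{lambda}, and in particular the trace inequality it contains, can be spot-checked on random positive definite matrices (helper names ours):

```python
import numpy as np

def spd_fun(M, f):
    """Apply a scalar function to a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.T

rng = np.random.default_rng(0)

def rand_spd(n):
    """A random well-conditioned symmetric positive definite matrix."""
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

for _ in range(20):
    A, B = rand_spd(4), rand_spd(4)
    Ah = spd_fun(A, np.sqrt)
    Ahi = spd_fun(A, lambda w: 1.0 / np.sqrt(w))
    G = Ah @ spd_fun(Ahi @ B @ Ahi, np.sqrt) @ Ah          # A #_{1/2} B
    Bq = spd_fun(B, lambda w: w**0.25)
    M = Bq @ spd_fun(A, np.sqrt) @ Bq                      # B^{1/4} A^{1/2} B^{1/4}
    lg = np.sort(np.linalg.eigvalsh(G))[::-1]              # decreasing eigenvalues
    lm = np.sort(np.linalg.eigvalsh(M))[::-1]
    # weak majorisation: all partial sums of eigenvalues of G are dominated
    assert np.all(np.cumsum(lg) <= np.cumsum(lm) + 1e-8)
```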
\section{Remarks}
\begin{enumerate}
\item[1.]
Let $A_1,\ldots, A_m$ be positive definite matrices and let $\alpha_1,\ldots,\alpha_m \geq 0$ be such that $\sum \alpha_j=1$. Let
\begin{equation}
F(p)=\left(\alpha_1 A_1^p+\cdots+\alpha_m A_m^p\right)^{1/p}\label{defnofFsevvar}.
\end{equation}
Then by the same argument as in the proof of Theorem \ref{1}, $\lambda_j(F(p))$ is increasing in $p$ on $(-\infty,\infty)$. In particular for $\alpha_1=\cdots=\alpha_m=1/m$ the function $\lambda_j\left(\left(\frac{A_1^p+\cdots+A_m^p}{m}\right)^{1/p}\right)$ is an increasing function of $p$ on $(-\infty,\infty)$. Therefore \\ $\left|\left|\left|\left(\frac{A_1^p+\cdots+A_m^p}{m}\right)^{1/p}\right|\right|\right|$ is an increasing function of $p$ on $(-\infty,\infty)$.
In contrast, it can be shown that $\left|\left|\left|\left(A_1^p+\cdots+A_m^p\right)^{1/p}\right|\right|\right|$ is a decreasing function of $p$ on $(0,1]$. For $m=2$ Hiai and Zhan have shown this using the following result of Ando and Zhan \cite{andozhan}. For positive operators $A,B$ and $r\geq 1$
$$|||(A+B)^r|||\geq |||A^r+B^r||| .$$
A several variable version of this follows from \cite[Theorem 5 (ii)]{kittaneh1} of Bhatia and Kittaneh:
$$|||(A_1+\cdots+A_m)^r|||\geq |||A_1^r+\cdots+A_m^r||| \text{ for } r\geq 1.$$
By imitating the argument in \cite{zhan} one can show $\left|\left|\left|\left(A_1^p+\cdots+A_m^p\right)^{1/p}\right|\right|\right|$ is a decreasing function of $p$ on $(0,1]$.
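The monotonicity claims of this remark can be illustrated numerically; the sketch below (ours, with equal weights $\alpha_j=1/m$ and the trace norm chosen for concreteness) checks that each $\lambda_j(F(p))$ increases with $p$, while the norm of $\left(A_1^p+\cdots+A_m^p\right)^{1/p}$ decreases on $(0,1]$:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_pd(n):
    # random symmetric positive definite test matrix
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

def mpow(M, t):
    # fractional power of a symmetric positive definite matrix
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

mats = [rand_pd(3) for _ in range(3)]

def F(p):
    # F(p) = ((A_1^p + A_2^p + A_3^p)/3)^{1/p}, the equal-weight case
    return mpow(sum(mpow(A, p) for A in mats) / 3, 1.0 / p)

ps = [-2.0, -0.5, 0.5, 1.0, 3.0]
eigs = [np.linalg.eigvalsh(F(p)) for p in ps]
for e_lo, e_hi in zip(eigs, eigs[1:]):
    assert np.all(e_lo <= e_hi + 1e-9)   # each lambda_j(F(p)) increases with p

# in contrast, |||(A_1^p + A_2^p + A_3^p)^{1/p}||| decreases on (0, 1]
# (for positive matrices the trace norm is just the trace)
tr = [np.trace(mpow(sum(mpow(A, p) for A in mats), 1.0 / p)) for p in (0.25, 0.5, 1.0)]
assert tr[0] >= tr[1] - 1e-9 and tr[1] >= tr[2] - 1e-9
```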
\item[2.] In \cite{bhagwat} Bhagwat and Subramanian showed that for positive definite matrices $A_1,\ldots, A_m$ and $\alpha_1,\ldots,\alpha_m \geq 0$ such that $\sum \alpha_j=1$
$$\lim_{p\rightarrow 0} \left(\alpha_1 A_1^p+\cdots+\alpha_m A_m^p\right)^{1/p}=e^{\alpha_1 \log A_1+\cdots+\alpha_m \log A_m}.$$ It follows from Remark 1 that
$$|||e^{\alpha_1 \log A_1+\cdots+\alpha_m \log A_m}|||\leq |||\left(\alpha_1 A_1^p+\cdots+\alpha_m A_m^p\right)^{1/p}||| \text{ for }p>0.$$
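Both the limit and the resulting norm inequality can be verified numerically; in the sketch below (ours; the weights, matrix sizes, and the choice of the operator norm are illustrative) the log-Euclidean mean is compared with $F(p)$ for small and moderate $p$:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_pd(n):
    # random symmetric positive definite test matrix
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

def mfun(M, f):
    # apply f to the eigenvalues of a symmetric matrix
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.T

A1, A2 = rand_pd(3), rand_pd(3)
a1, a2 = 0.3, 0.7

def F(p):
    S = a1 * mfun(A1, lambda w: w**p) + a2 * mfun(A2, lambda w: w**p)
    return mfun(S, lambda w: w**(1.0 / p))

# log-Euclidean mean, the claimed p -> 0 limit
LE = mfun(a1 * mfun(A1, np.log) + a2 * mfun(A2, np.log), np.exp)

# F(p) approaches LE for small p
assert np.linalg.norm(F(1e-5) - LE) < 1e-2 * np.linalg.norm(LE)
# the norm comparison of Remark 2, here for the operator norm and p = 1/2
assert np.linalg.norm(LE, 2) <= np.linalg.norm(F(0.5), 2) + 1e-9
```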
\item[3.] Recently, several versions of the geometric mean of more than two positive definite matrices have been considered by various authors. (See \cite{andolimathias}, \cite{bhatiaholbrook} and \cite{bini}.) For positive definite matrices $A_1,\ldots, A_m$ let $G(A_1,\ldots,A_m)$ denote any of these geometric means. Our discussion in Corollary 2 and Remark 2 raises the question whether
$$|||G(A_1,\ldots,A_m)|||\leq |||e^{\frac{\log A_1+\cdots+\log A_m}{m}}|||.$$
\item[4.] By Ando's characterisation of the geometric mean, if $X$ is a \emph{Hermitian} matrix and
$$\left[\begin{array}{ccc}
A & X\\
X & B
\end{array}\right]\geq 0, \text{ then } X\leq A\# B.$$
Since $$\left[\begin{array}{ccc}
A & -X\\
-X & B
\end{array}\right]=\left[\begin{array}{ccc}
I & \ 0\\
\ 0 & -I
\end{array}\right] \left[\begin{array}{ccc}
A & X\\
X & B
\end{array}\right] \left[\begin{array}{ccc}
I & \ 0\\
\ 0 & -I
\end{array}\right]$$
we have
$$\left[\begin{array}{ccc}
A & -X\\
-X & B
\end{array}\right]\geq 0 \quad \text{ if } \quad \left[\begin{array}{ccc}
A & X\\
X & B
\end{array}\right]\geq 0.$$
Hence $\pm X \leq A\# B$. Then by \cite[Lemma 2.1]{kittaneh2}, $|||X|||\leq |||A\# B|||$. In contrast to this, we do have that $$\left[\begin{array}{ccc} A & A^{1/2} B^{1/2} \\ B^{1/2} A^{1/2} & B\end{array}\right]=\left[\begin{array}{ccc} A^{1/2} & 0\\ B^{1/2} & 0\end{array}\right] \left[\begin{array}{ccc} A^{1/2} & B^{1/2} \\ 0 & 0\end{array}\right]\geq 0$$ but we have the opposite inequality $|||A\# B|||\leq |||A^{1/2} B^{1/2}|||$.
\item[5.] Among the several matrix versions of the arithmetic-geometric mean inequality proved by Bhatia and Kittaneh \cite{kittaneh} one says that $4|||AB|||\leq |||(A+B)^2|||$. Using this and Theorem \ref{1} we have
\begin{equation}
|||A^{1/2} B^{1/2}|||\leq \left|\left|\left|\left(\frac{A^{1/2}+B^{1/2}}{2}\right)^2\right|\right|\right|\leq \left|\left|\left|\left(\frac{A^p+B^p}{2}\right)^{1/p}\right|\right|\right| \text{ for } p\geq 1/2.
\end{equation}
For $t=1/2$ this extends the chain of inequalities \eqref{inequalities} in another direction.
\item[6.] In a recent paper \cite{aujla} Matharu and Aujla have shown that
\begin{equation}
\lambda\left(A\#_tB\right)\prec_{\log} \lambda\left(A^{1-t} B^t\right).\label{gmandgeodesic}
\end{equation}
For their proof they use the Furuta inequality. The inequality \eqref{majorized} follows from this. As a corollary these authors observe that
\begin{equation}
|||A\#_{1/2}B|||\leq |||(B^{1/2} A B^{1/2})^{1/2}|||. \label{coraujla}
\end{equation}
In fact, from \eqref{gmandgeodesic} one can deduce the stronger inequality \eqref{lambda}. By IX.2.10 in \cite{bhatia1} we have for $A,B$ positive definite and $0\leq t\leq 1$
\begin{equation*}
|||B^t A^t B^t|||\leq |||(BAB)^t|||.
\end{equation*}
So, the inequality \eqref{lambda} is stronger than \eqref{coraujla}. In turn, the latter inequality is stronger than one proved by T. Kosem \cite{kosem} who showed
\begin{equation*}
|||(A\#_{1/2}B)^2|||\leq |||B^{1/2} A B^{1/2}|||.
\end{equation*}
This follows from \eqref{coraujla} because the majorisation $x\prec_w y$ for positive vectors implies $x^2\prec_w y^2$.
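The majorization \eqref{gmandgeodesic} together with the chain \eqref{coraujla} and Kosem's inequality can be spot-checked numerically. The sketch below (ours; trace norms are used for the $t=1/2$ chain) tests them on a random positive definite pair:

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_pd(n):
    # random symmetric positive definite test matrix
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)

def mpow(M, t):
    # fractional power of a symmetric positive definite matrix
    w, V = np.linalg.eigh(M)
    return (V * w**t) @ V.T

def gmean(A, B, t):
    # weighted geometric mean A #_t B = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}
    Ah, Ahi = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Ahi @ B @ Ahi, t) @ Ah

A, B, t = rand_pd(4), rand_pd(4), 0.3

lam_g = np.linalg.eigvalsh(gmean(A, B, t))[::-1]
Bh = mpow(B, t / 2)
# lambda(A^{1-t} B^t) = lambda(B^{t/2} A^{1-t} B^{t/2})
lam_p = np.linalg.eigvalsh(Bh @ mpow(A, 1 - t) @ Bh)[::-1]

# log-majorization: products of the k largest eigenvalues
assert np.all(np.cumprod(lam_g) <= np.cumprod(lam_p) * (1 + 1e-9))

# the chain for t = 1/2: (coraujla) and Kosem's inequality, in the trace norm
G = gmean(A, B, 0.5)
M = mpow(B, 0.5) @ A @ mpow(B, 0.5)
assert np.trace(G) <= np.trace(mpow(M, 0.5)) + 1e-9
assert np.trace(G @ G) <= np.trace(M) + 1e-9
```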
\item[7.] The third inequality in \eqref{inequalities} can be derived from the arithmetic-geometric mean inequality of Bhatia-Davis \cite{davis}
\begin{equation*}
|||A^{1/2}XA^{1/2}|||\leq \left|\left|\left|\frac{1}{2} (AX+XA)\right|\right|\right|
\end{equation*}
valid for all $X$ and positive definite $A$. There are several refinements of this inequality, some of which involve different means (Heinz means, logarithmic means, etc.). Each such result can be used to further refine \eqref{inequalities}.
\end{enumerate}
\section{Introduction}
The propagation speed of pulses of structured light has attracted much
attention in recent years
\cite{Gio2015,Horv2015,Bareza,Bouch,MinuComm1,Alf,MinuComm2,Faccio2,MinuPRA2018,KondakciArbitV,AbourVisC2019}%
. It is well known that while different velocities associated with light
propagation are equal to the universal constant $c$ in the case of
one-dimensional plane waves in vacuum, it is not so when the plane waves are
propagating in dispersive media, where the group velocity can take any value
below or above $c$. In particular, relations between the group velocity,
energy transport velocity and the pulse's time of flight become complicated
and even controversial, see, e.g., Refs. \cite{7kiirust,Peatross2000} and
review \cite{MilonniReview}.
In the case of 2- or 3-dimensional structured light pulses, even if they
propagate in empty space, certain space-time couplings can emulate temporal dispersive properties. Such couplings materialize through correlation of
the spatial frequencies involved in the construction of the structured pulses
with the temporal frequencies $\omega$ constituting the pulse temporal
profile. If for all Fourier constituents of the pulse the correlation consists
in a linear functional dependence between $\omega$ and the component $k_{z}$ of
the wave vector, which lies in the direction of propagation of the pulse, then
the pulse is called propagation-invariant. This means that its intensity
profile, or spatial distribution of its energy density, does not change in the
course of propagation---it does not spread either in the lateral or in the
longitudinal direction (or temporally). In reality such a non-diffracting
non-spreading propagation occurs over a large but still finite distance,
because the functional dependence is not strict for practically realizable
(finite-energy and finite-aperture) pulses.
The first versions of such propagation-invariant localized pulsed waves were
theoretically discovered in the late 1980s, and since then an extensive literature
has been devoted to them, see collective monographs \cite{LWI,LW2} and reviews
\cite{DonelliSirged,revPIER,revSalo,MeieLorTr,KiselevYlevde,AbourClassif2019}.
Their realizability in optics was first demonstrated in
Ref.~\cite{PRLmeie} with the example of the so-called Bessel-X pulse, which is the
only propagation-invariant pulsed version of the monochromatic Bessel beam
introduced in Ref.~\cite{Durnin}. The group velocity of Bessel-X pulses
exceeds $c$, i.e., it is superluminal in empty space without the presence of
any resonance medium. This strange property has been widely discussed in the
literature referred to above and was experimentally verified by several groups
\cite{exp2,exp3,meieXfemto,meieOPNis} for cylindrically symmetric 3D pulses
and recently for 2D (light sheet) counterparts of such superluminal pulses
\cite{AbourClassif2019,Xsheet}.
Motivated by the growing interest in studying the propagation and applications
of structured light pulses in general, and by recently introduced techniques
of generation of pulsed light sheets with space-time couplings in particular,
the following question arises. How is the group velocity of
propagation-invariant pulses related---if it is related at all---to
the energy flow velocity in them? Definitely the statement \textquotedblleft
if an energy density is associated with the magnitude of the wave ... the
transport of energy occurs with the group velocity, since that is the rate at
which the pulse travels along\textquotedblright\ (citation from
Ref.~\cite{Jackson}, section 7.8) cannot hold if the group velocity exceeds
$c$. Indeed, very general proofs show that no electromagnetic field can
transport energy faster than $c$ \cite{Lekner2002,Yannis2008,YannisLWII}, i.e., even in the case of superluminal pulses. On the other hand, how should one
comprehend the situation where energy flows slowly, thus as if lagging behind
the pulse?
There are few calculations of the Poynting vector and energy density of
electromagnetic propagation-invariant pulses
\cite{Recami1998,Faccio2010,Salem2011}. To our best knowledge, there is only
one work where the energy flow velocity of a vectorially treated superluminal
Bessel-X light pulse has been calculated \cite{Mugnai2005}. In this work the
velocity is found to be equal to $c$ from the symmetry ($z$) axis up to near the first zero of the Bessel function. As we will see below, this result is
not exact and was obviously obtained due to carrying out the final evaluation
numerically in a paraxial geometry. The authors of Ref.~\cite{Mugnai2005}
conclude that "it is not clear what kind of physical mechanism makes the energy velocity different from the phase and group ones".
We have shown earlier \cite{OttMag,PIERS2013} that the spatial distribution of
the Poynting vector, the energy density and its transport velocity, calculated
numerically by means of scalar approximation formulas for the Bessel beams,
practically coincide with the results of an exact vectorial approach, and the
value of the velocity is slightly below $c$ for various propagation-invariant
scalar fields.
The main objective of the present study is to evaluate analytically how the
energy flow velocity is related---if it is related at all---to the group
velocity in the case of various propagation-invariant vectorial and scalar
fields. Let us note that in this paper we deal primarily with instantaneous
energy flow velocity which, as a matter of fact, does not depend on time in a
frame copropagating with the propagation-invariant field. Since commonly the
average energy flow velocity per period of a time-harmonic EM field is
considered in the literature, for which the name "energy transport velocity"
is used, following Ref.~\cite{Kaiser2011} we will avoid this term if
time-harmonic fields are not considered. We will frequently use simply the
short form "energy velocity".
The paper has been organized as follows. In Section II we reproduce the proof
that the upper limit for the energy flow velocity in any EM field is $c.$ The
same for any scalar field is presented in the Appendix. In Section III we derive
a rather universal relation between the group velocity and the energy velocity
of propagation-invariant transverse magnetic (TM) 2D and 3D superluminal fields.
We start with the 2D case, i.e., with light sheets not only for reasons of
simplicity and transparency but also having in mind that pulsed light sheets
have also practical value, e.g., in microscopy, and are presently studied
intensively
\cite{KondakciArbitV,AbourVisC2019,AbourClassif2019,Xsheet,KondakciSSelfH}.
Section IV deals with energy velocities of several known cylindrical scalar and
vectorial superluminal fields. Section V is devoted to subluminal pulsed
fields and, in particular, to a propagation-\textit{variant }so-called pulsed
Bessel beam, which is generated by a diffractive axicon and is essential for
applications. In Section VI we discuss the interpretation and nature of the
obtained universal expression for the axial energy flow velocity in terms of
the theory of special relativity and in terms of the normalized impedance of
non-null EM fields. Finally, we speculate on the reasons why the value of the energy velocity is different from that of the group velocity.
\section{Upper limit of energy flow velocity}
The local energy flow velocity, as it is well known, is given by ratio of
energy flux (Poynting vector) and the electromagnetic energy density as (SI
units)%
\begin{equation}
\mathbf{V}=2c^{2}\frac{\mathbf{E}\times\mathbf{B}}{\mathbf{E}^{2}%
+c^{2}\mathbf{B}^{2}}~. \label{Vgeneral}%
\end{equation}
The magnitude of this quantity cannot exceed the universal constant $c$, the speed of light in
vacuum. Indeed, by using the general vector identity%
\[
\left( \mathbf{E}\times\mathbf{B}\right) ^{2}=\mathbf{E}^{2}\mathbf{B}%
^{2}-\left( \mathbf{E}\cdot\mathbf{B}\right) ^{2}%
\]
one can write \cite{Lekner2002,Yannis2008,YannisLWII,Kaiser2011}%
\begin{equation}
1-\frac{\mathbf{V}^{2}}{c^{2}}=\frac{\left( \mathbf{E}^{2}-c^{2}%
\mathbf{B}^{2}\right) ^{2}+4c^{2}\left( \mathbf{E}\cdot\mathbf{B}\right)
^{2}}{\left( \mathbf{E}^{2}+c^{2}\mathbf{B}^{2}\right) ^{2}}~. \label{v<c}%
\end{equation}
The right-hand side of Eq.~(\ref{v<c}) is nonnegative and as a consequence
$\left\vert \mathbf{V}\right\vert \leq c$. The luminal value $\left\vert
\mathbf{V}\right\vert =c$ is attained only for TEM waves and for \textit{null}
EM fields \cite{BB2003,YannisLWII}, a trivial example of which is
a single plane wave. Since the two terms in the numerator on the right-hand
side of Eq.~(\ref{v<c}) are known to be Lorentz invariant, the property of
subluminality or luminality of $\mathbf{V}$ does not depend on the speed of a
reference frame.
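Both the vector identity and Eq.~(\ref{v<c}) are easy to confirm numerically for arbitrary field vectors; the following sketch (ours, in units where the constant $c$ is set to $1$) does so for a random pair $\mathbf{E},\mathbf{B}$:

```python
import numpy as np

rng = np.random.default_rng(4)
c = 1.0                         # work in units with c = 1

E = rng.standard_normal(3)      # arbitrary instantaneous field vectors
B = rng.standard_normal(3)

S = np.cross(E, B)
V = 2 * c**2 * S / (E @ E + c**2 * (B @ B))   # Eq. (Vgeneral)

# vector identity (E x B)^2 = E^2 B^2 - (E . B)^2
assert np.isclose(S @ S, (E @ E) * (B @ B) - (E @ B) ** 2)

# Eq. (v<c): 1 - V^2/c^2 equals the manifestly nonnegative right-hand side
rhs = ((E @ E - c**2 * (B @ B)) ** 2 + 4 * c**2 * (E @ B) ** 2) \
      / (E @ E + c**2 * (B @ B)) ** 2
assert np.isclose(1 - (V @ V) / c**2, rhs)
assert np.linalg.norm(V) <= c
```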
Optical fields, especially paraxial ones, can be described to a good
approximation by a single scalar function $\psi(x,y,z,t)$. In this case, the
Poynting vector $\mathbf{S}$ and energy density $w$ are given by the
expressions \cite{MW}
\begin{subequations}
\label{scalarSw}%
\begin{align}
\mathbf{S} & \mathbf{=-}\alpha\left( \dot{\psi}^{\ast}\mathbf{\nabla}%
\psi+\dot{\psi}\mathbf{\nabla}\psi^{\ast}\right) ~,\\
w & =\alpha\left( c^{-2}\dot{\psi}\dot{\psi}^{\ast}+\mathbf{\nabla}%
\psi\cdot\mathbf{\nabla}\psi^{\ast}\right) ~,
\end{align}
where the dot denotes the derivative with respect to time, the asterisk denotes
complex conjugation, $\mathbf{\nabla}$ is the gradient operator, and $\alpha$ is a
positive constant whose value depends on the choice of units. The local energy
flow velocity is again given by the ratio $\mathbf{V}=\mathbf{S}/w$; the
constant $\alpha$ cancels out and, as proved in the Appendix, the limit
$\left\vert \mathbf{V}\right\vert \leq c$ holds for scalar fields as well.
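As a sanity check of these formulas, one can evaluate $\mathbf{V}=\mathbf{S}/w$ for a simple scalar superposition of two plane waves, $\psi=\cos(k_{x}x)\exp[i(k_{z}z-\omega t)]$ with $\omega=c\sqrt{k_{x}^{2}+k_{z}^{2}}$ (our illustrative example, in units $\alpha=c=1$); the bound $\left\vert \mathbf{V}\right\vert \leq c$ then holds at every point, and the on-axis velocity is strictly subluminal:

```python
import numpy as np

alpha, c = 1.0, 1.0                     # units with alpha = c = 1
omega = 2.0
theta = np.deg2rad(25.0)                # inclination of the two plane waves
kx, kz = omega * np.sin(theta), omega * np.cos(theta)  # omega = c*sqrt(kx^2+kz^2)

for x in np.linspace(0.0, 3.0, 25):
    # psi = cos(kx*x) exp(i(kz*z - omega*t)); the common phase cancels in S and w
    f = np.cos(kx * x)
    dpsi_t = -1j * omega * f
    grad = np.array([-kx * np.sin(kx * x), 0.0, 1j * kz * f])
    S = -alpha * (np.conj(dpsi_t) * grad + dpsi_t * np.conj(grad)).real
    w = alpha * (abs(dpsi_t) ** 2 / c**2 + (np.conj(grad) @ grad).real)
    assert np.linalg.norm(S / w) <= c + 1e-12   # scalar-field bound |V| <= c

# on the axis (x = 0) the axial energy velocity is strictly subluminal
Vz0 = 2 * omega * kz / (omega**2 + kz**2)
assert 0.0 < Vz0 < c
```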
\section{Relation between group velocity and energy velocity of
propagation-invariant electromagnetic fields}
For a field to be propagation-invariant, it must depend on the propagation
distance $z$ and time through the difference $z-vt$, where $v$ is the group
velocity. Consider, for example, the scalar wave known under the name "fundamental
X-wave", first derived in \cite{Lu-X,Zio1993}:%
\end{subequations}
\begin{equation}
\psi_{x}\left( \rho,z,t\right) =\frac{1}{\sqrt{\rho^{2}+\left[
a+i\tilde{\gamma}\left( z-vt\right) \right] ^{2}}}\label{Xpsi}%
\end{equation}
in polar coordinates. Here, $v$ is a superluminal speed of propagation of the
whole pulse and $\tilde{\gamma}\equiv1/\sqrt{(v/c)^{2}-1}$ is the superluminal
version of the Lorentz factor, $c$ being the speed of light in vacuum. The
positive free parameter $a$ determines the width of the unipolar
Lorentzian-like temporal profile of the pulse on the $z$ axis. The
double-conical spatial profile of the field looks like the letter "X" in a
meridional plane $(\rho,z)$. In order to get a dc-free optical wave containing
a number of cycles, one has to take temporal derivatives of correspondingly
high order from Eq.~(\ref{Xpsi}). If such field is expanded into monochromatic
plane waves or Bessel beams, $v$ turns out to be also the phase velocity
(along the $z$ axis) of all monochromatic constituents of the field, and is
given as $v=c/\cos\theta$, where $\theta$ is the common inclination angle of
all the constituents with respect to the axis $z$. The angle $\theta$ is
called the Axicon angle in the case of a Bessel-beam expansion. As $\cos
\theta\leq1$, such a field can propagate only with a superluminal group
velocity. In this Section we will derive a universal relation between the
superluminal group velocity and subluminal energy velocity of
propagation-invariant electromagnetic fields.
\subsection{The case of 2D fields (light sheets)}
We start with fields that do not depend on one lateral, say $y,$ coordinate.
Although such 2D fields are simpler, they possess the same properties as 3D ones.
Consider a TEM electromagnetic (generally pulsed) 2D wave propagating along
the positive $z$ direction in vacuum,%
\begin{equation}
\mathbf{E}(x,z,t)=U(z-ct)\mathbf{e}_{x},~\mathbf{B}(x,z,t)=c^{-1}%
U(z-ct)\mathbf{e}_{y},\label{EjaB}%
\end{equation}
where SI units are assumed, with $\varepsilon_{0}\mu_{0}=1/c^{2},$ $U$ is an
arbitrary real localized function of \textit{one} argument, and $\mathbf{e}%
_{x}$ and $\mathbf{e}_{y}$ are unit vectors of a right-handed rectangular
coordinate system. The energy flux density and the energy density of the field
are given by%
\begin{align}
\mathbf{S} & =c^{2}\varepsilon_{0}\ \mathbf{E\times B}=c\varepsilon_{0}%
U^{2}(z-ct)\mathbf{e}_{z}\ ,\label{Poynting}\\
w & =\frac{1}{2}\varepsilon_{0}\mathbf{E}^{2}+\frac{1}{2}\varepsilon_{0}%
c^{2}\mathbf{B}^{2}=\varepsilon_{0}U^{2}(z-ct)\ .\label{Energia}%
\end{align}
Hence, the energy velocity is $\mathbf{V}=\mathbf{S}/w=c\mathbf{e}_{z}$, which
is a well-known result.
Let us take now a symmetrical pair of plane waves---the propagating direction
of the first one lies on the $(x,z)$ plane and is inclined by angle $+\theta$
with respect to the $z$-axis, and the second one by angle $-\theta$ on the
same plane. In this case, the coordinate $z$ in Eq.~(\ref{EjaB}) is replaced
by $z\cos\theta+/-x\sin\theta$ for a member of the pair, respectively. The
components of vectors $\mathbf{E}$ and $\mathbf{B}$ for both waves transform
also according to the rules of rotation around the axis $y$. For the
polarizations given in Eq.~(\ref{EjaB}), the magnetic field remains polarized
along the $y$ axis. However, the electric field has both $x$ and $z$
components. Thus, we are dealing with a TM electromagnetic field. The
resulting expressions for $\mathbf{S}$ and $w$ are rather cumbersome;
therefore, we give them here only for the case $x=0$:%
\begin{align}
\mathbf{S} & =4c\varepsilon_{0}\cos\theta\ U^{2}(z\cos\theta-ct)\mathbf{e}%
_{z}\ ,\label{Steljel}\\
w & =\varepsilon_{0}\left[ \cos2\theta+3\right] U^{2}(z\cos\theta
-ct)\ .\label{wteljel}%
\end{align}
At any spatiotemporal point with a fixed value of $x$, the
quantities $\mathbf{S}$ and $w$ and, hence, the vector field of energy velocity
depend on \textit{the propagation distance and time} solely through the
difference $z\cos\theta-ct$ in the argument of the function $U$. Thus the
vector fields of energy flux and energy velocity and the scalar field of energy density all move without any change in the $z$-direction with velocity
$v=c/\cos\theta$.
From Eqs.~(\ref{Steljel}-\ref{wteljel}), with the help of some trigonometry we
get for the quantity of our primary interest---the energy velocity on the
propagation axis $z$ (shortly: the axial velocity)---the following expression:%
\begin{gather}
\mathbf{V}=\mathbf{S}/w=V_{z}\ \mathbf{e}_{z}\ ,\qquad V_{z}\equiv
V=cR(\theta),\nonumber\\
R(\theta)\equiv\frac{2\cos\theta}{\cos^{2}\theta+1}\ .\label{R()}%
\end{gather}
We see that the energy velocity does not depend on the function $U$, has only
the axial component on the propagation axis, and takes \textit{only subluminal
values} in the interval from $c$ to $0$ depending on the angle $\theta$
as depicted in Fig.~1.
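The on-axis derivation of Eq.~(\ref{R()}) can be reproduced numerically directly from the fields; in the sketch below (ours, in units $c=\varepsilon_{0}=1$; the amplitude $U$ is arbitrary and cancels) the choice $\theta=25^{\circ}$ reproduces the value $0.99518$ quoted in the caption of Fig.~2:

```python
import numpy as np

c, eps0 = 1.0, 1.0                  # units with c = eps0 = 1
theta = np.deg2rad(25.0)
U = 0.7                             # common amplitude at the chosen axial point

# on-axis fields of the two crossed TM plane waves (x = 0)
E = np.array([2 * np.cos(theta) * U, 0.0, 0.0])
B = np.array([0.0, 2 * U / c, 0.0])

S = eps0 * c**2 * np.cross(E, B)                # energy flux density
w = 0.5 * eps0 * (E @ E + c**2 * (B @ B))       # energy density
Vz = (S / w)[2]                                 # axial energy velocity

R = 2 * np.cos(theta) / (np.cos(theta) ** 2 + 1)
assert np.isclose(Vz, c * R)                    # Eq. (R())
assert abs(R - 0.99518) < 1e-5                  # value quoted in Fig. 2 caption
```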
\begin{figure}
\centering
\includegraphics[width=8.5cm]{Fig1.eps}%
\caption{The subluminality factor in Eq.~(\ref{R()})
representing the energy flow velocity (in units of $c$) along the propagation
axis $z$ in dependence on the inclination angle $\theta$ of the plane waves.}
\end{figure}
The case $\theta=90^{\circ}$ corresponds to a standing
wave where, as is well known, energy does not flow. More precisely: in this
case the plane $(y,z)$ is a node plane outside of which the energy flows back
and forth along the $x$ axis in accordance with the time function $U^{2}.$
(For harmonic time dependence, the behavior of the instantaneous energy flow
velocity of standing waves has been thoroughly studied, e.g., in
\cite{Kaiser2011}). We will see in the following that the obtained ratio
$R(\theta)$ of the energy axial velocity to the universal constant $c$ is not
peculiar to the given simple model 2D field but holds generally for
propagation-invariant fields. Moreover, if we introduce the normalized
velocity $\beta\equiv v/c$, we can rewrite Eq.~(\ref{R()}) in the following two
forms:%
\begin{equation}
R(\beta)\equiv\frac{2\beta^{-1}}{\beta^{-2}+1}=\frac{2\beta}{\beta^{2}%
+1}~.\label{R(b)}%
\end{equation}
The last equality is remarkable and we shall comment on it in the discussion
provided in Sec. VI.
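The two forms in Eq.~(\ref{R(b)}) imply, in particular, that $R(\beta)=R(1/\beta)$, and that $R(\beta)$ coincides with the relativistic composition of the normalized velocity $1/\beta$ with itself; we record this here as a numerical observation of our own, since the interpretation itself is deferred to Sec.~VI:

```python
import numpy as np

def R(beta):
    # subluminality factor of Eq. (R(b)), beta = v/c
    return 2 * beta / (beta**2 + 1)

def compose(b1, b2):
    # relativistic composition of two normalized velocities
    return (b1 + b2) / (1 + b1 * b2)

beta = 1.6                                   # a superluminal normalized group velocity
assert np.isclose(R(beta), R(1 / beta))      # invariance under beta -> 1/beta
assert np.isclose(R(beta), compose(1 / beta, 1 / beta))  # Einstein sum of c/v with itself
assert 0 < R(beta) < 1                       # always subluminal for beta != 1
```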
In order to get an idea about the energy velocity vectors outside of the axial
region, the velocity field is depicted in Fig.~2 for the case of the pulse
wavefunction $U$ comprised of a single positive half-period of a cosine.
Outside the crossing region the energy flows perpendicularly to the pulse
front with velocity $c$ as expected. In the central region, the pulses sum up
resulting in almost a 4-fold (if the angle $\theta$ is small) increase of both
the energy flux density and the energy density, while the ratio of these two
quantities---the energy velocity---is smaller than $c$. Since the horizontal
axis of Fig.~2 represents the propagation variable $\zeta\equiv z\cos
\theta-ct$, the plots can be interpreted either as "snapshots in flight" made
at a fixed value of time $t$, or as plots on the $(-t,x)$-plane made at a fixed
value of the coordinate $z$. From the latter interpretation it follows
immediately that for all spatial points, except for those on the $z$ axis, the
$x$-component of the Poynting vector as well as $V_{x}$ reverse their sign
as time increases. In contrast, $S_{z}$ and $V_{z}$ remain non-negative
for all values of $x$, $z$, and $t$.
\begin{figure}
\centering
\includegraphics[width=9cm]{Fig2.eps}%
\caption{The field of energy flow velocities formed by
the crossing of two unipolar half-cycle pulses, depicted by arrows on a grid
of $21\times21$ points. The length of arrows represents the magnitude of the
vectors, which equals $c$ on the branches of the X-profile and is smaller
than $c$ on the $z$ axis according to Eq.~(\ref{R()}). The $x$ axis is
vertical. The horizontal axis represents the propagation variable $z\cos
\theta-ct$. For $\theta=25^{\circ}$, the energy velocity on the axis $z$ is
$V_{z}\equiv V=0.99518\,c$ in accordance with Eq.~(\ref{R()}). The vector field
plot is superimposed by a semitransparent greyscale surface (grid $21\times
21$) plot of the square root of the energy density distribution. The velocity
vectors are shown only for regions where the energy density exceeds 0.1\% of
its maximum value.}
\end{figure}
In Fig.~3, the velocity field is depicted for the case where the pulse
wavefunction $U$ comprises a cosine in the interval from $-3\pi/2$ to $3\pi
/2$. We see an embryonic pattern of interference between crossing harmonic
plane waves. While the velocity equals to $c$ and is directed perpendicularly
to the X-branches outside of the interference region, it is much smaller and
points in various directions in regions of destructive interference. Other
features are the same as in Fig. 2.
\begin{figure}
\centering
\includegraphics[width=9cm]{Fig3.eps}%
\caption{The field of energy flow velocities and the
square root of the energy density distribution formed by the crossing of two
bipolar 1.5-cycle pulses. The grid of the plots consists of $41\times41$
points. For other characteristics see caption of Fig.~2.}
\end{figure}
Despite the rather coarse grid, Fig. 3 indicates some subtleties in the
behavior of the energy velocity at locations of minima of the energy density
in the case when the function $U$ contains an oscillating factor. A detailed
numerical and analytical study with $U$ taken as a sine or cosine function
reveals the following: (i) on a line $\zeta=\zeta_{0}$ corresponding to a
zero of the sine or cosine, the Poynting vector vanishes identically for all
values of $x$ in the region of interference of the two waves, while the energy
density is nonzero, except for the value $x=0$; (ii) the local energy velocity
is zero in such planes, except at the points on the $z$ axis where
Eq.~(\ref{R()}) holds as it does generally on the $z$ axis; (iii) at the
points $(\zeta=\zeta_{0},x=0)$ the velocity makes a jump between zero and the
value given by Eq.~(\ref{R()}); (iv) which of the two values the velocity
takes depends on the order of taking the double limit $\zeta\rightarrow
\zeta_{0},x\rightarrow0$. Although such a discontinuity is unphysical, it is
not a serious problem, since the energy density vanishes at the discontinuity
points and the velocity is therefore undefined in a physical sense anyway
(mathematically there is the $0/0$-uncertainty).
\subsection{ Generalization to 3D cylindrical fields}
Although propagation-invariant light sheets have become a subject of intensive
study recently, as mentioned in the Introduction, their 3D counterparts, in
particular cylindrically symmetric ones, have been of main interest. One of
the reasons is that the energy density in the central spot of a monochromatic
$J_{0}$-Bessel beam, as well as in the apex of the double-conical profile of a
X-type pulse, exceeds considerably (more than four times) the energy density
outside the center, which, moreover, decreases in inverse proportion to the
distance from the propagation axis.
Do the relations Eq.~(\ref{R()}) or Eq.~(\ref{R(b)}) also hold for 3D
propagation-invariant fields? The latter can be considered as summing up the
pairs of plane waves considered in the previous subsection, with the axis
$x$ rotated through all values of the azimuthal angle $\phi\in\left[
0\ldots\pi\right] $ around the axis $z$. Since the quantities $S$ and $w$ are
not linear with respect to fields $\mathbf{E}$ and $\mathbf{B}$, it is far
from being obvious that the expressions Eq.~(\ref{Steljel})-(\ref{wteljel})
hold also for such a cylindrically symmetric superposition of the fields. On
the $z$ axis the EM fields of a pair of plane waves considered above are given
by
\begin{subequations}
\begin{align*}
\mathbf{E}_{p}(x,z,t)|_{x=0} & =2\cos\theta\,U(z\cos\theta-ct)\mathbf{e}%
_{x}\ ,\\
\mathbf{B}_{p}(x,z,t)|_{x=0} & =2c^{-1}U(z\cos\theta-ct)\mathbf{e}_{y}\ .
\end{align*}
Let us now take another such pair for which the axis $x$ is rotated by an
angle $\phi$ around the axis $z$. If we denote the corresponding coordinate
transformation $3\times3$ matrix by $T(\phi)$, we can express the
$1/2$-weighted sum of the fields of the two pairs as $\frac{1}{2}\left[
1\mathbf{+}T(\phi)\right] \mathbf{E}_{p}$ , $\frac{1}{2}\left[
1\mathbf{+}T(\phi)\right] \mathbf{B}_{p}$ and calculate the flux and energy
density. The resulting expressions turn out to be the same as
Eqs.~(\ref{Steljel}) and (\ref{wteljel}) but both multiplied by a factor
$(\cos\phi+1)$ which cancels out from the expression of $V$. As in the case of
obtaining Eq.~(\ref{R()}), the square of the function $U$ also cancels out
from the ratio of the energy flux and density. Thus, Eq.~(\ref{R()}), with the
subluminality factor $R(\theta)$, remains valid for the resultant field of two
or more pairs of plane waves irrespective of the azimuthal angles between
their directions. Likewise, for a rotationally symmetric superposition of the
pairs, one has to integrate $T(\phi)\mathbf{E}_{p}$ and $T(\phi)\mathbf{B}%
_{p}$ over $\phi$ in the interval $\left[ 0\ldots\pi\right] $ and calculate
the flux and energy density in the resultant fields. The results coincide with
Eqs.~(\ref{Steljel}) and (\ref{wteljel}) multiplied by $4$. Hence, the factor
$R(\theta)$ given by Eq.~(\ref{R()}) expresses also in the case of
rotationally symmetric propagation-invariant 3D waves the dependence of the
energy flow velocity along the propagation axis on the Axicon angle $\theta.$
\section{Energy flow velocity of superluminal propagation-invariant
electromagnetic and scalar fields}
\subsection{Fields with fixed value of Axicon angle}
It is interesting first to verify whether the formula in Eq.~(\ref{R()}) or
Eq.~(\ref{R(b)}) holds in the case of the best known superluminal
fields---Bessel beams, Bessel-X and X-waves, where not only the intensity or
energy density but also the field itself is propagation-invariant. All these
comprise cylindrically symmetric superpositions of plane waves directed at a
fixed angle $\theta$ with respect to axis $z$ and differ only by the temporal
wave profile $U$. The function $U$ contains, respectively, infinitely many
cycles (Bessel beam), a few cycles (Bessel-X), or a single unipolar
Lorentzian-like pulse (X-wave), and cancels out in the ratio of energy flux and
density. Therefore, the energy velocity on the propagation axis (i.e., where
$\rho\equiv\sqrt{x^{2}+y^{2}}=0$) of all these waves should be given by
Eq.~(\ref{R()}) or Eq.~(\ref{R(b)}).
Expressions for the time-averaged Poynting vector and energy density for
electromagnetic Bessel beams of the zeroth order can be found in
\cite{Ari1993}. Examination of the complicated Eq.~(44) (for $\mathbf{S}$) and
Eq.~(43) (for $w$) derived there for a plane-polarized $\mathbf{E}$, bearing
in mind that the ratio $\beta/k$ is equal to our $\cos\theta$, shows that on
the axis the ratio $\mathbf{S}/w$ indeed turns out to be same as our
Eq.~(\ref{R()}). Examination of Eq.~(51) (for $w$) and Eq.~(52) (for
$\mathbf{S}$) derived there for the case of a circularly polarized Bessel beam
results in the same conclusion. In the latter case of polarization, the authors
of Ref.~\cite{Ari1993} have found that at the radial distances from the $z$
axis which correspond to the zeros of the Bessel function, the $z$-component
of the Poynting vector assumes slightly negative values. Since it means also
negative values of $V_{z}$, for the sake of comparison we carried out
calculations of the time-averaged Poynting vector, energy density and velocity
of TM Bessel beams obtained with the Hertz vector, whose $z$-component is
given by a scalar Bessel beam field of arbitrary order $m$. Our results are
the following: (i) the $z$-component of the velocity does not assume negative
values at any radial distance $\rho$ from the $z$-axis; (ii) as $\rho$
increases the velocity oscillates (in accordance with the behavior of the
Bessel function) while the maximum values are given by Eq.~(\ref{R()}); (iii)
for $m>0$ the first maximum is at $\rho=0$, i.e., the formula Eq.~(\ref{R()})
holds on the $z$-axis, while for $m=0$ the velocity is zero on the $z$-axis.
The Poynting vector and energy density for electromagnetic fields derived from
a Hertz potential given by a scalar ultrabroadband X-shaped wave, first
derived in \cite{Lu-X,Zio1993}, were calculated in \cite{Recami1998}, see
Eq.~(6) therein. Again, the angular dependence of the ratio of the two
quantities coincides with our Eq.~(\ref{R()}). However, since the EM field
vectors are obtained from Hertz potentials through spatial and temporal
derivatives, the central maximum of the X-shaped Hertz potential turns into
zero values for some field components, as well as the Poynting vector, on the
$z$ axis. In contrast, the first-order scalar ultrabroadband X-wave, which is
azimuthally asymmetric (has a factor $\exp i\varphi$), is zero in the center.
In this case the axial component of the Poynting vector does not vanish in the
center of the pulse as one can see from Eq.~(14) of Ref.~\cite{Salem2011}
where the Poynting vector of such X-wave has been calculated for a general
(TM+TE) polarization.
In order to clarify when our formula in Eq.~(\ref{R()}) or Eq.~(\ref{R(b)})
holds and when not, we calculated the energy velocity for different EM fields
derived from X-shaped Hertz potentials. Without loss of generality, we
restricted ourselves to the TM field case. The results are the following.
\end{subequations}
\begin{enumerate}
\item In the case of the azimuthally symmetric scalar potential $\psi
_{x}\left( \rho,z,t\right) $ given in Eq.~(\ref{Xpsi}), the local energy
flow velocity turns out to be zero at the center of the pulse, but in the
central cross-sectional plane it increases with the distance from the $z$ axis
and approaches its maximum on a ring of radius $\rho=\sqrt{2}a$, where the
formula in Eq.~(\ref{R()}) holds. Such a behavior is similar to that of the
case of the Bessel beam of order $m=0$ described above.
\item In the case of the azimuthally asymmetric scalar potential given by
$\psi_{x}^{as}\left( \rho,z,\varphi,t\right) =\psi_{x}\left( \rho
,z,t\right) ^{3}\rho\exp(i\varphi)$, the formula in Eq.~(\ref{R()}) holds at
the center of the pulse, as well as on a ring of radius $\rho=2a$, where its
second maximum is located.
\item In the case of the azimuthally asymmetric scalar potential used to form
the Hertz vector in \cite{Salem2011}, the formula in Eq.~(\ref{R()}) holds at
the center of the pulse. The same holds for another azimuthally asymmetric
scalar potential taken for the Hertz vector in \cite{Recami1998}.
\end{enumerate}
\subsection{Fields with frequency-dependent Axicon angle}
Not all non-diffracting fields can be represented as angular superpositions of
plane wave pulses as shown earlier, or---in the case of spectral
representation of cylindrical fields---as superpositions of monochromatic
Bessel beams whose axial wavenumbers are proportional to the frequency. More
general non-diffracting fields, where not the field itself but only its
intensity (energy flux and/or density) is propagation-invariant, can be
represented as angular superpositions of \textit{tilted} pulses or---in
spectral terms---as superpositions of Bessel beams where the axial wavenumber
depends linearly but not simply proportionally on the frequency (see, e.g.,
\cite{LWI,meieLWIs,MeieLorTr}). This means that the angle $\theta$ is not
fixed any more and becomes a function of frequency within the spectral band of
the pulse.
Since (i) directed optical EM fields are in good approximation describable as
scalar fields and (ii) there are many studies of scalar non-diffracting fields
in the literature but few studies of their EM counterparts, in what follows we
deal primarily with the energy velocity of scalar fields. However, first we
must answer the question to what extent is the scalar treatment justified in
calculations of the energy velocity. It is easy to check that in the case of a
pair of scalar plane waves (light sheets), the same expressions for the axial
velocity given in Eqs. (\ref{R()}) and (\ref{R(b)}) follow from Eq.
(\ref{scalarSw}).
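This can be illustrated numerically. The sketch below (ours, not part of the original computation) assumes the standard scalar energy flux $-c^{2}\psi_{t}\nabla\psi$ and density $\tfrac{1}{2}(\psi_{t}^{2}+c^{2}|\nabla\psi|^{2})$ as the ingredients of Eq.~(\ref{scalarSw}); it time-averages both on the axis of a pair of crossing plane waves and recovers Eq.~(\ref{R(b)}) with $v=c/\cos\theta$.

```python
import numpy as np

# Two scalar plane waves crossing the z axis at angle theta; on the axis
# (x = 0) their sum is psi = 2 cos(k z cos(theta) - w t).
c, k, theta = 1.0, 1.0, 0.6          # arbitrary units and crossing angle
w = c * k
t = np.linspace(0.0, 2 * np.pi / w, 4000, endpoint=False)  # one period
z = 0.0

phase = k * z * np.cos(theta) - w * t
psi_t = 2 * w * np.sin(phase)                    # d(psi)/dt on the axis
psi_z = -2 * k * np.cos(theta) * np.sin(phase)   # d(psi)/dz on the axis
psi_x = np.zeros_like(t)                         # d(psi)/dx vanishes at x = 0

# Assumed scalar energy flux and density for the wave equation:
Sz = -c**2 * psi_t * psi_z
wd = 0.5 * (psi_t**2 + c**2 * (psi_z**2 + psi_x**2))

Vz = Sz.mean() / wd.mean()             # time-averaged axial energy velocity
v = c / np.cos(theta)                  # superluminal group velocity
V_formula = 2 * v / (1 + (v / c)**2)   # Eq. (R(b))

assert abs(Vz - V_formula) < 1e-9
assert Vz < c < v                      # energy flows subluminally
```

The ratio is independent of the common factor $\sin^{2}$ in flux and density, so the agreement is exact rather than approximate.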
Since a single-frequency Bessel beam is the constituent of all cylindrical
non-diffracting pulses, we carried out numerical comparisons between the energy
velocity fields of a scalar Bessel beam and vectorial (EM) Bessel beams
\cite{OttMag,PIERS2013}. The main result is that the velocity field
calculated using the scalar approximation practically coincides with those
calculated for EM Bessel beams of different polarizations, except for small
off-axis regions around minima of energy density, where the discrepancy is
about a few percent of $c$ if $\theta<20^{\circ}$ and much less at paraxial
values of $\theta$.
The Bessel beams considered so far have infinite aperture and therefore cannot
be generated in reality. To check that Eq.~(\ref{R()}) works also for realistic
Bessel beams, we applied Eq.~(\ref{scalarSw}) to a so-called Bessel-Gauss beam
(see \cite{PorrBessGauss} and Refs. therein), which is a cylindrically
symmetric superposition of Gaussian beams propagating at the Axicon angle
$\theta$ with respect to the $z$ axis and having a superluminal group velocity
in the waist region. Our result is that Eq.~(\ref{R()}) holds if
$\theta<15^{\circ}$ which is understandable since the Bessel-Gauss beam is a
solution of the paraxial wave equation.
The first example of a superluminal scalar field with frequency-dependent
Axicon angle is the so-called Focused X Wave (FXW) \cite{revPIER},
possibilities of optical generation of which have been considered in
Ref.~\cite{meieFXW}. The expression for the FXW reads
\begin{align*}
\psi_{fxw}\left( \rho,z,t\right) & =\psi_{x}\left( \rho,z,t\right)
\times\\
& \exp\left[ \frac{-|k|}{\psi_{x}\left( \rho,z,t\right) }\right]
\exp\left[ ik\tilde{\gamma}\frac{v}{c}\left( z-\frac{c^{2}}{v}t\right)
\right] ~,
\end{align*}
where $\psi_{x}$ was defined in Eq.~(\ref{Xpsi}) and $k$ is a new
parameter---the smallest wavenumber in the spectrum of the pulse. Obviously
$\psi_{fxw}\left( \rho,z,t\right) \rightarrow\psi_{x}\left( \rho
,z,t\right) $ if $k\rightarrow0$. Due to the second exponential factor,
$\psi_{fxw}$ is not propagation-invariant, while its modulus squared is. The
energy flow velocity $V_{z}$ along the $z$-direction evaluated with
Eq.~(\ref{scalarSw}), which is maximal at $\rho=0$, does not obey
Eq.~(\ref{R(b)}): in addition to $v$, it depends on $k$
and $a$, but, interestingly, not on $vt$ arising from the second (subluminal)
speed. For relatively small values of the parameter $a$ and superluminal
values of $v$ close to $c$ Eq.\ (\ref{R(b)}) holds for values of $k$ up to 5
(in reciprocal units of $z,\rho,$ and $a$).
Essentially, the same behavior applies for the energy velocity of the
vector-valued (TM) FXW. In this case however, $V_{z}$ is equal to zero at
$\rho=0$; its maximum value occurs for a value of $\rho>0$. Also, in addition
to $k$, the solution is sensitive to $vt$.
All fields considered so far have infinite total energy, i.e., in reality they
can exist only within a limited aperture. It is interesting to consider the
so-called Modified Focused X wave (MFXW) which has finite energy and is given
by \cite{revPIER}%
\begin{align*}
\psi_{mfxw}\left( \rho,z,t\right) & =\psi_{x}\left( \rho,z,t\right)
\times\\
& \left[ \psi_{x}^{-1}\left( \rho,z,t\right) +a^{\prime}-i\tilde{\gamma
}\frac{v}{c}\left( z-\frac{c^{2}}{v}t\right) \right] ^{-1}~,
\end{align*}
where $a^{\prime}$ is the second width parameter. Again, $V_{z}$, which is
maximal at $\rho=0$, does not obey Eq.~(\ref{R(b)}) but depends on
$v$, $a$, $a^{\prime}$, and $vt$. For relatively small values of the parameter
$a$, relatively large values of $a^{\prime}$ and superluminal values of $v$
close to $c$, the formula Eq.\ (\ref{R(b)}) is obeyed very closely because
$vt$ appears as a multiplicative factor of $(v^{2}-c^{2})$.
Essentially, the same behavior applies for the energy velocity of a
vector-valued MFXW. Again, due to the specific construction of the TM field
from the axially oriented Hertz vector potential, in this case $V_{z}$ is
equal to zero at $\rho=0$; its maximum value occurs for a value of $\rho>0$.
\section{Energy flow velocity of subluminal propagation-invariant
electromagnetic and scalar fields}
It is intriguing to ask: if the energy flows always subluminally in a
superluminal pulse, is the flow of the energy of a subluminal pulse faster or
slower than its subluminal group velocity?
The best known and simplest subluminal propagation-invariant scalar field is
the infinite-energy MacKinnon wave packet \cite{MacKinn,revPIER,MeieLorTr}. It is derived from a spherically symmetric standing wave
given by $\exp(-ikct)(\sin kr)/kr$, where $k$ is the wavenumber and
$r=\sqrt{\rho^{2}+z^{2}}$, by applying a Lorentz transformation with
subluminal $\beta=v/c$ to the $z$-coordinate and time. For an observer in
another reference frame the field is no longer a monochromatic standing
wave, but a pulse whose intensity distribution propagates with velocity $v$
without any change. We found that the formula Eq.\ (\ref{R(b)}) holds for the
axial energy velocity of the MacKinnon wave packet.
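To illustrate, the construction just described can be implemented directly. In the following sketch the boost sign conventions are our assumption; the check confirms that the intensity pattern of the boosted standing wave travels rigidly at $v=\beta c$.

```python
import numpy as np

def mackinnon(rho, z, t, k=2.0, beta=0.6, c=1.0):
    """Infinite-energy MacKinnon packet: the spherical standing wave
    exp(-ikct) sin(kr)/(kr) seen from a frame moving with subluminal
    speed v = beta*c along z (boost sign conventions assumed)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    v = beta * c
    zb = gamma * (z - v * t)            # boosted longitudinal coordinate
    tb = gamma * (t - beta * z / c)     # boosted time
    r = np.sqrt(zb**2 + rho**2)
    # np.sinc(x) = sin(pi x)/(pi x), hence sin(kr)/(kr) = np.sinc(kr/pi)
    return np.exp(-1j * k * c * tb) * np.sinc(k * r / np.pi)

rho = np.linspace(0.0, 5.0, 200)
z = np.linspace(-5.0, 5.0, 200)
R, Z = np.meshgrid(rho, z)

beta, c, dt = 0.6, 1.0, 1.7
I0 = np.abs(mackinnon(R, Z, 0.0, beta=beta)) ** 2
# After a time dt the whole intensity pattern has moved by v*dt along z:
I1 = np.abs(mackinnon(R, Z + beta * c * dt, dt, beta=beta)) ** 2
assert np.allclose(I0, I1)
```

The phase factor is not invariant, only the modulus is, which is exactly the "intensity-invariant" behavior described above.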
We studied also a finite-energy version of the MacKinnon wave packet given by
\cite{revPIER}%
\[
\psi_{fM}\left( \rho,z,t\right) =\frac{\arctan\left[ \frac{\sqrt{\gamma
^{2}\left( z-vt\right) ^{2}+\rho^{2}}}{a+i\beta\gamma\left( z-ct/\beta
\right) }\right] }{\sqrt{\gamma^{2}\left( z-vt\right) ^{2}+\rho^{2}}}~,
\]
where $\gamma\equiv1/\sqrt{1-(v/c)^{2}}=1/\sqrt{1-\beta^{2}}$ is the common
(subluminal) Lorentz factor and $a$ is a parameter. Again, the formula in Eq.\ (\ref{R(b)}) holds.
As to the vector-valued version of the MacKinnon wave packet, due to the
specific construction of the TM field from the axially oriented Hertz vector
potential with the MacKinnon scalar wavepacket as its $z$-component, $V_{z}$ is
equal to zero at $\rho=0$, whereas Eq.~(\ref{R(b)}) holds at its maxima, which
occur at values of $\rho$ corresponding to the maxima of the sinc function.
The last almost-undistorted spatiotemporally localized field we studied was
the finite energy azimuthally symmetric subluminal splash mode. It arises from
the elementary solution $(\rho^{2}+z^{2}-c^{2}t^{2})^{-1}$ of the scalar wave
equation by first resorting to the complexification $t\rightarrow t+ia$ and
subsequently undertaking a subluminal Lorentz transformation involving the
coordinates $z$ and $t$. A scalar-valued computation yields a maximum axial
energy velocity at the pulse center in conformity with the formula in
Eq.\ (\ref{R(b)}). A vector-valued computation shows that the axial energy
velocity is zero on-axis ($\rho=0$) at the pulse center. The maximum value of
the axial velocity depends on $vt$ and the parameter $a$. For small values of
$vt$ or subluminal speed $v$ very close to $c$ and $a=1$ the maximum of the
axial energy flow velocity occurs on a ring of radius $\rho=1$.
Finally, we studied a propagation-\textit{variant} scalar field, which is
important for applications and is called a ``pulsed Bessel beam''
\cite{DifRefAxiconBB,PorrGaussjaPBB}. Since it is formed by a diffractive
axicon (a circular grating), it is like a disk cut off from a Bessel beam---
its radial profile is propagation invariant and given by the zeroth-order
Bessel function, while its longitudinal profile spreads out in the course of
propagation as a chirped pulse. Such behavior has been experimentally studied
in detail with fs-range temporal and $\mu$m-range spatial resolution
\cite{meieDifAxicon0,meieDifAxicon1}.
In the given case, the field to be inserted into Eq.~(\ref{scalarSw})
factorizes as $\psi=U(z,t)J_{0}(k_{0}\rho\sin\theta)$, where $k_{0}$ is the
carrier wavenumber (mean wavenumber in the spectrum of the pulse). For the
field of a pulsed Bessel beam with Gaussian temporal profile, the function
$U(z,t)$ was calculated using Eqs. (44), (45) and (54) from
Ref.~\cite{PorrGaussjaPBB}. As one can see in Fig.~4, the pulse broadens and
gets chirped in the course of propagation. The reason is that waves of
different wavenumbers diffract at different angles on the grating, which means
that the Axicon angle is not constant over the spectrum of the pulse.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{Fig4.eps}%
\caption{Time dependence of the real part of
the function $U(z,t)$ of a pulsed Bessel beam with a mean value of the Axicon
angle $\theta=40^{\circ}$ (determined for the carrier wavenumber) and Gaussian
temporal profile of width $1$ (in units of carrier wavelength divided by $c$):
curve 1 -- at the origin $z=0$ where the pulse is formed and has the shortest
duration, curve 2 -- at $z=20$ (in units of the carrier wavelength). The
curves with indicated mean Axicon angles show temporal behavior of the energy
velocity (normalized to $c)$ at the maximum of the pulse envelope.}
\end{figure}
Despite the chirp, locally the pulse looks like a Bessel beam characterized by
an instantaneous wavenumber and corresponding Axicon angle. Therefore, based
upon the results obtained above, Eq.~(\ref{R()}) should hold at time instances
and propagation distances when the pulse contains more than just a few cycles.
This is exactly what we see in Fig.~4: at small mean angles $\theta$---not to
speak of paraxial angles---the energy velocity is almost at a constant
level determined by Eq.~(\ref{R()}). The slight drop in its value occurs only
at the origin if the pulse is shorter than 2 wavelengths there and
$\theta>20^{\circ}$. The drop at such extreme parameters may be caused also by
the circumstance that in the calculation of the function $U(z,t)$ in Ref.
\cite{PorrGaussjaPBB} the group velocity dispersion is only approximately taken into account.
To conclude, the answer to the question raised in the beginning of the Section
is: in those regions of a subluminal pulse where Eq.~(\ref{R(b)}) holds, the
energy flows faster than the pulse envelope, i.e., $0<v<V_{z}=V<c$.
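This ordering can be verified directly from the formula $V=2v/\left[1+(v/c)^{2}\right]$ alone; the following minimal check is illustrative only.

```python
import numpy as np

c = 1.0

def V(v):
    """Axial energy flow velocity, Eq. (R(b))."""
    return 2 * v / (1 + (v / c) ** 2)

# Subluminal envelope: the energy overtakes the envelope, 0 < v < V < c.
for v in np.linspace(0.05, 0.95, 19):
    assert 0 < v < V(v) < c
# Superluminal envelope: the energy lags behind, V < c < v.
for v in np.linspace(1.05, 10.0, 19):
    assert V(v) < c < v
# Luminal limit: V = c exactly when v = c.
assert np.isclose(V(c), c)
```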
\section{Discussion}
We have seen that---with a few exceptions---the formula given in
Eq.~(\ref{R(b)}) holds for the energy axial velocity of both superluminal and
subluminal non-diffracting wavefields. Moreover, Eq.~(\ref{R(b)}) indicates
that the velocity does not change if one makes a transition
\textit{superluminal}$\longleftrightarrow$\textit{subluminal}, i.e., replaces
the normalized group velocity $\beta=v/c$ by its reciprocal value $c/v$. To
understand the reason for this interesting feature, let us take a look at the
expression of the energy velocity in terms of the impedance
\cite{ImpedZ}, \textit{viz}.%
\begin{align}
\mathbf{V}& =2c\frac{\mathbf{E}\times c\mathbf{B}}{\mathbf{E}^{2}+c^{2}%
\mathbf{B}^{2}}=c\frac{2Z}{1+Z^{2}}\left( \mathbf{e}_{E}\times \mathbf{e}%
_{B}\right) ~, \label{VZ} \\
Z& \equiv \frac{E}{cB}=\frac{E}{H}Z_{0}^{-1}~, \label{Zdef}
\end{align}%
where $E$, $B$ are the magnitudes of electric and magnetic field vectors and
$\mathbf{e}_{E}$, $\mathbf{e}_{B}$ are corresponding unit vectors; $Z$ is the
impedance normalized to the vacuum impedance $Z_{0}=\sqrt{\mu_{0}%
/\varepsilon_{0}}\approx377\Omega$. Thus, the group velocity is determined by
the impedance $\beta=Z$ and the energy velocity Eq.~(\ref{VZ}) does not change
if we insert $Z^{-1}$ instead of $Z$. This is due to the invariance of the
energy flux and energy density with respect to the duality transformation
$\mathbf{E}\rightarrow c\mathbf{B}$, $\mathbf{B}\rightarrow-\mathbf{E}/c$.
From the duality it also follows that our results obtained for TM pulses apply
also to TE pulses. The equality $Z=1$ and, consequently, $V=c$ holds only for TEM and
\textit{null }electromagnetic waves \cite{Lekner2002,YannisLWII,Kaiser2011}.
Note also that Eq.~(\ref{R(b)}) resembles the relativistic composition law for
velocities. One can speculate that the energy velocity equals the pulse
propagation velocity seen from a reference frame countermoving with exactly
the pulse group velocity.
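Both remarks reduce to elementary identities, which the following illustrative checks (ours) confirm numerically: the invariance of Eq.~(\ref{VZ}) under $Z\to Z^{-1}$, the duality invariance of the flux and density, and the reading of Eq.~(\ref{R(b)}) as the relativistic sum of $v$ with itself.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 1.0

# Energy velocity in impedance form, Eq. (VZ): V = c * 2Z/(1 + Z^2).
Vimp = lambda Z: c * 2 * Z / (1 + Z ** 2)
Z = rng.uniform(0.1, 10.0, 100)
assert np.allclose(Vimp(Z), Vimp(1.0 / Z))   # invariant under Z -> 1/Z
assert np.isclose(Vimp(1.0), c)              # Z = 1 (TEM/null fields): V = c

# Duality E -> cB, B -> -E/c leaves flux and density unchanged:
E = rng.normal(size=3)
B = rng.normal(size=3)
Ed, Bd = c * B, -E / c
assert np.allclose(np.cross(E, c * B), np.cross(Ed, c * Bd))
assert np.isclose(E @ E + c**2 * (B @ B), Ed @ Ed + c**2 * (Bd @ Bd))

# Eq. (R(b)) is the relativistic composition of v with itself:
rel_add = lambda u, w: (u + w) / (1 + u * w / c**2)
v = rng.uniform(0.05, 0.95, 100) * c
assert np.allclose(2 * v / (1 + (v / c) ** 2), rel_add(v, v))
```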
It is well known that the Poynting vector is not defined uniquely by the
Poynting theorem. Could it be that, consequently, the energy flow velocity we
have dealt with throughout this paper is also not uniquely defined, so that
its inequality with the group velocity would not be of interest? The
answer is that any other definition of the Poynting vector might violate the
velocity upper limit $c$ \cite{Lekner2002}. Here a quotation from
\cite{Jackson}, section 8.5 is appropriate: "However the theory of special
relativity, in which energy and momentum are defined locally and invariantly
via the stress--energy tensor, shows that the ... expression for the Poynting
vector is unique."
According to conventional thinking, energy should be tightly coupled to an EM
field pulse. How, then, is one to interpret the results that energy flows
slower than a superluminal pulse itself and faster than a subluminal
pulse? The inequality between the energy flow velocity and the velocity of
field motion is not unique to non-diffracting localized waves but takes place
in other non-null fields, e.g., in standing waves and dipole radiation
\cite{Kaiser2011}. In Ref.~\cite{Kaiser2011} this inequality is explained in
terms of reactive (rest) energy that the field leaves behind. In that paper one can
also find a hint towards comprehension of this inequality: "A rough way to
understand why $V<c$ is by analogy with water waves. The mass carried by the
waves has a definite speed at each point and time, but this need not coincide
with the propagation speed of the wavefronts." Existence of a rest energy
portion in non-diffracting localized waves is obvious because all of them
contain a standing-wave component. It is interesting to note that the standing
wave component inherent to these waves can be given an interpretation as if
the mass of a photon of these wavefields is not equal to zero
\cite{Minupeatykk}.
Finally, let us note that the signal velocity of superluminal nondiffracting
pulses is not superluminal: an instantaneous notch made, e.g., into a
Bessel-X pulse transforms into the subluminal propagation-variant pulse
considered above, as proved by a thought experiment in Ref.~\cite{Minupeatykk}.
\section{Conclusions}
We have shown that the velocity $V$, with which energy in non-diffracting
pulsed waves flows in the direction of propagation, is not equal to the
propagation velocity $v$ (group velocity) of the pulse itself. Instead, on the
symmetry axis and/or at the locations of the energy density maxima, these two
quantities obey a simple but physically content-rich relation $V=2v/\left[
1+(v/c)^{2}\right] $. This has been proven first for vector-valued
superluminal 2D light sheets and their 3D cylindrical generalizations in Sec. III. Subsequently, it has been shown to be valid for the scalar-valued $m$-order
Bessel beam, the fundamental zero-order X wave, the first-order azimuthally
asymmetric X wave, as well as the corresponding vector-valued TM
electromagnetic fields based on a Hertzian potential approach. The behavior of
the axial velocity for the vector-valued fields differs depending on whether
the scalar potential used as a seed in forming the vector Hertz potential is
azimuthally symmetric or asymmetric. A detailed discussion is provided in Sec. IV.
Purely propagation-invariant fields characterized by the group speed $v$ along
the direction of propagation are physically unrealizable. Physically
realizable spatiotemporally localized waves contain two speeds: the group
speed $v$ (superluminal or subluminal) and a second speed $c^{2}/v$
(subluminal or superluminal). Between the purely propagation-invariant and the
physically realizable \textquotedblleft almost undistorted\textquotedblright%
\ localized waves, there exists a family of only intensity-invariant localized
waves containing the aforementioned two speeds. Examples for such pulses are
the Focused X Wave (FXW) and MacKinnon's wave packet. Both are characterized by
infinite energy content. The energy flow velocity for these two scalar fields,
the scalar finite-energy pulses based on them, as well as the corresponding
vector-valued TM electromagnetic fields determined by a Hertzian potential
approach obey the universal formula given in Eq. (\ref{R(b)}) very closely
provided the group velocity is very close to the speed of light and certain
free parameters are tweaked appropriately. Specific details are given in Secs. III-V.
It is very interesting to note that the universal formula for the axial energy
flow velocity, in the form appearing in Eq. (\ref{R(b)}), is intimately
related to the wave impedance reformulation in Eq. (\ref{VZ}), which, in turn,
is reminiscent of a relativistic expression for the addition of velocities.
Finally, a note on superluminality is appropriate. The presence of a
superluminal speed in a finite-energy solution does not contradict
relativity. If the parameters are chosen appropriately, the pulse moves
superluminally with almost no distortion up to a certain distance $z_{d}$,
which is determined by the geometry (aperture size and an axicon angle) and
then it slows down to a luminal speed $c$, with significant accompanying
distortion. Although the peak of the pulse does move superluminally up to
$z_{d}$, its appearances at two distinct ranges $z_{1},z_{2}%
\in\lbrack0,z_{d})$ are not causally related. Thus, no information can be transferred superluminally
from $z_{1}$ to $z_{2}$. The physical significance of such wavepackets is due
to their spatiotemporal localization.
The authors thank Ari Friberg and John Lekner for their comments concerning
the papers \cite{Ari1993} and \cite{Lekner2002}, which stimulated undertaking
the present study.
\section{Introduction}
\subsection{States in geometric quantization}
Let $(M,\omega,j)$ be a closed, connected K\"ahler manifold, equipped with a prequantum line bundle $(L,\nabla)$. According to the geometric quantization procedure, due to Kostant and Souriau \cite{Kos,Sou}, we define, for any integer $k \geq 1$, the quantum state space as the Hilbert space $\mathcal{H}\xspace_k = H^{0}(M,L^{\otimes k})$ of holomorphic sections of $L^{\otimes k} \to M$\footnote{In the rest of the paper, we will write $L^k$ instead of $L^{\otimes k}$ to simplify notation.}; the semiclassical limit is $k \to +\infty$. The quantum observables are Berezin-Toeplitz operators, introduced by Berezin \cite{Ber}, whose microlocal analysis has been initiated by Boutet de Monvel and Guillemin \cite{BouGui}, and which have been studied by many authors during the last years (see for instance \cite{ChaBTO,MaMa,Schli} and references therein).
In this paper, we investigate the problem of quantizing a given submanifold $\Sigma$ of $M$, that is, constructing a state concentrating on $\Sigma$ in the semiclassical limit (in a sense that we will make precise later). This kind of construction has been achieved for a so-called \emph{Bohr-Sommerfeld} Lagrangian submanifold $\Sigma$, that is, a Lagrangian submanifold with trivial holonomy with respect to the connection induced by $\nabla$ on $L^k$ (\cite{BorPauUri}, see also \cite{ChaBS}). The state obtained in this case is a pure state whose microsupport is contained in $\Sigma$. Such states are useful, for instance, to construct quasimodes for Berezin-Toeplitz operators.
Here we adopt a different point of view. We assume that $\Sigma$ is any submanifold, equipped with a smooth density $\sigma$ such that $\int_{\Sigma} \sigma = 1$. Then we construct a mixed state--or rather its density operator--$\rho_k(\Sigma,\sigma)$ associated with this data, by integrating the coherent state projectors along $\Sigma$ with respect to $\sigma$, see Definition \ref{dfn:state}. We prove that this state cannot be pure, and that it concentrates on $\Sigma$ in the semiclassical limit. Similar states, the so-called P-representable or classical quantum states, have been considered in the physics literature \cite{GirBrBr} and have been used recently to explore the links between symplectic displaceability and quantum dislocation \cite{ChaPol}; they are obtained by integrating the coherent state projectors along $M$ against a Borel probability measure.
\subsection{Main results}
Given two submanifolds with probability densities $(\Sigma_1, \sigma_1)$ and $(\Sigma_2, \sigma_2)$, we would like to compare the two associated states $\rho_{k,1} = \rho_k(\Sigma_1,\sigma_1)$ and $\rho_{k,2} = \rho_k(\Sigma_2,\sigma_2)$. For the purpose of comparing two mixed states, one often uses the fidelity function \cite{Uhl,Jos}, defined as
\[ F\left(\rho_{k,1},\rho_{k,2}\right) = \Tr\left( \sqrt{\sqrt{\rho_{k,1}} \ \rho_{k,2} \ \sqrt{\rho_{k,1}} } \right)^2 \in [0,1].\]
Because it involves the square roots of the density operators, it is quite complicated to estimate in general. Nevertheless, Miszczak \emph{et al.} \cite{Mis} recently obtained lower and upper bounds for the fidelity function; they introduced two quantities $E(\rho_{k,1},\rho_{k,2})$ and $G(\rho_{k,1},\rho_{k,2})$, respectively called \emph{sub-fidelity} and \emph{super-fidelity}, easier to study, such that $E(\rho_{k,1},\rho_{k,2}) \leq F(\rho_{k,1},\rho_{k,2}) \leq G(\rho_{k,1},\rho_{k,2})$.
We will estimate these quantities, in the semiclassical limit, in the particular case where $\Sigma_1 = \Gamma_1$ and $\Sigma_2 = \Gamma_2$ are two Lagrangian submanifolds intersecting transversally at a finite number of points $m_1, \ldots, m_s$. Our main results can be summarized as follows.
\begin{thmintro}
There exists some constants $C_i((\Gamma_1,\sigma_1),(\Gamma_2,\sigma_2)) > 0$, $i=1,2$, depending on the geometry near the intersection points, such that the sub-fidelity satisfies
\[ E(\rho_{k,1},\rho_{k,2}) = \left(\frac{2\pi}{k}\right)^{n} C_1((\Gamma_1,\sigma_1),(\Gamma_2,\sigma_2)) + \bigO{k^{-(n+1)}} \]
and the super-fidelity satisfies
\[ G(\rho_{k,1},\rho_{k,2}) = 1 - \left(\frac{2\pi}{k}\right)^{\frac{n}{2}} C_2((\Gamma_1,\sigma_1),(\Gamma_2,\sigma_2)) + \bigO{k^{-\min\left(n,\frac{n}{2} + 1\right)}}. \]
\end{thmintro}
For instance, the constant in the sub-fidelity involves the principal angles between the two tangent spaces at the intersection points. We refer the reader to Theorems \ref{thm:subfid} and \ref{thm:superfid} for precise statements and explicit expressions for the constants involved in these estimates. Unfortunately, this result does not allow us to obtain an equivalent for the fidelity function when $k$ goes to infinity, as \emph{a priori} this fidelity could display any behaviour between these two ranges $\bigO{k^{-n}}$ and $\bigO{1}$. However, we will study a family of examples on the two-sphere, for which we prove that the fidelity is a $\bigO{k^{-1 + \varepsilon}}$ for every sufficiently small $\varepsilon > 0$ (Theorem \ref{thm:fid_sphere}); this result is non trivial and requires care and a fine analysis of the interactions near intersection points. We also perform some numerical computations regarding these examples.
\begin{rmk} We believe that our results extend without effort to the case where the quantum state space is the space of holomorphic sections of $L^k \otimes K \to M$ where $K$ is an auxiliary Hermitian holomorphic line bundle, for instance in the case where $K = \delta$ is a half-form bundle (which corresponds to the so-called metaplectic correction). These results should also extend to the case of the quantization of a closed symplectic but not necessarily K\"ahler manifold, using for instance the recipe introduced in \cite{Cha_symp}; the main ingredient, namely the description of the asymptotics of the Bergman kernel, is still available, only more complicated to describe. We do not treat either of these two cases here for the sake of clarity.
\end{rmk}
\subsection{Structure of the article}
The first half of this manuscript is devoted to the definition of the state associated with a submanifold with density and the computation of the sub-fidelity and super-fidelity of such states in the Lagrangian case, in all generality. In Section \ref{sect:prelim}, we discuss the setting and introduce the notions and notation that will be needed to achieve this goal. In Section \ref{sect:def_states}, we explain how to obtain a state from a submanifold with density, and we study the first properties of such states. In particular, we compute their purity to show that they are always mixed for $k$ large enough. We prove our estimates for the sub-fidelity and the super-fidelity of two states associated with Lagrangian submanifolds intersecting transversally at a finite number of points in Section \ref{sect:sub_super}.
The second half of the paper, corresponding to Sections \ref{sect:examples} and \ref{sect:numerics}, focuses on a family of examples on $\mathbb{S}\xspace^2$. A remarkable fact is that one can obtain much better estimates for the fidelity function itself, employing nontrivial methods that, however, cannot be used as they are to study the general case, although some parts of the analysis may be useful to attack the latter.
\section{Preliminaries and notation}
\label{sect:prelim}
\subsection{The setting: K{\"a}hler quantization}
Throughout the paper, $(M,\omega,j)$ will be a closed, connected K\"ahler manifold, of real dimension $\dim M = 2n$, such that the cohomology class of $(2\pi)^{-1} \omega$ is integral, and $(L,\nabla)$ will be a prequantum line bundle over $M$, that is a Hermitian holomorphic line bundle $L \to M$ whose Chern connection $\nabla$ has curvature $-i\omega$. Let $\mu_M = |\omega^n|/n!$ be the Liouville measure on $M$. For $k \geq 1$ integer, let $h_k$ be the Hermitian form induced on $L^k$, and consider the Hilbert space of holomorphic sections of $L^k \to M$:
\[ \mathcal{H}\xspace_k = H^0(M,L^k), \qquad \scal{\psi}{\phi}_k = \int_M h_k(\psi,\phi) \mu_M. \]
Since $M$ is compact, $\mathcal{H}\xspace_k$ is finite-dimensional; more precisely, it is standard that
\begin{equation} \dim \mathcal{H}\xspace_k = \left( \frac{k}{2\pi} \right)^n \mathrm{vol}(M) + \bigO{k^{n-1}}. \label{eq:asymp_dim} \end{equation}
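As a consistency check of Eq.~(\ref{eq:asymp_dim}), consider $M=\mathbb{CP}^{n}$ with $\mathrm{vol}(\mathbb{CP}^{n})=(2\pi)^{n}/n!$ and $\dim H^{0}(\mathbb{CP}^{n},\mathcal{O}(k))=\binom{n+k}{n}$; these standard normalizations are assumptions of the following illustrative sketch, which is not part of the text.

```python
from math import comb, factorial, pi

# Leading term of the dimension asymptotics: (k/2pi)^n vol(M).
# For M = CP^n with vol = (2pi)^n / n!, this equals k^n / n!, while the
# exact dimension is the binomial coefficient C(n + k, n).
k = 10 ** 4
for n in range(1, 5):
    dim = comb(n + k, n)
    leading = (k / (2 * pi)) ** n * (2 * pi) ** n / factorial(n)
    # The relative error is O(1/k), consistent with the O(k^{n-1}) remainder:
    assert abs(dim / leading - 1) < n * (n + 1) / k
```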
Let $L^2(M,L^k)$ be the completion of $\classe{\infty}{(M,L^k)}$ with respect to $\scal{\cdot}{\cdot}_k$, and let $\Pi_k: L^2(M,L^k) \to \mathcal{H}\xspace_k$ be the orthogonal projector from $L^2(M,L^k)$ to the space of holomorphic sections of $L^k \to M$. The Berezin-Toeplitz operator associated with $f \in \classe{\infty}(M)$ is
\begin{equation} T_k(f) = \Pi_k f: \mathcal{H}\xspace_k \to \mathcal{H}\xspace_k, \label{eq:def_BTO} \end{equation}
where $f$ stands for the operator of multiplication by $f$. More generally, a Berezin-Toeplitz operator is any sequence of operators $(T_k:\mathcal{H}\xspace_k \to \mathcal{H}\xspace_k)_{k \geq 1}$ of the form $T_k = \Pi_k f(\cdot,k) + R_k$ where $f(\cdot,k)$ is a sequence of smooth functions with an asymptotic expansion of the form $ f(\cdot,k) = \sum_{\ell \geq 0} k^{-\ell} f_{\ell}$ for the $\classe{\infty}{}$ topology, and $\| R_k \| = \bigO{k^{-N}}$ for every $N \geq 1$.
Let $p_1,p_2: M \times M \to M$ be the natural projections on the left and right factor. If $U \to M$, $V \to M$ are two line bundles over $M$, we define the line bundle (sometimes called external tensor product) $U \boxtimes V = p_1^*U \otimes p_2^*V \to M \times M$. The Schwartz kernel of an operator $S_k: \mathcal{H}\xspace_k \to \mathcal{H}\xspace_k$ is the unique section $S_k(\cdot,\cdot)$ of $L^k \boxtimes \bar{L}^k \to M \times M$ such that for every $\varphi \in \mathcal{H}\xspace_k$ and every $x \in M$,
\[ (S_k \varphi)(x) = \int_M S_k(x,y) \cdot \varphi(y) \ d\mu_M(y), \]
where the dot corresponds to contraction with respect to $h_k$: for $\bar{u} \in \bar{L}^k_y$ and $v \in L^k_y$, $\bar{u} \cdot v = (h_k)_y(v,u)$. In particular, the Schwartz kernel of $\Pi_k$ is called the \emph{Bergman kernel}.
In this context, Charles \cite{ChaBTO} has obtained, relying on \cite{BouGui}, a very precise description of the Bergman kernel in the semiclassical limit. For our purpose, we will only need part of it, namely that
\begin{equation} \Pi_k(x,y) = \left( \frac{k}{2 \pi} \right)^n S^k(x,y) \left( a_0(x,y) + \bigO{k^{-1}} \right) \label{eq:asympt_projector} \end{equation}
where $S \in \classe{\infty}{(M^2,L \boxtimes \overline{L})}$ satisfies $S(x,x) = 1$ and $|S(x,y)| <1$ whenever $x \neq y$ (among other properties, see \cite[Proposition 1]{ChaBTO}), $a_0 \in \classe{\infty}{(M^2, \mathbb{R}\xspace)}$ is such that $a_0(x,x) = 1$ and the remainder $\bigO{k^{-1}}$ is uniform in $(x,y) \in M^2$. Here $| \cdot |$ denotes the norm induced by $h$ on $L \boxtimes \overline{L}$, and for $x \in M$, we use $h_k$ to identify $L_x \otimes \bar{L}_x$ with $\mathbb{C}\xspace$.
\subsection{Generalities about fidelity}
As already explained, one useful tool to compare two states is the fidelity function, see for instance \cite{Uhl, Jos} or \cite[Chapter 9]{ChN}. Recall that the trace norm of a trace class operator $A$ acting on a Hilbert space $\mathcal{H}\xspace$ is $ \|A\|_{\mathrm{\Tr}} = \Tr(\sqrt{A^*A})$. Given two states $\rho, \eta$ on $\mathcal{H}\xspace$, that is positive semidefinite Hermitian operators on $\mathcal{H}\xspace$ of trace one, their fidelity is defined as\footnote{Note that some authors call fidelity the square root of this function, however we prefer to keep the square in order to simplify some of the computations.}
\[ F(\rho,\eta) = \left\| \sqrt{\rho} \sqrt{\eta} \right\|^2_{\Tr} = \Tr\left( \sqrt{\sqrt{\rho} \ \eta \ \sqrt{\rho} } \right)^2. \]
Even though it is not obvious from this formula, fidelity is symmetric in its arguments. It measures how close the two states are in the following sense: $F(\rho,\eta)$ is a number between $0$ and $1$, and $F(\rho,\eta) = 1$ if and only if $\rho = \eta$, while $F(\rho,\eta) = 0$ if and only if the ranges $\rho(\mathcal{H}\xspace)$ and $\eta(\mathcal{H}\xspace)$ are orthogonal. In the particular case where both states are pure, i.e. $\rho$ (respectively $\eta$) is the orthogonal projection on the line spanned by $\phi \in \mathcal{H}\xspace$ (respectively $\psi \in \mathcal{H}\xspace$), where $\phi$ and $\psi$ are unit vectors, one readily checks that $F(\rho,\eta) = |\scal{\phi}{\psi}|^2$. The fidelity function enjoys further interesting properties, such as its invariance under conjugation of both arguments by a common unitary operator, its multiplicativity with respect to tensor products, and its joint concavity. It is, however, very hard to compute in general because it involves square roots of operators.
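These properties are easy to test numerically. The sketch below (Python with NumPy; an illustration of ours, not part of the mathematical development) computes $F$ through a spectral square root and checks symmetry, normalization, and the pure-state formula on random density matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(d):
    # Random density operator: positive semidefinite, trace one
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(rho):
    # Operator square root via the spectral theorem
    w, V = np.linalg.eigh(rho)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def fidelity(rho, eta):
    # F(rho, eta) = || sqrt(rho) sqrt(eta) ||_tr^2; the trace norm is the sum of singular values
    s = np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(eta), compute_uv=False)
    return float(s.sum() ** 2)

d = 6
rho, eta = random_state(d), random_state(d)

# Pure states: F reduces to the squared overlap |<phi, psi>|^2
phi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
phi, psi = phi / np.linalg.norm(phi), psi / np.linalg.norm(psi)
F_pure = fidelity(np.outer(phi, phi.conj()), np.outer(psi, psi.conj()))
overlap = abs(np.vdot(phi, psi)) ** 2
```
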
Consequently, some efforts have been made to give bounds for the fidelity function that would be more easily computable. The following remarkable bounds on the fidelity of states $\rho,\eta$ acting on a finite-dimensional Hilbert space have been obtained in \cite{Mis}: $ E(\rho,\eta) \leq F(\rho,\eta) \leq G(\rho,\eta)$ where the function $E$, called sub-fidelity, is defined as
\begin{equation} E(\rho,\eta) = \Tr(\rho \eta) + \sqrt{2} \sqrt{\Tr(\rho \eta)^2 - \Tr((\rho \eta)^2)} \label{eq:sub_fid}\end{equation}
and the function $G$, called super-fidelity, is defined as
\begin{equation} G(\rho,\eta) = \Tr(\rho \eta) + \sqrt{\left( 1 - \Tr(\rho^2) \right)\left( 1 - \Tr(\eta^2) \right)}. \label{eq:super_fid}\end{equation}
It turns out that these two quantities retain some of the interesting properties of fidelity, and can be measured using physical experiments; furthermore, they both coincide with fidelity when both states are pure. From a mathematical point of view, these quantities are much more tractable than the fidelity function because they involve only traces of products and powers of operators.
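As a numerical sanity check of the bounds $E \leq F \leq G$ of \cite{Mis}, one can compare the three quantities on random states; the sketch below (our own illustration) implements Equations (\ref{eq:sub_fid}) and (\ref{eq:super_fid}) directly:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def psd_sqrt(rho):
    w, V = np.linalg.eigh(rho)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def fidelity(rho, eta):
    s = np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(eta), compute_uv=False)
    return float(s.sum() ** 2)

def sub_fidelity(rho, eta):
    # E(rho, eta) = tr(rho eta) + sqrt(2) sqrt(tr(rho eta)^2 - tr((rho eta)^2))
    t = np.trace(rho @ eta).real
    t2 = np.trace(rho @ eta @ rho @ eta).real
    return t + np.sqrt(2.0) * np.sqrt(max(t ** 2 - t2, 0.0))

def super_fidelity(rho, eta):
    # G(rho, eta) = tr(rho eta) + sqrt((1 - tr rho^2)(1 - tr eta^2))
    t = np.trace(rho @ eta).real
    return t + np.sqrt(max((1 - np.trace(rho @ rho).real)
                           * (1 - np.trace(eta @ eta).real), 0.0))

d = 5
triples = []
for _ in range(20):
    rho, eta = random_state(d), random_state(d)
    triples.append((sub_fidelity(rho, eta), fidelity(rho, eta), super_fidelity(rho, eta)))
```

On every random draw the chain $E \leq F \leq G$ should hold up to floating-point error.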
\subsection{Principal angles}
The notion of principal angles (see for example \cite[Section $12.4.3$]{Gol}) will play a crucial part in our estimates. Let $V$ be a real vector space, endowed with an inner product $(\cdot | \cdot)$, and let $E, F$ be two subspaces of $V$ such that $\alpha = \dim E \geq \beta = \dim F \geq 1$.
\begin{dfn}
The \emph{principal angles} $0 \leq \theta_1 \leq \ldots \leq \theta_{\beta} \leq \frac{\pi}{2}$ between $E$ and $F$ are defined recursively by the formula $ \cos(\theta_{\ell}) = (u_{\ell}|v_{\ell}) := \max_{(u,v) \in W_{\ell}} (u|v)$, where
\[ W_{\ell} = \left\{ (u,v) \in E \times F \ | \ \|u\| = 1 = \|v\|, \quad \forall m \in \llbracket 1,\ell-1 \rrbracket, \ (u|u_m) = 0 = (v|v_m) \right\}. \]
\end{dfn}
Note that $\theta_1 = 0$ if and only if $E \cap F \neq \{0\}$. We will need the following two properties of principal angles; the first one appears in the computation of $\Tr(\rho_{k,1} \rho_{k,2})$ (Theorem \ref{thm:trace}).
\begin{lm}
\label{lm:angle}
Let $V$ be a real vector space of dimension $2n$, $n \geq 1$, endowed with an inner product $(\cdot | \cdot)$, and let $E,F$ be two subspaces of $V$ of dimension $n$. Let $(e_{p})_{1 \leq p \leq n}$ (respectively $(f_{q})_{1 \leq q \leq n}$) be any orthonormal basis of $E$ (respectively $F$). We introduce the $n \times n$ matrix $G$ with entries $G_{p,q} = (e_{p}|f_q)$; then the quantity $\det(I_n - G^{\top}G)$ does not depend on the choice of $(e_{p})_{1 \leq p \leq n}$ and $(f_{q})_{1 \leq q \leq n}$. Moreover, it satisfies
\[ \det\left(I_n - G^{\top}G\right) = \prod_{\ell = 1}^n \sin^2(\theta_{\ell}) \]
where $0 \leq \theta_1 \leq \ldots \leq \theta_n \leq \frac{\pi}{2}$ are the principal angles between $E$ and $F$.
\end{lm}
\begin{proof}
Let $(\tilde{e}_{p})_{1 \leq p \leq n}$ be another orthonormal basis of $E$, and let $O = (O_{p,q})_{1 \leq p,q \leq n}$ be the matrix such that
\[ \forall p \in \llbracket 1, n \rrbracket, \qquad \tilde{e}_{p} = \sum_{r=1}^n O_{p,r} e_r. \]
Let $\tilde{G}$ be the matrix with entries $\tilde{G}_{p,q} = (\tilde{e}_{p}|f_q)$; then $\tilde{G} = O G$ and
\[ \det(I_n - \tilde{G}^{\top} \tilde{G}) = \det(I_n - G^{\top} O^{\top} O G) = \det(I_n - G^{\top}G) \]
since $O$ is orthogonal. Now, observe that
\[ (G^{\top}G)_{p,q} = \sum_{r=1}^n (e_r|f_p)(e_r|f_q) = \left( \sum_{r=1}^n (e_r|f_p)e_r \Big|f_q \right) = (P f_p|f_q) \]
where $P$ is the orthogonal projector from $V$ to $E$. Consequently, $(I_n - G^{\top} G)_{p,q} = (Q f_p|f_q)$ with $Q$ the orthogonal projector from $V$ to $E^{\perp}$. Thus, if $(e_{n+1}, \ldots, e_{2n})$ is any orthonormal basis of $E^{\perp}$, then
\[ (I_n - G^{\top} G)_{p,q} = \sum_{r=1}^n (f_p|e_{n+r})(e_{n+r}|f_q); \]
this means that $I_n - G^{\top} G = A^{\top} A$ where $A$ is the matrix with entries given by $A_{p,q} = (e_{n+p}|f_q)$. But it is known that the eigenvalues of $A^{\top} A$ are $\cos^2(\zeta_1), \ldots, \cos^2(\zeta_n)$, where $\zeta_1 \leq \ldots \leq \zeta_{n}$ are the principal angles between $E^{\perp}$ and $F$, see for instance \cite{Sho}. Consequently, $\det( A^{\top} A) = \prod_{\ell = 1}^n \cos^2(\zeta_{\ell})$ and the result follows from the fact that for every $\ell \in \llbracket 1,n \rrbracket$, $\zeta_{\ell} = \tfrac{\pi}{2} - \theta_{n + 1 - \ell}$ \cite[Property $2.1$]{Zhu}.
\end{proof}
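Lemma \ref{lm:angle} can be verified numerically, using the standard fact (see \cite[Section $12.4.3$]{Gol}) that the singular values of $G$ are the cosines of the principal angles. The following sketch (our own illustration) draws random $n$-dimensional subspaces of $\mathbb{R}\xspace^{2n}$ and also checks the basis independence:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def orthonormal_frame(dim, k):
    # Columns form an orthonormal basis of a random k-dimensional subspace of R^dim
    Q, _ = np.linalg.qr(rng.normal(size=(dim, k)))
    return Q

E = orthonormal_frame(2 * n, n)
F = orthonormal_frame(2 * n, n)
G = E.T @ F                       # G_{p,q} = (e_p | f_q)

# Singular values of G are cos(theta_1), ..., cos(theta_n)
cos_theta = np.clip(np.linalg.svd(G, compute_uv=False), 0.0, 1.0)
lhs = np.linalg.det(np.eye(n) - G.T @ G)
rhs = float(np.prod(1.0 - cos_theta ** 2))   # = prod_l sin^2(theta_l)

# Basis independence: rotate the chosen basis of E by a random orthogonal matrix O
O, _ = np.linalg.qr(rng.normal(size=(n, n)))
G_rot = (E @ O).T @ F
lhs_rot = np.linalg.det(np.eye(n) - G_rot.T @ G_rot)
```
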
The second property will be used in the proof of Theorem \ref{thm:trace_square}.
\begin{lm}
\label{lm:det_symp}
Let $(V,\omega)$ be a real symplectic vector space of dimension $2n$, $n \geq 1$, endowed with a complex structure $J:V \to V$ which is compatible with $\omega$, and let $(\cdot|\cdot) = \omega(\cdot,J\cdot)$ be the associated inner product. Let $E,F$ be two complementary Lagrangian subspaces of $V$, and let $(e_{p})_{1 \leq p \leq n}$ (respectively $(f_{p})_{1 \leq p \leq n}$) be any orthonormal basis of $E$ (respectively $F$). Let $\Xi$ be the $n \times n$ matrix with entries $\Xi_{p,q} = \omega(e_p,f_q)$; then the quantity $ \det\left(I_n + \Xi^{\top} \Xi\right)$ does not depend on the choice of $(e_{p})_{1 \leq p \leq n}$ and $(f_{p})_{1 \leq p \leq n}$. Moreover, it satisfies
\[ \det\left(I_n + \Xi^{\top} \Xi\right) = \prod_{\ell = 1}^n \left(1 + \sin^2(\theta_{\ell})\right), \]
where $0 \leq \theta_1 \leq \ldots \leq \theta_n \leq \frac{\pi}{2}$ are the principal angles between $E$ and $F$.
\end{lm}
\begin{proof}
The first statement is similar to the first statement of Lemma \ref{lm:angle}. Now, let $G$ be the $n \times n$ matrix defined in the latter, that is the matrix with entries $G_{p,q} = (e_p|f_q)$. A straightforward computation shows that $(\Xi^{\top} \Xi)_{p,q} = (Qf_p|f_q)$, where $Q$ is the orthogonal projection from $V$ to $J(E)$. Since $E$ is Lagrangian, $J(E) = E^{\perp}$, so the previous result means that $\Xi^{\top} \Xi = I_n - G^{\top} G$, which implies (see the proof of Lemma \ref{lm:angle}) that the eigenvalues of the matrix $\Xi^{\top} \Xi$ are $\sin^2(\theta_1), \ldots, \sin^2(\theta_n)$, which yields the result.
\end{proof}
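Lemma \ref{lm:det_symp} can be tested in the model $V = \mathbb{C}\xspace^n \simeq \mathbb{R}\xspace^{2n}$ with $J$ given by multiplication by $i$, $(x|y) = \mathrm{Re}\langle x,y\rangle$ and $\omega(x,y) = \mathrm{Re}\langle ix,y\rangle$ (a convenient choice of ours, which satisfies $(\cdot|\cdot) = \omega(\cdot,J\cdot)$), in which Lagrangian subspaces can be realized as $U\mathbb{R}\xspace^n$ with $U$ unitary. The identity $\Xi^{\top}\Xi = I_n - G^{\top}G$ from the proof is checked as well:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

def random_unitary(n):
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q

# E = U R^n and F = W R^n are Lagrangian for the form omega(x,y) = Re<ix,y>;
# the columns of U (resp. W) give an orthonormal basis of E (resp. F).
U, W = random_unitary(n), random_unitary(n)
M = U.T @ W.conj()            # M_{p,q} = <e_p, f_q> (Hermitian inner product); M is unitary
G = M.real                    # G_{p,q}  = (e_p | f_q)
Xi = -M.imag                  # Xi_{p,q} = omega(e_p, f_q)

sin2 = 1.0 - np.clip(np.linalg.svd(G, compute_uv=False), 0.0, 1.0) ** 2
lhs = np.linalg.det(np.eye(n) + Xi.T @ Xi)
rhs = float(np.prod(1.0 + sin2))
```

Unitarity of $M = G - i\Xi$ encodes both the orthonormality of the bases and the Lagrangian condition, and its real part is exactly the relation $G^{\top}G + \Xi^{\top}\Xi = I_n$ used in the proof.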
\section{The state associated with a submanifold with density}
\label{sect:def_states}
\subsection{Definition}
We will define the state associated with a submanifold with density by means of coherent states; let us recall how those are constructed in the setting of geometric quantization (here we adopt the convention used in \cite[Section 5]{ChaBTO}). Let $P \subset L$ be the set of elements $u \in L$ such that $h(u,u) = 1$, and let $\pi: P \to M$ denote the natural projection. Given $u \in P$, for every $k \geq 1$, there exists a unique vector $\xi_k^u$ in $\mathcal{H}\xspace_k$ such that
\[ \forall \phi \in \mathcal{H}\xspace_k, \qquad \phi(\pi(u)) = \scal{\phi}{\xi_k^u}_{k} u^k. \]
The vector $\xi_k^u \in \mathcal{H}\xspace_k$ is called the \emph{coherent vector} at $u$.
By the properties of coherent states stated in \cite[Section 5]{ChaBTO} and the description of $\Pi_k$ given in Equation (\ref{eq:asympt_projector}), we have that for every $u \in P$,
\begin{equation} \|\xi_k^u\|^2_{k} = \left(\frac{k}{2\pi}\right)^n + \bigO{k^{n-1}} \label{eq:asymp_rawnsley} \end{equation}
when $k$ goes to infinity, and the remainder is uniform in $u \in P$. In particular, there exists $k_0 \geq 1$ such that $\xi_k^u \neq 0$ whenever $k \geq k_0$. For $k \geq k_0$, we set $\xi_{k}^{u,\mathrm{norm}} = \xi_k^u / \|\xi_k^u \|_k$ (and later on we will always implicitly assume that $k \geq k_0$ to simplify notation). This also means that the class of $\xi_k^u$ in the projective space $\mathbb{P}\xspace(\mathcal{H}\xspace_k)$ is well-defined; this class only depends on $\pi(u)$ and is called the \emph{coherent state} at $x = \pi(u)$. Furthermore, the projection
\[ P_k^x: \mathcal{H}\xspace_k \to \mathcal{H}\xspace_k, \qquad \phi \mapsto \scal{\phi}{\xi_k^{u,\mathrm{norm}}}_k \xi_k^{u,\mathrm{norm}} \]
is also only dependent on $x$, and is called the \emph{coherent projector} at $x$.
Now, let $\Sigma \subset M$ be a closed, connected submanifold of dimension $d \geq 1$, equipped with a positive density $\sigma$ (as defined in \cite[Chapter 3.3]{BerGos}) such that $\int_{\Sigma} \sigma = 1$. Then we can obtain a mixed state by superposition of the coherent projectors over the points of $\Sigma$.
\begin{dfn}
\label{dfn:state}
We define the state associated with $(\Sigma,\sigma)$ as
\begin{equation} \rho_k(\Sigma,\sigma) = \int_{\Sigma} P_k^x \ \sigma(x) \label{eq:Lag_state}\end{equation}
where $P_k^x$ is the coherent state projector at $x \in \Sigma$.
\end{dfn}
Clearly, $\rho_k(\Sigma,\sigma)$ is a positive semidefinite Hermitian operator acting on $\mathcal{H}\xspace_k$, and
\[ \Tr(\rho_k(\Sigma,\sigma)) = \int_{\Sigma} \Tr(P_k^x) \sigma(x) = \int_{\Sigma} \sigma = 1. \]
Therefore $\rho_k(\Sigma,\sigma)$ is indeed (the density operator of) a state.
\begin{ex}
We compute an example in a simple (but non-compact) case: $M = \mathbb{R}\xspace^2$ with its standard symplectic form and complex structure. It is well-known that the relevant quantum spaces are the Bargmann spaces \cite{Bar}
\[ \mathcal{H}\xspace_k := \left\{ f \psi^k | \ f: \mathbb{C}\xspace \to \mathbb{C}\xspace \text{ holomorphic}, \quad \int_{\mathbb{C}\xspace} |f(z)|^2 \exp(-k|z|^2) \ |dz \wedge d\bar{z}| < +\infty \right\} \]
where $\psi(z) = \exp\left(-\frac{1}{2}|z|^2\right)$, with orthonormal basis $\phi_{k,\ell}: z \mapsto \sqrt{\frac{k^{\ell+1}}{2\pi \ell!}} \ z^{\ell} \psi^k(z)$ for $\ell \geq 0$. By a straightforward computation,
\[ \scal{P_k^z \phi_{k,\ell}}{\phi_{k,m}}_k = \sqrt{\frac{k^{\ell+m}}{\ell! m!}} z^{\ell} \bar{z}^m \exp(-k|z|^2). \]
We consider $\Sigma = \mathbb{S}\xspace^1 = \{ \exp(it)| \ 0 \leq t \leq 2\pi \} \subset \mathbb{C}\xspace$ with density $\sigma = \frac{dt}{2 \pi}$, and compute the state $\rho_k(\mathbb{S}\xspace^1,\sigma)$ associated with this data. For $\ell,m \geq 0$,
\[ \scal{\rho_k(\mathbb{S}\xspace^1,\sigma)\phi_{k,\ell}}{\phi_{k,m}}_k = \int_0^{2\pi} \scal{P_k^{\exp(it)}\phi_{k,\ell}}{\phi_{k,m}}_k \frac{dt}{2\pi} = \sqrt{\frac{k^{\ell+m}}{\ell! m!}} \exp(-k) \int_0^{2\pi} \exp(i(\ell-m)t) \frac{dt}{2\pi}. \]
Hence for every $\ell \geq 0$,
\[ \rho_k(\mathbb{S}\xspace^1,\sigma) \phi_{k,\ell} = \frac{k^{\ell}\exp(-k)}{\ell!} \phi_{k,\ell}. \]
In other words, this state is prepared according to a Poisson probability distribution of parameter $k$ with respect to the basis $(\phi_{k,\ell})_{\ell \geq 0}$.
\end{ex}
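The computation of this example can be confirmed numerically: discretizing the average over the circle reproduces the Poisson diagonal and kills the off-diagonal matrix elements. This is a sketch of ours, with a truncated basis:

```python
import math
import numpy as np

k, nmax, nt = 20, 60, 1024
t = np.linspace(0.0, 2.0 * np.pi, nt, endpoint=False)
z = np.exp(1j * t)

# <rho phi_{k,l}, phi_{k,m}> = sqrt(k^{l+m}/(l! m!)) e^{-k} x (average of z^l conj(z)^m)
log_c = [0.5 * (l * math.log(k) - math.lgamma(l + 1)) for l in range(nmax)]
rho = np.zeros((nmax, nmax), dtype=complex)
for l in range(nmax):
    for m in range(nmax):
        rho[l, m] = math.exp(log_c[l] + log_c[m] - k) * np.mean(z ** (l - m))

# Expected diagonal: Poisson weights e^{-k} k^l / l!
poisson = np.array([math.exp(l * math.log(k) - k - math.lgamma(l + 1)) for l in range(nmax)])
offdiag = abs(rho - np.diag(np.diag(rho))).max()
```

The equispaced grid makes the angular averages of $z^{\ell-m}$ vanish exactly for $\ell \neq m$, so the discretization error only affects the diagonal at machine-precision level.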
\subsection{Computation of the purity}
In order to see how far $\rho_k(\Sigma,\sigma)$ is from being pure, one can compute its purity $\Tr(\rho_k(\Sigma,\sigma)^2)$, which is equal to one for pure states and strictly smaller than one for mixed states.
\begin{prop}
\label{prop:purity}
Let $\mu_{g,\Sigma}$ be the Riemannian volume on $\Sigma$ corresponding to the Riemannian metric induced by the K\"ahler metric $g$ on $\Sigma$. The purity of $\rho_k(\Sigma,\sigma)$ satisfies
\[ \Tr\left(\rho_k(\Sigma,\sigma)^2\right) = \left( \frac{2 \pi}{k} \right)^{\frac{d}{2}} \left( \int_{\Sigma} f \sigma + \bigO{k^{-1}} \right),\]
where the function $f$ is such that $\sigma = f \mu_{g,\Sigma}$. In particular, for $k$ large enough, this state cannot be pure.
\end{prop}
\begin{proof}
We need to compute
\[ \Tr\left(\rho_{k}(\Sigma,\sigma)^2\right) = \int_{\Sigma} \int_{\Sigma} \Tr\left(P_k^x P_k^y\right) \sigma(x) \sigma(y). \]
In order to do so, let $(\varphi_j)_{1 \leq j \leq d_k}$, where $d_k = \dim(\mathcal{H}\xspace_k)$, be any orthonormal basis of $\mathcal{H}\xspace_k$. Let $x,y \in M$ and let $u,v \in L$ be unit vectors such that $u \in L_x, v \in L_y$. Then
\[ P_k^x P_k^y \varphi_j = \frac{\scal{\varphi_j}{\xi_k^v}_k \scal{\xi_k^v}{\xi_k^u}_k}{\| \xi_k ^v \|^2_k \|\xi_k^u\|^2_k} \xi_k^u \]
for every $j \in \llbracket 1,d_k \rrbracket$. Therefore,
\[ \Tr\left(P_k^x P_k^y\right) = \frac{\scal{\xi_k^v}{\xi_k^u}_k}{\|\xi_k^u\|^2_k \|\xi_k^v\|^2_k} \sum_{j=1}^{d_k} \scal{\varphi_j}{\xi_k^v}_k \scal{\xi_k^u}{\varphi_j}_k = \frac{\scal{\xi_k^v}{\xi_k^u}_k \scal{\xi_k^u}{\xi_k^v}_k}{\|\xi_k^u\|^2_k \|\xi_k^v\|^2_k} = \frac{|\scal{\xi_k^u}{\xi_k^v}_k|^2}{\|\xi_k^u\|^2_k \|\xi_k^v\|^2_k}. \]
We can rewrite this expression, using the properties stated in \cite[Section 5]{ChaBTO}, as
\[ \Tr\left(P_k^x P_k^y\right) = \frac{|\Pi_k(x,y)|^2}{|\Pi_k(x,x)| \ |\Pi_k(y,y)|}. \]
Hence, we finally obtain that
\[ \Tr\left(\rho_{k}(\Sigma,\sigma)^2\right) = \int_{\Sigma} \int_{\Sigma} \frac{|\Pi_k(x,y)|^2}{|\Pi_k(x,x)| \ |\Pi_k(y,y)|} \sigma(x) \sigma(y). \]
Since the section $S$ introduced in Equation (\ref{eq:asympt_projector}) satisfies $|S(x,y)| < 1$ whenever $x \neq y$,
\[ \Tr\left(\rho_{k}(\Sigma,\sigma)^2\right) = \int_{(x,y) \in V} \frac{|\Pi_k(x,y)|^2}{|\Pi_k(x,x)| \ |\Pi_k(y,y)|} \sigma(x) \sigma(y) + \bigO{k^{-\infty}} \]
where $V$ is a neighbourhood of the diagonal of $\Sigma^2$ in $\Sigma^2$. By taking a smaller $V$ if necessary, we may assume that $S$ does not vanish on $V$, and define $\varphi = -2\log |S|$ on the latter. We then deduce from Equation (\ref{eq:asympt_projector}) that $\Tr(\rho_{k}(\Sigma,\sigma)^2) = (1 + \bigO{k^{-1}}) I_k$ where
\[ I_k = \int_{V} \exp(-k \varphi(x,y)) a_0(x,y)^2 (\sigma \otimes \sigma)(x,y) . \]
In order to estimate this integral, we will apply the stationary phase lemma \cite[Theorem $7.7.5$]{Hor}, with the subtlety that the phase function $\varphi$ has a submanifold of critical points. Indeed, by \cite[Proposition 1]{ChaBTO}, its critical locus is given by
\[ \mathcal{C}_{\varphi} = \{ (x,y) \in V | \ d\varphi(x,y) = 0 \} = \mathrm{diag}(\Sigma^2). \]
In this situation, we need to check that the Hessian of $\varphi$ is non-degenerate in the transverse direction at every critical point $(x,x)$, $x \in \Sigma$. But we know from \cite[Proposition 1]{ChaBTO} that this is indeed the case, since at such a point, the kernel of this Hessian is equal to $T_{(x,x)}\mathrm{diag}(\Sigma^2)$ and its restriction to the orthogonal complement of $T_{(x,x)}\mathrm{diag}(\Sigma^2)$ is equal to $2 \tilde{g}_{(x,x)}$, where $\tilde{g}$ is the K\"ahler metric on $M \times M$ induced by the symplectic form $\omega \oplus -\omega$ and complex structure $j \oplus -j$. We choose a finite cover of $V$ by open sets of the form $U \times U$, with $U$ a coordinate chart for $\Sigma$ with local coordinates $x_1, \ldots, x_d$, and use a partition of unity argument to work with
\[ J_k = \int_{U \times U} \exp(-k \varphi(x,y)) a_0(x,y)^2 h(x) h(y) \ dx_1 \ldots dx_d dy_1 \ldots dy_d \]
where $h$ is the function such that $\sigma = h \ dx_1 \ldots dx_d$ on $U$. Observe that if $x$ belongs to $U$, the determinant of the transverse Hessian of $\varphi$ at $(x,x)$ is equal to $\det g_{x,\Sigma} \neq 0$, where $g_{x,\Sigma}$ is the matrix of ${g_x}_{|T_x \Sigma \times T_x \Sigma}$ in the basis corresponding to our local coordinates. Therefore the stationary phase lemma yields
\[ J_k = \left( \frac{2\pi}{k} \right)^{\frac{d}{2}} \int_U \exp(-k\varphi(x,x)) |\det g_{x,\Sigma}|^{-1/2} a_0(x,x)^2 h(x)^2 \ dx_1 \ldots dx_d + \bigO{k^{-(\frac{d}{2}+1)}}.\]
But by definition, $\mu_{g,\Sigma}(x) = |\det g_{x,\Sigma}|^{1/2} \ dx_1 \ldots dx_d$ on $U$, therefore the function $f$ introduced in the statement of the proposition satisfies $f(x) |\det g_{x,\Sigma}|^{1/2} = h(x) $ on $U$. Since moreover $\varphi(x,x) = 0$ and $a_0(x,x) = 1$, this yields the result.
\end{proof}
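Although the Bargmann example above is non-compact, so that Proposition \ref{prop:purity} does not formally apply to it, the predicted $k^{-d/2}$ rate (here $d=1$) can be observed on the circle state: its purity is $\sum_{\ell \geq 0} (e^{-k}k^{\ell}/\ell!)^2 = e^{-2k}I_0(2k) \sim (4\pi k)^{-1/2}$ by Laplace asymptotics of the modified Bessel function. The sketch below (our own check; we do not track the normalization constant, which depends on conventions) verifies this numerically:

```python
import math

def purity_circle(k):
    # Purity of the S^1 state: sum_l (e^{-k} k^l / l!)^2, truncated far in the Poisson tail
    nmax = int(4 * k) + 50
    return sum(math.exp(2.0 * (l * math.log(k) - k - math.lgamma(l + 1)))
               for l in range(nmax))

ks = [50, 100, 200, 400]
purities = [purity_circle(k) for k in ks]
# Expected asymptotics: e^{-2k} I_0(2k) ~ (4 pi k)^{-1/2}, a k^{-d/2} decay with d = 1
ratios = [p * math.sqrt(4.0 * math.pi * k) for p, k in zip(purities, ks)]
```
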
\begin{ex}
It follows from the properties of coherent states stated in \cite[Section 5]{ChaBTO} and Equation (\ref{eq:asymp_rawnsley}) that
\[ \mathrm{Id}_{\mathcal{H}\xspace_k} = T_k(1) = \int_M P_k^x \|\xi_k^u\|^2_{k} \ \mu_M(x) = \left( 1 + \bigO{k^{-1}} \right) \left(\frac{k}{2\pi}\right)^n \int_M P_k^x \mu_M(x), \]
where $\pi(u) = x$. Consequently, if we consider the density $\sigma = (\mathrm{Vol}(M))^{-1} \mu_M$ on $M$, then
\[ \rho_k(M,\sigma) = \left(\frac{2\pi}{k}\right)^n \frac{1}{\mathrm{Vol}(M)} \left( 1 + \bigO{k^{-1}} \right) \mathrm{Id}_{\mathcal{H}\xspace_k}. \]
Thus, using Equation (\ref{eq:asymp_dim}), we finally obtain that
\[ \Tr\left( \rho_k(M,\sigma)^2 \right) = \left(\frac{2\pi}{k}\right)^{2n} \frac{ \dim \mathcal{H}\xspace_k}{\mathrm{Vol}(M)^2} \left( 1 + \bigO{k^{-1}} \right) = \left(\frac{2\pi}{k}\right)^{n} \frac{1}{\mathrm{Vol}(M)}\left( 1 + \bigO{k^{-1}} \right), \]
which is consistent with the result of the above proposition because the function $f$ associated with $\sigma$ is $\mathrm{Vol}(M)^{-1}$ (since the Liouville and Riemannian volume forms coincide).
\end{ex}
\subsection{Microsupport and other properties}
Let us now state a few properties of this state $\rho_k(\Sigma,\sigma)$. Given a state $\eta$ and a quantum observable $T$, the expectation of $T$ with respect to $\eta$ is defined as $\mathbb{E}(\eta,T) = \Tr(T \eta)$. In the case where $\eta$ is the state associated with $(\Sigma,\sigma)$, we can obtain a complete asymptotic expansion of this expectation.
\begin{lm}
Let $T_k$ be a self-adjoint Berezin-Toeplitz operator acting on $\mathcal{H}\xspace_k$, and let $\sum_{\ell \geq 0} \hbar^{\ell} t_{\ell}$ be the covariant symbol of $T_k$ (see \cite[Definition 3]{ChaBTO}). Then $\mathbb{E}(\rho_k(\Sigma,\sigma),T_k)$ has the following asymptotic expansion:
\[ \mathbb{E}(\rho_k(\Sigma,\sigma),T_k) = \sum_{\ell \geq 0} k^{-\ell} \int_{\Sigma} t_{\ell}(x) \ \sigma(x) + \bigO{k^{-\infty}}. \]
In particular, if $T_k = \Pi_k f_0$ for some function $f_0 \in \classe{\infty}{(M,\mathbb{R}\xspace)}$, then
\[ \mathbb{E}(\rho_k(\Sigma,\sigma),T_k) = \int_{\Sigma} f_{0}(x) \ \sigma(x) + \bigO{k^{-1}}.\]
\end{lm}
\begin{proof}
Let $(\varphi_j)_{1 \leq j \leq d_k}$, $d_k = \dim(\mathcal{H}\xspace_k)$, be any orthonormal basis of $\mathcal{H}\xspace_k$. For $x$ in $\Sigma$, let $u \in L_x$ be a unit vector. Then
\[ \mathbb{E}(\rho_k(\Sigma,\sigma),T_k) = \int_{\Sigma} \scal{ \sum_{j=1}^{d_k} \scal{T_k \xi_k^{u,\mathrm{norm}}}{\varphi_j}_k \varphi_j}{\xi_k^{u,\mathrm{norm}}}_k \sigma(x) = \int_{\Sigma} \scal{T_k \xi_k^{u,\mathrm{norm}}}{\xi_k^{u,\mathrm{norm}}}_k \sigma(x). \]
The statement follows from the equalities $\scal{T_k \xi_k^u}{\xi_k^u}_k = T_k(x,x)$ and $\| \xi_k^u \|^2_k = \Pi_k(x,x)$, see \cite[Section 5]{ChaBTO}, and from the definition of the covariant symbol, see \cite[Definition 3]{ChaBTO}.
\end{proof}
This result shows in which sense the state associated with $(\Sigma,\sigma)$ concentrates on $\Sigma$ in the semiclassical limit. Indeed, one can introduce, as in \cite[Section 4]{ChaPol}, the microsupport of any state in the following way; the semiclassical measure $\nu_k$ of a state $\eta_k$ is defined by
\[ \int_M f \ d\nu_k = \Tr(T_k(f) \eta_k) = \mathbb{E}(\eta_k,T_k(f)) \]
for every $f \in \classe{\infty}{(M,\mathbb{R}\xspace)}$. Then the microsupport $\mathrm{MS}(\eta_k)$ of $\eta_k$ is the complement of the set of points of $M$ having an open neighbourhood $U$ such that $\nu_k(U) = \mathcal{O}(k^{-\infty})$.
\begin{cor}
The microsupport of $\rho_k(\Sigma,\sigma)$ coincides with $\Sigma$.
\end{cor}
\begin{proof}
Let $\nu_k$ be the semiclassical measure of $\rho_k(\Sigma,\sigma)$. Let $m \in M \setminus \Sigma$; since $\Sigma$ is closed, there exists an open neighbourhood $V$ of $m$ in $M$ not intersecting $\Sigma$. Let $U$ be an open neighbourhood of $m$ such that $\overline{U} \subset V$, and let $\chi$ be a nonnegative smooth function equal to one on $U$ and compactly supported in $V$. Then
\[ \nu_k(U) \leq \int_M \chi \ d\nu_k = \mathbb{E}(\rho_k(\Sigma,\sigma),T_k(\chi)). \]
The last term in this equation is given by the previous lemma; it is $\mathcal{O}(k^{-\infty})$ because all the functions in the covariant symbol of $T_k(\chi)$ vanish on $\Sigma$ since the latter does not intersect the support of $\chi$. Therefore $\nu_k(U) = \mathcal{O}(k^{-\infty})$ and $m \notin \mathrm{MS}(\rho_k(\Sigma,\sigma))$.
Conversely, let $m \in \Sigma$ and let $U$ be any open neighbourhood of $m$ in $M$. Choose another open neighbourhood $V$ of $m$ such that $\overline{V} \subset U$, and let $\chi$ be a smooth function, compactly supported in $U$, equal to one on $V$, and such that $0 \leq \chi \leq 1$. Then
\[ \nu_k(U) \geq \int_M \chi \ d\nu_k = \mathbb{E}(\rho_k(\Sigma,\sigma),T_k(\chi)). \]
But by the previous lemma, we have that
\[ \mathbb{E}(\rho_k(\Sigma,\sigma),T_k(\chi)) = \int_{\Sigma} \chi(x) \sigma(x) + \mathcal{O}(k^{-1}) \geq \int_{\Sigma \cap V} \sigma + \mathcal{O}(k^{-1}). \]
Since the integral of $\sigma$ on $\Sigma \cap V$ is positive, this implies that $m$ belongs to $\mathrm{MS}(\rho_k(\Sigma,\sigma))$.
\end{proof}
Similarly, the variance of $T$ with respect to $\eta$ is $\text{Var}(\eta,T) = \Tr(T^2 \eta) - \Tr(T \eta)^2$.
\begin{lm}
Let $T_k$ be a self-adjoint Berezin-Toeplitz operator acting on $\mathcal{H}\xspace_k$, with covariant symbol $\sum_{\ell \geq 0} \hbar^{\ell} t_{\ell}$.
Let $\sum_{\ell \geq 0} \hbar^{\ell} u_{\ell} $ be the covariant symbol of $T_k^2$. Then $\mathrm{Var}(\rho_k(\Sigma,\sigma),T_k)$ has the following asymptotic expansion:
\[ \mathrm{Var}(\rho_k(\Sigma,\sigma),T_k) = \sum_{\ell \geq 0} k^{-\ell} \left( \int_{\Sigma} u_{\ell}(x) \sigma(x) - \sum_{m=0}^{\ell} \int_{\Sigma} \int_{\Sigma} t_m(x) t_{\ell - m}(y) \ \sigma(x) \sigma(y) \right) + \bigO{k^{-\infty}}. \]
In particular, if $T_k = \Pi_k f_0$ with $f_0 \in \classe{\infty}{(M,\mathbb{R}\xspace)}$, then
\[ \mathrm{Var}(\rho_k(\Sigma,\sigma),T_k) = \int_{\Sigma} f_{0}(x)^2 \sigma(x) - \left(\int_{\Sigma} f_{0}(x) \sigma(x)\right)^2 + \bigO{k^{-1}}. \]
\end{lm}
\begin{proof}
Apply the previous lemma to both $T_k^2$ and $T_k$.
\end{proof}
Now, assume that we are in the special case where $T_k = \Pi_k f_0$ and ${f_0}_{|\Sigma} = E \in \mathbb{R}\xspace$; then the previous results yield $\mathbb{E}(\rho_k(\Sigma,\sigma),T_k) = E + \bigO{k^{-1}}$ and $\mathrm{Var}(\rho_k(\Sigma,\sigma),T_k) = \bigO{k^{-1}} $.
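This concentration can be observed on the circle state of the Bargmann example. Taking $f_0(z) = |z|^2$, so that ${f_0}_{|\mathbb{S}\xspace^1} = 1$, a direct Gamma-integral computation (an assumption of this sketch, easily checked by hand) shows that $\Pi_k f_0$ is diagonal in the basis $(\phi_{k,\ell})_{\ell \geq 0}$ with entries $(\ell+1)/k$; the expectation then tends to $1$ and the variance decays like $1/k$, in accordance with the lemmas above:

```python
import math

k, nmax = 100, 600
# Poisson(k) weights: the diagonal of rho_k(S^1, dt/2pi) in the Bargmann basis
p = [math.exp(l * math.log(k) - k - math.lgamma(l + 1)) for l in range(nmax)]
# Diagonal of T_k = Pi_k f_0 with f_0(z) = |z|^2: a Gamma integral gives (l + 1) / k
t = [(l + 1) / k for l in range(nmax)]

expect = sum(pl * tl for pl, tl in zip(p, t))
var = sum(pl * tl * tl for pl, tl in zip(p, t)) - expect ** 2
# With L ~ Poisson(k): expect = (k + 1) / k -> 1 and var = Var(L) / k^2 = 1 / k -> 0
```
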
\subsection{Fidelity for states associated with non intersecting submanifolds}
Let $\Sigma_1, \Sigma_2 \subset M$ be two closed, connected submanifolds of $M$, endowed with densities $\sigma_1, \sigma_2$ such that
$\int_{\Sigma_i} \sigma_i = 1$, $i=1,2$. Using the notation introduced in Equation (\ref{eq:Lag_state}), we define the states $\rho_{k,i} = \rho_k(\Sigma_i,\sigma_i)$, $i=1,2$. Our goal is to estimate the fidelity $F(\rho_{k,1},\rho_{k,2})$ in the limit $k \to \infty$. Of course, if $(\Sigma_1,\sigma_1) = (\Sigma_2,\sigma_2)$, then $\rho_{k,1} = \rho_{k,2}$ and $F(\rho_{k,1},\rho_{k,2}) = 1$. The following result deals with the case where $\Sigma_1$ and $\Sigma_2$ are disjoint.
\begin{prop}
\label{prop:empty_int}
Assume that $\Sigma_1 \cap \Sigma_2 = \emptyset$. Then $F(\rho_{k,1},\rho_{k,2}) = \bigO{k^{-\infty}}$.
\end{prop}
\begin{proof}
By using the Cauchy-Schwarz inequality for the inner product $(A,B) \mapsto \Tr(B^*A)$ on the space of operators on $\mathcal{H}\xspace_k$, and the fact that the trace is invariant under cyclic permutations, we get that $F(\rho_{k,1},\rho_{k,2}) \leq \dim(\mathcal{H}\xspace_k) \Tr(\rho_{k,1} \rho_{k,2})$. Since the dimension of $\mathcal{H}\xspace_k$ is of order $k^n$, it is therefore sufficient to show that $\Tr(\rho_{k,1} \rho_{k,2}) = \bigO{k^{-\infty}}$. The same computations as in the proof of Proposition \ref{prop:purity} yield
\begin{equation} \Tr(\rho_{k,1} \rho_{k,2}) = \int_{\Sigma_1} \int_{\Sigma_2} \frac{|\Pi_k(x,y)|^2}{|\Pi_k(x,x)| |\Pi_k(y,y)|} \sigma_1(x) \sigma_2(y). \label{eq:trace}\end{equation}
Since $\Sigma_1 \times \Sigma_2$ does not meet the diagonal of $M \times M$, $\Pi_k$ is uniformly $\bigO{k^{-\infty}}$ on $\Sigma_1 \times \Sigma_2$. Moreover, it follows from Equation (\ref{eq:asympt_projector}) that $|\Pi_k(x,x)| \sim \left( \frac{k}{2\pi} \right)^n$ uniformly on $M$, so the above formula yields $\Tr(\rho_{k,1} \rho_{k,2}) = \bigO{k^{-\infty}}$.
\end{proof}
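Proposition \ref{prop:empty_int} can be illustrated in the Bargmann example: the states associated with two concentric circles of radii $r_1 \neq r_2$ are both diagonal, with Poisson weights of parameters $kr_1^2$ and $kr_2^2$ (the same computation as in the circle example above; a convention-dependent sketch of ours), so that $\Tr(\rho_{k,1}\rho_{k,2}) = e^{-k(r_1^2+r_2^2)}I_0(2kr_1r_2)$ decays exponentially, at rate $(r_1-r_2)^2$:

```python
import math

def trace_product(k, r1, r2):
    # Tr(rho_{k,1} rho_{k,2}) for circle states of radii r1, r2: both are diagonal
    # in the Bargmann basis, with Poisson(k r1^2) and Poisson(k r2^2) weights
    l1, l2 = k * r1 ** 2, k * r2 ** 2
    nmax = int(3 * max(l1, l2)) + 50
    return sum(math.exp(l * (math.log(l1) + math.log(l2)) - l1 - l2
                        - 2.0 * math.lgamma(l + 1)) for l in range(nmax))

r1, r2 = 1.0, 1.5
tr = {k: trace_product(k, r1, r2) for k in (20, 40, 80)}
# Exponential decay at rate (r1 - r2)^2, consistent with the O(k^{-infty}) statement
rates = [math.log(tr[20] / tr[40]) / 20.0, math.log(tr[40] / tr[80]) / 40.0]
```
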
Consequently, we will now be interested in an intermediate case, namely the situation where $\Sigma_1$ and $\Sigma_2$ are distinct but have nonempty intersection consisting of a finite number of points. Of course, in this case fidelity is still expected to tend to zero as $k$ goes to infinity, but one might be able to estimate the rate of convergence and the relation between fidelity and the underlying geometry. As already explained, fidelity is in general too complicated to compute, and we will rather be interested in the sub- and super-fidelities. We will explain how to estimate these quantities when $\Sigma_1$ and $\Sigma_2$ are Lagrangian submanifolds which moreover intersect transversally at a finite number of points.
\section{Sub and super fidelity for two Lagrangian states}
\label{sect:sub_super}
In this section, we assume that $\Gamma_1$ and $\Gamma_2$ are two closed, connected Lagrangian submanifolds of $M$, endowed with densities $\sigma_1, \sigma_2$ such that $\int_{\Gamma_i} \sigma_i = 1$, $i=1,2$, and intersecting transversally at a finite number of points $m_1, \ldots, m_s$. As before, we set $\rho_{k,i} = \rho_k(\Gamma_i, \sigma_i)$.
\begin{dfn} \label{dfn:thetas} For $\nu \in \llbracket 1,s \rrbracket$, we consider the principal angles
\[ 0 < \theta_1(m_{\nu}) \leq \ldots \leq \theta_{n}(m_{\nu}) \leq \frac{\pi}{2} \]
between $T_{m_{\nu}}\Gamma_1$ and $T_{m_{\nu}}\Gamma_2$, computed with respect to $g_{m_{\nu}}$ (recall that $g$ is the K\"ahler metric on $M$).\end{dfn}
For $i=1,2$, we introduce as in the statement of Proposition \ref{prop:purity} the Riemannian volume $\mu_{g,\Gamma_i}$ coming from the Riemannian metric induced by $g$ on $\Gamma_i$, and the function $f_i$ such that $\sigma_i = f_i \mu_{g,\Gamma_i}$. For $\nu \in \llbracket 1, s \rrbracket$, we define
\begin{equation} (\sigma_1,\sigma_2)_{m_{\nu}} := f_1(m_{\nu}) f_2(m_{\nu}) > 0. \label{eq:constant_sigma}\end{equation}
\begin{thm}
\label{thm:subfid}
The sub-fidelity of $\rho_{k,1}$ and $\rho_{k,2}$ satisfies:
\[ E(\rho_{k,1},\rho_{k,2}) = \left(\frac{2\pi}{k}\right)^{n} C((\Gamma_1,\sigma_1),(\Gamma_2,\sigma_2)) + \bigO{k^{-(n+1)}}, \]
where $C((\Gamma_1,\sigma_1),(\Gamma_2,\sigma_2)) = C_1 + \sqrt{2(C_2 + C_3)}$ with
\[ C_1 = \sum_{\nu=1}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}}}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu}))}, \quad C_2 = \sum_{\substack{\nu=1}}^s \sum_{\substack{\mu=1 \\ \mu \neq \nu}}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}} (\sigma_1,\sigma_2)_{m_{\mu}}}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu})) \sin(\theta_{\ell}(m_{\mu}))} \]
and finally
\[ C_3 = \sum_{\nu=1}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}}^2}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu}))} \left( \prod_{\ell = 1}^n\frac{1}{\sin(\theta_{\ell}(m_{\nu}))} - \prod_{\ell = 1}^n\frac{1}{\sqrt{1 + \sin^2(\theta_{\ell}(m_{\nu}))}} \right). \]
\end{thm}
The rest of this section is devoted to the proof of this result; we start by estimating the trace $\Tr(\rho_{k,1}\rho_{k,2})$, which gives $C_1$, then we estimate $\Tr((\rho_{k,1}\rho_{k,2})^2)$ to obtain the remaining terms.
\begin{rmk}
As can be seen from the proofs (and using the complete description of the Bergman kernel), the sub-fidelity actually has a complete asymptotic expansion in powers of $k$ smaller than $-n$; we are only interested here in the first term of this expansion.
\end{rmk}
\subsection{The term $\Tr(\rho_{k,1} \rho_{k,2})$}
We are now ready to estimate the trace of $\rho_{k,1} \rho_{k,2}$.
\begin{thm}
\label{thm:trace}
We have the following estimate:
\[ \Tr(\rho_{k,1}\rho_{k,2}) = \left(\frac{2\pi}{k}\right)^n \left( \sum_{\nu=1}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}}}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu}))} \right) + \bigO{k^{-(n+1)}}, \]
see Definition \ref{dfn:thetas} and Equation (\ref{eq:constant_sigma}) for notation.
\end{thm}
\begin{proof}
By Equation (\ref{eq:trace}), this trace is given by the formula
\[ \Tr(\rho_{k,1} \rho_{k,2}) = \int_{\Gamma_1} \int_{\Gamma_2} \frac{|\Pi_k(x,y)|^2}{|\Pi_k(x,x)| |\Pi_k(y,y)|} \sigma_1(x) \sigma_2(y). \]
The same argument that we used in the proof of Proposition \ref{prop:empty_int} shows that the integral over $x,y \in M \setminus \bigcup_{\nu=1}^s \Omega_{\nu}$, where $\Omega_{\nu}$ is a neighbourhood of the intersection point $m_{\nu}$, is $\bigO{k^{-\infty}}$. Therefore, we only need to understand what the contribution of the integral
\[ I_{k,\nu} = \int_{\Gamma_1 \cap \Omega_{\nu}} \int_{\Gamma_2 \cap \Omega_{\nu}} \frac{|\Pi_k(x,y)|^2}{|\Pi_k(x,x)| |\Pi_k(y,y)|} \sigma_1(x) \sigma_2(y) \]
is, for every $\nu \in \llbracket 1,s \rrbracket$, and to sum up these contributions. Equation (\ref{eq:asympt_projector}) implies that
\[ I_{k,\nu} = \left(\int_{\Gamma_1 \cap \Omega_{\nu}} \int_{\Gamma_2 \cap \Omega_{\nu}} |S(x,y)|^{2k} a_0(x,y)^2 \sigma_1(x) \sigma_2(y)\right)\left(1 + \bigO{k^{-1}}\right). \]
By working with a smaller $\Omega_{\nu}$ if necessary, we may assume that $S$ does not vanish on $\Omega_{\nu} \times \Omega_{\nu}$, and define $\varphi = -2\log|S|$ on the latter. Then $I_{k,\nu} = J_{k,\nu}(1+\bigO{k^{-1}})$ with
\[ J_{k,\nu} = \int_{\Gamma_1 \cap \Omega_{\nu}} \int_{\Gamma_2 \cap \Omega_{\nu}} \exp(-k\varphi(x,y)) a_0(x,y)^2 \sigma_1(x) \sigma_2(y). \]
We will evaluate this integral by means of the stationary phase method. By taking a smaller $\Omega_{\nu}$ if necessary, we consider a local diffeomorphism $\eta: (\Omega_{\nu},m_{\nu}) \to (\Theta_{\nu},0) \subset \mathbb{R}\xspace^{2n}$ such that $\eta(\Gamma_1 \cap \Omega_{\nu}) = \{(u,v) \in \Theta_{\nu}| \ v = 0 \}$ and $ \eta(\Gamma_2 \cap \Omega_{\nu}) = \{(u, v) \in \Theta_{\nu}| \ u = 0 \}$. Let $\kappa_1, \kappa_2$ be such that $\kappa_1(u) = \eta^{-1}(u,0)$ and $\kappa_2(v) = \eta^{-1}(0,v)$. We have that
\[ J_{k,\nu} = \int_{\mathrm{pr}_1(\Theta_{\nu})} \int_{\mathrm{pr}_2(\Theta_{\nu})} \exp(-k\psi(u,v)) b_0(u,v)^2 \ \kappa_1^*\sigma_1(u) \ \kappa_2^*\sigma_2(v), \]
where $\mathrm{pr}_i$, $i=1,2$ are the projections on the first and second factor of $\mathbb{R}\xspace^n \times \mathbb{R}\xspace^n$, where the phase reads $\psi(u,v) = \varphi(\kappa_1(u),\kappa_2(v))$ and the amplitude is given by the formula $b_0(u,v) = a_0(\kappa_1(u),\kappa_2(v))$. If $h_1,h_2$ are such that $\kappa_1^*\sigma_1 = h_1(u) du$ and $\kappa_2^*\sigma_2 = h_2(v) dv$ locally, then
\[ J_{k,\nu} = \int_{\mathrm{pr}_1(\Theta_{\nu})} \int_{\mathrm{pr}_2(\Theta_{\nu})} \exp(-k\psi(u,v)) b_0(u,v)^2 h_1(u) h_2(v) \ du \ dv. \]
The phase $\psi$ is non-negative. Its differential is given by
\[ d\psi(u,v) \cdot (U,V) = d\varphi(\kappa_1(u),\kappa_2(v)) \cdot (d\kappa_1(u) \cdot U, d\kappa_2(v) \cdot V); \]
therefore, because of \cite[Proposition 1]{ChaBTO}, the point $(u,v)$ is a critical point for $\psi$ if and only if $\kappa_1(u) = \kappa_2(v)$, thus if and only if $u=0=v$. Furthermore, $\psi(0,0) = 0$, and the second order differential of $\psi$ at the critical point $(0,0)$ reads
\[ d^2\psi(0,0) \cdot ((U,V),(X,Y)) = d^2\varphi(m_{\nu},m_{\nu}) \cdot \left( (d\kappa_1(0) \cdot U, d\kappa_2(0) \cdot V),(d\kappa_1(0) \cdot X, d\kappa_2(0) \cdot Y) \right). \]
We will prove that this bilinear form is positive definite. Let $(e_{\ell})_{1 \leq \ell \leq n}$ (respectively $(f_{\ell})_{1 \leq \ell \leq n}$) be an orthonormal basis (with respect to the restriction of $g_{m_{\nu}}$) of the subspace $T_{m_{\nu}} \Gamma_1 \subset T_{m_{\nu}} M$ (respectively $T_{m_{\nu}} \Gamma_2$). We define the vectors $U_{\ell} = (d\kappa_1(0))^{-1} \cdot e_{\ell}$ and $V_{\ell} = (d\kappa_2(0))^{-1} \cdot f_{\ell}$ of $\mathbb{R}\xspace^{n}$, for $1 \leq \ell \leq n$. By composing $\eta$ with a linear diffeomorphism if necessary, we may assume that $((U_{\ell},0)_{1 \leq \ell \leq n},(0,V_{\ell})_{1 \leq \ell \leq n})$ is the standard basis of $\mathbb{R}\xspace^{2n}$; let us compute the matrix $A$ of $d^2\psi(0,0)$ in this basis. We have that
\[ d^2\psi(0,0) \cdot ((U_{\ell},0),(U_p,0)) = d^2\varphi(m_{\nu},m_{\nu}) \cdot \left( (e_{\ell}, 0), (e_{p}, 0) \right). \]
By \cite[Proposition 1]{ChaBTO}, $d^2\varphi(m_{\nu},m_{\nu})$ has kernel $T_{(m_{\nu},m_{\nu})} \Delta$, where $\Delta$ is the diagonal of $M^2$, and its restriction to $(T_{(m_{\nu},m_{\nu})} \Delta)^{\perp}$ is equal to $2 \tilde{g}_{(m_{\nu},m_{\nu})}$, where we recall that $\tilde{g}$ is the K\"ahler metric on $M \times M$ induced by the symplectic form $\omega \oplus -\omega$ and complex structure $j \oplus -j$. But
\[ (e_{\ell},0) = \frac{1}{2} (e_{\ell},e_{\ell}) + \frac{1}{2} (e_{\ell},-e_{\ell}), \]
is the decomposition of $(e_{\ell},0)$ in the direct sum $T_{(m_{\nu},m_{\nu})} \Delta \oplus (T_{(m_{\nu},m_{\nu})} \Delta)^{\perp}$. Therefore
\[ d^2\psi(0,0) \cdot ((U_{\ell},0),(U_p,0)) = \tilde{g}_{(m_{\nu},m_{\nu})}((e_{\ell},-e_{\ell}),(e_p,0)) = g_{m_{\nu}}(e_{\ell},e_p) = \delta_{\ell,p}. \]
Similarly, $d^2\psi(0,0) \cdot ((0,V_{\ell}),(0,V_p)) = \delta_{\ell,p}$. Finally,
\[ d^2\psi(0,0) \cdot ((U_{\ell},0),(0,V_p)) = \tilde{g}_{(m_{\nu},m_{\nu})}((e_{\ell},-e_{\ell}),(0,f_{p})) = -g_{m_{\nu}}(e_{\ell},f_p), \]
so $A$ is the block matrix
\[ A = \begin{pmatrix} \mathrm{Id} & -G \\ -G^{\top} & \mathrm{Id} \end{pmatrix}, \]
where $G$ is the $n \times n$ matrix with entries $G_{\ell,p} = g_{m_{\nu}}(e_{\ell},f_p)$. Thus its determinant satisfies $\det(A) = \det(\mathrm{Id} - G^{\top}G)$; hence, Lemma \ref{lm:angle} yields
\[\det(A) = \prod_{\ell = 1}^n \sin^2(\theta_{\ell}(m_{\nu})) > 0 \]
and the stationary phase lemma gives the estimate
\[ J_{k,\nu} = \left( \frac{2\pi}{k} \right)^{n} \frac{h_1(0)h_2(0)a_0(m_{\nu},m_{\nu})^2}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu}))} + \bigO{k^{-(n+1)}}. \]
We have that $a_0(m_{\nu},m_{\nu}) = 1$, and we claim that $h_1(0) h_2(0) = (\sigma_1,\sigma_2)_{m_{\nu}}$. Indeed, thanks to our choices, we know that $(\kappa_1^*\mu_{g,\Gamma_1})(0) = du$ and $\kappa_1^*\sigma_1 = h_1(u) du$; therefore $h_1(0) = f_1(m_{\nu})$, and similarly $h_2(0) = f_2(m_{\nu})$. We then obtain the result by summing up the contributions of all the intersection points $m_{\nu}$, $1 \leq \nu \leq s$.
\end{proof}
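The $1/\prod_{\ell}\sin(\theta_{\ell})$ factor produced by the stationary phase lemma can be illustrated on a toy model: for $n=1$, the quadratic phase $\psi(u,v) = \frac{1}{2}(u^2+v^2-2uv\cos\theta)$ mimics two lines meeting at angle $\theta$; its Hessian is $\begin{pmatrix} 1 & -\cos\theta \\ -\cos\theta & 1 \end{pmatrix}$, with determinant $\sin^2\theta$, so the Gaussian integral equals $\frac{2\pi}{k\sin\theta}$ exactly. The following short Python sketch (a numerical illustration, not part of the proof) checks this:

```python
import math

# Toy model for the stationary phase estimate (n = 1): the quadratic phase
# psi(u, v) = (u^2 + v^2 - 2*u*v*cos(theta)) / 2 has Hessian determinant
# sin(theta)^2, so int exp(-k psi) du dv = (2*pi/k) / sin(theta) exactly.
def model_integral(k, theta, L=2.0, N=400):
    # Midpoint rule on [-L, L]^2; the Gaussian has width ~ 1/sqrt(k), so
    # L = 2 truncates a negligible tail for the values of k used below.
    h = 2 * L / N
    total = 0.0
    c = math.cos(theta)
    for i in range(N):
        u = -L + (i + 0.5) * h
        for j in range(N):
            v = -L + (j + 0.5) * h
            total += math.exp(-k * (u * u + v * v - 2 * c * u * v) / 2)
    return total * h * h

k, theta = 50, math.pi / 3
predicted = (2 * math.pi / k) / math.sin(theta)
computed = model_integral(k, theta)
print(computed, predicted)
```

The relative error is governed by the midpoint rule and the tail truncation, both negligible here.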
\begin{rmk}
In this proof we have not used the fact that our submanifolds are Lagrangian, hence the result still holds without this assumption. We also believe that we could even drop the assumption that they are $n$-dimensional and consider instead two submanifolds of respective dimensions $d$ and $2n-d$ intersecting transversally at a finite number of points. Handling this case would require some care, but in this setting $d$ principal angles are still well-defined, and everything should work as if the $n-d$ remaining ones were equal to $\frac{\pi}{2}$. Nevertheless, as we will see below, the Lagrangian assumption is crucial in order to estimate the next term, so we chose to stick to the Lagrangian case for this first result.
\end{rmk}
\subsection{The term $\Tr((\rho_{k,1} \rho_{k,2})^2)$}
We can now estimate the trace of $(\rho_{k,1} \rho_{k,2})^2$, which is equal to
\[ \Tr\left( (\rho_{k,1} \rho_{k,2})^2 \right) = \int_{\Gamma_1} \int_{\Gamma_2} \int_{\Gamma_1} \int_{\Gamma_2} \Tr(P_k^{x_1} P_k^{x_2} P_k^{y_1} P_k^{y_2}) \ \sigma_1(x_1) \sigma_2(x_2) \sigma_1(y_1) \sigma_2(y_2) .\]
A straightforward computation, similar to the one in the proof of Proposition \ref{prop:purity}, yields
\[ \Tr(P_k^{x_1} P_k^{x_2} P_k^{y_1} P_k^{y_2}) = \frac{ \scal{\xi_k^{v_2}}{\xi_k^{v_1}} \scal{\xi_k^{v_1}}{\xi_k^{u_2}} \scal{\xi_k^{u_2}}{\xi_k^{u_1}} \scal{\xi_k^{u_1}}{\xi_k^{v_2}} }{ \|\xi_k^{u_1}\|^2 \|\xi_k^{u_2}\|^2 \|\xi_k^{v_1}\|^2 \|\xi_k^{v_2}\|^2} \]
where $u_i$ (respectively $v_i$), $i=1,2$, is any unit vector in $L_{x_i}$ (respectively $L_{y_i}$). This trace is therefore a $\bigO{k^{-\infty}}$ uniformly on $M^4 \setminus \bigcup_{\nu = 1}^s \Omega_{\nu}^4$, where $\Omega_{\nu}$ is a neighbourhood of $m_{\nu}$ in $M$. Consequently, the only non-negligible contributions to $\Tr\left( (\rho_{k,1} \rho_{k,2})^2 \right)$ come from the integrals
\[ I_{k,\nu} = \int_{\Gamma_1 \cap \Omega_{\nu}} \int_{\Gamma_2 \cap \Omega_{\nu}} \int_{\Gamma_1 \cap \Omega_{\nu}} \int_{\Gamma_2 \cap \Omega_{\nu}} \Tr(P_k^{x_1} P_k^{x_2} P_k^{y_1} P_k^{y_2}) \ \sigma_1(x_1) \sigma_2(x_2) \sigma_1(y_1) \sigma_2(y_2),\]
for $\nu \in \llbracket 1, s \rrbracket$. In order to estimate the scalar products appearing in this integral, let $S$ be as in Equation (\ref{eq:asympt_projector}), let $t$ be a local section of $L$ over $\Omega_{\nu}$ with unit norm, let $\psi: \Omega_{\nu} \times \Omega_{\nu} \to \mathbb{C}\xspace$ be such that
\[ S(x,y) = \exp(i\psi(x,y)) \ t(x) \otimes \overline{t(y)} \]
over $\Omega_{\nu} \times \Omega_{\nu}$, and set $u_i = t(x_i), v_i = t(y_i)$, $i=1,2$. We derive from {\cite[Section 5]{ChaBTO}} that
\[ \scal{\xi_k^{v_2}}{\xi_k^{v_1}} = \overline{t(y_1)}^k \cdot \Pi_k(y_1,y_2) \cdot t(y_2)^k = \left( \frac{k}{2\pi} \right)^n \exp(ik\psi(y_1,y_2)) \left( a_0(y_1,y_2) + \bigO{k^{-1}} \right) \]
uniformly on any compact subset of $\Omega_{\nu} \times \Omega_{\nu}$, and we obtain similar expressions for the other scalar products. Hence, $I_{k,\nu} = J_{k,\nu}\left( 1 + \bigO{k^{-1}} \right)$ with
\[ J_{k,\nu} = \int_{\Gamma_1 \cap \Omega_{\nu}} \int_{\Gamma_2 \cap \Omega_{\nu}} \int_{\Gamma_1 \cap \Omega_{\nu}} \int_{\Gamma_2 \cap \Omega_{\nu}} \exp(ik\Psi(x_1,x_2,y_1,y_2)) b_0(x_1,x_2,y_1,y_2) \ \sigma_1(x_1) \sigma_2(x_2) \sigma_1(y_1) \sigma_2(y_2) \]
where the phase $\Psi$ is given by
\[ \Psi(x_1,x_2,y_1,y_2) = \psi(y_1,y_2) + \psi(x_2,y_1) + \psi(x_1,x_2) + \psi(y_2,x_1), \]
and $b_0(x_1,x_2,y_1,y_2) = a_0(y_1,y_2) a_0(x_2,y_1) a_0(x_1,x_2) a_0(y_2,x_1)$. Now, we introduce as in the proof of Theorem \ref{thm:trace} a local diffeomorphism $\eta: (\Omega_{\nu},m_{\nu}) \to (\Theta_{\nu},0) \subset \mathbb{R}\xspace^{2n}$ such that
\[\eta(\Gamma_1 \cap \Omega_{\nu}) = \{(u,v) \in \Theta_{\nu}| \ v = 0 \}, \qquad \eta(\Gamma_2 \cap \Omega_{\nu}) = \{(u, v) \in \Theta_{\nu}| \ u = 0 \}, \]
and the functions $\kappa_i: \mathrm{pr}_i(\Theta_{\nu}) \to \Omega_{\nu}$ defined by the formulas $\kappa_1(u) = \eta^{-1}(u,0)$ and $\kappa_2(v) = \eta^{-1}(0,v)$ (here we recall that $\mathrm{pr}_i$, $i=1,2$ are the projections on the first and second factor of $\mathbb{R}\xspace^n \times \mathbb{R}\xspace^n$). We also introduce again the functions $h_1,h_2$ such that $\kappa_1^*\sigma_1 = h_1(u) du$ and $\kappa_2^*\sigma_2 = h_2(v) dv$ locally. Then
\[ J_{k,\nu} = \int_{\mathrm{pr}_1(\Theta_{\nu})} \int_{\mathrm{pr}_2(\Theta_{\nu})} \int_{\mathrm{pr}_1(\Theta_{\nu})} \int_{\mathrm{pr}_2(\Theta_{\nu})} \exp(ik\Phi(u,v,w,z)) c_0(u,v,w,z) \ du \ dv \ dw \ dz\]
where the amplitude $c_0$ is given by
\[ c_0(u,v,w,z) = b_0(\kappa_1(u),\kappa_2(v),\kappa_1(w),\kappa_2(z)) h_1(u)h_2(v)h_1(w)h_2(z) \]
and the phase $\Phi$ reads $\Phi(u,v,w,z) = \Psi(\kappa_1(u),\kappa_2(v),\kappa_1(w),\kappa_2(z))$. We will estimate $J_{k,\nu}$ thanks to another application of the stationary phase method. The imaginary part of $\Phi$ is non-negative and vanishes only at the point $0 = (0,0,0,0)$.
\paragraph{Computation of $d\Phi$ and critical points of $\Phi$.}
Let $\widetilde{\nabla}$ be the connection induced by $\nabla$ on the line bundle $L \boxtimes \overline{L}$.
\begin{lm}
Let $\alpha_S$ be the differential form defined by the equality $\widetilde{\nabla} S = -i \alpha_S \otimes S$ in a neighbourhood of the diagonal $\Delta$ of $M^2$ where $S$ does not vanish; then
\[ d\Phi(u,v,w,z) \cdot (U,V,W,Z) = f(w,z,W,Z) + f(u,v,U,V) + g(v,w,V,W) + g(z,u,Z,U) \]
where $f$ and $g$ are defined as $f(a,b,A,B) = -{\alpha_S}_{(\kappa_1(a),\kappa_2(b))}(d\kappa_1(a) \cdot A, d\kappa_2(b) \cdot B)$ and $g(a,b,A,B) = -{\alpha_S}_{(\kappa_2(a),\kappa_1(b))}(d\kappa_2(a) \cdot A, d\kappa_1(b) \cdot B)$.
\end{lm}
\begin{proof}
We start from the local expression $S(x,y) = \exp(i\psi(x,y)) t(x) \otimes \overline{t(y)}$, which yields
\[ \widetilde{\nabla}S = i d\psi \otimes S + \exp(i \psi) \widetilde{\nabla}(t(x) \otimes \overline{t(y)}). \]
In order to compute the second term, we introduce the local differential form $\beta$ such that $\nabla t = \beta \otimes t$. Then $ \widetilde{\nabla}S = \left(i d\psi + p_1^*\beta + p_2^*\bar{\beta} \right)\otimes S$, where $p_1, p_2$ are the projections on the first and second factor of $M \times M$. This means that $-i\alpha_S = i d\psi + p_1^*\beta + p_2^*\bar{\beta}$. We claim that there exists a real-valued form $\gamma$ such that $\beta = i \gamma$; indeed,
\[ 0 = dh(t,t) = h(\nabla t,t) + h(t, \nabla t) = \beta + \bar{\beta} \]
since $\nabla$ and $h$ are compatible. Consequently, $d\psi = - \alpha_S - p_1^*\gamma + p_2^*\gamma$. Now, the quantity $d\Phi(u,v,w,z) \cdot (U,V,W,Z)$ is the sum of the following four terms:
\[ d\psi((\kappa_1(w),\kappa_2(z))) \cdot (d\kappa_1(w) \cdot W, d\kappa_2(z) \cdot Z), \quad d\psi((\kappa_2(v),\kappa_1(w))) \cdot (d\kappa_2(v) \cdot V, d\kappa_1(w) \cdot W),\]
\[ d\psi((\kappa_1(u),\kappa_2(v))) \cdot (d\kappa_1(u) \cdot U, d\kappa_2(v) \cdot V), \quad d\psi((\kappa_2(z),\kappa_1(u))) \cdot (d\kappa_2(z) \cdot Z, d\kappa_1(u) \cdot U). \]
The quantity $-\gamma_{\kappa_1(w)}(d\kappa_1(w) \cdot W)$ coming from the first term cancels the quantity $\gamma_{\kappa_1(w)}(d\kappa_1(w) \cdot W)$ coming from the second one, and so on.
\end{proof}
Since $\alpha_S$ vanishes on the diagonal (see \cite[Proposition 1]{ChaBTO} and \cite[Lemma 4.3]{ChaHalf}), an immediate corollary of this result is that the differential of $\Phi$ vanishes at $(0,0,0,0)$.
\paragraph{Computation of the determinant of the Hessian of $\Phi$ at the critical point.}
Let $\overline{M}$ be $M$ endowed with the symplectic structure $-\omega$ and the complex structure $-j$; $M \times \overline{M}$ is equipped with the symplectic form $\tilde{\omega} = \omega \oplus - \omega = p_1^* \omega - p_2^* \omega$ with $p_1, p_2$ the natural projections. Similarly, $\tilde{\text{\j}}$ denotes the complex structure $j \oplus -j$ on $M \times \overline{M}$.
Having in mind \cite[Lemma 4.3]{ChaHalf} (or \cite[Section 2.6]{Cha_symp} in a more general setting), we introduce the section $B_S$ of $\left(T^*(M \times \overline{M}) \otimes T^*(M \times \overline{M}) \right) \otimes \mathbb{C}\xspace \to \Delta$ such that for any vector fields $X, Y$ of $M \times \overline{M}$, $\mathcal{L}\xspace_X (\alpha_S(Y)) = B_S(X,Y)$ along $\Delta$; if we set $C = d\kappa_1(0)$ and $D = d\kappa_2(0)$, then for $\mathcal{U} = (U,V,W,Z)$ and $\mathcal{V} = (\hat{U},\hat{V},\hat{W},\hat{Z})$,
\[ d^2\Phi(0) \cdot \left(\mathcal{U}, \mathcal{V} \right) = f(W,Z,\hat{W},\hat{Z}) + f(U,V,\hat{U},\hat{V}) + g(V,W,\hat{V},\hat{W}) + g(Z,U,\hat{Z},\hat{U}) \]
where $f$ and $g$ read $f(X_1,X_2,X_3,X_4) = - {B_S}_{(m_{\nu},m_{\nu})}((C \cdot X_1, D \cdot X_2),(C \cdot X_3, D \cdot X_4))$ and $g(X_1,X_2,X_3,X_4) = - {B_S}_{(m_{\nu},m_{\nu})}((D \cdot X_1, C \cdot X_2),(D \cdot X_3, C \cdot X_4))$. As before, we consider an orthonormal basis $(e_{\ell})_{1 \leq \ell \leq n}$ (respectively $(f_{\ell})_{1 \leq \ell \leq n}$) of the subspace $T_{m_{\nu}} \Gamma_1 \subset T_{m_{\nu}} M$ (respectively $T_{m_{\nu}} \Gamma_2$), and we assume that the vectors $U_{\ell} = C^{-1} \cdot e_{\ell}, V_{\ell} = D^{-1} \cdot f_{\ell}$ of $\mathbb{R}\xspace^{n}$, for $1 \leq \ell \leq n$ are such that $((U_{\ell},0)_{1 \leq \ell \leq n},(0,V_{\ell})_{1 \leq \ell \leq n})$ is the standard basis of $\mathbb{R}\xspace^{2n}$. Let $G, \Xi$ be the $n \times n$ matrices with entries $G_{p,q} = g_{m_{\nu}}(e_p,f_q)$ and $\Xi_{p,q} = \omega_{m_{\nu}}(e_p,f_q)$.
\begin{lm}
In the basis $(U_{\ell},0,0,0)_{1 \leq \ell \leq n},(0,0,U_{\ell},0)_{1 \leq \ell \leq n}$, $(0,V_{\ell},0,0)_{1 \leq \ell \leq n},(0,0,0,V_{\ell})_{1 \leq \ell \leq n}$ of $\mathbb{R}\xspace^{4n}$, the matrix of $d^2\Phi(0)$ is the block matrix
\[ H = \begin{pmatrix} i I_{2n} & A \\ A^{\top} & i I_{2n} \end{pmatrix}, \qquad A = \frac{1}{2} \begin{pmatrix} -\Xi - iG & \Xi- iG \\ \Xi - iG & -\Xi - iG \end{pmatrix}.\]
\end{lm}
\begin{proof}
It is clear from the above expression of $d^2\Phi(0)$ that
\[d^2\Phi(0) \cdot ((U_p,0,0,0),(0,0,U_q,0)) = 0 = d^2\Phi(0) \cdot ((0,V_p,0,0),(0,0,0,V_q)). \]
In order to compute the other terms, we introduce the projection $q$ from $T_x(M \times \overline{M}) \otimes \mathbb{C}\xspace$ onto $T_x^{0,1}(M \times \overline{M})$ with kernel $T_x \Delta \otimes \mathbb{C}\xspace$, so that $B_S(X,Y) = \tilde{\omega}(q(X),Y)$ (see for instance \cite[Lemma 4.3]{ChaHalf} or \cite[Proposition 2.15]{Cha_symp}). We need to compute $q(e_{\ell},0)$ and $q(0,e_{\ell})$ (and similarly for $f_{\ell}$). So we look for $X,Y \in T_{m_{\nu}}M$ and $Z \in T_{m_{\nu}}M \otimes \mathbb{C}\xspace$ such that
\[ (e_{\ell},0) = (X,Y) + i\tilde{\text{\j}}(X,Y) + (Z,Z) = (X+ijX+Z,Y-ijY+Z), \]
in which case $q(e_{\ell},0) = (X+ijX,Y-ijY)$. A straightforward computation shows that $2X = -2Y = e_{\ell}$ and $Z = X-ijX$, hence $q(e_{\ell},0) = \frac{1}{2} \left( e_{\ell} + ij e_{\ell}, - e_{\ell} + i j e_{\ell} \right)$. We obtain in a similar fashion that $q(0,e_{\ell}) = \frac{1}{2} \left( -e_{\ell} - ij e_{\ell}, e_{\ell} - i j e_{\ell} \right)$. Now, we have that
\[ d^2\Phi(0) \cdot ((U_p,0,0,0),(U_q,0,0,0)) = - {\tilde{\omega}}_{(m_{\nu},m_{\nu})}(q(e_p,0),(e_q,0)) - {\tilde{\omega}}_{(m_{\nu},m_{\nu})}(q(0,e_p),(0,e_q)). \]
The first term satisfies
\[ {\tilde{\omega}}_{(m_{\nu},m_{\nu})}(q(e_p,0),(e_q,0)) = \frac{1}{2} \omega_{m_{\nu}}(e_p + ij e_p,e_q) = \frac{1}{2} \omega_{m_{\nu}}(e_p,e_q) + \frac{i}{2} \omega_{m_{\nu}}(j e_p,e_q); \]
since $T_{m_{\nu}} \Gamma_1$ is Lagrangian, $\omega_{m_{\nu}}(e_p,e_q) = 0$, and finally
\[ {\tilde{\omega}}_{(m_{\nu},m_{\nu})}(q(e_p,0),(e_q,0)) = -\frac{i}{2} g_{m_{\nu}}(e_p,e_q) = -\frac{i}{2} \delta_{p,q}. \]
A similar computation shows that ${\tilde{\omega}}_{(m_{\nu},m_{\nu})}(q(0,e_p),(0,e_q)) = -\frac{i}{2} \delta_{p,q}$. Therefore,
\[ d^2\Phi(0) \cdot ((U_p,0,0,0),(U_q,0,0,0)) = i \delta_{p,q}. \]
We find the same result for $d^2\Phi(0) \cdot ((0,V_p,0,0),(0,V_q,0,0))$, $d^2\Phi(0) \cdot ((0,0,U_p,0),(0,0,U_q,0))$ and $d^2\Phi(0) \cdot ((0,0,0,V_p),(0,0,0,V_q))$. Combining this with the previous result, we obtain that the two diagonal blocks of $H$ are equal to $i I_{2n}$. Now, we have that
\[ d^2\Phi(0) \cdot ((U_p,0,0,0),(0,V_q,0,0)) = -\tilde{\omega}_{(m_{\nu},m_{\nu})}(q(e_p,0),(0,f_q)); \]
but we also have that
\[ \tilde{\omega}_{(m_{\nu},m_{\nu})}(q(e_p,0),(0,f_q)) = -\frac{1}{2} \omega_{m_{\nu}}(-e_p + ij e_p,f_q) = \frac{1}{2} \left( \omega_{m_{\nu}}(e_p,f_q) - i \omega_{m_{\nu}}(je_p,f_q) \right). \]
So we finally obtain that
\[ d^2\Phi(0) \cdot ((U_p,0,0,0),(0,V_q,0,0)) = -\frac{1}{2} \left( \omega_{m_{\nu}}(e_p,f_q) + i g_{m_{\nu}}(e_p,f_q) \right). \]
We also immediately deduce from this that
\[ d^2\Phi(0) \cdot ((0,0,U_p,0),(0,0,0,V_q)) = -\frac{1}{2} \left( \omega_{m_{\nu}}(e_p,f_q) + i g_{m_{\nu}}(e_p,f_q) \right). \]
Finally, we derive
\[ d^2\Phi(0) \cdot ((U_p,0,0,0),(0,0,0,V_q)) = \frac{1}{2} \left( \omega_{m_{\nu}}(e_p,f_q) - i g_{m_{\nu}}(e_p,f_q) \right) \]
from a similar computation. The same holds for $d^2\Phi(0) \cdot ((0,0,U_p,0),(0,V_q,0,0))$.
\end{proof}
This result yields $\det(-iH) = \det(I_{2n} + A^{\top}A)$. But one readily checks that
\[ I_{2n} + A^{\top}A= I_{2n} + \frac{1}{2} \begin{pmatrix} \Xi^{\top}\Xi - G^{\top}G & -\Xi^{\top}\Xi - G^{\top}G \\ -\Xi^{\top}\Xi - G^{\top}G & \Xi^{\top}\Xi - G^{\top}G \end{pmatrix} = \begin{pmatrix} P & Q \\ Q & P \end{pmatrix}, \]
where the matrices $P$ and $Q$ are given by
\[ P = I_n + \frac{1}{2}( \Xi^{\top}\Xi - G^{\top}G), \qquad Q = -\frac{1}{2}(\Xi^{\top}\Xi + G^{\top}G). \]
Therefore, we finally obtain that
\[ \det(-iH) = \det(P + Q) \det(P-Q) = \det\left(I_n - G^{\top}G\right) \det\left(I_n + \Xi^{\top} \Xi\right). \]
It follows from Lemma \ref{lm:angle} that $\det(I_n - G^{\top}G) = \prod_{\ell = 1}^n \sin^2(\theta_{\ell})$, and from Lemma \ref{lm:det_symp} that $\det(I_n + \Xi^{\top} \Xi) = \prod_{\ell = 1}^n \left(1 + \sin^2(\theta_{\ell})\right)$, where $0 < \theta_1 \leq \ldots \leq \theta_n \leq \pi/2$ are the principal angles between $T_{m_{\nu}} \Gamma_1$ and $T_{m_{\nu}} \Gamma_2$. Consequently,
\[ \det(-iH) = \prod_{\ell = 1}^n \sin^2(\theta_{\ell}) \left(1 + \sin^2(\theta_{\ell})\right) > 0. \]
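The block-determinant identities used in this computation, namely $\det\begin{pmatrix} P & Q \\ Q & P \end{pmatrix} = \det(P+Q)\det(P-Q)$ and the Schur-complement formula behind $\det(-iH) = \det(I_{2n} + A^{\top}A)$ (the same formula that gave $\det(A) = \det(\mathrm{Id} - G^{\top}G)$ in the proof of Theorem \ref{thm:trace}), hold for arbitrary square blocks of the same size. They can be verified exactly over the rationals with a short script (an illustrative sketch, not part of the argument):

```python
from fractions import Fraction
import random

def det(M):
    # Laplace expansion along the first row; exact over the rationals.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def block(A, B, C, D):  # assemble the 2n x 2n block matrix [[A, B], [C, D]]
    n = len(A)
    return [A[i] + B[i] for i in range(n)] + [C[i] + D[i] for i in range(n)]

def mat(n):  # random n x n rational matrix
    return [[Fraction(random.randint(-5, 5), random.randint(1, 4))
             for _ in range(n)] for _ in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B, s=1):  # A + s*B
    return [[A[i][j] + s * B[i][j] for j in range(len(A))] for i in range(len(A))]

def T(A):  # transpose
    return [list(r) for r in zip(*A)]

random.seed(0)
n = 3
I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
G = mat(n)
negG = [[-x for x in row] for row in G]
# det [[I, -G], [-G^T, I]] = det(I - G^T G)
lhs1 = det(block(I, negG, T(negG), I))
rhs1 = det(add(I, matmul(T(G), G), -1))
# det [[P, Q], [Q, P]] = det(P + Q) det(P - Q)
P, Q = mat(n), mat(n)
lhs2 = det(block(P, Q, Q, P))
rhs2 = det(add(P, Q)) * det(add(P, Q, -1))
print(lhs1 == rhs1, lhs2 == rhs2)
```

The second identity follows from block row and column operations with identity blocks, which do not change the determinant.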
An application of the stationary phase lemma, as in the proof of Theorem \ref{thm:trace}, yields the following result.
\begin{thm}
\label{thm:trace_square}
We have the following estimate:
\[ \Tr\left((\rho_{k,1}\rho_{k,2})^2\right) = \left(\frac{2\pi}{k}\right)^{2n} \left( \sum_{\nu=1}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}}^2}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu})) \sqrt{1 + \sin^2(\theta_{\ell}(m_{\nu}))}} \right) + \bigO{k^{-(2n+1)}}, \]
see Definition \ref{dfn:thetas} and Equation (\ref{eq:constant_sigma}) for notation.
\end{thm}
\subsection{Proof of Theorem \ref{thm:subfid}}
The statement of Theorem \ref{thm:subfid} is a direct consequence of Theorems \ref{thm:trace} and \ref{thm:trace_square}. Recall that $ E(\rho_{k,1},\rho_{k,2}) = \Tr(\rho_{k,1} \rho_{k,2}) + \sqrt{2} \sqrt{\Tr(\rho_{k,1}\rho_{k,2})^2 - \Tr((\rho_{k,1}\rho_{k,2})^2)}$. By Theorem \ref{thm:trace},
\[ \Tr(\rho_{k,1} \rho_{k,2}) = \left(\frac{2\pi}{k}\right)^n \left( \sum_{\nu=1}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}}}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu}))} \right) + \bigO{k^{-(n+1)}}, \]
which implies that
\[\Tr(\rho_{k,1} \rho_{k,2})^2 = \left(\frac{2\pi}{k}\right)^{2n} \left( \sum_{\nu=1}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}}}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu}))} \right)^2 + \bigO{k^{-(2n+1)}}. \]
By expanding the square of the sum as
\[ \left( \sum_{\nu=1}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}}}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu}))} \right)^2 = \sum_{\nu=1}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}}^2}{\prod_{\ell = 1}^n \sin^2(\theta_{\ell}(m_{\nu}))} + \sum_{\substack{\nu=1}}^s \sum_{\substack{\mu=1 \\ \mu \neq \nu}}^s \frac{(\sigma_1,\sigma_2)_{m_{\nu}} (\sigma_1,\sigma_2)_{m_{\mu}}}{\prod_{\ell = 1}^n \sin(\theta_{\ell}(m_{\nu})) \sin(\theta_{\ell}(m_{\mu}))}, \]
and by using the result of Theorem \ref{thm:trace_square} together with the fact that $\sqrt{u_k + \bigO{k^{-1}}} = \sqrt{u_k} + \bigO{k^{-1}}$ whenever $(u_k)$ is bounded below by some positive constant, we obtain the desired expression.
\subsection{Super-fidelity}
Using the previous results, it is now quite easy to estimate the super-fidelity of the states $\rho_{k,1}$ and $\rho_{k,2}$ attached to $(\Gamma_1,\sigma_1)$ and $(\Gamma_2,\sigma_2)$. We introduce as in the statement of Proposition \ref{prop:purity} the functions $f_j$, $j=1,2$, such that $\sigma_j = f_j \mu_{g,\Gamma_j}$ where $\mu_{g,\Gamma_j}$ is the Riemannian volume on $\Gamma_j$ corresponding to the Riemannian metric induced by $g$ on $\Gamma_j$.
\begin{thm}
\label{thm:superfid}
The super-fidelity of $\rho_{k,1}$ and $\rho_{k,2}$ satisfies:
\[ G(\rho_{k,1},\rho_{k,2}) = 1 - \frac{1}{2} \left(\frac{2\pi}{k}\right)^{\frac{n}{2}} \left( \int_{\Gamma_1} f_1 \sigma_1 + \int_{\Gamma_2} f_2 \sigma_2 \right) + \bigO{k^{-\min\left(n,\frac{n}{2} + 1\right)}}. \]
\end{thm}
\begin{proof}
Recall that $ G(\rho_{k,1},\rho_{k,2}) = \Tr(\rho_{k,1} \rho_{k,2}) + \sqrt{\left(1 - \Tr\left(\rho_{k,1}^2\right) \right) \left( 1 - \Tr\left(\rho_{k,2}^2\right) \right)}$. The first term has been estimated in Theorem \ref{thm:trace}; it is a $\bigO{k^{-n}}$. Moreover, thanks to Proposition \ref{prop:purity}, we know that
\[ \Tr\left(\rho_{k,j}^2\right) = \left( \frac{2 \pi}{k} \right)^{\frac{n}{2}} \left( \int_{\Gamma_j} f_j \sigma_j + \bigO{k^{-1}} \right),\]
for $j=1,2$, therefore
\[ \left(1 - \Tr\left(\rho_{k,1}^2\right) \right) \left( 1 - \Tr\left(\rho_{k,2}^2\right) \right) = 1 - \left( \frac{2 \pi}{k} \right)^{\frac{n}{2}} \left( \int_{\Gamma_1} f_1 \sigma_1 + \int_{\Gamma_2} f_2 \sigma_2 \right) + \bigO{k^{-\min\left(n,\frac{n}{2} + 1\right)}}. \]
We deduce from this and from $\sqrt{1-x} = 1 - \tfrac{x}{2} + \bigO{x^2}$ that
\[ \sqrt{\left(1 - \Tr\left(\rho_{k,1}^2\right) \right) \left( 1 - \Tr\left(\rho_{k,2}^2\right) \right)} = 1 - \frac{1}{2} \left( \frac{2 \pi}{k} \right)^{\frac{n}{2}} \left( \int_{\Gamma_1} f_1 \sigma_1 + \int_{\Gamma_2} f_2 \sigma_2 \right) + \bigO{k^{-\min\left(n,\frac{n}{2} + 1\right)}}. \]
\end{proof}
\section{A family of examples on the two-sphere with improved upper bound for fidelity}
\label{sect:examples}
\subsection{Quantization of the sphere}
We consider the sphere $\mathbb{S}^2$ with symplectic form $-\frac{1}{2} \omega_{\mathbb{S}^2} = \frac{1}{2} \sin \varphi \ d\theta \wedge d\varphi$. Its quantization is now quite standard material, hence we only describe it quickly; we work with $\mathbb{C}\xspace\mathbb{P}\xspace^1$ endowed with the Fubini-Study symplectic form $\omega_{\text{FS}} = \frac{i dz \wedge d\bar{z}}{(1+|z|^2)^2}$, and use the fact that the stereographic projection (from the north pole onto the equatorial plane) $\pi_N: \mathbb{S}\xspace^2 \to \mathbb{C}\xspace\mathbb{P}\xspace^1$ is a symplectomorphism. On $\mathbb{C}\xspace\mathbb{P}\xspace^1$, we consider the hyperplane bundle $L = \mathcal{O}(1)$, i.e. the dual of the tautological line bundle $ \mathcal{O}(-1) = \left\{ ([u],v) \in \mathbb{C}\xspace\mathbb{P}\xspace^1 \times \mathbb{C}\xspace^2| \ v \in \mathbb{C}\xspace u \right\}$. We endow the latter with its natural holomorphic structure and with the Hermitian form induced by the standard one on the trivial bundle $\mathbb{C}\xspace\mathbb{P}\xspace^1 \times \mathbb{C}\xspace^2$. Then $L$ is equipped with the dual Hermitian form, and its Chern connection $\nabla$ has curvature $-i\omega_{\text{FS}}$, thus $L \to \mathbb{C}\xspace\mathbb{P}\xspace^1$ is a prequantum line bundle. The following result is well-known (see for instance \cite[Theorem 15.5]{Dem}).
\begin{prop}
\label{prop:iso_homog}
There is a canonical isomorphism between $\mathcal{H}\xspace_k = H^0(\mathbb{C}\xspace\mathbb{P}\xspace^1,L^k)$ and the space $\mathbb{C}\xspace_k[Z_1,Z_2]$ of homogeneous polynomials of degree $k$ in two complex variables.
\end{prop}
This isomorphism is constructed by sending a section $s$ of $L^k \to \mathbb{C}\xspace\mathbb{P}\xspace^1$ to the function $u \in \mathbb{C}\xspace^2 \setminus \{0\} \mapsto \langle s(u) | u^{\otimes k} \rangle$, where $\langle \cdot | \cdot \rangle$ stands for the duality pairing between fibers of $\mathcal{O}(k)$ and $\mathcal{O}(-k)$. This isomorphism yields the scalar product
\[ \scal{P}{Q}_k = \int_{\mathbb{C}\xspace} \frac{P(1,z) \overline{Q(1,z)}}{(1+|z|^2)^{k+2}} \ |dz \wedge d\bar{z}| \]
on $\mathbb{C}\xspace_k[Z_1,Z_2]$, and one readily checks that the monomials
\[ e_{\ell} = \sqrt{\frac{(k+1)\binom{k}{\ell}}{2\pi}} \ Z_1^{k-\ell} Z_2^{\ell}, \quad 0 \leq \ell \leq k \]
form an orthonormal basis of $\mathbb{C}\xspace_k[Z_1,Z_2]$.
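This orthonormality claim can be verified numerically: in polar coordinates $z = re^{i\theta}$ one has $|dz \wedge d\bar{z}| = 2r \, dr \, d\theta$, and the substitution $r = \tan \varphi$ turns the radial integral into $\int_0^{\pi/2} \sin^{\ell+m+1}\varphi \, \cos^{2k+1-\ell-m}\varphi \, d\varphi$. A short illustrative Python check, here for $k=3$:

```python
import math, cmath

k = 3

def c(l):  # normalizing constant of e_l
    return math.sqrt((k + 1) * math.comb(k, l) / (2 * math.pi))

def inner(l, m, Nr=2000, Nt=64):
    # <e_l, e_m> = 2 c_l c_m * (angular integral) * (radial integral),
    # after z = r e^{i theta} and r = tan(phi), phi in (0, pi/2).
    radial = 0.0
    h = (math.pi / 2) / Nr
    for i in range(Nr):
        phi = (i + 0.5) * h
        radial += math.sin(phi) ** (l + m + 1) * math.cos(phi) ** (2 * k + 1 - l - m)
    radial *= h
    ht = 2 * math.pi / Nt
    angular = sum(cmath.exp(1j * (l - m) * (j + 0.5) * ht) for j in range(Nt)) * ht
    return 2 * c(l) * c(m) * angular * radial

# maximal deviation of the Gram matrix from the identity
err = max(abs(inner(l, m) - (1 if l == m else 0))
          for l in range(k + 1) for m in range(k + 1))
print(err)
```

The angular midpoint sum vanishes exactly for $\ell \neq m$, and the radial integral reproduces the Beta function value $\frac{1}{2}\frac{\ell!(k-\ell)!}{(k+1)!}$ on the diagonal.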
Let $U_0 = \{ [z_0:z_1] \in \mathbb{C}\xspace\mathbb{P}\xspace^1| \ z_0 \neq 0 \}$ be the first standard coordinate chart, endowed with the complex coordinate $z= z_1 / z_0$. Over $U_0$, we define the local non-vanishing section $s_0$ of $\mathcal{O}(-1)$ by $s_0(z) = ([1:z],(1,z))$, and we introduce the dual section $t_0$, i.e. the unique section of $L \to U_0$ such that $t_0(s_0) = 1$. Then the above isomorphism sends $P \in \mathbb{C}\xspace_k[Z_1,Z_2]$ to $P(1,z) t_0(z)$, and one readily checks that
\begin{equation} \Pi_k(z,w) = \frac{k+1}{2\pi} (1+z \bar{w})^k \ t_0^k(z) \otimes \overline{t_0}^k(w). \label{eq:proj_sphere}\end{equation}
The local section $u = (1 + |z|^2)^{1/2} \ t_0$ has unit norm, and the coherent vector $\xi_k^{u(z)}$ satisfies
\[ \xi_k^{u(z)}(w) = \frac{k+1}{2\pi} \frac{(1+\bar{z}w)^k}{(1+|z|^2)^{\frac{k}{2}}} \ t_0^k(w), \qquad \left\| \xi_k^{u(z)} \right\|^2_k = \frac{k+1}{2\pi}. \]
Hence, a straightforward computation yields that for $0 \leq \ell \leq k$,
\[ P_k^z e_{\ell} = \sqrt{\frac{k+1}{2\pi}} \frac{z^{\ell} \sqrt{\binom{k}{\ell}}}{(1+|z|^2)^k} (1+\bar{z}w)^k \ t_0^k(w) = \frac{z^{\ell} \sqrt{\binom{k}{\ell}}}{(1+|z|^2)^k} \sum_{m=0}^k \bar{z}^m \sqrt{\binom{k}{m}} e_m. \]
\subsection{Two orthogonal great circles on the sphere $\mathbb{S}^2$}
We briefly explain the case of orthogonal great circles on $\mathbb{S}^2$. Let $\Gamma_1 = \{x_3=0\}$ and $\Gamma_2 = \{x_1=0\}$, with respective densities $ \sigma_1 = \frac{d\theta}{2\pi}, \sigma_2 = \frac{d\varphi}{2\pi}$. Then $\Gamma_1$ is sent by $\pi_N$ to the unit circle $\{\exp(it)| \ 0 \leq t \leq 2\pi\}$ in $\mathbb{C}\xspace$ and $\Gamma_2$ to the line $i\mathbb{R}\xspace = \{i y | \ y \in \mathbb{R}\xspace \} \subset \mathbb{C}\xspace$; moreover,
\[ (\pi_N)_* \sigma_1 = \frac{dt}{2\pi}, \qquad (\pi_N)_*\sigma_2 = \frac{dy}{\pi(1+y^2)}. \]
Let $\rho_{k,1} = \rho_k(\Gamma_1,\sigma_1)$; by definition,
\[ \scal{\rho_{k,1} e_{\ell}}{e_m} = \int_0^{2\pi} \scal{P_k^{\exp(it)}e_{\ell}}{e_m} \frac{dt}{2\pi} = \frac{1}{2^k} \sqrt{\binom{k}{\ell}\binom{k}{m}} \int_0^{2\pi} \exp(i(\ell-m)t) \frac{dt}{2\pi}; \]
hence we obtain that the matrix of $\rho_{k,1}$ in the orthonormal basis $(e_{\ell})_{0 \leq \ell \leq k}$ reads
\[ \rho_{k,1} = \frac{1}{2^k} \mathrm{diag}\left(\binom{k}{0}, \ldots, \binom{k}{\ell}, \ldots, \binom{k}{k}\right), \]
which means that $\rho_{k,1}$ is diagonal in this basis, with eigenvalues given by a binomial probability distribution. The matrix elements of $\rho_{k,2} = \rho_k(\Gamma_2,\sigma_2)$ are given by the formula
\[ \scal{\rho_{k,2} e_{\ell}}{e_m} = \frac{i^{\ell-m}}{\pi} \sqrt{\binom{k}{\ell}\binom{k}{m}} \int_{-\infty}^{+ \infty} \frac{y^{\ell + m}}{(1+y^2)^{k+1}} \ dy. \]
This integral vanishes when $\ell + m$ is odd, and if $\ell + m = 2p$ is even, it is equal to
\[ I_{k,p} = \int_{-\infty}^{+ \infty} \frac{y^{2p}}{(1+y^2)^{k+1}} \ dy = 2 \int_{0}^{+ \infty} \frac{y^{2p}}{(1+y^2)^{k+1}} \ dy. \]
We can compute this quantity by means of the Beta function, see e.g. \cite[Section 6.2]{AbraSte}.
\begin{lm}
For every $p \in \llbracket 0,k \rrbracket$, $I_{k,p} = \frac{\pi}{4^k} \frac{\binom{2k}{k} \binom{k}{p}}{\binom{2k}{2p}}$.
\end{lm}
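The substitution $y = \tan t$ transforms $I_{k,p}$ into $\int_{-\pi/2}^{\pi/2} \sin^{2p} t \, \cos^{2(k-p)} t \, dt$, which can be compared numerically with the closed form of the lemma; an illustrative sketch:

```python
import math

def I_closed(k, p):
    # Closed form from the lemma: pi / 4^k * C(2k, k) * C(k, p) / C(2k, 2p).
    return math.pi / 4 ** k * math.comb(2 * k, k) * math.comb(k, p) / math.comb(2 * k, 2 * p)

def I_numeric(k, p, N=800):
    # After y = tan(t):  y^(2p) / (1 + y^2)^(k+1) dy = sin^(2p) t cos^(2(k-p)) t dt.
    # The integrand is a trigonometric polynomial of period pi, so the midpoint
    # rule over one full period is exact up to floating-point roundoff.
    h = math.pi / N
    s = 0.0
    for i in range(N):
        t = -math.pi / 2 + (i + 0.5) * h
        s += math.sin(t) ** (2 * p) * math.cos(t) ** (2 * (k - p))
    return s * h

k = 6
for p in range(k + 1):
    print(p, I_numeric(k, p), I_closed(k, p))
```

The closed form also follows from $I_{k,p} = B\left(p+\frac{1}{2}, k-p+\frac{1}{2}\right)$ and the duplication formula for the Gamma function.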
Consequently, we obtain that
\[ \scal{\rho_{k,2} e_{m+2q}}{e_m} = \frac{(-1)^q \binom{2k}{k}}{4^k} \frac{\binom{k}{m+q} \sqrt{\binom{k}{m+2q}\binom{k}{m}}}{\binom{2k}{2(m+q)}} \]
for $0 \leq m \leq k$ and $\lceil{-m/2} \rceil \leq q \leq \lfloor (k-m)/2 \rfloor$. In particular,
\[ \scal{\rho_{k,2} e_m}{e_m} = \frac{\binom{2k}{k}}{4^k} \frac{\binom{k}{m}^2 }{\binom{2k}{2m}} = \frac{1}{4^k} \binom{2m}{m} \binom{2(k-m)}{k-m}. \]
The fact that $\Tr(\rho_{k,2}) = 1$ is then equivalent to the identity
\[ \sum_{m=0}^k \binom{2m}{m} \binom{2(k-m)}{k-m} = 4^k, \]
which can be derived from the expansion $(1-4x)^{-1/2} = \sum_{r=0}^{+\infty} \binom{2r}{r} x^r$ for every $x$ satisfying $-1/4 < x < 1/4$. Moreover, we obtain that
\begin{equation} \Tr(\rho_{k,1} \rho_{k,2}) = \frac{1}{8^k} \sum_{m=0}^k \binom{k}{m}\binom{2m}{m} \binom{2(k-m)}{k-m}. \label{eq:trace_ortho_theo} \end{equation}
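Both the rewriting of the diagonal entries above and the normalization identity $\sum_{m=0}^k \binom{2m}{m}\binom{2(k-m)}{k-m} = 4^k$ can be checked exactly with integer arithmetic; an illustrative sketch:

```python
from fractions import Fraction
from math import comb

for k in range(1, 30):
    # Diagonal entries: C(2k,k) C(k,m)^2 / C(2k,2m) = C(2m,m) C(2(k-m),k-m)
    # (the common factor 1/4^k cancels on both sides).
    for m in range(k + 1):
        lhs = Fraction(comb(2 * k, k) * comb(k, m) ** 2, comb(2 * k, 2 * m))
        rhs = comb(2 * m, m) * comb(2 * (k - m), k - m)
        assert lhs == rhs
    # Normalization Tr(rho_{k,2}) = 1, i.e. the central binomial convolution.
    assert sum(comb(2 * m, m) * comb(2 * (k - m), k - m) for m in range(k + 1)) == 4 ** k
print("identities hold for k = 1..29")
```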
$\Gamma_1$ and $\Gamma_2$ intersect transversally at $m_1 = (0,-1,0)$ and $m_2 = (0,1,0)$. Obviously $\theta_1(m_1) = \theta_1(m_2) = \frac{\pi}{2}$ and one can check that $ (\sigma_1,\sigma_2)_{m_1} = (\sigma_1,\sigma_2)_{m_2} = \frac{1}{2\pi^2}$. Therefore, Theorem \ref{thm:trace} gives $\Tr(\rho_{k,1} \rho_{k,2}) = \frac{2}{k\pi} + \bigO{k^{-2}}$. We check this numerically by plotting $k \Tr(\rho_{k,1} \rho_{k,2})$ as a function of $k$, see Figure \ref{fig:ktrace_ortho} (there most probably exist direct techniques to estimate the sum in Equation (\ref{eq:trace_ortho_theo}), but we are not familiar with them). Furthermore, Theorem \ref{thm:subfid} yields
\begin{equation} E(\rho_{k,1},\rho_{k,2}) = \frac{2}{k\pi} \left( 1 + \sqrt{\frac{2 \sqrt{2}-1}{\sqrt{2}}} \right) + \bigO{k^{-2}}. \label{eq:subfid_ortho_theo} \end{equation}
Figure \ref{fig:sub_fidelity_ortho} displays $E\left(\rho_{k,1}, \rho_{k,2}\right)$ and $k E\left(\rho_{k,1}, \rho_{k,2}\right)$ as functions of $k$.
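The convergence $k \Tr(\rho_{k,1}\rho_{k,2}) \to \frac{2}{\pi}$ predicted by Theorem \ref{thm:trace} can also be observed directly on the exact sum of Equation (\ref{eq:trace_ortho_theo}); a short illustrative script:

```python
import math

def trace_ortho(k):
    # Exact value of Tr(rho_{k,1} rho_{k,2}) from the closed-form sum.
    s = sum(math.comb(k, m) * math.comb(2 * m, m) * math.comb(2 * (k - m), k - m)
            for m in range(k + 1))
    return s / 8 ** k

for k in (10, 50, 200):
    print(k, k * trace_ortho(k), 2 / math.pi)
```

Python's exact integer arithmetic handles the large binomial coefficients, and the printed values approach $2/\pi \approx 0.6366$ at rate $\bigO{k^{-1}}$.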
\subsection{Not necessarily orthogonal great circles}
Let $(\Gamma_1, \sigma_1)$ be as in the previous example. Let $0 < \alpha \leq \pi/2$ and let $\Gamma_2^{\alpha}$ be the great circle given by the equation $x_3 = x_1 \tan \alpha$ (or $x_1 = 0$ if $\alpha = \frac{\pi}{2}$), so that $\Gamma_2^{0} = \Gamma_1$ and $\Gamma_2^{\pi/2} = \Gamma_2$ (see Figure \ref{fig:sphere}). Let $\sigma_2^{\alpha}$ be the density induced on $\Gamma_2^{\alpha}$ by $\sigma_1$ \emph{via} the rotation $R_{\alpha}$ of angle $\alpha$ about the $x_2$ axis, which sends $\Gamma_1$ to $\Gamma_2^{\alpha}$. Trying to compute explicitly the matrix elements of $\rho_{k,2}^{\alpha}$ as in the previous part leads to complicated integrals for which we do not know closed forms; therefore, numerical evaluation would require approximating these integrals, and would be costly and possibly not very accurate. Instead, we prefer to use the following method, which is more efficient.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.6]{fidelity_angle}
\end{center}
\caption{The submanifolds $\Gamma_1$ and $\Gamma_2^{\alpha}$.}
\label{fig:sphere}
\end{figure}
Let $\zeta_k: SU(2) \to GL(\mathbb{C}\xspace_k[Z_1,Z_2])$ be the natural representation of $SU(2)$ in $\mathbb{C}\xspace_k[Z_1,Z_2]$:
\begin{equation} \forall (g,P) \in SU(2) \times \mathbb{C}\xspace_k[Z_1,Z_2], \qquad \zeta_k(g)(P) = P \circ g^{-1} \label{eq:rep_su} \end{equation}
where $SU(2)$ acts on $\mathbb{C}\xspace^2$ in the standard way. Observe that this representation is unitary with respect to the scalar product on $\mathbb{C}\xspace_k[Z_1,Z_2]$ defined above. Note also that several other actions of $SU(2)$ are at play: the natural action on $\mathbb{C}\xspace^2$, which induces an action on $\mathbb{C}\xspace\mathbb{P}\xspace^1$ and on its tautological bundle, which itself induces by duality an action on the prequantum line bundle $L \to \mathbb{C}\xspace\mathbb{P}\xspace^1$, which in turn induces an action on $L^k \to \mathbb{C}\xspace\mathbb{P}\xspace^1$. Whenever the context allows us to distinguish between these actions, we denote by $gu$ the action of $g \in SU(2)$ on $u$ belonging to any of these sets. Furthermore, $SU(2)$ acts on sections of $L^k \to \mathbb{C}\xspace\mathbb{P}\xspace^1$, by the formula
\[ \forall (g,s) \in SU(2) \times \classe{\infty}{(M,L^k)}, \ \forall m \in \mathbb{C}\xspace\mathbb{P}\xspace^1, \quad (g s)(m) = g s(g^{-1} m); \]
this yields an action on holomorphic sections. The latter is compatible with $\zeta_k$ through the isomorphism introduced in Proposition \ref{prop:iso_homog}; therefore we will slightly abuse notation by using $(g,\phi) \in SU(2) \times \mathcal{H}\xspace_k \mapsto \zeta_k(g) \phi $ for this action. We now consider the matrix
\[ \tau_2 = \frac{1}{2} \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix} \in \mathfrak{su}(2) \simeq \mathfrak{so}(3) \]
which is the infinitesimal generator of rotations about the $x_2$ axis.
\begin{lm}
\label{lm:equiv}
Let $g_{\alpha} = \exp(i \alpha \tau_2) \in SU(2)$ and $U_k(\alpha) = \zeta_k(g_{\alpha})$; then $ \rho_{k,2}^{\alpha} = U_k(\alpha) \rho_{k,1} U_k(\alpha)^*$.
\end{lm}
We believe that this lemma is standard, but nonetheless give a proof in Appendix A. The operator $U_k(\alpha)$ can be computed as follows; let $\zeta_k'$ be the representation of $\mathfrak{su}(2)$ in $\mathbb{C}\xspace_k[Z_1,Z_2]$ which is the derived representation of the one given by Equation (\ref{eq:rep_su}):
\[ \forall (\xi,P) \in \mathfrak{su}(2) \times \mathbb{C}\xspace_k[Z_1,Z_2], \qquad \zeta_k'(\xi)(P) = \left.\frac{d}{dt}\right|_{t=0} \zeta_k(\exp(t \xi))(P). \]
Then $U_k(\alpha)$ can be computed as $ U_k(\alpha) = \exp(i\alpha \zeta_k'(\tau_2))$. A straightforward computation shows that, for $0 \leq \ell \leq k$,
\[ \zeta_k'(\tau_2)(e_{\ell}) = \frac{1}{2} \sqrt{(\ell+1)(k-\ell)} \ e_{\ell+1} - \frac{1}{2} \sqrt{\ell(k-\ell+1)} \ e_{\ell-1}. \]
Consequently, we can compute numerically the matrix of $U_k(\alpha)$, and thus the matrix of $\rho_{k,2}^{\alpha}$, in the basis $(e_{\ell})_{0 \leq \ell \leq k}$; therefore we can evaluate the sub-fidelity of $\rho_{k,1}$ and $\rho_{k,2}^{\alpha}$.
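As a sketch of this computation (assuming NumPy is available; the function name is ours), the matrix of $\zeta_k'(\tau_2)$ can be assembled directly from the recursion above:

```python
import numpy as np

def zeta_prime_tau2(k):
    """Matrix of zeta_k'(tau_2) in the basis (e_0, ..., e_k), following
    zeta_k'(tau_2) e_l = (1/2)sqrt((l+1)(k-l)) e_{l+1}
                       - (1/2)sqrt(l(k-l+1)) e_{l-1}."""
    A = np.zeros((k + 1, k + 1))
    for l in range(k):
        A[l + 1, l] = 0.5 * np.sqrt((l + 1) * (k - l))   # raising term
        A[l, l + 1] = -0.5 * np.sqrt((l + 1) * (k - l))  # lowering term
    return A
```

The matrix is real antisymmetric, as it should be for the derived representation of a unitary one evaluated on an element of $\mathfrak{su}(2)$; $U_k(\alpha)$ then follows by exponentiation as in the formula above, and $\rho_{k,2}^{\alpha}$ by conjugation as in Lemma \ref{lm:equiv}.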
Since $\theta_1(m_1) = \theta_1(m_2) = \alpha$ and $ (\sigma_1,\sigma_2^{\alpha})_{m_1} = (\sigma_1,\sigma_2^{\alpha})_{m_2} = \frac{1}{2\pi^2}$, Theorem \ref{thm:trace} yields
\[ \Tr(\rho_{k,1} \rho_{k,2}^{\alpha}) = \frac{2}{k\pi \sin \alpha} + \bigO{k^{-2}}. \]
We check this numerically for the case $\alpha = \frac{\pi}{4}$, see Figure \ref{fig:ktrace_pisur4}. Moreover, Theorem \ref{thm:subfid} gives
\begin{equation} E\left(\rho_{k,1}, \rho_{k,2}^{\alpha}\right) = \frac{2}{k\pi \sin \alpha} \left( 1 + \sqrt{2 - \frac{\sin \alpha}{\sqrt{1 + \sin^2 \alpha}}} \right) + \bigO{k^{-2}}. \label{eq:subfid_pi4} \end{equation}
We check this for the case $\alpha = \frac{\pi}{4}$ in Figure \ref{fig:sub_fidelity_pisur4}, and in Figure \ref{fig:subfid_angle_varying} we compare the value of the sub-fidelity for a fixed large $k$ to its theoretical equivalent as a function of $\alpha$; note that since $k$ is fixed, we cannot take $\alpha$ arbitrarily close to zero.
\subsection{Obtaining a better estimate for fidelity in this example}
It turns out that one can obtain a much better bound for the fidelity of the states $\rho_{k,1}$ and $\rho_{k,2}^{\alpha}$ defined above, by comparing it to the fidelity of certain Berezin-Toeplitz operators. Unfortunately, this strategy relies on a number of symmetries and special features of this particular example, hence it does not carry over as such to the general case. Nevertheless, it is quite remarkable that such a good estimate holds, and some parts of the proof may offer insight into how to handle the general case; this is why we give a detailed explanation of the method, which involves nontrivial steps and requires care.
\subsubsection{Comparing both states to Berezin-Toeplitz operators}
We begin by comparing $\rho_{k,1}$ to a certain Berezin-Toeplitz operator. In order to do so, we may give the following heuristic argument: this state is prepared according to a binomial distribution with respect to the orthonormal basis introduced above, with higher weight at basis elements corresponding to points that are close to the equator, where close means at distance of order $k^{-1/2}$. Indeed, it is standard that the binomial coefficients $\binom{k}{\ell}$ that are of the same order as the central binomial coefficient $\binom{k}{\lfloor k/2 \rfloor}$ are such that $|\lfloor k/2 \rfloor-\ell|$ is of order $\sqrt{k}$, and the corresponding basis elements are supported in a neighbourhood of size $k^{-1/2}$ of the equator. Consequently, when $k \to +\infty$, we expect the appearance of the density function of a normal distribution centered at $x_3 = 0$. Therefore, $\rho_{k,1}$ might be related, for $k$ large, to the Berezin-Toeplitz operator $T_k(\lambda \exp(-c k x_3^2))$ for some $c > 0$ and $\lambda \in \mathbb{R}\xspace$ (see Equation (\ref{eq:def_BTO}) for the definition of this operator). In fact, for technical reasons that will appear later, we prefer to replace $k$ by $k+1$ in this expression.
In order to be more precise, we argue as follows. The largest matrix element of $\rho_{k,1}$ is
\begin{equation} \frac{1}{2^k} \binom{k}{\lfloor k/2 \rfloor} \underset{k \to +\infty}{\sim} \sqrt\frac{2}{\pi k}. \label{eq:middle_coeff_rho1} \end{equation}
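This equivalent is easy to verify numerically with the standard library alone (a quick sanity check, not part of the argument):

```python
import math

def middle_coefficient(k):
    """Largest matrix element of rho_{k,1}, namely 2^{-k} binom(k, floor(k/2))."""
    return math.comb(k, k // 2) / 2 ** k

# ratio to the claimed equivalent sqrt(2/(pi k)); approaches 1 as k grows
ratio = middle_coefficient(2000) / math.sqrt(2 / (math.pi * 2000))
```

Already for $k = 200$ the ratio is within about $0.1\%$ of $1$, consistent with a relative correction of order $k^{-1}$.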
Moreover, the matrix elements of the Berezin-Toeplitz operator associated with a function depending only on $x_3$ can be computed as follows.
\begin{lm}
\label{lm:BTO_radial}
Let $g \in \classe{\infty}{(\mathbb{R}\xspace)}$ and let $f: \mathbb{S}\xspace^2 \to \mathbb{R}\xspace$ be defined as $f(x_1,x_2,x_3) = g(x_3)$. Then $\scal{T_k(f) e_{\ell}}{e_m}_k = 0$ if $\ell \neq m$ and
\[ \scal{T_k(f) e_{\ell}}{e_{\ell}}_k = \frac{(k+1) \binom{k}{\ell}}{2^{k+1}} \int_{-1}^1 (1 + x)^{\ell} (1 - x)^{k - \ell} g(x) \ dx. \]
\end{lm}
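The lemma translates into a short numerical routine; the sketch below (NumPy assumed, midpoint quadrature, function name ours) evaluates these diagonal entries. As a sanity check, $g \equiv 1$ must return $1$ for every $\ell$, since $T_k(1) = \mathrm{Id}$.

```python
import math
import numpy as np

def bto_radial_diagonal(k, l, g, n=200000):
    """<T_k(f) e_l, e_l>_k for f(x1, x2, x3) = g(x3), per the lemma above."""
    h = 2.0 / n
    x = -1.0 + (np.arange(n) + 0.5) * h               # midpoint nodes on [-1, 1]
    integral = np.sum((1.0 + x) ** l * (1.0 - x) ** (k - l) * g(x)) * h
    return (k + 1) * math.comb(k, l) / 2 ** (k + 1) * integral
```

The constant-symbol check amounts to the Beta integral $\int_{-1}^1 (1+x)^{\ell}(1-x)^{k-\ell}\,dx = 2^{k+1} \ell! (k-\ell)! / (k+1)!$.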
The proof is more or less a folklore computation; it is available in Appendix A. For $f_k(x_1,x_2,x_3) = \lambda \exp(-c(k+1) x_3^2)$, this gives
\begin{equation} \scal{T_k(f_k) e_{\ell}}{e_{\ell}}_k = \frac{\lambda (k+1) \binom{k}{\ell}}{2^{k+1}} \int_{-1}^1 (1 + x)^{\ell} (1 - x)^{k - \ell} \exp(-c(k+1)x^2) \ dx. \label{eq:coeffs_gaussian} \end{equation}
From this formula, we obtain that the trace
\[ \Tr(T_k(f_k)) = \frac{\lambda (k+1)}{2} \int_{-1}^1 \exp(-c(k+1)x^2) \ dx = \frac{\lambda}{2} \sqrt{\frac{(k+1)\pi}{c}} \mathrm{erf}\left(\sqrt{c(k+1)}\right), \]
where $\mathrm{erf}$ is the error function, is of order $\sqrt{k+1}$. Hence what we really want is to compare $\rho_{k,1}$ to $\tfrac{1}{\sqrt{k+1}} T_k(f_k)$, and we would like that $c$ and $\lambda$ satisfy the relation
\begin{equation} \lambda = 2 \sqrt{\frac{c}{\pi}}, \label{eq:lambda_c_1} \end{equation}
so that the latter has trace close to one. Assume for simplicity that $k$ is even; then
\[ \scal{T_k(f_k) e_{\frac{k}{2}}}{e_{\frac{k}{2}}}_k = \frac{\lambda (k+1) \binom{k}{\frac{k}{2}}}{2^{k+1}} \int_{-1}^1 (1 - x^2)^{\frac{k}{2}} \exp(-c(k+1)x^2) \ dx. \]
We can evaluate the integral by means of Laplace's method; indeed, it is of the form $\int_{-1}^1 \exp(-k\phi(x)) a(x) \ dx$ where $a(x) = \exp(-c x^2)$ and $\phi(x) = c x^2 - \frac{1}{2} \ln(1-x^2)$ for $-1 < x < 1$. We obtain that
\[ \frac{1}{\sqrt{k+1}} \scal{T_k(f_k) e_{\frac{k}{2}}}{e_{\frac{k}{2}}}_k \sim_{k \to +\infty} \frac{\lambda}{ \sqrt{(2 c +1)k}}. \]
Comparing this with Equation (\ref{eq:middle_coeff_rho1}), we see that we want $\lambda$ and $c$ to satisfy the relation
\begin{equation} \lambda = \sqrt{\frac{2(2c+1)}{\pi}}. \label{eq:lambda_c_2} \end{equation}
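The Laplace asymptotics behind this relation can be probed numerically; the following sketch (NumPy assumed, $k$ even, and $\lambda = 1$ since $\lambda$ cancels in the ratio) compares the exact middle matrix element with $\lambda/\sqrt{(2c+1)k}$:

```python
import math
import numpy as np

def normalized_middle_element(k, c, n=400000):
    """(k+1)^{-1/2} <T_k(f_k) e_{k/2}, e_{k/2}>_k with lambda = 1, by quadrature."""
    h = 2.0 / n
    x = -1.0 + (np.arange(n) + 0.5) * h
    integral = np.sum((1.0 - x * x) ** (k // 2) * np.exp(-c * (k + 1) * x * x)) * h
    coeff = (k + 1) * math.comb(k, k // 2) / 2 ** (k + 1) * integral
    return coeff / math.sqrt(k + 1)
```

For $k = 1000$ and $c = 2$, the ratio to $1/\sqrt{(2c+1)k}$ is already within a fraction of a percent of $1$.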
One cannot choose $c$ and $\lambda$ such that both Equations (\ref{eq:lambda_c_1}) and (\ref{eq:lambda_c_2}) are satisfied. In what follows, we will take any $c$ and choose $\lambda$ so that the latter is satisfied. In this case,
\[ \frac{1}{\sqrt{k+1}} \Tr(T_k(f_k)) \sim_{k \to +\infty} \sqrt{1 + \frac{1}{2c}}, \]
and the way to make this quantity become close to one is to let the constant $c$ go to $+\infty$.
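With $\lambda$ chosen as in Equation (\ref{eq:lambda_c_2}), this limit follows from the erf closed form for the trace, and a short check with the standard library confirms it:

```python
import math

def normalized_trace(k, c):
    """Tr(T_k(f_k)) / sqrt(k+1), from the erf closed form above, with
    lambda = sqrt(2(2c+1)/pi) as in Equation (eq:lambda_c_2)."""
    lam = math.sqrt(2 * (2 * c + 1) / math.pi)
    trace = 0.5 * lam * math.sqrt((k + 1) * math.pi / c) * math.erf(math.sqrt(c * (k + 1)))
    return trace / math.sqrt(k + 1)
```

For moderate $k$ the error function already equals $1$ to machine precision, so the normalized trace sits at $\sqrt{1 + 1/(2c)}$, which decreases to $1$ as $c \to +\infty$.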
This analysis should lead to a good approximation for the coefficients $\scal{\rho_{k,1} e_{\ell}}{e_\ell}_k$ where $|\ell - \tfrac{k}{2}|$ is of order $\sqrt{k}$, but there is no reason to expect this approximation to still be good for the other coefficients. Nevertheless, the following nice property holds.
\begin{lm}
\label{lm:rho_approx}
For every $c \geq 2$ and every $k \geq 1$, we have that
\begin{equation} \rho_{k,1} \leq \frac{1}{\sqrt{k+1}} T_k(f_k^c) \label{eq:bound_rho1} \end{equation}
where $f_k^c: \mathbb{S}\xspace^2 \to \mathbb{R}\xspace^+$ is given by the formula
\begin{equation} f_k^c(x_1, x_2, x_3) = \sqrt{\frac{2(2c+1)}{\pi}} \exp\left( -c (k +1 ) x_3^2 \right). \label{eq:def_fk} \end{equation}
\end{lm}
\begin{proof}
Since both operators are diagonal in the basis $(e_{\ell})_{0 \leq \ell \leq k}$, we only need to compare their respective coefficients. Since $\scal{\rho_{k,1} e_{\ell}}{e_{\ell}}_k = 2^{-k} \binom{k}{\ell}$ and in view of Equation (\ref{eq:coeffs_gaussian}), this amounts to checking that the inequality
\[ \sqrt{\frac{(k+1)(2c+1)}{2\pi}} \int_{-1}^1 (1 + x)^{\ell} (1 - x)^{k-\ell} \exp\left( - c(k+1) x^2 \right) \ dx \geq 1 \]
holds. Let us assume for the sake of simplicity that $k$ is even, the odd case being similar. One readily checks that the above integral is minimal for $\ell = \tfrac{k}{2}$. Hence we need to study
\[ I_k(c) = \sqrt{2c+1} \int_{-1}^1 (1 - x^2)^{\frac{k}{2}} \exp\left( - c(k+1) x^2 \right) \ dx. \]
One can check that $I_k$ is decreasing in $c \in [2, +\infty)$, and setting $y = \sqrt{c} \ x$ yields
\[ I_k(c) = \sqrt{2+\frac{1}{c}} \int_{-\sqrt{c}}^{\sqrt{c}} \left(1 - \frac{y^2}{c}\right)^{\frac{k}{2}} \exp\left( - (k+1) y^2 \right) \ dy \underset{c \to +\infty}{\longrightarrow} \sqrt{2} \int_{\mathbb{R}\xspace} \exp\left( - (k+1) y^2 \right) \ dy = \sqrt{\frac{2\pi}{k+1}}. \]
Thus for every $c \geq 2$, $I_k(c) \geq \sqrt{\frac{2\pi}{k+1}}$, which implies the above inequality.
\end{proof}
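For small $k$ the inequality of Lemma \ref{lm:rho_approx} can also be verified entry by entry; here is a sketch (NumPy assumed, $c = 2$, function name ours):

```python
import math
import numpy as np

def rho1_dominated(k, c=2.0, n=200000):
    """Check 2^{-k} binom(k, l) <= (k+1)^{-1/2} <T_k(f_k^c) e_l, e_l>_k for all l."""
    lam = math.sqrt(2 * (2 * c + 1) / math.pi)
    h = 2.0 / n
    x = -1.0 + (np.arange(n) + 0.5) * h
    for l in range(k + 1):
        integral = np.sum((1.0 + x) ** l * (1.0 - x) ** (k - l)
                          * np.exp(-c * (k + 1) * x * x)) * h
        rhs = lam * (k + 1) * math.comb(k, l) / 2 ** (k + 1) * integral / math.sqrt(k + 1)
        if math.comb(k, l) / 2 ** k > rhs:
            return False
    return True
```

In accordance with the proof, the inequality is tightest at the middle coefficient $\ell = \lfloor k/2 \rfloor$, where the margin is below one percent already for $k = 10$.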
The next step is to observe that there is an exact version of Egorov's theorem for rotations on $\mathbb{S}\xspace^2$. This is well-known, but we give a simple proof using our notation, in Appendix A, for the sake of completeness.
\begin{prop}
\label{prop:exact_egorov}
Let $f \in \classe{\infty}{(M)}$, $k \geq 1$ and $\beta \in [0,2\pi]$. Let $U_k(\beta) = \zeta_k(g_{\beta})$ as above; then $U_k(\beta) T_k(f) U_k(\beta)^* = T_k(f \circ R_{-\beta})$, where we recall that $R_{\gamma}$ is the rotation of angle $\gamma$ about the $x_2$ axis.
\end{prop}
Since conjugation by a unitary operator preserves the order, this implies that
\begin{equation} \rho_{k,2}^{\alpha} \leq \frac{1}{\sqrt{k+1}} T_k(f_k^c \circ R_{-\alpha}) \label{eq:bound_rho2} \end{equation}
for every $c > 0$, with $f_k^c$ as above. This allows us to obtain the following upper bound.
\begin{prop}
\label{prop:bound_fid_rho_BTO}
The fidelity of $\rho_{k,1}$ and $\rho_{k,2}^{\alpha}$ satisfies
\[ F(\rho_{k,1}, \rho_{k,2}^{\alpha}) \leq \frac{1}{k+1} F\left(T_k(f_k^c), T_k(f_k^c \circ R_{-\alpha})\right), \]
for every $c \geq 2$ and $k \geq 1$, where $f_k^c$ is the function defined in Equation (\ref{eq:def_fk}).
\end{prop}
\begin{proof}
This immediately follows from Equations (\ref{eq:bound_rho1}) and (\ref{eq:bound_rho2}) and from the monotonicity of the fidelity, see for instance \cite{Mol}: if $A, B, C$ are positive semidefinite Hermitian operators with $A \leq B$, then $F(A,C) \leq F(B,C)$.
\end{proof}
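This monotonicity is straightforward to illustrate on random matrices; the sketch below (NumPy assumed, a numerical illustration rather than part of the proof) implements $F(A,B) = \|\sqrt{A}\sqrt{B}\|_{\Tr}^2$ via an eigendecomposition-based square root:

```python
import numpy as np

def psd_sqrt(M):
    """Square root of a positive semidefinite Hermitian matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def fidelity(A, B):
    """F(A, B) = || sqrt(A) sqrt(B) ||_Tr^2; trace norm = sum of singular values."""
    s = np.linalg.svd(psd_sqrt(A) @ psd_sqrt(B), compute_uv=False)
    return float(np.sum(s)) ** 2

rng = np.random.default_rng(0)
X, Y, Z = (rng.normal(size=(5, 5)) for _ in range(3))
A, C = X @ X.T, Y @ Y.T          # random positive semidefinite matrices
B = A + Z @ Z.T                  # B >= A, so F(A, C) <= F(B, C)
```

Note also the consistency check $F(A,A) = (\Tr A)^2$, which follows since $\sqrt{A}\sqrt{A} = A$ and the trace norm of a positive semidefinite operator is its trace.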
\subsubsection{Estimating the new fidelity function}
As a consequence of the previous result, if we manage to show that the fidelity of $T_k(f_k^c)$ and $T_k(f_k^c \circ R_{-\alpha})$ is of order $\bigO{1}$, we will know that the fidelity of $\rho_{k,1}$ and $\rho_{k,2}^{\alpha}$ is a $\bigO{k^{-1}}$. In Figures \ref{fig:comp_fid_BTO_pi2} and \ref{fig:comp_fid_BTO_pi4}, we compare $F(T_k(f_k^c), T_k(f_k^c \circ R_{-\alpha}))$ with the rescaled fidelity $k F(\rho_{k,1}, \rho_{k,2}^{\alpha})$ for $\alpha = \frac{\pi}{2}$ and $\alpha = \frac{\pi}{4}$, and different values of $c$. We observe on these numerical simulations that for large $c$, the above inequality seems to give an excellent approximation for $F(\rho_{k,1}, \rho_{k,2}^{\alpha})$. We will now try to use this fact to obtain a good upper bound on this fidelity.
\paragraph{Change of scale.} In order to estimate $F(T_k(f_k^c), T_k(f_k^c \circ R_{-\alpha}))$, a natural idea is to try to approximate the operator $ \sqrt{T_k\left(f_k^c\right)} \sqrt{T_k\left(f_k^c \circ R_{-\alpha}\right)}$ involved in the definition of this fidelity by another Berezin-Toeplitz operator. For instance, it is tempting to conjecture that the square root of $T_k(f_k^c)$ coincides with $T_k(\sqrt{f_k^c})$ up to some small remainder, but one cannot apply the usual symbolic calculus for Berezin-Toeplitz operators here, because $f_k^c$ does not belong to any reasonable symbol class. Indeed, it is of the form $f(k^{1/2} \cdot)$ for some $f$ independent of $k$, and $1/2$ is precisely the critical exponent; the product rule with sharp remainder for Berezin-Toeplitz operators \cite[Equation (P3)]{ChaPolsharp} reads, for functions of the form $f_k = f(k^{\varepsilon} \cdot)$ and $g_k = g(k^{\varepsilon} \cdot)$ with $f$ and $g$ of unit uniform norm,
\[ \| T_k(f_k) T_k(g_k) - T_k(f_k g_k) \| \leq \gamma k^{-1+2\varepsilon} \]
for some constant $\gamma > 0$. Hence the remainder is indeed small if and only if $\varepsilon < 1/2$.
In order to overcome this difficulty, the idea is to replace this power $1/2$ by $1/2 - \delta$ for some $\delta > 0$. More precisely, let
\[ f: \mathbb{S}\xspace^2 \to \mathbb{R}\xspace^+, \quad (x_1, x_2, x_3) \mapsto \sqrt{\frac{2(2c+1)}{\pi}} \exp\left( -x_3^2 \right), \]
so that $f_k^c = f\left(\sqrt{c(k+1)} \ \cdot \right)$, and given $0 < \delta < 1/2$, let $f_k^{c,\delta} = f(\sqrt{c}(k+1)^{\frac{1}{2} - \delta} \cdot)$. In order to simplify notation, we also introduce the function $g = f \circ R_{-\alpha}$, so that $g_k^c := f_k^c \circ R_{-\alpha} = g\left(\sqrt{c(k+1)} \ \cdot \right)$, and define $g_k^{c,\delta}$ in the same way. Then $f_k^c \leq f_k^{c,\delta}$, hence we obtain with the same arguments as above that
\begin{equation} F\left(T_k(f_k^c), T_k(f_k^c \circ R_{-\alpha})\right) \leq F\left(T_k(f_k^{c,\delta}), T_k(g_k^{c,\delta}) \right) = \left\| \sqrt{T_k\left(f_k^{c,\delta}\right)} \sqrt{T_k\left(g_k^{c,\delta}\right)} \right\|_{\Tr}^2. \label{eq:bound_fid_delta} \end{equation}
What we have gained is that we can use the product rule for the operators on the right-hand side of this inequality to replace $\sqrt{T_k(f_k^{c,\delta})} \sqrt{T_k(g_k^{c,\delta})}$ by $T_k\left(\sqrt{f_k^{c,\delta} g_k^{c,\delta}}\right)$, and the trace norm of the latter is easy to compute, as a simple application of the stationary phase lemma, with details available in Appendix B.
\begin{prop}
\label{prop:trace_norm_BTO}
For every $\delta \in (0,\frac{1}{2})$,
\[ \left\| T_k\left(\sqrt{f_k^{c,\delta} g_k^{c,\delta}}\right) \right\|_{\Tr} = \frac{2 k^{2\delta}}{\sqrt{c \pi} \sin \alpha} + \bigO{k^{4\delta-1} c^{-\frac{3}{2}}}. \]
\end{prop}
Note that when $c$ is of order $k^{4 \delta}$, this trace norm is a $\bigO{1}$; however, we will see below that we cannot consider such a $c$.
\paragraph{Control of the remainders.} The tricky part is to understand the structure of the remainders appearing when replacing $\sqrt{T_k(f_k^{c,\delta})} \sqrt{T_k(g_k^{c,\delta})}$ by $T_k\left(\sqrt{f_k^{c,\delta} g_k^{c,\delta}}\right)$. By the product rule \cite[Equation (P3)]{ChaPolsharp}, there exists $\gamma > 0$ such that $T_k\left( \sqrt{f_k^{c,\delta}} \right)^2 = T_k\left(f_k^{c,\delta}\right) + A_k$ where $\| A_k \| \leq \gamma c^{\frac{3}{2}} k^{-2\delta}$. Since the square root is operator monotone, this yields \cite{And} $T_k\left( \sqrt{f_k^{c,\delta}} \right) = \sqrt{T_k\left(f_k^{c,\delta}\right)} + R_k$ where $\| R_k \| \leq \gamma_1 c^{\frac{3}{4}} k^{-\delta}$. Since $\sqrt{T_k\left(f_k^{c,\delta}\right)}$ and $T_k\left( \sqrt{f_k^{c,\delta}} \right)$
have norm smaller than some constant times $c^{\frac{1}{4}}$, $\| R_k \| \leq \gamma_2 \min(c^{\frac{1}{4}}, c^{\frac{3}{4}} k^{-\delta})$. By applying the exact version of Egorov's theorem stated earlier, we deduce from this that $ \sqrt{T_k\left(g_k^{c,\delta}\right)} = T_k\left( \sqrt{g_k^{c,\delta}} \right) + S_k $
with $\| S_k \| \leq \gamma_2 \min(c^{\frac{1}{4}}, c^{\frac{3}{4}} k^{-\delta})$. Now, the triangle inequality for the trace norm reads
\begin{equation} \begin{split} \left\| \sqrt{T_k\left(f_k^{c,\delta}\right)} \sqrt{T_k\left(g_k^{c,\delta}\right)} \right\|_{\Tr} \leq \left\|T_k\left( \sqrt{f_k^{c,\delta}} \right) T_k\left( \sqrt{g_k^{c,\delta}} \right) \right\|_{\Tr} + \left\|T_k\left( \sqrt{f_k^{c,\delta}} \right) S_k \right\|_{\Tr} \\ + \left\|R_k T_k\left( \sqrt{g_k^{c,\delta}} \right) \right\|_{\Tr} + \left\|R_k S_k \right\|_{\Tr}. \end{split} \label{eq:estimate_tracenorm}\end{equation}
We start by estimating the last three terms on the right-hand side of this equation. This is in fact delicate, since we want to discriminate between what happens near the intersection points of $\Gamma_1$ and $\Gamma_2^{\alpha}$ and what happens away from these points. In order to do so, we consider a cutoff function $\chi \in \classe{\infty}{(\mathbb{R}\xspace,\mathbb{R}\xspace^+)}$ smaller than one, equal to one on $[-1/2,1/2]$ and vanishing outside $(-1,1)$, and we define for $r > 1$
\[\chi_k^{r,\delta}: \mathbb{S}\xspace^2 \to \mathbb{R}\xspace, \quad (x_1, x_2, x_3) \mapsto \chi\left(r k^{\frac{1}{2} - \delta} x_3\right) \chi\left(r k^{\frac{1}{2} - \delta} (x_3 \circ R_{-\alpha})\right), \]
so that $\chi_k^{r,\delta}$ vanishes outside the union of two ``parallelograms'' centered at each of these intersection points and with side length of order $r^{-1} k^{\delta - \frac{1}{2}}$. Writing $1 = \chi_k^{r,\delta} + 1 - \chi_k^{r,\delta}$ and using the triangle inequality, we obtain that
\begin{equation} \left\|T_k\left( \sqrt{f_k^{c,\delta}} \right) S_k \right\|_{\Tr} \leq \left\|T_k\left( \chi_k^{r,\delta} \sqrt{f_k^{c,\delta}} \right) S_k \right\|_{\Tr} + \left\|T_k\left( (1 - \chi_k^{r,\delta}) \sqrt{f_k^{c,\delta}} \right) S_k \right\|_{\Tr}. \label{eq:first_term} \end{equation}
Regarding the first term, H\"older's inequality for Schatten norms yields
\[ \left\|T_k\left( \chi_k^{r,\delta} \sqrt{f_k^{c,\delta}} \right) S_k \right\|_{\Tr} \leq \left\|T_k\left( \chi_k^{r,\delta} \sqrt{f_k^{c,\delta}} \right) \right\|_{\Tr} \left\| S_k \right\| = \Tr\left( T_k\left( \chi_k^{r,\delta} \sqrt{f_k^{c,\delta}} \right) \right) \left\| S_k \right\|, \]
where the last equality comes from the fact that $T_k\left( \chi_k^{r,\delta} \sqrt{f_k^{c,\delta}} \right) \geq 0$ since $\chi_k^{r,\delta} \sqrt{f_k^{c,\delta}} $ takes its values in $\mathbb{R}\xspace^+$. The trace of this operator satisfies
\[ \Tr\left( T_k\left( \chi_k^{r,\delta} \sqrt{f_k^{c,\delta}} \right) \right) = \frac{k+1}{2\pi} \int_{\mathbb{S}\xspace^2} \chi_k^{r,\delta} \sqrt{f_k^{c,\delta}} \ d\mu \leq \frac{k+1}{2\pi} \left(\frac{2(2c+1)}{\pi}\right)^{\frac{1}{4}} \int_{\mathbb{S}\xspace^2} \chi_k^{r,\delta} \ d\mu, \]
hence it is a $\bigO{k^{2\delta} r^{-2} c^{\frac{1}{4}}}$, since the area of each of the aforementioned parallelograms is of order $r^{-2} k^{2\delta - 1}$. Consequently,
\[ \left\|T_k\left( \chi_k^{r,\delta} \sqrt{f_k^{c,\delta}} \right) S_k \right\|_{\Tr} = \bigO{k^{2\delta} r^{-2}\min(c^{\frac{1}{2}}, ck^{-\delta})}. \]
In order to estimate the second term on the right-hand side of Equation (\ref{eq:first_term}), we use once again H\"older's inequality to derive
\[ \left\|T_k\left( (1 - \chi_k^{r,\delta}) \sqrt{f_k^{c,\delta}} \right) S_k \right\|_{\Tr} \leq \left\|T_k\left( (1 - \chi_k^{r,\delta}) \sqrt{f_k^{c,\delta}} \right) \right\| \Tr(S_k). \]
We have that
\[ \left\|T_k\left( (1 - \chi_k^{r,\delta}) \sqrt{f_k^{c,\delta}} \right) \right\| \leq \left\| (1 - \chi_k^{r,\delta}) \sqrt{f_k^{c,\delta}} \right\|_{\infty} = \bigO{c^{\frac{1}{4}} \exp(-r^{-2} c)}. \]
Since moreover $\Tr(S_k) \leq \dim(\mathcal{H}\xspace_k) \| S_k \|$, we obtain that
\[ \left\|T_k\left( (1 - \chi_k^{r,\delta}) \sqrt{f_k^{c,\delta}} \right) S_k \right\|_{\Tr} = \bigO{k \exp(-r^{-2} c) \min(c^{\frac{1}{2}}, c k^{-\delta})}, \]
and finally, we deduce from Equation (\ref{eq:first_term}) that $\left\|T_k\left( \sqrt{f_k^{c,\delta}} \right) S_k \right\|_{\Tr} = \bigO{\varepsilon(c,r,k)}$ where
\begin{equation} \varepsilon(c,r,k) = \max\left(k^{2\delta} r^{-2}, k \exp(-r^{-2} c)\right) \min\left(c^{\frac{1}{2}}, c k^{-\delta}\right). \label{eq:epsilon}\end{equation}
The trace norm of $R_k T_k\left( \sqrt{g_k^{c,\delta}} \right)$ can be estimated in a similar way. It remains to control the trace norm of $R_k S_k$; we do not expect this term to be small. However, we can say the following: from Lemma \ref{lm:BTO_radial}, we know that both $T_k\left( \sqrt{f_k^{c,\delta}} \right)$ and $ \sqrt{T_k\left(f_k^{c,\delta}\right)}$ are diagonal in the basis $(e_{\ell})_{0 \leq \ell \leq k}$, hence $R_k$ also is. Since moreover $R_k \leq T_k\left( \sqrt{f_k^{c,\delta}} \right)$, we conclude that $R_k^2 \leq T_k\left( \sqrt{f_k^{c,\delta}} \right)^2$. Thus, it follows from Proposition \ref{prop:exact_egorov} that $S_k^2 \leq T_k\left( \sqrt{g_k^{c,\delta}} \right)^2$ as well. Therefore, the monotonicity of the fidelity function yields
\[ \left\|R_k S_k \right\|_{\Tr} = \sqrt{F(R_k^2,S_k^2)} \leq \sqrt{F\left(T_k\left( \sqrt{f_k^{c,\delta}} \right)^2, T_k\left( \sqrt{g_k^{c,\delta}}\right)^2 \right)} = \left\|T_k\left( \sqrt{f_k^{c,\delta}} \right) T_k\left( \sqrt{g_k^{c,\delta}} \right) \right\|_{\Tr}. \]
Using all of the above estimates in Equation (\ref{eq:estimate_tracenorm}), we finally obtain that
\begin{equation} \left\| \sqrt{T_k\left(f_k^{c,\delta}\right)} \sqrt{T_k\left(g_k^{c,\delta}\right)} \right\|_{\Tr} \leq 2 \left\|T_k\left( \sqrt{f_k^{c,\delta}} \right) T_k\left( \sqrt{g_k^{c,\delta}} \right) \right\|_{\Tr} + \bigO{\varepsilon(c,r,k)}, \label{eq:second_estimate_tracenorm}\end{equation}
see Equation (\ref{eq:epsilon}). It remains to control the remainders which appear when we replace $\left\|T_k\left( \sqrt{f_k^{c,\delta}} \right) T_k\left( \sqrt{g_k^{c,\delta}} \right) \right\|_{\Tr}$ by $\left\|T_k\left( \sqrt{f_k^{c,\delta} g_k^{c,\delta}} \right) \right\|_{\Tr} $. We claim that we can argue as before to control them, thanks to the cutoff function $\chi_k^{r,\delta}$; indeed, the uniform norm of the function $(1 - \chi_k^{r,\delta})\sqrt{f_k^{c,\delta} g_k^{c,\delta}}$ is also bounded by some constant times $\exp(- \lambda r^{-2} c)$ where $\lambda > 0$ does not depend on $c, r, k$. Hence we get the estimate
\[ \left\| \sqrt{T_k\left(f_k^{c,\delta}\right)} \sqrt{T_k\left(g_k^{c,\delta}\right)} \right\|_{\Tr} \leq 2 \left\|T_k\left( \sqrt{f_k^{c,\delta} g_k^{c,\delta}} \right) \right\|_{\Tr} + \bigO{\varepsilon(c,r,k)}, \]
which yields the following result.
\begin{thm}
\label{thm:fid_sphere}
The fidelity of $\rho_{k,1}$ and $\rho_{k,2}^{\alpha}$ satisfies, for every $\delta \in (0,\frac{1}{2}]$,
\[ F(\rho_{k,1},\rho_{k,2}^{\alpha}) \leq \frac{16 k^{3\delta - 1}}{\pi \sin^2 \alpha} + \bigO{k^{\frac{25 \delta}{12} - 1}}. \]
\end{thm}
\begin{proof}
The above inequality and Proposition \ref{prop:trace_norm_BTO} yield
\[ \left\| \sqrt{T_k\left(f_k^{c,\delta}\right)} \sqrt{T_k\left(g_k^{c,\delta}\right)} \right\|_{\Tr} \leq \frac{4 k^{2\delta}}{\sqrt{c \pi} \sin \alpha} + \nu(c,r,k) \]
where $\nu(c,r,k) = \bigO{k^{4\delta-1} c^{-\frac{3}{2}}} + \bigO{\varepsilon(c,r,k)}$. We would like to take $c = k^{4 \delta}$ so that the first term is a $\bigO{1}$; but then $\nu(c,r,k)$ would be a $\bigO{\max(k^{4\delta} r^{-2}, k^{1 + 2\delta} \exp(- \lambda k^{4 \delta} r^{-2} ))}$, which cannot be made into a $o(1)$ no matter which $r$ we choose. So instead we choose $c = k^{\delta}$, so that the first term is a $\bigO{k^{\frac{3\delta}{2}}}$ and
\[ \nu(c,r,k) = \bigO{k^{\frac{5\delta}{2}-1}} + \bigO{\max(k^{2\delta} r^{-2}, k \exp(- \lambda r^{-2} k^{\delta})) }. \]
We want the term in the exponential to be of order $k^{\varepsilon}$ for some $\varepsilon > 0$, and at the same time that $k^{2\delta} r^{-2} = o(k^{\frac{3\delta}{2}})$. In order to do so, we can choose for instance $r = k^{\frac{\delta}{3}}$; then
\[ \nu(c,r,k) = \bigO{k^{\frac{5\delta}{2}-1}} + \bigO{\max(k^{\frac{4\delta}{3}}, k \exp(- \lambda k^{\frac{\delta}{3}})) } = \bigO{k^{\frac{4\delta}{3}}}. \]
Thus, for these choices, we obtain that
\[ \left\| \sqrt{T_k\left(f_k^{c,\delta}\right)} \sqrt{T_k\left(g_k^{c,\delta}\right)} \right\|_{\Tr} \leq \frac{4 k^{\frac{3\delta}{2}}}{\sqrt{\pi} \sin \alpha} + \bigO{k^{\frac{4\delta}{3}}}. \]
Indeed, $\frac{5 \delta}{2} - 1 < \frac{4 \delta}{3}$ since $0 < \delta \leq 1/2$. Consequently, we deduce from Equation (\ref{eq:bound_fid_delta}) that
\[ F\left(T_k(f_k^c), T_k(f_k^c \circ R_{-\alpha})\right) \leq \frac{16 k^{3 \delta}}{\pi \sin^2 \alpha} + \bigO{k^{\frac{25\delta}{12}}} \]
for such $c$, and we use Proposition \ref{prop:bound_fid_rho_BTO} to conclude.
\end{proof}
We conjecture that the constant appearing in this result is not so bad, i.e. that, in fact, this fidelity has an equivalent of the form $F(\rho_{k,1},\rho_{k,2}^{\alpha}) \sim \frac{C}{k \sin^2 \alpha}$ for some constant $C > 0$ when $k$ goes to infinity. We investigate this conjecture in Figure \ref{fig:fid_angle_varying}, where we display the (rescaled) fidelity of $\rho_{k,1}$ and $\rho_{k,2}^{\alpha}$ for some fixed large $k$, as a function of the angle $\alpha$. This figure suggests that our conjecture may hold, provided we allow $C = C(\alpha)$ to be a function of $\alpha$ taking its values in a small interval.
We display the fidelity of $\rho_{k,1}$ and $\rho_{k,2}^{\alpha}$ together with their sub-fidelity, as functions of $k$, in Figures \ref{fig:fid_vs_subfid_pi2} (where $\alpha = \frac{\pi}{2}$) and \ref{fig:fid_vs_subfid_pi3} (where $\alpha = \frac{\pi}{3}$).
\section{Numerics and a conjecture}
\label{sect:numerics}
\subsection{Numerical computations}
We gather here the outcome of numerical simulations for our examples on $\mathbb{S}\xspace^2$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.35]{ktrace_ortho.pdf}
\end{center}
\caption{The blue circles represent $k \Tr(\rho_{k,1} \rho_{k,2})$ as a function of $k$, for $1 \leq k \leq 50$, computed numerically from Equation (\ref{eq:trace_ortho_theo}). The red line is the theoretical limit $\frac{2}{\pi}$.}
\label{fig:ktrace_ortho}
\end{figure}
\begin{figure}[H]
\hspace{-5mm}
\subfigure[The blue crosses represent $E(\rho_{k,1},\rho_{k,2})$, while the red circles stand for the first term on the right-hand side of Equation (\ref{eq:subfid_ortho_theo}).]{\includegraphics[scale=0.29]{sub_fidelity_pi_2} }
\hspace{5mm}
\subfigure[The blue crosses represent $k E(\rho_{k,1},\rho_{k,2})$, and the red line corresponds to the constant $\frac{2}{\pi} \left( 1 + \sqrt{\frac{2 \sqrt{2}-1}{\sqrt{2}}} \right)$.]{\includegraphics[scale=0.29]{sub_fidelity_pi_2_resc} }
\caption{Sub-fidelity $E(\rho_{k,1},\rho_{k,2})$ and $k E(\rho_{k,1},\rho_{k,2})$, as functions of $k$, for $1 \leq k \leq 50$.}
\label{fig:sub_fidelity_ortho}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.35]{ktrace_pisur4}
\end{center}
\caption{The blue circles represent $k \Tr(\rho_{k,1} \rho_{k,2}^{\alpha})$ as a function of $k$ for $\alpha = \frac{\pi}{4}$, $1 \leq k \leq 100$. The red line corresponds to the theoretical limit $\frac{2}{\pi \sin \alpha} = \frac{2 \sqrt{2}}{\pi}$.}
\label{fig:ktrace_pisur4}
\end{figure}
\begin{figure}[H]
\hspace{-5mm}
\subfigure[The blue crosses correspond to $E(\rho_{k,1},\rho_{k,2}^{\alpha})$, and the red circles correspond to the first term on the right-hand side of Equation (\ref{eq:subfid_pi4}).]{\includegraphics[scale=0.3]{sub_fidelity_pi_4} }
\hspace{5mm}
\subfigure[The blue crosses correspond to the quantity $k E(\rho_{k,1},\rho_{k,2}^{\alpha})$, while the red line represents the constant $\frac{2}{\pi \sin \alpha} \left( 1 + \sqrt{2 - \frac{\sin \alpha}{\sqrt{1 + \sin^2 \alpha}}} \right) = \frac{2\sqrt{2}}{\pi} \left( 1 + \sqrt{2 - \frac{1}{\sqrt{3}}} \right)$.]{\includegraphics[scale=0.3]{sub_fidelity_pi_4_resc} }
\caption{Sub-fidelity $E(\rho_{k,1},\rho_{k,2}^{\alpha})$ and $k E(\rho_{k,1},\rho_{k,2}^{\alpha})$, as functions of $k$, for $1 \leq k \leq 50$ and $\alpha = \frac{\pi}{4}$. }
\label{fig:sub_fidelity_pisur4}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.33]{sub_fidelity_angle_varying_k_500}
\end{center}
\caption{The blue circles represent the value of $k E\left(\rho_{k,1}, \rho_{k,2}^{\alpha}\right) $ as a function of $\alpha$ for $k = 500$ and $0.2 \leq \alpha \leq \frac{\pi}{2}$. The red line corresponds to the theoretical equivalent $\alpha \mapsto \frac{2}{\pi \sin \alpha} \left( 1 + \sqrt{2 - \frac{\sin \alpha}{\sqrt{1 + \sin^2 \alpha}}} \right)$ obtained in Equation (\ref{eq:subfid_pi4}).}
\label{fig:subfid_angle_varying}
\end{figure}
\begin{figure}[H]
\includegraphics[scale=0.24]{comparison_fid_BTO_200_pi2}
\caption{Comparison between the rescaled fidelity $k F(\rho_{k,1}, \rho_{k,2}^{\alpha})$ (red circles) and $F(T_k(f_k^c), T_k(f_k^c \circ R_{-\alpha}))$ for $c = 2$ (blue diamonds), $c = 10$ (green squares) and $c = 50$ (black pentagons); here $\alpha = \frac{\pi}{2}$ and $1 \leq k \leq 200$.}
\label{fig:comp_fid_BTO_pi2}
\end{figure}
\begin{figure}[H]
\includegraphics[scale=0.24]{comparison_fid_BTO_200_pi4}
\caption{Comparison between the rescaled fidelity $k F(\rho_{k,1}, \rho_{k,2}^{\alpha})$ (red circles) and $F(T_k(f_k^c), T_k(f_k^c \circ R_{-\alpha}))$ for $c = 2$ (blue diamonds), $c = 10$ (green squares) and $c = 50$ (black pentagons); here $\alpha = \frac{\pi}{4}$ and $1 \leq k \leq 200$.}
\label{fig:comp_fid_BTO_pi4}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.35]{fidelity_angle_varying_k_200}
\end{center}
\caption{The blue circles represent the value of $k F\left(\rho_{k,1}, \rho_{k,2}^{\alpha}\right) $ as a function of $\alpha$ for $k = 200$ and $0.2 \leq \alpha \leq \frac{\pi}{2}$. The red line corresponds to the conjectural equivalent $\alpha \mapsto \frac{C}{\sin^2 \alpha}$, where $C$ has been determined numerically from the case $\alpha = \frac{\pi}{2}$.}
\label{fig:fid_angle_varying}
\end{figure}
\begin{figure}[H]
\hspace{-5mm}
\subfigure[$E(\rho_{k,1},\rho_{k,2}^{\alpha})$ and $F(\rho_{k,1},\rho_{k,2}^{\alpha})$.]{\includegraphics[scale=0.3]{comparison_pi_2} }
\subfigure[$k E(\rho_{k,1},\rho_{k,2}^{\alpha})$ and $ k F(\rho_{k,1},\rho_{k,2}^{\alpha})$.]{\includegraphics[scale=0.3]{comparison_pi_2_resc} }
\caption{Comparison between the fidelity and sub-fidelity of $\rho_{k,1}$ and $\rho_{k,2}$, and their rescaled versions, as functions of $k$, $1 \leq k \leq 200$. The blue diamonds correspond to sub-fidelity, while the red circles represent fidelity.}
\label{fig:fid_vs_subfid_pi2}
\end{figure}
\begin{figure}[H]
\hspace{-5mm}
\subfigure[$E(\rho_{k,1},\rho_{k,2}^{\alpha})$ and $F(\rho_{k,1},\rho_{k,2}^{\alpha})$.]{\includegraphics[scale=0.3]{comparison_pi_3} }
\subfigure[$k E(\rho_{k,1},\rho_{k,2}^{\alpha})$ and $ kF(\rho_{k,1},\rho_{k,2}^{\alpha})$.]{\includegraphics[scale=0.3]{comparison_pi_3_resc} }
\caption{Comparison between fidelity and sub-fidelity of $\rho_{k,1}$ and $\rho_{k,2}^{\alpha}$ for $\alpha = \frac{\pi}{3}$, as functions of $k$, $1 \leq k \leq 200$. The blue diamonds correspond to sub-fidelity, while the red circles represent fidelity.}
\label{fig:fid_vs_subfid_pi3}
\end{figure}
\subsection{Comparison between fidelity and sub-fidelity}
In view of the previous results, we expect the fidelity to be of the same order as the sub-fidelity, namely $\bigO{k^{-1}}$, but there is no reason that their equivalents are the same. In fact, we already know how the constants compare since $F \geq E$. These considerations lead us to the following conjecture.
\begin{conj}
Let $(\Gamma_1,\sigma_1)$ and $(\Gamma_2,\sigma_2)$ be two closed connected Lagrangian submanifolds, equipped with probability densities, of a closed quantizable K\"ahler manifold $M$, intersecting transversally at a finite number of points. Let $C((\Gamma_1,\sigma_1),(\Gamma_2,\sigma_2))$ be as in Theorem \ref{thm:subfid}. Then there exists some constant $\widetilde{C}((\Gamma_1,\sigma_1),(\Gamma_2,\sigma_2)) \geq C((\Gamma_1,\sigma_1),(\Gamma_2,\sigma_2))$
such that
\[ F(\rho_{k,1},\rho_{k,2}) = \left(\frac{2\pi}{k}\right)^{n} \widetilde{C}((\Gamma_1,\sigma_1),(\Gamma_2,\sigma_2)) + \bigO{k^{-(n+1)}}. \]
\end{conj}
This would mean that the fidelity is of the same order of magnitude as the sub-fidelity. Besides evidence given by this example, this conjecture seems reasonable for the two following reasons. The first one is that the states that we consider are far from pure states, hence their super-fidelity is a very bad upper bound for their fidelity and we expect the latter to be much closer to the sub-fidelity. The second one is that when $\psi_k, \phi_k$ are pure states, i.e. elements in $\mathcal{H}\xspace_k$ of unit norm, then their fidelity is given by $F(\phi_k,\psi_k) = |\scal{\phi_k}{\psi_k}|^2$.
But it is known (see \cite{BorPauUri} but also \cite[Theorem 6.1]{ChaPoly} for instance) that the scalar product of two pure states associated with Bohr-Sommerfeld Lagrangians has the following equivalent when $k$ goes to infinity:
\[ \scal{\phi_k}{\psi_k} \sim \left(\frac{2\pi}{k}\right)^{\frac{n}{2}} C(\Gamma_1,\Gamma_2). \]
Therefore our conjecture could be seen as some kind of generalization of this result, in a different context.
\paragraph{Acknowledgements.}
Part of this work was supported by the European Research Council Advanced Grant 338809. We thank Leonid Polterovich for proposing the topic and for numerous useful discussions. We also thank Laurent Charles and Alejandro Uribe for their interest in this work and some helpful remarks. Finally, we thank St\'ephane Nonnenmacher for suggesting the approach that ultimately led to Theorem \ref{thm:fid_sphere}. We thank an anonymous referee for very useful advice regarding the exposition.
\section{Introduction} \label{sec:S1}
Coherent states are widespread in physics \cite{Zhang1990p867}.
In particular, coherent states defined as
\begin{eqnarray}
\vert \alpha \rangle = e^{-\frac{\vert \alpha \vert^{2}}{2}} \sum_{j=0}^{\infty} \frac{\alpha^{j}}{\sqrt{j!}} \vert j \rangle,
\end{eqnarray}
where the states $\vert j \rangle$ are Fock or number states, are of great interest in optics as they have properties related to the classical radiation field \cite{Sudarshan1963p277,Glauber1963p2766}.
These coherent states are eigenstates of the annihilation operator of the harmonic oscillator,
\begin{eqnarray}
\hat{a} \vert \alpha \rangle = \alpha \vert \alpha \rangle,
\end{eqnarray}
and it is possible to generalize them as nonlinear coherent states \cite{Manko1997p528},
\begin{eqnarray}
f(\hat{n}) \hat{a} \vert \xi \rangle = \xi \vert \xi \rangle.
\end{eqnarray}
A class of such nonlinear coherent states are the standard single-photon-added coherent states \cite{Agarwal1991p492,Sivakumar1999p3441},
\begin{eqnarray}
\vert \alpha, m \rangle = \frac{1}{\sqrt{\langle \alpha \vert \hat{a}^{m} \hat{a}^{\dagger m} \vert \alpha \rangle}} \hat{a}^{\dagger m} \vert \alpha \rangle,
\end{eqnarray}
where the nonlinear function is given by,
\begin{eqnarray}
f(\hat{n},m) = \frac{\hat{n} - m +1}{\hat{n} + 1},
\end{eqnarray}
and the photon number operator is $\hat{n}= \hat{a}^{\dagger} \hat{a}$.
These photon-added states can be approximately generated in the laboratory by conditional measurement in the Jaynes-Cummings dynamics \cite{Agarwal1991p492}, in a beam splitter \cite{Dakna1998p309,Jang2014p1230}, or in spontaneous parametric down-conversion \cite{Zavatta2004p660,Zavatta2005p023820,Parigi2007p1890,Barbieri2010p063833}, to mention a few examples.
Additional proposals to realize them in cavity or ion-trap quantum electrodynamics (QED) \cite{Dodonov1998p4087}, Kerr media \cite{RomanAcheyta2014p38}, and in quantum mechanical systems with non-linear potentials \cite{SantosSanchez2011p145307} have also been produced.
Single-photon added coherent states are known to be non-classical non-Gaussian states and, thus, useful for quantum information processing \cite{Barbieri2010p063833}.
Here, we are interested in a different class of photon-added and subtracted states based on the revival time found in the two-photon Jaynes-Cummings model \cite{Phoenix1990p116}, which is known to approximately add or subtract two photons from the initial quantized field state depending on the initial state of the qubit \cite{MoyaCessa1994p1814,MoyaCessa1999p1641}.
We propose to use the London phase operators \cite{London1926p915,London1927p193,Schleich2001}, also known as Susskind-Glogower operators \cite{Susskind1964p49}, that lower and raise the state of a quantized field, $\hat{V} \vert n \rangle = \vert n - 1 \rangle$ and $\hat{V}^{\dagger} \vert n \rangle = \vert n + 1 \rangle$, to define a general $2m$-photon added state,
\begin{eqnarray}
\vert \psi_{+2m} \rangle= \left[ i \hat{V}^{\dagger 2} (-1)^{\hat{n}} \right]^{m} \vert \psi \rangle, \quad \vert \psi\rangle = \sum_{j=0}^{\infty} c_{j} \vert j \rangle,
\end{eqnarray}
that keeps the state normalized, $\langle \psi_{+2m} \vert \psi_{+2m} \rangle = 1 $, and adds $2m$ photons to the mean photon number of the initial state,
\begin{eqnarray}
\langle \psi_{+2m} \vert \hat{n} \vert \psi_{+2m} \rangle = \langle \psi \vert \hat{n} \vert \psi \rangle + 2m.
\end{eqnarray}
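Both properties are easy to verify numerically. The following sketch (not part of the analysis above; the truncation dimension, $\alpha$, and $m$ are illustrative choices) represents $\hat{V}^{\dagger}$ as a shift matrix on a truncated Fock space and confirms that $[i \hat{V}^{\dagger 2} (-1)^{\hat{n}}]^{m}$ preserves the norm and raises the mean photon number by exactly $2m$:

```python
import numpy as np

D = 200                                  # Fock-space truncation (illustrative)
n = np.arange(D)

# Truncated coherent state |alpha>, built by the stable recursion
# c_j = c_{j-1} * alpha / sqrt(j), then renormalized
alpha = 2.0
c = np.zeros(D)
c[0] = np.exp(-abs(alpha) ** 2 / 2)
for j in range(1, D):
    c[j] = c[j - 1] * alpha / np.sqrt(j)
c /= np.linalg.norm(c)

Vdag = np.diag(np.ones(D - 1), -1)       # V^dag |j> = |j+1>
parity = np.diag((-1.0) ** n)            # (-1)^n is diagonal in the Fock basis
add2 = 1j * Vdag @ Vdag @ parity         # one application adds two photons

m = 3
psi = c.astype(complex)
for _ in range(m):
    psi = add2 @ psi

norm = np.linalg.norm(psi)
gain = float(n @ np.abs(psi) ** 2 - n @ np.abs(c) ** 2)
print(round(norm, 6), round(gain, 6))    # 1.0 6.0
```

The shift matrix simply relabels the Fock amplitudes, so the distribution keeps its shape while its mean moves up by $2m$.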
The case of $2m$-photon subtracted states can be defined in an equivalent form,
\begin{eqnarray}
\vert \psi_{-2m} \rangle= \frac{1}{\sqrt{1- \sum_{k=0}^{2m-1} \vert c_{k} \vert^{2}}} \left[ i \hat{V}^{2} (-1)^{\hat{n}} \right]^m \vert \psi \rangle, \quad \vert \psi\rangle = \sum_{j=0}^{\infty} c_{j} \vert j \rangle,
\end{eqnarray}
in order to keep the state normalized.
This photon subtracted state will show a mean photon number that depends on the lowest Fock state components of the initial state,
\begin{eqnarray}
\langle \psi_{-2m} \vert \hat{n} \vert \psi_{-2m} \rangle = \frac{1}{1- \sum_{k=0}^{2m-1} \vert c_{k} \vert^{2}} \left[ \langle \psi \vert \hat{n} \vert \psi \rangle - 2m + \sum_{k=0}^{2m-1} (2m - k) \vert c_{k} \vert^{2} \right].
\end{eqnarray}
Note that as long as the initial state does not have the lowest $2m$ Fock state components, we can write
\begin{eqnarray}
\vert \phi_{-2m}\rangle= \left[i \hat{V}^{2} (-1)^{\hat{n}}\right]^{m} \vert \phi \rangle, \quad \vert \phi\rangle = \sum_{j=2m}^{\infty} c_{j} \vert j \rangle, \label{eq:Sub}
\end{eqnarray}
and this photon subtracted state will show a mean photon number that has $2m$ less photons than the original state,
\begin{eqnarray}
\langle \phi_{-2m} \vert \hat{n} \vert \phi_{-2m} \rangle = \langle \phi \vert \hat{n} \vert \phi \rangle - 2 m.
\end{eqnarray}
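The exact bookkeeping of the subtracted case can be checked the same way. In the sketch below (an illustrative test of my own; the state with no support below $\vert 2m \rangle$ is a coherent-state profile shifted up by $2m$ Fock levels), applying $i \hat{V}^{2} (-1)^{\hat{n}}$ $m$ times removes exactly $2m$ photons while keeping the state normalized:

```python
import numpy as np

D = 200
n = np.arange(D)

# An illustrative state with no support below |2m>: a coherent-state
# profile shifted up by 2m Fock levels (my own test choice)
m, alpha = 2, 2.0
c = np.zeros(D)
c[0] = np.exp(-abs(alpha) ** 2 / 2)
for j in range(1, D):
    c[j] = c[j - 1] * alpha / np.sqrt(j)
phi = np.roll(c, 2 * m)
phi[:2 * m] = 0                          # discard the wrapped-around tail
phi /= np.linalg.norm(phi)

V = np.diag(np.ones(D - 1), 1)           # V |j> = |j-1>, and V |0> = 0
parity = np.diag((-1.0) ** n)
sub2 = 1j * V @ V @ parity

chi = phi.astype(complex)
for _ in range(m):
    chi = sub2 @ chi

loss = float(n @ np.abs(phi) ** 2 - n @ np.abs(chi) ** 2)
print(round(np.linalg.norm(chi), 6), round(loss, 6))   # 1.0 4.0
```

With low Fock components present, the renormalization factor of the previous equations would be needed and the photon loss would deviate from $2m$ accordingly.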
These proposed non-classical states, apart from being an experimentally feasible example of nonlinear coherent states, may be useful for quantum information processing tasks in cavity- or ion-trap-QED.
In the following, we will show that our definition of photon added and subtracted states is experimentally feasible in cavity- and ion-trap-QED.
Then, we will present the particular case of photon added and subtracted coherent states that can be seen as nonlinear coherent states and show how to generate approximated $2m$-photon added and subtracted coherent states with the proposed experimental schemes.
\section{A proposal for experimental realization} \label{sec:S2}
Let us consider the two-photon Jaynes-Cummings model \cite{Buck1981p132,Sukumar1984p885},
\begin{eqnarray}
\hat{H} = \omega \hat{a}^{\dagger} \hat{a} + \frac{\omega_{0}}{2} \hat{\sigma}_{z} + g \left( \hat{a}^{2} \hat{\sigma}_{+} + \hat{a}^{\dagger 2} \hat{\sigma}_{-} \right),
\end{eqnarray}
describing the interaction of a quantized field and a qubit; e.g. a two-level atom interacting with the quantized field of a cavity or a two-level trapped ion interacting with the quantized motion of its center of mass.
The field is described by the frequency $\omega$ and the creation (annihilation) operators $\hat{a}^{\dagger}$ ($\hat{a}$) and the qubit by the frequency $\omega_{0}$ and the Pauli operators $\hat{\sigma}_{j}$ with $j=z,+,-$.
The interaction between the qubit and the quantized field is given by the parameter $g$ and in both the cavity- and ion-trap-QED examples it must fulfill $g \ll \omega$.
On resonance, $2 \omega = \omega_{0}$, the evolution operator for the system is given by
\begin{eqnarray}
\hat{U}(t) = \left( \begin{array}{cc}
\cos \left[\Omega(\hat{n}) t\right] & - i \sin \left[\Omega(\hat{n}) t\right] ~\hat{V}^{2} \\
- i \hat{V}^{\dagger 2} \sin \left[\Omega(\hat{n}) t\right] & \cos \left[\Omega(\hat{n}-2) t\right]
\end{array} \right).
\end{eqnarray}
where we have defined the frequency $\Omega(\hat{n})= g \sqrt{(\hat{n}+2)(\hat{n}+1)}$ and used the lowering and raising operators defined in the introduction.
Thus, an initial field state coupled to a qubit in the excited state, $\vert \epsilon(0) \rangle = \vert \psi, e \rangle$, will evolve as
\begin{eqnarray}
\vert \epsilon(t) \rangle = \left( \begin{array}{cc}
\cos \left[\Omega(\hat{n}) t\right] \\
- i \hat{V}^{\dagger 2} \sin \left[\Omega(\hat{n}) t\right]
\end{array} \right) \vert \epsilon(0) \rangle.
\end{eqnarray}
and for one coupled to a qubit ground state, $\vert \gamma(0) \rangle = \vert \xi, g \rangle$, its time evolution will be
\begin{eqnarray}
\vert \gamma(t) \rangle = \left( \begin{array}{cc}
- i \hat{V}^{2} \sin \left[\Omega(\hat{n}-2) t\right] \\
\cos \left[\Omega(\hat{n}-2) t\right]
\end{array} \right) \vert \gamma(0) \rangle.
\end{eqnarray}
Note that there exists a critical Fock state $\vert j_{c} \rangle$ such that, for $j \ge j_{c}$, it is possible to approximate,
\begin{eqnarray}
\sqrt{(\hat{n}+2) (\hat{n} +1)} \vert j \rangle \approx \left( \hat{n} + \frac{3}{2} \right) \vert j \rangle, ~ j_{c} = 3, \label{eq:Ap1}\\
\sqrt{\hat{n} (\hat{n}-1)} \vert j \rangle \approx \left( \hat{n} - \frac{1}{2} \right) \vert j \rangle, ~ j_{c}= 6 \label{eq:Ap2}.
\end{eqnarray}
The values of $j_{c}$ in \eqref{eq:Ap1} and \eqref{eq:Ap2} keep the relative error between the approximation and the exact value at the level of a few times $10^{-3}$, and the error decreases as $j$ grows.
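A quick tabulation of these relative errors (a check of my own, not from the original analysis) shows the size of the error at each critical value and its decay with $j$:

```python
import numpy as np

j = np.arange(60, dtype=float)

# Relative error of sqrt((n+2)(n+1)) ~ n + 3/2
exact_add = np.sqrt((j + 2) * (j + 1))
err_add = np.abs(exact_add - (j + 1.5)) / exact_add

# Relative error of sqrt(n(n-1)) ~ n - 1/2, defined for j >= 2
jj = j[2:]
exact_sub = np.sqrt(jj * (jj - 1))
err_sub = np.abs(exact_sub - (jj - 0.5)) / exact_sub

print(f"{err_add[3]:.1e}")       # 6.2e-03  (error at j_c = 3)
print(f"{err_sub[6 - 2]:.1e}")   # 4.2e-03  (error at j_c = 6)
```

Since $(j+3/2)^{2} = (j+2)(j+1) + 1/4$, the error falls off roughly as $1/[8(j+2)(j+1)]$, so Fock components well above $j_{c}$ contribute almost nothing to the deviation from the ideal evolution.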
The origin of our definition for addition and subtraction of two-photons is that we obtain the following approximated states of the field at a time $gt = \pi$,
\begin{eqnarray}
\left\vert \epsilon \left(\frac{\pi}{g} \right) \right\rangle &\approx& i \hat{V}^{\dagger 2} (-1)^{\hat{n}} \vert \psi, g \rangle, \quad \vert \psi \rangle = \sum_{j=3}^{\infty} c_{j} \vert j \rangle, \\
\left\vert \gamma \left(\frac{\pi}{g}\right) \right\rangle &\approx& i \hat{V}^{2}(-1)^{\hat{n}} \vert \psi, e \rangle, \quad \vert \psi \rangle = \sum_{j=6}^{\infty} c_{j} \vert j\rangle .
\end{eqnarray}
We require that the initial field state does not have Fock state components below the critical parameter $j_{c}$, in order to satisfy the restrictions in the approximations \eqref{eq:Ap1} and \eqref{eq:Ap2}.
Note that this allows us to fulfill the restriction in \eqref{eq:Sub}.
Thus, in a cavity-QED implementation, if the cavity field starts in the state $ \vert \psi \rangle$ and we let an atom in the excited or ground state fly through the cavity such that $gt = \pi$, then the initial cavity field will approximately end in the two-photon added or subtracted state, $\vert \psi_{+2}\rangle$ or $\vert \psi_{-2}\rangle$, as long as it did not have low Fock state components in the beginning.
The theoretically exact state of the field will be given by the reduced density matrices,
\begin{eqnarray}
\hat{\rho}_{+2} &=& \cos\left[\Omega(\hat{n}) \pi \right] \hat{\rho}_{0} \cos\left[\Omega(\hat{n}) \pi \right] + \nonumber \\
&& + \hat{V}^{\dagger 2} \sin \left[\Omega(\hat{n})\pi\right] \hat{\rho}_{0} \sin \left[\Omega(\hat{n})\pi\right] \hat{V}^{2}, \\
\hat{\rho}_{-2} &=& \cos\left[\Omega(\hat{n}-2) \pi \right] \hat{\rho}_{0} \cos\left[\Omega(\hat{n}-2) \pi \right] + \nonumber \\
&& + \hat{V}^{2} \sin \left[\Omega(\hat{n}-2)\pi\right] \hat{\rho}_{0} \sin \left[\Omega(\hat{n}-2)\pi\right] \hat{V}^{\dagger 2},
\end{eqnarray}
where the initial state of the field, $\vert \psi \rangle$, is encoded in the density matrix $\hat{\rho}_{0}= \vert \psi \rangle \langle \psi \vert$.
This cavity field can be used as the initial field for a second atom passing through; repeating the procedure $m$ times will approximately create a multiple two-photon added or subtracted state, $\vert \psi_{+2m}\rangle$ or $\vert \psi_{-2m}\rangle$ respectively.
After $m$ repetitions, the theoretically exact state of the field is
\begin{eqnarray}
\hat{\rho}_{+2m} &=& \cos\left[\Omega(\hat{n}) \pi \right] \hat{\rho}_{+2(m-1)} \cos\left[\Omega(\hat{n}) \pi \right] + \nonumber \\
&& + \hat{V}^{\dagger 2} \sin \left[\Omega(\hat{n})\pi\right] \hat{\rho}_{+2(m-1)} \sin \left[\Omega(\hat{n})\pi\right] \hat{V}^{2}, \\
\hat{\rho}_{-2m} &=& \cos\left[\Omega(\hat{n}-2) \pi \right] \hat{\rho}_{-2(m-1)} \cos\left[\Omega(\hat{n}-2) \pi \right] + \nonumber \\
&& + \hat{V}^{2} \sin \left[\Omega(\hat{n}-2)\pi\right] \hat{\rho}_{-2(m-1)} \sin \left[\Omega(\hat{n}-2)\pi\right] \hat{V}^{\dagger 2}.
\end{eqnarray}
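This recursion is straightforward to iterate numerically. The sketch below (an illustrative truncated-Fock-space simulation; the dimension $D$, $\alpha$, and the number of repetitions are my own choices, with time measured in units of $g$ so that $gt = \pi$) applies the exact two-photon-adding map and tracks the fidelity with the ideal state:

```python
import numpy as np

D = 400
n = np.arange(D)

# Initial coherent state |alpha>
alpha = 5.0
c = np.zeros(D)
c[0] = np.exp(-abs(alpha) ** 2 / 2)
for j in range(1, D):
    c[j] = c[j - 1] * alpha / np.sqrt(j)
c /= np.linalg.norm(c)

Omega = np.sqrt((n + 2.0) * (n + 1.0))        # Omega(n) in units of g
C = np.diag(np.cos(np.pi * Omega))            # cos[Omega(n) pi]
S = np.diag(np.sin(np.pi * Omega))            # sin[Omega(n) pi]
V2dag = np.diag(np.ones(D - 2), -2)           # V^dag^2 shifts |j> -> |j+2>
add2 = 1j * V2dag @ np.diag((-1.0) ** n)      # ideal two-photon adder

rho = np.outer(c, c)                          # rho_0 = |alpha><alpha|
ideal = c.astype(complex)
fid = []
for m in range(1, 11):
    # exact map: rho_{+2m} = C rho C + V^dag2 S rho S V^2
    rho = C @ rho @ C + V2dag @ S @ rho @ S @ V2dag.T
    ideal = add2 @ ideal                      # ideal 2m-photon added state
    fid.append(float(np.real(np.conj(ideal) @ rho @ ideal)))

print(fid[0] > 0.99, fid[-1] > 0.9)           # True True
```

The fidelity degrades slowly with $m$ because the cosine branch is only approximately zero at $gt = \pi$, in line with the behavior reported in the figures below.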
In an ion-trap-QED implementation, we can initialize the quantized center of mass motion of the atom in a suitable state that lacks low Fock state components, then start the two-photon Jaynes-Cummings dynamics with the ion in the excited or ground state and at a time such that $gt = \pi$ we can set the ion back to the excited or ground state with an auxiliary laser pulse and repeat the procedure as needed.
\section{Photon added and subtracted coherent states as nonlinear coherent states} \label{sec:S3}
It is straightforward to show that the $2m$-photon added or subtracted coherent states, $\vert \alpha_{\pm 2m} \rangle$, are eigenstates of nonlinear operators, $\hat{A}_{\pm 2m}$, with the coherent parameter, up to a sign, as eigenvalue,
\begin{eqnarray}
\hat{A}_{\pm2m} \vert \alpha_{\pm 2m} \rangle = (-1)^{m} \alpha \vert \alpha_{\pm 2m} \rangle.
\end{eqnarray}
In the case of the $2m$-photon added coherent state,
\begin{eqnarray}
\vert \alpha_{+2m} \rangle = \left[ i \hat{V}^{\dagger 2} (-1)^{\hat{n}} \right]^{m} \vert \alpha \rangle,
\end{eqnarray}
the nonlinear operator is given by
\begin{eqnarray}
\hat{A}_{+2m} = \sqrt{ \frac{\hat{n} - 2m +1}{\hat{n} + 1}} ~\hat{a}.
\end{eqnarray}
And for the $2m$-photon subtracted coherent state,
\begin{eqnarray}
\vert \alpha_{-2m} \rangle = \left[i \hat{V}^{2} (-1)^{\hat{n}}\right]^{m} \vert \alpha \rangle,
\end{eqnarray}
as long as the absolute value of the coherent parameter, $\vert \alpha \vert$, is large enough to guarantee that
\begin{eqnarray}
e^{-\vert \alpha \vert^{2}} \frac{\vert\alpha\vert^{2j}}{j!} \approx 0, ~ \mathrm{for}~ j\le 2 m,
\end{eqnarray}
in order to fulfill \eqref{eq:Sub}, the nonlinear annihilation operator is given by
\begin{eqnarray}
\hat{A}_{-2m} = \sqrt{ \frac{\hat{n}+ 2m + 1}{\hat{n} +1}} ~\hat{a}.
\end{eqnarray}
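Both eigenvalue relations can be confirmed numerically for $m = 1$ (a truncated-Fock-space check of my own; the clipping of $\hat{n} - 1$ at zero only affects levels the added state does not occupy):

```python
import numpy as np

D = 200
n = np.arange(D)
alpha = 3.0
c = np.zeros(D)
c[0] = np.exp(-abs(alpha) ** 2 / 2)
for j in range(1, D):
    c[j] = c[j - 1] * alpha / np.sqrt(j)
c /= np.linalg.norm(c)

Vdag = np.diag(np.ones(D - 1), -1)               # V^dag |j> = |j+1>
V = Vdag.T                                       # V |j> = |j-1>, V |0> = 0
parity = np.diag((-1.0) ** n)
a = np.diag(np.sqrt(n[1:].astype(float)), 1)     # annihilation operator

# m = 1: two-photon added and subtracted coherent states
psi = 1j * Vdag @ Vdag @ parity @ c
chi = 1j * V @ V @ parity @ c
chi = chi / np.linalg.norm(chi)                  # renormalize the subtracted state

A_p = np.diag(np.sqrt(np.clip(n - 1.0, 0, None) / (n + 1.0))) @ a
A_m = np.diag(np.sqrt((n + 3.0) / (n + 1.0))) @ a

# For m = 1 the eigenvalue is -alpha
print(round(np.linalg.norm(A_p @ psi + alpha * psi), 8))   # 0.0
print(round(np.linalg.norm(A_m @ chi + alpha * chi), 8))   # 0.0
```

The residuals vanish to machine precision because the shifted coherent amplitudes still satisfy the recursion $c_{j}\sqrt{j} = \alpha c_{j-1}$ that the nonlinear factor is designed to compensate.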
Our proposed photon adding or subtracting scheme keeps the shape of the Fock state distribution but changes the mean photon number value.
Then, it is of interest to calculate the Mandel Q parameter \cite{Mandel1995},
\begin{eqnarray}
Q(\psi) = \frac{ \langle \psi \vert \hat{n}^{2} \vert \psi \rangle - \langle \psi \vert \hat{n} \vert \psi \rangle^{2}}{ \langle \psi \vert \hat{n} \vert \psi \rangle} -1,
\end{eqnarray}
for our $2m$-photon added or subtracted states,
\begin{eqnarray}
Q(\psi_{\pm 2m}) = \frac{ \langle \psi \vert \hat{n} \vert \psi \rangle }{ \langle \psi \vert \hat{n} \vert \psi \rangle \pm 2m } Q \mp \frac{2m}{\langle \psi \vert \hat{n} \vert \psi \rangle \pm 2m}.
\end{eqnarray}
In the case of our $2m$-photon added or subtracted coherent states defined above, it reduces to
\begin{eqnarray}
Q(\alpha_{\pm 2m}) = \mp \frac{2m}{\vert \alpha \vert ^{2} \pm 2m},
\end{eqnarray}
because coherent states have a Poissonian distribution over Fock states, $Q(\alpha)=0$.
In other words, these states will always have sub- or super-Poissonian statistics, respectively.
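Since the protocol rigidly shifts the Fock distribution, the Mandel parameter is easy to check directly (an illustrative numpy computation; the values of $\alpha$ and $m$ are my own choices):

```python
import numpy as np

D = 300
n = np.arange(D)
alpha, m = 4.0, 3
c = np.zeros(D)
c[0] = np.exp(-abs(alpha) ** 2 / 2)
for j in range(1, D):
    c[j] = c[j - 1] * alpha / np.sqrt(j)
c /= np.linalg.norm(c)

def mandel_q(p):
    """Mandel Q of a Fock-state probability distribution p."""
    mean = n @ p
    return (n ** 2 @ p - mean ** 2) / mean - 1

p0 = np.abs(c) ** 2
p_add = np.roll(p0, 2 * m)        # adding 2m photons shifts the distribution
p_add[:2 * m] = 0

print(abs(round(mandel_q(p0), 6)))              # 0.0 (Poissonian)
print(round(mandel_q(p_add), 6))                # -0.272727
print(round(-2 * m / (alpha ** 2 + 2 * m), 6))  # -0.272727
```

The variance of the distribution is unchanged by the shift while the mean grows by $2m$, which is exactly why the closed-form expression above depends only on $\vert \alpha \vert^{2}$ and $2m$.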
It is possible to create these $2m$-photon added or subtracted coherent states in our experimental proposal by initializing the cavity field or the quantized motion of the ion in a coherent state and then implementing the adding or subtracting protocol.
Figure \ref{fig:Fig1}(a) shows the probability to find the initial (black dots) and final (light blue dots) state in the $j$th Fock state, $\hat{\rho}_{j,j}$.
The initial state is a coherent state $\vert \alpha \rangle$ with $\alpha = 5$ and the final state is obtained by repeating the two-photon adding protocol $50$ times.
The fidelity, $F(m) = \mathrm{Tr}( \hat{\rho}_{+2m} \hat{\varrho}_{+2m} )$, between the obtained field state, $\hat{\rho}_{+2m}$, and the ideal $2m$-photon added state, $\hat{\varrho}_{+2m} = \vert \alpha_{+2m} \rangle \langle \alpha_{+2m} \vert$, is shown in Fig. \ref{fig:Fig1}(b).
Figure \ref{fig:Fig2}(a) shows the Fock state distributions for an initial coherent state with $\alpha=12$ in black dots and the final state obtained after repeating the two-photon subtracting protocol for $50$ times in light blue dots.
The corresponding fidelity, $F(m) = \mathrm{Tr}( \hat{\rho}_{-2m} \hat{\varrho}_{-2m} )$, between the obtained state, $ \hat{\rho}_{-2m}$, and the ideal $2m$-photon subtracted state $\hat{\varrho}_{-2m} = \vert \alpha_{-2m} \rangle \langle \alpha_{-2m} \vert$ is shown in Fig. \ref{fig:Fig2}(b).
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{Fig1.pdf}}
\caption {(Color online) (a) The Fock state distribution, $\hat{\rho}_{j,j}$, of an initial coherent state $\vert \alpha \rangle$ with $\alpha=5$ (black dots) and the $100$-photon added coherent state $\vert \alpha_{+100} \rangle$ (light blue dots) obtained by applying $m=50$ times the two-photon Jaynes-Cummings procedure. (b) The fidelity, $F(m)=\mathrm{Tr}( \hat{\rho}_{+2m} \hat{\varrho}_{+2m} )$, between the exact state given by the two-photon Jaynes-Cummings procedure and the ideal $2m$-photon added coherent state. } \label{fig:Fig1}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[scale=1]{Fig2.pdf}}
\caption {(Color online) (a) The Fock state distribution, $\hat{\rho}_{j,j}$, of an initial coherent state $\vert \alpha \rangle$ with $\alpha=12$ (black dots) and the $100$-photon subtracted coherent state $\vert \alpha_{-100} \rangle$ (light blue dots) obtained by applying $m=50$ times the two-photon Jaynes-Cummings procedure. (b) The fidelity, $F(m)=\mathrm{Tr}( \hat{\rho}_{-2m} \hat{\varrho}_{-2m} )$, between the exact state given by the two-photon Jaynes-Cummings procedure and the ideal $2m$-photon subtracted coherent state. } \label{fig:Fig2}
\end{figure}
\section{Conclusions} \label{sec:S4}
We have introduced a definition of photon added and subtracted states that is experimentally feasible in cavity- and ion-trap-QED via the two-photon Jaynes-Cummings model.
These states are defined in terms of the Susskind-Glogower operators that raise or lower a Fock state and, thus, deliver a mean photon number that is just the original mean photon number plus or minus multiples of two photons.
It is important to note that while it is straightforward to define the photon-added states in this form, restrictions must be imposed on the definition of photon-subtracted states.
Adding (subtracting) photons in this manner to (from) coherent states makes them nonlinear coherent states with sub-(super-)Poissonian statistics that are experimentally feasible in the laboratory.
\section*{Acknowledgments}
I. Ramos Prieto acknowledges financial support from CONACYT through scholarship $\#$276331.